Sample records for substantially improved accuracy

  1. Preliminary study of GPS orbit determination accuracy achievable from worldwide tracking data

    NASA Technical Reports Server (NTRS)

    Larden, D. R.; Bender, P. L.

    1982-01-01

    The improvement in orbit accuracy achievable if high-accuracy tracking data from a substantially larger number of ground stations were available was investigated. Observations from 20 ground stations indicate that 20 cm or better accuracy can be achieved for the horizontal coordinates of the GPS satellites. With this accuracy, the contribution to the error budget for determining 1000 km baselines by GPS geodetic receivers would be only about 1 cm.

  2. An improved semi-implicit method for structural dynamics analysis

    NASA Technical Reports Server (NTRS)

    Park, K. C.

    1982-01-01

    A semi-implicit algorithm is presented for direct time integration of the structural dynamics equations. The algorithm avoids the factoring of the implicit difference solution matrix and mitigates the unacceptable accuracy losses which plagued previous semi-implicit algorithms. This substantial accuracy improvement is achieved by augmenting the solution matrix with two simple diagonal matrices of the order of the integration truncation error.

  3. Frequency of Loaded Road March Training and Performance on a Loaded Road March

    DTIC Science & Technology

    1990-04-01

    heart rate through the use of beta-blockers can substantially improve shooting accuracy (29, 44). Post road march decrements in the grenade throw may...the road march. An increase in body tremors due to fatigue or an elevated post-exercise heart rate may account for this. Whole body sway while aiming...a rifle is substantially increased even after a short period of exercise (39) and this may affect accuracy. Muscle tremors increase after brief or

  4. High-precision radiometric tracking for planetary approach and encounter in the inner solar system

    NASA Technical Reports Server (NTRS)

    Christensen, C. S.; Thurman, S. W.; Davidson, J. M.; Finger, M. H.; Folkner, W. M.

    1989-01-01

    The benefits of improved radiometric tracking data have been studied for planetary approach within the inner Solar System using the Mars Rover Sample Return trajectory as a model. It was found that the benefit of improved data to approach and encounter navigation was highly dependent on the a priori uncertainties assumed for several non-estimated parameters, including those for frame-tie, Earth orientation, troposphere delay, and station locations. With these errors at their current levels, navigational performance was found to be insensitive to enhancements in data accuracy. However, when expected improvements in these errors are modeled, performance with current-accuracy data significantly improves, with substantial further improvements possible with enhancements in data accuracy.

  5. Methodology and reporting of diagnostic accuracy studies of automated perimetry in glaucoma: evaluation using a standardised approach.

    PubMed

    Fidalgo, Bruno M R; Crabb, David P; Lawrenson, John G

    2015-05-01

    To evaluate methodological and reporting quality of diagnostic accuracy studies of perimetry in glaucoma and to determine whether there had been any improvement since the publication of the Standards for Reporting of Diagnostic Accuracy (STARD) guidelines. A systematic review of English language articles published between 1993 and 2013 reporting the diagnostic accuracy of perimetry in glaucoma. Articles were appraised for methodological quality using the 14-item Quality assessment tool for diagnostic accuracy studies (QUADAS) and evaluated for quality of reporting by applying the STARD checklist. Fifty-eight articles were appraised. Overall methodological quality of these studies was moderate with a median number of QUADAS items rated as 'yes' equal to nine (out of a maximum of 14) (IQR 7-10). The studies were often poorly reported; median score of STARD items fully reported was 11 out of 25 (IQR 10-14). A comparison of the studies published in 10-year periods before and after the publication of the STARD checklist in 2003 found quality of reporting had not substantially improved. Methodological and reporting quality of diagnostic accuracy studies of perimetry is sub-optimal and appears not to have improved substantially following the development of the STARD reporting guidance. This observation is consistent with previous studies in ophthalmology and in other medical specialities. © 2015 The Authors Ophthalmic & Physiological Optics © 2015 The College of Optometrists.

  6. Preliminary study of GPS orbit determination accuracy achievable from worldwide tracking data

    NASA Technical Reports Server (NTRS)

    Larden, D. R.; Bender, P. L.

    1983-01-01

    The improvement in orbit accuracy achievable if high-accuracy tracking data from a substantially larger number of ground stations were available was investigated. Observations from 20 ground stations indicate that 20 cm or better accuracy can be achieved for the horizontal coordinates of the GPS satellites. With this accuracy, the contribution to the error budget for determining 1000 km baselines by GPS geodetic receivers would be only about 1 cm. Previously announced in STAR as N83-14605

  7. Integrated multi-ISE arrays with improved sensitivity, accuracy and precision

    NASA Astrophysics Data System (ADS)

    Wang, Chunling; Yuan, Hongyan; Duan, Zhijuan; Xiao, Dan

    2017-03-01

    Increasing use of ion-selective electrodes (ISEs) in the biological and environmental fields has generated demand for high-sensitivity ISEs. However, improving the sensitivities of ISEs remains a challenge because of the limit of the Nernstian slope (59.2/n mV). Here, we present a universal ion detection method using an electronic integrated multi-electrode system (EIMES) that bypasses the Nernstian slope limit of 59.2/n mV, thereby enabling substantial enhancement of the sensitivity of ISEs. The results reveal that the response slope is greatly increased from 57.2 to 1711.3 mV, from 57.3 to 564.7 mV, and from 57.7 to 576.2 mV by electronically integrating 30 Cl- electrodes, 10 F- electrodes, and 10 glass pH electrodes, respectively. Thus, a tiny change in the ion concentration can be monitored, and correspondingly, the accuracy and precision are substantially improved. The EIMES is suited for all types of potentiometric sensors and may pave the way for monitoring of various ions with high accuracy and precision because of its high sensitivity.
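
    The slope amplification described here is essentially additive: summing the potentials of n identical electrodes multiplies the effective response slope by n. A minimal sketch of that reading in Python (the e0 offset and the concentrations are illustrative values, not from the paper):

    ```python
    import numpy as np

    # Nernstian slope for a monovalent ion at 25 degrees C, in mV per decade
    NERNST_SLOPE_MV = 59.2

    def electrode_potential(conc, e0=400.0, slope=NERNST_SLOPE_MV):
        """Potential (mV) of a single ion-selective electrode vs. log10 concentration."""
        return e0 + slope * np.log10(conc)

    def stacked_response(conc, n_electrodes):
        """Summed potential of n identical electrodes read in series.

        A simplified reading of the EIMES idea: adding the outputs of n
        electrodes multiplies the effective response slope by n.
        """
        return n_electrodes * electrode_potential(conc)

    conc = np.array([1e-4, 1e-3, 1e-2])  # mol/L, illustrative
    for n in (1, 10, 30):
        emf = stacked_response(conc, n)
        slope = (emf[-1] - emf[0]) / (np.log10(conc[-1]) - np.log10(conc[0]))
        print(f"{n:2d} electrodes: slope = {slope:7.1f} mV/decade")
    ```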

  8. Methods for recalibration of mass spectrometry data

    DOEpatents

    Tolmachev, Aleksey V [Richland, WA; Smith, Richard D [Richland, WA

    2009-03-03

    Disclosed are methods for recalibrating mass spectrometry data that provide improvement in both mass accuracy and precision by adjusting for experimental variance in parameters that have a substantial impact on mass measurement accuracy. Optimal coefficients are determined using correlated pairs of mass values compiled by matching sets of measured and putative mass values that minimize overall effective mass error and mass error spread. Coefficients are subsequently used to correct mass values for peaks detected in the measured dataset, providing recalibration thereof. Sub-ppm mass measurement accuracy has been demonstrated on a complex fungal proteome after recalibration, providing improved confidence for peptide identifications.
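
    The recalibration idea can be sketched as an ordinary least-squares fit: choose correction coefficients that minimize the error between measured masses and their matched putative values, then apply the fitted correction to every detected peak. A minimal sketch, assuming a hypothetical two-term mass-dependent correction (the patent's actual parameterization is not reproduced here, and the mass values are invented):

    ```python
    import numpy as np

    # Matched pairs of measured vs. putative (theoretical) masses, in Da.
    measured = np.array([500.2631, 751.3922, 1002.5168, 1253.6391, 1504.7720])
    putative = np.array([500.2601, 751.3885, 1002.5120, 1253.6339, 1504.7659])

    # Hypothetical correction m_corr = m * (1 + c0 + c1 * m); it is linear
    # in (c0, c1), so ordinary least squares finds the optimal coefficients.
    A = np.column_stack([measured, measured**2])
    residual = putative - measured
    c0, c1 = np.linalg.lstsq(A, residual, rcond=None)[0]

    corrected = measured * (1 + c0 + c1 * measured)
    ppm_before = (measured - putative) / putative * 1e6
    ppm_after = (corrected - putative) / putative * 1e6
    print("mass error before (ppm):", np.round(ppm_before, 2))
    print("mass error after  (ppm):", np.round(ppm_after, 2))
    ```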

  9. Improved numerical methods for turbulent viscous recirculating flows

    NASA Technical Reports Server (NTRS)

    Turan, A.; Vandoormaal, J. P.

    1988-01-01

    The performance of discrete methods for the prediction of fluid flows can be enhanced by improving the convergence rate of solvers and by increasing the accuracy of the discrete representation of the equations of motion. This report evaluates the gains in solver performance that are available when various acceleration methods are applied. Various discretizations are also examined and two are recommended because of their accuracy and robustness. Insertion of the improved discretization and solver accelerator into a TEACH code, which has been widely applied to combustor flows, illustrates the substantial gains to be achieved.

  10. Comparison of Hybrid Classifiers for Crop Classification Using Normalized Difference Vegetation Index Time Series: A Case Study for Major Crops in North Xinjiang, China

    PubMed Central

    Hao, Pengyu; Wang, Li; Niu, Zheng

    2015-01-01

    A range of single classifiers have been proposed to classify crop types using time-series vegetation indices, and hybrid classifiers are used to improve discriminatory power. Traditional fusion rules use the product of multiple single classifiers, but that strategy cannot integrate the classification output of machine learning classifiers. In this research, the performance of two hybrid strategies, multiple voting (M-voting) and probabilistic fusion (P-fusion), for crop classification using NDVI time series was tested with different training sample sizes at both pixel and object levels, with two representative counties in north Xinjiang selected as the study area. The single classifiers employed in this research included Random Forest (RF), Support Vector Machine (SVM), and See5 (C5.0). The results indicated that classification performance improved substantially with the number of training samples (mean overall accuracy increased by 5%~10%, and the standard deviation of overall accuracy fell by around 1%), and when the training sample size was small (50 or 100 training samples), hybrid classifiers substantially outperformed single classifiers, with mean overall accuracy higher by 1%~2%. However, when abundant training samples (4,000) were employed, single classifiers could achieve good classification accuracy, and all classifiers obtained similar performances. Additionally, although object-based classification did not improve accuracy, it resulted in greater visual appeal, especially in study areas with a heterogeneous cropping pattern. PMID:26360597
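
    A minimal sketch of the two fusion strategies, with a plain decision tree standing in for See5 (C5.0) and simple probability averaging as one plausible reading of P-fusion; the dataset and settings are illustrative, not the paper's:

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC
    from sklearn.tree import DecisionTreeClassifier  # stand-in for See5/C5.0

    # Synthetic stand-in for NDVI time series (23 composites, 4 crop classes).
    X, y = make_classification(n_samples=600, n_features=23, n_classes=4,
                               n_informative=10, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

    clfs = [RandomForestClassifier(random_state=0).fit(X_tr, y_tr),
            SVC(probability=True, random_state=0).fit(X_tr, y_tr),
            DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)]

    # M-voting: each classifier casts one vote per pixel; the majority wins.
    votes = np.stack([c.predict(X_te) for c in clfs])
    m_vote = np.apply_along_axis(
        lambda v: np.bincount(v, minlength=4).argmax(), 0, votes)

    # P-fusion (one reading): average class-membership probabilities, then argmax.
    proba = np.mean([c.predict_proba(X_te) for c in clfs], axis=0)
    p_fusion = proba.argmax(axis=1)

    print("M-voting accuracy:", (m_vote == y_te).mean())
    print("P-fusion accuracy:", (p_fusion == y_te).mean())
    ```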

  11. Genomic selection models double the accuracy of predicted breeding values for bacterial cold water disease resistance compared to a traditional pedigree-based model in rainbow trout aquaculture.

    PubMed

    Vallejo, Roger L; Leeds, Timothy D; Gao, Guangtu; Parsons, James E; Martin, Kyle E; Evenhuis, Jason P; Fragomeni, Breno O; Wiens, Gregory D; Palti, Yniv

    2017-02-01

    Previously, we have shown that bacterial cold water disease (BCWD) resistance in rainbow trout can be improved using traditional family-based selection, but progress has been limited to exploiting only between-family genetic variation. Genomic selection (GS) is a new alternative that enables exploitation of within-family genetic variation. We compared three GS models [single-step genomic best linear unbiased prediction (ssGBLUP), weighted ssGBLUP (wssGBLUP), and BayesB] to predict genomic-enabled breeding values (GEBV) for BCWD resistance in a commercial rainbow trout population, and compared the accuracy of GEBV to traditional estimates of breeding values (EBV) from a pedigree-based BLUP (P-BLUP) model. We also assessed the impact of sampling design on the accuracy of GEBV predictions. For these comparisons, we used BCWD survival phenotypes recorded on 7893 fish from 102 families, of which 1473 fish from 50 families had genotypes [57 K single nucleotide polymorphism (SNP) array]. Naïve siblings of the training fish (n = 930 testing fish) were genotyped to predict their GEBV and mated to produce 138 progeny testing families. In the following generation, 9968 progeny were phenotyped to empirically assess the accuracy of GEBV predictions made on their non-phenotyped parents. The accuracy of GEBV from all tested GS models was substantially higher than that of the P-BLUP model EBV. The highest increase in accuracy relative to the P-BLUP model was achieved with BayesB (97.2 to 108.8%), followed by wssGBLUP at iteration 2 (94.4 to 97.1%) and 3 (88.9 to 91.2%) and ssGBLUP (83.3 to 85.3%). Reducing the training sample size to n = ~1000 had no negative impact on the accuracy (0.67 to 0.72), but with n = ~500 the accuracy dropped to 0.53 to 0.61 if the training and testing fish were full-sibs, and even substantially lower, to 0.22 to 0.25, when they were not full-sibs. Using progeny performance data, we showed that the accuracy of genomic predictions is substantially higher than estimates obtained from the traditional pedigree-based BLUP model for BCWD resistance. Overall, we found that even with a much smaller training sample size than in similar studies in livestock, GS can substantially improve selection accuracy and genetic gains for this trait in a commercial rainbow trout breeding population.
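
    The GBLUP family of models can be sketched as ridge regression on the SNP genotype matrix, which shrinks all marker effects equally (BayesB differs by allowing many effects to be exactly zero). A toy sketch on simulated genotypes, with dimensions only loosely echoing the study; the phenotype model and the ridge penalty are illustrative assumptions:

    ```python
    import numpy as np
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(0)
    n_train, n_test, n_snp = 1500, 900, 2000   # loosely mirrors the study's scale

    # Simulated SNP genotypes (0/1/2) and a polygenic phenotype with h2 ~ 0.5.
    X = rng.integers(0, 3, (n_train + n_test, n_snp)).astype(float)
    effects = np.zeros(n_snp)
    qtl = rng.choice(n_snp, 50, replace=False)
    effects[qtl] = rng.normal(0, 0.2, 50)
    g_true = X @ effects
    y = g_true + rng.normal(0, g_true.std(), n_train + n_test)

    # SNP-ridge regression, the whole-genome regression equivalent of GBLUP.
    model = Ridge(alpha=n_snp).fit(X[:n_train], y[:n_train])
    gebv = model.predict(X[n_train:])
    acc = np.corrcoef(gebv, g_true[n_train:])[0, 1]
    print(f"accuracy of GEBV (corr. with true genetic merit): {acc:.2f}")
    ```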

  12. Factoring vs linear modeling in rate estimation: a simulation study of relative accuracy.

    PubMed

    Maldonado, G; Greenland, S

    1998-07-01

    A common strategy for modeling dose-response in epidemiology is to transform ordered exposures and covariates into sets of dichotomous indicator variables (that is, to factor the variables). Factoring tends to increase estimation variance, but it also tends to decrease bias and thus may increase or decrease total accuracy. We conducted a simulation study to examine the impact of factoring on the accuracy of rate estimation. Factored and unfactored Poisson regression models were fit to follow-up study datasets that were randomly generated from 37,500 population model forms that ranged from subadditive to supramultiplicative. In the situations we examined, factoring sometimes substantially improved accuracy relative to fitting the corresponding unfactored model, sometimes substantially decreased accuracy, and sometimes made little difference. The difference in accuracy between factored and unfactored models depended in a complicated fashion on the difference between the true and fitted model forms, the strength of exposure and covariate effects in the population, and the study size. It may be difficult in practice to predict when factoring is increasing or decreasing accuracy. We recommend, therefore, that the strategy of factoring variables be supplemented with other strategies for modeling dose-response.
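
    The contrast being simulated is straightforward to reproduce: fit one Poisson model with the ordered exposure entered as a single trend term, and another with the exposure factored into dichotomous indicators. A small sketch with statsmodels on simulated follow-up data (the dose-response form and sample sizes are illustrative, not the paper's 37,500 population models):

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 5000
    dose = rng.integers(0, 4, n)                  # ordered exposure: 0..3
    py = rng.uniform(0.5, 2.0, n)                 # person-years at risk
    rate = 0.01 * np.exp(0.4 * np.sqrt(dose))     # true, non-loglinear dose-response
    cases = rng.poisson(rate * py)

    # Unfactored model: dose entered as a single linear trend term.
    X_lin = sm.add_constant(pd.DataFrame({"dose": dose}))
    fit_lin = sm.GLM(cases, X_lin, family=sm.families.Poisson(),
                     offset=np.log(py)).fit()

    # Factored model: dose recoded as dichotomous indicators (reference = 0).
    dummies = pd.get_dummies(pd.Series(dose, name="dose").astype("category"),
                             prefix="dose", drop_first=True, dtype=float)
    fit_fac = sm.GLM(cases, sm.add_constant(dummies),
                     family=sm.families.Poisson(), offset=np.log(py)).fit()

    print("linear-term AIC:", round(fit_lin.aic, 1))
    print("factored AIC   :", round(fit_fac.aic, 1))
    ```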

  13. Continuous Glucose Monitoring in Subjects with Type 1 Diabetes: Improvement in Accuracy by Correcting for Background Current

    PubMed Central

    Youssef, Joseph El; Engle, Julia M.; Massoud, Ryan G.; Ward, W. Kenneth

    2010-01-01

    Background A cause of suboptimal accuracy in amperometric glucose sensors is the presence of a background current (current produced in the absence of glucose) that is not accounted for. We hypothesized that a mathematical correction for the estimated background current of a commercially available sensor would lead to greater accuracy compared to a situation in which we assumed the background current to be zero. We also tested whether increasing the frequency of sensor calibration would improve sensor accuracy. Methods This report includes analysis of 20 sensor datasets from seven human subjects with type 1 diabetes. Data were divided into a training set for algorithm development and a validation set on which the algorithm was tested. A range of potential background currents was tested. Results Use of the background current correction of 4 nA led to a substantial improvement in accuracy (improvement of absolute relative difference or absolute difference of 3.5–5.5 units). An increase in calibration frequency led to a modest accuracy improvement, with an optimum at every 4 h. Conclusions Compared to no correction, a correction for the estimated background current of a commercially available glucose sensor led to greater accuracy and better detection of hypoglycemia and hyperglycemia. The accuracy-optimizing scheme presented here can be implemented in real time. PMID:20879968
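
    The correction itself is simple algebra on the amperometric model i = S·G + i0: subtract the assumed background current i0 both at calibration time and at measurement time. A minimal sketch (the current and glucose values are invented; the 4 nA default mirrors the correction the paper found helpful):

    ```python
    def glucose_from_current(i_meas, i_cal, g_cal, i_background=4.0):
        """Convert sensor current (nA) to glucose (mg/dL).

        Assumes the simple amperometric model i = S * G + i0, where i0 is
        the background current. With i_background = 0 this reduces to the
        naive calibration that ignores background current.
        """
        sensitivity = (i_cal - i_background) / g_cal   # nA per (mg/dL)
        return (i_meas - i_background) / sensitivity

    # Calibration point: 25 nA measured at a 150 mg/dL fingerstick reference.
    for i0 in (0.0, 4.0):
        g = glucose_from_current(i_meas=11.0, i_cal=25.0, g_cal=150.0,
                                 i_background=i0)
        print(f"background {i0:.0f} nA -> estimated glucose {g:5.1f} mg/dL")
    ```

    With these illustrative numbers, ignoring the background current reads a hypoglycemic sample as 66 mg/dL, while the corrected calibration reads 50 mg/dL, which is why the correction improves hypoglycemia detection.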

  14. Small angle X-ray scattering and cross-linking for data assisted protein structure prediction in CASP 12 with prospects for improved accuracy.

    PubMed

    Ogorzalek, Tadeusz L; Hura, Greg L; Belsom, Adam; Burnett, Kathryn H; Kryshtafovych, Andriy; Tainer, John A; Rappsilber, Juri; Tsutakawa, Susan E; Fidelis, Krzysztof

    2018-03-01

    Experimental data offers empowering constraints for structure prediction. These constraints can be used to filter equivalently scored models or more powerfully within optimization functions toward prediction. In CASP12, Small Angle X-ray Scattering (SAXS) and Cross-Linking Mass Spectrometry (CLMS) data, measured on an exemplary set of novel fold targets, were provided to the CASP community of protein structure predictors. As solution-based techniques, SAXS and CLMS can efficiently measure states of the full-length sequence in its native solution conformation and assembly. However, this experimental data did not substantially improve prediction accuracy judged by fits to crystallographic models. One issue, beyond intrinsic limitations of the algorithms, was a disconnect between crystal structures and solution-based measurements. Our analyses show that many targets had substantial percentages of disordered regions (up to 40%) or were multimeric or both. Thus, solution measurements of flexibility and assembly support variations that may confound prediction algorithms trained on crystallographic data and expecting globular fully-folded monomeric proteins. Here, we consider the CLMS and SAXS data collected, the information in these solution measurements, and the challenges in incorporating them into computational prediction. As improvement opportunities were only partly realized in CASP12, we provide guidance on how data from the full-length biological unit and the solution state can better aid prediction of the folded monomer or subunit. We furthermore describe strategic integrations of solution measurements with computational prediction programs with the aim of substantially improving foundational knowledge and the accuracy of computational algorithms for biologically-relevant structure predictions for proteins in solution. © 2018 Wiley Periodicals, Inc.

  15. Gene masking - a technique to improve accuracy for cancer classification with high dimensionality in microarray data.

    PubMed

    Saini, Harsh; Lal, Sunil Pranit; Naidu, Vimal Vikash; Pickering, Vincel Wince; Singh, Gurmeet; Tsunoda, Tatsuhiko; Sharma, Alok

    2016-12-05

    High dimensional feature space generally degrades classification in several applications. In this paper, we propose a strategy called gene masking, in which non-contributing dimensions are heuristically removed from the data to improve classification accuracy. Gene masking is implemented via a binary encoded genetic algorithm that can be integrated seamlessly with classifiers during the training phase of classification to perform feature selection. It can also be used to discriminate between features that contribute most to the classification, thereby allowing researchers to isolate features that may have special significance. This technique was applied to publicly available datasets, whereby it substantially reduced the number of features used for classification while maintaining high accuracies. The proposed technique can be extremely useful in feature selection as it heuristically removes non-contributing features to improve the performance of classifiers.
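
    Gene masking as described is a binary-encoded genetic algorithm over feature masks, with a classifier's cross-validation accuracy as the fitness. A compact sketch of that loop (the population size, rates, and the GaussianNB wrapper classifier are illustrative choices, not the paper's configuration):

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_score
    from sklearn.naive_bayes import GaussianNB

    rng = np.random.default_rng(0)
    X, y = make_classification(n_samples=120, n_features=200,
                               n_informative=10, random_state=0)

    def fitness(mask):
        """Cross-validated accuracy of the classifier on the unmasked features."""
        if mask.sum() == 0:
            return 0.0
        return cross_val_score(GaussianNB(), X[:, mask], y, cv=3).mean()

    # Minimal binary GA: tournament selection, uniform crossover, bit-flip mutation.
    pop = rng.random((30, X.shape[1])) < 0.1           # sparse initial masks
    for gen in range(20):
        scores = np.array([fitness(m) for m in pop])
        parents = pop[[max(rng.choice(len(pop), 3), key=lambda i: scores[i])
                       for _ in range(len(pop))]]      # tournament selection
        cross = rng.random(pop.shape) < 0.5            # uniform crossover
        children = np.where(cross, parents, parents[::-1])
        pop = children ^ (rng.random(pop.shape) < 0.01)  # bit-flip mutation

    best = max(pop, key=fitness)
    print(f"kept {int(best.sum())} of {X.shape[1]} features, "
          f"CV accuracy = {fitness(best):.3f}")
    ```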

  16. Evaluation of new techniques for the calculation of internal recirculating flows

    NASA Technical Reports Server (NTRS)

    Van Doormaal, J. P.; Turan, A.; Raithby, G. D.

    1987-01-01

    The performance of discrete methods for the prediction of fluid flows can be enhanced by improving the convergence rate of solvers and by increasing the accuracy of the discrete representation of the equations of motion. This paper evaluates the gains in solver performance that are available when various acceleration methods are applied. Various discretizations are also examined and two are recommended because of their accuracy and robustness. Insertion of the improved discretization and solver accelerator into a TEACH code, that has been widely applied to combustor flows, illustrates the substantial gains that can be achieved.

  17. A Web-Based Education Program for Colorectal Lesion Diagnosis with Narrow Band Imaging Classification.

    PubMed

    Aihara, Hiroyuki; Kumar, Nitin; Thompson, Christopher C

    2018-04-19

    An education system for narrow band imaging (NBI) interpretation requires sufficient exposure to key features. However, access to didactic lectures by experienced teachers is limited in the United States. To develop and assess the effectiveness of a colorectal lesion identification tutorial. In the image analysis pretest, subjects including 9 experts and 8 trainees interpreted 50 white light (WL) and 50 NBI images of colorectal lesions. Results were not reviewed with subjects. Trainees then participated in an online tutorial emphasizing NBI interpretation in colorectal lesion analysis. A post-test was administered and diagnostic yields were compared to pre-education diagnostic yields. Under the NBI mode, experts showed higher diagnostic yields (sensitivity 91.5% [87.3-94.4], specificity 90.6% [85.1-94.2], and accuracy 91.1% [88.5-93.7] with substantial interobserver agreement [κ value 0.71]) compared to trainees (sensitivity 89.6% [84.8-93.0], specificity 80.6% [73.5-86.3], and accuracy 86.0% [82.6-89.2], with substantial interobserver agreement [κ value 0.69]). The online tutorial improved the diagnostic yields of trainees to the equivalent level of experts (sensitivity 94.1% [90.0-96.6], specificity 89.0% [83.0-93.2], and accuracy 92.0% [89.3-94.7], p < 0.001 with substantial interobserver agreement [κ value 0.78]). This short, online tutorial improved diagnostic performance and interobserver agreement. © 2018 S. Karger AG, Basel.

  18. An Evaluation of the Conservative Dual-Criterion Method for Teaching University Students to Visually Inspect AB-Design Graphs

    ERIC Educational Resources Information Center

    Stewart, Kelise K.; Carr, James E.; Brandt, Charles W.; McHenry, Meade M.

    2007-01-01

    The present study evaluated the effects of both a traditional lecture and the conservative dual-criterion (CDC) judgment aid on the ability of 6 university students to visually inspect AB-design line graphs. The traditional lecture reliably failed to improve visual inspection accuracy, whereas the CDC method substantially improved the performance…

  19. Motion correction for improving the accuracy of dual-energy myocardial perfusion CT imaging

    NASA Astrophysics Data System (ADS)

    Pack, Jed D.; Yin, Zhye; Xiong, Guanglei; Mittal, Priya; Dunham, Simon; Elmore, Kimberly; Edic, Peter M.; Min, James K.

    2016-03-01

    Coronary Artery Disease (CAD) is the leading cause of death globally [1]. Modern cardiac computed tomography angiography (CCTA) is highly effective at identifying and assessing coronary blockages associated with CAD. The diagnostic value of this anatomical information can be substantially increased in combination with a non-invasive, low-dose, correlative, quantitative measure of blood supply to the myocardium. While CT perfusion has shown promise of providing such indications of ischemia, artifacts due to motion, beam hardening, and other factors confound clinical findings and can limit quantitative accuracy. In this paper, we investigate the impact of applying a novel motion correction algorithm to correct for motion in the myocardium. This motion compensation algorithm (originally designed to correct for the motion of the coronary arteries in order to improve CCTA images) has been shown to provide substantial improvements in both overall image quality and diagnostic accuracy of CCTA. We have adapted this technique for application beyond the coronary arteries and present an assessment of its impact on image quality and quantitative accuracy within the context of dual-energy CT perfusion imaging. We conclude that motion correction is a promising technique that can help foster the routine clinical use of dual-energy CT perfusion. When combined, the anatomical information of CCTA and the hemodynamic information from dual-energy CT perfusion should facilitate better clinical decisions about which patients would benefit from treatments such as stent placement, drug therapy, or surgery and help other patients avoid the risks and costs associated with unnecessary, invasive, diagnostic coronary angiography procedures.

  20. Digital Sensor Technology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thomas, Ken D.; Quinn, Edward L.; Mauck, Jerry L.

    The nuclear industry has been slow to incorporate digital sensor technology into nuclear plant designs due to concerns with digital qualification issues. However, the benefits of digital sensor technology for nuclear plant instrumentation are substantial in terms of accuracy and reliability. This paper, which refers to a final report issued in 2013, demonstrates these benefits in direct comparisons of digital and analog sensor applications. Improved accuracy results from the superior operating characteristics of digital sensors. These include improvements in sensor accuracy and drift and other related parameters which reduce total loop uncertainty and thereby increase safety and operating margins. An example instrument loop uncertainty calculation for a pressure sensor application is presented to illustrate these improvements. This is a side-by-side comparison of the instrument loop uncertainty for both an analog and a digital sensor in the same pressure measurement application. Similarly, improved sensor reliability is illustrated with a sample calculation for determining the probability of failure on demand, an industry standard reliability measure. This looks at equivalent analog and digital temperature sensors to draw the comparison. The results confirm substantial reliability improvement with the digital sensor, due in large part to the ability to continuously monitor the health of a digital sensor such that problems can be immediately identified and corrected. This greatly reduces the likelihood of a latent failure condition of the sensor at the time of a design basis event. Notwithstanding the benefits of digital sensors, there are certain qualification issues that are inherent with digital technology and these are described in the report. One major qualification impediment for digital sensor implementation is software common cause failure (SCCF).
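
    A loop uncertainty calculation of the kind referenced typically combines independent device error terms by root-sum-square. A sketch of such a side-by-side comparison; the individual error-term values below are invented for illustration and are not the report's numbers:

    ```python
    import math

    def loop_uncertainty(*terms_pct):
        """Total loop uncertainty as the root-sum-square (SRSS) of independent
        error terms, each in percent of span - the usual combination rule in
        setpoint/uncertainty calculations. Term values here are illustrative."""
        return math.sqrt(sum(t**2 for t in terms_pct))

    # Analog pressure loop: sensor reference accuracy, drift, transmitter
    # output stage, and rack-mounted signal conditioning (all % span).
    analog = loop_uncertainty(0.5, 1.0, 0.25, 0.5)

    # Digital loop: tighter reference accuracy and drift, and digital
    # transmission eliminates the separate conditioning error terms.
    digital = loop_uncertainty(0.15, 0.2, 0.1)

    print(f"analog  loop uncertainty: {analog:.2f}% span")
    print(f"digital loop uncertainty: {digital:.2f}% span")
    ```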

  1. Structural reanalysis via a mixed method. [using Taylor series for accuracy improvement]

    NASA Technical Reports Server (NTRS)

    Noor, A. K.; Lowder, H. E.

    1975-01-01

    A study is made of the approximate structural reanalysis technique based on the use of Taylor series expansion of response variables in terms of design variables in conjunction with the mixed method. In addition, comparisons are made with two reanalysis techniques based on the displacement method. These techniques are the Taylor series expansion and the modified reduced basis. It is shown that the use of the reciprocals of the sizing variables as design variables (which is the natural choice in the mixed method) can result in a substantial improvement in the accuracy of the reanalysis technique. Numerical results are presented for a space truss structure.

  2. Accuracy of Surgery Clerkship Performance Raters.

    ERIC Educational Resources Information Center

    Littlefield, John H.; And Others

    1991-01-01

    Interrater reliability in numerical ratings of clerkship performance (n=1,482 students) in five surgery programs was studied. Raters were classified as accurate or moderately or significantly stringent or lenient. Results indicate that increasing the proportion of accurate raters would substantially improve the precision of class rankings. (MSE)

  3. Improved navigation by combining VOR/DME information with air or inertial data

    NASA Technical Reports Server (NTRS)

    Bobick, J. C.; Bryson, A. E., Jr.

    1972-01-01

    The improvement was determined in navigational accuracy obtainable by combining VOR/DME information (from one or two stations) with air data (airspeed and heading) or with data from an inertial navigation system (INS) by means of a maximum-likelihood filter. It was found that the addition of air data to the information from one VOR/DME station reduces the RMS position error by a factor of about 2, whereas the addition of inertial data from a low-quality INS reduces the RMS position error by a factor of about 3. The use of information from two VOR/DME stations with air or inertial data yields large factors of improvement in RMS position accuracy over the use of a single VOR/DME station, roughly 15 to 20 for the air-data case and 25 to 35 for the inertial-data case. As far as position accuracy is concerned, at most one VOR station need be used. When continuously updating an INS with VOR/DME information, the use of a high-quality INS (0.01 deg/hr gyro drift) instead of a low-quality INS (1.0 deg/hr gyro drift) does not substantially improve position accuracy.
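
    The error-reduction factors quoted come from maximum-likelihood weighting of measurements with different error statistics. For Gaussian errors this is inverse-variance weighting, which a one-shot toy example can illustrate (the paper's filter processes measurement sequences; the sigmas here are arbitrary):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    true_pos = 100.0                  # true along-track position, arbitrary units
    n = 100_000

    # Two independent position estimates with different error variances,
    # standing in for VOR/DME-derived and air-data/INS-derived positions.
    sigma_vor, sigma_ins = 2.0, 1.0
    z_vor = true_pos + rng.normal(0, sigma_vor, n)
    z_ins = true_pos + rng.normal(0, sigma_ins, n)

    # Maximum-likelihood (inverse-variance-weighted) combination for Gaussian errors.
    w_vor = sigma_ins**2 / (sigma_vor**2 + sigma_ins**2)
    fused = w_vor * z_vor + (1 - w_vor) * z_ins

    for name, est in [("VOR/DME alone", z_vor), ("fused", fused)]:
        rms = np.sqrt(np.mean((est - true_pos) ** 2))
        print(f"{name:14s} RMS error = {rms:.3f}")
    ```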

  4. Protein homology model refinement by large-scale energy optimization.

    PubMed

    Park, Hahnbeom; Ovchinnikov, Sergey; Kim, David E; DiMaio, Frank; Baker, David

    2018-03-20

    Proteins fold to their lowest free-energy structures, and hence the most straightforward way to increase the accuracy of a partially incorrect protein structure model is to search for the lowest-energy nearby structure. This direct approach has met with little success for two reasons: first, energy function inaccuracies can lead to false energy minima, resulting in model degradation rather than improvement; and second, even with an accurate energy function, the search problem is formidable because the energy only drops considerably in the immediate vicinity of the global minimum, and there are a very large number of degrees of freedom. Here we describe a large-scale energy optimization-based refinement method that incorporates advances in both search and energy function accuracy that can substantially improve the accuracy of low-resolution homology models. The method refined low-resolution homology models into correct folds for 50 of 84 diverse protein families and generated improved models in recent blind structure prediction experiments. Analyses of the basis for these improvements reveal contributions from both the improvements in conformational sampling techniques and the energy function.

  5. Utilizing multiple scale models to improve predictions of extra-axial hemorrhage in the immature piglet.

    PubMed

    Scott, Gregory G; Margulies, Susan S; Coats, Brittany

    2016-10-01

    Traumatic brain injury (TBI) is a leading cause of death and disability in the USA. To help understand and better predict TBI, researchers have developed complex finite element (FE) models of the head which incorporate many biological structures such as scalp, skull, meninges, brain (with gray/white matter differentiation), and vasculature. However, most models drastically simplify the membranes and substructures between the pia and arachnoid membranes. We hypothesize that substructures in the pia-arachnoid complex (PAC) contribute substantially to brain deformation following head rotation, and that when included in FE models accuracy of extra-axial hemorrhage prediction improves. To test these hypotheses, microscale FE models of the PAC were developed to span the variability of PAC substructure anatomy and regional density. The constitutive response of these models were then integrated into an existing macroscale FE model of the immature piglet brain to identify changes in cortical stress distribution and predictions of extra-axial hemorrhage (EAH). Incorporating regional variability of PAC substructures substantially altered the distribution of principal stress on the cortical surface of the brain compared to a uniform representation of the PAC. Simulations of 24 non-impact rapid head rotations in an immature piglet animal model resulted in improved accuracy of EAH prediction (to 94 % sensitivity, 100 % specificity), as well as a high accuracy in regional hemorrhage prediction (to 82-100 % sensitivity, 100 % specificity). We conclude that including a biofidelic PAC substructure variability in FE models of the head is essential for improved predictions of hemorrhage at the brain/skull interface.

  6. Accuracy of clinical coding for procedures in oral and maxillofacial surgery.

    PubMed

    Khurram, S A; Warner, C; Henry, A M; Kumar, A; Mohammed-Ali, R I

    2016-10-01

    Clinical coding has important financial implications, and discrepancies in the assigned codes can directly affect the funding of a department and hospital. Over the last few years, numerous oversights have been noticed in the coding of oral and maxillofacial (OMF) procedures. To establish the accuracy and completeness of coding, we retrospectively analysed the records of patients during two time periods: March to May 2009 (324 patients), and January to March 2014 (200 patients). Two investigators independently collected and analysed the data to ensure accuracy and remove bias. A large proportion of operations were not assigned all the relevant codes, and only 32% - 33% were correct in both cycles. To our knowledge, this is the first reported audit of clinical coding in OMFS, and it highlights serious shortcomings that have substantial financial implications. Better input by the surgical team and improved communication between the surgical and coding departments will improve accuracy. Copyright © 2016 The British Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Jie; Cui, Mingjian; Hodge, Bri-Mathias

    The large variability and uncertainty in wind power generation present a concern to power system operators, especially given the increasing amounts of wind power being integrated into the electric power system. Large ramps, one of the biggest concerns, can significantly influence system economics and reliability. The Wind Forecast Improvement Project (WFIP) aimed to improve the accuracy of forecasts and to evaluate the economic benefits of these improvements to grid operators. This paper evaluates the ramp forecasting accuracy gained by improving the performance of short-term wind power forecasting. This study focuses on the WFIP southern study region, which encompasses most of the Electric Reliability Council of Texas (ERCOT) territory, to compare the experimental WFIP forecasts to the existing short-term wind power forecasts (used at ERCOT) at multiple spatial and temporal scales. The study employs four significant wind power ramping definitions according to the power change magnitude, direction, and duration. The optimized swinging door algorithm is adopted to extract ramp events from actual and forecasted wind power time series. The results show that the experimental WFIP forecasts improve the accuracy of wind power ramp forecasting. This improvement can result in substantial cost savings and power system reliability enhancements.

  8. Forecasting Influenza Outbreaks in Boroughs and Neighborhoods of New York City.

    PubMed

    Yang, Wan; Olson, Donald R; Shaman, Jeffrey

    2016-11-01

    The ideal spatial scale, or granularity, at which infectious disease incidence should be monitored and forecast has been little explored. By identifying the optimal granularity for a given disease and host population, and matching surveillance and prediction efforts to this scale, response to emergent and recurrent outbreaks can be improved. Here we explore how granularity and representation of spatial structure affect influenza forecast accuracy within New York City. We develop network models at the borough and neighborhood levels, and use them in conjunction with surveillance data and a data assimilation method to forecast influenza activity. These forecasts are compared to an alternate system that predicts influenza for each borough or neighborhood in isolation. At the borough scale, influenza epidemics are highly synchronous despite substantial differences in intensity, and inclusion of network connectivity among boroughs generally improves forecast accuracy. At the neighborhood scale, we observe much greater spatial heterogeneity among influenza outbreaks including substantial differences in local outbreak timing and structure; however, inclusion of the network model structure generally degrades forecast accuracy. One notable exception is that local outbreak onset, particularly when signal is modest, is better predicted with the network model. These findings suggest that observation and forecast at sub-municipal scales within New York City provides richer, more discriminant information on influenza incidence, particularly at the neighborhood scale where greater heterogeneity exists, and that the spatial spread of influenza among localities can be forecast.

  9. Discovery of novel variants in genotyping arrays improves genotype retention and reduces ascertainment bias

    PubMed Central

    2012-01-01

    Background High-density genotyping arrays that measure hybridization of genomic DNA fragments to allele-specific oligonucleotide probes are widely used to genotype single nucleotide polymorphisms (SNPs) in genetic studies, including human genome-wide association studies. Hybridization intensities are converted to genotype calls by clustering algorithms that assign each sample to a genotype class at each SNP. Data for SNP probes that do not conform to the expected pattern of clustering are often discarded, contributing to ascertainment bias and resulting in lost information - as much as 50% in a recent genome-wide association study in dogs. Results We identified atypical patterns of hybridization intensities that were highly reproducible and demonstrated that these patterns represent genetic variants that were not accounted for in the design of the array platform. We characterized variable intensity oligonucleotide (VINO) probes that display such patterns and are found in all hybridization-based genotyping platforms, including those developed for human, dog, cattle, and mouse. When recognized and properly interpreted, VINOs recovered a substantial fraction of discarded probes and counteracted SNP ascertainment bias. We developed software (MouseDivGeno) that identifies VINOs and improves the accuracy of genotype calling. MouseDivGeno produced highly concordant genotype calls when compared with other methods but it uniquely identified more than 786,000 VINOs in 351 mouse samples. We used whole-genome sequence from 14 mouse strains to confirm the presence of novel variants explaining 28,000 VINOs in those strains. We also identified VINOs in human HapMap 3 samples, many of which were specific to an African population. Incorporating VINOs in phylogenetic analyses substantially improved the accuracy of a Mus species tree and local haplotype assignment in laboratory mouse strains. Conclusion The problems of ascertainment bias and missing information due to genotyping errors are widely recognized as limiting factors in genetic studies. We have conducted the first formal analysis of the effect of novel variants on genotyping arrays, and we have shown that these variants account for a large portion of miscalled and uncalled genotypes. Genetic studies will benefit from substantial improvements in the accuracy of their results by incorporating VINOs in their analyses. PMID:22260749

  10. Analysis of the moderate resolution imaging spectroradiometer contextual algorithm for small fire detection, Journal of Applied Remote Sensing Vol.3

    Treesearch

    W. Wang; J.J. Qu; X. Hao; Y. Liu

    2009-01-01

    In the southeastern United States, most wildland fires are of low intensity. A substantial number of these fires cannot be detected by the MODIS contextual algorithm. To improve the accuracy of fire detection for this region, the remote-sensed characteristics of these fires have to be...

  11. Iterative refinement of structure-based sequence alignments by Seed Extension

    PubMed Central

    Kim, Changhoon; Tai, Chin-Hsien; Lee, Byungkook

    2009-01-01

    Background Accurate sequence alignment is required in many bioinformatics applications but, when sequence similarity is low, it is difficult to obtain accurate alignments based on sequence similarity alone. The accuracy improves when the structures are available, but current structure-based sequence alignment procedures still mis-align substantial numbers of residues. In order to correct such errors, we previously explored the possibility of replacing the residue-based dynamic programming algorithm in structure alignment procedures with the Seed Extension algorithm, which does not use a gap penalty. Here, we describe a new procedure called RSE (Refinement with Seed Extension) that iteratively refines a structure-based sequence alignment. Results RSE uses SE (Seed Extension) in its core, which is an algorithm that we reported recently for obtaining a sequence alignment from two superimposed structures. The RSE procedure was evaluated by comparing the correctly aligned fractions of residues before and after the refinement of the structure-based sequence alignments produced by popular programs. CE, DaliLite, FAST, LOCK2, MATRAS, MATT, TM-align, SHEBA and VAST were included in this analysis and the NCBI's CDD root node set was used as the reference alignments. RSE improved the average accuracy of sequence alignments for all programs tested when no shift error was allowed. The amount of improvement varied depending on the program. The average improvements were small for DaliLite and MATRAS but about 5% for CE and VAST. More substantial improvements have been seen in many individual cases. The additional computation times required for the refinements were negligible compared to the times taken by the structure alignment programs. Conclusion RSE is a computationally inexpensive way of improving the accuracy of a structure-based sequence alignment. It can be used as a standalone procedure following a regular structure-based sequence alignment or to replace the traditional iterative refinement procedures based on residue-level dynamic programming algorithm in many structure alignment programs. PMID:19589133

  12. Real-time, resource-constrained object classification on a micro-air vehicle

    NASA Astrophysics Data System (ADS)

    Buck, Louis; Ray, Laura

    2013-12-01

    A real-time embedded object classification algorithm is developed through the novel combination of binary feature descriptors, a bag-of-visual-words object model and the cortico-striatal loop (CSL) learning algorithm. The BRIEF, ORB and FREAK binary descriptors are tested and compared to SIFT descriptors with regard to their respective classification accuracies, execution times, and memory requirements when used with CSL on a 12.6 g ARM Cortex embedded processor running at 800 MHz. Additionally, the effect of χ² feature mapping and opponent-color representations used with these descriptors is examined. These tests are performed on four data sets of varying sizes and difficulty, and the BRIEF descriptor is found to yield the best combination of speed and classification accuracy. Its use with CSL achieves accuracies between 67% and 95% of those achieved with SIFT descriptors and allows for the embedded classification of a 128x192 pixel image in 0.15 seconds, 60 times faster than classification with SIFT. χ² mapping is found to provide substantial improvements in classification accuracy for all of the descriptors at little cost, while opponent-color descriptors offer accuracy improvements only on colorful datasets.
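
    The χ² feature mapping can be approximated with scikit-learn's AdditiveChi2Sampler, which turns a χ²-kernel classifier into a cheap linear model, the kind of trade that matters on an embedded processor. A minimal sketch on stand-in bag-of-visual-words histograms (the data, labels, and vocabulary size are synthetic):

    ```python
    import numpy as np
    from sklearn.kernel_approximation import AdditiveChi2Sampler
    from sklearn.linear_model import SGDClassifier
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(0)
    # Stand-in bag-of-visual-words histograms: 200 images, 256-word vocabulary.
    X = rng.random((200, 256))
    X /= X.sum(axis=1, keepdims=True)        # normalize to histograms
    y = X[:, :4].argmax(axis=1)              # synthetic labels tied to 4 words

    # Approximate chi-squared feature map feeding a linear SVM; inference is
    # a single dot product per class, cheap at test time.
    clf = make_pipeline(AdditiveChi2Sampler(sample_steps=2),
                        SGDClassifier(loss="hinge", random_state=0))
    clf.fit(X, y)
    print("train accuracy:", clf.score(X, y))
    ```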

  13. Predictive accuracy of combined genetic and environmental risk scores.

    PubMed

    Dudbridge, Frank; Pashayan, Nora; Yang, Jian

    2018-02-01

    The substantial heritability of most complex diseases suggests that genetic data could provide useful risk prediction. To date the performance of genetic risk scores has fallen short of the potential implied by heritability, but this can be explained by insufficient sample sizes for estimating highly polygenic models. When risk predictors already exist based on environment or lifestyle, two key questions are to what extent can they be improved by adding genetic information, and what is the ultimate potential of combined genetic and environmental risk scores? Here, we extend previous work on the predictive accuracy of polygenic scores to allow for an environmental score that may be correlated with the polygenic score, for example when the environmental factors mediate the genetic risk. We derive common measures of predictive accuracy and improvement as functions of the training sample size, chip heritabilities of disease and environmental score, and genetic correlation between disease and environmental risk factors. We consider simple addition of the two scores and a weighted sum that accounts for their correlation. Using examples from studies of cardiovascular disease and breast cancer, we show that improvements in discrimination are generally small but reasonable degrees of reclassification could be obtained with current sample sizes. Correlation between genetic and environmental scores has only minor effects on numerical results in realistic scenarios. In the longer term, as the accuracy of polygenic scores improves they will come to dominate the predictive accuracy compared to environmental scores. © 2017 WILEY PERIODICALS, INC.
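
    The weighted sum that accounts for correlation can be illustrated with a toy liability-threshold model: given each score's univariate (marginal) slope and the correlation between scores, solving a 2x2 linear system recovers the joint weights. A sketch with invented effect sizes (not the paper's derivation or data):

    ```python
    import numpy as np
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    n = 50_000
    rho = 0.3                                # genetic-environmental correlation
    cov = np.array([[1.0, rho], [rho, 1.0]])
    g, e = rng.multivariate_normal([0, 0], cov, n).T
    liability = 0.4 * g + 0.3 * e + rng.normal(0, 1, n)
    case = liability > np.quantile(liability, 0.9)   # ~10% prevalence

    # Univariate ("marginal") slope each score would show on its own:
    beta_marg = np.array([0.4 + 0.3 * rho, 0.3 + 0.4 * rho])
    # Correlation-aware joint weights: solve cov @ w = beta_marg.
    w = np.linalg.solve(cov, beta_marg)

    print("AUC, environment only :", round(roc_auc_score(case, e), 3))
    print("AUC, unweighted g + e :", round(roc_auc_score(case, g + e), 3))
    print("AUC, weighted sum     :",
          round(roc_auc_score(case, w[0] * g + w[1] * e), 3))
    ```

    Consistent with the abstract, the AUC gain from the correlation-aware weighting over the simple sum is modest at realistic correlations; the weighting mainly prevents double-counting the shared component.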

  14. Predictive accuracy of combined genetic and environmental risk scores

    PubMed Central

    Pashayan, Nora; Yang, Jian

    2017-01-01

    The substantial heritability of most complex diseases suggests that genetic data could provide useful risk prediction. To date the performance of genetic risk scores has fallen short of the potential implied by heritability, but this can be explained by insufficient sample sizes for estimating highly polygenic models. When risk predictors already exist based on environment or lifestyle, two key questions are to what extent can they be improved by adding genetic information, and what is the ultimate potential of combined genetic and environmental risk scores? Here, we extend previous work on the predictive accuracy of polygenic scores to allow for an environmental score that may be correlated with the polygenic score, for example when the environmental factors mediate the genetic risk. We derive common measures of predictive accuracy and improvement as functions of the training sample size, chip heritabilities of disease and environmental score, and genetic correlation between disease and environmental risk factors. We consider simple addition of the two scores and a weighted sum that accounts for their correlation. Using examples from studies of cardiovascular disease and breast cancer, we show that improvements in discrimination are generally small but reasonable degrees of reclassification could be obtained with current sample sizes. Correlation between genetic and environmental scores has only minor effects on numerical results in realistic scenarios. In the longer term, as the accuracy of polygenic scores improves they will come to dominate the predictive accuracy compared to environmental scores. PMID:29178508

  15. Human supervision and microprocessor control of an optical tracking system

    NASA Technical Reports Server (NTRS)

    Bigley, W. J.; Vandenberg, J. D.

    1981-01-01

    Gunners using small calibre anti-aircraft systems have not been able to track high-speed air targets effectively. Substantial improvement in the accuracy of surface fire against attacking aircraft has been realized through the design of a director-type weapon control system. This system concept frees the gunner to exercise a supervisory/monitoring role while the computer takes over continuous target tracking. This change capitalizes on a key consideration of human factors engineering while increasing system accuracy. The advanced system design, which uses distributed microprocessor control, is discussed at the block diagram level and is contrasted with the previous implementation.

  16. Health-based risk adjustment: is inpatient and outpatient diagnostic information sufficient?

    PubMed

    Lamers, L M

    Adequate risk adjustment is critical to the success of market-oriented health care reforms in many countries. Currently used risk adjusters based on demographic and diagnostic cost groups (DCGs) do not reflect expected costs accurately. This study examines the simultaneous predictive accuracy of inpatient and outpatient morbidity measures and prior costs. DCGs, pharmacy cost groups (PCGs), and prior year's costs improve the predictive accuracy of the demographic model substantially. DCGs and PCGs seem complementary in their ability to predict future costs. However, this study shows that the combination of DCGs and PCGs still leaves room for cream skimming.

  17. Machine Learning Based Multi-Physical-Model Blending for Enhancing Renewable Energy Forecast -- Improvement via Situation Dependent Error Correction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Siyuan; Hwang, Youngdeok; Khabibrakhmanov, Ildar

    With increasing penetration of solar and wind energy in the total energy supply mix, the pressing need for accurate energy forecasting has become well recognized. Here we report the development of a machine-learning based model-blending approach for statistically combining multiple meteorological models to improve the accuracy of solar/wind power forecasts. Importantly, we demonstrate that including as machine learning input, in addition to the parameters to be predicted (such as solar irradiance and power), further atmospheric state parameters which collectively define weather situations provides further enhanced accuracy for the blended result. Functional analysis of variance shows that the error of an individual model has substantial dependence on the weather situation. The machine-learning approach effectively reduces such situation-dependent error and thus produces more accurate results than conventional multi-model ensemble approaches based on simplistic equally or unequally weighted model averaging. Validation results over an extended period of time show over 30% improvement in solar irradiance/power forecast accuracy compared to forecasts based on the best individual model.
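
    Situation-dependent blending amounts to letting a regressor see both the individual model forecasts and the atmospheric state variables, so it can learn which model errs in which weather situation. A toy sketch with synthetic forecasts whose biases depend on the weather state (all names and numbers are illustrative, not from the paper):

    ```python
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 4000
    # Hypothetical atmospheric state variables defining the "situation".
    cloud, humid = rng.random(n), rng.random(n)
    truth = 900 * (1 - 0.8 * cloud) + rng.normal(0, 20, n)   # irradiance, W/m^2
    models = np.column_stack([
        truth + rng.normal(0, 60, n) + 80 * cloud,   # model A: biased when cloudy
        truth + rng.normal(0, 60, n) - 70 * humid,   # model B: biased when humid
        truth + rng.normal(0, 90, n),                # model C: noisy but unbiased
    ])

    X = np.column_stack([models, cloud, humid])
    X_tr, X_te, y_tr, y_te = train_test_split(X, truth, random_state=0)

    blend = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
    rmse = lambda p: np.sqrt(np.mean((p - y_te) ** 2))
    print("equal-weight ensemble RMSE:", round(rmse(X_te[:, :3].mean(axis=1)), 1))
    print("situation-aware blend RMSE:", round(rmse(blend.predict(X_te)), 1))
    ```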

  18. Thematic accuracy of the NLCD 2001 land cover for the conterminous United States

    USGS Publications Warehouse

    Wickham, J.D.; Stehman, S.V.; Fry, J.A.; Smith, J.H.; Homer, Collin G.

    2010-01-01

    The land-cover thematic accuracy of NLCD 2001 was assessed from a probability-sample of 15,000 pixels. Nationwide, NLCD 2001 overall Anderson Level II and Level I accuracies were 78.7% and 85.3%, respectively. By comparison, overall accuracies at Level II and Level I for the NLCD 1992 were 58% and 80%. Forest and cropland were two classes showing substantial improvements in accuracy in NLCD 2001 relative to NLCD 1992. NLCD 2001 forest and cropland user's accuracies were 87% and 82%, respectively, compared to 80% and 43% for NLCD 1992. Accuracy results are reported for 10 geographic regions of the United States, with regional overall accuracies ranging from 68% to 86% for Level II and from 79% to 91% at Level I. Geographic variation in class-specific accuracy was strongly associated with the phenomenon that regionally more abundant land-cover classes had higher accuracy. Accuracy estimates based on several definitions of agreement are reported to provide an indication of the potential impact of reference data error on accuracy. Drawing on our experience from two NLCD national accuracy assessments, we discuss the use of designs incorporating auxiliary data to more seamlessly quantify reference data quality as a means to further advance thematic map accuracy assessment.
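
    The accuracy measures reported here come from a confusion matrix over the reference sample: overall accuracy is the diagonal fraction, user's accuracy conditions on the mapped class, and producer's accuracy on the reference class. A minimal sketch with an invented matrix (a rigorous NLCD-style estimate would additionally weight counts by the sample's inclusion probabilities):

    ```python
    import numpy as np

    # Toy confusion matrix from a probability sample of reference pixels
    # (rows = map class, columns = reference class); counts are illustrative.
    classes = ["forest", "cropland", "urban"]
    cm = np.array([[435, 40, 25],
                   [30, 410, 60],
                   [20, 35, 445]])

    overall = np.trace(cm) / cm.sum()
    users = np.diag(cm) / cm.sum(axis=1)      # map perspective: commission error
    producers = np.diag(cm) / cm.sum(axis=0)  # reference perspective: omission error

    print(f"overall accuracy: {overall:.1%}")
    for c, u, p in zip(classes, users, producers):
        print(f"{c:9s} user's = {u:.1%}  producer's = {p:.1%}")
    ```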

  19. Application of Numerical Integration and Data Fusion in Unit Vector Method

    NASA Astrophysics Data System (ADS)

    Zhang, J.

    2012-01-01

    The Unit Vector Method (UVM) is a family of orbit determination methods designed by Purple Mountain Observatory (PMO) that has been applied extensively. It obtains the condition equations for different kinds of data by projecting the basic equation onto different unit vectors, and it is well suited to weighting different kinds of data. High-precision data can thus play a major role in orbit determination, and the accuracy of orbit determination improves markedly. The improved UVM (PUVM2) extended the UVM from initial orbit determination to orbit improvement and unified the two dynamically, further improving precision and efficiency. In this thesis, further research has been done on the UVM. First, as observation methods and techniques have advanced, the types and precision of observational data have improved substantially, which in turn demands higher orbit-determination precision; analytical perturbation theory can no longer meet this requirement. Numerical integration of the perturbations has therefore been introduced into the UVM, so that the accuracy of the dynamical model matches the accuracy of the real data, and the condition equations of the UVM are modified accordingly. The accuracy of orbit determination is improved further. Second, a data-fusion method has been introduced into the UVM. The convergence mechanism and the weakness of the weighting strategy in the original UVM have been clarified; the calculation of the approximate state transition matrix is simplified, and the weighting strategy is improved for data of different dimensions and precision. Orbit-determination results for both simulated and real data show that this work is effective: (1) with numerical integration, the accuracy of orbit determination improves markedly and suits the high-accuracy data of current observing equipment, while the calculation is also considerably faster than classical differential correction with numerical integration; (2) with data fusion, the weighting rationally reflects the accuracy of the different kinds of data, all data are fully used, and the method exhibits good numerical stability and rational weight distribution.

  20. Opportunities for international collaboration in dog breeding from the sharing of pedigree and health data.

    PubMed

    Fikse, W F; Malm, S; Lewis, T W

    2013-09-01

    Pooling of pedigree and phenotype data from different countries may improve the accuracy of derived indicators of both genetic diversity and genetic merit of traits of interest. This study demonstrates significant migration of individuals of four pedigree dog breeds between Sweden and the United Kingdom. Correlations of estimates of genetic merit (estimated breeding values, EBVs) for the Fédération Cynologique Internationale and the British Veterinary Association and Kennel Club evaluations of hip dysplasia (HD) were strong and favourable, indicating that both scoring schemes capture substantially the same genetic trait. Therefore pooled use of phenotypic data on hip dysplasia would be expected to improve the accuracy of EBV for HD in both countries due to increased sample data. Copyright © 2013. Published by Elsevier Ltd.

  1. [Significance of auditory and kinesthetic feedback in vocal training of young professional singers (students)].

    PubMed

    Ciochină, Al D; Ciochină, Paula; Cobzeanu, M D; Burlui, Ada; Zaharia, D

    2004-01-01

    The study aimed to estimate the significance of auditory and kinesthetic feedback for accurate control of fundamental frequency (F0) in 18 students beginning a professional singing education. The students sang an ascending and descending triad pattern covering their entire pitch range, with and without masking noise, in legato and staccato, and in a slow and a fast tempo. F0 was measured by a computer program. The interval sizes between adjacent tones were determined and their departures from equally tempered tuning were calculated; the deviations from this tuning were used as a measure of the accuracy of intonation. Intonation accuracy was reduced by masking noise, by staccato as opposed to legato singing, and by fast as opposed to slow performance. The contribution of the auditory feedback to pitch control was not significantly improved after education, whereas the kinesthetic feedback circuit was improved in slow legato and slow staccato tasks. The results support the assumption that kinesthetic feedback contributes substantially to intonation accuracy.

  2. Forecasting Influenza Outbreaks in Boroughs and Neighborhoods of New York City

    PubMed Central

    2016-01-01

    The ideal spatial scale, or granularity, at which infectious disease incidence should be monitored and forecast has been little explored. By identifying the optimal granularity for a given disease and host population, and matching surveillance and prediction efforts to this scale, response to emergent and recurrent outbreaks can be improved. Here we explore how granularity and representation of spatial structure affect influenza forecast accuracy within New York City. We develop network models at the borough and neighborhood levels, and use them in conjunction with surveillance data and a data assimilation method to forecast influenza activity. These forecasts are compared to an alternate system that predicts influenza for each borough or neighborhood in isolation. At the borough scale, influenza epidemics are highly synchronous despite substantial differences in intensity, and inclusion of network connectivity among boroughs generally improves forecast accuracy. At the neighborhood scale, we observe much greater spatial heterogeneity among influenza outbreaks including substantial differences in local outbreak timing and structure; however, inclusion of the network model structure generally degrades forecast accuracy. One notable exception is that local outbreak onset, particularly when signal is modest, is better predicted with the network model. These findings suggest that observation and forecast at sub-municipal scales within New York City provides richer, more discriminant information on influenza incidence, particularly at the neighborhood scale where greater heterogeneity exists, and that the spatial spread of influenza among localities can be forecast. PMID:27855155

  3. Expected Improvements in VLBI Measurements of the Earth's Orientation

    NASA Technical Reports Server (NTRS)

    Ma, Chopo

    2003-01-01

    Measurements of the Earth's orientation since the 1970s using space geodetic techniques have provided a continually expanding and improving data set for studies of the Earth's structure and the distribution of mass and angular momentum. The accuracy of current one-day measurements is better than 100 microarcsec for the motion of the pole with respect to the celestial and terrestrial reference frames and better than 3 microsec for the rotation around the pole. VLBI uniquely provides the three Earth orientation parameters (nutation and UT1) that relate the Earth to the extragalactic celestial reference frame. The accuracy and resolution of the VLBI Earth orientation time series can be expected to improve substantially in the near future because of refinements in the realization of the celestial reference frame, improved modeling of the troposphere and non-linear station motions, larger observing networks, optimized scheduling, deployment of disk-based Mark V recorders, full use of Mark IV capabilities, and e-VLBI. More radical future technical developments will be discussed.

  4. Measuring Parameters of Massive Black Hole Binaries with Partially Aligned Spins

    NASA Technical Reports Server (NTRS)

    Lang, Ryan N.; Hughes, Scott A.; Cornish, Neil J.

    2011-01-01

    The future space-based gravitational wave detector LISA will be able to measure parameters of coalescing massive black hole binaries, often to extremely high accuracy. Previous work has demonstrated that the black hole spins can have a strong impact on the accuracy of parameter measurement. Relativistic spin-induced precession modulates the waveform in a manner which can break degeneracies between parameters, in principle significantly improving how well they are measured. Recent studies have indicated, however, that spin precession may be weak for an important subset of astrophysical binary black holes: those in which the spins are aligned due to interactions with gas. In this paper, we examine how well a binary's parameters can be measured when its spins are partially aligned and compare results using waveforms that include higher post-Newtonian harmonics to those that are truncated at leading quadrupole order. We find that the weakened precession can substantially degrade parameter estimation, particularly for the "extrinsic" parameters sky position and distance. Absent higher harmonics, LISA typically localizes the sky position of a nearly aligned binary about an order of magnitude less accurately than one for which the spin orientations are random. Our knowledge of a source's sky position will thus be worst for the gas-rich systems which are most likely to produce electromagnetic counterparts. Fortunately, higher harmonics of the waveform can make up for this degradation. By including harmonics beyond the quadrupole in our waveform model, we find that the accuracy with which most of the binary's parameters are measured can be substantially improved. In some cases, the improvement is such that they are measured almost as well as when the binary spins are randomly aligned.

  5. Process improvement methods increase the efficiency, accuracy, and utility of a neurocritical care research repository.

    PubMed

    O'Connor, Sydney; Ayres, Alison; Cortellini, Lynelle; Rosand, Jonathan; Rosenthal, Eric; Kimberly, W Taylor

    2012-08-01

    Reliable and efficient data repositories are essential for the advancement of research in neurocritical care. Various factors, such as the large volume of patients treated within the neuro ICU, the differing length and complexity of their hospital stays, and the substantial amount of desired information, can complicate the process of data collection. We adapted the tools of process improvement to the data collection and database design of a research repository for a neuroscience intensive care unit. Following the Shewhart-Deming method, we implemented an iterative approach to improve the process of data collection for each element. After an initial design phase, we re-evaluated all data fields that were challenging or time-consuming to collect. We then applied root-cause analysis to optimize the accuracy and ease of collection, and to determine the most efficient manner of collecting the maximal amount of data. During a 6-month period, we iteratively analyzed the process of data collection for various data elements. For example, the pre-admission medications were found to contain numerous inaccuracies after comparison with a gold standard (sensitivity 71% and specificity 94%). Also, our first method of tracking patient admissions and discharges contained higher than expected errors (sensitivity 94% and specificity 93%). In addition to increasing accuracy, we focused on improving efficiency. Through repeated incremental improvements, we reduced the number of subject records that required daily monitoring from 40 to 6 per day, and decreased daily effort from 4.5 to 1.5 h/day. By applying process improvement methods to the design of a neuroscience ICU data repository, we achieved a threefold improvement in efficiency and increased accuracy. Although individual barriers to data collection will vary from institution to institution, a focus on process improvement is critical to overcoming these barriers.
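
    The gold-standard comparisons quoted above reduce to a confusion-matrix computation. A minimal sketch of how such sensitivity and specificity figures are obtained (the dict-based layout is hypothetical, not the repository's actual schema):

      def sensitivity_specificity(collected, gold):
          """Compare collected boolean flags against a gold standard.

          collected, gold : dicts mapping subject id to bool
          Returns (sensitivity, specificity).
          """
          tp = sum(collected[k] and gold[k] for k in gold)
          tn = sum(not collected[k] and not gold[k] for k in gold)
          fn = sum(not collected[k] and gold[k] for k in gold)
          fp = sum(collected[k] and not gold[k] for k in gold)
          return tp / (tp + fn), tn / (tn + fp)

      # Toy check against a hand-built gold standard.
      gold = {1: True, 2: True, 3: False, 4: False}
      coll = {1: True, 2: False, 3: False, 4: True}
      print(sensitivity_specificity(coll, gold))  # (0.5, 0.5)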

  6. Quantum-enhanced Sensing and Efficient Quantum Computation

    DTIC Science & Technology

    2015-07-27

    The system was used to improve quantum boson sampling tests. Results on the quantum boson sampling (QBS) problem are reported in Ref. [7]. To substantially increase the scale of feasible tests, we developed a new variation.

  7. Transforming the HIM department into a strategic resource.

    PubMed

    Odorisio, L F; Piescik, J B

    1998-05-01

    The transformation of a traditional health information management (HIM) department into a "virtual" HIM department can offer an integrated delivery system (IDS) substantial economic advantages, improve patient satisfaction levels, enhance the quality of care, and provide greater accuracy in measuring and demonstrating healthcare outcomes. The HIM department's mission should be aligned with enterprise strategies, and the IDS should invest in technology that enables HIM to respond to enterprisewide information requirements.

  8. Planning and scheduling for success

    NASA Technical Reports Server (NTRS)

    Manzanera, Ignacio

    1994-01-01

    Planning and scheduling programs are excellent management tools when properly introduced to the project management team and regularly maintained. Communications, creativity, flexibility and accuracy are substantially improved by following a simple set of rules. A planning and scheduling program will work for you if you believe in it, make others in your project team realize its benefits, and make it an extension of your project cost control philosophy.

  9. Geoid undulation accuracy

    NASA Technical Reports Server (NTRS)

    Rapp, Richard H.

    1993-01-01

    The determination of the geoid, an equipotential surface of the Earth's gravity field, has long been of interest to geodesists and oceanographers. The geoid provides a surface to which the actual ocean surface can be compared, with the differences implying information on the circulation patterns of the oceans. For use in oceanographic applications the geoid is ideally needed to a high accuracy and a high resolution. There are applications that require geoid undulation information to an accuracy of +/- 10 cm with a resolution of 50 km. We are far from this goal today, but substantial improvement in geoid determination has been made. In 1979 the cumulative geoid undulation error to spherical harmonic degree 20 was +/- 1.4 m for the GEM10 potential coefficient model. Today the corresponding value has been reduced to +/- 25 cm for GEM-T3 or +/- 11 cm for the OSU91A model. Similar improvements are noted by harmonic degree (wavelength) and in resolution. Potential coefficient models now exist to degree 360, based on a combination of data types. This paper discusses the accuracy changes that have taken place over the past 12 years in the determination of geoid undulations.

  10. Ultra-fast HPM detectors improve NAD(P)H FLIM

    NASA Astrophysics Data System (ADS)

    Becker, Wolfgang; Wetzker, Cornelia; Benda, Aleš

    2018-02-01

    Metabolic imaging by NAD(P)H FLIM requires the decay functions in the individual pixels to be resolved into the decay components of bound and unbound NAD(P)H. Metabolic information is contained in the lifetimes and relative amplitudes of the components. The separation of the decay components and the accuracy of the amplitudes and lifetimes improve substantially when ultra-fast HPM-100-06 and HPM-100-07 hybrid detectors are used. The IRF width in combination with the Becker & Hickl SPC-150N and SPC-150NX TCSPC modules is less than 20 ps. An IRF this fast does not interfere with the fluorescence decay. The usual deconvolution process in the data analysis then virtually reduces to simple curve fitting, and the parameters of the NAD(P)H decay components are obtained with unprecedented accuracy.
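
    With an IRF this fast, deconvolution virtually reduces to curve fitting, as noted above. A minimal sketch of a two-component fit on synthetic data (illustrative lifetimes and photon counts; not Becker & Hickl's analysis code):

      import numpy as np
      from scipy.optimize import curve_fit

      def biexp(t, a1, tau1, a2, tau2):
          """Two-component fluorescence decay model."""
          return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

      # Synthetic NAD(P)H-like decay: fast "free" and slow "bound" parts.
      t = np.linspace(0, 10, 256)                     # ns
      y = biexp(t, 0.7, 0.4, 0.3, 2.5)
      y_noisy = np.random.poisson(y * 1000) / 1000.0  # counting noise

      popt, _ = curve_fit(biexp, t, y_noisy, p0=[0.5, 0.5, 0.5, 3.0])
      print(popt)  # amplitudes and lifetimes of the two components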

  11. Multidimensional Optimization of Signal Space Distance Parameters in WLAN Positioning

    PubMed Central

    Brković, Milenko; Simić, Mirjana

    2014-01-01

    Accurate indoor localization of mobile users is one of the challenging problems of the last decade. Besides delivering high speed Internet, Wireless Local Area Network (WLAN) can be used as an effective indoor positioning system, being competitive both in terms of accuracy and cost. Among the localization algorithms, nearest neighbor fingerprinting algorithms based on Received Signal Strength (RSS) parameter have been extensively studied as an inexpensive solution for delivering indoor Location Based Services (LBS). In this paper, we propose the optimization of the signal space distance parameters in order to improve precision of WLAN indoor positioning, based on nearest neighbor fingerprinting algorithms. Experiments in a real WLAN environment indicate that proposed optimization leads to substantial improvements of the localization accuracy. Our approach is conceptually simple, is easy to implement, and does not require any additional hardware. PMID:24757443
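
    As an illustration of a tunable signal-space distance, the sketch below implements nearest-neighbour RSS fingerprinting with the Minkowski exponent p as the free parameter; the radio map and parameter values are toy assumptions, not the paper's optimized ones.

      import numpy as np

      def locate(rss, fingerprints, positions, p=2.0, k=3):
          """k-nearest-neighbour fingerprinting with a tunable Minkowski
          exponent p, one example of a signal-space distance parameter
          that can be optimized on training data."""
          d = np.sum(np.abs(fingerprints - rss) ** p, axis=1) ** (1.0 / p)
          nearest = np.argsort(d)[:k]
          return positions[nearest].mean(axis=0)

      # Toy radio map: 4 reference points, 3 access points (RSS in dBm).
      fingerprints = np.array([[-40, -70, -60],
                               [-55, -50, -65],
                               [-70, -45, -55],
                               [-60, -65, -40]], dtype=float)
      positions = np.array([[0, 0], [0, 5], [5, 5], [5, 0]], dtype=float)
      print(locate(np.array([-50.0, -55.0, -62.0]), fingerprints, positions))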

  12. IMPROVED ALGORITHMS FOR RADAR-BASED RECONSTRUCTION OF ASTEROID SHAPES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Greenberg, Adam H.; Margot, Jean-Luc

    We describe our implementation of a global-parameter optimizer and Square Root Information Filter into the asteroid-modeling software shape. We compare the performance of our new optimizer with that of the existing sequential optimizer when operating on various forms of simulated data and actual asteroid radar data. In all cases, the new implementation performs substantially better than its predecessor: it converges faster, produces shape models that are more accurate, and solves for spin axis orientations more reliably. We discuss potential future changes to improve shape's fitting speed and accuracy.

  13. Hidden Markov induced Dynamic Bayesian Network for recovering time evolving gene regulatory networks

    NASA Astrophysics Data System (ADS)

    Zhu, Shijia; Wang, Yadong

    2015-12-01

    Dynamic Bayesian Networks (DBN) have been widely used to recover gene regulatory relationships from time-series data in computational systems biology. Its standard assumption is ‘stationarity’, and therefore, several research efforts have been recently proposed to relax this restriction. However, those methods suffer from three challenges: long running time, low accuracy and reliance on parameter settings. To address these problems, we propose a novel non-stationary DBN model by extending each hidden node of Hidden Markov Model into a DBN (called HMDBN), which properly handles the underlying time-evolving networks. Correspondingly, an improved structural EM algorithm is proposed to learn the HMDBN. It dramatically reduces searching space, thereby substantially improving computational efficiency. Additionally, we derived a novel generalized Bayesian Information Criterion under the non-stationary assumption (called BWBIC), which can help significantly improve the reconstruction accuracy and largely reduce over-fitting. Moreover, the re-estimation formulas for all parameters of our model are derived, enabling us to avoid reliance on parameter settings. Compared to the state-of-the-art methods, the experimental evaluation of our proposed method on both synthetic and real biological data demonstrates more stably high prediction accuracy and significantly improved computation efficiency, even with no prior knowledge and parameter settings.

  14. Navigation Algorithms for the SeaWiFS Mission

    NASA Technical Reports Server (NTRS)

    Hooker, Stanford B. (Editor); Firestone, Elaine R. (Editor); Patt, Frederick S.; McClain, Charles R. (Technical Monitor)

    2002-01-01

    The navigation algorithms for the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) were designed to meet the requirement of 1-pixel accuracy, a standard deviation (sigma) of 2. The objective has been to extract the best possible accuracy from the spacecraft telemetry and avoid the need for costly manual renavigation or geometric rectification. The requirement is addressed by postprocessing of both the Global Positioning System (GPS) receiver and Attitude Control System (ACS) data in the spacecraft telemetry stream. The navigation algorithms described are separated into four areas: orbit processing, attitude sensor processing, attitude determination, and final navigation processing. There has been substantial modification during the mission of the attitude determination and attitude sensor processing algorithms. For the former, the basic approach was completely changed during the first year of the mission, from a single-frame deterministic method to a Kalman smoother. This was done for several reasons: a) to improve the overall accuracy of the attitude determination, particularly near the sub-solar point; b) to reduce discontinuities; c) to support the single-ACS-string spacecraft operation that was started after the first mission year, which causes gaps in attitude sensor coverage; and d) to handle data quality problems (which became evident after launch) in the direct-broadcast data. The changes to the attitude sensor processing algorithms primarily involved the development of a model for the Earth horizon height, also needed for single-string operation; the incorporation of improved sensor calibration data; and improved data quality checking and smoothing to handle the data quality issues. The attitude sensor alignments have also been revised multiple times, generally in conjunction with the other changes. The orbit and final navigation processing algorithms have remained largely unchanged during the mission, aside from refinements to data quality checking. Although further improvements are certainly possible, future evolution of the algorithms is expected to be limited to refinements of the methods presented here, and no substantial changes are anticipated.

  15. Never-Ending Learning for Deep Understanding of Natural Language

    DTIC Science & Technology

    2017-10-01

    This research has explored the thesis that very significant amounts of background knowledge can lead to very substantial improvements in the accuracy of deep text analysis. To explore this thesis we have built on our earlier research on the Never-Ending Language Learning (NELL) computer system, which has been running non-stop since ...

  16. Goldmann Tonometer Prism with an Optimized Error Correcting Applanation Surface.

    PubMed

    McCafferty, Sean; Lim, Garrett; Duncan, William; Enikov, Eniko; Schwiegerling, Jim

    2016-09-01

    We evaluate solutions for an applanating surface modification to the Goldmann tonometer prism which substantially negates the errors due to patient variability in biomechanics. A modified Goldmann or correcting applanation tonometry surface (CATS) prism is presented which was optimized to minimize the intraocular pressure (IOP) error due to corneal thickness, stiffness, curvature, and tear film. Mathematical modeling with finite element analysis (FEA) and manometric IOP-referenced cadaver eyes were used to optimize and validate the design. Mathematical modeling of the optimized CATS prism indicates an approximate 50% reduction in each of the corneal biomechanical and tear film errors. Manometrically referenced pressure in cadaveric eyes demonstrates substantial equivalence of the CATS prism to Goldmann applanation tonometry (GAT) in nominal eyes, as predicted by modeling theory. A CATS-modified Goldmann prism is theoretically able to significantly improve the accuracy of IOP measurement without changing Goldmann measurement technique or interpretation. Clinical validation is needed, but the analysis indicates a reduction in central corneal thickness (CCT) error alone to less than ±2 mm Hg using the CATS prism in 100% of a standard population, compared with only 54% having less than ±2 mm Hg error with the present Goldmann prism. This article presents an easily adopted novel approach and critical design parameters to improve the accuracy of a Goldmann applanating tonometer.

  17. A high accuracy sequential solver for simulation and active control of a longitudinal combustion instability

    NASA Technical Reports Server (NTRS)

    Shyy, W.; Thakur, S.; Udaykumar, H. S.

    1993-01-01

    A high accuracy convection scheme using a sequential solution technique has been developed and applied to simulate the longitudinal combustion instability and its active control. The scheme has been devised in the spirit of the Total Variation Diminishing (TVD) concept with special source term treatment. Due to the substantial heat release effect, a clear delineation of the key elements employed by the scheme, i.e., the adjustable damping factor and the source term treatment has been made. By comparing with the first-order upwind scheme previously utilized, the present results exhibit less damping and are free from spurious oscillations, offering improved quantitative accuracy while confirming the spectral analysis reported earlier. A simple feedback type of active control has been found to be capable of enhancing or attenuating the magnitude of the combustion instability.

  18. Improved COD Measurements for Organic Content in Flowback Water with High Chloride Concentrations.

    PubMed

    Cardona, Isabel; Park, Ho Il; Lin, Lian-Shin

    2016-03-01

    An improved method was used to determine chemical oxygen demand (COD) as a measure of organic content in water samples with high chloride content. A contour plot of COD percent error in the Cl(-)-Cl(-):COD domain showed that COD errors increased with Cl(-):COD. Substantial errors (>10%) could occur in low Cl(-):COD regions (<300) for samples with both low (<10 g/L) and high chloride concentrations (>25 g/L). Applying the method to flowback water samples yielded COD concentrations ranging from 130 to 1060 mg/L, substantially lower than previously reported values for flowback water samples from the Marcellus Shale (228 to 21,900 mg/L). It is likely that overestimations of COD in the previous studies occurred as a result of chloride interference. Pretreatment with mercuric sulfate, use of a low-strength digestion solution, and use of the contour plot to correct COD measurements are feasible steps to significantly improve the accuracy of COD measurements.

  19. About improving efficiency of the P3 M algorithms when computing the inter-particle forces in beam dynamics

    NASA Astrophysics Data System (ADS)

    Kozynchenko, Alexander I.; Kozynchenko, Sergey A.

    2017-03-01

    In this paper, the problem of improving the efficiency of the particle-particle particle-mesh (P3M) algorithm in computing inter-particle electrostatic forces is considered. The particle-mesh (PM) part of the algorithm is modified in such a way that the space field equation is solved by direct summation of potentials over the ensemble of particles lying not too close to a reference particle. For this purpose, a specific matrix "pattern" is introduced to describe the spatial field distribution of a single point charge; the "pattern" contains pre-calculated potential values. This approach reduces the arithmetic performed at the innermost of the nested loops to an addition and an assignment, and therefore decreases the running time substantially. A simulation model developed in C++ substantiates this view, showing decent accuracy, acceptable in particle beam calculations, together with improved speed performance.
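
    A minimal sketch of the pattern idea, in Python rather than the authors' C++: the potential of a unit charge is precomputed once on an offset grid, and each particle's far-field contribution then costs a single shifted array addition. The grid size, cutoff radius, and zeroed self-term are illustrative assumptions.

      import numpy as np

      def make_pattern(radius):
          """Precomputed potential of a unit point charge on an offset
          grid (2D, Gaussian units), with the self-term excluded."""
          ax = np.arange(-radius, radius + 1)
          x, y = np.meshgrid(ax, ax, indexing="ij")
          r = np.hypot(x, y)
          r[radius, radius] = 1.0        # placeholder to avoid 1/0
          pat = 1.0 / r
          pat[radius, radius] = 0.0      # no self-potential
          return pat

      def grid_potential(cells, charges, n, radius=8):
          """Accumulate far-field potential by adding the shifted pattern
          for each particle; the inner loop is a single slice-add."""
          pat = make_pattern(radius)
          pot = np.zeros((n + 2 * radius, n + 2 * radius))
          for (i, j), q in zip(cells, charges):
              pot[i:i + 2 * radius + 1, j:j + 2 * radius + 1] += q * pat
          return pot[radius:radius + n, radius:radius + n]

      print(grid_potential([(10, 10), (20, 25)], [1.0, -1.0], n=32)[10, 12])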

  20. Quantitative cardiac SPECT reconstruction with reduced image degradation due to patient anatomy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tsui, B.M.W.; Zhao, X.D.; Gregoriou, G.K.

    1994-12-01

    Patient anatomy has complicated effects on cardiac SPECT images. The authors investigated reconstruction methods which substantially reduce these effects for improved image quality. A 3D mathematical cardiac-torso (MCAT) phantom, which models the anatomical structures of the thorax region, was used in the study. The phantom was modified to simulate variations in patient anatomy, including regions of natural thinning along the myocardium, body size, diaphragmatic shape, gender, and the size and shape of breasts for female patients. Distributions of attenuation coefficients and Tl-201 uptake in the different organs of a normal patient were also simulated. Emission projection data were generated from the phantoms, including the effects of attenuation and detector response. The authors observed attenuation-induced artifacts caused by patient anatomy in the conventional FBP reconstructed images. Accurate attenuation compensation using iterative reconstruction algorithms and attenuation maps substantially reduced the image artifacts and improved quantitative accuracy. They conclude that reconstruction methods which accurately compensate for non-uniform attenuation can substantially reduce image degradation caused by variations in patient anatomy in cardiac SPECT.

  1. Pairwise comparisons and visual perceptions of equal area polygons.

    PubMed

    Adamic, P; Babiy, V; Janicki, R; Kakiashvili, T; Koczkodaj, W W; Tadeusiewicz, R

    2009-02-01

    Studies related to visual perception have been plentiful in recent years. Participants rated the areas of five randomly generated shapes of equal area, using a reference unit area that was displayed together with the shapes. Respondents were 179 university students from Canada and Poland. The average error when respondents used the unit square was 25.75%. The error decreased substantially, to 5.51%, when the shapes were compared with one another in pairs. This gain of 20.24% in the two-dimensional experiment was substantially better than the 11.78% gain reported in previous one-dimensional experiments. This is the first statistically sound two-dimensional experiment demonstrating that pairwise comparisons improve accuracy.

  2. An improved model of the Earth's gravitational field: GEM-T1

    NASA Technical Reports Server (NTRS)

    Marsh, J. G.; Lerch, F. J.; Christodoulidis, D. C.; Putney, B. H.; Felsentreger, T. L.; Sanchez, B. V.; Smith, D. E.; Klosko, S. M.; Martin, T. V.; Pavlis, E. C.

    1987-01-01

    Goddard Earth Model T1 (GEM-T1), which was developed from an analysis of direct satellite tracking observations, is the first in a new series of such models. GEM-T1 is complete to degree and order 36. It was developed using consistent reference parameters and extensive earth and ocean tidal models. It was simultaneously solved for gravitational and tidal terms, earth orientation parameters, and the orbital parameters of 580 individual satellite arcs. The solution used only satellite tracking data acquired on 17 different satellites and is predominantly based upon the precise laser data taken by third generation systems. In all, 800,000 observations were used. A major improvement in field accuracy was obtained. For marine geodetic applications, long wavelength geoidal modeling is twice as good as in earlier satellite-only GEM models. Orbit determination accuracy has also been substantially advanced over a wide range of satellites that have been tested.

  3. Robust control of electrostatic torsional micromirrors using adaptive sliding-mode control

    NASA Astrophysics Data System (ADS)

    Sane, Harshad S.; Yazdi, Navid; Mastrangelo, Carlos H.

    2005-01-01

    This paper presents high-resolution control of torsional electrostatic micromirrors beyond their inherent pull-in instability using robust sliding-mode control (SMC). The objectives of this paper are twofold: first, to demonstrate the applicability of SMC to MEMS devices; second, to present a modified SMC algorithm that yields improved control accuracy. SMC enables compact realization of a robust controller tolerant of device characteristic variations and nonlinearities. Robustness of the control loop is demonstrated through extensive simulations and measurements on MEMS devices with a wide range of characteristics. Control of two-axis gimbaled micromirrors beyond their pull-in instability with overall 10-bit pointing accuracy is confirmed experimentally. In addition, this paper presents an analysis of the sources of error in discrete-time implementations of the control algorithm. To minimize these errors, we present an adaptive version of the SMC algorithm that yields substantial performance improvement without considerably increasing implementation complexity.
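
    A minimal sketch of a basic (non-adaptive) sliding-mode law with a boundary layer, applied to a toy unit-inertia axis; the gains, boundary-layer width, and plant are illustrative, not the paper's values.

      import numpy as np

      def smc_step(theta, dtheta, theta_ref, c=50.0, k=5.0, phi=0.02):
          """Basic sliding-mode control law u = -k * sat(s / phi).

          s combines the tracking error and its rate; the saturation
          (boundary layer of width phi) tames chattering.  The paper's
          adaptive version additionally tunes the gain online.
          """
          e = theta - theta_ref
          s = c * e + dtheta                  # sliding surface
          sat = np.clip(s / phi, -1.0, 1.0)   # smoothed sign(s)
          return -k * sat

      # Drive a toy unit-inertia mirror axis toward 1 mrad.
      theta, dtheta, dt = 0.0, 0.0, 1e-4
      for _ in range(20000):
          u = smc_step(theta, dtheta, theta_ref=1e-3)
          dtheta += u * dt                    # unit inertia: accel = u
          theta += dtheta * dt
      print(theta)  # settles near 1e-3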

  4. Recent advances in electronic structure theory and their influence on the accuracy of ab initio potential energy surfaces

    NASA Technical Reports Server (NTRS)

    Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Taylor, Peter R.

    1989-01-01

    Recent advances in electronic structure theory and the availability of high speed vector processors have substantially increased the accuracy of ab initio potential energy surfaces. The recently developed atomic natural orbital approach for basis set contraction has reduced both the basis set incompleteness and superposition errors in molecular calculations. Furthermore, full CI calculations can often be used to calibrate a CASSCF/MRCI approach that quantitatively accounts for the valence correlation energy. These computational advances also provide a vehicle for systematically improving the calculations and for estimating the residual error in the calculations. Calculations on selected diatomic and triatomic systems will be used to illustrate the accuracy that currently can be achieved for molecular systems. In particular, the F + H2 yields HF + H potential energy hypersurface is used to illustrate the impact of these computational advances on the calculation of potential energy surfaces.

  5. Recent advances in electronic structure theory and their influence on the accuracy of ab initio potential energy surfaces

    NASA Technical Reports Server (NTRS)

    Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Taylor, Peter R.

    1988-01-01

    Recent advances in electronic structure theory and the availability of high speed vector processors have substantially increased the accuracy of ab initio potential energy surfaces. The recently developed atomic natural orbital approach for basis set contraction has reduced both the basis set incompleteness and superposition errors in molecular calculations. Furthermore, full CI calculations can often be used to calibrate a CASSCF/MRCI approach that quantitatively accounts for the valence correlation energy. These computational advances also provide a vehicle for systematically improving the calculations and for estimating the residual error in the calculations. Calculations on selected diatomic and triatomic systems will be used to illustrate the accuracy that currently can be achieved for molecular systems. In particular, the F+H2 yields HF+H potential energy hypersurface is used to illustrate the impact of these computational advances on the calculation of potential energy surfaces.

  6. New insights from cluster analysis methods for RNA secondary structure prediction

    PubMed Central

    Rogers, Emily; Heitsch, Christine

    2016-01-01

    A widening gap exists between the best practices for RNA secondary structure prediction developed by computational researchers and the methods used in practice by experimentalists. Minimum free energy (MFE) predictions, although broadly used, are outperformed by methods which sample from the Boltzmann distribution and data mine the results. In particular, moving beyond the single structure prediction paradigm yields substantial gains in accuracy. Furthermore, the largest improvements in accuracy and precision come from viewing secondary structures not at the base pair level but at lower granularity/higher abstraction. This suggests that random errors affecting precision and systematic ones affecting accuracy are both reduced by this “fuzzier” view of secondary structures. Thus experimentalists who are willing to adopt a more rigorous, multilayered approach to secondary structure prediction by iterating through these levels of granularity will be much better able to capture fundamental aspects of RNA base pairing. PMID:26971529

  7. Limitations and potentials of current motif discovery algorithms

    PubMed Central

    Hu, Jianjun; Li, Bin; Kihara, Daisuke

    2005-01-01

    Computational methods for de novo identification of gene regulation elements, such as transcription factor binding sites, have proved to be useful for deciphering genetic regulatory networks. However, despite the availability of a large number of algorithms, their strengths and weaknesses are not sufficiently understood. Here, we designed a comprehensive set of performance measures and benchmarked five modern sequence-based motif discovery algorithms using large datasets generated from Escherichia coli RegulonDB. Factors that affect the prediction accuracy, scalability and reliability are characterized. It is revealed that the nucleotide and the binding site level accuracy are very low, while the motif level accuracy is relatively high, which indicates that the algorithms can usually capture at least one correct motif in an input sequence. To exploit diverse predictions from multiple runs of one or more algorithms, a consensus ensemble algorithm has been developed, which achieved 6–45% improvement over the base algorithms by increasing both the sensitivity and specificity. Our study illustrates limitations and potentials of existing sequence-based motif discovery algorithms. Taking advantage of the revealed potentials, several promising directions for further improvements are discussed. Since the sequence-based algorithms are the baseline of most of the modern motif discovery algorithms, this paper suggests substantial improvements would be possible for them. PMID:16284194

  8. Improved spatial regression analysis of diffusion tensor imaging for lesion detection during longitudinal progression of multiple sclerosis in individual subjects

    NASA Astrophysics Data System (ADS)

    Liu, Bilan; Qiu, Xing; Zhu, Tong; Tian, Wei; Hu, Rui; Ekholm, Sven; Schifitto, Giovanni; Zhong, Jianhui

    2016-03-01

    Subject-specific longitudinal DTI studies are vital for the investigation of pathological changes of lesions and disease evolution. Spatial Regression Analysis of Diffusion tensor imaging (SPREAD) is a non-parametric, permutation-based statistical framework that combines spatial regression and resampling techniques to achieve effective detection of localized longitudinal diffusion changes within the whole brain at the individual level without a priori hypotheses. However, boundary blurring and dislocation limit its sensitivity, especially towards detecting lesions of irregular shapes. In the present study, we propose an improved SPREAD method (dubbed iSPREAD) by incorporating a three-dimensional (3D) nonlinear anisotropic diffusion filtering method, which provides edge-preserving image smoothing through a nonlinear scale-space approach. The statistical inference based on iSPREAD was evaluated and compared with the original SPREAD method using both simulated and in vivo human brain data. Results demonstrated that the sensitivity and accuracy of the SPREAD method is improved substantially by adopting nonlinear anisotropic filtering. iSPREAD identifies subject-specific longitudinal changes in the brain with improved sensitivity, accuracy, and enhanced statistical power, especially when the spatial correlation is heterogeneous among neighboring image pixels in DTI.
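
    The 3D nonlinear anisotropic diffusion filter is in the Perona-Malik family; a minimal 2D sketch with an illustrative conduction function and parameters (not the exact filter used in iSPREAD):

      import numpy as np

      def anisotropic_diffusion(img, n_iter=20, kappa=0.1, lam=0.2):
          """Perona-Malik edge-preserving smoothing (2D sketch; iSPREAD
          applies the 3D analogue to DTI-derived maps)."""
          u = img.astype(float).copy()
          g = lambda d: np.exp(-(d / kappa) ** 2)  # conduction function
          for _ in range(n_iter):
              # Finite differences toward the four neighbours.
              dn = np.roll(u, 1, 0) - u
              ds = np.roll(u, -1, 0) - u
              de = np.roll(u, 1, 1) - u
              dw = np.roll(u, -1, 1) - u
              # Conduction falls off across strong edges, preserving them.
              u += lam * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
          return u

      noisy = np.zeros((64, 64)); noisy[:, 32:] = 1.0
      noisy += np.random.normal(0, 0.1, noisy.shape)
      smoothed = anisotropic_diffusion(noisy)  # edge at column 32 survives
      print(smoothed.std())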

  9. Estimated breeding values for canine hip dysplasia radiographic traits in a cohort of Australian German Shepherd dogs.

    PubMed

    Wilson, Bethany J; Nicholas, Frank W; James, John W; Wade, Claire M; Thomson, Peter C

    2013-01-01

    Canine hip dysplasia (CHD) is a serious and common musculoskeletal disease of pedigree dogs and therefore represents both an important welfare concern and an imperative breeding priority. The typical heritability estimates for radiographic CHD traits suggest that the accuracy of breeding dog selection could be substantially improved by the use of estimated breeding values (EBVs) in place of selection based on phenotypes of individuals. The British Veterinary Association/Kennel Club scoring method is a complex measure composed of nine bilateral ordinal traits, intended to evaluate both early and late dysplastic changes. However, the ordinal nature of the traits may represent a technical challenge for calculation of EBVs using linear methods. The purpose of the current study was to calculate EBVs of British Veterinary Association/Kennel Club traits in the Australian population of German Shepherd Dogs, using linear (both as individual traits and a summed phenotype), binary and ordinal methods to determine the optimal method for EBV calculation. Ordinal EBVs correlated well with linear EBVs (r = 0.90-0.99) and somewhat well with EBVs for the sum of the individual traits (r = 0.58-0.92). Correlation of ordinal and binary EBVs varied widely (r = 0.24-0.99) depending on the trait and cut-point considered. The ordinal EBVs have increased accuracy (0.48-0.69) of selection compared with accuracies from individual phenotype-based selection (0.40-0.52). Despite the high correlations between linear and ordinal EBVs, the underlying relationship between EBVs calculated by the two methods was not always linear, leading us to suggest that ordinal models should be used wherever possible. As the population of German Shepherd Dogs which was studied was purportedly under selection for the traits studied, we examined the EBVs for evidence of a genetic trend in these traits and found substantial genetic improvement over time. This study suggests the use of ordinal EBVs could increase the rate of genetic improvement in this population.

  10. The RAPIDD ebola forecasting challenge: Synthesis and lessons learnt.

    PubMed

    Viboud, Cécile; Sun, Kaiyuan; Gaffey, Robert; Ajelli, Marco; Fumanelli, Laura; Merler, Stefano; Zhang, Qian; Chowell, Gerardo; Simonsen, Lone; Vespignani, Alessandro

    2018-03-01

    Infectious disease forecasting is gaining traction in the public health community; however, limited systematic comparisons of model performance exist. Here we present the results of a synthetic forecasting challenge inspired by the West African Ebola crisis in 2014-2015 and involving 16 international academic teams and US government agencies, and compare the predictive performance of 8 independent modeling approaches. Challenge participants were invited to predict 140 epidemiological targets across 5 different time points of 4 synthetic Ebola outbreaks, each involving different levels of interventions and "fog of war" in outbreak data made available for predictions. Prediction targets included 1-4 week-ahead case incidences, outbreak size, peak timing, and several natural history parameters. With respect to weekly case incidence targets, ensemble predictions based on a Bayesian average of the 8 participating models outperformed any individual model and did substantially better than a null auto-regressive model. There was no relationship between model complexity and prediction accuracy; however, the top performing models for short-term weekly incidence were reactive models with few parameters, fitted to a short and recent part of the outbreak. Individual model outputs and ensemble predictions improved with data accuracy and availability; by the second time point, just before the peak of the epidemic, estimates of final size were within 20% of the target. The 4th challenge scenario - mirroring an uncontrolled Ebola outbreak with substantial data reporting noise - was poorly predicted by all modeling teams. Overall, this synthetic forecasting challenge provided a deep understanding of model performance under controlled data and epidemiological conditions. We recommend such "peace time" forecasting challenges as key elements to improve coordination and inspire collaboration between modeling groups ahead of the next pandemic threat, and to assess model forecasting accuracy for a variety of known and hypothetical pathogens. Published by Elsevier B.V.
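
    A simple skill-weighted combination conveys the flavor of ensemble averaging, although the challenge used a Bayesian average of the eight models; the weighting rule and error metric below are illustrative assumptions.

      import numpy as np

      def skill_weighted_ensemble(forecasts, past_errors):
          """Combine model forecasts with weights inversely proportional
          to each model's recent squared error, a simple stand-in for
          Bayesian model averaging.

          forecasts   : (n_models,) current predictions of one target
          past_errors : (n_models,) recent mean squared errors
          """
          w = 1.0 / (np.asarray(past_errors) + 1e-12)
          w /= w.sum()
          return float(np.dot(w, forecasts))

      # Three models predict next week's case count; model 0 has been best.
      print(skill_weighted_ensemble([120, 150, 90], [4.0, 25.0, 16.0]))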

  11. Robust Hand Motion Tracking through Data Fusion of 5DT Data Glove and Nimble VR Kinect Camera Measurements

    PubMed Central

    Arkenbout, Ewout A.; de Winter, Joost C. F.; Breedveld, Paul

    2015-01-01

    Vision based interfaces for human computer interaction have gained increasing attention over the past decade. This study presents a data fusion approach of the Nimble VR vision based system, using the Kinect camera, with the contact based 5DT Data Glove. Data fusion was achieved through a Kalman filter. The Nimble VR and filter output were compared using measurements performed on (1) a wooden hand model placed in various static postures and orientations; and (2) three differently sized human hands during active finger flexions. Precision and accuracy of joint angle estimates as a function of hand posture and orientation were determined. Moreover, in light of possible self-occlusions of the fingers in the Kinect camera images, data completeness was assessed. Results showed that the integration of the Data Glove through the Kalman filter provided for the proximal interphalangeal (PIP) joints of the fingers a substantial improvement of 79% in precision, from 2.2 deg to 0.9 deg. Moreover, a moderate improvement of 31% in accuracy (being the mean angular deviation from the true joint angle) was established, from 24 deg to 17 deg. The metacarpophalangeal (MCP) joint was relatively unaffected by the Kalman filter. Moreover, the Data Glove increased data completeness, thus providing a substantial advantage over the sole use of the Nimble VR system. PMID:26694395

  12. Robust Hand Motion Tracking through Data Fusion of 5DT Data Glove and Nimble VR Kinect Camera Measurements.

    PubMed

    Arkenbout, Ewout A; de Winter, Joost C F; Breedveld, Paul

    2015-12-15

    Vision based interfaces for human computer interaction have gained increasing attention over the past decade. This study presents a data fusion approach of the Nimble VR vision based system, using the Kinect camera, with the contact based 5DT Data Glove. Data fusion was achieved through a Kalman filter. The Nimble VR and filter output were compared using measurements performed on (1) a wooden hand model placed in various static postures and orientations; and (2) three differently sized human hands during active finger flexions. Precision and accuracy of joint angle estimates as a function of hand posture and orientation were determined. Moreover, in light of possible self-occlusions of the fingers in the Kinect camera images, data completeness was assessed. Results showed that the integration of the Data Glove through the Kalman filter provided for the proximal interphalangeal (PIP) joints of the fingers a substantial improvement of 79% in precision, from 2.2 deg to 0.9 deg. Moreover, a moderate improvement of 31% in accuracy (being the mean angular deviation from the true joint angle) was established, from 24 deg to 17 deg. The metacarpophalangeal (MCP) joint was relatively unaffected by the Kalman filter. Moreover, the Data Glove increased data completeness, thus providing a substantial advantage over the sole use of the Nimble VR system.
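
    A scalar Kalman filter fusing two noisy joint-angle streams captures the essence of the approach (the actual filter operates on the full hand state). The camera noise below borrows the 2.2 deg precision figure quoted above; the glove noise, process noise, and NaN-based occlusion handling are our assumptions.

      import numpy as np

      def fuse(angles_cam, angles_glove, r_cam=2.2**2, r_glove=1.0**2, q=0.5):
          """Scalar Kalman filter fusing camera and glove joint angles.

          r_cam, r_glove : measurement noise variances (deg^2); camera
          samples may be NaN where fingers are self-occluded.
          """
          x, p = angles_glove[0], 10.0
          out = []
          for zc, zg in zip(angles_cam, angles_glove):
              p += q                                # random-walk prediction
              for z, r in ((zc, r_cam), (zg, r_glove)):
                  if np.isnan(z):
                      continue                      # skip occluded samples
                  k = p / (p + r)                   # Kalman gain
                  x += k * (z - x)
                  p *= 1 - k
              out.append(x)
          return np.array(out)

      truth = np.linspace(0, 60, 50)                # slow finger flexion
      cam = truth + np.random.normal(0, 2.2, 50)
      cam[20:25] = np.nan                           # brief self-occlusion
      glove = truth + np.random.normal(0, 1.0, 50)
      print(np.abs(fuse(cam, glove) - truth).mean())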

  13. GWM-2005 - A Groundwater-Management Process for MODFLOW-2005 with Local Grid Refinement (LGR) Capability

    USGS Publications Warehouse

    Ahlfeld, David P.; Baker, Kristine M.; Barlow, Paul M.

    2009-01-01

    This report describes the Groundwater-Management (GWM) Process for MODFLOW-2005, the 2005 version of the U.S. Geological Survey modular three-dimensional groundwater model. GWM can solve a broad range of groundwater-management problems by combined use of simulation- and optimization-modeling techniques. These problems include limiting groundwater-level declines or streamflow depletions, managing groundwater withdrawals, and conjunctively using groundwater and surface-water resources. GWM was initially released for the 2000 version of MODFLOW. Several modifications and enhancements have been made to GWM since its initial release to increase the scope of the program's capabilities and to improve its operation and reporting of results. The new code, which is called GWM-2005, also was designed to support the local grid refinement capability of MODFLOW-2005. Local grid refinement allows for the simulation of one or more higher resolution local grids (referred to as child models) within a coarser grid parent model. Local grid refinement is often needed to improve simulation accuracy in regions where hydraulic gradients change substantially over short distances or in areas requiring detailed representation of aquifer heterogeneity. GWM-2005 can be used to formulate and solve groundwater-management problems that include components in both parent and child models. Although local grid refinement increases simulation accuracy, it can also substantially increase simulation run times.

  14. High performance liquid chromatographic hydrocarbon group-type analyses of mid-distillates employing fuel-derived fractions as standards

    NASA Technical Reports Server (NTRS)

    Seng, G. T.; Otterson, D. A.

    1983-01-01

    Two high performance liquid chromatographic (HPLC) methods have been developed for the determination of saturates, olefins and aromatics in petroleum and shale derived mid-distillate fuels. In one method the fuel to be analyzed is reacted with sulfuric acid, to remove a substantial portion of the aromatics, which provides a reacted fuel fraction for use in group type quantitation. The second involves the removal of a substantial portion of the saturates fraction from the HPLC system to permit the determination of olefin concentrations as low as 0.3 volume percent, and to improve the accuracy and precision of olefins determinations. Each method was evaluated using model compound mixtures and real fuel samples.

  15. Computational procedures for evaluating the sensitivity derivatives of vibration frequencies and Eigenmodes of framed structures

    NASA Technical Reports Server (NTRS)

    Fetterman, Timothy L.; Noor, Ahmed K.

    1987-01-01

    Computational procedures are presented for evaluating the sensitivity derivatives of the vibration frequencies and eigenmodes of framed structures. Both a displacement and a mixed formulation are used. The two key elements of the computational procedure are: (a) Use of dynamic reduction techniques to substantially reduce the number of degrees of freedom; and (b) Application of iterative techniques to improve the accuracy of the derivatives of the eigenmodes. The two reduction techniques considered are the static condensation and a generalized dynamic reduction technique. Error norms are introduced to assess the accuracy of the eigenvalue and eigenvector derivatives obtained by the reduction techniques. The effectiveness of the methods presented is demonstrated by three numerical examples.
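
    For a distinct eigenvalue, the frequency sensitivity that such procedures evaluate has a standard first-order form. In LaTeX notation, with K the stiffness matrix, M the mass matrix, lambda = omega^2 an eigenvalue, phi its mass-normalized eigenmode, and p a design parameter (the textbook result, not necessarily the report's exact formulation):

      % First-order sensitivity of a distinct eigenvalue of
      % K \phi = \lambda M \phi, with \phi^T M \phi = 1:
      \frac{\partial \lambda}{\partial p}
        = \phi^{T} \left( \frac{\partial K}{\partial p}
            - \lambda \frac{\partial M}{\partial p} \right) \phi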

  16. Soil moisture assimilation using a modified ensemble transform Kalman filter with water balance constraint

    NASA Astrophysics Data System (ADS)

    Wu, Guocan; Zheng, Xiaogu; Dan, Bo

    2016-04-01

    Shallow soil moisture observations are assimilated into the Common Land Model (CoLM) to estimate the soil moisture in different layers. The forecast error is inflated to improve the accuracy of the analysis state, and a water balance constraint is adopted to reduce the water budget residual in the assimilation procedure. The experimental results illustrate that adaptive forecast error inflation can reduce the analysis error, while the proper inflation layer can be selected based on the -2 log-likelihood function of the innovation statistic. The water balance constraint substantially reduces the water budget residual at a small cost in assimilation accuracy. The assimilation scheme can potentially be applied to assimilate remote sensing data.

  17. Probabilistic models of eukaryotic evolution: time for integration

    PubMed Central

    Lartillot, Nicolas

    2015-01-01

    In spite of substantial work and recent progress, a global and fully resolved picture of the macroevolutionary history of eukaryotes is still under construction. This concerns not only the phylogenetic relations among major groups, but also the general characteristics of the underlying macroevolutionary processes, including the patterns of gene family evolution associated with endosymbioses, as well as their impact on the sequence evolutionary process. All these questions raise formidable methodological challenges, calling for a more powerful statistical paradigm. In this direction, model-based probabilistic approaches have played an increasingly important role. In particular, improved models of sequence evolution accounting for heterogeneities across sites and across lineages have led to significant, although insufficient, improvement in phylogenetic accuracy. More recently, one main trend has been to move away from simple parametric models and stepwise approaches, towards integrative models explicitly considering the intricate interplay between multiple levels of macroevolutionary processes. Such integrative models are in their infancy, and their application to the phylogeny of eukaryotes still requires substantial improvement of the underlying models, as well as additional computational developments. PMID:26323768

  18. Adaptive estimation of state of charge and capacity with online identified battery model for vanadium redox flow battery

    NASA Astrophysics Data System (ADS)

    Wei, Zhongbao; Tseng, King Jet; Wai, Nyunt; Lim, Tuti Mariana; Skyllas-Kazacos, Maria

    2016-11-01

    Reliable state estimation depends largely on an accurate battery model. However, the parameters of a battery model vary over time with operating conditions and battery aging. Existing co-estimation methods address this model uncertainty by integrating online model identification with state estimation and have shown improved accuracy. However, cross interference may arise from the integrated framework and compromise numerical stability and accuracy. This paper therefore proposes decoupling model identification from state estimation to eliminate the possibility of cross interference. The model parameters are adapted online with the recursive least squares (RLS) method, based on which a novel joint estimator based on the extended Kalman filter (EKF) is formulated to estimate the state of charge (SOC) and capacity concurrently. The proposed joint estimator effectively compresses the filter order, which leads to substantial improvement in computational efficiency and numerical stability. A lab-scale experiment on a vanadium redox flow battery shows that the proposed method is highly accurate, with good robustness to varying operating conditions and battery aging. The proposed method is further compared with some existing methods and shown to be superior in terms of accuracy, convergence speed, and computational cost.
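
    The decoupled identification stage can be sketched as a standard RLS recursion with a forgetting factor; the toy first-order model and noise levels below are illustrative, not the paper's battery model.

      import numpy as np

      def rls_update(theta, P, phi, y, lam=0.99):
          """One recursive-least-squares step.

          theta : current parameter estimate
          P     : parameter covariance
          phi   : regressor vector (e.g. past currents and voltages)
          y     : measured terminal voltage
          lam   : forgetting factor < 1 to track aging-induced drift
          """
          k = P @ phi / (lam + phi @ P @ phi)       # gain
          theta = theta + k * (y - phi @ theta)     # innovation correction
          P = (P - np.outer(k, phi @ P)) / lam
          return theta, P

      # Identify a toy linear model y = R0*i + b from noisy data.
      theta, P = np.zeros(2), np.eye(2) * 100.0
      for _ in range(200):
          i = np.random.uniform(-1, 1)
          phi = np.array([i, 1.0])
          y = 0.05 * i + 1.3 + np.random.normal(0, 0.001)
          theta, P = rls_update(theta, P, phi, y)
      print(theta)  # approximately [0.05, 1.3]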

  19. Using precise word timing information improves decoding accuracy in a multiband-accelerated multimodal reading experiment.

    PubMed

    Vu, An T; Phillips, Jeffrey S; Kay, Kendrick; Phillips, Matthew E; Johnson, Matthew R; Shinkareva, Svetlana V; Tubridy, Shannon; Millin, Rachel; Grossman, Murray; Gureckis, Todd; Bhattacharyya, Rajan; Yacoub, Essa

    2016-01-01

    The blood-oxygen-level-dependent (BOLD) signal measured in functional magnetic resonance imaging (fMRI) experiments is generally regarded as sluggish and poorly suited for probing neural function at the rapid timescales involved in sentence comprehension. However, recent studies have shown the value of acquiring data with very short repetition times (TRs), not merely in terms of improvements in contrast to noise ratio (CNR) through averaging, but also in terms of additional fine-grained temporal information. Using multiband-accelerated fMRI, we achieved whole-brain scans at 3-mm resolution with a TR of just 500 ms at both 3T and 7T field strengths. By taking advantage of word timing information, we found that word decoding accuracy across two separate sets of scan sessions improved significantly, with better overall performance at 7T than at 3T. The effect of TR was also investigated; we found that substantial word timing information can be extracted using fast TRs, with diminishing benefits beyond TRs of 1000 ms.

  20. Analysis of spatial distribution of land cover maps accuracy

    NASA Astrophysics Data System (ADS)

    Khatami, R.; Mountrakis, G.; Stehman, S. V.

    2017-12-01

    Land cover maps have become one of the most important products of remote sensing science. However, classification errors will exist in any classified map and affect the reliability of subsequent map usage. Moreover, classification accuracy often varies over different regions of a classified map. These variations of accuracy will affect the reliability of subsequent analyses of different regions based on the classified maps. The traditional approach of map accuracy assessment based on an error matrix does not capture the spatial variation in classification accuracy. Here, per-pixel accuracy prediction methods are proposed based on interpolating accuracy values from a test sample to produce wall-to-wall accuracy maps. Different accuracy prediction methods were developed based on four factors: predictive domain (spatial versus spectral), interpolation function (constant, linear, Gaussian, and logistic), incorporation of class information (interpolating each class separately versus grouping them together), and sample size. This research is the first to incorporate the spectral domain as an explanatory feature space for interpolating classification accuracy. Performance of the prediction methods was evaluated using 26 test blocks, with 10 km × 10 km dimensions, dispersed throughout the United States. The performance of the predictions was evaluated using the area under the curve (AUC) of the receiver operating characteristic. Relative to existing accuracy prediction methods, our proposed methods resulted in improvements of AUC of 0.15 or greater. Evaluation of the four factors comprising the accuracy prediction methods demonstrated that: i) interpolations should be done separately for each class instead of grouping all classes together; ii) if an all-classes approach is used, the spectral domain will result in substantially greater AUC than the spatial domain; iii) for the smaller sample size and per-class predictions, the spectral and spatial domains yielded similar AUC; iv) for the larger sample size (i.e., very dense spatial sample) and per-class predictions, the spatial domain yielded larger AUC; v) increasing the sample size improved accuracy predictions, with a greater benefit accruing to the spatial domain; and vi) the function used for interpolation had the smallest effect on AUC.
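
    A minimal sketch of one such predictor, Gaussian-kernel interpolation of per-pixel accuracy in the spatial domain (all-classes variant); the bandwidth and sample below are illustrative assumptions.

      import numpy as np

      def predict_accuracy(sample_xy, sample_correct, grid_xy, h=5000.0):
          """Gaussian-kernel interpolation of classification accuracy
          from a test sample to arbitrary map locations."""
          d2 = ((grid_xy[:, None, :] - sample_xy[None, :, :]) ** 2).sum(-1)
          w = np.exp(-d2 / (2 * h ** 2))
          return (w * sample_correct).sum(1) / w.sum(1)

      # Toy sample: coordinates in metres, 1.0 = correctly classified.
      xy = np.random.uniform(0, 10000, (200, 2))
      correct = (np.random.rand(200) < 0.8).astype(float)
      grid = np.random.uniform(0, 10000, (5, 2))
      print(predict_accuracy(xy, correct, grid))  # values in [0, 1]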

  1. BBMerge – Accurate paired shotgun read merging via overlap

    DOE PAGES

    Bushnell, Brian; Rood, Jonathan; Singer, Esther

    2017-10-26

    Merging paired-end shotgun reads generated on high-throughput sequencing platforms can substantially improve various subsequent bioinformatics processes, including genome assembly, binning, mapping, annotation, and clustering for taxonomic analysis. With the inexorable growth of sequence data volume and CPU core counts, the speed and scalability of read-processing tools becomes ever-more important. The accuracy of shotgun read merging is crucial as well, as errors introduced by incorrect merging percolate through to reduce the quality of downstream analysis. Thus, we designed a new tool to maximize accuracy and minimize processing time, allowing the use of read merging on larger datasets, and in analyses highly sensitive to errors. We present BBMerge, a new merging tool for paired-end shotgun sequence data. We benchmark BBMerge by comparison with eight other widely used merging tools, assessing speed, accuracy and scalability. Evaluations of both synthetic and real-world datasets demonstrate that BBMerge produces merged shotgun reads with greater accuracy and at higher speed than any existing merging tool examined. BBMerge also provides the ability to merge non-overlapping shotgun read pairs by using k-mer frequency information to assemble the unsequenced gap between reads, achieving a significantly higher merge rate while maintaining or increasing accuracy.

  2. BBMerge – Accurate paired shotgun read merging via overlap

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bushnell, Brian; Rood, Jonathan; Singer, Esther

    Merging paired-end shotgun reads generated on high-throughput sequencing platforms can substantially improve various subsequent bioinformatics processes, including genome assembly, binning, mapping, annotation, and clustering for taxonomic analysis. With the inexorable growth of sequence data volume and CPU core counts, the speed and scalability of read-processing tools becomes ever-more important. The accuracy of shotgun read merging is crucial as well, as errors introduced by incorrect merging percolate through to reduce the quality of downstream analysis. Thus, we designed a new tool to maximize accuracy and minimize processing time, allowing the use of read merging on larger datasets, and in analyses highly sensitive to errors. We present BBMerge, a new merging tool for paired-end shotgun sequence data. We benchmark BBMerge by comparison with eight other widely used merging tools, assessing speed, accuracy and scalability. Evaluations of both synthetic and real-world datasets demonstrate that BBMerge produces merged shotgun reads with greater accuracy and at higher speed than any existing merging tool examined. BBMerge also provides the ability to merge non-overlapping shotgun read pairs by using k-mer frequency information to assemble the unsequenced gap between reads, achieving a significantly higher merge rate while maintaining or increasing accuracy.
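
    The essence of overlap-based merging can be sketched in a few lines: scan for the longest acceptable suffix/prefix overlap between a read and its reverse-complemented mate. This is a minimal sketch, not BBMerge's actual algorithm, which also weighs base qualities and k-mer evidence.

      def merge_pair(r1, r2_rc, min_overlap=12, max_mismatch=1):
          """Merge a read with its reverse-complemented mate by scanning
          for the longest overlap within the mismatch budget."""
          for ov in range(min(len(r1), len(r2_rc)), min_overlap - 1, -1):
              mism = sum(a != b for a, b in zip(r1[-ov:], r2_rc[:ov]))
              if mism <= max_mismatch:
                  return r1 + r2_rc[ov:]   # longest acceptable overlap
          return None                      # no confident merge

      r1 = "ACGTACGTAGGCCTTAGCTTA"
      r2_rc = "GGCCTTAGCTTAAGGCAT"         # mate, already reverse-complemented
      print(merge_pair(r1, r2_rc))         # ACGTACGTAGGCCTTAGCTTAAGGCAT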

  3. Leverage effect, economic policy uncertainty and realized volatility with regime switching

    NASA Astrophysics Data System (ADS)

    Duan, Yinying; Chen, Wang; Zeng, Qing; Liu, Zhicao

    2018-03-01

    In this study, we first investigate the impacts of leverage effect and economic policy uncertainty (EPU) on future volatility in the framework of regime switching. Out-of-sample results show that the HAR-RV including the leverage effect and economic policy uncertainty with regimes can achieve higher forecast accuracy than RV-type and GARCH-class models. Our robustness results further imply that these factors in the framework of regime switching can substantially improve the HAR-RV's forecast performance.
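
    As a concrete (non-regime-switching) illustration of an HAR-RV regression augmented with leverage and EPU terms, the sketch below fits the model by OLS on synthetic series; all series, lags, and coefficients are made up.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    T = 600
    rv = np.abs(rng.normal(1.0, 0.2, T))  # daily realized volatility
    ret = rng.normal(0, 1, T)             # daily returns, for the leverage term
    epu = np.abs(rng.normal(100, 20, T))  # economic policy uncertainty index

    def lagged_mean(x, k):                # k-day rolling mean of past values
        return np.array([x[max(0, t - k):t].mean() if t else x[0]
                         for t in range(len(x))])

    X = np.column_stack([
        np.ones(T), np.roll(rv, 1),       # daily lag
        lagged_mean(rv, 5),               # weekly component
        lagged_mean(rv, 22),              # monthly component
        np.minimum(np.roll(ret, 1), 0),   # leverage: negative returns only
        np.roll(epu, 1),
    ])[22:]                               # drop warm-up rows
    beta, *_ = np.linalg.lstsq(X, rv[22:], rcond=None)
    print("HAR-RV-L-EPU coefficients:", beta.round(3))
    ```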

  4. Risk Mitigation Testing with the BepiColombo MPO SADA

    NASA Astrophysics Data System (ADS)

    Zemann, J.; Heinrich, B.; Skulicz, A.; Madsen, M.; Weisenstein, W.; Modugno, F.; Althaus, F.; Panhofer, T.; Osterseher, G.

    2013-09-01

    A Solar Array (SA) Drive Assembly (SADA) for the BepiColombo mission is being developed and qualified at RUAG Space Zürich (RSSZ). The system consists of the Solar Array Drive Mechanism (SADM) and the Solar Array Drive Electronics (SADE), the latter subcontracted to RUAG Space Austria (RSA). This paper deals with the risk mitigation activities and the lessons learnt from this development. In particular, the following topics, substantiated by breadboard (BB) test results, are addressed in detail: the slipring BB test (verification of lifetime and electrical performance of carbon brush technology); the potentiometer BB tests (focus on lifetime verification of more than 650,000 revolutions and on the accuracy requirement); the SADM EM BB test (subcomponent characterization of the front bearing and gearbox; complete test campaign equivalent to the QM test); the combined EM SADM/SADE test (verification of combined performance in terms of accuracy and torque margin, and micro-vibration testing of the SADA system); and the SADE BB test (parameter optimization; test campaign equivalent to the QM test). The main improvements identified during BB testing and already implemented in the SADM EM/QM and SADE EQM are: an improved preload device for the gearbox; an improved motor ball-bearing assembly; position sensor improvements; a calibration process for the potentiometer; SADE motor controller optimization to achieve the required running smoothness; and an overall improvement of the test equipment.

  5. Counteracting structural errors in ensemble forecast of influenza outbreaks.

    PubMed

    Pei, Sen; Shaman, Jeffrey

    2017-10-13

    For influenza forecasts generated using dynamical models, forecast inaccuracy is partly attributable to the nonlinear growth of error. As a consequence, quantification of the nonlinear error structure in current forecast models is needed so that this growth can be corrected and forecast skill improved. Here, we inspect the error growth of a compartmental influenza model and find that a robust error structure arises naturally from the nonlinear model dynamics. By counteracting these structural errors, diagnosed using error breeding, we develop a new forecast approach that combines dynamical error correction and statistical filtering techniques. In retrospective forecasts of historical influenza outbreaks for 95 US cities from 2003 to 2014, overall forecast accuracy for outbreak peak timing, peak intensity and attack rate is substantially improved for predicted lead times up to 10 weeks. This error growth correction method can be generalized to improve the forecast accuracy of other infectious disease dynamical models. Inaccuracy of influenza forecasts based on dynamical models is partly due to nonlinear error growth. Here the authors address the error structure of a compartmental influenza model, and develop a new improved forecast approach combining dynamical error correction and statistical filtering techniques.
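
    Error breeding, mentioned above, repeatedly integrates a perturbed and an unperturbed model copy and rescales their difference, so the perturbation aligns with the fastest-growing error direction. Below is a toy sketch on a two-compartment SIR-like model; the parameters, step count, and rescaling norm are assumptions, not the authors' settings.

    ```python
    import numpy as np

    def sir_step(x, beta=0.6, gamma=0.25, dt=1.0):
        s, i = x                          # susceptible and infected fractions
        return np.array([s - dt * beta * s * i,
                         i + dt * (beta * s * i - gamma * i)])

    x = np.array([0.99, 0.01])            # control (unperturbed) trajectory
    pert = np.array([1e-4, 1e-4])         # initial seed perturbation
    size = np.linalg.norm(pert)
    for _ in range(30):                   # breed: integrate both, rescale difference
        xp = sir_step(x + pert)
        x = sir_step(x)
        pert = xp - x
        pert *= size / np.linalg.norm(pert)
    print("bred vector (dominant error direction):",
          pert / np.linalg.norm(pert))
    ```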

  6. PASTA: Ultra-Large Multiple Sequence Alignment for Nucleotide and Amino-Acid Sequences.

    PubMed

    Mirarab, Siavash; Nguyen, Nam; Guo, Sheng; Wang, Li-San; Kim, Junhyong; Warnow, Tandy

    2015-05-01

    We introduce PASTA, a new multiple sequence alignment algorithm. PASTA uses a new technique to produce an alignment given a guide tree that enables it to be both highly scalable and very accurate. We present a study on biological and simulated data with up to 200,000 sequences, showing that PASTA produces highly accurate alignments, improving on the accuracy and scalability of the leading alignment methods (including SATé). We also show that trees estimated on PASTA alignments are highly accurate—slightly better than SATé trees, but with substantial improvements relative to other methods. Finally, PASTA is faster than SATé, highly parallelizable, and requires relatively little memory.

  7. Importance of Personalized Health-Care Models: A Case Study in Activity Recognition.

    PubMed

    Zdravevski, Eftim; Lameski, Petre; Trajkovik, Vladimir; Pombo, Nuno; Garcia, Nuno

    2018-01-01

    Novel information and communication technologies create possibilities to change the future of health care. Ambient Assisted Living (AAL) is seen as a promising supplement to the current care models. The main goal of AAL solutions is to apply ambient intelligence technologies to enable elderly people to continue to live in their preferred environments. Applying trained models from health data is challenging because the personalized environments could differ significantly from the ones that provided the training data. This paper investigates the effects on activity recognition accuracy, using a single accelerometer, of personalized models compared to models built on the general population. In addition, we propose a collaborative filtering based approach which provides a balance between fully personalized models and generic models. The results show that the accuracy could be improved to 95% with fully personalized models, and up to 91.6% with collaborative filtering based models, which is significantly better than common models that exhibit accuracy of 85.1%. The collaborative filtering approach seems to provide highly personalized models with substantial accuracy, while overcoming the cold start problem that is common for fully personalized models.
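
    A sketch of the middle ground between generic and fully personalized activity models: blend a population classifier with a per-user classifier, weighted by how much labelled data the user has. This weighting rule is an assumption standing in for the paper's collaborative-filtering scheme, and all data are synthetic.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(2)
    X_pop, y_pop = rng.normal(size=(500, 6)), rng.integers(0, 2, 500)
    X_usr, y_usr = rng.normal(0.5, 1.0, (40, 6)), rng.integers(0, 2, 40)

    generic = LogisticRegression(max_iter=500).fit(X_pop, y_pop)
    personal = LogisticRegression(max_iter=500).fit(X_usr, y_usr)

    def blended_proba(X, n_user=len(X_usr), n0=100):
        w = n_user / (n_user + n0)   # trust the personal model more as data grows
        return w * personal.predict_proba(X) + (1 - w) * generic.predict_proba(X)

    print(blended_proba(rng.normal(size=(3, 6))).round(2))
    ```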

  8. Shape accuracy optimization for cable-rib tension deployable antenna structure with tensioned cables

    NASA Astrophysics Data System (ADS)

    Liu, Ruiwei; Guo, Hongwei; Liu, Rongqiang; Wang, Hongxiang; Tang, Dewei; Song, Xiaoke

    2017-11-01

    Shape accuracy is of substantial importance in deployable structures as the demand for large-scale deployable structures in various fields, especially in aerospace engineering, increases. The main purpose of this paper is to present a shape accuracy optimization method to find the optimal pretensions for the desired shape of cable-rib tension deployable antenna structure with tensioned cables. First, an analysis model of the deployable structure is established by using finite element method. In this model, geometrical nonlinearity is considered for the cable element and beam element. Flexible deformations of the deployable structure under the action of cable network and tensioned cables are subsequently analyzed separately. Moreover, the influence of pretension of tensioned cables on natural frequencies is studied. Based on the results, a genetic algorithm is used to find a set of reasonable pretension and thus minimize structural deformation under the first natural frequency constraint. Finally, numerical simulations are presented to analyze the deployable structure under two kinds of constraints. Results show that the shape accuracy and natural frequencies of deployable structure can be effectively improved by pretension optimization.
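
    As a toy illustration of pretension search with a genetic algorithm under a first-frequency constraint, the sketch below uses surrogate objective and frequency functions in place of the paper's finite element model; every number here is invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    target = np.array([40.0, 55.0, 30.0, 65.0])          # "ideal" pretensions (N)

    def rms_error(p):                                    # surrogate shape error
        return np.sqrt(((p - target) ** 2).mean())

    def first_freq(p):                                   # surrogate natural frequency
        return 0.02 * p.sum()

    def fitness(p, f_min=3.0):
        penalty = 1e3 * max(0.0, f_min - first_freq(p))  # constraint as penalty
        return -(rms_error(p) + penalty)

    pop = rng.uniform(10, 100, size=(60, 4))
    for _ in range(200):
        scores = np.array([fitness(p) for p in pop])
        parents = pop[np.argsort(scores)][-20:]          # truncation selection
        children = parents[rng.integers(0, 20, 60)] + rng.normal(0, 1.5, (60, 4))
        pop = np.clip(children, 10, 100)                 # mutate within bounds
    best = pop[np.argmax([fitness(p) for p in pop])]
    print("optimized pretensions (N):", best.round(1))
    ```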

  9. Lessons in molecular recognition. 2. Assessing and improving cross-docking accuracy.

    PubMed

    Sutherland, Jeffrey J; Nandigam, Ravi K; Erickson, Jon A; Vieth, Michal

    2007-01-01

    Docking methods are used to predict the manner in which a ligand binds to a protein receptor. Many studies have assessed the success rate of programs in self-docking tests, whereby a ligand is docked into the protein structure from which it was extracted. Cross-docking, or using a protein structure from a complex containing a different ligand, provides a more realistic assessment of a docking program's ability to reproduce X-ray results. In this work, cross-docking was performed with CDocker, Fred, and Rocs using multiple X-ray structures for eight proteins (two kinases, one nuclear hormone receptor, one serine protease, two metalloproteases, and two phosphodiesterases). While average cross-docking accuracy is not encouraging, it is shown that using the protein structure from the complex that contains the bound ligand most similar to the docked ligand increases docking accuracy for all methods ("similarity selection"). Identifying the most successful protein conformer ("best selection") and similarity selection substantially reduce the difference between self-docking and average cross-docking accuracy. We identify universal predictors of docking accuracy (i.e., showing consistent behavior across most protein-method combinations), and show that models for predicting docking accuracy built using these parameters can be used to select the most appropriate docking method.
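
    The "similarity selection" rule itself is simple to express: dock into the structure whose co-crystallized ligand is most similar to the query ligand. A sketch using toy set-based fingerprints and Tanimoto similarity; the structure names and bit sets are hypothetical.

    ```python
    def tanimoto(a, b):
        return len(a & b) / len(a | b)

    # Hypothetical fingerprints of ligands bound in available X-ray structures
    xray = {
        "1abc": {1, 4, 7, 9, 15},
        "2def": {2, 4, 8, 16, 23},
        "3ghi": {1, 4, 7, 10, 15, 21},
    }
    query = {1, 4, 7, 9, 21}             # fingerprint of the ligand to be docked

    best = max(xray, key=lambda pdb: tanimoto(query, xray[pdb]))
    print("dock into structure:", best)  # conformer with the most similar ligand
    ```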

  10. The Evolution of Image-Free Robotic Assistance in Unicompartmental Knee Arthroplasty.

    PubMed

    Lonner, Jess H; Moretti, Vincent M

    2016-01-01

    Semiautonomous robotic technology has been introduced to optimize accuracy of bone preparation, implant positioning, and soft tissue balance in unicompartmental knee arthroplasty (UKA), with the expectation that there will be a resultant improvement in implant durability and survivorship. Currently, roughly one-fifth of UKAs in the US are being performed with robotic assistance, and it is anticipated that there will be substantial growth in market penetration of robotics over the next decade. First-generation robotic technology substantially improved implant position compared to conventional methods; however, high capital costs, uncertainty regarding the value of advanced technologies, and the need for preoperative computed tomography (CT) scans were barriers to broader adoption. Newer image-free semiautonomous robotic technology optimizes both implant position and soft tissue balance, without the need for preoperative CT scans and with pricing and portability that make it suitable for use in an ambulatory surgery center setting, where approximately 40% of these systems are currently being utilized. This article will review the robotic experience for UKA, including rationale, system descriptions, and outcomes.

  11. Continuous Indoor Positioning Fusing WiFi, Smartphone Sensors and Landmarks

    PubMed Central

    Deng, Zhi-An; Wang, Guofeng; Qin, Danyang; Na, Zhenyu; Cui, Yang; Chen, Juan

    2016-01-01

    To exploit the complementary strengths of WiFi positioning, pedestrian dead reckoning (PDR), and landmarks, we propose a novel fusion approach based on an extended Kalman filter (EKF). For WiFi positioning, unlike previous fusion approaches setting measurement noise parameters empirically, we deploy a kernel density estimation-based model to adaptively measure the related measurement noise statistics. Furthermore, a trusted area of WiFi positioning defined by fusion results of the previous step and WiFi signal outlier detection are exploited to reduce computational cost and improve WiFi positioning accuracy. For PDR, we integrate a gyroscope, an accelerometer, and a magnetometer to determine the user heading based on another EKF model. To reduce the accumulation error of PDR and enable continuous indoor positioning, not only the positioning results but also the heading estimations are recalibrated by indoor landmarks. Experimental results in a realistic indoor environment show that the proposed fusion approach achieves a substantial positioning accuracy improvement over individual positioning approaches including PDR and WiFi positioning. PMID:27608019
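
    A simplified sketch of the fusion step: a linear Kalman filter in which PDR step vectors drive the prediction and WiFi fixes drive the update. In the paper the WiFi measurement noise comes from a kernel density estimation model and the heading from a second EKF; here R is fixed and the state is position only, both assumptions of this sketch.

    ```python
    import numpy as np

    x = np.zeros(2)                      # position estimate (m)
    P = np.eye(2)                        # state covariance
    Q = 0.05 * np.eye(2)                 # PDR process noise (assumed)
    R = 4.0 * np.eye(2)                  # WiFi measurement noise (assumed)

    def fuse(x, P, pdr_step, wifi_fix):
        x, P = x + pdr_step, P + Q       # predict with the dead-reckoned step
        K = P @ np.linalg.inv(P + R)     # Kalman gain (H = I)
        x = x + K @ (wifi_fix - x)       # correct with the WiFi position
        return x, (np.eye(2) - K) @ P

    for step, fix in [(np.array([0.7, 0.1]), np.array([0.9, 0.0])),
                      (np.array([0.7, 0.0]), np.array([1.5, 0.2]))]:
        x, P = fuse(x, P, step, fix)
    print("fused position:", x.round(2))
    ```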

  12. Continuous Indoor Positioning Fusing WiFi, Smartphone Sensors and Landmarks.

    PubMed

    Deng, Zhi-An; Wang, Guofeng; Qin, Danyang; Na, Zhenyu; Cui, Yang; Chen, Juan

    2016-09-05

    To exploit the complementary strengths of WiFi positioning, pedestrian dead reckoning (PDR), and landmarks, we propose a novel fusion approach based on an extended Kalman filter (EKF). For WiFi positioning, unlike previous fusion approaches setting measurement noise parameters empirically, we deploy a kernel density estimation-based model to adaptively measure the related measurement noise statistics. Furthermore, a trusted area of WiFi positioning defined by fusion results of the previous step and WiFi signal outlier detection are exploited to reduce computational cost and improve WiFi positioning accuracy. For PDR, we integrate a gyroscope, an accelerometer, and a magnetometer to determine the user heading based on another EKF model. To reduce the accumulation error of PDR and enable continuous indoor positioning, not only the positioning results but also the heading estimations are recalibrated by indoor landmarks. Experimental results in a realistic indoor environment show that the proposed fusion approach achieves a substantial positioning accuracy improvement over individual positioning approaches including PDR and WiFi positioning.

  13. Evaluation of Penalized and Nonpenalized Methods for Disease Prediction with Large-Scale Genetic Data.

    PubMed

    Won, Sungho; Choi, Hosik; Park, Suyeon; Lee, Juyoung; Park, Changyi; Kwon, Sunghoon

    2015-01-01

    Owing to recent improvement of genotyping technology, large-scale genetic data can be utilized to identify disease susceptibility loci, and these findings have substantially improved our understanding of complex diseases. However, in spite of these successes, most of the genetic effects for many complex diseases were found to be very small, which has been a big hurdle in building disease prediction models. Recently, many statistical methods based on penalized regressions have been proposed to tackle the so-called "large P and small N" problem. Penalized regressions including the least absolute shrinkage and selection operator (LASSO) and ridge regression limit the space of parameters, and this constraint enables the estimation of effects for a very large number of SNPs. Various extensions have been suggested, and, in this report, we compare their accuracy by applying them to several complex diseases. Our results show that penalized regressions are usually robust and provide better accuracy than the existing methods, at least for the diseases under consideration.
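
    A minimal "large P, small N" comparison of LASSO and ridge on a synthetic genotype matrix, with the regularization strengths, dimensions, and effect sizes chosen arbitrarily for illustration.

    ```python
    import numpy as np
    from sklearn.linear_model import Lasso, Ridge
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(4)
    n, p = 200, 2000                                        # many more SNPs than subjects
    X = rng.binomial(2, 0.3, size=(n, p)).astype(float)     # genotypes 0/1/2
    beta = np.zeros(p); beta[:20] = rng.normal(0, 0.3, 20)  # few true effects
    y = X @ beta + rng.normal(0, 1.0, n)

    Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
    for model in (Lasso(alpha=0.05), Ridge(alpha=50.0)):
        r = np.corrcoef(model.fit(Xtr, ytr).predict(Xte), yte)[0, 1]
        print(type(model).__name__, "predictive correlation:", round(r, 2))
    ```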

  14. Deep learning as a tool for increased accuracy and efficiency of histopathological diagnosis

    NASA Astrophysics Data System (ADS)

    Litjens, Geert; Sánchez, Clara I.; Timofeeva, Nadya; Hermsen, Meyke; Nagtegaal, Iris; Kovacs, Iringo; Hulsbergen-van de Kaa, Christina; Bult, Peter; van Ginneken, Bram; van der Laak, Jeroen

    2016-05-01

    Pathologists face a substantial increase in workload and complexity of histopathologic cancer diagnosis due to the advent of personalized medicine. Therefore, diagnostic protocols have to focus equally on efficiency and accuracy. In this paper we introduce ‘deep learning’ as a technique to improve the objectivity and efficiency of histopathologic slide analysis. Through two examples, prostate cancer identification in biopsy specimens and breast cancer metastasis detection in sentinel lymph nodes, we show the potential of this new methodology to reduce the workload for pathologists, while at the same time increasing objectivity of diagnoses. We found that all slides containing prostate cancer and micro- and macro-metastases of breast cancer could be identified automatically while 30-40% of the slides containing benign and normal tissue could be excluded without the use of any additional immunohistochemical markers or human intervention. We conclude that ‘deep learning’ holds great promise to improve the efficacy of prostate cancer diagnosis and breast cancer staging.

  15. Customization of UWB 3D-RTLS Based on the New Uncertainty Model of the AoA Ranging Technique

    PubMed Central

    Jachimczyk, Bartosz; Dziak, Damian; Kulesza, Wlodek J.

    2017-01-01

    The increased potential and effectiveness of Real-time Locating Systems (RTLSs) substantially influence their application spectrum. They are widely used, inter alia, in the industrial sector, healthcare, home care, and in logistic and security applications. The research aims to develop an analytical method to customize UWB-based RTLS, in order to improve their localization performance in terms of accuracy and precision. The analytical uncertainty model of Angle of Arrival (AoA) localization in a 3D indoor space, which is the foundation of the customization concept, is established in a working environment. Additionally, a suitable angular-based 3D localization algorithm is introduced. The paper investigates the following issues: the influence of the proposed correction vector on the localization accuracy; the impact of the system’s configuration and LS’s relative deployment on the localization precision distribution map. The advantages of the method are verified by comparing them with a reference commercial RTLS localization engine. The results of simulations and physical experiments prove the value of the proposed customization method. The research confirms that the analytical uncertainty model is a valid representation of RTLS’ localization uncertainty in terms of accuracy and precision and can be useful for its performance improvement. The research shows that Angle of Arrival localization in a 3D indoor space, applying the simple angular-based localization algorithm and correction vector, improves localization accuracy and precision to the point that the system challenges the reference hardware's advanced localization engine. Moreover, the research guides the deployment of location sensors to enhance the localization precision. PMID:28125056
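
    For intuition, AoA localization can be posed as finding the point closest, in least squares, to the rays defined by each anchor's measured direction. A sketch with an invented anchor layout and noise level (this is not the paper's correction-vector method):

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    anchors = np.array([[0, 0, 3], [10, 0, 3], [0, 10, 3], [10, 10, 3]], float)
    tag = np.array([4.0, 6.0, 1.2])          # true position, to be recovered

    A = np.zeros((3, 3)); b = np.zeros(3)
    for a in anchors:
        d = tag - a
        d = d / np.linalg.norm(d)            # true direction toward the tag
        d += rng.normal(0, 0.01, 3)          # angular measurement noise
        d /= np.linalg.norm(d)
        M = np.eye(3) - np.outer(d, d)       # projector orthogonal to the ray
        A += M; b += M @ a                   # accumulate normal equations
    print("estimated tag position:", np.linalg.solve(A, b).round(2))
    ```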

  16. Customization of UWB 3D-RTLS Based on the New Uncertainty Model of the AoA Ranging Technique.

    PubMed

    Jachimczyk, Bartosz; Dziak, Damian; Kulesza, Wlodek J

    2017-01-25

    The increased potential and effectiveness of Real-time Locating Systems (RTLSs) substantially influence their application spectrum. They are widely used, inter alia, in the industrial sector, healthcare, home care, and in logistic and security applications. The research aims to develop an analytical method to customize UWB-based RTLS, in order to improve their localization performance in terms of accuracy and precision. The analytical uncertainty model of Angle of Arrival (AoA) localization in a 3D indoor space, which is the foundation of the customization concept, is established in a working environment. Additionally, a suitable angular-based 3D localization algorithm is introduced. The paper investigates the following issues: the influence of the proposed correction vector on the localization accuracy; the impact of the system's configuration and LS's relative deployment on the localization precision distribution map. The advantages of the method are verified by comparing them with a reference commercial RTLS localization engine. The results of simulations and physical experiments prove the value of the proposed customization method. The research confirms that the analytical uncertainty model is a valid representation of RTLS' localization uncertainty in terms of accuracy and precision and can be useful for its performance improvement. The research shows that Angle of Arrival localization in a 3D indoor space, applying the simple angular-based localization algorithm and correction vector, improves localization accuracy and precision to the point that the system challenges the reference hardware's advanced localization engine. Moreover, the research guides the deployment of location sensors to enhance the localization precision.

  17. Classifying four-category visual objects using multiple ERP components in single-trial ERP.

    PubMed

    Qin, Yu; Zhan, Yu; Wang, Changming; Zhang, Jiacai; Yao, Li; Guo, Xiaojuan; Wu, Xia; Hu, Bin

    2016-08-01

    Object categorization using single-trial electroencephalography (EEG) data measured while participants view images has been studied intensively. In previous studies, multiple event-related potential (ERP) components (e.g., P1, N1, P2, and P3) were used to improve the performance of object categorization of visual stimuli. In this study, we introduce a novel method that uses a multiple-kernel support vector machine to fuse multiple ERP component features. We investigate whether fusing the potentially complementary information of different ERP components (e.g., P1, N1, P2a, and P2b) can improve the performance of four-category visual object classification in single-trial EEGs. We also compare the classification accuracy of different ERP component fusion methods. Our experimental results indicate that the classification accuracy increases through multiple ERP fusion. Additional comparative analyses indicate that the multiple-kernel fusion method can achieve a mean classification accuracy higher than 72 %, which is substantially better than that achieved with any single ERP component feature (55.07 % for the best single ERP component, N1). We compare the classification results with those of other fusion methods and determine that the accuracy of the multiple-kernel fusion method is 5.47, 4.06, and 16.90 % higher than those of feature concatenation, feature extraction, and decision fusion, respectively. Our study shows that our multiple-kernel fusion method outperforms other fusion methods and thus provides a means to improve the classification performance of single-trial ERPs in brain-computer interface research.
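
    A sketch of the kernel-fusion idea: compute one RBF kernel per ERP component's feature block, combine them as a weighted sum, and train an SVM on the precomputed kernel. The weights are fixed here rather than learned as in true multiple-kernel learning, and all data are synthetic.

    ```python
    import numpy as np
    from sklearn.metrics.pairwise import rbf_kernel
    from sklearn.svm import SVC

    rng = np.random.default_rng(6)
    n = 120
    blocks = {"P1": rng.normal(size=(n, 8)), "N1": rng.normal(size=(n, 8)),
              "P2a": rng.normal(size=(n, 8)), "P2b": rng.normal(size=(n, 8))}
    y = rng.integers(0, 4, n)                 # four visual object categories

    weights = {"P1": 0.2, "N1": 0.4, "P2a": 0.2, "P2b": 0.2}
    K = sum(w * rbf_kernel(blocks[c], gamma=0.1) for c, w in weights.items())

    tr, te = np.arange(90), np.arange(90, 120)
    clf = SVC(kernel="precomputed").fit(K[np.ix_(tr, tr)], y[tr])
    acc = (clf.predict(K[np.ix_(te, tr)]) == y[te]).mean()
    print("fused-kernel accuracy:", round(acc, 2))
    ```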

  18. Efficient alignment-free DNA barcode analytics.

    PubMed

    Kuksa, Pavel; Pavlovic, Vladimir

    2009-11-10

    In this work we consider barcode DNA analysis problems and address them using alternative, alignment-free methods and representations which model sequences as collections of short sequence fragments (features). The methods use fixed-length representations (spectrum) for barcode sequences to measure similarities or dissimilarities between sequences coming from the same or different species. The spectrum-based representation not only allows for accurate and computationally efficient species classification, but also opens the possibility for accurate clustering analysis of putative species barcodes and identification of critical within-barcode loci distinguishing barcodes of different sample groups. The new alignment-free methods provide highly accurate and fast DNA barcode-based identification and classification of species with substantial improvements in accuracy and speed over state-of-the-art barcode analysis methods. We evaluate our methods on problems of species classification and identification using barcodes, important and relevant analytical tasks in many practical applications (adverse species movement monitoring, sampling surveys for unknown or pathogenic species identification, biodiversity assessment, etc.). On several benchmark barcode datasets, including ACG, Astraptes, Hesperiidae, Fish larvae, and Birds of North America, the proposed alignment-free methods considerably improve prediction accuracy compared to prior results. We also observe significant running time improvements over the state-of-the-art methods. Our results show that newly developed alignment-free methods for DNA barcoding can efficiently and with high accuracy identify specimens by examining only a few barcode features, resulting in increased scalability and interpretability of current computational approaches to barcoding.
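
    The spectrum representation reduces to counting k-mers into a fixed-length vector; below is a toy nearest-neighbor classifier over such vectors. The sequences and the choice k=3 are illustrative only.

    ```python
    from collections import Counter
    from itertools import product

    K = 3
    KMERS = ["".join(p) for p in product("ACGT", repeat=K)]

    def spectrum(seq):
        """Fixed-length k-mer count vector (the 'spectrum') of a sequence."""
        c = Counter(seq[i:i + K] for i in range(len(seq) - K + 1))
        return [c[k] for k in KMERS]

    train = {"speciesA": spectrum("ACGTACGTACGTTTGA"),
             "speciesB": spectrum("GGCCGGTTAAGGCCGG")}

    def classify(seq):
        s = spectrum(seq)
        return min(train, key=lambda sp: sum((a - b) ** 2
                                             for a, b in zip(s, train[sp])))

    print(classify("ACGTACGAACGTTTGA"))   # -> speciesA
    ```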

  19. Real-time image-processing algorithm for markerless tumour tracking using X-ray fluoroscopic imaging.

    PubMed

    Mori, S

    2014-05-01

    To ensure accuracy in respiratory-gating treatment, X-ray fluoroscopic imaging is used to detect tumour position in real time. Detection accuracy is strongly dependent on image quality, particularly positional differences between the patient and treatment couch. We developed a new algorithm to improve the quality of images obtained in X-ray fluoroscopic imaging and report the preliminary results. Two oblique X-ray fluoroscopic images were acquired using a dynamic flat panel detector (DFPD) for two patients with lung cancer. The weighting factor was applied to the DFPD image in respective columns, because most anatomical structures, as well as the treatment couch and port cover edge, were aligned in the superior-inferior direction when the patient lay on the treatment couch. The weighting factors for the respective columns were varied until the standard deviation of the pixel values within the image region was minimized. Once the weighting factors were calculated, the quality of the DFPD image was improved by applying the factors to multiframe images. Applying the image-processing algorithm produced substantial improvement in the quality of images, and the image contrast was increased. The treatment couch and irradiation port edge, which were not related to a patient's position, were removed. The average image-processing time was 1.1 ms, showing that this fast image processing can be applied to real-time tumour-tracking systems. These findings indicate that this image-processing algorithm improves the image quality in patients with lung cancer and successfully removes objects not related to the patient. Our image-processing algorithm might be useful in improving gated-treatment accuracy.
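
    A simplified take on the column-weighting idea: because the static structures (couch, port cover) are aligned column-wise, scaling each column toward the global mean suppresses them. Choosing the weights in closed form this way, rather than by the iterative standard-deviation search the record describes, is an assumption of this sketch.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    img = rng.normal(100, 5, size=(64, 64))   # synthetic fluoroscopic frame
    img[:, 20:24] += 40                       # a static couch-edge-like structure

    w = img.mean() / img.mean(axis=0)         # one weight per column
    flat = img * w                            # weighted image

    print("std before:", img.std().round(2), "after:", flat.std().round(2))
    ```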

  20. A Shock-Adaptive Godunov Scheme Based on the Generalised Lagrangian Formulation

    NASA Astrophysics Data System (ADS)

    Lepage, C. Y.; Hui, W. H.

    1995-12-01

    Application of the Godunov scheme to the Euler equations of gas dynamics based on the Eulerian formulation of flow smears discontinuities, sliplines especially, over several computational cells, while the accuracy in the smooth flow region is of order O(h), where h is the cell width. Based on the generalised Lagrangian formulation (GLF) of Hui et al., the Godunov scheme yields superior accuracy. By the use of coordinate streamlines in the GLF, the slipline—itself a streamline—is resolved crisply. Infinite shock resolution is achieved through the splitting of shock-cells. An improved entropy-conservation formulation of the governing equations is also proposed for computations in smooth flow regions. Finally, the use of the GLF substantially simplifies the programming logic resulting in a very robust, accurate, and efficient scheme.

  1. A biomechanical modeling guided simultaneous motion estimation and image reconstruction technique (SMEIR-Bio) for 4D-CBCT reconstruction

    NASA Astrophysics Data System (ADS)

    Huang, Xiaokun; Zhang, You; Wang, Jing

    2017-03-01

    Four-dimensional (4D) cone-beam computed tomography (CBCT) enables motion tracking of anatomical structures and removes artifacts introduced by motion. However, the imaging time/dose of 4D-CBCT is substantially longer/higher than that of traditional 3D-CBCT. We previously developed a simultaneous motion estimation and image reconstruction (SMEIR) algorithm to reconstruct high-quality 4D-CBCT from a limited number of projections to reduce the imaging time/dose. However, the accuracy of SMEIR is limited in reconstructing low-contrast regions with fine structure details. In this study, we incorporate biomechanical modeling into the SMEIR algorithm (SMEIR-Bio) to improve the reconstruction accuracy at low-contrast regions with fine details. The efficacy of SMEIR-Bio is evaluated using 11 lung patient cases and compared to that of the original SMEIR algorithm. Qualitative and quantitative comparisons showed that SMEIR-Bio greatly enhances the accuracy of the reconstructed 4D-CBCT volume in low-contrast regions, which can potentially benefit multiple clinical applications including treatment outcome analysis.

  2. Do Convolutional Neural Networks Learn Class Hierarchy?

    PubMed

    Bilal, Alsallakh; Jourabloo, Amin; Ye, Mao; Liu, Xiaoming; Ren, Liu

    2018-01-01

    Convolutional Neural Networks (CNNs) currently achieve state-of-the-art accuracy in image classification. With a growing number of classes, the accuracy usually drops as the possibilities of confusion increase. Interestingly, the class confusion patterns follow a hierarchical structure over the classes. We present visual-analytics methods to reveal and analyze this hierarchy of similar classes in relation to CNN-internal data. We found that this hierarchy not only dictates the confusion patterns between the classes, it furthermore dictates the learning behavior of CNNs. In particular, the early layers in these networks develop feature detectors that can separate high-level groups of classes quite well, even after a few training epochs. In contrast, the later layers require substantially more epochs to develop specialized feature detectors that can separate individual classes. We demonstrate how these insights are key to significant improvements in accuracy by designing hierarchy-aware CNNs that accelerate model convergence and alleviate overfitting. We further demonstrate how our methods help in identifying various quality issues in the training data.
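
    One way to make the observed class hierarchy concrete is to cluster a symmetrized confusion matrix; the sketch below uses made-up confusion counts for four classes and is not the authors' visual-analytics tooling.

    ```python
    import numpy as np
    from scipy.cluster.hierarchy import linkage, dendrogram
    from scipy.spatial.distance import squareform

    C = np.array([[90, 6, 2, 2],           # rows: true class, cols: predicted
                  [8, 88, 2, 2],
                  [1, 2, 85, 12],
                  [2, 1, 14, 83]], float)
    P = C / C.sum(axis=1, keepdims=True)   # row-normalized confusion rates
    S = (P + P.T) / 2                      # symmetric confusion strength
    D = 1 - S / S.max()                    # more confusion -> smaller distance
    np.fill_diagonal(D, 0)

    Z = linkage(squareform(D, checks=False), method="average")
    leaves = dendrogram(Z, no_plot=True, labels=["cat", "dog", "car", "truck"])
    print(leaves["ivl"])                   # confusable classes end up adjacent
    ```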

  3. 26 CFR 1.6662-1 - Overview of the accuracy-related penalty.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... one or more of the following: (a) Negligence or disregard of rules or regulations; (b) Any substantial..., i.e., the penalties for negligence or disregard of rules or regulations, substantial understatements...), respectively. The penalties for negligence and for a substantial (or gross) valuation misstatement under...

  4. Diagnostic accuracy of optical coherence tomography in actinic keratosis and basal cell carcinoma.

    PubMed

    Olsen, J; Themstrup, L; De Carvalho, N; Mogensen, M; Pellacani, G; Jemec, G B E

    2016-12-01

    Early diagnosis of non-melanoma skin cancer (NMSC) is potentially possible using optical coherence tomography (OCT), which provides non-invasive, real-time images of skin with micrometre resolution and an imaging depth of up to 2 mm. OCT technology for skin imaging has undergone significant developments, improving image quality substantially. The diagnostic accuracy of any method is influenced by continuous technological development, making it necessary to regularly re-evaluate methods. The objective of this study is to estimate the diagnostic accuracy of OCT in basal cell carcinomas (BCC) and actinic keratosis (AK), as well as in differentiating these lesions from normal skin. A study set consisting of 142 OCT images meeting selection criteria for image quality and diagnosis of AK, BCC and normal skin was presented uniformly to two groups of blinded observers: 5 dermatologists experienced in OCT-image interpretation and 5 dermatologists with no experience in OCT. During the presentation of the study set the observers filled out a standardized questionnaire regarding the OCT diagnosis. Images were captured using a commercially available OCT machine (VivoSight®, Michelson Diagnostics, UK). Skilled OCT observers were able to diagnose BCC lesions with a sensitivity of 86% to 95% and a specificity of 81% to 98%. Skilled observers with at least one year of OCT experience showed an overall higher diagnostic accuracy compared to inexperienced observers. The study shows an improved diagnostic accuracy of OCT in differentiating AK and BCC from healthy skin using state-of-the-art technology compared to earlier OCT technology, especially concerning BCC diagnosis. Copyright © 2016 Elsevier B.V. All rights reserved.

  5. A comparison of genomic selection models across time in interior spruce (Picea engelmannii × glauca) using unordered SNP imputation methods

    PubMed Central

    Ratcliffe, B; El-Dien, O G; Klápště, J; Porth, I; Chen, C; Jaquish, B; El-Kassaby, Y A

    2015-01-01

    Genomic selection (GS) potentially offers an unparalleled advantage over traditional pedigree-based selection (TS) methods by reducing the time commitment required to carry out a single cycle of tree improvement. This quality is particularly appealing to tree breeders, where lengthy improvement cycles are the norm. We explored the prospect of implementing GS for interior spruce (Picea engelmannii × glauca) utilizing a genotyped population of 769 trees belonging to 25 open-pollinated families. A series of repeated tree height measurements through ages 3–40 years permitted the testing of GS methods temporally. The genotyping-by-sequencing (GBS) platform was used for single nucleotide polymorphism (SNP) discovery in conjunction with three unordered imputation methods applied to a data set with 60% missing information. Further, three diverse GS models were evaluated based on predictive accuracy (PA), and their marker effects. Moderate levels of PA (0.31–0.55) were observed and were of sufficient capacity to deliver improved selection response over TS. Additionally, PA varied substantially through time in accordance with spatial competition among trees. As expected, temporal PA was well correlated with age-age genetic correlation (r=0.99), and decreased substantially with increasing difference in age between the training and validation populations (0.04–0.47). Moreover, our imputation comparisons indicate that k-nearest neighbor and singular value decomposition yielded a greater number of SNPs and gave higher predictive accuracies than imputing with the mean. Furthermore, the ridge regression (rrBLUP) and BayesCπ (BCπ) models both yielded equal, and better PA than the generalized ridge regression heteroscedastic effect model for the traits evaluated. PMID:26126540

  6. A comparison of genomic selection models across time in interior spruce (Picea engelmannii × glauca) using unordered SNP imputation methods.

    PubMed

    Ratcliffe, B; El-Dien, O G; Klápště, J; Porth, I; Chen, C; Jaquish, B; El-Kassaby, Y A

    2015-12-01

    Genomic selection (GS) potentially offers an unparalleled advantage over traditional pedigree-based selection (TS) methods by reducing the time commitment required to carry out a single cycle of tree improvement. This quality is particularly appealing to tree breeders, where lengthy improvement cycles are the norm. We explored the prospect of implementing GS for interior spruce (Picea engelmannii × glauca) utilizing a genotyped population of 769 trees belonging to 25 open-pollinated families. A series of repeated tree height measurements through ages 3-40 years permitted the testing of GS methods temporally. The genotyping-by-sequencing (GBS) platform was used for single nucleotide polymorphism (SNP) discovery in conjunction with three unordered imputation methods applied to a data set with 60% missing information. Further, three diverse GS models were evaluated based on predictive accuracy (PA), and their marker effects. Moderate levels of PA (0.31-0.55) were observed and were of sufficient capacity to deliver improved selection response over TS. Additionally, PA varied substantially through time in accordance with spatial competition among trees. As expected, temporal PA was well correlated with age-age genetic correlation (r=0.99), and decreased substantially with increasing difference in age between the training and validation populations (0.04-0.47). Moreover, our imputation comparisons indicate that k-nearest neighbor and singular value decomposition yielded a greater number of SNPs and gave higher predictive accuracies than imputing with the mean. Furthermore, the ridge regression (rrBLUP) and BayesCπ (BCπ) models both yielded equal, and better PA than the generalized ridge regression heteroscedastic effect model for the traits evaluated.
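
    A minimal rrBLUP-flavored sketch: ridge regression of a trait on SNP genotypes, with predictive accuracy taken as the correlation between predicted and observed values in a held-out set. Dimensions, effect sizes, and the regularization strength are invented for illustration.

    ```python
    import numpy as np
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(8)
    n, p = 769, 5000                          # trees x SNP markers
    X = rng.binomial(2, 0.3, (n, p)).astype(float)
    u = rng.normal(0, 0.05, p)                # many small marker effects
    y = X @ u + rng.normal(0, 1.0, n)         # tree height (arbitrary units)

    tr, va = np.arange(600), np.arange(600, n)
    pred = Ridge(alpha=1000.0).fit(X[tr], y[tr]).predict(X[va])
    print("predictive accuracy (r):", round(np.corrcoef(pred, y[va])[0, 1], 2))
    ```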

  7. Technical editing of research reports in biomedical journals.

    PubMed

    Wager, Elizabeth; Middleton, Philippa

    2008-10-08

    Most journals try to improve their articles by technical editing processes such as proof-reading, editing to conform to 'house styles', grammatical conventions and checking accuracy of cited references. Despite the considerable resources devoted to technical editing, we do not know whether it improves the accessibility of biomedical research findings or the utility of articles. This is an update of a Cochrane methodology review first published in 2003. To assess the effects of technical editing on research reports in peer-reviewed biomedical journals, and to assess the level of accuracy of references to these reports. We searched The Cochrane Library Issue 2, 2007; MEDLINE (last searched July 2006); EMBASE (last searched June 2007) and checked relevant articles for further references. We also searched the Internet and contacted researchers and experts in the field. Prospective or retrospective comparative studies of technical editing processes applied to original research articles in biomedical journals, as well as studies of reference accuracy. Two review authors independently assessed each study against the selection criteria and assessed the methodological quality of each study. One review author extracted the data, and the second review author repeated this. We located 32 studies addressing technical editing and 66 surveys of reference accuracy. Only three of the studies were randomised controlled trials. A 'package' of largely unspecified editorial processes applied between acceptance and publication was associated with improved readability in two studies and improved reporting quality in another two studies, while another study showed mixed results after stricter editorial policies were introduced. More intensive editorial processes were associated with fewer errors in abstracts and references. Providing instructions to authors was associated with improved reporting of ethics requirements in one study and fewer errors in references in two studies, but no difference was seen in the quality of abstracts in one randomised controlled trial. Structuring generally improved the quality of abstracts, but increased their length. The reference accuracy studies showed a median citation error rate of 38% and a median quotation error rate of 20%. Surprisingly few studies have evaluated the effects of technical editing rigorously. However there is some evidence that the 'package' of technical editing used by biomedical journals does improve papers. A substantial number of references in biomedical articles are cited or quoted inaccurately.

  8. Testing of the analytical anisotropic algorithm for photon dose calculation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Esch, Ann van; Tillikainen, Laura; Pyykkonen, Jukka

    2006-11-15

    The analytical anisotropic algorithm (AAA) was implemented in the Eclipse (Varian Medical Systems) treatment planning system to replace the single pencil beam (SPB) algorithm for the calculation of dose distributions for photon beams. AAA was developed to improve the dose calculation accuracy, especially in heterogeneous media. The total dose deposition is calculated as the superposition of the dose deposited by two photon sources (primary and secondary) and by an electron contamination source. The photon dose is calculated as a three-dimensional convolution of Monte-Carlo precalculated scatter kernels, scaled according to the electron density matrix. For the configuration of AAA, an optimization algorithm determines the parameters characterizing the multiple source model by optimizing the agreement between the calculated and measured depth dose curves and profiles for the basic beam data. We have combined the acceptance tests obtained in three different departments for 6, 15, and 18 MV photon beams. The accuracy of AAA was tested for different field sizes (symmetric and asymmetric) for open fields, wedged fields, and static and dynamic multileaf collimation fields. Depth dose behavior at different source-to-phantom distances was investigated. Measurements were performed on homogeneous, water equivalent phantoms, on simple phantoms containing cork inhomogeneities, and on the thorax of an anthropomorphic phantom. Comparisons were made among measurements, AAA, and SPB calculations. The optimization procedure for the configuration of the algorithm was successful in reproducing the basic beam data with an overall accuracy of 3%, 1 mm in the build-up region, and 1%, 1 mm elsewhere. Testing of the algorithm in more clinical setups showed comparable results for depth dose curves, profiles, and monitor units of symmetric open and wedged beams below d_max. The electron contamination model was found to be suboptimal to model the dose around d_max, especially for physical wedges at smaller source to phantom distances. For the asymmetric field verification, absolute dose differences of up to 4% were observed for the most extreme asymmetries. Compared to the SPB, the penumbra modeling is considerably improved (1%, 1 mm). At the interface between solid water and cork, profiles show a better agreement with AAA. Depth dose curves in the cork are substantially better with AAA than with SPB. Improvements are more pronounced for 18 MV than for 6 MV. Point dose measurements in the thoracic phantom are mostly within 5%. In general, we can conclude that, compared to SPB, AAA improves the accuracy of dose calculations. Particular progress was made with respect to the penumbra and low dose regions. In heterogeneous materials, improvements are substantial and more pronounced for high (18 MV) than for low (6 MV) energies.

  9. Improving KPCA Online Extraction by Orthonormalization in the Feature Space.

    PubMed

    Souza Filho, Joao B O; Diniz, Paulo S R

    2018-04-01

    Recently, some online kernel principal component analysis (KPCA) techniques based on the generalized Hebbian algorithm (GHA) were proposed for use in large data sets, defining kernel components using concise dictionaries automatically extracted from data. This brief proposes two new online KPCA extraction algorithms, exploiting orthogonalized versions of the GHA rule. In both the cases, the orthogonalization of kernel components is achieved by the inclusion of some low complexity additional steps to the kernel Hebbian algorithm, thus not substantially affecting the computational cost of the algorithm. Results show improved convergence speed and accuracy of components extracted by the proposed methods, as compared with the state-of-the-art online KPCA extraction algorithms.

  10. PASTA: Ultra-Large Multiple Sequence Alignment for Nucleotide and Amino-Acid Sequences

    PubMed Central

    Mirarab, Siavash; Nguyen, Nam; Guo, Sheng; Wang, Li-San; Kim, Junhyong

    2015-01-01

    We introduce PASTA, a new multiple sequence alignment algorithm. PASTA uses a new technique to produce an alignment given a guide tree that enables it to be both highly scalable and very accurate. We present a study on biological and simulated data with up to 200,000 sequences, showing that PASTA produces highly accurate alignments, improving on the accuracy and scalability of the leading alignment methods (including SATé). We also show that trees estimated on PASTA alignments are highly accurate—slightly better than SATé trees, but with substantial improvements relative to other methods. Finally, PASTA is faster than SATé, highly parallelizable, and requires relatively little memory. PMID:25549288

  11. Cepheids Geometrical Distances Using Space Interferometry

    NASA Astrophysics Data System (ADS)

    Marengo, M.; Karovska, M.; Sasselov, D. D.; Sanchez, M.

    2004-05-01

    A space-based interferometer with a sub-milliarcsecond resolution in the UV-optical will provide a new avenue for the calibration of primary distance indicators with unprecedented accuracy, by allowing very accurate and stable measurements of Cepheid pulsation amplitudes at wavelengths not accessible from the ground. Sasselov & Karovska (1994) have shown that interferometers allow very accurate measurements of Cepheid distances by using a ``geometric'' variant of the Baade-Wesselink method. This method has been successfully applied to derive distances and radii of nearby Cepheids using ground-based near-IR and optical interferometers, within a 15% accuracy level. Our study shows that the main source of error in these measurements is due to the perturbing effects of the Earth's atmosphere, which is the limiting factor in interferometer stability. A space interferometer will not suffer from these intrinsic limitations, and can potentially improve astronomical distance measurements by an order of magnitude in precision. We discuss here the technical requirements that a space-based facility will need to carry out this project, allowing distance measurements within a few percent accuracy level. We will finally discuss how a sub-milliarcsecond resolution will allow the direct distance determination for hundreds of galactic sources, and provide a substantial improvement in the zero-point of the Cepheid distance scale.
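
    The geometric Baade-Wesselink variant referred to above matches the linear radius displacement, integrated from radial velocities, to the interferometric angular-diameter change. A sketch of the relation under the standard assumptions, with p the usual projection factor:

    ```latex
    % Linear radius change from the radial-velocity curve, matched to the
    % measured angular diameter theta(t) to yield the distance d.
    \[
      \Delta R(t) = -p \int_{t_0}^{t} v_{\mathrm{rad}}(t')\,dt',
      \qquad
      \theta(t) = \frac{2R(t)}{d}
      \quad\Longrightarrow\quad
      d = \frac{2\,\Delta R}{\Delta\theta}.
    \]
    ```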

  12. Fast dose kernel interpolation using Fourier transform with application to permanent prostate brachytherapy dosimetry.

    PubMed

    Liu, Derek; Sloboda, Ron S

    2014-05-01

    Boyer and Mok proposed a fast calculation method employing the Fourier transform (FT), for which calculation time is independent of the number of seeds but seed placement is restricted to calculation grid points. Here an interpolation method is described enabling unrestricted seed placement while preserving the computational efficiency of the original method. The Iodine-125 seed dose kernel was sampled and selected values were modified to optimize interpolation accuracy for clinically relevant doses. For each seed, the kernel was shifted to the nearest grid point via convolution with a unit impulse, implemented in the Fourier domain. The remaining fractional shift was performed using a piecewise third-order Lagrange filter. Implementation of the interpolation method greatly improved FT-based dose calculation accuracy. The dose distribution was accurate to within 2% beyond 3 mm from each seed. Isodose contours were indistinguishable from explicit TG-43 calculation. Dose-volume metric errors were negligible. Computation time for the FT interpolation method was essentially the same as Boyer's method. A FT interpolation method for permanent prostate brachytherapy TG-43 dose calculation was developed which expands upon Boyer's original method and enables unrestricted seed placement. The proposed method substantially improves the clinically relevant dose accuracy with negligible additional computation cost, preserving the efficiency of the original method.
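
    A 1D sketch of the two-stage seed shift: the integer part is done in the Fourier domain (multiplication by a linear phase, i.e., convolution with a shifted unit impulse) and the remaining fraction with a 4-tap third-order Lagrange filter. The signal, shift, and slicing convention are illustrative, not the paper's 3D implementation.

    ```python
    import numpy as np

    def lagrange3(frac):
        """4-tap Lagrange FIR approximating a delay of (1 + frac) samples."""
        D, taps = 1.0 + frac, np.ones(4)
        for k in range(4):
            for m in range(4):
                if m != k:
                    taps[k] *= (D - m) / (k - m)
        return taps

    x = np.exp(-0.5 * (np.arange(64) - 20.0) ** 2 / 9.0)  # smooth 1D "kernel"
    shift = 5.3                                           # seed offset in samples
    whole, frac = int(shift), shift - int(shift)

    f = np.fft.fftfreq(x.size)
    xi = np.fft.ifft(np.fft.fft(x) * np.exp(-2j * np.pi * f * whole)).real
    y = np.convolve(xi, lagrange3(frac))[1:1 + x.size]    # drop the 1-tap lag
    print("peak moved from 20 to", y.argmax())            # ~25
    ```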

  13. Precision GPS orbit determination strategies for an earth orbiter and geodetic tracking system

    NASA Technical Reports Server (NTRS)

    Lichten, Stephen M.; Bertiger, Willy I.; Border, James S.

    1988-01-01

    Data from two 1985 GPS field tests were processed and precise GPS orbits were determined. With combined carrier phase and pseudorange data, the 1314-km baseline repeatability improves substantially to 5 parts in 10⁹ (0.6 cm) in the north and 2 parts in 10⁸ (2-3 cm) in the other components. To achieve these levels of repeatability and accuracy, it is necessary to fine-tune the GPS solar radiation coefficients and ground station zenith tropospheric delays.

  14. HLA imputation in an admixed population: An assessment of the 1000 Genomes data as a training set.

    PubMed

    Nunes, Kelly; Zheng, Xiuwen; Torres, Margareth; Moraes, Maria Elisa; Piovezan, Bruno Z; Pontes, Gerlandia N; Kimura, Lilian; Carnavalli, Juliana E P; Mingroni Netto, Regina C; Meyer, Diogo

    2016-03-01

    Methods to impute HLA alleles based on dense single nucleotide polymorphism (SNP) data provide a valuable resource to association studies and evolutionary investigation of the MHC region. The availability of appropriate training sets is critical to the accuracy of HLA imputation, and the inclusion of samples with various ancestries is an important pre-requisite in studies of admixed populations. We assess the accuracy of HLA imputation using 1000 Genomes Project data as a training set, applying it to a highly admixed Brazilian population, the Quilombos from the state of São Paulo. To assess accuracy, we compared imputed and experimentally determined genotypes for 146 samples at 4 HLA classical loci. We found imputation accuracies of 82.9%, 81.8%, 94.8% and 86.6% for HLA-A, -B, -C and -DRB1 respectively (two-field resolution). Accuracies were improved when we included a subset of Quilombo individuals in the training set. We conclude that the 1000 Genomes data is a valuable resource for construction of training sets due to the diversity of ancestries and the potential for a large overlap of SNPs with the target population. We also show that tailoring training sets to features of the target population substantially enhances imputation accuracy. Copyright © 2016 American Society for Histocompatibility and Immunogenetics. Published by Elsevier Inc. All rights reserved.

  15. Accuracy improvement techniques in Precise Point Positioning method using multiple GNSS constellations

    NASA Astrophysics Data System (ADS)

    Psychas, Dimitrios Vasileios; Delikaraoglou, Demitris

    2016-04-01

    The future Global Navigation Satellite Systems (GNSS), including modernized GPS, GLONASS, Galileo and BeiDou, offer three or more signal carriers for civilian use and much more redundant observables. The additional frequencies can significantly improve the capabilities of the traditional geodetic techniques based on GPS signals at two frequencies, especially with regard to the availability, accuracy, interoperability and integrity of high-precision GNSS applications. Furthermore, highly redundant measurements can allow for robust simultaneous estimation of static or mobile user states including more parameters such as real-time tropospheric biases and more reliable ambiguity resolution estimates. This paper presents an investigation and analysis of accuracy improvement techniques in the Precise Point Positioning (PPP) method using signals from the fully operational (GPS and GLONASS), as well as the emerging (Galileo and BeiDou) GNSS systems. The main aim was to determine the improvement in both the positioning accuracy achieved and the convergence time it takes to achieve geodetic-level (10 cm or less) accuracy. To this end, freely available observation data from the recent Multi-GNSS Experiment (MGEX) of the International GNSS Service, as well as the open source program RTKLIB were used. Following a brief background of the PPP technique and the scope of MGEX, the paper outlines the various observational scenarios that were used in order to test various data processing aspects of PPP solutions with multi-frequency, multi-constellation GNSS systems. Results from the processing of multi-GNSS observation data from selected permanent MGEX stations are presented and useful conclusions and recommendations for further research are drawn. As shown, data fusion from GPS, GLONASS, Galileo and BeiDou systems is becoming increasingly significant, resulting in a position accuracy increase (mostly in the less favorable East direction) and a large reduction of convergence time in PPP static and kinematic solutions compared to GPS-only PPP solutions for various observational session durations. However, this is mostly observed when the visibility of Galileo and BeiDou satellites is substantially long within an observational session. In GPS-only cases dealing with data from high elevation cut-off angles, the number of GPS satellites decreases dramatically, leading to a position accuracy and convergence time deviating from satisfactory geodetic thresholds. By contrast, respective multi-GNSS PPP solutions not only show improvement, but also lead to geodetic-level accuracies even at a 30° elevation cut-off. Finally, the GPS ambiguity resolution in PPP processing is investigated using the GPS satellite wide-lane fractional cycle biases, which are included in the clock products by CNES. It is shown that their addition shortens the convergence time and increases the position accuracy of PPP solutions, especially in kinematic mode. Analogous improvement is obtained in respective multi-GNSS solutions, even though the GLONASS, Galileo and BeiDou ambiguities remain float, since information about them is not provided in the clock products available to date.

  16. Multistrip western blotting to increase quantitative data output.

    PubMed

    Kiyatkin, Anatoly; Aksamitiene, Edita

    2009-01-01

    The qualitative and quantitative measurements of protein abundance and modification states are essential in understanding their functions in diverse cellular processes. Typical western blotting, though sensitive, is prone to produce substantial errors and is not readily adapted to high-throughput technologies. Multistrip western blotting is a modified immunoblotting procedure based on simultaneous electrophoretic transfer of proteins from multiple strips of polyacrylamide gels to a single membrane sheet. In comparison with the conventional technique, Multistrip western blotting increases the data output per single blotting cycle up to tenfold, allows concurrent monitoring of up to nine different proteins from the same loading of the sample, and substantially improves the data accuracy by reducing immunoblotting-derived signal errors. This approach enables statistically reliable comparison of different or repeated sets of data, and therefore is beneficial to apply in biomedical diagnostics, systems biology, and cell signaling research.

  17. Direct Detection Doppler Lidar for Spaceborne Wind Measurement

    NASA Technical Reports Server (NTRS)

    Korb, C. Laurence; Flesia, Cristina

    1999-01-01

    The theory of double edge lidar techniques for measuring the atmospheric wind using aerosol and molecular backscatter is described. Two high spectral resolution filters with opposite slopes are located about the laser frequency for the aerosol based measurement, or in the wings of the Rayleigh-Brillouin profile for the molecular measurement. This doubles the signal change per unit Doppler shift and improves the measurement accuracy by nearly a factor of 2 relative to the single edge technique. For the aerosol based measurement, the use of two high resolution edge filters reduces the effects of background Rayleigh scattering by as much as an order of magnitude and substantially improves the measurement accuracy. Also, we describe a method that allows the Rayleigh and aerosol components of the signal to be independently determined. A measurement accuracy of 1.2 m/s can be obtained for a signal level of 1000 detected photons, which corresponds to signal levels in the boundary layer. For the molecular based measurement, we describe the use of a crossover region where the sensitivities of a molecular and an aerosol-based measurement are equal. This desensitizes the molecular measurement to the effects of aerosol scattering and greatly simplifies the measurement. Simulations using a conical scanning spaceborne lidar at 355 nm give an accuracy of 2-3 m/s for altitudes of 2-15 km for a 1 km vertical resolution, a satellite altitude of 400 km, and a 200 km x 200 km spatial resolution.

  18. Development of a novel empathy-related video-feedback intervention to improve empathic accuracy of nursing students: A pilot study.

    PubMed

    Lobchuk, Michelle; Halas, Gayle; West, Christina; Harder, Nicole; Tursunova, Zulfiya; Ramraj, Chantal

    2016-11-01

    Stressed family carers engage in health-risk behaviours that can lead to chronic illness. Innovative strategies are required to bolster empathic dialogue skills that impact nursing student confidence and sensitivity in meeting carers' wellness needs. To report on the development and evaluation of a promising empathy-related video-feedback intervention and its impact on student empathic accuracy on carer health risk behaviours. A pilot quasi-experimental design study with eight pairs of 3rd year undergraduate nursing students and carers. Students participated in perspective-taking instructional and practice sessions, and a 10-minute video-recorded dialogue with carers followed by a video-tagging task. Quantitative and qualitative approaches helped us to evaluate the recruitment protocol, capture participant responses to the intervention and study tools, and develop a tool to assess student empathic accuracy. The instructional and practice sessions increased student self-awareness of biases and interest in learning empathy by video-tagging feedback. Carers felt that students were 'non-judgmental', inquisitive, and helped them to 'gain new insights' that fostered ownership to change their health-risk behaviour. There was substantial Fleiss Kappa agreement among four raters across five dyads and 67 tagged instances. In general, students and carers evaluated the intervention favourably. The results suggest areas of improvement to the recruitment protocol, perspective-taking instructions, video-tagging task, and empathic accuracy tool. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. MRI-Based Computed Tomography Metal Artifact Correction Method for Improving Proton Range Calculation Accuracy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, Peter C.; Schreibmann, Eduard; Roper, Justin

    2015-03-15

    Purpose: Computed tomography (CT) artifacts can severely degrade dose calculation accuracy in proton therapy. Prompted by the recently increased popularity of magnetic resonance imaging (MRI) in the radiation therapy clinic, we developed an MRI-based CT artifact correction method for improving the accuracy of proton range calculations. Methods and Materials: The proposed method replaces corrupted CT data by mapping CT Hounsfield units (HU number) from a nearby artifact-free slice, using a coregistered MRI. MRI and CT volumetric images were registered with use of 3-dimensional (3D) deformable image registration (DIR). The registration was fine-tuned on a slice-by-slice basis by using 2D DIR. Based on the intensity of paired MRI pixel values and HU from an artifact-free slice, we performed a comprehensive analysis to predict the correct HU for the corrupted region. For a proof-of-concept validation, metal artifacts were simulated on a reference data set. Proton range was calculated using reference, artifactual, and corrected images to quantify the reduction in proton range error. The correction method was applied to 4 unique clinical cases. Results: The correction method resulted in substantial artifact reduction, both quantitatively and qualitatively. On respective simulated brain and head and neck CT images, the mean error was reduced from 495 and 370 HU to 108 and 92 HU after correction. Correspondingly, the absolute mean proton range errors of 2.4 cm and 1.7 cm were reduced to less than 2 mm in both cases. Conclusions: Our MRI-based CT artifact correction method can improve CT image quality and proton range calculation accuracy for patients with severe CT artifacts.

  20. Improved supervised classification of accelerometry data to distinguish behaviors of soaring birds.

    PubMed

    Sur, Maitreyi; Suffredini, Tony; Wessells, Stephen M; Bloom, Peter H; Lanzone, Michael; Blackshire, Sheldon; Sridhar, Srisarguru; Katzner, Todd

    2017-01-01

    Soaring birds can balance the energetic costs of movement by switching between flapping, soaring and gliding flight. Accelerometers can allow quantification of flight behavior and thus a context to interpret these energetic costs. However, models to interpret accelerometry data are still being developed, rarely trained with supervised datasets, and difficult to apply. We collected accelerometry data at 140 Hz from a trained golden eagle (Aquila chrysaetos) whose flight we recorded with video that we used to characterize behavior. We applied two forms of supervised classification, random forest (RF) models and K-nearest neighbor (KNN) models. The KNN model was substantially easier to implement than the RF approach but both were highly accurate in classifying basic behaviors such as flapping (85.5% and 83.6% accurate, respectively), soaring (92.8% and 87.6%) and sitting (84.1% and 88.9%) with overall accuracies of 86.6% and 92.3% respectively. More detailed classification schemes, with specific behaviors such as banking and straight flights, were well classified only by the KNN model (91.24% accurate; RF = 61.64% accurate). The RF model maintained its classification accuracy for basic behaviors at sampling frequencies as low as 10 Hz; the KNN model did so at sampling frequencies as low as 20 Hz. Classification of accelerometer data collected from free-ranging birds demonstrated a strong dependence of predicted behavior on the type of classification model used. Our analyses demonstrate the consequence of different approaches to classification of accelerometry data, the potential to optimize classification algorithms with validated flight behaviors to improve classification accuracy, ideal sampling frequencies for different classification algorithms, and a number of ways to improve commonly used analytical techniques and best practices for classification of accelerometry data.
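
    A minimal scikit-learn sketch of the two classifiers compared above follows. The synthetic feature matrix stands in for windowed summary statistics (means, variances, spectral power, and so on) computed from the 140 Hz accelerometer signal, so the dataset, feature count, and labels are all assumptions for illustration, not the eagle data.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.neighbors import KNeighborsClassifier
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)
      X = rng.normal(size=(600, 12))    # 600 windows x 12 assumed summary features
      y = rng.integers(0, 3, size=600)  # 0=flapping, 1=soaring, 2=sitting (toy labels)

      rf = RandomForestClassifier(n_estimators=200, random_state=0)
      knn = KNeighborsClassifier(n_neighbors=5)

      print("RF  cross-validated accuracy:", cross_val_score(rf, X, y, cv=5).mean())
      print("KNN cross-validated accuracy:", cross_val_score(knn, X, y, cv=5).mean())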

  1. Improved supervised classification of accelerometry data to distinguish behaviors of soaring birds

    PubMed Central

    Sur, Maitreyi; Suffredini, Tony; Wessells, Stephen M.; Bloom, Peter H.; Lanzone, Michael; Blackshire, Sheldon; Sridhar, Srisarguru; Katzner, Todd

    2017-01-01

    Soaring birds can balance the energetic costs of movement by switching between flapping, soaring and gliding flight. Accelerometers can allow quantification of flight behavior and thus a context to interpret these energetic costs. However, models to interpret accelerometry data are still being developed, rarely trained with supervised datasets, and difficult to apply. We collected accelerometry data at 140 Hz from a trained golden eagle (Aquila chrysaetos) whose flight we recorded with video that we used to characterize behavior. We applied two forms of supervised classification, random forest (RF) models and K-nearest neighbor (KNN) models. The KNN model was substantially easier to implement than the RF approach but both were highly accurate in classifying basic behaviors such as flapping (85.5% and 83.6% accurate, respectively), soaring (92.8% and 87.6%) and sitting (84.1% and 88.9%) with overall accuracies of 86.6% and 92.3% respectively. More detailed classification schemes, with specific behaviors such as banking and straight flights, were well classified only by the KNN model (91.24% accurate; RF = 61.64% accurate). The RF model maintained its classification accuracy for basic behaviors at sampling frequencies as low as 10 Hz; the KNN model did so at sampling frequencies as low as 20 Hz. Classification of accelerometer data collected from free-ranging birds demonstrated a strong dependence of predicted behavior on the type of classification model used. Our analyses demonstrate the consequence of different approaches to classification of accelerometry data, the potential to optimize classification algorithms with validated flight behaviors to improve classification accuracy, ideal sampling frequencies for different classification algorithms, and a number of ways to improve commonly used analytical techniques and best practices for classification of accelerometry data. PMID:28403159

  2. Improved supervised classification of accelerometry data to distinguish behaviors of soaring birds

    USGS Publications Warehouse

    Sur, Maitreyi; Suffredini, Tony; Wessells, Stephen M.; Bloom, Peter H.; Lanzone, Michael J.; Blackshire, Sheldon; Sridhar, Srisarguru; Katzner, Todd

    2017-01-01

    Soaring birds can balance the energetic costs of movement by switching between flapping, soaring and gliding flight. Accelerometers can allow quantification of flight behavior and thus a context to interpret these energetic costs. However, models to interpret accelerometry data are still being developed, rarely trained with supervised datasets, and difficult to apply. We collected accelerometry data at 140 Hz from a trained golden eagle (Aquila chrysaetos) whose flight we recorded with video that we used to characterize behavior. We applied two forms of supervised classification, random forest (RF) models and K-nearest neighbor (KNN) models. The KNN model was substantially easier to implement than the RF approach but both were highly accurate in classifying basic behaviors such as flapping (85.5% and 83.6% accurate, respectively), soaring (92.8% and 87.6%) and sitting (84.1% and 88.9%) with overall accuracies of 86.6% and 92.3% respectively. More detailed classification schemes, with specific behaviors such as banking and straight flights, were well classified only by the KNN model (91.24% accurate; RF = 61.64% accurate). The RF model maintained its classification accuracy for basic behaviors at sampling frequencies as low as 10 Hz; the KNN model did so at sampling frequencies as low as 20 Hz. Classification of accelerometer data collected from free-ranging birds demonstrated a strong dependence of predicted behavior on the type of classification model used. Our analyses demonstrate the consequence of different approaches to classification of accelerometry data, the potential to optimize classification algorithms with validated flight behaviors to improve classification accuracy, ideal sampling frequencies for different classification algorithms, and a number of ways to improve commonly used analytical techniques and best practices for classification of accelerometry data.

  3. Improving satellite-driven PM2.5 models with Moderate Resolution Imaging Spectroradiometer fire counts in the southeastern U.S.

    PubMed

    Hu, Xuefei; Waller, Lance A; Lyapustin, Alexei; Wang, Yujie; Liu, Yang

    2014-10-16

    Multiple studies have developed surface PM2.5 (particle size less than 2.5 µm in aerodynamic diameter) prediction models using satellite-derived aerosol optical depth as the primary predictor and meteorological and land use variables as secondary variables. To our knowledge, satellite-retrieved fire information has not been used for PM2.5 concentration prediction in statistical models. Fire data could be a useful predictor since fires are significant contributors of PM2.5. In this paper, we examined whether remotely sensed fire count data could improve PM2.5 prediction accuracy in the southeastern U.S. in a spatial statistical model setting. A sensitivity analysis showed that when the radius of the buffer zone centered at each PM2.5 monitoring site reached 75 km, fire count data generally have the greatest predictive power of PM2.5 across the models considered. Cross validation (CV) generated an R2 of 0.69, a mean prediction error of 2.75 µg/m3, and root-mean-square prediction errors (RMSPEs) of 4.29 µg/m3, indicating a good fit between the dependent and predictor variables. A comparison showed that the prediction accuracy was improved more substantially from the nonfire model to the fire model at sites with higher fire counts. With increasing fire counts, CV RMSPE decreased by values up to 1.5 µg/m3, exhibiting a maximum improvement of 13.4% in prediction accuracy. Fire count data were shown to have better performance in southern Georgia and in the spring season due to higher fire occurrence. Our findings indicate that fire count data provide a measurable improvement in PM2.5 concentration estimation, especially in areas and seasons prone to fire events.
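
    The cross-validated with/without-fire comparison described above can be sketched as follows. Synthetic stand-ins replace the real AOD, meteorology, and 75 km-buffer fire-count predictors; the variable names, coefficients, and linear model are invented for illustration and are not the paper's spatial statistical model.

      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.model_selection import cross_val_predict

      rng = np.random.default_rng(1)
      n = 400
      aod = rng.gamma(2.0, 0.3, n)                 # assumed aerosol optical depth
      met = rng.normal(size=(n, 3))                # e.g. temperature, RH, wind speed
      fires = rng.poisson(2.0, n)                  # fire counts in a 75 km buffer
      pm25 = (10 + 8 * aod + met @ np.array([1.0, -0.5, 0.3])
              + 1.2 * fires + rng.normal(0, 2, n))

      X_nofire = np.column_stack([aod, met])
      X_fire = np.column_stack([aod, met, fires])

      for name, X in (("no-fire", X_nofire), ("fire", X_fire)):
          pred = cross_val_predict(LinearRegression(), X, pm25, cv=10)
          rmspe = np.sqrt(np.mean((pred - pm25) ** 2))
          print(name, "CV RMSPE:", round(rmspe, 2))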

  4. Improving satellite-driven PM2.5 models with Moderate Resolution Imaging Spectroradiometer fire counts in the southeastern U.S

    PubMed Central

    Hu, Xuefei; Waller, Lance A.; Lyapustin, Alexei; Wang, Yujie; Liu, Yang

    2017-01-01

    Multiple studies have developed surface PM2.5 (particle size less than 2.5 µm in aerodynamic diameter) prediction models using satellite-derived aerosol optical depth as the primary predictor and meteorological and land use variables as secondary variables. To our knowledge, satellite-retrieved fire information has not been used for PM2.5 concentration prediction in statistical models. Fire data could be a useful predictor since fires are significant contributors of PM2.5. In this paper, we examined whether remotely sensed fire count data could improve PM2.5 prediction accuracy in the southeastern U.S. in a spatial statistical model setting. A sensitivity analysis showed that when the radius of the buffer zone centered at each PM2.5 monitoring site reached 75 km, fire count data generally have the greatest predictive power of PM2.5 across the models considered. Cross validation (CV) generated an R2 of 0.69, a mean prediction error of 2.75 µg/m3, and root-mean-square prediction errors (RMSPEs) of 4.29 µg/m3, indicating a good fit between the dependent and predictor variables. A comparison showed that the prediction accuracy was improved more substantially from the nonfire model to the fire model at sites with higher fire counts. With increasing fire counts, CV RMSPE decreased by values up to 1.5 µg/m3, exhibiting a maximum improvement of 13.4% in prediction accuracy. Fire count data were shown to have better performance in southern Georgia and in the spring season due to higher fire occurrence. Our findings indicate that fire count data provide a measurable improvement in PM2.5 concentration estimation, especially in areas and seasons prone to fire events. PMID:28967648

  5. Electromagnetic bone segment tracking to control femoral derotation osteotomy-A saw bone study.

    PubMed

    Geisbüsch, Andreas; Auer, Christoph; Dickhaus, Hartmut; Niklasch, Mirjam; Dreher, Thomas

    2017-05-01

    Correction of rotational gait abnormalities is common practice in pediatric orthopaedics, such as in children with cerebral palsy. Femoral derotation osteotomy is established as a standard treatment; however, different authors have reported substantial variability in outcomes following surgery, with patients showing over- or under-correction. Only 60% of the applied correction is observed postoperatively, which strongly suggests intraoperative measurement error or loss of correction during surgery. This study was conducted to verify the impact of error sources in the derotation procedure and assess the utility of a newly developed, instrumented measurement system based on electromagnetic tracking aiming to improve the accuracy of rotational correction. A supracondylar derotation osteotomy was performed in 21 artificial femur sawbones and the amount of derotation was quantified during the procedure by the tracking system and by nine raters using a conventional goniometer. Accuracy of both measurement devices was determined by repeated computer tomography scans. Average derotation measured by the tracking system differed by 0.1° ± 1.6° from the defined reference measurement. In contrast, a high inter-rater variability was found in goniometric measurements (range: 10.8° ± 6.9°, mean interquartile distance: 6.6°). During fixation of the osteosynthesis, the tracking system reliably detected unintentional manipulation of the correction angle with a mean absolute change of 4.0° ± 3.2°. Our findings show that conventional control of femoral derotation is subject to relevant observer bias whereas instrumental tracking yields accuracy better than ±2°. The tracking system is a step towards more reliable and safe implementation of femoral correction, promising substantial improvements in patient safety in the future. © 2016 Orthopaedic Research Society. Published by Wiley Periodicals, Inc. J Orthop Res 35:1106-1112, 2017.

  6. Resolving prokaryotic taxonomy without rRNA: longer oligonucleotide word lengths improve genome and metagenome taxonomic classification.

    PubMed

    Alsop, Eric B; Raymond, Jason

    2013-01-01

    Oligonucleotide signatures, especially tetranucleotide signatures, have been used as a method for homology binning by exploiting an organism's inherent biases towards the use of specific oligonucleotide words. Tetranucleotide signatures have been especially useful in environmental metagenomics samples, as many of these samples contain organisms from poorly classified phyla which cannot be easily identified using traditional homology methods, including NCBI BLAST. This study examines oligonucleotide signatures across 1,424 completed genomes from across the tree of life, substantially expanding upon previous work. A comprehensive analysis of mononucleotide through nonanucleotide word lengths suggests that longer word lengths substantially improve the classification of DNA fragments across a range of sizes of relevance to high throughput sequencing. We find that, at present, heptanucleotide signatures represent an optimal balance between prediction accuracy and computational time for resolving taxonomy using both genomic and metagenomic fragments. We directly compare the ability of tetranucleotide and heptanucleotide word lengths (tetranucleotide signatures are the current standard for oligonucleotide word usage analyses) for taxonomic binning of metagenome reads. We present evidence that heptanucleotide word lengths consistently provide more taxonomic resolving power, particularly in distinguishing between closely related organisms that are often present in metagenomic samples. This implies that longer oligonucleotide word lengths should replace tetranucleotide signatures for most analyses. Finally, we show that the application of longer word lengths to metagenomic datasets leads to more accurate taxonomic binning of DNA scaffolds and has the potential to substantially improve taxonomic assignment and assembly of metagenomic data.
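
    A minimal sketch of an oligonucleotide-signature representation, assuming a simple normalized k-mer frequency vector with k = 7 and cosine similarity as the comparison score; the paper's actual binning pipeline is more involved, so this only illustrates the underlying idea.

      from itertools import product
      import numpy as np

      def kmer_spectrum(seq: str, k: int = 7) -> np.ndarray:
          """Normalized frequency vector over all 4**k DNA words (a 'signature')."""
          index = {"".join(w): i for i, w in enumerate(product("ACGT", repeat=k))}
          counts = np.zeros(len(index))
          for i in range(len(seq) - k + 1):
              word = seq[i:i + k]
              if word in index:            # skips words containing N, etc.
                  counts[index[word]] += 1
          total = counts.sum()
          return counts / total if total else counts

      # Cosine similarity between two fragments' signatures as a simple binning score.
      a = kmer_spectrum("ACGTACGTGGCTAACGTTAGC" * 30)
      b = kmer_spectrum("ACGTACGTGGCTAACGTTAGC" * 30 + "TTTT")
      print(float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b))))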

  7. Efficient alignment-free DNA barcode analytics

    PubMed Central

    Kuksa, Pavel; Pavlovic, Vladimir

    2009-01-01

    Background In this work we consider barcode DNA analysis problems and address them using alternative, alignment-free methods and representations which model sequences as collections of short sequence fragments (features). The methods use fixed-length representations (spectrum) for barcode sequences to measure similarities or dissimilarities between sequences coming from the same or different species. The spectrum-based representation not only allows for accurate and computationally efficient species classification, but also opens possibility for accurate clustering analysis of putative species barcodes and identification of critical within-barcode loci distinguishing barcodes of different sample groups. Results New alignment-free methods provide highly accurate and fast DNA barcode-based identification and classification of species with substantial improvements in accuracy and speed over state-of-the-art barcode analysis methods. We evaluate our methods on problems of species classification and identification using barcodes, important and relevant analytical tasks in many practical applications (adverse species movement monitoring, sampling surveys for unknown or pathogenic species identification, biodiversity assessment, etc.) On several benchmark barcode datasets, including ACG, Astraptes, Hesperiidae, Fish larvae, and Birds of North America, proposed alignment-free methods considerably improve prediction accuracy compared to prior results. We also observe significant running time improvements over the state-of-the-art methods. Conclusion Our results show that newly developed alignment-free methods for DNA barcoding can efficiently and with high accuracy identify specimens by examining only few barcode features, resulting in increased scalability and interpretability of current computational approaches to barcoding. PMID:19900305

  8. An inter-observer agreement study of autofluorescence endoscopy in Barrett's esophagus among expert and non-expert endoscopists.

    PubMed

    Mannath, J; Subramanian, V; Telakis, E; Lau, K; Ramappa, V; Wireko, M; Kaye, P V; Ragunath, K

    2013-02-01

    Autofluorescence imaging (AFI), which is a "red flag" technique during Barrett's surveillance, is associated with significant false positive results. The aim of this study was to assess the inter-observer agreement (IOA) in identifying AFI-positive lesions and to assess the overall accuracy of AFI. Anonymized AFI and high resolution white light (HRE) images were prospectively collected. The AFI images were presented in random order, followed by corresponding AFI + HRE images. Three AFI experts and 3 AFI non-experts scored images after a training presentation. The IOA was calculated using kappa and accuracy was calculated with histology as gold standard. Seventy-four sets of images were prospectively collected from 63 patients (48 males, mean age 69 years). The IOA for number of AF positive lesions was fair when AFI images were presented. This improved to moderate with corresponding AFI and HRE images [experts 0.57 (0.44-0.70), non-experts 0.47 (0.35-0.62)]. The IOA for the site of AF lesion was moderate for experts and fair for non-experts using AF images, which improved to substantial for experts [κ = 0.62 (0.50-0.72)] but remained at fair for non-experts [κ = 0.28 (0.18-0.37)] with AFI + HRE. Among experts, the accuracy of identifying dysplasia was 0.76 (0.7-0.81) using AFI images and 0.85 (0.79-0.89) using AFI + HRE images. The accuracy was 0.69 (0.62-0.74) with AFI images alone and 0.75 (0.70-0.80) using AFI + HRE among non-experts. The IOA for AF positive lesions is fair to moderate using AFI images which improved with addition of HRE. The overall accuracy of identifying dysplasia was modest, and was better when AFI and HRE images were combined.
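
    For readers unfamiliar with the statistics reported above, the sketch below computes Cohen's kappa between two raters and sensitivity/specificity against histology with scikit-learn. The ratings are invented toy values, not study data, and agreement among more than two raters (as in the study) would use a multi-rater kappa instead.

      from sklearn.metrics import cohen_kappa_score, confusion_matrix

      # Toy calls: 1 = dysplasia called, 0 = not called; histology as gold standard.
      rater_a = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
      rater_b = [1, 0, 0, 1, 0, 1, 1, 0, 1, 0]
      histology = [1, 0, 1, 1, 0, 0, 1, 0, 0, 1]

      print("inter-observer kappa:", cohen_kappa_score(rater_a, rater_b))

      tn, fp, fn, tp = confusion_matrix(histology, rater_a).ravel()
      print("sensitivity:", tp / (tp + fn), "specificity:", tn / (tn + fp))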

  9. The Influence of Delaying Judgments of Learning on Metacognitive Accuracy: A Meta-Analytic Review

    ERIC Educational Resources Information Center

    Rhodes, Matthew G.; Tauber, Sarah K.

    2011-01-01

    Many studies have examined the accuracy of predictions of future memory performance solicited through judgments of learning (JOLs). Among the most robust findings in this literature is that delaying predictions serves to substantially increase the relative accuracy of JOLs compared with soliciting JOLs immediately after study, a finding termed the…

  10. Using Relational Reasoning Strategies to Help Improve Clinical Reasoning Practice.

    PubMed

    Dumas, Denis; Torre, Dario M; Durning, Steven J

    2018-05-01

    Clinical reasoning-the steps up to and including establishing a diagnosis and/or therapy-is a fundamentally important mental process for physicians. Unfortunately, mounting evidence suggests that errors in clinical reasoning lead to substantial problems for medical professionals and patients alike, including suboptimal care, malpractice claims, and rising health care costs. For this reason, cognitive strategies by which clinical reasoning may be improved-and that many expert clinicians are already using-are highly relevant for all medical professionals, educators, and learners.In this Perspective, the authors introduce one group of cognitive strategies-termed relational reasoning strategies-that have been empirically shown, through limited educational and psychological research, to improve the accuracy of learners' reasoning both within and outside of the medical disciplines. The authors contend that relational reasoning strategies may help clinicians to be metacognitive about their own clinical reasoning; such strategies may also be particularly well suited for explicitly organizing clinical reasoning instruction for learners. Because the particular curricular efforts that may improve the relational reasoning of medical students are not known at this point, the authors describe the nature of previous research on relational reasoning strategies to encourage the future design, implementation, and evaluation of instructional interventions for relational reasoning within the medical education literature. The authors also call for continued research on using relational reasoning strategies and their role in clinical practice and medical education, with the long-term goal of improving diagnostic accuracy.

  11. Study on real-time force feedback for a master-slave interventional surgical robotic system.

    PubMed

    Guo, Shuxiang; Wang, Yuan; Xiao, Nan; Li, Youxiang; Jiang, Yuhua

    2018-04-13

    In robot-assisted catheterization, haptic feedback is important, but is currently lacking. In addition, conventional interventional surgical robotic systems typically employ a master-slave architecture with an open-loop force feedback, which results in inaccurate control. We develop herein a novel real-time master-slave (RTMS) interventional surgical robotic system with a closed-loop force feedback that allows a surgeon to sense the true force during remote operation, provide adequate haptic feedback, and improve control accuracy in robot-assisted catheterization. As part of this system, we also design a unique master control handle that measures the true force felt by a surgeon, providing the basis for the closed-loop control of the entire system. We use theoretical and empirical methods to demonstrate that the proposed RTMS system provides a surgeon (using the master control handle) with a more accurate and realistic force sensation, which subsequently improves the precision of the master-slave manipulation. The experimental results show a substantial increase in the control accuracy of the force feedback and an increase in operational efficiency during surgery.

  12. Improving the accuracy of sediment-associated constituent concentrations in whole storm water samples by wet-sieving

    USGS Publications Warehouse

    Selbig, W.R.; Bannerman, R.; Bowman, G.

    2007-01-01

    Sand-sized particles (>63 µm) in whole storm water samples collected from urban runoff have the potential to produce data with substantial bias and/or poor precision both during sample splitting and laboratory analysis. New techniques were evaluated in an effort to overcome some of the limitations associated with sample splitting and analyzing whole storm water samples containing sand-sized particles. Wet-sieving separates sand-sized particles from a whole storm water sample. Once separated, both the sieved solids and the remaining aqueous (water suspension of particles less than 63 µm) samples were analyzed for total recoverable metals using a modification of USEPA Method 200.7. The modified version digests the entire sample, rather than an aliquot, of the sample. Using a total recoverable acid digestion on the entire contents of the sieved solid and aqueous samples improved the accuracy of the derived sediment-associated constituent concentrations. Concentration values of sieved solid and aqueous samples can later be summed to determine an event mean concentration. © ASA, CSSA, SSSA.
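
    As a worked example of recombining the two fractions into an event mean concentration, consider the hypothetical numbers below; the volume, mass, and concentration values are invented for illustration and are not from the study.

      # Reassembling a whole-sample event mean concentration (EMC) after wet-sieving:
      # metal mass digested from the sieved sand-sized solids is added to the mass
      # carried in the <63 um aqueous fraction, then divided by sample volume.
      sample_volume_l = 2.0          # whole storm water sample volume (assumed)
      solid_metal_ug = 150.0         # total recoverable metal from sieved solids
      aqueous_conc_ug_per_l = 40.0   # concentration measured in aqueous fraction

      emc_ug_per_l = (solid_metal_ug
                      + aqueous_conc_ug_per_l * sample_volume_l) / sample_volume_l
      print(f"event mean concentration: {emc_ug_per_l:.1f} ug/L")  # -> 115.0 ug/L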

  13. The PEREGRINE™ program: using physics and computer simulation to improve radiation therapy for cancer

    NASA Astrophysics Data System (ADS)

    Hartmann Siantar, Christine L.; Moses, Edward I.

    1998-11-01

    When using radiation to treat cancer, doctors rely on physics and computer technology to predict where the radiation dose will be deposited in the patient. The accuracy of computerized treatment planning plays a critical role in the ultimate success or failure of the radiation treatment. Inaccurate dose calculations can result in either insufficient radiation for cure, or excessive radiation to nearby healthy tissue, which can reduce the patient's quality of life. This paper describes how advanced physics, computer, and engineering techniques originally developed for nuclear weapons and high-energy physics research are being used to predict radiation dose in cancer patients. Results for radiation therapy planning achieved in the Lawrence Livermore National Laboratory (LLNL) PEREGRINE program show that these tools can give doctors new insights into their patients' treatments by providing substantially more accurate dose distributions than have been available in the past. It is believed that greater accuracy in radiation therapy treatment planning will save lives by improving doctors' ability to target radiation to the tumour and reduce suffering by reducing the incidence of radiation-induced complications.

  14. Coupled forward-backward trajectory approach for nonequilibrium electron-ion dynamics

    NASA Astrophysics Data System (ADS)

    Sato, Shunsuke A.; Kelly, Aaron; Rubio, Angel

    2018-04-01

    We introduce a simple ansatz for the wave function of a many-body system based on coupled forward and backward propagating semiclassical trajectories. This method is primarily aimed at, but not limited to, treating nonequilibrium dynamics in electron-phonon systems. The time evolution of the system is obtained from the Euler-Lagrange variational principle, and we show that this ansatz yields Ehrenfest mean-field theory in the limit that the forward and backward trajectories are orthogonal, and in the limit that they coalesce. We investigate accuracy and performance of this method by simulating electronic relaxation in the spin-boson model and the Holstein model. Although this method involves only pairs of semiclassical trajectories, it shows a substantial improvement over mean-field theory, capturing quantum coherence of nuclear dynamics as well as electron-nuclear correlations. This improvement is particularly evident in nonadiabatic systems, where the accuracy of this coupled trajectory method extends well beyond the perturbative electron-phonon coupling regime. This approach thus provides an attractive route forward to the ab initio description of relaxation processes, such as thermalization, in condensed phase systems.

  15. Hot Dust in Panchromatic SED Fitting: Identification of Active Galactic Nuclei and Improved Galaxy Properties

    NASA Astrophysics Data System (ADS)

    Leja, Joel; Johnson, Benjamin D.; Conroy, Charlie; van Dokkum, Pieter

    2018-02-01

    Forward modeling of the full galaxy SED is a powerful technique, providing self-consistent constraints on stellar ages, dust properties, and metallicities. However, the accuracy of these results is contingent on the accuracy of the model. One significant source of uncertainty is the contribution of obscured AGN, as they are relatively common and can produce substantial mid-IR (MIR) emission. Here we include emission from dusty AGN tori in the Prospector SED-fitting framework, and fit the UV–IR broadband photometry of 129 nearby galaxies. We find that 10% of the fitted galaxies host an AGN contributing >10% of the observed galaxy MIR luminosity. We demonstrate the necessity of this AGN component in the following ways. First, we compare observed spectral features to spectral features predicted from our model fit to the photometry. We find that the AGN component greatly improves predictions for observed Hα and Hβ luminosities, as well as mid-infrared Akari and Spitzer/IRS spectra. Second, we show that inclusion of the AGN component changes stellar ages and SFRs by up to a factor of 10, and dust attenuations by up to a factor of 2.5. Finally, we show that the strength of our model AGN component correlates with independent AGN indicators, suggesting that these galaxies truly host AGN. Notably, only 46% of the SED-detected AGN would be detected with a simple MIR color selection. Based on these results, we conclude that SED models which fit MIR data without AGN components are vulnerable to substantial bias in their derived parameters.

  16. Assessing hippocampal development and language in early childhood: Evidence from a new application of the Automatic Segmentation Adapter Tool.

    PubMed

    Lee, Joshua K; Nordahl, Christine W; Amaral, David G; Lee, Aaron; Solomon, Marjorie; Ghetti, Simona

    2015-11-01

    Volumetric assessments of the hippocampus and other brain structures during childhood provide useful indices of brain development and correlates of cognitive functioning in typically and atypically developing children. Automated methods such as FreeSurfer promise efficient and replicable segmentation, but may include errors which are avoided by trained manual tracers. A recently devised automated correction tool that uses a machine learning algorithm to remove systematic errors, the Automatic Segmentation Adapter Tool (ASAT), was capable of substantially improving the accuracy of FreeSurfer segmentations in an adult sample [Wang et al., 2011], but the utility of ASAT has not been examined in pediatric samples. In Study 1, the validity of FreeSurfer and ASAT corrected hippocampal segmentations was examined in 20 typically developing children and 20 children with autism spectrum disorder aged 2 and 3 years. We showed that while neither FreeSurfer nor ASAT accuracy differed by disorder or age, the accuracy of ASAT corrected segmentations was substantially better than FreeSurfer segmentations in every case, using as few as 10 training examples. In Study 2, we applied ASAT to 89 typically developing children aged 2 to 4 years to examine relations between hippocampal volume, age, sex, and expressive language. Girls had smaller hippocampi overall, and in the left hippocampus this difference was larger in older than younger girls. Expressive language ability was greater in older children, and this difference was larger in those with larger hippocampi, bilaterally. Overall, this research shows that ASAT is highly reliable and useful for examinations relating behavior to hippocampal structure. © 2015 Wiley Periodicals, Inc.

  17. The benefits of improved national elevation data

    USGS Publications Warehouse

    Snyder, Gregory I.

    2013-01-01

    This article describes how the National Enhanced Elevation Assessment (NEEA) has identified substantial benefits that could come about if improved elevation data were publicly available for current and emerging applications and business uses such as renewable energy, precision agriculture, and intelligent vehicle navigation and safety. In order to support these diverse needs, new national elevation data with higher resolution and accuracy are needed. The 3D Elevation Program (3DEP) initiative was developed to meet the majority of these needs and it is expected that 3DEP will result in new, unimagined information services that would result in job growth and the transformation of the geospatial community. Private-sector data collection companies are continuously evolving sensors and positioning technologies that are needed to collect improved elevation data. An initiative of this scope might also provide an opportunity for companies to improve their capabilities and produce even higher data quality and consistency at a pace that might not have otherwise occurred.

  18. Basic research for the geodynamics program

    NASA Technical Reports Server (NTRS)

    Mueller, Ivan I.

    1988-01-01

    Additional results are presented concerning a study that considers improvements over present Earth Rotation Parameter (ERP) determination methods by directly combining observations from various space geodetic systems in one adjustment. Earlier results are extended, showing that in addition to slight improvements in accuracy, substantial (a factor of three or more) improvements in precision and significant reductions in correlations between various parameters can be obtained (by combining Lunar Laser Ranging - LLR, Satellite Laser Ranging - SLR to Lageos, and Very Long Baseline Interferometry - VLBI data in one adjustment) as compared to results from individual systems. Smaller improvements are also seen over the weighted means of the individual system results. Although data transmission would not be significantly reduced, negligible additional computer time would be required if (standardized) normal equations were available from individual solutions. Suggestions for future work and implications for the new International Earth Rotation Service (IERS) are also presented.

  19. Checking the predictive accuracy of basic symptoms against ultra high-risk criteria and testing of a multivariable prediction model: Evidence from a prospective three-year observational study of persons at clinical high-risk for psychosis.

    PubMed

    Hengartner, M P; Heekeren, K; Dvorsky, D; Walitza, S; Rössler, W; Theodoridou, A

    2017-09-01

    The aim of this study was to critically examine the prognostic validity of various clinical high-risk (CHR) criteria alone and in combination with additional clinical characteristics. A total of 188 CHR positive persons from the region of Zurich, Switzerland (mean age 20.5 years; 60.2% male), meeting ultra high-risk (UHR) and/or basic symptoms (BS) criteria, were followed over three years. The test battery included the Structured Interview for Prodromal Syndromes (SIPS), verbal IQ and many other screening tools. Conversion to psychosis was defined according to ICD-10 criteria for schizophrenia (F20) or brief psychotic disorder (F23). Altogether n=24 persons developed manifest psychosis within three years and according to Kaplan-Meier survival analysis, the projected conversion rate was 17.5%. The predictive accuracy of UHR was statistically significant but poor (area under the curve [AUC]=0.65, P<.05), whereas BS did not predict psychosis beyond mere chance (AUC=0.52, P=.730). Sensitivity and specificity were 0.83 and 0.47 for UHR, and 0.96 and 0.09 for BS. UHR plus BS achieved an AUC=0.66, with sensitivity and specificity of 0.75 and 0.56. In comparison, baseline antipsychotic medication yielded a predictive accuracy of AUC=0.62 (sensitivity=0.42; specificity=0.82). A multivariable prediction model comprising continuous measures of positive symptoms and verbal IQ achieved a substantially improved prognostic accuracy (AUC=0.85; sensitivity=0.86; specificity=0.85; positive predictive value=0.54; negative predictive value=0.97). We showed that BS have no predictive accuracy beyond chance, while UHR criteria poorly predict conversion to psychosis. Combining BS with UHR criteria did not improve the predictive accuracy of UHR alone. In contrast, dimensional measures of both positive symptoms and verbal IQ showed excellent prognostic validity. A critical re-thinking of binary at-risk criteria is necessary in order to improve the prognosis of psychotic disorders. Copyright © 2017 Elsevier Masson SAS. All rights reserved.
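
    A minimal sketch of the kind of two-predictor logistic model and AUC evaluation described above; the simulated effect sizes, prevalence, and variable names are assumptions for illustration, not the study's estimates.

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_auc_score

      rng = np.random.default_rng(2)
      n = 188
      pos_symptoms = rng.normal(size=n)     # stand-in for positive-symptom severity
      verbal_iq = rng.normal(size=n)        # stand-in for standardized verbal IQ
      logit = -2.0 + 1.2 * pos_symptoms - 0.8 * verbal_iq   # assumed effects
      converted = rng.binomial(1, 1 / (1 + np.exp(-logit)))

      X = np.column_stack([pos_symptoms, verbal_iq])
      model = LogisticRegression().fit(X, converted)
      print("AUC:", roc_auc_score(converted, model.predict_proba(X)[:, 1]))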

  20. Direct Detection Doppler Lidar for Spaceborne Wind Measurement

    NASA Technical Reports Server (NTRS)

    Korb, C. Laurence; Flesia, Cristina

    1999-01-01

    Aerosol and molecular based versions of the double-edge technique can be used for direct detection Doppler lidar spaceborne wind measurement. The edge technique utilizes the edge of a high spectral resolution filter for high accuracy wind measurement using direct detection lidar. The signal is split between an edge filter channel and a broadband energy monitor channel. The energy monitor channel is used for signal normalization. The edge measurement is made as a differential frequency measurement between the outgoing laser signal and the atmospheric backscattered return for each pulse. As a result the measurement is insensitive to laser and edge filter frequency jitter and drift at a level less than a few parts in 10(exp 10). We have developed double edge versions of the edge technique for aerosol and molecular-based lidar measurement of the wind. Aerosol-based wind measurements have been made at Goddard Space Flight Center and molecular-based wind measurements at the University of Geneva. We have demonstrated atmospheric measurements using these techniques for altitudes from 1 to more than 10 km. Measurement accuracies of better than 1.25 m/s have been obtained with integration times from 5 to 30 seconds. The measurements can be scaled to space and agree, within a factor of two, with satellite-based simulations of performance based on Poisson statistics. The theory of the double edge aerosol technique is described by a generalized formulation which substantially extends the capabilities of the edge technique. It uses two edges with opposite slopes located about the laser frequency at approximately the half-width of each edge filter. This doubles the signal change for a given Doppler shift and yields a factor of 1.6 improvement in the measurement accuracy compared to the single edge technique. The use of two high resolution edge filters substantially reduces the effects of Rayleigh scattering on the measurement, by as much as an order of magnitude, and allows the signal to noise ratio to be substantially improved in areas of low aerosol backscatter. We describe a method that allows the Rayleigh and aerosol components of the signal to be independently determined using the two edge channels and an energy monitor channel. The effects of Rayleigh scattering may then be subtracted from the measurement and we show that the correction process does not significantly increase the measurement noise for Rayleigh to aerosol ratios up to 10. We show that for small Doppler shifts a measurement accuracy of 0.4 m/s can be obtained for 5000 detected photons, 1.2 m/s for 1000 detected photons, and 3.7 m/s for 50 detected photons for a Rayleigh to aerosol ratio of 5. Methods for increasing the dynamic range of the aerosol-based system to more than +/- 100 m/s are given.

  1. A Low Complexity System Based on Multiple Weighted Decision Trees for Indoor Localization

    PubMed Central

    Sánchez-Rodríguez, David; Hernández-Morera, Pablo; Quinteiro, José Ma.; Alonso-González, Itziar

    2015-01-01

    Indoor position estimation has become an attractive research topic due to growing interest in location-aware services. Nevertheless, satisfying solutions have not been found with the considerations of both accuracy and system complexity. From the perspective of lightweight mobile devices, these are extremely important characteristics, because both the processor power and energy availability are limited. Hence, an indoor localization system with high computational complexity can cause complete battery drain within a few hours. In our research, we use a data mining technique named boosting to develop a localization system based on multiple weighted decision trees to predict the device location, since it has high accuracy and low computational complexity. The localization system is built using a dataset from sensor fusion, which combines the strength of radio signals from different wireless local area network access points and device orientation information from a digital compass built into the mobile device, so that extra sensors are unnecessary. Experimental results indicate that the proposed system substantially reduces computational complexity compared with the widely used traditional fingerprinting methods, while also achieving better accuracy. PMID:26110413
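
    The boosting approach described above can be sketched with scikit-learn's AdaBoost over shallow decision trees. The synthetic fingerprint data (RSSI from six access points plus a compass heading, labelled by room) and all dimensions are assumptions for illustration, not the paper's dataset or exact algorithm.

      import numpy as np
      from sklearn.ensemble import AdaBoostClassifier
      from sklearn.tree import DecisionTreeClassifier
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(3)
      X = np.column_stack([rng.normal(-60, 8, size=(300, 6)),    # RSSI in dBm
                           rng.uniform(0, 360, size=(300, 1))])  # heading, degrees
      y = rng.integers(0, 4, size=300)                           # room label

      # Boosting combines many shallow ("weak") trees into a weighted ensemble
      # (parameter name `estimator` follows the scikit-learn >= 1.2 API).
      clf = AdaBoostClassifier(estimator=DecisionTreeClassifier(max_depth=2),
                               n_estimators=50, random_state=0)
      print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())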

  2. Multistrip Western blotting: a tool for comparative quantitative analysis of multiple proteins.

    PubMed

    Aksamitiene, Edita; Hoek, Jan B; Kiyatkin, Anatoly

    2015-01-01

    The qualitative and quantitative measurements of protein abundance and modification states are essential in understanding their functions in diverse cellular processes. Typical Western blotting, though sensitive, is prone to produce substantial errors and is not readily adapted to high-throughput technologies. Multistrip Western blotting is a modified immunoblotting procedure based on simultaneous electrophoretic transfer of proteins from multiple strips of polyacrylamide gels to a single membrane sheet. In comparison with the conventional technique, Multistrip Western blotting increases data output per single blotting cycle up to tenfold; allows concurrent measurement of the expression of up to nine different total and/or posttranslationally modified proteins from the same sample loading; and substantially improves the data accuracy by reducing immunoblotting-derived signal errors. This approach enables statistically reliable comparison of different or repeated sets of data and therefore is advantageous to apply in biomedical diagnostics, systems biology, and cell signaling research.

  3. Incorporating spatial context into statistical classification of multidimensional image data

    NASA Technical Reports Server (NTRS)

    Bauer, M. E. (Principal Investigator); Tilton, J. C.; Swain, P. H.

    1981-01-01

    Compound decision theory is employed to develop a general statistical model for classifying image data using spatial context. The classification algorithm developed from this model exploits the tendency of certain ground-cover classes to occur more frequently in some spatial contexts than in others. A key input to this contextual classifier is a quantitative characterization of this tendency: the context function. Several methods for estimating the context function are explored, and two complementary methods are recommended. The contextual classifier is shown to produce substantial improvements in classification accuracy compared to the accuracy produced by a non-contextual uniform-priors maximum likelihood classifier when these methods of estimating the context function are used. An approximate algorithm, which cuts computational requirements by over one-half, is presented. The search for an optimal implementation is furthered by an exploration of the relative merits of using spectral classes or information classes for classification and/or context function estimation.
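
    A toy numeric illustration of the contextual decision rule, assuming the classifier scores each pixel by class likelihood multiplied by a context function evaluated at the dominant neighbouring label; all numbers are invented, and the paper's actual context-function estimation is considerably richer.

      import numpy as np

      likelihood = np.array([[0.60, 0.40],   # p(x | class) for two pixels, two classes
                             [0.45, 0.55]])
      context = np.array([[0.8, 0.2],        # g(class | dominant neighbour class)
                          [0.3, 0.7]])
      neighbour_class = [0, 0]               # dominant label in each pixel's window

      for px in range(2):
          post = likelihood[px] * context[neighbour_class[px]]
          # Pixel 1's 0.45/0.55 call is flipped to class 0 by its neighbourhood.
          print("pixel", px, "-> class", int(np.argmax(post)))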

  4. Social Skills Training for Adolescents With Intellectual Disabilities: A School-Based Evaluation.

    PubMed

    O'Handley, Roderick D; Ford, W Blake; Radley, Keith C; Helbig, Kate A; Wimberly, Joy K

    2016-07-01

    Individuals with intellectual disabilities (ID) often demonstrate impairments in social functioning, with deficits becoming more apparent during adolescence. This study evaluated the effects of the Superheroes Social Skills program, a program that combines behavioral skills training and video modeling to teach target social skills, on accurate demonstration of three target social skills in adolescents with ID. Skills taught in the present study include Expressing Wants and Needs, Conversation, and Turn Taking. Four adolescents with ID participated in a 3-week social skills intervention, with the intervention occurring twice per week. A multiple baseline across skills design was used to determine the effect of the intervention on social skill accuracy in both a training and generalization setting. All participants demonstrated substantial improvements in skill accuracy in both settings, with teacher ratings of social functioning further suggesting generalization of social skills to nontraining settings. © The Author(s) 2016.

  5. A new item response theory model to adjust data allowing examinee choice

    PubMed Central

    Costa, Marcelo Azevedo; Braga Oliveira, Rivert Paulo

    2018-01-01

    In a typical questionnaire testing situation, examinees are not allowed to choose which items they answer because of a technical issue in obtaining satisfactory statistical estimates of examinee ability and item difficulty. This paper introduces a new item response theory (IRT) model that incorporates information from a novel representation of questionnaire data using network analysis. Three scenarios in which examinees select a subset of items were simulated. In the first scenario, the assumptions required to apply the standard Rasch model are met, thus establishing a reference for parameter accuracy. The second and third scenarios include five increasing levels of violating those assumptions. The results show substantial improvements over the standard model in item parameter recovery. Furthermore, the accuracy was closer to the reference in almost every evaluated scenario. To the best of our knowledge, this is the first proposal to obtain satisfactory IRT statistical estimates in the last two scenarios. PMID:29389996
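
    For reference, the standard Rasch model that the proposed IRT model extends gives the probability of a correct response as a logistic function of ability minus difficulty. The sketch below evaluates a toy response pattern under assumed parameter values; the paper's network-based extension itself is not shown.

      import numpy as np

      def rasch_p(theta, b):
          """Rasch model: P(correct) given ability theta and item difficulty b."""
          return 1.0 / (1.0 + np.exp(-(theta - b)))

      responses = np.array([1, 1, 0])         # toy answers to three items
      difficulty = np.array([-0.5, 0.0, 1.0]) # assumed item difficulties
      theta = 0.4                             # assumed examinee ability

      p = rasch_p(theta, difficulty)
      loglik = np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))
      print("log-likelihood of the response pattern:", loglik)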

  6. High resolution microendoscopy for classification of colorectal polyps.

    PubMed

    Chang, S S; Shukla, R; Polydorides, A D; Vila, P M; Lee, M; Han, H; Kedia, P; Lewis, J; Gonzalez, S; Kim, M K; Harpaz, N; Godbold, J; Richards-Kortum, R; Anandasabapathy, S

    2013-07-01

    It can be difficult to distinguish adenomas from benign polyps during routine colonoscopy. High resolution microendoscopy (HRME) is a novel method for imaging colorectal mucosa with subcellular detail. HRME criteria for the classification of colorectal neoplasia have not been previously described. Study goals were to develop criteria to characterize HRME images of colorectal mucosa (normal, hyperplastic polyps, adenomas, cancer) and to determine the accuracy and interobserver variability for the discrimination of neoplastic from non-neoplastic polyps when these criteria were applied by novice and expert microendoscopists. Two expert pathologists created consensus HRME image criteria using images from 68 patients with polyps who had undergone colonoscopy plus HRME. Using these criteria, HRME expert and novice microendoscopists were shown a set of training images and then tested to determine accuracy and interobserver variability. Expert microendoscopists identified neoplasia with sensitivity, specificity, and accuracy of 67 % (95 % confidence interval [CI] 58 % - 75 %), 97 % (94 % - 100 %), and 87 %, respectively. Nonexperts achieved sensitivity, specificity, and accuracy of 73 % (66 % - 80 %), 91 % (80 % - 100 %), and 85 %, respectively. Overall, neoplasia were identified with sensitivity 70 % (65 % - 76 %), specificity 94 % (87 % - 100 %), and accuracy 85 %. Kappa values were: experts 0.86; nonexperts 0.72; and overall 0.78. Using the new criteria, observers achieved high specificity and substantial interobserver agreement for distinguishing benign polyps from neoplasia. Increased expertise in HRME imaging improves accuracy. This low-cost microendoscopic platform may be an alternative to confocal microendoscopy in lower-resource or community-based settings.

  7. Error Rates in Users of Automatic Face Recognition Software

    PubMed Central

    White, David; Dunn, James D.; Schmid, Alexandra C.; Kemp, Richard I.

    2015-01-01

    In recent years, wide deployment of automatic face recognition systems has been accompanied by substantial gains in algorithm performance. However, benchmarking tests designed to evaluate these systems do not account for the errors of human operators, who are often an integral part of face recognition solutions in forensic and security settings. This causes a mismatch between evaluation tests and operational accuracy. We address this by measuring user performance in a face recognition system used to screen passport applications for identity fraud. Experiment 1 measured target detection accuracy in algorithm-generated ‘candidate lists’ selected from a large database of passport images. Accuracy was notably poorer than in previous studies of unfamiliar face matching: participants made over 50% errors for adult target faces, and over 60% when matching images of children. Experiment 2 then compared performance of student participants to trained passport officers, who use the system in their daily work, and found equivalent performance in these groups. Encouragingly, a group of highly trained and experienced “facial examiners” outperformed these groups by 20 percentage points. We conclude that human performance curtails accuracy of face recognition systems, potentially reducing benchmark estimates by 50% in operational settings. Mere practice does not attenuate these limits, but superior performance of trained examiners suggests that recruitment and selection of human operators, in combination with effective training and mentorship, can improve the operational accuracy of face recognition systems. PMID:26465631

  8. Algorithms for selecting informative marker panels for population assignment.

    PubMed

    Rosenberg, Noah A

    2005-11-01

    Given a set of potential source populations, genotypes of an individual of unknown origin at a collection of markers can be used to predict the correct source population of the individual. For improved efficiency, informative markers can be chosen from a larger set of markers to maximize the accuracy of this prediction. However, selecting the loci that are individually most informative does not necessarily produce the optimal panel. Here, using genotypes from eight species--carp, cat, chicken, dog, fly, grayling, human, and maize--this univariate accumulation procedure is compared to new multivariate "greedy" and "maximin" algorithms for choosing marker panels. The procedures generally suggest similar panels, although the greedy method often recommends inclusion of loci that are not chosen by the other algorithms. In seven of the eight species, when applied to five or more markers, all methods achieve at least 94% assignment accuracy on simulated individuals, with one species--dog--producing this level of accuracy with only three markers, and the eighth species--human--requiring approximately 13-16 markers. The new algorithms produce substantial improvements over use of randomly selected markers; where differences among the methods are noticeable, the greedy algorithm leads to slightly higher probabilities of correct assignment. Although none of the approaches necessarily chooses the panel with optimal performance, the algorithms all likely select panels with performance near enough to the maximum that they all are suitable for practical use.
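
    A minimal sketch of greedy forward panel selection, using cross-validated naive-Bayes assignment accuracy as the selection criterion on synthetic genotypes; the classifier choice, panel size, and data are assumptions for illustration, not the paper's exact procedure.

      import numpy as np
      from sklearn.naive_bayes import GaussianNB
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(4)
      X = rng.integers(0, 3, size=(200, 30)).astype(float)  # 30 loci, 0/1/2 genotypes
      y = rng.integers(0, 4, size=200)                      # 4 source populations

      panel, remaining = [], list(range(X.shape[1]))
      for _ in range(5):  # build a 5-marker panel, adding the best locus each step
          scores = [(cross_val_score(GaussianNB(), X[:, panel + [j]], y, cv=3).mean(), j)
                    for j in remaining]
          best_score, best_j = max(scores)
          panel.append(best_j)
          remaining.remove(best_j)
          print("panel:", panel, "assignment accuracy:", round(best_score, 3))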

  9. Genomic estimation of additive and dominance effects and impact of accounting for dominance on accuracy of genomic evaluation in sheep populations.

    PubMed

    Moghaddar, N; van der Werf, J H J

    2017-12-01

    The objectives of this study were to estimate the additive and dominance variance component of several weight and ultrasound scanned body composition traits in purebred and combined cross-bred sheep populations based on single nucleotide polymorphism (SNP) marker genotypes and then to investigate the effect of fitting additive and dominance effects on accuracy of genomic evaluation. Additive and dominance variance components were estimated in a mixed model equation based on "average information restricted maximum likelihood" using additive and dominance (co)variances between animals calculated from 48,599 SNP marker genotypes. Genomic prediction was based on genomic best linear unbiased prediction (GBLUP), and the accuracy of prediction was assessed based on a random 10-fold cross-validation. Across different weight and scanned body composition traits, dominance variance ranged from 0.0% to 7.3% of the phenotypic variance in the purebred population and from 7.1% to 19.2% in the combined cross-bred population. In the combined cross-bred population, the range of dominance variance decreased to 3.1% and 9.9% after accounting for heterosis effects. Accounting for dominance effects significantly improved the likelihood of the fitting model in the combined cross-bred population. This study showed a substantial dominance genetic variance for weight and ultrasound scanned body composition traits particularly in cross-bred population; however, improvement in the accuracy of genomic breeding values was small and statistically not significant. Dominance variance estimates in combined cross-bred population could be overestimated if heterosis is not fitted in the model. © 2017 Blackwell Verlag GmbH.
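
    A rough sketch of the genomic relationship matrices underlying such a model: a VanRaden-style additive matrix from 0/1/2 genotype codes and a dominance matrix from centred heterozygosity indicators, following common practice in the GBLUP literature. The data are synthetic and this is not the paper's exact pipeline.

      import numpy as np

      rng = np.random.default_rng(5)
      M = rng.integers(0, 3, size=(100, 500)).astype(float)  # animals x SNPs, 0/1/2
      p = M.mean(axis=0) / 2.0                               # allele frequencies
      Z = M - 2 * p                                          # centred genotype codes
      G = Z @ Z.T / np.sum(2 * p * (1 - p))                  # additive GRM

      H = (M == 1).astype(float)                             # heterozygote indicator
      W = H - 2 * p * (1 - p)                                # centred dominance codes
      D = W @ W.T / np.sum((2 * p * (1 - p)) ** 2)           # dominance relationship
      print(G.shape, D.shape)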

  10. Inertial navigation without accelerometers

    NASA Astrophysics Data System (ADS)

    Boehm, M.

    The Kennedy-Thorndike (1932) experiment points to the feasibility of fiber-optic inertial velocimeters, to which state-of-the-art technology could furnish substantial sensitivity and accuracy improvements. Velocimeters of this type would obviate the use of both gyros and accelerometers, and allow inertial navigation to be conducted together with vehicle attitude control, through the derivation of rotation rates from the ratios of the three possible velocimeter pairs. An inertial navigator and reference system based on this approach would probably have both fewer components and simpler algorithms, due to the obviation of the first level of integration in classic inertial navigators.

  11. FRD and scrambling properties of recent non-circular fibres

    NASA Astrophysics Data System (ADS)

    Avila, Gerardo

    2012-09-01

    Optical fibres with octagonal, square and rectangular core shapes have been proposed as alternatives to circular fibres for linking telescopes to spectrographs in order to increase the accuracy of radial velocity measurements. Theoretically they offer better scrambling properties than their circular counterparts. The first commercial octagonal fibres provided good near-field scrambling gains; unfortunately, their far-field scrambling gains were modest. This article presents test results on new fibres from CeramOptec. The measurements show substantial improvements in the far-field scrambling gains. In addition, evaluation of their focal ratio degradation (FRD) shows much better performance than previous fibres.

  12. Poly(iohexol) nanoparticles as contrast agents for in vivo X-ray computed tomography imaging.

    PubMed

    Yin, Qian; Yap, Felix Y; Yin, Lichen; Ma, Liang; Zhou, Qin; Dobrucki, Lawrence W; Fan, Timothy M; Gaba, Ron C; Cheng, Jianjun

    2013-09-18

    Biocompatible poly(iohexol) nanoparticles, prepared through cross-linking of iohexol and hexamethylene diisocyanate followed by coprecipitation of the resulting cross-linked polymer with mPEG-polylactide, were utilized as contrast agents for in vivo X-ray computed tomography (CT) imaging. Compared to conventional small-molecule contrast agents, poly(iohexol) nanoparticles exhibited substantially protracted retention within the tumor bed and a 36-fold increase in CT contrast 4 h post injection, which makes it possible to acquire CT images with improved diagnostic accuracy over a broad time frame without multiple administrations.

  13. Theories of willpower affect sustained learning.

    PubMed

    Miller, Eric M; Walton, Gregory M; Dweck, Carol S; Job, Veronika; Trzesniewski, Kali H; McClure, Samuel M

    2012-01-01

    Building cognitive abilities often requires sustained engagement with effortful tasks. We demonstrate that beliefs about willpower (whether willpower is viewed as a limited or non-limited resource) impact sustained learning on a strenuous mental task. As predicted, beliefs about willpower did not affect accuracy or improvement during the initial phases of learning; however, participants who were led to view willpower as non-limited showed greater sustained learning over the full duration of the task. These findings highlight the interactive nature of motivational and cognitive processes: motivational factors can substantially affect people's ability to recruit their cognitive resources to sustain learning over time.

  15. Improving risk prediction accuracy for new soldiers in the U.S. Army by adding self-report survey data to administrative data.

    PubMed

    Bernecker, Samantha L; Rosellini, Anthony J; Nock, Matthew K; Chiu, Wai Tat; Gutierrez, Peter M; Hwang, Irving; Joiner, Thomas E; Naifeh, James A; Sampson, Nancy A; Zaslavsky, Alan M; Stein, Murray B; Ursano, Robert J; Kessler, Ronald C

    2018-04-03

    High rates of mental disorders, suicidality, and interpersonal violence early in the military career have raised interest in implementing preventive interventions with high-risk new enlistees. The Army Study to Assess Risk and Resilience in Servicemembers (STARRS) developed risk-targeting systems for these outcomes based on machine learning methods using administrative data predictors. However, administrative data omit many risk factors, raising the question whether risk targeting could be improved by adding self-report survey data to prediction models. If so, the Army may gain from routinely administering surveys that assess additional risk factors. The STARRS New Soldier Survey was administered to 21,790 Regular Army soldiers who agreed to have survey data linked to administrative records. As reported previously, machine learning models using administrative data as predictors found that small proportions of high-risk soldiers accounted for high proportions of negative outcomes. Other machine learning models using self-report survey data as predictors were developed previously for three of these outcomes: major physical violence and sexual violence perpetration among men and sexual violence victimization among women. Here we examined the extent to which this survey information increases prediction accuracy, over models based solely on administrative data, for those three outcomes. We used discrete-time survival analysis to estimate a series of models predicting first occurrence, assessing how model fit improved and concentration of risk increased when adding the predicted risk score based on survey data to the predicted risk score based on administrative data. The addition of survey data improved prediction significantly for all outcomes. In the most extreme case, the percentage of reported sexual violence victimization among the 5% of female soldiers with highest predicted risk increased from 17.5% using only administrative predictors to 29.4% adding survey predictors, a 67.9% proportional increase in prediction accuracy. Other proportional increases in concentration of risk ranged from 4.8% to 49.5% (median = 26.0%). Data from an ongoing New Soldier Survey could substantially improve accuracy of risk models compared to models based exclusively on administrative predictors. Depending upon the characteristics of interventions used, the increase in targeting accuracy from survey data might offset survey administration costs.
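
    In code, a discrete-time survival model of this kind reduces to a pooled logistic regression on person-period records; the sketch below (entirely synthetic data, with hypothetical names admin and survey) illustrates testing whether a survey-based risk score adds predictive information to an administrative one:

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import log_loss

        rng = np.random.default_rng(1)
        n, periods = 5000, 12
        admin = rng.normal(size=n)                      # administrative-data risk score
        survey = 0.5 * admin + rng.normal(size=n)       # survey score, partly overlapping

        rows, events = [], []
        for i in range(n):                              # expand to person-period records
            for t in range(periods):
                h = 1.0 / (1.0 + np.exp(5.0 - 0.6 * admin[i] - 0.4 * survey[i]))
                event = rng.random() < h                # discrete-time hazard
                rows.append((t, admin[i], survey[i]))
                events.append(int(event))
                if event:
                    break                               # model first occurrence only
        X, y = np.array(rows), np.array(events)

        base = LogisticRegression().fit(X[:, :2], y)    # period + administrative score
        full = LogisticRegression().fit(X, y)           # ... plus the survey score
        ll_base = -log_loss(y, base.predict_proba(X[:, :2])[:, 1], normalize=False)
        ll_full = -log_loss(y, full.predict_proba(X)[:, 1], normalize=False)
        print("log-likelihood gain from survey data:", round(ll_full - ll_base, 1))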

  16. Improving optimal control of grid-connected lithium-ion batteries through more accurate battery and degradation modelling

    NASA Astrophysics Data System (ADS)

    Reniers, Jorn M.; Mulder, Grietus; Ober-Blöbaum, Sina; Howey, David A.

    2018-03-01

    The increased deployment of intermittent renewable energy generators opens up opportunities for grid-connected energy storage. Batteries offer significant flexibility but are relatively expensive at present. Battery lifetime is a key factor in the business case, and it depends on usage, but most techno-economic analyses do not account for this. For the first time, this paper quantifies the annual benefits of grid-connected batteries including realistic physical dynamics and nonlinear electrochemical degradation. Three lithium-ion battery models of increasing realism are formulated, and the predicted degradation of each is compared with a large-scale experimental degradation data set (Mat4Bat). Increasing the model fidelity reduces the RMS capacity prediction error from 11% to 5%. The three models are then used within an optimal control algorithm to perform price arbitrage over one year, including degradation. Results show that using more realistic models substantially increases the revenue while reducing degradation. The estimated best-case profit using a sophisticated model is a 175% improvement compared with the simplest model. This illustrates that using a simplistic battery model in a techno-economic assessment of grid-connected batteries might substantially underestimate the business case and lead to erroneous conclusions.
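
    To make the arbitrage problem concrete, here is a minimal linear-programming sketch of one day of price arbitrage with a crude throughput-based degradation penalty (hypothetical prices and cost figures; the paper's electrochemical degradation models are far more detailed than this):

        import numpy as np
        from scipy.optimize import linprog

        T = 24
        price = 40 + 15 * np.sin(2 * np.pi * (np.arange(T) - 6) / 24)  # EUR/MWh, synthetic
        eta, cap, p_max = 0.95, 2.0, 1.0      # one-way efficiency, MWh, MW
        deg = 5.0                             # EUR per MWh cycled, degradation stand-in

        # Variables x = [charge c_0..c_23, discharge d_0..d_23]; maximise profit
        cost = np.concatenate([price + deg, -price + deg])   # minimise the negative profit
        L = np.tril(np.ones((T, T)))          # SOC_t = sum_{s<=t} (eta*c_s - d_s/eta)
        A_ub = np.vstack([np.hstack([eta * L, -L / eta]),    #  SOC_t <= cap
                          np.hstack([-eta * L, L / eta])])   # -SOC_t <= 0
        b_ub = np.concatenate([np.full(T, cap), np.zeros(T)])

        res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, p_max)] * (2 * T))
        c, d = res.x[:T], res.x[T:]
        print("daily profit (EUR): %.1f" % (price @ (d - c) - deg * (c + d).sum()))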

  17. 26 CFR 1.6662-2 - Accuracy-related penalty.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ....6662-2 Internal Revenue INTERNAL REVENUE SERVICE, DEPARTMENT OF THE TREASURY (CONTINUED) INCOME TAX (CONTINUED) INCOME TAXES Additions to the Tax, Additional Amounts, and Assessable Penalties § 1.6662-2... attributable both to negligence and a substantial understatement of income tax, the maximum accuracy-related...

  18. Motion vector field upsampling for improved 4D cone-beam CT motion compensation of the thorax

    NASA Astrophysics Data System (ADS)

    Sauppe, Sebastian; Rank, Christopher M.; Brehm, Marcus; Paysan, Pascal; Seghers, Dieter; Kachelrieß, Marc

    2017-03-01

    To improve the accuracy of the motion vector fields (MVFs) required for respiratory motion compensated (MoCo) CT image reconstruction, without increasing the computational complexity of the MVF estimation approach, we propose an MVF upsampling method that is able to reduce the motion blurring in reconstructed 4D images. While respiratory gating improves the temporal resolution, it leads to sparse view sampling artifacts. MoCo image reconstruction has the potential to remove all motion artifacts while simultaneously making use of 100% of the raw data; however, the temporal resolution of the estimated MVFs is still below that of the CBCT data acquisition. Increasing the number of motion bins would increase reconstruction time and amplify sparse view artifacts, but not necessarily improve the MVF accuracy. Therefore we propose a new method to upsample estimated MVFs and use those for MoCo. To estimate the MVFs, a modified version of the Demons algorithm is used. Our proposed method is able to interpolate the original MVFs up to the point where each projection has its own individual MVF. To validate the method we use an artificially deformed clinical CT scan, with the breathing pattern of a real patient, and patient data acquired with a TrueBeam 4D CBCT system (Varian Medical Systems). We evaluate our method for different numbers of respiratory bins, each again with different upsampling factors. Employing our upsampling method, motion blurring in the reconstructed 4D images, induced by irregular breathing and the limited temporal resolution of phase-correlated images, is substantially reduced.
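
    A minimal sketch of the core idea, interpolating per-voxel displacement vectors between neighbouring respiratory bins so that every projection gets its own MVF (toy arrays; the paper interpolates Demons-estimated fields and handles the breathing cycle more carefully):

        import numpy as np

        n_bins, shape = 8, (3, 16, 32, 32)     # 3-component fields on a toy voxel grid
        rng = np.random.default_rng(2)
        mvf = rng.normal(scale=2.0, size=(n_bins,) + shape)  # one MVF per respiratory bin

        def mvf_at_phase(phase):
            """Linearly interpolate the MVFs at a continuous phase in [0, 1), cyclically."""
            pos = phase * n_bins
            lo = int(pos) % n_bins
            w = pos - int(pos)
            return (1.0 - w) * mvf[lo] + w * mvf[(lo + 1) % n_bins]

        # Give each of 600 projections an individual MVF based on its acquisition phase
        proj_phases = np.linspace(0.0, 1.0, 600, endpoint=False)
        per_projection = [mvf_at_phase(p) for p in proj_phases]
        print(len(per_projection), per_projection[0].shape)   # 600 (3, 16, 32, 32)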

  19. Edge technique lidar for high accuracy, high spatial resolution wind measurement in the Planetary Boundary Layer

    NASA Technical Reports Server (NTRS)

    Korb, C. L.; Gentry, Bruce M.

    1995-01-01

    The goal of the Army Research Office (ARO) Geosciences Program is to measure the three dimensional wind field in the planetary boundary layer (PBL) over a measurement volume with a 50 meter spatial resolution and with measurement accuracies of the order of 20 cm/sec. The objective of this work is to develop and evaluate a high vertical resolution lidar experiment using the edge technique for high accuracy measurement of the atmospheric wind field to meet the ARO requirements. This experiment allows the powerful capabilities of the edge technique to be quantitatively evaluated. In the edge technique, a laser is located on the steep slope of a high resolution spectral filter. This produces large changes in measured signal for small Doppler shifts. A differential frequency technique renders the Doppler shift measurement insensitive to both laser and filter frequency jitter and drift. The measurement is also relatively insensitive to the laser spectral width for widths less than the width of the edge filter. Thus, the goal is to develop a system which will yield a substantial improvement in the state of the art of wind profile measurement in terms of both vertical resolution and accuracy and which will provide a unique capability for atmospheric wind studies.
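
    The numbers behind the edge technique are easy to reproduce: a line-of-sight wind speed v Doppler-shifts the backscatter by Δf = 2v/λ, and the steep filter edge converts that shift into a measurable transmission change. A small worked example (the filter slope below is illustrative, not a measured value):

        lam = 1.064e-6              # Nd:YAG wavelength, m
        v = 0.20                    # target accuracy, m/s

        df = 2 * v / lam            # Doppler shift for a 20 cm/s line-of-sight wind
        print("Doppler shift: %.0f kHz" % (df / 1e3))          # ~376 kHz

        slope = 0.005 / 1e6         # assumed edge slope: 0.5% transmission change per MHz
        print("required transmission resolution: %.2f %%" % (100 * slope * df))  # ~0.19%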

  20. A high-performance seizure detection algorithm based on Discrete Wavelet Transform (DWT) and EEG

    PubMed Central

    Chen, Duo; Wan, Suiren; Xiang, Jing; Bao, Forrest Sheng

    2017-01-01

    In the past decade, Discrete Wavelet Transform (DWT), a powerful time-frequency tool, has been widely used in computer-aided signal analysis of epileptic electroencephalography (EEG), such as the detection of seizures. One of the important hurdles in applying DWT is choosing its settings, which previous works have done empirically or arbitrarily. The objective of this study was to develop a framework for automatically searching for the optimal DWT settings, to improve the accuracy and reduce the computational cost of seizure detection. To address this, we developed a method to decompose EEG data with 7 commonly used wavelet families, down to the maximum theoretical level of each mother wavelet. The wavelets and decomposition levels providing the highest accuracy in each wavelet family were then searched in an exhaustive selection of frequency bands, which showed optimal accuracy and low computational cost. The selection of frequency bands and features removed approximately 40% of redundancies. The developed algorithm achieved promising performance on two well-tested EEG datasets (accuracy >90% for both datasets). The experimental results demonstrate that the settings of DWT substantially affect its performance on seizure detection. Compared with existing wavelet-based seizure detection methods, the new approach is more accurate and transferable among datasets. PMID:28278203
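
    The search over wavelet families and decomposition levels can be prototyped directly with the PyWavelets package; a minimal sketch on a synthetic signal (the paper's pipeline adds feature selection and a classifier on real EEG):

        import numpy as np
        import pywt

        rng = np.random.default_rng(3)
        x = rng.normal(size=256 * 4)            # stand-in for one 4-s EEG epoch at 256 Hz

        for wname in ["haar", "db4", "sym5", "coif3"]:   # a few of the families explored
            w = pywt.Wavelet(wname)
            level = pywt.dwt_max_level(len(x), w.dec_len)  # maximum theoretical level
            coeffs = pywt.wavedec(x, w, level=level)
            energies = [float(np.sum(c ** 2)) for c in coeffs]  # simple band features
            print(wname, "max level:", level, "band energies:", np.round(energies, 1)[:4])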

  1. Improved single ion cyclotron resonance mass spectroscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boyce, K.R.

    1993-01-01

    The author has improved the state of the art for precision mass spectroscopy of a mass doublet to below one part in 10^10. By alternately loading single ions into a Penning trap, the author has determined the mass ratio M(CO+)/M(N2+) = 0.999 598 887 74(11), an accuracy of 1 × 10^-10. This is a factor of 4 improvement over previous measurements, and a factor of 10 better than the 1985 atomic mass table adjustment [WAA85a]. Much of the author's apparatus has been rebuilt, increasing the signal-to-noise ratio and improving the reliability of the machine. The typical time needed to make and cool a single ion has been reduced from about half an hour to under 5 minutes. This was done by a combination of faster ion-making and a much faster procedure for driving out ions of the wrong species. The improved S/N, in combination with a much better signal processing algorithm to extract the ion phase and frequency from the author's data, has substantially reduced the time required for the actual measurements. This is important now that the measurement time is a substantial fraction of the cycle time (the time to make a new ion and measure it). The improvements allow over 30 comparisons in one night, compared to 2 per night previously. This not only improves the statistics, but eliminates the possibility of large non-Gaussian errors due to sudden magnetic field shifts.

  2. Potential benefits of magnetic suspension and balance systems

    NASA Technical Reports Server (NTRS)

    Lawing, Pierce L.; Dress, David A.; Kilgore, Robert A.

    1987-01-01

    The potential of Magnetic Suspension and Balance Systems (MSBS) to improve conventional wind tunnel testing techniques is discussed. Topics include: elimination of model geometry distortion and support interference to improve the measurement accuracy of aerodynamic coefficients; removal of testing restrictions due to supports; improved dynamic stability data; and stores separation testing. Substantial increases in wind tunnel productivity are anticipated due to the coalescence of these improvements. Specific improvements in testing methods for missiles, helicopters, fighter aircraft, twin fuselage transports and bombers, store separation, water tunnels, and automobiles are also forecast. In a more speculative vein, new wind tunnel test techniques are envisioned as a result of applying MSBS, including free-flight computer trajectories in the test section, pilot-in-the-loop and designer-in-the-loop testing, shipboard missile launch simulation, and optimization of hybrid hypersonic configurations. Also addressed are potential applications of MSBS to such diverse technologies as medical research and practice, industrial robotics, space weaponry, and ore processing in space.

  3. Comparing Pixel- and Object-Based Approaches in Effectively Classifying Wetland-Dominated Landscapes

    PubMed Central

    Berhane, Tedros M.; Lane, Charles R.; Wu, Qiusheng; Anenkhonov, Oleg A.; Chepinoga, Victor V.; Autrey, Bradley C.; Liu, Hongxing

    2018-01-01

    Wetland ecosystems straddle both terrestrial and aquatic habitats, performing many ecological functions directly and indirectly benefitting humans. However, global wetland losses are substantial. Satellite remote sensing and classification informs wise wetland management and monitoring. Both pixel- and object-based classification approaches using parametric and non-parametric algorithms may be effectively used in describing wetland structure and habitat, but which approach should one select? We conducted both pixel- and object-based image analyses (OBIA) using parametric (Iterative Self-Organizing Data Analysis Technique, ISODATA, and maximum likelihood, ML) and non-parametric (random forest, RF) approaches in the Barguzin Valley, a large wetland (~500 km2) in the Lake Baikal, Russia, drainage basin. Four Quickbird multispectral bands plus various spatial and spectral metrics (e.g., texture, Non-Differentiated Vegetation Index, slope, aspect, etc.) were analyzed using field-based regions of interest sampled to characterize an initial 18 ISODATA-based classes. Parsimoniously using a three-layer stack (Quickbird band 3, water ratio index (WRI), and mean texture) in the analyses resulted in the highest accuracy, 87.9% with pixel-based RF, followed by OBIA RF (segmentation scale 5, 84.6% overall accuracy), followed by pixel-based ML (83.9% overall accuracy). Increasing the predictors from three to five by adding Quickbird bands 2 and 4 decreased the pixel-based overall accuracy while increasing the OBIA RF accuracy to 90.4%. However, McNemar’s chi-square test confirmed no statistically significant difference in overall accuracy among the classifiers (pixel-based ML, RF, or object-based RF) for either the three- or five-layer analyses. Although potentially useful in some circumstances, the OBIA approach requires substantial resources and user input (such as segmentation scale selection—which was found to substantially affect overall accuracy). Hence, we conclude that pixel-based RF approaches are likely satisfactory for classifying wetland-dominated landscapes. PMID:29707381
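
    In code, the pixel-based random forest workflow compared here amounts to fitting a classifier on per-pixel feature stacks; a minimal sklearn sketch with three synthetic layers standing in for the band 3/WRI/texture stack (real work uses georeferenced imagery and field-based training regions):

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(4)
        n_pixels, n_layers, n_classes = 5000, 3, 18
        X = rng.normal(size=(n_pixels, n_layers))
        # Synthetic labels loosely tied to the features so accuracy is non-trivial
        y = (X @ rng.normal(size=n_layers)).argsort().argsort() * n_classes // n_pixels

        rf = RandomForestClassifier(n_estimators=300, random_state=0)
        scores = cross_val_score(rf, X, y, cv=5)
        print("overall accuracy: %.1f%% +/- %.1f%%" % (100 * scores.mean(), 100 * scores.std()))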

  5. Risk of bias reporting in the recent animal focal cerebral ischaemia literature.

    PubMed

    Bahor, Zsanett; Liao, Jing; Macleod, Malcolm R; Bannach-Brown, Alexandra; McCann, Sarah K; Wever, Kimberley E; Thomas, James; Ottavi, Thomas; Howells, David W; Rice, Andrew; Ananiadou, Sophia; Sena, Emily

    2017-10-15

    Findings from in vivo research may be less reliable where studies do not report measures to reduce risks of bias. The experimental stroke community has been at the forefront of implementing changes to improve reporting, but it is not known whether these efforts are associated with continuous improvements. Our aims here were firstly to validate an automated tool to assess risks of bias in published works, and secondly to assess the reporting of measures taken to reduce the risk of bias within recent literature for two experimental models of stroke. We developed and used text analytic approaches to automatically ascertain reporting of measures to reduce risk of bias from full-text articles describing animal experiments inducing middle cerebral artery occlusion (MCAO) or modelling lacunar stroke. Compared with previous assessments, there were improvements in the reporting of measures taken to reduce risks of bias in the MCAO literature but not in the lacunar stroke literature. Accuracy of automated annotation of risk of bias in the MCAO literature was 86% (randomization), 94% (blinding) and 100% (sample size calculation); and in the lacunar stroke literature accuracy was 67% (randomization), 91% (blinding) and 96% (sample size calculation). There remains substantial opportunity for improvement in the reporting of animal research modelling stroke, particularly in the lacunar stroke literature. Further, automated tools perform sufficiently well to identify whether studies report blinded assessment of outcome, but improvements are required in the tools to ascertain whether randomization and a sample size calculation were reported. © 2017 The Author(s).
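
    As a flavour of the text-analytic approach, reporting of risk-of-bias items can be prototyped with simple pattern matching before training a validated classifier (illustrative patterns only, far cruder than the tool assessed in this study):

        import re

        PATTERNS = {
            "randomization": re.compile(r"\brandom(ly|ised|ized|isation|ization)\b", re.I),
            "blinding": re.compile(r"\bblind(ed|ing)?\b|\bmasked\b", re.I),
            "sample size calculation": re.compile(r"\b(sample size|power) (calculation|analysis)\b", re.I),
        }

        def risk_of_bias_report(full_text):
            """Return which risk-of-bias measures a paper appears to report."""
            return {item: bool(rx.search(full_text)) for item, rx in PATTERNS.items()}

        methods = ("Animals were randomly allocated to MCAO or sham surgery. "
                   "Infarct volume was measured by a blinded assessor.")
        print(risk_of_bias_report(methods))
        # {'randomization': True, 'blinding': True, 'sample size calculation': False}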

  6. Living systematic reviews: an emerging opportunity to narrow the evidence-practice gap.

    PubMed

    Elliott, Julian H; Turner, Tari; Clavisi, Ornella; Thomas, James; Higgins, Julian P T; Mavergames, Chris; Gruen, Russell L

    2014-02-01

    The current difficulties in keeping systematic reviews up to date leads to considerable inaccuracy, hampering the translation of knowledge into action. Incremental advances in conventional review updating are unlikely to lead to substantial improvements in review currency. A new approach is needed. We propose living systematic review as a contribution to evidence synthesis that combines currency with rigour to enhance the accuracy and utility of health evidence. Living systematic reviews are high quality, up-to-date online summaries of health research, updated as new research becomes available, and enabled by improved production efficiency and adherence to the norms of scholarly communication. Together with innovations in primary research reporting and the creation and use of evidence in health systems, living systematic review contributes to an emerging evidence ecosystem.

  7. Metrics for Evaluating the Accuracy of Solar Power Forecasting: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, J.; Hodge, B. M.; Florita, A.

    2013-10-01

    Forecasting solar energy generation is a challenging task due to the variety of solar power systems and weather regimes encountered. Forecast inaccuracies can result in substantial economic losses and power system reliability issues. This paper presents a suite of generally applicable and value-based metrics for solar forecasting for a comprehensive set of scenarios (i.e., different time horizons, geographic locations, applications, etc.). In addition, a comprehensive framework is developed to analyze the sensitivity of the proposed metrics to three types of solar forecasting improvements using a design of experiments methodology, in conjunction with response surface and sensitivity analysis methods. The results show that the developed metrics can efficiently evaluate the quality of solar forecasts, and assess the economic and reliability impact of improved solar forecasting.
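
    A few of the generally applicable metrics in such a suite are one-liners to compute; a small sketch with synthetic photovoltaic data (the preprint's full suite also includes distributional and value-based measures beyond these):

        import numpy as np

        def solar_forecast_metrics(actual, forecast, capacity):
            err = forecast - actual
            return {"MBE": err.mean(),                       # bias
                    "MAE": np.abs(err).mean(),
                    "RMSE": np.sqrt((err ** 2).mean()),
                    "nRMSE_%": 100 * np.sqrt((err ** 2).mean()) / capacity}

        rng = np.random.default_rng(5)
        t = np.arange(48)
        actual = np.clip(80 * np.sin(np.pi * (t % 24) / 24), 0, None)  # toy PV output, MW
        forecast = actual + rng.normal(scale=6.0, size=t.size)
        print(solar_forecast_metrics(actual, forecast, capacity=100.0))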

  8. Satellite techniques for determining the geopotential for sea-surface elevations

    NASA Technical Reports Server (NTRS)

    Pisacane, V. L.

    1984-01-01

    Spaceborne altimetry with measurement accuracies of a few centimeters, which has the potential to determine the sea surface elevations necessary to compute accurate three-dimensional geostrophic currents from traditional hydrographic observations, is discussed. The limitation in this approach is the uncertainty in knowledge of the global and ocean geopotentials, which produces satellite and height uncertainties about an order of magnitude larger than the goal of about 10 cm. The quantitative effects of geopotential uncertainties on processing altimetry data are described. Potential near-term improvements, not requiring additional spacecraft, are discussed. Even though these yield substantial improvements at the longer wavelengths, the oceanographic goal will not be achieved. The geopotential research mission (GRM) is described, which should produce geopotential models capable of defining the ocean geoid to 10 cm and near-Earth satellite positions. The state of the art and the potential of spaceborne gravimetry are described as an alternative approach to improving our knowledge of the geopotential.

  9. Simplified curve fits for the thermodynamic properties of equilibrium air

    NASA Technical Reports Server (NTRS)

    Srinivasan, S.; Tannehill, J. C.; Weilmuenster, K. J.

    1987-01-01

    New, improved curve fits for the thermodynamic properties of equilibrium air have been developed. The curve fits are for pressure, speed of sound, temperature, entropy, enthalpy, density, and internal energy. These curve fits can be readily incorporated into new or existing computational fluid dynamics codes if real gas effects are desired. The curve fits are constructed from Grabau-type transition functions to model the thermodynamic surfaces in a piecewise manner. The accuracy and continuity of these curve fits are substantially improved over those of previous curve fits. These improvements are due to the incorporation of a small number of additional terms in the approximating polynomials and careful choices of the transition functions. The ranges of validity of the new curve fits are temperatures up to 25 000 K and densities from 10^-7 to 10^3 amagats.
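
    The piecewise construction can be illustrated with a smooth blend of two local polynomial fits; the logistic switch below captures the spirit of a Grabau-type transition function but is not the report's exact functional form:

        import numpy as np

        def blended_fit(x, poly_lo, poly_hi, x0, k):
            """Blend two polynomial branches with a smooth logistic transition at x0."""
            s = 1.0 / (1.0 + np.exp(-k * (x - x0)))   # 0 on the low side, 1 on the high side
            return (1.0 - s) * np.polyval(poly_lo, x) + s * np.polyval(poly_hi, x)

        x = np.linspace(0.0, 2.0, 9)
        lo = [0.2, 1.0, 0.0]      # hypothetical low-range quadratic coefficients
        hi = [-0.1, 1.5, -0.3]    # hypothetical high-range quadratic coefficients
        print(np.round(blended_fit(x, lo, hi, x0=1.0, k=12.0), 3))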

  10. Widespread Nanoparticle-Assay Interference: Implications for Nanotoxicity Testing

    PubMed Central

    Ong, Kimberly J.; MacCormack, Tyson J.; Clark, Rhett J.; Ede, James D.; Ortega, Van A.; Felix, Lindsey C.; Dang, Michael K. M.; Ma, Guibin; Fenniri, Hicham; Veinot, Jonathan G. C.; Goss, Greg G.

    2014-01-01

    The evaluation of engineered nanomaterial safety has been hindered by conflicting reports demonstrating differential degrees of toxicity with the same nanoparticles. The unique properties of these materials increase the likelihood that they will interfere with analytical techniques, which may contribute to this phenomenon. We tested the potential for: 1) nanoparticle intrinsic fluorescence/absorbance, 2) interactions between nanoparticles and assay components, and 3) the effects of adding both nanoparticles and analytes to an assay, to interfere with the accurate assessment of toxicity. Silicon, cadmium selenide, titanium dioxide, and helical rosette nanotubes each affected at least one of the six assays tested, resulting in either substantial over- or under-estimations of toxicity. Simulation of realistic assay conditions revealed that interference could not be predicted solely by interactions between nanoparticles and assay components. Moreover, the nature and degree of interference cannot be predicted solely based on our current understanding of nanomaterial behaviour. A literature survey indicated that ca. 95% of papers from 2010 using biochemical techniques to assess nanotoxicity did not account for potential interference of nanoparticles, and this number had not substantially improved in 2012. We provide guidance on avoiding and/or controlling for such interference to improve the accuracy of nanotoxicity assessments. PMID:24618833

  11. Intermolecular shielding contributions studied by modeling the 13C chemical-shift tensors of organic single crystals with plane waves

    PubMed Central

    Johnston, Jessica C.; Iuliucci, Robbie J.; Facelli, Julio C.; Fitzgerald, George; Mueller, Karl T.

    2009-01-01

    In order to predict accurately the chemical shift of NMR-active nuclei in solid phase systems, magnetic shielding calculations must be capable of considering the complete lattice structure. Here we assess the accuracy of the density functional theory gauge-including projector augmented wave method, which uses pseudopotentials to approximate the nodal structure of the core electrons, to determine the magnetic properties of crystals by predicting the full chemical-shift tensors of all 13C nuclides in 14 organic single crystals from which experimental tensors have previously been reported. Plane-wave methods use periodic boundary conditions to incorporate the lattice structure, providing a substantial improvement for modeling the chemical shifts in hydrogen-bonded systems. Principal tensor components can now be predicted to an accuracy that approaches the typical experimental uncertainty. Moreover, methods that include the full solid-phase structure enable geometry optimizations to be performed on the input structures prior to calculation of the shielding. Improvement after optimization is noted here even when neutron diffraction data are used for determining the initial structures. After geometry optimization, the isotropic shift can be predicted to within 1 ppm. PMID:19831448

  12. A hierarchical spatial model for well yield in complex aquifers

    NASA Astrophysics Data System (ADS)

    Montgomery, J.; O'Sullivan, F.

    2017-12-01

    Efficiently siting and managing groundwater wells requires reliable estimates of the amount of water that can be produced, or the well yield. This can be challenging to predict in highly complex, heterogeneous fractured aquifers due to the uncertainty around local hydraulic properties. Promising statistical approaches have been advanced in recent years. For instance, kriging and multivariate regression analysis have been applied to well test data with limited but encouraging levels of prediction accuracy. Additionally, some analytical solutions to diffusion in homogeneous porous media have been used to infer "effective" properties consistent with observed flow rates or drawdown. However, this is an under-specified inverse problem with substantial and irreducible uncertainty. We describe a flexible machine learning approach capable of combining diverse datasets with constraining physical and geostatistical models for improved well yield prediction accuracy and uncertainty quantification. Our approach can be implemented within a hierarchical Bayesian framework using Markov Chain Monte Carlo, which allows for additional sources of information to be incorporated in priors to further constrain and improve predictions and reduce the model order. We demonstrate the usefulness of this approach using data from over 7,000 wells in a fractured bedrock aquifer.
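
    A minimal Gibbs-sampler sketch of the hierarchical idea, with log well yields grouped by geological unit (synthetic, balanced data; the study's model conditions on many more covariates and physical constraints):

        import numpy as np

        rng = np.random.default_rng(6)
        J, n = 8, 40                                    # geological units, wells per unit
        true_mu = rng.normal(1.0, 0.5, size=J)          # unit-level mean log yields
        y = true_mu[:, None] + rng.normal(0.0, 0.8, size=(J, n))

        mu0, theta, sigma2, tau2, draws = 0.0, np.zeros(J), 1.0, 1.0, []
        for it in range(3000):
            prec = n / sigma2 + 1.0 / tau2              # conjugate update for unit means
            theta = rng.normal((y.sum(1) / sigma2 + mu0 / tau2) / prec, np.sqrt(1.0 / prec))
            mu0 = rng.normal(theta.mean(), np.sqrt(tau2 / J))   # grand mean, flat prior
            # Variances: inverse-gamma updates with weak IG(1, 1) priors
            sigma2 = 1.0 / rng.gamma(1 + y.size / 2,
                                     1.0 / (1 + 0.5 * ((y - theta[:, None]) ** 2).sum()))
            tau2 = 1.0 / rng.gamma(1 + J / 2, 1.0 / (1 + 0.5 * ((theta - mu0) ** 2).sum()))
            if it >= 1000:                              # discard burn-in
                draws.append(theta.copy())

        print("posterior unit means:", np.round(np.mean(draws, axis=0), 2))
        print("true unit means:     ", np.round(true_mu, 2))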

  13. ["Normal pressure" hydrocephalus].

    PubMed

    Philippon, Jacques

    2005-03-01

    Normal pressure hydrocephalus (NPH) or, more precisely, chronic adult hydrocephalus, is a complex condition. Even if the basic mechanism is an impediment to CSF absorption, the underlying pathology is heterogeneous. In secondary NPH, the disruption of normal CSF pathways following meningitis or sub-arachnoid haemorrhage is responsible for ventricular dilatation. However, in about half of the cases, the etiology remains obscure. NPH is more frequently found in elderly people, probably in relation to the increased incidence of cerebrovascular disease. The diagnosis of NPH is based upon a triad of clinical symptoms. The main symptom is gait disturbance, followed by urinary incontinence and varying degrees of cognitive change. The latter two symptoms are not prerequisites for the diagnosis. Radiological ventricular dilatation without cortical sulcal enlargement is a key factor, as is substantial clinical improvement after CSF withdrawal (CSF tap test). Other CSF dynamic studies and various imaging investigations have been proposed to improve diagnostic accuracy, but no simple test can predict the results of CSF drainage. The current treatment is ventriculo-peritoneal shunting, ideally using an adjustable valve. Results are directly dependent upon the accuracy of the preoperative diagnosis. Post-surgical complications may be observed in about 10% of cases.

  14. Approaches to reducing photon dose calculation errors near metal implants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Jessie Y.; Followill, David S.; Howell, Reb

    Purpose: Dose calculation errors near metal implants are caused by limitations of the dose calculation algorithm in modeling tissue/metal interface effects as well as density assignment errors caused by imaging artifacts. The purpose of this study was to investigate two strategies for reducing dose calculation errors near metal implants: implementation of metal-based energy deposition kernels in the convolution/superposition (C/S) dose calculation method and use of metal artifact reduction methods for computed tomography (CT) imaging. Methods: Both error reduction strategies were investigated using a simple geometric slab phantom with a rectangular metal insert (composed of titanium or Cerrobend), as well as two anthropomorphic phantoms (one with spinal hardware and one with dental fillings), designed to mimic relevant clinical scenarios. To assess the dosimetric impact of metal kernels, the authors implemented titanium and silver kernels in a commercial collapsed cone C/S algorithm. To assess the impact of CT metal artifact reduction methods, the authors performed dose calculations using baseline imaging techniques (uncorrected 120 kVp imaging) and three commercial metal artifact reduction methods: Philips Healthcare's O-MAR, GE Healthcare's monochromatic gemstone spectral imaging (GSI) using dual-energy CT, and GSI with metal artifact reduction software (MARS) applied. For the simple geometric phantom, radiochromic film was used to measure dose upstream and downstream of metal inserts. For the anthropomorphic phantoms, ion chambers and radiochromic film were used to quantify the benefit of the error reduction strategies. Results: Metal kernels did not universally improve accuracy but rather resulted in better accuracy upstream of metal implants and decreased accuracy directly downstream. For the clinical cases (spinal hardware and dental fillings), metal kernels had very little impact on the dose calculation accuracy (<1.0%). Of the commercial CT artifact reduction methods investigated, the authors found that O-MAR was the most consistent method, resulting in either improved dose calculation accuracy (dental case) or little impact on calculation accuracy (spine case). GSI was unsuccessful at reducing the severe artifacts caused by dental fillings and had very little impact on calculation accuracy. GSI with MARS on the other hand gave mixed results, sometimes introducing metal distortion and increasing calculation errors (titanium rectangular implant and titanium spinal hardware) but other times very successfully reducing artifacts (Cerrobend rectangular implant and dental fillings). Conclusions: Though successful at improving dose calculation accuracy upstream of metal implants, metal kernels were not found to substantially improve accuracy for clinical cases. Of the commercial artifact reduction methods investigated, O-MAR was found to be the most consistent candidate for all-purpose CT simulation imaging. The MARS algorithm for GSI should be used with caution for titanium implants, larger implants, and implants located near heterogeneities as it can distort the size and shape of implants and increase calculation errors.

  15. A Simple and Efficient Methodology To Improve Geometric Accuracy in Gamma Knife Radiation Surgery: Implementation in Multiple Brain Metastases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karaiskos, Pantelis, E-mail: pkaraisk@med.uoa.gr; Gamma Knife Department, Hygeia Hospital, Athens; Moutsatsos, Argyris

    Purpose: To propose, verify, and implement a simple and efficient methodology for the improvement of total geometric accuracy in multiple brain metastases gamma knife (GK) radiation surgery. Methods and Materials: The proposed methodology exploits the directional dependence of magnetic resonance imaging (MRI)-related spatial distortions stemming from background field inhomogeneities, also known as sequence-dependent distortions, with respect to the read-gradient polarity during MRI acquisition. First, an extra MRI pulse sequence is acquired with the same imaging parameters as those used for routine patient imaging, aside from a reversal in the read-gradient polarity. Then, "average" image data are compounded from data acquired from the 2 MRI sequences and are used for treatment planning purposes. The method was applied and verified in a polymer gel phantom irradiated with multiple shots in an extended region of the GK stereotactic space. Its clinical impact in dose delivery accuracy was assessed in 15 patients with a total of 96 relatively small (<2 cm) metastases treated with GK radiation surgery. Results: Phantom study results showed that use of average MR images eliminates the effect of sequence-dependent distortions, leading to a total spatial uncertainty of less than 0.3 mm, attributed mainly to gradient nonlinearities. In brain metastases patients, non-eliminated sequence-dependent distortions lead to target localization uncertainties of up to 1.3 mm (mean: 0.51 ± 0.37 mm) with respect to the corresponding target locations in the "average" MRI series. Due to these uncertainties, a considerable underdosage (5%-32% of the prescription dose) was found in 33% of the studied targets. Conclusions: The proposed methodology is simple and straightforward in its implementation. Regarding multiple brain metastases applications, the suggested approach may substantially improve total GK dose delivery accuracy in smaller, outlying targets.
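
    The geometric core of the method is that sequence-dependent distortion displaces features by +d with one read-gradient polarity and by -d with the reversed polarity, so averaging the two acquisitions recovers the true position; a one-dimensional toy demonstration:

        import numpy as np

        x = np.linspace(-10, 10, 2001)            # mm
        shift = 1.3                               # mm, field-inhomogeneity displacement
        fwd = np.exp(-(x - shift) ** 2 / 2)       # target imaged with normal polarity
        rev = np.exp(-(x + shift) ** 2 / 2)       # same target, reversed read gradient
        avg = 0.5 * (fwd + rev)                   # "average" image data

        centroid = lambda img: float((x * img).sum() / img.sum())
        print("apparent centres (mm): fwd=%.2f rev=%.2f avg=%.2f"
              % (centroid(fwd), centroid(rev), centroid(avg)))   # +1.30 -1.30 0.00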

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lou, K; Rice University, Houston, TX; Sun, X

    Purpose: To study the feasibility of clinical on-line proton beam range verification with PET imaging. Methods: We simulated a 179.2-MeV proton beam with 5-mm diameter irradiating a PMMA phantom of human brain size, which was then imaged by a brain PET with a 300 × 300 × 100 mm^3 FOV and different system sensitivities and spatial resolutions. We calculated the mean and standard deviation of the positron activity range (AR) from reconstructed PET images, with respect to different data acquisition times (from 5 sec to 300 sec in 5-sec steps). We also developed a technique, "Smoothed Maximum Value (SMV)", to improve AR measurement under a given dose. Furthermore, we simulated a human brain irradiated by a 110-MeV proton beam of 50-mm diameter with 0.3-Gy dose at the Bragg peak and imaged by the above PET system with 40% system sensitivity at the center of the FOV and 1.7-mm spatial resolution. Results: MC simulations on the PMMA phantom showed that, regardless of PET system sensitivities and spatial resolutions, the accuracy and precision of AR were proportional to the reciprocal of the square root of the image count if image smoothing was not applied. With image smoothing or the SMV method, the accuracy and precision could be substantially improved. For a cylindrical PMMA phantom (200 mm diameter and 290 mm long), the accuracy and precision of AR measurement could reach 1.0 and 1.7 mm with 100-sec data acquired by the brain PET. The study with a human brain showed it was feasible to achieve sub-millimeter accuracy and precision of AR measurement with an acquisition time within 60 sec. Conclusion: This study established the relationship between count statistics and the accuracy and precision of activity-range verification. It showed the feasibility of clinical on-line beam range verification with high-performance PET systems and improved AR measurement techniques. Cancer Prevention and Research Institute of Texas grant RP120326, NIH grant R21CA187717, The Cancer Center Support (Core) Grant CA016672 to MD Anderson Cancer Center.
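
    The reported scaling, with AR accuracy and precision proportional to the reciprocal of the square root of the image count, is easy to reproduce on a toy depth-activity profile (Poisson noise and a 50%-of-plateau range definition; the study itself used full PET image reconstruction):

        import numpy as np

        rng = np.random.default_rng(7)
        depth = np.arange(0.0, 200.0, 1.0)                     # mm
        profile = 1.0 / (1.0 + np.exp((depth - 150.0) / 2.0))  # activity with distal falloff

        def range_std(total_counts, trials=500, smooth=5):
            lam = profile / profile.sum() * total_counts
            kernel = np.ones(smooth) / smooth
            est = []
            for _ in range(trials):
                img = np.convolve(rng.poisson(lam).astype(float), kernel, mode="same")
                est.append(depth[np.argmax(img < 0.5 * img[:140].mean())])
            return np.std(est)

        for counts in (1e4, 4e4, 16e4):
            print("counts=%.0e  range std=%.2f mm" % (counts, range_std(counts)))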

  17. The Release 6 reference sequence of the Drosophila melanogaster genome

    DOE PAGES

    Hoskins, Roger A.; Carlson, Joseph W.; Wan, Kenneth H.; ...

    2015-01-14

    Drosophila melanogaster plays an important role in molecular, genetic, and genomic studies of heredity, development, metabolism, behavior, and human disease. The initial reference genome sequence reported more than a decade ago had a profound impact on progress in Drosophila research, and improving the accuracy and completeness of this sequence continues to be important to further progress. We previously described improvement of the 117-Mb sequence in the euchromatic portion of the genome and 21 Mb in the heterochromatic portion, using a whole-genome shotgun assembly, BAC physical mapping, and clone-based finishing. Here, we report an improved reference sequence of the single-copy and middle-repetitive regions of the genome, produced using cytogenetic mapping to mitotic and polytene chromosomes, clone-based finishing and BAC fingerprint verification, ordering of scaffolds by alignment to cDNA sequences, incorporation of other map and sequence data, and validation by whole-genome optical restriction mapping. These data substantially improve the accuracy and completeness of the reference sequence and the order and orientation of sequence scaffolds into chromosome arm assemblies. Representation of the Y chromosome and other heterochromatic regions is particularly improved. The new 143.9-Mb reference sequence, designated Release 6, effectively exhausts clone-based technologies for mapping and sequencing. Highly repeat-rich regions, including large satellite blocks and functional elements such as the ribosomal RNA genes and the centromeres, are largely inaccessible to current sequencing and assembly methods and remain poorly represented. In conclusion, further significant improvements will require sequencing technologies that do not depend on molecular cloning and that produce very long reads.

  19. Methodology in diagnostic laboratory test research in clinical chemistry and clinical chemistry and laboratory medicine.

    PubMed

    Lumbreras-Lacarra, Blanca; Ramos-Rincón, José Manuel; Hernández-Aguado, Ildefonso

    2004-03-01

    The application of epidemiologic principles to clinical diagnosis has been less developed than in other clinical areas. Knowledge of the main flaws affecting diagnostic laboratory test research is the first step for improving its quality. We assessed the methodologic aspects of articles on laboratory tests. We included articles that estimated indexes of diagnostic accuracy (sensitivity and specificity) and were published in Clinical Chemistry or Clinical Chemistry and Laboratory Medicine in 1996, 2001, and 2002. Clinical Chemistry has paid special attention to this field of research since 1996 by publishing recommendations, checklists, and reviews. Articles were identified through electronic searches in Medline. The strategy combined the MeSH term "sensitivity and specificity" (exploded) with the text words "specificity", "false negative", and "accuracy". We examined adherence to seven methodologic criteria used in the study by Reid et al. (JAMA 1995;274:645-51) of papers published in general medical journals. Three observers evaluated each article independently. Seventy-nine articles fulfilled the inclusion criteria. The percentage of studies that satisfied each criterion improved from 1996 to 2002. Substantial improvement was observed in reporting of the statistical uncertainty of indices of diagnostic accuracy, in criteria based on clinical information from the study population (spectrum composition), and in avoidance of workup bias. Analytical reproducibility was reported frequently (68%), whereas information about indeterminate results was rarely provided. The mean number of methodologic criteria satisfied showed a statistically significant increase over the 3 years in Clinical Chemistry but not in Clinical Chemistry and Laboratory Medicine. The methodologic quality of the articles on diagnostic test research published in Clinical Chemistry and Clinical Chemistry and Laboratory Medicine is comparable to the quality observed in the best general medical journals. The methodologic aspects that most need improvement are those linked to the clinical information of the populations studied. Editorial actions aimed to increase the quality of reporting of diagnostic studies could have a relevant positive effect, as shown by the improvement observed in Clinical Chemistry.

  20. Genetic Variance Partitioning and Genome-Wide Prediction with Allele Dosage Information in Autotetraploid Potato.

    PubMed

    Endelman, Jeffrey B; Carley, Cari A Schmitz; Bethke, Paul C; Coombs, Joseph J; Clough, Mark E; da Silva, Washington L; De Jong, Walter S; Douches, David S; Frederick, Curtis M; Haynes, Kathleen G; Holm, David G; Miller, J Creighton; Muñoz, Patricio R; Navarro, Felix M; Novy, Richard G; Palta, Jiwan P; Porter, Gregory A; Rak, Kyle T; Sathuvalli, Vidyasagar R; Thompson, Asunta L; Yencho, G Craig

    2018-05-01

    As one of the world's most important food crops, the potato (Solanum tuberosum L.) has spurred innovation in autotetraploid genetics, including in the use of SNP arrays to determine allele dosage at thousands of markers. By combining genotype and pedigree information with phenotype data for economically important traits, the objectives of this study were to (1) partition the genetic variance into additive vs. nonadditive components, and (2) determine the accuracy of genome-wide prediction. Between 2012 and 2017, a training population of 571 clones was evaluated for total yield, specific gravity, and chip fry color. Genomic covariance matrices for additive (G), digenic dominant (D), and additive × additive epistatic (G#G) effects were calculated using 3895 markers, and the numerator relationship matrix (A) was calculated from a 13-generation pedigree. Based on model fit and prediction accuracy, mixed model analysis with G was superior to A for yield and fry color but not specific gravity. The amount of additive genetic variance captured by markers was 20% of the total genetic variance for specific gravity, compared to 45% for yield and fry color. Within the training population, including nonadditive effects improved accuracy and/or bias for all three traits when predicting total genotypic value. When six F1 populations were used for validation, prediction accuracy ranged from 0.06 to 0.63 and was consistently lower (0.13 on average) without allele dosage information. We conclude that genome-wide prediction is feasible in potato and that it will improve selection for breeding value given the substantial amount of nonadditive genetic variance in elite germplasm. Copyright © 2018 by the Genetics Society of America.
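
    Given such genomic covariance matrices, the prediction step is a standard BLUP solve; a compact numpy sketch combining additive, dominance and additive × additive epistatic kernels (toy matrices and variance components; a proper D would be built from heterozygote coding, and the study estimated all components by mixed-model analysis):

        import numpy as np

        rng = np.random.default_rng(8)
        n = 300
        B = rng.normal(size=(n, 2 * n))            # stand-in for centred marker data
        G = B @ B.T / (2 * n)                      # additive kernel (toy)
        D = G * G / np.mean(np.diag(G * G))        # toy dominance surrogate
        E = G * G                                  # additive x additive epistasis (G#G)

        var_a, var_d, var_e, var_res = 0.45, 0.10, 0.05, 0.40
        K = var_a * G + var_d * D + var_e * E      # total genetic covariance

        g = rng.multivariate_normal(np.zeros(n), K)     # true genotypic values
        y = g + rng.normal(scale=np.sqrt(var_res), size=n)
        tr = np.arange(250)                        # predict the last 50 from the first 250

        V = K[np.ix_(tr, tr)] + var_res * np.eye(tr.size)
        g_hat = K[250:, tr] @ np.linalg.solve(V, y[tr] - y[tr].mean())
        print("prediction accuracy r = %.2f" % np.corrcoef(g_hat, g[250:])[0, 1])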

  1. Diagnostic Classification of Schizophrenia Patients on the Basis of Regional Reward-Related fMRI Signal Patterns

    PubMed Central

    Koch, Stefan P.; Hägele, Claudia; Haynes, John-Dylan; Heinz, Andreas; Schlagenhauf, Florian; Sterzer, Philipp

    2015-01-01

    Functional neuroimaging has provided evidence for altered function of mesolimbic circuits implicated in reward processing, first and foremost the ventral striatum, in patients with schizophrenia. While such findings based on significant group differences in brain activations can provide important insights into the pathomechanisms of mental disorders, the use of neuroimaging results from standard univariate statistical analysis for individual diagnosis has proven difficult. In this proof of concept study, we tested whether the predictive accuracy for the diagnostic classification of schizophrenia patients vs. healthy controls could be improved using multivariate pattern analysis (MVPA) of regional functional magnetic resonance imaging (fMRI) activation patterns for the anticipation of monetary reward. With a searchlight MVPA approach using support vector machine classification, we found that the diagnostic category could be predicted from local activation patterns in frontal, temporal, occipital and midbrain regions, with a maximal cluster peak classification accuracy of 93% for the right pallidum. Region-of-interest based MVPA for the ventral striatum achieved a maximal cluster peak accuracy of 88%, whereas the classification accuracy on the basis of standard univariate analysis reached only 75%. Moreover, using support vector regression we could additionally predict the severity of negative symptoms from ventral striatal activation patterns. These results show that MVPA can be used to substantially increase the accuracy of diagnostic classification on the basis of task-related fMRI signal patterns in a regionally specific way. PMID:25799236
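
    The classification backbone of such an analysis is a cross-validated support vector machine on voxel-pattern features; a minimal sklearn sketch with synthetic "searchlight" features (real analyses extract these from spheres of fMRI voxels and correct for multiple comparisons):

        import numpy as np
        from sklearn.svm import SVC
        from sklearn.model_selection import StratifiedKFold, cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(9)
        n_subjects, n_voxels = 60, 33              # e.g. voxels in one searchlight sphere
        y = np.repeat([0, 1], n_subjects // 2)     # 0 = control, 1 = patient
        X = rng.normal(size=(n_subjects, n_voxels))
        X[y == 1, :5] += 0.8                       # group difference in a few voxels

        clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
        cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
        print("accuracy: %.0f%%" % (100 * cross_val_score(clf, X, y, cv=cv).mean()))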

  2. Estimation of genomic prediction accuracy from reference populations with varying degrees of relationship.

    PubMed

    Lee, S Hong; Clark, Sam; van der Werf, Julius H J

    2017-01-01

    Genomic prediction is emerging in a wide range of fields including animal and plant breeding, risk prediction in human precision medicine, and forensics. It is desirable to establish a theoretical framework for genomic prediction accuracy when the reference data consists of information sources with varying degrees of relationship to the target individuals. A reference set can contain both close and distant relatives as well as 'unrelated' individuals from the wider population in genomic prediction. The various sources of information were modeled as different populations with different effective population sizes (Ne). Both the effective number of chromosome segments (Me) and Ne are considered to be a function of the data used for prediction. We validate our theory with analyses of simulated as well as real data, and illustrate that the variation in genomic relationships with the target is a predictor of the information content of the reference set. With a similar amount of data available for each source, we show that close relatives can have a substantially larger effect on genomic prediction accuracy than less-related individuals. We also illustrate that when prediction relies on closer relatives, there is less improvement in prediction accuracy with an increase in training data or marker panel density. We release software that can estimate the expected prediction accuracy and power when combining different reference sources with various degrees of relationship to the target, which is useful when planning genomic prediction (before or after collecting data) in animal, plant and human genetics.
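
    The flavour of such theory is visible in the widely used Daetwyler-style approximation r = sqrt(N h^2 / (N h^2 + Me)): a reference set of close relatives implies a small effective number of segments Me, so accuracy is high but saturates quickly as N grows. The numbers below are purely illustrative, and the paper's own expressions additionally handle multiple reference sources:

        import numpy as np

        def expected_accuracy(n_ref, h2, m_e):
            """Daetwyler-style expected accuracy of genomic prediction."""
            return np.sqrt(n_ref * h2 / (n_ref * h2 + m_e))

        h2 = 0.5
        for m_e in (5e3, 5e4):                     # small Me ~ close relatives
            for n_ref in (1e3, 1e4, 1e5):
                print("Me=%.0e N=%.0e -> r=%.2f"
                      % (m_e, n_ref, expected_accuracy(n_ref, h2, m_e)))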

  3. Comparison of modal identification techniques using a hybrid-data approach

    NASA Technical Reports Server (NTRS)

    Pappa, Richard S.

    1986-01-01

    Modal identification of seemingly simple structures, such as the generic truss, is often surprisingly difficult in practice due to high modal density, nonlinearities, and other nonideal factors. Under these circumstances, different data analysis techniques can generate substantially different results. The initial application of a new hybrid-data method for studying the performance characteristics of various identification techniques with such data is summarized. This approach offers new pieces of information for the system identification researcher. First, it allows actual experimental data to be used in the studies, while maintaining the traditional advantage of using simulated data. That is, the identification technique under study is forced to cope with the complexities of real data, yet the performance can be measured unquestionably for the artificial modes because their true parameters are known. Secondly, the accuracy achieved for the true structural modes in the data can be estimated from the accuracy achieved for the artificial modes if the results show similar characteristics. This similarity occurred in the study, for example, for a weak structural mode near 56 Hz. It may even be possible, eventually, to use the error information from the artificial modes to improve the identification accuracy for the structural modes.

  4. Precisely and Accurately Inferring Single-Molecule Rate Constants

    PubMed Central

    Kinz-Thompson, Colin D.; Bailey, Nevette A.; Gonzalez, Ruben L.

    2017-01-01

    The kinetics of biomolecular systems can be quantified by calculating the stochastic rate constants that govern the biomolecular state versus time trajectories (i.e., state trajectories) of individual biomolecules. To do so, the experimental signal versus time trajectories (i.e., signal trajectories) obtained from observing individual biomolecules are often idealized to generate state trajectories by methods such as thresholding or hidden Markov modeling. Here, we discuss approaches for idealizing signal trajectories and calculating stochastic rate constants from the resulting state trajectories. Importantly, we provide an analysis of how the finite length of signal trajectories restrict the precision of these approaches, and demonstrate how Bayesian inference-based versions of these approaches allow rigorous determination of this precision. Similarly, we provide an analysis of how the finite lengths and limited time resolutions of signal trajectories restrict the accuracy of these approaches, and describe methods that, by accounting for the effects of the finite length and limited time resolution of signal trajectories, substantially improve this accuracy. Collectively, therefore, the methods we consider here enable a rigorous assessment of the precision, and a significant enhancement of the accuracy, with which stochastic rate constants can be calculated from single-molecule signal trajectories. PMID:27793280
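
    For the simplest case of exponentially distributed dwell times, the Bayesian rate estimate is analytic: with a Jeffreys prior, the posterior for the rate constant k given N dwells summing to T is Gamma(N, rate = T), so relative precision scales as 1/sqrt(N); a short sketch:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(10)
        k_true = 2.0                               # s^-1

        for n_dwells in (10, 100, 1000):
            dwells = rng.exponential(1.0 / k_true, size=n_dwells)
            post = stats.gamma(a=n_dwells, scale=1.0 / dwells.sum())  # posterior over k
            lo, hi = post.ppf([0.025, 0.975])
            print("N=%4d  k_hat=%.2f  95%% CI=(%.2f, %.2f)"
                  % (n_dwells, post.mean(), lo, hi))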

  5. Optical-fiber-based laser-induced breakdown spectroscopy for detection of early caries

    NASA Astrophysics Data System (ADS)

    Sasazawa, Shuhei; Kakino, Satoko; Matsuura, Yuji

    2015-06-01

    A laser-induced breakdown spectroscopy (LIBS) system intended for the in vivo analysis of tooth enamel is described. The system is designed to enable real-time analysis of teeth during laser dental treatment by utilizing a hollow optical fiber that transmits both Q-switched Nd:YAG laser light for LIBS and infrared Er:YAG laser light for tooth ablation. The sensitivity of caries detection was substantially improved by expanding the spectral region under analysis to ultraviolet (UV) light and by focusing on emission peaks of Zn in the UV region. Subsequently, early caries were distinguished from healthy teeth with accuracy rates above 80% in vitro.

  6. Improved performance of the laser guide star adaptive optics system at Lick Observatory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    An, J R; Avicola, K; Bauman, B J

    1999-07-20

    Results of experiments with the laser guide star adaptive optics system on the 3-meter Shane telescope at Lick Observatory have demonstrated a factor of 4 performance improvement over previous results. Stellar images recorded at a wavelength of 2 µm were corrected to over 40% of the theoretical diffraction-limited peak intensity. For the previous two years, this sodium-layer laser guide star system has corrected stellar images at this wavelength to ~10% of the theoretical peak intensity limit. After a campaign to improve the beam quality of the laser system, and to improve calibration accuracy and stability of the adaptive optics system using new techniques for phase retrieval and phase-shifting diffraction interferometry, the system performance has been substantially increased. The next step will be to use the Lick system for astronomical science observations, and to demonstrate this level of performance with the new system being installed on the 10-meter Keck II telescope.

  7. All-Digital Time-Domain CMOS Smart Temperature Sensor with On-Chip Linearity Enhancement.

    PubMed

    Chen, Chun-Chi; Chen, Chao-Lieh; Lin, Yi

    2016-01-30

    This paper proposes the first all-digital on-chip linearity enhancement technique for improving the accuracy of the time-domain complementary metal-oxide semiconductor (CMOS) smart temperature sensor. To facilitate on-chip application and intellectual property reuse, an all-digital time-domain smart temperature sensor was implemented using 90 nm Field Programmable Gate Arrays (FPGAs). Although the inverter-based temperature sensor has a smaller circuit area and lower complexity, two-point calibration must normally be used to achieve acceptable inaccuracy. With the help of a calibration circuit, the influence of process variations was greatly reduced to support one-point calibration, reducing test costs and time. However, the sensor response still exhibited a large curvature, which substantially affected the accuracy of the sensor. Thus, an on-chip linearity-enhanced circuit is proposed to linearize the curve and achieve a new linearity-enhanced output. The sensor was implemented on eight different Xilinx FPGAs, using 118 slices per sensor in each FPGA, to demonstrate the benefits of the linearization. Compared with the unlinearized version, the maximal inaccuracy of the linearized version decreased from 5 °C to 2.5 °C after one-point calibration over a range of -20 °C to 100 °C. The sensor consumed 95 μW at 1 kSa/s. The proposed linearity enhancement technique significantly improves temperature sensing accuracy, avoiding costly curvature compensation, and is fully synthesizable for future Very Large Scale Integration (VLSI) systems.
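
    The linearization step can be sketched in software as fitting a low-order inverse polynomial that maps the curved raw response back onto temperature; in the paper this is a fixed digital circuit, whereas the floating-point fit below is only an illustrative stand-in with assumed coefficients.

```python
import numpy as np

# One-time characterization: fit a quadratic mapping from the curved raw
# output to temperature, then apply it to linearize subsequent readings.

temp_ref = np.linspace(-20.0, 100.0, 25)             # reference temperatures, degC
raw = 1000.0 + 8.0 * temp_ref - 0.02 * temp_ref**2   # assumed curved sensor response

coeff = np.polyfit(raw, temp_ref, deg=2)             # inverse mapping: raw -> temp
linearized = np.polyval(coeff, raw)

print("residual nonlinearity:", np.max(np.abs(linearized - temp_ref)), "degC")
```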

  8. All-Digital Time-Domain CMOS Smart Temperature Sensor with On-Chip Linearity Enhancement

    PubMed Central

    Chen, Chun-Chi; Chen, Chao-Lieh; Lin, Yi

    2016-01-01

    This paper proposes the first all-digital on-chip linearity enhancement technique for improving the accuracy of the time-domain complementary metal-oxide semiconductor (CMOS) smart temperature sensor. To facilitate on-chip application and intellectual property reuse, an all-digital time-domain smart temperature sensor was implemented using 90 nm Field Programmable Gate Arrays (FPGAs). Although the inverter-based temperature sensor has a smaller circuit area and lower complexity, two-point calibration must normally be used to achieve acceptable inaccuracy. With the help of a calibration circuit, the influence of process variations was greatly reduced to support one-point calibration, reducing test costs and time. However, the sensor response still exhibited a large curvature, which substantially affected the accuracy of the sensor. Thus, an on-chip linearity-enhanced circuit is proposed to linearize the curve and achieve a new linearity-enhanced output. The sensor was implemented on eight different Xilinx FPGAs, using 118 slices per sensor in each FPGA, to demonstrate the benefits of the linearization. Compared with the unlinearized version, the maximal inaccuracy of the linearized version decreased from 5 °C to 2.5 °C after one-point calibration over a range of −20 °C to 100 °C. The sensor consumed 95 μW at 1 kSa/s. The proposed linearity enhancement technique significantly improves temperature sensing accuracy, avoiding costly curvature compensation, and is fully synthesizable for future Very Large Scale Integration (VLSI) systems. PMID:26840316

  9. Using Public Participation to Improve MELs Energy Data Collection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheung, Iris; Kloss, Margarita; Brown, Rich

    Miscellaneous Electric Loads (MELs) have proliferated in the last decade, and comprise an increasing share of building energy consumption. Because of the diversity of MELs and our lack of understanding about how people use them, large-scale data collection is needed to inform meaningful energy reduction strategies. Traditional methods of data collection, however, usually incur high labor and metering equipment expenses. As an alternative, this paper investigates the feasibility of crowdsourcing data collection to satisfy at least part of the data collection needs with acceptable accuracy. This study assessed the reliability and accuracy of crowdsourced data by recruiting over 20 volunteers (from the 2012 Lawrence Berkeley Lab Open House event) to test our crowdsourcing protocol. The protocol asked volunteers to perform the following tasks for three test products of increasing complexity: record power meter and product characteristics, identify all available power settings, and report the measured power. Based on our collected data and analysis, we concluded that volunteers performed reasonably well for devices with functionalities with which they are familiar, but might not produce highly accurate field measurements for complex devices. Accuracy will likely improve when participants measure the power used by devices in their own homes, which they know how to operate, and when they are given more specific instructions, including instructional videos. When integrated with existing programs such as the Home Energy Saver tool, crowdsourcing data collection from individual homeowners has the potential to generate a substantial amount of information about MELs energy use in homes.

  10. Accuracy of Genomic Prediction in Switchgrass (Panicum virgatum L.) Improved by Accounting for Linkage Disequilibrium

    PubMed Central

    Ramstein, Guillaume P.; Evans, Joseph; Kaeppler, Shawn M.; Mitchell, Robert B.; Vogel, Kenneth P.; Buell, C. Robin; Casler, Michael D.

    2016-01-01

    Switchgrass is a relatively high-yielding and environmentally sustainable biomass crop, but further genetic gains in biomass yield must be achieved to make it an economically viable bioenergy feedstock. Genomic selection (GS) is an attractive technology to generate rapid genetic gains in switchgrass and to meet the goal of substantially displacing petroleum use with biofuels in the near future. In this study, we empirically assessed prediction procedures for genomic selection in two different populations, consisting of 137 and 110 half-sib families of switchgrass, tested in two locations in the United States for three agronomic traits: dry matter yield, plant height, and heading date. Marker data were produced for the families’ parents by exome capture sequencing, generating up to 141,030 polymorphic markers with available genomic-location and annotation information. We evaluated prediction procedures that varied not only by learning schemes and prediction models, but also by the way the data were preprocessed to account for redundancy in marker information. More complex genomic prediction procedures were generally not significantly more accurate than the simplest procedure, likely due to limited population sizes. Nevertheless, a highly significant gain in prediction accuracy was achieved by transforming the marker data through a marker correlation matrix. Our results suggest that marker-data transformations and, more generally, accounting for linkage disequilibrium among markers offer valuable opportunities for improving prediction procedures in GS. Some of the achieved prediction accuracies should motivate implementation of GS in switchgrass breeding programs. PMID:26869619
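
    A sketch of the marker-correlation transformation under simulated data and an untuned ridge penalty: the centered marker matrix is post-multiplied by its marker-correlation matrix so that each predictor pools information from markers in linkage disequilibrium with it, and a ridge (GBLUP-like) model is then fit. Dimensions, data, and the penalty value are all assumptions for illustration.

```python
import numpy as np

# Transform markers through their correlation matrix (an LD-aware
# preprocessing step), then fit a ridge regression on the transformed data.

rng = np.random.default_rng(0)
n, p = 137, 2000                         # families x markers (illustrative sizes)
X = rng.integers(0, 3, (n, p)).astype(float)   # simulated allele dosages
y = X[:, :50].sum(axis=1) + rng.normal(size=n) # simulated trait

Xc = X - X.mean(axis=0)
R = np.corrcoef(Xc, rowvar=False)        # p x p marker correlation matrix
Xt = Xc @ R                              # LD-aware transformed predictors

lam = 1.0                                # ridge penalty (would be tuned by CV)
beta = np.linalg.solve(Xt.T @ Xt + lam * np.eye(p), Xt.T @ (y - y.mean()))
y_hat = y.mean() + Xt @ beta             # genomic predictions
```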

  11. Improvement of the GRACE star camera data based on the revision of the combination method

    NASA Astrophysics Data System (ADS)

    Bandikova, Tamara; Flury, Jakob

    2014-11-01

    The new release of the sensor and instrument data (Level-1B release 02) of the Gravity Recovery and Climate Experiment (GRACE) substantially improved the overall accuracy of the gravity field models. This implies that improvements at the sensor data level can still contribute significantly to approaching the GRACE baseline accuracy. A recent analysis of the GRACE star camera data (SCA1B RL02) revealed unexpectedly high noise. As the star camera (SCA) data are essential for the processing of the K-band ranging data and the accelerometer data, a thorough investigation of the data set was needed. We fully reexamined the SCA data processing from Level-1A to Level-1B, with focus on the method for combining the data delivered by the two SCA heads. In the first step, we produced and compared our own combined attitude solutions by applying two different combination methods to the SCA Level-1A data. The first method introduces information about the anisotropic accuracy of the star camera measurement in terms of a weighting matrix; this method was applied in the official processing as well. The alternative method merges only the well-determined SCA boresight directions, and was implemented on the GRACE SCA data for the first time. Both methods were expected to provide an optimal solution characterized by full accuracy about all three axes, which was confirmed. In the second step, we analyzed the differences between the official SCA1B RL02 data generated by the Jet Propulsion Laboratory (JPL) and our solution. SCA1B RL02 contains systematically higher noise, by about a factor of 3-4. The data analysis revealed that the reason is an incorrect implementation of algorithms in the JPL processing routines. After correct implementation of the combination method, significant improvement across the whole spectrum was achieved. Based on these results, official reprocessing of the SCA data is suggested, as the SCA attitude data are among the key observations needed for gravity field recovery.
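
    The weighting-matrix combination can be sketched for small-angle attitude errors: each head is accurate about axes perpendicular to its boresight and poor about the boresight itself, so fusing the two heads with anisotropic inverse-covariance weights recovers full accuracy about all three axes. The variances and measurements below are illustrative, and a real implementation works with quaternions rather than small-angle vectors.

```python
import numpy as np

# Weighted least-squares fusion of two small-angle attitude measurements
# (theta vectors, rad) from two heads with anisotropic accuracy.

def fuse(theta1, W1, theta2, W2):
    """Inverse-covariance weighted combination of two attitude estimates."""
    return np.linalg.solve(W1 + W2, W1 @ theta1 + W2 @ theta2)

# Head 1: accurate about x and y, poor about its boresight (z); head 2 the
# reverse after rotation into a common frame. Values are illustrative.
W1 = np.diag([1 / 1e-6, 1 / 1e-6, 1 / 1e-4])   # inverse variances, rad^-2
W2 = np.diag([1 / 1e-4, 1 / 1e-6, 1 / 1e-6])
theta1 = np.array([1e-5, -2e-5, 8e-4])
theta2 = np.array([9e-4, -1e-5, 2e-5])
print(fuse(theta1, W1, theta2, W2))             # full accuracy about all axes
```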

  12. Comparison of computer systems and ranking criteria for automatic melanoma detection in dermoscopic images.

    PubMed

    Møllersen, Kajsa; Zortea, Maciel; Schopf, Thomas R; Kirchesch, Herbert; Godtliebsen, Fred

    2017-01-01

    Melanoma is the deadliest form of skin cancer, and early detection is crucial for patient survival. Computer systems can assist in melanoma detection, but are not widespread in clinical practice. In 2016, an open challenge in classification of dermoscopic images of skin lesions was announced. A training set of 900 images with corresponding class labels and semi-automatic/manual segmentation masks was released for the challenge. An independent test set of 379 images, of which 75 were of melanomas, was used to rank the participants. This article demonstrates the impact of ranking criteria, segmentation method and classifier, and highlights the clinical perspective. We compare five different measures of diagnostic accuracy by analysing the resulting ranking of the computer systems in the challenge. The choice of performance measure had a great impact on the ranking: systems ranked among the top three for one measure dropped to the bottom half when the performance measure was changed. Nevus Doctor, a computer system previously developed by the authors, was used to participate in the challenge and to investigate the impact of segmentation and classifier. The diagnostic accuracy when using an automatic versus the semi-automatic/manual segmentation was investigated. The unexpectedly small impact of segmentation method suggests that improving the automatic segmentation method's resemblance to semi-automatic/manual segmentation will not improve diagnostic accuracy substantially. A small set of similar classification algorithms was used to investigate the impact of classifier on diagnostic accuracy. The variability in diagnostic accuracy across classifier algorithms was larger than across segmentation methods, suggesting a focus for future investigations. From a clinical perspective, the misclassification of a melanoma as benign has far greater cost than the misclassification of a benign lesion. For computer systems to have clinical impact, their performance should be ranked by a high-sensitivity measure.

  13. Positional reference system for ultraprecision machining

    DOEpatents

    Arnold, Jones B.; Burleson, Robert R.; Pardue, Robert M.

    1982-01-01

    A stable positional reference system for use in improving the cutting tool-to-part contour position in numerical controlled-multiaxis metal turning machines is provided. The reference system employs a plurality of interferometers referenced to orthogonally disposed metering bars which are substantially isolated from machine strain induced position errors for monitoring the part and tool positions relative to the metering bars. A microprocessor-based control system is employed in conjunction with the plurality of position interferometers and part contour description data inputs to calculate error components for each axis of movement and output them to corresponding axis drives with appropriate scaling and error compensation. Real-time position control, operating in combination with the reference system, makes possible the positioning of the cutting points of a tool along a part locus with a substantially greater degree of accuracy than has been attained previously in the art by referencing and then monitoring only the tool motion relative to a reference position located on the machine base.

  14. Positional reference system for ultraprecision machining

    DOEpatents

    Arnold, J.B.; Burleson, R.R.; Pardue, R.M.

    1980-09-12

    A stable positional reference system for use in improving the cutting tool-to-part contour position in numerical controlled-multiaxis metal turning machines is provided. The reference system employs a plurality of interferometers referenced to orthogonally disposed metering bars which are substantially isolated from machine strain induced position errors for monitoring the part and tool positions relative to the metering bars. A microprocessor-based control system is employed in conjunction with the plurality of position interferometers and part contour description data inputs to calculate error components for each axis of movement and output them to corresponding axis drives with appropriate scaling and error compensation. Real-time position control, operating in combination with the reference system, makes possible the positioning of the cutting points of a tool along a part locus with a substantially greater degree of accuracy than has been attained previously in the art by referencing and then monitoring only the tool motion relative to a reference position located on the machine base.

  15. Adaptive subdomain modeling: A multi-analysis technique for ocean circulation models

    NASA Astrophysics Data System (ADS)

    Altuntas, Alper; Baugh, John

    2017-07-01

    Many coastal and ocean processes of interest operate over large temporal and geographical scales and require a substantial amount of computational resources, particularly when engineering design and failure scenarios are also considered. This study presents an adaptive multi-analysis technique that improves the efficiency of these computations when multiple alternatives are being simulated. The technique, called adaptive subdomain modeling, concurrently analyzes any number of child domains, with each instance corresponding to a unique design or failure scenario, in addition to a full-scale parent domain providing the boundary conditions for its children. To contain the altered hydrodynamics originating from the modifications, the spatial extent of each child domain is adaptively adjusted during runtime depending on the response of the model. The technique is incorporated in ADCIRC++, a re-implementation of the popular ADCIRC ocean circulation model with an updated software architecture designed to facilitate this adaptive behavior and to utilize concurrent executions of multiple domains. The results of our case studies confirm that the method substantially reduces computational effort while maintaining accuracy.

  16. A new capacitive long-range displacement nanometer sensor with differential sensing structure based on time-grating

    NASA Astrophysics Data System (ADS)

    Yu, Zhicheng; Peng, Kai; Liu, Xiaokang; Pu, Hongji; Chen, Ziran

    2018-05-01

    High-precision displacement sensors, which can measure large displacements with nanometer resolution, are key components in many ultra-precision fabrication machines. In this paper, a new capacitive nanometer displacement sensor with a differential sensing structure is proposed for long-range linear displacement measurements, based on an approach known as time-grating. Analytical models established using electric field coupling theory and an area integral method indicate that common-mode interference will result in a first-harmonic error in the measurement results. To reduce the common-mode interference, the proposed sensor design employs a differential sensing structure, which adopts a second group of induction electrodes spatially separated from the first group by a half-pitch length. Experimental results based on a prototype sensor demonstrate that the measurement accuracy and stability of the sensor are substantially improved after adopting the differential sensing structure. The prototype achieves a measurement accuracy of ±200 nm over its full 200 mm measurement range.
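
    The differential principle is easy to state in a few lines: the second electrode group, shifted by half a pitch, sees the spatial signal inverted but the common-mode interference unchanged, so subtraction cancels the interference and doubles the useful signal. Pitch and coupling values below are illustrative.

```python
import numpy as np

# Differential sensing sketch: two induction signals a half-pitch apart
# share the same common-mode term, which cancels on subtraction.

P = 1.0                                   # electrode pitch, mm (assumed)
x = np.linspace(0.0, 5 * P, 1000)         # displacement
common_mode = 0.3                         # parasitic coupling seen by both groups

s1 = np.sin(2 * np.pi * x / P) + common_mode
s2 = np.sin(2 * np.pi * (x + P / 2) / P) + common_mode   # half-pitch shift inverts sign

diff = s1 - s2                            # = 2*sin(2*pi*x/P); common mode removed
```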

  17. Investigation on Beam-Blocker-Based Scatter Correction Method for Improving CT Number Accuracy

    NASA Astrophysics Data System (ADS)

    Lee, Hoyeon; Min, Jonghwan; Lee, Taewon; Pua, Rizza; Sabir, Sohail; Yoon, Kown-Ha; Kim, Hokyung; Cho, Seungryong

    2017-03-01

    Cone-beam computed tomography (CBCT) is gaining widespread use in various medical and industrial applications, but suffers from a substantially larger amount of scatter than conventional diagnostic CT, resulting in relatively poor image quality. Various methods that can reduce and/or correct for the scatter in CBCT have therefore been developed. Scatter correction using a beam-blocker is considered a direct measurement-based approach, providing accurate scatter estimation from the data in the shadows of the beam-blocker. To the best of our knowledge, however, no report has addressed the significance of the scatter originating from the beam-blocker itself in such correction methods. In this paper, we identified the scatter from the beam-blocker that is detected in object-free projection data, investigated its influence on the accuracy of reconstructed CBCT images, and developed a scatter correction scheme that accounts for this scatter as well as the scatter from the scanned object.
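
    A sketch of the basic measurement-based correction (before accounting for the blocker's own scatter, which is this paper's refinement): pixels behind the blocker strips record scatter only, so scatter across the open pixels is estimated by interpolation and subtracted. The projection row and shadow mask below are hypothetical.

```python
import numpy as np

# Estimate the scatter field from shadow pixels and subtract it from the
# projection. A refined scheme, as proposed above, would also subtract the
# scatter contributed by the beam-blocker itself.

proj = np.random.default_rng(2).uniform(0.4, 1.0, 512)   # one detector row
shadow = np.zeros(512, dtype=bool)
shadow[::32] = True                      # pixels behind blocker strips (assumed)

idx = np.arange(proj.size)
scatter = np.interp(idx, idx[shadow], proj[shadow])      # smooth scatter estimate
primary = np.clip(proj - scatter, 0.0, None)             # scatter-corrected signal
```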

  18. An empirical study of scanner system parameters

    NASA Technical Reports Server (NTRS)

    Landgrebe, D.; Biehl, L.; Simmons, W.

    1976-01-01

    The selection of the current combination of parametric values (instantaneous field of view, number and location of spectral bands, signal-to-noise ratio, etc.) of a multispectral scanner is a complex problem due to the strong interrelationships these parameters have with one another. The study was done with the proposed scanner known as the Thematic Mapper in mind. Since an adequate theoretical procedure for this problem has apparently not yet been devised, an empirical simulation approach was used, with candidate parameter values selected by heuristic means. The results obtained using a conventional maximum likelihood pixel classifier suggest that although classification accuracy declines slightly as the IFOV is decreased, this is more than offset by improved mensuration accuracy. Further, the use of a classifier involving both spatial and spectral features shows a very substantial tendency to resist degradation as the signal-to-noise ratio is decreased. Finally, further evidence is provided of the importance of having at least one spectral band in each of the major available portions of the optical spectrum.

  19. Estimating commercial property prices: an application of cokriging with housing prices as ancillary information

    NASA Astrophysics Data System (ADS)

    Montero-Lorenzo, José-María; Larraz-Iribas, Beatriz; Páez, Antonio

    2009-12-01

    A vast majority of the recent literature on spatial hedonic analysis has been concerned with residential property values, with only very few studies focused on commercial property prices. The dearth of studies can be attributed to some of the challenges faced in the analysis of commercial properties, in particular the scarcity of information compared to residential transactions. To address this issue, in this paper we propose the use of cokriging, with housing prices as ancillary information, to estimate commercial property prices. Cokriging takes into account the spatial autocorrelation structure of property prices, and the use of more abundant information on housing prices helps to improve the accuracy of property value estimates. A case study of Toledo, Spain, a city for which commercial activity stemming from tourism is a key element of the economy, demonstrates that substantial accuracy and precision gains can be obtained from the use of cokriging.
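
    For reference, the ordinary cokriging estimator that underlies this approach can be written as below; this is the standard textbook formulation with assumed notation, not an equation reproduced from the paper.

```latex
% Ordinary cokriging of the commercial price Z_1 at location s_0, using
% housing prices Z_2 as ancillary data; the weights minimize the estimation
% variance subject to the unbiasedness constraints, given the direct and
% cross-(co)variograms of the two price variables.
\hat{Z}_1(s_0) = \sum_{\alpha=1}^{n_1} \lambda_\alpha Z_1(s_\alpha)
               + \sum_{\beta=1}^{n_2} \mu_\beta Z_2(s_\beta),
\qquad
\sum_{\alpha=1}^{n_1} \lambda_\alpha = 1,
\qquad
\sum_{\beta=1}^{n_2} \mu_\beta = 0.
```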

  20. Semilocal Exchange Energy Functional for Two-Dimensional Quantum Systems: A Step Beyond Generalized Gradient Approximations.

    PubMed

    Jana, Subrata; Samal, Prasanjit

    2017-06-29

    Semilocal density functionals for the exchange-correlation energy of electrons are extensively used as they produce realistic and accurate results for finite and extended systems. The choice of techniques plays a crucial role in constructing such functionals of improved accuracy and efficiency. An accurate and efficient semilocal exchange energy functional in two dimensions is constructed by making use of the corresponding hole which is derived based on the density matrix expansion. The exchange hole involved is localized under the generalized coordinate transformation and satisfies all the relevant constraints. Comprehensive testing and excellent performance of the functional is demonstrated versus exact exchange results. The accuracy of results obtained by using the newly constructed functional is quite remarkable as it substantially reduces the errors present in the local and nonempirical exchange functionals proposed so far for two-dimensional quantum systems. The underlying principles involved in the functional construction are physically appealing and hold promise for developing range separated and nonlocal exchange functionals in two dimensions.

  1. Optical signal processing using photonic reservoir computing

    NASA Astrophysics Data System (ADS)

    Salehi, Mohammad Reza; Dehyadegari, Louiza

    2014-10-01

    As a new approach to recognition and classification problems, photonic reservoir computing has such advantages as parallel information processing, power efficiency, and high speed. In this paper, a photonic structure for reservoir computing is proposed and investigated using a simple, yet non-trivial, noisy time-series prediction task. The study applies a suitable topology with self-feedbacks in a network of SOAs, which lends the system a strong memory, and adjusts adequate parameters, resulting in perfect recognition accuracy (100%) for noise-free time series, a 3% improvement over previous results. For the classification of noisy time series, accuracy increased by 4%, to 96%. Furthermore, an analytical approach to solving the rate equations was suggested, leading to a substantial decrease in simulation time, an important consideration in the classification of large signals such as speech, with better results than in previous works.
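
    An electronic analogue of the photonic reservoir can be sketched as a minimal echo-state network: a fixed random recurrent layer with feedback supplies memory, and only a linear readout is trained. Network size, scaling, and the prediction task are illustrative assumptions, not the SOA model from the paper.

```python
import numpy as np

# Minimal echo-state-style reservoir: fixed random recurrent dynamics with
# self-feedback, ridge-regression readout trained for one-step prediction.

rng = np.random.default_rng(0)
N, T = 100, 2000
u = np.sin(np.arange(T) / 10.0) + 0.05 * rng.normal(size=T)   # noisy series

W_in = rng.uniform(-0.5, 0.5, N)
W = rng.normal(size=(N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # echo-state spectral scaling

x = np.zeros(N)
states = np.empty((T, N))
for t in range(T):
    x = np.tanh(W @ x + W_in * u[t])              # feedback gives the memory
    states[t] = x

X, y = states[:-1], u[1:]                         # predict the next sample
w_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ y)
pred = states[-1] @ w_out                         # one-step-ahead prediction
```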

  2. Ultra-precise micro-motion stage for optical scanning test

    NASA Astrophysics Data System (ADS)

    Chen, Wen; Zhang, Jianhuan; Jiang, Nan

    2009-05-01

    This study addresses the application of optical sensing technology in a 2D flexible hinge test stage. Optical fiber sensors exploit the unique properties of optical fiber, such as good electrical insulation, immunity to electromagnetic disturbance, sparkless operation, and availability in flammable and explosive environments, and offer high accuracy, wide dynamic range, and good repeatability. Such a sensor is applied here to a 2D flexible hinge stage driven by PZT actuators. Several micro-bending structures were designed utilizing the characteristics of the flexible hinge stage, and through experiments the optimal micro-bending tooth structure and the usable displacement range of the sensor under this structure were derived. These experiments demonstrate that applying an optical fiber displacement sensor to a PZT-driven 2D flexible hinge stage substantially broadens the dynamic testing range and improves the sensitivity of the apparatus. Driving accuracy and positioning stability are enhanced as well.

  3. Impact of point-of-care implementation of Xpert® MTB/RIF: product vs. process innovation.

    PubMed

    Schumacher, S G; Thangakunam, B; Denkinger, C M; Oliver, A A; Shakti, K B; Qin, Z Z; Michael, J S; Luo, R; Pai, M; Christopher, D J

    2015-09-01

    Both product innovation (e.g., more sensitive tests) and process innovation (e.g., a point-of-care [POC] testing programme) could improve patient outcomes. To study the respective contributions of product and process innovation in improving patient outcomes, we implemented a POC programme using Xpert® MTB/RIF in an out-patient clinic of a tertiary care hospital in India. We measured the impact of process innovation by comparing time to diagnosis with routine testing vs. POC testing, and the impact of product innovation by comparing accuracy and time to diagnosis using smear microscopy vs. POC Xpert. We enrolled 1012 patients over a 15-month period. Xpert had high accuracy, but the incremental value of one Xpert over two smears was only 6% (95%CI 3-12). Implementing Xpert as a routine laboratory test did not reduce the time to diagnosis compared to smear-based diagnosis. In contrast, the POC programme reduced the time to diagnosis by 5.5 days (95%CI 4.3-6.7), but required dedicated staff and substantial adaptation of the clinic workflow. Process innovation by way of a POC Xpert programme had a greater impact on time to diagnosis than the product per se, and can yield important improvements in patient care that are complementary to those achieved by introducing innovative technologies.

  4. Deconvolution improves the accuracy and depth sensitivity of time-resolved measurements

    NASA Astrophysics Data System (ADS)

    Diop, Mamadou; St. Lawrence, Keith

    2013-03-01

    Time-resolved (TR) techniques have the potential to distinguish early- from late-arriving photons. Since light travelling through superficial tissue is detected earlier than photons that penetrate the deeper layers, time-windowing can in principle be used to improve the depth sensitivity of TR measurements. However, TR measurements also contain instrument contributions - referred to as the instrument response function (IRF) - which cause temporal broadening of the measured temporal point spread function (TPSF). In this report, we investigate the influence of the IRF on pathlength-resolved absorption changes (Δμa) retrieved from TR measurements using the microscopic Beer-Lambert law (MBLL). TPSFs were acquired on homogeneous and two-layer tissue-mimicking phantoms with varying optical properties. The measured IRF and TPSFs were deconvolved to recover the distributions of times-of-flight (DTOFs) of the detected photons. The MBLL was applied to early and late time-windows of the TPSFs and DTOFs to assess the effects of the IRF on pathlength-resolved Δμa. The analysis showed that the late part of the TPSFs contains substantial contributions from early-arriving photons, due to the smearing effects of the IRF, which reduces its sensitivity to absorption changes occurring in deep layers. We also demonstrated that the effects of the IRF can be efficiently eliminated by applying a robust deconvolution technique, thereby improving the accuracy and sensitivity of TR measurements to deep-tissue absorption changes.
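
    A sketch of one standard way to remove the IRF, assuming a Wiener-style regularized Fourier deconvolution; the authors' exact deconvolution scheme may differ, and the regularization constant below is an assumption.

```python
import numpy as np

# Recover a DTOF from a measured TPSF by regularized Fourier deconvolution
# of the instrument response function (IRF).

def deconvolve(tpsf, irf, eps=1e-3):
    n = len(tpsf)
    H = np.fft.rfft(irf, n)
    Y = np.fft.rfft(tpsf, n)
    # Wiener-style inverse filter: damps frequencies where the IRF has
    # little power, instead of amplifying noise by direct division.
    dtof = np.fft.irfft(Y * np.conj(H) / (np.abs(H) ** 2 + eps), n)
    return np.clip(dtof, 0.0, None)   # photon counts are non-negative
```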

  5. Increased functional connectivity between cortical hand areas and praxis network associated with training-related improvements in non-dominant hand precision drawing.

    PubMed

    Philip, Benjamin A; Frey, Scott H

    2016-07-01

    Chronic forced use of the non-dominant left hand yields substantial improvements in the precision and quality of writing and drawing. These changes may arise from increased access by the non-dominant (right) hemisphere to dominant (left) hemisphere mechanisms specialized for end-point precision control. To evaluate this prediction, 22 healthy right-handed adults underwent resting state functional connectivity (FC) MRI scans before and after 10 days of training on a left hand precision drawing task. 89% of participants significantly improved left hand speed, accuracy, and smoothness. Smoothness gains were specific to the trained left hand and persistent: 6 months after training, 71% of participants exhibited above-baseline movement smoothness. Contrary to expectations, we found no evidence of increased FC between right and left hemisphere hand areas. Instead, training-related improvements in left hand movement smoothness were associated with increased FC between both sensorimotor hand areas and a left-lateralized parieto-prefrontal network implicated in manual praxis. By contrast, skill retention at 6 months was predicted by changes including decreased FC between the representation of the trained left hand and bilateral sensorimotor, parietal, and premotor cortices, possibly reflecting consolidation and a disengagement of early learning processes. These data indicate that modest amounts of training (<200 min total) can induce substantial, persistent improvements in the precision and quality of non-dominant hand control in healthy adults, supported by strengthened connectivity between bilateral sensorimotor hand areas and a left-lateralized parieto-prefrontal praxis network. Copyright © 2016 Elsevier Ltd. All rights reserved.

  6. Scale-adaptive compressive tracking with feature integration

    NASA Astrophysics Data System (ADS)

    Liu, Wei; Li, Jicheng; Chen, Xiao; Li, Shuxin

    2016-05-01

    Numerous tracking-by-detection methods have been proposed for robust visual tracking, among which compressive tracking (CT) has obtained some promising results. A scale-adaptive CT method based on multi-feature integration is presented to improve the robustness and accuracy of CT. We introduce a keypoint-based model to achieve accurate scale estimation, which additionally gives a prior location of the target. Furthermore, owing to the high efficiency of the data-independent random projection matrix, multiple features are integrated into an effective appearance model used to construct the naïve Bayes classifier. Finally, an adaptive update scheme is proposed to update the classifier conservatively. Experiments on various challenging sequences demonstrate substantial improvements by our proposed tracker over CT and other state-of-the-art trackers in terms of dealing with scale variation, abrupt motion, deformation, and illumination changes.
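
    The data-independent compression at the heart of CT can be sketched with a very sparse random projection that approximately preserves distances, so several feature types can be stacked and compressed cheaply before the naïve Bayes classifier. Dimensions and sparsity below are illustrative assumptions.

```python
import numpy as np

# Achlioptas-style sparse random projection: entries are +/-sqrt(s) with
# probability 1/(2s) each and 0 otherwise, so the matrix is data-independent
# and mostly zero, making the projection cheap to apply per frame.

rng = np.random.default_rng(0)
d, k, s = 10_000, 50, 3                  # input dim, compressed dim, sparsity
R = rng.choice([np.sqrt(s), 0.0, -np.sqrt(s)], size=(k, d),
               p=[1 / (2 * s), 1 - 1 / s, 1 / (2 * s)])

# Two stacked feature types (stand-ins for, e.g., Haar-like and color cues).
features = np.concatenate([rng.random(5000), rng.random(5000)])
compressed = R @ features                # low-dimensional classifier input
```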

  7. School-based social skills training for preschool-age children with autism spectrum disorder.

    PubMed

    Radley, Keith C; Hanglein, Jeanine; Arak, Marisa

    2016-11-01

    Individuals with autism spectrum disorder display impairments in social interactions and communication that appear at early ages and result in short- and long-term negative outcomes. As such, there is a need for effective social skills training programs for young children with autism spectrum disorder-particularly interventions capable of being delivered in educational settings. The study evaluated the effects of the Superheroes Social Skills program on accurate demonstration of social skills in young children with autism spectrum disorder. Two preschool-age children with autism spectrum disorder participated in a weekly social skills intervention. A multiple probe design across skills was used to determine the effects of the intervention. Both participants demonstrated substantial improvements in skill accuracy. Social skills checklists also indicated improvements in social functioning over baseline levels. © The Author(s) 2016.

  8. Fast forward kinematics algorithm for real-time and high-precision control of the 3-RPS parallel mechanism

    NASA Astrophysics Data System (ADS)

    Wang, Yue; Yu, Jingjun; Pei, Xu

    2018-06-01

    A new forward kinematics algorithm for the mechanism of 3-RPS (R: Revolute; P: Prismatic; S: Spherical) parallel manipulators is proposed in this study. This algorithm is primarily based on the special geometric conditions of the 3-RPS parallel mechanism, and it eliminates the errors produced by parasitic motions to improve and ensure accuracy. Specifically, the errors can be less than 10^-6. With this method, only the group of solutions that is consistent with the actual situation of the platform is obtained, and it is obtained rapidly. This algorithm substantially improves calculation efficiency because the selected initial values are reasonable and all the formulas in the calculation are analytical. This novel forward kinematics algorithm is well suited for real-time and high-precision control of the 3-RPS parallel mechanism.

  9. Super-resolution mapping using multi-viewing CHRIS/PROBA data

    NASA Astrophysics Data System (ADS)

    Dwivedi, Manish; Kumar, Vinay

    2016-04-01

    High-spatial-resolution Remote Sensing (RS) data provide detailed information which enables high-definition visual image analysis of earth surface features, and support improved information extraction capabilities at a fine scale. To improve the spatial resolution of coarser-resolution RS data, Super Resolution Reconstruction (SRR) techniques operating on multi-angular image sequences have become widely acknowledged. In this study, multi-angle CHRIS/PROBA data of the Kutch area are used for SR image reconstruction to enhance the spatial resolution from 18 m to 6 m, in the hope of obtaining a better land cover classification. Various SR approaches were chosen for this study: Projection onto Convex Sets (POCS), Robust, Iterative Back Projection (IBP), Non-Uniform Interpolation, and Structure-Adaptive Normalized Convolution (SANC). Subjective assessment through visual interpretation shows substantial improvement in land cover details. Quantitative measures, including peak signal-to-noise ratio and structural similarity, were used for the evaluation of image quality. The SANC SR technique, using the Vandewalle algorithm for low-resolution image registration, outperformed the other techniques. An SVM-based classifier was then used to classify the SRR data and data resampled to 6 m spatial resolution using bi-cubic interpolation. A comparative analysis between the classified bicubic-interpolated and SR-derived images of CHRIS/PROBA showed that the SR-derived classified data yield a significant improvement of 10-12% in overall accuracy. The results demonstrate that SR methods are able to improve the spatial detail of multi-angle images as well as the classification accuracy.

  10. The Accuracy of Student Self-Assessments of English-Chinese Bidirectional Interpretation: A Longitudinal Quantitative Study

    ERIC Educational Resources Information Center

    Han, Chao; Riazi, Mehdi

    2018-01-01

    The accuracy of self-assessment has long been examined empirically in higher education research, producing a substantial body of literature that casts light on numerous potential moderators. However, despite the growing popularity of self-assessment in interpreter training and education, very limited evidence-based research has been initiated to…

  11. SU-E-T-329: Dosimetric Impact of Implementing Metal Artifact Reduction Methods and Metal Energy Deposition Kernels for Photon Dose Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, J; Followill, D; Howell, R

    2015-06-15

    Purpose: To investigate two strategies for reducing dose calculation errors near metal implants: use of CT metal artifact reduction methods and implementation of metal-based energy deposition kernels in the convolution/superposition (C/S) method. Methods: Radiochromic film was used to measure the dose upstream and downstream of titanium and Cerrobend implants. To assess the dosimetric impact of metal artifact reduction methods, dose calculations were performed using baseline, uncorrected images and using three metal artifact reduction methods: Philips O-MAR, GE's monochromatic gemstone spectral imaging (GSI) using dual-energy CT, and GSI imaging with metal artifact reduction software applied (MARs). To assess the impact of metal kernels, titanium and silver kernels were implemented into a commercial collapsed cone C/S algorithm. Results: The CT artifact reduction methods were more successful for titanium than for Cerrobend. Interestingly, for beams traversing the metal implant, we found that errors in the dimensions of the metal in the CT images were more important for dose calculation accuracy than reduction of imaging artifacts. The MARs algorithm caused a distortion in the shape of the titanium implant that substantially worsened the calculation accuracy. In comparison to water kernel dose calculations, metal kernels resulted in better modeling of the increased backscatter dose at the upstream interface but decreased accuracy directly downstream of the metal. We also found that the success of metal kernels was dependent on dose grid size, with smaller calculation voxels giving better accuracy. Conclusion: Our study yielded mixed results, with neither the metal artifact reduction methods nor the metal kernels being globally effective at improving dose calculation accuracy. However, some successes were observed. The MARs algorithm decreased errors downstream of Cerrobend by a factor of two, and metal kernels resulted in more accurate backscatter dose upstream of metals. Thus, these two strategies do have the potential to improve accuracy for patients with metal implants in certain scenarios. This work was supported by Public Health Service grants CA 180803 and CA 10953 awarded by the National Cancer Institute, United States Department of Health and Human Services, and in part by Mobius Medical Systems.

  12. Exploring Capabilities of SENTINEL-2 for Vegetation Mapping Using Random Forest

    NASA Astrophysics Data System (ADS)

    Saini, R.; Ghosh, S. K.

    2018-04-01

    Accurate vegetation mapping is essential for monitoring crops and sustainable agricultural practice. This study explores the capabilities of Sentinel-2 data relative to Landsat-8 Operational Land Imager (OLI) data for vegetation mapping. Two combinations of the Sentinel-2 dataset were considered: the first is a 4-band dataset at 10 m resolution consisting of the NIR, R, G and B bands, while the second is generated by stacking the four 10 m bands along with six other bands sharpened using the Gram-Schmidt algorithm. For the Landsat-8 OLI dataset, six multispectral bands were pan-sharpened to a spatial resolution of 15 m using the Gram-Schmidt algorithm. Random Forest (RF) and the Maximum Likelihood classifier (MLC) were selected for classification of the images. The overall accuracy achieved by RF for the 4-band and 10-band Sentinel-2 datasets and Landsat-8 OLI was 88.38%, 90.05% and 86.68%, respectively, while MLC gave overall accuracies of 85.12%, 87.14% and 83.56% for the same datasets. The results show that the 10-band Sentinel-2 dataset gives the highest accuracy, a rise of 3.37% for RF and 3.58% for MLC compared to Landsat-8 OLI. All classes show significant improvement in accuracy, but a major rise is observed for Sugarcane, Wheat and Fodder with the 10-band Sentinel imagery. This study substantiates the fact that Sentinel-2 data can be utilized for mapping vegetation with a good degree of accuracy compared to Landsat-8 OLI, specifically when the objective is to map a subclass of vegetation.
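
    A sketch of the classification step with scikit-learn, assuming the 10-band Sentinel-2 stack has already been flattened to an (n_pixels, 10) array with reference labels; file handling, preprocessing, and the forest size are assumptions outside the scope of this illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Random Forest classification of a band stack; the arrays below are
# random stand-ins for real reflectance values and reference labels.

rng = np.random.default_rng(0)
X = rng.random((5000, 10))                   # (n_pixels, n_bands) band stack
y = rng.integers(0, 5, 5000)                 # reference class labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
rf = RandomForestClassifier(n_estimators=500, random_state=0)
rf.fit(X_tr, y_tr)
print("overall accuracy:", accuracy_score(y_te, rf.predict(X_te)))
```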

  13. The role of computerized diagnostic proposals in the interpretation of the 12-lead electrocardiogram by cardiology and non-cardiology fellows.

    PubMed

    Novotny, Tomas; Bond, Raymond; Andrsova, Irena; Koc, Lumir; Sisakova, Martina; Finlay, Dewar; Guldenring, Daniel; Spinar, Jindrich; Malik, Marek

    2017-05-01

    Most contemporary 12-lead electrocardiogram (ECG) devices offer computerized diagnostic proposals, but the reliability of these automated diagnoses is limited, and it has been suggested that incorrect computer advice can influence physician decision-making. This study analyzed the role of diagnostic proposals in the decision process of a group of fellows in cardiology and other internal medicine subspecialties. A set of 100 clinical 12-lead ECG tracings was selected, covering both normal cases and common abnormalities. A team of 15 junior Cardiology Fellows and 15 Non-Cardiology Fellows interpreted the ECGs in 3 phases: without any diagnostic proposal, with a single diagnostic proposal (half of them intentionally incorrect), and with four diagnostic proposals (only one of them correct) for each ECG. Self-rated confidence for each interpretation was collected. Availability of diagnostic proposals significantly increased the diagnostic accuracy (p<0.001). Nevertheless, in the case of a single proposal (either correct or incorrect), accuracy increased for interpretations with correct diagnostic proposals, while it was substantially reduced with incorrect proposals. Confidence levels correlated poorly with interpretation scores (rho≈0.2, p<0.001). Logistic regression showed that an interpreter is most likely to be correct when the ECG offers a correct diagnostic proposal (OR=10.87) or multiple proposals (OR=4.43). Diagnostic proposals affect the diagnostic accuracy of ECG interpretations, and the accuracy is especially influenced when a single diagnostic proposal (either correct or incorrect) is provided. The study suggests that the presentation of multiple computerized diagnoses is likely to improve the diagnostic accuracy of interpreters. Copyright © 2017 Elsevier B.V. All rights reserved.

  14. Multivariate pattern analysis for MEG: A comparison of dissimilarity measures.

    PubMed

    Guggenmos, Matthias; Sterzer, Philipp; Cichy, Radoslaw Martin

    2018-06-01

    Multivariate pattern analysis (MVPA) methods such as decoding and representational similarity analysis (RSA) are growing rapidly in popularity for the analysis of magnetoencephalography (MEG) data. However, little is known about the relative performance and characteristics of the specific dissimilarity measures used to describe differences between evoked activation patterns. Here we used a multisession MEG data set to qualitatively characterize a range of dissimilarity measures and to quantitatively compare them with respect to decoding accuracy (for decoding) and between-session reliability of representational dissimilarity matrices (for RSA). We tested dissimilarity measures from a range of classifiers (Linear Discriminant Analysis - LDA, Support Vector Machine - SVM, Weighted Robust Distance - WeiRD, Gaussian Naïve Bayes - GNB) and distances (Euclidean distance, Pearson correlation). In addition, we evaluated three key processing choices: 1) preprocessing (noise normalisation, removal of the pattern mean), 2) weighting decoding accuracies by decision values, and 3) computing distances in three different partitioning schemes (non-cross-validated, cross-validated, within-class-corrected). Four main conclusions emerged from our results. First, appropriate multivariate noise normalization substantially improved decoding accuracies and the reliability of dissimilarity measures. Second, LDA, SVM and WeiRD yielded high peak decoding accuracies and nearly identical time courses. Third, while using decoding accuracies for RSA was markedly less reliable than continuous distances, this disadvantage was ameliorated by decision-value-weighting of decoding accuracies. Fourth, the cross-validated Euclidean distance provided unbiased distance estimates and highly replicable representational dissimilarity matrices. Overall, we strongly advise the use of multivariate noise normalisation as a general preprocessing step, recommend LDA, SVM and WeiRD as classifiers for decoding and highlight the cross-validated Euclidean distance as a reliable and unbiased default choice for RSA. Copyright © 2018 Elsevier Inc. All rights reserved.
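
    The recommended cross-validated Euclidean distance can be sketched in a few lines: the condition difference is estimated independently in two data partitions and their inner product is taken, so noise that is independent across partitions averages out and the estimate is unbiased (zero in expectation when the conditions do not differ). The patterns below are simulated.

```python
import numpy as np

# Cross-validated (squared) Euclidean distance between two condition
# patterns, using independent estimates from two data partitions.

def cv_euclidean(a1, b1, a2, b2):
    """a*, b*: condition patterns (n_channels,) from partitions 1 and 2."""
    return (a1 - b1) @ (a2 - b2)

rng = np.random.default_rng(0)
true_a, true_b = rng.normal(size=64), rng.normal(size=64)
noisy = lambda p: p + rng.normal(scale=1.0, size=64)   # measurement noise

d = cv_euclidean(noisy(true_a), noisy(true_b), noisy(true_a), noisy(true_b))
```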

  15. Genomic selection across multiple breeding cycles in applied bread wheat breeding.

    PubMed

    Michel, Sebastian; Ametz, Christian; Gungor, Huseyin; Epure, Doru; Grausgruber, Heinrich; Löschenberger, Franziska; Buerstmayr, Hermann

    2016-06-01

    We evaluated genomic selection across five cycles of bread wheat breeding, assessing the bias of within-cycle cross-validation and methods for improving the prediction accuracy. The prospect of genomic selection has been frequently shown by cross-validation studies using the same genetic material across multiple environments, but studies investigating genomic selection across multiple breeding cycles in applied bread wheat breeding are lacking. We estimated the prediction accuracy of grain yield, protein content and protein yield of 659 inbred lines across five independent breeding cycles and assessed the bias of within-cycle cross-validation. We investigated the influence of outliers on the prediction accuracy and predicted protein yield by its component traits. A high average heritability was estimated for protein content, followed by grain yield and protein yield. The bias of the prediction accuracy estimated from individual cycles using fivefold cross-validation was accordingly substantial for protein yield (17-712%) and less pronounced for protein content (8-86%). Cross-validation using the cycles as folds avoided this bias and reached a maximum prediction accuracy of r = 0.51 for protein content, r = 0.38 for grain yield and r = 0.16 for protein yield. Dropping outlier cycles increased the prediction accuracy of grain yield to r = 0.41 as estimated by cross-validation, while dropping outlier environments did not have a significant effect on the prediction accuracy. Independent validation suggests, on the other hand, that careful consideration is necessary before an outlier correction that removes lines from the training population is undertaken. Predicting protein yield by multiplying genomic estimated breeding values of grain yield and protein content raised the prediction accuracy to r = 0.19 for this derived trait.

  16. Increasing horizontal resolution in numerical weather prediction and climate simulations: illusion or panacea?

    PubMed

    Wedi, Nils P

    2014-06-28

    The steady path of doubling the global horizontal resolution approximately every 8 years in numerical weather prediction (NWP) at the European Centre for Medium-Range Weather Forecasts may be substantially altered with emerging novel computing architectures. It coincides with the need to appropriately address and determine forecast uncertainty with increasing resolution, in particular when convective-scale motions start to be resolved. Blunt increases in the model resolution will quickly become unaffordable and may not lead to improved NWP forecasts. Consequently, there is a need to adjust proven numerical techniques accordingly. An informed decision on the modelling strategy for harnessing exascale, massively parallel computing power thus also requires a deeper understanding of the sensitivity to uncertainty--for each part of the model--and ultimately a deeper understanding of multi-scale interactions in the atmosphere and their numerical realization in ultra-high-resolution NWP and climate simulations. This paper explores opportunities for substantial increases in forecast efficiency by judicious adjustment of the formal accuracy or relative resolution in spectral and physical space. One path is to reduce the formal accuracy with which the spectral transforms are computed. The other explores the importance of the ratio of horizontal resolution in gridpoint space to wavenumbers in spectral space. This is relevant for both high-resolution simulations and ensemble-based uncertainty estimation. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  17. Using the SWAT model to improve process descriptions and define hydrologic partitioning in South Korea

    NASA Astrophysics Data System (ADS)

    Shope, C. L.; Maharjan, G. R.; Tenhunen, J.; Seo, B.; Kim, K.; Riley, J.; Arnhold, S.; Koellner, T.; Ok, Y. S.; Peiffer, S.; Kim, B.; Park, J.-H.; Huwe, B.

    2014-02-01

    Watershed-scale modeling can be a valuable tool to aid in quantification of water quality and yield; however, several challenges remain. In many watersheds it is difficult to adequately quantify hydrologic partitioning: data scarcity is prevalent, the accuracy of spatially distributed meteorology is difficult to quantify, forest encroachment and land use issues are common, and surface water and groundwater abstractions substantially modify watershed-based processes. Our objective is to assess the capability of the Soil and Water Assessment Tool (SWAT) model to capture event-based and long-term monsoonal rainfall-runoff processes in complex mountainous terrain. To accomplish this, we developed a unique quality-control, gap-filling algorithm for interpolation of high-frequency meteorological data. We used a novel multi-location, multi-optimization calibration technique to improve estimations of catchment-wide hydrologic partitioning. The interdisciplinary model was calibrated against a unique combination of statistical, hydrologic, and plant growth metrics. Our results indicate scale-dependent sensitivity of hydrologic partitioning and substantial influence of engineered features. The addition of hydrologic and plant growth objective functions identified the importance of culverts in catchment-wide flow distribution. While this study shows the challenges of applying the SWAT model to complex terrain and extreme environments, it also shows that by incorporating anthropogenic features into modeling scenarios we can enhance our understanding of the hydroecological impact.

  18. A fast dual wavelength laser beam fluid-less optical CT scanner for radiotherapy 3D gel dosimetry I: design and development

    NASA Astrophysics Data System (ADS)

    Ramm, Daniel

    2018-02-01

    Three dimensional dosimetry by optical CT readout of radiosensitive gels or solids has previously been indicated as a solution for measurement of radiotherapy 3D dose distributions. The clinical uptake of these dosimetry methods has been limited, partly due to impracticalities of the optical readout such as the expertise and labour required for refractive index fluid matching. In this work a fast laser beam optical CT scanner is described, featuring fluid-less and dual wavelength operation. A second laser with a different wavelength is used to provide an alternative reference scan to the commonly used pre-irradiation scan. Transmission data for both wavelengths is effectively acquired simultaneously, giving a single scan process. Together with the elimination of refractive index fluid matching issues, scanning practicality is substantially improved. Image quality and quantitative accuracy were assessed for both dual and single wavelength methods. The dual wavelength scan technique gave improvements in uniformity of reconstructed optical attenuation coefficients in the sample 3D volume. This was due to a reduction of artefacts caused by scan to scan changes. Optical attenuation measurement accuracy was similar for both dual and single wavelength modes of operation. These results established the basis for further work on dosimetric performance.

  19. Efficient method to design RF pulses for parallel excitation MRI using gridding and conjugate gradient

    PubMed Central

    Feng, Shuo

    2014-01-01

    Parallel excitation (pTx) techniques with multiple transmit channels have been widely used in high field MRI imaging to shorten the RF pulse duration and/or reduce the specific absorption rate (SAR). However, the efficiency of pulse design still needs substantial improvement for practical real-time applications. In this paper, we present a detailed description of a fast pulse design method with Fourier domain gridding and a conjugate gradient method. Simulation results of the proposed method show that the proposed method can design pTx pulses at an efficiency 10 times higher than that of the conventional conjugate-gradient based method, without reducing the accuracy of the desirable excitation patterns. PMID:24834420

  20. Efficient method to design RF pulses for parallel excitation MRI using gridding and conjugate gradient.

    PubMed

    Feng, Shuo; Ji, Jim

    2014-04-01

    Parallel excitation (pTx) techniques with multiple transmit channels have been widely used in high field MRI imaging to shorten the RF pulse duration and/or reduce the specific absorption rate (SAR). However, the efficiency of pulse design still needs substantial improvement for practical real-time applications. In this paper, we present a detailed description of a fast pulse design method with Fourier domain gridding and a conjugate gradient method. Simulation results of the proposed method show that the proposed method can design pTx pulses at an efficiency 10 times higher than that of the conventional conjugate-gradient based method, without reducing the accuracy of the desirable excitation patterns.
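
    A sketch of the conjugate-gradient step, assuming a regularized least-squares formulation min ||Ab - m||^2 + lambda||b||^2 for the pulse samples b; in the paper the system matrix is applied via Fourier-domain gridding, whereas a dense stand-in matrix keeps this sketch self-contained. All dimensions and values are illustrative.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

# Solve the normal equations (A^H A + lam I) b = A^H m with conjugate
# gradients; the operator is Hermitian positive definite, as CG requires.

rng = np.random.default_rng(0)
n_vox, n_samp = 1024, 256
A = rng.normal(size=(n_vox, n_samp)) + 1j * rng.normal(size=(n_vox, n_samp))
m = rng.normal(size=n_vox) + 1j * rng.normal(size=n_vox)   # target pattern
lam = 1e-2                                                 # Tikhonov weight

def normal_op(b):
    # In a gridding-based design, this matvec would be two (adjoint) gridding
    # passes instead of dense matrix products.
    return A.conj().T @ (A @ b) + lam * b

op = LinearOperator((n_samp, n_samp), matvec=normal_op, dtype=complex)
b, info = cg(op, A.conj().T @ m)           # pulse samples; info == 0 on success
```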

  1. Using chronic disease risk factors to adjust Medicare capitation payments

    PubMed Central

    Schauffler, Helen Halpin; Howland, Jonathan; Cobb, Janet

    1992-01-01

    This study evaluates the use of risk factors for chronic disease as health status adjusters for Medicare's capitation formula, the adjusted average per capita cost (AAPCC). Risk factor data for the surviving members of the Framingham Study cohort who were examined in 1982-83 were merged with 100 percent Medicare payment data for 1984 and 1985, matching on Social Security number and sex. Seven different AAPCC models were estimated to assess the independent contributions of risk factors and measures of prior utilization and disability in increasing the explanatory power of the AAPCC. The findings suggest that including risk factors for chronic disease as health status adjusters can substantially improve the predictive accuracy of the AAPCC. PMID:10124441

  2. Optical temperature compensation schemes of spectral modulation sensors for aircraft engine control

    NASA Astrophysics Data System (ADS)

    Berkcan, Ertugrul

    1993-02-01

    Optical temperature compensation schemes for the ratiometric interrogation of spectral modulation sensors, providing robustness to source temperature, are presented. We have obtained a better than 50-100X decrease in the temperature coefficient of the sensitivity using these types of compensation. We have also developed a spectrographic interrogation scheme that provides increased source temperature robustness; this affords significantly improved accuracy over FADEC temperature ranges, as well as a temperature coefficient of the sensitivity that is further and substantially reduced. This latter compensation scheme can be integrated in a small E/O package including the detection and the analog and digital signal processing. We find that these interrogation schemes can be used within a spatially multiplexed detector architecture.

  3. Auditory presentation and synchronization in Adobe Flash and HTML5/JavaScript Web experiments.

    PubMed

    Reimers, Stian; Stewart, Neil

    2016-09-01

    Substantial recent research has examined the accuracy of presentation durations and response time measurements for visually presented stimuli in Web-based experiments, with a general conclusion that accuracy is acceptable for most kinds of experiments. However, many areas of behavioral research use auditory stimuli instead of, or in addition to, visual stimuli. Much less is known about auditory accuracy using standard Web-based testing procedures. We used a millisecond-accurate Black Box Toolkit to measure the actual durations of auditory stimuli and the synchronization of auditory and visual presentation onsets. We examined the distribution of timings for 100 presentations of auditory and visual stimuli across two computers with different specs, three commonly used browsers, and code written in either Adobe Flash or JavaScript. We also examined different coding options for attempting to synchronize the auditory and visual onsets. Overall, we found that auditory durations were very consistent, but that the lags between visual and auditory onsets varied substantially across browsers and computer systems.

  4. Estimating energy expenditure from heart rate in older adults: a case for calibration.

    PubMed

    Schrack, Jennifer A; Zipunnikov, Vadim; Goldsmith, Jeff; Bandeen-Roche, Karen; Crainiceanu, Ciprian M; Ferrucci, Luigi

    2014-01-01

    Accurate measurement of free-living energy expenditure is vital to understanding changes in energy metabolism with aging. The efficacy of heart rate as a surrogate for energy expenditure is rooted in the assumption of a linear function between heart rate and energy expenditure, but its validity and reliability in older adults remains unclear. This study assessed the validity and reliability of the linear function between heart rate and energy expenditure in older adults using different levels of calibration. Heart rate and energy expenditure were assessed across five levels of exertion in 290 adults participating in the Baltimore Longitudinal Study of Aging. Correlation and random effects regression analyses assessed the linearity of the relationship between heart rate and energy expenditure and cross-validation models assessed predictive performance. Heart rate and energy expenditure were highly correlated (r=0.98) and linear regardless of age or sex. Intra-person variability was low but inter-person variability was high, with substantial heterogeneity of the random intercept (s.d. = 0.372) despite similar slopes. Cross-validation models indicated individual calibration data substantially improve the accuracy of energy expenditure predictions from heart rate, reducing the potential for considerable measurement bias. Although using five calibration measures provided the greatest reduction in the standard deviation of prediction errors (1.08 kcals/min), substantial improvement was also noted with two (0.75 kcals/min). These findings indicate standard regression equations may be used to make population-level inferences when estimating energy expenditure from heart rate in older adults but caution should be exercised when making inferences at the individual level without proper calibration.
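
    The calibration idea is simple enough to state in a few lines: fit a person-specific line through a small number of paired heart rate and energy expenditure measurements instead of relying on a single population equation. The sketch below does this with an ordinary least-squares fit; the numbers are invented for illustration and are not from the study.

        import numpy as np

        def calibrate(hr, ee):
            """Person-specific least-squares line mapping heart rate to kcal/min."""
            slope, intercept = np.polyfit(hr, ee, deg=1)
            return slope, intercept

        # Two calibration points (e.g., rest and slow walking) already pin down
        # the person-specific intercept that drives most between-person error.
        hr_cal = np.array([62.0, 95.0])   # beats/min (hypothetical)
        ee_cal = np.array([1.1, 3.0])     # kcal/min (hypothetical)
        slope, intercept = calibrate(hr_cal, ee_cal)
        print(f"EE ~ {slope:.3f} * HR + {intercept:.2f} kcal/min")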

  5. Detection of increased vasa vasorum in artery walls: improving CT number accuracy using image deconvolution

    NASA Astrophysics Data System (ADS)

    Rajendran, Kishore; Leng, Shuai; Jorgensen, Steven M.; Abdurakhimova, Dilbar; Ritman, Erik L.; McCollough, Cynthia H.

    2017-03-01

    Changes in arterial wall perfusion are an indicator of early atherosclerosis. This is characterized by an increased spatial density of vasa vasorum (VV), the micro-vessels that supply oxygen and nutrients to the arterial wall. Detection of increased VV during contrast-enhanced computed tomography (CT) imaging is limited due to contamination from the blooming effect of the contrast-enhanced lumen. We report the application of an image deconvolution technique using a measured system point-spread function on CT data obtained from a photon-counting CT system to reduce blooming and to improve the CT number accuracy of the arterial wall, which enhances detection of increased VV. A phantom study was performed to assess the accuracy of the deconvolution technique. A porcine model was created with enhanced VV in one carotid artery; the other carotid artery served as a control. CT images at an energy range of 25-120 keV were reconstructed. CT numbers were measured for multiple locations in the carotid walls and for multiple time points, pre- and post-contrast injection. The mean CT number in the carotid wall was compared between the left (increased VV) and right (control) carotid arteries. Prior to deconvolution, results showed similar mean CT numbers in the left and right carotid wall due to contamination from the blooming effect, limiting the detection of increased VV in the left carotid artery. After deconvolution, the mean CT number difference between the left and right carotid arteries was substantially increased at all the time points, enabling detection of the increased VV in the artery wall.
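
    A minimal sketch of PSF-based deconvolution in the same spirit, using Wiener-style regularized inversion in the Fourier domain; the study's exact deconvolution algorithm may differ, and the regularization constant here is an assumption.

        import numpy as np

        def wiener_deconvolve(image, psf, reg=1e-2):
            """Deconvolve a 2-D image by a measured PSF with Tikhonov-style damping."""
            psf = psf / psf.sum()                      # normalize to unit gain
            pad = np.zeros_like(image, dtype=float)    # embed the PSF, centered
            r0 = (image.shape[0] - psf.shape[0]) // 2
            c0 = (image.shape[1] - psf.shape[1]) // 2
            pad[r0:r0 + psf.shape[0], c0:c0 + psf.shape[1]] = psf
            H = np.fft.fft2(np.fft.ifftshift(pad))
            G = np.fft.fft2(image)
            F = G * np.conj(H) / (np.abs(H) ** 2 + reg)
            return np.real(np.fft.ifft2(F))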

  6. Accuracy of genomic prediction in switchgrass ( Panicum virgatum L.) improved by accounting for linkage disequilibrium

    DOE PAGES

    Ramstein, Guillaume P.; Evans, Joseph; Kaeppler, Shawn M.; ...

    2016-02-11

    Switchgrass is a relatively high-yielding and environmentally sustainable biomass crop, but further genetic gains in biomass yield must be achieved to make it an economically viable bioenergy feedstock. Genomic selection (GS) is an attractive technology to generate rapid genetic gains in switchgrass, and meet the goals of a substantial displacement of petroleum use with biofuels in the near future. In this study, we empirically assessed prediction procedures for genomic selection in two different populations, consisting of 137 and 110 half-sib families of switchgrass, tested in two locations in the United States for three agronomic traits: dry matter yield, plant height, and heading date. Marker data were produced for the families’ parents by exome capture sequencing, generating up to 141,030 polymorphic markers with available genomic-location and annotation information. We evaluated prediction procedures that varied not only by learning schemes and prediction models, but also by the way the data were preprocessed to account for redundancy in marker information. More complex genomic prediction procedures were generally not significantly more accurate than the simplest procedure, likely due to limited population sizes. Nevertheless, a highly significant gain in prediction accuracy was achieved by transforming the marker data through a marker correlation matrix. Our results suggest that marker-data transformations and, more generally, the account of linkage disequilibrium among markers, offer valuable opportunities for improving prediction procedures in GS. Furthermore, some of the achieved prediction accuracies should motivate implementation of GS in switchgrass breeding programs.
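
    A hedged sketch of the marker-data transformation idea: decorrelate the marker matrix through its correlation structure before fitting a ridge-type genomic prediction model. The paper's exact transformation may differ; the whitening and shrinkage constants below are assumptions, and with 141,030 markers one would work with a relationship matrix rather than the dense marker-by-marker correlation shown here.

        import numpy as np

        def transform_markers(X):
            """Whiten markers by the inverse square root of their correlation matrix."""
            R = np.corrcoef(X, rowvar=False)                  # marker x marker
            w, V = np.linalg.eigh(R + 1e-6 * np.eye(R.shape[0]))
            return X @ (V @ np.diag(1.0 / np.sqrt(w)) @ V.T)

        def ridge_predict(X_train, y_train, X_test, lam=1.0):
            """Ridge-regression genomic prediction on (transformed) markers."""
            p = X_train.shape[1]
            beta = np.linalg.solve(X_train.T @ X_train + lam * np.eye(p),
                                   X_train.T @ y_train)
            return X_test @ beta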

  7. Brain 18F-FDG PET in the diagnosis of neurodegenerative dementias: comparison with perfusion SPECT and with clinical evaluations lacking nuclear imaging.

    PubMed

    Silverman, Daniel H S

    2004-04-01

    The clinical identification and differential diagnosis of dementias is especially challenging in the early stages, but the need for early, accurate diagnosis has become more important, now that several medications for the treatment of mild to moderate Alzheimer's disease (AD) are available. Many neurodegenerative diseases produce significant brain-function alterations detectable with PET or SPECT even when structural images with CT or MRI reveal no specific abnormalities. (18)F-FDG PET images of AD demonstrate focally decreased cerebral metabolism involving especially the posterior cingulate and neocortical association cortices, while largely sparing the basal ganglia, thalamus, cerebellum, and cortex mediating primary sensory and motor functions. Assessment of the precise diagnostic accuracy of PET had until recently been hindered by the paucity of data on diagnoses made using PET and confirmed by definitive histopathologic examination. In the past few years, however, studies comparing neuropathologic examination with PET have established reliable and consistent accuracy for diagnostic evaluations using PET, with accuracies substantially exceeding those of comparable studies of the diagnostic value of SPECT, of both modalities assessed side by side, or of clinical evaluations done without nuclear imaging. Similar data are emerging concerning the prognostic value of (18)F-FDG PET. Improvements in the ability of PET to identify very early changes associated with AD and other neurodegenerative dementias are currently outpacing improvements in therapeutic options, but with advances in potential preventive and disease-modifying treatments appearing imminent, early detection and diagnosis will play an increasing role in the management of dementing illness.

  8. A new computational strategy for predicting essential genes.

    PubMed

    Cheng, Jian; Wu, Wenwu; Zhang, Yinwen; Li, Xiangchen; Jiang, Xiaoqian; Wei, Gehong; Tao, Shiheng

    2013-12-21

    Determination of the minimum gene set for cellular life is one of the central goals in biology. Genome-wide essential gene identification has progressed rapidly in certain bacterial species; however, it remains difficult to achieve in most eukaryotic species. Several computational models have recently been developed to integrate gene features and used as alternatives to transfer gene essentiality annotations between organisms. We first collected features that were widely used by previous predictive models and assessed the relationships between gene features and gene essentiality using a stepwise regression model. We found two issues that could significantly reduce model accuracy: (i) the effect of multicollinearity among gene features and (ii) the diverse and even contrasting correlations between gene features and gene essentiality existing within and among different species. To address these issues, we developed a novel model called feature-based weighted Naïve Bayes model (FWM), which is based on Naïve Bayes classifiers, logistic regression, and genetic algorithm. The proposed model assesses features and filters out the effects of multicollinearity and diversity. The performance of FWM was compared with other popular models, such as support vector machine, Naïve Bayes model, and logistic regression model, by applying FWM to reciprocally predict essential genes among and within 21 species. Our results showed that FWM significantly improves the accuracy and robustness of essential gene prediction. FWM can remarkably improve the accuracy of essential gene prediction and may be used as an alternative method for other classification work. This method can contribute substantially to the knowledge of the minimum gene sets required for living organisms and the discovery of new drug targets.

  10. Impact of transmission intensity on the accuracy of genotyping to distinguish recrudescence from new infection in antimalarial clinical trials.

    PubMed

    Greenhouse, Bryan; Dokomajilar, Christian; Hubbard, Alan; Rosenthal, Philip J; Dorsey, Grant

    2007-09-01

    Antimalarial clinical trials use genotyping techniques to distinguish new infection from recrudescence. In areas of high transmission, the accuracy of genotyping may be compromised due to the high number of infecting parasite strains. We compared the accuracies of genotyping methods, using up to six genotyping markers, to assign outcomes for two large antimalarial trials performed in areas of Africa with different transmission intensities. We then estimated the probability of genotyping misclassification and its effect on trial results. At a moderate-transmission site, three genotyping markers were sufficient to generate accurate estimates of treatment failure. At a high-transmission site, even with six markers, estimates of treatment failure were 20% for amodiaquine plus artesunate and 17% for artemether-lumefantrine, regimens expected to be highly efficacious. Of the observed treatment failures for these two regimens, we estimated that at least 45% and 35%, respectively, were new infections misclassified as recrudescences. Increasing the number of genotyping markers improved the ability to distinguish new infection from recrudescence at a moderate-transmission site, but using six markers appeared inadequate at a high-transmission site. Genotyping-adjusted estimates of treatment failure from high-transmission sites may represent substantial overestimates of the true risk of treatment failure.

  11. Design of fluidic self-assembly bonds for precise component positioning

    NASA Astrophysics Data System (ADS)

    Ramadoss, Vivek; Crane, Nathan B.

    2008-02-01

    Self-assembly is a promising alternative to conventional pick-and-place robotic assembly of micro-components. Its benefits include parallel integration of parts with low equipment costs. Various approaches to self-assembly have been demonstrated, yet demanding applications like assembly of micro-optical devices require increased positioning accuracy. This paper proposes a new method for design of self-assembly bonds that addresses this need. Current methods have zero force at the desired assembly position and low stiffness. This allows small disturbance forces to create significant positioning errors. The proposed method uses a substrate assembly feature to provide a high accuracy alignment guide to the part. The capillary bond region of the part and substrate are then modified to create a non-zero positioning force to maintain the part in the desired assembly position. Capillary force models show that this force aligns the part to the substrate assembly feature and reduces sensitivity of part position to process variation. Thus, the new configuration can substantially improve positioning accuracy of capillary self-assembly. This will result in a dramatic decrease in positioning errors in the micro-parts. Various binding site designs are analyzed and guidelines are proposed for the design of an effective assembly bond using this new approach.

  12. TU-E-BRB-03: Overview of Proposed TG-132 Recommendations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brock, K.

    2015-06-15

    Deformable image registration (DIR) is developing rapidly and is poised to substantially improve dose fusion accuracy for adaptive and retreatment planning and motion management, and PET fusion to enhance contour delineation for treatment planning. However, DIR dose warping accuracy is difficult to quantify, in general, and particularly difficult to do so on a patient-specific basis. As clinical DIR options become more widely available, there is an increased need to understand the implications of incorporating DIR into clinical workflow. Several groups have assessed DIR accuracy in clinically relevant scenarios, but no comprehensive review material is yet available. This session will also discuss aspects of the AAPM Task Group 132 on the Use of Image Registration and Data Fusion Algorithms and Techniques in Radiotherapy Treatment Planning official report, which provides recommendations for DIR clinical use. We will summarize and compare various commercial DIR software options, outline successful clinical techniques, show specific examples with discussion of appropriate and inappropriate applications of DIR, discuss the clinical implications of DIR, provide an overview of current DIR error analysis research, review QA options and research phantom development and present TG-132 recommendations. Learning Objectives: compare/contrast commercial DIR software and QA options; overview clinical DIR workflow for retreatment; understand uncertainties introduced by DIR; review TG-132 proposed recommendations.

  13. Improving patient safety via automated laboratory-based adverse event grading.

    PubMed

    Niland, Joyce C; Stiller, Tracey; Neat, Jennifer; Londrc, Adina; Johnson, Dina; Pannoni, Susan

    2012-01-01

    The identification and grading of adverse events (AEs) during the conduct of clinical trials is a labor-intensive and error-prone process. This paper describes and evaluates a software tool developed by City of Hope to automate complex algorithms to assess laboratory results and identify and grade AEs. We compared AEs identified by the automated system with those previously assessed manually, to evaluate missed/misgraded AEs. We also conducted a prospective paired time assessment of automated versus manual AE assessment. We found a substantial improvement in accuracy/completeness with the automated grading tool, which identified an additional 17% of severe grade 3-4 AEs that had been missed/misgraded manually. The automated system also provided an average time saving of 5.5 min per treatment course. With 400 ongoing treatment trials at City of Hope and an average of 1800 laboratory results requiring assessment per study, the implications of these findings for patient safety are enormous.
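
    The core of such a tool is a deterministic mapping from a laboratory value to a CTCAE-style grade. The sketch below shows the shape of that logic for an elevated analyte; the cutoffs are hypothetical placeholders, not City of Hope's actual rules, which would also handle direction (high/low), baselines, and units.

        def grade_lab_result(value, cutoffs):
            """Return the highest grade whose (exclusive) lower bound is crossed.

            `cutoffs` maps grade -> lower bound for an elevated analyte;
            grade 0 means within normal limits.
            """
            grade = 0
            for g in sorted(cutoffs):
                if value > cutoffs[g]:
                    grade = g
            return grade

        # Hypothetical cutoffs (units arbitrary):
        ast_cutoffs = {1: 40, 2: 120, 3: 200, 4: 800}
        print(grade_lab_result(150, ast_cutoffs))   # -> grade 2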

  14. A novel clinical decision support system using improved adaptive genetic algorithm for the assessment of fetal well-being.

    PubMed

    Ravindran, Sindhu; Jambek, Asral Bahari; Muthusamy, Hariharan; Neoh, Siew-Chin

    2015-01-01

    A novel clinical decision support system is proposed in this paper for evaluating fetal well-being from the cardiotocogram (CTG) dataset through an Improved Adaptive Genetic Algorithm (IAGA) and Extreme Learning Machine (ELM). IAGA employs a new scaling technique (called sigma scaling) to avoid premature convergence and applies adaptive crossover and mutation techniques with masking concepts to enhance population diversity. This search algorithm also utilizes three different fitness functions (two single-objective fitness functions and a multi-objective fitness function) to assess its performance. The classification results show that a promising classification accuracy of 94% is obtained with an optimal feature subset using IAGA. The classification results are also compared with those of other feature reduction techniques to substantiate its exhaustive search towards the global optimum. In addition, five other benchmark datasets are used to gauge the strength of the proposed IAGA algorithm.
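
    Sigma scaling, the step the paper credits with avoiding premature convergence, rescales raw fitness by its distance from the population mean in standard-deviation units so that early outliers cannot dominate selection. A minimal sketch (Goldberg-style scaling; the constants are conventional defaults, not taken from the paper):

        import numpy as np

        def sigma_scale(fitness, c=2.0, floor=0.1):
            """Sigma-scaled selection weights for a population's fitness values."""
            mean, std = fitness.mean(), fitness.std()
            if std == 0:
                return np.ones_like(fitness)        # all equal: uniform selection
            scaled = 1.0 + (fitness - mean) / (c * std)
            return np.clip(scaled, floor, None)     # keep weak individuals alive

        weights = sigma_scale(np.array([0.91, 0.93, 0.94, 0.99]))
        probs = weights / weights.sum()             # roulette-wheel probabilities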

  15. Addition of lateral bending range of motion measurement to standard sagittal measurement to improve diagnosis sensitivity of ligamentous injury in the human lower cervical spine.

    PubMed

    Leahy, P Devin; Puttlitz, Christian M

    2016-01-01

    This study examined the cervical spine range of motion (ROM) resulting from whiplash-type hyperextension and hyperflexion type ligamentous injuries, and sought to improve the accuracy of specific diagnosis of these injuries. The study was accomplished by measurement of ROM throughout axial rotation, lateral bending, and flexion and extension, using a validated finite element model of the cervical spine that was modified to simulate hyperextension and/or hyperflexion injuries. It was found that the kinematic difference between hyperextension and hyperflexion injuries was minimal throughout the combined flexion and extension ROM measurement that is commonly used for clinical diagnosis of cervical ligamentous injury. However, the two injuries demonstrated substantially different ROM under axial rotation and lateral bending. It is recommended that other bending axes beyond flexion and extension are incorporated into clinical diagnosis of cervical ligamentous injury.

  16. National and International Security Applications of Cryogenic Detectors—Mostly Nuclear Safeguards

    NASA Astrophysics Data System (ADS)

    Rabin, Michael W.

    2009-12-01

    As with science, so with security—in both arenas, the extraordinary sensitivity of cryogenic sensors enables high-confidence detection and high-precision measurement even of the faintest signals. Science applications are more mature, but several national and international security applications have been identified where cryogenic detectors have high potential payoff. International safeguards and nuclear forensics are areas needing new technology and methods to boost speed, sensitivity, precision and accuracy. Successfully applied, improved nuclear materials analysis will help constrain nuclear materials diversion pathways and contribute to treaty verification. Cryogenic microcalorimeter detectors for X-ray, gamma-ray, neutron, and alpha-particle spectrometry are under development with these aims in mind. In each case the unsurpassed energy resolution of microcalorimeters reveals previously invisible spectral features of nuclear materials. Preliminary results of quantitative analysis indicate substantial improvements are still possible, but significant work will be required to fully understand the ultimate performance limits.

  17. On what it means to know someone: a matter of pragmatics.

    PubMed

    Gill, Michael J; Swann, William B

    2004-03-01

    Two studies provide support for W. B. Swann's (1984) argument that perceivers achieve substantial pragmatic accuracy--accuracy that facilitates the achievement of relationship-specific interaction goals--in their social relationships. Study 1 assessed the extent to which group members reached consensus regarding the behavior of a member in familiar (as compared with unfamiliar) contexts and found that groups do indeed achieve this form of pragmatic accuracy. Study 2 assessed the degree of insight romantic partners had into the self-views of their partners on relationship-relevant (as compared with less relevant) traits and found that couples do indeed achieve this form of pragmatic accuracy. Furthermore, pragmatic accuracy was uniquely associated with relationship harmony. Implications for a functional approach to person perception are discussed.

  18. Use of Web-based training for quality improvement between a field immunohistochemistry laboratory in Nigeria and its United States-based partner institution.

    PubMed

    Oluwasola, Abideen O; Malaka, David; Khramtsov, Andrey Ilyich; Ikpatt, Offiong Francis; Odetunde, Abayomi; Adeyanju, Oyinlolu Olorunsogo; Sveen, Walmy Elisabeth; Falusi, Adeyinka Gloria; Huo, Dezheng; Olopade, Olufunmilayo Ibironke

    2013-12-01

    The importance of hormone receptor status in assigning treatment and the potential use of human epidermal growth factor receptor 2 (HER2)-targeted therapy have made it beneficial for laboratories to improve detection techniques. Because interlaboratory variability in immunohistochemistry (IHC) tests may also affect studies of breast cancer subtypes in different countries, we undertook a Web-based quality improvement training and a comparative study of accuracy of immunohistochemical tests of breast cancer biomarkers between a well-established laboratory in the United States (University of Chicago) and a field laboratory in Ibadan, Nigeria. Two hundred and thirty-two breast tumor blocks were evaluated for estrogen receptors (ERs), progesterone receptors (PRs), and HER2 status at both laboratories using tissue microarray technique. Initially, concordance analysis revealed κ scores of 0.42 (moderate agreement) for ER, 0.41 (moderate agreement) for PR, and 0.39 (fair agreement) for HER2 between the 2 laboratories. Antigen retrieval techniques and scoring methods were identified as important reasons for discrepancy. Web-based conferences using Web conferencing tools such as Skype and WebEx were then held periodically to discuss IHC staining protocols and standard scoring systems and to resolve discrepant cases. After quality assurance and training, the agreement improved to 0.64 (substantial agreement) for ER, 0.60 (moderate agreement) for PR, and 0.75 (substantial agreement) for HER2. We found Web-based conferences and digital microscopy useful and cost-effective tools for quality assurance of IHC, consultation, and collaboration between distant laboratories. Quality improvement exercises in testing of tumor biomarkers will reduce misclassification in epidemiologic studies of breast cancer subtypes and provide much needed capacity building in resource-poor countries. © 2013.
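
    For reference, the agreement statistic used throughout this study is Cohen's kappa, which discounts chance agreement between two raters. A worked example for two laboratories' binary ER calls (labels invented for illustration):

        from sklearn.metrics import cohen_kappa_score

        lab_a = ["pos", "pos", "neg", "neg", "pos", "neg", "pos", "neg"]
        lab_b = ["pos", "neg", "neg", "neg", "pos", "neg", "pos", "pos"]
        # 6/8 observed agreement, 0.5 expected by chance -> kappa = 0.5
        print(cohen_kappa_score(lab_a, lab_b))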

  19. A Collaborative Brain-Computer Interface for Improving Human Performance

    PubMed Central

    Wang, Yijun; Jung, Tzyy-Ping

    2011-01-01

    Electroencephalogram (EEG) based brain-computer interfaces (BCI) have been studied since the 1970s. Currently, the main focus of BCI research lies on the clinical use, which aims to provide a new communication channel to patients with motor disabilities to improve their quality of life. However, the BCI technology can also be used to improve human performance for normal healthy users. Although this application has been proposed for a long time, little progress has been made in real-world practices due to technical limits of EEG. To overcome the bottleneck of low single-user BCI performance, this study proposes a collaborative paradigm to improve overall BCI performance by integrating information from multiple users. To test the feasibility of a collaborative BCI, this study quantitatively compares the classification accuracies of collaborative and single-user BCI applied to the EEG data collected from 20 subjects in a movement-planning experiment. This study also explores three different methods for fusing and analyzing EEG data from multiple subjects: (1) Event-related potentials (ERP) averaging, (2) Feature concatenating, and (3) Voting. In a demonstration system using the Voting method, the classification accuracy of predicting movement directions (reaching left vs. reaching right) was enhanced substantially from 66% to 80%, 88%, 93%, and 95% as the numbers of subjects increased from 1 to 5, 10, 15, and 20, respectively. Furthermore, the decision of reaching direction could be made around 100–250 ms earlier than the subject's actual motor response by decoding the ERP activities arising mainly from the posterior parietal cortex (PPC), which are related to the processing of visuomotor transmission. Taken together, these results suggest that a collaborative BCI can effectively fuse brain activities of a group of people to improve the overall performance of natural human behavior. PMID:21655253
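
    Of the three fusion methods, Voting is the simplest to state: each subject's single-trial classifier casts a vote and the group takes the majority. A minimal sketch, with the per-subject classifiers stubbed out as precomputed votes:

        import numpy as np

        def majority_vote(votes):
            """votes: array of 0/1 predictions, one per subject."""
            return int(np.sum(votes) > len(votes) / 2)

        # With each subject alone near 66% accuracy, independent votes push the
        # group decision toward the 80-95% range reported as group size grows.
        votes = np.array([1, 0, 1, 1, 0, 1, 1, 1, 0, 1])   # 10 simulated subjects
        print("group decision:", majority_vote(votes))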

  20. Performance Ratings: Designs for Evaluating Their Validity and Accuracy.

    DTIC Science & Technology

    1986-07-01

    ratees with substantial validity and with little bias due to the method for rating. Convergent validity and discriminant validity account for approximately... The expanded research design suggests that purpose for the ratings has little influence on the multitrait-multimethod properties of the ratings... Convergent and discriminant validity again account for substantial differences in the ratings of performance. Little method bias is present; both methods of

  1. Real-time dose computation: GPU-accelerated source modeling and superposition/convolution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jacques, Robert; Wong, John; Taylor, Russell

    Purpose: To accelerate dose calculation to interactive rates using highly parallel graphics processing units (GPUs). Methods: The authors have extended their prior work in GPU-accelerated superposition/convolution with a modern dual-source model and have enhanced performance. The primary source algorithm supports both focused leaf ends and asymmetric rounded leaf ends. The extra-focal algorithm uses a discretized, isotropic area source and models multileaf collimator leaf height effects. The spectral and attenuation effects of static beam modifiers were integrated into each source's spectral function. The authors introduce the concepts of arc superposition and delta superposition. Arc superposition utilizes separate angular sampling for the total energy released per unit mass (TERMA) and superposition computations to increase accuracy and performance. Delta superposition allows single beamlet changes to be computed efficiently. The authors extended their concept of multi-resolution superposition to include kernel tilting. Multi-resolution superposition approximates solid angle ray-tracing, improving performance and scalability with a minor loss in accuracy. Superposition/convolution was implemented using the inverse cumulative-cumulative kernel and exact radiological path ray-tracing. The accuracy analyses were performed using multiple kernel ray samplings, both with and without kernel tilting and multi-resolution superposition. Results: Source model performance was <9 ms (data dependent) for a high resolution (400^2) field using an NVIDIA (Santa Clara, CA) GeForce GTX 280. Computation of the physically correct multispectral TERMA attenuation was improved by a material centric approach, which increased performance by over 80%. Superposition performance was improved by ~24% to 0.058 and 0.94 s for 64^3 and 128^3 water phantoms; a speed-up of 101-144x over the highly optimized Pinnacle³ (Philips, Madison, WI) implementation. Pinnacle³ times were 8.3 and 94 s, respectively, on an AMD (Sunnyvale, CA) Opteron 254 (two cores, 2.8 GHz). Conclusions: The authors have completed a comprehensive, GPU-accelerated dose engine in order to provide a substantial performance gain over CPU based implementations. Real-time dose computation is feasible with the accuracy levels of the superposition/convolution algorithm.

  2. Optimizing selection of training and auxiliary data for operational land cover classification for the LCMAP initiative

    NASA Astrophysics Data System (ADS)

    Zhu, Zhe; Gallant, Alisa L.; Woodcock, Curtis E.; Pengra, Bruce; Olofsson, Pontus; Loveland, Thomas R.; Jin, Suming; Dahal, Devendra; Yang, Limin; Auch, Roger F.

    2016-12-01

    The U.S. Geological Survey's Land Change Monitoring, Assessment, and Projection (LCMAP) initiative is a new end-to-end capability to continuously track and characterize changes in land cover, use, and condition to better support research and applications relevant to resource management and environmental change. Among the LCMAP product suite are annual land cover maps that will be available to the public. This paper describes an approach to optimize the selection of training and auxiliary data for deriving the thematic land cover maps based on all available clear observations from Landsats 4-8. Training data were selected from map products of the U.S. Geological Survey's Land Cover Trends project. The Random Forest classifier was applied for different classification scenarios based on the Continuous Change Detection and Classification (CCDC) algorithm. We found that extracting training data proportionally to the occurrence of land cover classes was superior to an equal distribution of training data per class, and suggest using a total of 20,000 training pixels to classify an area about the size of a Landsat scene. The problem of unbalanced training data was alleviated by extracting a minimum of 600 training pixels and a maximum of 8000 training pixels per class. We additionally explored removing outliers contained within the training data based on their spectral and spatial criteria, but observed no significant improvement in classification results. We also tested the importance of different types of auxiliary data that were available for the conterminous United States, including: (a) five variables used by the National Land Cover Database, (b) three variables from the cloud screening "Function of mask" (Fmask) statistics, and (c) two variables from the change detection results of CCDC. We found that auxiliary variables such as a Digital Elevation Model and its derivatives (aspect, position index, and slope), potential wetland index, water probability, snow probability, and cloud probability improved the accuracy of land cover classification. Compared to the original strategy of the CCDC algorithm (500 pixels per class), the use of the optimal strategy improved the classification accuracies substantially (15-percentage point increase in overall accuracy and 4-percentage point increase in minimum accuracy).
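
    The reported sampling rule is easy to make concrete: draw about 20,000 training pixels per scene proportionally to class occurrence, then clamp each class between 600 and 8,000 pixels. A sketch under those numbers (class counts invented; after clamping, totals are only approximately 20,000):

        def allocate_training(class_counts, total=20000, floor=600, cap=8000):
            """Proportional allocation of training pixels with per-class bounds."""
            total_pixels = sum(class_counts.values())
            alloc = {}
            for cls, n in class_counts.items():
                share = round(total * n / total_pixels)
                # clamp to [floor, cap]; never request more pixels than exist
                alloc[cls] = max(floor, min(cap, share, n))
            return alloc

        counts = {"forest": 4_000_000, "crop": 2_500_000, "urban": 300_000,
                  "water": 150_000, "wetland": 20_000}
        print(allocate_training(counts))
        # -> forest capped at 8000; water and wetland floored at 600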

  3. Differentiating neoplastic from benign lesions of the pancreas: translational techniques.

    PubMed

    Khalid, Asif

    2009-11-01

    There has been substantial recent progress in our ability to image and sample the pancreas leading to the improved recognition of benign and premalignant conditions of the pancreas such as autoimmune pancreatitis (AIP) and mucinous lesions (mucinous cystic neoplasms [MCN] and intraductal papillary mucinous neoplasms [IPMN]), respectively. Clinically relevant and difficult situations that continue to be faced in this context include differentiating MCN and IPMN from nonmucinous pancreatic cysts, the early detection of malignant degeneration in MCN and IPMN, and accurate differentiation between pancreatic cancer and inflammatory masses, especially AIP. These challenges arise primarily due to the less than perfect sensitivity for malignancy utilizing cytological samples obtained via EUS and ERCP. Aspirates from pancreatic cysts are often paucicellular further limiting the accuracy of cytology. One approach to improve the diagnostic yield from these very small samples is through the use of molecular techniques. Because the development of pancreatic cancer and malignant degeneration in MCN and IPMN is associated with well studied genetic insults including oncogene activation (eg, k-ras), tumor suppressor gene losses (eg, p53, p16, and DPC4), and genome maintenance gene mutations (eg, BRCA2 and telomerase), detecting these molecular abnormalities may aid in improving our diagnostic accuracy. A number of studies have shown the utility of testing clinical samples from pancreatic lesions and bile duct strictures for these molecular markers of malignancy to differentiate between cancer and inflammation. The information from these studies will be discussed with emphasis on how to use this information in clinical practice.

  4. The influence of atmospheric grid resolution in a climate model-forced ice sheet simulation

    NASA Astrophysics Data System (ADS)

    Lofverstrom, Marcus; Liakka, Johan

    2018-04-01

    Coupled climate-ice sheet simulations have been growing in popularity in recent years. Experiments of this type are however challenging as ice sheets evolve over multi-millennial timescales, which is beyond the practical integration limit of most Earth system models. A common method to increase model throughput is to trade resolution for computational efficiency (compromise accuracy for speed). Here we analyze how the resolution of an atmospheric general circulation model (AGCM) influences the simulation quality in a stand-alone ice sheet model. Four identical AGCM simulations of the Last Glacial Maximum (LGM) were run at different horizontal resolutions: T85 (1.4°), T42 (2.8°), T31 (3.8°), and T21 (5.6°). These simulations were subsequently used as forcing of an ice sheet model. While the T85 climate forcing reproduces the LGM ice sheets to a high accuracy, the intermediate resolution cases (T42 and T31) fail to build the Eurasian ice sheet. The T21 case fails in both Eurasia and North America. Sensitivity experiments using different surface mass balance parameterizations improve the simulations of the Eurasian ice sheet in the T42 case, but the compromise is a substantial ice buildup in Siberia. The T31 and T21 cases do not improve in the same way in Eurasia, though the latter simulates the continent-wide Laurentide ice sheet in North America. The difficulty to reproduce the LGM ice sheets in the T21 case is in broad agreement with previous studies using low-resolution atmospheric models, and is caused by a substantial deterioration of the model climate between the T31 and T21 resolutions. It is speculated that this deficiency may demonstrate a fundamental problem with using low-resolution atmospheric models in these types of experiments.

  5. Can verbal working memory training improve reading?

    PubMed

    Banales, Erin; Kohnen, Saskia; McArthur, Genevieve

    2015-01-01

    The aim of the current study was to determine whether poor verbal working memory is associated with poor word reading accuracy because the former causes the latter, or the latter causes the former. To this end, we tested whether (a) verbal working memory training improves poor verbal working memory or poor word reading accuracy, and whether (b) reading training improves poor reading accuracy or verbal working memory in a case series of four children with poor word reading accuracy and verbal working memory. Each child completed 8 weeks of verbal working memory training and 8 weeks of reading training. Verbal working memory training improved verbal working memory in two of the four children, but did not improve their reading accuracy. Similarly, reading training improved word reading accuracy in all children, but did not improve their verbal working memory. These results suggest that the causal links between verbal working memory and reading accuracy may not be as direct as has been assumed.

  6. Improved method for implicit Monte Carlo

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, F. B.; Martin, W. R.

    2001-01-01

    The Implicit Monte Carlo (IMC) method has been used for over 30 years to analyze radiative transfer problems, such as those encountered in stellar atmospheres or inertial confinement fusion. Reference [2] provided an exact error analysis of IMC for 0-D problems and demonstrated that IMC can exhibit substantial errors when timesteps are large. These temporal errors are inherent in the method and are in addition to spatial discretization errors and approximations that address nonlinearities (due to variation of physical constants). In Reference [3], IMC and four other methods were analyzed in detail and compared on both theoretical grounds and the accuracy of numerical tests. As discussed in Reference [3], two alternative schemes for solving the radiative transfer equations, the Carter-Forest (C-F) method and the Ahrens-Larsen (A-L) method, do not exhibit the errors found in IMC; for 0-D, both of these methods are exact for all time, while for 3-D, A-L is exact for all time and C-F is exact within a timestep. These methods can yield substantially superior results to IMC.

  7. Tumour functional sphericity from PET images: prognostic value in NSCLC and impact of delineation method.

    PubMed

    Hatt, Mathieu; Laurent, Baptiste; Fayad, Hadi; Jaouen, Vincent; Visvikis, Dimitris; Le Rest, Catherine Cheze

    2018-04-01

    Sphericity has been proposed as a parameter for characterizing PET tumour volumes, with complementary prognostic value with respect to SUV and volume in both head and neck cancer and lung cancer. The objective of the present study was to investigate its dependency on tumour delineation and the resulting impact on its prognostic value. Five segmentation methods were considered: two thresholds (40% and 50% of SUVmax), ant colony optimization, fuzzy locally adaptive Bayesian (FLAB), and gradient-aided region-based active contour. The accuracy of each method in extracting sphericity was evaluated using a dataset of 176 simulated, phantom and clinical PET images of tumours with associated ground truth. The prognostic value of sphericity and its complementary value with respect to volume for each segmentation method was evaluated in a cohort of 87 patients with stage II/III lung cancer. Volume and associated sphericity values were dependent on the segmentation method. The correlation between segmentation accuracy and sphericity error was moderate (|ρ| from 0.24 to 0.57). The accuracy in measuring sphericity was not dependent on volume (|ρ| < 0.4). In the patients with lung cancer, sphericity had prognostic value, although lower than that of volume, except for sphericity derived using FLAB, which, when combined with volume, showed a small improvement over volume alone (hazard ratio 2.67, compared with 2.5). Substantial differences in patient prognosis stratification were observed depending on the segmentation method used. Tumour functional sphericity was found to be dependent on the segmentation method, although the accuracy in retrieving the true sphericity was not dependent on tumour volume. In addition, even accurate segmentation can lead to an inaccurate sphericity value, and vice versa. Sphericity had similar or lower prognostic value than volume alone in the patients with lung cancer, except when determined using the FLAB method, for which there was a small improvement in stratification when the parameters were combined.
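
    For concreteness, sphericity is conventionally defined as psi = pi^(1/3) * (6V)^(2/3) / A, equal to 1 for a perfect sphere and smaller for any other shape, which is why it inherits the segmentation's volume V and surface area A (the standard definition is assumed here). A quick sanity check:

        import math

        def sphericity(volume, surface_area):
            """pi^(1/3) * (6V)^(2/3) / A; equals 1.0 for a perfect sphere."""
            return math.pi ** (1 / 3) * (6 * volume) ** (2 / 3) / surface_area

        r = 2.0                                  # sphere of radius 2 (a.u.)
        v = 4 / 3 * math.pi * r ** 3
        a = 4 * math.pi * r ** 2
        print(sphericity(v, a))                  # -> 1.0 (up to rounding)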

  8. Sampling factors influencing accuracy of sperm kinematic analysis.

    PubMed

    Owen, D H; Katz, D F

    1993-01-01

    Sampling conditions that influence the accuracy of experimental measurement of sperm head kinematics were studied by computer simulation methods. Several archetypal sperm trajectories were studied. First, mathematical models of typical flagellar beats were input to hydrodynamic equations of sperm motion. The instantaneous swimming velocities of such sperm were computed over sequences of flagellar beat cycles, from which the resulting trajectories were determined. In a second, idealized approach, direct mathematical models of trajectories were utilized, based upon similarities to the previous hydrodynamic constructs. In general, it was found that analyses of sampling factors produced similar results for the hydrodynamic and idealized trajectories. A number of experimental sampling factors were studied, including the number of sperm head positions measured per flagellar beat, and the time interval over which these measurements are taken. It was found that when one flagellar beat is sampled, values of amplitude of lateral head displacement (ALH) and linearity (LIN) approached their actual values when five or more sample points per beat were taken. Mean angular displacement (MAD) values, however, remained sensitive to sampling rate even when large sampling rates were used. Values of MAD were also much more sensitive to the initial starting point of the sampling procedure than were ALH or LIN. On the basis of these analyses of measurement accuracy for individual sperm, simulations were then performed of cumulative effects when studying entire populations of motile cells. It was found that substantial (double digit) errors occurred in the mean values of curvilinear velocity (VCL), LIN, and MAD under the conditions of 30 video frames per second and 0.5 seconds of analysis time. Increasing the analysis interval to 1 second did not appreciably improve the results. However, increasing the analysis rate to 60 frames per second significantly reduced the errors. These findings thus suggest that computer-aided sperm analysis (CASA) application at 60 frames per second will significantly improve the accuracy of kinematic analysis in most applications to human and other mammalian sperm.
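
    The sampling dependence is easy to reproduce: curvilinear velocity (VCL) sums point-to-point path length, straight-line velocity (VSL) uses only the endpoints, and LIN = VSL/VCL, so coarse sampling shortens the measured path and inflates LIN. A sketch on a synthetic sinusoidal trajectory (all parameters invented for illustration):

        import numpy as np

        def kinematics(xy, fps):
            """xy: (n, 2) head positions; returns (VCL, VSL, LIN)."""
            path = np.sum(np.linalg.norm(np.diff(xy, axis=0), axis=1))
            duration = (len(xy) - 1) / fps
            vcl = path / duration
            vsl = np.linalg.norm(xy[-1] - xy[0]) / duration
            return vcl, vsl, (vsl / vcl if vcl else 0.0)

        t = np.linspace(0.0, 1.0, 61)                       # 1 s at 60 frames/s
        xy = np.c_[25 * t, 3 * np.sin(2 * np.pi * 14 * t)]  # 14 Hz beat, um
        print(kinematics(xy, fps=60))                       # finer sampling
        print(kinematics(xy[::2], fps=30))                  # coarser: lower VCL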

  9. Resolving occlusion and segmentation errors in multiple video object tracking

    NASA Astrophysics Data System (ADS)

    Cheng, Hsu-Yung; Hwang, Jenq-Neng

    2009-02-01

    In this work, we propose a method to integrate the Kalman filter and adaptive particle sampling for multiple video object tracking. The proposed framework is able to detect occlusion and segmentation error cases and perform adaptive particle sampling for accurate measurement selection. Compared with traditional particle filter based tracking methods, the proposed method generates particles only when necessary. With the concept of adaptive particle sampling, we can avoid the degeneracy problem because the sampling position and range are dynamically determined by parameters that are updated by Kalman filters. There is no need to spend time on processing particles with very small weights. The adaptive appearance for the occluded object refers to the prediction results of Kalman filters to determine the region that should be updated and avoids the problem of using inadequate information to update the appearance under occlusion cases. The experimental results have shown that a small number of particles are sufficient to achieve high positioning and scaling accuracy. Also, the employment of adaptive appearance substantially improves the positioning and scaling accuracy of the tracking results.
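
    The Kalman half of the framework is the standard predict/update recursion; its predicted state fixes where particles are sampled and its covariance fixes the sampling range. A generic sketch (matrices supplied by the caller; nothing here is specific to the authors' tracker):

        import numpy as np

        def kalman_step(x, P, z, F, H, Q, R):
            """One predict/update cycle of a linear Kalman filter."""
            x_pred = F @ x                       # predict state
            P_pred = F @ P @ F.T + Q             # predict covariance
            S = H @ P_pred @ H.T + R             # innovation covariance
            K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
            x_new = x_pred + K @ (z - H @ x_pred)
            P_new = (np.eye(len(x)) - K @ H) @ P_pred
            return x_new, P_new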

  10. Detection of intracavitary uterine pathology using offline analysis of three-dimensional ultrasound volumes: interobserver agreement and diagnostic accuracy.

    PubMed

    Van den Bosch, T; Valentin, L; Van Schoubroeck, D; Luts, J; Bignardi, T; Condous, G; Epstein, E; Leone, F P; Testa, A C; Van Huffel, S; Bourne, T; Timmerman, D

    2012-10-01

    The aim of this study was to estimate the diagnostic accuracy and interobserver agreement in predicting intracavitary uterine pathology at offline analysis of three-dimensional (3D) ultrasound volumes of the uterus. 3D volumes (unenhanced ultrasound and gel infusion sonography with and without power Doppler, i.e. four volumes per patient) of 75 women presenting with abnormal uterine bleeding at a 'bleeding clinic' were assessed offline by six examiners. The sonologists were asked to provide a tentative diagnosis. A histological diagnosis was obtained by hysteroscopy with biopsy or operative hysteroscopy. Proliferative, secretory or atrophic endometrium was classified as 'normal' histology; endometrial polyps, intracavitary myomas, endometrial hyperplasia and endometrial cancer were classified as 'abnormal' histology. The diagnostic accuracy of the six sonologists with regard to normal/abnormal histology and interobserver agreement were estimated. Intracavitary pathology was diagnosed at histology in 39% of patients. Agreement between the ultrasound diagnosis and the histological diagnosis (normal vs abnormal) ranged from 67 to 83% for the six sonologists. In 45% of cases all six examiners agreed with regard to the presence/absence of intracavitary pathology. The percentage agreement between any two examiners ranged from 65 to 91% (Cohen's κ, 0.31-0.81). The Schouten κ for all six examiners was 0.51 (95% CI, 0.40-0.62), while the highest Schouten κ for any three examiners was 0.69. When analyzing stored 3D ultrasound volumes, agreement between sonologists with regard to classifying the endometrium/uterine cavity as normal or abnormal as well as the diagnostic accuracy varied substantially. Possible actions to improve interobserver agreement and diagnostic accuracy include optimization of image quality and the use of a consistent technique for analyzing the 3D volumes. Copyright © 2012 ISUOG. Published by John Wiley & Sons, Ltd.

  11. Soldier Performance and Mood States Following a Strenuous Road March

    DTIC Science & Technology

    1990-01-01

    13) and the more intense the exercise, the greater the elevation (14). Reductions in heart rate through the use of beta-blockers can substantially...extreme physical fatigue. Shooting accuracy degraded severely under these conditions. An increase in body tremors due to fatigue or elevated post...exercise (9) and this may affect shooting accuracy. Muscle tremors increase after brief or prolonged muscular contractions (10, 11) and such tremors

  12. Increased genomic prediction accuracy in wheat breeding using a large Australian panel.

    PubMed

    Norman, Adam; Taylor, Julian; Tanaka, Emi; Telfer, Paul; Edwards, James; Martinant, Jean-Pierre; Kuchel, Haydn

    2017-12-01

    Genomic prediction accuracy within a large panel was found to be substantially higher than that previously observed in smaller populations, and also higher than QTL-based prediction. In recent years, genomic selection for wheat breeding has been widely studied, but this has typically been restricted to population sizes under 1000 individuals. To assess its efficacy in germplasm representative of commercial breeding programmes, we used a panel of 10,375 Australian wheat breeding lines to investigate the accuracy of genomic prediction for grain yield, physical grain quality and other physiological traits. To achieve this, the complete panel was phenotyped in a dedicated field trial and genotyped using a custom Axiom™ Affymetrix SNP array. A high-quality consensus map was also constructed, allowing the linkage disequilibrium present in the germplasm to be investigated. Using the complete SNP array, genomic prediction accuracies were found to be substantially higher than those previously observed in smaller populations and also more accurate compared to prediction approaches using a finite number of selected quantitative trait loci. Multi-trait genetic correlations were also assessed at an additive and residual genetic level, identifying a negative genetic correlation between grain yield and protein as well as a positive genetic correlation between grain size and test weight.

  13. Antarctic ice shelf thickness from CryoSat-2 radar altimetry

    NASA Astrophysics Data System (ADS)

    Chuter, Stephen; Bamber, Jonathan

    2016-04-01

    The Antarctic ice shelves provide buttressing to the inland grounded ice sheet, and therefore play a controlling role in regulating ice dynamics and mass imbalance. Accurate knowledge of ice shelf thickness is essential for input-output method mass balance calculations, sub-ice shelf ocean models and buttressing parameterisations in ice sheet models. Ice shelf thickness has previously been inferred from satellite altimetry elevation measurements using the assumption of hydrostatic equilibrium, as direct measurements of ice thickness do not provide the spatial coverage necessary for these applications. The sensor limitations of previous radar altimeters have led to poor data coverage and a lack of accuracy, particularly in the grounding zone where a break in slope exists. We present a new ice shelf thickness dataset using four years (2011-2014) of CryoSat-2 elevation measurements, with its SARIn dual antennae mode of operation alleviating the issues affecting previous sensors. These improvements and the dense across-track spacing of the satellite have resulted in ~92% coverage of the ice shelves, with substantial improvements, for example, of over 50% across the Venable and Totten Ice Shelves in comparison to the previous dataset. Significant improvements in coverage and accuracy are also seen south of 81.5° for the Ross and Filchner-Ronne Ice Shelves. Validation of the surface elevation measurements, used to derive ice thickness, against NASA ICESat laser altimetry data shows a mean bias of less than 1 m (equivalent to less than 9 m in ice thickness) and a fourfold decrease in standard deviation in comparison to the previous continental dataset. Importantly, the most substantial improvements are found in the grounding zone. Validation of the derived thickness data has been carried out using multiple Radio Echo Sounding (RES) campaigns across the continent. Over the Amery ice shelf, where extensive RES measurements exist, the mean difference between the datasets is 3.3% and 4.7% across the whole shelf and within 10 km of the grounding line, respectively. These represent a two- to threefold improvement in accuracy when compared to the previous data product. The impact of these improvements on input-output estimates of mass balance is illustrated for the Abbot Ice Shelf. Our new product shows a mean reduction of 29% in thickness at the grounding line when compared to the previous dataset, as well as the elimination of non-physical 'data spikes' that were prevalent in the previous product in areas of complex terrain. The reduction in grounding line thickness equates to a change in mass balance for the area from -14 ± 9 Gt yr^-1 to -4 ± 9 Gt yr^-1. We show examples from other sectors including the Getz and George VI ice shelves. The updated estimate is more consistent with the positive surface elevation rate in this region obtained from satellite altimetry. The new thickness dataset will greatly reduce the uncertainty in input-output estimates of mass balance for the ~30% of the grounding line of Antarctica where direct ice thickness measurements do not exist.
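
    The inversion behind the dataset is the hydrostatic-equilibrium relation: freeboard measured by the altimeter, corrected for the lower-density firn layer, scales to thickness by rho_w / (rho_w - rho_i), roughly a factor of nine (consistent with the <1 m elevation bias mapping to <9 m in thickness above). A worked sketch with illustrative densities and firn correction, not the values used in the study:

        def thickness_from_freeboard(freeboard_m, firn_corr_m=15.0,
                                     rho_ice=917.0, rho_water=1028.0):
            """Ice-shelf thickness from surface freeboard via hydrostatic equilibrium."""
            return (freeboard_m - firn_corr_m) * rho_water / (rho_water - rho_ice)

        print(thickness_from_freeboard(50.0))   # ~324 m for 50 m of freeboard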

  14. THESEUS: maximum likelihood superpositioning and analysis of macromolecular structures

    PubMed Central

    Theobald, Douglas L.; Wuttke, Deborah S.

    2008-01-01

    THESEUS is a command-line program for performing maximum likelihood (ML) superpositions and analysis of macromolecular structures. While conventional superpositioning methods use ordinary least-squares (LS) as the optimization criterion, ML superpositions provide substantially improved accuracy by down-weighting variable structural regions and by correcting for correlations among atoms. ML superpositioning is robust and insensitive to the specific atoms included in the analysis, and thus it does not require subjective pruning of selected variable atomic coordinates. Output includes both likelihood-based and frequentist statistics for accurate evaluation of the adequacy of a superposition and for reliable analysis of structural similarities and differences. THESEUS performs principal components analysis for analyzing the complex correlations found among atoms within a structural ensemble. PMID:16777907
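
    The building block that ML superpositioning generalizes is the weighted rigid-body (Kabsch) superposition; THESEUS estimates the per-atom weights by maximum likelihood rather than taking them as given. A sketch of the weighted step with caller-supplied weights:

        import numpy as np

        def weighted_superpose(X, Y, w):
            """Rotate/translate Y onto X. X, Y: (n, 3) coordinates; w: (n,) weights."""
            w = w / w.sum()
            xc = (w[:, None] * X).sum(axis=0)         # weighted centroids
            yc = (w[:, None] * Y).sum(axis=0)
            A = (w[:, None] * (Y - yc)).T @ (X - xc)  # weighted covariance
            U, _, Vt = np.linalg.svd(A)
            d = np.sign(np.linalg.det(Vt.T @ U.T))    # avoid improper rotation
            R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
            return (Y - yc) @ R.T + xc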

  15. An on-line calibration technique for improved blade by blade tip clearance measurement

    NASA Astrophysics Data System (ADS)

    Sheard, A. G.; Westerman, G. C.; Killeen, B.

    A description of a capacitance-based tip clearance measurement system which integrates a novel technique for calibrating the capacitance probe in situ is presented. The on-line calibration system allows the capacitance probe to be calibrated immediately prior to use, providing substantial operational advantages and maximizing measurement accuracy. The possible error sources when it is used in service are considered, and laboratory studies of performance to ascertain their magnitude are discussed. The 1.2-mm diameter FM capacitance probe is demonstrated to be insensitive to variations in blade tip thickness from 1.25 to 1.45 mm. Over typical compressor blading the probe's range was four times the variation in blade to blade clearance encountered in engine run components.

  16. Approximate analytical description of the elastic strain field due to an inclusion in a continuous medium with cubic anisotropy

    NASA Astrophysics Data System (ADS)

    Nenashev, A. V.; Koshkarev, A. A.; Dvurechenskii, A. V.

    2018-03-01

    We suggest an approach to the analytical calculation of the strain distribution due to an inclusion in elastically anisotropic media for the case of cubic anisotropy. The idea consists in the approximate reduction of the anisotropic problem to a (simpler) isotropic problem. This gives, for typical semiconductors, an improvement in accuracy by an order of magnitude, compared to the isotropic approximation. Our method allows using, in the case of elastically anisotropic media, analytical solutions obtained for isotropic media only, such as analytical formulas for the strain due to polyhedral inclusions. The present work substantially extends the applicability of analytical results, making them more suitable for describing real systems, such as epitaxial quantum dots.

  17. Selection of a seventh spectral band for the LANDSAT-D thematic mapper

    NASA Technical Reports Server (NTRS)

    Holmes, Q. A. (Principal Investigator); Nuesch, D. R.

    1978-01-01

    The author has identified the following significant results. Each of the candidate bands was examined in terms of the feasibility of gathering high quality imagery from space while taking into account solar illumination, atmospheric attenuation, and the signal/noise ratio achievable within the TM sensor constraints. For the 2.2 micron region and the thermal IR region, inband signal values were calculated from representative spectral reflectance/emittance curves and a linear discriminant analysis was employed to predict classification accuracies. Based upon the substantial improvement (from 78 to 92%) in discriminating zones of hydrothermally altered rocks from unaltered zones, over a broad range of observation conditions, a 2.08-2.35 micron spectral band having a ground resolution of 30 meters was recommended.

  18. Automated thematic mapping and change detection of ERTS-A images. [digital interpretation of Arizona imagery

    NASA Technical Reports Server (NTRS)

    Gramenopoulos, N. (Principal Investigator)

    1973-01-01

    The author has identified the following significant results. For the recognition of terrain types, spatial signatures are developed from the diffraction patterns of small areas of ERTS-1 images. This knowledge is exploited for the measurement of a small number of meaningful spatial features from the digital Fourier transforms of ERTS-1 image cells containing 32 x 32 picture elements. Using these spatial features and a heuristic algorithm, the terrain types in the vicinity of Phoenix, Arizona were recognized by the computer with high accuracy. When the spatial features were combined with spectral features under the maximum likelihood criterion, the recognition accuracy of terrain types increased substantially. It was determined that the recognition accuracy with the maximum likelihood criterion depends on the statistics of the feature vectors; nonlinear transformations of the feature vectors are required so that the terrain class statistics become approximately Gaussian. It was also determined that, for a given geographic area, the statistics of the classes remain stable over a period of a month but vary substantially between seasons.
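
    The kind of spatial feature described here can be sketched in a few lines: the Fourier power spectrum of a 32 x 32 image cell is integrated over concentric frequency rings, and a log transform (one possible Gaussianizing nonlinearity of the sort the abstract calls for) is applied. This is an illustrative reconstruction, not the original code.

      import numpy as np

      def ring_energy_features(cell, n_rings=4):
          """Spectral-ring features for one 32 x 32 image cell."""
          power = np.abs(np.fft.fftshift(np.fft.fft2(cell))) ** 2
          n = cell.shape[0]
          yy, xx = np.indices(power.shape)
          r = np.hypot(yy - n // 2, xx - n // 2)        # radial frequency
          edges = np.linspace(0.0, n // 2, n_rings + 1)
          feats = [power[(r >= lo) & (r < hi)].sum()
                   for lo, hi in zip(edges, edges[1:])]
          return np.log(np.asarray(feats) + 1e-12)      # Gaussianizing transform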

  19. Gravity model improvement using the DORIS tracking system on the SPOT 2 satellite

    NASA Technical Reports Server (NTRS)

    Nerem, R. S.; Lerch, F. J.; Williamson, R. G.; Klosko, S. M.; Robbins, J. W.; Patel, G. B.

    1994-01-01

    A high-precision radiometric satellite tracking system, the Doppler Orbitography and Radio-positioning Integrated by Satellite (DORIS) system, has recently been developed by the French space agency, Centre National d'Etudes Spatiales (CNES). DORIS was designed to provide tracking support for missions such as the joint United States/French TOPEX/Poseidon mission. As part of the flight testing process, a DORIS package was flown on the French SPOT 2 satellite. A substantial quantity of geodetic quality tracking data was obtained on SPOT 2 from an extensive international DORIS tracking network. These data were analyzed to assess their accuracy and to evaluate the gravitational modeling enhancements they provide in combination with the Goddard Earth Model-T3 (GEM-T3) gravitational model. These observations have noise levels of 0.4 to 0.5 mm/s, with few residual systematic effects. Although the SPOT 2 satellite experiences high atmospheric drag forces, the precision and global coverage of the DORIS tracking data have enabled more extensive orbit parameterization to mitigate these effects. As a result, the SPOT 2 orbital errors have been reduced to an estimated radial accuracy in the 10-20 cm RMS range. The addition of these data, which encompass many regions heretofore lacking precision satellite tracking, has significantly improved GEM-T3 and allowed greatly improved orbit accuracies for SPOT 2 and other Sun-synchronous satellites (such as ERS 1 and EOS). Comparison of the ensuing gravity model with other contemporary fields (GRIM-4C2, TEG2B, and OSU91A) provides a means to assess the current state of knowledge of the Earth's gravity field. Thus, the DORIS experiment on SPOT 2 has provided a strong basis for evaluating this new orbit tracking technology and has demonstrated the important contribution of the DORIS network to the success of the TOPEX/Poseidon mission.

  20. Improving the spatial and temporal resolution with quantification of uncertainty and errors in earth observation data sets using Data Interpolating Empirical Orthogonal Functions methodology

    NASA Astrophysics Data System (ADS)

    El Serafy, Ghada; Gaytan Aguilar, Sandra; Ziemba, Alexander

    2016-04-01

    There is an increasing use of process-based models in the investigation of ecological systems and scenario predictions. The accuracy and quality of these models are improved when they are run with high spatial and temporal resolution data sets. However, ecological data can often be difficult to collect, which manifests itself through irregularities in the spatial and temporal domain of these data sets. Through the use of the Data INterpolating Empirical Orthogonal Functions (DINEOF) methodology, earth observation products can be improved to have full spatial coverage within the desired domain as well as increased temporal resolution, to the daily and weekly time steps frequently required by process-based models [1]. The DINEOF methodology results in a degree of error being affixed to the refined data product. In order to determine the degree of error introduced through this process, suspended particulate matter and chlorophyll-a data from MERIS are used with DINEOF to produce high resolution products for the Wadden Sea. These new data sets are then compared with in-situ and other data sources to determine the error. Artificial cloud cover scenarios are also conducted in order to substantiate the findings from the MERIS data experiments. Secondly, the accuracy of DINEOF is explored to evaluate the variance of the methodology. The degree of accuracy is combined with the overall error produced by the methodology and reported in an assessment of the quality of DINEOF when applied to resolution refinement of chlorophyll-a and suspended particulate matter in the Wadden Sea. References [1] Sirjacobs, D.; Alvera-Azcárate, A.; Barth, A.; Lacroix, G.; Park, Y.; Nechad, B.; Ruddick, K.G.; Beckers, J.-M. (2011). Cloud filling of ocean colour and sea surface temperature remote sensing products over the Southern North Sea by the Data Interpolating Empirical Orthogonal Functions methodology. J. Sea Res. 65(1): 114-130. dx.doi.org/10.1016/j.seares.2010.08.002
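
    The core of a DINEOF-style gap filler is compact enough to sketch: missing pixels are initialised, a truncated SVD reconstruction is computed, and the gap values are replaced by the reconstruction until convergence. The real method also selects the number of retained EOF modes by cross-validation, which this minimal sketch (with an assumed fixed rank) omits.

      import numpy as np

      def dineof_fill(data, rank=5, n_iter=50):
          """Fill NaN gaps in a (space x time) matrix by iterating a
          truncated-SVD reconstruction, DINEOF-style."""
          mask = np.isnan(data)
          filled = np.where(mask, np.nanmean(data), data)   # crude first guess
          for _ in range(n_iter):
              U, s, Vt = np.linalg.svd(filled, full_matrices=False)
              recon = (U[:, :rank] * s[:rank]) @ Vt[:rank]
              filled[mask] = recon[mask]                    # update gaps only
          return filled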

  1. The use of imprecise processing to improve accuracy in weather & climate prediction

    NASA Astrophysics Data System (ADS)

    Düben, Peter D.; McNamara, Hugh; Palmer, T. N.

    2014-08-01

    The use of stochastic processing hardware and low precision arithmetic in atmospheric models is investigated. Stochastic processors allow hardware-induced faults in calculations, sacrificing bit-reproducibility and precision in exchange for improvements in performance and potentially in forecast accuracy, since the reduction in power consumption could allow higher resolution. A similar trade-off is achieved using low precision arithmetic, with improvements in computation and communication speed and savings in storage and memory requirements. As high-performance computing becomes more massively parallel and power intensive, these two approaches may be important stepping stones in the pursuit of global cloud-resolving atmospheric modelling. The impact of both hardware-induced faults and low precision arithmetic is tested using the Lorenz '96 model and the dynamical core of a global atmosphere model. In the Lorenz '96 model there is a natural scale separation; the spectral discretisation used in the dynamical core also allows large and small scale dynamics to be treated separately within the code. Such scale separation allows the impact of lower-accuracy arithmetic to be restricted to components close to the truncation scales and hence close to the necessarily inexact parametrised representations of unresolved processes. By contrast, the larger scales are calculated using high precision deterministic arithmetic. Hardware faults from stochastic processors are emulated using a bit-flip model with different fault rates. Our simulations show that neither approach to inexact calculation substantially affects the large scale behaviour, provided the inexactness is restricted to the smaller scales. Indeed, results from the Lorenz '96 simulations are superior when the small scales are calculated on an emulated stochastic processor rather than parametrised. This suggests that inexact calculations at the small scale could reduce computation and power costs without adversely affecting the quality of the simulations, which would allow higher resolution models to be run at the same computational cost.
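
    The flavour of these experiments can be reproduced with the single-scale Lorenz '96 system, rounding the tendency to half precision as a crude stand-in for an inexact processor (the paper's fault model, bit flips at a given rate, is more elaborate). This is an illustrative sketch, not the study's emulator.

      import numpy as np

      def l96_tendency(x, forcing=8.0):
          """Lorenz '96 tendency: dx_i/dt = (x_{i+1} - x_{i-2}) x_{i-1} - x_i + F."""
          return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + forcing

      def step(x, dt=0.005, inexact=False):
          """One Euler step; optionally degrade the tendency to float16."""
          dx = l96_tendency(x)
          if inexact:
              dx = dx.astype(np.float16).astype(np.float64)
          return x + dt * dx

      x_hi = x_lo = 8.0 + 0.01 * np.random.default_rng(0).standard_normal(40)
      for _ in range(2000):
          x_hi, x_lo = step(x_hi), step(x_lo, inexact=True)
      print(np.mean(x_hi), np.mean(x_lo))  # a real test would compare long-run statistics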

  2. Optimizing care of ventilated infants by improving weighing accuracy on incubator scales.

    PubMed

    El-Kafrawy, Ula; Taylor, R J

    2016-01-01

    To determine the accuracy of weighing ventilated infants on incubator scales, and whether the accuracy can be improved by the addition of a ventilator tube compensator (VTC) device to counterbalance the force exerted by the ventilator tubing. Body weights on integral incubator scales were compared in ventilated infants (with and without a VTC) with body weights on standalone electronic scales (true weight). Individual and series of trend weights were obtained for the infants. The method of Bland and Altman was used to assess the introduced bias. The study included 60 ventilated infants; 66% of them weighed <1000 g. A total of 102 paired-weight datasets were obtained for 30 infants undergoing conventional ventilation and 30 undergoing high frequency oscillator ventilation (HFOV) supported by a SensorMedics oscillator (with and without a VTC). The mean differences (95% CI for the bias) between the integral and true scale weighing methods were 60.8 g (49.1 g to 72.5 g) without and -2.8 g (-8.9 g to 3.3 g) with a VTC in HFOV infants, and 41.0 g (32.1 g to 50.0 g) without and -5.1 g (-9.3 g to -0.8 g) with a VTC in conventionally ventilated infants. Differences of greater than 2% were considered clinically relevant and occurred in 93.8% without and 20.8% with a VTC in HFOV infants, and in 81.5% without and 27.8% with a VTC in conventionally ventilated infants. The use of the VTC device represents a substantial improvement on the current practice for weighing ventilated infants, particularly in extremely preterm infants, where an over- or underestimated weight can have important clinical implications for treatment. A large-scale clinical trial to validate these findings is needed.
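
    The Bland and Altman comparison used here is simple to compute; the sketch below uses hypothetical paired weights, not the study's data.

      import numpy as np

      def bland_altman(method, reference):
          """Mean bias and 95% limits of agreement between two methods."""
          diff = np.asarray(method, float) - np.asarray(reference, float)
          bias, sd = diff.mean(), diff.std(ddof=1)
          return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

      incubator = [812, 905, 764, 1010, 688]    # hypothetical weights, g
      standalone = [750, 840, 720, 955, 640]    # hypothetical "true" weights, g
      bias, loa = bland_altman(incubator, standalone)
      print(f"bias = {bias:.1f} g, 95% LoA = ({loa[0]:.1f}, {loa[1]:.1f}) g")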

  3. Improving Dose Determination Accuracy in Nonstandard Fields of the Varian TrueBeam Accelerator

    NASA Astrophysics Data System (ADS)

    Hyun, Megan A.

    In recent years, the use of flattening-filter-free (FFF) linear accelerators in radiation-based cancer therapy has gained popularity, especially for hypofractionated treatments (high doses of radiation given in a few sessions). However, significant challenges to accurate radiation dose determination remain. If physicists cannot accurately determine radiation dose in a clinical setting, cancer patients treated with these new machines will not receive safe, accurate and effective treatment. In this study, an extensive characterization of two commonly used clinical radiation detectors (ionization chambers and diodes) and several potential reference detectors (thermoluminescent dosimeters, plastic scintillation detectors, and alanine pellets) has been performed to investigate their use in these challenging, nonstandard fields. From this characterization, reference detectors were identified for multiple beam sizes, and correction factors were determined to improve dosimetric accuracy for ionization chambers and diodes. A validated computational (Monte Carlo) model of the TrueBeam(TM) accelerator, including FFF beam modes, was also used to calculate these correction factors, which compared favorably to measured results. Small-field corrections of up to 18% were shown to be necessary for clinical detectors such as microionization chambers. Because the impact of these large effects on treatment delivery is not well known, a treatment planning study was completed using actual hypofractionated brain, spine, and lung treatments that were delivered at the UW Carbone Cancer Center. This study demonstrated that improperly applying these detector correction factors can have a substantial impact on patient treatments. This thesis work has taken important steps toward improving the accuracy of FFF dosimetry through rigorous experimentally and Monte-Carlo-determined correction factors, the validation of an important published protocol (TG-51) for use with FFF reference fields, and a demonstration of the clinical significance of small-field correction factors. These results will facilitate the safe, accurate and effective use of this treatment modality in the clinic.

  4. A fuzzy set preference model for market share analysis

    NASA Technical Reports Server (NTRS)

    Turksen, I. B.; Willson, Ian A.

    1992-01-01

    Consumer preference models are widely used in new product design, marketing management, pricing, and market segmentation. The success of new products depends on accurate market share prediction and design decisions based on consumer preferences. The vague linguistic nature of consumer preferences and product attributes, combined with the substantial differences between individuals, creates a formidable challenge to marketing models. The most widely used methodology is conjoint analysis. Conjoint models, as currently implemented, represent linguistic preferences as ratio or interval-scaled numbers, use only numeric product attributes, and require aggregation of individuals for estimation purposes. It is not surprising that these models are costly to implement, are inflexible, and have a predictive validity that is not substantially better than chance. This affects the accuracy of market share estimates. A fuzzy set preference model can easily represent linguistic variables either in consumer preferences or product attributes with minimal measurement requirements (ordinal scales), while still estimating overall preferences suitable for market share prediction. This approach results in flexible individual-level conjoint models which can provide more accurate market share estimates from a smaller number of more meaningful consumer ratings. Fuzzy sets can be incorporated within existing preference model structures, such as a linear combination, using the techniques developed for conjoint analysis and market share estimation. The purpose of this article is to develop and fully test a fuzzy set preference model which can represent linguistic variables in individual-level models implemented in parallel with existing conjoint models. The potential improvements in market share prediction and predictive validity can substantially improve management decisions about what to make (product design), for whom to make it (market segmentation), and how much to make (market share prediction).

  5. Adaptive Laplacian filtering for sensorimotor rhythm-based brain-computer interfaces.

    PubMed

    Lu, Jun; McFarland, Dennis J; Wolpaw, Jonathan R

    2013-02-01

    Sensorimotor rhythms (SMRs) are 8-30 Hz oscillations in the electroencephalogram (EEG) recorded from the scalp over sensorimotor cortex that change with movement and/or movement imagery. Many brain-computer interface (BCI) studies have shown that people can learn to control SMR amplitudes and can use that control to move cursors and other objects in one, two or three dimensions. At the same time, if SMR-based BCIs are to be useful for people with neuromuscular disabilities, their accuracy and reliability must be improved substantially. These BCIs often use spatial filtering methods such as common average reference (CAR), Laplacian (LAP) filter or common spatial pattern (CSP) filter to enhance the signal-to-noise ratio of EEG. Here, we test the hypothesis that a new filter design, called an 'adaptive Laplacian (ALAP) filter', can provide better performance for SMR-based BCIs. An ALAP filter employs a Gaussian kernel to construct a smooth spatial gradient of channel weights and then simultaneously seeks the optimal kernel radius of this spatial filter and the regularization parameter of linear ridge regression. This optimization is based on minimizing the leave-one-out cross-validation error through a gradient descent method and is computationally feasible. Using a variety of kinds of BCI data from a total of 22 individuals, we compare the performances of ALAP filter to CAR, small LAP, large LAP and CSP filters. With a large number of channels and limited data, ALAP performs significantly better than CSP, CAR, small LAP and large LAP both in classification accuracy and in mean-squared error. Using fewer channels restricted to motor areas, ALAP is still superior to CAR, small LAP and large LAP, but equally matched to CSP. Thus, ALAP may help to improve the accuracy and robustness of SMR-based BCIs.
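
    The leave-one-out criterion that ALAP minimises has a closed form for linear ridge regression, so no refitting per left-out trial is needed: e_i = (y_i - yhat_i) / (1 - H_ii), with hat matrix H = X(X'X + lam*I)^(-1)X'. The sketch below scans only the regularization parameter on synthetic data; the paper's gradient descent additionally searches the Gaussian kernel radius of the spatial filter.

      import numpy as np

      def ridge_loocv_error(X, y, lam):
          """Closed-form leave-one-out mean squared error for ridge regression."""
          n, d = X.shape
          H = X @ np.linalg.solve(X.T @ X + lam * np.eye(d), X.T)
          loo = (y - H @ y) / (1.0 - np.diag(H))
          return np.mean(loo ** 2)

      rng = np.random.default_rng(1)
      X = rng.standard_normal((200, 30))              # stand-in for EEG features
      y = X[:, 0] + 0.1 * rng.standard_normal(200)    # stand-in target
      lams = 10.0 ** np.arange(-4, 3)
      print("best lambda:", min(lams, key=lambda l: ridge_loocv_error(X, y, l)))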

  6. Using stable isotopes to associate migratory shorebirds with their wintering locations in Argentina

    USGS Publications Warehouse

    Farmer, A.H.; Abril, M.; Fernandez, M.; Torres, J.; Kester, C.; Bern, C.

    2004-01-01

    We are evaluating the use of stable isotopes to identify the wintering areas of Neotropical migratory shorebirds in Argentina. Our goal is to associate individual birds, captured on the breeding grounds or in migration, with specific winter sites, thereby helping to identify distinct areas used by different subpopulations. In January and February 2002 and 2003, we collected flight feathers from shorebirds at 23 wintering sites distributed across seven provinces in Argentina (n = 170). Feather samples were prepared and analyzed for δ13C, δ15N, δ34S, δ18O and δD by continuous flow methods. A discriminant function based on deuterium alone was not an accurate predictor of a shorebird's province of origin, ranging from 8% correct (Santiago del Estero) to 80% correct (Santa Cruz). When other isotopes were included, the prediction accuracy increased substantially (from 56% in Buenos Aires to 100% in Tucumán). The improvement in accuracy was due to C/N, which separated D-depleted sites in the Andes from those in the south, and the inclusion of S, which separated sites with respect to their distance from the Atlantic. We were also able to correctly discriminate shorebirds from two closely spaced sites within the province of Tierra del Fuego. These results suggest the feasibility of identifying the origin of a shorebird at a provincial level of accuracy, as well as uniquely identifying birds from some closely spaced sites. There is a high degree of intra- and inter-bird variability, especially in the Pampas region, where there is a wide variety of wetland/water conditions. In that important shorebird region, the variability itself may in fact be the "signature." The future addition of trace elements to the analyses may improve predictions based solely on stable isotopes.
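
    Assigning a feather to a province from several isotope ratios is a standard discriminant-analysis task; the sketch below uses toy numbers (not the study's measurements) and hypothetical province labels.

      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

      # Columns: d13C, d15N, d34S, dD (toy values, three feathers per province).
      X = np.array([[-24.1, 8.2, 5.1, -60.0], [-23.5, 7.9, 4.8, -58.0],
                    [-23.9, 8.4, 5.3, -62.0], [-19.2, 12.1, 12.3, -95.0],
                    [-19.8, 11.7, 11.9, -99.0], [-19.5, 12.4, 12.6, -97.0],
                    [-26.0, 6.5, 2.2, -120.0], [-25.4, 6.9, 2.6, -118.0],
                    [-25.8, 6.3, 2.4, -122.0]])
      y = ["BuenosAires"] * 3 + ["TierraDelFuego"] * 3 + ["SantaCruz"] * 3

      lda = LinearDiscriminantAnalysis().fit(X, y)
      print(lda.predict([[-19.6, 12.0, 12.1, -96.0]]))  # -> TierraDelFuego
      # A real analysis would cross-validate rather than score training data.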

  7. Adaptive Laplacian filtering for sensorimotor rhythm-based brain-computer interfaces

    NASA Astrophysics Data System (ADS)

    Lu, Jun; McFarland, Dennis J.; Wolpaw, Jonathan R.

    2013-02-01

    Objective. Sensorimotor rhythms (SMRs) are 8-30 Hz oscillations in the electroencephalogram (EEG) recorded from the scalp over sensorimotor cortex that change with movement and/or movement imagery. Many brain-computer interface (BCI) studies have shown that people can learn to control SMR amplitudes and can use that control to move cursors and other objects in one, two or three dimensions. At the same time, if SMR-based BCIs are to be useful for people with neuromuscular disabilities, their accuracy and reliability must be improved substantially. These BCIs often use spatial filtering methods such as common average reference (CAR), Laplacian (LAP) filter or common spatial pattern (CSP) filter to enhance the signal-to-noise ratio of EEG. Here, we test the hypothesis that a new filter design, called an ‘adaptive Laplacian (ALAP) filter’, can provide better performance for SMR-based BCIs. Approach. An ALAP filter employs a Gaussian kernel to construct a smooth spatial gradient of channel weights and then simultaneously seeks the optimal kernel radius of this spatial filter and the regularization parameter of linear ridge regression. This optimization is based on minimizing the leave-one-out cross-validation error through a gradient descent method and is computationally feasible. Main results. Using a variety of kinds of BCI data from a total of 22 individuals, we compare the performances of ALAP filter to CAR, small LAP, large LAP and CSP filters. With a large number of channels and limited data, ALAP performs significantly better than CSP, CAR, small LAP and large LAP both in classification accuracy and in mean-squared error. Using fewer channels restricted to motor areas, ALAP is still superior to CAR, small LAP and large LAP, but equally matched to CSP. Significance. Thus, ALAP may help to improve the accuracy and robustness of SMR-based BCIs.

  8. Does the Prostate Imaging-Reporting and Data System (PI-RADS) version 2 improve accuracy in reporting anterior lesions on multiparametric magnetic resonance imaging (mpMRI)?

    PubMed

    Hoffmann, Richard; Logan, Callum; O'Callaghan, Michael; Gormly, Kirsten; Chan, Ken; Foreman, Darren

    2018-01-01

    Multiparametric MRI (mpMRI) is useful in detecting anterior prostate tumours. Because of their location, anterior tumours are often large at diagnosis and may be suspicious for extra-prostatic extension (EPE). We aim to evaluate whether PI-RADS v2 is more accurate in assessing anterior prostate lesions identified on mpMRI than PI-RADS v1. Patients with anterior prostate lesions diagnosed on mpMRI who proceeded to a cognitive fusion transperineal prostate biopsy were identified. Each mpMRI was blinded and read by two experienced prostate MRI radiologists and assigned a PI-RADS v1 and PI-RADS v2 score, and the presence of EPE was estimated. Correlation was made with transperineal histopathology and, where relevant, radical prostatectomy histopathology. Concordance between PI-RADS v1 and PI-RADS v2, and between examiners for the same PI-RADS score, was calculated using a weighted kappa. Fifty-eight consecutive men were identified. Concordance between the examiners for PI-RADS v1 and for v2 showed substantial agreement (version 1: weighted kappa 0.71; version 2: weighted kappa 0.69). There was no difference in accuracy when using PI-RADS v1 or PI-RADS v2 to predict clinically significant cancer. There was poor correlation between EPE measured on mpMRI and EPE in radical prostatectomy histopathology. PI-RADS v2 is reproducible between radiologists but does not have improved accuracy for diagnosing anterior tumours of the prostate when compared to PI-RADS v1. Multiparametric MRI is accurate at detecting anterior tumours, with a sensitivity of 86-88%.

  9. Effect of two layouts on high technology AAC navigation and content location by people with aphasia.

    PubMed

    Wallace, Sarah E; Hux, Karen

    2014-03-01

    Navigating high-technology augmentative and alternative communication (AAC) devices with dynamic displays can be challenging for people with aphasia. The purpose of this study was to determine which of two AAC interfaces two people with aphasia could use most efficiently and accurately. The researchers used a BCB'C' alternating treatment design to provide device-use instruction to two people with severe aphasia on two personalised AAC interfaces that had different navigation layouts but identical content. One interface had static buttons for homepage and go-back features, and the other had static buttons in a navigation ring layout. Throughout treatment, the researchers monitored participants' mastery patterns regarding navigation efficiency and accuracy when locating target messages. Participants' accuracy and efficiency improved with both interfaces given intervention; however, the navigation ring layout appeared more transparent and better facilitated navigation than the homepage layout. People with aphasia can learn to navigate computerised devices; however, interface layout can substantially affect the efficiency and accuracy with which they locate messages. Given intervention incorporating errorless learning principles, people with chronic aphasia can learn to navigate across multiple device levels to locate target sentences. Both navigation ring and homepage interfaces may be used by people with aphasia. Some people with aphasia may be more consistent and efficient in finding target sentences using the navigation ring interface than the homepage interface. Additionally, the navigation ring interface may be more transparent and easier for people with aphasia to master; that is, they may require fewer intervention sessions to learn to navigate it. Generalisation of learning may result from use of the navigation ring interface: specifically, people with aphasia may improve navigation with the homepage interface as a result of instruction on the navigation ring interface, but not vice versa.

  10. Investigation of the interpolation method to improve the distributed strain measurement accuracy in optical frequency domain reflectometry systems.

    PubMed

    Cui, Jiwen; Zhao, Shiyuan; Yang, Di; Ding, Zhenyang

    2018-02-20

    We use a spectrum interpolation technique to improve the distributed strain measurement accuracy in a Rayleigh-scatter-based optical frequency domain reflectometry sensing system. We demonstrate that strain accuracy is not limited by the "uncertainty principle" that exists in the time-frequency analysis. Different interpolation methods are investigated and used to improve the accuracy of peak position of the cross-correlation and, therefore, improve the accuracy of the strain. Interpolation implemented by padding zeros on one side of the windowed data in the spatial domain, before the inverse fast Fourier transform, is found to have the best accuracy. Using this method, the strain accuracy and resolution are both improved without decreasing the spatial resolution. The strain of 3 μϵ within the spatial resolution of 1 cm at the position of 21.4 m is distinguished, and the measurement uncertainty is 3.3 μϵ.
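
    The zero-padding principle is easy to demonstrate: padding one domain with zeros before the inverse FFT interpolates the other domain, so a cross-correlation peak can be located on a grid much finer than one sample. The sketch below pads the cross-spectrum of two synthetic records (the paper pads the windowed data themselves, but the interpolation principle is the same).

      import numpy as np

      def interpolated_peak(a, b, pad_factor=16):
          """Locate the delay of b relative to a to sub-sample precision by
          zero-padding the cross-spectrum before the inverse FFT."""
          n = len(a)
          cross = np.conj(np.fft.fft(a)) * np.fft.fft(b)
          m = n * pad_factor
          padded = np.zeros(m, dtype=complex)
          padded[: n // 2] = cross[: n // 2]      # keep positive frequencies
          padded[-n // 2 :] = cross[-n // 2 :]    # and negative frequencies
          corr = np.fft.ifft(padded).real
          shift = int(np.argmax(corr))
          if shift > m // 2:                      # wrap negative lags
              shift -= m
          return shift / pad_factor               # delay in original samples

      t = np.arange(256)
      a = np.exp(-0.5 * ((t - 100.0) / 5.0) ** 2)
      b = np.exp(-0.5 * ((t - 103.4) / 5.0) ** 2)  # delayed by 3.4 samples
      print(interpolated_peak(a, b))               # close to 3.4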

  11. Compositing multitemporal remote sensing data sets

    USGS Publications Warehouse

    Qi, J.; Huete, A.R.; Hood, J.; Kerr, Y.

    1993-01-01

    To eliminate cloud and atmosphere-affected pixels, the compositing of multitemporal remote sensing data sets is done by selecting the maximum value of the normalized difference vegetation index (NDVI) within a compositing period. The NDVI classifier, however, is strongly affected by surface type and anisotropic properties, sensor viewing geometries, and atmospheric conditions. Consequently, the composited multitemporal remote sensing data contain substantial noise from these external effects. To improve the accuracy of compositing products, two key approaches can be taken: one is to refine the compositing classifier (NDVI) and the other is to improve existing compositing algorithms. In this project, an alternative classifier was developed and an alternative pixel selection criterion was proposed for compositing. The new classifier and the alternative compositing algorithm were applied to an advanced very high resolution radiometer data set of different biome types in the United States. The results were compared with the maximum value compositing and the best index slope extraction algorithms. The new approaches greatly reduced the high frequency noise related to the external factors and retained more reliable data. The results suggest that the geometric-optical canopy properties of specific biomes may be needed in compositing. Limitations of the new approaches include the dependency of pixel selection on the length of the composite period and data discontinuity.
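
    For reference, the baseline being improved on, maximum value compositing, fits in a few lines; refining the classifier means replacing the NDVI computed on the first line, and refining the algorithm means replacing the per-pixel argmax selection rule. Array names are illustrative (stacks are time x rows x cols).

      import numpy as np

      def max_value_composite(red_stack, nir_stack):
          """Classic MVC: per pixel, keep the date with the largest NDVI."""
          ndvi = (nir_stack - red_stack) / (nir_stack + red_stack + 1e-9)
          best = np.argmax(ndvi, axis=0)           # winning date per pixel
          rows, cols = np.indices(best.shape)
          return ndvi[best, rows, cols], best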

  12. A fast multi-resolution approach to tomographic PIV

    NASA Astrophysics Data System (ADS)

    Discetti, Stefano; Astarita, Tommaso

    2012-03-01

    Tomographic particle image velocimetry (Tomo-PIV) is a recently developed three-component, three-dimensional anemometric non-intrusive measurement technique, based on an optical tomographic reconstruction applied to simultaneously recorded images of the distribution of light intensity scattered by seeding particles immersed in the flow. Nowadays, the reconstruction process is carried out mainly by iterative algebraic reconstruction techniques, well suited to handling the problem of a limited number of views, but computationally intensive and memory demanding. The adoption of the multiplicative algebraic reconstruction technique (MART) has become more and more accepted. In the present work, a novel multi-resolution approach is proposed, relying on the adoption of a coarser grid in the first step of the reconstruction to obtain a fast estimation of a reliable and accurate first guess. A performance assessment, carried out on three-dimensional computer-generated distributions of particles, shows a substantial acceleration of the reconstruction process for all the tested seeding densities with respect to the standard method based on 5 MART iterations; a considerable reduction in memory storage is also achieved. Furthermore, a slight accuracy improvement is noticed. A modified version, improved by a multiplicative line-of-sight estimation of the first guess on the compressed configuration, is also tested, exhibiting a further remarkable decrease in both memory storage and computational effort, mostly at the lowest tested seeding densities, while retaining the same accuracy.
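
    For orientation, a minimal dense, toy-scale MART update is shown below; the multi-resolution idea of this paper amounts to running such iterations on a coarser voxel grid first and using the result as the first guess on the fine grid. Production Tomo-PIV codes use sparse weight matrices and camera calibration models.

      import numpy as np

      def mart(A, b, n_iter=5, mu=1.0, eps=1e-12):
          """Multiplicative ART: x_j <- x_j * (b_i / (A_i . x))^(mu * A_ij)
          for each ray i.  A is (rays x voxels), b the recorded intensities."""
          x = np.ones(A.shape[1])
          for _ in range(n_iter):
              for i in range(A.shape[0]):
                  proj = A[i] @ x
                  if proj > eps:
                      x *= (b[i] / proj) ** (mu * A[i])
          return x

      A = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]])  # toy 3-ray, 2-voxel system
      b = A @ np.array([2.0, 3.0])
      print(mart(A, b))                                   # recovers [2, 3]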

  13. Multilevel summation with B-spline interpolation for pairwise interactions in molecular dynamics simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hardy, David J., E-mail: dhardy@illinois.edu; Schulten, Klaus; Wolff, Matthew A.

    2016-03-21

    The multilevel summation method for calculating electrostatic interactions in molecular dynamics simulations constructs an approximation to a pairwise interaction kernel and its gradient, which can be evaluated at a cost that scales linearly with the number of atoms. The method smoothly splits the kernel into a sum of partial kernels of increasing range and decreasing variability with the longer-range parts interpolated from grids of increasing coarseness. Multilevel summation is especially appropriate in the context of dynamics and minimization, because it can produce continuous gradients. This article explores the use of B-splines to increase the accuracy of the multilevel summation method (for nonperiodic boundaries) without incurring additional computation other than a preprocessing step (whose cost also scales linearly). To obtain accurate results efficiently involves technical difficulties, which are overcome by a novel preprocessing algorithm. Numerical experiments demonstrate that the resulting method offers substantial improvements in accuracy and that its performance is competitive with an implementation of the fast multipole method in general and markedly better for Hamiltonian formulations of molecular dynamics. The improvement is great enough to establish multilevel summation as a serious contender for calculating pairwise interactions in molecular dynamics simulations. In particular, the method appears to be uniquely capable for molecular dynamics in two situations, nonperiodic boundary conditions and massively parallel computation, where the fast Fourier transform employed in the particle-mesh Ewald method falls short.

  14. Multilevel summation with B-spline interpolation for pairwise interactions in molecular dynamics simulations.

    PubMed

    Hardy, David J; Wolff, Matthew A; Xia, Jianlin; Schulten, Klaus; Skeel, Robert D

    2016-03-21

    The multilevel summation method for calculating electrostatic interactions in molecular dynamics simulations constructs an approximation to a pairwise interaction kernel and its gradient, which can be evaluated at a cost that scales linearly with the number of atoms. The method smoothly splits the kernel into a sum of partial kernels of increasing range and decreasing variability with the longer-range parts interpolated from grids of increasing coarseness. Multilevel summation is especially appropriate in the context of dynamics and minimization, because it can produce continuous gradients. This article explores the use of B-splines to increase the accuracy of the multilevel summation method (for nonperiodic boundaries) without incurring additional computation other than a preprocessing step (whose cost also scales linearly). To obtain accurate results efficiently involves technical difficulties, which are overcome by a novel preprocessing algorithm. Numerical experiments demonstrate that the resulting method offers substantial improvements in accuracy and that its performance is competitive with an implementation of the fast multipole method in general and markedly better for Hamiltonian formulations of molecular dynamics. The improvement is great enough to establish multilevel summation as a serious contender for calculating pairwise interactions in molecular dynamics simulations. In particular, the method appears to be uniquely capable for molecular dynamics in two situations, nonperiodic boundary conditions and massively parallel computation, where the fast Fourier transform employed in the particle-mesh Ewald method falls short.

  15. Assimilation of snow covered area information into hydrologic and land-surface models

    USGS Publications Warehouse

    Clark, M.P.; Slater, A.G.; Barrett, A.P.; Hay, L.E.; McCabe, G.J.; Rajagopalan, B.; Leavesley, G.H.

    2006-01-01

    This paper describes a data assimilation method that uses observations of snow covered area (SCA) to update hydrologic model states in a mountainous catchment in Colorado. The assimilation method uses SCA information as part of an ensemble Kalman filter to alter the sub-basin distribution of snow as well as the basin water balance. This method permits an optimal combination of model simulations and observations, as well as propagation of information across model states. Sensitivity experiments are conducted with a fairly simple snowpack/water-balance model to evaluate effects of the data assimilation scheme on simulations of streamflow. The assimilation of SCA information results in minor improvements in the accuracy of streamflow simulations near the end of the snowmelt season. The small effect from SCA assimilation is initially surprising. It can be explained both because a substantial portion of snow melts before any bare ground is exposed, and because the transition from 100% to 0% snow coverage occurs fairly quickly. Both of these factors are basin-dependent. Satellite SCA information is expected to be most useful in basins where snow cover is ephemeral. The data assimilation strategy presented in this study improved the accuracy of the streamflow simulation, indicating that SCA is a useful source of independent information that can be used as part of an integrated data assimilation strategy. © 2005 Elsevier Ltd. All rights reserved.
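
    The SCA update described here is a standard stochastic ensemble Kalman filter step. The sketch below uses an illustrative two-variable state (snow water equivalent and a second store) and a hypothetical saturating SWE-to-SCA observation operator; none of it is the paper's model.

      import numpy as np

      def enkf_update(ensemble, obs, obs_operator, obs_var, seed=0):
          """Stochastic EnKF update; ensemble is (n_ens x n_state)."""
          rng = np.random.default_rng(seed)
          n_ens = ensemble.shape[0]
          Hx = np.array([obs_operator(m) for m in ensemble])   # predicted SCA
          X = ensemble - ensemble.mean(0)
          Y = Hx - Hx.mean()
          gain = (X.T @ Y) / (Y @ Y + (n_ens - 1) * obs_var)   # Kalman gain
          perturbed = obs + rng.normal(0.0, np.sqrt(obs_var), n_ens)
          return ensemble + np.outer(perturbed - Hx, gain)

      ens = np.abs(np.random.default_rng(1).normal([50.0, 0.3], [15.0, 0.05], (40, 2)))
      post = enkf_update(ens, obs=0.6, obs_operator=lambda m: 1 - np.exp(-m[0] / 40.0),
                         obs_var=0.01)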

  16. Multilevel summation with B-spline interpolation for pairwise interactions in molecular dynamics simulations

    NASA Astrophysics Data System (ADS)

    Hardy, David J.; Wolff, Matthew A.; Xia, Jianlin; Schulten, Klaus; Skeel, Robert D.

    2016-03-01

    The multilevel summation method for calculating electrostatic interactions in molecular dynamics simulations constructs an approximation to a pairwise interaction kernel and its gradient, which can be evaluated at a cost that scales linearly with the number of atoms. The method smoothly splits the kernel into a sum of partial kernels of increasing range and decreasing variability with the longer-range parts interpolated from grids of increasing coarseness. Multilevel summation is especially appropriate in the context of dynamics and minimization, because it can produce continuous gradients. This article explores the use of B-splines to increase the accuracy of the multilevel summation method (for nonperiodic boundaries) without incurring additional computation other than a preprocessing step (whose cost also scales linearly). To obtain accurate results efficiently involves technical difficulties, which are overcome by a novel preprocessing algorithm. Numerical experiments demonstrate that the resulting method offers substantial improvements in accuracy and that its performance is competitive with an implementation of the fast multipole method in general and markedly better for Hamiltonian formulations of molecular dynamics. The improvement is great enough to establish multilevel summation as a serious contender for calculating pairwise interactions in molecular dynamics simulations. In particular, the method appears to be uniquely capable for molecular dynamics in two situations, nonperiodic boundary conditions and massively parallel computation, where the fast Fourier transform employed in the particle-mesh Ewald method falls short.

  17. Semantic integration to identify overlapping functional modules in protein interaction networks

    PubMed Central

    Cho, Young-Rae; Hwang, Woochang; Ramanathan, Murali; Zhang, Aidong

    2007-01-01

    Background The systematic analysis of protein-protein interactions can enable a better understanding of cellular organization, processes and functions. Functional modules can be identified from the protein interaction networks derived from experimental data sets. However, these analyses are challenging because of the presence of unreliable interactions and the complex connectivity of the network. The integration of protein-protein interactions with the data from other sources can be leveraged for improving the effectiveness of functional module detection algorithms. Results We have developed novel metrics, called semantic similarity and semantic interactivity, which use Gene Ontology (GO) annotations to measure the reliability of protein-protein interactions. The protein interaction networks can be converted into a weighted graph representation by assigning the reliability values to each interaction as a weight. We presented a flow-based modularization algorithm to efficiently identify overlapping modules in the weighted interaction networks. The experimental results show that the semantic similarity and semantic interactivity of interacting pairs were positively correlated with functional co-occurrence. The effectiveness of the algorithm for identifying modules was evaluated using functional categories from the MIPS database. We demonstrated that our algorithm had higher accuracy compared to other competing approaches. Conclusion The integration of protein interaction networks with GO annotation data and the capability of detecting overlapping modules substantially improve the accuracy of module identification. PMID:17650343

  18. Exposure to Mobile Phone-Emitted Electromagnetic Fields and Human Attention: No Evidence of a Causal Relationship.

    PubMed

    Curcio, Giuseppe

    2018-01-01

    In the past 20 years of research on the effects of mobile phone-derived electromagnetic fields (EMFs) on human cognition, attention has been one of the first and most extensively investigated functions. The domains investigated cover selective, sustained, and divided attention. Here, the most relevant studies on this topic are reviewed and discussed. A total of 43 studies are reported and summarized: of these, 31 indicated a total absence of statistically significant differences between real and sham signals, 9 showed a partial improvement of attentional performance (mainly an increase in speed of performance and/or improvement of accuracy) as a function of real exposure, while the remaining 3 showed inconsistent results (i.e., increased speed in some tasks and slowing in others) or even a worsening in performance (reduced speed and/or deteriorated accuracy). These results are independent of the specific attentional domain investigated. This scenario supports the conclusion that there is a substantial lack of evidence for a negative influence of non-ionizing radiation on attention functioning. Nonetheless, the published literature is very heterogeneous from the point of view of methodology (type of signal, exposure time, blinding), dosimetry (accurate evaluation of the specific absorption rate, SAR, or emitted power), and statistical analysis, making a conclusive generalization to everyday life arduous. Some remarks and suggestions regarding future research are proposed.

  19. Confocal laser endomicroscopy for in vivo diagnosis of Barrett's oesophagus and associated neoplasia: a pilot study conducted in a single Italian centre.

    PubMed

    Trovato, Cristina; Sonzogni, Angelica; Ravizza, Davide; Fiori, Giancarla; Tamayo, Darina; De Roberto, Giuseppe; de Leone, Annalisa; De Lisi, Stefania; Crosta, Cristiano

    2013-05-01

    Diagnosis and management of Barrett's oesophagus are controversial. Technical improvements in the real-time recognition of intestinal metaplasia and neoplastic foci provide the chance for more effective targeted biopsies. Confocal laser endomicroscopy allows the analysis of living cells during endoscopy. To assess the diagnostic accuracy and the inter- and intra-observer variability of endomicroscopy for detecting in vivo neoplasia (dysplasia and/or early neoplasia) in Barrett's oesophagus. Prospective pilot study. Patients referred for known Barrett's oesophagus were screened. Endomicroscopy was carried out in a circular fashion, every 1-2 cm, on the whole columnar-lined distal oesophagus. Visible lesions, when present, were analyzed first. Targeted biopsies were taken. Confocal images were classified according to the confocal Barrett classification. Endomicroscopic and histological findings were compared. Forty-eight out of 50 screened patients underwent endomicroscopy. Visible lesions were observed in 3 patients. In a per-biopsy analysis, Barrett's-oesophagus-associated neoplasia could be predicted with an accuracy of 98.1%. The agreement between endomicroscopic and histological results was substantial (κ=0.76). This study suggests that endomicroscopy can provide in vivo diagnosis of Barrett's oesophagus-associated neoplasia. Because it allows larger surface areas of the mucosa to be studied, endomicroscopy may lead to significant improvements in the in vivo screening and surveillance of Barrett's oesophagus. Copyright © 2013 Editrice Gastroenterologica Italiana S.r.l. Published by Elsevier Ltd. All rights reserved.

  20. A peptide-retrieval strategy enables significant improvement of quantitative performance without compromising confidence of identification.

    PubMed

    Tu, Chengjian; Shen, Shichen; Sheng, Quanhu; Shyr, Yu; Qu, Jun

    2017-01-30

    Reliable quantification of low-abundance proteins in complex proteomes is challenging, largely owing to the limited number of spectra/peptides identified. In this study we developed a straightforward method to improve the quantitative accuracy and precision of proteins by strategically retrieving the less confident peptides that were previously filtered out by the standard target-decoy search strategy. The filtered-out MS/MS spectra matched to confidently-identified proteins were recovered, and the peptide-spectrum-match FDR was re-calculated and controlled at a confident level of FDR≤1%, while protein FDR was maintained at ~1%. We evaluated the performance of this strategy in both spectral count- and ion current-based methods. Increases of >60% in total quantified spectra/peptides were achieved for a spike-in sample set and a public dataset from CPTAC, respectively. Incorporating the peptide retrieval strategy significantly improved the quantitative accuracy and precision, especially for low-abundance proteins (e.g. one-hit proteins). Moreover, the capacity to confidently discover significantly-altered proteins was also enhanced substantially, as demonstrated with two spike-in datasets. In summary, improved quantitative performance was achieved by this peptide recovery strategy without compromising confidence of protein identification; the strategy can be readily implemented in a broad range of quantitative proteomics techniques, including label-free and labeling approaches. We hypothesized that more quantifiable spectra and peptides per protein, even including less confident peptides, could help reduce variation and improve protein quantification. Hence the peptide retrieval strategy was developed and evaluated in two spike-in sample sets with different LC-MS/MS variations, using both MS1- and MS2-based quantitative approaches. The list of confidently identified proteins from the standard target-decoy search strategy was fixed, and additional less-confident spectra/peptides matching those confident proteins were retrieved. The total peptide-spectrum-match false discovery rate (PSM FDR) after retrieval was still controlled at a confident level of FDR≤1%. As expected, the penalty for occasionally incorporating incorrect peptide identifications is negligible in comparison with the improvements in quantitative performance. More quantifiable peptides, a lower missing-value rate, and better quantitative accuracy and precision were achieved for the same protein identifications by this simple strategy. The strategy is theoretically applicable to any quantitative approach in proteomics and thereby provides more quantitative information, especially on low-abundance proteins. Published by Elsevier B.V.
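
    A simplified sketch of the retrieval logic: PSMs mapping to already-confident proteins are pooled, and the target-decoy FDR is re-estimated within that restricted population. A production pipeline would convert the running FDR into monotone q-values; all names here are illustrative.

      import numpy as np

      def running_fdr(scores, is_decoy):
          """Target-decoy FDR at each score threshold (#decoys / #targets)."""
          order = np.argsort(scores)[::-1]
          decoy = np.asarray(is_decoy)[order]
          fdr = np.cumsum(decoy) / np.maximum(np.cumsum(~decoy), 1)
          return order, fdr

      def retrieve_psms(scores, is_decoy, proteins, confident, fdr_max=0.01):
          """Re-score only PSMs that map to confidently identified proteins."""
          keep = [i for i, p in enumerate(proteins) if p in confident]
          order, fdr = running_fdr(np.asarray(scores)[keep],
                                   np.asarray(is_decoy)[keep])
          return [keep[i] for i in order[fdr <= fdr_max]]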

  1. Weather models as virtual sensors to data-driven rainfall predictions in urban watersheds

    NASA Astrophysics Data System (ADS)

    Cozzi, Lorenzo; Galelli, Stefano; Pascal, Samuel Jolivet De Marc; Castelletti, Andrea

    2013-04-01

    Weather and climate predictions are a key element of urban hydrology, where they are used to inform water management and assist in flood warning delivery. Indeed, the modelling of the very fast dynamics of urbanized catchments can be substantially improved by the use of weather/rainfall predictions. For example, in the Singapore Marina Reservoir catchment, runoff processes have a very short time of concentration (roughly one hour); observational data are thus nearly useless for runoff predictions, and weather predictions are required. Unfortunately, radar nowcasting methods do not allow long-term weather predictions, whereas numerical models are limited by their coarse spatial scale. Moreover, numerical models are usually poorly reliable because of the fast motion and limited spatial extension of rainfall events. In this study we investigate the combined use of data-driven modelling techniques and weather variables observed/simulated with a numerical model as a way to improve rainfall prediction accuracy and lead time in the Singapore metropolitan area. To explore the feasibility of the approach, we use a Weather Research and Forecasting (WRF) model as a virtual sensor network for the input variables (the states of the WRF model) of a machine learning rainfall prediction model. More precisely, we combine an input variable selection method and a non-parametric tree-based model to characterize the empirical relation between the rainfall measured at the catchment level and all possible weather input variables provided by the WRF model. We explore different lead times to evaluate the model's reliability for longer-term predictions, as well as different time lags to see how past information could improve results. Results show that the proposed approach allows a significant improvement in the prediction accuracy of the WRF model over the Singapore urban area.
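
    A hedged sketch of the modelling chain, with synthetic data standing in for the WRF "virtual sensors": mutual information ranks candidate input variables, and a tree-based ensemble maps the selected variables to rainfall. The study's specific variable-selection method and tree model may differ.

      import numpy as np
      from sklearn.ensemble import RandomForestRegressor
      from sklearn.feature_selection import mutual_info_regression

      rng = np.random.default_rng(2)
      X = rng.standard_normal((500, 20))        # rows: time steps, cols: WRF states
      rain = np.maximum(0.0, 2 * X[:, 3] - X[:, 7] + 0.5 * rng.standard_normal(500))

      mi = mutual_info_regression(X, rain, random_state=0)
      top = np.argsort(mi)[-5:]                 # input variable selection

      model = RandomForestRegressor(n_estimators=200, random_state=0)
      model.fit(X[:300][:, top], rain[:300])    # train on the first period
      print("test R^2:", model.score(X[300:][:, top], rain[300:]))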

  2. Asking better questions: How presentation formats influence information search.

    PubMed

    Wu, Charley M; Meder, Björn; Filimon, Flavia; Nelson, Jonathan D

    2017-08-01

    While the influence of presentation formats has been widely studied in Bayesian reasoning tasks, we present the first systematic investigation of how presentation formats influence information search decisions. Four experiments were conducted across different probabilistic environments, in which subjects (N = 2,858) chose between 2 possible search queries, each with binary probabilistic outcomes, with the goal of maximizing classification accuracy. We studied 14 different numerical and visual formats for presenting information about the search environment, constructed across 6 design features that have been prominently related to improvements in Bayesian reasoning accuracy (natural frequencies, posteriors, complement, spatial extent, countability, and part-to-whole information). The posterior variants of the icon array and bar graph formats led to the highest proportion of correct responses, and were substantially better than the standard probability format. Results suggest that presenting information in terms of posterior probabilities and visualizing natural frequencies using spatial extent (a perceptual feature) were especially helpful in guiding search decisions, although environments with a mixture of probabilistic and certain outcomes were challenging across all formats. Subjects who made more accurate probability judgments did not perform better on the search task, suggesting that simple decision heuristics may be used to make search decisions without explicitly applying Bayesian inference to compute probabilities. We propose a new take-the-difference (TTD) heuristic that identifies the accuracy-maximizing query without explicit computation of posterior probabilities. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
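
    The objective subjects were asked to maximise has a simple form: for a binary query, expected classification accuracy is the sum over outcomes of the larger joint probability, since a rational responder answers with the more probable class. The sketch below computes this Bayesian benchmark explicitly; the paper's take-the-difference heuristic is a shortcut whose exact rule the abstract does not spell out, so it is not reproduced here, and the numbers are illustrative.

      import numpy as np

      def expected_accuracy(prior, likelihoods):
          """prior[c] = P(class c); likelihoods[c, o] = P(outcome o | class c).
          Expected accuracy = sum over outcomes of max_c P(c, outcome)."""
          joint = prior[:, None] * likelihoods
          return joint.max(axis=0).sum()

      prior = np.array([0.7, 0.3])
      query_a = np.array([[0.9, 0.1], [0.4, 0.6]])
      query_b = np.array([[0.6, 0.4], [0.2, 0.8]])
      print(expected_accuracy(prior, query_a),   # 0.81 -> query A is better
            expected_accuracy(prior, query_b))   # 0.70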

  3. Fully Integrated, Miniature, High-Frequency Flow Probe Utilizing MEMS Leadless SOI Technology

    NASA Technical Reports Server (NTRS)

    Ned, Alex; Kurtz, Anthony; Shang, Tonghuo; Goodman, Scott; Giemette, Gerald

    2013-01-01

    This work focused on developing, fabricating, and fully calibrating a flow-angle probe for aeronautics research by utilizing the latest microelectromechanical systems (MEMS), leadless silicon-on-insulator (SOI) sensor technology. While the concept of angle probes is not new, traditional devices had been relatively large due to fabrication constraints; often too large to resolve flow structures necessary for modern aeropropulsion measurements such as inlet flow distortions and vortices, secondary flows, etc. Measurements of this kind demanded a new approach to probe design to achieve sizes on the order of 0.1 in. (2.5 mm) diameter or smaller, capable of meeting demanding requirements for accuracy and ruggedness. This approach invoked the use of state-of-the-art processing techniques to install SOI sensor chips directly onto the probe body, thus eliminating the redundancy in sensor packaging and probe installation that has historically forced larger probe size. This also facilitated a better thermal match between the chip and its mount, improving stability and accuracy. Further, the leadless sensor technology with which the SOI sensing element is fabricated allows direct mounting and electrical interconnection of the sensor to the probe body. This leadless technology allowed a rugged wire-out approach that is performed at the sensor length scale, thus achieving substantial sensor size reductions. The technology is inherently capable of high-frequency and high-accuracy performance in high temperatures and harsh environments.

  4. Evaluation of Flexible Force Sensors for Pressure Monitoring in Treatment of Chronic Venous Disorders.

    PubMed

    Parmar, Suresh; Khodasevych, Iryna; Troynikov, Olga

    2017-08-21

    The recent use of graduated compression therapy for treatment of chronic venous disorders such as leg ulcers and oedema has led to considerable research interest in flexible and low-cost force sensors. Properly applied low pressure during compression therapy can substantially improve the treatment of chronic venous disorders. However, achievement of the recommended low pressure levels and its accurate determination in real-life conditions is still a challenge. Several thin and flexible force sensors, which can also function as pressure sensors, are commercially available, but their real-life sensing performance has not been evaluated. Moreover, no researchers have reported information on sensor performance during static and dynamic loading within the realistic test conditions required for compression therapy. This research investigated the sensing performance of five low-cost commercial pressure sensors on a human-leg-like test apparatus and presents quantitative results on the accuracy and drift behaviour of these sensors in both static and dynamic conditions required for compression therapy. Extensive experimental work on this new human-leg-like test setup demonstrated its utility for evaluating the sensors. Results showed variation in static and dynamic sensing performance, including accuracy and drift characteristics. Only one commercially available pressure sensor was found to reliably deliver accuracy of 95% and above for all three test pressure points of 30, 50 and 70 mmHg.

  5. Evaluation of Flexible Force Sensors for Pressure Monitoring in Treatment of Chronic Venous Disorders

    PubMed Central

    Parmar, Suresh; Khodasevych, Iryna; Troynikov, Olga

    2017-01-01

    The recent use of graduated compression therapy for treatment of chronic venous disorders such as leg ulcers and oedema has led to considerable research interest in flexible and low-cost force sensors. Properly applied low pressure during compression therapy can substantially improve the treatment of chronic venous disorders. However, achievement of the recommended low pressure levels and its accurate determination in real-life conditions is still a challenge. Several thin and flexible force sensors, which can also function as pressure sensors, are commercially available, but their real-life sensing performance has not been evaluated. Moreover, no researchers have reported information on sensor performance during static and dynamic loading within the realistic test conditions required for compression therapy. This research investigated the sensing performance of five low-cost commercial pressure sensors on a human-leg-like test apparatus and presents quantitative results on the accuracy and drift behaviour of these sensors in both static and dynamic conditions required for compression therapy. Extensive experimental work on this new human-leg-like test setup demonstrated its utility for evaluating the sensors. Results showed variation in static and dynamic sensing performance, including accuracy and drift characteristics. Only one commercially available pressure sensor was found to reliably deliver accuracy of 95% and above for all three test pressure points of 30, 50 and 70 mmHg. PMID:28825672

  6. Application of Virtual Navigation with Multimodality Image Fusion in Foramen Ovale Cannulation.

    PubMed

    Qiu, Xixiong; Liu, Weizong; Zhang, Mingdong; Lin, Hengzhou; Zhou, Shoujun; Lei, Yi; Xia, Jun

    2017-11-01

    Idiopathic trigeminal neuralgia (ITN) can be effectively treated with radiofrequency thermocoagulation. However, this procedure requires cannulation of the foramen ovale, and conventional cannulation methods are associated with high failure rates. Multimodality imaging can improve the accuracy of cannulation because each imaging method can compensate for the drawbacks of the other. We aim to determine the feasibility and accuracy of percutaneous foramen ovale cannulation under the guidance of virtual navigation with multimodality image fusion in a self-designed anatomical model of human cadaveric heads. Five cadaveric head specimens were investigated in this study. Spiral computed tomography (CT) scanning clearly displayed the foramen ovale in all five specimens (10 foramina), which could not be visualized using two-dimensional ultrasound alone. The ultrasound and spiral CT images were fused, and percutaneous cannulation of the foramen ovale was performed under virtual navigation. After this, spiral CT scanning was immediately repeated to confirm the accuracy of the cannulation. Postprocedural spiral CT confirmed that the ultrasound and CT images had been successfully fused for all 10 foramina, which were accurately and successfully cannulated. The success rates of both image fusion and cannulation were 100%. Virtual navigation with multimodality image fusion can substantially facilitate foramen ovale cannulation and is worthy of clinical application. © 2017 American Academy of Pain Medicine. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com

  7. TU-E-BRB-00: Deformable Image Registration: Is It Right for Your Clinic

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    2015-06-15

    Deformable image registration (DIR) is developing rapidly and is poised to substantially improve dose fusion accuracy for adaptive and retreatment planning, motion management, and PET fusion for enhanced contour delineation in treatment planning. However, DIR dose warping accuracy is difficult to quantify in general, and particularly difficult to quantify on a patient-specific basis. As clinical DIR options become more widely available, there is an increased need to understand the implications of incorporating DIR into clinical workflow. Several groups have assessed DIR accuracy in clinically relevant scenarios, but no comprehensive review material is yet available. This session will also discuss aspects of the official report of AAPM Task Group 132 on the Use of Image Registration and Data Fusion Algorithms and Techniques in Radiotherapy Treatment Planning, which provides recommendations for clinical use of DIR. We will summarize and compare various commercial DIR software options, outline successful clinical techniques, show specific examples with discussion of appropriate and inappropriate applications of DIR, discuss the clinical implications of DIR, provide an overview of current DIR error analysis research, review QA options and research phantom development, and present TG-132 recommendations. Learning Objectives: (1) compare and contrast commercial DIR software and QA options; (2) review the clinical DIR workflow for retreatment; (3) understand the uncertainties introduced by DIR; and (4) review the TG-132 proposed recommendations.

  8. Characterisation of energy response of Al2O3:C optically stimulated luminescent dosemeters (OSLDs) using cavity theory

    PubMed Central

    Scarboro, S. B.; Kry, S. F.

    2013-01-01

    Aluminium oxide (Al2O3:C) is a common material used in optically stimulated luminescent dosemeters (OSLDs). OSLDs have a known energy dependence, which can impact on the accuracy of dose measurements, especially for lower photon energies, where the dosemeter can overrespond by a factor of 3–4. The purpose of this work was to characterise the response of Al2O3:C using cavity theory and to evaluate the applicability of this approach for polyenergetic photon beams. The cavity theory energy response showed good agreement (within 2 %) with the corresponding measured values. A comparison with measured values reported in the literature for low-energy polyenergetic spectra showed more varied agreement (within 6 % on average). The discrepancy between these results is attributed to differences in the raw photon energy spectra used to calculate the energy response. Analysis of the impact of the photon energy spectra versus the mean photon energy showed improved accuracy if the energy response was determined using the entire photon spectrum rather than the mean photon energy. If not accounted for, the overresponse due to photon energy could introduce substantial inaccuracy in dose measurement using OSLDs, and the results of this study indicate that cavity theory may be used to determine the response with reasonable accuracy. PMID:22653437
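
    As a toy numerical illustration of the point above, the sketch below (Python, with an invented fluence spectrum and an invented response curve; the paper's actual cavity-theory response and beam spectra are not reproduced here) shows that averaging a detector's relative energy response over the whole photon spectrum generally differs from evaluating it at the mean energy alone, which is the effect the authors quantify.

        import numpy as np

        # Hypothetical low-energy photon fluence spectrum and over-response curve;
        # real work would use the actual beam spectrum and a cavity-theory response.
        E = np.linspace(0.03, 1.25, 200)                  # photon energy, MeV
        phi = np.exp(-((E - 0.08) / 0.05) ** 2) + 0.2     # made-up fluence spectrum
        resp = lambda e: 1.0 + 2.5 * np.exp(-e / 0.07)    # made-up relative response

        w = phi * E                                       # energy-fluence weighting
        spectrum_avg = np.sum(w * resp(E)) / np.sum(w)    # response over full spectrum
        mean_E = np.sum(w * E) / np.sum(w)
        print(spectrum_avg, resp(mean_E))                 # mean-energy shortcut differs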

  9. Discrete Indoor Three-Dimensional Localization System Based on Neural Networks Using Visible Light Communication

    PubMed Central

    Ley-Bosch, Carlos; Quintana-Suárez, Miguel A.

    2018-01-01

    Indoor localization estimation has become an attractive research topic due to growing interest in location-aware services. Many research works have proposed solving this problem by using wireless communication systems based on radiofrequency. Nevertheless, those approaches usually deliver an accuracy of up to two metres, since they are hindered by multipath propagation. On the other hand, in the last few years, the increasing use of light-emitting diodes in illumination systems has driven the emergence of Visible Light Communication technologies, in which data communication is performed by transmitting through the visible band of the electromagnetic spectrum. This brings a brand new approach to high accuracy indoor positioning, because this kind of network is not affected by electromagnetic interference and the received optical power is more stable than radio signals. Our research focuses on proposing a fingerprinting indoor positioning estimation system based on neural networks to predict the device position in a 3D environment. Neural networks are an effective classification and predictive method. The localization system is built using a dataset of received signal strength values coming from a grid of different points. From these values, the position in Cartesian coordinates (x,y,z) is estimated. The use of three neural networks is proposed in this work, where each network is responsible for estimating the position along one axis. Experimental results indicate that the proposed system leads to substantial improvements in accuracy over widely used traditional fingerprinting methods, yielding an accuracy above 99% and an average error distance of 0.4 mm. PMID:29601525
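
    A minimal sketch of the one-network-per-axis fingerprinting scheme described above, using scikit-learn and a synthetic received-signal-strength (RSS) grid; the LED anchor layout, channel model, and network sizes are illustrative assumptions, not the paper's configuration.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        pos = rng.uniform(0.0, 5.0, size=(600, 3))         # true (x, y, z) in metres
        leds = np.array([[0, 0, 3], [5, 0, 3],
                         [0, 5, 3], [5, 5, 3]], float)     # hypothetical LED anchors
        d = np.linalg.norm(pos[:, None, :] - leds[None, :, :], axis=2)
        rss = -20.0 * np.log10(d + 0.1) + rng.normal(0.0, 0.3, d.shape)  # toy channel

        # One regressor per coordinate axis, as the abstract proposes.
        nets = [MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=3000,
                             random_state=0).fit(rss[:500], pos[:500, k])
                for k in range(3)]
        pred = np.column_stack([net.predict(rss[500:]) for net in nets])
        print("mean 3-D error (m):", np.linalg.norm(pred - pos[500:], axis=1).mean())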

  10. Systematic reviews of diagnostic tests in endocrinology: an audit of methods, reporting, and performance.

    PubMed

    Spencer-Bonilla, Gabriela; Singh Ospina, Naykky; Rodriguez-Gutierrez, Rene; Brito, Juan P; Iñiguez-Ariza, Nicole; Tamhane, Shrikant; Erwin, Patricia J; Murad, M Hassan; Montori, Victor M

    2017-07-01

    Systematic reviews provide clinicians and policymakers with estimates of diagnostic test accuracy and their usefulness in clinical practice. We identified all available systematic reviews of diagnosis in endocrinology, summarized the diagnostic accuracy of the tests included, and assessed the credibility and clinical usefulness of the methods and reporting. We searched Ovid MEDLINE, EMBASE, and Cochrane CENTRAL from inception to December 2015 for systematic reviews and meta-analyses reporting accuracy measures of diagnostic tests in endocrinology. Experienced reviewers independently screened for eligible studies and collected data. We summarized the results, methods, and reporting of the reviews. We performed subgroup analyses to categorize diagnostic tests as most useful based on their accuracy. We identified 84 systematic reviews; half of the tests included were classified as helpful when positive, one-fourth as helpful when negative. Most authors adequately reported how studies were identified and selected and how their trustworthiness (risk of bias) was judged. Only one in three reviews, however, reported an overall judgment about trustworthiness, and one in five reported using adequate meta-analytic methods. One in four reported contacting authors for further information, and about half included only patients with diagnostic uncertainty. Up to half of the diagnostic endocrine tests in which the likelihood ratio was calculated or provided are likely to be helpful in practice when positive, as are one-quarter when negative. Most diagnostic systematic reviews in endocrinology lack methodological rigor and protection against bias, and offer limited credibility. Substantial efforts, therefore, seem necessary to improve the quality of diagnostic systematic reviews in endocrinology.

  11. Towards accurate modeling of noncovalent interactions for protein rigidity analysis.

    PubMed

    Fox, Naomi; Streinu, Ileana

    2013-01-01

    Protein rigidity analysis is an efficient computational method for extracting flexibility information from static, X-ray crystallography protein data. Atoms and bonds are modeled as a mechanical structure and analyzed with a fast graph-based algorithm, producing a decomposition of the flexible molecule into interconnected rigid clusters. The result depends critically on noncovalent atomic interactions, primarily on how hydrogen bonds and hydrophobic interactions are computed and modeled. Ongoing research points to the stringent need for benchmarking rigidity analysis software systems, towards the goal of increasing their accuracy and validating their results, both against each other and against biologically relevant (functional) parameters. We propose two new methods for modeling hydrogen bonds and hydrophobic interactions that more accurately reflect a mechanical model, without being computationally more intensive. We evaluate them using a novel scoring method, based on the B-cubed score from the information retrieval literature, which measures how well two cluster decompositions match. To evaluate the modeling accuracy of KINARI, our pebble-game rigidity analysis system, we use a benchmark data set of 20 proteins, each with multiple distinct conformations deposited in the Protein Data Bank. Cluster decompositions for them were previously determined with the RigidFinder method from Gerstein's lab and validated against experimental data. When KINARI's default tuning parameters are used, an improvement of the B-cubed score over a crude baseline is observed in 30% of this data. With our new modeling options, improvements were observed in over 70% of the proteins in this data set. We investigate the sensitivity of the cluster decomposition score with case studies on pyruvate phosphate dikinase and calmodulin. To substantially improve the accuracy of protein rigidity analysis systems, thorough benchmarking must be performed on all current systems and future extensions. We have measured the gain in performance by comparing different modeling methods for noncovalent interactions. We showed that new criteria for modeling hydrogen bonds and hydrophobic interactions can significantly improve the results. The two new methods proposed here have been implemented and made publicly available in the current version of KINARI (v1.3), together with the benchmarking tools, which can be downloaded from our software's website, http://kinari.cs.umass.edu.
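
    A minimal sketch of a B-cubed comparison between two rigid-cluster decompositions, each given as a mapping from atom id to cluster label; the paper's exact scoring variant may weight elements differently, so this is an illustration of the metric, not a reimplementation of their tool.

        def b_cubed(pred, gold):
            """Per-element precision/recall of cluster overlap, averaged (B-cubed)."""
            atoms = pred.keys() & gold.keys()
            inv_p, inv_g = {}, {}
            for a in atoms:                      # invert label maps to member sets
                inv_p.setdefault(pred[a], set()).add(a)
                inv_g.setdefault(gold[a], set()).add(a)
            prec = rec = 0.0
            for a in atoms:
                overlap = len(inv_p[pred[a]] & inv_g[gold[a]])
                prec += overlap / len(inv_p[pred[a]])
                rec += overlap / len(inv_g[gold[a]])
            n = len(atoms)
            p, r = prec / n, rec / n
            return p, r, 2 * p * r / (p + r)     # precision, recall, F-score

        # Two toy decompositions of five atoms:
        print(b_cubed({1: 'A', 2: 'A', 3: 'B', 4: 'B', 5: 'B'},
                      {1: 'x', 2: 'x', 3: 'x', 4: 'y', 5: 'y'}))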

  12. Towards accurate modeling of noncovalent interactions for protein rigidity analysis

    PubMed Central

    2013-01-01

    Background: Protein rigidity analysis is an efficient computational method for extracting flexibility information from static, X-ray crystallography protein data. Atoms and bonds are modeled as a mechanical structure and analyzed with a fast graph-based algorithm, producing a decomposition of the flexible molecule into interconnected rigid clusters. The result depends critically on noncovalent atomic interactions, primarily on how hydrogen bonds and hydrophobic interactions are computed and modeled. Ongoing research points to the stringent need for benchmarking rigidity analysis software systems, towards the goal of increasing their accuracy and validating their results, both against each other and against biologically relevant (functional) parameters. We propose two new methods for modeling hydrogen bonds and hydrophobic interactions that more accurately reflect a mechanical model, without being computationally more intensive. We evaluate them using a novel scoring method, based on the B-cubed score from the information retrieval literature, which measures how well two cluster decompositions match. Results: To evaluate the modeling accuracy of KINARI, our pebble-game rigidity analysis system, we use a benchmark data set of 20 proteins, each with multiple distinct conformations deposited in the Protein Data Bank. Cluster decompositions for them were previously determined with the RigidFinder method from Gerstein's lab and validated against experimental data. When KINARI's default tuning parameters are used, an improvement of the B-cubed score over a crude baseline is observed in 30% of this data. With our new modeling options, improvements were observed in over 70% of the proteins in this data set. We investigate the sensitivity of the cluster decomposition score with case studies on pyruvate phosphate dikinase and calmodulin. Conclusion: To substantially improve the accuracy of protein rigidity analysis systems, thorough benchmarking must be performed on all current systems and future extensions. We have measured the gain in performance by comparing different modeling methods for noncovalent interactions. We showed that new criteria for modeling hydrogen bonds and hydrophobic interactions can significantly improve the results. The two new methods proposed here have been implemented and made publicly available in the current version of KINARI (v1.3), together with the benchmarking tools, which can be downloaded from our software's website, http://kinari.cs.umass.edu. PMID:24564209

  13. How Do Different Ways of Measuring Individual Differences in Zero-Acquaintance Personality Judgment Accuracy Correlate With Each Other?

    PubMed

    Hall, Judith A; Back, Mitja D; Nestler, Steffen; Frauendorfer, Denise; Schmid Mast, Marianne; Ruben, Mollie A

    2018-04-01

    This research compares two different approaches that are commonly used to measure accuracy of personality judgment: the trait accuracy approach wherein participants discriminate among targets on a given trait, thus making intertarget comparisons, and the profile accuracy approach wherein participants discriminate between traits for a given target, thus making intratarget comparisons. We examined correlations between these methods as well as correlations among accuracies for judging specific traits. The present article documents relations among these approaches based on meta-analysis of five studies of zero-acquaintance impressions of the Big Five traits. Trait accuracies correlated only weakly with overall and normative profile accuracy. Substantial convergence between the trait and profile accuracy methods was only found when an aggregate of all five trait accuracies was correlated with distinctive profile accuracy. Importantly, however, correlations between the trait and profile accuracy approaches were reduced to negligibility when statistical overlap was corrected by removing the respective trait from the profile correlations. Moreover, correlations of the separate trait accuracies with each other were very weak. Different ways of measuring individual differences in personality judgment accuracy are not conceptually and empirically the same, but rather represent distinct abilities that rely on different judgment processes. © 2017 Wiley Periodicals, Inc.
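
    A toy numpy illustration of the two accuracy definitions contrasted above, with hypothetical ratings and criterion data; published analyses additionally separate distinctive from normative profile components, which this sketch omits.

        import numpy as np

        rng = np.random.default_rng(2)
        criteria = rng.normal(size=(30, 5))          # 30 targets x Big Five criteria
        ratings = criteria + rng.normal(scale=1.0, size=(10, 30, 5))  # 10 judges

        def corr(a, b):
            return np.corrcoef(a, b)[0, 1]

        j = 0  # one judge
        # Trait accuracy: for each trait, correlate ratings with criteria ACROSS targets.
        trait_acc = [corr(ratings[j, :, k], criteria[:, k]) for k in range(5)]
        # Profile accuracy: for each target, correlate ratings with criteria ACROSS traits.
        profile_acc = [corr(ratings[j, t, :], criteria[t, :]) for t in range(30)]
        print(np.round(trait_acc, 2), round(float(np.mean(profile_acc)), 2))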

  14. Perceptual experience and posttest improvements in perceptual accuracy and consistency.

    PubMed

    Wagman, Jeffrey B; McBride, Dawn M; Trefzger, Amanda J

    2008-08-01

    Two experiments investigated the relationship between perceptual experience (during practice) and posttest improvements in perceptual accuracy and consistency. Experiment 1 investigated the potential relationship between how often knowledge of results (KR) is provided during a practice session and posttest improvements in perceptual accuracy. Experiment 2 investigated the potential relationship between how often practice (PR) is provided during a practice session and posttest improvements in perceptual consistency. The results of both experiments are consistent with previous findings that perceptual accuracy improves only when practice includes KR and that perceptual consistency improves regardless of whether practice includes KR. In addition, the results showed that although there is a relationship between how often KR is provided during a practice session and posttest improvements in perceptual accuracy, there is no relationship between how often PR is provided during a practice session and posttest improvements in consistency.

  15. Orbit determination of highly elliptical Earth orbiters using improved Doppler data-processing modes

    NASA Technical Reports Server (NTRS)

    Estefan, J. A.

    1995-01-01

    A navigation error covariance analysis of four highly elliptical Earth orbits is described, with apogee heights ranging from 20,000 to 76,800 km and perigee heights ranging from 1,000 to 5,000 km. This analysis differs from earlier studies in that improved navigation data-processing modes were used to reduce the radio metric data. For this study, X-band (8.4-GHz) Doppler data were assumed to be acquired from two Deep Space Network radio antennas and reconstructed orbit errors propagated over a single day. Doppler measurements were formulated as total-count phase measurements and compared to the traditional formulation of differenced-count frequency measurements. In addition, an enhanced data-filtering strategy was used, which treated the principal ground system calibration errors affecting the data as filter parameters. Results suggest that a 40- to 60-percent accuracy improvement may be achievable over traditional data-processing modes in reconstructed orbit errors, with a substantial reduction in reconstructed velocity errors at perigee. Historically, this has been a regime in which stringent navigation requirements have been difficult to meet by conventional methods.

  16. Improving Hospital-Wide Early Resource Allocation through Machine Learning.

    PubMed

    Gartner, Daniel; Padman, Rema

    2015-01-01

    The objective of this paper is to evaluate the extent to which early determination of diagnosis-related groups (DRGs) can be used for better allocation of scarce hospital resources. When elective patients seek admission, the true DRG, currently determined only at discharge, is unknown. We approach the problem of early DRG determination in three stages: (1) test how much a Naïve Bayes classifier can improve classification accuracy as compared to a hospital's current approach; (2) develop a statistical program that makes admission and scheduling decisions based on the patients' clinical pathways and scarce hospital resources; and (3) feed the DRG as classified by the Naïve Bayes classifier and by the hospital's baseline approach into the model (which we evaluate in simulation). Our results reveal that the DRG grouper performs poorly in classifying the DRG correctly before admission, while the Naïve Bayes approach substantially improves the classification task. Connecting the classification method with the mathematical program also reveals that resource allocation decisions can be more effective and efficient with the hybrid approach.
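
    A minimal sketch of stage (1), assuming synthetic stand-ins for the pre-admission attributes and DRG labels; the paper's hospital data and feature set are not public, so the variables below are invented placeholders.

        import numpy as np
        from sklearn.naive_bayes import CategoricalNB

        rng = np.random.default_rng(3)
        # Hypothetical categorical features: e.g. referral code, age band, procedure.
        X = rng.integers(0, 6, size=(2000, 3))
        # Surrogate "true DRG" labels, loosely tied to the features plus noise.
        y = (2 * X[:, 0] + X[:, 2] + rng.integers(0, 2, 2000)) % 5

        clf = CategoricalNB().fit(X[:1600], y[:1600])
        print("held-out classification accuracy:", clf.score(X[1600:], y[1600:]))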

  17. Molecular classification of idiopathic pulmonary fibrosis: personalized medicine, genetics and biomarkers.

    PubMed

    Hambly, Nathan; Shimbori, Chiko; Kolb, Martin

    2015-10-01

    Idiopathic pulmonary fibrosis (IPF) is a chronic and progressive fibrotic lung disease associated with high morbidity and poor survival. Characterized by substantial disease heterogeneity, the diagnostic considerations, clinical course and treatment response in individual patients can be variable. In the past decade, with the advent of high-throughput proteomic and genomic technologies, our understanding of the pathogenesis of IPF has greatly improved and has led to the recognition of novel treatment targets and numerous putative biomarkers. Molecular biomarkers with mechanistic plausibility are highly desired in IPF, where they have the potential to accelerate drug development, facilitate early detection in susceptible individuals, improve prognostic accuracy and inform treatment recommendations. Although the search for candidate biomarkers remains in its infancy, attractive targets such as MUC5B and MPP7 have already been validated in large cohorts and have demonstrated their potential to improve clinical predictors beyond that of routine clinical practices. The discovery and implementation of future biomarkers will face many challenges, but with strong collaborative efforts among scientists, clinicians and the industry the ultimate goal of personalized medicine may be realized. © 2015 Asian Pacific Society of Respirology.

  18. Bringing influenza vaccines into the 21st century

    PubMed Central

    Settembre, Ethan C; Dormitzer, Philip R; Rappuoli, Rino

    2014-01-01

    The recent H7N9 influenza outbreak in China highlights the need for influenza vaccine production systems that are robust and can quickly generate substantial quantities of vaccines that target new strains for pandemic and seasonal immunization. Although the influenza vaccine system, a public-private partnership, has been effective in providing vaccines, there are areas for improvement. Technological advances such as mammalian cell culture production and synthetic vaccine seeds provide a means to increase the speed and accuracy of targeting new influenza strains with mass-produced vaccines by dispensing with the need for egg isolation, adaptation, and reassortment of vaccine viruses. New influenza potency assays that no longer require the time-consuming step of generating sheep antisera could further speed vaccine release. Adjuvants that increase the breadth of the elicited immune response and allow dose sparing provide an additional means to increase the number of available vaccine doses. Together these technologies can improve the influenza vaccination system in the near term. In the longer term, disruptive technologies, such as RNA-based flu vaccines and ‘universal’ flu vaccines, offer a promise of a dramatically improved influenza vaccine system. PMID:24378716

  19. Bringing influenza vaccines into the 21st century.

    PubMed

    Settembre, Ethan C; Dormitzer, Philip R; Rappuoli, Rino

    2014-01-01

    The recent H7N9 influenza outbreak in China highlights the need for influenza vaccine production systems that are robust and can quickly generate substantial quantities of vaccines that target new strains for pandemic and seasonal immunization. Although the influenza vaccine system, a public-private partnership, has been effective in providing vaccines, there are areas for improvement. Technological advances such as mammalian cell culture production and synthetic vaccine seeds provide a means to increase the speed and accuracy of targeting new influenza strains with mass-produced vaccines by dispensing with the need for egg isolation, adaptation, and reassortment of vaccine viruses. New influenza potency assays that no longer require the time-consuming step of generating sheep antisera could further speed vaccine release. Adjuvants that increase the breadth of the elicited immune response and allow dose sparing provide an additional means to increase the number of available vaccine doses. Together these technologies can improve the influenza vaccination system in the near term. In the longer term, disruptive technologies, such as RNA-based flu vaccines and 'universal' flu vaccines, offer a promise of a dramatically improved influenza vaccine system.

  20. Simplified curve fits for the thermodynamic properties of equilibrium air

    NASA Technical Reports Server (NTRS)

    Srinivasan, S.; Tannehill, J. C.; Weilmuenster, K. J.

    1986-01-01

    New improved curve fits for the thermodynamic properties of equilibrium air were developed. The curve fits are for p = p(e,rho), a = a(e,rho), T = T(e,rho), s = s(e,rho), T = T(p,rho), h = h(p,rho), rho = rho(p,s), e = e(p,s) and a = a(p,s). These curve fits can be readily incorporated into new or existing Computational Fluid Dynamics (CFD) codes if real-gas effects are desired. The curve fits were constructed using Grabau-type transition functions to model the thermodynamic surfaces in a piecewise manner. The accuracies and continuity of these curve fits are substantially improved over those of previous curve fits appearing in NASA CR-2470. These improvements were due to the incorporation of a small number of additional terms in the approximating polynomials and careful choices of the transition functions. The ranges of validity of the new curve fits are temperatures up to 25,000 K and densities from 10^-7 to 100 amagats (rho/rho_0).
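
    A schematic of the Grabau-type transition idea: two polynomial branches joined by a smooth exponential switch, so a piecewise fit stays continuous across a breakpoint. The coefficients and the switch parameters below are made up for illustration; the report defines the real ones per thermodynamic variable.

        import numpy as np

        def grabau_blend(z, p_lo, p_hi, z0=1.0, k=8.0):
            """Blend two polynomial branches with a smooth switch centred at z0."""
            w = 1.0 / (1.0 + np.exp(k * (z - z0)))   # ~1 below z0, ~0 above
            return w * np.polyval(p_lo, z) + (1.0 - w) * np.polyval(p_hi, z)

        z = np.linspace(0.0, 2.0, 9)
        print(grabau_blend(z, p_lo=[0.2, 1.0], p_hi=[0.5, 0.7]))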

  1. Robotics and the spine: a review of current and ongoing applications.

    PubMed

    Shweikeh, Faris; Amadio, Jordan P; Arnell, Monica; Barnard, Zachary R; Kim, Terrence T; Johnson, J Patrick; Drazin, Doniel

    2014-03-01

    Robotics in the operating room has shown great use and versatility in multiple surgical fields. Robot-assisted spine surgery has gained significant favor over its relatively short existence, due to its intuitive promise of higher surgical accuracy and better outcomes with fewer complications. Here, the authors analyze the existing literature on this growing technology in the era of minimally invasive spine surgery. In an attempt to provide the most recent, up-to-date review of the current literature on robotic spine surgery, a search of the existing literature was conducted to obtain all relevant studies on robotics as it relates to its application in spine surgery and other interventions. In all, 45 articles were included in the analysis. The authors discuss the current status of this technology and its potential in multiple arenas of spinal interventions, mainly spine surgery and spine biomechanics testing. There are numerous potential advantages and limitations to robotic spine surgery, as suggested in published case reports and in retrospective and prospective studies. Randomized controlled trials are few in number and show conflicting results regarding accuracy. The present limitations may be surmountable with future technological improvements, greater surgeon experience, reduced cost, improved operating room dynamics, and more training of surgical team members. Given the promise of robotics for improvements in spine surgery and spine biomechanics testing, more studies are needed to further explore the applicability of this technology in the spinal operating room. Due to the significant cost of the robotic equipment, studies are needed to substantiate that the increased equipment costs will result in significant benefits that will justify the expense.

  2. Improved Motor-Timing: Effects of Synchronized Metronome Training on Golf Shot Accuracy

    PubMed Central

    Sommer, Marius; Rönnqvist, Louise

    2009-01-01

    This study investigates the effect of synchronized metronome training (SMT) on motor timing and how this training might affect golf shot accuracy. Twenty-six experienced male golfers (mean age 27 years; mean golf handicap 12.6) participated in this study. Pre- and post-test investigations of golf shots made with three different clubs were conducted by use of a golf simulator. The golfers were randomized into two groups: a SMT group and a Control group. After the pre-test, the golfers in the SMT group completed a 4-week SMT program designed to improve their motor timing; the golfers in the Control group merely trained their golf swings during the same time period. No differences between the two groups were found from the pre-test outcomes, either for motor timing scores or for golf shot accuracy. However, the post-test results after the 4-week SMT showed evident motor timing improvements. Additionally, significant improvements in golf shot accuracy were found for the SMT group, with less variability in their performance. No such improvements were found for the golfers in the Control group. As with previous studies that used a SMT program, this study's results provide further evidence that motor timing can be improved by SMT and that such timing improvement also improves golf accuracy. Key points: This study investigates the effect of synchronized metronome training (SMT) on motor timing and how this training might affect golf shot accuracy. A randomized control group design was used. The 4-week SMT intervention showed significant improvements in motor timing and golf shot accuracy, and led to less variability. We conclude that this study's results provide further evidence that motor timing can be improved by SMT training and that such timing improvement also improves golf accuracy. PMID:24149608

  3. The potential for intelligent decision support systems to improve the quality and consistency of medication reviews.

    PubMed

    Bindoff, I; Stafford, A; Peterson, G; Kang, B H; Tenni, P

    2012-08-01

    Drug-related problems (DRPs) are of serious concern worldwide, particularly for the elderly who often take many medications simultaneously. Medication reviews have been demonstrated to improve medication usage, leading to reductions in DRPs and potential savings in healthcare costs. However, medication reviews are not always of a consistently high standard, and there is often room for improvement in the quality of their findings. Our aim was to produce computerized intelligent decision support software that can improve the consistency and quality of medication review reports, by helping to ensure that DRPs relevant to a patient are overlooked less frequently. A system that largely achieved this goal was previously published, but refinements have been made. This paper examines the results of both the earlier and newer systems. Two prototype multiple-classification ripple-down rules medication review systems were built, the second being a refinement of the first. Each of the systems was trained incrementally using a human medication review expert. The resultant knowledge bases were analysed and compared, showing factors such as accuracy, time taken to train, and potential errors avoided. The two systems performed well, achieving accuracies of approximately 80% and 90%, after being trained on only a small number of cases (126 and 244 cases, respectively). Through analysis of the available data, it was estimated that without the system intervening, the expert training the first prototype would have missed approximately 36% of potentially relevant DRPs, and the second 43%. However, the system appeared to prevent the majority of these potential expert errors by correctly identifying the DRPs for them, leaving only an estimated 8% error rate for the first expert and 4% for the second. These intelligent decision support systems have shown a clear potential to substantially improve the quality and consistency of medication reviews, which should in turn translate into improved medication usage if they were implemented into routine use. © 2011 Blackwell Publishing Ltd.
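
    A minimal sketch of the multiple-classification ripple-down-rules inference that such prototypes build on: every satisfied top-level rule contributes a conclusion unless a more specific exception rule fires instead. The drug-related-problem rules shown are invented examples, not the authors' knowledge base.

        from dataclasses import dataclass, field

        @dataclass
        class Rule:
            cond: object                       # predicate over a patient-case dict
            conclusion: str
            exceptions: list = field(default_factory=list)

        def conclusions(rules, case):
            """Collect conclusions of all firing rules, preferring exceptions."""
            out = []
            for r in rules:
                if r.cond(case):
                    deeper = conclusions(r.exceptions, case)
                    out.extend(deeper if deeper else [r.conclusion])
            return out

        kb = [Rule(lambda c: "NSAID" in c["drugs"], "GI bleeding risk",
                   exceptions=[Rule(lambda c: "PPI" in c["drugs"],
                                    "no action: gastroprotected")])]
        print(conclusions(kb, {"drugs": {"NSAID", "PPI"}}))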

  4. Local Analysis Approach for Short Wavelength Geopotential Variations

    NASA Astrophysics Data System (ADS)

    Bender, P. L.

    2009-12-01

    The value of global spherical harmonic analyses for determining 15 day to 30 day changes in the Earth's gravity field has been demonstrated extensively using data from the GRACE mission and previous missions. However, additional useful information appears to be obtainable from local analyses of the data. A number of such analyses have been carried out by various groups. In the energy approximation, the changes in the height of the satellite altitude geopotential can be determined from the post-fit changes in the satellite separation during individual one-revolution arcs of data from a GRACE-type pair of satellites in a given orbit. For a particular region, it is assumed that short wavelength spatial variations for the arcs crossing that region during a time T of interest would be used to determine corrections to the spherical harmonic results. The main issue in considering higher measurement accuracy in future missions is how much improvement in spatial resolution can be achieved. For this, the shortest wavelengths that can be determined are the most important. And, while the longer wavelength variations are affected by mass distribution changes over much of the globe, the shorter wavelength ones hopefully will be determined mainly by more local changes in the mass distribution. Future missions are expected to have much higher accuracy for measuring changes in the satellite separation than GRACE. However, how large an improvement in the derived results in hydrology will be achieved is still very much a matter of study, particularly because of the effects of uncertainty in the time variations in the atmospheric and oceanic mass distributions. To be specific, it will be assumed that improving the spatial resolution in continental regions away from the coastlines is the objective, and that the satellite altitude is in the range of roughly 290 to 360 km made possible for long missions by drag-free operation. The advantages of putting together the short wavelength results from different arcs crossing the region can be seen most easily for an orbit with moderate inclination, such as 50 to 65 deg., so that the crossing angle between south-to-north (S-N) and N-S passes is fairly large over most regions well away from the poles. In that case, after filtering to pass the shorter wavelengths, the results for a given time interval can be combined to give the short wavelength W-E variations in the geopotential efficiently. For continents with extensive meteorological measurements available, like Europe and North America, a very rough guess at the surface mass density variation uncertainties is about 3 kg/m^2. This is based on the apparent accuracy of carefully calibrated surface pressure measurements. If a substantial part of the resulting uncertainties in the geopotential height at satellite altitude are at wavelengths less than about 1,500 km, they will dominate the measurement uncertainty at short spatial wavelengths for a GRACE-type mission with laser interferometry. This would be the case, even if the uncertainty in the atmospheric and oceanic mass distribution at large distances has a fairly small effect. However, the geopotential accuracy would still be substantially better than for the results achievable with a microwave ranging system.

  5. Evaluating the accuracy of SHAPE-directed RNA secondary structure predictions

    PubMed Central

    Sükösd, Zsuzsanna; Swenson, M. Shel; Kjems, Jørgen; Heitsch, Christine E.

    2013-01-01

    Recent advances in RNA structure determination include using data from high-throughput probing experiments to improve thermodynamic prediction accuracy. We evaluate the extent and nature of improvements in data-directed predictions for a diverse set of 16S/18S ribosomal sequences using a stochastic model of experimental SHAPE data. The average accuracy for 1000 data-directed predictions always improves over the original minimum free energy (MFE) structure. However, the amount of improvement varies with the sequence, exhibiting a correlation with MFE accuracy. Further analysis of this correlation shows that accurate MFE base pairs are typically preserved in a data-directed prediction, whereas inaccurate ones are not. Thus, the positive predictive value of common base pairs is consistently higher than the directed prediction accuracy. Finally, we confirm sequence dependencies in the directability of thermodynamic predictions and investigate the potential for greater accuracy improvements in the worst performing test sequence. PMID:23325843
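
    The two accuracy measures discussed above can be made concrete with a small sketch over sets of predicted and reference base pairs (i, j); the pair lists here are toy examples, not ribosomal structures.

        def pair_accuracy(pred, ref):
            """Sensitivity and positive predictive value over base-pair sets."""
            pred, ref = set(pred), set(ref)
            tp = len(pred & ref)
            sensitivity = tp / len(ref)      # fraction of true pairs recovered
            ppv = tp / len(pred)             # fraction of predicted pairs that are true
            return sensitivity, ppv

        print(pair_accuracy(pred=[(1, 20), (2, 19), (5, 14)],
                            ref=[(1, 20), (2, 19), (3, 18)]))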

  6. Ocean Heat Content Reveals Secrets of Fish Migrations

    PubMed Central

    Luo, Jiangang; Ault, Jerald S.; Shay, Lynn K.; Hoolihan, John P.; Prince, Eric D.; Brown, Craig A.; Rooker, Jay R.

    2015-01-01

    For centuries, the mechanisms surrounding spatially complex animal migrations have intrigued scientists and the public. We present a new methodology using ocean heat content (OHC), a habitat metric that is normally a fundamental part of hurricane intensity forecasting, to estimate movements and migration of satellite-tagged marine fishes. Previous satellite-tagging research of fishes using archival depth, temperature and light data for geolocations have been too coarse to resolve detailed ocean habitat utilization. We combined tag data with OHC estimated from ocean circulation and transport models in an optimization framework that substantially improved geolocation accuracy over SST-based tracks. The OHC-based movement track provided the first quantitative evidence that many of the tagged highly migratory fishes displayed affinities for ocean fronts and eddies. The OHC method provides a new quantitative tool for studying dynamic use of ocean habitats, migration processes and responses to environmental changes by fishes, and further, improves ocean animal tracking and extends satellite-based animal tracking data for other potential physical, ecological, and fisheries applications. PMID:26484541

  7. Accuracy and Precision of Radioactivity Quantification in Nuclear Medicine Images

    PubMed Central

    Frey, Eric C.; Humm, John L.; Ljungberg, Michael

    2012-01-01

    The ability to reliably quantify activity in nuclear medicine has a number of increasingly important applications. Dosimetry for targeted therapy treatment planning or for approval of new imaging agents requires accurate estimation of the activity in organs, tumors, or voxels at several imaging time points. Another important application is the use of quantitative metrics derived from images, such as the standard uptake value commonly used in positron emission tomography (PET), to diagnose and follow treatment of tumors. These measures require quantification of organ or tumor activities in nuclear medicine images. However, there are a number of physical, patient, and technical factors that limit the quantitative reliability of nuclear medicine images. There have been a large number of improvements in instrumentation, including the development of hybrid single-photon emission computed tomography/computed tomography and PET/computed tomography systems, and reconstruction methods, including the use of statistical iterative reconstruction methods, which have substantially improved the ability to obtain reliable quantitative information from planar, single-photon emission computed tomography, and PET images. PMID:22475429

  8. Inexact hardware for modelling weather & climate

    NASA Astrophysics Data System (ADS)

    Düben, Peter D.; McNamara, Hugh; Palmer, Tim

    2014-05-01

    The use of stochastic processing hardware and low precision arithmetic in atmospheric models is investigated. Stochastic processors allow hardware-induced faults in calculations, sacrificing exact calculations in exchange for improvements in performance, potentially accuracy, and a reduction in power consumption. A similar trade-off is achieved using low precision arithmetic, with improvements in computation and communication speed and savings in storage and memory requirements. As high-performance computing becomes more massively parallel and power intensive, these two approaches may be important stepping stones in the pursuit of global cloud-resolving atmospheric modelling. The impact of both hardware-induced faults and low-precision arithmetic is tested in the dynamical core of a global atmosphere model. Our simulations show that neither approach to inexact calculation substantially affects the quality of the model simulations, provided it is restricted to act only on smaller scales. This suggests that inexact calculations at the small scale could reduce computation and power costs without adversely affecting the quality of the simulations.
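
    A crude software emulation of the low-precision idea: round float64 values to a reduced number of significand bits. The study itself ran an emulator inside the model's dynamical core; this sketch only shows the rounding step.

        import numpy as np

        def round_significand(x, bits):
            """Round values to `bits` significand bits (software low-precision)."""
            m, e = np.frexp(x)                 # x = m * 2**e with 0.5 <= |m| < 1
            scale = 2.0 ** bits
            return np.ldexp(np.round(m * scale) / scale, e)

        x = np.linspace(0.1, 1.0, 5)
        print(round_significand(x, 10))        # roughly half-precision significand
        print(round_significand(x, 23) - x)    # roughly single precision: tiny residuals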

  9. A large synthetic peptide and phosphopeptide reference library for mass spectrometry-based proteomics.

    PubMed

    Marx, Harald; Lemeer, Simone; Schliep, Jan Erik; Matheron, Lucrece; Mohammed, Shabaz; Cox, Jürgen; Mann, Matthias; Heck, Albert J R; Kuster, Bernhard

    2013-06-01

    We present a peptide library and data resource of >100,000 synthetic, unmodified peptides and their phosphorylated counterparts with known sequences and phosphorylation sites. Analysis of the library by mass spectrometry yielded a data set that we used to evaluate the merits of different search engines (Mascot and Andromeda) and fragmentation methods (beam-type collision-induced dissociation (HCD) and electron transfer dissociation (ETD)) for peptide identification. We also compared the sensitivities and accuracies of phosphorylation-site localization tools (Mascot Delta Score, PTM score and phosphoRS), and we characterized the chromatographic behavior of peptides in the library. We found that HCD identified more peptides and phosphopeptides than did ETD, that phosphopeptides generally eluted later from reversed-phase columns and were easier to identify than unmodified peptides and that current computational tools for proteomics can still be substantially improved. These peptides and spectra will facilitate the development, evaluation and improvement of experimental and computational proteomic strategies, such as separation techniques and the prediction of retention times and fragmentation patterns.

  10. Advanced Cell Classifier: User-Friendly Machine-Learning-Based Software for Discovering Phenotypes in High-Content Imaging Data.

    PubMed

    Piccinini, Filippo; Balassa, Tamas; Szkalisity, Abel; Molnar, Csaba; Paavolainen, Lassi; Kujala, Kaisa; Buzas, Krisztina; Sarazova, Marie; Pietiainen, Vilja; Kutay, Ulrike; Smith, Kevin; Horvath, Peter

    2017-06-28

    High-content, imaging-based screens now routinely generate data on a scale that precludes manual verification and interrogation. Software applying machine learning has become an essential tool to automate analysis, but these methods require annotated examples to learn from. Efficiently exploring large datasets to find relevant examples remains a challenging bottleneck. Here, we present Advanced Cell Classifier (ACC), a graphical software package for phenotypic analysis that addresses these difficulties. ACC applies machine-learning and image-analysis methods to high-content data generated by large-scale, cell-based experiments. It features methods to mine microscopic image data, discover new phenotypes, and improve recognition performance. We demonstrate that these features substantially expedite the training process, successfully uncover rare phenotypes, and improve the accuracy of the analysis. ACC is extensively documented, designed to be user-friendly for researchers without machine-learning expertise, and distributed as a free open-source tool at www.cellclassifier.org. Copyright © 2017 Elsevier Inc. All rights reserved.

  11. Dendritic network models: Improving isoscapes and quantifying influence of landscape and in-stream processes on strontium isotopes in rivers

    USGS Publications Warehouse

    Brennan, Sean R.; Torgersen, Christian E.; Hollenbeck, Jeff P.; Fernandez, Diego P.; Jensen, Carrie K; Schindler, Daniel E.

    2016-01-01

    A critical challenge for the Earth sciences is to trace the transport and flux of matter within and among aquatic, terrestrial, and atmospheric systems. Robust descriptions of isotopic patterns across space and time, called “isoscapes,” form the basis of a rapidly growing and wide-ranging body of research aimed at quantifying connectivity within and among Earth's systems. However, isoscapes of rivers have been limited by conventional Euclidean approaches in geostatistics and the lack of a quantitative framework to apportion the influence of processes driven by landscape features versus in-stream phenomena. Here we demonstrate how dendritic network models substantially improve the accuracy of isoscapes of strontium isotopes and partition the influence of hydrologic transport versus local geologic features on strontium isotope ratios in a large Alaska river. This work illustrates the analytical power of dendritic network models for the field of isotope biogeochemistry, particularly for provenance studies of modern and ancient animals.

  12. Methods of operation of apparatus measuring formation resistivity from within a cased well having one measurement and two compensation steps

    DOEpatents

    Vail, III, William B.

    1993-01-01

    Methods of operation of an apparatus having at least two pairs of voltage measurement electrodes vertically disposed in a cased well to measure the resistivity of adjacent geological formations from inside the cased well. During stationary measurements with the apparatus at a fixed vertical depth within the cased well, the invention herein discloses methods of operation which include a measurement step and subsequent first and second compensation steps, respectively resulting in improved accuracy of measurement. First- and second-order errors of measurement are identified, and the measurement step and two compensation steps provide methods to substantially eliminate their influence on the results. A multiple-frequency apparatus adapted to movement within the well is described which simultaneously provides the measurement and two compensation steps.

  13. Ultrahigh precision nonlinear reflectivity measurement system for saturable absorber mirrors with self-referenced fluence characterization.

    PubMed

    Orsila, Lasse; Härkönen, Antti; Hyyti, Janne; Guina, Mircea; Steinmeyer, Günter

    2014-08-01

    Measurement of nonlinear optical reflectivity of saturable absorber devices is discussed. A setup is described that enables absolute accuracy of reflectivity measurements better than 0.3%. A repeatability within 0.02% is shown for saturable absorbers with few-percent modulation depth. The setup incorporates an in situ knife-edge characterization of beam diameters, making absolute reflectivity estimations and determination of saturation fluences significantly more reliable. Additionally, several measures are discussed to substantially improve the reliability of the reflectivity measurements. At its core, the scheme exploits the limits of state-of-the-art digital lock-in technology but also greatly benefits from a fiber-based master-oscillator power-amplifier source, the use of an integrating sphere, and simultaneous comparison with a linear reflectivity standard.
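
    For context, a sketch of how such a measured R(F) curve is commonly reduced to saturation parameters, using one frequently quoted saturable-absorber model function (R_lin: unsaturated reflectivity; R_ns: fully saturated level; F_sat: saturation fluence). The data are synthetic and the paper does not prescribe this particular model, so treat it as an assumption.

        import numpy as np
        from scipy.optimize import curve_fit

        def model(F, R_lin, R_ns, F_sat):
            """Fluence-dependent reflectivity: -> R_lin at low F, -> R_ns at high F."""
            S = F / F_sat
            return R_ns * np.log(1.0 + (R_lin / R_ns) * (np.exp(S) - 1.0)) / S

        F = np.logspace(-1, 2, 25)                     # fluence grid (hypothetical units)
        R = model(F, 0.97, 0.995, 5.0) \
            + np.random.default_rng(4).normal(0, 2e-4, F.size)   # synthetic data

        popt, _ = curve_fit(model, F, R, p0=(0.96, 0.99, 1.0),
                            bounds=([0.9, 0.9, 0.01], [1.0, 1.0, 100.0]))
        print("R_lin=%.4f  R_ns=%.4f  F_sat=%.2f" % tuple(popt))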

  14. Construction of the radiation oncology teaching files system for charged particle radiotherapy.

    PubMed

    Masami, Mukai; Yutaka, Ando; Yasuo, Okuda; Naoto, Takahashi; Yoshihisa, Yoda; Hiroshi, Tsuji; Tadashi, Kamada

    2013-01-01

    Our hospital has provided charged particle therapy since 1996. New institutions for charged particle therapy are planned around the world. Our hospital accepts many visitors from those newly planned medical institutions and has many opportunities to provide them with training. Based upon our experiences, we have developed a radiation oncology teaching files system for charged particle therapy. We adopted Microsoft PowerPoint as the basic framework of our teaching files system. By using the export function of the viewer, any physician can create teaching files easily and effectively. Our teaching files system now holds 33 cases covering clinical and physics content. We expect that our teaching files system can substantially improve the safety and accuracy of charged particle therapy.

  15. Administrative review process for adjudicating initial disability claims. Final rule.

    PubMed

    2006-03-31

    The Social Security Administration is committed to providing the high quality of service the American people expect and deserve. In light of the significant growth in the number of disability claims and the increased complexity of those claims, the need to make substantial changes in our disability determination process has become urgent. We are publishing a final rule that amends our administrative review process for applications for benefits that are based on whether you are disabled under title II of the Social Security Act (the Act), or applications for supplemental security income (SSI) payments that are based on whether you are disabled or blind under title XVI of the Act. We expect that this final rule will improve the accuracy, consistency, and timeliness of decision-making throughout the disability determination process.

  16. Cell Nucleus-Targeting Zwitterionic Carbon Dots.

    PubMed

    Jung, Yun Kyung; Shin, Eeseul; Kim, Byeong-Su

    2015-12-22

    An innovative nucleus-targeting zwitterionic carbon dot (CD) vehicle has been developed for anticancer drug delivery and optical monitoring. The zwitterionic functional groups of the CDs, introduced by a simple one-step synthesis using β-alanine as a passivating and zwitterionic ligand, allow cytoplasmic uptake and subsequent nuclear translocation of the CDs. Moreover, multicolor fluorescence improves the accuracy of the CDs as an optical code. The CD-based drug delivery system, constructed by non-covalent grafting of doxorubicin, exhibits superior antitumor efficacy owing to enhanced nuclear delivery in vitro and tumor accumulation in vivo, resulting in highly effective tumor growth inhibition. Since the zwitterionic CDs are highly biocompatible and effectively translocated into the nucleus, they provide a compelling multifunctional nanoparticle for substantially enhanced nuclear uptake of drugs and optical monitoring of translocation.

  17. Cell Nucleus-Targeting Zwitterionic Carbon Dots

    PubMed Central

    Jung, Yun Kyung; Shin, Eeseul; Kim, Byeong-Su

    2015-01-01

    An innovative nucleus-targeting zwitterionic carbon dot (CD) vehicle has been developed for anticancer drug delivery and optical monitoring. The zwitterionic functional groups of the CDs, introduced by a simple one-step synthesis using β-alanine as a passivating and zwitterionic ligand, allow cytoplasmic uptake and subsequent nuclear translocation of the CDs. Moreover, multicolor fluorescence improves the accuracy of the CDs as an optical code. The CD-based drug delivery system, constructed by non-covalent grafting of doxorubicin, exhibits superior antitumor efficacy owing to enhanced nuclear delivery in vitro and tumor accumulation in vivo, resulting in highly effective tumor growth inhibition. Since the zwitterionic CDs are highly biocompatible and effectively translocated into the nucleus, they provide a compelling multifunctional nanoparticle for substantially enhanced nuclear uptake of drugs and optical monitoring of translocation. PMID:26689549

  18. A kinematic/kinetic hybrid airplane simulator model : draft.

    DOT National Transportation Integrated Search

    2008-01-01

    A kinematics-based flight model, for normal flight regimes, currently uses precise flight data to achieve a high level of aircraft realism. However, it was desired to further increase the model's accuracy, without a substantial increase in ...

  19. A kinematic/kinetic hybrid airplane simulator model.

    DOT National Transportation Integrated Search

    2008-01-01

    A kinematics-based flight model, for normal flight regimes, currently uses precise flight data to achieve a high level of aircraft realism. However, it was desired to further increase the model's accuracy, without a substantial increase in ...

  20. Advanced relativistic VLBI model for geodesy

    NASA Astrophysics Data System (ADS)

    Soffel, Michael; Kopeikin, Sergei; Han, Wen-Biao

    2017-07-01

    Our present relativistic part of the geodetic VLBI model for Earthbound antennas is a consensus model which is considered a standard for processing high-precision VLBI observations. It was created as a compromise between a variety of relativistic VLBI models proposed by different authors, as documented in the IERS Conventions 2010. The accuracy of the consensus model is in the picosecond range for the group delay, but this is not sufficient for current geodetic purposes. This paper provides a fully documented derivation of a new relativistic model with an accuracy substantially better than one picosecond, based upon a well-accepted formalism of relativistic celestial mechanics, astrometry and geodesy. Our new model fully confirms the consensus model at the picosecond level and in several respects goes to a great extent beyond it. More specifically, terms related to the acceleration of the geocenter are considered and kept in the model, the gravitational time-delay due to a massive body (planet, Sun, etc.) with arbitrary mass and spin-multipole moments is derived taking into account the motion of the body, and a new formalism for the time-delay problem of radio sources located at finite distance from VLBI stations is presented. Thus, the paper presents a substantially elaborated theoretical justification of the consensus model and its significant extension that allows researchers to make concrete estimates of the magnitude of residual terms of this model for any conceivable configuration of the source of light, massive bodies, and VLBI stations. The largest terms in the relativistic time delay which can affect the current VLBI observations are from the quadrupole and the angular momentum of the gravitating bodies that are known from the literature. These terms should be included in the new geodetic VLBI model for improving its consistency.
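
    For orientation, the leading gravitational (Shapiro-type) time-delay term shared by the consensus model and such extensions is usually quoted in a form like the following (a schematic LaTeX rendering of the textbook monopole term, not the new model's full expression):

        \Delta T_{\mathrm{grav}}
          = (1+\gamma)\,\frac{GM}{c^{3}}\,
            \ln\frac{|\mathbf{r}_{1}| + \mathbf{K}\cdot\mathbf{r}_{1}}
                    {|\mathbf{r}_{2}| + \mathbf{K}\cdot\mathbf{r}_{2}},
          \qquad \gamma = 1 \ \text{in general relativity,}

    where r_1 and r_2 are the vectors from the gravitating body to the two receivers and K is the unit vector toward the radio source. The work described above adds, among other things, corrections for the body's motion, spin and higher multipole moments to terms of this type.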

  1. Functional neurological symptom disorders in a pediatric emergency room: diagnostic accuracy, features, and outcome.

    PubMed

    de Gusmão, Claudio M; Guerriero, Réjean M; Bernson-Leung, Miya Elizabeth; Pier, Danielle; Ibeziako, Patricia I; Bujoreanu, Simona; Maski, Kiran P; Urion, David K; Waugh, Jeff L

    2014-08-01

    In children, functional neurological symptom disorders are frequently the basis for presentation for emergency care. Pediatric epidemiological and outcome data remain scarce. We aimed to assess the diagnostic accuracy of trainees' first impressions in our pediatric emergency room and to describe manner of presentation, demographic data, socioeconomic impact, and clinical outcomes, including parental satisfaction. (1) Over 1 year, psychiatry consultations for neurology patients with a functional neurological symptom disorder were retrospectively reviewed. (2) For 3 months, all children whose emergency room presentation suggested the diagnosis were prospectively collected. (3) Three to six months after prospective collection, families completed a structured telephone interview on outcome measures. Twenty-seven patients were retrospectively assessed; 31 patients were prospectively collected. Trainees accurately predicted the diagnosis in 93% (retrospective cohort) and 94% (prospective cohort) of cases. Mixed presentations were most common (usually sensory-motor changes, e.g., weakness and/or paresthesias). Associated stressors were mundane and ubiquitous, rarely severe. Families were substantially affected, reporting mean symptom duration of 7.4 (standard error of the mean ± 1.33) weeks, missing 22.4 (standard error of the mean ± 5.47) days of school and 8.3 (standard error of the mean ± 2.88) parental workdays (prospective cohort). At follow-up, 78% were symptom free. Parental dissatisfaction was rare, and was attributed to poor rapport and/or insufficient information conveyed. Trainees' clinical impression was accurate in predicting a later diagnosis of functional neurological symptom disorder. Extraordinary life stressors are not required to trigger the disorder in children. Although prognosis is favorable, families incur substantial economic burden and negative educational impact. Improving recognition and appropriately communicating the diagnosis may speed access to treatment and potentially reduce the disability and cost of this disorder. Copyright © 2014 Elsevier Inc. All rights reserved.

  2. Combining Passive Microwave Rain Rate Retrieval with Visible and Infrared Cloud Classification.

    NASA Astrophysics Data System (ADS)

    Miller, Shawn William

    The relation between cloud type and rain rate has been investigated here from different approaches. Previous studies and intercomparisons have indicated that no single passive microwave rain rate algorithm is an optimal choice for all types of precipitating systems. Motivated by the upcoming Tropical Rainfall Measuring Mission (TRMM), an algorithm which combines visible and infrared cloud classification with passive microwave rain rate estimation was developed and analyzed in a preliminary manner using data from the Tropical Ocean Global Atmosphere-Coupled Ocean Atmosphere Response Experiment (TOGA-COARE). Overall correlation with radar rain rate measurements across five case studies showed substantial improvement in the combined algorithm approach when compared to the use of any single microwave algorithm. An automated neural network cloud classifier for use over both land and ocean was independently developed and tested on Advanced Very High Resolution Radiometer (AVHRR) data. The global classifier achieved strict accuracy for 82% of the test samples, while a more localized version achieved strict accuracy for 89% of its own test set. These numbers provide hope for the eventual development of a global automated cloud classifier for use throughout the tropics and the temperate zones. The localized classifier was used in conjunction with gridded 15-minute averaged radar rain rates at 8km resolution produced from the current operational network of National Weather Service (NWS) radars, to investigate the relation between cloud type and rain rate over three regions of the continental United States and adjacent waters. The results indicate a substantially lower amount of available moisture in the Front Range of the Rocky Mountains than in the Midwest or in the eastern Gulf of Mexico.

  3. The robustness of false memory for emotional pictures.

    PubMed

    Bessette-Symons, Brandy A

    2018-02-01

    Emotional material is commonly reported to be more accurately recognised; however, there is substantial evidence of increased false alarm rates (FAR) for emotional material and several reports of stronger influences on response bias than accuracy. This pattern is more frequently reported for words than pictures. Research on the mechanisms underlying bias differences has mostly focused on word lists under short retention intervals. This article presents four series of experiments examining recognition memory for emotional pictures while varying arousal and the control over the content of the pictures at two retention intervals, and one study measuring the relatedness of the series picture sets. Under the shorter retention interval, emotion increased false alarms and reduced accuracy. Under the longer retention interval emotion increased hit rates and FAR, resulting in reduced accuracy and/or bias. At both retention intervals, the pattern of valence effects differed based on the arousal associated with the picture sets. Emotional pictures were found to be more related than neutral pictures in each set; however, the influence of relatedness alone does not provide an adequate explanation for all emotional differences. The results demonstrate substantial emotional differences in picture recognition that vary based on valence, arousal and retention interval.
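
    The accuracy and bias effects reported here are conventionally separated with signal detection measures. As a reminder of the standard, study-independent definitions, with HR the hit rate and FAR the false alarm rate,

        d' = \Phi^{-1}(\mathrm{HR}) - \Phi^{-1}(\mathrm{FAR}), \qquad c = -\tfrac{1}{2}\left[\Phi^{-1}(\mathrm{HR}) + \Phi^{-1}(\mathrm{FAR})\right],

    so an emotion-driven rise in FAR lowers sensitivity d' and shifts the criterion c toward liberal responding even when hit rates also increase.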

  4. A systematic review of the PTSD Checklist's diagnostic accuracy studies using QUADAS.

    PubMed

    McDonald, Scott D; Brown, Whitney L; Benesek, John P; Calhoun, Patrick S

    2015-09-01

    Despite the popularity of the PTSD Checklist (PCL) as a clinical screening test, there has been no comprehensive quality review of studies evaluating its diagnostic accuracy. A systematic quality assessment of 22 diagnostic accuracy studies of the English-language PCL using the Quality Assessment of Diagnostic Accuracy Studies (QUADAS) assessment tool was conducted to examine (a) the quality of diagnostic accuracy studies of the PCL, and (b) whether quality has improved since the 2003 STAndards for the Reporting of Diagnostic accuracy studies (STARD) initiative regarding reporting guidelines for diagnostic accuracy studies. Three raters independently applied the QUADAS tool to each study, and a consensus among the 4 authors is reported. Findings indicated that although studies generally met standards in several quality areas, there is still room for improvement. Areas for improvement include establishing representativeness, adequately describing clinical and demographic characteristics of the sample, and presenting better descriptions of important aspects of test and reference standard execution. Only 2 studies met each of the 14 quality criteria. In addition, study quality has not appreciably improved since the publication of the STARD Statement in 2003. Recommendations for the improvement of diagnostic accuracy studies of the PCL are discussed. (c) 2015 APA, all rights reserved.

  5. Research on material removal accuracy analysis and correction of removal function during ion beam figuring

    NASA Astrophysics Data System (ADS)

    Wu, Weibin; Dai, Yifan; Zhou, Lin; Xu, Mingjin

    2016-09-01

    Material removal accuracy has a direct impact on the machining precision and efficiency of ion beam figuring. By analyzing the factors suppressing the improvement of material removal accuracy, we conclude that correcting the removal function deviation and reducing the amount of material removed during each iterative process could help to improve material removal accuracy. The removal function correction principle can effectively compensate for the removal function deviation between the actual figuring and simulated processes, while experiments indicate that material removal accuracy decreases with long machining times, so removing only a small amount of material in each iterative process is suggested. However, this introduces more clamping and measuring steps, which also generate machining errors and suppress the improvement of material removal accuracy. On this account, a measurement-free iterative process method is put forward to improve material removal accuracy and figuring efficiency by using fewer measuring and clamping steps. Finally, an experiment on a φ100-mm Zerodur planar is performed, which shows that, in similar figuring time, three measurement-free iterative processes could improve the material removal accuracy and the surface error convergence rate by 62.5% and 17.6%, respectively, compared with a single iterative process.
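
    The iterative scheme described here rests on the standard convolution model of ion beam figuring, in which the removed material equals the beam removal function convolved with the dwell-time map. The Python sketch below is illustrative only (it is not the authors' correction algorithm, and the relaxation constant is an assumption); it shows the forward model and a simple fixed-point dwell-time refinement:

        import numpy as np
        from scipy.signal import fftconvolve

        def removal(dwell, brf):
            """Predicted removal map: beam removal function (brf) convolved with dwell time."""
            return fftconvolve(dwell, brf, mode="same")

        def refine_dwell(dwell, brf, target, relax=0.5, n_iter=10):
            """Illustrative fixed-point update driving predicted removal toward the target."""
            for _ in range(n_iter):
                residual = target - removal(dwell, brf)
                dwell = np.clip(dwell + relax * residual, 0.0, None)  # dwell time must stay non-negative
            return dwell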

  6. Classifying coastal resources by integrating optical and radar imagery and color infrared photography

    USGS Publications Warehouse

    Ramsey, Elijah W.; Nelson, Gene A.; Sapkota, Sijan

    1998-01-01

    A progressive classification of a marsh and forest system using Landsat Thematic Mapper (TM), color infrared (CIR) photography, and ERS-1 synthetic aperture radar (SAR) data improved classification accuracy when compared to classification using solely TM reflective band data. The classification resulted in a detailed identification of differences within a nearly monotypic black needlerush marsh. Accuracy percentages of these classes were surprisingly high given the complexities of classification. The detailed classification resulted in a more accurate portrayal of the marsh transgressive sequence than was obtainable with TM data alone. Each sensor's contribution to the improved classification was compared to that using only the six reflective TM bands. Individually, the green reflective CIR and SAR data identified broad categories of water, marsh, and forest. In combination with TM, the SAR data and the green CIR band improved overall accuracy by about 3% and 15%, respectively. The SAR data improved the TM classification accuracy mostly in the marsh classes. The green CIR data also improved the marsh classification accuracy and accuracies in some water classes. The final combination of all sensor data improved almost all class accuracies, by 2% to 70%, with an overall improvement of about 20% over TM data alone. Not only was the identification of vegetation types improved, but the spatial detail of the classification approached 10 m in some areas.

  7. Gated integrator with signal baseline subtraction

    DOEpatents

    Wang, X.

    1996-12-17

    An ultrafast, high precision gated integrator includes an opamp having differential inputs. A signal to be integrated is applied to one of the differential inputs through a first input network, and a signal indicative of the DC offset component of the signal to be integrated is applied to the other of the differential inputs through a second input network. A pair of electronic switches in the first and second input networks define an integrating period when they are closed. The first and second input networks are substantially symmetrically constructed of matched components so that error components introduced by the electronic switches appear symmetrically in both input circuits and, hence, are nullified by the common mode rejection of the integrating opamp. The signal indicative of the DC offset component is provided by a sample and hold circuit actuated as the integrating period begins. The symmetrical configuration of the integrating circuit improves accuracy and speed by balancing out common mode errors, by permitting the use of high speed switching elements and high speed opamps and by permitting the use of a small integrating time constant. The sample and hold circuit substantially eliminates the error caused by the input signal baseline offset during a single integrating window. 5 figs.

  8. Gated integrator with signal baseline subtraction

    DOEpatents

    Wang, Xucheng

    1996-01-01

    An ultrafast, high precision gated integrator includes an opamp having differential inputs. A signal to be integrated is applied to one of the differential inputs through a first input network, and a signal indicative of the DC offset component of the signal to be integrated is applied to the other of the differential inputs through a second input network. A pair of electronic switches in the first and second input networks define an integrating period when they are closed. The first and second input networks are substantially symmetrically constructed of matched components so that error components introduced by the electronic switches appear symmetrically in both input circuits and, hence, are nullified by the common mode rejection of the integrating opamp. The signal indicative of the DC offset component is provided by a sample and hold circuit actuated as the integrating period begins. The symmetrical configuration of the integrating circuit improves accuracy and speed by balancing out common mode errors, by permitting the use of high speed switching elements and high speed opamps and by permitting the use of a small integrating time constant. The sample and hold circuit substantially eliminates the error caused by the input signal baseline offset during a single integrating window.

  9. An independent brain-computer interface using covert non-spatial visual selective attention

    NASA Astrophysics Data System (ADS)

    Zhang, Dan; Maye, Alexander; Gao, Xiaorong; Hong, Bo; Engel, Andreas K.; Gao, Shangkai

    2010-02-01

    In this paper, a novel independent brain-computer interface (BCI) system based on covert non-spatial visual selective attention of two superimposed illusory surfaces is described. Perception of two superimposed surfaces was induced by two sets of dots with different colors rotating in opposite directions. The surfaces flickered at different frequencies and elicited distinguishable steady-state visual evoked potentials (SSVEPs) over parietal and occipital areas of the brain. By selectively attending to one of the two surfaces, the SSVEP amplitude at the corresponding frequency was enhanced. An online BCI system utilizing the attentional modulation of SSVEP was implemented and a 3-day online training program with healthy subjects was carried out. The study was conducted with Chinese subjects at Tsinghua University, and German subjects at University Medical Center Hamburg-Eppendorf (UKE) using identical stimulation software and equivalent technical setup. A general improvement of control accuracy with training was observed in 8 out of 18 subjects. An averaged online classification accuracy of 72.6 ± 16.1% was achieved on the last training day. The system renders SSVEP-based BCI paradigms possible for paralyzed patients with substantial head or ocular motor impairments by employing covert attention shifts instead of changing gaze direction.
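
    At its core, an SSVEP classifier of this kind compares spectral power at the two flicker frequencies. A minimal single-channel sketch in Python (illustrative; the study's actual classifier and preprocessing are not specified here):

        import numpy as np

        def classify_ssvep(eeg, fs, f1, f2):
            """Return 0 or 1 depending on which flicker frequency dominates the spectrum."""
            windowed = eeg * np.hanning(len(eeg))         # reduce spectral leakage
            power = np.abs(np.fft.rfft(windowed)) ** 2
            freqs = np.fft.rfftfreq(len(eeg), 1.0 / fs)
            p1 = power[np.argmin(np.abs(freqs - f1))]     # power at first flicker frequency
            p2 = power[np.argmin(np.abs(freqs - f2))]     # power at second flicker frequency
            return 0 if p1 > p2 else 1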

  10. Fully- and weakly-nonlinear biperiodic traveling waves in shallow water

    NASA Astrophysics Data System (ADS)

    Hirakawa, Tomoaki; Okamura, Makoto

    2018-04-01

    We directly calculate fully nonlinear traveling waves that are periodic in two independent horizontal directions (biperiodic) in shallow water. Based on the Riemann theta function, we also calculate exact periodic solutions to the Kadomtsev-Petviashvili (KP) equation, which can be obtained by assuming weakly-nonlinear, weakly-dispersive, weakly-two-dimensional waves. To clarify how the accuracy of the biperiodic KP solution is affected when some of the KP approximations are not satisfied, we compare the fully- and weakly-nonlinear periodic traveling waves of various wave amplitudes, wave depths, and interaction angles. As the interaction angle θ decreases, the wave frequency and the maximum wave height of the biperiodic KP solution both increase, and the central peak sharpens and grows beyond the height of the corresponding direct numerical solutions, indicating that the biperiodic KP solution cannot qualitatively model direct numerical solutions for θ ≲ 45°. To remedy the weak two-dimensionality approximation, we apply the correction of Yeh et al (2010 Eur. Phys. J. Spec. Top. 185 97-111) to the biperiodic KP solution, which substantially improves the solution accuracy and results in wave profiles that are indistinguishable from the direct numerical solutions in most cases.
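
    For reference, the weakly-nonlinear limit referred to here is the Kadomtsev-Petviashvili equation, which in a standard normalized form (conventions vary between authors) reads

        \left( u_t + 6\,u\,u_x + u_{xxx} \right)_x + 3\,u_{yy} = 0,

    whose biperiodic (genus-2) solutions are expressible through the Riemann theta function; the approximations of weak nonlinearity, weak dispersion, and weak two-dimensionality are exactly the ones probed in the comparison above.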

  11. Nuclear Medicine in Diagnosis of Prosthetic Valve Endocarditis: An Update

    PubMed Central

    Musso, Maria; Petrosillo, Nicola

    2015-01-01

    Over the past decades, cardiovascular disease management has been substantially improved by the increasing introduction of medical devices such as prosthetic valves. The yearly rate of infective endocarditis (IE) in patients with a prosthetic valve is approximately 3 cases per 1,000 patients. The fatality rate of prosthetic valve endocarditis (PVE) remains stable over the years, in part due to the aging of the population. The diagnostic value of echocardiography is operator-dependent and its sensitivity can decrease in the presence of intracardiac devices and valvular prostheses. The modified Duke criteria are considered the gold standard for diagnosing IE; their sensitivity is 80%, but in clinical practice their diagnostic accuracy in PVE is lower, proving inconclusive in nearly 30% of cases. In recent years, new imaging modalities have gained increasing attention because they make it possible to diagnose IE before structural alterations occur. Several studies have been conducted in order to assess the diagnostic accuracy of various nuclear medicine techniques in the diagnosis of PVE. We performed a review of the literature to assess the available evidence on the role of nuclear medicine techniques in the diagnosis of PVE. PMID:25695043

  12. Interrater reliability of the new criteria for behavioral variant frontotemporal dementia.

    PubMed

    Lamarre, Amanda K; Rascovsky, Katya; Bostrom, Alan; Toofanian, Parnian; Wilkins, Sarah; Sha, Sharon J; Perry, David C; Miller, Zachary A; Naasan, Georges; Laforce, Robert; Hagen, Jayne; Takada, Leonel T; Tartaglia, Maria Carmela; Kang, Gail; Galasko, Douglas; Salmon, David P; Farias, Sarah Tomaszewski; Kaur, Berneet; Olichney, John M; Quitania Park, Lovingly; Mendez, Mario F; Tsai, Po-Heng; Teng, Edmond; Dickerson, Bradford Clark; Domoto-Reilly, Kimiko; McGinnis, Scott; Miller, Bruce L; Kramer, Joel H

    2013-05-21

    To evaluate the interrater reliability of the new International Behavioural Variant FTD Criteria Consortium (FTDC) criteria for behavioral variant frontotemporal dementia (bvFTD). Twenty standardized clinical case modules were developed for patients with a range of neurodegenerative diagnoses, including bvFTD, primary progressive aphasia (nonfluent, semantic, and logopenic variant), Alzheimer disease, and Lewy body dementia. Eighteen blinded raters reviewed the modules and 1) rated the presence or absence of core diagnostic features for the FTDC criteria, and 2) provided an overall diagnostic rating. Interrater reliability was determined by κ statistics for multiple raters with categorical ratings. The mean κ value for diagnostic agreement was 0.81 for possible bvFTD and 0.82 for probable bvFTD ("almost perfect agreement"). Interrater reliability for 4 of the 6 core features had "substantial" agreement (behavioral disinhibition, perseverative/compulsive, sympathy/empathy, hyperorality; κ = 0.61-0.80), whereas 2 had "moderate" agreement (apathy/inertia, neuropsychological; κ = 0.41-0.6). Clinician years of experience did not significantly influence rater accuracy. The FTDC criteria show promise for improving the diagnostic accuracy and reliability of clinicians and researchers. As disease-altering therapies are developed, accurate differential diagnosis between bvFTD and other neurodegenerative diseases will become increasingly important.
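
    The agreement statistics quoted above are chance-corrected kappa values. The study used a multi-rater formulation; for illustration, a minimal two-rater Cohen's kappa computed from a square agreement matrix is sketched below in Python:

        import numpy as np

        def cohens_kappa(confusion):
            """Two-rater Cohen's kappa from a square agreement matrix."""
            m = np.asarray(confusion, dtype=float)
            n = m.sum()
            p_o = np.trace(m) / n                                  # observed agreement
            p_e = (m.sum(axis=0) * m.sum(axis=1)).sum() / n**2     # agreement expected by chance
            return (p_o - p_e) / (1.0 - p_e)

        # Example: two raters classify 20 cases, agreeing on 17.
        print(cohens_kappa([[9, 2], [1, 8]]))  # 0.70, i.e. "substantial" agreement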

  13. An independent brain-computer interface using covert non-spatial visual selective attention.

    PubMed

    Zhang, Dan; Maye, Alexander; Gao, Xiaorong; Hong, Bo; Engel, Andreas K; Gao, Shangkai

    2010-02-01

    In this paper, a novel independent brain-computer interface (BCI) system based on covert non-spatial visual selective attention of two superimposed illusory surfaces is described. Perception of two superimposed surfaces was induced by two sets of dots with different colors rotating in opposite directions. The surfaces flickered at different frequencies and elicited distinguishable steady-state visual evoked potentials (SSVEPs) over parietal and occipital areas of the brain. By selectively attending to one of the two surfaces, the SSVEP amplitude at the corresponding frequency was enhanced. An online BCI system utilizing the attentional modulation of SSVEP was implemented and a 3-day online training program with healthy subjects was carried out. The study was conducted with Chinese subjects at Tsinghua University, and German subjects at University Medical Center Hamburg-Eppendorf (UKE) using identical stimulation software and equivalent technical setup. A general improvement of control accuracy with training was observed in 8 out of 18 subjects. An averaged online classification accuracy of 72.6 +/- 16.1% was achieved on the last training day. The system renders SSVEP-based BCI paradigms possible for paralyzed patients with substantial head or ocular motor impairments by employing covert attention shifts instead of changing gaze direction.

  14. Improving pointing of Toruń 32-m radio telescope: effects of rail surface irregularities

    NASA Astrophysics Data System (ADS)

    Lew, Bartosz

    2018-03-01

    Over the last few years, a number of software and hardware improvements have been implemented on the 32-m Cassegrain radio telescope located near Toruń. The 19-bit angle encoders have been upgraded to 29-bit in the azimuth and elevation axes. The control system has been substantially improved in order to account for a number of previously neglected astrometric effects that are relevant for milli-degree pointing. In summer 2015, as a result of maintenance works, the orientation of the secondary mirror was slightly altered, which degraded the pointing precision to well below the nominal telescope capabilities. In preparation for observations at the highest available frequency of 30 GHz, we use the One Centimeter Receiver Array (OCRA) to take the most accurate pointing data ever collected with the telescope, and we analyze it in order to improve the pointing precision. We introduce a new generalized pointing model that, for the first time, accounts for the rail irregularities, and we show that the telescope can achieve root mean square pointing accuracy below 8″ in azimuth and below 12″ in elevation. Finally, we discuss the implemented pointing improvements in the light of effects that may influence their long-term stability.

  15. The availability of prior ECGs improves paramedic accuracy in recognizing ST-segment elevation myocardial infarction.

    PubMed

    O'Donnell, Daniel; Mancera, Mike; Savory, Eric; Christopher, Shawn; Schaffer, Jason; Roumpf, Steve

    2015-01-01

    Early and accurate identification of ST-elevation myocardial infarction (STEMI) by prehospital providers has been shown to significantly improve door-to-balloon times and improve patient outcomes. Previous studies have shown that paramedic accuracy in reading 12-lead ECGs can range from 86% to 94%. However, recent studies have demonstrated that accuracy diminishes for the more uncommon STEMI presentations (e.g., lateral). Unlike hospital physicians, paramedics rarely have the ability to review previous ECGs for comparison. Whether or not a prior ECG can improve paramedic accuracy is not known. We hypothesized that the availability of prior ECGs improves paramedic accuracy in ECG interpretation. A total of 130 paramedics were given a single clinical scenario and then randomly assigned 12 computerized prehospital ECGs, 6 with and 6 without an accompanying prior ECG. All ECGs were obtained from a local STEMI registry. For each ECG, paramedics were asked to determine whether or not there was a STEMI and to rate their confidence in their interpretation. To determine whether the prior ECGs improved accuracy, we used a mixed effects logistic regression model to calculate p-values between the control and intervention. The addition of a previous ECG improved the accuracy of identifying STEMIs from 75.5% to 80.5% (p=0.015). A previous ECG also increased paramedic confidence in their interpretation (p=0.011). The availability of previous ECGs improves paramedic accuracy and enhances their confidence in interpreting STEMIs. Further studies are needed to evaluate this impact in a clinical setting. Copyright © 2015 Elsevier Inc. All rights reserved.

  16. 3He Lung Morphometry Technique: Accuracy Analysis and Pulse Sequence Optimization

    PubMed Central

    Sukstanskii, A.L.; Conradi, M.S.; Yablonskiy, D.A.

    2010-01-01

    The 3He lung morphometry technique (Yablonskiy et al, JAP, 2009), based on MRI measurements of hyperpolarized gas diffusion in lung airspaces, provides unique information on the lung microstructure at the alveolar level. 3D tomographic images of standard morphological parameters (mean airspace chord length, lung parenchyma surface-to-volume ratio, and the number of alveoli per unit lung volume) can be created from a rather short (several seconds) MRI scan. These parameters are most commonly used to characterize lung morphometry but were not previously available from in vivo studies. The 3He lung morphometry technique is based on a previously proposed model of lung acinar airways, treated as cylindrical passages of external radius R covered by alveolar sleeves of depth h, and on a theory of gas diffusion in these airways. The initial works approximated the acinar airways as very long cylinders, all with the same R and h. The present work aims at analyzing the effects of realistic acinar airway structures, incorporating airway branching, physiological airway lengths, a physiological ratio of airway ducts and sacs, and distributions of R and h. By means of Monte Carlo computer simulations, we demonstrate that our technique allows rather accurate measurements of the geometrical and morphological parameters of acinar airways. In particular, the error in determining one of the most important physiological parameters of acinar airways, the surface-to-volume ratio, does not exceed several percent. Second, we analyze the effect of the susceptibility-induced inhomogeneous magnetic field on the parameter estimates and demonstrate that this effect is rather negligible at B0 ≤ 3T and becomes substantial only at higher B0. Third, we theoretically derive an optimal choice of MR pulse sequence parameters, which should be used to acquire a series of diffusion-attenuated MR signals, allowing a substantial decrease in the acquisition time and improvement in the accuracy of the results. It is demonstrated that the optimal choice comprises three non-equidistant b-values: b1 = 0, b2 ~ 2 s/cm2, b3 ~ 8 s/cm2. PMID:20937564
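
    As context for the optimal b-value result, each signal in such a diffusion-attenuated series is conventionally written as

        S(b) = S(0)\,\exp\!\left[-b\,D(b)\right],

    where the b-dependence of the apparent diffusion coefficient D encodes the anisotropic airway geometry (longitudinal and transverse diffusivities of the cylindrical airways, and hence R and h). Sampling a weakly attenuated point (b2 ~ 2 s/cm2) and a strongly attenuated point (b3 ~ 8 s/cm2) in addition to b1 = 0 conditions the fit of these parameters; the specific form of D(b) is given in the cited model rather than reproduced here.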

  17. Accuracy of the Atherosclerotic Cardiovascular Risk Equation in a Large Contemporary, Multiethnic Real-World Population

    PubMed Central

    Rana, Jamal S.; Tabada, Grace H.; Solomon, Matthew D.; Lo, Joan C.; Jaffe, Marc G.; Sung, Sue Hee; Ballantyne, Christie M.; Go, Alan S.

    2016-01-01

    Background: The accuracy of the 2013 American College of Cardiology/American Heart Association (ACC/AHA) risk equation for atherosclerotic cardiovascular disease (ASCVD) events in contemporary and ethnically diverse populations is not well understood. Objectives: We sought to evaluate the accuracy of the 2013 ACC/AHA risk equation within a large, multiethnic population in clinical care. Methods: The target population for consideration of cholesterol-lowering therapy in a large, integrated health care delivery system population was identified in 2008 and followed through 2013. The main analyses excluded those with known ASCVD, diabetes mellitus, low-density lipoprotein cholesterol levels <70 or ≥190 mg/dl, prior statin use, or incomplete 5-year follow-up. Patient characteristics were obtained from electronic medical records, and ASCVD events were ascertained using validated algorithms for hospitalization databases and death certificates. We compared predicted versus observed 5-year ASCVD risk, overall and by sex and race/ethnicity. We additionally examined predicted versus observed risk in patients with diabetes mellitus. Results: Among 307,591 eligible adults without diabetes between 40 and 75 years of age, 22,283 were black, 52,917 Asian/Pacific Islander, and 18,745 Hispanic. We observed 2,061 ASCVD events during 1,515,142 person-years. In each 5-year predicted ASCVD risk category, observed 5-year ASCVD risk was substantially lower: 0.20% for predicted risk <2.50%; 0.65% for predicted risk 2.50% to 3.74%; 0.90% for predicted risk 3.75% to 4.99%; and 1.85% for predicted risk ≥5.00% (C-statistic: 0.74). Similar ASCVD risk overestimation and poor calibration with moderate discrimination (C-statistic: 0.68 to 0.74) were observed in sex, racial/ethnic, and socioeconomic status subgroups, and in sensitivity analyses among patients receiving statins for primary prevention. Calibration among 4,242 eligible adults with diabetes was improved, but discrimination was worse (C-statistic: 0.64). Conclusions: In a large, contemporary “real-world” population, the ACC/AHA Pooled Cohort risk equation substantially overestimated actual 5-year risk in adults without diabetes, overall and across sociodemographic subgroups. PMID:27151343

  18. AIRS Version 6 Products and Data Services at NASA GES DISC

    NASA Astrophysics Data System (ADS)

    Ding, F.; Savtchenko, A. K.; Hearty, T. J.; Theobald, M. L.; Vollmer, B.; Esfandiari, E.

    2013-12-01

    The NASA Goddard Earth Sciences Data and Information Services Center (GES DISC) is the home of processing, archiving, and distribution services for data from the Atmospheric Infrared Sounder (AIRS) mission. The AIRS mission is entering its 11th year of global observations of the atmospheric state, including temperature and humidity profiles, outgoing longwave radiation, cloud properties, and trace gases. The GES DISC, in collaboration with the AIRS Project, released data from the Version 6 algorithm in early 2013. The new algorithm represents a significant improvement over previous versions in terms of greater stability, yield, and quality of products. Among the most substantial advances are: improved soundings of Tropospheric and Sea Surface Temperatures; larger improvements with increasing cloud cover; improved retrievals of surface spectral emissivity; near-complete removal of spurious temperature bias trends seen in earlier versions; substantially improved retrieval yield (i.e., number of soundings accepted for output) for climate studies; AIRS-Only retrievals with comparable accuracy to AIRS+AMSU (Advanced Microwave Sounding Unit) retrievals; and more realistic hemispheric seasonal variability and global distribution of carbon monoxide. The GES DISC is working to bring the distribution services up-to-date with these new developments. Our focus is on popular services, like variable subsetting and quality screening, which are impacted by the new elements in Version 6. Other developments in visualization services, such as Giovanni, Near-Real Time imagery, and a granule-map viewer, are progressing along with the introduction of the new data; each service presents its own challenge. This presentation will demonstrate the most significant improvements in Version 6 AIRS products, such as newly added variables (higher resolution outgoing longwave radiation, new cloud property products, etc.), the new quality control schema, and improved retrieval yields. We will also demonstrate the various distribution and visualization services for AIRS data products. The cloud properties, model physics, and water and energy cycles research communities are invited to take advantage of the improvements in Version 6 AIRS products and the various services at GES DISC which provide them.

  19. A method for accounting for maintenance costs in flux balance analysis improves the prediction of plant cell metabolic phenotypes under stress conditions.

    PubMed

    Cheung, C Y Maurice; Williams, Thomas C R; Poolman, Mark G; Fell, David A; Ratcliffe, R George; Sweetlove, Lee J

    2013-09-01

    Flux balance models of metabolism generally utilize synthesis of biomass as the main determinant of intracellular fluxes. However, the biomass constraint alone is not sufficient to predict realistic fluxes in central heterotrophic metabolism of plant cells because of the major demand on the energy budget due to transport costs and cell maintenance. This major limitation can be addressed by incorporating transport steps into the metabolic model and by implementing a procedure that uses Pareto optimality analysis to explore the trade-off between ATP and NADPH production for maintenance. This leads to a method for predicting cell maintenance costs on the basis of the measured flux ratio between the oxidative steps of the oxidative pentose phosphate pathway and glycolysis. We show that accounting for transport and maintenance costs substantially improves the accuracy of fluxes predicted from a flux balance model of heterotrophic Arabidopsis cells in culture, irrespective of the objective function used in the analysis. Moreover, when the new method was applied to cells under control, elevated temperature and hyper-osmotic conditions, only elevated temperature led to a substantial increase in cell maintenance costs. It is concluded that the hyper-osmotic conditions tested did not impose a metabolic stress, in as much as the metabolic network is not forced to devote more resources to cell maintenance. © 2013 The Authors The Plant Journal © 2013 John Wiley & Sons Ltd.
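
    Flux balance analysis of the kind described reduces to a linear program: maximize an objective flux subject to steady-state mass balance and flux bounds. A toy Python sketch follows (a hypothetical three-reaction network, not the Arabidopsis model of the paper; a maintenance cost would appear as a fixed lower bound on an ATP-hydrolysis flux):

        import numpy as np
        from scipy.optimize import linprog

        # Toy network: v1 = uptake of A, v2 = A -> B, v3 = B -> biomass.
        S = np.array([[1.0, -1.0,  0.0],    # mass balance of metabolite A
                      [0.0,  1.0, -1.0]])   # mass balance of metabolite B
        bounds = [(0, 10), (0, 10), (0, 10)]   # flux bounds; a maintenance flux would get a fixed lower bound
        c = np.array([0.0, 0.0, -1.0])         # maximize v3 (linprog minimizes, so negate)
        res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
        print(res.x)  # steady state forces v1 = v2 = v3; here all reach the bound of 10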

  20. Investigating the impact of audio instruction and audio-visual biofeedback for lung cancer radiation therapy

    NASA Astrophysics Data System (ADS)

    George, Rohini

    Lung cancer accounts for 13% of all cancers in the United States and is the leading cause of cancer deaths among both men and women. The five-year survival for lung cancer patients is approximately 15% (ACS Facts & Figures). Respiratory motion decreases the accuracy of thoracic radiotherapy during imaging and delivery. To account for respiration, margins are generally added during radiation treatment planning, which may cause substantial dose delivery to normal tissues and increase normal tissue toxicity. To alleviate the above-mentioned effects of respiratory motion, several motion management techniques are available which can reduce the doses to normal tissues, thereby reducing treatment toxicity and allowing dose escalation to the tumor. This may increase the survival probability of patients who have lung cancer and are receiving radiation therapy. However, the accuracy of these motion management techniques is limited by respiratory irregularity. The rationale of this thesis was to study the improvement in regularity of respiratory motion achieved by breathing coaching for lung cancer patients using audio instructions and audio-visual biofeedback. A total of 331 patient respiratory motion traces, each four minutes in length, were collected from 24 lung cancer patients enrolled in an IRB-approved breathing-training protocol. It was determined that audio-visual biofeedback significantly improved the regularity of respiratory motion compared to free breathing and audio instruction, thus improving the accuracy of respiratory gated radiotherapy. It was also observed that duty cycles below 30% showed insignificant reduction in residual motion, while above 50% there was a sharp increase in residual motion. The reproducibility of exhale-based gating was higher than that of inhale-based gating. Modeling the respiratory cycles, it was found that the cosine and cosine^4 models had the best correlation with individual respiratory cycles. The overall respiratory motion probability distribution function could be approximated by a normal distribution function. A statistical analysis was also performed to investigate whether a patient's physical, tumor, or general characteristics played a role in identifying whether he/she responded positively to the coaching type, signified by a reduction in the variability of respiratory motion. The analysis demonstrated that, although some characteristics like disease type and dose per fraction were significant with respect to time-independent analysis, there were no significant time trends observed for the inter-session or intra-session analysis. Based on patient feedback with the existing audio-visual biofeedback system used for the study and research performed on other feedback systems, an improved audio-visual biofeedback system was designed. It is hoped that widespread clinical implementation of audio-visual biofeedback for radiotherapy will improve the accuracy of lung cancer radiotherapy.
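
    The cosine-family fits mentioned above correspond to a parameterization widely used in the radiotherapy literature (Lujan et al.), in which the breathing trace is modeled as

        z(t) = z_0 - b\,\cos^{p}\!\left(\frac{\pi t}{\tau} - \phi\right),

    where z_0 is the exhale position, b the amplitude, τ the period, and φ a phase; p = 1 gives the cosine model and p = 4 the cosine^4 model compared in the thesis (even powers p = 2n are the original Lujan et al. form).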

  1. An efficient and accurate 3D displacements tracking strategy for digital volume correlation

    NASA Astrophysics Data System (ADS)

    Pan, Bing; Wang, Bo; Wu, Dafang; Lubineau, Gilles

    2014-07-01

    Owing to its inherent computational complexity, practical implementation of digital volume correlation (DVC) for internal displacement and strain mapping faces important challenges in improving its computational efficiency. In this work, an efficient and accurate 3D displacement tracking strategy is proposed for fast DVC calculation. The efficiency advantage is achieved by using three improvements. First, to eliminate the need of updating Hessian matrix in each iteration, an efficient 3D inverse compositional Gauss-Newton (3D IC-GN) algorithm is introduced to replace existing forward additive algorithms for accurate sub-voxel displacement registration. Second, to ensure the 3D IC-GN algorithm that converges accurately and rapidly and avoid time-consuming integer-voxel displacement searching, a generalized reliability-guided displacement tracking strategy is designed to transfer accurate and complete initial guess of deformation for each calculation point from its computed neighbors. Third, to avoid the repeated computation of sub-voxel intensity interpolation coefficients, an interpolation coefficient lookup table is established for tricubic interpolation. The computational complexity of the proposed fast DVC and the existing typical DVC algorithms are first analyzed quantitatively according to necessary arithmetic operations. Then, numerical tests are performed to verify the performance of the fast DVC algorithm in terms of measurement accuracy and computational efficiency. The experimental results indicate that, compared with the existing DVC algorithm, the presented fast DVC algorithm produces similar precision and slightly higher accuracy at a substantially reduced computational cost.

  2. Paperless anesthesia: uses and abuses of these data.

    PubMed

    Anderson, Brian J; Merry, Alan F

    2015-12-01

    Demonstrably accurate records facilitate clinical decision making, improve patient safety, provide better defense against frivolous lawsuits, and enable better medical policy decisions. Anesthesia Information Management Systems (AIMS) have the potential to improve on the accuracy and reliability of handwritten records. Interfaces with electronic recording systems within the hospital or wider community allow correlation of anesthesia relevant data with biochemistry laboratory results, billing sections, radiological units, pharmacy, earlier patient records, and other systems. Electronic storage of large and accurate datasets has lent itself to quality assurance, enhancement of patient safety, research, cost containment, scheduling, anesthesia training initiatives, and has even stimulated organizational change. The time for record making may be increased by AIMS, but in some cases has been reduced. The question of impact on vigilance is not entirely settled, but substantial negative effects seem to be unlikely. The usefulness of these large databases depends on the accuracy of data and they may be incorrect or incomplete. Consequent biases are threats to the validity of research results. Data mining of biomedical databases makes it easier for individuals with political, social, or economic agendas to generate misleading research findings for the purpose of manipulating public opinion and swaying policymakers. There remains a fear that accessibility of data may have undesirable regulatory or legal consequences. Increasing regulation of treatment options during the perioperative period through regulated policies could reduce autonomy for clinicians. These fears are as yet unsubstantiated. © 2015 John Wiley & Sons Ltd.

  3. A nearest-neighbour discretisation of the regularized stokeslet boundary integral equation

    NASA Astrophysics Data System (ADS)

    Smith, David J.

    2018-04-01

    The method of regularized stokeslets is extensively used in biological fluid dynamics due to its conceptual simplicity and meshlessness. This simplicity carries a degree of cost in computational expense and accuracy because the number of degrees of freedom used to discretise the unknown surface traction is generally significantly higher than that required by boundary element methods. We describe a meshless method based on nearest-neighbour interpolation that significantly reduces the number of degrees of freedom required to discretise the unknown traction, increasing the range of problems that can be practically solved, without excessively complicating the task of the modeller. The nearest-neighbour technique is tested against the classical problem of rigid body motion of a sphere immersed in very viscous fluid, then applied to the more complex biophysical problem of calculating the rotational diffusion timescales of a macromolecular structure modelled by three closely-spaced non-slender rods. A heuristic for finding the required density of force and quadrature points by numerical refinement is suggested. Matlab/GNU Octave code for the key steps of the algorithm is provided, which predominantly use basic linear algebra operations, with a full implementation being provided on github. Compared with the standard Nyström discretisation, more accurate and substantially more efficient results can be obtained by de-refining the force discretisation relative to the quadrature discretisation: a cost reduction of over 10 times with improved accuracy is observed. This improvement comes at minimal additional technical complexity. Future avenues to develop the algorithm are then discussed.
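
    For readers unfamiliar with the method, the velocity field in the method of regularized stokeslets is a superposition of regularized point forces,

        \mathbf{u}(\mathbf{x}) = \frac{1}{8\pi\mu} \sum_{k=1}^{N} S^{\varepsilon}(\mathbf{x}, \mathbf{x}_k)\, \mathbf{f}_k,

    where S^ε is the regularized stokeslet kernel with blob parameter ε and μ is the viscosity; the nearest-neighbour discretisation decouples the N force degrees of freedom from a finer quadrature grid, which is the source of the cost reduction reported above.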

  4. A prior feature SVM – MRF based method for mouse brain segmentation

    PubMed Central

    Wu, Teresa; Bae, Min Hyeok; Zhang, Min; Pan, Rong; Badea, Alexandra

    2012-01-01

    We introduce an automated method, called prior feature Support Vector Machine-Markov Random Field (pSVMRF), to segment three-dimensional mouse brain Magnetic Resonance Microscopy (MRM) images. Our earlier work, extended MRF (eMRF), integrated Support Vector Machine (SVM) and Markov Random Field (MRF) approaches, leading to improved segmentation accuracy; however, the computation of eMRF is very expensive, which may limit its performance on segmentation and robustness. In this study pSVMRF reduces training and testing time for SVM, while boosting segmentation performance. Unlike the eMRF approach, where MR intensity information and location priors are linearly combined, pSVMRF combines this information in a nonlinear fashion, and enhances the discriminative ability of the algorithm. We validate the proposed method using MR imaging of unstained and actively stained mouse brain specimens, and compare segmentation accuracy with two existing methods: eMRF and MRF. C57BL/6 mice are used for training and testing, using cross validation. For formalin-fixed C57BL/6 specimens, pSVMRF outperforms both eMRF and MRF. The segmentation accuracy for C57BL/6 brains, stained or not, was similar for larger structures like the hippocampus and caudate putamen (~87%), but increased substantially for smaller regions like the substantia nigra (from 78.36% to 91.55%) and anterior commissure (from ~50% to ~80%). To test segmentation robustness against increased anatomical variability we add two strains, BXD29 and a transgenic mouse model of Alzheimer's disease. Segmentation accuracy for the new strains is 80% for the hippocampus and caudate putamen, indicating that pSVMRF is a promising approach for phenotyping mouse models of human brain disorders. PMID:21988893

  5. A prior feature SVM-MRF based method for mouse brain segmentation.

    PubMed

    Wu, Teresa; Bae, Min Hyeok; Zhang, Min; Pan, Rong; Badea, Alexandra

    2012-02-01

    We introduce an automated method, called prior feature Support Vector Machine-Markov Random Field (pSVMRF), to segment three-dimensional mouse brain Magnetic Resonance Microscopy (MRM) images. Our earlier work, extended MRF (eMRF), integrated Support Vector Machine (SVM) and Markov Random Field (MRF) approaches, leading to improved segmentation accuracy; however, the computation of eMRF is very expensive, which may limit its performance on segmentation and robustness. In this study pSVMRF reduces training and testing time for SVM, while boosting segmentation performance. Unlike the eMRF approach, where MR intensity information and location priors are linearly combined, pSVMRF combines this information in a nonlinear fashion, and enhances the discriminative ability of the algorithm. We validate the proposed method using MR imaging of unstained and actively stained mouse brain specimens, and compare segmentation accuracy with two existing methods: eMRF and MRF. C57BL/6 mice are used for training and testing, using cross validation. For formalin-fixed C57BL/6 specimens, pSVMRF outperforms both eMRF and MRF. The segmentation accuracy for C57BL/6 brains, stained or not, was similar for larger structures like the hippocampus and caudate putamen (~87%), but increased substantially for smaller regions like the substantia nigra (from 78.36% to 91.55%) and anterior commissure (from ~50% to ~80%). To test segmentation robustness against increased anatomical variability we add two strains, BXD29 and a transgenic mouse model of Alzheimer's disease. Segmentation accuracy for the new strains is 80% for the hippocampus and caudate putamen, indicating that pSVMRF is a promising approach for phenotyping mouse models of human brain disorders. Copyright © 2011 Elsevier Inc. All rights reserved.

  6. Deep-learning derived features for lung nodule classification with limited datasets

    NASA Astrophysics Data System (ADS)

    Thammasorn, P.; Wu, W.; Pierce, L. A.; Pipavath, S. N.; Lampe, P. D.; Houghton, A. M.; Haynor, D. R.; Chaovalitwongse, W. A.; Kinahan, P. E.

    2018-02-01

    Only a few percent of the indeterminate nodules found in lung CT images are cancer. However, enabling earlier diagnosis is important to avoid invasive procedures or long-term surveillance for those benign nodules. We are evaluating a classification framework using radiomics features derived with a machine learning approach from a small data set of indeterminate CT lung nodule images. We used a retrospective analysis of 194 cases with pulmonary nodules in CT images, with or without contrast enhancement, from lung cancer screening clinics. The nodules were contoured by a radiologist and texture features of the lesion were calculated. In addition, semantic features describing shape were categorized. We also explored a Multiband network, a feature derivation path that uses a modified convolutional neural network (CNN) with a Triplet Network. This was trained to create discriminative feature representations useful for variable-sized nodule classification. The diagnostic accuracy was evaluated for multiple machine learning algorithms using texture, shape, and CNN features. In the CT contrast-enhanced group, the texture or semantic shape features yielded an overall diagnostic accuracy of 80%. Use of a standard deep learning network in the framework for feature derivation yielded features that substantially underperformed compared to texture and/or semantic features. However, the proposed Multiband approach of feature derivation produced results similar in diagnostic accuracy to the texture and semantic features. While the Multiband feature derivation approach did not outperform the texture and/or semantic features, its equivalent performance indicates promise for future improvements to increase diagnostic accuracy. Importantly, the Multiband approach adapts readily to different size lesions without interpolation, and performed well with a relatively small amount of training data.
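
    The Triplet Network component is trained with the standard triplet objective: the embeddings of an anchor and a same-class example are pulled together while a different-class example is pushed apart. A minimal numpy sketch of that loss (illustrative; the paper's network architecture and margin are not specified here):

        import numpy as np

        def triplet_loss(anchor, positive, negative, margin=1.0):
            """Hinge triplet loss on batches of embedding vectors."""
            d_pos = np.sum((anchor - positive) ** 2, axis=-1)   # anchor-positive distance
            d_neg = np.sum((anchor - negative) ** 2, axis=-1)   # anchor-negative distance
            return np.maximum(0.0, d_pos - d_neg + margin).mean()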

  7. An Extreme Learning Machine-Based Neuromorphic Tactile Sensing System for Texture Recognition.

    PubMed

    Rasouli, Mahdi; Chen, Yi; Basu, Arindam; Kukreja, Sunil L; Thakor, Nitish V

    2018-04-01

    Despite significant advances in computational algorithms and the development of tactile sensors, artificial tactile sensing is strikingly less efficient and capable than human tactile perception. Inspired by the efficiency of biological systems, we aim to develop a neuromorphic system for tactile pattern recognition. We particularly target texture recognition as it is one of the most necessary and challenging tasks for artificial sensory systems. Our system consists of a piezoresistive fabric material as the sensor to emulate skin, an interface that produces spike patterns to mimic neural signals from mechanoreceptors, and an extreme learning machine (ELM) chip to analyze spiking activity. Benefiting from intrinsic advantages of biologically inspired event-driven systems and the massively parallel and energy-efficient processing capabilities of the ELM chip, the proposed architecture offers a fast and energy-efficient alternative for processing tactile information. Moreover, it provides the opportunity for the development of low-cost tactile modules for large-area applications by integration of sensors and processing circuits. We demonstrate the recognition capability of our system in a texture discrimination task, where it achieves a classification accuracy of 92% for categorization of ten graded textures. Our results confirm that there exists a tradeoff between response time and classification accuracy (and information transfer rate). A faster decision can be achieved at early time steps or by using a shorter time window. This, however, results in deterioration of the classification accuracy and information transfer rate. We further observe that there exists a tradeoff between the classification accuracy and the input spike rate (and thus energy consumption). Our work substantiates the importance of developing efficient sparse codes for encoding sensory data to improve energy efficiency. These results have significance for a wide range of wearable, robotic, prosthetic, and industrial applications.
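
    The ELM itself is algorithmically simple: a fixed random hidden layer followed by a closed-form least-squares fit of the output weights, which is what makes hardware implementation attractive. A minimal Python sketch (layer size and activation are assumptions, not the chip's parameters):

        import numpy as np

        def elm_train(X, T, n_hidden=100, seed=0):
            """Extreme learning machine: random projection + least-squares readout."""
            rng = np.random.default_rng(seed)
            W = rng.standard_normal((X.shape[1], n_hidden))  # fixed random input weights
            b = rng.standard_normal(n_hidden)                # fixed random biases
            H = np.tanh(X @ W + b)                           # hidden-layer activations
            beta = np.linalg.pinv(H) @ T                     # only the readout is learned
            return W, b, beta

        def elm_predict(X, W, b, beta):
            return np.tanh(X @ W + b) @ beta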

  8. Enabling multi-level relevance feedback on PubMed by integrating rank learning into DBMS.

    PubMed

    Yu, Hwanjo; Kim, Taehoon; Oh, Jinoh; Ko, Ilhwan; Kim, Sungchul; Han, Wook-Shin

    2010-04-16

    Finding relevant articles from PubMed is challenging because it is hard to express the user's specific intention in the given query interface, and a keyword query typically retrieves a large number of results. Researchers have applied machine learning techniques to find relevant articles by ranking the articles according to the learned relevance function. However, the process of learning and ranking is usually done offline without being integrated with the keyword queries, and the users have to provide a large number of training documents to get a reasonable learning accuracy. This paper proposes a novel multi-level relevance feedback system for PubMed, called RefMed, which supports both ad-hoc keyword queries and multi-level relevance feedback in real time on PubMed. RefMed supports multi-level relevance feedback by using the RankSVM as the learning method, and thus it achieves higher accuracy with less feedback. RefMed "tightly" integrates the RankSVM into the RDBMS to support both keyword queries and multi-level relevance feedback in real time; the tight coupling of the RankSVM and the DBMS substantially improves the processing time. An efficient parameter selection method for the RankSVM is also proposed, which tunes the RankSVM parameter without performing validation. Thereby, RefMed achieves a high learning accuracy in real time without performing a validation process. RefMed is accessible at http://dm.postech.ac.kr/refmed. RefMed is the first multi-level relevance feedback system for PubMed, which achieves a high accuracy with less feedback. It effectively learns an accurate relevance function from the user's feedback and efficiently processes the function to return relevant articles in real time.
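
    RankSVM, the learner named above, fits a linear scoring function from pairwise preferences; in its usual formulation, for feedback pairs where article i is rated above article j,

        \min_{\mathbf{w},\,\xi_{ij}\ge 0}\ \tfrac{1}{2}\lVert \mathbf{w} \rVert^{2} + C \sum_{(i,j)} \xi_{ij} \quad \text{s.t.}\quad \mathbf{w}^{\top}(\mathbf{x}_i - \mathbf{x}_j) \ge 1 - \xi_{ij},

    so multi-level feedback simply contributes more ordered pairs than binary feedback, which is one reason fewer feedback documents suffice.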

  9. Enabling multi-level relevance feedback on PubMed by integrating rank learning into DBMS

    PubMed Central

    2010-01-01

    Background: Finding relevant articles from PubMed is challenging because it is hard to express the user's specific intention in the given query interface, and a keyword query typically retrieves a large number of results. Researchers have applied machine learning techniques to find relevant articles by ranking the articles according to the learned relevance function. However, the process of learning and ranking is usually done offline without being integrated with the keyword queries, and the users have to provide a large number of training documents to get a reasonable learning accuracy. This paper proposes a novel multi-level relevance feedback system for PubMed, called RefMed, which supports both ad-hoc keyword queries and multi-level relevance feedback in real time on PubMed. Results: RefMed supports multi-level relevance feedback by using the RankSVM as the learning method, and thus it achieves higher accuracy with less feedback. RefMed "tightly" integrates the RankSVM into the RDBMS to support both keyword queries and multi-level relevance feedback in real time; the tight coupling of the RankSVM and the DBMS substantially improves the processing time. An efficient parameter selection method for the RankSVM is also proposed, which tunes the RankSVM parameter without performing validation. Thereby, RefMed achieves a high learning accuracy in real time without performing a validation process. RefMed is accessible at http://dm.postech.ac.kr/refmed. Conclusions: RefMed is the first multi-level relevance feedback system for PubMed, which achieves a high accuracy with less feedback. It effectively learns an accurate relevance function from the user's feedback and efficiently processes the function to return relevant articles in real time. PMID:20406504

  10. A Subspace Pursuit–based Iterative Greedy Hierarchical Solution to the Neuromagnetic Inverse Problem

    PubMed Central

    Babadi, Behtash; Obregon-Henao, Gabriel; Lamus, Camilo; Hämäläinen, Matti S.; Brown, Emery N.; Purdon, Patrick L.

    2013-01-01

    Magnetoencephalography (MEG) is an important non-invasive method for studying activity within the human brain. Source localization methods can be used to estimate spatiotemporal activity from MEG measurements with high temporal resolution, but the spatial resolution of these estimates is poor due to the ill-posed nature of the MEG inverse problem. Recent developments in source localization methodology have emphasized temporal as well as spatial constraints to improve source localization accuracy, but these methods can be computationally intense. Solutions emphasizing spatial sparsity hold tremendous promise, since the underlying neurophysiological processes generating MEG signals are often sparse in nature, whether in the form of focal sources, or distributed sources representing large-scale functional networks. Recent developments in the theory of compressed sensing (CS) provide a rigorous framework to estimate signals with sparse structure. In particular, a class of CS algorithms referred to as greedy pursuit algorithms can provide both high recovery accuracy and low computational complexity. Greedy pursuit algorithms are difficult to apply directly to the MEG inverse problem because of the high-dimensional structure of the MEG source space and the high spatial correlation in MEG measurements. In this paper, we develop a novel greedy pursuit algorithm for sparse MEG source localization that overcomes these fundamental problems. This algorithm, which we refer to as the Subspace Pursuit-based Iterative Greedy Hierarchical (SPIGH) inverse solution, exhibits very low computational complexity while achieving very high localization accuracy. We evaluate the performance of the proposed algorithm using comprehensive simulations, as well as the analysis of human MEG data during spontaneous brain activity and somatosensory stimuli. These studies reveal substantial performance gains provided by the SPIGH algorithm in terms of computational complexity, localization accuracy, and robustness. PMID:24055554
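
    The SPIGH hierarchy builds on the basic subspace pursuit iteration of Dai and Milenkovic. A compact numpy sketch of that underlying step (illustrative; this is not the SPIGH algorithm itself, which adds the hierarchical and spatiotemporal structure described above):

        import numpy as np

        def subspace_pursuit(A, y, k, n_iter=10):
            """Recover a k-sparse x with y ~ A @ x by iterative support refinement."""
            support = np.argsort(np.abs(A.T @ y))[-k:]            # k largest correlations
            for _ in range(n_iter):
                coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
                r = y - A[:, support] @ coef                      # residual on current support
                merged = np.union1d(support, np.argsort(np.abs(A.T @ r))[-k:])
                coef, *_ = np.linalg.lstsq(A[:, merged], y, rcond=None)
                support = merged[np.argsort(np.abs(coef))[-k:]]   # keep k strongest coefficients
            x = np.zeros(A.shape[1])
            coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            x[support] = coef
            return x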

  11. The in-flight calibration of the Hubble Space Telescope attitude sensors

    NASA Technical Reports Server (NTRS)

    Welter, Gary L.

    1991-01-01

    A detailed review of the in-flight calibration of the Hubble Space Telescope attitude sensors is presented. The review, which covers the period from the April 24, 1990, launch of the spacecraft until the time of this writing (June 1991), describes the calibrations required and accuracies achieved for the four principal attitude sensing systems on the spacecraft: the magnetometers, the fixed-head star trackers, the gyroscopes, and the fine guidance sensors (FGS's). In contrast to the other three sensor groups, the Hubble Telescope's FGS's are unique in the precision and performance levels being attempted; spacecraft control and astrometric research at the near-milliarcsecond level are the ultimate goals. FGS calibration accuracies at the 20-milliarcsecond level have already been achieved, and plans for new data acquisitions and reductions that should substantially improve these results are in progress. A summary of the basic attributes of each of the four sensor groups with respect to its usage as an attitude measuring system is presented, followed by a discussion of the calibration items of interest for that group. The calibration items are as follows: for the magnetometers, the corrections for the spacecraft's static and time-varying magnetic fields; for the fixed-head star trackers, their relative alignments and use in performing onboard attitude updates; for the gyroscopes, their scale factors, alignments, and drift rate biases; and for the FGS's, their magnifications, optical distortions, and alignments. The discussion covers the procedures used for each calibration, as well as the order of the calibrations within the general flow of orbital verification activities. It also includes a synopsis of current plans for the eventual calibration of the FGS's to achieve their near-milliarcsecond design accuracy. The conclusions include a table indicating the current and predicted ultimate accuracies for each of the calibration items.

  12. The role of blood vessels in high-resolution volume conductor head modeling of EEG.

    PubMed

    Fiederer, L D J; Vorwerk, J; Lucka, F; Dannhauer, M; Yang, S; Dümpelmann, M; Schulze-Bonhage, A; Aertsen, A; Speck, O; Wolters, C H; Ball, T

    2016-03-01

    Reconstruction of the electrical sources of human EEG activity at high spatio-temporal accuracy is an important aim in neuroscience and neurological diagnostics. Over the last decades, numerous studies have demonstrated that realistic modeling of head anatomy improves the accuracy of source reconstruction of EEG signals. For example, including a cerebro-spinal fluid compartment and the anisotropy of white matter electrical conductivity were both shown to significantly reduce modeling errors. Here, we for the first time quantify the role of detailed reconstructions of the cerebral blood vessels in volume conductor head modeling for EEG. To study the role of the highly arborized cerebral blood vessels, we created a submillimeter head model based on ultra-high-field-strength (7T) structural MRI datasets. Blood vessels (arteries and emissary/intraosseous veins) were segmented using Frangi multi-scale vesselness filtering. The final head model consisted of a geometry-adapted cubic mesh with over 17×10^6 nodes. We solved the forward model using a finite-element-method (FEM) transfer matrix approach, which substantially reduced computation times, and quantified the importance of the blood vessel compartment by computing forward and inverse errors resulting from ignoring the blood vessels. Our results show that ignoring emissary veins piercing the skull leads to focal localization errors of approximately 5 to 15 mm. Large errors (>2 cm) were observed due to the carotid arteries and the dense arterial vasculature in areas such as the insula or the medial temporal lobe. Thus, in such predisposed areas, errors caused by neglecting blood vessels can reach similar magnitudes as those previously reported for neglecting white matter anisotropy, the CSF or the dura - structures which are generally considered important components of realistic EEG head models. Our findings thus imply that including a realistic blood vessel compartment in EEG head models will be helpful to improve the accuracy of EEG source analyses, particularly when high accuracies in brain areas with dense vasculature are required. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  13. Feature instructions improve face-matching accuracy

    PubMed Central

    Bindemann, Markus

    2018-01-01

    Identity comparisons of photographs of unfamiliar faces are prone to error but important for applied settings, such as person identification at passport control. Finding techniques to improve face-matching accuracy is therefore an important contemporary research topic. This study investigated whether matching accuracy can be improved by instruction to attend to specific facial features. Experiment 1 showed that instruction to attend to the eyebrows enhanced matching accuracy for optimized same-day same-race face pairs but not for other-race faces. By contrast, accuracy was unaffected by instruction to attend to the eyes, and declined with instruction to attend to ears. Experiment 2 replicated the eyebrow-instruction improvement with a different set of same-race faces, comprising both optimized same-day and more challenging different-day face pairs. These findings suggest that instruction to attend to specific features can enhance face-matching accuracy, but feature selection is crucial and generalization across face sets may be limited. PMID:29543822

  14. Stratospheric Aerosol--Observations, Processes, and Impact on Climate

    NASA Technical Reports Server (NTRS)

    Kresmer, Stefanie; Thomason, Larry W.; von Hobe, Marc; Hermann, Markus; Deshler, Terry; Timmreck, Claudia; Toohey, Matthew; Stenke, Andrea; Schwarz, Joshua P.; Weigel, Ralf

    2016-01-01

    Interest in stratospheric aerosol and its role in climate has increased over the last decade due to the observed increase in stratospheric aerosol since 2000 and the potential for changes in the sulfur cycle induced by climate change. This review provides an overview of the advances in stratospheric aerosol research since the last comprehensive assessment of stratospheric aerosol was published in 2006. A crucial development since 2006 is the substantial improvement in the agreement between in situ and space-based inferences of stratospheric aerosol properties during volcanically quiescent periods. Furthermore, new measurement systems and techniques, both in situ and space based, have been developed for measuring physical aerosol properties with greater accuracy and for characterizing aerosol composition. However, these changes pose challenges for constructing a long-term stratospheric aerosol climatology. Currently, changes in stratospheric aerosol levels less than 20% cannot be confidently quantified. The volcanic signals tend to mask any nonvolcanically driven change, making them difficult to understand. While the role of carbonyl sulfide as a substantial and relatively constant source of stratospheric sulfur has been confirmed by new observations and model simulations, large uncertainties remain with respect to the contribution from anthropogenic sulfur dioxide emissions. New evidence has been provided that stratospheric aerosol can also contain small amounts of nonsulfate matter such as black carbon and organics. Chemistry-climate models have substantially increased in number and sophistication. In many models the implementation of stratospheric aerosol processes is coupled to radiation and/or stratospheric chemistry modules to account for relevant feedback processes.

  15. Using object-based image analysis to conduct high-resolution conifer extraction at regional spatial scales

    USGS Publications Warehouse

    Coates, Peter S.; Gustafson, K. Benjamin; Roth, Cali L.; Chenaille, Michael P.; Ricca, Mark A.; Mauch, Kimberly; Sanchez-Chopitea, Erika; Kroger, Travis J.; Perry, William M.; Casazza, Michael L.

    2017-08-10

    The distribution and abundance of pinyon (Pinus monophylla) and juniper (Juniperus osteosperma, J. occidentalis) trees (hereinafter, "pinyon-juniper") in sagebrush (Artemisia spp.) ecosystems of the Great Basin in the Western United States has increased substantially since the late 1800s. Distributional expansion and infill of pinyon-juniper into sagebrush ecosystems threatens the ecological function and economic viability of these ecosystems within the Great Basin, and is now a major contemporary challenge facing land and wildlife managers. Particularly, pinyon-juniper encroachment into intact sagebrush ecosystems has been identified as a primary threat facing populations of greater sage-grouse (Centrocercus urophasianus; hereinafter, "sage-grouse"), which is a sagebrush obligate species. Even seemingly innocuous scatterings of isolated pinyon-juniper in an otherwise intact sagebrush landscape can negatively affect survival and reproduction of sage-grouse. Therefore, accurate and high-resolution maps of pinyon-juniper distribution and abundance (indexed by canopy cover) across broad geographic extents would help guide land management decisions that better target areas for pinyon-juniper removal projects (for example, fuel reduction, habitat improvement for sage-grouse, and other sagebrush species) and facilitate science that further quantifies ecological effects of pinyon-juniper encroachment on sage-grouse populations and sagebrush ecosystem processes. Hence, we mapped pinyon-juniper (referred to as conifers for actual mapping) at a high 1 × 1-meter (m) resolution across the entire range of previously mapped sage-grouse habitat in Nevada and northeastern California. We used digital orthophoto quad tiles from the National Agriculture Imagery Program (2010, 2013) as base imagery, and then classified conifers using automated feature extraction methodology with the program Feature Analyst™. This method relies on machine learning algorithms that extract features from imagery based on their spectral and spatial signatures. We classified conifers in 6,230 tiles and then tested for errors of omission and commission using confusion matrices. Accuracy ranged from 79.1 to 96.8 percent, with an overall accuracy of 84.3 percent across all mapped areas. An estimated accuracy coefficient (kappa) indicated substantial to nearly perfect agreement, which varied across mapped areas. For this mapping process across the entire mapping extent, four sets of products are available at https://doi.org/10.5066/F7348HVC, including (1) a shapefile representing accuracy results linked to mapping subunits; (2) binary rasters representing conifer presence or absence at a 1 × 1 m resolution; (3) a 30 × 30 m resolution raster representing percentages of conifer canopy cover within each cell from 0 to 100; and (4) 1 × 1 m resolution canopy cover classification rasters derived from a 50-m-radius moving window analysis. The latter two products can be reclassified in a geographic information system (GIS) into user-specified bins to meet different objectives, which include approximations for phases of encroachment. These products complement, and in some cases improve upon, existing conifer maps in the Western United States, and will help facilitate sage-grouse habitat management and sagebrush ecosystem restoration.
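
    As a small sketch of the accuracy assessment described above, the snippet below computes a confusion matrix, overall accuracy, and Cohen's kappa for hypothetical per-pixel conifer labels; the arrays are illustrative, not the study's data.

    ```python
    from sklearn.metrics import confusion_matrix, cohen_kappa_score

    # Hypothetical per-pixel labels: 1 = conifer, 0 = non-conifer
    y_true = [1, 0, 1, 1, 0, 0, 1, 0]   # reference (ground-truth) labels
    y_pred = [1, 0, 1, 0, 0, 0, 1, 1]   # classified map labels

    cm = confusion_matrix(y_true, y_pred)       # rows: truth, cols: prediction
    overall_accuracy = cm.trace() / cm.sum()    # fraction of pixels correct
    kappa = cohen_kappa_score(y_true, y_pred)   # chance-corrected agreement
    print(cm, overall_accuracy, kappa)
    ```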

  16. Improved accuracy and precision of tracer kinetic parameters by joint fitting to variable flip angle and dynamic contrast enhanced MRI data.

    PubMed

    Dickie, Ben R; Banerji, Anita; Kershaw, Lucy E; McPartlin, Andrew; Choudhury, Ananya; West, Catharine M; Rose, Chris J

    2016-10-01

    To improve the accuracy and precision of tracer kinetic model parameter estimates for use in dynamic contrast enhanced (DCE) MRI studies of solid tumors. Quantitative DCE-MRI requires an estimate of precontrast T1, which is obtained prior to fitting a tracer kinetic model. As T1 mapping and tracer kinetic signal models are both a function of precontrast T1, it was hypothesized that its joint estimation would improve the accuracy and precision of both precontrast T1 and tracer kinetic model parameters. Accuracy and/or precision of two-compartment exchange model (2CXM) parameters were evaluated for standard and joint fitting methods in well-controlled synthetic data and for 36 bladder cancer patients. Methods were compared under a number of experimental conditions. In synthetic data, joint estimation led to statistically significant improvements in the accuracy of estimated parameters in 30 of 42 conditions (improvements between 1.8% and 49%). Reduced accuracy was observed in 7 of the remaining 12 conditions. Significant improvements in precision were observed in 35 of 42 conditions (between 4.7% and 50%). In clinical data, significant improvements in precision were observed in 18 of 21 conditions (between 4.6% and 38%). Accuracy and precision of DCE-MRI parameter estimates are improved when signal models are fit jointly rather than sequentially. Magn Reson Med 76:1270-1281, 2016. © 2015 Wiley Periodicals, Inc.

  17. Quality improvement of International Classification of Diseases, 9th revision, diagnosis coding in radiation oncology: single-institution prospective study at University of California, San Francisco.

    PubMed

    Chen, Chien P; Braunstein, Steve; Mourad, Michelle; Hsu, I-Chow J; Haas-Kogan, Daphne; Roach, Mack; Fogh, Shannon E

    2015-01-01

    Accurate International Classification of Diseases (ICD) diagnosis coding is critical for patient care, billing purposes, and research endeavors. In this single-institution study, we evaluated our baseline ICD-9 (9th revision) diagnosis coding accuracy, identified the most common errors contributing to inaccurate coding, and implemented a multimodality strategy to improve radiation oncology coding. We prospectively studied ICD-9 coding accuracy in our radiation therapy-specific electronic medical record system. Baseline ICD-9 coding accuracy was obtained from chart review targeting ICD-9 coding accuracy of all patients treated at our institution between March and June of 2010. To improve performance, an educational session highlighted common coding errors, and a user-friendly software tool, RadOnc ICD Search, version 1.0, for coding radiation oncology-specific diagnoses was implemented. We then prospectively analyzed ICD-9 coding accuracy for all patients treated from July 2010 to June 2011, with the goal of maintaining 80% or higher coding accuracy. Data on coding accuracy were analyzed and fed back monthly to individual providers. Baseline coding accuracy for physicians was 463 of 661 (70%) cases. Only 46% of physicians had coding accuracy above 80%. The most common errors involved metastatic cases, in which primary or secondary site ICD-9 codes were either incorrect or missing, and special procedures such as stereotactic radiosurgery. After implementing our project, overall coding accuracy rose to 92% (range, 86%-96%). The median accuracy for all physicians was 93% (range, 77%-100%), with only 1 attending having accuracy below 80%. Incorrect primary and secondary ICD-9 codes in metastatic cases showed the most significant improvement (10% vs 2% after intervention). Identifying common coding errors and implementing both education and systems changes led to significantly improved coding accuracy. This quality assurance project highlights the potential problem of ICD-9 coding accuracy by physicians and offers an approach to effectively address this shortcoming. Copyright © 2015. Published by Elsevier Inc.

  18. Do specialized flowers promote reproductive isolation? Realized pollination accuracy of three sympatric Pedicularis species

    PubMed Central

    Armbruster, W. Scott; Shi, Xiao-Qing; Huang, Shuang-Quan

    2014-01-01

    Background and Aims Interest in pollinator-mediated evolutionary divergence of flower phenotype and speciation in plants has been at the core of plant evolutionary studies since Darwin. Specialized pollination is predicted to lead to reproductive isolation and promote speciation among sympatric species by promoting partitioning of (1) the species of pollinators used, (2) when pollinators are used, or (3) the sites of pollen placement. Here this last mechanism is investigated by observing the pollination accuracy of sympatric Pedicularis species (Orobanchaceae). Methods Pollinator behaviour was observed on three species of Pedicularis (P. densispica, P. tricolor and P. dichotoma) in the Hengduan Mountains, south-west China. Using fluorescent powder and dyed pollen, the accuracy of stigma contact with, and pollen deposition on, pollinating bumble-bees was assessed. Key Results All three species of Pedicularis were pollinated by bumble-bees. It was found that the adaptive accuracy of female function was much higher than that of male function in all three flower species. Although peak pollen deposition corresponded to the optimal location on the pollinator (i.e. the site of stigma contact) for each species, substantial amounts of pollen were scattered over much of the bees' bodies. Conclusions The Pedicularis species studied in the eastern Himalayan region did not conform with Grant's ‘Pedicularis Model’ of mechanical reproductive isolation. The specialized flowers of this diverse group of plants seem unlikely to have increased the potential for reproductive isolation or influenced rates of speciation. It is suggested instead that the extreme species richness of the Pedicularis clade was generated in other ways and that specialized flowers and substantial pollination accuracy evolved as a response to selection generated by the diversity of co-occurring congeners. PMID:24047714

  19. Enhancement of dynamic myocardial perfusion PET images based on low-rank plus sparse decomposition.

    PubMed

    Lu, Lijun; Ma, Xiaomian; Mohy-Ud-Din, Hassan; Ma, Jianhua; Feng, Qianjin; Rahmim, Arman; Chen, Wufan

    2018-02-01

    The absolute quantification of dynamic myocardial perfusion (MP) PET imaging is challenged by the limited spatial resolution of individual frame images due to division of the data into shorter frames. This study aims to develop a method for restoration and enhancement of dynamic PET images. We propose that the image restoration model should be based on multiple constraints rather than a single constraint, given that image characteristics are rarely well described by a single constraint alone. At the same time, it may be possible, but not optimal, to regularize the image with multiple constraints simultaneously. Fortunately, MP PET images can be decomposed into a superposition of background vs. dynamic components via low-rank plus sparse (L + S) decomposition. Thus, we propose an L + S decomposition based MP PET image restoration model and express it as a convex optimization problem. An iterative soft thresholding algorithm was developed to solve the problem. Using realistic dynamic 82Rb MP PET scan data, we optimized and compared its performance with other restoration methods. The proposed method resulted in substantial visual as well as quantitative accuracy improvements in terms of noise versus bias performance, as demonstrated in extensive 82Rb MP PET simulations. In particular, the myocardium defect in the MP PET images had improved visual as well as contrast versus noise tradeoff. The proposed algorithm was also applied on an 8-min clinical cardiac 82Rb MP PET study performed on the GE Discovery PET/CT, and demonstrated improved quantitative accuracy (CNR and SNR) compared to other algorithms. The proposed method is effective for restoration and enhancement of dynamic PET images. Copyright © 2017 Elsevier B.V. All rights reserved.
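
    The paper's convex restoration model is not reproduced here; the sketch below only illustrates the generic alternating soft-thresholding idea underlying L + S decompositions, with illustrative regularization weights and iteration count.

    ```python
    import numpy as np

    def soft(x, tau):
        """Entrywise soft thresholding."""
        return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

    def l_plus_s(M, lam_l=1.0, lam_s=0.1, n_iter=100):
        """Split a (voxels x frames) dynamic matrix M into a low-rank
        background L and a sparse dynamic component S by alternating
        singular-value and entrywise soft thresholding."""
        M = np.asarray(M, float)
        L = np.zeros_like(M)
        S = np.zeros_like(M)
        for _ in range(n_iter):
            U, sig, Vt = np.linalg.svd(M - S, full_matrices=False)
            L = (U * soft(sig, lam_l)) @ Vt   # singular-value thresholding
            S = soft(M - L, lam_s)            # sparse residual update
        return L, S
    ```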

  20. The combination of pH monitoring in the most distal esophagus and symptom association analysis markedly improves the clinical value of esophageal pH tests.

    PubMed

    Hall, Mats Guerrero Garcia; Wenner, Jörgen; Öberg, Stefan

    2016-01-01

    The poor sensitivity of esophageal pH monitoring substantially limits the clinical value of the test. The aim of this study was to compare the diagnostic accuracy of esophageal pH monitoring and symptom association analysis performed at the conventional level with that obtained in the most distal esophagus. Eighty-two patients with typical reflux symptoms and 49 asymptomatic subjects underwent dual 48-h pH monitoring with the electrodes positioned immediately above, and 6 cm above, the squamo-columnar junction (SCJ). The degree of esophageal acid exposure and the temporal relationship between reflux events and symptoms were evaluated. The sensitivity of pH recording and the diagnostic yield of the Symptom Association Probability (SAP) were significantly higher for pH monitoring performed at the distal compared with the conventional level (82% versus 65%, p<0.001, and 74% versus 62%, p<0.001, respectively). The greatest improvement was observed in patients with non-erosive disease. In this group, the sensitivity increased from 46% at the standard level to 66% immediately above the SCJ, and with the combination of a positive SAP as a marker for a positive pH test, the diagnostic yield further increased to 94%. The diagnostic accuracy of esophageal pH monitoring in the most distal esophagus is superior to that performed at the conventional level, and it is further improved by combination with symptom association analysis. pH monitoring with the electrode positioned immediately above the SCJ should be introduced into clinical practice and always combined with symptom association analysis.

  1. Use and Customization of Risk Scores for Predicting Cardiovascular Events Using Electronic Health Record Data.

    PubMed

    Wolfson, Julian; Vock, David M; Bandyopadhyay, Sunayan; Kottke, Thomas; Vazquez-Benitez, Gabriela; Johnson, Paul; Adomavicius, Gediminas; O'Connor, Patrick J

    2017-04-24

    Clinicians who are using the Framingham Risk Score (FRS) or the American College of Cardiology/American Heart Association Pooled Cohort Equations (PCE) to estimate risk for their patients based on electronic health data (EHD) face 4 questions. (1) Do published risk scores applied to EHD yield accurate estimates of cardiovascular risk? (2) Are FRS risk estimates, which are based on data that are up to 45 years old, valid for a contemporary patient population seeking routine care? (3) Do the PCE make the FRS obsolete? (4) Does refitting the risk score using EHD improve the accuracy of risk estimates? Data were extracted from the EHD of 84,116 adults aged 40 to 79 years who received care at a large healthcare delivery and insurance organization between 2001 and 2011. We assessed calibration and discrimination for 4 risk scores: published versions of FRS and PCE and versions obtained by refitting models using a subset of the available EHD. The published FRS was well calibrated (calibration statistic K=9.1, miscalibration ranging from 0% to 17% across risk groups), but the PCE displayed modest evidence of miscalibration (calibration statistic K=43.7, miscalibration from 9% to 31%). Discrimination was similar in both models (C-index=0.740 for FRS, 0.747 for PCE). Refitting the published models using EHD did not substantially improve calibration or discrimination. We conclude that published cardiovascular risk models can be successfully applied to EHD to estimate cardiovascular risk; the FRS remains valid and is not obsolete; and model refitting does not meaningfully improve the accuracy of risk estimates. © 2017 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley.
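
    As a simplified illustration of the discrimination and calibration checks described (ignoring the censoring handled in the actual study), the snippet below computes a C-statistic and a crude observed/expected ratio for hypothetical risk predictions.

    ```python
    import numpy as np
    from sklearn.metrics import roc_auc_score

    # Hypothetical predicted 10-year risks and observed binary outcomes
    risk = np.array([0.05, 0.40, 0.10, 0.08, 0.35, 0.12, 0.22, 0.07])
    events = np.array([0, 1, 0, 0, 1, 0, 0, 0])

    # For uncensored binary outcomes the C-index reduces to the ROC AUC
    c_index = roc_auc_score(events, risk)    # discrimination
    oe_ratio = events.mean() / risk.mean()   # calibration-in-the-large
    print(c_index, oe_ratio)
    ```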

  2. Improving accuracy of genomic prediction in Brangus cattle by adding animals with imputed low-density SNP genotypes.

    PubMed

    Lopes, F B; Wu, X-L; Li, H; Xu, J; Perkins, T; Genho, J; Ferretti, R; Tait, R G; Bauck, S; Rosa, G J M

    2018-02-01

    Reliable genomic prediction of breeding values for quantitative traits requires the availability of a sufficient number of animals with genotypes and phenotypes in the training set. As of 31 October 2016, there were 3,797 Brangus animals with genotypes and phenotypes. These Brangus animals were genotyped using different commercial SNP chips. Of them, the largest group consisted of 1,535 animals genotyped by the GGP-LDV4 SNP chip. The remaining 2,262 genotypes were imputed to the SNP content of the GGP-LDV4 chip, so that the number of animals available for training the genomic prediction models was more than doubled. The present study showed that the pooling of animals with either original or imputed 40K SNP genotypes substantially increased genomic prediction accuracies for the ten traits. By supplementing imputed genotypes, the relative gains in genomic prediction accuracies on estimated breeding values (EBV) were from 12.60% to 31.27%, and the relative gains in genomic prediction accuracies on de-regressed EBV were smaller (0.87%-18.75%). The present study also compared the performance of five genomic prediction models and two cross-validation methods. The five genomic models predicted EBV and de-regressed EBV of the ten traits similarly well. Of the two cross-validation methods, leave-one-out cross-validation maximized the number of animals at the stage of training for genomic prediction. Genomic prediction accuracy (GPA) on the ten quantitative traits was validated in 1,106 newly genotyped Brangus animals based on the SNP effects estimated in the previous set of 3,797 Brangus animals, and these accuracies were slightly lower than the GPA in the original data. The present study was the first to leverage currently available genotype and phenotype resources in order to harness genomic prediction in Brangus beef cattle. © 2018 Blackwell Verlag GmbH.
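
    A minimal sketch of leave-one-out cross-validated genomic prediction, using ridge regression as a stand-in for the study's (unspecified) genomic models; the genotypes, EBVs, and shrinkage parameter are simulated placeholders.

    ```python
    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import LeaveOneOut, cross_val_predict

    rng = np.random.default_rng(0)
    G = rng.integers(0, 3, size=(60, 500)).astype(float)          # toy SNP genotypes (0/1/2)
    ebv = G @ rng.normal(0, 0.05, 500) + rng.normal(0, 0.5, 60)   # simulated EBVs

    # Leave-one-out keeps every other animal in the training set,
    # maximizing training-set size as the abstract notes
    pred = cross_val_predict(Ridge(alpha=100.0), G, ebv, cv=LeaveOneOut())
    accuracy = np.corrcoef(pred, ebv)[0, 1]   # prediction accuracy as correlation
    print(accuracy)
    ```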

  3. A contrastive study on the influences of radial and three-dimensional satellite gravity gradiometry on the accuracy of the Earth's gravitational field recovery

    NASA Astrophysics Data System (ADS)

    Zheng, Wei; Hsu, Hou-Tse; Zhong, Min; Yun, Mei-Juan

    2012-10-01

    The accuracy of the Earth's gravitational field measured from the gravity field and steady-state ocean circulation explorer (GOCE), up to 250 degrees, influenced by the radial gravity gradient Vzz and three-dimensional gravity gradient Vij from the satellite gravity gradiometry (SGG) are contrastively demonstrated based on the analytical error model and numerical simulation, respectively. Firstly, the new analytical error model of the cumulative geoid height, influenced by the radial gravity gradient Vzz and three-dimensional gravity gradient Vij are established, respectively. In 250 degrees, the GOCE cumulative geoid height error measured by the radial gravity gradient Vzz is about 2½ times higher than that measured by the three-dimensional gravity gradient Vij. Secondly, the Earth's gravitational field from GOCE completely up to 250 degrees is recovered using the radial gravity gradient Vzz and three-dimensional gravity gradient Vij by numerical simulation, respectively. The study results show that when the measurement error of the gravity gradient is 3 × 10-12/s2, the cumulative geoid height errors using the radial gravity gradient Vzz and three-dimensional gravity gradient Vij are 12.319 cm and 9.295 cm at 250 degrees, respectively. The accuracy of the cumulative geoid height using the three-dimensional gravity gradient Vij is improved by 30%-40% on average compared with that using the radial gravity gradient Vzz in 250 degrees. Finally, by mutual verification of the analytical error model and numerical simulation, the orders of magnitude from the accuracies of the Earth's gravitational field recovery make no substantial differences based on the radial and three-dimensional gravity gradients, respectively. Therefore, it is feasible to develop in advance a radial cold-atom interferometric gradiometer with a measurement accuracy of 10-13/s2-10-15/s2 for precisely producing the next-generation GOCE Follow-On Earth gravity field model with a high spatial resolution.

  4. Assessing the accuracy and stability of variable selection methods for random forest modeling in ecology.

    PubMed

    Fox, Eric W; Hill, Ryan A; Leibowitz, Scott G; Olsen, Anthony R; Thornbrugh, Darren J; Weber, Marc H

    2017-07-01

    Random forest (RF) modeling has emerged as an important statistical learning method in ecology due to its exceptional predictive performance. However, for large and complex ecological data sets, there is limited guidance on variable selection methods for RF modeling. Typically, either a preselected set of predictor variables are used or stepwise procedures are employed which iteratively remove variables according to their importance measures. This paper investigates the application of variable selection methods to RF models for predicting probable biological stream condition. Our motivating data set consists of the good/poor condition of n = 1365 stream survey sites from the 2008/2009 National Rivers and Stream Assessment, and a large set (p = 212) of landscape features from the StreamCat data set as potential predictors. We compare two types of RF models: a full variable set model with all 212 predictors and a reduced variable set model selected using a backward elimination approach. We assess model accuracy using RF's internal out-of-bag estimate, and a cross-validation procedure with validation folds external to the variable selection process. We also assess the stability of the spatial predictions generated by the RF models to changes in the number of predictors and argue that model selection needs to consider both accuracy and stability. The results suggest that RF modeling is robust to the inclusion of many variables of moderate to low importance. We found no substantial improvement in cross-validated accuracy as a result of variable reduction. Moreover, the backward elimination procedure tended to select too few variables and exhibited numerous issues such as upwardly biased out-of-bag accuracy estimates and instabilities in the spatial predictions. We use simulations to further support and generalize results from the analysis of real data. A main purpose of this work is to elucidate issues of model selection bias and instability to ecologists interested in using RF to develop predictive models with large environmental data sets.
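
    A hedged sketch of backward elimination with a random forest, of the kind evaluated above, using scikit-learn's out-of-bag (OOB) scoring; as the abstract cautions, OOB accuracy computed inside the selection loop can be upwardly biased, so validation folds external to the selection are preferable.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def backward_elimination(X, y, min_vars=5):
        """Iteratively drop the least important predictor, recording
        out-of-bag (OOB) accuracy at each model size."""
        keep = list(range(X.shape[1]))
        trace = []
        while len(keep) >= min_vars:
            rf = RandomForestClassifier(n_estimators=500, oob_score=True,
                                        random_state=0).fit(X[:, keep], y)
            trace.append((list(keep), rf.oob_score_))
            # Drop the variable with the smallest importance measure
            keep.pop(int(np.argmin(rf.feature_importances_)))
        return trace
    ```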

  5. Evaluation of accelerometer based multi-sensor versus single-sensor activity recognition systems.

    PubMed

    Gao, Lei; Bourke, A K; Nelson, John

    2014-06-01

    Physical activity has a positive impact on people's well-being, and it has been shown to decrease the occurrence of chronic diseases in the older adult population. To date, a substantial number of research studies exist that focus on activity recognition using inertial sensors. Many of these studies adopt a single-sensor approach and focus on proposing novel features combined with complex classifiers to improve the overall recognition accuracy. In addition, the implementation of the advanced feature extraction algorithms and the complex classifiers exceeds the computing ability of most current wearable sensor platforms. This paper proposes a method to adopt multiple sensors on distributed body locations to overcome this problem. The objective of the proposed system is to achieve higher recognition accuracy with "light-weight" signal processing algorithms, which run on a distributed computing based sensor system comprised of computationally efficient nodes. For analysing and evaluating the multi-sensor system, eight subjects were recruited to perform eight normal scripted activities in different life scenarios, each repeated three times. Thus a total of 192 activities were recorded, resulting in 864 separate annotated activity states. The methods for designing such a multi-sensor system required consideration of the following: signal pre-processing algorithms, sampling rate, feature selection and classifier selection. Each has been investigated, and the most appropriate approach was selected to achieve a trade-off between recognition accuracy and computing execution time. A comparison of six different systems, which employ single or multiple sensors, is presented. The experimental results illustrate that the proposed multi-sensor system can achieve an overall recognition accuracy of 96.4% by adopting the mean and variance features, using the Decision Tree classifier. The results demonstrate that elaborate classifiers and feature sets are not required to achieve high recognition accuracies on a multi-sensor system. Copyright © 2014 IPEM. Published by Elsevier Ltd. All rights reserved.
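
    A minimal sketch of the "light-weight" pipeline the abstract reports, pairing per-window mean and variance features with a decision tree; the window length, synthetic signals, and labels are placeholders.

    ```python
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    def window_features(acc, win=128):
        """Mean and variance per axis over fixed-length windows of a
        (samples x 3) accelerometer stream."""
        n = len(acc) // win
        segs = acc[:n * win].reshape(n, win, -1)
        return np.hstack([segs.mean(axis=1), segs.var(axis=1)])

    rng = np.random.default_rng(0)
    walking = rng.normal(0.0, 2.0, (1280, 3))   # toy high-variance signal
    sitting = rng.normal(1.0, 0.1, (1280, 3))   # toy low-variance signal
    X = np.vstack([window_features(walking), window_features(sitting)])
    y = [0] * 10 + [1] * 10                     # one label per window
    clf = DecisionTreeClassifier().fit(X, y)
    ```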

  6. Global monthly water scarcity: blue water footprints versus blue water availability.

    PubMed

    Hoekstra, Arjen Y; Mekonnen, Mesfin M; Chapagain, Ashok K; Mathews, Ruth E; Richter, Brian D

    2012-01-01

    Freshwater scarcity is a growing concern, placing considerable importance on the accuracy of indicators used to characterize and map water scarcity worldwide. We improve upon past efforts by using estimates of blue water footprints (consumptive use of ground- and surface water flows) rather than water withdrawals, accounting for the flows needed to sustain critical ecological functions and by considering monthly rather than annual values. We analyzed 405 river basins for the period 1996-2005. In 201 basins with 2.67 billion inhabitants there was severe water scarcity during at least one month of the year. The ecological and economic consequences of increasing degrees of water scarcity--as evidenced by the Rio Grande (Rio Bravo), Indus, and Murray-Darling River Basins--can include complete desiccation during dry seasons, decimation of aquatic biodiversity, and substantial economic disruption.

  7. Global Monthly Water Scarcity: Blue Water Footprints versus Blue Water Availability

    PubMed Central

    Hoekstra, Arjen Y.; Mekonnen, Mesfin M.; Chapagain, Ashok K.; Mathews, Ruth E.; Richter, Brian D.

    2012-01-01

    Freshwater scarcity is a growing concern, placing considerable importance on the accuracy of indicators used to characterize and map water scarcity worldwide. We improve upon past efforts by using estimates of blue water footprints (consumptive use of ground- and surface water flows) rather than water withdrawals, accounting for the flows needed to sustain critical ecological functions and by considering monthly rather than annual values. We analyzed 405 river basins for the period 1996–2005. In 201 basins with 2.67 billion inhabitants there was severe water scarcity during at least one month of the year. The ecological and economic consequences of increasing degrees of water scarcity – as evidenced by the Rio Grande (Rio Bravo), Indus, and Murray-Darling River Basins – can include complete desiccation during dry seasons, decimation of aquatic biodiversity, and substantial economic disruption. PMID:22393438

  8. COSMO-SkyMed Spotlight interferometry over rural areas: the Slumgullion landslide in Colorado, USA

    USGS Publications Warehouse

    Milillo, Pietro; Fielding, Eric J.; Schulz, William H.; Delbridge, Brent; Burgmann, Roland

    2014-01-01

    In the last 7 years, spaceborne synthetic aperture radar (SAR) data with resolution better than a meter, acquired by satellites in spotlight mode, have offered an unprecedented improvement in SAR interferometry (InSAR). Most attention has been focused on monitoring urban areas and man-made infrastructure, exploiting the geometric accuracy, stability, and phase fidelity of the spotlight mode. In this paper, we explore the potential application of the COSMO-SkyMed® Spotlight mode to rural areas, where decorrelation is substantial and rapidly increases with time. We focus on the rapid repeat times, as short as one day, that are possible with the COSMO-SkyMed® constellation. We further present a qualitative analysis of spotlight interferometry over the Slumgullion landslide in southwest Colorado, which moves at rates of more than 1 cm/day.

  9. Realization of preconditioned Lanczos and conjugate gradient algorithms on optical linear algebra processors.

    PubMed

    Ghosh, A

    1988-08-01

    Lanczos and conjugate gradient algorithms are important in computational linear algebra. In this paper, a parallel pipelined realization of these algorithms on a ring of optical linear algebra processors is described. The flow of data is designed to minimize the idle times of the optical multiprocessor and the redundancy of computations. The effects of optical round-off errors on the solutions obtained by the optical Lanczos and conjugate gradient algorithms are analyzed, and it is shown that optical preconditioning can improve the accuracy of these algorithms substantially. Algorithms for optical preconditioning and results of numerical experiments on solving linear systems of equations arising from partial differential equations are discussed. Since the Lanczos algorithm is used mostly with sparse matrices, a folded storage scheme to represent sparse matrices on spatial light modulators is also described.
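
    As an illustration of the preconditioned conjugate gradient iteration discussed, the sketch below uses a simple Jacobi (diagonal) preconditioner standing in for the paper's optical preconditioning.

    ```python
    import numpy as np

    def pcg(A, b, M_inv, tol=1e-8, max_iter=200):
        """Preconditioned conjugate gradients for a symmetric positive
        definite A; M_inv applies the inverse of the preconditioner."""
        x = np.zeros_like(b)
        r = b - A @ x
        z = M_inv(r)
        p = z.copy()
        rz = r @ z
        for _ in range(max_iter):
            Ap = A @ p
            alpha = rz / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            if np.linalg.norm(r) < tol:
                break
            z = M_inv(r)
            rz_new = r @ z
            p = z + (rz_new / rz) * p
            rz = rz_new
        return x

    # Jacobi preconditioning on a tiny SPD system
    A = np.array([[4.0, 1.0], [1.0, 3.0]])
    b = np.array([1.0, 2.0])
    x = pcg(A, b, lambda r: r / np.diag(A))
    ```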

  10. Research on navigation of satellite constellation based on an asynchronous observation model using X-ray pulsar

    NASA Astrophysics Data System (ADS)

    Guo, Pengbin; Sun, Jian; Hu, Shuling; Xue, Ju

    2018-02-01

    Pulsar navigation is a promising navigation method for high-altitude orbit space tasks or deep space exploration. At present, an important factor restricting the development of pulsar navigation is that navigation accuracy is not high, due to the slow update of the measurements. In order to improve the accuracy of pulsar navigation, an asynchronous observation model that can improve the update rate of the measurements is proposed on the basis of a satellite constellation, which has broad scope for development because of its visibility and reliability. The simulation results show that the asynchronous observation model improves the positioning accuracy by 31.48% and the velocity accuracy by 24.75% compared with the synchronous observation model. With the new Doppler effect compensation method in the asynchronous observation model proposed in this paper, the positioning accuracy is improved by 32.27% and the velocity accuracy by 34.07% compared with the traditional method. The simulation results also show that neglecting the clock error results in filter divergence.

  11. Improved transition path sampling methods for simulation of rare events

    NASA Astrophysics Data System (ADS)

    Chopra, Manan; Malshe, Rohit; Reddy, Allam S.; de Pablo, J. J.

    2008-04-01

    The free energy surfaces of a wide variety of systems encountered in physics, chemistry, and biology are characterized by the existence of deep minima separated by numerous barriers. One of the central aims of recent research in computational chemistry and physics has been to determine how transitions occur between deep local minima on rugged free energy landscapes, and transition path sampling (TPS) Monte-Carlo methods have emerged as an effective means for numerical investigation of such transitions. Many of the shortcomings of TPS-like approaches generally stem from their high computational demands. Two new algorithms are presented in this work that improve the efficiency of TPS simulations. The first algorithm uses biased shooting moves to render the sampling of reactive trajectories more efficient. The second algorithm is shown to substantially improve the accuracy of the transition state ensemble by introducing a subset of local transition path simulations in the transition state. The system considered in this work consists of a two-dimensional rough energy surface that is representative of numerous systems encountered in applications. When taken together, these algorithms provide gains in efficiency of over two orders of magnitude when compared to traditional TPS simulations.

  12. The Road to ICD-10-CM/PCS Implementation: Forecasting the Transition for Providers, Payers, and Other Healthcare Organizations

    PubMed Central

    Sanders, Tekla B; Bowens, Felicia M; Pierce, William; Stasher-Booker, Bridgette; Thompson, Erica Q; Jones, Warren A

    2012-01-01

    This article will examine the benefits and challenges of the US healthcare system's upcoming conversion to use of the International Classification of Diseases, Tenth Revision, Clinical Modification/Procedure Coding System (ICD-10-CM/PCS) and will review the cost implications of the transition. Benefits including improved quality of care, potential cost savings from increased accuracy of payments and reduction of unpaid claims, and improved tracking of healthcare data related to public health and bioterrorism events are discussed. Challenges are noted in the areas of planning and implementation, the financial cost of the transition, a shortage of qualified coders, the need for further training and education of the healthcare workforce, and the loss of productivity during the transition. Although the transition will require substantial implementation and conversion costs, potential benefits can be achieved in the areas of data integrity, fraud detection, enhanced cost analysis capabilities, and improved monitoring of patients’ health outcomes that will yield greater cost savings over time. The discussion concludes with recommendations to healthcare organizations of ways in which technological advances and workforce training and development opportunities can ease the transition to the new coding system. PMID:22548024

  13. Improving critical thinking and clinical reasoning with a continuing education course.

    PubMed

    Cruz, Dina Monteiro; Pimenta, Cibele Mattos; Lunney, Margaret

    2009-03-01

    Continuing education courses related to critical thinking and clinical reasoning are needed to improve the accuracy of diagnosis. This study evaluated a 4-day, 16-hour continuing education course conducted in Brazil. Thirty-nine nurses completed a pretest and a posttest consisting of two written case studies designed to measure the accuracy of nurses' diagnoses. There were significant differences in accuracy from pretest to posttest for case 1 (p = .008), case 2 (p = .042), and overall (p = .001). Continuing education courses should be implemented to improve the accuracy of nurses' diagnoses.

  14. Accuracy and Reliability of the Klales et al. (2012) Morphoscopic Pelvic Sexing Method.

    PubMed

    Lesciotto, Kate M; Doershuk, Lily J

    2018-01-01

    Klales et al. (2012) devised an ordinal scoring system for the morphoscopic pelvic traits described by Phenice (1969) and used for sex estimation of skeletal remains. The aim of this study was to test the accuracy and reliability of the Klales method using a large sample from the Hamann-Todd collection (n = 279). Two observers were blinded to sex, ancestry, and age and used the Klales et al. method to estimate the sex of each individual. Sex was correctly estimated for females with over 95% accuracy; however, the male allocation accuracy was approximately 50%. Weighted Cohen's kappa and intraclass correlation coefficient analysis for evaluating intra- and interobserver error showed moderate to substantial agreement for all traits. Although each trait can be reliably scored using the Klales method, low accuracy rates and high sex bias indicate better trait descriptions and visual guides are necessary to more accurately reflect the range of morphological variation. © 2017 American Academy of Forensic Sciences.

  15. How variations in distance affect eyewitness reports and identification accuracy.

    PubMed

    Lindsay, R C L; Semmler, Carolyn; Weber, Nathan; Brewer, Neil; Lindsay, Marilyn R

    2008-12-01

    Witnesses observe crimes at various distances and the courts have to interpret their testimony given the likely quality of witnesses' views of events. We examined how accurately witnesses judged the distance between themselves and a target person, and how distance affected description accuracy, choosing behavior, and identification test accuracy. Over 1,300 participants were approached during normal daily activities, and asked to observe a target person at one of a number of possible distances. Under a Perception, Immediate Memory, or Delayed Memory condition, witnesses provided a brief description of the target, estimated the distance to the target, and then examined a 6-person target-present or target-absent lineup to see if they could identify the target. Errors in distance judgments were often substantial. Description accuracy was mediocre and did not vary systematically with distance. Identification choosing rates were not affected by distance, but decision accuracy declined with distance. Contrary to previous research, a 15-m viewing distance was not critical for discriminating accurate from inaccurate decisions.

  16. Recent developments and comprehensive evaluations of a GPU-based Monte Carlo package for proton therapy

    PubMed Central

    Qin, Nan; Botas, Pablo; Giantsoudi, Drosoula; Schuemann, Jan; Tian, Zhen; Jiang, Steve B.; Paganetti, Harald; Jia, Xun

    2016-01-01

    Monte Carlo (MC) simulation is commonly considered as the most accurate dose calculation method for proton therapy. Aiming at achieving fast MC dose calculations for clinical applications, we have previously developed a GPU-based MC tool, gPMC. In this paper, we report our recent updates on gPMC in terms of its accuracy, portability, and functionality, as well as comprehensive tests on this tool. The new version, gPMC v2.0, was developed under the OpenCL environment to enable portability across different computational platforms. Physics models of nuclear interactions were refined to improve calculation accuracy. Scoring functions of gPMC were expanded to enable tallying particle fluence, dose deposited by different particle types, and dose-averaged linear energy transfer (LETd). A multiple counter approach was employed to improve efficiency by reducing frequency of memory writing conflict at scoring. For dose calculation, accuracy improvements over gPMC v1.0 were observed in both water phantom cases and a patient case. For a prostate cancer case planned using high-energy proton beams, dose discrepancies in beam entrance and target region seen in gPMC v1.0 with respect to the gold standard tool for proton Monte Carlo simulations (TOPAS) results were substantially reduced and gamma test passing rate (1%/1 mm) was improved from 82.7% to 93.1%. Average relative difference in LETd between gPMC and TOPAS was 1.7%. Average relative differences in dose deposited by primary, secondary, and other heavier particles were within 2.3%, 0.4%, and 0.2%. Depending on source proton energy and phantom complexity, it took 8 to 17 seconds on an AMD Radeon R9 290x GPU to simulate 10^7 source protons, achieving less than 1% average statistical uncertainty. As beam size was reduced from 10×10 cm^2 to 1×1 cm^2, time on scoring was only increased by 4.8% with eight counters, in contrast to a 40% increase using only one counter. With the OpenCL environment, the portability of gPMC v2.0 was enhanced. It was successfully executed on different CPUs and GPUs and its performance on different devices varied depending on processing power and hardware structure. PMID:27694712

  17. Improved Accuracy of Continuous Glucose Monitoring Systems in Pediatric Patients with Diabetes Mellitus: Results from Two Studies.

    PubMed

    Laffel, Lori

    2016-02-01

    This study was designed to evaluate accuracy, performance, and safety of the Dexcom (San Diego, CA) G4® Platinum continuous glucose monitoring (CGM) system (G4P) compared with the Dexcom G4 Platinum with Software 505 algorithm (SW505) when used as adjunctive management to blood glucose (BG) monitoring over a 7-day period in youth, 2-17 years of age, with diabetes. Youth wore either one or two sensors placed on the abdomen or upper buttocks for 7 days, calibrating the device twice daily with a uniform BG meter. Participants had one in-clinic session on Day 1, 4, or 7, during which fingerstick BG measurements (self-monitoring of blood glucose [SMBG]) were obtained every 30 ± 5 min for comparison with CGM, and in youth 6-17 years of age, reference YSI glucose measurements were obtained from arterialized venous blood collected every 15 ± 5 min for comparison with CGM. The sensor was removed by the participant/family after 7 days. In comparison of 2,922 temporally paired points of CGM with the reference YSI measurement for G4P and 2,262 paired points for SW505, the mean absolute relative difference (MARD) was 17% for G4P versus 10% for SW505 (P < 0.0001). In comparison of 16,318 temporally paired points of CGM with SMBG for G4P and 4,264 paired points for SW505, MARD was 15% for G4P versus 13% for SW505 (P < 0.0001). Similarly, error grid analyses indicated superior performance with SW505 compared with G4P in comparison of CGM with YSI and CGM with SMBG results, with greater percentages of SW505 results falling within error grid Zone A or the combined Zones A plus B. There were no serious adverse events or device-related serious adverse events for either the G4P or the SW505, and there was no sensor breakoff. The updated algorithm offers substantial improvements in accuracy and performance in pediatric patients with diabetes. Use of CGM with improved performance has potential to increase glucose time in range and improve glycemic outcomes for youth.
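
    For reference, the mean absolute relative difference (MARD) reported above is computed from temporally paired sensor and reference values; a minimal sketch with made-up readings:

    ```python
    import numpy as np

    def mard(cgm, reference):
        """Mean absolute relative difference (%) between temporally paired
        CGM and reference (YSI or SMBG) glucose values."""
        cgm = np.asarray(cgm, float)
        reference = np.asarray(reference, float)
        return 100.0 * np.mean(np.abs(cgm - reference) / reference)

    # Hypothetical paired readings in mg/dL
    print(mard([110, 150, 95], [120, 140, 100]))   # about 6.8%
    ```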

  18. MRI-guided attenuation correction in whole-body PET/MR: assessment of the effect of bone attenuation.

    PubMed

    Akbarzadeh, A; Ay, M R; Ahmadian, A; Alam, N Riahi; Zaidi, H

    2013-02-01

    Hybrid PET/MRI presents many advantages in comparison with its counterpart PET/CT in terms of improved soft-tissue contrast, decrease in radiation exposure, and truly simultaneous and multi-parametric imaging capabilities. However, the lack of well-established methodology for MR-based attenuation correction is hampering further development and wider acceptance of this technology. We assess the impact of ignoring bone attenuation and using different tissue classes for generation of the attenuation map on the accuracy of attenuation correction of PET data. This work was performed using simulation studies based on the XCAT phantom and clinical input data. For the latter, PET and CT images of patients were used as input for the analytic simulation model using realistic activity distributions where CT-based attenuation correction was utilized as reference for comparison. For both phantom and clinical studies, the reference attenuation map was classified into various numbers of tissue classes to produce three (air, soft tissue and lung), four (air, lungs, soft tissue and cortical bones) and five (air, lungs, soft tissue, cortical bones and spongeous bones) class attenuation maps. The phantom studies demonstrated that ignoring bone increases the relative error by up to 6.8% in the body and up to 31.0% for bony regions. Likewise, the simulated clinical studies showed that the mean relative error reached 15% for lesions located in the body and 30.7% for lesions located in bones, when neglecting bones. These results demonstrate an underestimation of about 30% of tracer uptake when neglecting bone, which in turn imposes substantial loss of quantitative accuracy for PET images produced by hybrid PET/MRI systems. Considering bones in the attenuation map will considerably improve the accuracy of MR-guided attenuation correction in hybrid PET/MR to enable quantitative PET imaging on hybrid PET/MR technologies.
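
    A hedged sketch of building piecewise-constant attenuation maps with different numbers of tissue classes; the Hounsfield-unit thresholds and 511 keV attenuation coefficients are illustrative textbook-style values, not those used in the study.

    ```python
    import numpy as np

    def attenuation_map(ct_hu, n_classes=4):
        """Piecewise-constant 511 keV attenuation map (1/cm) from CT
        Hounsfield units; thresholds and mu values are illustrative."""
        mu = np.full(ct_hu.shape, 0.096)   # soft tissue default
        mu[ct_hu < -400] = 0.025           # lung
        mu[ct_hu < -950] = 0.0             # air
        if n_classes >= 4:
            mu[ct_hu > 300] = 0.17         # cortical bone
        if n_classes >= 5:
            mu[(ct_hu > 100) & (ct_hu <= 300)] = 0.13   # spongeous bone
        return mu

    ct = np.array([[-1000.0, -600.0], [40.0, 800.0]])   # toy HU slice
    print(attenuation_map(ct, n_classes=4))
    ```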

  19. Four decades of microwave satellite soil moisture observations: Part 2. Product validation and inter-satellite comparisons

    NASA Astrophysics Data System (ADS)

    Karthikeyan, L.; Pan, Ming; Wanders, Niko; Kumar, D. Nagesh; Wood, Eric F.

    2017-11-01

    Soil moisture is widely recognized as an important land surface variable that provides a deeper knowledge of land-atmosphere interactions and climate change. Space-borne passive and active microwave sensors have become valuable and essential sources of soil moisture observations at global scales. Over the past four decades, several active and passive microwave sensors have been deployed, along with the recent launch of two fully dedicated missions (SMOS and SMAP). Marking four decades of microwave remote sensing of soil moisture, this Part 2 of the two-part review series presents an overview of how our knowledge in this field has improved in terms of the design of sensors and their accuracy for retrieving soil moisture. The first part discusses the developments made in active and passive microwave soil moisture retrieval algorithms. We assess the evolution of the products of various sensors over the last four decades, in terms of daily coverage, temporal performance, and spatial performance, by comparing the products of eight passive sensors (SMMR, SSM/I, TMI, AMSR-E, WindSAT, AMSR2, SMOS and SMAP), two active sensors (ERS-Scatterometer, MetOp-ASCAT), and one active/passive merged soil moisture product (ESA-CCI combined product) with the International Soil Moisture Network (ISMN) in-situ stations and the Variable Infiltration Capacity (VIC) land surface model simulations over the Contiguous United States (CONUS). In the process, the regional impacts of vegetation conditions on the spatial and temporal performance of soil moisture products are investigated. We also carried out inter-satellite comparisons to study the roles of sensor design and algorithms on the retrieval accuracy. We find that substantial improvements have been made over recent years in this field in terms of daily coverage, retrieval accuracy, and temporal dynamics. We conclude that the microwave soil moisture products have significantly evolved in the last four decades and will continue to make key contributions to the progress of hydro-meteorological and climate sciences.

  20. Prostate cancer localization with multispectral MRI using cost-sensitive support vector machines and conditional random fields.

    PubMed

    Artan, Yusuf; Haider, Masoom A; Langer, Deanna L; van der Kwast, Theodorus H; Evans, Andrew J; Yang, Yongyi; Wernick, Miles N; Trachtenberg, John; Yetik, Imam Samil

    2010-09-01

    Prostate cancer is a leading cause of cancer death for men in the United States. Fortunately, the survival rate for early diagnosed patients is relatively high. Therefore, in vivo imaging plays an important role for the detection and treatment of the disease. Accurate prostate cancer localization with noninvasive imaging can be used to guide biopsy, radiotherapy, and surgery as well as to monitor disease progression. Magnetic resonance imaging (MRI) performed with an endorectal coil provides higher prostate cancer localization accuracy, when compared to transrectal ultrasound (TRUS). However, in general, a single type of MRI is not sufficient for reliable tumor localization. As an alternative, multispectral MRI, i.e., the use of multiple MRI-derived datasets, has emerged as a promising noninvasive imaging technique for the localization of prostate cancer; however, almost all studies to date have involved human readers. There is significant inter- and intraobserver variability for human readers, and it is substantially difficult for humans to analyze the large datasets of multispectral MRI. To solve these problems, this study presents an automated localization method using cost-sensitive support vector machines (SVMs) and shows that this method results in improved localization accuracy compared with classical SVM. Additionally, we develop a new segmentation method by combining conditional random fields (CRF) with a cost-sensitive framework and show that our method further improves cost-sensitive SVM results by incorporating spatial information. We test SVM, cost-sensitive SVM, and the proposed cost-sensitive CRF on multispectral MRI datasets acquired from 21 biopsy-confirmed cancer patients. Our results show that multispectral MRI helps to increase the accuracy of prostate cancer localization when compared to single MR images, and that using advanced methods such as cost-sensitive SVM as well as the proposed cost-sensitive CRF can boost the performance significantly when compared to SVM.
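
    Cost-sensitive SVMs can be approximated with per-class misclassification weights; a minimal sketch (the features, labels, and 5:1 cost ratio are illustrative, and the CRF stage is not shown):

    ```python
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))   # stand-in multispectral features per voxel
    y = (X[:, 0] + 0.5 * rng.normal(size=200) > 1).astype(int)   # rare positive class

    # Penalize errors on the rare "cancer" class more heavily, the core
    # idea of a cost-sensitive SVM
    clf = SVC(kernel="rbf", class_weight={0: 1.0, 1: 5.0}).fit(X, y)
    ```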

  1. Robust adaptive extended Kalman filtering for real time MR-thermometry guided HIFU interventions.

    PubMed

    Roujol, Sébastien; de Senneville, Baudouin Denis; Hey, Silke; Moonen, Chrit; Ries, Mario

    2012-03-01

    Real time magnetic resonance (MR) thermometry is gaining clinical importance for monitoring and guiding high intensity focused ultrasound (HIFU) ablations of tumorous tissue. The temperature information can be employed to adjust the position and the power of the HIFU system in real time and to determine the therapy endpoint. The requirement to resolve both the physiological motion of mobile organs and the rapid temperature variations induced by state-of-the-art high-power HIFU systems requires fast MRI acquisition schemes, which are generally hampered by low signal-to-noise ratios (SNRs). This directly limits the precision of real time MR-thermometry and thus, in many cases, the feasibility of sophisticated control algorithms. To overcome these limitations, temporal filtering of the temperature has been suggested in the past, which generally has an adverse impact on the accuracy and latency of the filtered data. Here, we propose a novel filter that aims to improve the precision of MR-thermometry while monitoring and adapting its impact on the accuracy. For this, an adaptive extended Kalman filter using a model describing the heat transfer for acoustic heating in biological tissues was employed together with an additional outlier rejection to address the problem of sparse artifacted temperature points. The filter was compared to an efficient matched FIR filter and outperformed the latter in all tested cases. The filter was first evaluated on simulated data and provided in the worst case (with an approximate configuration of the model) a substantial improvement of the accuracy by factors of 3 and 15 during heat-up and cool-down periods, respectively. The robustness of the filter was then evaluated during HIFU experiments on a phantom and in vivo in porcine kidney. The presence of strong temperature artifacts did not affect the thermal dose measurement using our filter, whereas a high measurement variation of 70% was observed with the FIR filter.
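
    A much-simplified scalar analogue of the proposed filter: a Kalman filter with a random-walk temperature model and innovation-based outlier rejection. The process/measurement variances and gate are illustrative; the paper's filter is an adaptive extended Kalman filter driven by a heat-transfer model.

    ```python
    import numpy as np

    def filter_temperature(measurements, q=0.05, r=1.0, gate=3.0):
        """Scalar Kalman filter: a measurement is skipped when its
        innovation exceeds `gate` standard deviations, mimicking the
        rejection of sparse artifacted temperature points."""
        x, p = measurements[0], r
        out = []
        for z in measurements:
            p = p + q                    # predict (random-walk model)
            s = p + r                    # innovation variance
            if abs(z - x) <= gate * np.sqrt(s):
                k = p / s                # Kalman gain
                x = x + k * (z - x)
                p = (1 - k) * p
            out.append(x)
        return np.array(out)
    ```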

  2. Summary of the Validation of the Second Version of the ASTER GDEM

    NASA Astrophysics Data System (ADS)

    Meyer, D. J.; Tachikawa, T.; Abrams, M.; Crippen, R.; Krieger, T.; Gesch, D.; Carabajal, C.

    2012-07-01

    On October 17, 2011, NASA and the Ministry of Economy, Trade and Industry (METI) of Japan released the second version of the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) Global Digital Elevation Model (GDEM) to users worldwide at no charge as a contribution to the Global Earth Observing System of Systems (GEOSS). The first version of the ASTER GDEM, released on June 29, 2009, was compiled from over 1.2 million scene-based DEMs covering land surfaces between 83°N and 83°S latitudes. The second version (GDEM2) incorporates 260,000 additional scenes to improve coverage, a smaller correlation kernel to yield higher spatial resolution, and improved water masking. As with GDEM1, US and Japanese partners collaborated to validate GDEM2. Its mean offset was -0.20 meters when compared against 18,000 geodetic control points over the conterminous US (CONUS), with an accuracy of 17 meters at the 95% confidence level. The Japan study noted the GDEM2 differed from the 10-meter national elevation grid by -0.7 meters over bare areas, and by 7.4 meters over forested areas. The CONUS study noted a similar result, with the GDEM2 determined to be about 8 meters above the 1 arc-second US National Elevation Database (NED) over most forested areas, and more than a meter below NED over bare areas. A global ICESat study found the GDEM2 to be on average within 3 meters of altimeter-derived control. The Japan study noted a horizontal displacement of 0.23 pixels in GDEM2. A study from the US National Geospatial Intelligence Agency also determined horizontal displacement and vertical accuracy as compared to the 1 arc-second Shuttle Radar Topography Mission DEM. US and Japanese studies estimated the horizontal resolution of the GDEM2 to be between 71 and 82 meters. Finally, the number of voids and artifacts noted in GDEM1 was substantially reduced in GDEM2.

  3. How common is "common knowledge" about child witnesses among legal professionals? Comparing interviewers, public defenders, and forensic psychologists with laypeople.

    PubMed

    Buck, Julie A; Warren, Amye R; Bruck, Maggie; Kuehnle, Kathryn

    2014-01-01

    The present study evaluates the knowledge of jury-eligible college students (n = 192), investigative interviewers (n = 44), forensic psychologists (n = 39), and public defenders (n = 137) in regard to the research on interviewing children. These groups' knowledge was compared with the scientific research on the impact of interview techniques and practices on the accuracy of child witnesses. Jury-eligible students were the least knowledgeable, but their accuracy varied widely across items. Both interviewers and public defenders performed better than jury-eligible students, but they lacked substantial knowledge about the research on interviewing children on certain topics (e.g., using anatomically detailed dolls); forensic psychologists were the most knowledgeable. These findings suggest that professionals in the legal system need substantial professional development regarding the research on interviewing strategies with child witnesses. They also highlight the need for experts to provide case-relevant information to juries who lack basic information about the validity and reliability of children's reports. Copyright © 2014 John Wiley & Sons, Ltd.

  4. Pediatric Disaster Triage: Multiple Simulation Curriculum Improves Prehospital Care Providers' Assessment Skills.

    PubMed

    Cicero, Mark Xavier; Whitfill, Travis; Overly, Frank; Baird, Janette; Walsh, Barbara; Yarzebski, Jorge; Riera, Antonio; Adelgais, Kathleen; Meckler, Garth D; Baum, Carl; Cone, David Christopher; Auerbach, Marc

    2017-01-01

    Paramedics and emergency medical technicians (EMTs) triage pediatric disaster victims infrequently. The objective of this study was to measure the effect of a multiple-patient, multiple-simulation curriculum on the accuracy of pediatric disaster triage (PDT). Paramedics, paramedic students, and EMTs from three sites were enrolled. Triage accuracy was measured three times (Time 0; Time 1, two weeks later; and Time 2, six months later) during a disaster simulation, in which high- and low-fidelity manikins and actors portrayed 10 victims. Accuracy was determined by participant triage decision concordance with a predetermined expected triage level (RED [Immediate], YELLOW [Delayed], GREEN [Ambulatory], BLACK [Deceased]) for each victim. Between Time 0 and Time 1, participants completed an interactive online module, and after each simulation there was an individual debriefing. Associations between participant level of training, years of experience, and enrollment site were determined, as were instances of the most dangerous mistriage, in which RED or YELLOW victims were triaged BLACK. The study enrolled 331 participants, and the analysis included 261 (78.9%) participants who completed the study: 123 from the Connecticut site, 83 from Rhode Island, and 55 from Massachusetts. Triage accuracy improved significantly from Time 0 to Time 1, after the educational interventions (first simulation with debriefing, and an interactive online module), with a median 10% overall improvement (p < 0.001). Subgroup analyses showed that between Time 0 and Time 1, paramedics and paramedic students improved more than EMTs (p = 0.002). Analysis of triage accuracy showed the greatest improvement in overall accuracy for YELLOW triage patients (Time 0, 50% accurate; Time 1, 100%), followed by RED patients (Time 0, 80%; Time 1, 100%). There was no significant difference in accuracy between Time 1 and Time 2 (p = 0.073). This study shows that the multiple-victim, multiple-simulation curriculum yields a durable 10% improvement in simulated triage accuracy. Future iterations of the curriculum can target greater improvements in EMT triage accuracy.

  5. Co-Inheritance Analysis within the Domains of Life Substantially Improves Network Inference by Phylogenetic Profiling

    PubMed Central

    Shin, Junha; Lee, Insuk

    2015-01-01

    Phylogenetic profiling, a network inference method based on gene inheritance profiles, has been widely used to construct functional gene networks in microbes. However, its utility for network inference in higher eukaryotes has been limited. An improved algorithm with an in-depth understanding of pathway evolution may overcome this limitation. In this study, we investigated the effects of taxonomic structures on co-inheritance analysis using 2,144 reference species for four query species: Escherichia coli, Saccharomyces cerevisiae, Arabidopsis thaliana, and Homo sapiens. We observed three clusters of reference species based on a principal component analysis of the phylogenetic profiles, which correspond to the three domains of life—Archaea, Bacteria, and Eukaryota—suggesting that pathways are inherited primarily within specific domains or lower-ranked taxonomic groups during speciation. Hence, the co-inheritance pattern within a taxonomic group may be eroded by confounding inheritance patterns from irrelevant taxonomic groups. We demonstrated that co-inheritance analysis within domains substantially improved network inference not only in microbe species but also in the higher eukaryotes, including humans. Although we observed two sub-domain clusters of reference species within Eukaryota, co-inheritance analysis within these sub-domain taxonomic groups only marginally improved network inference. Therefore, we conclude that co-inheritance analysis within domains is the optimal approach to network inference with the given reference species. The construction of a series of human gene networks with increasing sample sizes of the reference species for each domain revealed that the size of the high-accuracy networks increased as additional reference species genomes were included, suggesting that within-domain co-inheritance analysis will continue to expand human gene networks as genomes of additional species are sequenced. Taken together, we propose that co-inheritance analysis within the domains of life will greatly potentiate the use of the expected onslaught of sequenced genomes in the study of molecular pathways in higher eukaryotes. PMID:26394049
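
    A hedged sketch of the within-domain co-inheritance idea: profile similarity (here simple Pearson correlation of presence/absence vectors) is computed separately over the reference species of each domain, and the strongest domain-restricted score is kept. The profile matrix and domain labels below are synthetic placeholders:

```python
# Hedged sketch: within-domain phylogenetic profile similarity.
import numpy as np

rng = np.random.default_rng(2)
n_genes, n_species = 50, 300
profiles = rng.integers(0, 2, size=(n_genes, n_species))  # presence/absence
domains = rng.choice(["Archaea", "Bacteria", "Eukaryota"], size=n_species)

def within_domain_score(p1, p2, domains):
    """Best co-inheritance score over domain-restricted profile columns."""
    scores = []
    for d in np.unique(domains):
        a, b = p1[domains == d], p2[domains == d]
        if a.std() > 0 and b.std() > 0:          # skip degenerate profiles
            scores.append(np.corrcoef(a, b)[0, 1])
    return max(scores) if scores else 0.0

print(within_domain_score(profiles[0], profiles[1], domains))
```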

  6. Cadastral Database Positional Accuracy Improvement

    NASA Astrophysics Data System (ADS)

    Hashim, N. M.; Omar, A. H.; Ramli, S. N. M.; Omar, K. M.; Din, N.

    2017-10-01

    Positional Accuracy Improvement (PAI) is the process of refining the geometry of features in a geospatial dataset to improve their actual positions. The actual position relates both to the absolute position in a specific coordinate system and to the relation with neighboring features. With the growth of spatially based technologies, especially Geographical Information Systems (GIS) and Global Navigation Satellite Systems (GNSS), PAI campaigns are inevitable, especially for legacy cadastral databases. Integrating a legacy dataset with a higher-accuracy dataset, such as GNSS observations, is a potential solution for improving the legacy dataset. However, merely merging the two datasets distorts the relative geometry; the improved dataset should be further treated to minimize inherent errors and to fit the new, more accurate dataset. The main focus of this study is to describe a method of angular-based Least Squares Adjustment (LSA) for the PAI process of a legacy dataset. The existing high-accuracy dataset, known as the National Digital Cadastral Database (NDCDB), is then used as a benchmark to validate the results. It was found that the proposed technique is well suited to the positional accuracy improvement of legacy spatial datasets.
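
    As a simplified illustration of least-squares adjustment for PAI (a 2D Helmert similarity transform fitted to control points, not the paper's angular-based LSA), the sketch below estimates the transform that best fits legacy coordinates to high-accuracy control; all coordinates are hypothetical:

```python
# Hedged sketch: least-squares 2D Helmert fit of legacy points to control.
import numpy as np

legacy = np.array([[100.0, 200.0], [150.0, 210.0], [120.0, 260.0], [180.0, 250.0]])
control = np.array([[101.2, 201.1], [151.1, 211.3], [121.3, 261.2], [181.2, 251.3]])

# Model: x' = a*x - b*y + tx ;  y' = b*x + a*y + ty  (unknowns a, b, tx, ty)
A = np.zeros((2 * len(legacy), 4))
A[0::2, 0], A[0::2, 1], A[0::2, 2] = legacy[:, 0], -legacy[:, 1], 1.0
A[1::2, 0], A[1::2, 1], A[1::2, 3] = legacy[:, 1], legacy[:, 0], 1.0
L = control.reshape(-1)                      # stacked observation vector

params, *_ = np.linalg.lstsq(A, L, rcond=None)
residuals = A @ params - L
print("params:", params, "RMS residual:", np.sqrt(np.mean(residuals**2)))
```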

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Daily, Michael D.; Olsen, Brett N.; Schlesinger, Paul H.

    In mammalian cells cholesterol is essential for membrane function, but in excess can be cytotoxic. The cellular response to acute cholesterol loading involves biophysical-based mechanisms that regulate cholesterol levels, through modulation of the “activity” or accessibility of cholesterol to extra-membrane acceptors. Experiments and united atom (UA) simulations show that at high concentrations of cholesterol, lipid bilayers thin significantly and cholesterol availability to external acceptors increases substantially. Such cholesterol activation is critical to its trafficking within cells. Here we aim to reduce the computational cost to enable simulation of large and complex systems involved in cholesterol regulation, such as those including oxysterols and cholesterol-sensing proteins. To accomplish this, we have modified the published MARTINI coarse-grained force field to improve its predictions of cholesterol-induced changes in both macroscopic and microscopic properties of membranes. Most notably, MARTINI fails to capture both the (macroscopic) area condensation and membrane thickening seen at less than 30% cholesterol and the thinning seen above 40% cholesterol. The thinning at high concentration is critical to cholesterol activation. Microscopic properties of interest include cholesterol-cholesterol radial distribution functions (RDFs), tilt angle, and accessible surface area. First, we develop an “angle-corrected” model wherein we modify the coarse-grained bond angle potentials based on atomistic simulations. This modification significantly improves prediction of macroscopic properties, most notably the thickening/thinning behavior, and also slightly improves microscopic property prediction relative to MARTINI. Second, we add to the angle correction a “volume correction” by also adjusting phospholipid bond lengths to achieve a more accurate volume per molecule. The angle + volume correction substantially further improves the quantitative agreement of the macroscopic properties (area per molecule and thickness) with united atom simulations. However, this improvement also reduces the accuracy of microscopic predictions like radial distribution functions and cholesterol tilt below that of either MARTINI or the angle-corrected model. Thus, while both of our force field corrections improve MARTINI, the combined angle and volume correction should be used for problems involving sterol effects on the overall structure of the membrane, while our angle-corrected model should be used in cases where the properties of individual lipid and sterol models are critically important.

  8. An Innovative Approach to Improving the Accuracy of Delirium Assessments Using the Confusion Assessment Method for the Intensive Care Unit.

    PubMed

    DiLibero, Justin; O'Donoghue, Sharon C; DeSanto-Madeya, Susan; Felix, Janice; Ninobla, Annalyn; Woods, Allison

    2016-01-01

    Delirium occurs in up to 80% of intensive care unit (ICU) patients. Despite its prevalence in this population, inaccuracies in delirium assessments persist. In the absence of accurate delirium assessments, delirium in critically ill ICU patients will remain unrecognized and will lead to negative clinical and organizational outcomes. The goal of this quality improvement project was to facilitate sustained improvement in the accuracy of delirium assessments among all ICU patients, including those who were sedate or agitated. A pretest-posttest design was used to evaluate the effectiveness of a program to improve the accuracy of delirium screenings among patients admitted to a medical ICU or coronary care unit. Two hundred thirty-six delirium assessment audits were completed during the baseline period and 535 during the postintervention period. Compliance with performing at least 1 delirium assessment every shift was 85% at baseline and improved to 99% during the postintervention period. Baseline assessment accuracy was 70.31% among all patients and 53.49% among sedate and agitated patients. Postintervention assessment accuracy improved to 95.51% for all patients and 89.23% among sedate and agitated patients. The results from this project suggest the effectiveness of the program in improving assessment accuracy among difficult-to-assess patients. Further research is needed to demonstrate the effectiveness of this model across other critical care units, patient populations, and organizations.

  9. Concept Mapping Improves Metacomprehension Accuracy among 7th Graders

    ERIC Educational Resources Information Center

    Redford, Joshua S.; Thiede, Keith W.; Wiley, Jennifer; Griffin, Thomas D.

    2012-01-01

    Two experiments explored concept map construction as a useful intervention to improve metacomprehension accuracy among 7th grade students. In the first experiment, metacomprehension was marginally better for a concept mapping group than for a rereading group. In the second experiment, metacomprehension accuracy was significantly greater for a…

  10. Improving the Accuracy of Software-Based Energy Analysis for Residential Buildings (Presentation)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Polly, B.

    2011-09-01

    This presentation describes the basic components of software-based energy analysis for residential buildings, explores the concepts of 'error' and 'accuracy' when analysis predictions are compared to measured data, and explains how NREL is working to continuously improve the accuracy of energy analysis methods.

  11. Note: An improved calibration system with phase correction for electronic transformers with digital output.

    PubMed

    Cheng, Han-miao; Li, Hong-bin

    2015-08-01

    The existing electronic transformer calibration systems employing data acquisition cards cannot satisfy some practical applications, because the calibration systems exhibit phase measurement errors when they work in the mode of receiving external synchronization signals. This paper proposes an improved calibration system scheme with phase correction to improve the phase measurement accuracy. We employ an NI PCI-4474 to design a calibration system, and the system has the potential to receive external synchronization signals and reach extremely high accuracy classes. Accuracy verification has been carried out at the China Electric Power Research Institute, and the results demonstrate that the system surpasses accuracy class 0.05. Furthermore, this system has been used to test the harmonics measurement accuracy of all-fiber optical current transformers; for comparison, the same tests were performed with an existing calibration system, and the results are presented. The improved system is suitable for the intended applications.

  12. Understanding the delayed-keyword effect on metacomprehension accuracy.

    PubMed

    Thiede, Keith W; Dunlosky, John; Griffin, Thomas D; Wiley, Jennifer

    2005-11-01

    The typical finding from research on metacomprehension is that accuracy is quite low. However, recent studies have shown robust accuracy improvements when judgments follow certain generation tasks (summarizing or keyword listing) but only when these tasks are performed at a delay rather than immediately after reading (K. W. Thiede & M. C. M. Anderson, 2003; K. W. Thiede, M. C. M. Anderson, & D. Therriault, 2003). The delayed and immediate conditions in these studies confounded the delay between reading and generation tasks with other task lags, including the lag between multiple generation tasks and the lag between generation tasks and judgments. The first 2 experiments disentangle these confounded manipulations and provide clear evidence that the delay between reading and keyword generation is the only lag critical to improving metacomprehension accuracy. The 3rd and 4th experiments show that not all delayed tasks produce improvements and suggest that delayed generative tasks provide necessary diagnostic cues about comprehension for improving metacomprehension accuracy.

  13. Techniques for improving the accuracy of cryogenic temperature measurement in ground test programs

    NASA Technical Reports Server (NTRS)

    Dempsey, Paula J.; Fabik, Richard H.

    1993-01-01

    The performance of a sensor is often evaluated by determining to what degree of accuracy a measurement can be made using this sensor. The absolute accuracy of a sensor is an important parameter considered when choosing the type of sensor to use in research experiments. Tests were performed to improve the accuracy of cryogenic temperature measurements by calibration of the temperature sensors when installed in their experimental operating environment. The calibration information was then used to correct for temperature sensor measurement errors by adjusting the data acquisition system software. This paper describes a method to improve the accuracy of cryogenic temperature measurements using corrections in the data acquisition system software such that the uncertainty of an individual temperature sensor is improved from ±0.90 deg R to ±0.20 deg R over a specified range.
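
    A minimal illustration of a software-side calibration correction of this kind (hypothetical numbers, not the paper's data): fit a low-order polynomial to the reference-versus-sensor errors collected in situ, then apply the predicted correction to each raw reading in the acquisition software:

```python
# Hedged sketch: polynomial calibration correction applied in software.
import numpy as np

# In-situ calibration points: raw sensor readings vs. reference (deg R)
raw = np.array([40.0, 60.0, 80.0, 100.0, 120.0])
ref = np.array([40.6, 60.5, 80.3, 100.4, 120.7])

coeffs = np.polyfit(raw, ref - raw, deg=2)   # error model e(raw)

def corrected(reading):
    """Raw reading plus the fitted error correction."""
    return reading + np.polyval(coeffs, reading)

print(corrected(90.0))  # corrected value for a raw 90.0 deg R reading
```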

  14. The Importance of Content Representation for Common-Item Equating with Nonrandom Groups.

    ERIC Educational Resources Information Center

    Klein, Lawrence W.; Jarjoura, David

    1985-01-01

    The test equating accuracy of content-representative anchors (subsets of items in common) versus nonrepresentative, but substantially longer, anchors was compared for a professional certification examination. Through a chain of equatings, it was found that content representation in anchors was critical. (Author/GDC)

  15. Temperature Sensor

    NASA Technical Reports Server (NTRS)

    1980-01-01

    Weed Instrument Inc. produces a line of thermocouples - temperature sensors - for a variety of industrial and research uses. One of the company's newer products is a thermocouple specially designed for high accuracy at extreme temperatures above 3,000 degrees Fahrenheit. Development of sensor brought substantial increases in Weed Instrument sales and employment.

  16. Improving the accuracy of camber predictions for precast pretensioned concrete beams : [tech transfer summary].

    DOT National Transportation Integrated Search

    2015-07-01

    Implementing the recommendations of this study is expected to significantly improve the accuracy of camber measurements and predictions and to ultimately help reduce construction delays, improve bridge serviceability, and decrease costs.

  17. Can corrective feedback improve recognition memory?

    PubMed

    Kantner, Justin; Lindsay, D Stephen

    2010-06-01

    An understanding of the effects of corrective feedback on recognition memory can inform both recognition theory and memory training programs, but few published studies have investigated the issue. Although the evidence to date suggests that feedback does not improve recognition accuracy, few studies have directly examined its effect on sensitivity, and fewer have created conditions that facilitate a feedback advantage by encouraging controlled processing at test. In Experiment 1, null effects of feedback were observed following both deep and shallow encoding of categorized study lists. In Experiment 2, feedback robustly influenced response bias by allowing participants to discern highly uneven base rates of old and new items, but sensitivity remained unaffected. In Experiment 3, a false-memory procedure, feedback failed to attenuate false recognition of critical lures. In Experiment 4, participants were unable to use feedback to learn a simple category rule separating old items from new items, despite the fact that feedback was of substantial benefit in a nearly identical categorization task. The recognition system, despite a documented ability to utilize controlled strategic or inferential decision-making processes, appears largely impenetrable to a benefit of corrective feedback.

  18. Geographic Gossip: Efficient Averaging for Sensor Networks

    NASA Astrophysics Data System (ADS)

    Dimakis, Alexandros D. G.; Sarwate, Anand D.; Wainwright, Martin J.

    Gossip algorithms for distributed computation are attractive due to their simplicity, distributed nature, and robustness in noisy and uncertain environments. However, using standard gossip algorithms can lead to a significant waste in energy by repeatedly recirculating redundant information. For realistic sensor network model topologies like grids and random geometric graphs, the inefficiency of gossip schemes is related to the slow mixing times of random walks on the communication graph. We propose and analyze an alternative gossiping scheme that exploits geographic information. By utilizing geographic routing combined with a simple resampling method, we demonstrate substantial gains over previously proposed gossip protocols. For regular graphs such as the ring or grid, our algorithm improves standard gossip by factors of $n$ and $\sqrt{n}$ respectively. For the more challenging case of random geometric graphs, our algorithm computes the true average to accuracy $\epsilon$ using $O(\frac{n^{1.5}}{\sqrt{\log n}} \log \epsilon^{-1})$ radio transmissions, which yields a $\sqrt{\frac{n}{\log n}}$ factor improvement over standard gossip algorithms. We illustrate these theoretical results with experimental comparisons between our algorithm and standard methods as applied to various classes of random fields.
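
    For intuition, a toy simulation of the baseline primitive (standard randomized pairwise gossip on a ring, not the paper's geographic variant) is sketched below; it is this slow-mixing averaging step whose communication cost the authors improve:

```python
# Hedged sketch: standard pairwise gossip averaging on a ring.
import numpy as np

rng = np.random.default_rng(3)
n = 50
x = rng.normal(size=n)                 # initial sensor values
target = x.mean()                      # the average gossip should reach

for _ in range(20000):
    i = rng.integers(n)
    j = (i + 1) % n                    # a random node and its ring neighbor
    x[i] = x[j] = 0.5 * (x[i] + x[j])  # pairwise averaging step

print("max deviation from true average:", np.abs(x - target).max())
```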

  19. Can we predict failure in couple therapy early enough to enhance outcome?

    PubMed

    Pepping, Christopher A; Halford, W Kim; Doss, Brian D

    2015-02-01

    Feedback to therapists based on systematic monitoring of individual therapy progress reliably enhances therapy outcome. An implicit assumption of therapy progress feedback is that clients unlikely to benefit from therapy can be detected early enough in the course of therapy for corrective action to be taken. To explore the possibility of using feedback of therapy progress to enhance couple therapy outcome, the current study tested whether weekly therapy progress could detect off-track clients early in couple therapy. In an effectiveness trial of couple therapy, 136 couples were monitored weekly on relationship satisfaction, and an expert-derived algorithm was used to attempt to predict eventual therapy outcome. As expected, the algorithm detected a significant proportion of couples who did not benefit from couple therapy at Session 3, but prediction was substantially improved at Session 4, so that eventual outcome was accurately predicted for 70% of couples, with little improvement in prediction thereafter. More sophisticated algorithms might enhance prediction accuracy, and a trial of the effects of therapy progress feedback on couple therapy outcome is needed. Copyright © 2015 Elsevier Ltd. All rights reserved.

  20. Evaluating and improving the performance of thin film force sensors within body and device interfaces.

    PubMed

    Likitlersuang, Jirapat; Leineweber, Matthew J; Andrysek, Jan

    2017-10-01

    Thin film force sensors are commonly used within biomechanical systems, and at the interface of the human body and medical and non-medical devices. However, limited information is available about their performance in such applications. The aims of this study were to evaluate and determine ways to improve the performance of thin film (FlexiForce) sensors at the body/device interface. Using a custom apparatus designed to load the sensors under simulated body/device conditions, two aspects were explored relating to sensor calibration and application. The findings revealed accuracy errors of 23.3±17.6% for force measurements at the body/device interface with conventional techniques of sensor calibration and application. Applying a thin rigid disc between the sensor and human body and calibrating the sensor using compliant surfaces was found to substantially reduce measurement errors to 2.9±2.0%. The use of alternative calibration and application procedures is recommended to gain acceptable measurement performance from thin film force sensors in body/device applications. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.

  1. Trends in extreme learning machines: a review.

    PubMed

    Huang, Gao; Huang, Guang-Bin; Song, Shiji; You, Keyou

    2015-01-01

    Extreme learning machine (ELM) has gained increasing interest from various research fields recently. In this review, we aim to report the current state of the theoretical research and practical advances on this subject. We first give an overview of ELM from the theoretical perspective, including the interpolation theory, universal approximation capability, and generalization ability. Then we focus on the various improvements made to ELM which further improve its stability, sparsity and accuracy under general or specific conditions. Apart from classification and regression, ELM has recently been extended for clustering, feature selection, representational learning and many other learning tasks. These newly emerging algorithms greatly expand the applications of ELM. From the implementation aspect, hardware implementation and parallel computation techniques have substantially sped up the training of ELM, making it feasible for big data processing and real-time reasoning. Due to its remarkable efficiency, simplicity, and impressive generalization performance, ELM has been applied in a variety of domains, such as biomedical engineering, computer vision, system identification, and control and robotics. In this review, we try to provide a comprehensive view of these advances in ELM together with its future perspectives.
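
    A minimal single-hidden-layer ELM sketch (synthetic regression data, not any specific published variant): the hidden-layer weights are random and fixed, and only the output weights are trained, in closed form via regularized least squares:

```python
# Hedged sketch: minimal ELM regressor with ridge-regularized output solve.
import numpy as np

rng = np.random.default_rng(4)
X = rng.uniform(-1, 1, size=(500, 2))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2        # synthetic regression target

n_hidden, lam = 100, 1e-3
W = rng.normal(size=(2, n_hidden))            # random, untrained input weights
b = rng.normal(size=n_hidden)                 # random, untrained biases
H = np.tanh(X @ W + b)                        # hidden-layer activations

# The only trained parameters: output weights, solved in closed form.
beta = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ y)

pred = H @ beta
print("train RMSE:", np.sqrt(np.mean((pred - y) ** 2)))
```

    The closed-form solve is what makes ELM training so fast relative to iterative gradient-based networks, which is the efficiency the review emphasizes.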

  2. Loss reduction in axial-flow compressors through low-speed model testing

    NASA Technical Reports Server (NTRS)

    Wisler, D. C.

    1984-01-01

    A systematic procedure for reducing losses in axial-flow compressors is presented. In this procedure, a large, low-speed, aerodynamic model of a high-speed core compressor is designed and fabricated based on aerodynamic similarity principles. This model is then tested at low speed, where high-loss regions associated with three-dimensional endwall boundary layers, flow separation, leakage, and secondary flows can be located, detailed measurements made, and loss mechanisms determined with much greater accuracy and much lower cost and risk than is possible in small, high-speed compressors. Design modifications are made by using custom-tailored airfoils and vector diagrams, airfoil endbends, and modified wall geometries in the high-loss regions. The design improvements resulting in reduced loss or increased stall margin are then scaled to high speed. This paper describes the procedure and presents experimental results to show that in some cases endwall loss has been reduced by as much as 10 percent, flow separation has been reduced or eliminated, and stall margin has been substantially improved by using these techniques.

  3. Accurate and efficient integration for molecular dynamics simulations at constant temperature and pressure

    NASA Astrophysics Data System (ADS)

    Lippert, Ross A.; Predescu, Cristian; Ierardi, Douglas J.; Mackenzie, Kenneth M.; Eastwood, Michael P.; Dror, Ron O.; Shaw, David E.

    2013-10-01

    In molecular dynamics simulations, control over temperature and pressure is typically achieved by augmenting the original system with additional dynamical variables to create a thermostat and a barostat, respectively. These variables generally evolve on timescales much longer than those of particle motion, but typical integrator implementations update the additional variables along with the particle positions and momenta at each time step. We present a framework that replaces the traditional integration procedure with separate barostat, thermostat, and Newtonian particle motion updates, allowing thermostat and barostat updates to be applied infrequently. Such infrequent updates provide a particularly substantial performance advantage for simulations parallelized across many computer processors, because thermostat and barostat updates typically require communication among all processors. Infrequent updates can also improve accuracy by alleviating certain sources of error associated with limited-precision arithmetic. In addition, separating the barostat, thermostat, and particle motion update steps reduces certain truncation errors, bringing the time-average pressure closer to its target value. Finally, this framework, which we have implemented on both general-purpose and special-purpose hardware, reduces software complexity and improves software modularity.
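
    As a schematic illustration of the update-splitting idea (a 1D toy, not the authors' production integrator; the velocity-rescaling thermostat and all constants are hypothetical stand-ins), the thermostat step can be pulled out of the inner Newtonian loop and applied only every `stride` steps:

```python
# Hedged sketch: velocity-Verlet dynamics with an infrequent thermostat.
import numpy as np

rng = np.random.default_rng(5)
n, dt, k, m, kT, stride = 256, 0.01, 1.0, 1.0, 1.0, 50
x = rng.normal(size=n)      # 1D harmonic oscillators
v = rng.normal(size=n)

for step in range(5000):
    v += 0.5 * dt * (-k * x) / m        # velocity Verlet: half kick
    x += dt * v                         # drift
    v += 0.5 * dt * (-k * x) / m        # second half kick
    if step % stride == 0:              # infrequent thermostat update
        T_inst = m * np.mean(v ** 2) / kT
        v *= np.sqrt(1.0 / T_inst)      # rescale toward the target temperature

print("mean kinetic temperature:", m * np.mean(v ** 2))
```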

  4. Improved Electrostatic Embedding for Fragment-Based Chemical Shift Calculations in Molecular Crystals.

    PubMed

    Hartman, Joshua D; Balaji, Ashwin; Beran, Gregory J O

    2017-12-12

    Fragment-based methods predict nuclear magnetic resonance (NMR) chemical shielding tensors in molecular crystals with high accuracy and computational efficiency. Such methods typically employ electrostatic embedding to mimic the crystalline environment, and the quality of the results can be sensitive to the embedding treatment. To improve the quality of this embedding environment for fragment-based molecular crystal property calculations, we borrow ideas from the embedded ion method to incorporate self-consistently polarized Madelung field effects. The self-consistent reproduction of the Madelung potential (SCRMP) model developed here constructs an array of point charges that incorporates self-consistent lattice polarization and which reproduces the Madelung potential at all atomic sites involved in the quantum mechanical region of the system. The performance of fragment- and cluster-based 1H, 13C, 14N, and 17O chemical shift predictions using SCRMP and density functionals like PBE and PBE0 are assessed. The improved embedding model results in substantial improvements in the predicted 17O chemical shifts and modest improvements in the 15N ones. Finally, the performance of the model is demonstrated by examining the assignment of the two oxygen chemical shifts in the challenging γ-polymorph of glycine. Overall, the SCRMP-embedded NMR chemical shift predictions are on par with or more accurate than those obtained with the widely used gauge-including projector augmented wave (GIPAW) model.

  5. Finish line distinctness and accuracy in 7 intraoral scanners versus conventional impression: an in vitro descriptive comparison.

    PubMed

    Nedelcu, Robert; Olsson, Pontus; Nyström, Ingela; Thor, Andreas

    2018-02-23

    Several studies have evaluated the accuracy of intraoral scanners (IOS), but data is lacking regarding variations between IOS systems in the depiction of the critical finish line and the finish line accuracy. The aim of this study was to analyze the level of finish line distinctness (FLD) and finish line accuracy (FLA) in 7 intraoral scanners (IOS) and one conventional impression (IMPR), and furthermore to assess parameters of resolution, tessellation, topography, and color. A dental model with a crown preparation including supra and subgingival finish line was reference-scanned with an industrial scanner (ATOS), and scanned with seven IOS: 3M, CS3500 and CS3600, DWIO, Omnicam, Planscan and Trios. An IMPR was taken and poured, and the model was scanned with a laboratory scanner. The ATOS scan was cropped at finish line and best-fit aligned for 3D Compare Analysis (Geomagic). Accuracy was visualized, and descriptive analysis was performed. All IOS, except Planscan, had comparable overall accuracy; however, FLD and FLA varied substantially. Trios presented the highest FLD, and with CS3600, the highest FLA. 3M and DWIO had low overall FLD and low FLA in subgingival areas, whilst Planscan had overall low FLD and FLA, as well as lower general accuracy. IMPR presented high FLD, except in subgingival areas, and high FLA. Trios had the highest resolution by a factor of 1.6 to 3.1 among IOS, followed by IMPR, DWIO, Omnicam, CS3500, 3M, CS3600 and Planscan. Tessellation was found to be non-uniform except in 3M and DWIO. Topographic variation was found for 3M and Trios, with deviations below ±25 μm for Trios. Inclusion of color enhanced the identification of the finish line in Trios, Omnicam and CS3600, but not in Planscan. There were sizeable variations between IOS, with both higher and lower FLD and FLA than IMPR. High FLD was more related to high localized finish line resolution and non-uniform tessellation than to high overall resolution. Topography variations were low. Color improved finish line identification in some IOS. It is imperative that clinicians critically evaluate the digital impression, being aware of varying technical limitations among IOS, in particular when challenging subgingival conditions apply.

  6. Accuracy improvement of the ice flow rate measurements on Antarctic ice sheet by DInSAR method

    NASA Astrophysics Data System (ADS)

    Shiramizu, Kaoru; Doi, Koichiro; Aoyama, Yuichi

    2015-04-01

    DInSAR (Differential Interferometric Synthetic Aperture Radar) is an effective tool for measuring the flow rate of slowly flowing ice streams on the Antarctic ice sheet with high resolution. In flow rate measurements by the DInSAR method, a Digital Elevation Model (DEM) is used at two stages of the estimation process: first to remove topographic fringes from InSAR images, and then to project the obtained Line-Of-Sight (LOS) displacements onto the actual flow direction. ASTER-GDEM, widely used for InSAR processing of polar-region data, contains many errors, especially in inland ice sheet areas, and these errors yield irregular flow rates and directions. The quality of the DEM therefore has a substantial influence on ice flow rate measurements. In this study, we created a new DEM (resolution 10 m; hereinafter referred to as PRISM-DEM) based on ALOS/PRISM images and compared PRISM-DEM with ASTER-GDEM. The study area is around Skallen, 90 km south of Syowa Station, in the southern part of Sôya Coast, East Antarctica. For making DInSAR images, we used 13 pairs of ALOS/PALSAR data (Path 633, Rows 571-572) observed during the period from November 23, 2007 through January 16, 2011. PRISM-DEM covering the PALSAR scene was created from nadir and backward view images of ALOS/PRISM (observation date: 2009/1/18) by applying stereo processing with digital mapping equipment, and the automatically created primary DEM was then corrected manually to produce the final DEM. The number of irregular values of the actual ice flow rate was reduced by applying PRISM-DEM compared with ASTER-GDEM. Additionally, an averaged displacement of approximately 0.5 cm was obtained with PRISM-DEM over the outcrop area, where no crustal displacement is considered to have occurred during the recurrence period of ALOS/PALSAR (46 days), while an averaged displacement of approximately 1.65 cm was observed with ASTER-GDEM. Since displacements over the outcrop area are considered to be apparent ones, this average can serve as a measure of the flow rate estimation accuracy of DInSAR. We therefore conclude that the accuracy of ice flow rate measurements can be improved by using PRISM-DEM. In this presentation, we will show the estimated flow rates of ice streams in the region of interest and discuss further accuracy improvements of this method.

  7. Transition-Tempered Metadynamics: Robust, Convergent Metadynamics via On-the-Fly Transition Barrier Estimation.

    PubMed

    Dama, James F; Rotskoff, Grant; Parrinello, Michele; Voth, Gregory A

    2014-09-09

    Well-tempered metadynamics has proven to be a practical and efficient adaptive enhanced sampling method for the computational study of biomolecular and materials systems. However, choosing its tunable parameter can be challenging and requires balancing a trade-off between fast escape from local metastable states and fast convergence of an overall free energy estimate. In this article, we present a new smoothly convergent variant of metadynamics, transition-tempered metadynamics, that removes that trade-off and is more robust to changes in its own single tunable parameter, resulting in substantial speed and accuracy improvements. The new method is specifically designed to study state-to-state transitions in which the states of greatest interest are known ahead of time, but transition mechanisms are not. The design is guided by a picture of adaptive enhanced sampling as a means to increase dynamical connectivity of a model's state space until percolation between all points of interest is reached, and it uses the degree of dynamical percolation to automatically tune the convergence rate. We apply the new method to Brownian dynamics on 48 random 1D surfaces, blocked alanine dipeptide in vacuo, and aqueous myoglobin, finding that transition-tempered metadynamics substantially and reproducibly improves upon well-tempered metadynamics in terms of first barrier crossing rate, convergence rate, and robustness to the choice of tuning parameter. Moreover, the trade-off between first barrier crossing rate and convergence rate is eliminated: the new method drives escape from an initial metastable state as fast as metadynamics without tempering, regardless of tuning.
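
    For context, a minimal 1D metadynamics loop with the standard well-tempered hill-scaling rule is sketched below; this is the baseline scheme the paper modifies, not the transition-tempered variant itself, and the double-well potential, bias factor, and all constants are arbitrary:

```python
# Hedged sketch: 1D well-tempered metadynamics on a double-well potential.
import numpy as np

rng = np.random.default_rng(6)
kT, dt, gamma_f = 1.0, 1e-3, 10.0        # gamma_f: well-tempered bias factor
w0, sigma = 0.1, 0.2                     # initial hill height and width
centers, heights = [], []

def bias(x):
    """Accumulated Gaussian bias at position x."""
    return sum(h * np.exp(-(x - c) ** 2 / (2 * sigma ** 2))
               for c, h in zip(centers, heights))

def force(x):
    fU = -4 * x ** 3 + 4 * x             # F = -dU/dx for U = x^4 - 2x^2
    fV = sum(h * (x - c) / sigma ** 2 *
             np.exp(-(x - c) ** 2 / (2 * sigma ** 2))
             for c, h in zip(centers, heights))
    return fU + fV                       # potential force plus bias force

x = -1.0
for step in range(20000):
    # overdamped Brownian dynamics step
    x += dt * force(x) + np.sqrt(2 * kT * dt) * rng.normal()
    if step % 200 == 0:
        # well-tempered rule: hills shrink where bias is already high
        heights.append(w0 * np.exp(-bias(x) / (kT * (gamma_f - 1))))
        centers.append(x)

print("hills deposited:", len(centers), "final hill height:", heights[-1])
```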

  8. Polymer film-nanoparticle composites as new multimodality, non-migrating breast biopsy markers.

    PubMed

    Kaplan, Jonah A; Grinstaff, Mark W; Bloch, B Nicolas

    2016-03-01

    To develop a breast biopsy marker that resists fast and slow migration and has permanent visibility under commonly used imaging modalities. A polymer-nanoparticle composite film was prepared by embedding superparamagnetic iron oxide nanoparticles and a superelastic Nitinol wire within a flexible polyethylene matrix. MRI, mammography, and ultrasound were used to visualize the marker in agar, ex vivo chicken breast, bovine liver, brisket, and biopsy training phantoms. Fast migration caused by the "accordion effect" was quantified after simulated stereotactic, vacuum-assisted core biopsy/marker placement, and centrifugation was used to simulate accelerated long-term (i.e., slow) migration in ex vivo bovine tissue phantoms. Clear marker visualization under MRI, mammography, and ultrasound was observed. After deployment, the marker partially unfolds to give a geometrically constrained structure preventing fast and slow migration. The marker can be deployed through an 11G introducer without fast migration occurring, and shows substantially less slow migration than conventional markers. The polymer-nanoparticle composite biopsy marker is clearly visible on all clinical imaging modalities and does not show substantial migration, which ensures multimodal assessment of the correct spatial information of the biopsy site, allowing for more accurate diagnosis and treatment planning and improved breast cancer patient care. Polymer-nanoparticle composite biopsy markers are visualized using ultrasound, MRI, and mammography. Embedded iron oxide nanoparticles provide tuneable contrast for MRI visualization. Permanent ultrasound visibility is achieved with a non-biodegradable polymer having a distinct ultrasound signal. Flexible polymer-based biopsy markers undergo shape change upon deployment to minimize migration. Non-migrating multimodal markers will help improve accuracy of pre/post-treatment planning studies.

  9. Seamless positioning and navigation by using geo-referenced images and multi-sensor data.

    PubMed

    Li, Xun; Wang, Jinling; Li, Tao

    2013-07-12

    Ubiquitous positioning is considered to be a highly demanding application for today's Location-Based Services (LBS). While satellite-based navigation has achieved great advances in the past few decades, positioning and navigation in indoor scenarios and deep urban areas has remained a challenging topic of substantial research interest. Various strategies have been adopted to fill this gap, within which vision-based methods have attracted growing attention due to the widespread use of cameras on mobile devices. However, current vision-based methods using image processing have yet to reveal their full potential for navigation applications and remain insufficient in many respects. Therefore, in this paper we present a hybrid image-based positioning system that is intended to provide a seamless position solution in six degrees of freedom (6DoF) for location-based services in both outdoor and indoor environments. It mainly matches visual sensor input against geo-referenced images for image-based position resolution, and also takes advantage of multiple onboard sensors, including the built-in GPS receiver and digital compass, to assist the visual methods. Experiments demonstrate that such a system can greatly improve the position accuracy for areas where the GPS signal is negatively affected (such as in urban canyons), and it also provides excellent position accuracy for indoor environments.

  10. Diffusion Tensor Image Registration Using Hybrid Connectivity and Tensor Features

    PubMed Central

    Wang, Qian; Yap, Pew-Thian; Wu, Guorong; Shen, Dinggang

    2014-01-01

    Most existing diffusion tensor imaging (DTI) registration methods estimate structural correspondences based on voxelwise matching of tensors. The rich connectivity information that is given by DTI, however, is often neglected. In this article, we propose to integrate complementary information given by connectivity features and tensor features for improved registration accuracy. To utilize connectivity information, we place multiple anchors representing different brain anatomies in the image space, and define the connectivity features for each voxel as the geodesic distances from all anchors to the voxel under consideration. The geodesic distance, which is computed in relation to the tensor field, encapsulates information of brain connectivity. We also extract tensor features for every voxel to reflect the local statistics of tensors in its neighborhood. We then combine both connectivity features and tensor features for registration of tensor images. From the images, landmarks are selected automatically and their correspondences are determined based on their connectivity and tensor feature vectors. The deformation field that deforms one tensor image to the other is iteratively estimated and optimized according to the landmarks and their associated correspondences. Experimental results show that, by using connectivity features and tensor features simultaneously, registration accuracy is increased substantially compared with the cases using either type of features alone. PMID:24293159

  11. A fast code for channel limb radiances with gas absorption and scattering in a spherical atmosphere

    NASA Astrophysics Data System (ADS)

    Eluszkiewicz, Janusz; Uymin, Gennady; Flittner, David; Cady-Pereira, Karen; Mlawer, Eli; Henderson, John; Moncet, Jean-Luc; Nehrkorn, Thomas; Wolff, Michael

    2017-05-01

    We present a radiative transfer code capable of accurately and rapidly computing channel limb radiances in the presence of gaseous absorption and scattering in a spherical atmosphere. The code has been prototyped for the Mars Climate Sounder measuring limb radiances in the thermal part of the spectrum (200-900 cm-1) where absorption by carbon dioxide and water vapor and absorption and scattering by dust and water ice particles are important. The code relies on three main components: 1) The Gauss Seidel Spherical Radiative Transfer Model (GSSRTM) for scattering, 2) The Planetary Line-By-Line Radiative Transfer Model (P-LBLRTM) for gas opacity, and 3) The Optimal Spectral Sampling (OSS) for selecting a limited number of spectral points to simulate channel radiances and thus achieving a substantial increase in speed. The accuracy of the code has been evaluated against brute-force line-by-line calculations performed on the NASA Pleiades supercomputer, with satisfactory results. Additional improvements in both accuracy and speed are attainable through incremental changes to the basic approach presented in this paper, which would further support the use of this code for real-time retrievals and data assimilation. Both newly developed codes, GSSRTM/OSS for MCS and P-LBLRTM, are available for additional testing and user feedback.

  12. Toward seamless wearable sensing: Automatic on-body sensor localization for physical activity monitoring.

    PubMed

    Saeedi, Ramyar; Purath, Janet; Venkatasubramanian, Krishna; Ghasemzadeh, Hassan

    2014-01-01

    Mobile wearable sensors have demonstrated great potential in a broad range of applications in healthcare and wellness. These technologies are known for their potential to revolutionize the way next generation medical services are supplied and consumed by providing more effective interventions, improving health outcomes, and substantially reducing healthcare costs. Despite these potentials, utilization of these sensor devices is currently limited to lab settings and highly controlled clinical trials. A major obstacle to widespread utilization of these systems is that the sensors need to be worn at predefined locations on the body in order to provide accurate outcomes, such as the type of physical activity performed by the user. This has reduced users' willingness to utilize such technologies. In this paper, we propose a novel signal processing approach that leverages feature selection algorithms for accurate and automatic localization of wearable sensors. Our results, based on real data collected using wearable motion sensors, demonstrate that the proposed approach can perform sensor localization with 98.4% accuracy, which is 30.7% more accurate than an approach without a feature selection mechanism. Furthermore, utilizing our node localization algorithm enables the activity recognition algorithm to achieve 98.8% accuracy (an increase from 33.6% for the system without node localization).
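
    A hedged sketch of the general idea (synthetic features, not the paper's pipeline): select the most informative motion features, then classify which body location the sensor is worn on. The feature matrix, location labels, and selector below are hypothetical:

```python
# Hedged sketch: feature selection + classifier for on-body localization.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(7)
X = rng.normal(size=(600, 40))            # 40 statistical features per window
loc = rng.integers(0, 3, size=600)        # 0=wrist, 1=ankle, 2=waist (labels)
X[np.arange(600), loc] += 2.0             # make a few features informative

pipe = make_pipeline(SelectKBest(f_classif, k=10),
                     RandomForestClassifier(random_state=0))
print("CV accuracy:", cross_val_score(pipe, X, loc, cv=5).mean())
```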

  13. Symmetry laws improve electronegativity equalization by orders of magnitude and call for a paradigm shift in conceptual density functional theory.

    PubMed

    von Szentpály, László

    2015-03-05

    The strict Wigner-Witmer symmetry constraints on chemical bonding are shown to determine the accuracy of electronegativity equalization (ENE) to a high degree. Bonding models employing the electronic chemical potential, μ, as the negative of the ground-state electronegativity, χ(GS), frequently collide with the Wigner-Witmer laws in molecule formation. The violations are presented as the root of the substantially disturbing lack of chemical potential equalization (CPE) in diatomic molecules. For the operational chemical potential, μ(op), the relative deviations from CPE fall between -31% ≤ δμ(op) ≤ +70%. Conceptual density functional theory (cDFT) cannot claim to have operationally (not to mention, rigorously) proven and unified the CPE and ENE principles. The solution to this limitation of cDFT and the symmetry violations is found in substituting μ(op) (i) by Mulliken's valence-state electronegativity, χ(M), for atoms and (ii) its new generalization, the valence-pair-affinity, α(VP), for diatomic molecules. Mulliken's χ(M) is equalized into the α(VP) of the bond, and the accuracy of ENE is orders of magnitude better than that of CPE using μ(op). A paradigm shift replacing the dominance of ground states by emphasizing valence states seems to be in order for conceptual DFT.

  14. Seamless Positioning and Navigation by Using Geo-Referenced Images and Multi-Sensor Data

    PubMed Central

    Li, Xun; Wang, Jinling; Li, Tao

    2013-01-01

    Ubiquitous positioning is considered to be a highly demanding application for today's Location-Based Services (LBS). While satellite-based navigation has achieved great advances in the past few decades, positioning and navigation in indoor scenarios and deep urban areas has remained a challenging topic of substantial research interest. Various strategies have been adopted to fill this gap, within which vision-based methods have attracted growing attention due to the widespread use of cameras on mobile devices. However, current vision-based methods using image processing have yet to reveal their full potential for navigation applications and remain insufficient in many respects. Therefore, in this paper we present a hybrid image-based positioning system that is intended to provide a seamless position solution in six degrees of freedom (6DoF) for location-based services in both outdoor and indoor environments. It mainly matches visual sensor input against geo-referenced images for image-based position resolution, and also takes advantage of multiple onboard sensors, including the built-in GPS receiver and digital compass, to assist the visual methods. Experiments demonstrate that such a system can greatly improve the position accuracy for areas where the GPS signal is negatively affected (such as in urban canyons), and it also provides excellent position accuracy for indoor environments. PMID:23857267

  15. Cross-Sectional HIV Incidence Surveillance: A Benchmarking of Approaches for Estimating the 'Mean Duration of Recent Infection'.

    PubMed

    Kassanjee, Reshma; De Angelis, Daniela; Farah, Marian; Hanson, Debra; Labuschagne, Jan Phillipus Lourens; Laeyendecker, Oliver; Le Vu, Stéphane; Tom, Brian; Wang, Rui; Welte, Alex

    2017-03-01

    The application of biomarkers for 'recent' infection in cross-sectional HIV incidence surveillance requires the estimation of critical biomarker characteristics. Various approaches have been employed for using longitudinal data to estimate the Mean Duration of Recent Infection (MDRI) - the average time in the 'recent' state. In this systematic benchmarking of MDRI estimation approaches, a simulation platform was used to measure the accuracy and precision of over twenty approaches, in thirty scenarios capturing various study designs, subject behaviors and test dynamics that may be encountered in practice. Results highlight that assuming a single continuous sojourn in the 'recent' state can produce substantial bias. Simple interpolation provides useful MDRI estimates provided subjects are tested at regular intervals. Regression performed best: while 'random effects' models describe the subject-clustering in the data, regression models without random effects proved easy to implement, stable, and of similar accuracy in the scenarios considered; robustness to parametric assumptions was improved by regressing 'recent'/'non-recent' classifications rather than continuous biomarker readings. All approaches were vulnerable to incorrect assumptions about subjects' (unobserved) infection times. The results show the relationships between MDRI estimation performance and the number of subjects, inter-visit intervals, missed visits, loss to follow-up, and aspects of biomarker signal and noise.
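
    A schematic sketch of one benchmarked class of approach: regress binary 'recent'/'non-recent' classifications on time since infection (here plain logistic regression) and integrate the fitted recency probability up to the time cutoff to obtain the MDRI. The data are simulated for illustration, not drawn from the study:

```python
# Hedged sketch: MDRI from regression of binary recency classifications.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(8)
t = rng.uniform(0, 2.0, size=2000)             # years since infection
p_recent = 1 / (1 + np.exp((t - 0.5) / 0.1))   # true (hidden) recency curve
recent = (rng.random(2000) < p_recent).astype(int)

model = LogisticRegression().fit(t.reshape(-1, 1), recent)

# MDRI = integral of P(recent | t) over [0, T], with T the 2-year cutoff.
grid = np.linspace(0.0, 2.0, 2001)
p_hat = model.predict_proba(grid.reshape(-1, 1))[:, 1]
mdri = np.sum(0.5 * (p_hat[1:] + p_hat[:-1]) * np.diff(grid))  # trapezoid rule
print("estimated MDRI: %.3f years (true ~0.5)" % mdri)
```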

  16. a Method for the Positioning and Orientation of Rail-Bound Vehicles in Gnss-Free Environments

    NASA Astrophysics Data System (ADS)

    Hung, R.; King, B. A.; Chen, W.

    2016-06-01

    Mobile Mapping Systems (MMS) are increasingly applied for spatial data collection in different fields because of their efficiency and the level of detail they can provide. The Position and Orientation System (POS), which is conventionally employed for locating and orienting an MMS, allows direct georeferencing of spatial data in real time. Since the performance of a POS depends on both the Inertial Navigation System (INS) and the Global Navigation Satellite System (GNSS), poor GNSS conditions, such as in long tunnels and underground, introduce the necessity for post-processing. In above-ground railways, mobile mapping technology is employed with high-performance sensors for finite usage, which has considerable potential for enhancing railway safety and management in real time. In contrast, underground railways present a challenge for a conventional POS, so alternative configurations are necessary to maintain data accuracy and alleviate the need for post-processing. This paper introduces a method of rail-bound navigation to replace the role of GNSS for railway applications. The proposed method integrates INS and track alignment data for environment-independent navigation and reduces the demand for post-processing. The principle of rail-bound navigation is presented and its performance is verified by an experiment using a consumer-grade Inertial Measurement Unit (IMU) and a small-scale railway model. The method produced a substantial improvement in position and orientation for a poorly initialised system, achieving centimetre-level positional accuracy. The potential improvements indicated by, and the limitations of, rail-bound navigation are also considered for further development in existing railway systems.

  17. Accurate and reliable prediction of relative ligand binding potency in prospective drug discovery by way of a modern free-energy calculation protocol and force field.

    PubMed

    Wang, Lingle; Wu, Yujie; Deng, Yuqing; Kim, Byungchan; Pierce, Levi; Krilov, Goran; Lupyan, Dmitry; Robinson, Shaughnessy; Dahlgren, Markus K; Greenwood, Jeremy; Romero, Donna L; Masse, Craig; Knight, Jennifer L; Steinbrecher, Thomas; Beuming, Thijs; Damm, Wolfgang; Harder, Ed; Sherman, Woody; Brewer, Mark; Wester, Ron; Murcko, Mark; Frye, Leah; Farid, Ramy; Lin, Teng; Mobley, David L; Jorgensen, William L; Berne, Bruce J; Friesner, Richard A; Abel, Robert

    2015-02-25

    Designing tight-binding ligands is a primary objective of small-molecule drug discovery. Over the past few decades, free-energy calculations have benefited from improved force fields and sampling algorithms, as well as the advent of low-cost parallel computing. However, it has proven to be challenging to reliably achieve the level of accuracy that would be needed to guide lead optimization (∼5× in binding affinity) for a wide range of ligands and protein targets. Not surprisingly, widespread commercial application of free-energy simulations has been limited due to the lack of large-scale validation coupled with the technical challenges traditionally associated with running these types of calculations. Here, we report an approach that achieves an unprecedented level of accuracy across a broad range of target classes and ligands, with retrospective results encompassing 200 ligands and a wide variety of chemical perturbations, many of which involve significant changes in ligand chemical structures. In addition, we have applied the method in prospective drug discovery projects and found a significant improvement in the quality of the compounds synthesized that have been predicted to be potent. Compounds predicted to be potent by this approach have a substantial reduction in false positives relative to compounds synthesized on the basis of other computational or medicinal chemistry approaches. Furthermore, the results are consistent with those obtained from our retrospective studies, demonstrating the robustness and broad range of applicability of this approach, which can be used to drive decisions in lead optimization.

  18. Creation of parallel algorithms for the solution of problems of gas dynamics on multi-core computers and GPU

    NASA Astrophysics Data System (ADS)

    Rybakin, B.; Bogatencov, P.; Secrieru, G.; Iliuha, N.

    2013-10-01

    The paper deals with a parallel algorithm for calculations on multiprocessor computers and GPU accelerators. Results for the interaction of shock waves with a low-density bubble and for the problem of gas flow under gravitational forces are presented. The algorithm combines the ability to capture shock waves at high resolution, the second-order accuracy of TVD schemes, and the low numerical diffusion of the advection scheme. Many complex problems of continuum mechanics are numerically solved on structured or unstructured grids. To improve the accuracy of the calculations, it is necessary to choose a sufficiently fine grid (with a small cell size), which has the drawback of substantially increasing computation time. Therefore, for the calculation of complex problems it is reasonable to use the method of Adaptive Mesh Refinement (AMR): the grid is refined only in the areas of interest, where, for example, shock waves are generated or complex geometry or other such features exist. Thus, the computing time is greatly reduced. In addition, execution of the application on the resulting sequence of nested, successively finer grids can be parallelized. The proposed algorithm is based on the AMR method. Utilization of AMR can significantly improve the resolution of the difference grid in areas of high interest and, at the same time, accelerate the calculation of multi-dimensional problems. Parallel algorithms for the analyzed difference models were implemented for calculations on graphics processors using CUDA technology [1].
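
    To make the refinement criterion concrete, here is a deliberately simple Python sketch that flags cells for refinement where the relative density jump exceeds a threshold; the threshold, the 1D grid, and the criterion itself are illustrative assumptions, since production AMR solvers work on nested block-structured grids.

        import numpy as np

        def flag_for_refinement(rho, threshold=0.1):
            """Mark cells whose normalized density gradient exceeds the threshold."""
            grad = np.abs(np.gradient(rho)) / (np.abs(rho) + 1e-12)
            return grad > threshold

        rho = np.ones(100)
        rho[50:] = 5.0                                   # a shock-like density jump
        print(np.where(flag_for_refinement(rho))[0])     # cells near the discontinuity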

  19. Development of an in vitro diaphragm motion reproduction system.

    PubMed

    Liao, Ai-Ho; Chuang, Ho-Chiao; Shih, Ming-Chih; Hsu, Hsiao-Yu; Tien, Der-Chi; Kuo, Chia-Chun; Jeng, Shiu-Chen; Chiou, Jeng-Fong

    2017-07-01

    This study developed an in vitro diaphragm motion reproduction system (IVDMRS) based on noninvasive, real-time ultrasound imaging to track the internal displacement of the human diaphragm and diaphragm phantoms with a respiration simulation system (RSS). An ultrasound image tracking algorithm (UITA) was used to retrieve the displacement data of the tracking target and reproduce the diaphragm motion in real time, using a red laser to irradiate the diaphragm phantom in vitro. This study also recorded the respiration patterns of 10 volunteers. Both simulated signals and the respiration patterns recorded from the 10 volunteers were input to the RSS for experiments on reproducing diaphragm motion in vitro using the IVDMRS. The reproduction accuracy of the IVDMRS was calculated and analyzed. The results indicate that the respiration frequency substantially affects the correlation between ultrasound and kV images, as well as the reproduction accuracy of the IVDMRS, due to the system delay time (0.35 s) of ultrasound imaging and signal transmission. The utilization of a phase lead compensator (PLC) reduced the error caused by this delay, thereby improving the reproduction accuracy of the IVDMRS by 14.09-46.98%. Applying the IVDMRS in clinical treatments will allow medical staff to monitor target displacements in real time by observing the movement of the laser beam. If the target moves outside the planning target volume (PTV), the treatment can be stopped immediately to ensure that healthy tissues do not receive high doses of radiation. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
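
    The delay-compensation step can be pictured with a first-order phase-lead filter, C(s) = (1 + aTs)/(1 + Ts) with a > 1, discretized by the bilinear transform. The Python sketch below uses illustrative time constants chosen to advance phase near typical respiration frequencies; they are assumptions, not the paper's tuned values.

        import numpy as np

        def phase_lead(x, dt, T=0.5, a=3.0):
            """Filter signal x (sampled every dt seconds) through (1 + a*T*s)/(1 + T*s)."""
            k = 2.0 * T / dt                    # bilinear transform: s -> (2/dt)(z-1)/(z+1)
            b0, b1 = 1 + a * k, 1 - a * k       # numerator coefficients
            a0, a1 = 1 + k, 1 - k               # denominator coefficients
            y = np.zeros_like(x)
            for n in range(1, len(x)):
                y[n] = (b0 * x[n] + b1 * x[n - 1] - a1 * y[n - 1]) / a0
            return y

        t = np.arange(0, 10, 0.05)
        breathing = np.sin(2 * np.pi * 0.3 * t)          # simulated diaphragm motion
        compensated = phase_lead(breathing, dt=0.05)     # phase-advanced signal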

  20. Parameterization, sensitivity analysis, and inversion: an investigation using groundwater modeling of the surface-mined Tivoli-Guidonia basin (Metropolitan City of Rome, Italy)

    NASA Astrophysics Data System (ADS)

    La Vigna, Francesco; Hill, Mary C.; Rossetto, Rudy; Mazza, Roberto

    2016-09-01

    With respect to model parameterization and sensitivity analysis, this work uses a practical example to suggest that methods that start with simple models and use computationally frugal model analysis methods remain valuable in any toolbox of model development methods. In this work, groundwater model calibration starts with a simple parameterization that evolves into a moderately complex model. The model is developed for a water management study of the Tivoli-Guidonia basin (Rome, Italy) where surface mining has been conducted in conjunction with substantial dewatering. The approach to model development used in this work employs repeated analysis using sensitivity and inverse methods, including use of a new observation-stacked parameter importance graph. The methods are highly parallelizable and require few model runs, which make the repeated analyses and attendant insights possible. The success of a model development design can be measured by insights attained and demonstrated model accuracy relevant to predictions. Example insights were obtained: (1) A long-held belief that, except for a few distinct fractures, the travertine is homogeneous was found to be inadequate, and (2) The dewatering pumping rate is more critical to model accuracy than expected. The latter insight motivated additional data collection and improved pumpage estimates. Validation tests using three other recharge and pumpage conditions suggest good accuracy for the predictions considered. The model was used to evaluate management scenarios and showed that similar dewatering results could be achieved using 20 % less pumped water, but would require installing newly positioned wells and cooperation between mine owners.

  1. 75 FR 70604 - Wireless E911 Location Accuracy Requirements

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-18

    ... carriers are unable to recover the substantial cost of constructing a large number of additional cell sites... characteristics, cell site density, overall system technology requirements, etc.) while, in either case, ensuring... the satellites and the handset. The more extensive the tree cover, the greater the difficulty the...

  2. Evaluation of a Moderate Resolution, Satellite-Based Impervious Surface Map Using an Independent, High-Resolution Validation Dataset

    EPA Science Inventory

    Given the relatively high cost of mapping impervious surfaces at regional scales, substantial effort is being expended in the development of moderate-resolution, satellite-based methods for estimating impervious surface area (ISA). To rigorously assess the accuracy of these data ...

  3. C- and L-band space-borne SAR incidence angle normalization for efficient Arctic sea ice monitoring

    NASA Astrophysics Data System (ADS)

    Mahmud, M. S.; Geldsetzer, T.; Howell, S.; Yackel, J.; Nandan, V.

    2017-12-01

    C-band Synthetic Aperture Radar (SAR) has been widely and effectively used for operational sea ice monitoring, owing to its greater separability between snow-covered first-year (FYI) and multi-year (MYI) ice types during winter. However, during the melt season, the C-band SAR backscatter contrast between FYI and MYI is reduced. To overcome this limitation of C-band, several studies have recommended utilizing L-band SAR, as it has the potential to significantly improve sea ice classification. Given its longer wavelength, L-band can efficiently separate FYI and MYI types, especially during the melt season. The combination of C- and L-band SAR is therefore an optimal solution for efficient seasonal sea ice monitoring. As SAR acquires images over a range of incidence angles from near-range to far-range, the backscatter varies substantially. To compensate for this variation, it is crucial to quantify the incidence angle dependency of C- and L-band SAR backscatter for different FYI and MYI types, which is the objective of this study. Time-series SAR imagery from C-band RADARSAT-2 and L-band ALOS PALSAR was acquired during the winter months of 2010 across 60 sites over the Canadian Arctic. Utilizing 15 images for each site during February-March for both C- and L-band SAR, the incidence angle dependency was calculated. Our study reveals that both L- and C-band backscatter from FYI and MYI decreases with increasing incidence angle. The mean incidence angle dependencies for FYI and MYI were estimated to be -0.21 dB/1° and -0.30 dB/1°, respectively, from L-band SAR, and -0.22 dB/1° and -0.16 dB/1°, respectively, from C-band SAR. While the incidence angle dependency for FYI was found to be similar at both frequencies, for MYI it was roughly twice as large at L-band as at C-band. After applying the incidence angle normalization to both C- and L-band SAR images, preliminary results indicate improved separability between FYI and MYI types, with a substantially lower number of mixed pixels, thereby offering more reliable sea ice classification accuracies. Findings from this study can be used to improve seasonal sea ice classification accuracy for operational Arctic sea ice monitoring, especially in regions like the Canadian Arctic, where MYI detection is crucial for safe ship navigation.
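
    The normalization itself is a linear shift to a common reference angle using the per-ice-type slope in dB per degree. The sketch below applies the study's reported mean slope for L-band MYI; the reference angle of 30° is an assumption for illustration.

        def normalize_backscatter(sigma0_db, theta_deg, slope_db_per_deg, theta_ref=30.0):
            """Shift SAR backscatter (dB) from incidence angle theta_deg to theta_ref."""
            return sigma0_db - slope_db_per_deg * (theta_deg - theta_ref)

        # L-band multi-year ice pixel observed at 45 degrees incidence:
        print(normalize_backscatter(-14.0, 45.0, slope_db_per_deg=-0.30))   # -9.5 dB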

  4. 23 CFR 1200.22 - State traffic safety information system improvements grants.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... measures to be used to demonstrate quantitative progress in the accuracy, completeness, timeliness... to implement, provides an explanation. (d) Requirement for quantitative improvement. A State shall demonstrate quantitative improvement in the data attributes of accuracy, completeness, timeliness, uniformity...

  5. 23 CFR 1200.22 - State traffic safety information system improvements grants.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... measures to be used to demonstrate quantitative progress in the accuracy, completeness, timeliness... to implement, provides an explanation. (d) Requirement for quantitative improvement. A State shall demonstrate quantitative improvement in the data attributes of accuracy, completeness, timeliness, uniformity...

  6. Marsh canopy leaf area and orientation calculated for improved marsh structure mapping

    USGS Publications Warehouse

    Ramsey, Elijah W.; Rangoonwala, Amina; Jones, Cathleen E.; Bannister, Terri

    2015-01-01

    An approach is presented for producing spatiotemporal estimates of the leaf area index (LAI) of a highly heterogeneous coastal marsh without reliance on user estimates of marsh leaf-stem orientation. The canopy LAI profile derivation used three years of field-measured photosynthetically active radiation (PAR) vertical profiles at seven S. alterniflora marsh sites and an iterative transform of those PAR attenuation profiles to best-fit light extinction coefficients (KM). The sun-zenith dependency of KM was removed to obtain the leaf angle distribution (LAD) representing the average marsh leaf orientation, and the LAD was then used to calculate the canopy LAI profile. The LAI and LAD reproduced measured PAR profiles with 99% accuracy and corresponded to field-documented structures. LAI and LAD better reflect marsh structure, and the results substantiate the need to account for marsh orientation. These structure indexes are directly amenable to spatiotemporal mapping by remote sensing and offer a more meaningful representation of wetland systems, promoting understanding of their biophysical function.
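
    The extinction-coefficient step rests on Beer-Lambert attenuation, PAR(L) = PAR0 * exp(-K*L), with L the cumulative leaf area from the canopy top. A minimal Python fit is sketched below with synthetic profile values; the study's iterative transform and zenith-angle correction are more elaborate.

        import numpy as np

        lai_cum = np.array([0.0, 0.5, 1.0, 1.5, 2.0])           # cumulative LAI from canopy top
        par = np.array([1500.0, 1100.0, 820.0, 610.0, 450.0])   # synthetic PAR readings

        # Linear least squares on ln(PAR) = ln(PAR0) - K * L
        slope, intercept = np.polyfit(lai_cum, np.log(par), 1)
        print(f"K = {-slope:.3f}, PAR0 = {np.exp(intercept):.0f}")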

  7. The Estimation of Tree Posterior Probabilities Using Conditional Clade Probability Distributions

    PubMed Central

    Larget, Bret

    2013-01-01

    In this article I introduce the idea of conditional independence of separated subtrees as a principle by which to estimate the posterior probability of trees using conditional clade probability distributions rather than simple sample relative frequencies. I describe an algorithm for these calculations and software that implements these ideas. I show that these alternative calculations are very similar to simple sample relative frequencies for high-probability trees but are substantially more accurate for relatively low-probability trees. The method allows the posterior probability of unsampled trees to be calculated when these trees contain only clades that are present in other sampled trees. Furthermore, the method can be used to estimate the total probability of the set of sampled trees, which provides a measure of the thoroughness of a posterior sample. [Bayesian phylogenetics; conditional clade distributions; improved accuracy; posterior probabilities of trees.] PMID:23479066
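
    A toy version of the estimator, under the conditional-independence assumption, multiplies conditional clade probabilities estimated from sample frequencies; the clade counts below are hypothetical.

        from math import prod

        # Hypothetical counts from a posterior sample of 1000 trees:
        parent_counts = {frozenset("ABCD"): 1000, frozenset("ABC"): 700}
        split_counts = {
            (frozenset("ABCD"), frozenset("ABC")): 700,   # split ABC|D seen 700 times
            (frozenset("ABC"), frozenset("AB")): 420,     # split AB|C seen 420 times
        }

        def conditional_clade_prob(parent, child):
            return split_counts[(parent, child)] / parent_counts[parent]

        # Estimated posterior probability of the tree (((A,B),C),D):
        p = prod([
            conditional_clade_prob(frozenset("ABCD"), frozenset("ABC")),
            conditional_clade_prob(frozenset("ABC"), frozenset("AB")),
        ])
        print(p)   # 0.7 * 0.6 = 0.42, even if this exact tree was rarely sampled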

  8. Thermal expansion in dispersion-bound molecular crystals

    DOE PAGES

    Ko, Hsin-Yu; DiStasio, Robert A.; Santra, Biswajit; ...

    2018-05-18

    In this paper, we explore how anharmonicity, nuclear quantum effects (NQE), many-body dispersion interactions, and Pauli repulsion influence thermal properties of dispersion-bound molecular crystals. Accounting for anharmonicity with ab initio molecular dynamics yields cell parameters accurate to within 2% of experiment for a set of pyridinelike molecular crystals at finite temperatures and pressures. From the experimental thermal expansion curve, we find that pyridine-I has a Debye temperature just above its melting point, indicating sizable NQE across the entire crystalline range of stability. We find that NQE lead to a substantial volume increase in pyridine-I (≈40% more than classical thermal expansion at 153 K) and attribute this to intermolecular Pauli repulsion promoted by intramolecular quantum fluctuations. Finally, when predicting delicate properties such as the thermal expansivity, we show that many-body dispersion interactions and more sophisticated density functional approximations improve the accuracy of the theoretical model.

  9. THESEUS: maximum likelihood superpositioning and analysis of macromolecular structures.

    PubMed

    Theobald, Douglas L; Wuttke, Deborah S

    2006-09-01

    THESEUS is a command line program for performing maximum likelihood (ML) superpositions and analysis of macromolecular structures. While conventional superpositioning methods use ordinary least-squares (LS) as the optimization criterion, ML superpositions provide substantially improved accuracy by down-weighting variable structural regions and by correcting for correlations among atoms. ML superpositioning is robust and insensitive to the specific atoms included in the analysis, and thus it does not require subjective pruning of selected variable atomic coordinates. Output includes both likelihood-based and frequentist statistics for accurate evaluation of the adequacy of a superposition and for reliable analysis of structural similarities and differences. THESEUS performs principal components analysis for analyzing the complex correlations found among atoms within a structural ensemble. ANSI C source code and selected binaries for various computing platforms are available under the GNU open source license from http://monkshood.colorado.edu/theseus/ or http://www.theseus3d.org.

  10. Fast globally optimal segmentation of 3D prostate MRI with axial symmetry prior.

    PubMed

    Qiu, Wu; Yuan, Jing; Ukwatta, Eranga; Sun, Yue; Rajchl, Martin; Fenster, Aaron

    2013-01-01

    We propose a novel global optimization approach to segmenting a given 3D prostate T2w magnetic resonance (MR) image, which enforces the inherent axial symmetry of the prostate shape and simultaneously performs a sequence of 2D axial slice-wise segmentations with a global 3D coherence prior. We show that the proposed challenging combinatorial optimization problem can be solved globally and exactly by means of convex relaxation. In this regard, we introduce a novel coupled continuous max-flow model, which is dual to the studied convex relaxed optimization formulation and leads to an efficient multiplier-augmented algorithm based on modern convex optimization theory. Moreover, the new continuous max-flow based algorithm was implemented on GPUs to achieve a substantial improvement in computation. Experimental results using public and in-house datasets demonstrate the great advantages of the proposed method in terms of both accuracy and efficiency.

  11. Thermal expansion in dispersion-bound molecular crystals

    NASA Astrophysics Data System (ADS)

    Ko, Hsin-Yu; DiStasio, Robert A.; Santra, Biswajit; Car, Roberto

    2018-05-01

    We explore how anharmonicity, nuclear quantum effects (NQE), many-body dispersion interactions, and Pauli repulsion influence thermal properties of dispersion-bound molecular crystals. Accounting for anharmonicity with ab initio molecular dynamics yields cell parameters accurate to within 2 % of experiment for a set of pyridinelike molecular crystals at finite temperatures and pressures. From the experimental thermal expansion curve, we find that pyridine-I has a Debye temperature just above its melting point, indicating sizable NQE across the entire crystalline range of stability. We find that NQE lead to a substantial volume increase in pyridine-I (≈40 % more than classical thermal expansion at 153 K) and attribute this to intermolecular Pauli repulsion promoted by intramolecular quantum fluctuations. When predicting delicate properties such as the thermal expansivity, we show that many-body dispersion interactions and more sophisticated density functional approximations improve the accuracy of the theoretical model.

  12. Robust decentralized control laws for the ACES structure

    NASA Technical Reports Server (NTRS)

    Collins, Emmanuel G., Jr.; Phillips, Douglas J.; Hyland, David C.

    1991-01-01

    Control system design for the Active Control Technique Evaluation for Spacecraft (ACES) structure at NASA Marshall Space Flight Center is discussed. The primary objective of this experiment is to design controllers that provide substantial reduction of the line-of-sight pointing errors. Satisfaction of this objective requires the controllers to attenuate beam vibration significantly. The primary method chosen for control design is the optimal projection approach for uncertain systems (OPUS). The OPUS design process allows the simultaneous tradeoff of five fundamental issues in control design: actuator sizing, sensor accuracy, controller order, robustness, and system performance. A brief description of the basic ACES configuration is given. The development of the models used for control design and control design for eight system loops that were selected by analysis of test data collected from the structure are discussed. Experimental results showing that very significant performance improvement is achieved when all eight feedback loops are closed are presented.

  13. Accurate density functional prediction of molecular electron affinity with the scaling corrected Kohn–Sham frontier orbital energies

    NASA Astrophysics Data System (ADS)

    Zhang, DaDi; Yang, Xiaolong; Zheng, Xiao; Yang, Weitao

    2018-04-01

    Electron affinity (EA) is the energy released when an additional electron is attached to an atom or a molecule. EA is a fundamental thermochemical property, and it is closely pertinent to other important properties such as electronegativity and hardness. However, accurate prediction of EA is difficult with density functional theory methods. The somewhat large error of the calculated EAs originates mainly from the intrinsic delocalisation error associated with the approximate exchange-correlation functional. In this work, we employ a previously developed non-empirical global scaling correction approach, which explicitly imposes the Perdew-Parr-Levy-Balduz condition to the approximate functional, and achieve a substantially improved accuracy for the calculated EAs. In our approach, the EA is given by the scaling corrected Kohn-Sham lowest unoccupied molecular orbital energy of the neutral molecule, without the need to carry out the self-consistent-field calculation for the anion.

  14. Bilinguals’ Existing Languages Benefit Vocabulary Learning in a Third Language

    PubMed Central

    Bartolotti, James; Marian, Viorica

    2017-01-01

    Learning a new language involves substantial vocabulary acquisition. Learners can accelerate this process by relying on words with native-language overlap, such as cognates. For bilingual third language learners, it is necessary to determine how their two existing languages interact during novel language learning. A scaffolding account predicts transfer from either language for individual words, whereas an accumulation account predicts cumulative transfer from both languages. To compare these accounts, twenty English-German bilingual adults were taught an artificial language containing 48 novel written words that varied orthogonally in English and German wordlikeness (neighborhood size and orthotactic probability). Wordlikeness in each language improved word production accuracy, and similarity to one language provided the same benefit as dual-language overlap. In addition, participants’ memory for novel words was affected by the statistical distributions of letters in the novel language. Results indicate that bilinguals utilize both languages during third language acquisition, supporting a scaffolding learning model. PMID:28781384

  15. Accurate Semilocal Density Functional for Condensed-Matter Physics and Quantum Chemistry.

    PubMed

    Tao, Jianmin; Mo, Yuxiang

    2016-08-12

    Most density functionals have been developed by imposing the known exact constraints on the exchange-correlation energy, or by a fit to a set of properties of selected systems, or by both. However, accurate modeling of the conventional exchange hole presents a great challenge, due to the delocalization of the hole. Making use of the property that the hole can be made localized under a general coordinate transformation, here we derive an exchange hole from the density matrix expansion, while the correlation part is obtained by imposing the low-density limit constraint. From the hole, a semilocal exchange-correlation functional is calculated. Our comprehensive test shows that this functional can achieve remarkable accuracy for diverse properties of molecules, solids, and solid surfaces, substantially improving upon the nonempirical functionals proposed in recent years. Accurate semilocal functionals based on their associated holes are physically appealing and practically useful for developing nonlocal functionals.

  16. Thermal expansion in dispersion-bound molecular crystals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ko, Hsin-Yu; DiStasio, Robert A.; Santra, Biswajit

    In this paper, we explore how anharmonicity, nuclear quantum effects (NQE), many-body dispersion interactions, and Pauli repulsion influence thermal properties of dispersion-bound molecular crystals. Accounting for anharmonicity with ab initio molecular dynamics yields cell parameters accurate to within 2% of experiment for a set of pyridinelike molecular crystals at finite temperatures and pressures. From the experimental thermal expansion curve, we find that pyridine-I has a Debye temperature just above its melting point, indicating sizable NQE across the entire crystalline range of stability. We find that NQE lead to a substantial volume increase in pyridine-I (≈40% more than classical thermal expansion at 153 K) and attribute this to intermolecular Pauli repulsion promoted by intramolecular quantum fluctuations. Finally, when predicting delicate properties such as the thermal expansivity, we show that many-body dispersion interactions and more sophisticated density functional approximations improve the accuracy of the theoretical model.

  17. Detecting microbial dysbiosis associated with Pediatric Crohn’s disease despite the high variability of the gut microbiota

    PubMed Central

    Wang, Feng; Kaplan, Jess L.; Gold, Benjamin D.; Bhasin, Manoj K.; Ward, Naomi L.; Kellermayer, Richard; Kirschner, Barbara S.; Heyman, Melvin B.; Dowd, Scot E.; Cox, Stephen B.; Dogan, Haluk; Steven, Blaire; Ferry, George D.; Cohen, Stanley A.; Baldassano, Robert N.; Moran, Christopher J.; Garnett, Elizabeth A.; Drake, Lauren; Otu, Hasan H.; Mirny, Leonid A.; Libermann, Towia A.; Winter, Harland S.; Korolev, Kirill

    2016-01-01

    The relationship between the host and its microbiota is challenging to understand because both microbial communities and their environment are highly variable. We developed a set of techniques to address this challenge based on population dynamics and information theory. These methods identified additional bacterial taxa associated with pediatric Crohn's disease and could detect significant changes in microbial communities with fewer samples than previous statistical approaches. We also substantially improved the accuracy of the diagnosis based on the microbiota from stool samples and found that the ecological niche of a microbe predicts its role in Crohn's disease. Bacteria typically residing in the lumen of healthy patients decrease in disease, while bacteria typically residing on the mucosa of healthy patients increase in disease. Our results also show that the associations with Crohn's disease are evolutionarily conserved and provide a mutual-information-based method to visualize dysbiosis. PMID:26804920

  18. CFA-aware features for steganalysis of color images

    NASA Astrophysics Data System (ADS)

    Goljan, Miroslav; Fridrich, Jessica

    2015-03-01

    Color interpolation is a form of upsampling, which introduces constraints on the relationship between neighboring pixels in a color image. These constraints can be utilized to substantially boost the accuracy of steganography detectors. In this paper, we introduce a rich model formed by 3D co-occurrences of color noise residuals split according to the structure of the Bayer color filter array to further improve detection. Some color interpolation algorithms, AHD and PPG, impose pixel constraints so tight that extremely accurate detection becomes possible with merely eight features eliminating the need for model richification. We carry out experiments on non-adaptive LSB matching and the content-adaptive algorithm WOW on five different color interpolation algorithms. In contrast to grayscale images, in color images that exhibit traces of color interpolation the security of WOW is significantly lower and, depending on the interpolation algorithm, may even be lower than non-adaptive LSB matching.

  19. Full Wave Function Optimization with Quantum Monte Carlo and Its Effect on the Dissociation Energy of FeS.

    PubMed

    Haghighi Mood, Kaveh; Lüchow, Arne

    2017-08-17

    Diffusion quantum Monte Carlo calculations with partial and full optimization of the guide function are carried out for the dissociation of the FeS molecule. For the first time, quantum Monte Carlo orbital optimization for transition metal compounds is performed. It is demonstrated that energy optimization of the orbitals of a complete active space wave function in the presence of a Jastrow correlation function is required to obtain agreement with the experimental dissociation energy. Furthermore, it is shown that orbital optimization leads to a ⁵Δ ground state, in agreement with experiments but in disagreement with other high-level ab initio wave function calculations, which all predict a ⁵Σ⁺ ground state. The role of the Jastrow factor in DMC calculations with pseudopotentials is investigated. The results suggest that a large Jastrow factor may improve the DMC accuracy substantially at small additional cost.

  20. Research and development program for the development of advanced time-temperature dependent constitutive relationships. Volume 1: Theoretical discussion

    NASA Technical Reports Server (NTRS)

    Cassenti, B. N.

    1983-01-01

    The results of a 10-month research and development program for the development of advanced time-temperature constitutive relationships are presented. The program included (1) the effect of rate of change of temperature, (2) the development of a term to include time independent effects, and (3) improvements in computational efficiency. It was shown that rate of change of temperature could have a substantial effect on the predicted material response. A modification to include time-independent effects, applicable to many viscoplastic constitutive theories, was shown to reduce to classical plasticity. The computation time can be reduced by a factor of two if self-adaptive integration is used when compared to an integration using ordinary forward differences. During the course of the investigation, it was demonstrated that the most important single factor affecting the theoretical accuracy was the choice of material parameters.

  1. Accurate quantitative CF-LIBS analysis of both major and minor elements in alloys via iterative correction of plasma temperature and spectral intensity

    NASA Astrophysics Data System (ADS)

    Shuxia, ZHAO; Lei, ZHANG; Jiajia, HOU; Yang, ZHAO; Wangbao, YIN; Weiguang, MA; Lei, DONG; Liantuan, XIAO; Suotang, JIA

    2018-03-01

    The chemical composition of alloys directly determines their mechanical behavior and application fields. Accurate and rapid analysis of both major and minor elements in alloys plays a key role in metallurgical quality control and material classification. A quantitative calibration-free laser-induced breakdown spectroscopy (CF-LIBS) analysis method, which carries out combined correction of plasma temperature and spectral intensity using a second-order iterative algorithm and two boundary standard samples, is proposed to realize accurate composition measurements. Experimental results show that, compared to conventional CF-LIBS analysis, the relative errors for the major elements Cu and Zn and the minor element Pb in copper-lead alloys have been reduced from 12%, 26% and 32% to 1.8%, 2.7% and 13.4%, respectively. The measurement accuracy for all elements has been improved substantially.
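
    The starting point of any CF-LIBS analysis is the Boltzmann-plot estimate of plasma temperature, ln(I*lambda/(g*A)) = -E_k/(k_B*T) + const, fitted across emission lines of one species; the paper then iterates this jointly with a spectral-intensity correction. The line data in this sketch are hypothetical.

        import numpy as np

        k_B = 8.617e-5                          # Boltzmann constant (eV/K)
        E_k = np.array([3.8, 5.1, 6.2, 7.0])    # upper-level energies (eV), hypothetical
        y = np.array([2.1, 0.9, -0.15, -0.9])   # ln(I*lambda/(g*A)), hypothetical

        slope, _ = np.polyfit(E_k, y, 1)        # slope = -1/(k_B * T)
        T = -1.0 / (k_B * slope)
        print(f"Plasma temperature ~ {T:.0f} K")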

  2. Bilinguals' Existing Languages Benefit Vocabulary Learning in a Third Language.

    PubMed

    Bartolotti, James; Marian, Viorica

    2017-03-01

    Learning a new language involves substantial vocabulary acquisition. Learners can accelerate this process by relying on words with native-language overlap, such as cognates. For bilingual third language learners, it is necessary to determine how their two existing languages interact during novel language learning. A scaffolding account predicts transfer from either language for individual words, whereas an accumulation account predicts cumulative transfer from both languages. To compare these accounts, twenty English-German bilingual adults were taught an artificial language containing 48 novel written words that varied orthogonally in English and German wordlikeness (neighborhood size and orthotactic probability). Wordlikeness in each language improved word production accuracy, and similarity to one language provided the same benefit as dual-language overlap. In addition, participants' memory for novel words was affected by the statistical distributions of letters in the novel language. Results indicate that bilinguals utilize both languages during third language acquisition, supporting a scaffolding learning model.

  3. Tsunami: ocean dynamo generator.

    PubMed

    Sugioka, Hiroko; Hamano, Yozo; Baba, Kiyoshi; Kasaya, Takafumi; Tada, Noriko; Suetsugu, Daisuke

    2014-01-08

    Secondary magnetic fields are induced by the flow of electrically conducting seawater through the Earth's primary magnetic field ('ocean dynamo effect'), and hence it has long been speculated that tsunami flows should produce measurable magnetic field perturbations, although the signal-to-noise ratio would be small because of the influence of the solar magnetic fields. Here, we report on the detection of deep-seafloor electromagnetic perturbations of 10-micron-order induced by a tsunami that propagated through a seafloor electromagnetometer array network. The observed data yielded tsunami characteristics, including the direction and velocity of propagation as well as the sea-level change, thereby verifying the induction theory for the first time. At present, offshore observation systems for the early forecasting of tsunami are based on sea-level measurement by seafloor pressure gauges. In terms of tsunami forecasting accuracy, the integration of vector electromagnetic measurements into existing scalar observation systems would represent a substantial improvement in the performance of tsunami early-warning systems.

  4. Accelerating MP2C dispersion corrections for dimers and molecular crystals

    NASA Astrophysics Data System (ADS)

    Huang, Yuanhang; Shao, Yihan; Beran, Gregory J. O.

    2013-06-01

    The MP2C dispersion correction of Pitonak and Hesselmann [J. Chem. Theory Comput. 6, 168 (2010)], 10.1021/ct9005882 substantially improves the performance of second-order Møller-Plesset perturbation theory for non-covalent interactions, albeit with non-trivial computational cost. Here, the MP2C correction is computed in a monomer-centered basis instead of a dimer-centered one. When applied to a single dimer MP2 calculation, this change accelerates the MP2C dispersion correction several-fold while introducing only trivial new errors. More significantly, in the context of fragment-based molecular crystal studies, combination of the new monomer basis algorithm and the periodic symmetry of the crystal reduces the cost of computing the dispersion correction by two orders of magnitude. This speed-up reduces the MP2C dispersion correction calculation from a significant computational expense to a negligible one in crystals like aspirin or oxalyl dihydrazide, without compromising accuracy.

  5. Accuracy of forecasts in strategic intelligence

    PubMed Central

    Mandel, David R.; Barnes, Alan

    2014-01-01

    The accuracy of 1,514 strategic intelligence forecasts abstracted from intelligence reports was assessed. The results show that both discrimination and calibration of forecasts was very good. Discrimination was better for senior (versus junior) analysts and for easier (versus harder) forecasts. Miscalibration was mainly due to underconfidence such that analysts assigned more uncertainty than needed given their high level of discrimination. Underconfidence was more pronounced for harder (versus easier) forecasts and for forecasts deemed more (versus less) important for policy decision making. Despite the observed underconfidence, there was a paucity of forecasts in the least informative 0.4–0.6 probability range. Recalibrating the forecasts substantially reduced underconfidence. The findings offer cause for tempered optimism about the accuracy of strategic intelligence forecasts and indicate that intelligence producers aim to promote informativeness while avoiding overstatement. PMID:25024176

  6. Use of digital land-cover data from the Landsat satellite in estimating streamflow characteristics in the Cumberland Plateau of Tennessee

    USGS Publications Warehouse

    Hollyday, E.F.; Hansen, G.R.

    1983-01-01

    Streamflow may be estimated with regression equations that relate streamflow characteristics to characteristics of the drainage basin. A statistical experiment was performed to compare the accuracy of equations using basin characteristics derived from maps and climatological records (control group equations) with the accuracy of equations using basin characteristics derived from Landsat data as well as maps and climatological records (experimental group equations). Results show that when the equations in both groups are arranged into six flow categories, there is no substantial difference in accuracy between control group equations and experimental group equations for this particular site where drainage area accounts for more than 90 percent of the variance in all streamflow characteristics (except low flows and most annual peak logarithms). (USGS)

  7. Tightly Coupled Integration of GPS Ambiguity Fixed Precise Point Positioning and MEMS-INS through a Troposphere-Constrained Adaptive Kalman Filter

    PubMed Central

    Han, Houzeng; Xu, Tianhe; Wang, Jian

    2016-01-01

    Precise Point Positioning (PPP) makes use of the undifferenced pseudorange and carrier phase measurements with ionospheric-free (IF) combinations to achieve centimeter-level positioning accuracy. Conventionally, the IF ambiguities are estimated as float values. To improve the PPP positioning accuracy and shorten the convergence time, the integer phase clock model with between-satellites single-difference (BSSD) operation is used to recover the integer property. However, the continuity and availability of stand-alone PPP is largely restricted by the observation environment. The positioning performance will be significantly degraded when GPS operates under challenging environments, if less than five satellites are present. A commonly used approach is integrating a low cost inertial sensor to improve the positioning performance and robustness. In this study, a tightly coupled (TC) algorithm is implemented by integrating PPP with inertial navigation system (INS) using an Extended Kalman filter (EKF). The navigation states, inertial sensor errors and GPS error states are estimated together. The troposphere constrained approach, which utilizes external tropospheric delay as virtual observation, is applied to further improve the ambiguity-fixed height positioning accuracy, and an improved adaptive filtering strategy is implemented to improve the covariance modelling considering the realistic noise effect. A field vehicular test with a geodetic GPS receiver and a low cost inertial sensor was conducted to validate the improvement on positioning performance with the proposed approach. The results show that the positioning accuracy has been improved with inertial aiding. Centimeter-level positioning accuracy is achievable during the test, and the PPP/INS TC integration achieves a fast re-convergence after signal outages. For troposphere constrained solutions, a significant improvement for the height component has been obtained. The overall positioning accuracies of the height component are improved by 30.36%, 16.95% and 24.07% for three different convergence times, i.e., 60, 50 and 30 min, respectively. It shows that the ambiguity-fixed horizontal positioning accuracy has been significantly improved. When compared with the conventional PPP solution, it can be seen that position accuracies are improved by 19.51%, 61.11% and 23.53% for the north, east and height components, respectively, after one hour convergence through the troposphere constraint fixed PPP/INS with adaptive covariance model. PMID:27399721
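
    The troposphere constraint amounts to feeding the external zenith tropospheric delay (ZTD) into the filter as a virtual observation. The Python sketch below shows a generic Kalman measurement update for a toy two-element state; the state layout and noise values are illustrative assumptions, not the paper's full PPP/INS state.

        import numpy as np

        x = np.array([0.00, 2.45])            # state: [height error (m), ZTD (m)]
        P = np.diag([0.04, 0.01])             # state covariance

        z = 2.40                              # externally supplied ZTD (m)
        H = np.array([[0.0, 1.0]])            # virtual observation selects the ZTD state
        R = np.array([[0.0004]])              # (2 cm)^2 constraint variance

        # Standard Kalman measurement update
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + (K @ (z - H @ x)).ravel()
        P = (np.eye(2) - K @ H) @ P
        print(x, np.diag(P))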

  8. Cloud-based preoperative planning for total hip arthroplasty: a study of accuracy, efficiency, and compliance.

    PubMed

    Maratt, Joseph D; Srinivasan, Ramesh C; Dahl, William J; Schilling, Peter L; Urquhart, Andrew G

    2012-08-01

    As digital radiography becomes more prevalent, several systems for digital preoperative planning have become available. The purpose of this study was to evaluate the accuracy and efficiency of an inexpensive, cloud-based digital templating system, which is comparable with acetate templating. However, cloud-based templating is substantially faster and more convenient than acetate templating or locally installed software. Although this is a practical solution for this particular medical application, regulatory changes are necessary before the tremendous advantages of cloud-based storage and computing can be realized in medical research and clinical practice. Copyright 2012, SLACK Incorporated.

  9. Attitude-correlated frames approach for a star sensor to improve attitude accuracy under highly dynamic conditions.

    PubMed

    Ma, Liheng; Zhan, Dejun; Jiang, Guangwen; Fu, Sihua; Jia, Hui; Wang, Xingshu; Huang, Zongsheng; Zheng, Jiaxing; Hu, Feng; Wu, Wei; Qin, Shiqiao

    2015-09-01

    The attitude accuracy of a star sensor decreases rapidly when star images become motion-blurred under dynamic conditions. Existing techniques concentrate on a single frame of star images to solve this problem, and improvements are obtained to a certain extent. An attitude-correlated frames (ACF) approach, which concentrates on the attitude transforms between adjacent star image frames, is proposed to improve upon the existing techniques. The attitude transforms between different star image frames are measured precisely by the strap-down gyro unit. With the ACF method, a much larger star image frame is obtained through the combination of adjacent frames. As a result, the degradation of attitude accuracy caused by motion blurring is compensated for. The improvement in attitude accuracy is approximately proportional to the square root of the number of correlated star image frames. Simulations and experimental results indicate that the ACF approach is effective in removing random noise and improving the attitude determination accuracy of the star sensor under highly dynamic conditions.
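
    The sqrt(N) behavior can be seen in a toy average: per-frame attitude estimates, expressed in a common frame via the (here assumed exact) gyro-measured inter-frame rotations, are averaged so that random noise shrinks roughly as 1/sqrt(N) for small angles. All values are synthetic.

        import numpy as np

        rng = np.random.default_rng(0)
        N = 16
        true_attitude = np.array([0.01, -0.02, 0.005])   # small-angle rotation vector (rad)
        frames = true_attitude + 1e-3 * rng.standard_normal((N, 3))

        single_err = np.linalg.norm(frames[0] - true_attitude)
        combined_err = np.linalg.norm(frames.mean(axis=0) - true_attitude)
        print(single_err / combined_err)   # close to sqrt(16) = 4 on average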

  10. Accuracy of references and quotations in veterinary journals.

    PubMed

    Hinchcliff, K W; Bruce, N J; Powers, J D; Kipp, M L

    1993-02-01

    The accuracy of references and quotations used to substantiate statements of fact in articles published in 6 frequently cited veterinary journals was examined. Three hundred references were randomly selected, and the accuracy of each citation was examined. A subset of 100 references was examined for quotational accuracy; ie, the accuracy with which authors represented the work or assertions of the author being cited. Of the 300 references selected, 295 were located, and 125 major errors were found in 88 (29.8%) of them. Sixty-seven (53.6%) major errors were found involving authors, 12 (9.6%) involved the article title, 14 (11.2%) involved the book or journal title, and 32 (25.6%) involved the volume number, date, or page numbers. Sixty-eight minor errors were detected. The accuracy of 111 quotations from 95 citations in 65 articles was examined. Nine quotations were technical and not classified, 86 (84.3%) were classified as correct, 2 (1.9%) contained minor misquotations, and 14 (13.7%) contained major misquotations. We concluded that misquotations and errors in citations occur frequently in veterinary journals, but at a rate similar to that reported for other biomedical journals.

  11. Cost and accuracy of advanced breeding trial designs in apple

    PubMed Central

    Harshman, Julia M; Evans, Kate M; Hardner, Craig M

    2016-01-01

    Trialing advanced candidates in tree fruit crops is expensive due to the long-term nature of the planting and labor-intensive evaluations required to make selection decisions. How closely the trait evaluations approximate the true trait value needs balancing with the cost of the program. Designs of field trials of advanced apple candidates in which reduced number of locations, the number of years and the number of harvests per year were modeled to investigate the effect on the cost and accuracy in an operational breeding program. The aim was to find designs that would allow evaluation of the most additional candidates while sacrificing the least accuracy. Critical percentage difference, response to selection, and correlated response were used to examine changes in accuracy of trait evaluations. For the quality traits evaluated, accuracy and response to selection were not substantially reduced for most trial designs. Risk management influences the decision to change trial design, and some designs had greater risk associated with them. Balancing cost and accuracy with risk yields valuable insight into advanced breeding trial design. The methods outlined in this analysis would be well suited to other horticultural crop breeding programs. PMID:27019717

  12. Accuracy Improvement of Neutron Nuclear Data on Minor Actinides

    NASA Astrophysics Data System (ADS)

    Harada, Hideo; Iwamoto, Osamu; Iwamoto, Nobuyuki; Kimura, Atsushi; Terada, Kazushi; Nakao, Taro; Nakamura, Shoji; Mizuyama, Kazuhito; Igashira, Masayuki; Katabuchi, Tatsuya; Sano, Tadafumi; Takahashi, Yoshiyuki; Takamiya, Koichi; Pyeon, Cheol Ho; Fukutani, Satoshi; Fujii, Toshiyuki; Hori, Jun-ichi; Yagi, Takahiro; Yashima, Hiroshi

    2015-05-01

    Improvement of the accuracy of neutron nuclear data for minor actinides (MAs) and long-lived fission products (LLFPs) is required for developing innovative nuclear systems that transmute these nuclei. To meet this requirement, the project entitled "Research and development for Accuracy Improvement of neutron nuclear data on Minor ACtinides (AIMAC)" was started as one of the "Innovative Nuclear Research and Development Program" projects in Japan in October 2013. The AIMAC project team is composed of researchers in four different fields: differential nuclear data measurement, integral nuclear data measurement, nuclear chemistry, and nuclear data evaluation. By integrating all of the forefront knowledge and techniques in these fields, the team aims at improving the accuracy of the data. The background and research plan of the AIMAC project are presented.

  13. Image processing and machine learning for fully automated probabilistic evaluation of medical images.

    PubMed

    Sajn, Luka; Kukar, Matjaž

    2011-12-01

    The paper presents results of our long-term study on using image processing and data mining methods in medical imaging. Since the evaluation of modern medical images is becoming increasingly complex, advanced analytical and decision support tools are involved in the integration of partial diagnostic results. Such partial results, frequently obtained from tests with substantial imperfections, are integrated into an ultimate diagnostic conclusion about the probability of disease for a given patient. We study various topics such as improving the predictive power of clinical tests by utilizing pre-test and post-test probabilities, texture representation, multi-resolution feature extraction, feature construction, and data mining algorithms that significantly outperform medical practice. Our long-term study reveals three significant milestones. The first improvement was achieved by significantly increasing post-test diagnostic probabilities with respect to expert physicians. The second, even more significant improvement utilizes multi-resolution image parametrization. Machine learning methods in conjunction with feature subset selection on these parameters significantly improve diagnostic performance. However, further feature construction with principal component analysis on these features elevates results to an even higher accuracy level, which represents the third milestone. With the proposed approach, clinical results are significantly improved throughout the study. The most significant result of our study is the improvement in the diagnostic power of the whole diagnostic process. Our compound approach aids, but does not replace, the physician's judgment and may assist in decisions on the cost effectiveness of tests. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  14. Design considerations and validation of the MSTAR absolute metrology system

    NASA Astrophysics Data System (ADS)

    Peters, Robert D.; Lay, Oliver P.; Dubovitsky, Serge; Burger, Johan; Jeganathan, Muthu

    2004-08-01

    Absolute metrology measures the actual distance between two optical fiducials. A number of methods have been employed, including pulsed time-of-flight, intensity-modulated optical beam, and two-color interferometry. The rms accuracy is currently limited to ~5 microns. Resolving the integer number of wavelengths requires a 1-sigma range accuracy of ~0.1 microns. Closing this gap has a large pay-off: the range (length measurement) accuracy can be increased substantially using the unambiguous optical phase. The MSTAR sensor (Modulation Sideband Technology for Absolute Ranging) is a new system for measuring absolute distance, capable of resolving the integer cycle ambiguity of standard interferometers, and making it possible to measure distance with sub-nanometer accuracy. In this paper, we present recent experiments that use dispersed white light interferometry to independently validate the zero-point of the system. We also describe progress towards reducing the size of optics, and stabilizing the laser wavelength for operation over larger target ranges. MSTAR is a general-purpose tool for conveniently measuring length with much greater accuracy than was previously possible, and has a wide range of possible applications.
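
    The pay-off described above is the classic integer-ambiguity step: once the absolute channel is accurate to well under half a wavelength, the integer cycle count is fixed and the optical phase supplies the sub-wavelength remainder. The values in this sketch are illustrative.

        wavelength = 1.55e-6          # m
        coarse_range = 1.2345678      # m, from the absolute (sideband) channel
        phase_fraction = 0.271        # measured optical phase / (2*pi)

        n = round(coarse_range / wavelength - phase_fraction)   # integer cycle count
        fine_range = (n + phase_fraction) * wavelength
        print(f"{fine_range:.9f} m")  # range now limited by phase accuracy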

  15. Accuracy of the Atherosclerotic Cardiovascular Risk Equation in a Large Contemporary, Multiethnic Population.

    PubMed

    Rana, Jamal S; Tabada, Grace H; Solomon, Matthew D; Lo, Joan C; Jaffe, Marc G; Sung, Sue Hee; Ballantyne, Christie M; Go, Alan S

    2016-05-10

    The accuracy of the 2013 American College of Cardiology/American Heart Association (ACC/AHA) Pooled Cohort Risk Equation for atherosclerotic cardiovascular disease (ASCVD) events in contemporary and ethnically diverse populations is not well understood. The goal of this study was to evaluate the accuracy of the 2013 ACC/AHA Pooled Cohort Risk Equation within a large, multiethnic population in clinical care. The target population for consideration of cholesterol-lowering therapy in a large, integrated health care delivery system population was identified in 2008 and followed up through 2013. The main analyses excluded those with known ASCVD, diabetes mellitus, low-density lipoprotein cholesterol levels <70 or ≥190 mg/dl, prior lipid-lowering therapy use, or incomplete 5-year follow-up. Patient characteristics were obtained from electronic medical records, and ASCVD events were ascertained by using validated algorithms for hospitalization databases and death certificates. We compared predicted versus observed 5-year ASCVD risk, overall and according to sex and race/ethnicity. We additionally examined predicted versus observed risk in patients with diabetes mellitus. Among 307,591 eligible adults without diabetes between 40 and 75 years of age, 22,283 were black, 52,917 were Asian/Pacific Islander, and 18,745 were Hispanic. We observed 2,061 ASCVD events during 1,515,142 person-years. In each 5-year predicted ASCVD risk category, observed 5-year ASCVD risk was substantially lower: 0.20% for predicted risk <2.50%; 0.65% for predicted risk 2.50% to <3.75%; 0.90% for predicted risk 3.75% to <5.00%; and 1.85% for predicted risk ≥5.00% (C statistic: 0.74). Similar ASCVD risk overestimation and poor calibration with moderate discrimination (C statistic: 0.68 to 0.74) were observed in sex, racial/ethnic, and socioeconomic status subgroups, and in sensitivity analyses among patients receiving statins for primary prevention. Calibration among 4,242 eligible adults with diabetes was improved, but discrimination was worse (C statistic: 0.64). In a large, contemporary "real-world" population, the ACC/AHA Pooled Cohort Risk Equation substantially overestimated actual 5-year risk in adults without diabetes, overall and across sociodemographic subgroups. Copyright © 2016 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.

  16. Interobserver agreement and diagnostic accuracy of brain magnetic resonance imaging in dogs.

    PubMed

    Leclerc, Mylène-Kim; d'Anjou, Marc-André; Blond, Laurent; Carmel, Éric Norman; Dennis, Ruth; Kraft, Susan L; Matthews, Andrea R; Parent, Joane M

    2013-06-15

    To evaluate interobserver agreement and diagnostic accuracy of brain MRI in dogs. Evaluation study. 44 dogs. 5 board-certified veterinary radiologists with variable MRI experience interpreted transverse T2-weighted (T2w), T2w fluid-attenuated inversion recovery (FLAIR), and T1-weighted-FLAIR; transverse, sagittal, and dorsal T2w; and T1-weighted-FLAIR postcontrast brain sequences (1.5 T). Several imaging parameters were scored, including the following: lesion (present or absent), lesion characteristics (axial localization, mass effect, edema, hemorrhage, and cavitation), contrast enhancement characteristics, and most likely diagnosis (normal, neoplastic, inflammatory, vascular, metabolic or toxic, or other). Magnetic resonance imaging diagnoses were determined initially without patient information and then repeated, providing history and signalment. For all cases and readers, MRI diagnoses were compared with final diagnoses established with results from histologic examination (when available) or with other pertinent clinical data (CSF analysis, clinical response to treatment, or MRI follow-up). Magnetic resonance scores were compared between examiners with κ statistics. Reading agreement was substantial to almost perfect (0.64 < κ < 0.86) when identifying a brain lesion on MRI; fair to moderate (0.14 < κ < 0.60) when interpreting hemorrhage, edema, and pattern of contrast enhancement; fair to substantial (0.22 < κ < 0.74) for dural tail sign and categorization of margins of enhancement; and moderate to substantial (0.40 < κ < 0.78) for axial localization, presence of mass effect, cavitation, intensity, and distribution of enhancement. Interobserver agreement was moderate to substantial for categories of diagnosis (0.56 < κ < 0.69), and agreement with the final diagnosis was substantial regardless of whether patient information was (0.65 < κ < 0.76) or was not (0.65 < κ < 0.68) provided. The present study found that whereas some MRI features such as edema and hemorrhage were interpreted less consistently, radiologists were reasonably consistent and accurate when providing diagnoses.
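
    The κ values quoted above are Cohen-style chance-corrected agreement statistics. A minimal two-rater computation is sketched below with hypothetical diagnostic labels; the study's multi-reader analysis is more involved.

        from collections import Counter

        def cohens_kappa(r1, r2):
            """Chance-corrected agreement between two raters' label sequences."""
            n = len(r1)
            p_obs = sum(a == b for a, b in zip(r1, r2)) / n
            c1, c2 = Counter(r1), Counter(r2)
            p_exp = sum(c1[k] * c2[k] for k in set(c1) | set(c2)) / n**2
            return (p_obs - p_exp) / (1 - p_exp)

        rater1 = ["neoplastic", "vascular", "inflammatory", "neoplastic", "normal"]
        rater2 = ["neoplastic", "vascular", "neoplastic", "neoplastic", "normal"]
        print(round(cohens_kappa(rater1, rater2), 2))   # 0.71 for these labels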

  17. Note: An improved calibration system with phase correction for electronic transformers with digital output

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheng, Han-miao, E-mail: chenghanmiao@hust.edu.cn; Li, Hong-bin, E-mail: lihongbin@hust.edu.cn; State Key Laboratory of Advanced Electromagnetic Engineering and Technology, Wuhan 430074

    The existing electronic transformer calibration systems employing data acquisition cards cannot satisfy some practical applications, because the calibration systems have phase measurement errors when they work in the mode of receiving external synchronization signals. This paper proposes an improved calibration system scheme with phase correction to improve the phase measurement accuracy. We employ an NI PCI-4474 to design a calibration system, and the system has the potential to receive external synchronization signals and reach extremely high accuracy classes. Accuracy verification has been carried out at the China Electric Power Research Institute, and the results demonstrate that the system surpasses accuracy class 0.05. Furthermore, this system has been used to test the harmonics measurement accuracy of all-fiber optical current transformers. In the same process, we also used an existing calibration system, and a comparison of the test results is presented. The system after improvement is suitable for the intended applications.

  18. Inflammatory cytokine biomarkers to identify women with asymptomatic sexually transmitted infections and bacterial vaginosis who are at high risk of HIV infection.

    PubMed

    Masson, Lindi; Arnold, Kelly B; Little, Francesca; Mlisana, Koleka; Lewis, David A; Mkhize, Nonhlanhla; Gamieldien, Hoyam; Ngcapu, Sinaye; Johnson, Leigh; Lauffenburger, Douglas A; Abdool Karim, Quarraisha; Abdool Karim, Salim S; Passmore, Jo-Ann S

    2016-05-01

    Untreated sexually transmitted infections (STIs) and bacterial vaginosis (BV) cause genital inflammation and increase the risk of HIV infection. WHO-recommended syndromic STI and BV management is severely limited as many women with asymptomatic infections go untreated. The purpose of this cross-sectional study was to evaluate genital cytokine profiles as a biomarker of STIs and BV to identify women with asymptomatic, treatable infections. Concentrations of 42 cytokines in cervicovaginal lavages from 227 HIV-uninfected women were measured using Luminex. All women were screened for BV by microscopy and STIs using molecular assays. Multivariate analyses were used to identify cytokine profiles associated with STIs/BV. A multivariate profile of seven cytokines (interleukin (IL)-1α, IL-1β, tumour necrosis factor-β, IL-4, fractalkine, macrophage-derived chemokine, and interferon-γ) most accurately predicted the presence of a treatable genital condition, with 77% classification accuracy and 75% cross-validation accuracy (sensitivity 72%; specificity 81%, positive predictive value (PPV) 86%, negative predictive value (NPV) 64%). Concomitant increased IL-1β and decreased IP-10 concentrations predicted the presence of a treatable genital condition without a substantial reduction in predictive value (sensitivity 77%, specificity 72%, PPV 82% and NPV 65%), correctly classifying 75% of the women. This approach performed substantially better than clinical signs (sensitivity 19%, specificity 92%, PPV 79% and NPV 40%). Supplementing syndromic management with an assessment of IL-1β and IP-10 as biomarkers of genital inflammation may improve STI/BV management for women, enabling more effective treatment of asymptomatic infections and potentially reducing their risk of HIV infection. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  19. The Accuracy of Pulse Spectroscopy for Detecting Hypoxemia and Coexisting Methemoglobin or Carboxyhemoglobin.

    PubMed

    Kulcke, Axel; Feiner, John; Menn, Ingolf; Holmer, Amadeus; Hayoz, Josef; Bickler, Philip

    2016-06-01

    Pulse spectroscopy is a new noninvasive technology involving hundreds of wavelengths of visible and infrared light, enabling the simultaneous quantitation of multiple types of normal and dysfunctional hemoglobin. We evaluated the accuracy of a first-generation pulse spectroscopy system (V-Spec™ Monitoring System, Senspec, Germany) in measuring oxygen saturation (SpO2) and detecting carboxyhemoglobin (COHb) or methemoglobin (MetHb), alone or simultaneously, with hypoxemia. Nineteen volunteers were fitted with V-Spec probes on the forehead and fingers. A radial arterial catheter was placed for blood sampling during (1) hypoxemia with arterial oxygen saturations (SaO2) of 100% to 58.5%; (2) normoxia with MetHb and COHb increased to approximately 10%; (3) 10% COHb or MetHb combined with hypoxemia with SaO2 of 100% to 80%. Standard measures of pulse-oximetry performance were calculated: bias (pulse spectroscopy measured value - arterial measured value) mean ± SD and root-mean-square error (Arms). The SpO2 bias for SaO2 of approximately 60% to 100% was 0.06% ± 1.30%, with an Arms of 1.30%. COHb bias was 0.45% ± 1.63%, with an Arms of 1.69% overall, and did not degrade substantially during moderate hypoxemia. MetHb bias was 0.36% ± 0.80% overall and stayed small with hypoxemia; the Arms was 0.88% and was <3% at all levels of SaO2 and MetHb. Hypoxemia was also accurately detected by pulse spectroscopy at elevated levels of COHb. At elevated MetHb levels, a substantial negative bias developed, -10.3% at MetHb >10%. Pulse spectroscopy accurately detects hypoxemia, MetHb, and COHb. The technology also accurately detects these dysfunctional hemoglobins during hypoxemia. Future releases of this device may have an improved SpO2 algorithm that is more robust with methemoglobinemia.
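
    The bias and Arms statistics quoted above are straightforward to compute; a minimal sketch, using invented paired readings rather than study data:

        import numpy as np

        def oximetry_performance(device, reference):
            """Bias (device minus arterial reference, mean +/- SD) and
            root-mean-square error (Arms) for paired saturation readings."""
            diff = np.asarray(device) - np.asarray(reference)
            return diff.mean(), diff.std(ddof=1), np.sqrt(np.mean(diff ** 2))

        # Illustrative paired samples (percent saturation), not study data.
        spo2 = [97.1, 88.4, 76.2, 64.0, 93.5, 81.7]
        sao2 = [97.0, 89.0, 75.5, 63.2, 93.9, 80.6]
        bias, sd, arms = oximetry_performance(spo2, sao2)
        print(f"bias {bias:+.2f} +/- {sd:.2f} %, Arms {arms:.2f} %")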

  20. Conclusions about children's reporting accuracy for energy and macronutrients over multiple interviews depend on the analytic approach for comparing reported information to reference information.

    PubMed

    Baxter, Suzanne Domel; Smith, Albert F; Hardin, James W; Nichols, Michele D

    2007-04-01

    Validation study data are used to illustrate that conclusions about children's reporting accuracy for energy and macronutrients over multiple interviews (ie, time) depend on the analytic approach for comparing reported and reference information-conventional, which disregards accuracy of reported items and amounts, or reporting-error-sensitive, which classifies reported items as matches (eaten) or intrusions (not eaten), and amounts as corresponding or overreported. Children were observed eating school meals on 1 day (n=12), or 2 (n=13) or 3 (n=79) nonconsecutive days separated by ≥25 days, and interviewed in the morning after each observation day about intake the previous day. Reference (observed) and reported information were transformed to energy and macronutrients (ie, protein, carbohydrate, and fat), and compared. For energy and each macronutrient: report rates (reported/reference), correspondence rates (genuine accuracy measures), and inflation ratios (error measures). Mixed-model analyses. Using the conventional approach for analyzing energy and macronutrients, report rates did not vary systematically over interviews (all four P values >0.61). Using the reporting-error-sensitive approach for analyzing energy and macronutrients, correspondence rates increased over interviews (all four P values <0.04), indicating that reporting accuracy improved over time; inflation ratios decreased, although not significantly, over interviews, also suggesting that reporting accuracy improved over time. Correspondence rates were lower than report rates, indicating that reporting accuracy was worse than implied by conventional measures. When analyzed using the reporting-error-sensitive approach, children's dietary reporting accuracy for energy and macronutrients improved over time, but the conventional approach masked improvements and overestimated accuracy. The reporting-error-sensitive approach is recommended when analyzing data from validation studies of dietary reporting accuracy for energy and macronutrients.
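
    A minimal sketch of the reporting-error-sensitive rates, under one plausible reading of the definitions above (matched items contribute min(reported, reference) as the corresponding amount; any excess reported energy counts toward inflation); the per-item numbers are invented:

        import numpy as np

        def reporting_error_sensitive_rates(reported, reference, matched):
            """Report rate, correspondence rate, and inflation ratio.
            `matched` flags items both reported and observed eaten; an
            intrusion has reference 0. The 'corresponding' amount for a
            match is taken here as min(reported, reference)."""
            reported = np.asarray(reported, float)
            reference = np.asarray(reference, float)
            matched = np.asarray(matched, bool)
            report_rate = reported.sum() / reference.sum()
            corresponding = np.minimum(reported, reference)[matched].sum()
            correspondence_rate = corresponding / reference.sum()
            inflation_ratio = (reported.sum() - corresponding) / reference.sum()
            return report_rate, correspondence_rate, inflation_ratio

        # Toy per-item energy (kcal): two matches, one intrusion.
        reported  = [250, 180, 90]
        reference = [230, 200, 0]
        matched   = [True, True, False]
        print(reporting_error_sensitive_rates(reported, reference, matched))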

  1. Conclusions about children’s reporting accuracy for energy and macronutrients over multiple interviews depend on the analytic approach for comparing reported information to reference information

    PubMed Central

    Baxter, Suzanne Domel; Smith, Albert F.; Hardin, James W.; Nichols, Michele D.

    2008-01-01

    Objective Validation-study data are used to illustrate that conclusions about children’s reporting accuracy for energy and macronutrients over multiple interviews (ie, time) depend on the analytic approach for comparing reported and reference information—conventional, which disregards accuracy of reported items and amounts, or reporting-error-sensitive, which classifies reported items as matches (eaten) or intrusions (not eaten), and amounts as corresponding or overreported. Subjects and design Children were observed eating school meals on one day (n = 12), or two (n = 13) or three (n = 79) nonconsecutive days separated by ≥25 days, and interviewed in the morning after each observation day about intake the previous day. Reference (observed) and reported information were transformed to energy and macronutrients (protein, carbohydrate, fat), and compared. Main outcome measures For energy and each macronutrient: report rates (reported/reference), correspondence rates (genuine accuracy measures), inflation ratios (error measures). Statistical analyses Mixed-model analyses. Results Using the conventional approach for analyzing energy and macronutrients, report rates did not vary systematically over interviews (Ps > .61). Using the reporting-error-sensitive approach for analyzing energy and macronutrients, correspondence rates increased over interviews (Ps < .04), indicating that reporting accuracy improved over time; inflation ratios decreased, although not significantly, over interviews, also suggesting that reporting accuracy improved over time. Correspondence rates were lower than report rates, indicating that reporting accuracy was worse than implied by conventional measures. Conclusions When analyzed using the reporting-error-sensitive approach, children’s dietary reporting accuracy for energy and macronutrients improved over time, but the conventional approach masked improvements and overestimated accuracy. Applications The reporting-error-sensitive approach is recommended when analyzing data from validation studies of dietary reporting accuracy for energy and macronutrients. PMID:17383265

  2. Improving coding accuracy in an academic practice.

    PubMed

    Nguyen, Dana; O'Mara, Heather; Powell, Robert

    2017-01-01

    Practice management has become an increasingly important component of graduate medical education. This applies to every practice environment: private, academic, and military. One of the most critical aspects of practice management is documentation and coding for physician services, as they directly affect the financial success of any practice. Our quality improvement project aimed to implement a new and innovative method for teaching billing and coding in a longitudinal fashion in a family medicine residency. We hypothesized that implementation of a new teaching strategy would increase coding accuracy rates among residents and faculty. Design: single group, pretest-posttest. Setting: military family medicine residency clinic. Study populations: 7 faculty physicians and 18 resident physicians participated as learners in the project. Educational intervention: monthly structured coding learning sessions in the academic curriculum that involved learner-presented cases, small group case review, and large group discussion. Main outcome measures: overall coding accuracy (compliance) percentage and coding accuracy per year group for the subjects who were able to participate longitudinally. Statistical tests used: average coding accuracy for the population; paired t test to assess improvement between the 2 intervention periods, both in aggregate and by year group. Overall coding accuracy rates remained stable over the course of time regardless of the modality of the educational intervention. A paired t test was conducted to compare coding accuracy rates at baseline (mean (M)=26.4%, SD=10%) to accuracy rates after all educational interventions were complete (M=26.8%, SD=12%); t(24)=-0.127, P=.90. Didactic teaching and small group discussion sessions did not improve overall coding accuracy in a residency practice. Future interventions could focus on educating providers at the individual level.

  3. Accuracy improvement of multimodal measurement of speed of sound based on image processing

    NASA Astrophysics Data System (ADS)

    Nitta, Naotaka; Kaya, Akio; Misawa, Masaki; Hyodo, Koji; Numano, Tomokazu

    2017-07-01

    Since the speed of sound (SOS) reflects tissue characteristics and is expected as an evaluation index of elasticity and water content, the noninvasive measurement of SOS is eagerly anticipated. However, it is difficult to measure the SOS by using an ultrasound device alone. Therefore, we have presented a noninvasive measurement method of SOS using ultrasound (US) and magnetic resonance (MR) images. By this method, we determine the longitudinal SOS based on the thickness measurement using the MR image and the time of flight (TOF) measurement using the US image. The accuracy of SOS measurement is affected by the accuracy of image registration and the accuracy of thickness measurements in the MR and US images. In this study, we address the accuracy improvement in the latter thickness measurement, and present an image-processing-based method for improving the accuracy of thickness measurement. The method was investigated by using in vivo data obtained from a tissue-engineered cartilage implanted in the back of a rat, with an unclear boundary.
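
    The underlying computation is simple once the two modalities are registered: the acoustic path length from the MR-derived thickness divided by the US-derived time of flight. A minimal sketch, assuming pulse-echo (round-trip) TOF:

        def speed_of_sound(thickness_mm, tof_us, round_trip=True):
            """Longitudinal speed of sound from MR-derived tissue thickness
            and US-derived time of flight. Assumes pulse-echo (round-trip)
            TOF by default; set round_trip=False for transmission mode."""
            path_m = thickness_mm * 1e-3 * (2.0 if round_trip else 1.0)
            return path_m / (tof_us * 1e-6)      # m/s

        # Example: 5.2 mm cartilage layer, 6.6 us round-trip echo delay.
        print(f"{speed_of_sound(5.2, 6.6):.0f} m/s")   # ~1576 m/s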

  4. Comparison of ferumoxytol-enhanced MRA with conventional angiography for assessment of severity of transplant renal artery stenosis.

    PubMed

    Fananapazir, Ghaneh; Bashir, Mustafa R; Corwin, Michael T; Lamba, Ramit; Vu, Catherine T; Troppmann, Christoph

    2017-03-01

    To determine the accuracy of ferumoxytol-enhanced magnetic resonance angiography (MRA) in assessing the severity of transplant renal artery stenosis (TRAS), using digital subtraction angiography (DSA) as the reference standard. Our Institutional Review Board approved this retrospective, Health Insurance Portability and Accountability Act-compliant study. Thirty-three patients with documented clinical suspicion for TRAS (elevated serum creatinine, refractory hypertension, edema, and/or audible bruit) and/or concerning sonographic findings (elevated renal artery velocity and/or intraparenchymal parvus tardus waveforms) underwent a 1.5T MRA with ferumoxytol prior to DSA. All DSAs were independently reviewed by an interventional radiologist and served as the reference standard. The MRAs were reviewed by three readers who were blinded to the ultrasound and DSA findings for the presence and severity of TRAS. Sensitivity, specificity, and accuracy for identifying substantial stenoses (>50%) were determined. Intraclass correlation coefficients (ICCs) were calculated among readers. Mean differences between the percent stenosis from each MRA reader and DSA were calculated. On DSA, a total of 42 stenoses were identified in the 33 patients. The sensitivity, specificity, and accuracy of MRA in detecting substantial stenoses were 100%, 75-87.5%, and 95.2-97.6%, respectively, among the readers. There was excellent agreement among readers as to the percent stenosis (ICC = 0.82). MRA overestimated the degree of stenosis by 3.9-9.6% compared to DSA. Ferumoxytol-enhanced MRA provides high sensitivity, specificity, and accuracy for determining the severity of TRAS. Our results suggest that it can potentially be used as a noninvasive examination following ultrasound to reduce the number of unnecessary conventional angiograms. Level of Evidence: 3. J. Magn. Reson. Imaging 2017;45:779-785. © 2016 International Society for Magnetic Resonance in Medicine.

  5. Critically re-evaluating a common technique: Accuracy, reliability, and confirmation bias of EMG.

    PubMed

    Narayanaswami, Pushpa; Geisbush, Thomas; Jones, Lyell; Weiss, Michael; Mozaffar, Tahseen; Gronseth, Gary; Rutkove, Seward B

    2016-01-19

    (1) To assess the diagnostic accuracy of EMG in radiculopathy. (2) To evaluate the intrarater reliability and interrater reliability of EMG in radiculopathy. (3) To assess the presence of confirmation bias in EMG. Three experienced academic electromyographers interpreted 3 compact discs with 20 EMG videos (10 normal, 10 radiculopathy) in a blinded, standardized fashion without information regarding the nature of the study. The EMGs were interpreted 3 times (discs A, B, C) 1 month apart. Clinical information was provided only with disc C. Intrarater reliability was calculated by comparing interpretations in discs A and B, interrater reliability by comparing interpretation between reviewers. Confirmation bias was estimated by the difference in correct interpretations when clinical information was provided. Sensitivity was similar to previous reports (77%, confidence interval [CI] 63%-90%); specificity was 71%, CI 56%-85%. Intrarater reliability was good (κ 0.61, 95% CI 0.41-0.81); interrater reliability was lower (κ 0.53, CI 0.35-0.71). There was no substantial confirmation bias when clinical information was provided (absolute difference in correct responses 2.2%, CI -13.3% to 17.7%); the study lacked precision to exclude moderate confirmation bias. This study supports that (1) serial EMG studies should be performed by the same electromyographer since intrarater reliability is better than interrater reliability; (2) knowledge of clinical information does not bias EMG interpretation substantially; (3) EMG has moderate diagnostic accuracy for radiculopathy with modest specificity and electromyographers should exercise caution interpreting mild abnormalities. This study provides Class III evidence that EMG has moderate diagnostic accuracy and specificity for radiculopathy. © 2015 American Academy of Neurology.
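
    Intra- and interrater reliability here are kappa statistics; a minimal self-contained sketch of Cohen's kappa for two sets of binary EMG readings (toy data, not the study's):

        import numpy as np

        def cohens_kappa(r1, r2):
            """Chance-corrected agreement between two sets of binary readings
            (e.g. the same EMG videos interpreted twice, or by two reviewers)."""
            r1, r2 = np.asarray(r1, bool), np.asarray(r2, bool)
            po = np.mean(r1 == r2)                        # observed agreement
            p_yes = r1.mean() * r2.mean()                 # chance both 'abnormal'
            p_no = (1 - r1.mean()) * (1 - r2.mean())      # chance both 'normal'
            pe = p_yes + p_no
            return (po - pe) / (1 - pe)

        # 20 videos read twice by one electromyographer (toy data).
        first  = [1,1,0,0,1,0,1,1,0,0,1,0,1,0,0,1,1,0,0,1]
        second = [1,1,0,1,0,0,1,1,0,0,1,0,1,0,0,1,0,0,0,1]
        print(f"kappa = {cohens_kappa(first, second):.2f}")   # 0.70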

  6. Multivariate prediction of motor diagnosis in Huntington's disease: 12 years of PREDICT‐HD

    PubMed Central

    Long, Jeffrey D.

    2015-01-01

    Abstract Background It is well known in Huntington's disease that cytosine‐adenine‐guanine expansion and age at study entry are predictive of the timing of motor diagnosis. The goal of this study was to assess whether additional motor, imaging, cognitive, functional, psychiatric, and demographic variables measured at study entry increased the ability to predict the risk of motor diagnosis over 12 years. Methods One thousand seventy‐eight Huntington's disease gene–expanded carriers (64% female) from the Neurobiological Predictors of Huntington's Disease study were followed up for up to 12 y (mean = 5, standard deviation = 3.3) covering 2002 to 2014. No one had a motor diagnosis at study entry, but 225 (21%) carriers prospectively received a motor diagnosis. Analysis was performed with random survival forests, which is a machine learning method for right‐censored data. Results Adding 34 variables along with cytosine‐adenine‐guanine and age substantially increased predictive accuracy relative to cytosine‐adenine‐guanine and age alone. Adding six of the common motor and cognitive variables (total motor score, diagnostic confidence level, Symbol Digit Modalities Test, three Stroop tests) resulted in lower predictive accuracy than the full set, but still had twice the 5‐y predictive accuracy than when using cytosine‐adenine‐guanine and age alone. Additional analysis suggested interactions and nonlinear effects that were characterized in a post hoc Cox regression model. Conclusions Measurement of clinical variables can substantially increase the accuracy of predicting motor diagnosis over and above cytosine‐adenine‐guanine and age (and their interaction). Estimated probabilities can be used to characterize progression level and aid in future studies' sample selection. © 2015 The Authors. Movement Disorders published by Wiley Periodicals, Inc. on behalf of International Parkinson and Movement Disorder Society PMID:26340420

  7. Multivariate prediction of motor diagnosis in Huntington's disease: 12 years of PREDICT-HD.

    PubMed

    Long, Jeffrey D; Paulsen, Jane S

    2015-10-01

    It is well known in Huntington's disease that cytosine-adenine-guanine expansion and age at study entry are predictive of the timing of motor diagnosis. The goal of this study was to assess whether additional motor, imaging, cognitive, functional, psychiatric, and demographic variables measured at study entry increased the ability to predict the risk of motor diagnosis over 12 years. One thousand seventy-eight Huntington's disease gene-expanded carriers (64% female) from the Neurobiological Predictors of Huntington's Disease study were followed up for up to 12 y (mean = 5, standard deviation = 3.3) covering 2002 to 2014. No one had a motor diagnosis at study entry, but 225 (21%) carriers prospectively received a motor diagnosis. Analysis was performed with random survival forests, which is a machine learning method for right-censored data. Adding 34 variables along with cytosine-adenine-guanine and age substantially increased predictive accuracy relative to cytosine-adenine-guanine and age alone. Adding six of the common motor and cognitive variables (total motor score, diagnostic confidence level, Symbol Digit Modalities Test, three Stroop tests) resulted in lower predictive accuracy than the full set, but still had twice the 5-y predictive accuracy than when using cytosine-adenine-guanine and age alone. Additional analysis suggested interactions and nonlinear effects that were characterized in a post hoc Cox regression model. Measurement of clinical variables can substantially increase the accuracy of predicting motor diagnosis over and above cytosine-adenine-guanine and age (and their interaction). Estimated probabilities can be used to characterize progression level and aid in future studies' sample selection. © 2015 The Authors. Movement Disorders published by Wiley Periodicals, Inc. on behalf of International Parkinson and Movement Disorder Society.
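
    A minimal sketch of right-censored prediction with a random survival forest, assuming the third-party scikit-survival package is available; the features, sample size, and event rate below are toy stand-ins for the PREDICT-HD variables, not the study's pipeline:

        import numpy as np
        from sksurv.ensemble import RandomSurvivalForest
        from sksurv.util import Surv

        rng = np.random.default_rng(1)
        n = 200
        X = np.column_stack([
            rng.integers(40, 55, n),        # CAG repeat length (toy)
            rng.uniform(20, 60, n),         # age at study entry (toy)
            rng.normal(10, 5, n),           # total motor score (toy)
        ])
        time = rng.uniform(1, 12, n)        # years of follow-up
        event = rng.random(n) < 0.21        # motor diagnosis observed
        y = Surv.from_arrays(event=event, time=time)

        rsf = RandomSurvivalForest(n_estimators=200, random_state=0).fit(X, y)
        # Predicted risk ordering for the first five carriers:
        print(rsf.predict(X[:5]))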

  8. Sensitivity and specificity of univariate MRI analysis of experimentally degraded cartilage under clinical imaging conditions.

    PubMed

    Lukas, Vanessa A; Fishbein, Kenneth W; Reiter, David A; Lin, Ping-Chang; Schneider, Erika; Spencer, Richard G

    2015-07-01

    To evaluate the sensitivity and specificity of classification of pathomimetically degraded bovine nasal cartilage at 3 Tesla and 37°C using univariate MRI measurements of both pure parameter values and intensities of parameter-weighted images. Pre- and posttrypsin degradation values of T1, T2, T2*, magnetization transfer ratio (MTR), and apparent diffusion coefficient (ADC), and corresponding weighted images, were analyzed. Classification based on the Euclidean distance was performed and the quality of classification was assessed through sensitivity, specificity and accuracy (ACC). The classifiers with the highest accuracy values were ADC (ACC = 0.82 ± 0.06), MTR (ACC = 0.78 ± 0.06), T1 (ACC = 0.99 ± 0.01), T2 derived from a three-dimensional (3D) spin-echo sequence (ACC = 0.74 ± 0.05), and T2 derived from a 2D spin-echo sequence (ACC = 0.77 ± 0.06), along with two of the diffusion-weighted signal intensities (b = 333 s/mm^2: ACC = 0.80 ± 0.05; b = 666 s/mm^2: ACC = 0.85 ± 0.04). In particular, T1 values differed substantially between the groups, resulting in atypically high classification accuracy. The second-best classifier, diffusion weighting with b = 666 s/mm^2, as well as all other parameters evaluated, exhibited substantial overlap between pre- and postdegradation groups, resulting in decreased accuracies. Classification according to T1 values showed excellent test characteristics (ACC = 0.99), with several other parameters also showing reasonable performance (ACC > 0.70). Of these, diffusion weighting is particularly promising as a potentially practical clinical modality. As in previous work, we again find that highly statistically significant group mean differences do not necessarily translate into accurate clinical classification rules. © 2014 Wiley Periodicals, Inc.
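
    Classification based on the Euclidean distance, as used above, amounts to a nearest-centroid rule in the space of MRI parameters; a minimal sketch with invented two-feature data:

        import numpy as np

        def euclidean_classify(train, labels, sample):
            """Assign `sample` to the class whose centroid (in the space of
            MRI parameters, e.g. T1 and ADC) is nearest in Euclidean distance."""
            classes = np.unique(labels)
            centroids = np.array([train[labels == c].mean(axis=0) for c in classes])
            d = np.linalg.norm(centroids - sample, axis=1)
            return classes[np.argmin(d)]

        # Toy two-feature data: rows are (T1 in ms, ADC in um^2/ms).
        pre  = np.array([[900, 1.1], [880, 1.0], [920, 1.2]])    # pre-degradation
        post = np.array([[1150, 1.5], [1180, 1.6], [1120, 1.4]]) # post-degradation
        train = np.vstack([pre, post])
        labels = np.array(["pre"] * 3 + ["post"] * 3)
        print(euclidean_classify(train, labels, np.array([1100, 1.45])))  # 'post'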

  9. Head-to-head comparison of adaptive statistical and model-based iterative reconstruction algorithms for submillisievert coronary CT angiography.

    PubMed

    Benz, Dominik C; Fuchs, Tobias A; Gräni, Christoph; Studer Bruengger, Annina A; Clerc, Olivier F; Mikulicic, Fran; Messerli, Michael; Stehli, Julia; Possner, Mathias; Pazhenkottil, Aju P; Gaemperli, Oliver; Kaufmann, Philipp A; Buechel, Ronny R

    2018-02-01

    Iterative reconstruction (IR) algorithms allow for a significant reduction in radiation dose of coronary computed tomography angiography (CCTA). We performed a head-to-head comparison of adaptive statistical IR (ASiR) and model-based IR (MBIR) algorithms to assess their impact on quantitative image parameters and diagnostic accuracy for submillisievert CCTA. CCTA datasets of 91 patients were reconstructed using filtered back projection (FBP), increasing contributions of ASiR (20, 40, 60, 80, and 100%), and MBIR. Signal and noise were measured in the aortic root to calculate signal-to-noise ratio (SNR). In a subgroup of 36 patients, diagnostic accuracy of ASiR 40%, ASiR 100%, and MBIR for diagnosis of coronary artery disease (CAD) was compared with invasive coronary angiography. Median radiation dose was 0.21 mSv for CCTA. While increasing levels of ASiR gradually reduced image noise compared with FBP (up to -48%, P < 0.001), MBIR provided the largest noise reduction (-79% compared with FBP), outperforming ASiR (-59% compared with ASiR 100%; P < 0.001). Increased noise and lower SNR with ASiR 40% and ASiR 100% resulted in substantially lower diagnostic accuracy to detect CAD as diagnosed by invasive coronary angiography compared with MBIR: sensitivity and specificity were 100 and 37%, 100 and 57%, and 100 and 74% for ASiR 40%, ASiR 100%, and MBIR, respectively. MBIR offers substantial noise reduction with increased SNR, paving the way for implementation of submillisievert CCTA protocols in clinical routine. In contrast, inferior noise reduction by ASiR negatively affects diagnostic accuracy of submillisievert CCTA for CAD detection. Published on behalf of the European Society of Cardiology. All rights reserved. © The Author 2017. For permissions, please email: journals.permissions@oup.com.

  10. Using Commercial Digital Cameras and Structure-from-Motion Software to Map Snow Cover Depth from Small Aircraft

    NASA Astrophysics Data System (ADS)

    Sturm, M.; Nolan, M.; Larsen, C. F.

    2014-12-01

    A long-standing goal in snow hydrology has been to map snow cover in detail, either mapping snow depth or snow water equivalent (SWE) with sub-meter resolution. Airborne LiDAR and air photogrammetry have been used successfully for this purpose, but both require significant investments in equipment and substantial processing effort. Here we detail a relatively inexpensive and simple airborne photogrammetric technique that can be used to measure snow depth. The main airborne hardware consists of a consumer-grade digital camera attached to a survey-quality, dual-frequency GPS. Photogrammetric processing is done using commercially available Structure from Motion (SfM) software that does not require ground control points. Digital elevation models (DEMs) are made from snow-free acquisitions in the summer and snow-covered acquisitions in winter, and the maps are then differenced to arrive at snow thickness. We tested the accuracy and precision of snow depths measured using this system through 1) a comparison with airborne scanning LiDAR, 2) a comparison of results from two independent and slightly different photogrammetric systems, and 3) a comparison with extensive on-the-ground snow depth measurements. Vertical accuracy and precision are on the order of ±30 cm and ±8 cm, respectively. The accuracy can be made to approach that of the precision if suitable snow-free ground control points exist and are used to co-register summer and winter DEMs. Final snow depth accuracy from our series of tests was on the order of ±15 cm. This photogrammetric method substantially lowers the economic and expertise barriers to entry for mapping snow.
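
    The core of the method is DEM differencing with an optional vertical co-registration step from snow-free ground control points; a minimal sketch with toy elevation tiles:

        import numpy as np

        def snow_depth(dem_winter, dem_summer, gcp_winter=None, gcp_summer=None):
            """Difference co-registered winter and summer DEMs to get snow depth.
            If elevations of snow-free ground control points are supplied, their
            mean offset is used to co-register the two surfaces first."""
            offset = 0.0
            if gcp_winter is not None and gcp_summer is not None:
                offset = np.mean(np.asarray(gcp_winter) - np.asarray(gcp_summer))
            depth = (dem_winter - offset) - dem_summer
            return np.clip(depth, 0.0, None)     # negative depths are noise

        # Toy 3x3 DEM tiles (metres); GCPs reveal a 0.25 m vertical bias.
        summer = np.array([[100.0, 100.4, 100.9]] * 3)
        winter = summer + 0.8 + 0.25             # 0.8 m snow + registration bias
        print(snow_depth(winter, summer, gcp_winter=[100.25], gcp_summer=[100.0]))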

  11. Lunar Reconnaissance Orbiter Orbit Determination Accuracy Analysis

    NASA Technical Reports Server (NTRS)

    Slojkowski, Steven E.

    2014-01-01

    Results from operational orbit determination (OD) produced by the NASA Goddard Flight Dynamics Facility for the LRO nominal and extended mission are presented. During the LRO nominal mission, when LRO flew in a low circular orbit, orbit determination requirements were met nearly 100% of the time. When the extended mission began, LRO returned to a more elliptical frozen orbit where gravity and other modeling errors caused numerous violations of mission accuracy requirements. Prediction accuracy is particularly challenged during periods when LRO is in full-Sun. A series of improvements to LRO orbit determination are presented, including implementation of new lunar gravity models, improved spacecraft solar radiation pressure modeling using a dynamic multi-plate area model, a shorter orbit determination arc length, and a constrained plane method for estimation. The analysis presented in this paper shows that updated lunar gravity models improved accuracy in the frozen orbit, and a multi-plate dynamic area model improves prediction accuracy during full-Sun orbit periods. Implementation of a 36-hour tracking data arc and plane constraints during edge-on orbit geometry also provide benefits. A comparison of the operational solutions to precision orbit determination solutions shows agreement at the 100- to 250-meter level in definitive accuracy.

  12. 47 CFR 73.664 - Determining operating power.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... power to within an accuracy of ±5% of the power indicated by the full scale reading of the electrical... frequency amplifier stage and the transmission line meter are to be read and compared with similar readings taken with the dummy load replaced by the antenna. These readings must be in substantial agreement. (3...

  13. When bulk density methods matter: Implications for estimating soil organic carbon pools in rocky soils

    USDA-ARS?s Scientific Manuscript database

    Resolving uncertainty in the carbon cycle is paramount to refining climate predictions. Soil organic carbon (SOC) is a major component of terrestrial C pools, and accuracy of SOC estimates are only as good as the measurements and assumptions used to obtain them. Dryland soils account for a substanti...

  14. Current sensor

    DOEpatents

    Yakymyshyn, Christopher Paul; Brubaker, Michael Allen; Yakymyshyn, Pamela Jane

    2007-01-16

    A current sensor is described that uses a plurality of magnetic field sensors positioned around a current carrying conductor. The sensor can be hinged to allow clamping to a conductor. The current sensor provides high measurement accuracy for both DC and AC currents, and is substantially immune to the effects of temperature, conductor position, nearby current carrying conductors and aging.

  15. 47 CFR 73.664 - Determining operating power.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... power to within an accuracy of ±5% of the power indicated by the full scale reading of the electrical... frequency amplifier stage and the transmission line meter are to be read and compared with similar readings taken with the dummy load replaced by the antenna. These readings must be in substantial agreement. (3...

  16. 47 CFR 73.664 - Determining operating power.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... power to within an accuracy of ±5% of the power indicated by the full scale reading of the electrical... frequency amplifier stage and the transmission line meter are to be read and compared with similar readings taken with the dummy load replaced by the antenna. These readings must be in substantial agreement. (3...

  17. 47 CFR 73.664 - Determining operating power.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... power to within an accuracy of ±5% of the power indicated by the full scale reading of the electrical... frequency amplifier stage and the transmission line meter are to be read and compared with similar readings taken with the dummy load replaced by the antenna. These readings must be in substantial agreement. (3...

  18. Seeing and Being Seen: Predictors of Accurate Perceptions about Classmates’ Relationships

    PubMed Central

    Neal, Jennifer Watling; Neal, Zachary P.; Cappella, Elise

    2015-01-01

    This study examines predictors of observer accuracy (i.e. seeing) and target accuracy (i.e. being seen) in perceptions of classmates’ relationships in a predominantly African American sample of 420 second through fourth graders (ages 7-11). Girls, children in higher grades, and children in smaller classrooms were more accurate observers. Targets (i.e. pairs of children) were more accurately observed when they occurred in smaller classrooms of higher grades and involved same-sex, high-popularity, and similar-popularity children. Moreover, relationships between pairs of girls were more accurately observed than relationships between pairs of boys. As a set, these findings suggest the importance of both observer and target characteristics for children’s accurate perceptions of classroom relationships. Moreover, the substantial variation in observer accuracy and target accuracy has methodological implications for both peer-reported assessments of classroom relationships and the use of stochastic actor-based models to understand peer selection and socialization processes. PMID:26347582

  19. IMPROVING THE ACCURACY OF HISTORIC SATELLITE IMAGE CLASSIFICATION BY COMBINING LOW-RESOLUTION MULTISPECTRAL DATA WITH HIGH-RESOLUTION PANCHROMATIC DATA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Getman, Daniel J

    2008-01-01

    Many attempts to observe changes in terrestrial systems over time would be significantly enhanced if it were possible to improve the accuracy of classifications of low-resolution historic satellite data. In an effort to examine improving the accuracy of historic satellite image classification by combining satellite and air photo data, two experiments were undertaken in which low-resolution multispectral data and high-resolution panchromatic data were combined and then classified using the ECHO spectral-spatial image classification algorithm and the Maximum Likelihood technique. The multispectral data consisted of 6 multispectral channels (30 m pixel resolution) from Landsat 7. These data were augmented with panchromatic data (15 m pixel resolution) from Landsat 7 in the first experiment, and with a mosaic of digital aerial photography (1 m pixel resolution) in the second. The addition of the Landsat 7 panchromatic data provided a significant improvement in the accuracy of classifications made using the ECHO algorithm. Although the inclusion of aerial photography provided an improvement in accuracy, this improvement was only statistically significant at a 40-60% level. These results suggest that once error levels associated with combining aerial photography and multispectral satellite data are reduced, this approach has the potential to significantly enhance the precision and accuracy of classifications made using historic remotely sensed data, as a way to extend the time range of efforts to track temporal changes in terrestrial systems.

  20. Prostate cancer in East Asia: evolving trend over the last decade

    PubMed Central

    Zhu, Yao; Wang, Hong-Kai; Qu, Yuan-Yuan; Ye, Ding-Wei

    2015-01-01

    Prostate cancer is now becoming an emerging health priority in East Asia. Most of our current knowledge on prostate cancer has been generated from studies conducted in Western populations; however, there is considerable heterogeneity of prostate cancer between East and West. In this article, we review epidemiologic trends, risk factors, disease characteristics, and management of prostate cancer in East Asian populations over the last decade. Growing evidence from East Asia suggests an important role of interactions between genetic and environmental risk factors in the carcinogenesis of prostate cancer. Exposure to a westernized diet and lifestyle, together with improvements in health care, contributes substantially to the increasing epidemic in this region. Diagnostic and treatment guidelines in East Asia are largely based on Western knowledge. Although there has been a remarkable improvement in outcomes over the last decade, ample evidence suggests a non-negligible difference in diagnostic accuracy, treatment efficacy, and adverse events between different populations. Knowledge from Western countries should be calibrated in the Asian setting to provide a better race-based treatment approach. In this review, we aim to reveal the evolving trend of prostate cancer over the last decade, in order to gain evidence to improve prostate cancer prevention and control in East Asia. PMID:25080928

  1. Complementary and alternative medicine on wikipedia: opportunities for improvement.

    PubMed

    Koo, Malcolm

    2014-01-01

    Wikipedia, a free and collaborative Internet encyclopedia, has become one of the most popular sources of free information on the Internet. However, there have been concerns over the quality of online health information, particularly that on complementary and alternative medicine (CAM). This exploratory study aimed to evaluate several page attributes of articles on CAM in the English Wikipedia. A total of 97 articles were analyzed and compared with eight articles of broad categories of therapies in conventional medicine using the Mann-Whitney U test. Based on the Wikipedia editorial assessment grading, 4% of the articles attained "good article" status, 34% required considerable editing, and 56% needed substantial improvements in their content. The median daily access of the articles over the previous 90 days was 372 (range: 7-4,214). The median word count was 1840 with a readability of grade 12.7 (range: 9.4-17.7). Medians of word count and citation density of the CAM articles were significantly lower than those in the articles of conventional medicine therapies. In conclusion, despite its limitations, the general public will continue to access health information on Wikipedia. There are opportunities for health professionals to contribute their knowledge and to improve the accuracy and completeness of the CAM articles on Wikipedia.

  2. Priority Actions and Progress to Substantially and Sustainably Reduce the Mortality, Morbidity and Socioeconomic Burden of Tropical Snakebite.

    PubMed

    Harrison, Robert A; Gutiérrez, José María

    2016-11-24

    The deliberations and conclusions of a Hinxton Retreat convened in September 2015, entitled "Mechanisms to reverse the public health neglect of snakebite victims" are reported. The participants recommended that the following priority actions be included in strategies to reduce the global impact of snake envenoming: (a) collection of accurate global snakebite incidence, mortality and morbidity data to underpin advocacy efforts and help design public health campaigns; (b) promotion of (i) public education prevention campaigns; (ii) transport systems to improve access to hospitals and (iii) establishment of regional antivenom-efficacy testing facilities to ensure antivenoms' effectiveness and safety; (c) exploration of funding models for investment in the production of antivenoms to address deficiencies in some regions; (d) establishment of (i) programs for training in effective first aid, hospital management and post-treatment care of victims; (ii) a clinical network to generate treatment guidelines and (iii) a clinical trials system to improve the clinical management of snakebite; (e) development of (i) novel treatments of the systemic and local tissue-destructive effects of envenoming and (ii) affordable, simple, point-of-care snakebite diagnostic kits to improve the accuracy and rapidity of treatment; (f) devising and implementation of interventions to help the people and communities affected by physical and psychological sequelae of snakebite.

  3. Regression trees for predicting mortality in patients with cardiovascular disease: What improvement is achieved by using ensemble-based methods?

    PubMed Central

    Austin, Peter C; Lee, Douglas S; Steyerberg, Ewout W; Tu, Jack V

    2012-01-01

    In biomedical research, the logistic regression model is the most commonly used method for predicting the probability of a binary outcome. While many clinical researchers have expressed an enthusiasm for regression trees, this method may have limited accuracy for predicting health outcomes. We aimed to evaluate the improvement that is achieved by using ensemble-based methods, including bootstrap aggregation (bagging) of regression trees, random forests, and boosted regression trees. We analyzed 30-day mortality in two large cohorts of patients hospitalized with either acute myocardial infarction (N = 16,230) or congestive heart failure (N = 15,848) in two distinct eras (1999–2001 and 2004–2005). We found that both the in-sample and out-of-sample prediction of ensemble methods offered substantial improvement in predicting cardiovascular mortality compared to conventional regression trees. However, conventional logistic regression models that incorporated restricted cubic smoothing splines had even better performance. We conclude that ensemble methods from the data mining and machine learning literature increase the predictive performance of regression trees, but may not lead to clear advantages over conventional logistic regression models for predicting short-term mortality in population-based samples of subjects with cardiovascular disease. PMID:22777999

  4. Swing arm profilometer: high accuracy testing for large reaction-bonded silicon carbide optics with a capacitive probe

    NASA Astrophysics Data System (ADS)

    Xiong, Ling; Luo, Xiao; Hu, Hai-xiang; Zhang, Zhi-yu; Zhang, Feng; Zheng, Li-gong; Zhang, Xue-jun

    2017-08-01

    A feasible way to improve the manufacturing efficiency of large reaction-bonded silicon carbide optics is to increase the processing accuracy in the grinding stage before polishing, which requires high-accuracy metrology. A swing arm profilometer (SAP) has been used to measure large optics during the grinding stage. A method has been developed for improving the measurement accuracy of the SAP by using a capacitive probe and implementing calibrations. Compared with an interferometer test, the experimental result shows an accuracy of 0.068 μm root-mean-square (RMS), and maps reconstructed from 37 low-order Zernike terms show an accuracy of 0.048 μm RMS, demonstrating a strong capability to provide a major input for high-precision grinding.

  5. STARD 2015: An Updated List of Essential Items for Reporting Diagnostic Accuracy Studies.

    PubMed

    Bossuyt, Patrick M; Reitsma, Johannes B; Bruns, David E; Gatsonis, Constantine A; Glasziou, Paul P; Irwig, Les; Lijmer, Jeroen G; Moher, David; Rennie, Drummond; de Vet, Henrica C W; Kressel, Herbert Y; Rifai, Nader; Golub, Robert M; Altman, Douglas G; Hooft, Lotty; Korevaar, Daniël A; Cohen, Jérémie F

    2015-12-01

    Incomplete reporting has been identified as a major source of avoidable waste in biomedical research. Essential information is often not provided in study reports, impeding the identification, critical appraisal, and replication of studies. To improve the quality of reporting of diagnostic accuracy studies, the Standards for Reporting of Diagnostic Accuracy Studies (STARD) statement was developed. Here we present STARD 2015, an updated list of 30 essential items that should be included in every report of a diagnostic accuracy study. This update incorporates recent evidence about sources of bias and variability in diagnostic accuracy and is intended to facilitate the use of STARD. As such, STARD 2015 may help to improve completeness and transparency in reporting of diagnostic accuracy studies.

  6. Lung function parameters improve prediction of VO2peak in an elderly population: The Generation 100 study.

    PubMed

    Hassel, Erlend; Stensvold, Dorthe; Halvorsen, Thomas; Wisløff, Ulrik; Langhammer, Arnulf; Steinshamn, Sigurd

    2017-01-01

    Peak oxygen uptake (VO2peak) is an indicator of cardiovascular health and a useful tool for risk stratification. Direct measurement of VO2peak is resource-demanding and may be contraindicated. There exist several non-exercise models to estimate VO2peak that utilize easily obtainable health parameters, but none of them includes lung function measures or hemoglobin concentrations. We aimed to test whether addition of these parameters could improve prediction of VO2peak compared to an established model that includes age, waist circumference, self-reported physical activity and resting heart rate. We included 1431 subjects aged 69-77 years that completed a laboratory test of VO2peak, spirometry, and a gas diffusion test. Prediction models for VO2peak were developed with multiple linear regression, and goodness of fit was evaluated. Forced expiratory volume in one second (FEV1), diffusing capacity of the lung for carbon monoxide and blood hemoglobin concentration significantly improved the ability of the established model to predict VO2peak. The explained variance of the model increased from 31% to 48% for men and from 32% to 38% for women (p<0.001). FEV1, diffusing capacity of the lungs for carbon monoxide and hemoglobin concentration substantially improved the accuracy of VO2peak prediction when added to an established model in an elderly population.
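
    The reported gain in explained variance corresponds to comparing nested linear regression models with and without the lung-function terms; a minimal sketch with simulated data (scikit-learn assumed; coefficients and distributions are invented):

        import numpy as np
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(2)
        n = 1431
        base = rng.normal(size=(n, 4))   # age, waist, activity, resting HR (toy)
        lung = rng.normal(size=(n, 3))   # FEV1, DLCO, haemoglobin (toy)
        y = base @ np.array([-1.0, -0.8, 0.9, -0.5]) \
            + lung @ np.array([0.7, 0.6, 0.4]) + rng.normal(scale=1.5, size=n)

        # R^2 of the established model vs. the model with lung parameters added.
        r2_base = LinearRegression().fit(base, y).score(base, y)
        full = np.hstack([base, lung])
        r2_full = LinearRegression().fit(full, y).score(full, y)
        print(f"explained variance: {r2_base:.0%} -> {r2_full:.0%}")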

  7. Improvement of the prompt-gamma neutron activation facility at Brookhaven National Laboratory.

    PubMed

    Dilmanian, F A; Lidofsky, L J; Stamatelatos, I; Kamen, Y; Yasumura, S; Vartsky, D; Pierson, R N; Weber, D A; Moore, R I; Ma, R

    1998-02-01

    The prompt-gamma neutron activation facility at Brookhaven National Laboratory was upgraded to improve both the precision and accuracy of its in vivo determinations of total body nitrogen. The upgrade, guided by Monte Carlo simulations, involved elongating and modifying the source collimator and its shielding, repositioning the system's two NaI(Tl) detectors, and improving the neutron and gamma shielding of these detectors. The new source collimator has a graphite reflector around the 238PuBe neutron source to enhance the low-energy region of the neutron spectrum incident on the patient. The gamma detectors have been relocated from positions close to the upward-emerging collimated neutron beam to positions close to and at the sides of the patient. These modifications substantially reduced spurious counts resulting from the capture of small-angle scattered neutrons in the NaI detectors. The pile-up background under the 10.8 MeV 14N(n, gamma)15N spectral peak has been reduced so that the nitrogen peak-to-background ratio has been increased by a factor of 2.8. The resulting reduction in the coefficient of variation of the total body nitrogen measurements from 3% to 2.2% has improved the statistical significance of the results possible for any given number of patient measurements. The new system also has a more uniform composite sensitivity.

  8. Validation of China-wide interpolated daily climate variables from 1960 to 2011

    NASA Astrophysics Data System (ADS)

    Yuan, Wenping; Xu, Bing; Chen, Zhuoqi; Xia, Jiangzhou; Xu, Wenfang; Chen, Yang; Wu, Xiaoxu; Fu, Yang

    2015-02-01

    Temporally and spatially continuous meteorological variables are increasingly in demand to support many different types of applications related to climate studies. Using measurements from 600 climate stations, a thin-plate spline method was applied to generate daily gridded climate datasets for mean air temperature, maximum temperature, minimum temperature, relative humidity, sunshine duration, wind speed, atmospheric pressure, and precipitation over China for the period 1961-2011. A comprehensive evaluation of the interpolated climate was conducted at 150 independent validation sites. The results showed superior performance for most of the estimated variables. Except for wind speed, determination coefficients (R2) varied from 0.65 to 0.90, and interpolations showed high consistency with observations. Most of the estimated climate variables showed relatively consistent accuracy among all seasons according to the root mean square error, R2, and relative predictive error. The interpolated data correctly predicted the occurrence of daily precipitation at validation sites with an accuracy of 83 %. Moreover, the interpolated data successfully explained the interannual variability trend for the eight meteorological variables at most validation sites. Consistent interannual variability trends were observed at 66-95 % of the sites for the eight meteorological variables. Accuracy in distinguishing extreme weather events differed substantially among the meteorological variables. The interpolated data identified extreme events for the three temperature variables, relative humidity, and sunshine duration with an accuracy ranging from 63 to 77 %. However, for wind speed, air pressure, and precipitation, the interpolation model correctly identified only 41, 48, and 58 % of extreme events, respectively. The validation indicates that the interpolations can be applied with high confidence for the three temperature variables, as well as relative humidity and sunshine duration, based on the performance of these variables in estimating daily variations, interannual variability, and extreme events. Although longitude, latitude, and elevation data are included in the model, additional information, such as topography and cloud cover, should be integrated into the interpolation algorithm to improve performance in estimating wind speed, atmospheric pressure, and precipitation.
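
    A minimal sketch of thin-plate-spline interpolation of station data, assuming SciPy 1.7+ for RBFInterpolator; the coordinates and temperatures below are simulated stand-ins, not the study's 600-station dataset:

        import numpy as np
        from scipy.interpolate import RBFInterpolator   # SciPy >= 1.7 assumed

        rng = np.random.default_rng(3)
        # 600 'stations': columns are (longitude, latitude); values are one
        # day's mean air temperature with a simple latitude gradient (toy).
        stations = rng.uniform([73, 18], [135, 53], size=(600, 2))
        temps = 25 - 0.6 * (stations[:, 1] - 18) + rng.normal(0, 0.5, 600)

        tps = RBFInterpolator(stations, temps, kernel="thin_plate_spline")

        # Evaluate at held-out validation sites and report the error.
        validation_sites = rng.uniform([73, 18], [135, 53], size=(150, 2))
        pred = tps(validation_sites)
        truth = 25 - 0.6 * (validation_sites[:, 1] - 18)
        print("RMSE at validation sites:", np.sqrt(np.mean((pred - truth) ** 2)))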

  9. Accuracy of CNV Detection from GWAS Data.

    PubMed

    Zhang, Dandan; Qian, Yudong; Akula, Nirmala; Alliey-Rodriguez, Ney; Tang, Jinsong; Gershon, Elliot S; Liu, Chunyu

    2011-01-13

    Several computer programs are available for detecting copy number variants (CNVs) using genome-wide SNP arrays. We evaluated the performance of four CNV detection software suites--Birdsuite, Partek, HelixTree, and PennCNV-Affy--in the identification of both rare and common CNVs. Each program's performance was assessed in two ways. The first was its recovery rate, i.e., its ability to call 893 CNVs previously identified in eight HapMap samples by paired-end sequencing of whole-genome fosmid clones, and 51,440 CNVs identified by array Comparative Genome Hybridization (aCGH) followed by validation procedures, in 90 HapMap CEU samples. The second evaluation was program performance calling rare and common CNVs in the Bipolar Genome Study (BiGS) data set (1001 bipolar cases and 1033 controls, all of European ancestry) as measured by the Affymetrix SNP 6.0 array. Accuracy in calling rare CNVs was assessed by positive predictive value, based on the proportion of rare CNVs validated by quantitative real-time PCR (qPCR), while accuracy in calling common CNVs was assessed by false positive/false negative rates based on qPCR validation results from a subset of common CNVs. Birdsuite recovered the highest percentages of known HapMap CNVs containing >20 markers in two reference CNV datasets. The recovery rate increased with decreased CNV frequency. In the tested rare CNV data, Birdsuite and Partek had higher positive predictive values than the other software suites. In a test of three common CNVs in the BiGS dataset, Birdsuite's call was 98.8% consistent with qPCR quantification in one CNV region, but the other two regions showed an unacceptable degree of accuracy. We found relatively poor consistency between the two "gold standards," the sequence data of Kidd et al. and the aCGH data of Conrad et al. Algorithms for calling CNVs, especially common ones, need substantial improvement, and a "gold standard" for detection of CNVs remains to be established.

  10. Graphics Processing Unit-Accelerated Nonrigid Registration of MR Images to CT Images During CT-Guided Percutaneous Liver Tumor Ablations.

    PubMed

    Tokuda, Junichi; Plishker, William; Torabi, Meysam; Olubiyi, Olutayo I; Zaki, George; Tatli, Servet; Silverman, Stuart G; Shekher, Raj; Hata, Nobuhiko

    2015-06-01

    Accuracy and speed are essential for the intraprocedural nonrigid magnetic resonance (MR) to computed tomography (CT) image registration in the assessment of tumor margins during CT-guided liver tumor ablations. Although both accuracy and speed can be improved by limiting the registration to a region of interest (ROI), manual contouring of the ROI prolongs the registration process substantially. To achieve accurate and fast registration without the use of an ROI, we combined a nonrigid registration technique on the basis of volume subdivision with hardware acceleration using a graphics processing unit (GPU). We compared the registration accuracy and processing time of GPU-accelerated volume subdivision-based nonrigid registration technique to the conventional nonrigid B-spline registration technique. Fourteen image data sets of preprocedural MR and intraprocedural CT images for percutaneous CT-guided liver tumor ablations were obtained. Each set of images was registered using the GPU-accelerated volume subdivision technique and the B-spline technique. Manual contouring of ROI was used only for the B-spline technique. Registration accuracies (Dice similarity coefficient [DSC] and 95% Hausdorff distance [HD]) and total processing time including contouring of ROIs and computation were compared using a paired Student t test. Accuracies of the GPU-accelerated registrations and B-spline registrations, respectively, were 88.3 ± 3.7% versus 89.3 ± 4.9% (P = .41) for DSC and 13.1 ± 5.2 versus 11.4 ± 6.3 mm (P = .15) for HD. Total processing time of the GPU-accelerated registration and B-spline registration techniques was 88 ± 14 versus 557 ± 116 seconds (P < .000000002), respectively; there was no significant difference in computation time despite the difference in the complexity of the algorithms (P = .71). The GPU-accelerated volume subdivision technique was as accurate as the B-spline technique and required significantly less processing time. The GPU-accelerated volume subdivision technique may enable the implementation of nonrigid registration into routine clinical practice. Copyright © 2015 AUR. Published by Elsevier Inc. All rights reserved.
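
    DSC and 95% Hausdorff distance, the two accuracy measures used above, can be computed directly from label masks; a minimal sketch (SciPy assumed; for simplicity the HD here is taken over all voxels of each mask rather than extracted surfaces):

        import numpy as np
        from scipy.spatial import cKDTree

        def dice(a, b):
            """Dice similarity coefficient between two boolean masks."""
            a, b = a.astype(bool), b.astype(bool)
            return 2 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

        def hd95(points_a, points_b):
            """95th-percentile symmetric Hausdorff distance between two
            point sets (a common robust variant of the Hausdorff distance)."""
            d_ab = cKDTree(points_b).query(points_a)[0]
            d_ba = cKDTree(points_a).query(points_b)[0]
            return np.percentile(np.concatenate([d_ab, d_ba]), 95)

        # Toy example: two overlapping spheres on a small voxel grid.
        z, y, x = np.mgrid[:40, :40, :40]
        m1 = (x - 20) ** 2 + (y - 20) ** 2 + (z - 20) ** 2 < 10 ** 2
        m2 = (x - 22) ** 2 + (y - 20) ** 2 + (z - 20) ** 2 < 10 ** 2
        print(f"DSC  = {dice(m1, m2):.3f}")
        print(f"HD95 = {hd95(np.argwhere(m1), np.argwhere(m2)):.2f} voxels")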

  11. Generating Keywords Improves Metacomprehension and Self-Regulation in Elementary and Middle School Children

    ERIC Educational Resources Information Center

    de Bruin, Anique B. H.; Thiede, Keith W.; Camp, Gino; Redford, Joshua

    2011-01-01

    The ability to monitor understanding of texts, usually referred to as metacomprehension accuracy, is typically quite poor in adult learners; however, recently interventions have been developed to improve accuracy. In two experiments, we evaluated whether generating delayed keywords prior to judging comprehension improved metacomprehension accuracy…

  12. Double-input compartmental modeling and spectral analysis for the quantification of positron emission tomography data in oncology

    NASA Astrophysics Data System (ADS)

    Tomasi, G.; Kimberley, S.; Rosso, L.; Aboagye, E.; Turkheimer, F.

    2012-04-01

    In positron emission tomography (PET) studies involving organs different from the brain, ignoring the metabolite contribution to the tissue time-activity curves (TAC), as in the standard single-input (SI) models, may compromise the accuracy of the estimated parameters. We employed here double-input (DI) compartmental modeling (CM), previously used for [11C]thymidine, and a novel DI spectral analysis (SA) approach on the tracers 5-[18F]fluorouracil (5-[18F]FU) and [18F]fluorothymidine ([18F]FLT). CM and SA were performed initially with a SI approach using the parent plasma TAC as an input function. These methods were then employed using a DI approach with the metabolite plasma TAC as an additional input function. Regions of interest (ROIs) corresponding to healthy liver, kidneys and liver metastases for 5-[18F]FU and to tumor, vertebra and liver for [18F]FLT were analyzed. For 5-[18F]FU, the improvement of the fit quality with the DI approaches was remarkable; in CM, the Akaike information criterion (AIC) always selected the DI over the SI model. Volume of distribution estimates obtained with DI CM and DI SA were in excellent agreement, for both parent 5-[18F]FU (R2 = 0.91) and metabolite [18F]FBAL (R2 = 0.99). For [18F]FLT, the DI methods provided notable improvements but less substantial than for 5-[18F]FU due to the lower rate of metabolism of [18F]FLT. On the basis of the AIC values, agreement between [18F]FLT Ki estimated with the SI and DI models was good (R2 = 0.75) for the ROIs where the metabolite contribution was negligible, indicating that the additional input did not bias the parent tracer only-related estimates. When the AIC suggested a substantial contribution of the metabolite [18F]FLT-glucuronide, on the other hand, the change in the parent tracer only-related parameters was significant (R2 = 0.33 for Ki). Our results indicated that improvements of DI over SI approaches can range from moderate to substantial and are more significant for tracers with a high rate of metabolism. Furthermore, they showed that SA is suitable for DI modeling and can be used effectively in the analysis of PET data.

  13. Global Optimization Ensemble Model for Classification Methods

    PubMed Central

    Anwar, Hina; Qamar, Usman; Muzaffar Qureshi, Abdul Wahab

    2014-01-01

    Supervised learning is the process of data mining for deducing rules from training datasets. A broad array of supervised learning algorithms exists, every one of them with its own advantages and drawbacks. There are some basic issues that affect the accuracy of a classifier while solving a supervised learning problem, like the bias-variance tradeoff, the dimensionality of the input space, and noise in the input data. All these problems affect the accuracy of a classifier and are the reason that there is no global optimal method for classification. There is no generalized improvement method that can increase the accuracy of any classifier while addressing all the problems stated above. This paper proposes a global optimization ensemble model for classification methods (GMC) that can improve the overall accuracy for supervised learning problems. The experimental results on various public datasets showed that the proposed model improved the accuracy of the classification models from 1% to 30% depending upon the algorithm complexity. PMID:24883382

  14. The Effect of Written Corrective Feedback on Grammatical Accuracy of EFL Students: An Improvement over Previous Unfocused Designs

    ERIC Educational Resources Information Center

    Khanlarzadeh, Mobin; Nemati, Majid

    2016-01-01

    The effectiveness of written corrective feedback (WCF) in the improvement of language learners' grammatical accuracy has been a topic of interest in SLA studies for the past couple of decades. The present study reports the findings of a three-month study investigating the effect of direct unfocused WCF on the grammatical accuracy of elementary…

  15. Finite element analysis of transonic flows in cascades: Importance of computational grids in improving accuracy and convergence

    NASA Technical Reports Server (NTRS)

    Ecer, A.; Akay, H. U.

    1981-01-01

    The finite element method is applied for the solution of transonic potential flows through a cascade of airfoils. Convergence characteristics of the solution scheme are discussed. Accuracy of the numerical solutions is investigated for various flow regions in the transonic flow configuration. The design of an efficient finite element computational grid is discussed for improving accuracy and convergence.

  16. Improved fibrosis staging by elastometry and blood test in chronic hepatitis C.

    PubMed

    Calès, Paul; Boursier, Jérôme; Ducancelle, Alexandra; Oberti, Frédéric; Hubert, Isabelle; Hunault, Gilles; de Lédinghen, Victor; Zarski, Jean-Pierre; Salmon, Dominique; Lunel, Françoise

    2014-07-01

    Our main objective was to improve the accuracy of non-invasive fibrosis staging by overcoming the limitations of previous methods through new test combinations. Our secondary objectives were to improve staging precision, by developing a detailed fibrosis classification, and to determine reliability (personalized accuracy). All 729 patients included in the derivation population had chronic hepatitis C, a liver biopsy, 6 blood tests and Fibroscan. The validation populations included 1584 patients. The most accurate combination was provided by using most markers of FibroMeter and Fibroscan results targeted for significant fibrosis, i.e. 'E-FibroMeter'. Its classification accuracy (91.7%) and precision (assessed by F difference with Metavir: 0.62 ± 0.57) were better than those of FibroMeter (84.1%, P < 0.001; 0.72 ± 0.57, P < 0.001), Fibroscan (88.2%, P = 0.011; 0.68 ± 0.57, P = 0.020), and a previous CSF-SF classification of FibroMeter + Fibroscan (86.7%, P < 0.001; 0.65 ± 0.57, P = 0.044). The accuracy for fibrosis absence (F0) increased, e.g. from 16.0% with Fibroscan to 75.0% with E-FibroMeter (P < 0.001). Cirrhosis sensitivity was improved, e.g. E-FibroMeter: 92.7% vs. Fibroscan: 83.3%, P = 0.004. The combination improved reliability by eliminating the unreliable results (accuracy <50%) observed with a single test (1.2% of patients) and by increasing optimal reliability (accuracy ≥85%) from 80.4% of patients with Fibroscan (accuracy: 90.9%) to 94.2% of patients with E-FibroMeter (accuracy: 92.9%), P < 0.001. The rate of patients with 100% predictive values for cirrhosis by the best combination was twice (36.2%) that of the best single test (FibroMeter: 16.2%, P < 0.001). The new test combination increased overall accuracy, especially in patients without fibrosis, as well as staging precision, cirrhosis prediction and even reliability, thus offering improved fibrosis staging. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
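
    The exact E-FibroMeter combination is not given in the abstract; the Python sketch below merely illustrates, on synthetic data, how a blood-test score and an elastography measurement can be combined in a logistic model and compared with either marker alone. All variable names and effect sizes are invented.

      # Hedged sketch: combining a blood-test score with an elastography
      # measurement via logistic regression, then comparing single-marker
      # and combined accuracy. Synthetic data; the real E-FibroMeter
      # combination is not specified in the abstract.
      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)
      n = 400
      fibrosis = rng.integers(0, 2, n)                     # 1 = significant fibrosis (synthetic)
      blood = fibrosis * 1.0 + rng.normal(0, 1.2, n)       # hypothetical blood-test score
      stiffness = fibrosis * 1.5 + rng.normal(0, 1.5, n)   # hypothetical stiffness value

      for name, X in [("blood only", blood[:, None]),
                      ("stiffness only", stiffness[:, None]),
                      ("combined", np.column_stack([blood, stiffness]))]:
          acc = cross_val_score(LogisticRegression(), X, fibrosis, cv=5).mean()
          print(f"{name:15s} accuracy {acc:.3f}")

    Because the two markers carry partly independent information, the combined model typically outperforms either one alone, which is the intuition behind such test combinations.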

  17. [Design and accuracy analysis of upper slicing system of MSCT].

    PubMed

    Jiang, Rongjian

    2013-05-01

    The upper slicing system is a main component of the optical system in MSCT. This paper focuses on the design of the upper slicing system and its accuracy analysis, with the aim of improving imaging accuracy. The errors in slice thickness and ray center introduced by the bearings, screw and control system were analyzed and tested. The measured accumulated error is less than 1 microm and the measured absolute error is less than 10 microm. Improving the accuracy of the upper slicing system contributes to the choice of appropriate treatment methods and to the success rate of treatment.
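
    A simple way to reason about such a mechanical error budget is to combine independent component errors in quadrature, as in the Python sketch below; the per-component values are hypothetical and not measurements from the paper.

      # Hypothetical error-budget sketch: combine independent positioning
      # error contributions (bearings, screw, control) in quadrature;
      # values are illustrative, not measurements from the paper.
      import math

      errors_um = {"bearings": 0.4, "screw": 0.6, "control": 0.5}  # assumed 1-sigma, micrometers

      rss = math.sqrt(sum(e**2 for e in errors_um.values()))  # root-sum-square
      worst_case = sum(errors_um.values())                    # simple linear sum

      print(f"root-sum-square error: {rss:.2f} um (vs worst-case {worst_case:.2f} um)")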

  18. Method to improve accuracy of positioning object by eLoran system with applying standard Kalman filter

    NASA Astrophysics Data System (ADS)

    Grunin, A. P.; Kalinov, G. A.; Bolokhovtsev, A. V.; Sai, S. V.

    2018-05-01

    This article reports on a novel method to improve the accuracy of positioning an object with a low-frequency hyperbolic radio navigation system such as eLoran. The method is based on the application of the standard Kalman filter. The effects of the filter parameters and of the type of movement on the accuracy of the vehicle position estimate are investigated. The accuracy of the method was evaluated by separating data from the semi-empirical movement model into different types of movement.
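
    A standard linear Kalman filter of the kind referred to above can be sketched in Python as follows, here with a constant-velocity state model smoothing noisy 2-D position fixes; the process and measurement noise levels, the motion model and the simulated track are all assumptions, not the paper's configuration.

      # Minimal sketch of a standard Kalman filter with a constant-velocity
      # model, smoothing noisy 2-D position fixes such as those from a
      # hyperbolic navigation solution. All noise levels are assumptions.
      import numpy as np

      dt = 1.0
      F = np.array([[1, 0, dt, 0],
                    [0, 1, 0, dt],
                    [0, 0, 1,  0],
                    [0, 0, 0,  1]], float)   # state: [x, y, vx, vy]
      H = np.array([[1, 0, 0, 0],
                    [0, 1, 0, 0]], float)    # only position is measured
      Q = 0.01 * np.eye(4)                   # process noise (assumed)
      R = 25.0 * np.eye(2)                   # measurement noise (assumed)

      x = np.zeros(4)                        # initial state
      P = 100.0 * np.eye(4)                  # initial covariance

      def kalman_step(x, P, z):
          # predict
          x = F @ x
          P = F @ P @ F.T + Q
          # update with measurement z
          S = H @ P @ H.T + R
          K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
          x = x + K @ (z - H @ x)
          P = (np.eye(4) - K @ H) @ P
          return x, P

      rng = np.random.default_rng(1)
      truth = np.column_stack([np.arange(50) * 2.0, np.arange(50) * 1.0])
      fixes = truth + rng.normal(0, 5.0, truth.shape)   # raw noisy fixes
      for z in fixes:
          x, P = kalman_step(x, P, z)
      print("last raw-fix error %.1f m, filtered error %.1f m"
            % (np.linalg.norm(fixes[-1] - truth[-1]), np.linalg.norm(x[:2] - truth[-1])))

    Tuning Q and R against the actual movement type is exactly the kind of parameter study the abstract describes: a Q that is too small makes the filter sluggish during maneuvers, while one that is too large passes measurement noise through.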

  19. PAIR Comparison between Two Within-Group Conditions of Resting-State fMRI Improves Classification Accuracy

    PubMed Central

    Zhou, Zhen; Wang, Jian-Bao; Zang, Yu-Feng; Pan, Gang

    2018-01-01

    Classification approaches have been increasingly applied to differentiate patients and normal controls using resting-state functional magnetic resonance imaging (RS-fMRI) data. Although most previous classification studies have reported promising accuracy within individual datasets, achieving high levels of accuracy across multiple datasets remains challenging for two main reasons: high dimensionality and high variability across subjects. We used two independent RS-fMRI datasets (n = 31 and 46, respectively), both with eyes closed (EC) and eyes open (EO) conditions. For each dataset, we first reduced the number of features to a small number of brain regions with paired t-tests, using the amplitude of low frequency fluctuation (ALFF) as a metric. Second, we employed a new method for feature extraction, named the PAIR method, examining EC and EO as paired conditions rather than independent conditions. Specifically, for each dataset, we obtained EC minus EO (EC-EO) maps of ALFF from half of the subjects (n = 15 for dataset-1, n = 23 for dataset-2) and EO-EC maps from the other half (n = 16 for dataset-1, n = 23 for dataset-2). A support vector machine (SVM) was used for classification of EC RS-fMRI mapping and EO mapping. The mean classification accuracy of the PAIR method was 91.40% for dataset-1 and 92.75% for dataset-2 in the conventional frequency band of 0.01–0.08 Hz. For cross-dataset validation, we applied the classifier from dataset-1 directly to dataset-2, and vice versa. The mean accuracy of cross-dataset validation was 94.93% for dataset-1 to dataset-2 and 90.32% for dataset-2 to dataset-1 in the 0.01–0.08 Hz range. For the UNPAIR method, classification accuracy was substantially lower (mean 69.89% for dataset-1 and 82.97% for dataset-2), and was much lower still for cross-dataset validation (64.69% for dataset-1 to dataset-2 and 64.98% for dataset-2 to dataset-1) in the 0.01–0.08 Hz range. In conclusion, for within-group design studies (e.g., paired conditions or follow-up studies), we recommend the PAIR method for feature extraction. In addition, dimensionality reduction with strong prior knowledge of specific brain regions should also be considered for feature selection in neuroimaging studies. PMID:29375288
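
    The pipeline described above (paired t-tests for region selection, then an SVM on EC vs. EO maps) can be sketched on synthetic data as in the Python example below; the subject and region counts, effect sizes and p-value threshold are assumptions, and real ALFF maps would replace the simulated matrices.

      # Hedged sketch of the described pipeline on synthetic data: paired
      # t-tests across EC/EO conditions select discriminative "regions",
      # then a linear SVM classifies EC vs EO maps. Data are simulated;
      # the actual ALFF maps and region definitions come from the study.
      import numpy as np
      from scipy.stats import ttest_rel
      from sklearn.model_selection import cross_val_score
      from sklearn.svm import SVC

      rng = np.random.default_rng(0)
      n_subj, n_regions = 30, 200
      ec = rng.normal(0, 1, (n_subj, n_regions))        # simulated EC ALFF
      eo = ec + rng.normal(0, 1, (n_subj, n_regions))
      eo[:, :10] += 1.0                                  # 10 truly different regions

      # feature selection: paired t-test on half the subjects only,
      # so selection does not peek at the classification data
      sel_half = slice(0, n_subj // 2)
      t, p = ttest_rel(ec[sel_half], eo[sel_half], axis=0)
      features = np.where(p < 0.01)[0]

      # classify EC vs EO maps of the held-out half on the selected regions
      rest = slice(n_subj // 2, n_subj)
      X = np.vstack([ec[rest][:, features], eo[rest][:, features]])
      y = np.r_[np.zeros(n_subj - n_subj // 2), np.ones(n_subj - n_subj // 2)]
      print("CV accuracy: %.2f" % cross_val_score(SVC(kernel="linear"), X, y, cv=5).mean())

    Exploiting the paired structure, as the PAIR method does with subtraction maps, removes much of the between-subject variability that otherwise dominates the features.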

  20. Prediction accuracies for growth and wood attributes of interior spruce in space using genotyping-by-sequencing.

    PubMed

    Gamal El-Dien, Omnia; Ratcliffe, Blaise; Klápště, Jaroslav; Chen, Charles; Porth, Ilga; El-Kassaby, Yousry A

    2015-05-09

    Genomic selection (GS) in forestry can substantially shorten the breeding cycle and increase gain per unit time through early selection and greater selection intensity, particularly for traits of low heritability and late expression. Affordable next-generation sequencing technologies have made it possible to genotype large numbers of trees at reasonable cost. Genotyping-by-sequencing was used to genotype 1,126 Interior spruce trees representing 25 open-pollinated families planted over three sites in British Columbia, Canada. Four imputation algorithms were compared: mean value (MI), singular value decomposition (SVD), expectation maximization (EM), and a newly derived, family-based k-nearest neighbor (kNN-Fam). Trees were phenotyped for several yield and wood attributes. Single- and multi-site GS prediction models were developed using the Ridge Regression Best Linear Unbiased Predictor (RR-BLUP) and Generalized Ridge Regression (GRR) to test different assumptions about trait architecture. Finally, using PCA, multi-trait GS prediction models were developed. The EM and kNN-Fam imputation methods were superior for 30 and 60% missing data, respectively. The RR-BLUP GS prediction model produced better accuracies than the GRR, indicating that the genetic architecture of these traits is complex. Multi-site GS prediction accuracies were high and better than those of single sites, while cross-site predictability produced the lowest accuracies, reflecting type-b genetic correlations, and was deemed unreliable. The incorporation of genomic information in quantitative genetics analyses produced more realistic heritability estimates, as the half-sib pedigree tended to inflate the additive genetic variance and consequently both heritability and gain estimates. Principal component scores as representatives of multi-trait GS prediction models produced the surprising result that negatively correlated traits could be concurrently selected for using PCA2 and PCA3. The application of GS to open-pollinated family testing, the simplest form of tree improvement evaluation, proved effective. The prediction accuracies obtained for all traits strongly support the integration of GS in tree breeding. While within-site GS prediction accuracies were high, the results clearly indicate that the ability of single-site GS models to predict other sites is unreliable, supporting the use of a multi-site approach. Principal component scores provided an opportunity for the concurrent selection of traits with different phenotypic optima.
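
    RR-BLUP can be viewed as ridge regression of phenotypes on marker genotypes with a shrinkage parameter tied to heritability; the Python sketch below illustrates this on simulated data, reporting prediction accuracy as the cross-validated correlation between predicted and observed phenotypes. The marker counts, effect sizes and heritability are invented, not values from the study.

      # Hedged sketch of RR-BLUP-style genomic prediction on simulated data:
      # ridge regression of a phenotype on SNP markers coded 0/1/2, with
      # prediction accuracy taken as the correlation between predicted and
      # observed phenotypes under cross-validation.
      import numpy as np
      from sklearn.linear_model import Ridge
      from sklearn.model_selection import cross_val_predict

      rng = np.random.default_rng(0)
      n_trees, n_snps = 500, 2000
      M = rng.integers(0, 3, (n_trees, n_snps)).astype(float)   # marker matrix
      beta = np.zeros(n_snps)
      beta[rng.choice(n_snps, 100, replace=False)] = rng.normal(0, 0.2, 100)
      h2 = 0.3                                                  # assumed heritability
      g = M @ beta                                              # true breeding values
      y = g + rng.normal(0, g.std() * np.sqrt((1 - h2) / h2), n_trees)

      model = Ridge(alpha=n_snps * (1 - h2) / h2)  # shrinkage tied to h2, as in RR-BLUP
      y_hat = cross_val_predict(model, M, y, cv=10)
      print("prediction accuracy r = %.2f" % np.corrcoef(y, y_hat)[0, 1])

    The uniform shrinkage across markers is what encodes the "many small effects" assumption; GRR relaxes it with marker-specific penalties, which the study found less accurate for these complex traits.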
