Sample records for "correction improved prediction"

  1. Correcting Memory Improves Accuracy of Predicted Task Duration

    ERIC Educational Resources Information Center

    Roy, Michael M.; Mitten, Scott T.; Christenfeld, Nicholas J. S.

    2008-01-01

    People are often inaccurate in predicting task duration. The memory bias explanation holds that this error is due to people having incorrect memories of how long previous tasks have taken, and these biased memories cause biased predictions. Therefore, the authors examined the effect on increasing predictive accuracy of correcting memory through…

  2. [Improving apple fruit quality predictions by effective correction of Vis-NIR laser diffuse reflecting images].

    PubMed

    Qing, Zhao-shen; Ji, Bao-ping; Shi, Bo-lin; Zhu, Da-zhou; Tu, Zhen-hua; Zude, Manuela

    2008-06-01

    In the present study, improved laser-induced light backscattering imaging was studied regarding its potential for analyzing apple SSC and fruit flesh firmness. Images of the diffuse reflection of light on the fruit surface were obtained from Fuji apples using laser diodes emitting at five wavelength bands (680, 780, 880, 940 and 980 nm). Image processing algorithms were tested to correct for dissimilar equator and shape of fruit, and partial least squares (PLS) regression analysis was applied to calibrate on the fruit quality parameter. In comparison to the calibration based on corrected frequency with the models built by raw data, the former improved r from 0.78 to 0.80 and from 0.87 to 0.89 for predicting SSC and firmness, respectively. Comparing models based on mean value of intensities with results obtained by frequency of intensities, the latter gave higher performance for predicting Fuji SSC and firmness. Comparing calibration for predicting SSC based on the corrected frequency of intensities and the results obtained from raw data set, the former improved the root mean square error of prediction (RMSEP) from 1.28 to 0.84 degrees Brix. On the other hand, in comparison to models for analyzing flesh firmness built by means of corrected frequency of intensities with the calibrations based on raw data, the former gave an improvement in RMSEP from 8.23 to 6.17 N x cm(-2).
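
The calibration step in record 2 is standard PLS regression from image-derived intensity features to fruit quality. A minimal sketch of that workflow, using synthetic data in place of the paper's intensity-frequency features (the feature counts, component number, and quality values are assumptions, not the paper's data):

```python
# Hypothetical PLS calibration sketch: regress SSC (degrees Brix) on
# five wavelength-band features, then estimate r and RMSEP by cross-validation.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 5))                                  # one feature per laser band (synthetic)
y = 12 + X @ rng.normal(size=5) + 0.3 * rng.normal(size=60)   # stand-in SSC values

pls = PLSRegression(n_components=3)
y_hat = cross_val_predict(pls, X, y, cv=5).ravel()
rmsep = np.sqrt(np.mean((y - y_hat) ** 2))                    # root mean square error of prediction
r = np.corrcoef(y, y_hat)[0, 1]
print(f"r = {r:.2f}, RMSEP = {rmsep:.2f} degrees Brix")
```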

  3. Impact of creatine kinase correction on the predictive value of S-100B after mild traumatic brain injury.

    PubMed

    Bazarian, Jeffrey J; Beck, Christopher; Blyth, Brian; von Ahsen, Nicolas; Hasselblatt, Martin

    2006-01-01

    To validate a correction factor for the extracranial release of the astroglial protein, S-100B, based on concomitant creatine kinase (CK) levels. The CK-S-100B relationship in non-head injured marathon runners was used to derive a correction factor for the extracranial release of S-100B. This factor was then applied to a separate cohort of 96 mild traumatic brain injury (TBI) patients in whom both CK and S-100B levels were measured. Corrected S-100B was compared to uncorrected S-100B for the prediction of initial head CT, three-month headache and three-month post-concussive syndrome (PCS). Corrected S-100B resulted in a statistically significant improvement in the prediction of 3-month headache (area under curve [AUC] 0.46 vs 0.52, p=0.02), but not PCS or initial head CT. Using a cutoff that maximizes sensitivity (> or = 90%), corrected S-100B improved the prediction of initial head CT scan (negative predictive value from 75% [95% CI: 2.6%, 67.0%] to 96% [95% CI: 83.5%, 99.8%]). Although S-100B is overall poorly predictive of outcome, a correction factor using CK is a valid means of accounting for extracranial release. By increasing the proportion of mild TBI patients correctly categorized as low risk for abnormal head CT, CK-corrected S-100B can further reduce the number of unnecessary brain CT scans performed after this injury.

  4. A two-dimensional matrix correction for off-axis portal dose prediction errors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bailey, Daniel W.; Department of Radiation Medicine, Roswell Park Cancer Institute, Buffalo, New York 14263; Kumaraswamy, Lalith

    2013-05-15

    Purpose: This study presents a follow-up to a modified calibration procedure for portal dosimetry published by Bailey et al. ['An effective correction algorithm for off-axis portal dosimetry errors,' Med. Phys. 36, 4089-4094 (2009)]. A commercial portal dose prediction system exhibits disagreement of up to 15% (calibrated units) between measured and predicted images as off-axis distance increases. The previous modified calibration procedure accounts for these off-axis effects in most regions of the detecting surface, but is limited by the simplistic assumption of radial symmetry. Methods: We find that a two-dimensional (2D) matrix correction, applied to each calibrated image, accounts for off-axis prediction errors in all regions of the detecting surface, including those still problematic after the radial correction is performed. The correction matrix is calculated by quantitative comparison of predicted and measured images that span the entire detecting surface. The correction matrix was verified for dose-linearity, and its effectiveness was verified on a number of test fields. The 2D correction was employed to retrospectively examine 22 off-axis, asymmetric electronic-compensation breast fields, five intensity-modulated brain fields (moderate-high modulation) manipulated for far off-axis delivery, and 29 intensity-modulated clinical fields of varying complexity in the central portion of the detecting surface. Results: Employing the matrix correction to the off-axis test fields and clinical fields, predicted vs measured portal dose agreement improves by up to 15%, producing up to 10% better agreement than the radial correction in some areas of the detecting surface. Gamma evaluation analyses (3 mm, 3% global, 10% dose threshold) of predicted vs measured portal dose images demonstrate pass rate improvement of up to 75% with the matrix correction, producing pass rates that are up to 30% higher than those resulting from the radial correction technique alone. As in the 1D correction case, the 2D algorithm leaves the portal dosimetry process virtually unchanged in the central portion of the detector, and thus these correction algorithms are not needed for centrally located fields of moderate size (at least, in the case of 6 MV beam energy). Conclusion: The 2D correction improves the portal dosimetry results for those fields for which the 1D correction proves insufficient, especially in the inplane, off-axis regions of the detector. This 2D correction neglects the relatively smaller discrepancies that may be caused by backscatter from nonuniform machine components downstream from the detecting layer.
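
At its core, the matrix correction in record 4 is a pixel-wise field derived by comparing measured and predicted calibration images, then applied multiplicatively to each new image. A hedged sketch of that idea (not the authors' code; the ratio form and the averaging over calibration fields are assumptions):

```python
# Minimal sketch of a 2D matrix correction for portal dose images.
import numpy as np

def build_correction_matrix(measured, predicted, eps=1e-6):
    """Element-wise measured/predicted ratio, averaged over calibration fields."""
    ratios = [m / np.maximum(p, eps) for m, p in zip(measured, predicted)]
    return np.mean(ratios, axis=0)

def apply_correction(image, correction):
    """Apply the correction matrix to a calibrated portal image."""
    return image * correction

# usage with synthetic 384x512 portal images
rng = np.random.default_rng(1)
pred = [np.ones((384, 512)) for _ in range(3)]
meas = [p * (1 + 0.1 * rng.random((384, 512))) for p in pred]
C = build_correction_matrix(meas, pred)
corrected = apply_correction(pred[0], C)
```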

  5. Optimizing wavefront-guided corrections for highly aberrated eyes in the presence of registration uncertainty

    PubMed Central

    Shi, Yue; Queener, Hope M.; Marsack, Jason D.; Ravikumar, Ayeswarya; Bedell, Harold E.; Applegate, Raymond A.

    2013-01-01

    Dynamic registration uncertainty of a wavefront-guided correction with respect to underlying wavefront error (WFE) inevitably decreases retinal image quality. A partial correction may improve average retinal image quality and visual acuity in the presence of registration uncertainties. The purpose of this paper is to (a) develop an algorithm to optimize wavefront-guided correction that improves visual acuity given registration uncertainty and (b) test the hypothesis that these corrections provide improved visual performance in the presence of these uncertainties as compared to a full-magnitude correction or a correction by Guirao, Cox, and Williams (2002). A stochastic parallel gradient descent (SPGD) algorithm was used to optimize the partial-magnitude correction for three keratoconic eyes based on measured scleral contact lens movement. Given its high correlation with logMAR acuity, the retinal image quality metric log visual Strehl was used as a predictor of visual acuity. Predicted values of visual acuity with the optimized corrections were validated by regressing measured acuity loss against predicted loss. Measured loss was obtained from normal subjects viewing acuity charts that were degraded by the residual aberrations generated by the movement of the full-magnitude correction, the correction by Guirao, and optimized SPGD correction. Partial-magnitude corrections optimized with an SPGD algorithm provide at least one line improvement of average visual acuity over the full magnitude and the correction by Guirao given the registration uncertainty. This study demonstrates that it is possible to improve the average visual acuity by optimizing wavefront-guided correction in the presence of registration uncertainty. PMID:23757512
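
Record 5 tunes the partial-magnitude correction with stochastic parallel gradient descent (SPGD): all coefficients are perturbed simultaneously and updated from the induced change in an image-quality metric. A generic SPGD loop under stated assumptions (the true objective, log visual Strehl evaluated over measured lens movement, is replaced here by a stand-in quadratic; the gain and perturbation size are made up):

```python
# Generic SPGD: perturb all coefficients in parallel, probe the metric J on
# both sides, and step in the direction that increases J.
import numpy as np

def spgd(J, c0, gain=0.5, sigma=0.01, iters=500, seed=0):
    rng = np.random.default_rng(seed)
    c = c0.copy()
    for _ in range(iters):
        delta = sigma * rng.choice([-1.0, 1.0], size=c.size)  # parallel perturbation
        dJ = J(c + delta) - J(c - delta)                      # two-sided probe
        c += gain * dJ * delta                                # ascend the metric
    return c

target = np.array([0.3, -0.1, 0.05])        # stand-in "optimal" coefficients
J = lambda c: -np.sum((c - target) ** 2)    # higher is better
c_opt = spgd(J, np.zeros(3))
```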

  6. SU-F-J-219: Predicting Ventilation Change Due to Radiation Therapy: Dependency On Pre-RT Ventilation and Effort Correction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patton, T; Du, K; Bayouth, J

    Purpose: Ventilation change caused by radiation therapy (RT) can be predicted using four-dimensional computed tomography (4DCT) and image registration. This study tested the dependency of predicted post-RT ventilation on effort correction and pre-RT lung function. Methods: Pre-RT and 3 month post-RT 4DCT images were obtained for 13 patients. The 4DCT images were used to create ventilation maps using a deformable image registration based Jacobian expansion calculation. The post-RT ventilation maps were predicted in four different ways using the dose delivered, pre-RT ventilation, and effort correction. The pre-RT ventilation and effort correction were toggled to determine dependency. The four different predicted ventilation maps were compared to the post-RT ventilation map calculated from image registration to establish the best prediction method. Gamma pass rates were used to compare the different maps with the criteria of 2mm distance-to-agreement and 6% ventilation difference. Paired t-tests of gamma pass rates were used to determine significant differences between the maps. Additional gamma pass rates were calculated using only voxels receiving over 20 Gy. Results: The predicted post-RT ventilation maps were in agreement with the actual post-RT maps in the following percentage of voxels averaged over all subjects: 71% with pre-RT ventilation and effort correction, 69% with no pre-RT ventilation and effort correction, 60% with pre-RT ventilation and no effort correction, and 58% with no pre-RT ventilation and no effort correction. When analyzing only voxels receiving over 20 Gy, the gamma pass rates were respectively 74%, 69%, 65%, and 55%. The prediction including both pre-RT ventilation and effort correction was the only prediction with significant improvement over using no prediction (p<0.02). Conclusion: Post-RT ventilation is best predicted using both pre-RT ventilation and effort correction. This is the only prediction that provided a significant improvement on agreement. Research support from NIH grants CA166119 and CA166703, a gift from Roger Koch, and a Pilot Grant from University of Iowa Carver College of Medicine.
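
The gamma comparison in record 6 (2 mm distance-to-agreement, 6% ventilation difference) can be written as a brute-force neighbourhood search. This sketch assumes a 2D map and uniform voxel spacing; real implementations are 3D and heavily optimized:

```python
# Simplified 2D gamma pass rate: a voxel passes if some nearby voxel combines
# small spatial distance (vs. the DTA) with small ventilation difference (vs. diff).
import numpy as np

def gamma_pass_rate(ref, test, spacing_mm=1.0, dta_mm=2.0, diff=0.06):
    r = int(np.ceil(dta_mm / spacing_mm))
    nx, ny = ref.shape
    passed = 0
    for i in range(nx):
        for j in range(ny):
            best = np.inf
            for di in range(-r, r + 1):
                for dj in range(-r, r + 1):
                    ii, jj = i + di, j + dj
                    if 0 <= ii < nx and 0 <= jj < ny:
                        dist2 = (di ** 2 + dj ** 2) * spacing_mm ** 2
                        dv2 = (test[ii, jj] - ref[i, j]) ** 2
                        best = min(best, dist2 / dta_mm ** 2 + dv2 / diff ** 2)
            passed += best <= 1.0
    return passed / ref.size

ref = np.random.default_rng(2).random((20, 20))   # stand-in ventilation maps
print(gamma_pass_rate(ref, ref + 0.03))
```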

  7. An improved method to detect correct protein folds using partial clustering.

    PubMed

    Zhou, Jianjun; Wishart, David S

    2013-01-16

    Structure-based clustering is commonly used to identify correct protein folds among candidate folds (also called decoys) generated by protein structure prediction programs. However, traditional clustering methods exhibit a poor runtime performance on large decoy sets. We hypothesized that a more efficient "partial" clustering approach in combination with an improved scoring scheme could significantly improve both the speed and performance of existing candidate selection methods. We propose a new scheme that performs rapid but incomplete clustering on protein decoys. Our method detects structurally similar decoys (measured using either Cα RMSD or GDT-TS score) and extracts representatives from them without assigning every decoy to a cluster. We integrated our new clustering strategy with several different scoring functions to assess both the performance and speed in identifying correct or near-correct folds. Experimental results on 35 Rosetta decoy sets and 40 I-TASSER decoy sets show that our method can improve the correct fold detection rate as assessed by two different quality criteria. This improvement is significantly better than two recently published clustering methods, Durandal and Calibur-lite. Speed and efficiency testing shows that our method can handle much larger decoy sets and is up to 22 times faster than Durandal and Calibur-lite. The new method, named HS-Forest, avoids the computationally expensive task of clustering every decoy, yet still allows superior correct-fold selection. Its improved speed, efficiency and decoy-selection performance should enable structure prediction researchers to work with larger decoy sets and significantly improve their ab initio structure prediction performance.

  8. An improved method to detect correct protein folds using partial clustering

    PubMed Central

    2013-01-01

    Background Structure-based clustering is commonly used to identify correct protein folds among candidate folds (also called decoys) generated by protein structure prediction programs. However, traditional clustering methods exhibit a poor runtime performance on large decoy sets. We hypothesized that a more efficient “partial” clustering approach in combination with an improved scoring scheme could significantly improve both the speed and performance of existing candidate selection methods. Results We propose a new scheme that performs rapid but incomplete clustering on protein decoys. Our method detects structurally similar decoys (measured using either Cα RMSD or GDT-TS score) and extracts representatives from them without assigning every decoy to a cluster. We integrated our new clustering strategy with several different scoring functions to assess both the performance and speed in identifying correct or near-correct folds. Experimental results on 35 Rosetta decoy sets and 40 I-TASSER decoy sets show that our method can improve the correct fold detection rate as assessed by two different quality criteria. This improvement is significantly better than two recently published clustering methods, Durandal and Calibur-lite. Speed and efficiency testing shows that our method can handle much larger decoy sets and is up to 22 times faster than Durandal and Calibur-lite. Conclusions The new method, named HS-Forest, avoids the computationally expensive task of clustering every decoy, yet still allows superior correct-fold selection. Its improved speed, efficiency and decoy-selection performance should enable structure prediction researchers to work with larger decoy sets and significantly improve their ab initio structure prediction performance. PMID:23323835
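
The partial-clustering idea in records 7 and 8, scoring a handful of representatives by neighbour counts rather than assigning every decoy to a cluster, can be illustrated as follows (a plain coordinate RMSD on pre-aligned decoys; HS-Forest's actual sampling scheme and data structures are more involved):

```python
# Illustrative "partial" clustering: sample representatives, count how many
# decoys fall within an RMSD cutoff of each, and keep the densest one.
import numpy as np

def rmsd(a, b):
    """Coordinate RMSD between two pre-aligned (N, 3) structures."""
    return np.sqrt(np.mean(np.sum((a - b) ** 2, axis=1)))

def partial_cluster(decoys, cutoff=4.0, n_reps=10, seed=0):
    rng = np.random.default_rng(seed)
    reps = rng.choice(len(decoys), size=min(n_reps, len(decoys)), replace=False)
    counts = {int(r): sum(rmsd(decoys[r], d) < cutoff for d in decoys) for r in reps}
    best = max(counts, key=counts.get)   # densest representative
    return best, counts

decoys = [np.random.default_rng(i).normal(size=(50, 3)) for i in range(100)]
best, counts = partial_cluster(decoys)
```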

  9. Extended Glauert tip correction to include vortex rollup effects

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maniaci, David; Schmitz, Sven

    Wind turbine loads predictions by blade-element momentum theory using the standard tip-loss correction have been shown to over-predict loading near the blade tip in comparison to experimental data. This over-prediction is theorized to be due to the assumption of light rotor loading, inherent in the standard tip-loss correction model of Glauert. A higher-order free-wake method, WindDVE, is used to compute the rollup process of the trailing vortex sheets downstream of wind turbine blades. Results obtained serve as an exact correction function to the Glauert tip correction used in blade-element momentum methods. Lastly, it is found that accounting for the effects of tip vortex rollup within the Glauert tip correction indeed results in improved prediction of blade tip loads computed by blade-element momentum methods.

  10. Extended Glauert tip correction to include vortex rollup effects

    DOE PAGES

    Maniaci, David; Schmitz, Sven

    2016-10-03

    Wind turbine loads predictions by blade-element momentum theory using the standard tip-loss correction have been shown to over-predict loading near the blade tip in comparison to experimental data. This over-prediction is theorized to be due to the assumption of light rotor loading, inherent in the standard tip-loss correction model of Glauert. A higher-order free-wake method, WindDVE, is used to compute the rollup process of the trailing vortex sheets downstream of wind turbine blades. Results obtained serve as an exact correction function to the Glauert tip correction used in blade-element momentum methods. Lastly, it is found that accounting for the effects of tip vortex rollup within the Glauert tip correction indeed results in improved prediction of blade tip loads computed by blade-element momentum methods.
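
Records 9 and 10 extend the classic Prandtl tip-loss factor used in Glauert's blade-element momentum correction. The baseline factor they start from is the standard formula below; the rollup-corrected function derived from WindDVE is not given in the abstract:

```python
# Standard Prandtl tip-loss factor F used in blade-element momentum theory.
import numpy as np

def prandtl_tip_loss(B, r, R, phi):
    """B: blade count, r: local radius, R: tip radius, phi: inflow angle (rad)."""
    f = B * (R - r) / (2.0 * r * np.sin(phi))
    return (2.0 / np.pi) * np.arccos(np.exp(-f))

# example: 3-bladed rotor, station at 95% span of a 63 m blade, 5 deg inflow angle
F = prandtl_tip_loss(B=3, r=0.95 * 63.0, R=63.0, phi=np.deg2rad(5.0))
print(F)
```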

  11. Revisiting Hansen Solubility Parameters by Including Thermodynamics.

    PubMed

    Louwerse, Manuel J; Maldonado, Ana; Rousseau, Simon; Moreau-Masselon, Chloe; Roux, Bernard; Rothenberg, Gadi

    2017-11-03

    The Hansen solubility parameter approach is revisited by implementing the thermodynamics of dissolution and mixing. Hansen's pragmatic approach has earned its spurs in predicting solvents for polymer solutions, but for molecular solutes improvements are needed. By going into the details of entropy and enthalpy, several corrections are suggested that make the methodology thermodynamically sound without losing its ease of use. The most important corrections include accounting for the solvent molecules' size, the destruction of the solid's crystal structure, and the specificity of hydrogen-bonding interactions, as well as opportunities to predict the solubility at extrapolated temperatures. Testing the original and the improved methods on a large industrial dataset including solvent blends, fit qualities improved from 0.89 to 0.97 and the percentage of correct predictions rose from 54% to 78%. Full Matlab scripts are included in the Supporting Information, allowing readers to implement these improvements on their own datasets. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
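
Record 11 builds on the classic Hansen framework, in which solvent-solute affinity is judged by a distance in (deltaD, deltaP, deltaH) space. A worked example of that baseline quantity (the thermodynamic corrections themselves live in the paper's Matlab scripts; the parameters below are standard tabulated literature values):

```python
# Hansen distance Ra between two species (all parameters in MPa^0.5):
# Ra^2 = 4*(dD1-dD2)^2 + (dP1-dP2)^2 + (dH1-dH2)^2
import math

def hansen_distance(s1, s2):
    dD1, dP1, dH1 = s1
    dD2, dP2, dH2 = s2
    return math.sqrt(4 * (dD1 - dD2) ** 2 + (dP1 - dP2) ** 2 + (dH1 - dH2) ** 2)

acetone = (15.5, 10.4, 7.0)
water = (15.5, 16.0, 42.3)
print(hansen_distance(acetone, water))  # larger Ra = lower predicted affinity in the Hansen scheme
```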

  12. Impacts of Earth rotation parameters on GNSS ultra-rapid orbit prediction: Derivation and real-time correction

    NASA Astrophysics Data System (ADS)

    Wang, Qianxin; Hu, Chao; Xu, Tianhe; Chang, Guobin; Hernández Moraleda, Alberto

    2017-12-01

    Analysis centers (ACs) for global navigation satellite systems (GNSSs) cannot accurately obtain real-time Earth rotation parameters (ERPs). Thus, the prediction of ultra-rapid orbits in the international terrestrial reference system (ITRS) has to utilize the predicted ERPs issued by the International Earth Rotation and Reference Systems Service (IERS) or the International GNSS Service (IGS). In this study, the accuracy of ERPs predicted by IERS and IGS is analyzed. The error of the ERPs predicted for one day can reach 0.15 mas and 0.053 ms in polar motion and UT1-UTC direction, respectively. Then, the impact of ERP errors on ultra-rapid orbit prediction by GNSS is studied. The orbit integration and frame transformation steps of orbit prediction, into which ERP errors are introduced, dominate the accuracy of the predicted orbit. Experimental results show that the transformation from the geocentric celestial reference system (GCRS) to ITRS exerts the strongest effect on the accuracy of the predicted ultra-rapid orbit. To obtain the most accurate predicted ultra-rapid orbit, a corresponding real-time orbit correction method is developed. First, orbits without ERP-related errors are predicted on the basis of the observed part of the ultra-rapid orbit in ITRS, for use as a reference. Then, the corresponding predicted orbit is transformed from GCRS to ITRS to adjust for the predicted ERPs. Finally, the corrected ERPs with error slopes are re-introduced to correct the predicted orbit in ITRS. To validate the proposed method, three experimental schemes are designed: function extrapolation, simulation experiments, and experiments with predicted ultra-rapid orbits and international GNSS Monitoring and Assessment System (iGMAS) products. Experimental results show that using the proposed correction method with IERS products considerably improved the accuracy of ultra-rapid orbit prediction (except for the geosynchronous BeiDou orbits). The accuracy of orbit prediction is enhanced by at least 50% (error related to ERP) when a highly accurate observed orbit is used with the correction method. For iGMAS-predicted orbits, the accuracy improvement ranges from 8.5% for the inclined BeiDou orbits to 17.99% for the GPS orbits. This demonstrates that the correction method proposed in this study can optimize ultra-rapid orbit prediction.

  13. Improve the prediction of RNA-binding residues using structural neighbours.

    PubMed

    Li, Quan; Cao, Zanxia; Liu, Haiyan

    2010-03-01

    The interactions of RNA-binding proteins (RBPs) with RNA play key roles in managing some of the cell's basic functions. The identification and prediction of RNA binding sites is important for understanding the RNA-binding mechanism. Computational approaches are being developed to predict RNA-binding residues based on sequence- or structure-derived features. To achieve higher prediction accuracy, improvements on current prediction methods are necessary. We identified that the structural neighbors of RNA-binding and non-RNA-binding residues have different amino acid compositions. Combining this structure-derived feature with evolutionary (PSSM) and other structural information (secondary structure and solvent accessibility) significantly improves the predictions over existing methods. Using a multiple linear regression approach and 6-fold cross validation, our best model achieves an overall correct rate of 87.8% and MCC of 0.47, with a specificity of 93.4%, and correctly predicts 52.4% of the RNA-binding residues for a dataset containing 107 non-homologous RNA-binding proteins. Compared with existing methods, including the amino acid compositions of structural neighbors leads to a clear improvement. A web server was developed for predicting RNA binding residues in a protein sequence (or structure), which is available at http://mcgill.3322.org/RNA/.

  14. Comparison of four statistical and machine learning methods for crash severity prediction.

    PubMed

    Iranitalab, Amirfarrokh; Khattak, Aemal

    2017-11-01

    Crash severity prediction models enable different agencies to predict the severity of a reported crash with unknown severity or the severity of crashes that may be expected to occur sometime in the future. This paper had three main objectives: comparison of the performance of four statistical and machine learning methods including Multinomial Logit (MNL), Nearest Neighbor Classification (NNC), Support Vector Machines (SVM) and Random Forests (RF), in predicting traffic crash severity; developing a crash costs-based approach for comparison of crash severity prediction methods; and investigating the effects of data clustering methods comprising K-means Clustering (KC) and Latent Class Clustering (LCC), on the performance of crash severity prediction models. The 2012-2015 reported crash data from Nebraska, United States were obtained and two-vehicle crashes were extracted as the analysis data. The dataset was split into training/estimation (2012-2014) and validation (2015) subsets. The four prediction methods were trained/estimated using the training/estimation dataset and the correct prediction rates for each crash severity level, overall correct prediction rate and a proposed crash costs-based accuracy measure were obtained for the validation dataset. The correct prediction rates and the proposed approach showed NNC had the best prediction performance overall and in more severe crashes. RF and SVM had the next best performances, and MNL was the weakest method. Data clustering did not affect the prediction results of SVM, but KC improved the prediction performance of MNL, NNC and RF, while LCC caused improvement in MNL and RF but weakened the performance of NNC. Overall correct prediction rate had almost the exact opposite results compared to the proposed approach, showing that neglecting the crash costs can lead to misjudgment in choosing the right prediction method. Copyright © 2017 Elsevier Ltd. All rights reserved.
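
Record 14's crash-costs-based accuracy measure can be sketched by weighting prediction errors with severity costs instead of counting correct labels; the severity categories and cost values below are placeholders, not the paper's figures:

```python
# Cost-weighted error: the penalty for a misclassification depends on the gap
# between the monetary cost of the predicted and actual severity levels.
import numpy as np

COST = {"PDO": 4e3, "injury": 8e4, "fatal": 1.4e6}   # hypothetical unit costs ($)

def cost_weighted_error(actual, predicted):
    """Mean absolute difference between predicted and actual crash costs."""
    a = np.array([COST[s] for s in actual], dtype=float)
    p = np.array([COST[s] for s in predicted], dtype=float)
    return np.mean(np.abs(p - a))

err = cost_weighted_error(["PDO", "fatal", "injury"], ["PDO", "injury", "injury"])
print(err)
```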

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Daily, Michael D.; Olsen, Brett N.; Schlesinger, Paul H.

    In mammalian cells, cholesterol is essential for membrane function, but in excess can be cytotoxic. The cellular response to acute cholesterol loading involves biophysical-based mechanisms that regulate cholesterol levels, through modulation of the “activity” or accessibility of cholesterol to extra-membrane acceptors. Experiments and united atom (UA) simulations show that at high concentrations of cholesterol, lipid bilayers thin significantly and cholesterol availability to external acceptors increases substantially. Such cholesterol activation is critical to its trafficking within cells. Here we aim to reduce the computational cost to enable simulation of large and complex systems involved in cholesterol regulation, such as those including oxysterols and cholesterol-sensing proteins. To accomplish this, we have modified the published MARTINI coarse-grained force field to improve its predictions of cholesterol-induced changes in both macroscopic and microscopic properties of membranes. Most notably, MARTINI fails to capture both the (macroscopic) area condensation and membrane thickening seen at less than 30% cholesterol and the thinning seen above 40% cholesterol. The thinning at high concentration is critical to cholesterol activation. Microscopic properties of interest include cholesterol-cholesterol radial distribution functions (RDFs), tilt angle, and accessible surface area. First, we develop an “angle-corrected” model wherein we modify the coarse-grained bond angle potentials based on atomistic simulations. This modification significantly improves prediction of macroscopic properties, most notably the thickening/thinning behavior, and also slightly improves microscopic property prediction relative to MARTINI. Second, we add to the angle correction a “volume correction” by also adjusting phospholipid bond lengths to achieve a more accurate volume per molecule. The angle + volume correction substantially further improves the quantitative agreement of the macroscopic properties (area per molecule and thickness) with united atom simulations. However, this improvement also reduces the accuracy of microscopic predictions like radial distribution functions and cholesterol tilt below that of either MARTINI or the angle-corrected model. Thus, while both of our force field corrections improve MARTINI, the combined angle and volume correction should be used for problems involving sterol effects on the overall structure of the membrane, while our angle-corrected model should be used in cases where the properties of individual lipid and sterol models are critically important.

  16. First Principle Predictions of Isotopic Shifts in H2O

    NASA Technical Reports Server (NTRS)

    Schwenke, David W.; Kwak, Dochan (Technical Monitor)

    2002-01-01

    We compute isotope independent first and second order corrections to the Born-Oppenheimer approximation for water and use them to predict isotopic shifts. For the diagonal correction, we use icMRCI wavefunctions and derivatives with respect to mass dependent, internal coordinates to generate the mass independent correction functions. For the non-adiabatic correction, we use scaled SCF/CIS wave functions and a generalization of the Handy method to obtain mass independent correction functions. We find that including the non-adiabatic correction gives significantly improved results compared to just including the diagonal correction when the Born-Oppenheimer potential energy surface is optimized for H2O-16. The agreement with experimental results for deuterium and tritium containing isotopes is nearly as good as our best empirical correction, however, the present correction is expected to be more reliable for higher, uncharacterized levels.

  17. Development of the Metacognitive Skills of Prediction and Evaluation in Children With or Without Math Disability

    PubMed Central

    Garrett, Adia J.; Mazzocco, Michèle M. M.; Baker, Linda

    2009-01-01

    Metacognition refers to knowledge about one’s own cognition. The present study was designed to assess metacognitive skills that either precede or follow task engagement, rather than the processes that occur during a task. Specifically, we examined prediction and evaluation skills among children with (n = 17) or without (n = 179) mathematics learning disability (MLD), from grades 2 to 4. Children were asked to predict which of several math problems they could solve correctly; later, they were asked to solve those problems. They were asked to evaluate whether their solution to each of another set of problems was correct. Children’s ability to evaluate their answers to math problems improved from grade 2 to grade 3, whereas there was no change over time in the children’s ability to predict which problems they could solve correctly. Children with MLD were less accurate than children without MLD in evaluating both their correct and incorrect solutions, and they were less accurate at predicting which problems they could solve correctly. However, children with MLD were as accurate as their peers in correctly predicting that they could not solve specific math problems. The findings have implications for the usefulness of children’s self-review during mathematics problem solving. PMID:20084181

  18. Baseline correction combined partial least squares algorithm and its application in on-line Fourier transform infrared quantitative analysis.

    PubMed

    Peng, Jiangtao; Peng, Silong; Xie, Qiong; Wei, Jiping

    2011-04-01

    In order to eliminate the lower order polynomial interferences, a new quantitative calibration algorithm "Baseline Correction Combined Partial Least Squares (BCC-PLS)", which combines baseline correction and conventional PLS, is proposed. By embedding baseline correction constraints into PLS weights selection, the proposed calibration algorithm overcomes the uncertainty in baseline correction and can meet the requirement of on-line attenuated total reflectance Fourier transform infrared (ATR-FTIR) quantitative analysis. The effectiveness of the algorithm is evaluated by the analysis of glucose and marzipan ATR-FTIR spectra. BCC-PLS algorithm shows improved prediction performance over PLS. The root mean square error of cross-validation (RMSECV) on marzipan spectra for the prediction of the moisture is found to be 0.53%, w/w (range 7-19%). The sugar content is predicted with a RMSECV of 2.04%, w/w (range 33-68%). Copyright © 2011 Elsevier B.V. All rights reserved.
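
The intent of BCC-PLS in record 18 is to make PLS calibration insensitive to low-order polynomial baselines. A simplified two-step stand-in, explicit baseline removal followed by ordinary PLS, rather than the published approach of embedding the constraint in the PLS weights:

```python
# Fit and subtract a low-order polynomial baseline per spectrum, then calibrate
# a PLS model on the corrected spectra.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def remove_baseline(spectra, order=2):
    x = np.arange(spectra.shape[1])
    out = np.empty_like(spectra)
    for i, s in enumerate(spectra):
        coeffs = np.polyfit(x, s, order)       # least-squares polynomial fit
        out[i] = s - np.polyval(coeffs, x)     # subtract the fitted baseline
    return out

rng = np.random.default_rng(0)
spectra = rng.normal(size=(40, 200)) + np.linspace(0, 5, 200)  # sloped baseline
y = rng.normal(size=40)                                        # stand-in concentrations
model = PLSRegression(n_components=5).fit(remove_baseline(spectra), y)
```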

  19. Prediction-Correction Algorithms for Time-Varying Constrained Optimization

    DOE PAGES

    Simonetto, Andrea; Dall'Anese, Emiliano

    2017-07-26

    This article develops online algorithms to track solutions of time-varying constrained optimization problems. Particularly, resembling workhorse Kalman filtering-based approaches for dynamical systems, the proposed methods involve prediction-correction steps to provably track the trajectory of the optimal solutions of time-varying convex problems. The merits of existing prediction-correction methods have been shown for unconstrained problems and for setups where computing the inverse of the Hessian of the cost function is computationally affordable. This paper addresses the limitations of existing methods by tackling constrained problems and by designing first-order prediction steps that rely on the Hessian of the cost function (and do not require the computation of its inverse). In addition, the proposed methods are shown to improve the convergence speed of existing prediction-correction methods when applied to unconstrained problems. Numerical simulations corroborate the analytical results and showcase performance and benefits of the proposed algorithms. A realistic application of the proposed method to real-time control of energy resources is presented.
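
A minimal prediction-correction iteration of the kind record 19 analyzes, on a toy time-varying objective f(x; t) = 0.5||x - r(t)||^2 whose optimizer traces a circle. The finite-difference time derivative and the step sizes are assumptions for illustration, not the paper's algorithm:

```python
# Prediction step uses the time drift of the gradient; correction step is plain
# gradient descent on the objective at the new time. The toy Hessian is the identity.
import numpy as np

def grad(x, t):
    return x - np.array([np.cos(t), np.sin(t)])   # gradient of 0.5*||x - r(t)||^2

x = np.zeros(2)
h, alpha = 0.1, 0.5                               # sampling period, correction step size
for k in range(100):
    t = k * h
    dt_grad = (grad(x, t + h) - grad(x, t)) / h   # finite-difference approx. of d/dt grad
    x = x - h * dt_grad                           # prediction (identity Hessian assumed)
    x = x - alpha * grad(x, t + h)                # correction at the new time
print(x, np.array([np.cos(10.0), np.sin(10.0)])) # tracked point vs true optimizer
```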

  20. A Sensor Dynamic Measurement Error Prediction Model Based on NAPSO-SVM.

    PubMed

    Jiang, Minlan; Jiang, Lan; Jiang, Dingde; Li, Fei; Song, Houbing

    2018-01-15

    Dynamic measurement error correction is an effective way to improve sensor precision. Dynamic measurement error prediction is an important part of error correction, and support vector machine (SVM) is often used for predicting the dynamic measurement errors of sensors. Traditionally, the SVM parameters were always set manually, which cannot ensure the model's performance. In this paper, an SVM method based on an improved particle swarm optimization (NAPSO) is proposed to predict the dynamic measurement errors of sensors. Natural selection and simulated annealing are added in the PSO to raise the ability to avoid local optima. To verify the performance of NAPSO-SVM, three types of algorithms are selected to optimize the SVM's parameters: the particle swarm optimization algorithm (PSO), the improved PSO optimization algorithm (NAPSO), and the glowworm swarm optimization (GSO). The dynamic measurement error data of two sensors are applied as the test data. The root mean squared error and mean absolute percentage error are employed to evaluate the prediction models' performances. The experimental results show that among the three tested algorithms the NAPSO-SVM method has higher prediction precision and smaller prediction errors, and is an effective method for predicting the dynamic measurement errors of sensors.

  21. Parameter prediction based on Improved Process neural network and ARMA error compensation in Evaporation Process

    NASA Astrophysics Data System (ADS)

    Qian, Xiaoshan

    2018-01-01

    Traditional models of evaporation-process parameters suffer from large prediction errors because the parameters are continuous and cumulative. On this basis, an adaptive particle swarm process neural network forecasting method is proposed for the process parameters, with an autoregressive moving average (ARMA) error-correction procedure compensating the neural network's predictions to improve prediction accuracy. Validation against production data from the evaporation process of an alumina plant shows that, compared with the traditional model, the new model's prediction accuracy is greatly improved, and it can be used to predict the dynamic evolution of sodium aluminate solution components during evaporation.
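
The compensation scheme in record 21 amounts to fitting an ARMA model to the primary predictor's residuals and adding the forecast residual back into the next prediction. A sketch with synthetic data (the ARMA order and the stand-in "network" output are assumptions):

```python
# Residual compensation: final forecast = primary forecast + ARMA residual forecast.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
t = np.linspace(0, 20, 200)
actual = np.sin(t) + 0.1 * rng.normal(size=200)   # measured process variable
primary = np.sin(t)                               # stand-in neural-network output
residuals = actual - primary

arma = ARIMA(residuals, order=(2, 0, 1)).fit()    # ARMA(2,1) via ARIMA with d=0
next_primary = np.sin(20.1)                       # primary forecast one step ahead
compensated = next_primary + arma.forecast(steps=1)[0]
```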

  22. Dual assimilation of satellite soil moisture to improve flood prediction in ungauged catchments

    USDA-ARS?s Scientific Manuscript database

    This paper explores the use of active and passive satellite soil moisture products for improving stream flow prediction within 4 large (>5,000km2) semi-arid catchments. We use the probability distributed model (PDM) under a data-scarce scenario and aim at correcting two key controlling factors in th...

  23. XenoSite: accurately predicting CYP-mediated sites of metabolism with neural networks.

    PubMed

    Zaretzki, Jed; Matlock, Matthew; Swamidass, S Joshua

    2013-12-23

    Understanding how xenobiotic molecules are metabolized is important because it influences the safety, efficacy, and dose of medicines and how they can be modified to improve these properties. The cytochrome P450s (CYPs) are proteins responsible for metabolizing 90% of drugs on the market, and many computational methods can predict which atomic sites of a molecule--sites of metabolism (SOMs)--are modified during CYP-mediated metabolism. This study improves on prior methods of predicting CYP-mediated SOMs by using new descriptors and machine learning based on neural networks. The new method, XenoSite, is faster to train and more accurate by as much as 4% or 5% for some isozymes. Furthermore, some "incorrect" predictions made by XenoSite were subsequently validated as correct predictions by re-evaluation of the source literature. Moreover, XenoSite output is interpretable as a probability, which reflects both the confidence of the model that a particular atom is metabolized and the statistical likelihood that its prediction for that atom is correct.

  24. Improved Density Functional Tight Binding Potentials for Metalloid Aluminum Clusters

    DTIC Science & Technology

    2016-06-01

    Simulations of the oxidation of Al4Cp*4 show reasonable comparison with a DFT-based Car-Parrinello method, including correct prediction of hydride transfers from Cp* to the metal centers during the oxidation. …

  25. Canopy Temperature and Vegetation Indices from High-Throughput Phenotyping Improve Accuracy of Pedigree and Genomic Selection for Grain Yield in Wheat

    PubMed Central

    Rutkoski, Jessica; Poland, Jesse; Mondal, Suchismita; Autrique, Enrique; Pérez, Lorena González; Crossa, José; Reynolds, Matthew; Singh, Ravi

    2016-01-01

    Genomic selection can be applied prior to phenotyping, enabling shorter breeding cycles and greater rates of genetic gain relative to phenotypic selection. Traits measured using high-throughput phenotyping based on proximal or remote sensing could be useful for improving pedigree and genomic prediction model accuracies for traits not yet possible to phenotype directly. We tested if using aerial measurements of canopy temperature, and green and red normalized difference vegetation index as secondary traits in pedigree and genomic best linear unbiased prediction models could increase accuracy for grain yield in wheat, Triticum aestivum L., using 557 lines in five environments. Secondary traits on training and test sets, and grain yield on the training set were modeled as multivariate, and compared to univariate models with grain yield on the training set only. Cross validation accuracies were estimated within and across-environment, with and without replication, and with and without correcting for days to heading. We observed that, within environment, with unreplicated secondary trait data, and without correcting for days to heading, secondary traits increased accuracies for grain yield by 56% in pedigree, and 70% in genomic prediction models, on average. Secondary traits increased accuracy slightly more when replicated, and considerably less when models corrected for days to heading. In across-environment prediction, trends were similar but less consistent. These results show that secondary traits measured in high-throughput could be used in pedigree and genomic prediction to improve accuracy. This approach could improve selection in wheat during early stages if validated in early-generation breeding plots. PMID:27402362

  26. The Incremental Value of Subjective and Quantitative Assessment of 18F-FDG PET for the Prediction of Pathologic Complete Response to Preoperative Chemoradiotherapy in Esophageal Cancer.

    PubMed

    van Rossum, Peter S N; Fried, David V; Zhang, Lifei; Hofstetter, Wayne L; van Vulpen, Marco; Meijer, Gert J; Court, Laurence E; Lin, Steven H

    2016-05-01

    A reliable prediction of a pathologic complete response (pathCR) to chemoradiotherapy before surgery for esophageal cancer would enable investigators to study the feasibility and outcome of an organ-preserving strategy after chemoradiotherapy. So far no clinical parameters or diagnostic studies are able to accurately predict which patients will achieve a pathCR. The aim of this study was to determine whether subjective and quantitative assessment of baseline and postchemoradiation (18)F-FDG PET can improve the accuracy of predicting pathCR to preoperative chemoradiotherapy in esophageal cancer beyond clinical predictors. This retrospective study was approved by the institutional review board, and the need for written informed consent was waived. Clinical parameters along with subjective and quantitative parameters from baseline and postchemoradiation (18)F-FDG PET were derived from 217 esophageal adenocarcinoma patients who underwent chemoradiotherapy followed by surgery. The associations between these parameters and pathCR were studied in univariable and multivariable logistic regression analysis. Four prediction models were constructed and internally validated using bootstrapping to study the incremental predictive values of subjective assessment of (18)F-FDG PET, conventional quantitative metabolic features, and comprehensive (18)F-FDG PET texture/geometry features, respectively. The clinical benefit of (18)F-FDG PET was determined using decision-curve analysis. A pathCR was found in 59 (27%) patients. A clinical prediction model (corrected c-index, 0.67) was improved by adding (18)F-FDG PET-based subjective assessment of response (corrected c-index, 0.72). This latter model was slightly improved by the addition of 1 conventional quantitative metabolic feature only (i.e., postchemoradiation total lesion glycolysis; corrected c-index, 0.73), and even more by subsequently adding 4 comprehensive (18)F-FDG PET texture/geometry features (corrected c-index, 0.77). However, at a decision threshold of 0.9 or higher, representing a clinically relevant predictive value for pathCR at which one may be willing to omit surgery, there was no clear incremental value. Subjective and quantitative assessment of (18)F-FDG PET provides statistical incremental value for predicting pathCR after preoperative chemoradiotherapy in esophageal cancer. However, the discriminatory improvement beyond clinical predictors does not translate into a clinically relevant benefit that could change decision making. © 2016 by the Society of Nuclear Medicine and Molecular Imaging, Inc.

  27. Performance of combined fragmentation and retention prediction for the identification of organic micropollutants by LC-HRMS.

    PubMed

    Hu, Meng; Müller, Erik; Schymanski, Emma L; Ruttkies, Christoph; Schulze, Tobias; Brack, Werner; Krauss, Martin

    2018-03-01

    In nontarget screening, structure elucidation of small molecules from high resolution mass spectrometry (HRMS) data is challenging, particularly the selection of the most likely candidate structure among the many retrieved from compound databases. Several fragmentation and retention prediction methods have been developed to improve this candidate selection. In order to evaluate their performance, we compared two in silico fragmenters (MetFrag and CFM-ID) and two retention time prediction models (based on the chromatographic hydrophobicity index (CHI) and on log D). A set of 78 known organic micropollutants was analyzed by liquid chromatography coupled to a LTQ Orbitrap HRMS with electrospray ionization (ESI) in positive and negative mode using two fragmentation techniques with different collision energies. Both fragmenters (MetFrag and CFM-ID) performed well for most compounds, ranking the correct candidate structure on average within the top 25% and within the top 22 to 37% for ESI+ and ESI- mode, respectively. The rank of the correct candidate structure slightly improved when MetFrag and CFM-ID were combined. For unknown compounds detected in both ESI+ and ESI-, generally positive mode mass spectra were better for further structure elucidation. Both retention prediction models performed reasonably well for more hydrophobic compounds but not for early eluting hydrophilic substances. The log D prediction showed a better accuracy than the CHI model. Although the two fragmentation prediction methods are more diagnostic and sensitive for candidate selection, the inclusion of retention prediction by calculating a consensus score with optimized weighting can improve the ranking of correct candidates as compared to the individual methods. Graphical abstract: Consensus workflow for combining fragmentation and retention prediction in LC-HRMS-based micropollutant identification.
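
Record 27's consensus scheme combines fragmenter and retention scores with optimized weights. A schematic version with made-up weights and scores normalized to [0, 1] (the paper's fitted weighting is not reproduced here):

```python
# Weighted consensus score over fragmentation and retention evidence;
# candidates are ranked by the combined score.
def consensus_score(metfrag, cfmid, rt_score, w=(0.4, 0.4, 0.2)):
    """All component scores assumed normalized to [0, 1]; higher is better."""
    return w[0] * metfrag + w[1] * cfmid + w[2] * rt_score

candidates = {"A": (0.9, 0.7, 0.5), "B": (0.6, 0.8, 0.9)}
ranked = sorted(candidates, key=lambda c: consensus_score(*candidates[c]), reverse=True)
print(ranked)
```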

  28. A Sensor Dynamic Measurement Error Prediction Model Based on NAPSO-SVM

    PubMed Central

    Jiang, Minlan; Jiang, Lan; Jiang, Dingde; Li, Fei

    2018-01-01

    Dynamic measurement error correction is an effective way to improve sensor precision. Dynamic measurement error prediction is an important part of error correction, and support vector machine (SVM) is often used for predicting the dynamic measurement errors of sensors. Traditionally, the SVM parameters were always set manually, which cannot ensure the model’s performance. In this paper, an SVM method based on an improved particle swarm optimization (NAPSO) is proposed to predict the dynamic measurement errors of sensors. Natural selection and simulated annealing are added in the PSO to raise the ability to avoid local optima. To verify the performance of NAPSO-SVM, three types of algorithms are selected to optimize the SVM’s parameters: the particle swarm optimization algorithm (PSO), the improved PSO optimization algorithm (NAPSO), and the glowworm swarm optimization (GSO). The dynamic measurement error data of two sensors are applied as the test data. The root mean squared error and mean absolute percentage error are employed to evaluate the prediction models’ performances. The experimental results show that among the three tested algorithms the NAPSO-SVM method has higher prediction precision and smaller prediction errors, and is an effective method for predicting the dynamic measurement errors of sensors. PMID:29342942

  29. Systems Training for Emotional Predictability and Problem Solving (STEPPS) group treatment for offenders with borderline personality disorder.

    PubMed

    Black, Donald W; Blum, Nancee; McCormick, Brett; Allen, Jeff

    2013-02-01

    Systems Training for Emotional Predictability and Problem Solving (STEPPS) is a manual-based group treatment of persons with borderline personality disorder (BPD). We report results from a study of offenders supervised by the Iowa Department of Corrections. Seventy-seven offenders participated in STEPPS groups. The offenders experienced clinically significant improvement in BPD-related symptoms (d = 1.30), mood, and negative affectivity. Suicidal behaviors and disciplinary infractions were reduced. Baseline severity was inversely associated with improvement. The offenders indicated satisfaction with STEPPS. We conclude that STEPPS can be successfully integrated into the care of offenders with BPD in prison and community corrections settings.

  30. Improving operational flood ensemble prediction by the assimilation of satellite soil moisture: comparison between lumped and semi-distributed schemes

    USDA-ARS?s Scientific Manuscript database

    Assimilation of remotely sensed soil moisture data (SM-DA) to correct soil water stores of rainfall-runoff models has shown skill in improving streamflow prediction. In the case of large and sparsely monitored catchments, SM-DA is a particularly attractive tool.Within this context, we assimilate act...

  31. An improved method for predicting brittleness of rocks via well logs in tight oil reservoirs

    NASA Astrophysics Data System (ADS)

    Wang, Zhenlin; Sun, Ting; Feng, Cheng; Wang, Wei; Han, Chuang

    2018-06-01

    There can be no industrial oil production in tight oil reservoirs until fracturing is undertaken. Under such conditions, the brittleness of the rocks is a very important factor. However, it has so far been difficult to predict. In this paper, the selected study area is the tight oil reservoirs in Lucaogou formation, Permian, Jimusaer sag, Junggar basin. According to the transformation of dynamic and static rock mechanics parameters and the correction of confining pressure, an improved method is proposed for quantitatively predicting the brittleness of rocks via well logs in tight oil reservoirs. First, 19 typical tight oil core samples are selected in the study area. Their static Young’s modulus, static Poisson’s ratio and petrophysical parameters are measured. In addition, the static brittleness indices of four other tight oil cores are measured under different confining pressure conditions. Second, the dynamic Young’s modulus, Poisson’s ratio and brittleness index are calculated using the compressional and shear wave velocity. By combining the measured and calculated results, the transformation model of dynamic and static brittleness index is built based on the influence of porosity and clay content. The comparison of the predicted brittleness indices and measured results shows that the model has high accuracy. Third, on the basis of the experimental data under different confining pressure conditions, the amplifying factor of brittleness index is proposed to correct for the influence of confining pressure on the brittleness index. Finally, the above improved models are applied to formation evaluation via well logs. Compared with the results before correction, the results of the improved models agree better with the experimental data, which indicates that the improved models have better application effects. The brittleness index prediction method of tight oil reservoirs is improved in this research. It is of great importance for optimizing the choice of fracturing layers and fracturing construction schemes, and for improving oil recovery.

  32. Improvement of forecast skill for severe weather by merging radar-based extrapolation and storm-scale NWP corrected forecast

    NASA Astrophysics Data System (ADS)

    Wang, Gaili; Wong, Wai-Kin; Hong, Yang; Liu, Liping; Dong, Jili; Xue, Ming

    2015-03-01

    The primary objective of this study is to improve the performance of deterministic high-resolution forecasts of rainfall caused by severe storms by merging an extrapolation radar-based scheme with a storm-scale Numerical Weather Prediction (NWP) model. Effectiveness of Multi-scale Tracking and Forecasting Radar Echoes (MTaRE) model was compared with that of a storm-scale NWP model named Advanced Regional Prediction System (ARPS) for forecasting a violent tornado event that developed over parts of western and much of central Oklahoma on May 24, 2011. Then the bias corrections were performed to improve the forecast accuracy of ARPS forecasts. Finally, the corrected ARPS forecast and radar-based extrapolation were optimally merged by using a hyperbolic tangent weight scheme. The comparison of forecast skill between MTaRE and ARPS in high spatial resolution of 0.01° × 0.01° and high temporal resolution of 5 min showed that MTaRE outperformed ARPS in terms of index of agreement and mean absolute error (MAE). MTaRE had a better Critical Success Index (CSI) for less than 20-min lead times and was comparable to ARPS for 20- to 50-min lead times, while ARPS had a better CSI for more than 50-min lead times. Bias correction significantly improved ARPS forecasts in terms of MAE and index of agreement, although the CSI of corrected ARPS forecasts was similar to that of the uncorrected ARPS forecasts. Moreover, optimally merging results using hyperbolic tangent weight scheme further improved the forecast accuracy and became more stable.
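
The hyperbolic tangent weighting in record 32 shifts weight from the radar extrapolation to the NWP forecast as lead time grows. A sketch with an assumed crossover time and steepness (the paper's fitted parameters are not given in the abstract):

```python
# Blend a radar extrapolation nowcast with a bias-corrected NWP forecast using
# a tanh weight that decays with lead time.
import numpy as np

def blend(extrap, nwp, lead_min, t0=40.0, tau=15.0):
    w = 0.5 * (1.0 - np.tanh((lead_min - t0) / tau))   # weight on the extrapolation
    return w * extrap + (1.0 - w) * nwp

rain_extrap, rain_nwp = 4.2, 2.8                       # mm/h at one grid point
merged_20min = blend(rain_extrap, rain_nwp, lead_min=20.0)
print(merged_20min)
```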

  33. CodingQuarry: highly accurate hidden Markov model gene prediction in fungal genomes using RNA-seq transcripts.

    PubMed

    Testa, Alison C; Hane, James K; Ellwood, Simon R; Oliver, Richard P

    2015-03-11

    The impact of gene annotation quality on functional and comparative genomics makes gene prediction an important process, particularly in non-model species, including many fungi. Sets of homologous protein sequences are rarely complete with respect to the fungal species of interest and are often small or unreliable, especially when closely related species have not been sequenced or annotated in detail. In these cases, protein homology-based evidence fails to correctly annotate many genes, or significantly improve ab initio predictions. Generalised hidden Markov models (GHMM) have proven to be invaluable tools in gene annotation and, recently, RNA-seq has emerged as a cost-effective means to significantly improve the quality of automated gene annotation. As these methods do not require sets of homologous proteins, improving gene prediction from these resources is of benefit to fungal researchers. While many pipelines now incorporate RNA-seq data in training GHMMs, there has been relatively little investigation into additionally combining RNA-seq data at the point of prediction, and room for improvement in this area motivates this study. CodingQuarry is a highly accurate, self-training GHMM fungal gene predictor designed to work with assembled, aligned RNA-seq transcripts. RNA-seq data informs annotations both during gene-model training and in prediction. Our approach capitalises on the high quality of fungal transcript assemblies by incorporating predictions made directly from transcript sequences. Correct predictions are made despite transcript assembly problems, including those caused by overlap between the transcripts of adjacent gene loci. Stringent benchmarking against high-confidence annotation subsets showed CodingQuarry predicted 91.3% of Schizosaccharomyces pombe genes and 90.4% of Saccharomyces cerevisiae genes perfectly. These results are 4-5% better than those of AUGUSTUS, the next best performing RNA-seq driven gene predictor tested. Comparisons against whole genome Sc. pombe and S. cerevisiae annotations further substantiate a 4-5% improvement in the number of correctly predicted genes. We demonstrate the success of a novel method of incorporating RNA-seq data into GHMM fungal gene prediction. This shows that a high quality annotation can be achieved without relying on protein homology or a training set of genes. CodingQuarry is freely available ( https://sourceforge.net/projects/codingquarry/ ), and suitable for incorporation into genome annotation pipelines.

  34. Investigation into the propagation of Omega very low frequency signals and techniques for improvement of navigation accuracy including differential and composite omega

    NASA Technical Reports Server (NTRS)

    1973-01-01

    An analysis of Very Low Frequency propagation in the atmosphere in the 10-14 kHz range leads to a discussion of some of the more significant causes of phase perturbation. The method of generating sky-wave corrections to predict the Omega phase is discussed. Composite Omega is considered as a means of lane identification and of reducing Omega navigation error. A simple technique for generating trapezoidal model (T-model) phase prediction is presented and compared with the Navy predictions and actual phase measurements. The T-model prediction analysis illustrates the ability to account for the major phase shift created by the diurnal effects on the lower ionosphere. An analysis of the Navy sky-wave correction table is used to provide information about spatial and temporal correlation of phase correction relative to the differential mode of operation.

  35. Assessing the capability of continuum and discrete particle methods to simulate gas-solids flow using DNS predictions as a benchmark

    DOE PAGES

    Lu, Liqiang; Liu, Xiaowen; Li, Tingwen; ...

    2017-08-12

    For this study, gas–solids flow in a three-dimensional periodic domain was numerically investigated by direct numerical simulation (DNS), computational fluid dynamics-discrete element method (CFD-DEM) and two-fluid model (TFM). DNS data obtained by finely resolving the flow around every particle are used as a benchmark to assess the validity of coarser DEM and TFM approaches. The CFD-DEM predicts the correct cluster size distribution and under-predicts the macro-scale slip velocity even with a grid size as small as twice the particle diameter. The TFM approach predicts larger cluster size and lower slip velocity with a homogeneous drag correlation. Although the slip velocity can be matched by a simple modification to the drag model, the predicted voidage distribution is still different from DNS: Both CFD-DEM and TFM over-predict the fraction of particles in dense regions and under-predict the fraction of particles in regions of intermediate void fractions. Also, the cluster aspect ratio of DNS is smaller than CFD-DEM and TFM. Since a simple correction to the drag model can predict a correct slip velocity, it is hopeful that drag corrections based on more elaborate theories that consider voidage gradient and particle fluctuations may be able to improve the current predictions of cluster distribution.

  16. Comparisons of Predictions of the XB-70-1 Longitudinal Stability and Control Derivatives with Flight Results for Six Flight Conditions

    NASA Technical Reports Server (NTRS)

    Wolowicz, C. H.; Yancey, R. B.

    1973-01-01

    Preliminary correlations of flight-determined and predicted stability and control characteristics of the XB-70-1 reported in NASA TN D-4578 were subject to uncertainties in several areas, which necessitated a review of prediction techniques, particularly for the longitudinal characteristics. Reevaluation and updating of the original predictions, including aeroelastic corrections, for six specific flight-test conditions resulted in improved correlations of static pitch stability with flight data. The original predictions for the pitch-damping derivative, on the other hand, showed better correlation with flight data than the updated predictions. It appears that additional study is required in the application of aeroelastic corrections to rigid-model wind-tunnel data and the theoretical determination of dynamic derivatives for this class of aircraft.

  17. Voidage correction algorithm for unresolved Euler-Lagrange simulations

    NASA Astrophysics Data System (ADS)

    Askarishahi, Maryam; Salehi, Mohammad-Sadegh; Radl, Stefan

    2018-04-01

    The effect of grid coarsening on the predicted total drag force and heat exchange rate in dense gas-particle flows is investigated using the Euler-Lagrange (EL) approach. We demonstrate that grid coarsening may reduce the predicted total drag force and exchange rate. Surprisingly, exchange coefficients predicted by the EL approach deviate more significantly from the exact value than the results of Euler-Euler (EE)-based calculations. The voidage gradient is identified as the root cause of this peculiar behavior. Consequently, we propose a correction algorithm based on a sigmoidal function to predict the voidage experienced by individual particles. Our correction algorithm can significantly improve the prediction of exchange coefficients in EL models, which is tested for simulations involving Euler grid cell sizes between 2d_p and 12d_p. It is most relevant in simulations of dense polydisperse particle suspensions featuring steep voidage profiles. For these suspensions, classical approaches may result in an error of up to 30% in the total exchange rate.
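
    A minimal sketch of a sigmoid-limited, per-particle voidage estimate is given below. The blending of a first-order extrapolation with the cell mean, the steepness parameter k, and all names are illustrative assumptions; the published correlation is not reproduced here.

```python
import numpy as np

def corrected_voidage(eps_cell, grad_eps, offset, k=4.0):
    """Estimate the voidage experienced by a particle inside a coarse
    Euler cell.  eps_cell: cell-averaged voidage; grad_eps: voidage
    gradient along the particle offset; offset: signed particle distance
    from the cell centre (in cell sizes).  A sigmoid blends the cell mean
    with a linear extrapolation; clipping keeps the result physical.
    Illustrative sketch, not the paper's fitted correlation."""
    linear = eps_cell + grad_eps * offset          # first-order estimate
    w = 1.0 / (1.0 + np.exp(-k * abs(offset)))     # sigmoidal blending weight
    eps = (1.0 - w) * eps_cell + w * linear
    return np.clip(eps, 0.0, 1.0)                  # voidage must lie in [0, 1]
```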

  18. ECOSAR model performance with a large test set of industrial chemicals.

    PubMed

    Reuschenbach, Peter; Silvani, Maurizio; Dammann, Martina; Warnecke, Dietmar; Knacker, Thomas

    2008-05-01

    The widely used ECOSAR computer programme for QSAR prediction of chemical toxicity towards aquatic organisms was evaluated by using large data sets of industrial chemicals with varying molecular structures. Experimentally derived toxicity data, covering acute effects on fish and Daphnia and growth inhibition of green algae, for in total more than 1,000 randomly selected substances were compared to the prediction results of the ECOSAR programme in order (1) to assess the capability of ECOSAR to correctly classify the chemicals into defined classes of aquatic toxicity according to the rules of EU regulation and (2) to determine the number of correct predictions within tolerance factors from 2 to 1,000. Regarding ecotoxicity classification, 65% (fish), 52% (Daphnia) and 49% (algae) of the substances were correctly predicted into the classes "not harmful", "harmful", "toxic" and "very toxic". At all trophic levels about 20% of the chemicals were underestimated in their toxicity. The class of "not harmful" substances (experimental LC/EC(50)>100 mg l(-1)) represents nearly half of the whole data set. The percentages for correct predictions of toxic effects on fish, Daphnia and algae growth inhibition were 69%, 64% and 60%, respectively, when a tolerance factor of 10 was allowed. Focussing on those experimental results which were verified by analytically measured concentrations, the predictability for Daphnia and algae toxicity was improved by approximately three percentage points, whereas for fish no improvement was determined. The calculated correlation coefficients demonstrated poor correlation when the complete data set was taken, but showed good results for some of the ECOSAR chemical classes. The results are discussed in the context of literature data on the performance of ECOSAR and other QSAR models.
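
    The two evaluation criteria described above can be sketched as follows. The abstract only states the >100 mg/l boundary for "not harmful"; the 1 and 10 mg/l class boundaries assumed below follow the usual EU convention, and the function names and example values are illustrative.

```python
def toxicity_class(ec50_mg_per_l):
    """EU-style aquatic toxicity classes (thresholds in mg/l; the 1 and
    10 mg/l boundaries are assumed from EU convention)."""
    if ec50_mg_per_l > 100:
        return "not harmful"
    if ec50_mg_per_l > 10:
        return "harmful"
    if ec50_mg_per_l > 1:
        return "toxic"
    return "very toxic"

def within_tolerance(predicted, experimental, factor=10):
    """True if the prediction lies within the given tolerance factor
    of the experimental value."""
    ratio = predicted / experimental
    return 1.0 / factor <= ratio <= factor

# Example: a chemical measured at EC50 = 8 mg/l but predicted at 50 mg/l
# is misclassified ("harmful" vs "toxic") yet still within a factor of 10.
print(toxicity_class(8), toxicity_class(50), within_tolerance(50, 8))
```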

  19. Empirical source strength correlations for rans-based acoustic analogy methods

    NASA Astrophysics Data System (ADS)

    Kube-McDowell, Matthew Tyndall

    JeNo is a jet noise prediction code based on an acoustic analogy method developed by Mani, Gliebe, Balsa, and Khavaran. Using the flow predictions from a standard Reynolds-averaged Navier-Stokes computational fluid dynamics solver, JeNo predicts the overall sound pressure level and angular spectra for high-speed hot jets over a range of observer angles, with a processing time suitable for rapid design purposes. JeNo models the noise from hot jets as a combination of two types of noise sources: quadrupole sources dependent on velocity fluctuations, which represent the major noise of turbulent mixing, and dipole sources dependent on enthalpy fluctuations, which represent the effects of thermal variation. These two sources are modeled by JeNo as propagating independently into the far-field, with no cross-correlation at the observer location. However, high-fidelity computational fluid dynamics solutions demonstrate that this assumption is false. In this thesis, the theory, assumptions, and limitations of the JeNo code are briefly discussed, and a modification to the acoustic analogy method is proposed in which the cross-correlation of the two primary noise sources is allowed to vary with the speed of the jet and the observer location. As a proof-of-concept implementation, an empirical correlation correction function is derived from comparisons between JeNo's noise predictions and a set of experimental measurements taken for the Air Force Aero-Propulsion Laboratory. The empirical correlation correction is then applied to JeNo's predictions of a separate data set of hot jets tested at NASA's Glenn Research Center. Metrics are derived to measure the qualitative and quantitative performance of JeNo's acoustic predictions, and the empirical correction is shown to provide a quantitative improvement in the noise prediction at low observer angles with no freestream flow, and a qualitative improvement in the presence of freestream flow. However, the results also demonstrate that there are underlying flaws in JeNo's ability to predict the behavior of a hot jet's acoustic signature at certain rear observer angles, and that this correlation correction is not able to correct these flaws.

  1. Cluster-based analysis improves predictive validity of spike-triggered receptive field estimates

    PubMed Central

    Malone, Brian J.

    2017-01-01

    Spectrotemporal receptive field (STRF) characterization is a central goal of auditory physiology. STRFs are often approximated by the spike-triggered average (STA), which reflects the average stimulus preceding a spike. In many cases, the raw STA is subjected to a threshold defined by gain values expected by chance. However, such correction methods have not been universally adopted, and the consequences of specific gain-thresholding approaches have not been investigated systematically. Here, we evaluate two classes of statistical correction techniques, using the resulting STRF estimates to predict responses to a novel validation stimulus. The first, more traditional technique eliminated STRF pixels (time-frequency bins) with gain values expected by chance. This correction method yielded significant increases in prediction accuracy, including when the threshold setting was optimized for each unit. The second technique was a two-step thresholding procedure wherein clusters of contiguous pixels surviving an initial gain threshold were then subjected to a cluster mass threshold based on summed pixel values. This approach significantly improved upon even the best gain-thresholding techniques. Additional analyses suggested that allowing threshold settings to vary independently for excitatory and inhibitory subfields of the STRF resulted in only marginal additional gains, at best. In summary, augmenting reverse correlation techniques with principled statistical correction choices increased prediction accuracy by over 80% for multi-unit STRFs and by over 40% for single-unit STRFs, furthering the interpretational relevance of the recovered spectrotemporal filters for auditory systems analysis. PMID:28877194
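
    The two-step procedure described above (a pixel-level gain threshold followed by a cluster mass threshold on contiguous surviving pixels) can be sketched as follows. The threshold values would come from shuffled or surrogate data in practice; function and variable names are illustrative.

```python
import numpy as np
from scipy import ndimage

def cluster_mass_threshold(sta, gain_thresh, mass_thresh):
    """Two-step cleanup of a spike-triggered average (time x frequency).
    Step 1: zero pixels whose |gain| falls below a chance-level threshold.
    Step 2: keep only contiguous clusters of surviving pixels whose summed
    |gain| (the cluster mass) exceeds mass_thresh."""
    survivors = np.abs(sta) >= gain_thresh
    labels, n = ndimage.label(survivors)            # 4-connected clusters
    out = np.zeros_like(sta)
    for i in range(1, n + 1):
        mask = labels == i
        if np.abs(sta[mask]).sum() >= mass_thresh:  # cluster mass test
            out[mask] = sta[mask]
    return out

# Example: threshold a noisy random STRF estimate.
rng = np.random.default_rng(0)
sta = rng.normal(0, 1, size=(40, 60))
strf = cluster_mass_threshold(sta, gain_thresh=2.0, mass_thresh=10.0)
```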

  2. Crustal Thickness Mapping of the Rifted Margin Ocean-Continent Transition using Satellite Gravity Inversion Incorporating a Lithosphere Thermal Correction

    NASA Astrophysics Data System (ADS)

    Hurst, N. W.; Kusznir, N. J.

    2005-05-01

    A new method of inverting satellite gravity at rifted continental margins to give crustal thickness, incorporating a lithosphere thermal correction, has been developed that does not use a priori information about the location of the ocean-continent transition (OCT) and provides an independent prediction of OCT location. Satellite-derived gravity anomaly data (Sandwell and Smith 1997) and bathymetry data (Gebco 2003) are used to derive the mantle residual gravity anomaly, which is inverted in 3D in the spectral domain to give Moho depth. Oceanic lithosphere and stretched continental margin lithosphere produce a large negative residual thermal gravity anomaly (up to -380 mgal), which must be corrected for in order to determine Moho depth. This thermal gravity correction may be determined for oceanic lithosphere using oceanic isochron data, and for the thinned continental margin lithosphere using margin rift age and beta stretching estimates iteratively derived from crustal basement thickness determined from the gravity inversion. The gravity inversion using the thermal gravity correction predicts oceanic crustal thicknesses consistent with seismic observations, while that without the thermal correction predicts oceanic crustal thicknesses that are much too great. Predicted Moho depth and crustal thinning across the Hatton and Faroes rifted margins, using the gravity inversion with embedded thermal correction, compare well with those produced by wide-angle seismology. A new gravity inversion method has been developed in which no isochrons are used to define the thermal gravity correction. The new method assumes all lithosphere to be initially continental, and a uniform lithosphere stretching age corresponding to the time of continental breakup is used. The thinning factor produced by the gravity inversion is used to predict the thickness of oceanic crust. This new modified form of gravity inversion with embedded thermal correction provides an improved estimate of rifted continental margin crustal thinning and an improved (and isochron-independent) prediction of OCT location. The new method uses an empirical relationship to predict the thickness of oceanic crust as a function of lithosphere thinning factor, controlled by two input parameters: a critical thinning factor for the start of oceanic crust production, and the maximum oceanic crustal thickness, produced when the thinning factor = 1, corresponding to infinite lithosphere stretching. The disadvantage of using a uniform stretching age corresponding to the age of continental breakup is that the inversion fails to predict the increasing thermal gravity correction towards the ocean ridge and incorrectly predicts thickening of oceanic crust with decreasing oceanic age. The new gravity inversion method has been applied to N. Atlantic rifted margins. This work forms part of the NERC Margins iSIMM project. iSIMM investigators are from Liverpool and Cambridge Universities, Badley Geoscience & Schlumberger Cambridge Research, supported by the NERC, the DTI, Agip UK, BP, Amerada Hess Ltd, Anadarko, ConocoPhillips, Shell, Statoil and WesternGeco. The iSIMM team comprises NJ Kusznir, RS White, AM Roberts, PAF Christie, A Chappell, J Eccles, R Fletcher, D Healy, N Hurst, ZC Lunnon, CJ Parkin, AW Roberts, LK Smith, V Tymms & R Spitzer.
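
    The empirical crust-thickness relationship above is controlled by two parameters: a critical thinning factor and a maximum crustal thickness. A minimal sketch, assuming a linear ramp between those two anchor points (the abstract specifies only the parameters, not the functional form, and the default values below are illustrative):

```python
def oceanic_crust_thickness(gamma, gamma_crit=0.7, t_max_km=7.0):
    """Oceanic crustal thickness (km) as a function of lithosphere
    thinning factor gamma (0 = unstretched, 1 = infinite stretching).
    No crust is produced below the critical thinning factor gamma_crit;
    thickness then increases to t_max_km at gamma = 1.  The linear ramp
    and the defaults are assumptions for illustration."""
    if gamma <= gamma_crit:
        return 0.0
    return t_max_km * (gamma - gamma_crit) / (1.0 - gamma_crit)

# Example: a thinning factor of 0.85 yields half the maximum thickness.
print(oceanic_crust_thickness(0.85))  # 3.5 km with the defaults above
```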

  3. [Changes in psychopathological symptoms during the waiting period for outpatient psychotherapy].

    PubMed

    Huckert, Thomas Frank; Hank, Petra; Krampen, Günter

    2012-08-01

    This study empirically tests symptom changes in a sample of 106 psychotherapy outpatients during a 6-month waiting period before treatment commencement. Using indirect measurement of change, the patients improve in psychopathological symptoms. Using direct measurement of change, 48% of the outpatients show no significant change in psychopathological symptoms. However, the symptoms of 29% improve and 23% worsen. Using multinomial logistic regression, group membership (no change, positive change, negative change) can be predicted by personality traits for 60% of the patients. Social trust negatively predicts changes for the worse. Liberal gender-role orientation positively predicts improvement. A positive self-concept of ability positively predicts changes for the worse. Moreover, sociodemographic variables correctly predict group membership for 57% of the patients. Age positively predicts changes for the worse. Female gender negatively predicts improvement. © Georg Thieme Verlag KG Stuttgart · New York.

  4. Postprocessing for Air Quality Predictions

    NASA Astrophysics Data System (ADS)

    Delle Monache, L.

    2017-12-01

    In recent years, air quality (AQ) forecasting has made significant progress towards better predictions with the goal of protecting the public from harmful pollutants. This progress is the result of improvements in weather and chemical transport models, their coupling, and more accurate emission inventories (e.g., with the development of new algorithms to account in near real time for fires). Nevertheless, AQ predictions are still affected at times by significant biases which stem from limitations in both weather and chemistry transport models. Those are the result of numerical approximations and the poor representation (and understanding) of important physical and chemical processes. Moreover, although the quality of emission inventories has been significantly improved, they are still one of the main sources of uncertainty in AQ predictions. For operational real-time AQ forecasting, a significant portion of these biases can be reduced with the implementation of postprocessing methods. We will review some of the techniques that have been proposed to reduce both systematic and random errors of AQ predictions, and improve the correlation between predictions and observations of ground-level ozone and surface particulate matter less than 2.5 µm in diameter (PM2.5). These methods, which can be applied to both deterministic and probabilistic predictions, include simple bias-correction techniques, corrections inspired by the Kalman filter, regression methods, and the more recently developed analog-based algorithms. These approaches will be compared and contrasted, and the strengths and weaknesses of each will be discussed.
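
    Of the families listed above, the Kalman-filter-inspired running bias correction is the simplest to sketch. A minimal version with a fixed gain beta follows; an operational implementation would estimate the gain from the forecast and observation error variances, and all names are illustrative.

```python
def kf_bias_correction(forecasts, observations, beta=0.2):
    """Recursive bias correction in the spirit of Kalman filtering:
    maintain a running bias estimate, subtract it from each new raw
    forecast, then update the estimate once the verifying observation
    arrives.  beta plays the role of a (fixed) Kalman gain."""
    bias = 0.0
    corrected = []
    for f, o in zip(forecasts, observations):
        corrected.append(f - bias)                    # apply current estimate
        bias = (1.0 - beta) * bias + beta * (f - o)   # update after verification
    return corrected

# Example: a forecast with a persistent +5 ppb ozone bias is gradually
# pulled toward the observations.
print(kf_bias_correction([55, 56, 54, 57], [50, 51, 49, 52]))
```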

  5. Reverberant acoustic energy in auditoria that comprise systems of coupled rooms

    NASA Astrophysics Data System (ADS)

    Summers, Jason E.

    2003-11-01

    A frequency-dependent model for reverberant energy in coupled rooms is developed and compared with measurements for a 1:10 scale model and for Bass Hall, Ft. Worth, TX. At high frequencies, prior statistical-acoustics models are improved by geometrical-acoustics corrections for decay within sub-rooms and for energy transfer between sub-rooms. Comparisons of computational geometrical-acoustics predictions based on beam-axis tracing with scale model measurements indicate errors resulting from tail-correction assuming constant quadratic growth of reflection density. Using ray tracing in the late part corrects this error. For mid-frequencies, the models are modified to account for wave effects at coupling apertures by including power transmission coefficients. Similarly, statistical-acoustics models are improved through more accurate estimates of power transmission. Scale model measurements are in accord with the predicted behavior. The edge-diffraction model is adapted to study transmission through apertures. Multiple-order scattering is theoretically and experimentally shown to be inaccurate due to neglect of slope diffraction. At low frequencies, perturbation models qualitatively explain scale model measurements. Measurements confirm the relation of coupling strength to the unperturbed pressure distribution on coupling surfaces. Measurements in Bass Hall exhibit effects of the coupled stage house. High-frequency predictions of statistical-acoustics and geometrical-acoustics models and predictions for coupling apertures all agree with measurements.

  6. The influence of a wall function on turbine blade heat transfer prediction

    NASA Technical Reports Server (NTRS)

    Whitaker, Kevin W.

    1989-01-01

    The second phase of a continuing investigation to improve the prediction of turbine blade heat transfer coefficients was completed. The present study specifically investigated how a numeric wall function in the turbulence model of a two-dimensional boundary layer code, STAN5, affected heat transfer prediction capabilities. Several sources of inaccuracy in the wall function were identified and then corrected or improved. Heat transfer coefficient predictions were then obtained using each one of the modifications to determine its effect. Results indicated that the modifications made to the wall function can significantly affect the prediction of heat transfer coefficients on turbine blades. The improvement in accuracy due to the modifications remains inconclusive and is still being investigated.

  7. A Class of Prediction-Correction Methods for Time-Varying Convex Optimization

    NASA Astrophysics Data System (ADS)

    Simonetto, Andrea; Mokhtari, Aryan; Koppel, Alec; Leus, Geert; Ribeiro, Alejandro

    2016-09-01

    This paper considers unconstrained convex optimization problems with time-varying objective functions. We propose algorithms with a discrete time-sampling scheme to find and track the solution trajectory based on prediction and correction steps, while sampling the problem data at a constant rate of $1/h$, where $h$ is the length of the sampling interval. The prediction step is derived by analyzing the iso-residual dynamics of the optimality conditions. The correction step adjusts for the distance between the current prediction and the optimizer at each time step, and consists of one or multiple gradient steps or Newton steps, which respectively correspond to the gradient trajectory tracking (GTT) or Newton trajectory tracking (NTT) algorithms. Under suitable conditions, we establish that the asymptotic error incurred by both proposed methods behaves as $O(h^2)$, and in some cases as $O(h^4)$, which outperforms the state-of-the-art error bound of $O(h)$ for correction-only methods based on gradient steps. Moreover, when the characteristics of the objective function variation are not available, we propose approximate gradient and Newton tracking algorithms (AGT and ANT, respectively) that still attain these asymptotic error bounds. Numerical simulations demonstrate the practical utility of the proposed methods and that they improve upon existing techniques by several orders of magnitude.
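
    A minimal sketch of the prediction-correction idea in its gradient trajectory tracking (GTT) form: differentiating the optimality condition grad f(x*(t), t) = 0 gives a prediction step along the solution trajectory, followed by one or more gradient steps as correction. Function and parameter names are illustrative, and the toy quadratic example is not from the paper.

```python
import numpy as np

def gtt_track(grad, hess, grad_t, x0, h, steps, gamma, n_corr=1):
    """Track min_x f(x; t) sampled every h seconds.  grad(x, t),
    hess(x, t), grad_t(x, t) return the gradient, the Hessian, and the
    mixed time-derivative of the gradient.  Prediction: an Euler step
    along dx*/dt = -H^{-1} d(grad)/dt.  Correction: n_corr gradient
    steps with step size gamma at the new sample time."""
    x = np.asarray(x0, dtype=float)
    traj = [x.copy()]
    for k in range(steps):
        t = k * h
        x = x - h * np.linalg.solve(hess(x, t), grad_t(x, t))  # prediction
        for _ in range(n_corr):                                # correction
            x = x - gamma * grad(x, t + h)
        traj.append(x.copy())
    return np.array(traj)

# Example: track x*(t) = (cos t, sin t) for f(x; t) = 0.5*||x - r(t)||^2.
r = lambda t: np.array([np.cos(t), np.sin(t)])
rdot = lambda t: np.array([-np.sin(t), np.cos(t)])
traj = gtt_track(grad=lambda x, t: x - r(t),
                 hess=lambda x, t: np.eye(2),
                 grad_t=lambda x, t: -rdot(t),
                 x0=[1.0, 0.0], h=0.1, steps=100, gamma=0.5)
```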

  8. Multi-model data fusion to improve an early warning system for hypo-/hyperglycemic events.

    PubMed

    Botwey, Ransford Henry; Daskalaki, Elena; Diem, Peter; Mougiakakou, Stavroula G

    2014-01-01

    Correct predictions of future blood glucose levels in individuals with Type 1 Diabetes (T1D) can be used to provide early warning of upcoming hypo-/hyperglycemic events and thus to improve the patient's safety. To increase prediction accuracy and efficiency, various approaches have been proposed which combine multiple predictors to produce superior results compared to single predictors. Three methods for model fusion are presented and comparatively assessed. Data from 23 T1D subjects under sensor-augmented pump (SAP) therapy were used in two adaptive data-driven models (an autoregressive model with output correction - cARX, and a recurrent neural network - RNN). Data fusion techniques based on i) Dempster-Shafer Evidential Theory (DST), ii) Genetic Algorithms (GA), and iii) Genetic Programming (GP) were used to merge the complementary performance of the prediction models. The fused output is used in a warning algorithm to issue alarms of upcoming hypo-/hyperglycemic events. The fusion schemes showed improved performance with lower root mean square errors, lower time lags, and higher correlation. In the warning algorithm, median daily false alarms (DFA) of 0.25% and 100% correct alarms (CA) were obtained for both event types. The detection times (DT) before occurrence of events were 13.0 and 12.1 min for hypo- and hyperglycemic events, respectively. Compared to the cARX and RNN models, and a linear fusion of the two, the proposed fusion schemes represent a significant improvement.
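
    The linear fusion mentioned in the last sentence is the baseline the study compares against and is easy to sketch: fit least-squares weights mapping the two predictors' outputs to the reference glucose signal, then apply them to new predictions. This is not the proposed DST/GA/GP schemes, and all names are illustrative.

```python
import numpy as np

def fit_linear_fusion(pred_a, pred_b, reference):
    """Least-squares weights (w_a, w_b, bias) for fusing two glucose
    predictors against a reference signal."""
    X = np.column_stack([pred_a, pred_b, np.ones(len(pred_a))])
    w, *_ = np.linalg.lstsq(X, reference, rcond=None)
    return w

def fuse(pred_a, pred_b, w):
    """Apply fitted fusion weights to new predictions."""
    return w[0] * np.asarray(pred_a) + w[1] * np.asarray(pred_b) + w[2]

# Example with toy glucose values (mg/dl).
w = fit_linear_fusion([110, 150, 90], [120, 140, 95], [115, 145, 92])
print(fuse([100, 160], [105, 150], w))
```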

  9. Space vehicle acoustics prediction improvement for payloads. [space shuttle

    NASA Technical Reports Server (NTRS)

    Dandridge, R. E.

    1979-01-01

    The modal analysis method was extensively modified for the prediction of space vehicle noise reduction in the shuttle payload enclosure, and this program was adapted to the IBM 360 computer. The predicted noise reduction levels for two test cases were compared with experimental results to determine the validity of the analytical model for predicting space vehicle payload noise environments in the 10 Hz one-third octave band regime. The prediction approach for the two test cases generally gave reasonable magnitudes and trends when compared with the measured noise reduction spectra. The discrepancies in the predictions could be corrected primarily by improved modeling of the vehicle structural walls and of the enclosed acoustic space to obtain a more accurate assessment of normal modes. Techniques for improving and expanding the noise prediction for a payload environment are also suggested.

  10. Prediction-correction algorithms for time-varying constrained optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simonetto, Andrea; Dall'Anese, Emiliano

    This article develops online algorithms to track solutions of time-varying constrained optimization problems. Particularly, resembling workhorse Kalman filtering-based approaches for dynamical systems, the proposed methods involve prediction-correction steps to provably track the trajectory of the optimal solutions of time-varying convex problems. The merits of existing prediction-correction methods have been shown for unconstrained problems and for setups where computing the inverse of the Hessian of the cost function is computationally affordable. This paper addresses the limitations of existing methods by tackling constrained problems and by designing first-order prediction steps that rely on the Hessian of the cost function (and do not require the computation of its inverse). In addition, the proposed methods are shown to improve the convergence speed of existing prediction-correction methods when applied to unconstrained problems. Numerical simulations corroborate the analytical results and showcase the performance and benefits of the proposed algorithms. A realistic application of the proposed method to real-time control of energy resources is presented.

  11. Improving salt marsh digital elevation model accuracy with full-waveform lidar and nonparametric predictive modeling

    NASA Astrophysics Data System (ADS)

    Rogers, Jeffrey N.; Parrish, Christopher E.; Ward, Larry G.; Burdick, David M.

    2018-03-01

    Salt marsh vegetation tends to increase vertical uncertainty in light detection and ranging (lidar) derived elevation data, often causing the data to become ineffective for analysis of topographic features governing tidal inundation or vegetation zonation. Previous attempts at improving lidar data collected in salt marsh environments range from simply computing and subtracting the global elevation bias to more complex methods such as computing vegetation-specific, constant correction factors. The vegetation-specific corrections can be used along with an existing habitat map to apply separate corrections to different areas within a study site. It is hypothesized here that better results can be obtained by correcting salt marsh lidar data with location-specific, point-by-point corrections, computed by nonparametric regression from lidar waveform-derived features, tidal-datum based elevation, distance from shoreline, and other lidar digital elevation model based variables. The methods were developed and tested using full-waveform lidar and ground truth for three marshes in Cape Cod, Massachusetts, U.S.A. Five different model algorithms for nonparametric regression were evaluated, with TreeNet's stochastic gradient boosting algorithm consistently producing better regression and classification results. Additionally, models were constructed to predict the vegetative zone (high marsh and low marsh). The predictive modeling methods used in this study estimated ground elevation with a mean bias of 0.00 m and a standard deviation of 0.07 m (0.07 m root mean square error). These methods appear very promising for correction of salt marsh lidar data and, importantly, do not require an existing habitat map, biomass measurements, or image-based remote sensing data such as multi/hyperspectral imagery.
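
    A minimal sketch of this kind of point-by-point correction, using scikit-learn's GradientBoostingRegressor as a stand-in for the TreeNet stochastic gradient boosting used in the study; the features and the synthetic training data are illustrative placeholders for the waveform- and DEM-derived predictors.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 1000
X = np.column_stack([
    rng.normal(size=n),        # waveform feature, e.g. return-pulse width
    rng.uniform(0, 1, n),      # tidal-datum based elevation
    rng.uniform(0, 500, n),    # distance from shoreline (m)
])
# Training target: lidar elevation minus surveyed ground truth (synthetic).
lidar_error = 0.1 * X[:, 0] + 0.05 * X[:, 1] + rng.normal(0, 0.02, n)

model = GradientBoostingRegressor(n_estimators=200, max_depth=3)
model.fit(X, lidar_error)

def corrected_elevation(lidar_z, features):
    """Subtract the per-point predicted error from the raw lidar elevation."""
    return lidar_z - model.predict(features)
```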

  12. Literature mining supports a next-generation modeling approach to predict cellular byproduct secretion.

    PubMed

    King, Zachary A; O'Brien, Edward J; Feist, Adam M; Palsson, Bernhard O

    2017-01-01

    The metabolic byproducts secreted by growing cells can be easily measured and provide a window into the state of a cell; they have been essential to the development of microbiology, cancer biology, and biotechnology. Progress in computational modeling of cells has made it possible to predict metabolic byproduct secretion with bottom-up reconstructions of metabolic networks. However, owing to a lack of data, it has not been possible to validate these predictions across a wide range of strains and conditions. Through literature mining, we were able to generate a database of Escherichia coli strains and their experimentally measured byproduct secretions. We simulated these strains in six historical genome-scale models of E. coli, and we report that the predictive power of the models has increased as they have expanded in size and scope. The latest genome-scale model of metabolism correctly predicts byproduct secretion for 35/89 (39%) of designs. The next-generation genome-scale model of metabolism and gene expression (ME-model) correctly predicts byproduct secretion for 40/89 (45%) of designs, and we show that ME-model predictions could be further improved through kinetic parameterization. We analyze the failure modes of these simulations and discuss opportunities to improve prediction of byproduct secretion. Copyright © 2016 International Metabolic Engineering Society. Published by Elsevier Inc. All rights reserved.

  13. Dispersion-correcting potentials can significantly improve the bond dissociation enthalpies and noncovalent binding energies predicted by density-functional theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DiLabio, Gino A., E-mail: Gino.DiLabio@nrc.ca; Department of Chemistry, University of British Columbia, Okanagan, 3333 University Way, Kelowna, British Columbia V1V 1V7; Koleini, Mohammad

    2014-05-14

    Dispersion-correcting potentials (DCPs) are atom-centered Gaussian functions that are applied in a manner that is similar to effective core potentials. Previous work on DCPs has focussed on their use as a simple means of improving the ability of conventional density-functional theory methods to predict the binding energies of noncovalently bonded molecular dimers. We show in this work that DCPs developed for use with the LC-ωPBE functional along with 6-31+G(2d,2p) basis sets are capable of simultaneously improving predicted noncovalent binding energies of van der Waals dimer complexes and covalent bond dissociation enthalpies in molecules. Specifically, the DCPs developed herein for the C, H, N, and O atoms provide binding energies for a set of 66 noncovalently bonded molecular dimers (the “S66” set) with a mean absolute error (MAE) of 0.21 kcal/mol, which represents an improvement of more than a factor of 10 over unadorned LC-ωPBE/6-31+G(2d,2p) and almost a factor of two improvement over LC-ωPBE/6-31+G(2d,2p) used in conjunction with the “D3” pairwise dispersion energy corrections. In addition, the DCPs reduce the MAE of calculated X-H and X-Y (X,Y = C, H, N, O) bond dissociation enthalpies for a set of 40 species from 3.2 kcal/mol obtained with unadorned LC-ωPBE/6-31+G(2d,2p) to 1.6 kcal/mol. Our findings demonstrate that broad improvements to the performance of DFT methods may be achievable through the use of DCPs.

  14. Predictive models reduce talent development costs in female gymnastics.

    PubMed

    Pion, Johan; Hohmann, Andreas; Liu, Tianbiao; Lenoir, Matthieu; Segers, Veerle

    2017-04-01

    This retrospective study focuses on the comparison of different predictive models based on the results of a talent identification test battery for female gymnasts. We studied to what extent these models have the potential to optimise selection procedures and at the same time reduce talent development costs in female artistic gymnastics. The dropout rate of 243 female elite gymnasts was investigated, 5 years past talent selection, using linear (discriminant analysis) and non-linear predictive models (Kohonen feature maps and multilayer perceptron). The coaches classified 51.9% of the participants correctly. Discriminant analysis improved the correct classification to 71.6%, while the non-linear technique of Kohonen feature maps reached 73.7% correctness. Application of the multilayer perceptron even classified 79.8% of the gymnasts correctly. The combination of different predictive models for talent selection can avoid deselection of high-potential female gymnasts. The selection procedure based upon the different statistical analyses results in a 33.3% cost reduction, because the pool of selected athletes can be reduced from the 138 gymnasts selected by the coaches to 92. Reduction of the costs allows the limited resources to be fully invested in the high-potential athletes.

  15. Model-Based Control of Observer Bias for the Analysis of Presence-Only Data in Ecology

    PubMed Central

    Warton, David I.; Renner, Ian W.; Ramp, Daniel

    2013-01-01

    Presence-only data, where information is available concerning species presence but not species absence, are subject to bias due to observers being more likely to visit and record sightings at some locations than others (hereafter “observer bias”). In this paper, we describe and evaluate a model-based approach to accounting for observer bias directly – by modelling presence locations as a function of known observer bias variables (such as accessibility variables) in addition to environmental variables, then conditioning on a common level of bias to make predictions of species occurrence free of such observer bias. We implement this idea using point process models with a LASSO penalty, a new presence-only method related to maximum entropy modelling, that implicitly addresses the “pseudo-absence problem” of where to locate pseudo-absences (and how many). The proposed method of bias-correction is evaluated using systematically collected presence/absence data for 62 plant species endemic to the Blue Mountains near Sydney, Australia. It is shown that modelling and controlling for observer bias significantly improves the accuracy of predictions made using presence-only data, and usually improves predictions as compared to pseudo-absence or “inventory” methods of bias correction based on absences from non-target species. Future research will consider the potential for improving the proposed bias-correction approach by estimating the observer bias simultaneously across multiple species. PMID:24260167
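
    The bias-conditioning idea above can be sketched simply: fit occurrence on environmental plus observer-bias covariates, then predict with the bias covariates held at a common value so the prediction surface reflects environment only. The paper uses a LASSO-penalised point process model; plain logistic regression with pseudo-absences stands in here for brevity, and the covariates and data are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000
env = rng.normal(size=(n, 2))            # e.g. temperature, soil moisture
access = rng.exponential(size=(n, 1))    # e.g. distance to nearest road
p = 1 / (1 + np.exp(-(env[:, 0] - 0.5 * access[:, 0])))
y = rng.random(n) < p                    # presence / pseudo-absence labels

X = np.hstack([env, access])
fit = LogisticRegression().fit(X, y)

# Predict with accessibility fixed at a common level (here its mean),
# removing the observer-bias component from the occurrence map.
X_common = np.hstack([env, np.full_like(access, access.mean())])
occurrence = fit.predict_proba(X_common)[:, 1]
```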

  16. 3-D residual eddy current field characterisation: applied to diffusion weighted magnetic resonance imaging.

    PubMed

    O'Brien, Kieran; Daducci, Alessandro; Kickler, Nils; Lazeyras, Francois; Gruetter, Rolf; Feiweier, Thorsten; Krueger, Gunnar

    2013-08-01

    Clinical use of Stejskal-Tanner diffusion-weighted images is hampered by the geometric distortions that result from the large residual 3-D eddy current field induced by the diffusion-weighting gradients. In this work, we aimed to predict, using linear response theory, the residual 3-D eddy current field required for geometric distortion correction based on phantom eddy current field measurements. The predicted 3-D eddy current field induced by the diffusion-weighting gradients was able to reduce the root mean square error of the residual eddy current field to ~1 Hz. The model's performance was tested on diffusion-weighted images of four normal volunteers; following distortion correction, the Stejskal-Tanner diffusion-weighted images were found to be of comparable quality to those produced by image-registration-based corrections (FSL) at low b-values. Unlike registration techniques, the correction was not hindered by low SNR at high b-values, and it resulted in improved image quality relative to FSL. Characterization of the 3-D eddy current field with linear response theory enables the prediction of the 3-D eddy current field required to correct eddy-current-induced geometric distortions for a wide range of clinical and high b-value protocols.

  17. Minimum Energy Routing through Interactive Techniques (MERIT) modeling

    NASA Technical Reports Server (NTRS)

    Wylie, Donald P.

    1988-01-01

    The MERIT program is designed to demonstrate the feasibility of fuel savings by airlines through improved route selection using wind observations from their own fleet. After a discussion of weather and aircraft data, manually correcting wind fields, automatic corrections to wind fields, and short-range prediction models, it is concluded that improvements in wind information are possible if a system is developed for analyzing wind observations and correcting the forecasts made by the major models. One data handling system, McIDAS, can easily collect and display wind observations and model forecasts. Changing the wind forecasts beyond the time of the most recent observations is more difficult; an Australian Mesoscale Model was tested with promising but not definitive results.

  18. Spatial memory for asymmetrical dot locations predicts lateralization among patients with presurgical mesial temporal lobe epilepsy.

    PubMed

    Brown, Franklin C; Hirsch, Lawrence J; Spencer, Dennis D

    2015-11-01

    This study examined the ability of an asymmetrical dot location memory test (Brown Location Test, BLT) and two verbal memory tests (Verbal Selective Reminding Test (VSRT) and California Verbal Learning Test, Second Edition (CVLT-II)) to correctly lateralize left (LTLE) or right (RTLE) mesial temporal lobe epilepsy that was confirmed with video-EEG. Subjects consisted of 16 patients with medically refractory RTLE and 13 patients with medically refractory LTLE who were left hemisphere language dominant. Positive predictive values for lateralizing TLE correctly were 87.5% for the BLT, 72.7% for the VSRT, and 80% for the CVLT-II. Binary logistic regression indicated that the BLT alone correctly classified 76.9% of patients with left temporal lobe epilepsy and 87.5% of patients with right temporal lobe epilepsy. Inclusion of the verbal memory tests improved this to 92.3% of patients with left temporal lobe epilepsy and 100% correct classification of patients with right temporal lobe epilepsy. Though of a limited sample size, this study suggests that the BLT alone provides strong laterality information which improves with the addition of verbal memory tests. Copyright © 2015 Elsevier Inc. All rights reserved.

  19. Assessment of general movements and heart rate variability in prediction of neurodevelopmental outcome in preterm infants.

    PubMed

    Dimitrijević, Lidija; Bjelaković, Bojko; Čolović, Hristina; Mikov, Aleksandra; Živković, Vesna; Kocić, Mirjana; Lukić, Stevo

    2016-08-01

    Adverse neurologic outcome in preterm infants could be associated with abnormal heart rate (HR) characteristics as well as with abnormal general movements (GMs) in the 1st month of life. The aims were to demonstrate to what extent GMs assessment can predict neurological outcome in preterm infants in our clinical setting, and to assess the clinical usefulness of time-domain indices of heart rate variability (HRV) in improving the predictive value of poor repertoire (PR) GMs in the writhing period. Methods comprised qualitative assessment of GMs at 1 and 3 months corrected age, and 24-h electrocardiography (ECG) recordings with HRV analysis at 1 month corrected age. Seventy-nine premature infants at risk of neurodevelopmental impairments were included prospectively. Neurodevelopmental outcome was assessed at 2 years corrected age. Children were classified as having normal neurodevelopmental status, minor neurologic dysfunction (MND), or cerebral palsy (CP). We found that GMs in the writhing period (1 month corrected age) predicted CP at 2 years with a sensitivity of 100% and a specificity of 72.1%. Our results demonstrated the excellent predictive value of cramped synchronized (CS) GMs, but not of the PR pattern. Analyzing separately the group of infants with PR GMs, we found significantly lower values of HRV parameters in infants who later developed CP or MND vs. infants with PR GMs who had a normal outcome. The quality of GMs was predictive for neurodevelopmental outcome at 2 years. Prediction based on PR GMs was significantly enhanced by analyzing HRV parameters. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
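
    The abstract does not name the specific time-domain HRV indices used; SDNN and RMSSD are the standard ones, computed from RR intervals as sketched below (the function name and units are illustrative):

```python
import numpy as np

def time_domain_hrv(rr_ms):
    """Standard time-domain HRV indices from a series of RR intervals (ms):
    SDNN (overall variability) and RMSSD (beat-to-beat variability)."""
    rr = np.asarray(rr_ms, dtype=float)
    sdnn = rr.std(ddof=1)                          # SD of all RR intervals
    rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))     # RMS of successive differences
    return {"SDNN_ms": sdnn, "RMSSD_ms": rmssd}

# Example with a short synthetic RR series.
print(time_domain_hrv([400, 410, 395, 405, 400]))
```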

  20. Correlation of chemical shifts predicted by molecular dynamics simulations for partially disordered proteins.

    PubMed

    Karp, Jerome M; Eryilmaz, Ertan; Cowburn, David

    2015-01-01

    There has been a longstanding interest in being able to accurately predict NMR chemical shifts from structural data. Recent studies have focused on using molecular dynamics (MD) simulation data as input for improved prediction. Here we examine the accuracy of chemical shift prediction for intein systems, which have regions of intrinsic disorder. We find that using MD simulation data as input for chemical shift prediction does not consistently improve prediction accuracy over use of a static X-ray crystal structure. This appears to result from the complex conformational ensemble of the disordered protein segments. We show that using accelerated molecular dynamics (aMD) simulations improves chemical shift prediction, suggesting that methods that better sample the conformational ensemble, like aMD, are more appropriate tools for use in chemical shift prediction for proteins with disordered regions. Moreover, our study suggests that data accurately reflecting protein dynamics must be used as input for chemical shift prediction in order to correctly predict chemical shifts in systems with disorder.

  1. Developing and implementing the use of predictive models for estimating water quality at Great Lakes beaches

    USGS Publications Warehouse

    Francy, Donna S.; Brady, Amie M.G.; Carvin, Rebecca B.; Corsi, Steven R.; Fuller, Lori M.; Harrison, John H.; Hayhurst, Brett A.; Lant, Jeremiah; Nevers, Meredith B.; Terrio, Paul J.; Zimmerman, Tammy M.

    2013-01-01

    Predictive models have been used at beaches to improve the timeliness and accuracy of recreational water-quality assessments over the most common current approach to water-quality monitoring, which relies on culturing fecal-indicator bacteria such as Escherichia coli (E. coli). Beach-specific predictive models use environmental and water-quality variables that are easily and quickly measured as surrogates to estimate concentrations of fecal-indicator bacteria or to provide the probability that a State recreational water-quality standard will be exceeded. When predictive models are used for beach closure or advisory decisions, they are referred to as “nowcasts.” During the recreational seasons of 2010-12, the U.S. Geological Survey (USGS), in cooperation with 23 local and State agencies, worked to improve existing nowcasts at 4 beaches, validate predictive models at another 38 beaches, and collect data for predictive-model development at 7 beaches throughout the Great Lakes. This report summarizes efforts to collect data and develop predictive models by multiple agencies and to compile existing information on the beaches and beach-monitoring programs into one comprehensive report. Local agencies measured E. coli concentrations and variables expected to affect E. coli concentrations such as wave height, turbidity, water temperature, and numbers of birds at the time of sampling. In addition to these field measurements, equipment was installed by the USGS or local agencies at or near several beaches to collect water-quality and meteorological measurements in near real time, including nearshore buoys, weather stations, and tributary staff gages and monitors. The USGS worked with local agencies to retrieve data from existing sources either manually or by use of tools designed specifically to compile and process data for predictive-model development. Predictive models were developed by use of linear regression and (or) partial least squares techniques for 42 beaches that had at least 2 years of data (2010-11 and sometimes earlier) and for 1 beach that had 1 year of data. For most models, software designed for model development by the U.S. Environmental Protection Agency (Virtual Beach) was used. The selected model for each beach was based on a combination of explanatory variables including, most commonly, turbidity, day of the year, change in lake level over 24 hours, wave height, wind direction and speed, and antecedent rainfall for various time periods. Forty-two predictive models were validated against data collected during an independent year (2012) and compared to the current method for assessing recreational water quality, which uses the previous day's E. coli concentration (persistence model). Goals for good predictive-model performance were responses that were at least 5 percent greater than the persistence model and overall correct responses greater than or equal to 80 percent, sensitivities (percentage of exceedances of the bathing-water standard that were correctly predicted by the model) greater than or equal to 50 percent, and specificities (percentage of nonexceedances correctly predicted by the model) greater than or equal to 85 percent. Out of 42 predictive models, 24 models yielded overall correct responses that were at least 5 percent greater than the use of the persistence model. Predictive-model responses met the performance goals more often than the persistence-model responses in terms of overall correctness (28 versus 17 models, respectively), sensitivity (17 versus 4 models), and specificity (34 versus 25 models). Gaining knowledge of each beach and the factors that affect E. coli concentrations is important for developing good predictive models. Collection of additional years of data with a wide range of environmental conditions may also help to improve future model performance. The USGS will continue to work with local agencies in 2013 and beyond to develop and validate predictive models at beaches and improve existing nowcasts, restructuring monitoring activities to accommodate future uncertainties in funding and resources.

  2. The relationship between tree growth patterns and likelihood of mortality: A study of two tree species in the Sierra Nevada

    USGS Publications Warehouse

    Das, A.J.; Battles, J.J.; Stephenson, N.L.; van Mantgem, P.J.

    2007-01-01

    We examined mortality of Abies concolor (Gord. & Glend.) Lindl. (white fir) and Pinus lambertiana Dougl. (sugar pine) by developing logistic models using three growth indices obtained from tree rings: average growth, growth trend, and count of abrupt growth declines. For P. lambertiana, models with average growth, growth trend, and count of abrupt declines improved overall prediction (78.6% dead trees correctly classified, 83.7% live trees correctly classified) compared with a model with average recent growth alone (69.6% dead trees correctly classified, 67.3% live trees correctly classified). For A. concolor, counts of abrupt declines and longer time intervals improved overall classification (trees with DBH ≥20 cm: 78.9% dead trees correctly classified and 76.7% live trees correctly classified vs. 64.9% dead trees correctly classified and 77.9% live trees correctly classified; trees with DBH <20 cm: 71.6% dead trees correctly classified and 71.0% live trees correctly classified vs. 67.2% dead trees correctly classified and 66.7% live trees correctly classified). In general, count of abrupt declines improved live-tree classification. External validation of A. concolor models showed that they functioned well at stands not used in model development, and the development of size-specific models demonstrated important differences in mortality risk between understory and canopy trees. Population-level mortality-risk models were developed for A. concolor and generated realistic mortality rates at two sites. Our results support the contention that a more comprehensive use of the growth record yields a more robust assessment of mortality risk. © 2007 NRC.
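
    A minimal sketch of a logistic mortality model driven by the three growth indices named above; the synthetic data, coefficients, and variable names are illustrative, not the study's fitted values.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 300
avg_growth = rng.lognormal(0.0, 0.4, n)   # mean recent ring width (mm/yr)
trend = rng.normal(0.0, 0.05, n)          # slope of recent growth
n_declines = rng.poisson(1.0, n)          # count of abrupt growth declines

# Synthetic outcome: slower, declining growth raises mortality odds.
logit = -1.5 * avg_growth - 10.0 * trend + 0.8 * n_declines
died = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([avg_growth, trend, n_declines])
model = LogisticRegression().fit(X, died)
p_mortality = model.predict_proba(X)[:, 1]   # per-tree mortality risk
```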

  3. Correcting wave predictions with artificial neural networks

    NASA Astrophysics Data System (ADS)

    Makarynskyy, O.; Makarynska, D.

    2003-04-01

    The predictions of wind waves with different lead times are necessary in a large scope of coastal and open ocean activities. Numerical wave models, which usually provide this information, are based on deterministic equations that do not entirely account for the complexity and uncertainty of the wave generation and dissipation processes. An attempt to improve short-term forecasts of wave parameters using artificial neural networks is reported. In recent years, artificial neural networks have been used in a number of coastal engineering applications due to their ability to approximate nonlinear mathematical behavior without a priori knowledge of the interrelations among the elements within a system. Common multilayer feed-forward networks, with nonlinear transfer functions in the hidden layers, were developed and employed to forecast the wave characteristics over one-hour intervals from one up to 24 hours ahead, and to correct these predictions. Three non-overlapping data sets of wave characteristics, all from a buoy moored roughly 60 miles west of the Aran Islands, west coast of Ireland, were used to train and validate the neural nets involved. The networks were trained with the error back-propagation algorithm. Time series plots and scatterplots of the wave characteristics, as well as tables of statistics, show an improvement of the results due to the correction procedure employed.
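
    A minimal sketch of such a correction network, using scikit-learn's MLPRegressor in place of the authors' back-propagation nets: the network learns the forecast error from the raw forecast and recent buoy observations, and the corrected forecast is the raw forecast minus the predicted error. The synthetic data and feature choices are illustrative.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
n = 2000
obs_lags = rng.gamma(2.0, 1.0, size=(n, 3))        # Hs at t-1h, t-2h, t-3h (m)
forecast = obs_lags[:, 0] + rng.normal(0, 0.3, n)  # raw 1-h model forecast
truth = obs_lags[:, 0] + rng.normal(0, 0.1, n)     # verifying buoy observation
error = forecast - truth                           # training target

X = np.column_stack([forecast, obs_lags])
net = MLPRegressor(hidden_layer_sizes=(10,), activation="tanh",
                   max_iter=2000).fit(X, error)

corrected_forecast = forecast - net.predict(X)     # remove predicted error
```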

  4. A comparative study of two codes with an improved two-equation turbulence model for predicting jet plumes

    NASA Technical Reports Server (NTRS)

    Balakrishnan, L.; Abdol-Hamid, Khaled S.

    1992-01-01

    Compressible jet plumes were studied using a two-equation turbulence model. A space marching procedure based on an upwind numerical scheme was used to solve the governing equations and turbulence transport equations. The computed results indicate that extending the space marching procedure for solving supersonic/subsonic mixing problems can be stable, efficient and accurate. Moreover, a newly developed correction for compressible dissipation has been verified in fully expanded and underexpanded jet plumes. For a sonic jet plume, no improvement in results over the standard two-equation model was seen. However for a supersonic jet plume, the correction due to compressible dissipation successfully predicted the reduced spreading rate of the jet compared to the sonic case. The computed results were generally in good agreement with the experimental data.

  5. Real-time distortion correction of spiral and echo planar images using the gradient system impulse response function.

    PubMed

    Campbell-Washburn, Adrienne E; Xue, Hui; Lederman, Robert J; Faranesh, Anthony Z; Hansen, Michael S

    2016-06-01

    MRI-guided interventions demand high frame rate imaging, making fast imaging techniques such as spiral imaging and echo planar imaging (EPI) appealing. In this study, we implemented a real-time distortion correction framework to enable the use of these fast acquisitions for interventional MRI. Distortions caused by gradient waveform inaccuracies were corrected using the gradient impulse response function (GIRF), which was measured by standard equipment and saved as a calibration file on the host computer. This file was used at runtime to calculate the predicted k-space trajectories for image reconstruction. Additionally, the off-resonance reconstruction frequency was modified in real time to interactively deblur spiral images. Real-time distortion correction for arbitrary image orientations was achieved in phantoms and healthy human volunteers. The GIRF-predicted k-space trajectories matched measured k-space trajectories closely for spiral imaging. Spiral and EPI image distortion was visibly improved using the GIRF-predicted trajectories. The GIRF calibration file showed no systematic drift in 4 months and was demonstrated to correct distortions after 30 min of continuous scanning despite gradient heating. Interactive off-resonance reconstruction was used to sharpen anatomical boundaries during continuous imaging. This real-time distortion correction framework will enable the use of these high frame rate imaging methods for MRI-guided interventions. Magn Reson Med 75:2278-2285, 2016. © 2015 Wiley Periodicals, Inc.
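
    The core of the GIRF approach is that the realised gradient equals the nominal waveform convolved with the measured impulse response, and the k-space trajectory is its scaled time integral. A single-axis 1-D sketch follows (the published framework handles all gradient axes and arbitrary slice rotations, and the function name is illustrative):

```python
import numpy as np

def girf_predicted_kspace(g_nominal, girf, dt, gamma=42.577e6):
    """Predict the actual k-space trajectory (1/m) from a nominal gradient
    waveform (T/m, sampled every dt seconds) and a measured gradient
    impulse response function.  The realised gradient is the convolution
    of the nominal waveform with the GIRF; k(t) is its running integral
    scaled by the proton gyromagnetic ratio (Hz/T)."""
    g_actual = np.convolve(g_nominal, girf)[: len(g_nominal)] * dt
    return gamma * np.cumsum(g_actual) * dt

# Example: a trapezoidal readout gradient with a slightly low-pass GIRF.
dt = 4e-6
g = np.concatenate([np.linspace(0, 0.02, 50),
                    np.full(200, 0.02),
                    np.linspace(0.02, 0, 50)])
girf = np.exp(-np.arange(100) * dt / 50e-6)       # toy impulse response
girf /= girf.sum() * dt                           # unit DC gain
k = girf_predicted_kspace(g, girf, dt)
```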

  6. Real-time distortion correction of spiral and echo planar images using the gradient system impulse response function

    PubMed Central

    Campbell-Washburn, Adrienne E; Xue, Hui; Lederman, Robert J; Faranesh, Anthony Z; Hansen, Michael S

    2015-01-01

    Purpose MRI-guided interventions demand high frame-rate imaging, making fast imaging techniques such as spiral imaging and echo planar imaging (EPI) appealing. In this study, we implemented a real-time distortion correction framework to enable the use of these fast acquisitions for interventional MRI. Methods Distortions caused by gradient waveform inaccuracies were corrected using the gradient impulse response function (GIRF), which was measured by standard equipment and saved as a calibration file on the host computer. This file was used at runtime to calculate the predicted k-space trajectories for image reconstruction. Additionally, the off-resonance reconstruction frequency was modified in real time to interactively deblur spiral images. Results Real-time distortion correction for arbitrary image orientations was achieved in phantoms and healthy human volunteers. The GIRF-predicted k-space trajectories matched measured k-space trajectories closely for spiral imaging. Spiral and EPI image distortion was visibly improved using the GIRF-predicted trajectories. The GIRF calibration file showed no systematic drift in 4 months and was demonstrated to correct distortions after 30 minutes of continuous scanning despite gradient heating. Interactive off-resonance reconstruction was used to sharpen anatomical boundaries during continuous imaging. Conclusions This real-time distortion correction framework will enable the use of these high frame-rate imaging methods for MRI-guided interventions. PMID:26114951

  7. A Predictive Safety Management System Software Package Based on the Continuous Hazard Tracking and Failure Prediction Methodology

    NASA Technical Reports Server (NTRS)

    Quintana, Rolando

    2003-01-01

    The goal of this research was to integrate a previously validated and reliable safety model, called the Continuous Hazard Tracking and Failure Prediction Methodology (CHTFPM), into a software application. This led to the development of a predictive safety management information system (PSMIS). This means that the theory and principles of the CHTFPM were incorporated into a software package; hence, the PSMIS is referred to as the CHTFPM management information system (CHTFPM MIS). The purpose of the PSMIS is to reduce the time and manpower required to perform predictive studies as well as to facilitate the handling of the enormous quantities of information involved in this type of study. The CHTFPM theory encompasses the philosophy of looking at safety engineering from a new perspective: a proactive, rather than reactive, viewpoint. That is, corrective measures are taken before a problem occurs instead of after it has happened. That is why the CHTFPM is a predictive safety methodology: it foresees or anticipates accidents, system failures, and unacceptable risks, so that corrective action can be taken to prevent them. Consequently, the safety and reliability of systems or processes can be further improved by taking proactive and timely corrective actions.

  8. A Comprehensive Review on the Predictive Performance of the Sheiner-Tozer and Derivative Equations for the Correction of Phenytoin Concentrations.

    PubMed

    Kiang, Tony K L; Ensom, Mary H H

    2016-04-01

    In settings where free phenytoin concentrations are not available, the Sheiner-Tozer equation (Corrected total phenytoin concentration = Observed total phenytoin concentration/[(0.2 × Albumin) + 0.1]; phenytoin in µg/mL, albumin in g/dL) and its derivative equations are commonly used to correct for altered phenytoin binding to albumin. The objective of this article was to provide a comprehensive and updated review of the predictive performance of these equations in various patient populations. A literature search of PubMed, EMBASE, and Google Scholar was conducted using combinations of the following terms: Sheiner-Tozer, Winter-Tozer, phenytoin, predictive equation, precision, bias, free fraction. All English-language articles up to November 2015 (excluding abstracts) were evaluated. This review shows the Sheiner-Tozer equation to be biased and imprecise in various critical care, head trauma, and general neurology patient populations. Factors contributing to bias and imprecision include the following: albumin concentration, free phenytoin assay temperature, experimental conditions (e.g., timing of concentration sampling, steady-state dosing conditions), renal function, age, concomitant medications, and patient type. Although derivative equations using varying albumin coefficients have improved accuracy (without much improvement in precision) in intensive care and elderly patients, these equations still require further validation. Further experiments are also needed to yield derivative equations with good predictive performance in all populations as well as to validate the equations' impact on actual patient efficacy and toxicity outcomes. More complex, multivariate predictive equations may be required to capture all variables that can potentially affect phenytoin pharmacokinetics and clinical therapeutic outcomes. © The Author(s) 2016.
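
    The quoted equation translates directly into code; a minimal sketch with a worked example (the function name is illustrative):

```python
def sheiner_tozer_corrected(total_phenytoin_ug_ml, albumin_g_dl):
    """Sheiner-Tozer correction of a total phenytoin concentration for
    low albumin, as quoted in the review:
    corrected = observed / (0.2 * albumin + 0.1)."""
    return total_phenytoin_ug_ml / (0.2 * albumin_g_dl + 0.1)

# Example: an observed total level of 8 ug/mL with albumin 2.0 g/dL
# corresponds to a corrected level of 8 / 0.5 = 16 ug/mL.
print(sheiner_tozer_corrected(8.0, 2.0))  # 16.0
```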

  9. Use of bias correction techniques to improve seasonal forecasts for reservoirs - A case-study in northwestern Mediterranean.

    PubMed

    Marcos, Raül; Llasat, Ma Carmen; Quintana-Seguí, Pere; Turco, Marco

    2018-01-01

    In this paper, we have compared different bias correction methodologies to assess whether they could be advantageous for improving the performance of a seasonal prediction model for volume anomalies in the Boadella reservoir (northwestern Mediterranean). The bias correction adjustments have been applied to precipitation and temperature from the European Centre for Medium-Range Weather Forecasts System 4 (S4). We have used three bias correction strategies: two linear (mean bias correction, BC, and linear regression, LR) and one non-linear (Model Output Statistics analogs, MOS-analog). The results have been compared with climatology and persistence. The volume-anomaly model is a previously computed multiple linear regression that ingests precipitation, temperature, and inflow anomaly data to simulate monthly volume anomalies. The potential utility for end-users has been assessed using economic value curve areas. We have studied the S4 hindcast period 1981-2010 for each month of the year and up to seven months ahead, considering an ensemble of 15 members. We have shown that the MOS-analog and LR bias corrections can improve on the original S4. The application to volume anomalies points towards the possibility of introducing bias correction methods as a tool to improve water resource seasonal forecasts in an end-user context of climate services. In particular, the MOS-analog approach generally gives better results than the other approaches in late autumn and early winter. Copyright © 2017 Elsevier B.V. All rights reserved.
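
    Of the three strategies, mean bias correction (BC) is the simplest to illustrate; a minimal sketch, assuming the bias is estimated from climatological means over the hindcast period (the study's per-month, per-lead-time details are omitted):

      import numpy as np

      def mean_bias_correct(forecast, hindcast, observations):
          """Shift each forecast value by the mean difference between hindcast
          forecasts and observations (the climatological bias)."""
          bias = np.mean(np.asarray(hindcast) - np.asarray(observations))
          return np.asarray(forecast) - bias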

  10. Exemplar-based human action pose correction.

    PubMed

    Shen, Wei; Deng, Ke; Bai, Xiang; Leyvand, Tommer; Guo, Baining; Tu, Zhuowen

    2014-07-01

    The launch of the Xbox Kinect has built a very successful computer vision product and made a big impact on the gaming industry. This sheds light on a wide variety of potential applications related to action recognition, in which the accurate estimation of human poses from the depth image is a critical step. However, existing pose estimation systems exhibit failures when facing severe occlusion. In this paper, we propose an exemplar-based method to learn to correct the initially estimated poses. We learn an inhomogeneous systematic bias by leveraging the exemplar information within a specific human action domain. Furthermore, as an extension, we learn a conditional model by incorporating pose tags to further increase the accuracy of pose correction. In the experiments, significant improvements on both joint-based skeleton correction and tag prediction are observed over contemporary approaches, including the results delivered by the current Kinect system. Our experiments on facial landmark correction also illustrate that our algorithm can improve the accuracy of other detection/estimation systems.

  11. Excimer laser correction of hyperopia, hyperopic and mixed astigmatism: past, present, and future.

    PubMed

    Lukenda, Adrian; Martinović, Zeljka Karaman; Kalauz, Miro

    2012-06-01

    The broad acceptance of "spot scanning" or "flying spot" excimer lasers in the last decade has enabled the domination of corneal ablative laser surgery over other refractive surgical procedures for the correction of hyperopia, hyperopic and mixed astigmatism. This review outlines the most important reasons why the ablative laser correction of hyperopia, hyperopic and mixed astigmatism for many years lagged behind that of myopia. Most of today's scanning laser systems, used in the LASIK and PRK procedures, can safely and effectively perform low, moderate and high hyperopic and hyperopic astigmatic corrections. The introduction of these laser platforms has also significantly improved the long-term refractive stability of hyperopic treatments. In the future, further improvements in femtosecond and nanosecond technology, eye-tracker systems, and the development of new customized algorithms, such as the ray-tracing method, could additionally increase the upper limit for the safe and predictable corneal ablative laser correction of hyperopia, hyperopic and mixed astigmatism.

  12. Resumming double non-global logarithms in the evolution of a jet

    NASA Astrophysics Data System (ADS)

    Hatta, Y.; Iancu, E.; Mueller, A. H.; Triantafyllopoulos, D. N.

    2018-02-01

    We consider the Banfi-Marchesini-Smye (BMS) equation which resums `non-global' energy logarithms in the QCD evolution of the energy lost by a pair of jets via soft radiation at large angles. We identify a new physical regime where, besides the energy logarithms, one also has to resum (anti)collinear logarithms. Such a regime occurs when the jets are highly collimated (boosted) and the relative angles between successive soft gluon emissions are strongly increasing. These anti-collinear emissions can violate the correct time-ordering for time-like cascades and result in large radiative corrections enhanced by double collinear logs, making the BMS evolution unstable beyond leading order. We isolate the first such correction in a recent calculation of the BMS equation to next-to-leading order by Caron-Huot. To overcome this difficulty, we construct a `collinearly-improved' version of the leading-order BMS equation which resums the double collinear logarithms to all orders. Our construction is inspired by a recent treatment of the Balitsky-Kovchegov (BK) equation for the high-energy evolution of a space-like wavefunction, where similar time-ordering issues occur. We show that the conformal mapping relating the leading-order BMS and BK equations correctly predicts the physical time-ordering, but it fails to predict the detailed structure of the collinear improvement.

  13. An empirical approach to improving tidal predictions using recent real-time tide gauge data

    NASA Astrophysics Data System (ADS)

    Hibbert, Angela; Royston, Samantha; Horsburgh, Kevin J.; Leach, Harry

    2014-05-01

    Classical harmonic methods of tidal prediction are often problematic in estuarine environments due to the distortion of tidal fluctuations in shallow water, which results in a disparity between predicted and observed sea levels. This is of particular concern in the Bristol Channel, where the error associated with tidal predictions is potentially greater due to an unusually large tidal range of around 12 m. As such predictions are fundamental to the short-term forecasting of High Water (HW) extremes, it is vital that alternative solutions are found. In a pilot study, using a year-long observational sea level record from the Port of Avonmouth in the Bristol Channel, the UK National Tidal and Sea Level Facility (NTSLF) tested the potential for reducing tidal prediction errors, using three alternatives to the Harmonic Method of tidal prediction. The three methods evaluated were (1) the use of Artificial Neural Network (ANN) models, (2) the Species Concordance technique and (3) a simple empirical procedure for correcting Harmonic Method High Water predictions based upon a few recent observations (referred to as the Empirical Correction Method). This latter method was then successfully applied to sea level records from an additional 42 of the 45 tide gauges that comprise the UK Tide Gauge Network. Consequently, it is to be incorporated into the operational systems of the UK Coastal Monitoring and Forecasting Partnership in order to improve short-term sea level predictions for the UK and, in particular, the accurate estimation of HW extremes.
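
    The Empirical Correction Method is only outlined in the abstract; a minimal sketch of one plausible reading (function and variable names are illustrative assumptions, not the NTSLF implementation):

      import numpy as np

      def corrected_hw(harmonic_pred, obs_hw, harm_hw, n=5):
          """Adjust a harmonic High Water prediction by the mean error of the
          harmonic method over the n most recent observed high waters."""
          recent_error = np.mean(np.asarray(obs_hw[-n:]) - np.asarray(harm_hw[-n:]))
          return harmonic_pred + recent_error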

  14. Data Prediction for Public Events in Professional Domains Based on Improved RNN- LSTM

    NASA Astrophysics Data System (ADS)

    Song, Bonan; Fan, Chunxiao; Wu, Yuexin; Sun, Juanjuan

    2018-02-01

    Traditional data services for predicting emergency or non-periodic events usually cannot generate satisfying results or fulfill their prediction purpose. However, these events are influenced by external causes, which means that certain a priori information about them can generally be collected through the Internet. This paper studied these problems and proposed an improved model, an LSTM (Long Short-Term Memory) dynamic prediction and a priori information sequence generation model, which combines RNN-LSTM with a priori information about public events. In prediction tasks, the model is capable of determining trends, and its accuracy is validated. The model delivers better performance and prediction results than the previous one: using a priori information increases prediction accuracy; the LSTM adapts well to changes in the time sequence; and the approach can be widely applied to prediction tasks of the same type and to other time-sequence-related prediction tasks.
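
    A minimal sketch of the general idea described here, concatenating the event series with an a priori information sequence before an LSTM layer (PyTorch is assumed; layer sizes, names, and the single-step output head are illustrative, not the paper's architecture):

      import torch
      import torch.nn as nn

      class PriorLSTM(nn.Module):
          """Event series plus an a priori information sequence fed to an LSTM."""
          def __init__(self, n_features=2, hidden=32):
              super().__init__()
              self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
              self.head = nn.Linear(hidden, 1)

          def forward(self, x):             # x: (batch, time, n_features)
              out, _ = self.lstm(x)
              return self.head(out[:, -1])  # predict the next value

      series = torch.randn(8, 30, 1)  # event data windows
      prior = torch.randn(8, 30, 1)   # web-collected a priori signal
      prediction = PriorLSTM()(torch.cat([series, prior], dim=-1))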

  15. Dilatation-dissipation corrections for advanced turbulence models

    NASA Technical Reports Server (NTRS)

    Wilcox, David C.

    1992-01-01

    This paper analyzes dilatation-dissipation based compressibility corrections for advanced turbulence models. Numerical computations verify that the dilatation-dissipation corrections devised by Sarkar and Zeman greatly improve both the k-omega and k-epsilon model predicted effect of Mach number on spreading rate. However, computations with the k-omega model also show that the Sarkar/Zeman terms cause an undesired reduction in skin friction for the compressible flat-plate boundary layer. A perturbation solution for the compressible wall layer shows that the Sarkar and Zeman terms reduce the effective von Karman constant in the law of the wall. This is the source of the inaccurate k-omega model skin-friction predictions for the flat-plate boundary layer. The perturbation solution also shows that the k-epsilon model has an inherent flaw for compressible boundary layers that is not compensated for by the dilatation-dissipation corrections. A compressibility modification for k-omega and k-epsilon models is proposed that is similar to those of Sarkar and Zeman. The new compressibility term permits accurate predictions for the compressible mixing layer, flat-plate boundary layer, and a shock separated flow with the same values for all closure coefficients.

  16. The accuracy of parent-reported height and weight for 6-12 year old U.S. children.

    PubMed

    Wright, Davene R; Glanz, Karen; Colburn, Trina; Robson, Shannon M; Saelens, Brian E

    2018-02-12

    Previous studies have examined correlations between BMI calculated using parent-reported and directly measured child height and weight. The objective of this study was to validate correction factors for parent-reported child measurements. Concordance between parent-reported and investigator-measured child height, weight, and BMI (kg/m²) among participants in the Neighborhood Impact on Kids Study (n = 616) was examined using the Lin coefficient, where a value of ±1.0 indicates perfect concordance and a value of zero denotes non-concordance. A correction model for parent-reported height, weight, and BMI based on commonly collected demographic information was developed using 75% of the sample. This model was used to estimate corrected measures for the remaining 25% of the sample, and concordance between corrected parent-reported and investigator-measured values was assessed. Accuracy of corrected values in classifying children as overweight/obese was assessed by sensitivity and specificity. Concordance between parent-reported and measured height, weight, and BMI was low (0.007, -0.039, and -0.005, respectively). Concordance in the corrected test samples improved to 0.752 for height, 0.616 for weight, and 0.227 for BMI. Sensitivity of corrected parent-reported measures for predicting overweight and obesity among children in the test sample decreased from 42.8% to 25.6%, while specificity improved from 79.5% to 88.6%. Correction factors improved concordance for height and weight but did not improve the sensitivity of parent-reported measures for measuring child overweight and obesity. Future research should be conducted using larger and more nationally representative samples that allow researchers to fully explore demographic variance in correction coefficients.
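
    For reference, the Lin concordance coefficient used above can be computed directly from paired measurements; a minimal sketch (variable names are illustrative):

      import numpy as np

      def lin_ccc(x, y):
          """Lin's concordance correlation coefficient between paired series,
          e.g., parent-reported vs. investigator-measured heights."""
          x, y = np.asarray(x, float), np.asarray(y, float)
          sxy = np.cov(x, y, bias=True)[0, 1]  # population covariance
          return 2 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)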

  17. Wrong, but useful: regional species distribution models may not be improved by range-wide data under biased sampling.

    PubMed

    El-Gabbas, Ahmed; Dormann, Carsten F

    2018-02-01

    Species distribution modeling (SDM) is an essential method in ecology and conservation. SDMs are often calibrated within one country's borders, typically along a limited environmental gradient with biased and incomplete data, making the quality of these models questionable. In this study, we evaluated how adequate national presence-only data are for calibrating regional SDMs. We trained SDMs for Egyptian bat species at two different scales: only within Egypt and at a species-specific global extent. We used two modeling algorithms: Maxent and elastic net, both under the point-process modeling framework. For each modeling algorithm, we measured the congruence of the predictions of global and regional models for Egypt, assuming that the lower the congruence, the lower the appropriateness of the Egyptian dataset to describe the species' niche. We inspected the effect of incorporating predictions from global models as an additional predictor ("prior") in regional models, and quantified the improvement in terms of AUC and the congruence between regional models run with and without priors. Moreover, we analyzed predictive performance improvements after correction for sampling bias at both scales. On average, predictions from global and regional models in Egypt only weakly concur. Collectively, the use of priors did not lead to much improvement: similar AUC and high congruence between regional models calibrated with and without priors. Correction for sampling bias led to higher model performance, whatever prior was used, making the effect of priors less pronounced. Under biased and incomplete sampling, the use of global bat data did not improve regional model performance. Without enough bias-free regional data, we cannot objectively identify the actual improvement of regional models after incorporating information from the global niche. However, we still see great potential for global model predictions to guide future surveys and improve regional sampling in data-poor regions.

  18. Predictive modeling for corrective maintenance of imaging devices from machine logs.

    PubMed

    Patil, Ravindra B; Patil, Meru A; Ravi, Vidya; Naik, Sarif

    2017-07-01

    In the cost-sensitive healthcare industry, unplanned downtime of diagnostic and therapy imaging devices can be a burden on the financials of both hospitals and original equipment manufacturers (OEMs). In the current era of connectivity, it is easier to get these devices connected to a standard monitoring station. Once a system is connected, OEMs can monitor the health of these devices remotely and take corrective actions by providing preventive maintenance, thereby avoiding major unplanned downtime. In this article, we present an overall methodology for predicting failure of these devices well before the customer experiences it. We use a data-driven approach based on machine learning to predict failures, in turn resulting in reduced machine downtime, improved customer satisfaction, and cost savings for the OEMs. A use case of predicting component failure in the PHILIPS iXR system is explained in this article.

  19. Classification tree models for predicting distributions of michigan stream fish from landscape variables

    USGS Publications Warehouse

    Steen, P.J.; Zorn, T.G.; Seelbach, P.W.; Schaeffer, J.S.

    2008-01-01

    Traditionally, fish habitat requirements have been described from local-scale environmental variables. However, recent studies have shown that studying landscape-scale processes improves our understanding of what drives species assemblages and distribution patterns across the landscape. Our goal was to learn more about constraints on the distribution of Michigan stream fish by examining landscape-scale habitat variables. We used classification trees and landscape-scale habitat variables to create and validate presence-absence models and relative abundance models for Michigan stream fishes. We developed 93 presence-absence models that on average were 72% correct in making predictions for an independent data set, and we developed 46 relative abundance models that were 76% correct in making predictions for independent data. The models were used to create statewide predictive distribution and abundance maps that have the potential to be used for a variety of conservation and scientific purposes. © Copyright by the American Fisheries Society 2008.
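
    A minimal sketch of the presence-absence modeling approach with a classification tree, using synthetic data (the predictor names and model settings are illustrative stand-ins, not those of the study):

      import numpy as np
      from sklearn.tree import DecisionTreeClassifier
      from sklearn.metrics import accuracy_score
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)
      # Hypothetical landscape-scale predictors (e.g., catchment area, July air
      # temperature, surficial geology index) and a synthetic presence/absence
      # response for one species.
      X = rng.normal(size=(500, 3))
      y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

      X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
      tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_train, y_train)
      print(f"accuracy on held-out data: {accuracy_score(y_test, tree.predict(X_test)):.0%}")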

  20. Study on SOC wavelet analysis for LiFePO4 battery

    NASA Astrophysics Data System (ADS)

    Liu, Xuepeng; Zhao, Dongmei

    2017-08-01

    Improving the prediction accuracy of the state of charge (SOC) can reduce the conservatism and complexity of control strategies for LiFePO4 battery systems, such as scheduling, optimization, and planning. Based on an analysis of the relationship between historical SOC data and external stress factors, an SOC estimation-correction prediction model based on wavelet analysis is established. A wavelet neural network prediction model provides the high-precision forecast step, while measured external stress data are used to update the model's parameter estimates in the correction step, allowing the forecast model to adapt as the battery's operating point varies across its rated charge and discharge conditions. The test results show that the method yields a higher-precision prediction model even when the input and output of the LiFePO4 battery change frequently.

  1. Assessment of Specific Characteristics of Abnormal General Movements: Does It Enhance the Prediction of Cerebral Palsy?

    ERIC Educational Resources Information Center

    Hamer, Elisa G.; Bos, Arend F.; Hadders-Algra, Mijna

    2011-01-01

    Aim: Abnormal general movements at around 3 months corrected age indicate a high risk of cerebral palsy (CP). We aimed to determine whether specific movement characteristics can improve the predictive power of definitely abnormal general movements. Method: Video recordings of 46 infants with definitely abnormal general movements at 9 to 13 weeks…

  2. Subjective field study of response to impulsive helicopter noise

    NASA Technical Reports Server (NTRS)

    Powell, C. A.

    1981-01-01

    Subjects, located outdoors and indoors, judged the noisiness and other subjective noise characteristics of flyovers of two helicopters and a propeller-driven airplane as part of a study of the effects of impulsiveness on the subjective response to helicopter noise. In the first experiment, the impulsive characteristics of one helicopter were controlled by varying the main rotor speed while maintaining a constant airspeed in level flight. The second experiment, which utilized only the helicopters, included descent and level flight operations. The more impulsive helicopter was consistently judged less noisy than the less impulsive helicopter at equal effective perceived noise levels (EPNL). The ability of EPNL to predict noisiness was not improved by the addition of either of two proposed impulse corrections. A subjective measure of impulsiveness, however, which was not significantly related to the proposed impulse corrections, was found to improve the predictive ability of EPNL.

  3. Predictive sensor method and apparatus

    NASA Technical Reports Server (NTRS)

    Cambridge, Vivien J.; Koger, Thomas L.

    1993-01-01

    A microprocessor and electronics package employing predictive methodology was developed to accelerate the response time of slowly responding hydrogen sensors. The system improved sensor response time from approximately 90 seconds to 8.5 seconds. The microprocessor works in real time, providing accurate hydrogen concentrations corrected for fluctuations in sensor output resulting from changes in atmospheric pressure and temperature. Following the successful development of the hydrogen sensor system, the system and predictive methodology were adapted to a commercial medical thermometer probe. Results of the experiment indicate that, with some customization of hardware and software, response time improvements are possible for medical thermometers as well as other slowly responding sensors.

  4. How to deal with multiple binding poses in alchemical relative protein-ligand binding free energy calculations.

    PubMed

    Kaus, Joseph W; Harder, Edward; Lin, Teng; Abel, Robert; McCammon, J Andrew; Wang, Lingle

    2015-06-09

    Recent advances in improved force fields and sampling methods have made possible the accurate calculation of protein–ligand binding free energies. Alchemical free energy perturbation (FEP) using an explicit solvent model is one of the most rigorous methods to calculate relative binding free energies. However, for cases where there are high energy barriers separating the relevant conformations that are important for ligand binding, the calculated free energy may depend on the initial conformation used in the simulation due to the lack of complete sampling of all the important regions in phase space. This is particularly true for ligands with multiple possible binding modes separated by high energy barriers, making it difficult to sample all relevant binding modes even with modern enhanced sampling methods. In this paper, we apply a previously developed method that provides a corrected binding free energy for ligands with multiple binding modes by combining the free energy results from multiple alchemical FEP calculations starting from all enumerated poses, and the results are compared with Glide docking and MM-GBSA calculations. From these calculations, the dominant ligand binding mode can also be predicted. We apply this method to a series of ligands that bind to c-Jun N-terminal kinase-1 (JNK1) and obtain improved free energy results. The dominant ligand binding modes predicted by this method agree with the available crystallography, while both Glide docking and MM-GBSA calculations incorrectly predict the binding modes for some ligands. The method also helps separate the force field error from the ligand sampling error, such that deviations in the predicted binding free energy from the experimental values likely indicate possible inaccuracies in the force field. An error in the force field for a subset of the ligands studied was identified using this method, and improved free energy results were obtained by correcting the partial charges assigned to the ligands. This improved the root-mean-square error (RMSE) for the predicted binding free energy from 1.9 kcal/mol with the original partial charges to 1.3 kcal/mol with the corrected partial charges.
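
    The combination step can be illustrated with Boltzmann weighting of the per-pose free energies; a minimal sketch under that assumption (the paper's exact combination rule may differ in detail):

      import numpy as np

      RT = 0.593  # kcal/mol at ~298 K

      def combined_dG(per_pose_dG):
          """Boltzmann-weighted combination of per-pose binding free energies
          (kcal/mol); the lowest-dG pose dominates the combined value."""
          dG = np.asarray(per_pose_dG, float)
          return -RT * np.log(np.exp(-dG / RT).sum())

      poses = [-9.1, -7.4, -6.0]
      print(combined_dG(poses))     # close to -9.1, the dominant contribution
      print(int(np.argmin(poses)))  # index of the predicted dominant binding mode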

  5. How To Deal with Multiple Binding Poses in Alchemical Relative Protein–Ligand Binding Free Energy Calculations

    PubMed Central

    2016-01-01

    Recent advances in improved force fields and sampling methods have made possible the accurate calculation of protein–ligand binding free energies. Alchemical free energy perturbation (FEP) using an explicit solvent model is one of the most rigorous methods to calculate relative binding free energies. However, for cases where there are high energy barriers separating the relevant conformations that are important for ligand binding, the calculated free energy may depend on the initial conformation used in the simulation due to the lack of complete sampling of all the important regions in phase space. This is particularly true for ligands with multiple possible binding modes separated by high energy barriers, making it difficult to sample all relevant binding modes even with modern enhanced sampling methods. In this paper, we apply a previously developed method that provides a corrected binding free energy for ligands with multiple binding modes by combining the free energy results from multiple alchemical FEP calculations starting from all enumerated poses, and the results are compared with Glide docking and MM-GBSA calculations. From these calculations, the dominant ligand binding mode can also be predicted. We apply this method to a series of ligands that bind to c-Jun N-terminal kinase-1 (JNK1) and obtain improved free energy results. The dominant ligand binding modes predicted by this method agree with the available crystallography, while both Glide docking and MM-GBSA calculations incorrectly predict the binding modes for some ligands. The method also helps separate the force field error from the ligand sampling error, such that deviations in the predicted binding free energy from the experimental values likely indicate possible inaccuracies in the force field. An error in the force field for a subset of the ligands studied was identified using this method, and improved free energy results were obtained by correcting the partial charges assigned to the ligands. This improved the root-mean-square error (RMSE) for the predicted binding free energy from 1.9 kcal/mol with the original partial charges to 1.3 kcal/mol with the corrected partial charges. PMID:26085821

  6. Counteracting structural errors in ensemble forecast of influenza outbreaks.

    PubMed

    Pei, Sen; Shaman, Jeffrey

    2017-10-13

    For influenza forecasts generated using dynamical models, forecast inaccuracy is partly attributable to the nonlinear growth of error. As a consequence, quantification of the nonlinear error structure in current forecast models is needed so that this growth can be corrected and forecast skill improved. Here, we inspect the error growth of a compartmental influenza model and find that a robust error structure arises naturally from the nonlinear model dynamics. By counteracting these structural errors, diagnosed using error breeding, we develop a new forecast approach that combines dynamical error correction and statistical filtering techniques. In retrospective forecasts of historical influenza outbreaks for 95 US cities from 2003 to 2014, overall forecast accuracy for outbreak peak timing, peak intensity and attack rate is substantially improved for predicted lead times up to 10 weeks. This error growth correction method can be generalized to improve the forecast accuracy of other infectious disease dynamical models. Inaccuracy of influenza forecasts based on dynamical models is partly due to nonlinear error growth; here the authors address the error structure of a compartmental influenza model and develop a new improved forecast approach combining dynamical error correction and statistical filtering techniques.

  7. Optical Coherence Tomography–Based Corneal Power Measurement and Intraocular Lens Power Calculation Following Laser Vision Correction (An American Ophthalmological Society Thesis)

    PubMed Central

    Huang, David; Tang, Maolong; Wang, Li; Zhang, Xinbo; Armour, Rebecca L.; Gattey, Devin M.; Lombardi, Lorinna H.; Koch, Douglas D.

    2013-01-01

    Purpose: To use optical coherence tomography (OCT) to measure corneal power and improve the selection of intraocular lens (IOL) power in cataract surgeries after laser vision correction. Methods: Patients with previous myopic laser vision corrections were enrolled in this prospective study from two eye centers. Corneal thickness and power were measured by Fourier-domain OCT. Axial length, anterior chamber depth, and automated keratometry were measured by a partial coherence interferometer. An OCT-based IOL formula was developed. The mean absolute error of the OCT-based formula in predicting postoperative refraction was compared to two regression-based IOL formulae for eyes with previous laser vision correction. Results: Forty-six eyes of 46 patients all had uncomplicated cataract surgery with monofocal IOL implantation. The mean arithmetic prediction error of postoperative refraction was 0.05 ± 0.65 diopter (D) for the OCT formula, 0.14 ± 0.83 D for the Haigis-L formula, and 0.24 ± 0.82 D for the no-history Shammas-PL formula. The mean absolute error was 0.50 D for OCT compared to a mean absolute error of 0.67 D for Haigis-L and 0.67 D for Shammas-PL. The adjusted mean absolute error (average prediction error removed) was 0.49 D for OCT, 0.65 D for Haigis-L (P=.031), and 0.62 D for Shammas-PL (P=.044). For OCT, 61% of the eyes were within 0.5 D of prediction error, whereas 46% were within 0.5 D for both Haigis-L and Shammas-PL (P=.034). Conclusions: The predictive accuracy of OCT-based IOL power calculation was better than Haigis-L and Shammas-PL formulas in eyes after laser vision correction. PMID:24167323

  8. Eye-pupil displacement and prediction: effects on residual wavefront in adaptive optics retinal imaging

    PubMed Central

    Kulcsár, Caroline; Raynaud, Henri-François; Garcia-Rissmann, Aurea

    2016-01-01

    This paper studies the effect of pupil displacements on the best achievable performance of retinal imaging adaptive optics (AO) systems, using 52 trajectories of horizontal and vertical displacements sampled at 80 Hz by a pupil tracker (PT) device on 13 different subjects. This effect is quantified in the form of the minimal root mean square (rms) of the residual phase affecting image formation, as a function of the delay between PT measurement and wavefront correction. It is shown that simple dynamic models identified from data can be used to predict horizontal and vertical pupil displacements with greater accuracy (in terms of average rms) over short-term time horizons. The potential impact of these improvements on residual wavefront rms is investigated. These results make it possible to quantify the part of the disturbances corrected by retinal imaging systems that is caused by relative displacements of an otherwise fixed or slowly varying subject-dependent aberration. They also suggest that prediction has a limited impact on wavefront rms and that taking PT measurements into account in real time improves the performance of AO retinal imaging systems. PMID:27231607

  9. The Acoustic Analogy: A Powerful Tool in Aeroacoustics with Emphasis on Jet Noise Prediction

    NASA Technical Reports Server (NTRS)

    Farassat, F.; Doty, Michael J.; Hunter, Craig A.

    2004-01-01

    The acoustic analogy introduced by Lighthill to study jet noise is now over 50 years old. In the present paper, Lighthill's Acoustic Analogy is revisited together with a brief evaluation of the state of the art of the subject and an exploration of the possibility of further improvements in jet noise prediction from analytical methods, computational fluid dynamics (CFD) predictions, and measurement techniques. Experimental Particle Image Velocimetry (PIV) data are used both to evaluate turbulent statistics from Reynolds-averaged Navier-Stokes (RANS) CFD and to propose correlation models for the Lighthill stress tensor. The NASA Langley Jet3D code is used to study the effect of these models on jet noise prediction. From the analytical investigation, a retarded-time correction is shown to reduce Jet3D's over-prediction of aft-arc jet noise by approximately 8 dB. In the experimental investigation, the PIV data agree well with the CFD mean flow predictions, with room for improvement in Reynolds stress predictions. Initial modifications to the form of the Jet3D correlation model, suggested by the PIV data, showed no noticeable improvements in jet noise prediction.

  10. Theoretical prediction of nuclear magnetic shieldings and indirect spin-spin coupling constants in 1,1-, cis-, and trans-1,2-difluoroethylenes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nozirov, Farhod; Stachów, Michał; Kupka, Teobald

    2014-04-14

    A theoretical prediction of nuclear magnetic shieldings and indirect spin-spin coupling constants in 1,1-, cis- and trans-1,2-difluoroethylenes is reported. The results obtained using density functional theory (DFT) combined with large basis sets and gauge-independent atomic orbital calculations were critically compared with experiment and conventional, higher-level correlated electronic structure methods. Accurate structural, vibrational, and NMR parameters of difluoroethylenes were obtained using several density functionals combined with dedicated basis sets. B3LYP/6-311++G(3df,2pd) optimized structures of difluoroethylenes closely reproduced experimental geometries and earlier reported benchmark coupled cluster results, while BLYP/6-311++G(3df,2pd) produced accurate harmonic vibrational frequencies. The most accurate vibrations were obtained using B3LYP/6-311++G(3df,2pd) with correction for anharmonicity. The Becke half-and-half (BHandH) density functional predicted more accurate ¹⁹F isotropic shieldings, and van Voorhis and Scuseria's τ-dependent gradient-corrected correlation functional yielded better carbon shieldings than B3LYP. A surprisingly good performance of the Hartree-Fock (HF) method in predicting nuclear shieldings in these molecules was observed. Inclusion of the zero-point vibrational correction markedly improved agreement with experiment for nuclear shieldings calculated by HF, MP2, CCSD, and CCSD(T) methods but worsened the DFT results. A threefold improvement in accuracy when predicting ²J(FF) in 1,1-difluoroethylene was observed for the BHandH density functional compared to B3LYP (deviations from experiment of −46 vs. −115 Hz).

  11. Improved Use of Satellite Imagery to Forecast Hurricanes

    NASA Technical Reports Server (NTRS)

    Louis, Jean-Francois

    2001-01-01

    This project tested a novel method that uses satellite imagery to correct phase errors in the initial state for numerical weather prediction, applied to hurricane forecasts. The system was tested on hurricanes Guillermo (1997), Felicia (1997) and Iniki (1992). We compared the performance of the system with and without phase correction to a procedure that uses bogus data in the initial state, similar to current operational procedures. The phase correction keeps the hurricane on track in the analysis and is far superior to a system without phase correction. Compared to the operational procedure, phase correction generates a somewhat worse 3-day forecast of the hurricane track, but a better forecast of intensity. It is believed that the phase correction module would work best in the context of 4-dimensional variational data assimilation (4DVar). Very little modification to 4DVar would be required.

  12. Iris registration in wavefront-guided LASIK to correct mixed astigmatism.

    PubMed

    Khalifa, Mounir; El-Kateb, Mohamed; Shaheen, Mohamed Shafik

    2009-03-01

    To investigate the predictability, safety, and efficiency of wavefront-guided laser in situ keratomileusis (LASIK) with iris-registration technology to correct mixed astigmatism. Vision correction center, Alexandria, Egypt. This retrospective double-blind study included randomly selected patients with mixed astigmatism who sought laser refractive surgery. Patients were divided equally into 3 groups and treated with conventional LASIK and manual marking, wavefront-guided LASIK and manual marking, or wavefront-guided LASIK with iris registration (LASIK+IR group). Eyes were analyzed preoperatively and up to 3 months postoperatively. The LASIK+IR group had better postoperative uncorrected visual acuity (100% 20/30 or better; 90% 20/20 or better; 20% 20/16 or better) than the other groups and did not lose preoperative best spectacle-corrected visual acuity, unlike the other groups. This group also had the highest percentage of eyes that gained lines of acuity (20% 1 line; 10% 2 lines). The LASIK+IR group had the highest predictability of spherical refraction (80% within +/-0.50 diopter [D]; 100% within +/-1.00 D; P<.05) and the highest predictability of cylinder refraction. The LASIK+IR eyes had a significantly smaller postoperative increase in coma, trefoil, and secondary astigmatism. There was no significant difference between groups in spherical aberration or quadrafoil. The LASIK+IR group had the most improvement in scotopic contrast sensitivity (P<.05). Wavefront-guided LASIK with iris registration was more predictable, safe, and efficient than conventional or wavefront-guided LASIK with manual marking in correcting mixed astigmatism. Further studies are needed to confirm these results.

  13. Accurate density functional prediction of molecular electron affinity with the scaling corrected Kohn–Sham frontier orbital energies

    NASA Astrophysics Data System (ADS)

    Zhang, DaDi; Yang, Xiaolong; Zheng, Xiao; Yang, Weitao

    2018-04-01

    Electron affinity (EA) is the energy released when an additional electron is attached to an atom or a molecule. EA is a fundamental thermochemical property, and it is closely pertinent to other important properties such as electronegativity and hardness. However, accurate prediction of EA is difficult with density functional theory methods. The somewhat large error of the calculated EAs originates mainly from the intrinsic delocalisation error associated with the approximate exchange-correlation functional. In this work, we employ a previously developed non-empirical global scaling correction approach, which explicitly imposes the Perdew-Parr-Levy-Balduz condition to the approximate functional, and achieve a substantially improved accuracy for the calculated EAs. In our approach, the EA is given by the scaling corrected Kohn-Sham lowest unoccupied molecular orbital energy of the neutral molecule, without the need to carry out the self-consistent-field calculation for the anion.
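
    In symbols, the prescription described above amounts to the following (a notation sketch, not the paper's own equation; E(N) is the ground-state energy of the N-electron molecule, and the GSC superscript marks the globally scaling-corrected Kohn-Sham LUMO energy of the neutral species):

      \mathrm{EA} \;=\; E(N) - E(N+1) \;\approx\; -\,\varepsilon_{\mathrm{LUMO}}^{\mathrm{GSC}}(N)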

  14. Neurofilament light chain in cerebrospinal fluid and prediction of disease activity in clinically isolated syndrome and relapsing-remitting multiple sclerosis.

    PubMed

    Håkansson, I; Tisell, A; Cassel, P; Blennow, K; Zetterberg, H; Lundberg, P; Dahle, C; Vrethem, M; Ernerudh, J

    2017-05-01

    Improved biomarkers are needed to facilitate clinical decision-making and as surrogate endpoints in clinical trials in multiple sclerosis (MS). We assessed whether neurodegenerative and neuroinflammatory markers in cerebrospinal fluid (CSF) at initial sampling could predict disease activity during 2 years of follow-up in patients with clinically isolated syndrome (CIS) and relapsing-remitting MS. Using multiplex bead array and enzyme-linked immunosorbent assay, CXCL1, CXCL8, CXCL10, CXCL13, CCL20, CCL22, neurofilament light chain (NFL), neurofilament heavy chain, glial fibrillary acidic protein, chitinase-3-like-1, matrix metalloproteinase-9 and osteopontin were analysed in CSF from 41 patients with CIS or relapsing-remitting MS and 22 healthy controls. Disease activity (relapses, magnetic resonance imaging activity or disability worsening) in patients was recorded during 2 years of follow-up in this prospective longitudinal cohort study. In a logistic regression analysis model, NFL in CSF at baseline emerged as the best predictive marker, correctly classifying 93% of patients who showed evidence of disease activity during 2 years of follow-up and 67% of patients who did not, with an overall proportion of 85% (33 of 39 patients) correctly classified. Combining NFL with either neurofilament heavy chain or osteopontin resulted in 87% overall correctly classified patients, whereas combining NFL with a chemokine did not improve results. This study demonstrates the potential prognostic value of NFL in baseline CSF in CIS and relapsing-remitting MS and supports its use as a predictive biomarker of disease activity. © 2017 EAN.
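
    A minimal sketch of the kind of logistic-regression classification reported above, with synthetic data standing in for the baseline CSF measurements (all names and numbers are illustrative, not the study's data):

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import accuracy_score

      rng = np.random.default_rng(1)
      # Synthetic stand-ins for baseline CSF NFL (log-transformed) and a binary
      # disease-activity outcome over two years of follow-up.
      log_nfl = rng.normal(loc=6.5, scale=1.0, size=41).reshape(-1, 1)
      activity = (log_nfl[:, 0] + rng.normal(scale=0.8, size=41) > 6.5).astype(int)

      clf = LogisticRegression().fit(log_nfl, activity)
      print(f"correctly classified: {accuracy_score(activity, clf.predict(log_nfl)):.0%}")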

  15. Patterns of recontracture after surgical correction of Dupuytren disease.

    PubMed

    Dias, Joseph J; Singh, Harvinder Pal; Ullah, Aamer; Bhowal, Bhaskar; Thompson, John R

    2013-10-01

    To study the evolution of deformity of the proximal interphalangeal joint over 5 years after good surgical correction of Dupuytren-induced contracture. We assessed 63 patients (72 fingers; 69 hands) with Dupuytren disease for the degree of contracture, its correction after surgery, and the range of movement at the proximal interphalangeal joints at 3 and 6 months and 1, 3, and 5 years after fasciectomy with or without the use of a firebreak graft. We investigated associations between the recurrence of contracture and preoperative patient and surgical factors. There were 4 patterns of evolution of contracture after surgical correction. A total of 31 patients (33 hands) showed good improvement that was maintained for 5 years (minimal recontracture group). Twenty patients (23 hands) showed good initial improvement, which mildly worsened (< 20°) but was then maintained over 5 years (mild early recontracture group). Four patients (5 hands) worsened in the first 3 months after surgery (> 20°), but there was no further worsening (severe early recontracture group). Eight patients (8 hands) worsened progressively over 5 years (progressive recontracture group). Worsening of contracture by more than 6° between 3 and 6 months after surgery predicted progressive recontracture at 5 years. Recurrence of contracture (not disease recurrence) could be predicted as early as 6 months after surgery for Dupuytren disease. Copyright © 2013 American Society for Surgery of the Hand. Published by Elsevier Inc. All rights reserved.

  16. Global ocean tides through assimilation of oceanographic and altimeter satellite data in a hydrodynamic model

    NASA Technical Reports Server (NTRS)

    Leprovost, Christian; Mazzega, P.; Vincent, P.

    1991-01-01

    Ocean tides must be considered in many scientific disciplines: astronomy, oceanography, geodesy, geophysics, meteorology, and space technologies. Progress in each of these disciplines leads to the need for greater knowledge and more precise predictions of the ocean tide contribution. This is particularly true of satellite altimetry. On the one hand, present and future satellite altimetry missions provide and will supply new data that will contribute to the improvement of the present ocean tide solutions. On the other hand, tidal corrections included in the Geophysical Data Records (GDRs) must be determined with the maximum possible accuracy. The valuable results obtained with satellite altimeter data thus far have not been penalized by the insufficiencies of the present ocean tide predictions included in the GDRs because the oceanic processes investigated have shorter wavelengths than the error field of the tidal predictions, so that the residual errors of the tidal corrections are absorbed in the empirical tilt and bias corrections of the satellite orbit. For future applications to large-scale oceanic phenomena, however, it will no longer be possible to ignore these insufficiencies.

  17. A method for the in vivo measurement of americium-241 at long times post-exposure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Neton, J.W.

    1988-01-01

    This study investigated an improved method for the quantitative measurement, calibration, and calculation of ²⁴¹Am organ burdens in humans. The techniques developed correct for cross-talk, or count-rate contributions from surrounding and adjacent organ burdens, and assure the proper assignment of activity to the lungs, liver, and skeleton. In order to predict the net count-rates for the measurement geometries of the skull, liver, and lung, a background prediction method was developed. This method utilizes data obtained from the measurement of a group of control subjects. Based on these data, a linear prediction equation was developed for each measurement geometry. In order to correct for the cross-contributions among the various deposition loci, a series of surrogate human phantom structures were measured. The results of measurements of ²⁴¹Am depositions in six exposure cases have been evaluated using these new techniques and indicate that lung burden estimates could be in error by as much as 100 percent when corrections are not made for contributions to the count-rate from other organs.

  18. The accuracy of climate models' simulated season lengths and the effectiveness of grid scale correction factors

    DOE PAGES

    Winterhalter, Wade E.

    2011-09-01

    Global climate change is expected to impact biological populations through a variety of mechanisms, including increases in the length of their growing season. Climate models are useful tools for predicting how season length might change in the future. However, the accuracy of these models tends to be rather low at regional geographic scales. Here, I determined the ability of several atmosphere-ocean general circulation models (AOGCMs) to accurately simulate historical season lengths for a temperate ectotherm across the continental United States. I also evaluated the effectiveness of regional-scale correction factors to improve the accuracy of these models. I found that both the accuracy of simulated season lengths and the effectiveness of the correction factors varied geographically and across models. These results suggest that region-specific correction factors do not always adequately remove potential discrepancies between simulated and historically observed environmental parameters. As such, an explicit evaluation of the correction factors' effectiveness should be included in future studies of global climate change's impact on biological populations.

  19. SU-F-J-199: Predictive Models for Cone Beam CT-Based Online Verification of Pencil Beam Scanning Proton Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yin, L; Lin, A; Ahn, P

    Purpose: To utilize online CBCT scans to develop models for predicting DVH metrics in proton therapy of head and neck tumors. Methods: Nine patients with locally advanced oropharyngeal cancer were retrospectively selected for this study. Deformable image registration was applied to map the simulation CT, target volumes, and organs at risk (OARs) contours onto each weekly CBCT scan. Intensity modulated proton therapy (IMPT) treatment plans were created on the simulation CT and forward calculated onto each corrected CBCT scan. Thirty-six potentially predictive metrics were extracted from each corrected CBCT. These features include minimum/maximum/mean over- and under-ranges at the proximal and distal surfaces of PTV volumes, and geometrical and water-equivalent distances between the PTV and each OAR. Principal component analysis (PCA) was used to reduce the dimension of the extracted features. Three principal components were found to account for over 90% of the variance in those features. Datasets from eight patients were used to train a machine learning model to fit these principal components with DVH metrics (dose to 95% and 5% of PTV, mean dose or max dose to OARs) from the forward calculated dose on each corrected CBCT. The accuracy of this model was verified on the dataset from the 9th patient. Results: The predicted changes of DVH metrics from the model were in good agreement with actual values calculated on corrected CBCT images. Median differences were within 1 Gy for most DVH metrics except for larynx and constrictor mean dose. However, a large spread of the differences was observed, indicating additional training datasets and predictive features are needed to improve the model. Conclusion: Intensity corrected CBCT scans hold the potential to be used for online verification of proton therapy and prediction of delivered dose distributions.

  20. Improved soil water deficit estimation through the integration of canopy temperature measurements into a soil water balance model

    USDA-ARS?s Scientific Manuscript database

    Correct prediction of the dynamics of total available water in the root zone (TAWr) is critical for irrigation management as shown in the soil water balance model presented in FAO paper 56 (Allen et al., 1998). In this study, we propose a framework to improve TAWr estimation by incorporating the cro...

  1. Rhythm Perception and Its Role in Perception and Learning of Dysrhythmic Speech.

    PubMed

    Borrie, Stephanie A; Lansford, Kaitlin L; Barrett, Tyson S

    2017-03-01

    The perception of rhythm cues plays an important role in recognizing spoken language, especially in adverse listening conditions. Indeed, this has been shown to hold true even when the rhythm cues themselves are dysrhythmic. This study investigates whether expertise in rhythm perception provides a processing advantage for perception (initial intelligibility) and learning (intelligibility improvement) of naturally dysrhythmic speech, dysarthria. Fifty young adults with typical hearing participated in 3 key tests, including a rhythm perception test, a receptive vocabulary test, and a speech perception and learning test, with standard pretest, familiarization, and posttest phases. Initial intelligibility scores were calculated as the proportion of correct pretest words, while intelligibility improvement scores were calculated by subtracting this proportion from the proportion of correct posttest words. Rhythm perception scores predicted intelligibility improvement scores but not initial intelligibility. On the other hand, receptive vocabulary scores predicted initial intelligibility scores but not intelligibility improvement. Expertise in rhythm perception appears to provide an advantage for processing dysrhythmic speech, but a familiarization experience is required for the advantage to be realized. Findings are discussed in relation to the role of rhythm in speech processing and shed light on processing models that consider the consequence of rhythm abnormalities in dysarthria.

  2. Arthroscopic Hip Revision Surgery for Residual Femoroacetabular Impingement (FAI): Surgical Outcomes Compared With a Matched Cohort After Primary Arthroscopic FAI Correction.

    PubMed

    Larson, Christopher M; Giveans, M Russell; Samuelson, Kathryn M; Stone, Rebecca M; Bedi, Asheesh

    2014-08-01

    There are limited data reporting outcomes after revision arthroscopic surgery for residual femoroacetabular impingement (FAI). (1) Revision arthroscopic FAI correction results in improved outcomes, but they are inferior to those of primary arthroscopic FAI correction. (2) Improved postrevision radiographic parameters are predictive of better outcomes. Cohort study; Level of evidence, 3. Patients who underwent arthroscopic hip revision for residual FAI were reviewed. Pathomorphological findings, intraoperative findings, and preoperative and postoperative modified Harris Hip Score (MHHS), Short Form-12 (SF-12), and pain on a visual analog scale (VAS) values were evaluated. Outcomes after revision arthroscopic FAI correction were compared with outcomes of a matched cohort who underwent primary arthroscopic FAI correction. A total of 79 patients (85 hips) with a mean age of 29.5 years underwent arthroscopic revision FAI correction (mean follow-up, 26 months). The labrum was debrided (27 hips), repaired (49 hips), or reconstructed (7 hips). Two labrums were stable and required no treatment. The results of revision arthroscopic FAI correction were compared with those of 220 age- and sex-matched patients (237 hips) who underwent primary arthroscopic FAI correction (mean follow-up, 23 months). The mean improvement in outcome scores after revision FAI correction was 17.8 (MHHS), 12.5 (SF-12), and 1.4 (VAS) points compared with 23.4 (MHHS), 19.7 (SF-12), and 4.6 (VAS) points after primary arthroscopic FAI correction. The mean improvement was significantly better in the primary cohort compared with the revision cohort (P < .01 for MHHS, SF-12, and VAS values). Good/excellent results were achieved in 81.7% of the primary cohort and 62.7% of the revision cohort (P < .01). Greater postoperative head-neck offset (P = .024), subspine/anterior inferior iliac spine (AIIS) decompression (P = .014), labral repair/reconstruction (P = .009), and capsular plication (P = .032) were significant predictors for better outcomes after revision surgery. Arthroscopic hip revision surgery for residual FAI yielded significantly improved outcome measures, but these were inferior to those after primary arthroscopic FAI corrective surgery. Improved femoral head-neck offset after cam decompression, identification and treatment of subspine/AIIS impingement, labral preservation/reconstruction, and capsular preservation/plication may be paramount to achieve satisfactory outcomes. © 2014 The Author(s).

  3. Optimizing the Hydrological and Biogeochemical Simulations on a Hillslope with Stony Soil

    NASA Astrophysics Data System (ADS)

    Zhu, Q.

    2017-12-01

    Stony soils are widely distributed in hilly areas. However, traditional pedotransfer functions are not reliable in predicting the soil hydraulic parameters for these soils due to the impact of rock fragments. Therefore, large uncertainties and errors may exist in hillslope hydrological and biogeochemical simulations in stony soils due to poor estimates of soil hydraulic parameters. In addition, homogeneous soil hydraulic parameters are usually used in traditional hillslope simulations, even though soil hydraulic parameters are spatially heterogeneous on the hillslope; this may also make simulations unreliable. In this study, we obtained soil hydraulic parameters using five different approaches on a tea hillslope in the Taihu Lake basin, China. These five approaches were (1) Rossetta-predicted and spatially homogeneous, (2) Rossetta-predicted and spatially heterogeneous, (3) Rossetta-predicted, rock-fragment corrected and spatially homogeneous, (4) Rossetta-predicted, rock-fragment corrected and spatially heterogeneous, and (5) extracted from observed soil-water retention curves fitted by a dual-pore function and spatially heterogeneous (observed). These five sets of soil hydraulic properties were then input into Hydrus-3D and DNDC to simulate the soil hydrological and biogeochemical processes. The aim of this study was to test two hypotheses: first, that considering the spatial heterogeneity of soil hydraulic parameters improves the simulations; and second, that considering the impact of rock fragments on soil hydraulic parameters improves the simulations.

  4. Application of distance correction to ChemCam laser-induced breakdown spectroscopy measurements

    DOE PAGES

    Mezzacappa, A.; Melikechi, N.; Cousin, A.; ...

    2016-04-04

    Laser-induced breakdown spectroscopy (LIBS) provides chemical information from atomic, ionic, and molecular emissions from which geochemical composition can be deciphered. Analysis of LIBS spectra in cases where targets are observed at different distances, as is the case for the ChemCam instrument on the Mars rover Curiosity, which performs analyses at distances between 2 and 7.4 m, is not a simple task. Previously, we showed that spectral distance correction based on a proxy spectroscopic standard created from first-shot dust observations on Mars targets ameliorates the distance bias in multivariate-based elemental-composition predictions of laboratory data. In this work, we correct an expanded set of neutral and ionic spectral emissions for distance bias in the ChemCam data set. By using and testing different selection criteria to generate multiple proxy standards, we find a correction that minimizes the difference in spectral intensity measured at two different distances and increases spectral reproducibility. When the quantitative performance of distance correction is assessed, there is improvement for SiO2, Al2O3, CaO, FeOT, Na2O, K2O, that is, for most of the major rock-forming elements, and for the total major-element weight percent predicted. For MgO, however, the method does not provide improvement, while for TiO2 it yields inconsistent results. Additionally, we observed that many emission lines do not behave consistently with distance, as evidenced from laboratory analogue measurements and ChemCam data. This limits the effectiveness of the method.
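
    The multiplicative form of such a correction can be sketched as follows (a simplified reading of the proxy-standard approach; 'proxy_at' is an assumed helper returning the proxy spectrum at a given distance, not part of the actual ChemCam pipeline):

      import numpy as np

      def distance_correct(spectrum, distance, proxy_at, ref_distance=3.0):
          """Scale each spectral channel by the ratio of the proxy standard's
          intensity at a reference distance to its intensity at the
          observation distance."""
          factor = proxy_at(ref_distance) / proxy_at(distance)
          return np.asarray(spectrum) * factor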

  5. Involuntary orienting of attention to a sound desynchronizes the occipital alpha rhythm and improves visual perception.

    PubMed

    Feng, Wenfeng; Störmer, Viola S; Martinez, Antigona; McDonald, John J; Hillyard, Steven A

    2017-04-15

    Directing attention voluntarily to the location of a visual target results in an amplitude reduction (desynchronization) of the occipital alpha rhythm (8-14Hz), which is predictive of improved perceptual processing of the target. Here we investigated whether modulations of the occipital alpha rhythm triggered by the involuntary orienting of attention to a salient but spatially non-predictive sound would similarly influence perception of a subsequent visual target. Target discrimination was more accurate when a sound preceded the target at the same location (validly cued trials) than when the sound was on the side opposite to the target (invalidly cued trials). This behavioral effect was accompanied by a sound-induced desynchronization of the alpha rhythm over the lateral occipital scalp. The magnitude of alpha desynchronization over the hemisphere contralateral to the sound predicted correct discriminations of validly cued targets but not of invalidly cued targets. These results support the conclusion that cue-induced alpha desynchronization over the occipital cortex is a manifestation of a general priming mechanism that improves visual processing and that this mechanism can be activated either by the voluntary or involuntary orienting of attention. Further, the observed pattern of alpha modulations preceding correct and incorrect discriminations of valid and invalid targets suggests that involuntary orienting to the non-predictive sound has a rapid and purely facilitatory influence on processing targets on the cued side, with no inhibitory influence on targets on the opposite side. Copyright © 2017 Elsevier Inc. All rights reserved.
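
    The alpha-desynchronization measure itself is standard: band-limit the occipital EEG, take the analytic signal, and compare post-cue power with a pre-cue baseline. The Python sketch below (using SciPy; filter order and window lengths are assumptions, not the study's settings) shows one way to compute it.

      import numpy as np
      from scipy.signal import butter, filtfilt, hilbert

      def alpha_power(eeg, fs, band=(8.0, 14.0)):
          """Instantaneous alpha-band power of a single EEG channel."""
          b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
          return np.abs(hilbert(filtfilt(b, a, eeg))) ** 2

      def desynchronization(eeg, fs, cue_sample, pre=0.5, post=0.5):
          """Percent drop in alpha power after the cue relative to baseline."""
          p = alpha_power(eeg, fs)
          base = p[int(cue_sample - pre * fs):cue_sample].mean()
          react = p[cue_sample:int(cue_sample + post * fs)].mean()
          return 100.0 * (base - react) / base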

  6. Scalar production and decay to top quarks including interference effects at NLO in QCD in an EFT approach

    DOE PAGES

    Franzosi, Diogo Buarque; Vryonidou, Eleni; Zhang, Cen

    2017-10-13

    Scalar and pseudo-scalar resonances decaying to top quarks are common predictions in several scenarios beyond the standard model (SM) and are extensively searched for by LHC experiments. Challenges on the experimental side require optimising the strategy based on accurate predictions. Firstly, QCD corrections are known to be large both for the SM QCD background and for the pure signal scalar production. Secondly, leading order and approximate next-to-leading order (NLO) calculations indicate that the interference between signal and background is large and drastically changes the lineshape of the signal, from a simple peak to a peak-dip structure. Therefore, a robust prediction of this interference at NLO accuracy in QCD is necessary to ensure that higher-order corrections do not alter the lineshapes. We compute the exact NLO corrections, assuming a point-like coupling between the scalar and the gluons and consistently embedding the calculation in an effective field theory within an automated framework, and present results for a representative set of beyond-the-SM benchmarks. The results can be further matched to parton shower simulation, providing more realistic predictions. We find that NLO corrections are important and lead to a significant reduction of the uncertainties. We also discuss how our computation can be used to improve the predictions for physics scenarios where the gluon-scalar loop is resolved and the effective approach is less applicable.

  8. Transformer Incipient Fault Prediction Using Combined Artificial Neural Network and Various Particle Swarm Optimisation Techniques.

    PubMed

    Illias, Hazlee Azil; Chai, Xin Rui; Abu Bakar, Ab Halim; Mokhlis, Hazlie

    2015-01-01

    It is important to predict incipient faults in transformer oil accurately so that maintenance of the transformer oil can be performed correctly, reducing maintenance costs and minimising errors. Dissolved gas analysis (DGA) has been widely used to predict incipient faults in power transformers. However, the existing DGA methods sometimes yield inaccurate predictions because each method is only suitable for certain conditions. Many previous works have reported on the use of intelligence methods to predict transformer faults. However, it is believed that the accuracy of the previously proposed methods can still be improved. Since artificial neural network (ANN) and particle swarm optimisation (PSO) techniques have never been combined in the previously reported work, this work proposes a combination of ANN and various PSO techniques to predict transformer incipient faults. The advantages of PSO are simplicity and ease of implementation. The effectiveness of the various PSO techniques in combination with ANN is validated by comparison with the results from the actual fault diagnosis, an existing diagnosis method, and ANN alone. The results from the proposed methods were also compared with previously reported work to show the improvement achieved. The proposed ANN-Evolutionary PSO method was found to yield a higher percentage of correct transformer fault-type identification than the existing diagnosis method and previously reported works.
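
    The abstract does not give the network architecture or the DGA feature set, so the Python sketch below is only a toy illustration of the core idea: a particle swarm searches the flattened weight vector of a small neural network in place of gradient descent. All sizes and hyperparameters are assumptions.

      import numpy as np

      def pso_minimize(loss, dim, n_particles=30, iters=200,
                       w=0.7, c1=1.5, c2=1.5, seed=0):
          """Minimal particle swarm optimiser over a flat parameter vector."""
          rng = np.random.default_rng(seed)
          x = rng.uniform(-1.0, 1.0, (n_particles, dim))
          v = np.zeros_like(x)
          pbest = x.copy()
          pbest_val = np.array([loss(p) for p in x])
          g = pbest[pbest_val.argmin()].copy()
          for _ in range(iters):
              r1, r2 = rng.random(x.shape), rng.random(x.shape)
              v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
              x = x + v
              val = np.array([loss(p) for p in x])
              better = val < pbest_val
              pbest[better], pbest_val[better] = x[better], val[better]
              g = pbest[pbest_val.argmin()].copy()
          return g

      def ann_loss(weights, X, y, hidden=8):
          """Cross-entropy of a one-hidden-layer network, weights set by PSO.
          Expects dim = n_in*hidden + hidden + hidden + 1."""
          n_in = X.shape[1]
          W1 = weights[:n_in * hidden].reshape(n_in, hidden)
          b1 = weights[n_in * hidden:n_in * hidden + hidden]
          W2 = weights[n_in * hidden + hidden:-1]
          b2 = weights[-1]
          h = np.tanh(X @ W1 + b1)
          p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
          return -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))

    A binary fault/no-fault classifier would then be trained via pso_minimize(lambda wv: ann_loss(wv, X, y), dim=X.shape[1] * 8 + 8 + 8 + 1); the evolutionary PSO variant the paper favours adds selection on top of this basic update.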

  10. Predictors of emotional functioning in youth after surgical correction of idiopathic scoliosis.

    PubMed

    Zebracki, Kathy; Thawrani, Dinesh; Oswald, Timothy S; Anadio, Jennifer M; Sturm, Peter F

    2013-09-01

    Patients with idiopathic scoliosis, although otherwise healthy, often have significant concerns about their self-image and appearance. In a group of juveniles and adolescents, this can impact adjustment in school, functioning in peer groups, and general sense of well-being. There are limited data to help physicians reliably and precisely identify those who are at higher risk of poor emotional adjustment even after spine deformity correction. The purpose of this study was to determine the predictors of emotional maladjustment in juvenile and adolescent patients after surgical correction of idiopathic scoliosis. A total of 233 juveniles, mean age 11.26 ± 1.02 (range, 8 to 12) years and 909 adolescents, mean age 14.91 ± 1.61 (range, 13 to 21) years, who underwent surgical correction for idiopathic scoliosis and who were participating in a prospective longitudinal multicenter database, were enrolled in the study. Participants completed the Scoliosis Research Society-22 (SRS-22) questionnaire before surgery and 2 years postoperatively. Radiographs were used to measure Cobb angle and surface measurements were used to determine decompensation and trunk shift. Adolescents reported poorer mental health preoperatively (P<0.05) and 2 years postoperatively (P<0.001) than juveniles; however, both groups reported improved mental health (P<0.001) and self-image (P<0.01) postoperatively. Mental health 2 years postoperatively was predicted by preoperative self-image (P<0.05), mental health (P<0.001), and main thoracic Cobb angle (P<0.05) in the juvenile group. Within the adolescent group, mental health 2 years postoperatively was predicted by preoperative mental health (P<0.001); self-image 2 years postoperatively was predicted by preoperative mental health (P<0.01) and self-image (P<0.001). Self-image and mental health are significantly improved after spine deformity correction in juveniles and adolescents with idiopathic scoliosis. However, consistent with normative development, adolescents are at higher risk for emotional maladjustment than juveniles. Surgical decision making in scoliosis correction should take the emotional status of the patient into consideration.

  11. Accurate and predictive antibody repertoire profiling by molecular amplification fingerprinting.

    PubMed

    Khan, Tarik A; Friedensohn, Simon; Gorter de Vries, Arthur R; Straszewski, Jakub; Ruscheweyh, Hans-Joachim; Reddy, Sai T

    2016-03-01

    High-throughput antibody repertoire sequencing (Ig-seq) provides quantitative molecular information on humoral immunity. However, Ig-seq is compromised by biases and errors introduced during library preparation and sequencing. By using synthetic antibody spike-in genes, we determined that primer bias from multiplex polymerase chain reaction (PCR) library preparation resulted in antibody frequencies with only 42 to 62% accuracy. Additionally, Ig-seq errors resulted in antibody diversity measurements being overestimated by up to 5000-fold. To rectify this, we developed molecular amplification fingerprinting (MAF), which uses unique molecular identifier (UID) tagging before and during multiplex PCR amplification, enabling tagging of transcripts while accounting for PCR efficiency. Combined with a bioinformatic pipeline, MAF bias correction led to measurements of antibody frequencies with up to 99% accuracy. We also used MAF to correct PCR and sequencing errors, resulting in enhanced accuracy of full-length antibody diversity measurements, achieving 98 to 100% error correction. Using murine MAF-corrected data, we established a quantitative metric of recent clonal expansion, the intraclonal diversity index, which measures the number of unique transcripts associated with an antibody clone. We used this intraclonal diversity index along with antibody frequencies and somatic hypermutation to build a logistic regression model for predicting the immunological status of clones. The model was able to predict clonal status with high confidence, but only when using MAF error- and bias-corrected Ig-seq data. The improved accuracy provided by MAF has the potential to greatly advance Ig-seq and its utility in immunology and biotechnology.
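
    The UID logic at the heart of MAF can be illustrated compactly: reads sharing a UID descend from one transcript, so a per-position majority vote across them removes most PCR and sequencing errors, and the number of distinct UIDs per clone gives the intraclonal diversity index. The Python sketch below is a simplified stand-in for the published pipeline, not the pipeline itself.

      from collections import Counter, defaultdict

      def uid_consensus(reads):
          """Collapse (uid, sequence) pairs into one consensus per UID by
          per-position majority vote; reads are truncated to the shortest
          sequence in each UID group for simplicity."""
          groups = defaultdict(list)
          for uid, seq in reads:
              groups[uid].append(seq)
          consensus = {}
          for uid, seqs in groups.items():
              n = min(len(s) for s in seqs)
              consensus[uid] = "".join(
                  Counter(s[i] for s in seqs).most_common(1)[0][0] for i in range(n)
              )
          return consensus   # unique UIDs per clone ~ intraclonal diversity index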

  13. High accuracy satellite drag model (HASDM)

    NASA Astrophysics Data System (ADS)

    Storz, M.; Bowman, B.; Branson, J.

    The dominant error source in the force models used to predict low-perigee satellite trajectories is atmospheric drag. Errors in operational thermospheric density models cause significant errors in predicted satellite positions, since these models do not account for dynamic changes in atmospheric drag for orbit predictions. The Air Force Space Battlelab's High Accuracy Satellite Drag Model (HASDM) estimates and predicts (out to three days) a dynamically varying high-resolution density field. HASDM includes the Dynamic Calibration Atmosphere (DCA) algorithm, which solves for the phases and amplitudes of the diurnal, semidiurnal and terdiurnal variations of thermospheric density in near real time from the observed drag effects on a set of Low Earth Orbit (LEO) calibration satellites. The density correction is expressed as a function of latitude, local solar time and altitude. In HASDM, a time-series prediction filter relates the extreme ultraviolet (EUV) energy index E10.7 and the geomagnetic storm index ap to the DCA density correction parameters. The E10.7 index is generated by the SOLAR2000 model, the first full-spectrum model of solar irradiance. The estimated and predicted density fields will be used operationally to significantly improve the accuracy of predicted trajectories for all low-perigee satellites.

  14. High accuracy satellite drag model (HASDM)

    NASA Astrophysics Data System (ADS)

    Storz, Mark F.; Bowman, Bruce R.; Branson, Major James I.; Casali, Stephen J.; Tobiska, W. Kent

    The dominant error source in force models used to predict low-perigee satellite trajectories is atmospheric drag. Errors in operational thermospheric density models cause significant errors in predicted satellite positions, since these models do not account for dynamic changes in atmospheric drag for orbit predictions. The Air Force Space Battlelab's High Accuracy Satellite Drag Model (HASDM) estimates and predicts (out to three days) a dynamically varying global density field. HASDM includes the Dynamic Calibration Atmosphere (DCA) algorithm, which solves for the phases and amplitudes of the diurnal and semidiurnal variations of thermospheric density in near real time from the observed drag effects on a set of Low Earth Orbit (LEO) calibration satellites. The density correction is expressed as a function of latitude, local solar time and altitude. In HASDM, a time-series prediction filter relates the extreme ultraviolet (EUV) energy index E10.7 and the geomagnetic storm index ap to the DCA density correction parameters. The E10.7 index is generated by the SOLAR2000 model, the first full-spectrum model of solar irradiance. The estimated and predicted density fields will be used operationally to significantly improve the accuracy of predicted trajectories for all low-perigee satellites.

  15. Experimental evaluation of a mathematical model for predicting transfer efficiency of a high volume-low pressure air spray gun.

    PubMed

    Tan, Y M; Flynn, M R

    2000-10-01

    The transfer efficiency of a spray-painting gun is defined as the amount of coating applied to the workpiece divided by the amount sprayed. Characterizing this transfer process allows for accurate estimation of the overspray generation rate, which is important for determining a spray painter's exposure to airborne contaminants. This study presents an experimental evaluation of a mathematical model for predicting the transfer efficiency of a high volume-low pressure spray gun. The effects of gun-to-surface distance and nozzle pressure on the agreement between the transfer efficiency measurement and prediction were examined. Wind tunnel studies and non-volatile vacuum pump oil in place of commercial paint were used to determine transfer efficiency at nine gun-to-surface distances and four nozzle pressure levels. The mathematical model successfully predicts transfer efficiency within the uncertainty limits. The least squares regression between measured and predicted transfer efficiency has a slope of 0.83 and an intercept of 0.12 (R2 = 0.98). Two correction factors were determined to improve the mathematical model. At higher nozzle pressure settings, 6.5 psig and 5.5 psig, the correction factor is a function of both gun-to-surface distance and nozzle pressure level. At lower nozzle pressures, 4 psig and 2.75 psig, gun-to-surface distance slightly influences the correction factor, while nozzle pressure has no discernible effect.

  16. Renormalization group independence of Cosmological Attractors

    NASA Astrophysics Data System (ADS)

    Fumagalli, Jacopo

    2017-06-01

    The large class of inflationary models known as α- and ξ-attractors gives identical cosmological predictions at tree level (at leading order in inverse powers of the number of e-folds). Working with the renormalization group improved action, we show that these predictions are robust under quantum corrections. This means that for all the models considered, the inflationary parameters (ns, r) are (nearly) independent of the renormalization group flow. The result follows once the field dependence of the renormalization scale, fixed by demanding that the leading log correction vanish, satisfies a quite generic condition. In Higgs inflation (which is a particular ξ-attractor) this is indeed the case; in the more general attractor models it is still ensured by the renormalizability of the theory in the effective field theory sense.

  17. Seeing the Errors You Feel Enhances Locomotor Performance but Not Learning.

    PubMed

    Roemmich, Ryan T; Long, Andrew W; Bastian, Amy J

    2016-10-24

    In human motor learning, it is thought that the more information we have about our errors, the faster we learn. Here, we show that additional error information can lead to improved motor performance without any concomitant improvement in learning. We studied split-belt treadmill walking that drives people to learn a new gait pattern using sensory prediction errors detected by proprioceptive feedback. When we also provided visual error feedback, participants acquired the new walking pattern far more rapidly and showed accelerated restoration of the normal walking pattern during washout. However, when the visual error feedback was removed during either learning or washout, errors reappeared with performance immediately returning to the level expected based on proprioceptive learning alone. These findings support a model with two mechanisms: a dual-rate adaptation process that learns invariantly from sensory prediction error detected by proprioception and a visual-feedback-dependent process that monitors learning and corrects residual errors but shows no learning itself. We show that our voluntary correction model accurately predicted behavior in multiple situations where visual feedback was used to change acquisition of new walking patterns while the underlying learning was unaffected. The computational and behavioral framework proposed here suggests that parallel learning and error correction systems allow us to rapidly satisfy task demands without necessarily committing to learning, as the relative permanence of learning may be inappropriate or inefficient when facing environments that are liable to change. Copyright © 2016 Elsevier Ltd. All rights reserved.
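
    The two-mechanism model can be made concrete in a few lines. Below is a hedged Python sketch of a standard dual-rate adaptation process (retention and learning rates are illustrative, not fitted values from the paper); the visual-feedback correction the authors describe would null the residual error at the output on each stride without updating either adaptive state.

      import numpy as np

      def dual_rate(perturbation, a_f=0.92, b_f=0.03, a_s=0.996, b_s=0.004):
          """Dual-rate adaptation: fast and slow states both learn from the same
          proprioceptive prediction error; motor output is their sum."""
          x_f = x_s = 0.0
          out = np.zeros(len(perturbation))
          for n, p in enumerate(perturbation):
              out[n] = x_f + x_s
              e = p - out[n]               # sensory prediction error
              x_f = a_f * x_f + b_f * e    # fast: learns and forgets quickly
              x_s = a_s * x_s + b_s * e    # slow: learns slowly, retains well
          return out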

  18. Improved Short-Term Clock Prediction Method for Real-Time Positioning.

    PubMed

    Lv, Yifei; Dai, Zhiqiang; Zhao, Qile; Yang, Sheng; Zhou, Jinning; Liu, Jingnan

    2017-06-06

    The application of real-time precise point positioning (PPP) requires real-time precise orbit and clock products that should be predicted within a short time to compensate for the communication delay or data gap. Unlike orbit correction, clock correction is difficult to model and predict. The widely used linear model hardly fits long periodic trends with a small data set and exhibits significant accuracy degradation in real-time prediction when a large data set is used. This study proposes a new prediction model for maintaining short-term satellite clocks to meet the high-precision requirements of real-time clocks and provide clock extrapolation without interrupting the real-time data stream. Fast Fourier transform (FFT) is used to analyze the linear prediction residuals of real-time clocks. The periodic terms obtained through FFT are adopted in the sliding window prediction to achieve a significant improvement in short-term prediction accuracy. This study also analyzes and compares the accuracy of short-term forecasts (less than 3 h) by using different length observations. Experimental results obtained from International GNSS Service (IGS) final products and our own real-time clocks show that the 3-h prediction accuracy is better than 0.85 ns. The new model can replace IGS ultra-rapid products in the application of real-time PPP. It is also found that there is a positive correlation between the prediction accuracy and the short-term stability of on-board clocks. Compared with the accuracy of the traditional linear model, the accuracy of the static PPP using the new model of the 2-h prediction clock in N, E, and U directions is improved by about 50%. Furthermore, the static PPP accuracy of 2-h clock products is better than 0.1 m. When an interruption occurs in the real-time model, the accuracy of the kinematic PPP solution using 1-h clock prediction product is better than 0.2 m, without significant accuracy degradation. This model is of practical significance because it solves the problems of interruption and delay in data broadcast in real-time clock estimation and can meet the requirements of real-time PPP.
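
    The core of the proposed model is easy to state: fit a linear trend to recent clock biases, find the dominant periodicities of the residuals with an FFT, and extrapolate trend plus harmonics. The Python sketch below assumes uniformly sampled epochs and illustrates that recipe rather than the authors' exact sliding-window implementation.

      import numpy as np

      def predict_clock(t, bias, t_future, n_harmonics=2):
          """Extrapolate a satellite clock bias: linear fit + the strongest
          FFT harmonics of the fit residuals (t assumed uniformly sampled)."""
          coef = np.polyfit(t, bias, 1)
          resid = bias - np.polyval(coef, t)
          amps = np.fft.rfft(resid)
          freqs = np.fft.rfftfreq(len(t), d=t[1] - t[0])
          # indices of the strongest non-DC harmonics
          top = np.argsort(np.abs(amps[1:]))[::-1][:n_harmonics] + 1
          pred = np.polyval(coef, t_future)
          for k in top:
              a, phi = 2.0 * np.abs(amps[k]) / len(t), np.angle(amps[k])
              pred = pred + a * np.cos(2 * np.pi * freqs[k] * (t_future - t[0]) + phi)
          return pred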

  19. A first-principles examination of the asymmetric induction model in the binap/Rh(I)-catalysed 1,4-addition of phenylboronic acid to cyclic enones by density functional theory calculations.

    PubMed

    Qin, Hua-Li; Chen, Xiao-Qing; Huang, Yi-Zhen; Kantchev, Eric Assen B

    2014-09-26

    First-principles modelling of the diastereomeric transition states in the enantiodiscrimination stage of the catalytic cycle can reveal intimate details about the mechanism of enantioselection. This information can be invaluable for further improvement of catalytic protocols by rational design. Herein, we present a density functional theory (IEFPCM/PBE0/DGDZVP level of theory) modelling of the carborhodation step for the asymmetric 1,4-arylation of cyclic α,β-unsaturated ketones mediated by a [(binap)Rh(I)] catalyst. The calculations completely support the older, qualitative, pictorial model predicting the sense of the asymmetric induction for both the chelating diphosphane (binap) and the more recent chiral diene (Phbod) ligands, while also permitting quantification of the enantiomeric excess (ee). The effects of dispersion interaction corrections and basis sets have also been investigated. Dispersion-corrected functionals and solvation models significantly improve the predicted ee values. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  20. Transonic cascade flow prediction using the Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Arnone, A.; Stecco, S. S.

    1991-01-01

    This paper presents results which summarize the work carried out during the last three years to improve the efficiency and accuracy of numerical predictions in turbomachinery flow calculations. A new kind of nonperiodic c-type grid is presented and a Runge-Kutta scheme with accelerating strategies is used as a flow solver. The code capability is presented by testing four different blades at different exit Mach numbers in transonic regimes. Comparison with experiments shows the very good reliability of the numerical prediction. In particular, the loss coefficient seems to be correctly predicted by using the well-known Baldwin-Lomax turbulence model.

  1. The NASA aircraft noise prediction program improved propeller analysis system

    NASA Technical Reports Server (NTRS)

    Nguyen, L. Cathy

    1991-01-01

    The improvements and modifications of the NASA Aircraft Noise Prediction Program (ANOPP) and the Propeller Analysis System (PAS) are described. Comparisons of the predictions and the test data are included in the case studies for the flat plate model in the Boundary Layer Module, for the effects of applying compressibility corrections to the lift and pressure coefficients, for the use of different weight factors in the Propeller Performance Module, for the use of the improved retarded-time equation solution, and for the effect of the number of grids in the Transonic Propeller Noise Module. The DNW tunnel test data of a propeller at different angles of attack and the Dowty Rotol data are compared with ANOPP predictions. The effect of the number of grids on the Transonic Propeller Noise Module predictions and the comparison of the ANOPP TPN and DFP-ATP codes are studied. In addition to the above impact studies, transonic propeller noise predictions for the SR-7 and the UDF front rotor, and support of the en route noise test program, are included.

  2. LBSizeCleav: improved support vector machine (SVM)-based prediction of Dicer cleavage sites using loop/bulge length.

    PubMed

    Bao, Yu; Hayashida, Morihiro; Akutsu, Tatsuya

    2016-11-25

    Dicer is necessary for the formation of mature microRNA (miRNA) because the Dicer enzyme must cleave pre-miRNA correctly to generate miRNA with correct seed regions. Nonetheless, the mechanism underlying the selection of a Dicer cleavage site is still not fully understood. Several studies have been conducted to address this problem; for example, a recent discovery indicates that the loop/bulge structure plays a central role in the selection of Dicer cleavage sites. Following this breakthrough, a support vector machine (SVM)-based method called PHDCleav was developed to predict Dicer cleavage sites; it outperforms other methods based on random forest and naive Bayes. PHDCleav, however, tests only whether a position in the shift window belongs to a loop/bulge structure. In this paper, we used the length of loop/bulge structures (in addition to their presence or absence) to develop an improved method, LBSizeCleav, for predicting Dicer cleavage sites. To evaluate our method, we used 810 empirically validated sequences of human pre-miRNAs and performed fivefold cross-validation. In both the 5p and 3p arms of pre-miRNAs, LBSizeCleav showed greater prediction accuracy than PHDCleav. This result suggests that the length of loop/bulge structures is useful for predicting Dicer cleavage sites. We developed a novel algorithm for feature-space mapping based on the length of a loop/bulge for predicting Dicer cleavage sites. The better performance of our method indicates the usefulness of the length of loop/bulge structures for such predictions.
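
    The feature construction can be sketched briefly: around each candidate cleavage position, encode the nucleotides and, instead of PHDCleav's binary loop/bulge flag, the length of the unpaired run covering each position in the dot-bracket secondary structure. The Python snippet below is a hedged illustration; the window size, encoding, and downstream SVM settings are assumptions.

      import numpy as np

      def window_features(sequence, structure, center, w=5):
          """One-hot nucleotides plus loop/bulge run length (0 if paired) for
          each position in a window around a candidate Dicer cleavage site.
          sequence: RNA over ACGU; structure: dot-bracket string."""
          feats = []
          for i in range(center - w, center + w + 1):
              one_hot = [0.0] * 4
              run = 0.0
              if 0 <= i < len(sequence):
                  one_hot["ACGU".index(sequence[i])] = 1.0
                  if structure[i] == ".":
                      left = right = i
                      while left > 0 and structure[left - 1] == ".":
                          left -= 1
                      while right < len(structure) - 1 and structure[right + 1] == ".":
                          right += 1
                      run = float(right - left + 1)
              feats.extend(one_hot + [run])
          return np.array(feats)

    Feature rows built this way for true and false cleavage sites would then be fed to any SVM implementation (e.g. an RBF-kernel classifier) for the fivefold cross-validation described.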

  3. An improved simulation of the 2015 El Niño event by optimally correcting the initial conditions and model parameters in an intermediate coupled model

    NASA Astrophysics Data System (ADS)

    Zhang, Rong-Hua; Tao, Ling-Jiang; Gao, Chuan

    2017-09-01

    Large uncertainties exist in real-time predictions of the 2015 El Niño event, which show systematic intensity biases that are strongly model-dependent. It is critically important to characterize these model biases so they can be reduced appropriately. In this study, the conditional nonlinear optimal perturbation (CNOP)-based approach was applied to an intermediate coupled model (ICM) equipped with a four-dimensional variational data assimilation technique. The CNOP-based approach was used to quantify prediction errors that can be attributed to initial conditions (ICs) and model parameters (MPs). Two key MPs were considered in the ICM: one represents the intensity of the thermocline effect, and the other the relative coupling intensity between the ocean and atmosphere. Two experiments were performed to illustrate the effects of error correction: a standard simulation and an optimized simulation in which errors in the ICs and MPs derived from the CNOP-based approach were optimally corrected. The results indicate that simulations of the 2015 El Niño event can be effectively improved by using CNOP-derived error corrections. In particular, the El Niño intensity in late 2015 was adequately captured when simulations were started from early 2015. Quantitatively, the Niño3.4 SST index simulated for Dec. 2015 increased to 2.8 °C in the optimized simulation, compared with only 1.5 °C in the standard simulation. The feasibility and effectiveness of using the CNOP-based technique to improve ENSO simulations are demonstrated in the context of the 2015 El Niño event. The limitations and further applications are also discussed.

  4. Population variation in isotopic composition of shorebird feathers: Implications for determining molting grounds

    USGS Publications Warehouse

    Torres-Dowdall, J.; Farmer, A.H.; Bucher, E.H.; Rye, R.O.; Landis, G.

    2009-01-01

    Stable isotope analyses have revolutionized the study of migratory connectivity. However, as with all tools, their limitations must be understood in order to derive the maximum benefit of a particular application. The goal of this study was to evaluate the efficacy of stable isotopes of C, N, H, O and S for assigning known-origin feathers to the molting sites of migrant shorebird species wintering and breeding in Argentina. Specific objectives were to: 1) compare the efficacy of the technique for studying shorebird species with different migration patterns, life histories and habitat-use patterns; 2) evaluate the grouping of species with similar migration and habitat-use patterns in a single analysis to potentially improve prediction accuracy; and 3) evaluate the potential gains in prediction accuracy that might be achieved from using multiple stable isotopes. The efficacy of stable isotope ratios to determine origin was found to vary with species. While one species (White-rumped Sandpiper, Calidris fuscicollis) had high levels of accuracy assigning samples to known origin (91% of samples correctly assigned), another (Collared Plover, Charadrius collaris) showed low levels of accuracy (52% of samples correctly assigned). Intra-individual variability may account for this difference in efficacy. The prediction model for three species with similar migration and habitat-use patterns performed poorly compared with the model for just one of the species (71% versus 91% of samples correctly assigned). Thus, combining multiple sympatric species may not improve model prediction accuracy. Increasing the number of stable isotopes in the analyses increased the accuracy of assigning shorebirds to their molting origin, but the best combination (involving a subset of all the isotopes analyzed) varied among species.

  5. Modeling to Predict Escherichia coli at Presque Isle Beach 2, City of Erie, Erie County, Pennsylvania

    USGS Publications Warehouse

    Zimmerman, Tammy M.

    2008-01-01

    The Lake Erie beaches in Pennsylvania are a valuable recreational resource for Erie County. Concentrations of Escherichia coli (E. coli) at monitored beaches in Presque Isle State Park in Erie, Pa., occasionally exceed the single-sample bathing-water standard of 235 colonies per 100 milliliters resulting in potentially unsafe swimming conditions and prompting beach managers to post public advisories or to close beaches to recreation. To supplement the current method for assessing recreational water quality (E. coli concentrations from the previous day), a predictive regression model for E. coli concentrations at Presque Isle Beach 2 was developed from data collected during the 2004 and 2005 recreational seasons. Model output included predicted E. coli concentrations and exceedance probabilities--the probability that E. coli concentrations would exceed the standard. For this study, E. coli concentrations and other water-quality and environmental data were collected during the 2006 recreational season at Presque Isle Beach 2. The data from 2006, an independent year, were used to test (validate) the 2004-2005 predictive regression model and compare the model performance to the current method. Using 2006 data, the 2004-2005 model yielded more correct responses and better predicted exceedances of the standard than the use of E. coli concentrations from the previous day. The differences were not pronounced, however, and more data are needed. For example, the model correctly predicted exceedances of the standard 11 percent of the time (1 out of 9 exceedances that occurred in 2006) whereas using the E. coli concentrations from the previous day did not result in any correctly predicted exceedances. After validation, new models were developed by adding the 2006 data to the 2004-2005 dataset and by analyzing the data in 2- and 3-year combinations. Results showed that excluding the 2004 data (using 2005 and 2006 data only) yielded the best model. Explanatory variables in the 2005-2006 model were log10 turbidity, bird count, and wave height. The 2005-2006 model correctly predicted when the standard would not be exceeded (specificity) with a response of 95.2 percent (178 out of 187 nonexceedances) and correctly predicted when the standard would be exceeded (sensitivity) with a response of 64.3 percent (9 out of 14 exceedances). In all cases, the results from predictive modeling produced higher percentages of correct predictions than using E. coli concentrations from the previous day. Additional data collected each year can be used to test and possibly improve the model. The results of this study will aid beach managers in more rapidly determining when waters are not safe for recreational use and, subsequently, when to close a beach or post an advisory.
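
    A regression model of this type turns readily into an operational exceedance probability. The Python sketch below uses the 2005-2006 explanatory variables named above but entirely made-up coefficients, purely to show the mechanics; the report's fitted values are different.

      from math import erf, log10, sqrt

      # Hypothetical coefficients for illustration only.
      B0, B_TURB, B_BIRD, B_WAVE = 0.9, 0.8, 0.01, 0.6
      SIGMA = 0.4                   # assumed residual std. dev. of the fit
      STANDARD = log10(235.0)       # single-sample standard, colonies/100 mL

      def exceedance_probability(turbidity, bird_count, wave_height):
          """Predicted log10 E. coli and probability of exceeding the standard,
          assuming normally distributed regression residuals."""
          log_ec = (B0 + B_TURB * log10(turbidity)
                    + B_BIRD * bird_count + B_WAVE * wave_height)
          z = (STANDARD - log_ec) / SIGMA
          return log_ec, 0.5 * (1.0 - erf(z / sqrt(2.0)))   # 1 - Phi(z)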

  6. Correction of Dual-PRF Doppler Velocity Outliers in the Presence of Aliasing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Altube, Patricia; Bech, Joan; Argemí, Oriol

    In Doppler weather radars, the presence of unfolding errors or outliers is a well-known quality issue for radial velocity fields estimated using the dual-pulse repetition frequency (PRF) technique. Postprocessing methods have been developed to correct dual-PRF outliers, but these need prior application of a dealiasing algorithm for an adequate correction. Our paper presents an alternative procedure based on circular statistics that corrects dual-PRF errors in the presence of extended Nyquist aliasing. The correction potential of the proposed method is quantitatively tested by means of velocity field simulations and is exemplified in the application to real cases, including severe storm events. The comparison with two other existing correction methods indicates an improved performance in the correction of clustered outliers. The technique we propose is well suited for real-time applications requiring high-quality Doppler radar velocity fields, such as wind shear and mesocyclone detection algorithms, or assimilation in numerical weather prediction models.
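
    The appeal of circular statistics here is that velocities can be mapped to angles on the Nyquist circle, where genuine aliasing wraps away and only dual-PRF unfolding errors stand out. The Python sketch below shows this idea along a single ray with nearest-neighbour references; the published estimator uses richer two-dimensional neighbourhoods, so treat this as illustrative only.

      import numpy as np

      def correct_dual_prf_ray(v, v_ny, tol=0.8):
          """Flag and replace dual-PRF outliers along one ray. v: radial
          velocities (m/s); v_ny: extended Nyquist velocity; tol: allowed
          angular deviation (rad) from the neighbours' circular mean."""
          phase = np.pi * np.asarray(v, dtype=float) / v_ny
          out = np.array(v, dtype=float)
          for i in range(1, len(v) - 1):
              s = np.sin([phase[i - 1], phase[i + 1]]).mean()
              c = np.cos([phase[i - 1], phase[i + 1]]).mean()
              ref = np.arctan2(s, c)                          # circular mean
              diff = np.angle(np.exp(1j * (phase[i] - ref)))  # wrapped difference
              if abs(diff) > tol:
                  out[i] = v_ny * ref / np.pi                 # substitute reference
          return out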

  8. Improvement of background solar wind predictions

    NASA Astrophysics Data System (ADS)

    Dálya, Zsuzsanna; Opitz, Andrea

    2016-04-01

    In order to estimate solar wind properties at any heliospheric position, propagation tools use solar measurements as input data. The ballistic method extrapolates in-situ solar wind observations to the target position. This works well for undisturbed solar wind, while solar wind disturbances such as Corotating Interaction Regions (CIRs) and Coronal Mass Ejections (CMEs) need more consideration. We are working on dedicated ICME lists to remove these signatures from the input data in order to improve our prediction accuracy. These ICME lists are created from measurements by several heliospheric spacecraft: ACE, WIND, STEREO, SOHO, MEX and VEX. As a result, we are able to filter out these events from the time series. Our corrected predictions contribute to the investigation of the quiet solar wind and to space weather studies.
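
    The ballistic step itself is a one-liner: assuming the parcel speed stays constant, the travel time is the radial separation divided by the measured speed. The Python sketch below illustrates exactly that assumption; ICME intervals would be removed from the input series before such an extrapolation.

      import numpy as np

      AU_KM = 1.495978707e8   # astronomical unit in km

      def ballistic_arrival(t_obs, v_kms, r_source_au, r_target_au):
          """Arrival time at r_target of solar wind observed at r_source with
          radial speed v_kms, under the constant-speed (ballistic) assumption."""
          dt = (r_target_au - r_source_au) * AU_KM / v_kms   # seconds
          return t_obs + np.timedelta64(int(dt), "s")

      # e.g. plasma seen at Earth (1 AU) at 400 km/s reaching 1.5 AU:
      # ballistic_arrival(np.datetime64("2016-04-01T00:00"), 400.0, 1.0, 1.5)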

  9. Simulating boundary layer transition with low-Reynolds-number k-epsilon turbulence models. I - An evaluation of prediction characteristics. II - An approach to improving the predictions

    NASA Technical Reports Server (NTRS)

    Schmidt, R. C.; Patankar, S. V.

    1991-01-01

    The capability of two k-epsilon low-Reynolds number (LRN) turbulence models, those of Jones and Launder (1972) and Lam and Bremhorst (1981), to predict transition in external boundary-layer flows subject to free-stream turbulence is analyzed. Both models correctly predict the basic qualitative aspects of boundary-layer transition with free stream turbulence, but for calculations started at low values of certain defined Reynolds numbers, the transition is generally predicted at unrealistically early locations. Also, the methods predict transition lengths significantly shorter than those found experimentally. An approach to overcoming these deficiencies without abandoning the basic LRN k-epsilon framework is developed. This approach limits the production term in the turbulent kinetic energy equation and is based on a simple stability criterion. It is correlated to the free-stream turbulence value. The modification is shown to improve the qualitative and quantitative characteristics of the transition predictions.

  10. Combined electroencephalography-functional magnetic resonance imaging and electrical source imaging improves localization of pediatric focal epilepsy.

    PubMed

    Centeno, Maria; Tierney, Tim M; Perani, Suejen; Shamshiri, Elhum A; St Pier, Kelly; Wilkinson, Charlotte; Konn, Daniel; Vulliemoz, Serge; Grouiller, Frédéric; Lemieux, Louis; Pressler, Ronit M; Clark, Christopher A; Cross, J Helen; Carmichael, David W

    2017-08-01

    Surgical treatment in epilepsy is effective if the epileptogenic zone (EZ) can be correctly localized and characterized. Here we use simultaneous electroencephalography-functional magnetic resonance imaging (EEG-fMRI) data to derive EEG-fMRI and electrical source imaging (ESI) maps. Their yield and their individual and combined ability to (1) localize the EZ and (2) predict seizure outcome were then evaluated. Fifty-three children with drug-resistant epilepsy underwent EEG-fMRI. Interictal discharges were mapped using both EEG-fMRI hemodynamic responses and ESI. A single localization was derived from each individual test (EEG-fMRI global maxima [GM]/ESI maximum) and from the combination of both maps (EEG-fMRI/ESI spatial intersection). To determine the localization accuracy and its predictive performance, the individual and combined test localizations were compared to the presumed EZ and to the postsurgical outcome. Fifty-two of 53 patients had significant maps: 47 of 53 for EEG-fMRI, 44 of 53 for ESI, and 34 of 53 for both. The EZ was well characterized in 29 patients; 26 had an EEG-fMRI GM localization that was correct in 11, 22 patients had ESI localization that was correct in 17, and 12 patients had combined EEG-fMRI and ESI that was correct in 11. Seizure outcome following resection was correctly predicted by EEG-fMRI GM in 8 of 20 patients, and by the ESI maximum in 13 of 16. The combined EEG-fMRI/ESI region entirely predicted outcome in 9 of 9 patients, including 3 with no lesion visible on MRI. EEG-fMRI combined with ESI provides a simple unbiased localization that may predict surgery better than each individual test, including in MRI-negative patients. Ann Neurol 2017;82:278-287. © 2017 American Neurological Association.

  11. A simple but fully nonlocal correction to the random phase approximation

    NASA Astrophysics Data System (ADS)

    Ruzsinszky, Adrienn; Perdew, John P.; Csonka, Gábor I.

    2011-03-01

    The random phase approximation (RPA) stands on the top rung of the ladder of ground-state density functional approximations. The simple or direct RPA has been found to predict accurately many isoelectronic energy differences. A nonempirical local or semilocal correction to this direct RPA leaves isoelectronic energy differences almost unchanged, while improving total energies, ionization energies, etc., but fails to correct the RPA underestimation of molecular atomization energies. Direct RPA and its semilocal correction may miss part of the middle-range multicenter nonlocality of the correlation energy in a molecule. Here we propose a fully nonlocal, hybrid-functional-like addition to the semilocal correction. The added full nonlocality is important in molecules, but not in atoms. Under uniform-density scaling, this fully nonlocal correction scales like the second-order-exchange contribution to the correlation energy, an important part of the correction to direct RPA, and like the semilocal correction itself. For the atomization energies of ten molecules, and with the help of one fit parameter, it performs much better than the elaborate second-order screened exchange correction.

  12. MRI-Based Computed Tomography Metal Artifact Correction Method for Improving Proton Range Calculation Accuracy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, Peter C.; Schreibmann, Eduard; Roper, Justin

    2015-03-15

    Purpose: Computed tomography (CT) artifacts can severely degrade dose calculation accuracy in proton therapy. Prompted by the recently increased popularity of magnetic resonance imaging (MRI) in the radiation therapy clinic, we developed an MRI-based CT artifact correction method for improving the accuracy of proton range calculations. Methods and Materials: The proposed method replaces corrupted CT data by mapping CT Hounsfield units (HU number) from a nearby artifact-free slice, using a coregistered MRI. MRI and CT volumetric images were registered with use of 3-dimensional (3D) deformable image registration (DIR). The registration was fine-tuned on a slice-by-slice basis by using 2D DIR. Based on the intensity of paired MRI pixel values and HU from an artifact-free slice, we performed a comprehensive analysis to predict the correct HU for the corrupted region. For a proof-of-concept validation, metal artifacts were simulated on a reference data set. Proton range was calculated using reference, artifactual, and corrected images to quantify the reduction in proton range error. The correction method was applied to 4 unique clinical cases. Results: The correction method resulted in substantial artifact reduction, both quantitatively and qualitatively. On respective simulated brain and head and neck CT images, the mean error was reduced from 495 and 370 HU to 108 and 92 HU after correction. Correspondingly, the absolute mean proton range errors of 2.4 cm and 1.7 cm were reduced to less than 2 mm in both cases. Conclusions: Our MRI-based CT artifact correction method can improve CT image quality and proton range calculation accuracy for patients with severe CT artifacts.
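
    The mapping step can be pictured as learning a lookup table from MRI intensity to HU on an artifact-free slice and applying it to the corrupted slice. The Python sketch below is a deliberately simple binned-regression stand-in for the paper's comprehensive analysis; the bin count and interpolation are assumptions.

      import numpy as np

      def predict_hu(mri_corrupt, mri_clean, ct_clean, n_bins=64):
          """Estimate HU for a corrupted CT slice from its coregistered MRI,
          using the MRI-intensity -> HU relation of an artifact-free slice."""
          bins = np.linspace(mri_clean.min(), mri_clean.max(), n_bins + 1)
          which = np.clip(np.digitize(mri_clean.ravel(), bins) - 1, 0, n_bins - 1)
          ct_flat = ct_clean.ravel()
          lut = np.array([ct_flat[which == b].mean() if np.any(which == b)
                          else np.nan for b in range(n_bins)])
          centers = 0.5 * (bins[:-1] + bins[1:])
          good = ~np.isnan(lut)
          lut = np.interp(centers, centers[good], lut[good])   # fill empty bins
          idx = np.clip(np.digitize(mri_corrupt, bins) - 1, 0, n_bins - 1)
          return lut[idx]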

  13. Using gaze patterns to predict task intent in collaboration.

    PubMed

    Huang, Chien-Ming; Andrist, Sean; Sauppé, Allison; Mutlu, Bilge

    2015-01-01

    In everyday interactions, humans naturally exhibit behavioral cues, such as gaze and head movements, that signal their intentions while interpreting the behavioral cues of others to predict their intentions. Such intention prediction enables each partner to adapt their behaviors to the intent of others, serving a critical role in joint action where parties work together to achieve a common goal. Among behavioral cues, eye gaze is particularly important in understanding a person's attention and intention. In this work, we seek to quantify how gaze patterns may indicate a person's intention. Our investigation was contextualized in a dyadic sandwich-making scenario in which a "worker" prepared a sandwich by adding ingredients requested by a "customer." In this context, we investigated the extent to which the customers' gaze cues serve as predictors of which ingredients they intend to request. Predictive features were derived to represent characteristics of the customers' gaze patterns. We developed a support vector machine-based (SVM-based) model that achieved 76% accuracy in predicting the customers' intended requests based solely on gaze features. Moreover, the predictor made correct predictions approximately 1.8 s before the spoken request from the customer. We further analyzed several episodes of interactions from our data to develop a deeper understanding of the scenarios where our predictor succeeded and failed in making correct predictions. These analyses revealed additional gaze patterns that may be leveraged to improve intention prediction. This work highlights gaze cues as a significant resource for understanding human intentions and informs the design of real-time recognizers of user intention for intelligent systems, such as assistive robots and ubiquitous devices, that may enable more complex capabilities and improved user experience.

  14. An improved dark-object subtraction technique for atmospheric scattering correction of multispectral data

    USGS Publications Warehouse

    Chavez, P.S.

    1988-01-01

    Digital analysis of remotely sensed data has become an important component of many earth-science studies. These data are often processed through a set of preprocessing or "clean-up" routines that includes a correction for atmospheric scattering, often called haze. Various methods to correct or remove the additive haze component have been developed, including the widely used dark-object subtraction technique. A problem with most of these methods is that the haze values for each spectral band are selected independently. This can create problems because atmospheric scattering is highly wavelength-dependent in the visible part of the electromagnetic spectrum and the scattering values are correlated with each other. Therefore, multispectral data such as from the Landsat Thematic Mapper and Multispectral Scanner must be corrected with haze values that are spectral band dependent. An improved dark-object subtraction technique is demonstrated that allows the user to select a relative atmospheric scattering model to predict the haze values for all the spectral bands from a selected starting band haze value. The improved method normalizes the predicted haze values for the different gain and offset parameters used by the imaging system. Examples of haze value differences between the old and improved methods for Thematic Mapper Bands 1, 2, 3, 4, 5, and 7 are 40.0, 13.0, 12.0, 8.0, 5.0, and 2.0 vs. 40.0, 13.2, 8.9, 4.9, 16.7, and 3.3, respectively, using a relative scattering model of a clear atmosphere. In one Landsat multispectral scanner image the haze value differences for Bands 4, 5, 6, and 7 were 30.0, 50.0, 50.0, and 40.0 for the old method vs. 30.0, 34.4, 43.6, and 6.4 for the new method using a relative scattering model of a hazy atmosphere. © 1988.
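
    The band-to-band prediction at the heart of the improved method can be sketched as follows (Python). A relative scattering model of the form haze ~ wavelength^(-n) is anchored at the user-selected starting band, and the predicted haze is normalized for each band's gain and offset; the exponent and the gain/offset convention below are assumptions for illustration.

      import numpy as np

      def predict_haze(start_band, start_haze_dn, wavelengths, gains, offsets, n=2.0):
          """Predict per-band haze DNs from one starting-band haze value,
          assuming DN = gain * radiance + offset and haze ~ wavelength**(-n)
          (n ~ 4 very clear, ~2 clear, ~1 hazy)."""
          w = np.asarray(wavelengths, dtype=float)
          gains = np.asarray(gains, dtype=float)
          offsets = np.asarray(offsets, dtype=float)
          rel = (w / w[start_band]) ** (-n)       # scattering relative to start band
          rad0 = (start_haze_dn - offsets[start_band]) / gains[start_band]
          return gains * (rad0 * rel) + offsets   # back to each band's DN scale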

  15. Protein docking prediction using predicted protein-protein interface.

    PubMed

    Li, Bin; Kihara, Daisuke

    2012-01-10

    Many important cellular processes are carried out by protein complexes. To provide physical pictures of interacting proteins, many computational protein-protein docking prediction methods have been developed in the past. However, it is still difficult to identify the correct docking complex structure within the top ranks among alternative conformations. We present a novel protein docking algorithm that utilizes imperfect protein-protein binding interface prediction to guide protein docking. Since the accuracy of protein binding site prediction varies from case to case, the challenge is to develop a method which does not deteriorate but improves docking results by using a binding site prediction that may not be 100% accurate. The algorithm, named PI-LZerD (using Predicted Interface with Local 3D Zernike descriptor-based Docking algorithm), is based on a pairwise protein docking prediction algorithm, LZerD, which we developed earlier. PI-LZerD starts by performing docking prediction using the provided protein-protein binding interface prediction as constraints, followed by a second round of docking with updated docking interface information to further improve the docking conformation. Benchmark results on bound and unbound cases show that PI-LZerD consistently improves docking prediction accuracy as compared with docking without binding site prediction or with docking that uses the binding site prediction only as post-filtering. We have developed PI-LZerD, a pairwise docking algorithm, which uses imperfect protein-protein binding interface prediction to improve docking accuracy. PI-LZerD consistently showed better prediction accuracy than alternative methods in a series of benchmark experiments, including docking using actual docking interface site predictions as well as unbound docking cases.

  16. HESS Opinions "Should we apply bias correction to global and regional climate model data?"

    NASA Astrophysics Data System (ADS)

    Ehret, U.; Zehe, E.; Wulfmeyer, V.; Warrach-Sagi, K.; Liebert, J.

    2012-04-01

    Despite considerable progress in recent years, the output of both Global and Regional Circulation Models is still afflicted with biases to a degree that precludes its direct use, especially in climate change impact studies. This is well known, and to overcome the problem, bias correction (BC), i.e. the correction of model output towards observations in a post-processing step for its subsequent application in climate change impact studies, has become a standard procedure. In this paper we argue that bias correction, which has a considerable influence on the results of impact studies, is not a valid procedure in the way it is currently used: it impairs the advantages of Circulation Models, which are based on established physical laws, by altering spatiotemporal field consistency and relations among variables, and by violating conservation principles. Bias correction largely neglects feedback mechanisms, and it is unclear whether bias correction methods are time-invariant under climate change conditions. Applying bias correction increases the agreement of Climate Model output with observations in hindcasts and hence narrows the uncertainty range of simulations and predictions without, however, providing a satisfactory physical justification. This is in most cases not transparent to the end user. We argue that this masks rather than reduces uncertainty, which may lead to avoidable misjudgements by end users and decision makers. We present a brief overview of state-of-the-art bias correction methods, discuss the related assumptions and implications, draw conclusions on the validity of bias correction, and propose ways to cope with biased output of Circulation Models in the short term and to reduce the bias in the long term. The most promising strategy for improved future Global and Regional Circulation Model simulations is an increase in model resolution to the convection-permitting scale, in combination with ensemble predictions based on sophisticated approaches for ensemble perturbation. With this article, we advocate communicating openly the entire uncertainty range associated with climate change predictions, and we hope to stimulate a lively discussion on bias correction among the atmospheric and hydrological communities and the end users of climate change impact studies.
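
    For readers unfamiliar with the procedure being critiqued, empirical quantile mapping is one of the standard BC methods the paper has in view (the paper itself surveys several). A minimal Python sketch:

      import numpy as np

      def quantile_map(model_hist, obs_hist, model_future):
          """Empirical quantile mapping: move each future model value to the
          observed value occupying the same quantile in the calibration period.
          Note how this rewrites model output variable by variable, which is
          exactly the loss of physical consistency the paper warns about."""
          q = np.linspace(0.0, 1.0, 101)
          mq = np.quantile(model_hist, q)
          oq = np.quantile(obs_hist, q)
          return np.interp(model_future, mq, oq)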

  17. Macro-microscopic mass formulae and nuclear mass predictions

    NASA Astrophysics Data System (ADS)

    Royer, G.; Guilbaud, M.; Onillon, A.

    2010-12-01

    Different mass formulae derived from the liquid drop model and the pairing and shell energies of the Thomas-Fermi model have been studied and compared. They include or not the diffuseness correction to the Coulomb energy, the charge exchange correction term, the curvature energy, different forms of the Wigner term, and powers of the relative neutron excess I = (N-Z)/A. Their coefficients have been determined by a least-squares fitting procedure to 2027 experimental atomic masses (G. Audi et al. (2003) [1]). The Coulomb diffuseness correction Z^2/A term or the charge exchange correction Z^(4/3)/A^(1/3) term plays the main role in improving the accuracy of the mass formula. The Wigner term and the curvature energy can also be used separately, but their coefficients are very unstable. The different fits lead to a surface energy coefficient of around 17-18 MeV. A large equivalent rms radius (r = 1.22-1.24 fm) or a shorter central radius may be used. An rms deviation of 0.54 MeV can be reached between the experimental and theoretical masses. The remaining differences probably come mainly from the determination of the shell and pairing energies. Mass predictions of selected expressions have been compared to 161 new experimental masses, and the correct agreement allows extrapolations to the masses of 656 selected exotic nuclei.
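
    The fitting procedure itself is ordinary linear least squares once the mass formula is written as a sum of terms with unknown coefficients. The Python sketch below fits only a four-term liquid-drop skeleton to binding energies, omitting the diffuseness, exchange, Wigner, shell, and pairing refinements discussed above; it illustrates the method, not the paper's full formulae.

      import numpy as np

      def fit_liquid_drop(Z, N, b_exp):
          """Least-squares liquid-drop coefficients (a_v, a_s, a_a, a_c) from
          experimental binding energies b_exp (MeV)."""
          Z, N, b_exp = (np.asarray(x, dtype=float) for x in (Z, N, b_exp))
          A = Z + N
          I = (N - Z) / A
          X = np.column_stack([
              A,                            # volume
              -A ** (2.0 / 3.0),            # surface
              -A * I ** 2,                  # volume asymmetry
              -Z ** 2 / A ** (1.0 / 3.0),   # Coulomb
          ])
          coef, *_ = np.linalg.lstsq(X, b_exp, rcond=None)
          return coef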

  18. Improving the spectral measurement accuracy based on temperature distribution and spectra-temperature relationship

    NASA Astrophysics Data System (ADS)

    Li, Zhe; Feng, Jinchao; Liu, Pengyu; Sun, Zhonghua; Li, Gang; Jia, Kebin

    2018-05-01

    Temperature is usually considered a nuisance fluctuation in near-infrared spectral measurement, and chemometric methods have been extensively studied to correct for the effect of temperature variations. However, temperature can instead be treated as a constructive parameter that provides detailed chemical information when systematically changed during the measurement. Our group has researched the relationship between temperature-induced spectral variation (TSVC) and normalized squared temperature. In this study, we focused on the influence of the temperature distribution in the calibration set. A multi-temperature calibration set selection (MTCS) method was proposed to improve prediction accuracy by considering the temperature distribution of the calibration samples. Furthermore, a double-temperature calibration set selection (DTCS) method was proposed based on the MTCS method and the relationship between TSVC and normalized squared temperature. We compared the prediction performance of PLS models based on the random sampling method and the proposed methods. The results from experimental studies showed that prediction performance was improved by using the proposed methods. Therefore, the MTCS and DTCS methods are promising alternatives for improving prediction accuracy in near-infrared spectral measurement.
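
    Below is a minimal sketch of our reading of the MTCS idea, namely selecting calibration samples spread evenly across the measured temperature range before fitting a PLS model; the synthetic spectra, the binning scheme and all names are hypothetical, not the authors' code.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        def select_multi_temperature(temps, n_per_bin, n_bins=5):
            """Pick calibration indices spread evenly over the temperature
            range, rather than by random sampling (the MTCS idea as we read it)."""
            rng = np.random.default_rng(0)
            edges = np.linspace(temps.min(), temps.max(), n_bins + 1)
            idx = []
            for lo, hi in zip(edges[:-1], edges[1:]):
                members = np.where((temps >= lo) & (temps <= hi))[0]
                take = min(n_per_bin, members.size)
                idx.extend(rng.choice(members, take, replace=False))
            return np.array(idx)

        # Hypothetical spectra X (samples x wavelengths), analyte y, temperatures T.
        rng = np.random.default_rng(2)
        T = rng.uniform(20.0, 45.0, 200)
        X = rng.normal(size=(200, 100)) + 0.02 * T[:, None]  # temperature-affected
        y = X[:, :10].sum(axis=1) + rng.normal(0.0, 0.1, 200)

        cal = select_multi_temperature(T, n_per_bin=8)
        model = PLSRegression(n_components=5).fit(X[cal], y[cal])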

  19. Gender and age related predictive value of walk test in heart failure: do anthropometrics matter in clinical practice?

    PubMed

    Frankenstein, L; Remppis, A; Graham, J; Schellberg, D; Sigg, C; Nelles, M; Katus, H A; Zugck, C

    2008-07-21

    The six-minute walk test (6 WT) is a valid and reliable predictor of morbidity and mortality in chronic heart failure (CHF) patients, frequently used as an endpoint or target in clinical trials. As opposed to spiroergometry, improving its prognostic accuracy by correcting for height, weight, age and gender has not yet been attempted comprehensively, despite the known influences of these parameters. We recorded the 6 WT of 1035 CHF patients attending our clinic from 1995 to 2005. The 1-year prognostic value of the 6 WT was calculated, alone and after correction for height, weight, BMI and/or age. Analysis was performed on the entire cohort, on males and females separately, and stratified according to BMI (<25, 25-30 and >30 kg/m(2)). The 6 WT correlated weakly with age (r=-0.32; p<0.0001), height (r=0.2; p<0.0001) and weight (r=0.11; p<0.001), but not with BMI (r=0.01; p=ns). The 6 WT was a strong predictor of 1-year mortality in both genders, both as a single parameter and after correction for age. Parameters derived from correcting the 6 WT for height, weight or BMI did not improve the prognostic value in univariate analysis for either gender. Comparison of the receiver operating characteristic curves showed no significant gain in prognostic accuracy from any derived variable, either for males or females. The six-minute walk test is a valid tool for risk prediction in both male and female CHF patients. In both genders, correcting 6 WT distance for height, weight or BMI alone, or adjusting for age, does not increase the prognostic power of this tool.

  20. TMSEG: Novel prediction of transmembrane helices.

    PubMed

    Bernhofer, Michael; Kloppmann, Edda; Reeb, Jonas; Rost, Burkhard

    2016-11-01

    Transmembrane proteins (TMPs) are important drug targets because they are essential for signaling, regulation, and transport. Despite important breakthroughs, experimental structure determination remains challenging for TMPs. Various methods have bridged the gap by predicting transmembrane helices (TMHs), but room for improvement remains. Here, we present TMSEG, a novel method identifying TMPs and accurately predicting their TMHs and their topology. The method combines machine learning with empirical filters. Testing it on a non-redundant dataset of 41 TMPs and 285 soluble proteins, and applying strict performance measures, TMSEG outperformed the state-of-the-art in our hands. TMSEG correctly distinguished helical TMPs from other proteins with a sensitivity of 98 ± 2% and a false positive rate as low as 3 ± 1%. Individual TMHs were predicted with a precision of 87 ± 3% and recall of 84 ± 3%. Furthermore, in 63 ± 6% of helical TMPs the placement of all TMHs and their inside/outside topology was correctly predicted. There are two main features that distinguish TMSEG from other methods. First, the errors in finding all helical TMPs in an organism are significantly reduced. For example, in the human proteome this leads to 200 and 1600 fewer misclassifications compared to the second and third best methods available, and 4400 fewer mistakes than by a simple hydrophobicity-based method. Second, TMSEG provides an add-on improvement for any existing method to benefit from. Proteins 2016; 84:1706-1716. © 2016 Wiley Periodicals, Inc.

  1. Determination of stores pointing error due to wing flexibility under flight load

    NASA Technical Reports Server (NTRS)

    Lokos, William A.; Bahm, Catherine M.; Heinle, Robert A.

    1995-01-01

    The in-flight elastic wing twist of a fighter-type aircraft was studied to provide an improved on-board, real-time computed prediction of pointing variations at three wing store stations. This is an important capability for correcting sensor pod alignment variation or for establishing the initial conditions of iron bombs or smart weapons prior to release. The original algorithm was based upon coarse measurements. The electro-optical Flight Deflection Measurement System (FDMS) measured the deformed wing shape in flight under maneuver loads to provide a higher-resolution database from which an improved twist prediction algorithm could be developed. The FDMS produced excellent, repeatable data. In addition, a NASTRAN finite-element analysis was performed to provide additional elastic deformation data. The FDMS data combined with the NASTRAN analysis indicated that an improved prediction algorithm could be derived by using a different set of aircraft parameters, namely normal acceleration, stores configuration, Mach number, and gross weight.

  2. Spirometry: predicting risk and outcome.

    PubMed

    Brunelli, Alessandro; Rocco, Gaetano

    2008-02-01

    Predicted postoperative FEV1 (ppoFEV1) is certainly the most widely used parameter in preoperative risk stratification [54] and the measure recommended by the BTS and ACCP functional guidelines as a first step in screening patients for lung resection surgery. Nevertheless, recent evidence has demonstrated that ppoFEV1 is not a reliable predictor of postoperative cardiopulmonary complications in patients with preoperatively impaired pulmonary function. This may be because the resection of a portion of lung in patients with obstructive disease causes only a minimal loss, or even an improvement, in overall respiratory function and exercise tolerance. This lung volume reduction effect takes place very early, from the first postoperative days, balancing whatever negative physiologic effects a thoracotomy and lung resection may entail. In addition to its poor predictive role in COPD patients, ppoFEV1 largely underestimates the actual loss in the very first days after the operation, when most complications develop. Using a parameter that is poorly correlated with pulmonary function at the moment complications occur therefore seems unwarranted. At the very best, ppoFEV1 appears to be a weak surrogate of the immediate postoperative FEV1. The FEV1 measured on the first postoperative day may be 30% less than predicted. Corrective equations have been published to account for this discrepancy, with the aim of improving risk stratification.
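
    The corrective equations mentioned are not reproduced in the abstract, but the classic segment-counting estimate of ppoFEV1 that underlies this kind of risk stratification can be sketched as follows; the 19-functional-segment convention and the worked numbers are standard textbook values, not taken from this review.

        def ppo_fev1(preop_fev1_l, segments_resected, functional_segments=19):
            """Classic segment-counting estimate: ppoFEV1 = preop FEV1 x
            (1 - resected functional segments / total functional segments)."""
            return preop_fev1_l * (1.0 - segments_resected / functional_segments)

        # Example: right upper lobectomy (3 segments), preoperative FEV1 of 2.1 L.
        print(round(ppo_fev1(2.1, 3), 2))  # ~1.77 L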

  3. The Effect Of Different Corrective Feedback Methods on the Outcome and Self Confidence of Young Athletes

    PubMed Central

    Tzetzis, George; Votsis, Evandros; Kourtessis, Thomas

    2008-01-01

    This experiment investigated the effects of three corrective feedback methods, using different combinations of correction or error cues and positive feedback, on learning two badminton skills of different difficulty (forehand clear - low difficulty, backhand clear - high difficulty). Outcome and self-confidence scores were used as dependent variables. The 48 participants were randomly assigned to four groups. Group A received correction cues and positive feedback. Group B received cues on errors of execution. Group C received positive feedback, correction cues and error cues. Group D was the control group. Pre-, post- and retention tests were conducted. A three-way analysis of variance (ANOVA; 4 groups × 2 task difficulties × 3 measures) with repeated measures on the last factor revealed significant interactions for each dependent variable. All corrective feedback groups increased their outcome scores over time for the easy skill, but only groups A and C did so for the difficult skill. Groups A and B had significantly better outcome scores than group C and the control group for the easy skill on the retention test. However, for the difficult skill, group C was better than groups A, B and D. The self-confidence scores of groups A and C improved over time for the easy skill, but not those of groups B and D. Again, for the difficult skill, only group C improved over time. Finally, a regression analysis showed that the improvement in performance predicted a proportion of the improvement in self-confidence for both the easy and the difficult skill. It was concluded that when young athletes are taught skills of different difficulty, different types of instruction might be more appropriate for improving outcome and self-confidence. A more integrated approach to teaching will assist coaches and physical education teachers in being more efficient and effective. Key points: The type of skill is a critical factor in determining the effectiveness of the feedback types. Different instructional methods of corrective feedback can have beneficial effects on the outcome and self-confidence of young athletes. Instructions focusing on correction cues or errors increase performance of easy skills. Positive feedback or correction cues increase self-confidence for easy skills, but only the combination of error and correction cues increases self-confidence and outcome scores for difficult skills. PMID:24149905

  4. Correcting pervasive errors in RNA crystallography through enumerative structure prediction.

    PubMed

    Chou, Fang-Chieh; Sripakdeevong, Parin; Dibrov, Sergey M; Hermann, Thomas; Das, Rhiju

    2013-01-01

    Three-dimensional RNA models fitted into crystallographic density maps exhibit pervasive conformational ambiguities, geometric errors and steric clashes. To address these problems, we present enumerative real-space refinement assisted by electron density under Rosetta (ERRASER), coupled to Python-based hierarchical environment for integrated 'xtallography' (PHENIX) diffraction-based refinement. On 24 data sets, ERRASER automatically corrects the majority of MolProbity-assessed errors, improves the average R(free) factor, resolves functionally important discrepancies in noncanonical structure and refines low-resolution models to better match higher-resolution models.

  5. On the Prediction of Mechanical Behavior of Particulate Composites Using an Improved Mori-Tanaka Method

    DTIC Science & Technology

    1997-01-01

    [The DTIC record text for this report is an OCR fragment of its nomenclature and body. The recoverable content: the nomenclature defines the perturbed strain, the constrained strain, the eigenstrain, the corrected and uncorrected eigenstrains of the phase-r material, and a correction matrix for the phase-r material; the body notes that the constrained strain is related to the eigenstrain through the Eshelby tensor S_ijkl, which is a function of the matrix Poisson ratio and the shape of the inclusion.]
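
    Since the record text is garbled, it may help to sketch the classical (uncorrected) Mori-Tanaka estimate that such a report improves upon; the formulas below are the standard result for spherical inclusions (the Benveniste form), and the material values are purely illustrative.

        def mori_tanaka_spherical(Km, Gm, Ki, Gi, f):
            """Classical Mori-Tanaka effective bulk (K) and shear (G) moduli for
            a matrix (m) containing a volume fraction f of spherical inclusions (i)."""
            K = Km + f * (Ki - Km) / (1.0 + (1.0 - f) * (Ki - Km)
                                      / (Km + 4.0 * Gm / 3.0))
            # Auxiliary shear term built from the matrix moduli.
            gm_star = Gm * (9.0 * Km + 8.0 * Gm) / (6.0 * (Km + 2.0 * Gm))
            G = Gm + f * (Gi - Gm) / (1.0 + (1.0 - f) * (Gi - Gm) / (Gm + gm_star))
            return K, G

        # Illustrative values (GPa): epoxy-like matrix with 30% glass spheres.
        print(mori_tanaka_spherical(Km=4.0, Gm=1.5, Ki=41.0, Gi=30.0, f=0.3))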

  6. Can machine-learning improve cardiovascular risk prediction using routine clinical data?

    PubMed Central

    Kai, Joe; Garibaldi, Jonathan M.; Qureshi, Nadeem

    2017-01-01

    Background: Current approaches to predict cardiovascular risk fail to identify many people who would benefit from preventive treatment, while others receive unnecessary intervention. Machine-learning offers the opportunity to improve accuracy by exploiting complex interactions between risk factors. We assessed whether machine-learning can improve cardiovascular risk prediction. Methods: Prospective cohort study using routine clinical data of 378,256 patients from UK family practices, free from cardiovascular disease at outset. Four machine-learning algorithms (random forest, logistic regression, gradient boosting machines, neural networks) were compared to an established algorithm (American College of Cardiology guidelines) to predict first cardiovascular event over 10 years. Predictive accuracy was assessed by area under the ‘receiver operating curve’ (AUC); and sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) to predict 7.5% cardiovascular risk (the threshold for initiating statins). Findings: 24,970 incident cardiovascular events (6.6%) occurred. Compared to the established risk prediction algorithm (AUC 0.728, 95% CI 0.723–0.735), machine-learning algorithms improved prediction: random forest +1.7% (AUC 0.745, 95% CI 0.739–0.750), logistic regression +3.2% (AUC 0.760, 95% CI 0.755–0.766), gradient boosting +3.3% (AUC 0.761, 95% CI 0.755–0.766), neural networks +3.6% (AUC 0.764, 95% CI 0.759–0.769). The highest-achieving algorithm (neural networks) predicted 4,998/7,404 cases (sensitivity 67.5%, PPV 18.4%) and 53,458/75,585 non-cases (specificity 70.7%, NPV 95.7%), correctly predicting 355 (+7.6%) more patients who developed cardiovascular disease compared to the established algorithm. Conclusions: Machine-learning significantly improves the accuracy of cardiovascular risk prediction, increasing the number of patients identified who could benefit from preventive treatment, while avoiding unnecessary treatment of others. PMID:28376093

  7. Can machine-learning improve cardiovascular risk prediction using routine clinical data?

    PubMed

    Weng, Stephen F; Reps, Jenna; Kai, Joe; Garibaldi, Jonathan M; Qureshi, Nadeem

    2017-01-01

    Current approaches to predict cardiovascular risk fail to identify many people who would benefit from preventive treatment, while others receive unnecessary intervention. Machine-learning offers the opportunity to improve accuracy by exploiting complex interactions between risk factors. We assessed whether machine-learning can improve cardiovascular risk prediction. Prospective cohort study using routine clinical data of 378,256 patients from UK family practices, free from cardiovascular disease at outset. Four machine-learning algorithms (random forest, logistic regression, gradient boosting machines, neural networks) were compared to an established algorithm (American College of Cardiology guidelines) to predict first cardiovascular event over 10 years. Predictive accuracy was assessed by area under the 'receiver operating curve' (AUC); and sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) to predict 7.5% cardiovascular risk (the threshold for initiating statins). 24,970 incident cardiovascular events (6.6%) occurred. Compared to the established risk prediction algorithm (AUC 0.728, 95% CI 0.723-0.735), machine-learning algorithms improved prediction: random forest +1.7% (AUC 0.745, 95% CI 0.739-0.750), logistic regression +3.2% (AUC 0.760, 95% CI 0.755-0.766), gradient boosting +3.3% (AUC 0.761, 95% CI 0.755-0.766), neural networks +3.6% (AUC 0.764, 95% CI 0.759-0.769). The highest-achieving algorithm (neural networks) predicted 4,998/7,404 cases (sensitivity 67.5%, PPV 18.4%) and 53,458/75,585 non-cases (specificity 70.7%, NPV 95.7%), correctly predicting 355 (+7.6%) more patients who developed cardiovascular disease compared to the established algorithm. Machine-learning significantly improves the accuracy of cardiovascular risk prediction, increasing the number of patients identified who could benefit from preventive treatment, while avoiding unnecessary treatment of others.
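
    A minimal sketch of the kind of head-to-head AUC comparison both records describe, using a synthetic stand-in cohort; the data, class balance and hyperparameters are our assumptions, not the study's.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.ensemble import GradientBoostingClassifier
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score
        from sklearn.model_selection import train_test_split

        # Synthetic stand-in for a risk-factor table with a rare outcome
        # (the real study used ~378k patient records and a 6.6% event rate).
        X, y = make_classification(n_samples=20000, n_features=20,
                                   weights=[0.93], random_state=0)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

        for model in (LogisticRegression(max_iter=1000),
                      GradientBoostingClassifier(random_state=0)):
            auc = roc_auc_score(y_te,
                                model.fit(X_tr, y_tr).predict_proba(X_te)[:, 1])
            print(type(model).__name__, round(auc, 3))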

  8. Healthy, wealthy, and wise: retirement planning predicts employee health improvements.

    PubMed

    Gubler, Timothy; Pierce, Lamar

    2014-09-01

    Are poor physical and financial health driven by the same underlying psychological factors? We found that the decision to contribute to a 401(k) retirement plan predicted whether an individual acted to correct poor physical-health indicators revealed during an employer-sponsored health examination. Using this examination as a quasi-exogenous shock to employees' personal-health knowledge, we examined which employees were more likely to improve their health, controlling for differences in initial health, demographics, job type, and income. We found that existing retirement-contribution patterns and future health improvements were highly correlated. Employees who saved for the future by contributing to a 401(k) showed improvements in their abnormal blood-test results and health behaviors approximately 27% more often than noncontributors did. These findings are consistent with an underlying individual time-discounting trait that is both difficult to change and domain interdependent, and that predicts long-term individual behaviors in multiple dimensions. © The Author(s) 2014.

  9. High-temperature fatigue in metals - A brief review of life prediction methods developed at the Lewis Research Center of NASA

    NASA Technical Reports Server (NTRS)

    Halford, G. R.

    1983-01-01

    The presentation focuses primarily on progress made at the NASA Lewis Research Center in understanding the phenomenological processes of high-temperature fatigue of metals, for the purpose of calculating the lives of turbine engine hot section components. Improved understanding resulted in the development of accurate and physically correct life prediction methods such as Strainrange Partitioning for calculating creep-fatigue interactions and the Double Linear Damage Rule for predicting potentially severe interactions between high- and low-cycle fatigue. Examples of other life prediction methods are also discussed. Previously announced in STAR as A83-12159.

  10. Comparative assessment of several post-processing methods for correcting evapotranspiration forecasts derived from TIGGE datasets.

    NASA Astrophysics Data System (ADS)

    Tian, D.; Medina, H.

    2017-12-01

    Post-processing of medium-range reference evapotranspiration (ETo) forecasts based on numerical weather prediction (NWP) models has the potential to improve the quality and utility of these forecasts. This work compares the performance of several post-processing methods for correcting ETo forecasts over the continental U.S. generated from The Observing System Research and Predictability Experiment (THORPEX) Interactive Grand Global Ensemble (TIGGE) database using data from Europe (EC), the United Kingdom (MO), and the United States (NCEP). The post-processing techniques considered are: simple bias correction, the use of multimodels, Ensemble Model Output Statistics (EMOS, Gneiting et al., 2005) and Bayesian Model Averaging (BMA, Raftery et al., 2005). ETo estimates based on quality-controlled U.S. Regional Climate Reference Network measurements, computed with the FAO 56 Penman-Monteith equation, are adopted as the baseline. EMOS and BMA are generally the most efficient post-processing techniques for the ETo forecasts. Nevertheless, simple bias correction of the best model is commonly much more rewarding than using multimodel raw forecasts. Our results demonstrate the potential of different forecasting and post-processing frameworks in operational evapotranspiration and irrigation advisory systems at the national scale.
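
    Of the techniques compared, EMOS admits a compact illustration: fit a Gaussian predictive distribution whose mean and variance are affine in the ensemble mean and variance by minimizing the average CRPS (the closed-form Gaussian CRPS below is the one given by Gneiting et al., 2005); the toy ensemble and all names are our own construction, not the paper's setup.

        import numpy as np
        from scipy.optimize import minimize
        from scipy.stats import norm

        def gaussian_crps(mu, sigma, y):
            """Closed-form CRPS of a normal forecast N(mu, sigma^2) at y."""
            z = (y - mu) / sigma
            return sigma * (z * (2.0 * norm.cdf(z) - 1.0)
                            + 2.0 * norm.pdf(z) - 1.0 / np.sqrt(np.pi))

        def emos_fit(ens_mean, ens_var, obs):
            """Fit mu = a + b*mean, sigma^2 = c + d*var by minimizing mean CRPS."""
            def loss(p):
                a, b, c, d = p
                sigma = np.sqrt(np.maximum(c + d * ens_var, 1e-6))
                return gaussian_crps(a + b * ens_mean, sigma, obs).mean()
            return minimize(loss, x0=[0.0, 1.0, 1.0, 1.0],
                            method="Nelder-Mead").x

        # Toy ensemble of ETo forecasts (mm/day): biased mean, too little spread.
        rng = np.random.default_rng(3)
        truth = rng.normal(5.0, 1.5, 1000)
        ens = truth[:, None] + 0.8 + rng.normal(0.0, 0.7, (1000, 20))
        params = emos_fit(ens.mean(axis=1), ens.var(axis=1), truth)
        print(params)  # the intercept a should absorb the +0.8 ensemble bias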

  11. Using Lidar and Radar measurements to constrain predictions of forest ecosystem structure and function.

    PubMed

    Antonarakis, Alexander S; Saatchi, Sassan S; Chazdon, Robin L; Moorcroft, Paul R

    2011-06-01

    Insights into vegetation and aboveground biomass dynamics within terrestrial ecosystems have come almost exclusively from ground-based forest inventories that are limited in their spatial extent. Lidar and synthetic-aperture Radar are promising remote-sensing-based techniques for obtaining comprehensive measurements of forest structure at regional to global scales. In this study we investigate how Lidar-derived forest heights and Radar-derived aboveground biomass can be used to constrain the dynamics of the ED2 terrestrial biosphere model. Four-year simulations initialized with Lidar and Radar structure variables were compared against simulations initialized from forest-inventory data and output from a long-term potential-vegetation simulation. Both height and biomass initializations from Lidar and Radar measurements significantly improved the representation of forest structure within the model, eliminating the bias of too many large trees that arose in the potential-vegetation-initialized simulation. The Lidar and Radar initializations decreased the proportion of larger trees estimated by the potential vegetation by approximately 20-30%, matching the forest inventory. This resulted in improved predictions of ecosystem-scale carbon fluxes and structural dynamics compared to predictions from the potential-vegetation simulation. The Radar initialization produced biomass values that were 75% closer to the forest inventory, with the Lidar initializations producing canopy height values closest to the forest inventory. Net primary production values for the Radar and Lidar initializations were around 6-8% closer to the forest inventory. Correcting the Lidar and Radar initializations for forest composition resulted in improved biomass and basal-area dynamics as well as leaf-area index. Correcting the Lidar and Radar initializations for forest composition and fine-scale structure by combining the remote-sensing measurements with ground-based inventory data further improved predictions, suggesting that further improvements of structural and carbon-flux metrics will also depend on obtaining reliable estimates of forest composition and an accurate representation of the fine-scale vertical and horizontal structure of plant canopies.

  12. Study on model current predictive control method of PV grid- connected inverters systems with voltage sag

    NASA Astrophysics Data System (ADS)

    Jin, N.; Yang, F.; Shang, S. Y.; Tao, T.; Liu, J. S.

    2016-08-01

    Given the limitations of the low voltage ride through (LVRT) technology of traditional photovoltaic inverters, this paper proposes an LVRT control method based on model current predictive control (MCPC). This method can effectively improve the photovoltaic inverter's output characteristics and response speed. In the MCPC method designed for the photovoltaic grid-connected inverter, the sum of the absolute values of the errors between the predicted and reference currents is adopted as the cost function, and at each step the optimal space voltage vector is selected by minimizing this cost. The photovoltaic inverter automatically switches between two control modes, prioritizing active or reactive power according to the operating state, which effectively improves its LVRT capability. Simulation and experimental results show that the proposed method is correct and effective.
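
    A minimal sketch of one step of finite-set model current predictive control with the cost function the abstract names (the sum of absolute current errors); the RL-filter model, the forward-Euler discretization and all parameter values are illustrative assumptions, not the paper's design.

        import numpy as np

        # One control step for a two-level grid-connected inverter with an RL
        # filter, in the stationary alpha-beta frame (complex numbers: real =
        # alpha, imag = beta). Illustrative parameters, not from the paper.
        Vdc, R, L, Ts = 700.0, 0.1, 5e-3, 50e-6

        # The 8 space voltage vectors of a two-level inverter (two are zero).
        vectors = [2.0 / 3.0 * Vdc * np.exp(1j * k * np.pi / 3.0)
                   for k in range(6)] + [0j, 0j]

        def best_vector(i_now, i_ref, e_grid):
            """Predict i(k+1) for each candidate vector with a forward-Euler RL
            model and pick the one minimizing |err_alpha| + |err_beta|."""
            best, best_cost = None, np.inf
            for v in vectors:
                i_next = i_now + Ts / L * (v - e_grid - R * i_now)
                err = i_ref - i_next
                cost = abs(err.real) + abs(err.imag)  # sum of absolute errors
                if cost < best_cost:
                    best, best_cost = v, cost
            return best

        print(best_vector(i_now=10 + 2j, i_ref=12 + 0j, e_grid=325 + 0j))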

  13. Plateletpheresis efficiency and mathematical correction of software-derived platelet yield prediction: A linear regression and ROC modeling approach.

    PubMed

    Jaime-Pérez, José Carlos; Jiménez-Castillo, Raúl Alberto; Vázquez-Hernández, Karina Elizabeth; Salazar-Riojas, Rosario; Méndez-Ramírez, Nereida; Gómez-Almaguer, David

    2017-10-01

    Advances in automated cell separators have improved the efficiency of plateletpheresis and the possibility of obtaining double products (DP). We assessed the accuracy of the cell processor's predicted platelet (PLT) yields with the goal of better predicting DP collections. This retrospective proof-of-concept study included 302 plateletpheresis procedures performed on a Trima Accel v6.0 at the apheresis unit of a hematology department. Donor variables, software-predicted yield and actual PLT yield were statistically evaluated. The software prediction was optimized by linear regression analysis, and its optimal cut-off for obtaining a DP was assessed by receiver operating characteristic (ROC) curve modeling. Three hundred and two plateletpheresis procedures were performed; on 271 (89.7%) occasions the donors were men, and on 31 (10.3%) women. Pre-donation PLT count had the best direct correlation with actual PLT yield (r = 0.486, P < .001). Mean software-derived values differed significantly from actual PLT yields, 4.72 × 10^11 vs. 6.12 × 10^11, respectively (P < .001). The following equation was developed to adjust these values: actual PLT yield = 0.221 + (1.254 × theoretical platelet yield). The ROC curve model showed an optimal apheresis device software prediction cut-off of 4.65 × 10^11 to obtain a DP, with a sensitivity of 82.2%, specificity of 93.3%, and an area under the curve (AUC) of 0.909. The Trima Accel v6.0 software consistently underestimated PLT yields. A simple correction derived from linear regression analysis accurately corrected this underestimation, and ROC analysis identified a precise cut-off to reliably predict a DP. © 2016 Wiley Periodicals, Inc.
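
    Applying the published correction and cut-off is a one-liner; the sketch below simply encodes the regression equation and the ROC-derived threshold reported in the abstract (yields in units of 10^11 platelets).

        def corrected_yield(software_pred_e11):
            """Linear correction from the study: actual = 0.221 + 1.254 x
            predicted (platelet yields in units of 10^11)."""
            return 0.221 + 1.254 * software_pred_e11

        def double_product_expected(software_pred_e11, cutoff=4.65):
            """ROC-derived software cut-off of 4.65 x 10^11 for a double product."""
            return software_pred_e11 >= cutoff

        # The mean software prediction of 4.72 corrects to ~6.14, close to the
        # reported mean actual yield of 6.12 x 10^11.
        print(corrected_yield(4.72), double_product_expected(4.72))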

  14. A Multidimensional B-Spline Correction for Accurate Modeling Sugar Puckering in QM/MM Simulations.

    PubMed

    Huang, Ming; Dissanayake, Thakshila; Kuechler, Erich; Radak, Brian K; Lee, Tai-Sung; Giese, Timothy J; York, Darrin M

    2017-09-12

    The computational efficiency of approximate quantum mechanical methods allows their use for the construction of multidimensional reaction free energy profiles. It has recently been demonstrated that quantum models based on the neglect of diatomic differential overlap (NDDO) approximation have difficulty modeling deoxyribose and ribose sugar ring puckers, which limits their predictive value in the study of RNA and DNA systems. A method was introduced in our previous work to improve the description of the sugar puckering conformational landscape that uses a multidimensional B-spline correction map (BMAP correction) for systems involving intrinsically coupled torsion angles. This method greatly improved the adiabatic potential energy surface profiles of DNA and RNA sugar rings relative to high-level ab initio methods, even for highly problematic NDDO-based models. In the present work, a BMAP correction is developed, implemented, and tested in molecular dynamics simulations using the AM1/d-PhoT semiempirical Hamiltonian for biological phosphoryl transfer reactions. Results are presented for gas-phase adiabatic potential energy surfaces of RNA transesterification model reactions and condensed-phase QM/MM free energy surfaces for nonenzymatic and RNase A-catalyzed transesterification reactions. The results show that the BMAP correction is stable, efficient, and leads to improvement in both the potential energy and free energy profiles for the reactions studied, as compared with ab initio and experimental reference data. Exploration of the effect of the size of the quantum mechanical region indicates that the best agreement with experimental reaction barriers occurs when the full CpA dinucleotide substrate is treated quantum mechanically with the sugar pucker correction.
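
    A minimal sketch of the general idea behind a B-spline correction map: fit a 2D spline to high-level-minus-semiempirical energy differences over two coupled torsion angles, then add the interpolated correction to the cheap model's energy. The grid, the toy energy surface and the function names are hypothetical, not the published BMAP implementation.

        import numpy as np
        from scipy.interpolate import RectBivariateSpline

        theta1 = np.linspace(-180.0, 180.0, 19)  # torsion 1 grid (degrees)
        theta2 = np.linspace(-180.0, 180.0, 19)  # torsion 2 grid (degrees)
        T1, T2 = np.meshgrid(theta1, theta2, indexing="ij")
        # Toy stand-in for (high-level minus semiempirical) energies on the grid.
        delta_E = np.sin(np.radians(T1)) * np.cos(np.radians(T2))

        bmap = RectBivariateSpline(theta1, theta2, delta_E, kx=3, ky=3)

        def corrected_energy(e_semiempirical, t1, t2):
            """Semiempirical energy plus the spline-interpolated correction."""
            return e_semiempirical + bmap(t1, t2, grid=False)

        print(corrected_energy(-10.0, 37.5, -112.0))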

  15. Improved prediction of biochemical recurrence after radical prostatectomy by genetic polymorphisms.

    PubMed

    Morote, Juan; Del Amo, Jokin; Borque, Angel; Ars, Elisabet; Hernández, Carlos; Herranz, Felipe; Arruza, Antonio; Llarena, Roberto; Planas, Jacques; Viso, María J; Palou, Joan; Raventós, Carles X; Tejedor, Diego; Artieda, Marta; Simón, Laureano; Martínez, Antonio; Rioja, Luis A

    2010-08-01

    Single nucleotide polymorphisms are inherited genetic variations that can predispose or protect individuals against clinical events. We hypothesized that single nucleotide polymorphism profiling may improve the prediction of biochemical recurrence after radical prostatectomy. We performed a retrospective, multi-institutional study of 703 patients treated with radical prostatectomy for clinically localized prostate cancer who had at least 5 years of followup after surgery. All patients were genotyped for 83 prostate cancer related single nucleotide polymorphisms using a low density oligonucleotide microarray. Baseline clinicopathological variables and single nucleotide polymorphisms were analyzed to predict biochemical recurrence within 5 years using stepwise logistic regression. Discrimination was measured by ROC curve AUC, specificity, sensitivity, predictive values, net reclassification improvement and integrated discrimination index. The overall biochemical recurrence rate was 35%. The model with the best fit combined 8 covariates, including the 5 clinicopathological variables prostate specific antigen, Gleason score, pathological stage, lymph node involvement and margin status, and 3 single nucleotide polymorphisms at the KLK2, SULT1A1 and TLR4 genes. Model predictive power was defined by 80% positive predictive value, 74% negative predictive value and an AUC of 0.78. The model based on clinicopathological variables plus single nucleotide polymorphisms showed significant improvement over the model without single nucleotide polymorphisms, as indicated by 23.3% net reclassification improvement (p = 0.003), integrated discrimination index (p <0.001) and likelihood ratio test (p <0.001). Internal validation proved model robustness (bootstrap corrected AUC 0.78, range 0.74 to 0.82). The calibration plot showed close agreement between biochemical recurrence observed and predicted probabilities. Predicting biochemical recurrence after radical prostatectomy based on clinicopathological data can be significantly improved by including patient genetic information. Copyright (c) 2010 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.

  16. Results from the centers for disease control and prevention's predict the 2013-2014 Influenza Season Challenge.

    PubMed

    Biggerstaff, Matthew; Alper, David; Dredze, Mark; Fox, Spencer; Fung, Isaac Chun-Hai; Hickmann, Kyle S; Lewis, Bryan; Rosenfeld, Roni; Shaman, Jeffrey; Tsou, Ming-Hsiang; Velardi, Paola; Vespignani, Alessandro; Finelli, Lyn

    2016-07-22

    Early insights into the timing of the start, peak, and intensity of the influenza season could be useful in planning influenza prevention and control activities. To encourage development and innovation in influenza forecasting, the Centers for Disease Control and Prevention (CDC) organized a challenge to predict the 2013-14 United States influenza season. Challenge contestants were asked to forecast the start, peak, and intensity of the 2013-2014 influenza season at the national level and at any or all Health and Human Services (HHS) region level(s). The challenge ran from December 1, 2013, to March 27, 2014; contestants were required to submit 9 biweekly forecasts at the national level to be eligible. The selection of the winner was based on expert evaluation of the methodology used to make the prediction and on the accuracy of the prediction as judged against the U.S. Outpatient Influenza-like Illness Surveillance Network (ILINet). Nine teams submitted 13 forecasts for all required milestones. The first forecast was due on December 2, 2013; 3 of the 13 forecasts received correctly predicted the start of the influenza season within one week, 1 of 13 predicted the peak within 1 week, 3 of 13 predicted the peak ILINet percentage within 1%, and 4 of 13 predicted the season duration within 1 week. For the prediction due on December 19, 2013, the number of forecasts that correctly predicted the peak week increased to 2 of 13, the peak percentage to 6 of 13, and the duration of the season to 6 of 13. As the season progressed, the forecasts became more stable and closer to the season milestones. Forecasting has become technically feasible, but further efforts are needed to improve forecast accuracy so that policy makers can reliably use these predictions. CDC and challenge contestants plan to build upon the methods developed during this contest to improve the accuracy of influenza forecasts.

  17. Oxygen uptake on-kinetics during six-minute walk test predicts short-term outcomes after off-pump coronary artery bypass surgery.

    PubMed

    Rocco, Isadora Salvador; Viceconte, Marcela; Pauletti, Hayanne Osiro; Matos-Garcia, Bruna Caroline; Marcondi, Natasha Oliveira; Bublitz, Caroline; Bolzan, Douglas William; Moreira, Rita Simone Lopes; Reis, Michel Silva; Hossne, Nelson Américo; Gomes, Walter José; Arena, Ross; Guizilini, Solange

    2017-12-26

    We aimed to investigate the ability of oxygen uptake kinetics to predict short-term outcomes after off-pump coronary artery bypass grafting. Fifty-two patients aged 60.9 ± 7.8 years awaiting off-pump coronary artery bypass surgery were evaluated. The 6-min walk test was performed pre-operatively while the patient simultaneously used a portable cardiopulmonary testing device. The transition of oxygen uptake from rest to exercise was recorded, and oxygen uptake kinetics were calculated by fitting a monoexponential regression model. Oxygen uptake at steady state, the time constant, and the mean response time corrected by work rate were analysed. Short-term clinical outcomes were evaluated during the early post-operative period of off-pump coronary artery bypass surgery. Multivariate analysis showed body mass index, surgery time, and mean response time corrected by work rate to be independent predictors of short-term outcomes. The optimal cut-off of mean response time corrected by work rate to estimate short-term clinical outcomes was 1.51 × 10^-3 min^2/ml. Patients with a slower mean response time corrected by work rate demonstrated higher rates of hypertension, diabetes, EuroSCORE II, left ventricular dysfunction, and impaired 6-min walk test parameters. A per cent-predicted distance threshold of 66% pre-operatively was associated with delayed oxygen uptake kinetics. Pre-operative oxygen uptake kinetics during the 6-min walk test predict short-term clinical outcomes after off-pump coronary artery bypass surgery. From a clinically applicable perspective, a threshold of 66% of pre-operative predicted 6-min walk test distance indicated slower kinetics, which leads to longer intensive care unit and post-surgery hospital lengths of stay. Implications for rehabilitation: Coronary artery bypass grafting is a treatment aimed at improving life expectancy and preventing disability due to disease progression. The use of a pre-operative submaximal functional capacity test enabled the identification of patients at high risk of complications; patients with delayed oxygen uptake kinetics exhibited worse short-term outcomes. Our findings suggest the importance of pre-operative rehabilitation in order to "pre-habilitate" patients for the surgical procedure. Faster oxygen uptake on-kinetics could be achieved by improving the oxidative capacity of muscles and cardiovascular conditioning through rehabilitation, yielding better results following cardiac surgery.
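
    A minimal sketch of fitting the monoexponential on-kinetics model and deriving a work-rate-normalized mean response time; the breath-by-breath data are synthetic, and the normalization shown is only our reading of the reported units (min^2/ml), not the authors' exact protocol.

        import numpy as np
        from scipy.optimize import curve_fit

        def monoexp(t, baseline, amplitude, tau):
            """Monoexponential on-kinetics: VO2 rises from baseline toward
            baseline + amplitude with time constant tau (seconds)."""
            return baseline + amplitude * (1.0 - np.exp(-t / tau))

        # Hypothetical breath-by-breath VO2 (ml/min) over the first 3 minutes.
        rng = np.random.default_rng(4)
        t = np.arange(0.0, 180.0, 5.0)
        vo2 = monoexp(t, 300.0, 700.0, 35.0) + rng.normal(0.0, 20.0, t.size)

        (baseline, amplitude, tau), _ = curve_fit(monoexp, t, vo2,
                                                  p0=[250.0, 500.0, 30.0])
        mrt_min = tau / 60.0          # mean response time (min); no delay term here
        mrt_wr = mrt_min / amplitude  # normalized by the VO2 response: min^2/ml
        print(round(mrt_wr * 1e3, 2), "x 10^-3 min^2/ml")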

  18. Astigmatism Correction With Toric Intraocular Lenses in Descemet Membrane Endothelial Keratoplasty Triple Procedures.

    PubMed

    Yokogawa, Hideaki; Sanchez, P James; Mayko, Zachary M; Straiko, Michael D; Terry, Mark A

    2017-03-01

    To report the clinical efficacy of astigmatism correction with toric intraocular lenses (IOLs) in patients undergoing the Descemet membrane endothelial keratoplasty (DMEK) triple procedure and to evaluate the accuracy of the correction. Fifteen eyes of 10 patients who received cataract extraction, toric IOL placement, and DMEK surgery for Fuchs corneal dystrophy and cataracts were evaluated. The cylinder power of the toric IOLs was determined by an online toric calculator with keratometry measurements obtained using Scheimpflug corneal imaging. Prediction errors were assessed as the difference vector between the anticipated and the postoperative residual astigmatism. At 10.1 ± 4.9 months postoperatively, 8/13 (61.5%) of eyes achieved uncorrected distance visual acuity better than 20/40. Mean best spectacle-corrected distance visual acuity (logMAR) improved from 0.21 ± 0.15 preoperatively to 0.08 ± 0.12 postoperatively (P < 0.01). The magnitude of refractive astigmatism also decreased significantly, from 2.23 ± 1.10 D (range 0.75-4.25 D) preoperatively to 0.87 ± 0.75 D (range 0.00-3.00 D) postoperatively (P < 0.01). In 1 eye with rotational misalignment of 43 degrees, we found no improvement of astigmatism. The prediction error of astigmatism at the corneal plane was 0.77 ± 0.54 D (range 0.10-1.77 D). Four eyes with preoperative "with-the-rule" corneal astigmatism had postoperative "against-the-rule" refractive astigmatism. For patients with Fuchs corneal dystrophy and cataracts, the use of toric IOLs might be a valuable option in triple DMEK surgery. Additionally, care should be taken to prevent excessive IOL rotation.

  19. High-Precision Differential Predictions for Top-Quark Pairs at the LHC

    NASA Astrophysics Data System (ADS)

    Czakon, Michal; Heymes, David; Mitov, Alexander

    2016-02-01

    We present the first complete next-to-next-to-leading order (NNLO) QCD predictions for differential distributions in the top-quark pair production process at the LHC. Our results are derived from a fully differential partonic Monte Carlo calculation with stable top quarks which involves no approximations beyond the fixed-order truncation of the perturbation series. The NNLO corrections improve the agreement between existing LHC measurements [V. Khachatryan et al. (CMS Collaboration), Eur. Phys. J. C 75, 542 (2015)] and standard model predictions for the top-quark transverse momentum distribution, thus helping alleviate one long-standing discrepancy. The shape of the top-quark pair invariant mass distribution turns out to be stable with respect to radiative corrections beyond NLO which increases the value of this observable as a place to search for physics beyond the standard model. The results presented here provide essential input for parton distribution function fits, implementation of higher-order effects in Monte Carlo generators, as well as top-quark mass and strong coupling determination.

  20. High-Precision Differential Predictions for Top-Quark Pairs at the LHC.

    PubMed

    Czakon, Michal; Heymes, David; Mitov, Alexander

    2016-02-26

    We present the first complete next-to-next-to-leading order (NNLO) QCD predictions for differential distributions in the top-quark pair production process at the LHC. Our results are derived from a fully differential partonic Monte Carlo calculation with stable top quarks which involves no approximations beyond the fixed-order truncation of the perturbation series. The NNLO corrections improve the agreement between existing LHC measurements [V. Khachatryan et al. (CMS Collaboration), Eur. Phys. J. C 75, 542 (2015)] and standard model predictions for the top-quark transverse momentum distribution, thus helping alleviate one long-standing discrepancy. The shape of the top-quark pair invariant mass distribution turns out to be stable with respect to radiative corrections beyond NLO which increases the value of this observable as a place to search for physics beyond the standard model. The results presented here provide essential input for parton distribution function fits, implementation of higher-order effects in Monte Carlo generators, as well as top-quark mass and strong coupling determination.

  1. The effect of capturing the correct turbulence dissipation rate in BHR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schwarzkopf, John Dennis; Ristorcelli, Raymond

    In this manuscript, we discuss the shortcoming of a quasi-equilibrium assumption made in the BHR closure model. Turbulence closure models generally assume fully developed turbulence, which is not applicable to 1) non-equilibrium turbulence (e.g. a change in the mean pressure gradient) or 2) laminar-turbulent transition flows. Based on DNS data, we show that the current BHR dissipation equation [modeled based on the fully developed turbulence phenomenology] does not capture important features of non-equilibrium flows. To demonstrate our thesis, we use the BHR equations to predict a non-equilibrium flow both with the BHR dissipation and with the dissipation from DNS. We find that the prediction can be substantially improved, both qualitatively and quantitatively, with the correct dissipation rate. We conclude that a new set of non-equilibrium phenomenological assumptions must be used to develop a new model equation for the dissipation, in order to accurately predict the turbulence time scale used by other models.

  2. Improved model quality assessment using ProQ2.

    PubMed

    Ray, Arjun; Lindahl, Erik; Wallner, Björn

    2012-09-10

    Employing methods to assess the quality of modeled protein structures is now standard practice in bioinformatics. In a broad sense, the techniques can be divided into methods relying on consensus prediction on the one hand, and single-model methods on the other. Consensus methods frequently perform very well when there is a clear consensus, but this is not always the case. In particular, they frequently fail to select the best possible model in the hard cases (lacking consensus) or in the easy cases where models are very similar. In contrast, single-model methods do not suffer from these drawbacks and could potentially be applied to any protein of interest to assess quality or as a scoring function for sampling-based refinement. Here, we present a new single-model method, ProQ2, based on ideas from its predecessor, ProQ. ProQ2 is a model quality assessment algorithm that uses support vector machines to predict the local as well as global quality of protein models. Improved performance is obtained by combining previously used features with updated structural and predicted features. The most important contribution can be attributed to the use of profile weighting of the residue-specific features and the use of features averaged over the whole model, even though the prediction is still local. ProQ2 is significantly better than its predecessors at detecting high-quality models, improving the sum of Z-scores for the selected first-ranked models by 20% and 32% compared to the second-best single-model method in CASP8 and CASP9, respectively. The absolute quality assessment of the models at both the local and global level is also improved. The Pearson correlation between the correct and predicted local scores improved from 0.59 to 0.70 on CASP8 and from 0.62 to 0.68 on CASP9; for the global score against the correct GDT_TS, it improved from 0.75 to 0.80 and from 0.77 to 0.80, again compared to the second-best single-model methods in CASP8 and CASP9, respectively. ProQ2 is available at http://proq2.wallnerlab.org.

  3. An Improved Method of AGM for High Precision Geolocation of SAR Images

    NASA Astrophysics Data System (ADS)

    Zhou, G.; He, C.; Yue, T.; Huang, W.; Huang, Y.; Li, X.; Chen, Y.

    2018-05-01

    In order to take full advantage of SAR images, it is necessary to obtain high-precision geolocation for them. Precise geolocation is important during the geometric correction process, both to ensure the accuracy of the correction and to extract effective mapping information from the images. This paper presents an improved analytical geolocation method (IAGM) that determines the high-precision geolocation of each pixel in a digital SAR image. The method builds on the analytical geolocation method (AGM) proposed by X. K. Yuan for solving the Range-Doppler (RD) model. Tests were conducted using a RADARSAT-2 SAR image. Comparing the predicted feature geolocations with the positions determined from a high-precision orthophoto, the results indicate that an accuracy of 50 m is attainable with this method. Error sources are analyzed, and some recommendations for improving image location accuracy in future spaceborne SARs are given.

  4. Continued Research into Characterizing the Preturbulence Environment for Sensor Development, New Hazard Algorithms and Experimental Flight Planning

    NASA Technical Reports Server (NTRS)

    Kaplan, Michael L.; Lin, Yuh-Lang

    2005-01-01

    The purpose of the research was to develop and test improved hazard algorithms that could support the development of sensors better able to anticipate potentially severe atmospheric turbulence, which affects aircraft safety. The research focused on employing numerical simulation models to develop improved algorithms for the prediction of aviation turbulence. This involved producing both research simulations and real-time simulations of environments predisposed to moderate and severe aviation turbulence. The research resulted in the following fundamental advancements toward the aforementioned goal: 1) very high resolution simulations of turbulent environments indicated how predictive hazard indices could be improved, resulting in a candidate hazard index showing potential improvement over existing operational indices; 2) a real-time turbulence hazard numerical modeling system was improved by correcting deficiencies in its simulation of moist convection; and 3) the same real-time predictive system was tested by running the code twice daily, with the hazard prediction indices updated and improved. Additionally, a simple validation study was undertaken to determine how well a real-time hazard predictive index performed when compared to commercial pilot observations of aviation turbulence. Simple statistical analyses were performed in this validation study, indicating potential skill in employing the hazard prediction index to predict regions of varying intensities of aviation turbulence. Data sets from a research numerical model were provided to NASA for use in a large eddy simulation numerical model. A NASA contractor report and several refereed journal articles were prepared and submitted for publication during the course of this research.

  5. NCV Flow Diagnostic Test Results

    NASA Technical Reports Server (NTRS)

    Cappuccio, Mina

    1999-01-01

    There were two objectives for this test. The first was to assess the reasons why there is approximately a 1.5 drag count (cts) discrepancy between the measured and computed drag improvement of the Non-linear Cruise Validation (NCV) over the Technology Concept Airplane (TCA) wing body (WB) configurations. The Navier-Stokes (N-S) pre-test predictions from Boeing Commercial Airplane Group (BCAG) show 4.5 drag cts of improvement for NCV over TCA at a lift coefficient (CL) of 0.1 at Mach 2.4. The pre-test predictions from Boeing Phantom Works - Long Beach (BPW-LB) show 3.75 drag cts of improvement. BCAG used OVERFLOW and BPW-LB used CFL3D. The first test entry to validate the improvement was held at the NASA Langley Research Center (LaRC) UPWT, test number 1687. The experimental results showed that the drag improvement was only 2.6 cts, not accounting for laminar run and trip drag. This is approximately 1.5 cts less than predicted computationally. In addition to the low Reynolds number (RN) test, there was a high RN test of NCV and TCA in the Boeing Supersonic Wind Tunnel (BSWT). BSWT test 647 showed that the drag improvement of NCV over TCA was also 2.6 cts, but this did account for laminar run and trip drag. Every effort needed to be made to assess whether the improvement measured in LaRC UPWT and BSWT was correct. The second objective, once the first was met, was to assess the performance increment of NCV over TCA accounting for the associated laminar run and trip drag corrections in LaRC UPWT. We know that the configurations tested have laminar flow on portions of the wing and have trip drag due to the mechanisms used to force the flow to go from laminar to turbulent aft of the transition location.

  6. Combining Statistics and Physics to Improve Climate Downscaling

    NASA Astrophysics Data System (ADS)

    Gutmann, E. D.; Eidhammer, T.; Arnold, J.; Nowak, K.; Clark, M. P.

    2017-12-01

    Getting useful information from climate models is an ongoing problem that has plagued climate science and hydrologic prediction for decades. While it is possible to develop statistical corrections for climate models that mimic current climate almost perfectly, this does not necessarily guarantee that future changes are portrayed correctly. In contrast, convection permitting regional climate models (RCMs) have begun to provide an excellent representation of the regional climate system purely from first principles, providing greater confidence in their change signal. However, the computational cost of such RCMs prohibits the generation of ensembles of simulations or long time periods, thus limiting their applicability for hydrologic applications. Here we discuss a new approach combining statistical corrections with physical relationships for a modest computational cost. We have developed the Intermediate Complexity Atmospheric Research model (ICAR) to provide a climate and weather downscaling option that is based primarily on physics for a fraction of the computational requirements of a traditional regional climate model. ICAR also enables the incorporation of statistical adjustments directly within the model. We demonstrate that applying even simple corrections to precipitation while the model is running can improve the simulation of land atmosphere feedbacks in ICAR. For example, by incorporating statistical corrections earlier in the modeling chain, we permit the model physics to better represent the effect of mountain snowpack on air temperature changes.

  7. Evaluation of antibiotic resistance analysis and ribotyping for identification of faecal pollution sources in an urban watershed.

    PubMed

    Moore, D F; Harwood, V J; Ferguson, D M; Lukasik, J; Hannah, P; Getrich, M; Brownell, M

    2005-01-01

    The accuracy of ribotyping and antibiotic resistance analysis (ARA) for predicting the sources of faecal bacterial pollution in an urban southern California watershed was determined using blinded proficiency samples. Antibiotic resistance patterns and HindIII ribotypes of Escherichia coli (n = 997), and antibiotic resistance patterns of Enterococcus spp. (n = 3657), were used to construct libraries from sewage samples and from faeces of seagulls, dogs, cats, horses and humans within the watershed. The three libraries were analysed to determine the accuracy of host source prediction. The internal accuracy of the libraries (average rate of correct classification, ARCC) with six source categories was 44% for E. coli ARA, 69% for E. coli ribotyping and 48% for Enterococcus ARA. Each library's predictive ability towards isolates that were not part of the library was determined using a blinded proficiency panel of 97 E. coli and 99 Enterococcus isolates. Twenty-eight per cent (by ARA) and 27% (by ribotyping) of the E. coli proficiency isolates were assigned to the correct source category. Sixteen per cent were assigned to the same source category by both methods, and 6% were assigned to the correct category by both methods. The addition of 2480 E. coli isolates to the ARA library did not improve the ARCC or proficiency accuracy. In contrast, 45% of Enterococcus proficiency isolates were correctly identified by ARA. None of the methods performed well enough on the proficiency panel to be judged ready for application to environmental samples. Most published microbial source tracking (MST) studies have demonstrated library accuracy solely by the internal ARCC measurement. The low rates of correct classification for E. coli proficiency isolates compared with the ARCCs of the libraries indicate that testing bacteria from samples that are not represented in the library, such as blinded proficiency samples, is necessary to accurately measure predictive ability. The library-based MST methods used in this study may not be suited for determining the source(s) of faecal pollution in large, urban watersheds.
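
    The ARCC statistic used throughout this record is simple to compute from a library's confusion matrix, as in this sketch with a toy three-source library (all counts invented).

        import numpy as np

        def arcc(confusion):
            """Average rate of correct classification: the mean, over source
            categories, of the fraction of each category's isolates that are
            assigned back to it (diagonal over row sums)."""
            confusion = np.asarray(confusion, dtype=float)
            return np.mean(np.diag(confusion) / confusion.sum(axis=1))

        # Toy 3-source library (rows = true source, columns = predicted source).
        print(round(arcc([[40, 5, 5],
                          [10, 30, 10],
                          [8, 12, 30]]), 2))  # 0.67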

  8. The effects of sampling bias and model complexity on the predictive performance of MaxEnt species distribution models.

    PubMed

    Syfert, Mindy M; Smith, Matthew J; Coomes, David A

    2013-01-01

    Species distribution models (SDMs) trained on presence-only data are frequently used in ecological research and conservation planning. However, users of SDM software are faced with a variety of options, and it is not always obvious how selecting one option over another will affect model performance. Working with MaxEnt software and with tree fern presence data from New Zealand, we assessed whether (a) choosing to correct for geographical sampling bias and (b) using complex environmental response curves have strong effects on goodness of fit. SDMs were trained on tree fern data, obtained from an online biodiversity data portal, with two sources that differed in size and geographical sampling bias: a small, widely distributed set of herbarium specimens and a large, spatially clustered set of ecological survey records. We attempted to correct for geographical sampling bias by incorporating sampling bias grids in the SDMs, created from all georeferenced vascular plants in the datasets, and explored model complexity issues by fitting a wide variety of environmental response curves (known as "feature types" in MaxEnt). In each case, goodness of fit was assessed by comparing predicted range maps with tree fern presences and absences, using an independent national dataset to validate the SDMs. We found that correcting for geographical sampling bias led to major improvements in goodness of fit, but did not entirely resolve the problem: predictions made with clustered ecological data were inferior to those made with the herbarium dataset, even after sampling bias correction. We also found that the choice of feature type had negligible effects on predictive performance, indicating that simple feature types may be sufficient once sampling bias is accounted for. Our study emphasizes the importance of reducing geographical sampling bias, where possible, in datasets used to train SDMs, and the effectiveness, and indeed necessity, of sampling bias correction within MaxEnt.

  9. The Cognitive and Perceptual Laws of the Inclined Plane.

    PubMed

    Masin, Sergio Cesare

    2016-09-01

    The study explored whether laypersons tacitly know Galileo's law of the inclined plane correctly and what the basis of such knowledge could be. Participants predicted the time a ball would take to roll down a slope, with factorial combination of ball travel distance and slope angle. The resulting pattern of factorial curves relating the square of predicted time to travel distance for each slope angle was identical to that implied by Galileo's law, indicating a correct cognitive representation of this law. Intuitive physics research suggests that this cognitive representation may result from memories of past perceptions of objects rolling down a slope. Such a basis and the correct cognitive representation of Galileo's law led to the hypothesis that Galileo's law is also perceptually represented correctly. To test this hypothesis, participants were asked to judge the perceived travel time of a ball actually rolling down a slope, with perceived travel distance and perceived slope angle varied in a factorial design. The obtained pattern of factorial curves was equal to that implied by Galileo's law, indicating that the functional relationships defined in this law were perceptually represented correctly. The results foster the idea that laypersons may tacitly know both linear and nonlinear multiplicative physical laws of the everyday world. As a practical implication, awareness of this conclusion may help develop more effective methods for teaching physics and for improving human performance in the physical environment.
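
    For reference, the quantitative form of the law at issue: for a solid sphere rolling without slipping, the acceleration is a = (5/7) g sin(theta), so the roll time obeys t^2 = 14 d / (5 g sin(theta)), linear in travel distance d for each slope angle, which is exactly the factorial pattern the abstract describes. The sketch below assumes a uniform solid sphere rolling without slipping, an idealization not stated in the abstract.

        import numpy as np

        def roll_time(distance_m, angle_deg, g=9.81):
            """Time for a solid sphere rolling without slipping down a slope:
            a = (5/7) g sin(theta), so t = sqrt(2 d / a), i.e. t^2 is linear
            in d for fixed theta."""
            a = (5.0 / 7.0) * g * np.sin(np.radians(angle_deg))
            return np.sqrt(2.0 * distance_m / a)

        for angle in (10.0, 20.0, 30.0):
            print(angle, [round(roll_time(d, angle), 2) for d in (0.5, 1.0, 1.5)])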

  10. Measurement Error Correction for Predicted Spatiotemporal Air Pollution Exposures.

    PubMed

    Keller, Joshua P; Chang, Howard H; Strickland, Matthew J; Szpiro, Adam A

    2017-05-01

    Air pollution cohort studies are frequently analyzed in two stages, first modeling exposure, then using predicted exposures to estimate health effects in a second regression model. The difference between predicted and unobserved true exposures introduces a form of measurement error in the second-stage health model. Recent methods for spatial data correct for measurement error with a bootstrap and by requiring that the study design ensure spatial compatibility, that is, that monitor and subject locations are drawn from the same spatial distribution. These methods have not previously been applied to spatiotemporal exposure data. We analyzed the association between fine particulate matter (PM2.5) and birth weight in the US state of Georgia using records with estimated dates of conception during 2002-2005 (n = 403,881). We predicted trimester-specific PM2.5 exposure using a complex spatiotemporal exposure model. To improve spatial compatibility, we restricted the analysis to mothers residing in counties with a PM2.5 monitor (n = 180,440). We accounted for additional measurement error via a nonparametric bootstrap. Third-trimester PM2.5 exposure was associated with lower birth weight in the uncorrected (-2.4 g per 1 μg/m(3) difference in exposure; 95% confidence interval [CI]: -3.9, -0.8) and bootstrap-corrected (-2.5 g, 95% CI: -4.2, -0.8) analyses. Results for the unrestricted analysis were attenuated (-0.66 g, 95% CI: -1.7, 0.35). This study presents a novel application of measurement error correction for spatiotemporal air pollution exposures. Our results demonstrate the importance of spatial compatibility between monitor and subject locations and provide evidence of the association between air pollution exposure and birth weight.
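
    A minimal sketch of a two-stage nonparametric bootstrap of the kind the abstract invokes: resample monitors, refit the exposure model, re-predict subject exposures, refit the health model, and take the spread of the exposure coefficient; the toy data and the simple linear exposure model are our assumptions, not the study's spatiotemporal model.

        import numpy as np

        def two_stage_bootstrap(monitor_x, monitor_pm, subject_x, subject_y,
                                n_boot=500):
            """Bootstrap over both stages: exposure model on resampled monitors,
            health model on resampled subjects; collect the exposure slope."""
            rng = np.random.default_rng(5)
            betas = []
            for _ in range(n_boot):
                mi = rng.integers(0, len(monitor_pm), len(monitor_pm))
                si = rng.integers(0, len(subject_y), len(subject_y))
                # Stage 1: exposure model (linear in one spatial covariate here).
                g = np.polyfit(monitor_x[mi], monitor_pm[mi], 1)
                exposure = np.polyval(g, subject_x[si])
                # Stage 2: health model fitted on the predicted exposures.
                betas.append(np.polyfit(exposure, subject_y[si], 1)[0])
            return np.mean(betas), np.std(betas)  # corrected estimate and SE

        # Toy data standing in for monitors and birth records.
        rng = np.random.default_rng(6)
        mx = rng.uniform(0, 1, 50)
        mpm = 8 + 4 * mx + rng.normal(0, 0.5, 50)
        sx = rng.uniform(0, 1, 2000)
        sy = 3400 - 2.5 * (8 + 4 * sx) + rng.normal(0, 400, 2000)
        print(two_stage_bootstrap(mx, mpm, sx, sy))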

  11. Investigation of high-speed free shear flows using improved pressure-strain correlated Reynolds stress turbulence model

    NASA Technical Reports Server (NTRS)

    Tiwari, S. N.; Lakshmanan, B.

    1993-01-01

    A high-speed shear layer is studied using a compressibility-corrected Reynolds stress turbulence model that employs a newly developed model for the pressure-strain correlation. The MacCormack explicit predictor-corrector method is used for solving the governing equations and the turbulence transport equations. The stiffness arising from the source terms in the turbulence equations is handled by a semi-implicit numerical technique. Results obtained using the new model show a sharper reduction in growth rate with increasing convective Mach number. Some improvements were also noted in the prediction of the normalized streamwise stress and Reynolds shear stress. The computed results are in good agreement with the experimental data.
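
    As background, the MacCormack predictor-corrector idea is easy to state on a toy problem; a sketch for 1-D linear advection, u_t + c u_x = 0 (the cited work applies the same two-step scheme to the full governing and turbulence-transport equations):

    ```python
    import numpy as np

    def maccormack_step(u, c, dt, dx):
        """One MacCormack step for u_t + c u_x = 0 on a periodic grid.

        Predictor uses a forward difference; the corrector applies a backward
        difference to the predicted field and averages, giving second-order
        accuracy in space and time.
        """
        up = u - c * dt / dx * (np.roll(u, -1) - u)                   # predictor
        return 0.5 * (u + up - c * dt / dx * (up - np.roll(up, 1)))   # corrector

    # Advect a Gaussian pulse at CFL number 0.8
    x = np.linspace(0.0, 1.0, 200, endpoint=False)
    u = np.exp(-200.0 * (x - 0.3) ** 2)
    dx, c = x[1] - x[0], 1.0
    dt = 0.8 * dx / c
    for _ in range(100):
        u = maccormack_step(u, c, dt, dx)
    ```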

  12. Correction coefficient for see-through labyrinth seal

    NASA Astrophysics Data System (ADS)

    Hasnedl, Dan; Epikaridis, Premysl; Slama, Vaclav

    In steam turbine design, the flow-path design and blade shapes are influenced by the design mass flow through each turbine stage. If this mass flow could be predicted more precisely, the result would be an optimized design and therefore an efficiency benefit. This article is concerned with improving the prediction of losses caused by seal leakage. In common simulations of the thermodynamic cycle of a steam turbine, analytical formulas are used to model the seal leakage. This article therefore describes an improvement of the analytical formulas used in a turbine heat balance calculation. The results are verified by numerical simulations and experimental data from a steam test rig.
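
    The analytical formulas in question are of the classical Martin type; a minimal sketch, assuming the ideal see-through form with a single empirical carry-over coefficient (the article's improved coefficients are not reproduced here):

    ```python
    import math

    def labyrinth_leakage(p_in, p_out, T, area, n_teeth, R=461.5, carryover=1.0):
        """Martin-type estimate of see-through labyrinth seal leakage [kg/s].

        p_in, p_out : upstream/downstream static pressures [Pa]
        T           : gas temperature [K]
        area        : clearance area under the fins [m^2]
        n_teeth     : number of seal fins
        R           : specific gas constant [J/(kg K)]; 461.5 for steam
        carryover   : empirical factor (>1) for kinetic-energy carry-over;
                      calibrating such coefficients against CFD and test-rig
                      data is the kind of correction the article describes.
        """
        return carryover * area * math.sqrt((p_in**2 - p_out**2) / (n_teeth * R * T))

    # Example: 0.5 mm radial clearance on a 300 mm shaft, 5 fins, 10 -> 6 bar steam
    clearance_area = math.pi * 0.300 * 0.0005
    print(labyrinth_leakage(1.0e6, 0.6e6, 600.0, clearance_area, 5))
    ```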

  13. Using Analysis Increments (AI) to Estimate and Correct Systematic Errors in the Global Forecast System (GFS) Online

    NASA Astrophysics Data System (ADS)

    Bhargava, K.; Kalnay, E.; Carton, J.; Yang, F.

    2017-12-01

    Systematic forecast errors, arising from model deficiencies, form a significant portion of the total forecast error in weather prediction models like the Global Forecast System (GFS). While much effort has been expended to improve models, substantial model error remains. The aim here is to (i) estimate the model deficiencies in the GFS that lead to systematic forecast errors, and (ii) implement an online correction scheme (i.e., within the model) to correct the GFS, following the methodology of Danforth et al. [2007] and Danforth and Kalnay [2008, GRL]. Analysis increments represent the corrections that new observations make to, in this case, the 6-hr forecast in the analysis cycle. Model bias corrections are estimated from the time average of the analysis increments divided by 6 hr, assuming that initial model errors grow linearly and, at first, ignoring the impact of observation bias. During 2012-2016, seasonal means of the 6-hr model bias are generally robust despite changes in model resolution and data assimilation systems, and their broad continental scales explain their insensitivity to model resolution. The daily bias dominates the sub-monthly analysis increments and consists primarily of diurnal and semidiurnal components, also requiring only a low-dimensional correction. Analysis increments in 2015 and 2016 are reduced over the oceans, which is attributed to improvements in the specification of the SSTs. These results encourage the application of online correction, as suggested by Danforth and Kalnay, for mean, seasonal, diurnal, and semidiurnal model biases in the GFS to reduce both systematic and random errors. As the short-term error growth is still linear, estimated model bias corrections can be added as a forcing term in the model tendency equation to correct the model online. Preliminary experiments with the GFS that correct temperature and specific humidity online show a reduction in model bias in the 6-hr forecast. This approach can then be used to guide and optimize the design of sub-grid-scale physical parameterizations, more accurate discretizations of the model dynamics, boundary conditions, radiative transfer codes, and other potential model improvements, which can then replace the empirical correction scheme. The analysis increments also provide guidance in testing new physical parameterizations.
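
    A minimal sketch of the online correction described above, assuming a generic model-state update: the estimated bias (time-mean analysis increment per 6 hr) simply enters the model tendency as a constant forcing term:

    ```python
    import numpy as np

    def estimate_bias_tendency(analysis_increments, window_hours=6.0):
        """Time-mean analysis increment divided by the assimilation window,
        interpreted as a bias tendency under the linear-error-growth assumption."""
        return np.mean(analysis_increments, axis=0) / window_hours

    def step_with_online_correction(state, tendency, bias_tendency, dt):
        """Advance the model one step, subtracting the estimated bias tendency.
        `tendency` stands in for the full model tendency computation."""
        return state + dt * (tendency(state) - bias_tendency)
    ```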

  14. Prediction of response factors for gas chromatography with flame ionization detection: Algorithm improvement, extension to silylated compounds, and application to the quantification of metabolites

    PubMed Central

    de Saint Laumer, Jean‐Yves; Leocata, Sabine; Tissot, Emeline; Baroux, Lucie; Kampf, David M.; Merle, Philippe; Boschung, Alain; Seyfried, Markus

    2015-01-01

    We previously showed that the relative response factors of volatile compounds are predictable from either combustion enthalpies or their molecular formulae alone [1]. We now extend this prediction to silylated derivatives by adding an increment in the ab initio calculation of combustion enthalpies. The accuracy of the experimental relative response factor database was also improved and its population increased to 490 values. In particular, more brominated compounds were measured, and their prediction accuracy was improved by adding a correction factor to the algorithm. The correlation coefficient between predicted and measured values increased from 0.936 to 0.972, leading to a mean prediction accuracy of ±6%. Thus, 93% of the relative response factor values were predicted with an accuracy of better than ±10%. The capabilities of the extended algorithm are exemplified by (i) the quick and accurate quantification of hydroxylated metabolites resulting from a biodegradation test, after silylation and prediction of their relative response factors, without having the reference substances available; and (ii) rapid purity determinations of volatile compounds. This study confirms that gas chromatography with flame ionization detection, using predicted relative response factors, is one of the few techniques that enables quantification of volatile compounds without calibrating the instrument with the pure reference substance. PMID:26179324

  15. Biomarker Surrogates Do Not Accurately Predict Sputum Eosinophils and Neutrophils in Asthma

    PubMed Central

    Hastie, Annette T.; Moore, Wendy C.; Li, Huashi; Rector, Brian M.; Ortega, Victor E.; Pascual, Rodolfo M.; Peters, Stephen P.; Meyers, Deborah A.; Bleecker, Eugene R.

    2013-01-01

    Background Sputum eosinophils (Eos) are a strong predictor of airway inflammation and exacerbations and aid asthma management, whereas sputum neutrophils (Neu) indicate a different severe asthma phenotype, potentially less responsive to TH2-targeted therapy. Variables such as blood Eos, total IgE, fractional exhaled nitric oxide (FeNO), or FEV1% predicted may predict airway Eos, while age, FEV1% predicted, or blood Neu may predict sputum Neu. Availability and ease of measurement are useful characteristics, but accuracy in predicting airway Eos and Neu, individually or combined, is not established. Objectives To determine whether blood Eos, FeNO, and IgE accurately predict sputum Eos, and whether age, FEV1% predicted, and blood Neu accurately predict sputum neutrophils (Neu). Methods Subjects in the Wake Forest Severe Asthma Research Program (N=328) were characterized by blood and sputum cells, healthcare utilization, lung function, FeNO, and IgE. Multiple analytical techniques were utilized. Results Despite significant association with sputum Eos, blood Eos, FeNO, and total IgE did not accurately predict sputum Eos, and combinations of these variables failed to improve prediction. Age, FEV1% predicted, and blood Neu were similarly unsatisfactory for prediction of sputum Neu. Factor analysis and stepwise selection found FeNO, IgE, and FEV1% predicted, but not blood Eos, correctly predicted 69% of sputum Eos

  16. PPREMO: a prospective cohort study of preterm infant brain structure and function to predict neurodevelopmental outcome.

    PubMed

    George, Joanne M; Boyd, Roslyn N; Colditz, Paul B; Rose, Stephen E; Pannek, Kerstin; Fripp, Jurgen; Lingwood, Barbara E; Lai, Melissa M; Kong, Annice H T; Ware, Robert S; Coulthard, Alan; Finn, Christine M; Bandaranayake, Sasaka E

    2015-09-16

    More than 50 percent of all infants born very preterm will experience significant motor and cognitive impairment. Provision of early intervention is dependent upon accurate, early identification of infants at risk of adverse outcomes. Magnetic resonance imaging at term equivalent age combined with General Movements assessment at 12 weeks corrected age is currently the most accurate method for early prediction of cerebral palsy at 12 months corrected age. To date no studies have compared the use of earlier magnetic resonance imaging combined with neuromotor and neurobehavioural assessments (at 30 weeks postmenstrual age) to predict later motor and neurodevelopmental outcomes including cerebral palsy (at 12-24 months corrected age). This study aims to investigate i) the relationship between earlier brain imaging and neuromotor/neurobehavioural assessments at 30 and 40 weeks postmenstrual age, and ii) their ability to predict motor and neurodevelopmental outcomes at 3 and 12 months corrected age. This prospective cohort study will recruit 80 preterm infants born at ≤30 weeks' gestation and a reference group of 20 healthy term-born infants from the Royal Brisbane & Women's Hospital in Brisbane, Australia. Infants will undergo brain magnetic resonance imaging at approximately 30 and 40 weeks postmenstrual age to develop our understanding of very early brain structure at 30 weeks and the maturation that occurs between 30 and 40 weeks postmenstrual age. A combination of neurological (Hammersmith Neonatal Neurologic Examination), neuromotor (General Movements, Test of Infant Motor Performance), neurobehavioural (NICU Network Neurobehavioural Scale, Premie-Neuro) and visual assessments will be performed at 30 and 40 weeks postmenstrual age to improve our understanding of the relationship between brain structure and function. These data will be compared to motor assessments at 12 weeks corrected age and motor and neurodevelopmental outcomes at 12 months corrected age (neurological assessment by paediatrician, Bayley Scales of Infant and Toddler Development, Alberta Infant Motor Scale, Neurosensory Motor Developmental Assessment) to differentiate atypical development (including cerebral palsy and/or motor delay). Earlier identification of those very preterm infants at risk of adverse neurodevelopmental and motor outcomes provides an additional period for intervention to optimise outcomes. Australian New Zealand Clinical Trials Registry ACTRN12613000280707. Registered 8 March 2013.

  17. Does correcting astigmatism with toric lenses improve driving performance?

    PubMed

    Cox, Daniel J; Banton, Thomas; Record, Steven; Grabman, Jesse H; Hawkins, Ronald J

    2015-04-01

    Driving is a vision-based activity of daily living that impacts safety. Because visual disruption can compromise driving safety, contact lens wearers with astigmatism may pose a driving safety risk if they experience residual blur from spherical lenses that do not correct their astigmatism or if they experience blur from toric lenses that rotate excessively. Given that toric lens stabilization systems are continually improving, this preliminary study tested the hypothesis that astigmats wearing toric contact lenses, compared with spherical lenses, would exhibit better overall driving performance and driving-specific visual abilities. A within-subject, single-blind, crossover, randomized design was used to evaluate driving performance in 11 young adults with astigmatism (-0.75 to -1.75 diopters cylinder). Each participant drove a highly immersive, virtual reality driving simulator (210 degrees field of view) with (1) no correction, (2) spherical contact lens correction (ACUVUE MOIST), and (3) toric contact lens correction (ACUVUE MOIST for Astigmatism). Tactical driving skills such as steering, speed management, and braking, as well as operational driving abilities such as visual acuity, contrast sensitivity, and foot and arm reaction time, were quantified. There was a main effect for type of correction on driving performance (p = 0.05). Correction with toric lenses resulted in significantly safer tactical driving performance than no correction (p < 0.05), whereas correction with spherical lenses did not differ in driving safety from no correction (p = 0.118). Operational tests differentiated corrected from uncorrected performance for both spherical (p = 0.008) and toric (p = 0.011) lenses, but they were not sensitive enough to differentiate toric from spherical lens conditions. Given previous research showing that deficits in these tactical skills are predictive of future real-world collisions, these preliminary data suggest that correcting low to moderate astigmatism with toric lenses may be important to driving safety. Their merits relative to spherical lens correction require further investigation.

  18. Bronchopulmonary dysplasia: effect of altitude correction and role for the Neonatal Research Network Prediction Algorithm.

    PubMed

    Gulliver, Kristina; Yoder, Bradley A

    2018-05-09

    To determine the effect of altitude correction on bronchopulmonary dysplasia (BPD) rates and to assess the validity of the NICHD "Neonatal BPD Outcome Estimator" for predicting BPD with and without altitude correction. Retrospective analysis included neonates born at <30 weeks gestational age (GA) between 2010 and 2016. "Effective" FiO2 requirements were determined at 36 weeks corrected GA. Altitude correction was performed via the ratio of barometric pressure (BP) in our unit to sea-level BP. The probability of death and/or moderate-to-severe BPD was calculated using the NICHD BPD Outcome Estimator. Five hundred and sixty-one infants were included. The rate of moderate-to-severe BPD decreased from 71 to 40% following altitude correction. Receiver operating characteristic curves indicated high predictability of the BPD Outcome Estimator for altitude-corrected moderate-to-severe BPD diagnosis. Correction for altitude reduced the moderate-to-severe BPD rate by almost 50%, to a rate consistent with recently published values. The NICHD BPD Outcome Estimator is a valid tool for predicting the risk of moderate-to-severe BPD following altitude correction.
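
    A minimal sketch of the correction as described, assuming a sea-level reference of 760 mmHg; the handling of "effective" FiO2 for low-flow devices is a study-specific detail not reproduced here:

    ```python
    def altitude_corrected_fio2(fio2_effective, bp_local_mmHg, bp_sea_level_mmHg=760.0):
        """Scale the effective FiO2 requirement by the ratio of local to
        sea-level barometric pressure, expressing oxygen need in
        sea-level-equivalent terms before applying FiO2-based BPD severity cutoffs."""
        return fio2_effective * (bp_local_mmHg / bp_sea_level_mmHg)

    # Example: 0.35 effective FiO2 at a unit barometric pressure of ~650 mmHg
    print(altitude_corrected_fio2(0.35, 650.0))  # ~0.30 sea-level equivalent
    ```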

  19. Advanced turboprop aircraft flyover noise annoyance - Comparison of different propeller configurations

    NASA Technical Reports Server (NTRS)

    Mccurdy, David A.

    1989-01-01

    A laboratory experiment was conducted to compare the annoyance of flyover noise from advanced turboprop aircraft having different propeller configurations with the annoyance of conventional turboprop and jet aircraft flyover noise. It was found that advanced turboprops with single-rotating propellers were, on average, slightly less annoying than the other aircraft. Fundamental frequency and tone-to-broadband noise ratio affected annoyance response to advanced turboprops but the effects varied with propeller configuration and noise metric. The addition of duration corrections and corrections for tones above 500 Hz to the noise measurement procedures improved prediction ability.

  20. Correlation-based Transition Modeling for External Aerodynamic Flows

    NASA Astrophysics Data System (ADS)

    Medida, Shivaji

    Conventional turbulence models calibrated for fully turbulent boundary layers often over-predict drag and heat transfer on aerodynamic surfaces with partially laminar boundary layers. A robust correlation-based model is developed for use in Reynolds-Averaged Navier-Stokes simulations to predict laminar-to-turbulent transition onset of boundary layers on external aerodynamic surfaces. The new model is derived from an existing transition model for the two-equation k-omega Shear Stress Transport (SST) turbulence model, and is coupled with the one-equation Spalart-Allmaras (SA) turbulence model. The transition model solves two transport equations, for intermittency and for transition momentum-thickness Reynolds number. Experimental correlations and local mean flow quantities are used in the model to account for the effects of freestream turbulence level and pressure gradients on transition onset location. Transition onset is triggered by activating intermittency production using a vorticity Reynolds number criterion. In the new model, the production and destruction terms of the intermittency equation are modified to improve consistency in the fully turbulent boundary layer post-transition onset, as well as to ensure insensitivity to the freestream eddy viscosity value specified in the SA model. In the original model, intermittency was used to control the production and destruction of turbulent kinetic energy, whereas in the new model only the production of eddy viscosity in the SA model is controlled, and the destruction term is not altered. Unlike the original model, the new model does not use an additional correction to intermittency for separation-induced transition. Accuracy of drag predictions is improved significantly with the use of the transition model for several two-dimensional single- and multi-element airfoil cases over a wide range of Reynolds numbers. The new model is able to predict the formation of stable and long laminar separation bubbles on low-Reynolds-number airfoils that are not captured with conventional turbulence models. The validated transition model is successfully applied to rotating blade configurations in axial flow conditions to study the effects of transitional boundary layers on rotor thrust and torque. In helicopter rotors, inclusion of transition effects increased thrust predictions by 2% and decreased torque by as much as 8% at lower collective angles, due to reduced airfoil profile drag. In wind turbine rotors, the transition model predicted a 7%-70% increase in generated shaft torque at lower wind speeds, due to lower viscous drag. This has important implications for CFD analysis of small wind turbines operating at low values of rated power. Transition onset locations along the upper and lower surfaces of rotor blades are analyzed in detail. A new crossflow transition onset criterion is developed to account for crossflow instability effects in three-dimensional boundary layers. Preliminary results for swept wing and rotating blade flows demonstrate the need to account for crossflow transition in three-dimensional simulations of wings, rotating blades, and airframes. Inclusion of crossflow effects resulted in accelerated transition in the presence of favorable pressure gradients and yawed flow. Finally, a new correction to the wall damping function in the Spalart-Allmaras turbulence model is proposed to improve the sensitivity of the model to strong adverse pressure gradients (APG). The correction reduces turbulence production in the boundary layer when the ratio of the magnitudes of the local turbulent stress to the wall shear stress exceeds a threshold value, thereby enabling earlier separation of the boundary layer. Improved prediction of static and dynamic stall on two-dimensional airfoils is demonstrated with the APG correction.
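
    For orientation, the local trigger quantity mentioned above can be written explicitly; a sketch, assuming the standard vorticity-Reynolds-number form used in correlation-based transition models (the constant varies between model versions):

    ```latex
    % Vorticity (strain-rate) Reynolds number, built from local quantities only:
    Re_V = \frac{\rho\, y^2}{\mu}\, S ,
    % with y the wall distance and S the strain-rate magnitude.  Transition
    % onset is triggered where its scaled wall-normal maximum reaches the
    % correlated transition momentum-thickness Reynolds number:
    \frac{\max_y Re_V}{2.193} \;\ge\; Re_{\theta t} .
    ```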

  1. An entropy and viscosity corrected potential method for rotor performance prediction

    NASA Technical Reports Server (NTRS)

    Bridgeman, John O.; Strawn, Roger C.; Caradonna, Francis X.

    1988-01-01

    An unsteady Full-Potential Rotor code (FPR) has been enhanced with modifications directed at improving its drag prediction capability. The shock-generated entropy has been included to provide solutions comparable to the Euler equations. A weakly interacting integral boundary layer has also been coupled to FPR in order to estimate skin-friction drag. Pressure distributions, shock positions, and drag comparisons are made with various data sets derived from two-dimensional airfoil, hovering, and advancing high-speed rotor tests. In all these comparisons, the nonisentropic modification improves the predicted shock strength (i.e., weakens the shock) and the wave drag. In addition, the boundary layer method yields reasonable estimates of skin-friction drag. Airfoil drag and hover torque data comparisons are excellent, as are the predicted shock strengths and positions for a high-speed advancing rotor.

  2. Improved partition equilibrium model for predicting analyte response in electrospray ionization mass spectrometry.

    PubMed

    Du, Lihong; White, Robert L

    2009-02-01

    A previously proposed partition equilibrium model for quantitative prediction of analyte response in electrospray ionization mass spectrometry is modified to yield an improved linear relationship. Analyte mass spectrometer response is modeled by a competition mechanism between analyte and background electrolytes that is based on partition equilibrium considerations. The correlation between analyte response and solution composition is described by the linear model over a wide concentration range and the improved model is shown to be valid for a wide range of experimental conditions. The behavior of an analyte in a salt solution, which could not be explained by the original model, is correctly predicted. The ion suppression effects of 16:0 lysophosphatidylcholine (LPC) on analyte signals are attributed to a combination of competition for excess charge and reduction of total charge due to surface tension effects. In contrast to the complicated mathematical forms that comprise the original model, the simplified model described here can more easily be employed to predict analyte mass spectrometer responses for solutions containing multiple components. Copyright (c) 2008 John Wiley & Sons, Ltd.

  3. Assimilation of Satellite Data to Improve Cloud Simulation in the WRF Model

    NASA Astrophysics Data System (ADS)

    Park, Y. H.; Pour Biazar, A.; McNider, R. T.

    2012-12-01

    A simple approach has been introduced to improve cloud simulation spatially and temporally in a meteorological model. The first step of this approach is to use Geostationary Operational Environmental Satellite (GOES) observations to identify clouds and estimate the cloud structure. Then, by comparing GOES observations to the model cloud field, we identify areas in which the model has under-predicted or over-predicted clouds. Next, by introducing subsidence in areas with over-prediction and lifting in areas with under-prediction, erroneous clouds are removed and new clouds are formed. The technique estimates a vertical velocity needed for the cloud correction and then uses a one-dimensional variational scheme (1D-Var) to calculate the horizontal divergence components and the consequent horizontal wind components needed to sustain such vertical velocity. Finally, the new horizontal winds are provided as a nudging field to the model. This nudging provides the dynamical support needed to create/clear clouds in a sustainable manner. The technique was implemented and tested in the Weather Research and Forecasting (WRF) Model and resulted in substantial improvement in the model-simulated clouds. Some of the results are presented here.

  4. Short-term Drought Prediction in India.

    NASA Astrophysics Data System (ADS)

    Shah, R.; Mishra, V.

    2014-12-01

    Medium-range soil moisture drought forecasts help decision making in agriculture and water resources management. Part of the skill in medium-range drought forecasting comes from precipitation. Proper evaluation and correction of precipitation forecasts may improve drought predictions. Here, we evaluate the skill of ensemble-mean precipitation forecasts from the Global Ensemble Forecast System (GEFS) for medium-range drought predictions over India. The climatological mean (CLIM) of historic data (OBS) is used as the reference forecast to evaluate the GEFS precipitation forecast. The analysis was conducted for forecasts initiated on the 1st and 15th of each month for leads up to 7 days. Correlation and RMSE were used to estimate skill scores of accumulated GEFS precipitation forecasts from 1- to 7-day leads. Volumetric indices based on the 2×2 contingency table were used to check missed and falsely predicted volumes of daily precipitation from GEFS in different regions and at different thresholds. GEFS showed an improvement in correlation over CLIM of 0.44 during the monsoon season and 0.55 during the winter season. GEFS also showed lower RMSE than CLIM: the ratio of RMSE in GEFS to that in CLIM is 0.82 and 0.4 (perfect skill at zero) during the monsoon and winter seasons, respectively. We finally used the corrected GEFS forecast to drive the Variable Infiltration Capacity (VIC) model, which was used to develop short-term forecasts of hydrologic and agricultural (soil moisture) droughts in India.

  5. Boosted classification trees result in minor to modest improvement in the accuracy in classifying cardiovascular outcomes compared to conventional classification trees

    PubMed Central

    Austin, Peter C; Lee, Douglas S

    2011-01-01

    Purpose: Classification trees are increasingly being used to classify patients according to the presence or absence of a disease or health outcome. A limitation of classification trees is their limited predictive accuracy. In the data-mining and machine learning literature, boosting has been developed to improve classification. Boosting with classification trees iteratively grows classification trees on a sequence of reweighted datasets. In a given iteration, subjects that were misclassified in the previous iteration are weighted more highly than subjects that were correctly classified. Classifications from each of the classification trees in the sequence are combined through a weighted majority vote to produce a final classification. The authors' objective was to examine whether boosting improved the accuracy of classification trees for predicting outcomes in cardiovascular patients. Methods: We examined the utility of boosting classification trees for classifying 30-day mortality outcomes in patients hospitalized with either acute myocardial infarction or congestive heart failure. Results: Improvements in the misclassification rate using boosted classification trees were at best minor compared to when conventional classification trees were used. Minor to modest improvements in sensitivity were observed, with only a negligible reduction in specificity. For predicting cardiovascular mortality, boosted classification trees had high specificity but low sensitivity. Conclusions: Gains in predictive accuracy for predicting cardiovascular outcomes were less impressive than the gains in performance observed in the data-mining literature. PMID:22254181
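
    The reweighting-and-vote scheme described here is essentially AdaBoost; a minimal sketch with scikit-learn, assuming a synthetic tabular dataset in place of the cardiovascular registries used in the study:

    ```python
    from sklearn.datasets import make_classification
    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    # Stand-in for the 30-day mortality data; any binary (X, y) problem works.
    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    # Boosting: each round reweights misclassified subjects upward, and the
    # final label is a weighted majority vote over the sequence of trees.
    boosted = AdaBoostClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    single = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)

    print("single tree accuracy:", single.score(X_te, y_te))
    print("boosted trees accuracy:", boosted.score(X_te, y_te))
    ```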

  6. Secular Orbit Evolution in Systems with a Strong External Perturber—A Simple and Accurate Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andrade-Ines, Eduardo; Eggl, Siegfried, E-mail: eandrade.ines@gmail.com, E-mail: siegfried.eggl@jpl.nasa.gov

    We present a semi-analytical correction to the seminal solution for the secular motion of a planet's orbit under the gravitational influence of an external perturber derived by Heppenheimer. A comparison between analytical predictions and numerical simulations allows us to determine corrective factors for the secular frequency and forced eccentricity in the coplanar restricted three-body problem. The correction is given in the form of a polynomial function of the system's parameters that can be applied to first-order estimates of the forced eccentricity and secular frequency. The resulting secular equations are simple, straightforward to use, and improve the fidelity of Heppenheimer's solution well beyond higher-order models. The quality and convergence of the corrected secular equations are tested for a wide range of parameters, and the limits of their applicability are given.
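
    For context, the first-order solution being corrected has a compact closed form; a sketch, assuming the standard restricted coplanar expressions (the paper's polynomial corrective factors rescale these estimates, and the exact coefficients should be taken from the paper):

    ```latex
    % Heppenheimer's first-order secular solution for a planet with semi-major
    % axis a and mean motion n, perturbed by a companion of mass m_B on an
    % orbit with semi-major axis a_B and eccentricity e_B:
    e_F \;\approx\; \frac{5}{4}\,\frac{a}{a_B}\,\frac{e_B}{1-e_B^{2}},
    \qquad
    g \;\approx\; \frac{3}{4}\, n\,\frac{m_B}{m_\ast+m_B}
      \left(\frac{a}{a_B}\right)^{3}\!\left(1-e_B^{2}\right)^{-3/2},
    % where e_F is the forced eccentricity and g the secular frequency.
    ```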

  7. Simple improvements to classical bubble nucleation models.

    PubMed

    Tanaka, Kyoko K; Tanaka, Hidekazu; Angélil, Raymond; Diemand, Jürg

    2015-08-01

    We revisit classical nucleation theory (CNT) for the homogeneous bubble nucleation rate and improve the classical formula using a correct prefactor in the nucleation rate. Most of the previous theoretical studies have used the constant prefactor determined by the bubble growth due to the evaporation process from the bubble surface. However, the growth of bubbles is also regulated by the thermal conduction, the viscosity, and the inertia of liquid motion. These effects can decrease the prefactor significantly, especially when the liquid pressure is much smaller than the equilibrium one. The deviation in the nucleation rate between the improved formula and the CNT can be as large as several orders of magnitude. Our improved, accurate prefactor and recent advances in molecular dynamics simulations and laboratory experiments for argon bubble nucleation enable us to precisely constrain the free energy barrier for bubble nucleation. Assuming the correction to the CNT free energy is of the functional form suggested by Tolman, the precise evaluations of the free energy barriers suggest the Tolman length is ≃0.3σ independently of the temperature for argon bubble nucleation, where σ is the unit length of the Lennard-Jones potential. With this Tolman correction and our prefactor one gets accurate bubble nucleation rate predictions in the parameter range probed by current experiments and molecular dynamics simulations.
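
    The quantities being corrected can be summarized in standard CNT form; a sketch, assuming the usual capillarity expressions together with the Tolman correction quoted above:

    ```latex
    % Classical nucleation theory: bubble nucleation rate and barrier,
    J = J_0 \exp\!\left(-\frac{\Delta G^{\ast}}{k_B T}\right),
    \qquad
    \Delta G^{\ast} = \frac{16\pi\,\gamma^{3}}{3\,\Delta p^{2}},
    % with \gamma the surface tension and \Delta p the difference between the
    % bubble and ambient liquid pressures.  The paper replaces the constant
    % prefactor J_0 with one accounting for thermal conduction, viscosity, and
    % liquid inertia, and applies the Tolman curvature correction
    \gamma(r) = \frac{\gamma_\infty}{1 + 2\delta/r},
    % with the quoted fit \delta \simeq 0.3\,\sigma, where \sigma is the
    % Lennard-Jones unit length.
    ```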

  8. GenePRIMP: A Gene Prediction Improvement Pipeline For Prokaryotic Genomes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kyrpides, Nikos C.; Ivanova, Natalia N.; Pati, Amrita

    2010-07-08

    GenePRIMP (Gene Prediction Improvement Pipeline, http://geneprimp.jgi-psf.org) is a computational process that performs evidence-based evaluation of gene models in prokaryotic genomes and reports anomalies, including inconsistent start sites, missing genes, and split genes. We show that manual curation of gene models using the anomaly reports generated by GenePRIMP improves their quality, and we demonstrate the applicability of GenePRIMP in improving finishing quality and in comparing different genome sequencing and annotation technologies. Keywords in context: gene model, quality control, translation start sites, automatic correction. Hardware requirements: PC, Mac. Operating system: UNIX/Linux. Compiler/version: Perl 5.8.5 or higher. Special requirements: NCBI BLAST and nr installation. File types: source code, executable module(s), sample problem input data, installation instructions, programmer documentation. Location/transmission: http://geneprimp.jgi-psf.org/gp.tar.gz

  9. Identifying and Predicting Classes of Response to Explicit Phonological Spelling Instruction during Independent Composing

    ERIC Educational Resources Information Center

    Amtmann, Dagmar; Abbott, Robert D.; Berninger, Virginia W.

    2008-01-01

    After explicit spelling instruction, low achieving second grade spellers increased the number of correctly spelled words during composing but differed in response trajectories. Class 1 (low initial and slow growth) had the lowest initial performance and improved at a relatively slow rate. Class 2 (high initial and fast growth) started higher than…

  10. Sensorimotor synchronization with tempo-changing auditory sequences: Modeling temporal adaptation and anticipation.

    PubMed

    van der Steen, M C Marieke; Jacoby, Nori; Fairhurst, Merle T; Keller, Peter E

    2015-11-11

    The current study investigated the human ability to synchronize movements with event sequences containing continuous tempo changes. This capacity is evident, for example, in ensemble musicians who maintain precise interpersonal coordination while modulating the performance tempo for expressive purposes. Here we tested an ADaptation and Anticipation Model (ADAM) that was developed to account for such behavior by combining error correction processes (adaptation) with a predictive temporal extrapolation process (anticipation). While previous computational models of synchronization incorporate error correction, they do not account for prediction during tempo-changing behavior. The fit between behavioral data and computer simulations based on four versions of ADAM was assessed. These versions included a model with adaptation only, one in which adaptation and anticipation act in combination (error correction is applied on the basis of predicted tempo changes), and two models in which adaptation and anticipation were linked in a joint module that corrects for predicted discrepancies between the outcomes of adaptive and anticipatory processes. The behavioral experiment required participants to tap their finger in time with three auditory pacing sequences containing tempo changes that differed in the rate of change and the number of turning points. Behavioral results indicated that sensorimotor synchronization accuracy and precision, while generally high, decreased with increases in the rate of tempo change and number of turning points. Simulations and model-based parameter estimates showed that adaptation mechanisms alone could not fully explain the observed precision of sensorimotor synchronization. Including anticipation in the model increased the precision of simulated sensorimotor synchronization and improved the fit of model to behavioral data, especially when adaptation and anticipation mechanisms were linked via a joint module based on the notion of joint internal models. Overall results suggest that adaptation and anticipation mechanisms both play an important role during sensorimotor synchronization with tempo-changing sequences. This article is part of a Special Issue entitled SI: Prediction and Attention. Copyright © 2015 Elsevier B.V. All rights reserved.
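
    A minimal sketch of the two mechanisms, not ADAM itself: linear phase correction (adaptation) plus linear extrapolation of the recent tempo (anticipation); all parameter values are illustrative assumptions:

    ```python
    import numpy as np

    def simulate_tapping(onsets, alpha=0.5, motor_sd=0.01, seed=0):
        """Tap-timing simulation combining adaptation and anticipation.

        Adaptation: subtract a fraction alpha of the last asynchrony.
        Anticipation: predict the next inter-onset interval by linearly
        extrapolating the last two intervals heard so far (tempo change).
        """
        rng = np.random.default_rng(seed)
        taps = [onsets[0]]
        for n in range(1, len(onsets)):
            heard = np.diff(onsets[:n])                 # intervals heard so far
            if len(heard) >= 2:
                ioi_hat = heard[-1] + (heard[-1] - heard[-2])   # anticipation
            elif len(heard) == 1:
                ioi_hat = heard[-1]
            else:
                ioi_hat = 0.6                           # assumed starting tempo [s]
            asynchrony = taps[-1] - onsets[n - 1]       # adaptation input
            taps.append(taps[-1] + ioi_hat - alpha * asynchrony
                        + rng.normal(0.0, motor_sd))
        return np.array(taps)

    # Accelerating pacing sequence: intervals shrink from 600 ms to 400 ms
    iois = np.linspace(0.6, 0.4, 40)
    onsets = np.concatenate([[0.0], np.cumsum(iois)])
    taps = simulate_tapping(onsets)
    print("mean |asynchrony| [s]:", np.mean(np.abs(taps - onsets)))
    ```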

  11. Can we predict failure in couple therapy early enough to enhance outcome?

    PubMed

    Pepping, Christopher A; Halford, W Kim; Doss, Brian D

    2015-02-01

    Feedback to therapists based on systematic monitoring of individual therapy progress reliably enhances therapy outcome. An implicit assumption of therapy progress feedback is that clients unlikely to benefit from therapy can be detected early enough in the course of therapy for corrective action to be taken. To explore the possibility of using feedback of therapy progress to enhance couple therapy outcome, the current study tested whether weekly monitoring of therapy progress could detect off-track clients early in couple therapy. In an effectiveness trial of couple therapy, 136 couples were monitored weekly on relationship satisfaction, and an expert-derived algorithm was used to attempt to predict eventual therapy outcome. As expected, the algorithm detected a significant proportion of couples who did not benefit from couple therapy at Session 3, but prediction was substantially improved at Session 4, so that eventual outcome was accurately predicted for 70% of couples, with little improvement of prediction thereafter. More sophisticated algorithms might enhance prediction accuracy, and a trial of the effects of therapy progress feedback on couple therapy outcome is needed. Copyright © 2015 Elsevier Ltd. All rights reserved.

  12. Early prediction of extreme stratospheric polar vortex states based on causal precursors

    NASA Astrophysics Data System (ADS)

    Kretschmer, Marlene; Runge, Jakob; Coumou, Dim

    2017-08-01

    Variability in the stratospheric polar vortex (SPV) can influence the tropospheric circulation and thereby winter weather. Early predictions of extreme SPV states are thus important for improving forecasts of winter weather, including cold spells. However, dynamical models are usually restricted in lead time because they poorly capture low-frequency processes. Empirical models often suffer from overfitting problems, as the relevant physical processes and time lags are often not well understood. Here we introduce a novel empirical prediction method by uniting a response-guided community detection scheme with a causal discovery algorithm. This way, we objectively identify causal precursors of the SPV at subseasonal lead times and find them to be in good agreement with known physical drivers. A linear regression prediction model based on the causal precursors can explain most SPV variability (r² = 0.58), and our scheme correctly predicts 58% (46%) of extremely weak SPV states for lead times of 1-15 (16-30) days with false-alarm rates of only approximately 5%. Our method can be applied to any variable relevant for (sub)seasonal weather forecasts and could thus help improve long-lead predictions.

  13. Can we predict age at natural menopause using ovarian reserve tests or mother's age at menopause? A systematic literature review.

    PubMed

    Depmann, Martine; Broer, Simone L; van der Schouw, Yvonne T; Tehrani, Fahimeh R; Eijkemans, Marinus J; Mol, Ben W; Broekmans, Frank J

    2016-02-01

    This review aimed to appraise data on the prediction of age at natural menopause (ANM) based on antimüllerian hormone (AMH), antral follicle count (AFC), and mother's ANM, to evaluate clinical usefulness, and to identify directions for further research. We conducted three systematic reviews of the literature to identify studies of menopause prediction based on AMH, AFC, or mother's ANM, corrected for baseline age. The six studies selected in the search for AMH all consistently demonstrated AMH as being capable of predicting ANM (hazard ratio, 5.6-9.2). The sole study reporting on mother's ANM indicated that it was capable of predicting ANM (hazard ratio, 9.1-9.3). Two studies provided analyses of AFC and yielded conflicting results, making this marker less strong. AMH is currently the most promising marker for ANM prediction. The predictive capacity of mother's ANM demonstrated in a single study makes this marker a promising contributor to AMH for menopause prediction. The models, however, do not predict the extremes of menopause age very well and have wide prediction intervals. These markers clearly need improvement before they can be used for individual prediction of menopause in the clinical setting. Moreover, potential limitations for such use include variations in the AMH assays used and a lack of correction for factors or diseases affecting AMH levels or ANM. Future studies should include women of a broad age range (irrespective of cycle regularity) and should base predictions on repeated AMH measurements. Furthermore, currently unknown candidate predictors need to be identified.

  14. Functional neuroimaging of high-risk 6-month-old infants predicts a diagnosis of autism at 24 months of age

    PubMed Central

    Emerson, Robert W.; Adams, Chloe; Nishino, Tomoyuki; Hazlett, Heather Cody; Wolff, Jason J.; Zwaigenbaum, Lonnie; Constantino, John N.; Shen, Mark D.; Swanson, Meghan R.; Elison, Jed T.; Kandala, Sridhar; Estes, Annette M.; Botteron, Kelly N.; Collins, Louis; Dager, Stephen R.; Evans, Alan C.; Gerig, Guido; Gu, Hongbin; McKinstry, Robert C.; Paterson, Sarah; Schultz, Robert T.; Styner, Martin; Network, IBIS; Schlaggar, Bradley L.; Pruett, John R.; Piven, Joseph

    2018-01-01

    Autism spectrum disorder (ASD) is a neurodevelopmental disorder characterized by social deficits and repetitive behaviors that typically emerge by 24 months of age. To develop effective early interventions that can potentially ameliorate the defining deficits of ASD and improve long-term outcomes, early detection is essential. Using prospective neuroimaging of 59 6-month-old infants with a high familial risk for ASD, we show that functional connectivity magnetic resonance imaging correctly identified which individual children would receive a research clinical best-estimate diagnosis of ASD at 24 months of age. Functional brain connections were defined in 6-month-old infants that correlated with 24-month scores on measures of social behavior, language, motor development, and repetitive behavior, which are all features common to the diagnosis of ASD. A fully cross-validated machine learning algorithm applied at age 6 months had a positive predictive value of 100% [95% confidence interval (CI), 62.9 to 100], correctly predicting 9 of 11 infants who received a diagnosis of ASD at 24 months (sensitivity, 81.8%; 95% CI, 47.8 to 96.8). All 48 6-month-old infants who were not diagnosed with ASD were correctly classified [specificity, 100% (95% CI, 90.8 to 100); negative predictive value, 96.0% (95% CI, 85.1 to 99.3)]. These findings have clinical implications for early risk assessment and the feasibility of developing early preventative interventions for ASD. PMID:28592562
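
    The reported operating characteristics follow directly from the counts in the abstract (9 true positives, 2 misses, 0 false positives, 48 true negatives); a quick check:

    ```python
    def diagnostic_metrics(tp, fn, fp, tn):
        """Sensitivity, specificity, PPV, and NPV from a 2x2 confusion matrix."""
        return {
            "sensitivity": tp / (tp + fn),  # 9/11  = 0.818
            "specificity": tn / (tn + fp),  # 48/48 = 1.000
            "ppv":         tp / (tp + fp),  # 9/9   = 1.000
            "npv":         tn / (tn + fn),  # 48/50 = 0.960
        }

    print(diagnostic_metrics(tp=9, fn=2, fp=0, tn=48))
    ```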

  15. N3LO corrections to jet production in deep inelastic scattering using the Projection-to-Born method

    NASA Astrophysics Data System (ADS)

    Currie, J.; Gehrmann, T.; Glover, E. W. N.; Huss, A.; Niehues, J.; Vogt, A.

    2018-05-01

    Computations of higher-order QCD corrections for processes with exclusive final states require a subtraction method for real-radiation contributions. We present the first-ever generalisation of a subtraction method for third-order (N3LO) QCD corrections. The Projection-to-Born method is used to combine inclusive N3LO coefficient functions with an exclusive second-order (NNLO) calculation for a final state with an extra jet. The input requirements, advantages, and potential applications of the method are discussed, and validations at lower orders are performed. As a test case, we compute the N3LO corrections to kinematical distributions and production rates for single-jet production in deep inelastic scattering in the laboratory frame, and compare them with data from the ZEUS experiment at HERA. The corrections are small in the central rapidity region, where they stabilize the predictions to the sub-percent level. The corrections increase substantially towards forward rapidity, where large logarithmic effects are expected, thereby yielding an improved description of the data in this region.

  16. Predicting elastic properties of β-HMX from first-principles calculations.

    PubMed

    Peng, Qing; Rahul; Wang, Guangyu; Liu, Gui-Rong; Grimme, Stefan; De, Suvranu

    2015-05-07

    We investigate the performance of van der Waals (vdW) functionals in predicting the elastic constants of β-cyclotetramethylene tetranitramine (β-HMX) energetic molecular crystals using density functional theory (DFT) calculations. We confirm that the accuracy of the elastic constants is significantly improved using vdW corrections with environment-dependent C6 coefficients together with the PBE and revised PBE exchange-correlation functionals. The elastic constants obtained using PBE-D3(0) calculations yield the most accurate mechanical response of β-HMX when compared with experimental stress-strain data. Our results suggest that PBE-D3 calculations are reliable in predicting the elastic constants of this material.

  17. Evaluation of performance of seasonal precipitation prediction at regional scale over India

    NASA Astrophysics Data System (ADS)

    Mohanty, U. C.; Nageswararao, M. M.; Sinha, P.; Nair, A.; Singh, A.; Rai, R. K.; Kar, S. C.; Ramesh, K. J.; Singh, K. K.; Ghosh, K.; Rathore, L. S.; Sharma, R.; Kumar, A.; Dhekale, B. S.; Maurya, R. K. S.; Sahoo, R. K.; Dash, G. P.

    2018-03-01

    The seasonal-scale precipitation amount is an important ingredient in planning most agricultural practices (such as the type of crop and the sowing and harvesting schedules). India being an agroeconomic country, seasonal-scale prediction of precipitation is directly linked to the socioeconomic growth of the nation. At present, seasonal precipitation prediction at the regional scale is a challenging task for the scientific community. In the present study, an attempt is made to develop a multi-model dynamical-statistical approach for seasonal precipitation prediction at the regional scale (meteorological subdivisions) over India for four prominent seasons: winter (December to February; DJF), pre-monsoon (March to May; MAM), summer monsoon (June to September; JJAS), and post-monsoon (October to December; OND). The present prediction approach is referred to as the extended range forecast system (ERFS). For this purpose, precipitation predictions from ten general circulation models (GCMs) are used along with the India Meteorological Department (IMD) rainfall analysis data from 1982 to 2008 for evaluation of the performance of the GCMs, bias correction of the model results, and development of the ERFS. An extensive evaluation of the performance of the ERFS is carried out with dependent data (1982-2008) as well as independent predictions for the period 2009-2014. In general, the skill of the ERFS is reasonably better and more consistent for all the seasons and different regions over India compared with the GCMs and their simple mean. The GCM products failed to capture the extreme precipitation years, whereas the bias-corrected GCM mean and the ERFS improved the prediction and represented the extremes well in the hindcast period. The peak intensity, as well as the regions of maximum precipitation, is better represented by the ERFS than by the individual GCMs. The study highlights the improvement in forecast skill of the ERFS over 34 meteorological subdivisions as well as over India as a whole during all four seasons.

  18. Improving Earth/Prediction Models to Improve Network Processing

    NASA Astrophysics Data System (ADS)

    Wagner, G. S.

    2017-12-01

    The United States Atomic Energy Detection System (USAEDS) primary seismic network consists of a relatively small number of arrays and three-component stations. The relatively small number of stations in the USAEDS primary network makes it both necessary and feasible to optimize both station and network processing. Station processing improvements include detector tuning efforts that use Receiver Operating Characteristic (ROC) curves to help judiciously set acceptable Type 1 (false alarm) vs. Type 2 (miss) error rates. Other station processing improvements include the use of empirical/historical observations and continuous background noise measurements to compute time-varying, maximum-likelihood probability-of-detection thresholds. The USAEDS network processing software makes extensive use of the azimuth and slowness information provided by frequency-wavenumber analysis at array sites, and polarization analysis at three-component sites. Most of the improvements in USAEDS network processing are due to improvements in the models used to predict azimuth, slowness, and probability of detection. Kriged travel-time, azimuth, and slowness corrections, and their associated uncertainties, are computed using a ground-truth database. Improvements in station processing and the use of improved models for azimuth, slowness, and probability of detection have led to significant improvements in USAEDS network processing.
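
    A minimal sketch of the ROC-based detector tuning described, assuming toy Gaussian score distributions for noise and signal; the point is making the Type 1 vs. Type 2 trade explicit when setting a threshold:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    noise = rng.normal(0.0, 1.0, 100_000)    # detector statistic, no signal
    signal = rng.normal(2.0, 1.0, 100_000)   # detector statistic, signal present

    thresholds = np.linspace(-2.0, 5.0, 701)
    p_false = np.array([(noise > t).mean() for t in thresholds])   # Type 1
    p_miss = np.array([(signal <= t).mean() for t in thresholds])  # Type 2

    # Pick the threshold whose false-alarm rate is closest to an acceptable level
    target_far = 0.01
    i = int(np.argmin(np.abs(p_false - target_far)))
    print(f"threshold={thresholds[i]:.2f}  "
          f"P(false alarm)={p_false[i]:.3f}  P(miss)={p_miss[i]:.3f}")
    ```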

  19. Study program for design improvements of the X-3060 klystron. Phase 3: Electron gun fabrication and beam analyzer evaluation. Phase 4: Klystron prototype fabrication and testing

    NASA Technical Reports Server (NTRS)

    Goldfinger, A.

    1981-01-01

    A full-scale model was produced to verify the suggested design changes. Through a beam analyzer study, the correct electron beam diameter and cross-sectional profile were established in conjunction with the desired confining magnetic field. Comparative data on the performance of the X-3060 klystron, design predictions for the improved klystron, and performance data taken during acceptance testing of the prototype VKS-8274 JPL are presented.

  20. Improved packing of protein side chains with parallel ant colonies.

    PubMed

    Quan, Lijun; Lü, Qiang; Li, Haiou; Xia, Xiaoyan; Wu, Hongjie

    2014-01-01

    The accurate packing of protein side chains is important for many computational biology problems, such as ab initio protein structure prediction, homology modelling, and protein design and ligand docking applications. Many existing solutions model it as a computational optimisation problem. Beyond the design of the search algorithm, most solutions suffer from an inaccurate energy function for judging whether a prediction is good or bad. Even if the search has found the lowest energy, there is no certainty of obtaining protein structures with correct side chains. We present a side-chain modelling method, pacoPacker, which uses a parallel ant colony optimisation strategy based on sharing a single pheromone matrix. This parallel approach combines different sources of energy functions and generates protein side-chain conformations with the lowest energies jointly determined by the various energy functions. We further optimised the selected rotamers to construct subrotamers by rotamer minimisation, which reasonably improved the discreteness of the rotamer library. We focused on improving the accuracy of side-chain conformation prediction. For a testing set of 442 proteins, 87.19% of χ1 and 77.11% of χ1-2 angles were predicted correctly within 40° of the X-ray positions. We compared the accuracy of pacoPacker with state-of-the-art methods such as CIS-RR and SCWRL4, and analysed the results from different perspectives, in terms of whole protein chains and individual residues. In this comprehensive benchmark testing, for 51.5% of proteins within a length of 400 amino acids, the predictions by pacoPacker were superior to the results of CIS-RR and SCWRL4 simultaneously. Finally, we also showed the advantage of using the subrotamer strategy. All results confirm that our parallel approach is competitive with state-of-the-art solutions for packing side chains. This parallel approach combines various sources of search intelligence and energy functions to pack protein side chains. It provides a framework for combining different inaccuracy/usefulness objective functions by designing parallel heuristic search algorithms.
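
    A minimal sketch of ant colony optimisation over one discrete rotamer choice per residue, with all ants reading and reinforcing a single shared pheromone matrix (run sequentially here for brevity); the energy function, rotamer sets, and parameters are stand-ins, not pacoPacker's:

    ```python
    import numpy as np

    def aco_pack(energy, n_residues, n_rotamers, n_ants=20, n_iter=100,
                 rho=0.1, seed=0):
        """Choose one rotamer per residue minimising a black-box energy().

        pheromone[i, r] records how often rotamer r at residue i appeared in
        good solutions; sharing this one matrix among all ants is the core of
        the parallel strategy described above.
        """
        rng = np.random.default_rng(seed)
        pheromone = np.ones((n_residues, n_rotamers))
        best, best_e = None, np.inf
        for _ in range(n_iter):
            for _ in range(n_ants):
                probs = pheromone / pheromone.sum(axis=1, keepdims=True)
                choice = np.array([rng.choice(n_rotamers, p=p) for p in probs])
                e = energy(choice)
                if e < best_e:
                    best, best_e = choice, e
            pheromone *= 1.0 - rho                         # evaporation
            pheromone[np.arange(n_residues), best] += 1.0  # reinforce best tour
        return best, best_e

    # Toy energy: distance from a hidden "native" rotamer assignment
    native = np.random.default_rng(1).integers(0, 5, size=30)
    print(aco_pack(lambda c: int(np.sum(c != native)), 30, 5)[1])  # -> near 0
    ```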

  1. Permutation importance: a corrected feature importance measure.

    PubMed

    Altmann, André; Toloşi, Laura; Sander, Oliver; Lengauer, Thomas

    2010-05-15

    In life sciences, interpretability of machine learning models is as important as their prediction accuracy. Linear models are probably the most frequently used methods for assessing feature relevance, despite their relative inflexibility. However, in the past years effective estimators of feature relevance have been derived for highly complex or non-parametric models such as support vector machines and RandomForest (RF) models. Recently, it has been observed that RF models are biased in such a way that categorical variables with a large number of categories are preferred. In this work, we introduce a heuristic for normalizing feature importance measures that can correct the feature importance bias. The method is based on repeated permutations of the outcome vector for estimating the distribution of measured importance for each variable in a non-informative setting. The P-value of the observed importance provides a corrected measure of feature importance. We apply our method to simulated data and demonstrate that (i) non-informative predictors do not receive significant P-values, (ii) informative variables can successfully be recovered among non-informative variables and (iii) P-values computed with permutation importance (PIMP) are very helpful for deciding the significance of variables, and therefore improve model interpretability. Furthermore, PIMP was used to correct RF-based importance measures for two real-world case studies. We propose an improved RF model that uses the significant variables with respect to the PIMP measure and show that its prediction accuracy is superior to that of other existing models. R code for the method presented in this article is available at http://www.mpi-inf.mpg.de/~altmann/download/PIMP.R. Contact: altmann@mpi-inf.mpg.de, laura.tolosi@mpi-inf.mpg.de. Supplementary data are available at Bioinformatics online.
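
    A minimal sketch of the PIMP idea with scikit-learn standing in for the linked R code: permute the outcome vector repeatedly, collect each feature's null importance distribution, and report a permutation P-value:

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier

    def pimp_pvalues(X, y, n_perm=100, seed=0):
        """P-value of each feature's observed RF importance against the
        distribution of importances obtained under permuted outcomes."""
        rng = np.random.default_rng(seed)
        rf = RandomForestClassifier(n_estimators=200, random_state=0)
        observed = rf.fit(X, y).feature_importances_
        null = np.empty((n_perm, X.shape[1]))
        for b in range(n_perm):
            null[b] = rf.fit(X, rng.permutation(y)).feature_importances_
        # One-sided P-value: fraction of null importances >= observed
        return (1 + (null >= observed).sum(axis=0)) / (n_perm + 1)

    X, y = make_classification(n_samples=300, n_features=10, n_informative=3,
                               random_state=0)
    print(np.round(pimp_pvalues(X, y), 3))  # informative features -> small P
    ```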

  2. Characterizing and Modeling the Cost of Rework in a Library of Reusable Software Components

    NASA Technical Reports Server (NTRS)

    Basili, Victor R.; Condon, Steven E.; ElEmam, Khaled; Hendrick, Robert B.; Melo, Walcelio

    1997-01-01

    In this paper we characterize and model the cost of rework in a Component Factory (CF) organization. A CF is responsible for developing and packaging reusable software components. Data was collected on corrective maintenance activities for the Generalized Support Software reuse asset library located at the Flight Dynamics Division of NASA's GSFC. We then constructed a predictive model of the cost of rework using the C4.5 system for generating a logical classification model. The predictor variables for the model are measures of internal software product attributes. The model demonstrates good prediction accuracy, and can be used by managers to allocate resources for corrective maintenance activities. Furthermore, we used the model to generate proscriptive coding guidelines to improve programming, practices so that the cost of rework can be reduced in the future. The general approach we have used is applicable to other environments.

  3. Predicting mining activity with parallel genetic algorithms

    USGS Publications Warehouse

    Talaie, S.; Leigh, R.; Louis, S.J.; Raines, G.L.; Beyer, H.G.; O'Reilly, U.M.; Banzhaf, Arnold D.; Blum, W.; Bonabeau, C.; Cantu-Paz, E.W.

    2005-01-01

    We explore several different techniques in our quest to improve the overall performance of a genetic-algorithm-calibrated probabilistic cellular automaton. We use the Kappa statistic to measure the correlation between ground truth data and data predicted by the model. Within the genetic algorithm, we introduce a new evaluation function sensitive to spatial correctness, and we explore the idea of evolving different rule parameters for different subregions of the land. We reduce the time required to run a simulation from 6 hours to 10 minutes by parallelizing the code and employing a 10-node cluster. Our empirical results suggest that using the spatially sensitive evaluation function does indeed improve the performance of the model, and our preliminary results also show that evolving different rule parameters for different regions tends to improve overall model performance. Copyright 2005 ACM.
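
    The Kappa statistic used to score the model is straightforward to compute; a sketch for the binary case:

    ```python
    import numpy as np

    def cohens_kappa(truth, pred):
        """Cohen's kappa: observed agreement corrected for chance agreement."""
        truth, pred = np.asarray(truth), np.asarray(pred)
        p_observed = (truth == pred).mean()
        labels = np.unique(np.concatenate([truth, pred]))
        p_chance = sum((truth == c).mean() * (pred == c).mean() for c in labels)
        return (p_observed - p_chance) / (1.0 - p_chance)

    # Example: 90% raw agreement on unbalanced classes gives kappa well below 0.9
    truth = np.array([1] * 80 + [0] * 20)
    pred = np.array([1] * 75 + [0] * 5 + [0] * 15 + [1] * 5)
    print(cohens_kappa(truth, pred))
    ```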

  4. Elimination of Spurious Fractional Charges in Dissociating Molecules by Correcting the Shape of Approximate Kohn-Sham Potentials.

    PubMed

    Komsa, Darya N; Staroverov, Viktor N

    2016-11-08

    Standard density-functional approximations often incorrectly predict that heteronuclear diatomic molecules dissociate into fractionally charged atoms. We demonstrate that these spurious charges can be eliminated by adapting the shape-correction method for Kohn-Sham potentials that was originally introduced to improve Rydberg excitation energies [Phys. Rev. Lett. 2012, 108, 253005]. Specifically, we show that if a suitably determined fraction of electron charge is added to or removed from a frontier Kohn-Sham orbital level, the approximate Kohn-Sham potential of a stretched molecule self-corrects by developing a semblance of step structure; if this potential is used to obtain the electron density of the neutral molecule, charge delocalization is blocked and spurious fractional charges disappear beyond a certain internuclear distance.

  5. Prediction of the production of nitrogen oxide (NOx) in turbojet engines

    NASA Astrophysics Data System (ADS)

    Tsague, Louis; Tsogo, Joseph; Tatietse, Thomas Tamo

    Gaseous nitrogen oxides (NO+NO2=NOx) are known atmospheric trace constituents. These gases remain a big concern despite advances in low-NOx emission technology because they play a critical role in regulating the oxidizing capacity of the atmosphere, according to Crutzen [1995. My Life with O3, NOx and Other YZOxs. Nobel Lecture, Chemistry, December 8, 1995]. Aircraft emissions of nitrogen oxides (NOx) are regulated by the International Civil Aviation Organization (ICAO). Predicting NOx emissions in turbojet engines by combining combustion operational data produced results showing good correlation between the analytical and empirical values: there is close similarity between the calculated emission index and experimental data. The correlation shows improved accuracy when the 2124 experimental data points from 11 gas turbine engines are evaluated, compared with a previous semi-empirical correlation approach proposed by Pearce et al. [1993. The prediction of thermal NOx in gas turbine exhausts. Eleventh International Symposium on Air Breathing Engines, Tokyo, 1993, pp. 6-9]. The new method we propose predicts the production of NOx with far greater accuracy than previous methods. Since a turbojet engine works in an atmosphere where temperature, pressure, and humidity change frequently, a correction factor is developed from standard atmospheric laws and correlations taken from the scientific literature [Swartwelder, M., 2000. Aerospace engineering 410 Term Project performance analysis, November 17, 2000, pp. 2-5; Reed, J.A. Java Gas Turbine Simulator Documentation, pp. 4-5]. The new correction factor is validated against experimental observations from 19 turbojet engines cruising at altitudes of 9 and 13 km given in the ICAO repertory [Middleton, D., 1992. Appendix K (FAA/SETA). Section 1: Boeing Method Two Indices, 1992, pp. 2-3]. This correction factor will enable the prediction of cruise NOx emissions of turbojet engines at cruising speeds. The ICAO database [Goehlich, R.A., 2000. Investigation into the applicability of pollutant emission models for computer aided preliminary aircraft design, Book number 175654, 4.2.2000, pp. 57-79] can now be completed using the approach we propose, covering whole-mission flight NOx emissions.

  6. Influence of CT-based depth correction of renal scintigraphy in evaluation of living kidney donors on side selection and postoperative renal function: is it necessary to know the relative renal function?

    PubMed

    Weinberger, Sarah; Klarholz-Pevere, Carola; Liefeldt, Lutz; Baeder, Michael; Steckhan, Nico; Friedersdorff, Frank

    2018-03-22

    To analyse the influence of CT-based depth correction in the assessment of split renal function in potential living kidney donors. In 116 consecutive living kidney donors, preoperative split renal function was assessed using CT-based depth correction. The influence on donor side selection and on the donors' postoperative renal function was analyzed. Linear regression analysis was performed to identify predictors of postoperative renal function. A left-versus-right kidney depth variation of more than 1 cm was found in 40/114 donors (35%). Eleven patients (10%) had a difference of more than 5% in relative renal function after depth correction. Kidney depth variation and changes in relative renal function after depth correction would have influenced side selection in 30 of 114 living kidney donors. CT depth correction did not improve the predictability of postoperative renal function of the living kidney donor. In general, it was not possible to predict postoperative renal function from preoperative total and relative renal function. In multivariate linear regression analysis, age and BMI were identified as the most important predictors of the donors' postoperative renal function. Our results clearly indicate that, concerning the postoperative renal function of living kidney donors, the relative renal function of the donated kidney seems to be less important than other factors. A multimodal assessment considering all available results, including kidney size, location of the kidney and split renal function, remains necessary.

  7. Bidirectional contrast agent leakage correction of dynamic susceptibility contrast (DSC)-MRI improves cerebral blood volume estimation and survival prediction in recurrent glioblastoma treated with bevacizumab.

    PubMed

    Leu, Kevin; Boxerman, Jerrold L; Lai, Albert; Nghiemphu, Phioanh L; Pope, Whitney B; Cloughesy, Timothy F; Ellingson, Benjamin M

    2016-11-01

    To evaluate a leakage correction algorithm for T1 and T2* artifacts arising from contrast agent extravasation in dynamic susceptibility contrast magnetic resonance imaging (DSC-MRI) that accounts for bidirectional contrast agent flux, and to compare relative cerebral blood volume (rCBV) estimates and overall survival (OS) stratification from this model to those made with the unidirectional and uncorrected models in patients with recurrent glioblastoma (GBM). We determined median rCBV within contrast-enhancing tumor before and after bevacizumab treatment in patients (75 scans on 1.5T, 19 scans on 3.0T) with recurrent GBM without leakage correction and with application of the unidirectional and bidirectional leakage correction algorithms to determine whether rCBV stratifies OS. Decreased post-bevacizumab rCBV from baseline using the bidirectional leakage correction algorithm significantly correlated with longer OS (Cox, P = 0.01), whereas rCBV change using the unidirectional model (P = 0.43) or the uncorrected rCBV values (P = 0.28) did not. Estimates of rCBV computed with the two leakage correction algorithms differed on average by 14.9%. Accounting for T1 and T2* leakage contamination in DSC-MRI using a two-compartment, bidirectional rather than unidirectional exchange model might improve post-bevacizumab survival stratification in patients with recurrent GBM. J. Magn. Reson. Imaging 2016;44:1229-1237. © 2016 International Society for Magnetic Resonance in Medicine.
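
    The bidirectional model itself is specified in the cited paper; as a simpler point of reference, a unidirectional (Boxerman-Weisskoff-style) correction can be written as a linear least-squares fit of each voxel's ΔR2* curve against a whole-brain average curve and its running integral, with the leakage term removed before integrating for rCBV. A minimal sketch with illustrative names:

    ```python
    import numpy as np

    def unidirectional_leakage_correction(dR2s_voxel, dR2s_avg, dt):
        """Boxerman-Weisskoff-style leakage correction for one voxel.

        Fits dR2s_voxel(t) ~ K1*dR2s_avg(t) - K2*integral(dR2s_avg), then
        removes the leakage term before integrating for rCBV.
        """
        cum = np.cumsum(dR2s_avg) * dt         # running integral of the average curve
        A = np.column_stack([dR2s_avg, -cum])  # regressors for K1 and K2
        (k1, k2), *_ = np.linalg.lstsq(A, dR2s_voxel, rcond=None)
        corrected = dR2s_voxel + k2 * cum      # leakage term removed
        return k1, k2, np.trapz(corrected, dx=dt)
    ```

    The bidirectional model of the abstract additionally allows flux from the interstitium back into the plasma, which introduces an exponential efflux term and turns the voxel-wise fit into a nonlinear one.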

  8. Higgs boson pair production at NNLO with top quark mass effects

    NASA Astrophysics Data System (ADS)

    Grazzini, M.; Heinrich, G.; Jones, S.; Kallweit, S.; Kerner, M.; Lindert, J. M.; Mazzitelli, J.

    2018-05-01

    We consider QCD radiative corrections to Higgs boson pair production through gluon fusion in proton collisions. We combine the exact next-to-leading order (NLO) contribution, which features two-loop virtual amplitudes with the full dependence on the top quark mass Mt, with the next-to-next-to-leading order (NNLO) corrections computed in the large-Mt approximation. The latter are improved with different reweighting techniques in order to account for finite-Mt effects beyond NLO. Our reference NNLO result is obtained by combining one-loop double-real corrections with full Mt dependence with suitably reweighted real-virtual and double-virtual contributions evaluated in the large-Mt approximation. We present predictions for inclusive cross sections in pp collisions at √s = 13, 14, 27 and 100 TeV and we discuss their uncertainties due to missing Mt effects. Our approximated NNLO corrections increase the NLO result by an amount ranging from +12% at √s = 13 TeV to +7% at √s = 100 TeV, and the residual uncertainty of the inclusive cross section from missing Mt effects is estimated to be at the few percent level. Our calculation is fully differential in the Higgs boson pair and the associated jet activity: we also present predictions for various differential distributions at √s = 14 and 100 TeV, and discuss the size of the missing Mt effects, which can be larger, especially in the tails of certain observables. Our results represent the most advanced perturbative prediction available to date for this process.

  9. Benchmarking aerodynamic prediction of unsteady rotor aerodynamics of active flaps on wind turbine blades using ranging fidelity tools

    NASA Astrophysics Data System (ADS)

    Barlas, Thanasis; Jost, Eva; Pirrung, Georg; Tsiantas, Theofanis; Riziotis, Vasilis; Navalkar, Sachin T.; Lutz, Thorsten; van Wingerden, Jan-Willem

    2016-09-01

    Simulations of a stiff rotor configuration of the DTU 10MW Reference Wind Turbine are performed in order to assess the impact of prescribed flap motion on the aerodynamic loads at the blade sectional and rotor integral levels. Results of the engineering models used by DTU (HAWC2), TUDelft (Bladed) and NTUA (hGAST) are compared to the CFD predictions of USTUTT-IAG (FLOWer). Results show fairly good agreement in terms of axial loading, while the alignment of tangential and drag-related forces across the numerical codes needs to be improved, together with the unsteady corrections associated with rotor wake dynamics. The use of a new wake model in HAWC2 shows considerable accuracy improvements.

  10. Application of JAERI quantum molecular dynamics model for collisions of heavy nuclei

    NASA Astrophysics Data System (ADS)

    Ogawa, Tatsuhiko; Hashimoto, Shintaro; Sato, Tatsuhiko; Niita, Koji

    2016-06-01

    The quantum molecular dynamics (QMD) model incorporated into the general-purpose radiation transport code PHITS was revised for accurate prediction of fragment yields in peripheral collisions. For more accurate simulation of peripheral collisions, the stability of nuclei in their ground state was improved and the algorithm to reject invalid events was modified. In-medium correction of nucleon-nucleon cross sections was also considered. To clarify the effect of these improvements on fragmentation of heavy nuclei, the new QMD model coupled with a statistical decay model was used to calculate fragment production cross sections for Ag and Au targets, which were compared with earlier measurements. It is shown that the revised version predicts cross sections more accurately.

  11. Classical least squares multivariate spectral analysis

    DOEpatents

    Haaland, David M.

    2002-01-01

    An improved classical least squares (CLS) multivariate spectral analysis method that adds spectral shapes describing non-calibrated components and system effects (other than baseline corrections) present in the analyzed mixture to the prediction phase of the method. These improvements decrease or eliminate many of the restrictions of CLS-type methods and greatly extend their capabilities, accuracy, and precision. One new application of PACLS is the ability to accurately predict unknown sample concentrations when new unmodeled spectral components are present in the unknown samples. Other applications of PACLS include the incorporation of spectrometer drift into the quantitative multivariate model and the maintenance of a calibration on a drifting spectrometer. Finally, the ability of PACLS to transfer a multivariate model between spectrometers is demonstrated.
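
    A minimal sketch of the augmentation idea (names illustrative): at prediction time, the classical least squares basis of pure-component spectra is extended with extra spectral shapes, so that drift or un-calibrated interferents are fitted explicitly instead of biasing the analyte coefficients.

    ```python
    import numpy as np

    def pacls_predict(spectrum, pure_spectra, extra_shapes):
        """CLS prediction with an augmented set of spectral shapes.

        pure_spectra : (n_components, n_channels) calibrated pure-component spectra
        extra_shapes : (n_extra, n_channels) shapes added at prediction time,
                       e.g. drift spectra or spectra of un-modeled components
        Returns fitted coefficients; the first n_components entries are the
        concentration estimates, the rest absorb the un-modeled variation.
        """
        K = np.vstack([pure_spectra, extra_shapes])  # augmented shape matrix
        coeffs, *_ = np.linalg.lstsq(K.T, spectrum, rcond=None)
        return coeffs
    ```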

  12. Annoyance due to simulated blade-slap noise

    NASA Technical Reports Server (NTRS)

    Powell, C. A.

    1978-01-01

    The effects of several characteristics of blade-slap noise on annoyance response were studied. These characteristics or parameters were the sound pressure level of the continuous noise used to simulate helicopter broadband noise, the ratio of impulse peak to broadband noise (crest factor), the number of pressure excursions comprising an impulse event, the rise and fall time of the individual impulses, and the repetition frequency of the impulses. Analyses were conducted to determine the correlation between subjective response and various physical measures for the range of parameters studied. A small but significant improvement in the predictive ability of PNL was provided by an A-weighted crest factor correction. No significant improvement in predictive ability was provided by a rate correction.

  13. Aeroservoelasticity

    NASA Technical Reports Server (NTRS)

    Noll, Thomas E.

    1990-01-01

    The paper describes recent accomplishments and current research projects along four main thrusts in aeroservoelasticity at NASA Langley. One activity focuses on enhancing the modeling and analysis procedures to accurately predict aeroservoelastic interactions. Improvements to the minimum-state method of approximating unsteady aerodynamics are shown to provide precise low-order models for design and simulation tasks. Recent extensions in aerodynamic correction-factor methodology are also described. With respect to analysis procedures, the paper reviews novel enhancements to matched filter theory and random process theory for predicting the critical gust profile and the associated time-correlated gust loads for structural design considerations. Two research projects leading towards improved design capability are also summarized: (1) an integrated structure/control design capability and (2) procedures for obtaining low-order robust digital control laws for aeroelastic applications.

  14. The impacts of tracer selection and corrections for organic matter and particle size on the results of quantitative sediment fingerprinting. A case study from the Nene basin, UK.

    NASA Astrophysics Data System (ADS)

    Pulley, Simon; Foster, Ian; Antunes, Paula

    2014-05-01

    In recent years, sediment fingerprinting methodologies have gained widespread adoption when tracing sediment provenance in geomorphological research. A wide variety of tracers have been employed in the published literature, with corrections for particle size and organic matter applied when the researcher judged them necessary. This paper aims to explore the errors associated with tracer use by a comparison of fingerprinting results obtained using fallout and lithogenic radionuclides, geochemical, and mineral magnetic tracers in a range of environments located in the Nene basin, UK. Specifically, fingerprinting was undertaken on lake, reservoir and floodplain sediment cores, on actively transported suspended sediment and on overbank and channel bed sediment deposits. Tracer groups were investigated both alone and in combination to determine the differences between their sediment provenance predictions and potential causes of these differences. Additionally, simple organic and particle size corrections were applied to determine if they improve the agreement between the tracer group predictions. Key results showed that when fingerprinting contributions from channel banks to actively transported or recently deposited sediments the tracer group predictions varied by 24% on average. These differences could not be clearly attributed to changes in the sediment during erosion or transport. Instead, the most likely cause of differences was the pre-existing spatial variability in tracer concentrations within sediment sources, combined with highly localised erosion. This resulted in the collected sediment source samples not being representative of the actual sediment sources. Average differences in provenance predictions between the different tracer groups in lake, reservoir and floodplain sediment cores were lowest in the reservoir core at 19% and highest in some floodplain cores, with differences in predictions in excess of 50%. In these latter samples organic enrichment of the sediment, selective transport of fine particles and post-depositional chemical changes to the sediment were determined to be the likely cause of the differences. It was determined that organic and particle size corrections made the differences between tracer groups larger in most cases, although differences between tracer group predictions were reduced in two of the four floodplain cores.
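
    For orientation, a fingerprinting mixing model of the kind compared here can be posed as constrained least-squares unmixing; the sketch below (illustrative names, not the authors' code) uses non-negative least squares with a heavily weighted sum-to-one row, and a particle size or organic correction would simply rescale the tracer values before the fit.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    def source_apportionment(source_means, sediment_sample, w=1e3):
        """Non-negative least-squares unmixing of sediment tracer concentrations.

        source_means    : (n_tracers, n_sources) mean tracer value per source
        sediment_sample : (n_tracers,) tracer values of the mixed sediment
        The sum-to-one constraint is enforced softly by a weighted extra row.
        """
        A = np.vstack([source_means, w * np.ones(source_means.shape[1])])
        b = np.append(sediment_sample, w)
        props, _ = nnls(A, b)
        return props / props.sum()  # tidy any residual constraint slack

    # Two tracers, two sources (e.g. channel bank vs. topsoil), one mixture.
    S = np.array([[10.0, 2.0],
                  [1.0, 5.0]])
    mix = np.array([6.0, 3.0])  # consistent with a 50/50 mixture
    print(source_apportionment(S, mix))
    ```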

  15. Using a knowledge-based planning solution to select patients for proton therapy.

    PubMed

    Delaney, Alexander R; Dahele, Max; Tol, Jim P; Kuijper, Ingrid T; Slotman, Ben J; Verbakel, Wilko F A R

    2017-08-01

    Patient selection for proton therapy by comparing proton/photon treatment plans is time-consuming and prone to bias. RapidPlan™, a knowledge-based-planning solution, uses plan libraries to model and predict organ-at-risk (OAR) dose-volume histograms (DVHs). We investigated whether RapidPlan, utilizing an algorithm based only on photon beam characteristics, could generate proton DVH predictions and whether these could correctly identify patients for proton therapy. Model_PROT and Model_PHOT comprised 30 head-and-neck cancer proton and photon plans, respectively. Proton and photon knowledge-based plans (KBPs) were made for ten evaluation patients. DVH-prediction accuracy was analyzed by comparing predicted-vs-achieved mean OAR doses. KBPs and manual plans were compared using salivary gland and swallowing muscle mean doses. For illustration, patients were selected for protons if predicted Model_PHOT mean dose minus predicted Model_PROT mean dose (ΔPrediction) for combined OARs was ≥6 Gy, and benchmarked using achieved KBP doses. Achieved and predicted Model_PROT/Model_PHOT mean dose R² was 0.95/0.98. Generally, achieved mean dose for Model_PHOT/Model_PROT KBPs was respectively lower/higher than predicted. Comparing Model_PROT/Model_PHOT KBPs with manual plans, salivary and swallowing mean doses increased/decreased by <2 Gy, on average. ΔPrediction ≥6 Gy correctly selected 4 of 5 patients for protons. Knowledge-based DVH predictions can provide efficient, patient-specific selection for protons. A proton-specific RapidPlan solution could improve results. Copyright © 2017 Elsevier B.V. All rights reserved.
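
    The illustrative selection rule above is simple enough to state as code (the dose values below are made up):

    ```python
    def select_for_protons(pred_photon_mean_gy, pred_proton_mean_gy, threshold_gy=6.0):
        """Flag a patient for protons when the predicted combined-OAR mean-dose
        gain of protons over photons meets the study's example threshold."""
        delta_prediction = pred_photon_mean_gy - pred_proton_mean_gy
        return delta_prediction >= threshold_gy, delta_prediction

    flag, delta = select_for_protons(34.2, 26.9)  # illustrative mean doses in Gy
    print(flag, delta)  # (True, 7.3) -> candidate for proton therapy
    ```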

  16. Predicting water quality by relating secchi-disk transparency and chlorophyll a measurements to satellite imagery for Michigan Inland Lakes, August 2002

    USGS Publications Warehouse

    Fuller, L.M.; Aichele, Stephen S.; Minnerick, R.J.

    2004-01-01

    Inland lakes are an important economic and environmental resource for Michigan. The U.S. Geological Survey and the Michigan Department of Environmental Quality have been cooperatively monitoring the quality of selected lakes in Michigan through the Lake Water Quality Assessment program. Through this program, approximately 730 of Michigan's 11,000 inland lakes will be monitored once during this 15-year study. Targeted lakes will be sampled during spring turnover and again in late summer to characterize water quality. Because more extensive and more frequent sampling is not economically feasible in the Lake Water Quality Assessment program, the U.S. Geological Survey and Michigan Department of Environmental Quality investigated the use of satellite imagery as a means of estimating water quality in unsampled lakes. Satellite imagery has been successfully used in Minnesota, Wisconsin, and elsewhere to compute the trophic state of inland lakes from predicted secchi-disk measurements. Previous attempts of this kind in Michigan resulted in a poorer fit between observed and predicted data than was found for Minnesota or Wisconsin. This study tested whether estimates could be improved by using atmospherically corrected satellite imagery, whether a more appropriate regression model could be obtained for Michigan, and whether chlorophyll a concentrations could be reliably predicted from satellite imagery in order to compute the trophic state of inland lakes. Although the atmospheric correction did not significantly improve estimates of lake-water quality, a new regression equation was identified that consistently yielded better results than an equation obtained from the literature. A stepwise regression was used to determine an equation that accurately predicts chlorophyll a concentrations in northern Lower Michigan.

  17. The implementation of rare events logistic regression to predict the distribution of mesophotic hard corals across the main Hawaiian Islands.

    PubMed

    Veazey, Lindsay M; Franklin, Erik C; Kelley, Christopher; Rooney, John; Frazer, L Neil; Toonen, Robert J

    2016-01-01

    Predictive habitat suitability models are powerful tools for cost-effective, statistically robust assessment of the environmental drivers of species distributions. The aim of this study was to develop predictive habitat suitability models for two genera of scleractinian corals (Leptoseris and Montipora) found within the mesophotic zone across the main Hawaiian Islands. The mesophotic zone (30-180 m) is challenging to reach, and therefore historically understudied, because it falls between the maximum limit of SCUBA divers and the minimum typical working depth of submersible vehicles. Here, we implement a logistic regression with rare events corrections to account for the scarcity of presence observations within the dataset. These corrections reduced the coefficient error and improved overall prediction success (73.6% and 74.3%) for both original regression models. The final models included depth, rugosity, slope, mean current velocity, and wave height as the best environmental covariates for predicting the occurrence of the two genera in the mesophotic zone. Using an objectively selected theta ("presence") threshold, the predicted presence probability values (average of 0.051 for Leptoseris and 0.040 for Montipora) were translated to spatially explicit habitat suitability maps of the main Hawaiian Islands at 25 m grid cell resolution. Our maps are the first of their kind to use extant presence and absence data to examine the habitat preferences of these two dominant mesophotic coral genera across Hawai'i.
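
    The rare events corrections referred to here are typically those of King and Zeng; one ingredient is a prior correction of the fitted logistic intercept when the sample presence fraction differs from the assumed true prevalence. A sketch under that assumption (values illustrative):

    ```python
    import numpy as np

    def prior_corrected_intercept(beta0, sample_event_frac, true_event_frac):
        """King-Zeng prior correction of a logistic regression intercept.

        beta0             : intercept fitted on the (presence-enriched) sample
        sample_event_frac : fraction of presences in the sample (y-bar)
        true_event_frac   : assumed true prevalence tau in the population
        """
        tau, ybar = true_event_frac, sample_event_frac
        return beta0 - np.log(((1 - tau) / tau) * (ybar / (1 - ybar)))

    # Example: presences are 20% of the sample but ~4% of all grid cells.
    print(prior_corrected_intercept(beta0=-1.2, sample_event_frac=0.20,
                                    true_event_frac=0.04))
    ```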

  18. Improved Phase Corrections for Transoceanic Tsunami Data in Spatial and Temporal Source Estimation: Application to the 2011 Tohoku Earthquake

    NASA Astrophysics Data System (ADS)

    Ho, Tung-Cheng; Satake, Kenji; Watada, Shingo

    2017-12-01

    Systematic travel time delays of up to 15 min relative to linear long waves have been reported for transoceanic tsunamis. A phase correction method, which converts the linear long waves into dispersive waves, was previously proposed to account for seawater compressibility, the elasticity of the Earth, and the gravitational potential change associated with tsunami motion. In the present study, we improved this method by incorporating the effects of ocean density stratification, the actual tsunami raypath, and the actual bathymetry. The previously considered effects accounted for approximately 74% of the travel time delay correction, while ocean density stratification, actual raypath, and actual bathymetry contributed approximately 13%, 4%, and 9% on average, respectively. The improved phase correction method accounted for almost all of the travel time delay at far-field stations. We performed single and multiple time window inversions for the 2011 Tohoku tsunami using the far-field data (>3 h travel time) to investigate the initial sea surface displacement. The inversion result from only far-field data was similar to but smoother than that from near-field data and all stations, including a large sea surface rise increasing toward the trench followed by a migration northward along the trench. For the forward simulation, our results showed good agreement between the observed and computed waveforms at both near-field and far-field tsunami gauges, as well as with satellite altimeter data. The present study demonstrates that the improved method provides a more accurate estimate for the waveform inversion and forward prediction of far-field data.

  19. A subjective evaluation of synthesized STOL airplane noises

    NASA Technical Reports Server (NTRS)

    Powell, C. A., Jr.

    1973-01-01

    A magnitude-estimation experiment was conducted to evaluate the subjective annoyance of the noise generated by possible future turbofan STOL aircraft as compared to that of several current CTOL aircraft. In addition, some of the units used to scale the magnitude of aircraft noise were evaluated with respect to their applicability to STOL noise. Twenty test subjects rated their annoyance to a total of 119 noises over a range of 75 PNdB to 105 PNdB. Their subjective ratings were compared with acoustical analyses of the noises in terms of 28 rating scale units. The synthesized STOL noises of this experiment were found to be slightly more annoying than the conventional CTOL noises at equal levels of PNL and EPNL. Over the range of levels investigated the scaling units, with a few exceptions, were capable of predicting the points of equal annoyance for all of the noises to within plus or minus 3 dB. The inclusion of duration corrections, in general, improved the predictive capabilities of the various scaling units; however, tone corrections reduced their predictive capabilities.

  20. Lifting scheme-based method for joint coding of 3D stereo digital cinema with luminance correction and optimized prediction

    NASA Astrophysics Data System (ADS)

    Darazi, R.; Gouze, A.; Macq, B.

    2009-01-01

    Reproducing natural, real scenes as we see them in the everyday world is becoming more and more popular, and stereoscopic and multi-view techniques are used to this end. However, because more information must be displayed, supporting technologies such as digital compression are required to ensure storage and transmission of the sequences. In this paper, a new scheme for stereo image coding is proposed. The original left and right images are jointly coded. The main idea is to optimally exploit the correlation between the two images. This is done by designing an efficient transform that reduces the redundancy in the stereo image pair, an approach inspired by the Lifting Scheme (LS). The novelty in our work is that the prediction step is replaced by a hybrid step consisting of disparity compensation followed by luminance correction and an optimized prediction step. The proposed scheme can be used for lossless and lossy coding. Experimental results show improvement in terms of performance and complexity compared to recently proposed methods.
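
    The abstract does not spell out the luminance correction inside the hybrid prediction step; a common and simple choice, sketched here as an assumption rather than the authors' exact operator, is a least-squares gain/offset fit between the disparity-compensated image and its target:

    ```python
    import numpy as np

    def luminance_correct(pred, target):
        """Fit target ~ g * pred + o in the least-squares sense and apply it,
        so the disparity-compensated prediction matches the target luminance."""
        A = np.column_stack([pred.ravel(), np.ones(pred.size)])
        (g, o), *_ = np.linalg.lstsq(A, target.ravel(), rcond=None)
        return g * pred + o

    # In a lifting step, the residual to encode would then be
    # target - luminance_correct(disparity_compensated_left, target).
    ```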

  1. Analytically exploiting noise correlations inside the feedback loop to improve locked-oscillator performance.

    PubMed

    Sastrawan, J; Jones, C; Akhalwaya, I; Uys, H; Biercuk, M J

    2016-08-01

    We introduce concepts from optimal estimation to the stabilization of precision frequency standards limited by noisy local oscillators. We develop a theoretical framework casting various measures for frequency standard variance in terms of frequency-domain transfer functions, capturing the effects of feedback stabilization via a time series of Ramsey measurements. Using this framework, we introduce an optimized hybrid predictive feedforward measurement protocol that employs results from multiple past measurements and transfer-function-based calculations of measurement covariance to improve the accuracy of corrections within the feedback loop. In the presence of common non-Markovian noise processes these measurements will be correlated in a calculable manner, providing a means to capture the stochastic evolution of the local oscillator frequency during the measurement cycle. We present analytic calculations and numerical simulations of oscillator performance under competing feedback schemes and demonstrate benefits in both correction accuracy and long-term oscillator stability using hybrid feedforward. Simulations verify that in the presence of uncompensated dead time and noise with significant spectral weight near the inverse cycle time predictive feedforward outperforms traditional feedback, providing a path towards developing a class of stabilization software routines for frequency standards limited by noisy local oscillators.
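
    As a hedged illustration of the covariance-weighted idea (not the paper's exact protocol): given several past Ramsey measurements of the frequency offset and a covariance matrix computed from the noise spectrum via transfer functions, the minimum-variance unbiased combination weights them as follows.

    ```python
    import numpy as np

    def feedforward_correction(past_measurements, covariance):
        """Minimum-variance (BLUE) combination of correlated past measurements.

        The covariance matrix would come from transfer-function calculations
        of the measurement correlations, as described in the abstract.
        """
        y = np.asarray(past_measurements, dtype=float)
        c_inv = np.linalg.inv(np.asarray(covariance, dtype=float))
        ones = np.ones_like(y)
        w = c_inv @ ones / (ones @ c_inv @ ones)  # weights sum to 1
        return w @ y

    C = np.array([[1.0, 0.6],
                  [0.6, 1.0]])                    # correlated successive measurements
    print(feedforward_correction([2.1, 1.7], C))  # 1.9 (equal weights by symmetry)
    ```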

  2. Reverberant acoustic energy in auditoria that comprise systems of coupled rooms

    NASA Astrophysics Data System (ADS)

    Summers, Jason Erik

    A frequency-dependent model for levels and decay rates of reverberant energy in systems of coupled rooms is developed and compared with measurements conducted in a 1:10 scale model and in Bass Hall, Fort Worth, TX. Schroeder frequencies of subrooms, f_Sch, characteristic size of coupling apertures, a, relative to wavelength λ, and characteristic size of room surfaces, l, relative to λ define the frequency regions. At high frequencies [HF (f ≫ f_Sch, a ≫ λ, l ≫ λ)], this work improves upon prior statistical-acoustics (SA) coupled-ODE models by incorporating geometrical-acoustics (GA) corrections for the model of decay within subrooms and the model of energy transfer between subrooms. Previous researchers developed prediction algorithms based on computational GA. Comparisons of predictions derived from beam-axis tracing with scale-model measurements indicate that systematic errors for coupled rooms result from earlier tail-correction procedures that assume constant quadratic growth of reflection density. A new algorithm is developed that uses ray tracing rather than tail correction in the late part and is shown to correct this error. At midfrequencies [MF (f ≫ f_Sch, a ~ λ)], HF models are modified to account for wave effects at coupling apertures by including analytically or heuristically derived power transmission coefficients τ. This work improves upon prior SA models of this type by developing more accurate estimates of random-incidence τ. While the accuracy of the MF models is difficult to verify, scale-model measurements evidence the expected behavior. The Biot-Tolstoy-Medwin-Svensson (BTMS) time-domain edge-diffraction model is newly adapted to study transmission through apertures. Multiple-order BTMS scattering is theoretically and experimentally shown to be inaccurate due to the neglect of slope diffraction. At low frequencies (f ~ f_Sch), scale-model measurements have been qualitatively explained by application of previously developed perturbation models. Measurements newly confirm that coupling strength between three-dimensional rooms is related to the unperturbed pressure distribution on the coupling surface. In Bass Hall, measurements are conducted to determine the acoustical effects of the coupled stage house on stage and in the audience area. The high-frequency predictions of statistical- and geometrical-acoustics models agree well with measured results. Predictions of the transmission coefficients of the coupling apertures agree, at least qualitatively, with the observed behavior.

  3. Improved water-level forecasting for the Northwest European Shelf and North Sea through direct modelling of tide, surge and non-linear interaction

    NASA Astrophysics Data System (ADS)

    Zijl, Firmijn; Verlaan, Martin; Gerritsen, Herman

    2013-07-01

    In real-time operational coastal forecasting systems for the northwest European shelf, the representation accuracy of tide-surge models commonly suffers from insufficiently accurate tidal representation, especially in shallow near-shore areas with complex bathymetry and geometry. Therefore, in conventional operational systems, the surge component from numerical model simulations is used, while the harmonically predicted tide, accurately known from harmonic analysis of tide gauge measurements, is added to forecast the full water-level signal at tide gauge locations. Although there are errors associated with this so-called astronomical correction (e.g. because of the assumption of linearity of tide and surge), for current operational models, astronomical correction has nevertheless been shown to increase the representation accuracy of the full water-level signal. The simulated modulation of the surge through non-linear tide-surge interaction is affected by the poor representation of the tide signal in the tide-surge model, which astronomical correction does not improve. Furthermore, astronomical correction can only be applied to locations where the astronomic tide is known through a harmonic analysis of in situ measurements at tide gauge stations. This provides a strong motivation to improve both tide and surge representation of numerical models used in forecasting. In the present paper, we propose a new generation tide-surge model for the northwest European Shelf (DCSMv6). This is the first application on this scale in which the tidal representation is such that astronomical correction no longer improves the accuracy of the total water-level representation and where, consequently, the straightforward direct model forecasting of total water levels is better. The methodology applied to improve both tide and surge representation of the model is discussed, with emphasis on the use of satellite altimeter data and data assimilation techniques for reducing parameter uncertainty. Historic DCSMv6 model simulations are compared against shelf wide observations for a full calendar year. For a selection of stations, these results are compared to those with astronomical correction, which confirms that the tide representation in coastal regions has sufficient accuracy, and that forecasting total water levels directly yields superior results.
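
    For readers unfamiliar with astronomical correction, the conventional forecast described above amounts to adding a gauge-based harmonic tide prediction to the modelled surge. A minimal sketch with illustrative constituent values:

    ```python
    import numpy as np

    def harmonic_tide(t_hours, constituents):
        """Harmonically predicted tide as a sum of A*cos(speed*t - phase) terms.

        constituents: (amplitude_m, speed_deg_per_hour, phase_deg) tuples,
        e.g. from a harmonic analysis of a tide gauge record.
        """
        t = np.asarray(t_hours, dtype=float)
        h = np.zeros_like(t)
        for amp, speed, phase in constituents:
            h += amp * np.cos(np.radians(speed * t - phase))
        return h

    t = np.arange(0.0, 48.0, 0.5)                         # hours
    m2 = (1.50, 28.9841042, 120.0)                        # illustrative M2 constituent
    s2 = (0.40, 30.0000000, 150.0)                        # illustrative S2 constituent
    surge_model = 0.3 * np.exp(-((t - 20.0) / 6.0) ** 2)  # stand-in surge signal
    forecast = surge_model + harmonic_tide(t, [m2, s2])   # "astronomically corrected"
    ```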

  4. Evaluating approaches to find exon chains based on long reads.

    PubMed

    Kuosmanen, Anna; Norri, Tuukka; Mäkinen, Veli

    2018-05-01

    Transcript prediction can be modeled as a graph problem where exons are modeled as nodes and reads spanning two or more exons are modeled as exon chains. Pacific Biosciences third-generation sequencing technology produces significantly longer reads than earlier second-generation sequencing technologies, which gives valuable information about longer exon chains in a graph. However, with the high error rates of third-generation sequencing, aligning long reads correctly around the splice sites is a challenging task. Incorrect alignments lead to spurious nodes and arcs in the graph, which in turn lead to incorrect transcript predictions. We survey several approaches to find the exon chains corresponding to long reads in a splicing graph, and experimentally study the performance of these methods using simulated data to allow for sensitivity/precision analysis. Our experiments show that short reads from second-generation sequencing can be used to significantly improve exon chain correctness either by error-correcting the long reads before splicing graph creation, or by using them to create a splicing graph on which the long-read alignments are then projected. We also study the memory and time consumption of various modules, and show that accurate exon chains lead to significantly increased transcript prediction accuracy. The simulated data and in-house scripts used for this article are available at http://www.cs.helsinki.fi/group/gsa/exon-chains/exon-chains-bib.tar.bz2.

  5. Predicting Soil Salinity with Vis–NIR Spectra after Removing the Effects of Soil Moisture Using External Parameter Orthogonalization

    PubMed Central

    Liu, Ya; Pan, Xianzhang; Wang, Changkun; Li, Yanli; Shi, Rongjie

    2015-01-01

    Robust models for predicting soil salinity that use visible and near-infrared (vis–NIR) reflectance spectroscopy are needed to better quantify soil salinity in agricultural fields. Currently available models are not sufficiently robust for variable soil moisture contents. Thus, we used external parameter orthogonalization (EPO), which effectively projects spectra onto the subspace orthogonal to unwanted variation, to remove the variations caused by an external factor, e.g., the influences of soil moisture on spectral reflectance. In this study, 570 spectra between 380 and 2400 nm were obtained from soils with various soil moisture contents and salt concentrations in the laboratory; 3 soil types × 10 salt concentrations × 19 soil moisture levels were used. To examine the effectiveness of EPO, we compared the partial least squares regression (PLSR) results established from spectra with and without EPO correction. The EPO method effectively removed the effects of moisture, and the accuracy and robustness of the soil salt contents (SSCs) prediction model, which was built using the EPO-corrected spectra under various soil moisture conditions, were significantly improved relative to the spectra without EPO correction. This study contributes to the removal of soil moisture effects from soil salinity estimations when using vis–NIR reflectance spectroscopy and can assist others in quantifying soil salinity in the future. PMID:26468645
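
    The EPO projection itself is compact: the dominant directions of the unwanted moisture-driven variation are estimated from difference spectra and projected out before building the PLSR model. A minimal sketch (names illustrative):

    ```python
    import numpy as np

    def epo_projection(D, n_components):
        """External parameter orthogonalization projection matrix.

        D : (n_diff, n_channels) difference spectra spanning the unwanted
            variation (e.g. the same soils measured moist minus measured dry).
        Returns P such that X @ P removes the leading n_components directions
        of that variation from any spectra X.
        """
        _, _, vt = np.linalg.svd(D, full_matrices=False)
        V = vt[:n_components].T  # dominant moisture directions
        return np.eye(D.shape[1]) - V @ V.T

    # Usage: correct calibration and prediction spectra, then fit PLSR as usual.
    # P = epo_projection(moist_minus_dry_spectra, n_components=2)
    # X_train_epo, X_test_epo = X_train @ P, X_test @ P
    ```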

  6. Analysis of the Mean Absolute Error (MAE) and the Root Mean Square Error (RMSE) in Assessing Rounding Model

    NASA Astrophysics Data System (ADS)

    Wang, Weijie; Lu, Yanmin

    2018-03-01

    Most existing Collaborative Filtering (CF) algorithms predict a rating as the preference of an active user toward a given item, which is almost always a decimal fraction, while the actual ratings in most data sets are integers. In this paper, we discuss and demonstrate why rounding can influence these two metrics differently, and we show that rounding is a necessary post-processing step for predicted ratings, eliminating model prediction bias and improving prediction accuracy. In addition, we propose two new rounding approaches based on the predicted rating probability distribution, which can be used to round the predicted rating to an optimal integer rating and obtain better prediction accuracy than the Basic Rounding approach. Extensive experiments on different data sets validate the correctness of our analysis and the effectiveness of our proposed rounding approaches.
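
    The differing influence of rounding on the two metrics is easy to reproduce on synthetic ratings (the error distribution below is illustrative only):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    true = rng.integers(1, 6, size=10000)  # integer ratings 1..5
    pred = np.clip(true + rng.normal(0.0, 0.7, size=true.size), 1, 5)

    def mae(y, yhat):
        return np.mean(np.abs(y - yhat))

    def rmse(y, yhat):
        return np.sqrt(np.mean((y - yhat) ** 2))

    rounded = np.rint(pred)  # Basic Rounding to the nearest integer rating
    print("MAE  raw vs rounded:", mae(true, pred), mae(true, rounded))
    print("RMSE raw vs rounded:", rmse(true, pred), rmse(true, rounded))
    ```

    Which of the two metrics benefits from rounding depends on the shape of the prediction-error distribution, which is exactly the asymmetry the paper analyzes.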

  7. Charm-Quark Production in Deep-Inelastic Neutrino Scattering at Next-to-Next-to-Leading Order in QCD.

    PubMed

    Berger, Edmond L; Gao, Jun; Li, Chong Sheng; Liu, Ze Long; Zhu, Hua Xing

    2016-05-27

    We present a fully differential next-to-next-to-leading order calculation of charm-quark production in charged-current deep-inelastic scattering, with full charm-quark mass dependence. The next-to-next-to-leading order corrections in perturbative quantum chromodynamics are found to be comparable in size to the next-to-leading order corrections in certain kinematic regions. We compare our predictions with data on dimuon production in (anti)neutrino scattering from a heavy nucleus. Our results can be used to improve the extraction of the parton distribution function of a strange quark in the nucleon.

  8. Investigation of optical current transformer signal processing method based on an improved Kalman algorithm

    NASA Astrophysics Data System (ADS)

    Shen, Yan; Ge, Jin-ming; Zhang, Guo-qing; Yu, Wen-bin; Liu, Rui-tong; Fan, Wei; Yang, Ying-xuan

    2018-01-01

    This paper explores the problem of signal processing in optical current transformers (OCTs). Based on the noise characteristics of OCTs, such as overlapping signals, noise frequency bands, low signal-to-noise ratios, and difficulties in acquiring statistical features of noise power, an improved standard Kalman filtering algorithm was proposed for direct current (DC) signal processing. The state-space model of the OCT DC measurement system is first established, and then mixed noise can be processed by adding mixed noise into measurement and state parameters. According to the minimum mean squared error criterion, state predictions and update equations of the improved Kalman algorithm could be deduced based on the established model. An improved central difference Kalman filter was proposed for alternating current (AC) signal processing, which improved the sampling strategy and noise processing of colored noise. Real-time estimation and correction of noise were achieved by designing AC and DC noise recursive filters. Experimental results show that the improved signal processing algorithms had a good filtering effect on the AC and DC signals with mixed noise of OCT. Furthermore, the proposed algorithm was able to achieve real-time correction of noise during the OCT filtering process.
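
    As background for the DC branch, the standard scalar predict/update recursion that the improved algorithm builds on is sketched below (a textbook filter, not the paper's mixed-noise variant):

    ```python
    import numpy as np

    def kalman_dc(measurements, q=1e-6, r=1e-2, x0=0.0, p0=1.0):
        """Scalar Kalman filter for a nearly constant DC level in noise.

        q : process-noise variance (allows slow drift of the true signal)
        r : measurement-noise variance
        """
        x, p = x0, p0
        estimates = []
        for z in measurements:
            p = p + q            # predict: variance grows by process noise
            k = p / (p + r)      # Kalman gain
            x = x + k * (z - x)  # update with the innovation
            p = (1.0 - k) * p
            estimates.append(x)
        return np.array(estimates)

    # Example: noisy DC output of an OCT around 5.0 (arbitrary units).
    z = 5.0 + 0.1 * np.random.default_rng(1).standard_normal(500)
    print(kalman_dc(z)[-1])  # converges near 5.0
    ```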

  9. Universality of quantum gravity corrections.

    PubMed

    Das, Saurya; Vagenas, Elias C

    2008-11-28

    We show that the existence of a minimum measurable length and the related generalized uncertainty principle (GUP), predicted by theories of quantum gravity, influence all quantum Hamiltonians. Thus, they predict quantum gravity corrections to various quantum phenomena. We compute such corrections to the Lamb shift, the Landau levels, and the tunneling current in a scanning tunneling microscope. We show that these corrections can be interpreted in two ways: (a) either that they are exceedingly small, beyond the reach of current experiments, or (b) that they predict upper bounds on the quantum gravity parameter in the GUP, compatible with experiments at the electroweak scale. Thus, more accurate measurements in the future should either be able to test these predictions, or further tighten the above bounds and predict an intermediate length scale between the electroweak and the Planck scale.

  10. Relative Packing Groups in Template-Based Structure Prediction: Cooperative Effects of True Positive Constraints

    PubMed Central

    Day, Ryan; Qu, Xiaotao; Swanson, Rosemarie; Bohannan, Zach; Bliss, Robert

    2011-01-01

    Most current template-based structure prediction methods concentrate on finding the correct backbone conformation and then packing sidechains within that backbone. Our packing-based method derives distance constraints from conserved relative packing groups (RPGs). In our refinement approach, the RPGs provide a level of resolution that restrains global topology while allowing conformational sampling. In this study, we test our template-based structure prediction method using 51 prediction units from CASP7 experiments. RPG-based constraints are able to substantially improve approximately two-thirds of starting templates. Upon deeper investigation, we find that true positive spatial constraints, especially those non-local in sequence, derived from the RPGs were important to building nearer native models. Surprisingly, the fraction of incorrect or false positive constraints does not strongly influence the quality of the final candidate. This result indicates that our RPG-based true positive constraints sample the self-consistent, cooperative interactions of the native structure. The lack of such reinforcing cooperativity explains the weaker effect of false positive constraints. Generally, these findings are encouraging indications that RPGs will improve template-based structure prediction. PMID:21210729

  11. Recurrent Neural Network Applications for Astronomical Time Series

    NASA Astrophysics Data System (ADS)

    Protopapas, Pavlos

    2017-06-01

    The benefits of good predictive models in astronomy lie in early event prediction systems and effective resource allocation. Current time series methods applicable to regular time series have not evolved to generalize for irregular time series. In this talk, I will describe two Recurrent Neural Network methods, Long Short-Term Memory (LSTM) and Echo State Networks (ESNs), for predicting irregular time series. Feature engineering along with non-linear modeling proved to be an effective predictor. For noisy time series, the prediction is improved by training the network on error realizations using the error estimates from astronomical light curves. In addition, we propose a new neural network architecture to remove correlation from the residuals in order to improve prediction and compensate for the noisy data. Finally, I show how to set hyperparameters correctly for a stable and performant solution: since hyperparameter tuning is a common obstacle for ESNs, we circumvent it by optimizing ESN hyperparameters using Bayesian optimization with Gaussian Process priors. This automates the tuning procedure, enabling users to employ the power of RNNs without needing an in-depth understanding of the tuning procedure.

  12. Extra-articular osteotomy for malunited unicondylar fractures of the proximal phalanx.

    PubMed

    Harness, Neil G; Chen, Alvin; Jupiter, Jesse B

    2005-05-01

    To evaluate an extra-articular osteotomy rather than an intra-articular osteotomy in the treatment of malunited unicondylar fractures of the proximal phalanx. An extra-articular osteotomy was used to correct the deformity resulting from a malunion of a unicondylar fracture of the proximal phalanx in 5 patients. A closing wedge osteotomy stabilized with tension band fixation accomplished realignment of the joint. Each patient was evaluated at a minimum of 1 year after surgery for radiographic healing, correction of angulation, digital motion, postoperative complications, current level of pain with motion, and overall satisfaction with the procedure. All of the osteotomies healed by 10 to 12 weeks after surgery, with an average angular correction from 25 degrees to 1 degree. The average proximal interphalangeal joint motion improved to 86 degrees from the preoperative average of 40 degrees, whereas the average total digital motion improved from 154 degrees before surgery to 204 degrees at follow-up evaluation. This method of extra-articular osteotomy for malunited unicondylar fractures of the proximal phalanx is highly reproducible, avoids the risks of intra-articular surgery, and leads to a predictable outcome.

  13. IFACEwat: the interfacial water-implemented re-ranking algorithm to improve the discrimination of near native structures for protein rigid docking.

    PubMed

    Su, Chinh; Nguyen, Thuy-Diem; Zheng, Jie; Kwoh, Chee-Keong

    2014-01-01

    Protein-protein docking is an in silico method to predict the formation of protein complexes. Due to limited computational resources, the protein-protein docking approach has been developed under the assumption of rigid docking, in which one of the two protein partners remains rigid during the protein association and water contribution is ignored or only implicitly represented. Despite producing a number of acceptable complex predictions, most initial rigid docking algorithms to date still find it difficult, or even fail, to discriminate correct predictions from incorrect or false positive ones. To improve rigid docking results, re-ranking is one of the effective methods that helps relocate correct predictions to top ranks, discriminating them from the other incorrect ones. Our results showed that IFACEwat increased both the number of near-native structures and improved their ranks as compared to the initial rigid docking of ZDOCK3.0.2. In fact, IFACEwat achieved a success rate of 83.8% for Antigen/Antibody complexes, which is 10% better than ZDOCK3.0.2. As compared to another re-ranking technique, ZRANK, IFACEwat obtains success rates of 92.3% (8% better) and 90% (5% better) respectively for medium and difficult cases. When compared with the latest published re-ranking method F2Dock, IFACEwat performed equivalently well or even better for several Antigen/Antibody complexes. With the inclusion of interfacial water, IFACEwat improves most results of the initial rigid docking, especially for Antigen/Antibody complexes. The improvement is achieved by explicitly taking into account the contribution of water during the protein interactions, which was ignored or not fully represented by the initial rigid docking and other re-ranking techniques. In addition, IFACEwat maintains sufficient computational efficiency of the initial docking algorithm, yet improves the ranks as well as the number of near-native structures found. As our implementation has so far targeted the results of ZDOCK3.0.2, particularly for Antigen/Antibody complexes, we expect future implementations to make the approach applicable to other initial rigid docking algorithms.

  14. Tailoring the implementation of new biomarkers based on their added predictive value in subgroups of individuals.

    PubMed

    van Giessen, A; Moons, K G M; de Wit, G A; Verschuren, W M M; Boer, J M A; Koffijberg, H

    2015-01-01

    The value of new biomarkers or imaging tests, when added to a prediction model, is currently evaluated using reclassification measures, such as the net reclassification improvement (NRI). However, these measures only provide an estimate of improved reclassification at the population level. We present a straightforward approach to characterize subgroups of reclassified individuals in order to tailor implementation of a new prediction model to individuals expected to benefit from it. In a large Dutch population cohort (n = 21,992) we classified individuals to low (< 5%) and high (≥ 5%) fatal cardiovascular disease risk by the Framingham risk score (FRS) and reclassified them based on the systematic coronary risk evaluation (SCORE). Subsequently, we characterized the reclassified individuals and, in case of heterogeneity, applied cluster analysis to identify and characterize subgroups. These characterizations were used to select individuals expected to benefit from implementation of SCORE. Reclassification after applying SCORE in all individuals resulted in an NRI of 5.00% (95% CI [-0.53%; 11.50%]) within the events, 0.06% (95% CI [-0.08%; 0.22%]) within the nonevents, and a total NRI of 0.051 (95% CI [-0.004; 0.116]). Among the correctly downward reclassified individuals cluster analysis identified three subgroups. Using the characterizations of the typically correctly reclassified individuals, implementing SCORE only in individuals expected to benefit (n = 2,707; 12.3%) improved the NRI to 5.32% (95% CI [-0.13%; 12.06%]) within the events, 0.24% (95% CI [0.10%; 0.36%]) within the nonevents, and a total NRI of 0.055 (95% CI [0.001; 0.123]). Overall, the risk levels for individuals reclassified by tailored implementation of SCORE were more accurate. In our empirical example the presented approach successfully characterized subgroups of reclassified individuals that could be used to improve reclassification and reduce implementation burden. In particular when newly added biomarkers or imaging tests are costly or burdensome such a tailored implementation strategy may save resources and improve (cost-)effectiveness.
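
    The NRI quantities reported above follow the usual two-category definition and are straightforward to compute. A minimal sketch (illustrative encoding: risk class 0 = low, 1 = high):

    ```python
    import numpy as np

    def net_reclassification_improvement(old_class, new_class, event):
        """Two-category NRI: classes are 0 (low risk) or 1 (high risk);
        event is 1 for individuals who experienced the outcome."""
        old_class, new_class, event = map(np.asarray, (old_class, new_class, event))
        up, down = new_class > old_class, new_class < old_class
        ev, ne = event == 1, event == 0
        nri_events = up[ev].mean() - down[ev].mean()     # correct moves: upward
        nri_nonevents = down[ne].mean() - up[ne].mean()  # correct moves: downward
        return nri_events, nri_nonevents, nri_events + nri_nonevents
    ```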

  15. Joint PET-MR respiratory motion models for clinical PET motion correction

    NASA Astrophysics Data System (ADS)

    Manber, Richard; Thielemans, Kris; Hutton, Brian F.; Wan, Simon; McClelland, Jamie; Barnes, Anna; Arridge, Simon; Ourselin, Sébastien; Atkinson, David

    2016-09-01

    Patient motion due to respiration can lead to artefacts and blurring in positron emission tomography (PET) images, in addition to quantification errors. The integration of PET with magnetic resonance (MR) imaging in PET-MR scanners provides complementary clinical information, and allows the use of high spatial resolution and high contrast MR images to monitor and correct motion-corrupted PET data. In this paper we build on previous work to form a methodology for respiratory motion correction of PET data, and show it can improve PET image quality whilst having minimal impact on clinical PET-MR protocols. We introduce a joint PET-MR motion model, using only 1 min per PET bed position of simultaneously acquired PET and MR data to provide a respiratory motion correspondence model that captures inter-cycle and intra-cycle breathing variations. In the model setup, 2D multi-slice MR provides the dynamic imaging component, and PET data, via low spatial resolution framing and principal component analysis, provides the model surrogate. We evaluate different motion models (1D and 2D linear, and 1D and 2D polynomial) by computing model-fit and model-prediction errors on dynamic MR images on a data set of 45 patients. Finally we apply the motion model methodology to 5 clinical PET-MR oncology patient datasets. Qualitative PET reconstruction improvements and artefact reduction are assessed with visual analysis, and quantitative improvements are calculated using standardised uptake value (SUVpeak and SUVmax) changes in avid lesions. We demonstrate the capability of a joint PET-MR motion model to predict respiratory motion by showing significantly improved image quality of PET data acquired before the motion model data. The method can be used to incorporate motion into the reconstruction of any length of PET acquisition, with only 1 min of extra scan time, and with no external hardware required.

  16. Does objective cluster analysis serve as a useful precursor to seasonal precipitation prediction at local scale? Application to western Ethiopia

    NASA Astrophysics Data System (ADS)

    Zhang, Ying; Moges, Semu; Block, Paul

    2018-01-01

    Prediction of seasonal precipitation can provide actionable information to guide management of various sectoral activities. For instance, it is often translated into hydrological forecasts for better water resources management. However, many studies assume homogeneity in precipitation across an entire study region, which may prove ineffective for operational and local-level decisions, particularly for locations with high spatial variability. This study proposes advancing local-level seasonal precipitation predictions by first conditioning on regional-level predictions, as defined through objective cluster analysis, for western Ethiopia. To our knowledge, this is the first study predicting seasonal precipitation at high resolution in this region, where lives and livelihoods are vulnerable to precipitation variability given the high reliance on rain-fed agriculture and limited water resources infrastructure. The combination of objective cluster analysis, spatially high-resolution prediction of seasonal precipitation, and a modeling structure spanning statistical and dynamical approaches makes clear advances in prediction skill and resolution, as compared with previous studies. The statistical model improves versus the non-clustered case or dynamical models for a number of specific clusters in northwestern Ethiopia, with clusters having regional average correlation and ranked probability skill score (RPSS) values of up to 0.5 and 33 %, respectively. The general skill (after bias correction) of the two best-performing dynamical models over the entire study region is superior to that of the statistical models, although the dynamical models issue predictions at a lower resolution and the raw predictions require bias correction to guarantee comparable skills.
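
    The RPSS values quoted above measure skill against a climatological reference; a compact sketch of the computation for tercile forecasts (numbers illustrative):

    ```python
    import numpy as np

    def rps(forecast_probs, obs_category):
        """Ranked probability score of one forecast over ordered categories."""
        F = np.cumsum(forecast_probs)  # cumulative forecast
        O = np.zeros_like(F)
        O[obs_category:] = 1.0         # cumulative observation
        return np.sum((F - O) ** 2)

    def rpss(forecasts, observations, climatology):
        """Skill score relative to climatology: 1 - RPS / RPS_clim."""
        rps_f = np.mean([rps(f, o) for f, o in zip(forecasts, observations)])
        rps_c = np.mean([rps(climatology, o) for o in observations])
        return 1.0 - rps_f / rps_c

    clim = np.array([1 / 3, 1 / 3, 1 / 3])  # equal-odds tercile reference
    fc = [np.array([0.6, 0.3, 0.1]), np.array([0.2, 0.5, 0.3])]
    obs = [0, 1]                            # observed categories
    print(rpss(fc, obs, clim))
    ```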

  17. TU-G-BRA-05: Predicting Volume Change of the Tumor and Critical Structures Throughout Radiation Therapy by CT-CBCT Registration with Local Intensity Correction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, S; Robinson, A; Kiess, A

    2015-06-15

    Purpose: The purpose of this study is to develop an accurate and effective technique to predict and monitor volume changes of the tumor and organs at risk (OARs) from daily cone-beam CTs (CBCTs). Methods: While CBCT is typically used to minimize the patient setup error, its poor image quality impedes accurate monitoring of daily anatomical changes in radiotherapy. Reconstruction artifacts in CBCT often cause undesirable errors in registration-based contour propagation from the planning CT, a conventional way to estimate anatomical changes. To improve the registration and segmentation accuracy, we developed a new deformable image registration (DIR) that iteratively corrects CBCT intensities using slice-based histogram matching during the registration process. Three popular DIR algorithms (hierarchical B-spline, demons, optical flow) augmented by the intensity correction were implemented on a graphics processing unit for efficient computation, and their performances were evaluated on six head and neck (HN) cancer cases. Four trained scientists manually contoured nodal gross tumor volume (GTV) on the planning CT and every other fraction CBCTs for each case, to which the propagated GTV contours by DIR were compared. The performance was also compared with commercial software, VelocityAI (Varian Medical Systems Inc.). Results: Manual contouring showed significant variations, [-76, +141]% from the mean of all four sets of contours. The volume differences (mean±std in cc) between the average manual segmentation and four automatic segmentations are 3.70±2.30 (B-spline), 1.25±1.78 (demons), 0.93±1.14 (optical flow), and 4.39±3.86 (VelocityAI). In comparison to the average volume of the manual segmentations, the proposed approach significantly reduced the estimation error by 9% (B-spline), 38% (demons), and 51% (optical flow) over the conventional mutual information based method (VelocityAI). Conclusion: The proposed CT-CBCT registration with local CBCT intensity correction can accurately predict the tumor volume change with reduced errors. Although demonstrated only on HN nodal GTVs, the results imply improved accuracy for other critical structures. This work was supported by NIH/NCI under grant R42CA137886.
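
    The slice-based intensity correction can be approximated, as a simplified stand-in for the authors' method, by matching each CBCT slice's histogram to the corresponding (roughly aligned) planning-CT slice between registration iterations:

    ```python
    import numpy as np
    from skimage.exposure import match_histograms

    def correct_cbct_intensities(cbct_volume, ct_volume):
        """Slice-wise histogram matching of a CBCT volume to the planning CT.

        Both volumes are assumed to be resampled onto the same grid, with
        axial slices along axis 0; returns the intensity-corrected CBCT.
        """
        corrected = np.empty(cbct_volume.shape, dtype=np.float64)
        for k in range(cbct_volume.shape[0]):  # loop over axial slices
            corrected[k] = match_histograms(cbct_volume[k].astype(np.float64),
                                            ct_volume[k].astype(np.float64))
        return corrected
    ```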

  18. Work characteristics as predictors of correctional supervisors’ health outcomes

    PubMed Central

    Buden, Jennifer C.; Dugan, Alicia G.; Namazi, Sara; Huedo-Medina, Tania B.; Cherniack, Martin G.; Faghri, Pouran D.

    2016-01-01

    Objective This study examined associations among health behaviors, psychosocial work factors, and health status. Methods Correctional supervisors (n=157) completed a survey that assessed interpersonal and organizational views on health. Chi-square and logistic regressions were used to examine relationships among variables. Results Respondents had a higher prevalence of obesity and comorbidities compared to the general U.S. adult population. Burnout was significantly associated with nutrition, physical activity, sleep duration, sleep quality, diabetes, and anxiety/depression. Job meaning, job satisfaction and workplace social support may predict health behaviors and outcomes. Conclusions Correctional supervisors are understudied and have poor overall health status. Improving health behaviors of middle-management employees may have a beneficial effect on the health of the entire workforce. This paper demonstrates the importance of psychosocial work factors that may contribute to health behaviors and outcomes. PMID:27483335

  19. Inter-model Diversity of ENSO simulation and its relation to basic states

    NASA Astrophysics Data System (ADS)

    Kug, J. S.; Ham, Y. G.

    2016-12-01

    In this study, a new methodology is developed to improve the climate simulation of state-of-the-art coupled global climate models (GCMs) by a postprocessing based on the intermodel diversity. Based on the close connection between the interannual variability and climatological states, a distinctive relation between the intermodel diversity of the interannual variability and that of the basic state is found. Based on this relation, the simulated interannual variabilities can be improved by correcting their climatological bias. To test this methodology, the dominant intermodel difference in precipitation responses during El Niño-Southern Oscillation (ENSO) is investigated, along with its relationship with the climatological state. It is found that the dominant intermodel diversity of the ENSO precipitation in phase 5 of the Coupled Model Intercomparison Project (CMIP5) is associated with the zonal shift of the positive precipitation center during El Niño. This dominant intermodel difference is significantly correlated with the basic states. The models with wetter (dryer) climatology than the climatology of the multimodel ensemble (MME) over the central Pacific tend to shift positive ENSO precipitation anomalies to the east (west). Based on the models' systematic errors in atmospheric ENSO response and bias, the models with a better climatological state tend to simulate more realistic atmospheric ENSO responses. Therefore, the statistical method to correct the ENSO response mostly improves the ENSO response. After the statistical correction, the simulation quality of the MME ENSO precipitation is distinctively improved. These results provide a possibility that the present methodology can also be applied to improving climate projection and seasonal climate prediction.

  20. Heroic Reliability Improvement in Manned Space Systems

    NASA Technical Reports Server (NTRS)

    Jones, Harry W.

    2017-01-01

    System reliability can be significantly improved by a strong, continued effort to identify and remove all the causes of actual failures. Newly designed systems often have unexpectedly high failure rates, which can be reduced by successive design improvements until the final operational system has an acceptable failure rate. There are many causes of failures and many ways to remove them. New systems may have poor specifications, design errors, or mistaken operations concepts. Correcting unexpected problems as they occur can produce large early gains in reliability. Improved technology in materials, components, and design approaches can increase reliability. The reliability growth is achieved by repeatedly operating the system until it fails, identifying the failure cause, and fixing the problem. The failure rate reduction that can be obtained depends on the number and the failure rates of the correctable failures. Under the strong assumption that the failure causes can be removed, the decline in overall failure rate can be predicted. If a failure occurs at the rate of lambda per unit time, the expected time before the failure occurs and can be corrected is 1/lambda, the Mean Time Before Failure (MTBF). Finding and fixing a less frequent failure with the rate of lambda/2 per unit time requires twice as long, a time of 1/(2 lambda). Cutting the failure rate in half thus requires doubling the time spent testing, finding, and eliminating failure causes. Reducing the failure rate significantly requires a heroic reliability improvement effort.
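
    The doubling argument is simple arithmetic; a short illustration (exponential failure times assumed, so the expected time to observe a mode of rate lambda is 1/lambda):

```python
lam = 0.01                    # failure rate: 1 failure per 100 hours
mtbf = 1.0 / lam              # expect ~100 hours of testing to see it once
mtbf_half = 1.0 / (lam / 2)   # a mode half as frequent needs ~200 hours

# Removing modes of rate lam, lam/2, lam/4, lam/8 halves the residual
# rate at each step, but each halving doubles the expected test time.
total_test_time = sum(1.0 / (lam / 2**k) for k in range(4))
print(mtbf, mtbf_half, total_test_time)   # 100.0 200.0 1500.0 hours
```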

  1. The impact of the photon PDF and electroweak corrections on tt̄ distributions.

    PubMed

    Pagani, D; Tsinikos, I; Zaro, M

    2016-01-01

    We discuss the impact of EW corrections on differential distributions in top-quark pair production at the LHC and future hadron colliders, focussing on the effects of initial-state photons. Performing a calculation at Next-to-Leading Order QCD+EW accuracy, we investigate in detail the impact of photon-initiated channels on central values as well as PDF and scale uncertainties, both at order [Formula: see text] and [Formula: see text]. We present predictions at 13 and 100 TeV, and provide results for the 8 TeV differential measurements performed by ATLAS and CMS. A thorough comparison of results obtained with the NNPDF2.3QED and CT14QED PDF sets is performed. While contributions due to the photon PDF are negligible with CT14QED, this is not the case for NNPDF2.3QED, where such contributions are sizeable and show large PDF uncertainties. On the one hand, we show that differential observables in top-pair production, in particular top-quark and tt̄ rapidities, can be used to improve the determination of the photon PDF within the NNPDF approach. On the other hand, with current PDF sets, we demonstrate the necessity of including EW corrections and photon-induced contributions for a correct determination of both the central value and the uncertainties of theoretical predictions.

  2. Compressibility Considerations for kappa-omega Turbulence Models in Hypersonic Boundary Layer Applications

    NASA Technical Reports Server (NTRS)

    Rumsey, C. L.

    2009-01-01

    The ability of kappa-omega models to predict compressible turbulent skin friction in hypersonic boundary layers is investigated. Although uncorrected two-equation models can agree well with correlations for hot-wall cases, they tend to perform progressively worse - particularly for cold walls - as the Mach number is increased in the hypersonic regime. Simple algebraic models such as Baldwin-Lomax perform better compared to experiments and correlations in these circumstances. Many of the compressibility corrections described in the literature are summarized here. These include corrections that have only a small influence for kappa-omega models, or that apply only in specific circumstances. The most widely-used general corrections were designed for use with jet or mixing-layer free shear flows. A less well-known dilatation-dissipation correction intended for boundary layer flows is also tested, and is shown to agree reasonably well with the Baldwin-Lomax model at cold-wall conditions. It exhibits a less dramatic influence than the free shear type of correction. There is clearly a need for improved understanding and better overall physical modeling for turbulence models applied to hypersonic boundary layer flows.
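
    As a concrete illustration of the dilatation-dissipation family of corrections discussed here, a sketch of the common form eps_total = eps_s * (1 + xi * F(Mt)); the cutoff form and constants below are Wilcox-style placeholders and differ between published corrections, so treat them as assumptions:

```python
import math

def corrected_dissipation(eps_solenoidal, k, a_sound, xi=1.5, mt0=0.25):
    """Dilatation-dissipation correction sketch.
    k: turbulent kinetic energy; a_sound: local speed of sound.
    F(Mt) = max(Mt^2 - Mt0^2, 0) switches the correction on only
    above the cutoff turbulent Mach number Mt0."""
    mt = math.sqrt(2.0 * k) / a_sound      # turbulent Mach number
    f = max(mt * mt - mt0 * mt0, 0.0)
    return eps_solenoidal * (1.0 + xi * f)
```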

  3. Application of Pressure-Based Wall Correction Methods to Two NASA Langley Wind Tunnels

    NASA Technical Reports Server (NTRS)

    Iyer, V.; Everhart, J. L.

    2001-01-01

    This paper is a description and status report on the implementation and application of the WICS wall interference method to the National Transonic Facility (NTF) and the 14 x 22-ft subsonic wind tunnel at the NASA Langley Research Center. The method calculates free-air corrections to the measured parameters and aerodynamic coefficients for full span and semispan models when the tunnels are in the solid-wall configuration. From a data quality point of view, these corrections remove predictable bias errors in the measurement due to the presence of the tunnel walls. At the NTF, the method is operational in the off-line and on-line modes, with three tests already computed for wall corrections. At the 14 x 22-ft tunnel, initial implementation has been done based on a test on a full span wing. This facility is currently scheduled for an upgrade to its wall pressure measurement system. With the addition of new wall orifices and other instrumentation upgrades, a significant improvement in the wall correction accuracy is expected.

  4. Predicting plantar fasciitis in runners.

    PubMed

    Warren, B L; Jones, C J

    1987-02-01

    Ninety-one runners were studied to determine whether specific variables were indicative of runners who had suffered from plantar fasciitis either presently or formerly versus runners who had never suffered from plantar fasciitis. Each runner was asked to complete a running history, was subjected to several anatomical measurements, and was asked to run on a treadmill in both a barefoot and a shod condition at a speed of 3.35 m/s (8-minute-mile pace). Factor coefficients were used in a discriminant function analysis, which revealed that, when group membership was predicted, 63% of the runners could be correctly assigned to their group. Considering that 76% of the control group was correctly predicted, it was concluded that the predictor variables were able to correctly predict membership of the control group, but not able to correctly predict the presently or formerly injured sufferers of plantar fasciitis.
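
    For readers unfamiliar with the technique, a minimal sketch of predicting group membership with linear discriminant analysis (randomly generated stand-in data; the study's actual anatomical and training-history variables are not reproduced here):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
X = rng.normal(size=(91, 3))          # 91 runners, 3 stand-in predictors
y = rng.integers(0, 2, size=91)       # 0 = never injured, 1 = injured

lda = LinearDiscriminantAnalysis().fit(X, y)
print("fraction correctly assigned:", (lda.predict(X) == y).mean())
```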

  5. Reduce Manual Curation by Combining Gene Predictions from Multiple Annotation Engines, a Case Study of Start Codon Prediction

    PubMed Central

    Ederveen, Thomas H. A.; Overmars, Lex; van Hijum, Sacha A. F. T.

    2013-01-01

    Nowadays, prokaryotic genomes are sequenced faster than the capacity to manually curate gene annotations. Automated genome annotation engines (AGEs) provide users a straightforward and complete solution for predicting ORF coordinates and function. For many labs, the use of AGEs is therefore essential to decrease the time necessary for annotating a given prokaryotic genome. However, it is not uncommon for AGEs to provide different and sometimes conflicting predictions. Combining multiple AGEs might allow for more accurate predictions. Here we analyzed the ab initio open reading frame (ORF) calling performance of different AGEs based on curated genome annotations of eight strains from different bacterial species with GC% ranging from 35-52%. We present a case study which demonstrates a novel way of comparative genome annotation, using combinations of AGEs in a pre-defined order (or path) to predict ORF start codons. The order of AGE combinations runs from high to low specificity, where the specificity is based on the eight genome annotations. For each AGE combination we are able to derive a so-called projected confidence value, which is the average specificity of ORF start codon prediction based on the eight genomes. The projected confidence enables estimating the likelihood of a correct prediction for a particular ORF start codon by a particular AGE combination, pinpointing ORFs whose start codons are notoriously difficult to predict. We correctly predict start codons for 90.5±4.8% of the genes in a genome (based on the eight genomes) with an accuracy of 81.1±7.6%. Our consensus-path methodology allows a marked improvement over majority voting (9.7±4.4%), and with an optimal path, ORF start prediction sensitivity is gained while maintaining a high specificity. PMID:23675487
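
    The consensus-path idea can be sketched as trying AGE combinations in a fixed order of decreasing projected confidence and accepting the first combination whose members agree (hypothetical data structures, not the authors' implementation):

```python
def consensus_path_start(orf_id, predictions, paths):
    """predictions: {age_name: {orf_id: start_coordinate}}.
    paths: [(combo_of_age_names, projected_confidence), ...] sorted by
    decreasing projected confidence (average specificity on 8 genomes).
    Returns the first unanimous start codon along the path."""
    for combo, confidence in paths:
        starts = {predictions[age].get(orf_id) for age in combo}
        if len(starts) == 1 and None not in starts:
            return starts.pop(), confidence
    return None, 0.0
```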

  6. Grid-Adapted FUN3D Computations for the Second High Lift Prediction Workshop

    NASA Technical Reports Server (NTRS)

    Lee-Rausch, E. M.; Rumsey, C. L.; Park, M. A.

    2014-01-01

    Contributions of the unstructured Reynolds-averaged Navier-Stokes code FUN3D to the 2nd AIAA CFD High Lift Prediction Workshop are described, and detailed comparisons are made with experimental data. Using workshop-supplied grids, results for the clean wing configuration are compared with results from the structured code CFL3D Using the same turbulence model, both codes compare reasonably well in terms of total forces and moments, and the maximum lift is similarly over-predicted for both codes compared to experiment. By including more representative geometry features such as slat and flap brackets and slat pressure tube bundles, FUN3D captures the general effects of the Reynolds number variation, but under-predicts maximum lift on workshop-supplied grids in comparison with the experimental data, due to excessive separation. However, when output-based, off-body grid adaptation in FUN3D is employed, results improve considerably. In particular, when the geometry includes both brackets and the pressure tube bundles, grid adaptation results in a more accurate prediction of lift near stall in comparison with the wind-tunnel data. Furthermore, a rotation-corrected turbulence model shows improved pressure predictions on the outboard span when using adapted grids.

  7. Rapid diagnostic tests for malaria at sites of varying transmission intensity in Uganda.

    PubMed

    Hopkins, Heidi; Bebell, Lisa; Kambale, Wilson; Dokomajilar, Christian; Rosenthal, Philip J; Dorsey, Grant

    2008-02-15

    In Africa, fever is often treated presumptively as malaria, resulting in misdiagnosis and the overuse of antimalarial drugs. Rapid diagnostic tests (RDTs) for malaria may allow improved fever management. We compared RDTs based on histidine-rich protein 2 (HRP2) and RDTs based on Plasmodium lactate dehydrogenase (pLDH) with expert microscopy and PCR-corrected microscopy for 7000 patients at sites of varying malaria transmission intensity across Uganda. When all sites were considered, the sensitivity of the HRP2-based test was 97% when compared with microscopy and 98% when corrected by PCR; the sensitivity of the pLDH-based test was 88% when compared with microscopy and 77% when corrected by PCR. The specificity of the HRP2-based test was 71% when compared with microscopy and 88% when corrected by PCR; the specificity of the pLDH-based test was 92% when compared with microscopy and >98% when corrected by PCR. Based on Plasmodium falciparum PCR-corrected microscopy, the positive predictive value (PPV) of the HRP2-based test was high (93%) at all but the site with the lowest transmission rate; the pLDH-based test and expert microscopy offered excellent PPVs (98%) for all sites. The negative predictive value (NPV) of the HRP2-based test was consistently high (>97%); in contrast, the NPV for the pLDH-based test dropped significantly (from 98% to 66%) as transmission intensity increased, and the NPV for expert microscopy decreased significantly (99% to 54%) because of increasing failure to detect subpatent parasitemia. Based on the high PPV and NPV, HRP2-based RDTs are likely to be the best diagnostic choice for areas with medium-to-high malaria transmission rates in Africa.
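
    The transmission-intensity dependence of PPV and NPV follows directly from Bayes' rule; a small sketch using the PCR-corrected HRP2 figures from the abstract (sensitivity 98%, specificity 88%):

```python
def ppv_npv(sens, spec, prev):
    """Positive and negative predictive value at a given prevalence."""
    tp, fp = sens * prev, (1 - spec) * (1 - prev)
    fn, tn = (1 - sens) * prev, spec * (1 - prev)
    return tp / (tp + fp), tn / (tn + fn)

for prev in (0.05, 0.20, 0.50):        # low to high transmission
    ppv, npv = ppv_npv(0.98, 0.88, prev)
    print(f"prevalence {prev:.0%}: PPV {ppv:.2f}, NPV {npv:.2f}")
```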

  8. Refraction-compensated motion tracking of unrestrained small animals in positron emission tomography.

    PubMed

    Kyme, Andre; Meikle, Steven; Baldock, Clive; Fulton, Roger

    2012-08-01

    Motion-compensated radiotracer imaging of fully conscious rodents represents an important paradigm shift for preclinical investigations. In such studies, if motion tracking is performed through a transparent enclosure containing the awake animal, light refraction at the interface will introduce errors in stereo pose estimation. We have performed a thorough investigation of how this impacts the accuracy of pose estimates and the resulting motion correction, and developed an efficient method to predict and correct for refraction-based error. The refraction model underlying this study was validated using a state-of-the-art motion tracking system. Refraction-based error was shown to be dependent on tracking marker size, working distance, and interface thickness and tilt. Correcting for refraction error improved the spatial resolution and quantitative accuracy of motion-corrected positron emission tomography images. Since the methods are general, they may also be useful in other contexts where data are corrupted by refraction effects. Crown Copyright © 2012. Published by Elsevier B.V. All rights reserved.
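
    The error being corrected is the classic apparent shift of a ray crossing a flat transparent slab; a 2D Snell's-law sketch (single flat interface, assumed refractive index for acrylic; the paper's model covers the full stereo geometry and interface tilt):

```python
import math

def lateral_shift(theta_i, thickness, n1=1.0, n2=1.49):
    """Lateral displacement (same units as thickness) of a ray crossing
    a flat slab at incidence angle theta_i (radians): the position error
    a stereo tracker sees if refraction is ignored."""
    theta_t = math.asin(n1 * math.sin(theta_i) / n2)   # Snell's law
    return thickness * math.sin(theta_i - theta_t) / math.cos(theta_t)

print(lateral_shift(math.radians(30), thickness=5.0))  # mm-scale error
```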

  9. Hybrid error correction and de novo assembly of single-molecule sequencing reads

    PubMed Central

    Koren, Sergey; Schatz, Michael C.; Walenz, Brian P.; Martin, Jeffrey; Howard, Jason; Ganapathy, Ganeshkumar; Wang, Zhong; Rasko, David A.; McCombie, W. Richard; Jarvis, Erich D.; Phillippy, Adam M.

    2012-01-01

    Emerging single-molecule sequencing instruments can generate multi-kilobase sequences with the potential to dramatically improve genome and transcriptome assembly. However, the high error rate of single-molecule reads is challenging, and has limited their use to resequencing bacteria. To address this limitation, we introduce a novel correction algorithm and assembly strategy that utilizes shorter, high-identity sequences to correct the error in single-molecule sequences. We demonstrate the utility of this approach on PacBio RS reads of phage, prokaryotic, and eukaryotic whole genomes, including the novel genome of the parrot Melopsittacus undulatus, as well as for RNA-seq reads of the corn (Zea mays) transcriptome. Our approach achieves over 99.9% read correction accuracy and produces substantially better assemblies than current sequencing strategies: in the best example, quintupling the median contig size relative to high-coverage, second-generation assemblies. Greater gains are predicted if read lengths continue to increase, including the prospect of single-contig bacterial chromosome assembly. PMID:22750884

  10. Measurement of sediment and crustal thickness corrected RDA for 2D profiles at rifted continental margins: Applications to the Iberian, Gulf of Aden and S Angolan margins

    NASA Astrophysics Data System (ADS)

    Cowie, Leanne; Kusznir, Nick

    2014-05-01

    Subsidence analysis of sedimentary basins and rifted continental margins requires a correction for the anomalous uplift or subsidence arising from mantle dynamic topography. Whilst different global model predictions of mantle dynamic topography may give a broadly similar pattern at long wavelengths, they differ substantially in the predicted amplitude and at shorter wavelengths. As a consequence, the accuracy of predicted mantle dynamic topography is not sufficiently good to provide corrections for subsidence analysis. Measurements of present-day anomalous subsidence, which we attribute to mantle dynamic topography, have been made for three rifted continental margins: offshore Iberia, the Gulf of Aden and southern Angola. We determine the residual depth anomaly (RDA), corrected for sediment loading and crustal thickness variation, for 2D profiles running from unequivocal oceanic crust across the continent-ocean boundary onto thinned continental crust. Residual depth anomalies, corrected for sediment loading using flexural backstripping and decompaction, have been calculated by comparing observed and age-predicted oceanic bathymetries at these margins. Age-predicted bathymetric anomalies have been calculated using the thermal plate model predictions from Crosby & McKenzie (2009). Non-zero sediment-corrected RDAs may result from anomalous oceanic crustal thickness with respect to the global average or from anomalous uplift or subsidence. Gravity anomaly inversion incorporating a lithosphere thermal gravity anomaly correction and sediment thickness from 2D seismic reflection data has been used to determine Moho depth, calibrated using seismic refraction, and oceanic crustal basement thickness. Crustal basement thicknesses derived from gravity inversion together with Airy isostasy have been used to correct for variations of crustal thickness from a standard oceanic thickness of 7 km. The 2D profiles of RDA corrected for both sediment loading and non-standard crustal thickness provide a measurement of anomalous uplift or subsidence which we attribute to mantle dynamic topography. We compare our sediment and crustal thickness corrected RDA analysis results with published predictions of mantle dynamic topography from global models.

  11. Eighteen- and 24-Month-Old Infants Correct Others in Anticipation of Action Mistakes

    ERIC Educational Resources Information Center

    Knudsen, Birgit; Liszkowski, Ulf

    2012-01-01

    Much of human communication and collaboration is predicated on making predictions about others' actions. Humans frequently use predictions about others' action mistakes to correct others and spare them mistakes. Such anticipatory correcting reveals a social motivation for unsolicited helping. Cognitively, it requires forward inferences about…

  12. Tailored high-resolution numerical weather forecasts for energy efficient predictive building control

    NASA Astrophysics Data System (ADS)

    Stauch, V. J.; Gwerder, M.; Gyalistras, D.; Oldewurtel, F.; Schubiger, F.; Steiner, P.

    2010-09-01

    The high proportion of the total primary energy consumption by buildings has increased public interest in the optimisation of building operation and is also driving the development of novel control approaches for the indoor climate. In this context, the use of weather forecasts presents an interesting and - thanks to advances in information and predictive control technologies and the continuous improvement of numerical weather prediction (NWP) models - an increasingly attractive option for improved building control. Within the research project OptiControl (www.opticontrol.ethz.ch), predictive control strategies for a wide range of buildings, heating, ventilation and air conditioning (HVAC) systems, and representative locations in Europe are being investigated with the aid of newly developed modelling and simulation tools. Grid point predictions for radiation, temperature and humidity from the high-resolution limited-area NWP model COSMO-7 (see www.cosmo-model.org) and local measurements are used as disturbances and inputs into the building system. The control task considered consists in minimizing energy consumption whilst maintaining occupant comfort. In this presentation, we use the simulation-based OptiControl methodology to investigate the impact of COSMO-7 forecasts on the performance of predictive building control and the resulting energy savings. For this, we have selected building cases that were shown to benefit from a prediction horizon of up to 3 days and are therefore particularly suitable for the use of numerical weather forecasts. We show that the controller performance is sensitive to the quality of the weather predictions, most importantly of the incident radiation on differently oriented façades. However, radiation is characterised by high temporal and spatial variability, caused in part by small-scale and fast-changing cloud formation and dissolution processes that are only partially represented in the COSMO-7 grid point predictions. On the other hand, buildings are affected by the particular local weather conditions at the building site. To overcome this discrepancy, we make use of local measurements to statistically adapt the COSMO-7 model output to the meteorological conditions at the building. For this, we have developed a general correction algorithm that exploits systematic properties of the COSMO-7 prediction error and explicitly estimates the degree of temporal autocorrelation using online recursive estimation. The resulting corrected predictions are improved especially for the first few hours, which are the most crucial for the predictive controller and, ultimately, for the reduction of primary energy consumption using predictive control. The use of numerical weather forecasts in predictive building automation is one example in a wide field of weather-dependent advanced energy-saving technologies. Our work particularly highlights the need for the development of specifically tailored weather forecast products by (statistical) postprocessing in order to meet the requirements of specific applications.
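
    A toy version of such a correction, assuming the forecast error decomposes into a slowly varying bias plus an AR(1) component whose parameters are estimated online with exponential forgetting (all names and constants hypothetical, not the project's algorithm):

```python
class RecursiveForecastCorrector:
    """Online grid-point forecast correction from local measurements."""

    def __init__(self, forget=0.98):
        self.forget = forget     # exponential forgetting factor
        self.bias = 0.0          # running estimate of mean error
        self.phi = 0.5           # AR(1) coefficient of the centred error
        self.last_err = 0.0

    def update(self, forecast, observation):
        err = forecast - observation
        self.bias = self.forget * self.bias + (1 - self.forget) * err
        e0, e1 = self.last_err - self.bias, err - self.bias
        phi_hat = e1 * e0 / (e0 * e0 + 1e-9)   # crude one-step estimate
        self.phi = min(max(self.forget * self.phi
                           + (1 - self.forget) * phi_hat, -0.99), 0.99)
        self.last_err = err

    def correct(self, forecast, lead_steps=1):
        # the autocorrelated error decays with lead time; the bias does not
        persist = (self.phi ** lead_steps) * (self.last_err - self.bias)
        return forecast - self.bias - persist
```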

  13. Prediction, Detection, and Validation of Isotope Clusters in Mass Spectrometry Data

    PubMed Central

    Treutler, Hendrik; Neumann, Steffen

    2016-01-01

    Mass spectrometry is a key analytical platform for metabolomics. The precise quantification and identification of small molecules is a prerequisite for elucidating the metabolism and the detection, validation, and evaluation of isotope clusters in LC-MS data is important for this task. Here, we present an approach for the improved detection of isotope clusters using chemical prior knowledge and the validation of detected isotope clusters depending on the substance mass using database statistics. We find remarkable improvements regarding the number of detected isotope clusters and are able to predict the correct molecular formula in the top three ranks in 92% of the cases. We make our methodology freely available as part of the Bioconductor packages xcms version 1.50.0 and CAMERA version 1.30.0. PMID:27775610

  14. Motion-corrected whole-heart PET-MR for the simultaneous visualisation of coronary artery integrity and myocardial viability: an initial clinical validation.

    PubMed

    Munoz, Camila; Kunze, Karl P; Neji, Radhouene; Vitadello, Teresa; Rischpler, Christoph; Botnar, René M; Nekolla, Stephan G; Prieto, Claudia

    2018-05-12

    Cardiac PET-MR has shown potential for the comprehensive assessment of coronary heart disease. However, image degradation due to physiological motion remains a challenge that could hinder the adoption of this technology in clinical practice. The purpose of this study was to validate a recently proposed respiratory motion-corrected PET-MR framework for the simultaneous visualisation of myocardial viability (18F-FDG PET) and coronary artery anatomy (coronary MR angiography, CMRA) in patients with chronic total occlusion (CTO). A cohort of 14 patients was scanned with the proposed PET-CMRA framework. PET and CMRA images were reconstructed with and without the proposed motion correction approach for comparison purposes. Metrics of image quality including visible vessel length and sharpness were obtained for CMRA for both the right and left anterior descending coronary arteries (RCA, LAD), and the relative increase in 18F-FDG PET signal after motion correction for standard 17-segment polar maps was computed. Resulting coronary anatomy by CMRA and myocardial integrity by PET were visually compared against X-ray angiography and conventional Late Gadolinium Enhancement (LGE) MRI, respectively. Motion correction increased CMRA visible vessel length by 49.9% and 32.6% (RCA, LAD) and vessel sharpness by 12.3% and 18.9% (RCA, LAD) on average compared to uncorrected images. Coronary lumen delineation on motion-corrected CMRA images was in good agreement with X-ray angiography findings. For PET, motion correction resulted in an average 8% increase in 18F-FDG signal in the inferior and inferolateral segments of the myocardial wall. An improved delineation of myocardial viability defects and reduced noise in the 18F-FDG PET images was observed, improving correspondence to subendocardial LGE-MRI findings compared to uncorrected images. The feasibility of the PET-CMRA framework for simultaneous cardiac PET-MR imaging in a short and predictable scan time (~11 min) has been demonstrated in 14 patients with CTO. Motion correction increased visible length and sharpness of the coronary arteries by CMRA, and improved delineation of the myocardium by 18F-FDG PET, resulting in good agreement with X-ray angiography and LGE-MRI.

  15. Simulation of Atmospheric-Entry Capsules in the Subsonic Regime

    NASA Technical Reports Server (NTRS)

    Murman, Scott M.; Childs, Robert E.; Garcia, Joseph A.

    2015-01-01

    The accuracy of Computational Fluid Dynamics predictions of subsonic capsule aerodynamics is examined by comparison against recent NASA wind-tunnel data at high-Reynolds-number flight conditions. Several aspects of numerical and physical modeling are considered, including inviscid numerical scheme, mesh adaptation, rough-wall modeling, rotation and curvature corrections for eddy-viscosity models, and Detached-Eddy Simulations of the unsteady wake. All of these are considered in isolation against relevant data where possible. The results indicate that an improved predictive capability is developed by considering physics-based approaches and validating the results against flight-relevant experimental data.

  16. Dynamic properties of the adaptive optics system depending on the temporary transformations of mirror control voltages

    NASA Astrophysics Data System (ADS)

    Lavrinov, V. V.; Lavrinova, L. N.

    2017-11-01

    A statistically optimal control algorithm for the correcting mirror is formed by constructing a prediction of the optical-signal distortions, which improves the temporal resolution of the adaptive optics system. The prediction of distortions is based on an analysis of the dynamics of changes in the optical inhomogeneities of the turbulent atmosphere, or of the evolution of phase fluctuations at the input aperture of the adaptive system. The dynamic properties of the system are manifested during the temporal transformation of the voltages controlling the mirror and are determined by the dynamic characteristics of the flexible mirror.

  17. Calibration and prediction of removal function in magnetorheological finishing.

    PubMed

    Dai, Yifan; Song, Ci; Peng, Xiaoqiang; Shi, Feng

    2010-01-20

    A calibrated and predictive model of the removal function has been established based on an analysis of the magnetorheological finishing (MRF) process. By introducing an efficiency coefficient of the removal function, the model can be used to calibrate the removal function in an MRF figuring process and to accurately predict the removal function of a workpiece to be polished whose material differs from that of the spot part. Its correctness and feasibility have been validated by simulations. Furthermore, applying this model to MRF figuring experiments, the efficiency coefficient of the removal function can be identified accurately enough to make the MRF figuring process deterministic and controllable. All the results indicate that the calibrated and predictive model of the removal function can improve finishing determinacy and increase the model's applicability in an MRF process.
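
    The efficiency-coefficient idea can be sketched as a single least-squares scale factor fitted on the spot part and reused to rescale the modelled removal function for the workpiece material (hypothetical names; a sketch under that reading, not the paper's model):

```python
import numpy as np

def fit_efficiency(measured_spot, modeled_spot):
    """Least-squares efficiency coefficient k with measured ~ k * modeled
    (both 2D removal-depth maps of the polishing spot)."""
    m, p = measured_spot.ravel(), modeled_spot.ravel()
    return float(p @ m / (p @ p))

def predict_removal(modeled_spot_workpiece, k):
    """Removal function predicted for the workpiece: rescaled model."""
    return k * modeled_spot_workpiece
```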

  18. X-Band Acquisition Aid Software

    NASA Technical Reports Server (NTRS)

    Britcliffe, Michael J.; Strain, Martha M.; Wert, Michael

    2011-01-01

    The X-band Acquisition Aid (AAP) software is a low-cost acquisition aid for the Deep Space Network (DSN) antennas, and is used while acquiring a spacecraft shortly after it has launched. When enabled, the acquisition aid provides corrections to the antenna-predicted trajectory of the spacecraft to compensate for the variations that occur during the actual launch. The AAP software also provides the corrections to the antenna-predicted trajectory to the navigation team that uses the corrections to refine their model of the spacecraft in order to produce improved antenna-predicted trajectories for each spacecraft that passes over each complex. The software provides an automated Acquisition Aid receiver calibration, and provides graphical displays to the operator and remote viewers via an Ethernet connection. It has a Web server, and the remote workstations use the Firefox browser to view the displays. At any given time, only one operator can control any particular display in order to avoid conflicting commands from more than one control point. The configuration and control is accomplished solely via the graphical displays. The operator does not have to remember any commands. Only a few configuration parameters need to be changed, and can be saved to the appropriate spacecraft-dependent configuration file on the AAP's hard disk. AAP automates the calibration sequence by first commanding the antenna to the correct position, starting the receiver calibration sequence, and then providing the operator with the option of accepting or rejecting the new calibration parameters. If accepted, the new parameters are stored in the appropriate spacecraft-dependent configuration file. The calibration can be performed on the Sun, greatly expanding the window of opportunity for calibration. The spacecraft traditionally used for calibration is in view typically twice per day, and only for about ten minutes each pass.

  19. Comprehending 3D Diagrams: Sketching to Support Spatial Reasoning.

    PubMed

    Gagnier, Kristin M; Atit, Kinnari; Ormand, Carol J; Shipley, Thomas F

    2017-10-01

    Science, technology, engineering, and mathematics (STEM) disciplines commonly illustrate 3D relationships in diagrams, yet these are often challenging for students. Failing to understand diagrams can hinder success in STEM because scientific practice requires understanding and creating diagrammatic representations. We explore a new approach to improving student understanding of diagrams that convey 3D relations, based on students generating their own predictive diagrams. Participants' comprehension of 3D spatial diagrams was measured in a pre- and post-test design in which students selected the correct 2D slice through 3D geologic block diagrams. Generating sketches that predicted the internal structure of a model led to greater improvement in diagram understanding than visualizing the interior of the model without sketching, or sketching the model without attempting to predict unseen spatial relations. In addition, we found a positive correlation between sketched diagram accuracy and improvement on the diagram comprehension measure. Results suggest that generating a predictive diagram facilitates students' abilities to make inferences about spatial relationships in diagrams. Implications for the use of sketching in supporting STEM learning are discussed. Copyright © 2016 Cognitive Science Society, Inc.

  20. Effects of ionic strength and ion pairing on (plant-wide) modelling of anaerobic digestion.

    PubMed

    Solon, Kimberly; Flores-Alsina, Xavier; Mbamba, Christian Kazadi; Volcke, Eveline I P; Tait, Stephan; Batstone, Damien; Gernaey, Krist V; Jeppsson, Ulf

    2015-03-01

    Plant-wide models of wastewater treatment (such as the Benchmark Simulation Model No. 2 or BSM2) are gaining popularity for use in holistic virtual studies of treatment plant control and operations. The objective of this study is to show the influence of ionic strength (as activity corrections) and ion pairing on modelling of anaerobic digestion processes in such plant-wide models of wastewater treatment. Using the BSM2 as a case study with a number of model variants and cationic load scenarios, this paper presents the effects of an improved physico-chemical description on model predictions and overall plant performance indicators, namely effluent quality index (EQI) and operational cost index (OCI). The acid-base equilibria implemented in the Anaerobic Digestion Model No. 1 (ADM1) are modified to account for non-ideal aqueous-phase chemistry. The model corrects for ionic strength via the Davies approach to consider chemical activities instead of molar concentrations. A speciation sub-routine based on a multi-dimensional Newton-Raphson (NR) iteration method is developed to address algebraic interdependencies. The model also includes ion pairs that play an important role in wastewater treatment. The paper describes: 1) how the anaerobic digester performance is affected by physico-chemical corrections; 2) the effect on pH and the anaerobic digestion products (CO2, CH4 and H2); and, 3) how these variations are propagated from the sludge treatment to the water line. Results at high ionic strength demonstrate that corrections to account for non-ideal conditions lead to significant differences in predicted process performance (up to 18% for effluent quality and 7% for operational cost) but that for pH prediction, activity corrections are more important than ion pairing effects. Both are likely to be required when precipitation is to be modelled. Copyright © 2014 Elsevier Ltd. All rights reserved.
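
    The Davies activity correction used here is a closed-form function of ionic strength; a minimal sketch of the standard Davies form, with A taken as 0.509 (valid near 25 °C):

```python
import math

def davies_log10_gamma(z, ionic_strength, A=0.509):
    """Davies equation: log10(gamma) = -A z^2 (sqrt(I)/(1+sqrt(I)) - 0.3 I),
    for an ion of charge z at ionic strength I (mol/L)."""
    s = math.sqrt(ionic_strength)
    return -A * z * z * (s / (1.0 + s) - 0.3 * ionic_strength)

def activity(molar_conc, z, ionic_strength):
    """Chemical activity = molar concentration x activity coefficient."""
    return molar_conc * 10.0 ** davies_log10_gamma(z, ionic_strength)
```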

  1. Correcting the anion gap for hypoalbuminaemia does not improve detection of hyperlactataemia

    PubMed Central

    Dinh, C H; Ng, R; Grandinetti, A; Joffe, A; Chow, D C

    2006-01-01

    Background An elevated lactate level reflects impaired tissue oxygenation and is a predictor of mortality. Studies have shown that the anion gap is inadequate as a screen for hyperlactataemia, particularly in critically ill and trauma patients. A proposed explanation for the anion gap's poor sensitivity and specificity in detecting hyperlactataemia is that the serum albumin is frequently low. This study therefore sought to compare the predictive values of the anion gap and the anion gap corrected for albumin (cAG) as indicators of hyperlactataemia, defined as a lactate ⩾2.5 mmol/l. Methods A retrospective review of 639 sets of laboratory values from a tertiary care hospital. Patients' laboratory results were included in the study if serum chemistries and lactate were drawn consecutively. The sensitivity, specificity, and predictive values were obtained. A receiver operator characteristics (ROC) curve was drawn and the area under the curve (AUC) was calculated. Results An anion gap ⩾12 provided a sensitivity, specificity, positive predictive value, and negative predictive value of 39%, 89%, 79%, and 58%, respectively, and a cAG ⩾12 provided a sensitivity, specificity, positive predictive value, and negative predictive value of 75%, 59%, 66%, and 69%, respectively. The ROC curves for the anion gap and the cAG as predictors of hyperlactataemia were almost identical. The AUC was 0.757 and 0.750, respectively. Conclusions The sensitivities, specificities, and predictive values of the anion gap and cAG were inadequate in predicting the presence of hyperlactataemia. The cAG provides no additional advantage over the anion gap in the detection of hyperlactataemia. PMID:16858097
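
    The abstract does not restate the correction formula used; the widely used Figge-style adjustment adds about 2.5 mEq/L to the anion gap per g/dL of albumin below 4.0 g/dL, sketched here (the study's exact constant may differ):

```python
def anion_gap(na, cl, hco3):
    """Anion gap (mEq/L) from sodium, chloride and bicarbonate."""
    return na - (cl + hco3)

def albumin_corrected_gap(ag, albumin_g_dl, normal_albumin=4.0):
    """Figge-style correction: +2.5 mEq/L per g/dL of albumin deficit."""
    return ag + 2.5 * (normal_albumin - albumin_g_dl)

ag = anion_gap(na=138, cl=104, hco3=22)                 # 12 mEq/L
print(ag, albumin_corrected_gap(ag, albumin_g_dl=2.0))  # 12 17.0
```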

  2. Patient age, refractive index of the corneal stroma, and outcomes of uneventful laser in situ keratomileusis.

    PubMed

    Patel, Sudi; Alió, Jorge L; Walewska, Anna; Amparo, Francisco; Artola, Alberto

    2013-03-01

    To determine the influence of age and the corneal stromal refractive index on the difference between the predicted and actual postoperative refractive error after laser in situ keratomileusis (LASIK) and whether the precision of outcomes could be improved by considering age and the refractive index. Vissum Instituto Oftalmologico de Alicante, Alicante, Spain. Case series. Flaps were created using a mechanical microkeratome. The stromal refractive index was measured using a VCH-1 refractometer after flap lifting. Refractive data were obtained 1, 3, and 6 months postoperatively. Uneventful LASIK was performed in 133 eyes. The mean age, refractive index, and applied corrections were 33.4 years ± 9.49 (SD), 1.368 ± 0.006, and -2.43 ± 3.36 diopters (D), respectively. At 1 month, the difference between the predicted and actual postoperative refractive error = 2.315 - 0.021 × age - 1.106 × refractive index (F = 3.647, r = 0.254, P = .029; n = 109); at 3 months, the difference = 11.820 - 0.023 × age - 7.976 × refractive index (F = 3.392, r = 0.261, P = .022; n = 106). The correlation between the actual and calculated postoperative refraction improved from r = -0.178 (P = .064; n = 75) to r = -0.418 (P < .001) after considering the true refractive index 6 months postoperatively. The predicted outcomes of LASIK can be improved by inputting the refractive index of the individual corneal stroma. Unexpected outcomes (>0.50 D) of LASIK could be avoided by considering patient age and the refractive index and by adjusting the applied correction accordingly. Copyright © 2012 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.
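
    The two quoted regressions are directly usable; a small sketch evaluating them at the cohort's reported mean age and stromal refractive index:

```python
def diff_1month(age_years, stromal_index):
    """Predicted-minus-actual refraction (D) at 1 month, per the abstract."""
    return 2.315 - 0.021 * age_years - 1.106 * stromal_index

def diff_3months(age_years, stromal_index):
    """Predicted-minus-actual refraction (D) at 3 months, per the abstract."""
    return 11.820 - 0.023 * age_years - 7.976 * stromal_index

# Cohort means reported above: age 33.4 years, stromal index 1.368
print(diff_1month(33.4, 1.368), diff_3months(33.4, 1.368))  # both near 0 D
```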

  3. Predicting stability of DNA duplexes in solutions containing magnesium and monovalent cations.

    PubMed

    Owczarzy, Richard; Moreira, Bernardo G; You, Yong; Behlke, Mark A; Walder, Joseph A

    2008-05-13

    Accurate predictions of DNA stability in physiological and enzyme buffers are important for the design of many biological and biochemical assays. We therefore investigated the effects of magnesium, potassium, sodium, Tris ions, and deoxynucleoside triphosphates on melting profiles of duplex DNA oligomers and collected large melting data sets. An empirical correction function was developed that predicts melting temperatures, transition enthalpies, entropies, and free energies in buffers containing magnesium and monovalent cations. The new correction function significantly improves the accuracy of predictions and accounts for ion concentration, G-C base pair content, and length of the oligonucleotides. The competitive effects of potassium and magnesium ions were characterized. If the concentration ratio [Mg²⁺]^0.5/[Mon⁺] is less than 0.22 M^(-1/2), monovalent ions (K⁺, Na⁺) are dominant. Effects of magnesium ions dominate and determine duplex stability at higher ratios. Typical reaction conditions for PCR and DNA sequencing (1.5-5 mM magnesium and 20-100 mM monovalent cations) fall within this range. Conditions were identified where monovalent and divalent cations compete and their stability effects are more complex. When duplexes denature, some of the Mg²⁺ ions associated with the DNA are released. The number of released magnesium ions per phosphate charge is sequence dependent and decreases, surprisingly, with increasing oligonucleotide length.
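
    The dominance criterion is a one-line check; a sketch with concentrations in mol/L:

```python
import math

def dominant_cation(mg_molar, monovalent_molar, threshold=0.22):
    """Criterion from the abstract: monovalent ions dominate duplex
    stability when sqrt([Mg2+]) / [Mon+] < 0.22 M^(-1/2)."""
    ratio = math.sqrt(mg_molar) / monovalent_molar
    return "monovalent" if ratio < threshold else "magnesium"

# Typical PCR buffer: 2 mM Mg2+, 50 mM K+ -> ratio ~0.89, Mg2+ dominates
print(dominant_cation(0.002, 0.050))
```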

  4. Reliable and fast quantitative analysis of active ingredient in pharmaceutical suspension using Raman spectroscopy.

    PubMed

    Park, Seok Chan; Kim, Minjung; Noh, Jaegeun; Chung, Hoeil; Woo, Youngah; Lee, Jonghwa; Kemper, Mark S

    2007-06-12

    The concentration of acetaminophen in a turbid pharmaceutical suspension has been measured successfully using Raman spectroscopy. The spectrometer was equipped with a large-spot probe which enabled the coverage of a representative area during sampling. This wide area illumination (WAI) scheme (coverage area 28.3 mm²) for Raman data collection proved to be more reliable for the compositional determination of these pharmaceutical suspensions, especially when the samples were turbid. The reproducibility of measurement using the WAI scheme was compared to that of a conventional small-spot scheme which employed a much smaller illumination area (about 100 µm spot size). A layer of isobutyric anhydride was placed in front of the sample vials to correct for variation in the Raman intensity due to fluctuation of the laser power. Corrections were accomplished using the isolated carbonyl band of isobutyric anhydride. The acetaminophen concentrations of prediction samples were accurately estimated using a partial least squares (PLS) calibration model. The prediction accuracy was maintained even with changes in laser power. It was noted that the prediction performance was somewhat degraded for turbid suspensions with high acetaminophen contents. When comparing the results of reproducibility obtained with the WAI scheme and those obtained using the conventional scheme, it was concluded that the quantitative determination of the active pharmaceutical ingredient (API) in turbid suspensions is much improved when employing a larger laser coverage area. This is presumably due to the improvement in representative sampling.

  5. Predicting the helix packing of globular proteins by self-correcting distance geometry.

    PubMed

    Mumenthaler, C; Braun, W

    1995-05-01

    A new self-correcting distance geometry method for predicting the three-dimensional structure of small globular proteins was assessed with a test set of 8 helical proteins. With the knowledge of the amino acid sequence and the helical segments, our completely automated method calculated the correct backbone topology of six proteins. The accuracy of the predicted structures ranged from 2.3 Å to 3.1 Å for the helical segments compared to the experimentally determined structures. For two proteins, the predicted constraints were not restrictive enough to yield a conclusive prediction. The method can be applied to all small globular proteins, provided the secondary structure is known from NMR analysis or can be predicted with high reliability.

  6. Sub-Doppler Rovibrational Spectroscopy of the H_3^+ Cation and Isotopologues

    NASA Astrophysics Data System (ADS)

    Markus, Charles R.; McCollum, Jefferson E.; Dieter, Thomas S.; Kocheril, Philip A.; McCall, Benjamin J.

    2017-06-01

    Molecular ions play a central role in the chemistry of the interstellar medium (ISM) and act as benchmarks for state-of-the-art ab initio theory. The molecular ion H_3^+ initiates a chain of ion-neutral reactions which drives chemistry in the ISM, and observing it either directly or indirectly through its isotopologues is valuable for understanding interstellar chemistry. Improving the accuracy of laboratory measurements will assist future astronomical observations. H_3^+ is also one of a few systems whose rovibrational transitions can be predicted to spectroscopic accuracy (<1 cm^{-1}), and with careful treatment of adiabatic, nonadiabatic, and quantum electrodynamic corrections to the potential energy surface, predictions of low-lying rovibrational states can rival the uncertainty of experimental measurements. New experimental data will be needed to benchmark future treatment of these corrections. Previously we have reported 26 transitions within the fundamental band of H_3^+ with MHz-level uncertainties. With recent improvements to our overall sensitivity, we have expanded this survey to include additional transitions within the fundamental band and the first hot band. These new data will ultimately be used to predict ground state rovibrational energy levels through combination differences, which will act as benchmarks for ab initio theory and predict forbidden rotational transitions of H_3^+. We will also discuss progress in measuring rovibrational transitions of the isotopologues H_2D^+ and D_2H^+, which will be used to assist future THz astronomical observations. J. N. Hodges, A. J. Perry, P. A. Jenkins II, B. M. Siller, and B. J. McCall, J. Chem. Phys. (2013), 139, 164201. A. J. Perry, J. N. Hodges, C. R. Markus, G. S. Kocheril, and B. J. McCall, J. Mol. Spectrosc. (2015), 317, 71-73. A. J. Perry, C. R. Markus, J. N. Hodges, G. S. Kocheril, and B. J. McCall, 71st International Symposium on Molecular Spectroscopy (2016), MH03. C. R. Markus, A. J. Perry, J. N. Hodges, and B. J. McCall, Opt. Express (2017), 25, 3709-3721.

  7. Hyperspectral analysis of columbia spotted frog habitat

    USGS Publications Warehouse

    Shive, J.P.; Pilliod, D.S.; Peterson, C.R.

    2010-01-01

    Wildlife managers increasingly are using remotely sensed imagery to improve habitat delineations and sampling strategies. Advances in remote sensing technology, such as hyperspectral imagery, provide more information than previously was available with multispectral sensors. We evaluated accuracy of high-resolution hyperspectral image classifications to identify wetlands and wetland habitat features important for Columbia spotted frogs (Rana luteiventris) and compared the results to multispectral image classification and United States Geological Survey topographic maps. The study area spanned 3 lake basins in the Salmon River Mountains, Idaho, USA. Hyperspectral data were collected with an airborne sensor on 30 June 2002 and on 8 July 2006. A 12-year comprehensive ground survey of the study area for Columbia spotted frog reproduction served as validation for image classifications. Hyperspectral image classification accuracy of wetlands was high, with a producer's accuracy of 96% (44 wetlands) correctly classified with the 2002 data and 89% (41 wetlands) correctly classified with the 2006 data. We applied habitat-based rules to delineate breeding habitat from other wetlands, and successfully predicted 74% (14 wetlands) of known breeding wetlands for the Columbia spotted frog. Emergent sedge microhabitat classification showed promise for directly predicting Columbia spotted frog egg mass locations within a wetland by correctly identifying 72% (23 of 32) of known locations. Our study indicates hyperspectral imagery can be an effective tool for mapping spotted frog breeding habitat in the selected mountain basins. We conclude that this technique has potential for improving site selection for inventory and monitoring programs conducted across similar wetland habitat and can be a useful tool for delineating wildlife habitats. © 2010 The Wildlife Society.

  8. Beta value coupled wave theory for nonslanted reflection gratings.

    PubMed

    Neipp, Cristian; Francés, Jorge; Gallego, Sergi; Bleda, Sergio; Martínez, Francisco Javier; Pascual, Inmaculada; Beléndez, Augusto

    2014-01-01

    We present a modified coupled wave theory to describe the properties of nonslanted reflection volume diffraction gratings. The method is based on the beta value coupled wave theory, which will be corrected by using appropriate boundary conditions. The use of this correction allows predicting the efficiency of the reflected order for nonslanted reflection gratings embedded in two media with different refractive indices. The results obtained by using this method will be compared to those obtained using a matrix method, which gives exact solutions in terms of Mathieu functions, and also to Kogelnik's coupled wave theory. As will be demonstrated, the technique presented in this paper means a significant improvement over Kogelnik's coupled wave theory.

  10. TH-C-BRD-06: A Novel MRI Based CT Artifact Correction Method for Improving Proton Range Calculation in the Presence of Severe CT Artifacts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, P; Schreibmann, E; Fox, T

    2014-06-15

    Purpose: Severe CT artifacts can impair our ability to accurately calculate proton range, thereby resulting in a clinically unacceptable treatment plan. In this work, we investigated a novel CT artifact correction method based on a coregistered MRI and investigated its ability to estimate CT HU and proton range in the presence of severe CT artifacts. Methods: The proposed method corrects corrupted CT data using a coregistered MRI to guide the mapping of CT values from a nearby artifact-free region. First, patient MRI and CT images were registered using 3D deformable image registration software based on B-spline and mutual information. The CT slice with severe artifacts was selected, as well as a nearby slice free of artifacts (e.g., 1 cm away from the artifact). The two sets of paired MRI and CT images at different slice locations were further registered by applying 2D deformable image registration. Based on the artifact-free paired MRI and CT images, a comprehensive geospatial analysis was performed to predict the correct CT HU of the CT image with severe artifacts. For a proof of concept, a known artifact was introduced that changed the ground truth CT HU value by up to 30% and produced up to 5 cm of error in proton range. The ability of the proposed method to recover the ground truth was quantified using a selected head and neck case. Results: A significant improvement in image quality was observed visually. Our proof-of-concept study showed that 90% of the area that had 30% errors in CT HU was corrected to within 3% of its ground truth value. Furthermore, the maximum proton range error of up to 5 cm was reduced to a 4 mm error. Conclusion: The MRI based CT artifact correction method can improve CT image quality and proton range calculation for patients with severe CT artifacts.

  11. Identification and correction of abnormal, incomplete and mispredicted proteins in public databases.

    PubMed

    Nagy, Alinda; Hegyi, Hédi; Farkas, Krisztina; Tordai, Hedvig; Kozma, Evelin; Bányai, László; Patthy, László

    2008-08-27

    Despite significant improvements in computational annotation of genomes, sequences of abnormal, incomplete or incorrectly predicted genes and proteins remain abundant in public databases. Since the majority of incomplete, abnormal or mispredicted entries are not annotated as such, these errors seriously affect the reliability of these databases. Here we describe the MisPred approach that may provide an efficient means for the quality control of databases. The current version of the MisPred approach uses five distinct routines for identifying abnormal, incomplete or mispredicted entries based on the principle that a sequence is likely to be incorrect if some of its features conflict with our current knowledge about protein-coding genes and proteins: (i) conflict between the predicted subcellular localization of proteins and the absence of the corresponding sequence signals; (ii) presence of extracellular and cytoplasmic domains and the absence of transmembrane segments; (iii) co-occurrence of extracellular and nuclear domains; (iv) violation of domain integrity; (v) chimeras encoded by two or more genes located on different chromosomes. Analyses of predicted EnsEMBL protein sequences of nine deuterostome (Homo sapiens, Mus musculus, Rattus norvegicus, Monodelphis domestica, Gallus gallus, Xenopus tropicalis, Fugu rubripes, Danio rerio and Ciona intestinalis) and two protostome species (Caenorhabditis elegans and Drosophila melanogaster) have revealed that the absence of expected signal peptides and violation of domain integrity account for the majority of mispredictions. Analyses of sequences predicted by NCBI's GNOMON annotation pipeline show that the rates of mispredictions are comparable to those of EnsEMBL. Interestingly, even the manually curated UniProtKB/Swiss-Prot dataset is contaminated with mispredicted or abnormal proteins, although to a much lesser extent than UniProtKB/TrEMBL or the EnsEMBL or GNOMON-predicted entries. MisPred works efficiently in identifying errors in predictions generated by the most reliable gene prediction tools such as the EnsEMBL and NCBI's GNOMON pipelines and also guides the correction of errors. We suggest that application of the MisPred approach will significantly improve the quality of gene predictions and the associated databases.

  12. Numerical analysis of hypersonic turbulent film cooling flows

    NASA Technical Reports Server (NTRS)

    Chen, Y. S.; Chen, C. P.; Wei, H.

    1992-01-01

    As a building block, numerical capabilities for predicting the heat flux and turbulent flowfields of hypersonic vehicles require extensive model validation. Computational procedures for calculating turbulent flows and heat fluxes for supersonic film cooling with parallel slot injections are described in this study. Two injectant mass flow rates with matched and unmatched pressure conditions, using the database of Holden et al. (1990), are considered. To avoid uncertainties associated with the boundary conditions in testing turbulence models, detailed three-dimensional flowfields of the injection nozzle were calculated. Two computational fluid dynamics codes, GASP and FDNS, were used with the algebraic Baldwin-Lomax model and k-epsilon models with compressibility corrections. It was found that the B-L model, which resolves the near-wall viscous sublayer, is very sensitive to the inlet boundary conditions at the nozzle exit face. The k-epsilon models with improved wall functions are less sensitive to the inlet boundary conditions. The tests show that compressibility corrections are necessary for the k-epsilon model to realistically predict the heat fluxes of hypersonic film cooling problems.

  13. Selection, adaptation, and predictive information in changing environments

    NASA Astrophysics Data System (ADS)

    Feltgen, Quentin; Nemenman, Ilya

    2014-03-01

    Adaptation by means of natural selection is a key concept in evolutionary biology. Individuals better matched to the surrounding environment outcompete the others. This increases the fraction of the better adapted individuals in the population, and hence increases its collective fitness. Adaptation is also prominent on the physiological scale in neuroscience and cell biology. There each individual infers properties of the environment and changes to become individually better, improving the overall population as well. Traditionally, these two notions of adaption have been considered distinct. Here we argue that both types of adaptation result in the same population growth in a broad class of analytically tractable population dynamics models in temporally changing environments. In particular, both types of adaptation lead to subextensive corrections to the population growth rates. These corrections are nearly universal and are equal to the predictive information in the environment time series, which is also the characterization of the time series complexity. This work has been supported by the James S. McDonnell Foundation.

  14. Towards good practice for health statistics: lessons from the Millennium Development Goal health indicators.

    PubMed

    Murray, Christopher J L

    2007-03-10

    Health statistics are at the centre of an increasing number of worldwide health controversies. Several factors are sharpening the tension between the supply and demand for high quality health information, and the health-related Millennium Development Goals (MDGs) provide a high-profile example. With thousands of indicators recommended but few measured well, the worldwide health community needs to focus its efforts on improving measurement of a small set of priority areas. Priority indicators should be selected on the basis of public-health significance and several dimensions of measurability. Health statistics can be divided into three types: crude, corrected, and predicted. Health statistics are necessary inputs to planning and strategic decision making, programme implementation, monitoring progress towards targets, and assessment of what works and what does not. Crude statistics that are biased have no role in any of these steps; corrected statistics are preferred. For strategic decision making, when corrected statistics are unavailable, predicted statistics can play an important part. For monitoring progress towards agreed targets and assessment of what works and what does not, however, predicted statistics should not be used. Perhaps the most effective method to decrease controversy over health statistics and to encourage better primary data collection and the development of better analytical methods is a strong commitment to provision of an explicit data audit trail. This initiative would make available the primary data, all post-data collection adjustments, models including covariates used for farcasting and forecasting, and necessary documentation to the public.

  15. "Ethnicity moderates the outcomes of self-enhancement and self-improvement themes in expressive writing": Correction to Tsai et al. (2015).

    PubMed

    2017-01-01

    Reports an error in "Ethnicity moderates the outcomes of self-enhancement and self-improvement themes in expressive writing" by William Tsai, Anna S. Lau, Andrea N. Niles, Jordan Coello, Matthew D. Lieberman, Ahra C. Ko, Christopher Hur and Annette L. Stanton (Cultural Diversity and Ethnic Minority Psychology, 2015[Oct], Vol 21[4], 584-592). In this article, there were three errors in the Results section. Each is described in the erratum alongside the correct results. The interpretations of the findings remain the same. (The following abstract of the original article appeared in record 2014-32908-001.) The current study examined whether writing content related to self-enhancing (viz., downward social comparison and situational attributions) and self-improving (viz., upward social comparison and persistence) motivations was differentially related to expressive writing outcomes among 17 Asian American and 17 European American participants. Content analysis of the essays revealed no significant cultural group differences in the likelihood of engaging in self-enhancing versus self-improving reflections on negative personal experiences. However, cultural group differences were apparent in the relation between self-motivation processes and changes in anxiety and depressive symptoms at 3-month follow-up. Among European Americans, writing that reflected downward social comparison predicted positive outcomes, whereas persistence writing themes were related to poorer outcomes. For Asian Americans, writing about persistence was related to positive outcomes, whereas downward social comparison and situational attributions predicted poorer outcomes. Findings provide evidence suggesting culturally distinct mechanisms for the effects of expressive disclosure. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  16. EPG therapy for children with long-standing speech disorders: predictions and outcomes.

    PubMed

    Carter, Penny; Edwards, Susan

    2004-01-01

    This paper reports on a project using a series of single subjects to investigate the effectiveness of using electropalatography (EPG) in treating ten children with persisting speech difficulties of no known organic aetiology. The aims of the project were two-fold, firstly to assess whether the subjects selected benefited from this treatment, and secondly to investigate whether it was possible to predict which children would make maximum improvement. A number of factors were identified as possible predictors for successful EPG therapy and subjects were then ranked according to these predictions. Baseline measures of each subject's speech were taken using word lists. Scores reflected the correct number of realizations of consonants produced by each subject. Subjects received the same number of therapy sessions and were then re-tested. Scores before and after therapy were compared and found to be significantly different although the initial predictions as to the magnitude of improvement for each subject were not verified. The selection of appropriate candidates for therapy and the need for objective means of establishing effectiveness are discussed.

  17. Multi-scale enhancement of climate prediction over land by improving the model sensitivity to vegetation variability

    NASA Astrophysics Data System (ADS)

    Alessandri, A.; Catalano, F.; De Felice, M.; Hurk, B. V. D.; Doblas-Reyes, F. J.; Boussetta, S.; Balsamo, G.; Miller, P. A.

    2017-12-01

    Here we demonstrate, for the first time, that the implementation of a realistic representation of vegetation in Earth System Models (ESMs) can significantly improve climate simulation and prediction across multiple time-scales. The effective sub-grid vegetation fractional coverage varies seasonally and at interannual time-scales in response to leaf-canopy growth, phenology and senescence. It therefore affects biophysical parameters such as the surface resistance to evapotranspiration, albedo, roughness length, and soil field capacity. To adequately represent this effect in the EC-Earth ESM, we included an exponential dependence of the vegetation cover on the Leaf Area Index. By comparing two sets of simulations performed with and without the new variable fractional-coverage parameterization, spanning from centennial (20th Century) simulations and retrospective predictions to the decadal (5-year), seasonal (2-4 months) and weather (4 days) time-scales, we show a significant multi-scale enhancement of vegetation impacts in climate simulation and prediction over land. Particularly large effects at multiple time-scales are shown over boreal winter middle-to-high latitudes over Canada, the western US, Eastern Europe, Russia and eastern Siberia, due to the implemented time-varying shadowing effect of tree vegetation on snow surfaces. Over Northern Hemisphere boreal forest regions the improved representation of vegetation cover consistently corrects the winter warm biases and improves the climate change sensitivity, the decadal potential predictability, and the skill of forecasts at seasonal and weather time-scales. Significant improvements in the prediction of 2m temperature and rainfall are also shown over transitional land-surface hot spots. Both the potential predictability at the decadal time-scale and seasonal-forecast skill are enhanced over the Sahel, the North American Great Plains, Nordeste Brazil and South East Asia, mainly related to improved performance in the surface evapotranspiration. These results are discussed in a peer-reviewed paper accepted for publication in Climate Dynamics (Alessandri et al., 2017; doi:10.1007/s00382-017-3766-y).
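
    The abstract describes the new parameterization only qualitatively; a minimal sketch of an exponential (Lambert-Beer-like) dependence of effective vegetation cover on Leaf Area Index is given below. The extinction coefficient k and the asymptotic cover c_max are illustrative assumptions, not the EC-Earth values.

    ```python
    import numpy as np

    def effective_veg_cover(lai, c_max=1.0, k=0.5):
        """Effective sub-grid vegetation fractional cover as a saturating
        exponential function of Leaf Area Index (illustrative constants)."""
        return c_max * (1.0 - np.exp(-k * np.asarray(lai)))

    print(effective_veg_cover([0.5, 1.0, 2.0, 4.0, 6.0]))
    # cover rises quickly at small LAI and saturates as the canopy closes
    ```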

  18. Top-pair production at hadron colliders with next-to-next-to-leading logarithmic soft-gluon resummation

    NASA Astrophysics Data System (ADS)

    Cacciari, Matteo; Czakon, Michał; Mangano, Michelangelo; Mitov, Alexander; Nason, Paolo

    2012-04-01

    Incorporating all recent theoretical advances, we resum soft-gluon corrections to the total ttbar cross-section at hadron colliders at the next-to-next-to-leading logarithmic (NNLL) order. We perform the resummation in the well-established framework of Mellin N-space resummation. We exhaustively study the sources of systematic uncertainty, such as renormalization and factorization scale variation, power-suppressed effects, and missing two- and higher-loop corrections. The inclusion of soft-gluon resummation at NNLL brings only a minor decrease in the perturbative uncertainty with respect to the NLL approximation, and a small shift in the central value, consistent with the quoted uncertainties. These numerical predictions agree with the currently available measurements from the Tevatron and LHC and have uncertainties of similar size. We conclude that significant improvements in the ttbar cross-section predictions can be expected only upon inclusion of the complete NNLO corrections.

  19. Correcting intermittent central suppression improves binocular marksmanship.

    PubMed

    Hussey, Eric S

    2007-04-01

    Intermittent central suppression (ICS) is a defect in normal binocular (two-eyed) vision that causes confusion in visual detail. ICS is a repetitive, intermittent loss of visual sensation in the central area of vision. As the central vision of either eye "turns on and off", aiming errors in sight can occur that must be corrected when both eyes are seeing again. Any such aiming errors might be expected to interfere with marksmanship during two-eyed seeing. We compared monocular (one-eyed, patched) and binocular (two-eyed) pistol marksmanship in an Army ROTC cadet before and after successful therapy for diagnosed ICS. Pretreatment, monocular marksmanship was significantly better than binocular marksmanship, suggesting that defective binocularity reduced accuracy. After treatment for ICS, binocular and monocular marksmanship were essentially the same. The results confirmed the prediction that, with the increased visual stability gained by correcting the suppression, binocular and monocular marksmanship accuracies should merge.

  20. Wavefront correction by target-phase-locking technology in a 500 TW laser facility

    NASA Astrophysics Data System (ADS)

    Wang, D. E.; Dai, W. J.; Zhou, K. N.; Su, J. Q.; Xue, Q.; Yuan, Q.; Zhang, X.; Deng, X. W.; Yang, Y.; Wang, Y. C.; Xie, N.; Sun, L.; Hu, D. X.; Zhu, Q. H.

    2017-03-01

    We demonstrate a novel approach, termed target-phase-locking, that can improve the entire-beam wavefront quality of a 500 TW Nd3+:phosphate glass laser facility. The thermal and static wavefront from the front end to the target is corrected by using one deformable mirror that receives feedback from both the focal-spot sensor and the wavefront sensor, and only the main laser of the system is employed in the correction process, with auxiliary calibration light no longer necessary. As a result, a static focal spot with a full width at half maximum of 8.87 µm × 5.74 µm is achieved, the thermal wavefront induced by the flash-lamp-pumped Nd3+:phosphate glass is compensated, with the peak-to-valley (PV) error reduced from 3.54 to 0.43 µm, and a dynamic focal spot with intensity exceeding 10^20 W cm^-2 is precisely predicted at the target with this approach.

  1. Physics-based protein-structure prediction using a hierarchical protocol based on the UNRES force field: assessment in two blind tests.

    PubMed

    Ołdziej, S; Czaplewski, C; Liwo, A; Chinchio, M; Nanias, M; Vila, J A; Khalili, M; Arnautova, Y A; Jagielska, A; Makowski, M; Schafroth, H D; Kaźmierkiewicz, R; Ripoll, D R; Pillardy, J; Saunders, J A; Kang, Y K; Gibson, K D; Scheraga, H A

    2005-05-24

    Recent improvements in the protein-structure prediction method developed in our laboratory, based on the thermodynamic hypothesis, are described. The conformational space is searched extensively at the united-residue level by using our physics-based UNRES energy function and the conformational space annealing method of global optimization. The lowest-energy coarse-grained structures are then converted to an all-atom representation and energy-minimized with the ECEPP/3 force field. The procedure was assessed in two recent blind tests of protein-structure prediction. During the first blind test, we predicted large fragments of alpha and alpha+beta proteins [60-70 residues with C(alpha) rms deviation (rmsd) <6 A]. However, for alpha+beta proteins, significant topological errors occurred despite low rmsd values. In the second exercise, we predicted whole structures of five proteins (two alpha and three alpha+beta, with sizes of 53-235 residues) with remarkably good accuracy. In particular, for the genomic target TM0487 (a 102-residue alpha+beta protein from Thermotoga maritima), we predicted the complete, topologically correct structure with 7.3-A C(alpha) rmsd. So far this protein is the largest alpha+beta protein predicted based solely on the amino acid sequence and a physics-based potential-energy function and search procedure. For target T0198, a phosphate transport system regulator PhoU from T. maritima (a 235-residue mainly alpha-helical protein), we predicted the topology of the whole six-helix bundle correctly within 8 A rmsd, except the 32 C-terminal residues, most of which form a beta-hairpin. These and other examples described in this work demonstrate significant progress in physics-based protein-structure prediction.

  2. Prediction of biodegradability from chemical structure: Modeling or ready biodegradation test data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Loonen, H.; Lindgren, F.; Hansen, B.

    1999-08-01

    Biodegradation data were collected and evaluated for 894 substances with widely varying chemical structures. All data were determined according to the Japanese Ministry of International Trade and Industry (MITI) I test protocol. The MITI I test is a screening test for ready biodegradability and has been described by Organization for Economic Cooperation and Development (OECD) test guideline 301 C and European Union (EU) test guideline C4F. The chemicals were characterized by a set of 127 predefined structural fragments. This data set was used to develop a model for the prediction of the biodegradability of chemicals under standardized OECD and EU ready biodegradation test conditions. Partial least squares (PLS) discriminant analysis was used for the model development. The model was evaluated by means of internal cross-validation and repeated external validation. The importance of various structural fragments and fragment interactions was investigated. The most important fragments include the presence of a long alkyl chain and of hydroxy, ester, and acid groups (enhancing biodegradation), and the presence of one or more aromatic rings and halogen substituents (retarding biodegradation). More than 85% of the model predictions were correct when using the complete data set. The not-readily-biodegradable predictions were slightly better than the readily biodegradable predictions (86 vs 84%). The average percentage of correct predictions from four external validation studies was 83%. Model optimization by including fragment interactions improved the predictive capability of the model to 89%. It can be concluded that the PLS model provides predictions of high reliability for a diverse range of chemical structures. The predictions conform to the concept of readily biodegradable (or not readily biodegradable) as defined by OECD and EU test guidelines.
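
    A minimal sketch of the modelling step, PLS regression on a fragment-count matrix with a 0.5 cutoff to discriminate readily from not readily biodegradable chemicals, is given below. The data are synthetic and the component count is a guess; only the overall workflow mirrors the abstract.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    # Toy stand-in for the real inputs: rows = chemicals, columns = counts
    # of 127 predefined structural fragments; y = 1 if readily biodegradable.
    rng = np.random.default_rng(0)
    X = rng.poisson(0.3, size=(200, 127)).astype(float)
    w = rng.normal(0.0, 1.0, 127)
    y = (X @ w + rng.normal(0.0, 1.0, 200) > 0).astype(float)

    pls = PLSRegression(n_components=5)      # component count: illustrative
    pls.fit(X, y)
    y_hat = pls.predict(X).ravel() > 0.5     # threshold the PLS score -> class
    print("fraction correct:", (y_hat == y.astype(bool)).mean())
    ```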

  3. Role of tunnelling in complete and incomplete fusion induced by 9Be on 169Tm and 187Re targets at around barrier energies

    NASA Astrophysics Data System (ADS)

    Kharab, Rajesh; Chahal, Rajiv; Kumar, Rajiv

    2017-04-01

    We have analyzed the complete and incomplete fusion excitation functions for the 9Be + 169Tm and 9Be + 187Re reactions at around-barrier energies using the code PLATYPUS, which is based on a classical dynamical model. A quantum mechanical tunnelling correction is incorporated at near- and sub-barrier energies, which significantly improves the agreement between the data and the predictions.
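
    The abstract does not state the exact form of the tunnelling correction; a standard choice for adding quantum tunnelling to a classical fusion model is the Hill-Wheeler transmission coefficient through an inverted-parabola barrier of height V_B and curvature hbar*omega (our addition, for orientation only):

    ```latex
    T_{\ell}(E) \;=\; \left[\, 1 + \exp\!\left( \frac{2\pi \left( V_{B,\ell} - E \right)}{\hbar\, \omega_{\ell}} \right) \right]^{-1}
    ```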

  4. Improved packing of protein side chains with parallel ant colonies

    PubMed Central

    2014-01-01

    Introduction The accurate packing of protein side chains is important for many computational biology problems, such as ab initio protein structure prediction, homology modelling, and protein design and ligand docking applications. Many existing solutions model it as a computational optimisation problem. Beyond the design of search algorithms, most solutions suffer from inaccurate energy functions for judging whether a prediction is good or bad. Even if the search has found the lowest energy, there is no certainty of obtaining the protein structures with correct side chains. Methods We present a side-chain modelling method, pacoPacker, which uses a parallel ant colony optimisation strategy based on sharing a single pheromone matrix. This parallel approach combines different sources of energy functions and generates protein side-chain conformations with the lowest energies jointly determined by the various energy functions. We further optimised the selected rotamers by rotamer minimisation to construct subrotamers, which reasonably mitigates the discreteness of the rotamer library. Results We focused on improving the accuracy of side-chain conformation prediction. For a testing set of 442 proteins, 87.19% of χ1 and 77.11% of χ1+2 angles were predicted correctly within 40° of the X-ray positions. We compared the accuracy of pacoPacker with state-of-the-art methods, such as CIS-RR and SCWRL4. We analysed the results from different perspectives, in terms of whole protein chains and individual residues. In this comprehensive benchmark testing, for 51.5% of proteins up to 400 amino acids in length, the predictions of pacoPacker were superior to the results of CIS-RR and SCWRL4 simultaneously. Finally, we also showed the advantage of using the subrotamer strategy. All results confirmed that our parallel approach is competitive with state-of-the-art solutions for packing side chains. Conclusions This parallel approach combines various sources of search intelligence and energy functions to pack protein side chains. It provides a framework for combining different imperfect-but-informative objective functions by designing parallel heuristic search algorithms. PMID:25474164

  5. Short-Range prediction of a Mediterranean Severe weather event using EnKF: Configuration tests

    NASA Astrophysics Data System (ADS)

    Carrio Carrio, Diego Saul; Homar Santaner, Víctor

    2014-05-01

    On the afternoon of 4 October 2007, severe damaging winds and torrential rainfall affected the island of Mallorca. This storm produced F2-F3 tornadoes in the vicinity of Palma, with one person killed and estimated damages to property exceeding 10 M€. Several studies have analysed the meteorological context in which this episode unfolded, describing the formation of a train of multiple thunderstorms along a warm front and the evolution of a squall line organized from convective activity initiated offshore of Murcia during that morning. Couhet et al. (2011) attributed the correct simulation of the convective system, and particularly its organization as a squall line, to the correct representation of a low-level convergence line over the Alboran Sea during the first hours of the day. The numerical prediction of mesoscale phenomena that initiate, organize and evolve over the sea is an extremely demanding challenge of great importance for coastal regions. In this study, we investigate the skill of a mesoscale ensemble data assimilation system in predicting the severe phenomena that occurred on 4 October 2007. We use an Ensemble Kalman Filter which assimilates conventional (surface, radiosonde and AMDAR) data using the DART implementation from NCAR. First, we analyse the potential of the assimilation cycle to advect critical observational data towards decisive data-void areas over the sea. Furthermore, we assess the sensitivity of the ensemble products to the ensemble size, grid resolution, assimilation period and physics diversity in the mesoscale model. In particular, we focus on the effect of these numerical configurations on the representation of the convective activity and the precipitation field, as valuable predictands of high-impact weather. Results show that the 6-h EnKF assimilation period produces initial fields that successfully represent the environment in which initiation occurred, and thus the derived numerical predictions render improved evolutions of the squall line. Synthetic maps of severe convective risk reveal the improved predictability of the event using the EnKF as opposed to deterministic or downscaled configurations. Discussion of further improvements to the forecasting system is provided.

  6. Comparative Study of Multiplet Structures of Mn4+ in K2SiF6, K2GeF6, and K2TiF6 Based on First-Principles Configuration-Interaction Calculations

    NASA Astrophysics Data System (ADS)

    Novita, Mega; Ogasawara, Kazuyoshi

    2012-02-01

    We performed first-principles configuration-interaction calculations of multiplet energies for Mn4+ in K2SiF6, K2GeF6, and K2TiF6 crystals. The results indicate that corrections based on a single-electron calculation are effective for the prediction of 4A2 → 4T2 and 4A2 → 4T1a transition energies, while such corrections are not necessary for the prediction of the 4A2 → 2E transition energy. The cluster size dependence of the multiplet energies is small. However, the 4A2 → 2E transition energy is slightly improved by using larger clusters including K ions. The theoretical multiplet energies are improved further by considering the lattice relaxation effect. As a result, the characteristic multiplet energy shifts depending on the host crystal are well reproduced without using any empirical parameters. Although K2GeF6 and K2TiF6 have lower symmetry than K2SiF6, the results indicate that the variation of the multiplet energy is mainly determined by the Mn-F bond length.

  7. Off the beaten path: a new approach to realistically model the orbital decay of supermassive black holes in galaxy formation simulations

    NASA Astrophysics Data System (ADS)

    Tremmel, M.; Governato, F.; Volonteri, M.; Quinn, T. R.

    2015-08-01

    We introduce a sub-grid force correction term to better model the dynamical friction experienced by a supermassive black hole (SMBH) as it orbits within its host galaxy. This new approach accurately follows an SMBH's orbital decay and drastically improves over commonly used 'advection' methods. The force correction introduced here naturally scales with the force resolution of the simulation and converges as resolution is increased. In controlled experiments, we show how the orbital decay of the SMBH closely follows analytical predictions when particle masses are significantly smaller than that of the SMBH. In a cosmological simulation of the assembly of a small galaxy, we show how our method allows for realistic black hole orbits. This approach overcomes the limitations of the advection scheme, where black holes are rapidly and artificially pushed towards the halo centre and then forced to merge, regardless of their orbits. We find that SMBHs from merging dwarf galaxies can spend significant time away from the centre of the remnant galaxy. Improving the modelling of SMBH orbital decay will help in making robust predictions of the growth, detectability and merger rates of SMBHs, especially at low galaxy masses or at high redshift.
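
    The analytical predictions referred to are presumably of the Chandrasekhar dynamical-friction type; for a locally Maxwellian background of density rho and velocity dispersion sigma, with X = v / (sqrt(2) sigma), the standard formula reads (our addition for orientation, not quoted from the paper):

    ```latex
    \frac{d\mathbf{v}}{dt} \;=\; -\,\frac{4\pi G^{2} M_{\bullet}\, \rho\, \ln\Lambda}{v^{3}}
    \left[ \operatorname{erf}(X) - \frac{2X}{\sqrt{\pi}}\, e^{-X^{2}} \right] \mathbf{v}
    ```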

  8. Ionospheric Correction Based on Ingestion of Global Ionospheric Maps into the NeQuick 2 Model

    PubMed Central

    Yu, Xiao; She, Chengli; Zhen, Weimin; Bruno, Nava; Liu, Dun; Yue, Xinan; Ou, Ming; Xu, Jisheng

    2015-01-01

    The global ionospheric maps (GIMs), generated by the Jet Propulsion Laboratory (JPL) and the Center for Orbit Determination in Europe (CODE) over a period of more than 13 years, have been adopted as the primary source of data to provide global ionospheric corrections for possible single-frequency positioning applications. The investigation aims to assess the performance of the new NeQuick model, NeQuick 2, in predicting global total electron content (TEC) through ingesting the GIMs data from the previous day(s). The results show good performance of the GIMs-driven NeQuick model, with on average 86% of vertical TEC errors less than 10 TECU when the global daily effective ionization index (Az) is represented as a second-order polynomial in modified dip latitude (MODIP). The performance of the GIMs-driven NeQuick model varies with solar activity and is better during low-solar-activity years. The accuracy of TEC prediction can be improved further by using a four-coefficient functional expression of Az versus MODIP. As more measurements from earlier days are involved in the Az optimization procedure, the accuracy may decrease. The results also reveal that more efforts are needed to improve the capability of NeQuick 2 to represent the ionosphere in the equatorial and high-latitude regions. PMID:25815369
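
    The ingestion step reduces, at its core, to fitting the effective ionization index against MODIP and then evaluating the fitted polynomial wherever NeQuick 2 needs a driver. A minimal sketch with made-up numbers:

    ```python
    import numpy as np

    # Daily Az values derived from GIM TEC at a few MODIP latitudes
    modip = np.array([-60.0, -30.0, 0.0, 30.0, 60.0])   # degrees (illustrative)
    az = np.array([95.0, 120.0, 150.0, 125.0, 100.0])   # illustrative units

    coeffs = np.polyfit(modip, az, deg=2)   # second-order polynomial fit
    az_model = np.poly1d(coeffs)
    print(az_model(15.0))  # Az driver for NeQuick 2 at MODIP = 15 degrees
    ```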

  9. Anthropometry-corrected exposure modeling as a method to improve trunk posture assessment with a single inclinometer.

    PubMed

    Van Driel, Robin; Trask, Catherine; Johnson, Peter W; Callaghan, Jack P; Koehoorn, Mieke; Teschke, Kay

    2013-01-01

    Measuring trunk posture in the workplace commonly involves subjective observation or self-report methods or the use of costly and time-consuming motion analysis systems (current gold standard). This work compared trunk inclination measurements using a simple data-logging inclinometer with trunk flexion measurements using a motion analysis system, and evaluated adding measures of subject anthropometry to exposure prediction models to improve the agreement between the two methods. Simulated lifting tasks (n=36) were performed by eight participants, and trunk postures were simultaneously measured with each method. There were significant differences between the two methods, with the inclinometer initially explaining 47% of the variance in the motion analysis measurements. However, adding one key anthropometric parameter (lower arm length) to the inclinometer-based trunk flexion prediction model reduced the differences between the two systems and accounted for 79% of the motion analysis method's variance. Although caution must be applied when generalizing lower-arm length as a correction factor, the overall strategy of anthropometric modeling is a novel contribution. In this lifting-based study, by accounting for subject anthropometry, a single, simple data-logging inclinometer shows promise for trunk posture measurement and may have utility in larger-scale field studies where similar types of tasks are performed.

  10. Noninvasive prediction of shunt operation outcome in idiopathic normal pressure hydrocephalus

    PubMed Central

    Aoki, Yasunori; Kazui, Hiroaki; Tanaka, Toshihisa; Ishii, Ryouhei; Wada, Tamiki; Ikeda, Shunichiro; Hata, Masahiro; Canuet, Leonides; Katsimichas, Themistoklis; Musha, Toshimitsu; Matsuzaki, Haruyasu; Imajo, Kaoru; Kanemoto, Hideki; Yoshida, Tetsuhiko; Nomura, Keiko; Yoshiyama, Kenji; Iwase, Masao; Takeda, Masatoshi

    2015-01-01

    Idiopathic normal pressure hydrocephalus (iNPH) is a syndrome characterized by gait disturbance, cognitive deterioration and urinary incontinence in elderly individuals. These symptoms can be improved by shunt operation in some but not all patients. Therefore, discovering predictive factors for the surgical outcome is of great clinical importance. We used normalized power variance (NPV) of electroencephalography (EEG) waves, a sensitive measure of the instability of cortical electrical activity, and found significantly higher NPV in beta frequency band at the right fronto-temporo-occipital electrodes (Fp2, T4 and O2) in shunt responders compared to non-responders. By utilizing these differences, we were able to correctly identify responders and non-responders to shunt operation with a positive predictive value of 80% and a negative predictive value of 88%. Our findings indicate that NPV can be useful in noninvasively predicting the clinical outcome of shunt operation in patients with iNPH. PMID:25585705

  11. Tropical forecasting - Predictability perspective

    NASA Technical Reports Server (NTRS)

    Shukla, J.

    1989-01-01

    Results are presented of classical predictability studies and forecast experiments with observed initial conditions to show the nature of initial error growth and final error equilibration for the tropics and midlatitudes, separately. It is found that the theoretical upper limit of tropical circulation predictability is far less than for midlatitudes. The error growth for a complete general circulation model is compared to a dry version of the same model in which there is no prognostic equation for moisture, and diabatic heat sources are prescribed. It is found that the growth rate of synoptic-scale errors for the dry model is significantly smaller than for the moist model, suggesting that the interactions between dynamics and moist processes are among the important causes of atmospheric flow predictability degradation. Results are then presented of numerical experiments showing that correct specification of the slowly varying boundary condition of SST produces significant improvement in the prediction of time-averaged circulation and rainfall over the tropics.

  12. Spatial measurement error and correction by spatial SIMEX in linear regression models when using predicted air pollution exposures.

    PubMed

    Alexeeff, Stacey E; Carroll, Raymond J; Coull, Brent

    2016-04-01

    Spatial modeling of air pollution exposures is widespread in air pollution epidemiology research as a way to improve exposure assessment. However, there are key sources of exposure model uncertainty when air pollution is modeled, including estimation error and model misspecification. We examine the use of predicted air pollution levels in linear health effect models under a measurement error framework. For the prediction of air pollution exposures, we consider a universal Kriging framework, which may include land-use regression terms in the mean function and a spatial covariance structure for the residuals. We derive the bias induced by estimation error and by model misspecification in the exposure model, and we find that a misspecified exposure model can induce asymptotic bias in the effect estimate of air pollution on health. We propose a new spatial simulation extrapolation (SIMEX) procedure, and we demonstrate that the procedure has good performance in correcting this asymptotic bias. We illustrate spatial SIMEX in a study of air pollution and birthweight in Massachusetts. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
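
    Spatial SIMEX builds on the classical simulation-extrapolation idea: refit the health model after adding extra measurement error at increasing multiples lambda of the error variance, then extrapolate the coefficient back to lambda = -1 (no error). A minimal non-spatial sketch follows; the grid, replicate count and quadratic extrapolant are illustrative choices, and the paper's procedure simulates spatially structured errors rather than the iid noise used here.

    ```python
    import numpy as np

    def simex_slope(w, y, sigma_u, lambdas=(0.5, 1.0, 1.5, 2.0), B=200, seed=0):
        """SIMEX-corrected slope of y ~ w, where w measures the true
        exposure with known iid measurement-error SD sigma_u."""
        rng = np.random.default_rng(seed)
        lam_grid, slopes = [0.0], [np.polyfit(w, y, 1)[0]]  # naive fit
        for lam in lambdas:
            b = [np.polyfit(w + rng.normal(0.0, np.sqrt(lam) * sigma_u, w.size),
                            y, 1)[0] for _ in range(B)]
            lam_grid.append(lam)
            slopes.append(np.mean(b))
        quad = np.poly1d(np.polyfit(lam_grid, slopes, 2))
        return quad(-1.0)  # extrapolate back to the error-free case

    rng = np.random.default_rng(1)
    x = rng.normal(0.0, 1.0, 2000)          # true exposure
    w = x + rng.normal(0.0, 0.5, 2000)      # error-prone exposure surrogate
    y = x + rng.normal(0.0, 0.3, 2000)      # outcome, true slope 1.0
    print(np.polyfit(w, y, 1)[0], simex_slope(w, y, 0.5))  # attenuated vs SIMEX
    ```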

  13. Mass spectrometric measurements of the freestream composition in the T4 free-piston shock-tunnel

    NASA Astrophysics Data System (ADS)

    Boyce, R. R.; Takahashi, M.; Stalker, R. J.

    2005-12-01

    The freestream composition is an important parameter in ground-based aerodynamic testing, and direct measurement of it is very important. This paper reports extensive composition measurements in the freestream of the T4 free-piston shock-tunnel, employing a recently improved time-of-flight mass spectrometer. A wide range of nozzle reservoir conditions were used. The results show good agreement between measured and theoretical values for nitric oxide over the entire enthalpy range reported (2.5-13 MJ/kg). This provides confidence that the chemistry model is correctly predicting sudden freezing of NO in the nozzle expansion. On the other hand, no monatomic species have been measured other than those produced by dissociative ionisation within the mass spectrometer, even at flow conditions where significant freestream dissociation is expected. Furthermore, excess diatomic oxygen is detected at high enthalpies. These observations are consistent with the possibility that oxygen recombination is not correctly predicted in the nozzle expansion, with sudden freezing occurring significantly later than predicted. However, the observations are also consistent with possible catalytic recombination in the skimmer system. The possibility of producing an empirical correlation between the freestream composition and the reservoir entropy has also been observed.

  14. Innovation in prediction planning for anterior open bite correction.

    PubMed

    Almuzian, Mohammed; Almukhtar, Anas; O'Neil, Michael; Benington, Philip; Al Anezi, Thamer; Ayoub, Ashraf

    2015-05-01

    This study applies recent advances in 3D virtual imaging to the prediction planning of dentofacial deformities. Stereo-photogrammetry has been used to create virtual and physical models, which are creatively combined in planning the surgical correction of anterior open bite. The application of these novel methods is demonstrated through the surgical correction of a case.

  15. I-TASSER: fully automated protein structure prediction in CASP8.

    PubMed

    Zhang, Yang

    2009-01-01

    The I-TASSER algorithm for 3D protein structure prediction was tested in CASP8, with the procedure fully automated in both the Server and Human sections. The quality of the server models is close to that of the human ones, but the human predictions incorporate more diverse templates from other servers, which improves the human predictions for some of the distant-homology targets. For the first time, sequence-based contact predictions from machine learning techniques were found helpful for both template-based modeling (TBM) and template-free modeling (FM). In TBM, although the accuracy of the sequence-based contact predictions is on average lower than that of template-based ones, the novel contacts in the sequence-based predictions, which are complementary to the threading templates in the weakly aligned or unaligned regions, are important for improving the global and local packing in these regions. Moreover, the newly developed atomic structural refinement algorithm was tested in CASP8 and found to improve the hydrogen-bonding networks and the overall TM-score, mainly due to its ability to remove steric clashes, which allows the models to be generated from cluster centroids. Nevertheless, one of the major issues of the I-TASSER pipeline is model selection, where the best models cannot be appropriately recognized when the correct templates are detected by only a minority of the threading algorithms. There are also problems related to domain splitting and mirror-image recognition, which mainly influence the performance of I-TASSER modeling in the FM-based structure predictions. Copyright 2009 Wiley-Liss, Inc.

  16. A deep learning framework for improving long-range residue-residue contact prediction using a hierarchical strategy.

    PubMed

    Xiong, Dapeng; Zeng, Jianyang; Gong, Haipeng

    2017-09-01

    Residue-residue contacts are of great value for protein structure prediction, since contact information, especially from long-range residue pairs, can significantly reduce the complexity of conformational sampling in practice. Despite progress in the past decade on protein targets with abundant homologous sequences, accurate contact prediction for proteins with limited sequence information is still far from satisfactory, and methodologies for these hard targets need further improvement. We present a computational program, DeepConPred, which includes a pipeline of two novel deep-learning-based methods (DeepCCon and DeepRCon) as well as a contact refinement step, to improve the prediction of long-range residue contacts from primary sequences. Compared with previous prediction approaches, our framework employs an effective scheme to identify optimal and important features for contact prediction, and is trained only with coevolutionary information derived from a limited number of homologous sequences to ensure robustness and usefulness for hard targets. Independent tests showed that 59.33%/49.97%, 64.39%/54.01% and 70.00%/59.81% of the top L/5, top L/10 and top 5 predictions were correct for CASP10/CASP11 proteins, respectively. In general, our algorithm ranked as one of the best methods for CASP targets. All source data and code are available at http://166.111.152.91/Downloads.html . Contact: hgong@tsinghua.edu.cn or zengjy321@tsinghua.edu.cn. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
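
    The quoted accuracies are standard top-k precisions: rank all residue pairs by predicted contact probability, keep the top L/5 (or L/10, or 5), and count the fraction that are true contacts. A minimal sketch (function name and toy data are ours):

    ```python
    def top_k_precision(scores, true_contacts, L, frac=5):
        """Precision of the top-L/frac predicted contacts (CASP-style).
        scores: dict {(i, j): predicted contact probability};
        true_contacts: set of native-contact pairs; L: sequence length."""
        k = max(1, L // frac)
        ranked = sorted(scores, key=scores.get, reverse=True)[:k]
        return sum(pair in true_contacts for pair in ranked) / k

    scores = {(1, 30): 0.9, (2, 40): 0.8, (5, 50): 0.2, (3, 33): 0.7}
    print(top_k_precision(scores, {(1, 30), (3, 33)}, L=20))  # -> 0.5
    ```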

  17. An Improved Method for TAL Effectors DNA-Binding Sites Prediction Reveals Functional Convergence in TAL Repertoires of Xanthomonas oryzae Strains

    PubMed Central

    Pérez-Quintero, Alvaro L.; Rodriguez-R, Luis M.; Dereeper, Alexis; López, Camilo; Koebnik, Ralf; Szurek, Boris; Cunnac, Sebastien

    2013-01-01

    Transcription Activator-Like Effectors (TALEs) belong to a family of virulence proteins from the Xanthomonas genus of bacterial plant pathogens that are translocated into the plant cell. In the nucleus, TALEs act as transcription factors inducing the expression of susceptibility genes. A code for TALE-DNA binding specificity and high-resolution three-dimensional structures of TALE-DNA complexes were recently reported. Accurate prediction of TAL Effector Binding Elements (EBEs) is essential to elucidate the biological functions of the many sequenced TALEs as well as for robust design of artificial TALE DNA-binding domains in biotechnological applications. In this work, a program with improved EBE prediction performance was developed using an updated specificity matrix and a position weight correction function to account for the matching pattern observed in a validation set of TALE-DNA interactions. To gain a systems perspective on the large TALE repertoires of X. oryzae strains, this program was used to predict rice gene targets for 99 sequenced family members. Integrating predictions and available expression data in a TALE-gene network revealed multiple candidate transcriptional targets for many TALEs, as well as several possible instances of functional convergence among TALEs. PMID:23869221

  18. On-Line Fringe Tracking and Prediction at IOTA

    NASA Technical Reports Server (NTRS)

    Wilson, Edward; Mah, Robert; Lau, Sonie (Technical Monitor)

    1999-01-01

    The Infrared/Optical Telescope Array (IOTA) is a multi-aperture Michelson interferometer located on Mt. Hopkins near Tucson, Arizona. To enable viewing of fainter targets, an on-line fringe tracking system is presently under development at NASA Ames Research Center. The system has been developed off-line using actual data from IOTA, and is presently undergoing on-line implementation at IOTA. The system has two parts: (1) a fringe tracking system that identifies the center of a fringe packet by fitting a parametric model to the data; and (2) a fringe packet motion prediction system that uses characteristics of past fringe packets to predict fringe packet motion. Combined, this information will be used to optimize on-line the scanning trajectory, resulting in improved visibility of faint targets. Fringe packet identification is highly accurate and robust (99% of the 4000 fringe packets were identified correctly, the remaining 1% were either out of the scan range or too noisy to be seen) and is performed in 30-90 milliseconds on a Pentium II-based computer. Fringe packet prediction, currently performed using an adaptive linear predictor, delivers a 10% improvement over the baseline of predicting no motion.
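
    The abstract specifies only that an adaptive linear predictor forecasts fringe-packet motion; a minimal sketch of one such predictor, a normalized-LMS filter with illustrative order and step size, is:

    ```python
    import numpy as np

    def nlms_predict(centers, order=4, mu=0.5):
        """One-step-ahead prediction of fringe-packet centre positions
        with a normalized-LMS adaptive linear filter."""
        x = np.asarray(centers, dtype=float)
        w = np.zeros(order)
        preds = np.empty(len(x) - order)
        for n in range(order, len(x)):
            past = x[n - order:n][::-1]                   # newest sample first
            preds[n - order] = w @ past                   # predicted next centre
            err = x[n] - preds[n - order]
            w += mu * err * past / (past @ past + 1e-12)  # normalized LMS update
        return preds

    t = np.arange(200)
    centers = 50.0 + 5.0 * np.sin(2 * np.pi * t / 40)     # slow packet drift
    print(np.abs(nlms_predict(centers) - centers[4:]).mean())
    ```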

  19. A new method for the prediction of combustion instability

    NASA Astrophysics Data System (ADS)

    Flanagan, Steven Meville

    This dissertation presents a new approach to the prediction of combustion instability in solid rocket motors. Previous attempts at developing computational tools to solve this problem have been largely unsuccessful, showing very poor agreement with experimental results and having little or no predictive capability. This is due primarily to deficiencies in the linear stability theory upon which these efforts have been based. Recent advances in linear instability theory by Flandro have demonstrated the importance of including unsteady rotational effects, previously considered negligible. Previous versions of the theory also neglected corrections to the unsteady flow field of first order in the mean flow Mach number. This research explores the stability implications of extending the solution to include these corrections. The corrected linear stability theory, based upon a rotational unsteady flow field extended to first order in mean flow Mach number, has been implemented in two computer programs developed for the Macintosh platform. A quasi-one-dimensional version of the program is based upon an approximate solution to the cavity acoustics problem. The three-dimensional program applies Green's Function Discretization (GFD), a recently developed numerical method for finding fully three-dimensional solutions for this class of problems, to the solution for the acoustic mode shapes and frequency. The analysis of complex motor geometries, previously a tedious and time-consuming task, has also been greatly simplified through the development of a drawing package designed specifically to facilitate the specification of typical motor geometries. The combination of the drawing package, improved acoustic solutions, and new analysis results in a tool capable of producing more accurate and meaningful predictions than have been possible in the past.

  20. Multi-jet merged top-pair production including electroweak corrections

    NASA Astrophysics Data System (ADS)

    Gütschow, Christian; Lindert, Jonas M.; Schönherr, Marek

    2018-04-01

    We present theoretical predictions for the production of top-quark pairs in association with jets at the LHC, including electroweak (EW) corrections. First, we present and compare differential predictions at the fixed-order level for ttbar and ttbar+jet production at the LHC, considering the dominant NLO EW corrections of order O(α_s^2 α) and O(α_s^3 α), respectively, together with all additional subleading Born and one-loop contributions. The NLO EW corrections are enhanced at large energies and in particular alter the shape of the top transverse momentum distribution, whose reliable modelling is crucial for many searches for new physics at the energy frontier. Based on the fixed-order results, we motivate an approximation of the EW corrections valid at the percent level, which allows us to readily incorporate the EW corrections in the MePs@Nlo framework of Sherpa combined with OpenLoops. Subsequently, we present multi-jet merged parton-level predictions for inclusive top-pair production incorporating NLO QCD + EW corrections to ttbar and ttbar+jet. Finally, we compare at the particle level against a recent 8 TeV measurement of the top transverse momentum distribution performed by ATLAS in the lepton + jets channel. We find very good agreement between the Monte Carlo prediction and the data when the EW corrections are included.

  1. Intra-field on-product overlay improvement by application of RegC and TWINSCAN corrections

    NASA Astrophysics Data System (ADS)

    Sharoni, Ofir; Dmitriev, Vladimir; Graitzer, Erez; Perets, Yuval; Gorhad, Kujan; van Haren, Richard; Cekli, Hakki E.; Mulkens, Jan

    2015-03-01

    The on-product overlay specification and Advanced Process Control (APC) are getting extremely challenging, particularly after the introduction of multi-patterning applications like Spacer Assisted Double Patterning (SADP) and multi-patterning techniques like N-repetitive Litho-Etch steps (LEN, N >= 2). When the latter is considered, most of the intra-field overlay contributors drop out of the overlay budget. This is a direct consequence of the fact that the scanner settings (like dose, illumination settings, etc.) as well as the subsequent processing steps can be made very similar for two consecutive Litho-Etch layers. The major overlay contributor that may require additional attention is the Image Placement Error (IPE). When the inter-layer overlay is considered, controlling the intra-field overlay contribution gets more complicated. In addition to the IPE contribution, the TWINSCAN™ lens fingerprint in combination with the exposure settings is going to play a role as well. Generally speaking, two subsequent functional layers have different exposure settings, which results in an additional (non-reticle) overlay contribution. In this paper, we have studied the wafer overlay correction capability of RegC® in addition to the TWINSCAN™ intra-field corrections to improve the on-product overlay performance. RegC® is a reticle intra-volume laser writing technique that creates a predictable deformation element (the RegC® deformation element) inside the quartz (Qz) material of a reticle. This technique enables post-processing of an existing reticle to correct, for instance, for IPE. Alternatively, a pre-determined intra-field fingerprint can be added to the reticle such that it results in a straight field after exposure. This second application may be very powerful for correcting, for instance, (cold) lens fingerprints that cannot be corrected by the scanner itself. Another possible application is the intra-field processing fingerprint. One should realize that a RegC® treatment of a reticle generally results in a global distortion of the reticle. This is not a problem as long as these global distortions can be corrected by the TWINSCAN™ system (currently up to the third order). It is anticipated that the RegC® and TWINSCAN™ corrections act as complementary solutions. These solutions fit perfectly into the ASML Litho InSight (LIS) product, in which feedforward and feedback corrections based on YieldStar overlay measurements are used to improve the on-product overlay.

  2. Predicting Multicomponent Adsorption Isotherms in Open-Metal Site Materials Using Force Field Calculations Based on Energy Decomposed Density Functional Theory.

    PubMed

    Heinen, Jurn; Burtch, Nicholas C; Walton, Krista S; Fonseca Guerra, Célia; Dubbeldam, David

    2016-12-12

    For the design of adsorptive-separation units, knowledge of the multicomponent adsorption behavior is required. Ideal adsorbed solution theory (IAST) breaks down for olefin adsorption in open-metal site (OMS) materials due to non-ideal donor-acceptor interactions. Using a density-functional-theory-based energy decomposition scheme, we develop a physically justifiable classical force field that incorporates the missing orbital interactions using an appropriate functional form. Our first-principles-derived force field shows greatly improved quantitative agreement with the inflection points, initial uptake, saturation capacity, and enthalpies of adsorption obtained from our in-house adsorption experiments. While IAST fails to make accurate predictions, our improved force field model correctly predicts the multicomponent behavior. Our approach is also transferable to other OMS structures, allowing the accurate study of their separation performance for olefins/paraffins and further mixtures involving complex donor-acceptor interactions. © 2016 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  3. Impacts of Interannual Climate Variability on Agricultural and Marine Ecosystems

    NASA Technical Reports Server (NTRS)

    Cane, M. A.; Zebiak, S.; Kaplan, A.; Chen, D.

    2001-01-01

    The El Nino - Southern Oscillation (ENSO) is the dominant mode of global interannual climate variability, and seems to be the only mode for which current prediction methods are more skillful than climatology or persistence. The Zebiak and Cane intermediate coupled ocean-atmosphere model has been in use for ENSO prediction for more than a decade, with notable success. However, the sole dependence of its original and improved initialization schemes on wind fields derived from merchant ship observations proved to be a liability during the 1997/1998 El Nino event: the deficiencies of the wind observations prevented the oceanic component of the model from reaching a realistic state during the year prior to the event, and the forecast failed. Our work on the project concentrated on the use of satellite data for improving various stages of the ENSO prediction technology: model initialization, bias correction, and data assimilation. Close collaboration with other teams of the IDS project was maintained throughout.

  4. Impact of chemical lateral boundary conditions in a regional air quality forecast model on surface ozone predictions during stratospheric intrusions

    NASA Astrophysics Data System (ADS)

    Pendlebury, Diane; Gravel, Sylvie; Moran, Michael D.; Lupu, Alexandru

    2018-02-01

    A regional air quality forecast model, GEM-MACH, is used to examine the conditions under which a limited-area air quality model can accurately forecast near-surface ozone concentrations during stratospheric intrusions. Periods in 2010 and 2014 with known stratospheric intrusions over North America were modelled using four different ozone lateral boundary conditions obtained from a seasonal climatology, a dynamically-interpolated monthly climatology, global air quality forecasts, and global air quality reanalyses. It is shown that the mean bias and correlation in surface ozone over the course of a season can be improved by using time-varying ozone lateral boundary conditions, particularly through the correct assignment of stratospheric vs. tropospheric ozone along the western lateral boundary (for North America). Part of the improvement in surface ozone forecasts results from improvements in the characterization of near-surface ozone along the lateral boundaries that then directly impact surface locations near the boundaries. However, there is an additional benefit from the correct characterization of the location of the tropopause along the western lateral boundary such that the model can correctly simulate stratospheric intrusions and their associated exchange of ozone from stratosphere to troposphere. Over a three-month period in spring 2010, the mean bias was seen to improve by as much as 5 ppbv and the correlation by 0.1 depending on location, and on the form of the chemical lateral boundary condition.

  5. Regional Climate Simulations over North America: Interaction of Local Processes with Improved Large-Scale Flow.

    NASA Astrophysics Data System (ADS)

    Miguez-Macho, Gonzalo; Stenchikov, Georgiy L.; Robock, Alan

    2005-04-01

    The reasons for biases in regional climate simulations were investigated in an attempt to discern whether they arise from deficiencies in the model parameterizations or are due to dynamical problems. Using the Regional Atmospheric Modeling System (RAMS) forced by the National Centers for Environmental Prediction-National Center for Atmospheric Research reanalysis, the detailed climate over North America at 50-km resolution for June 2000 was simulated. First, the RAMS equations were modified to make them applicable to a large region, and its turbulence parameterization was corrected. The initial simulations showed large biases in the location of precipitation patterns and in surface air temperatures. By implementing higher-resolution soil data, soil moisture and soil temperature initialization, and corrections to the Kain-Fritsch convective scheme, the temperature biases and precipitation amount errors could be removed, but the precipitation location errors remained. The precipitation location biases could only be improved by implementing spectral nudging of the large-scale (wavelength of 2500 km) dynamics in RAMS. This corrected the circulation errors produced by interactions and reflection of the internal domain dynamics with the lateral boundaries, where the model was forced by the reanalysis.

  6. Bias-adjusted satellite-based rainfall estimates for predicting floods: Narayani Basin

    USGS Publications Warehouse

    Shrestha, M.S.; Artan, G.A.; Bajracharya, S.R.; Gautam, D.K.; Tokar, S.A.

    2011-01-01

    In Nepal, as the spatial distribution of rain gauges is not sufficient to provide a detailed perspective on the highly varied spatial nature of rainfall, satellite-based rainfall estimates provide the opportunity for timely estimation. This paper presents the flood prediction of the Narayani Basin at the Devghat hydrometric station (32 000 km2) using bias-adjusted satellite rainfall estimates and the Geospatial Stream Flow Model (GeoSFM), a spatially distributed, physically based hydrologic model. The GeoSFM with gridded gauge-observed rainfall inputs using kriging interpolation from 2003 was used for calibration and 2004 for validation to simulate stream flow, with both having a Nash-Sutcliffe Efficiency of above 0.7. With the National Oceanic and Atmospheric Administration Climate Prediction Centre's rainfall estimates (CPC-RFE2.0), using the same calibrated parameters, for 2003 the model performance deteriorated but improved after recalibration with CPC-RFE2.0, indicating the need to recalibrate the model with satellite-based rainfall estimates. By adjusting the CPC-RFE2.0 by a seasonal, monthly and 7-day moving average ratio, improvement in model performance was achieved. Furthermore, a new gauge-satellite merged rainfall estimate obtained from ingestion of local rain gauge data resulted in significant improvement in flood predictability. The results indicate the applicability of satellite-based rainfall estimates in flood prediction with appropriate bias correction. © 2011 The Authors. Journal of Flood Risk Management © 2011 The Chartered Institution of Water and Environmental Management.

  7. Bias-adjusted satellite-based rainfall estimates for predicting floods: Narayani Basin

    USGS Publications Warehouse

    Artan, Guleid A.; Tokar, S.A.; Gautam, D.K.; Bajracharya, S.R.; Shrestha, M.S.

    2011-01-01

    In Nepal, as the spatial distribution of rain gauges is not sufficient to provide a detailed perspective on the highly varied spatial nature of rainfall, satellite-based rainfall estimates provide the opportunity for timely estimation. This paper presents the flood prediction of the Narayani Basin at the Devghat hydrometric station (32 000 km2) using bias-adjusted satellite rainfall estimates and the Geospatial Stream Flow Model (GeoSFM), a spatially distributed, physically based hydrologic model. The GeoSFM with gridded gauge-observed rainfall inputs using kriging interpolation from 2003 was used for calibration and 2004 for validation to simulate stream flow, with both having a Nash-Sutcliffe Efficiency of above 0.7. With the National Oceanic and Atmospheric Administration Climate Prediction Centre's rainfall estimates (CPC_RFE2.0), using the same calibrated parameters, for 2003 the model performance deteriorated but improved after recalibration with CPC_RFE2.0, indicating the need to recalibrate the model with satellite-based rainfall estimates. By adjusting the CPC_RFE2.0 by a seasonal, monthly and 7-day moving average ratio, improvement in model performance was achieved. Furthermore, a new gauge-satellite merged rainfall estimate obtained from ingestion of local rain gauge data resulted in significant improvement in flood predictability. The results indicate the applicability of satellite-based rainfall estimates in flood prediction with appropriate bias correction.
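
    Records 6 and 7 describe the same study; the ratio-type bias adjustment they mention can be sketched as below, multiplying the satellite estimate by a moving-window gauge/satellite ratio. The window choice, the epsilon guard and the data are illustrative assumptions.

    ```python
    import numpy as np
    import pandas as pd

    def bias_adjust(sat, gauge, window="7D"):
        """Adjust daily satellite rainfall by a moving-average
        gauge/satellite ratio; sat and gauge are date-indexed Series."""
        eps = 0.1  # mm, guards against division by zero during dry spells
        ratio = (gauge.rolling(window).mean() + eps) / \
                (sat.rolling(window).mean() + eps)
        return sat * ratio

    idx = pd.date_range("2003-06-01", periods=90, freq="D")
    gauge = pd.Series(np.random.default_rng(3).gamma(2.0, 5.0, 90), index=idx)
    sat = 0.7 * gauge                      # satellite low by 30% (synthetic)
    print(sat.mean(), bias_adjust(sat, gauge).mean(), gauge.mean())
    ```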

  8. The prediction of resting energy expenditure in type 2 diabetes mellitus is improved by factoring for glycemia.

    PubMed

    Gougeon, R; Lamarche, M; Yale, J-F; Venuta, T

    2002-12-01

    Predictive equations have been reported to overestimate resting energy expenditure (REE) in obese persons. The presence of hyperglycemia results in elevated REE in obese persons with type 2 diabetes, and its effect on the validity of these equations is unknown. We tested whether (1) indicators of diabetes control were independent associates of REE in type 2 diabetes and (2) their inclusion would improve predictive equations. We performed a cross-sectional study of 65 obese type 2 diabetic subjects (25 men, 40 women). The variables measured were: REE by ventilated-hood indirect calorimetry, body composition by bioimpedance analysis, body circumferences, fasting plasma glucose (FPG) and hemoglobin A(1c). Data were analyzed using stepwise multiple linear regression. REE, corrected for weight, fat-free mass, age and gender, was significantly greater with FPG > 10 mmol/l (P=0.017) and correlated with FPG (P=0.013) and with hemoglobin A(1c) as a percentage of the upper limit of normal (P=0.02). Weight was the main determinant of REE; together with hip circumference and FPG, it explained 81% of the variation. FPG improved the predictability of the equation by >3%, and with poor glycemic control it can represent an increase in REE of up to 8%. Our data indicate that in a population of obese subjects with type 2 diabetes mellitus, REE is better predicted when fasting plasma glucose is included as a variable.
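
    The final model has the form of an ordinary multiple linear regression of REE on weight, hip circumference and FPG. A sketch of fitting such an equation (all numbers are synthetic, not the study's coefficients):

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    n = 65
    weight = rng.normal(100.0, 15.0, n)   # kg
    hip = rng.normal(120.0, 10.0, n)      # cm
    fpg = rng.normal(9.0, 3.0, n)         # mmol/l
    ree = 500 + 12*weight + 3*hip + 25*fpg + rng.normal(0.0, 80.0, n)

    X = np.column_stack([np.ones(n), weight, hip, fpg])
    beta, *_ = np.linalg.lstsq(X, ree, rcond=None)   # ordinary least squares
    print("intercept, b_weight, b_hip, b_FPG:", np.round(beta, 1))
    ```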

  9. Examination of multi-model ensemble seasonal prediction methods using a simple climate system

    NASA Astrophysics Data System (ADS)

    Kang, In-Sik; Yoo, Jin Ho

    2006-02-01

    A simple climate model was designed as a proxy for the real climate system, and a number of prediction models were generated by slightly perturbing the physical parameters of the simple model. A set of long (240 years) historical hindcast predictions were performed with various prediction models, which are used to examine various issues of multi-model ensemble seasonal prediction, such as the best ways of blending multi-models and the selection of models. Based on these results, we suggest a feasible way of maximizing the benefit of using multi models in seasonal prediction. In particular, three types of multi-model ensemble prediction systems, i.e., the simple composite, superensemble, and the composite after statistically correcting individual predictions (corrected composite), are examined and compared to each other. The superensemble has more of an overfitting problem than the others, especially for the case of small training samples and/or weak external forcing, and the corrected composite produces the best prediction skill among the multi-model systems.
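
    The three blending schemes compared in the abstract can each be written down in a few lines; the sketch below uses synthetic hindcasts and fits in-sample for brevity (in the study the corrections are trained on part of the hindcast record, which is exactly where the superensemble's overfitting shows up).

    ```python
    import numpy as np

    def simple_composite(F):            # F: (n_models, n_times) hindcasts
        return F.mean(axis=0)

    def corrected_composite(F, obs):    # per-model linear correction, then mean
        Fc = np.array([np.polyval(np.polyfit(f, obs, 1), f) for f in F])
        return Fc.mean(axis=0)

    def superensemble(F, obs):          # multiple regression on all models
        X = np.column_stack([np.ones(F.shape[1]), F.T])
        beta, *_ = np.linalg.lstsq(X, obs, rcond=None)
        return X @ beta

    rng = np.random.default_rng(5)
    truth = rng.normal(0.0, 1.0, 240)   # 240-year proxy "observations"
    F = truth + rng.normal(0.0, 0.8, (6, 240)) + rng.normal(0.0, 0.5, (6, 1))
    for blend in (simple_composite(F), corrected_composite(F, truth),
                  superensemble(F, truth)):
        print(round(np.corrcoef(blend, truth)[0, 1], 3))
    ```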

  10. Towards automating measurements and predictions of Escherichia coli concentrations in the Cuyahoga River, Cuyahoga Valley National Park, Ohio, 2012–14

    USGS Publications Warehouse

    Brady, Amie M. G.; Meg B. Plona,

    2015-07-30

    A computer program was developed to manage the nowcasts by running the predictive models and posting the results to a publicly accessible Web site daily by 9 a.m. The nowcasts were able to correctly predict E. coli concentrations above or below the water-quality standard at Jaite for 79 percent of the samples compared with the measured concentrations. In comparison, the persistence model (using the previous day’s sample concentration) correctly predicted concentrations above or below the water-quality standard in only 68 percent of the samples. To determine if the Jaite nowcast could be used for the stretch of the river between Lock 29 and Jaite, the model predictions for Jaite were compared with the measured concentrations at Lock 29. The Jaite nowcast provided correct responses for 77 percent of the Lock 29 samples, which was a greater percentage than the percentage of correct responses (58 percent) from the persistence model at Lock 29.

  11. Index theorem and universality properties of the low-lying eigenvalues of improved staggered quarks.

    PubMed

    Follana, E; Hart, A; Davies, C T H

    2004-12-10

    We study various improved staggered quark Dirac operators on quenched gluon backgrounds in lattice QCD generated using a Symanzik-improved gluon action. We find a clear separation of the spectrum into would-be zero modes and others. The number of would-be zero modes depends on the topological charge as expected from the index theorem, and their chirality expectation value is large (approximately 0.7). The remaining modes have low chirality and show clear signs of clustering into quartets and approaching the random matrix theory predictions for all topological charge sectors. We conclude that improvement of the fermionic and gauge actions moves the staggered quarks closer to the continuum limit where they respond correctly to QCD topology.

  12. Calculation of the Respiratory Modulation of the Photoplethysmogram (DPOP) Incorporating a Correction for Low Perfusion

    PubMed Central

    Addison, Paul S.; Wang, Rui; McGonigle, Scott J.; Bergese, Sergio D.

    2014-01-01

    DPOP quantifies respiratory modulations in the photoplethysmogram. It has been proposed as a noninvasive surrogate for pulse pressure variation (PPV) used in the prediction of the response to volume expansion in hypovolemic patients. The correlation between DPOP and PPV may degrade due to low perfusion effects. We implemented an automated DPOP algorithm with an optional correction for low perfusion. These two algorithm variants (DPOPa and DPOPb) were tested on data from 20 mechanically ventilated OR patients, split into a benign “stable region” subset and a whole-record “global set.” Strong correlation was found between DPOP and PPV for both algorithms when applied to the stable data set: R = 0.83/0.85 for DPOPa/DPOPb. However, a marked improvement was found when applying the low perfusion correction to the global data set: R = 0.47/0.73 for DPOPa/DPOPb. Sensitivities, specificities, and AUCs were 0.86, 0.70, and 0.88 for DPOPa/stable region; 0.89, 0.82, and 0.92 for DPOPb/stable region; 0.81, 0.61, and 0.73 for DPOPa/global region; and 0.83, 0.76, and 0.86 for DPOPb/global region. An improvement was found in all results across both data sets when using the DPOPb algorithm. Further, DPOPb showed marked improvements, both in its values and in its correlation with PPV, for signals exhibiting low percent modulations. PMID:25177348
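
    For reference, the standard DPOP calculation over one respiratory cycle is short (a sketch; the low-perfusion correction itself is not specified in the abstract and is not reproduced here):

        import numpy as np

        def dpop(beat_amplitudes):
            """DPOP (%) from per-beat plethysmographic pulse amplitudes
            collected over one respiratory cycle (hypothetical input array)."""
            pop_max = float(np.max(beat_amplitudes))
            pop_min = float(np.min(beat_amplitudes))
            return 100.0 * (pop_max - pop_min) / ((pop_max + pop_min) / 2.0)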

  13. Improvement of analytical dynamic models using modal test data

    NASA Technical Reports Server (NTRS)

    Berman, A.; Wei, F. S.; Rao, K. V.

    1980-01-01

    A method developed to determine maximum changes in analytical mass and stiffness matrices to make them consistent with a set of measured normal modes and natural frequencies is presented. The corrected model will be an improved base for studies of physical changes, boundary condition changes, and for prediction of forced responses. The method features efficient procedures not requiring solutions of the eigenvalue problem, and the ability to have more degrees of freedom than the test data. In addition, modal displacements are obtained for all analytical degrees of freedom, and the frequency dependence of the coordinate transformations is properly treated.

  14. Individualized pharmacokinetic risk assessment for development of diabetes in high risk population.

    PubMed

    Gupta, N; Al-Huniti, N H; Veng-Pedersen, P

    2007-10-01

    The objective of this study is to propose a non-parametric pharmacokinetic prediction model that addresses the individualized risk of developing type-2 diabetes in subjects with a family history of type-2 diabetes. All 191 selected healthy subjects had two type-2 diabetic parents. Glucose was administered intravenously (0.5 g/kg body weight) and 13 blood samples taken at specified times were analyzed for plasma insulin and glucose concentrations. All subjects were followed for an average of 13-14 years for diabetic or normal (non-diabetic) outcome. The new logistic regression model predicts the development of diabetes based on body mass index and only one blood sample, taken at 90 min and analyzed for insulin concentration. Our model correctly identified 4.5 times more of the subjects who went on to develop diabetes (54% versus 11.6%) and more than twice as many of the subjects who did not (99% versus 46.4%) compared to current non-pharmacokinetic probability estimates for the development of type-2 diabetes. Our model can be useful for individualized prediction of the development of type-2 diabetes in subjects with a family history of the disease. This improved prediction may be an important mediating factor for better perception of risk and may result in improved intervention.
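
    The model's functional form is a plain logistic regression on two inputs; a sketch with placeholder coefficients (not the study's fitted values):

        import numpy as np

        def diabetes_risk(bmi, insulin_90min, coef=(-8.0, 0.15, 0.02)):
            """P(develop type-2 diabetes) from BMI and one 90-min insulin sample.
            The coefficients here are illustrative placeholders only."""
            b0, b1, b2 = coef
            z = b0 + b1 * bmi + b2 * insulin_90min
            return 1.0 / (1.0 + np.exp(-z))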

  15. Examination of Solar Cycle Statistical Model and New Prediction of Solar Cycle 23

    NASA Technical Reports Server (NTRS)

    Kim, Myung-Hee Y.; Wilson, John W.

    2000-01-01

    Sunspot numbers in the current solar cycle 23 were estimated using a statistical model with the accumulating cycle sunspot data, based on the odd-even behavior of historical sunspot cycles 1 to 22. Since cycle 23 has progressed and the solar minimum occurrence has been accurately defined, the statistical model is validated by comparing the previous prediction with the newly measured sunspot numbers, and an improved short-range sunspot projection is made accordingly. The current cycle is expected to have a moderate level of activity. Errors of this model are shown to be self-correcting as cycle observations become available.

  16. An approach to adjustment of relativistic mean field model parameters

    NASA Astrophysics Data System (ADS)

    Bayram, Tuncay; Akkoyun, Serkan

    2017-09-01

    The Relativistic Mean Field (RMF) model with a small number of adjusted parameters is a powerful tool for correct predictions of various ground-state properties of nuclei. Its success in describing nuclear properties is directly related to the adjustment of its parameters against experimental data. In the present study, the Artificial Neural Network (ANN) method, which mimics brain functionality, has been employed for improvement of the RMF model parameters. In particular, the ANN method's ability to capture the relations between the RMF model parameters and the model's predictions for the binding energies (BEs) of 58Ni and 208Pb has been found to be in agreement with literature values.

  17. Enhancing BEM simulations of a stalled wind turbine using a 3D correction model

    NASA Astrophysics Data System (ADS)

    Bangga, Galih; Hutomo, Go; Syawitri, Taurista; Kusumadewi, Tri; Oktavia, Winda; Sabila, Ahmad; Setiadi, Herlambang; Faisal, Muhamad; Hendranata, Yongki; Lastomo, Dwi; Putra, Louis; Kristiadi, Stefanus; Bumi, Ilmi

    2018-03-01

    Nowadays wind turbine rotors are usually equipped with pitch control mechanisms to avoid deep stall conditions. Despite that, wind turbines often operate under pitch-fault conditions, causing massive flow separation. Pure Blade Element Momentum (BEM) approaches are not designed for this situation, and inaccurate load predictions are to be expected. In the present study, BEM predictions are improved through the inclusion of a stall delay model for a wind turbine rotor operating under a pitch fault of -2.3° towards stall. The accuracy of the stall delay model is assessed by comparing the results with available Computational Fluid Dynamics (CFD) simulation data.
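
    The abstract does not name the stall-delay model used; as an illustration, a common Snel-type correction looks like this (Python sketch; the thin-airfoil inviscid lift slope is an assumption of the sketch):

        import math

        def snel_stall_delay(cl_2d, alpha_rad, chord, radius):
            """3-D corrected lift: cl_3d = cl_2d + 3*(c/r)^2 * (cl_inv - cl_2d),
            with the inviscid lift taken as cl_inv = 2*pi*alpha (alpha in rad)."""
            cl_inv = 2.0 * math.pi * alpha_rad
            return cl_2d + 3.0 * (chord / radius) ** 2 * (cl_inv - cl_2d)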

  18. Adaptive optics for peripheral vision

    NASA Astrophysics Data System (ADS)

    Rosén, R.; Lundström, L.; Unsbo, P.

    2012-07-01

    Understanding peripheral optical errors and their impact on vision is important for various applications, e.g. research on myopia development and optical correction of patients with central visual field loss. In this study, we investigated whether correction of higher order aberrations with adaptive optics (AO) improve resolution beyond what is achieved with best peripheral refractive correction. A laboratory AO system was constructed for correcting peripheral aberrations. The peripheral low contrast grating resolution acuity in the 20° nasal visual field of the right eye was evaluated for 12 subjects using three types of correction: refractive correction of sphere and cylinder, static closed loop AO correction and continuous closed loop AO correction. Running AO in continuous closed loop improved acuity compared to refractive correction for most subjects (maximum benefit 0.15 logMAR). The visual improvement from aberration correction was highly correlated with the subject's initial amount of higher order aberrations (p = 0.001, R 2 = 0.72). There was, however, no acuity improvement from static AO correction. In conclusion, correction of peripheral higher order aberrations can improve low contrast resolution, provided refractive errors are corrected and the system runs in continuous closed loop.

  19. SU-F-T-584: Investigating Correction Methods for Ion Recombination Effects in OCTAVIUS 1000 SRS Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Knill, C; Wayne State University School of Medicine, Detroit, MI; Snyder, M

    Purpose: PTW’s Octavius 1000 SRS array performs IMRT QA measurements with liquid-filled ionization chambers (LICs). Collection efficiencies of LICs have been shown to change during IMRT delivery as a function of LINAC pulse frequency and pulse dose, which affects QA results. In this study, two methods were developed to correct changes in collection efficiencies during IMRT QA measurements, and the effects of these corrections on QA pass rates were compared. Methods: For the first correction, Matlab software was developed that calculates pulse frequency and pulse dose for each detector, using measurement and DICOM RT Plan files. Pulse information is converted to collection efficiency, and measurements are corrected by multiplying detector dose by the ratio of calibration to measured collection efficiency. For the second correction, the MU/min in the daily 1000 SRS calibration was chosen to match the average MU/min of the VMAT plan. The usefulness of the derived corrections was evaluated using 6MV and 10FFF SBRT RapidArc plans delivered to the OCTAVIUS 4D system using a TrueBeam equipped with an HD-MLC. Effects of the two corrections on QA results were examined by performing 3D gamma analysis comparing predicted to measured dose, with and without corrections. Results: After the complex Matlab corrections, average 3D gamma pass rates improved by [0.07%, 0.40%, 1.17%] for 6MV and [0.29%, 1.40%, 4.57%] for 10FFF using [3%/3mm, 2%/2mm, 1%/1mm] criteria. Maximum changes in gamma pass rates were [0.43%, 1.63%, 3.05%] for 6MV and [1.00%, 4.80%, 11.2%] for 10FFF. On average, pass rates with the simple daily calibration correction were within 1% of the complex Matlab corrections. Conclusion: Ion recombination effects can be clinically significant for OCTAVIUS 1000 SRS measurements, especially for higher pulse dose unflattened beams when using tighter gamma tolerances. Matching daily 1000 SRS calibration MU/min to the average planned MU/min is a simple correction that greatly reduces ion recombination effects, improving measurement accuracy and gamma pass rates. This work was supported by PTW.
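
    The per-detector correction reduces to a ratio of collection efficiencies (a sketch; the efficiency model as a function of pulse frequency and pulse dose is detector-specific and not given in the abstract):

        def recombination_corrected_dose(measured_dose, f_calibration, f_delivery):
            """Corrected dose = measured dose * (collection efficiency at
            calibration / collection efficiency during delivery)."""
            return measured_dose * (f_calibration / f_delivery)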

  20. An analysis of the ArcCHECK-MR diode array's performance for ViewRay quality assurance.

    PubMed

    Ellefson, Steven T; Culberson, Wesley S; Bednarz, Bryan P; DeWerd, Larry A; Bayouth, John E

    2017-07-01

    The ArcCHECK-MR diode array utilizes a correction system with a virtual inclinometer to correct the angular response dependencies of the diodes. However, this correction system cannot be applied to measurements on the ViewRay MR-IGRT system due to the virtual inclinometer's incompatibility with the ViewRay's multiple simultaneous beams. Additionally, the ArcCHECK's current correction factors were determined without magnetic field effects taken into account. In the course of performing ViewRay IMRT quality assurance with the ArcCHECK, measurements were observed to be consistently higher than the ViewRay TPS predictions. The goals of this study were to quantify the observed discrepancies and test whether applying the current factors improves the ArcCHECK's accuracy for measurements on the ViewRay. Gamma and frequency analysis were performed on 19 ViewRay patient plans. Ion chamber measurements were performed at a subset of diode locations using a PMMA phantom with the same dimensions as the ArcCHECK. A new method for applying directionally dependent factors utilizing beam information from the ViewRay TPS was developed in order to analyze the current ArcCHECK correction factors. To test the current factors, nine ViewRay plans were altered to be delivered with only a single simultaneous beam and were measured with the ArcCHECK. The current correction factors were applied using both the new and current methods. The new method was also used to apply corrections to the original 19 ViewRay plans. It was found the ArcCHECK systematically reports doses higher than those actually delivered by the ViewRay. Application of the current correction factors by either method did not consistently improve measurement accuracy. As dose deposition and diode response have both been shown to change under the influence of a magnetic field, it can be concluded the current ArcCHECK correction factors are invalid and/or inadequate to correct measurements on the ViewRay system. © 2017 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.

  1. SU-E-I-07: An Improved Technique for Scatter Correction in PET

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, S; Wang, Y; Lue, K

    2014-06-01

    Purpose: In positron emission tomography (PET), the single scatter simulation (SSS) algorithm is widely used for scatter estimation in clinical scans. However, bias usually occurs at the essential step of scaling the computed SSS distribution to real scatter amounts by employing the scatter-only projection tail. The bias can be amplified when the scatter-only projection tail is too small, resulting in incorrect scatter correction. To this end, we propose a novel scatter calibration technique to accurately estimate the amount of scatter using a pre-determined scatter fraction (SF) function instead of the scatter-only tail information. Methods: As the SF depends on the radioactivity distribution and the attenuating material of the patient, an accurate theoretical relation cannot be devised. Instead, we constructed an empirical transformation function between SFs and average attenuation coefficients based on a series of phantom studies with different sizes and materials. From the average attenuation coefficient, the predicted SFs were calculated using the empirical transformation function. Hence, the real scatter amount can be obtained by scaling the SSS distribution with the predicted SFs. The simulation was conducted using SimSET. The Siemens Biograph™ 6 PET scanner was modeled in this study. The Software for Tomographic Image Reconstruction (STIR) was employed to estimate the scatter and reconstruct images. The EEC phantom was adopted to evaluate the performance of our proposed technique. Results: The scatter-corrected image of our method demonstrated improved image contrast over that of SSS. For our technique and SSS, the normalized standard deviations of the reconstructed images were 0.053 and 0.182, respectively; the root mean squared errors were 11.852 and 13.767, respectively. Conclusion: We have proposed an alternative method to calibrate SSS (C-SSS) to the absolute scatter amounts using SF. This method can avoid the bias caused by insufficient tail information and therefore improve the accuracy of scatter estimation.
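
    A sketch of the final scaling step (Python; the empirical SF-vs-attenuation function must be fit from the phantom studies and is assumed given here):

        import numpy as np

        def scale_sss(sss_sinogram, total_sinogram, predicted_sf):
            """Rescale the SSS estimate so that scatter/total equals the
            predicted scatter fraction, avoiding the scatter-only tail fit."""
            target_scatter = predicted_sf * total_sinogram.sum()
            return sss_sinogram * (target_scatter / sss_sinogram.sum())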

  2. Climate services for health: predicting the evolution of the 2016 dengue season in Machala, Ecuador.

    PubMed

    Lowe, Rachel; Stewart-Ibarra, Anna M; Petrova, Desislava; García-Díez, Markel; Borbor-Cordova, Mercy J; Mejía, Raúl; Regato, Mary; Rodó, Xavier

    2017-07-01

    El Niño and its effect on local meteorological conditions potentially influences interannual variability in dengue transmission in southern coastal Ecuador. El Oro province is a key dengue surveillance site, due to the high burden of dengue, seasonal transmission, co-circulation of all four dengue serotypes, and the recent introduction of chikungunya and Zika. In this study, we used climate forecasts to predict the evolution of the 2016 dengue season in the city of Machala, following one of the strongest El Niño events on record. We incorporated precipitation, minimum temperature, and Niño3·4 index forecasts in a Bayesian hierarchical mixed model to predict dengue incidence. The model was initiated on Jan 1, 2016, producing monthly dengue forecasts until November, 2016. We accounted for misreporting of dengue due to the introduction of chikungunya in 2015, by using active surveillance data to correct reported dengue case data from passive surveillance records. We then evaluated the forecast retrospectively with available epidemiological information. The predictions correctly forecast an early peak in dengue incidence in March, 2016, with a 90% chance of exceeding the mean dengue incidence for the previous 5 years. Accounting for the proportion of chikungunya cases that had been incorrectly recorded as dengue in 2015 improved the prediction of the magnitude of dengue incidence in 2016. This dengue prediction framework, which uses seasonal climate and El Niño forecasts, allows a prediction to be made at the start of the year for the entire dengue season. Combining active surveillance data with routine dengue reports improved not only model fit and performance, but also the accuracy of benchmark estimates based on historical seasonal averages. This study advances the state-of-the-art of climate services for the health sector, by showing the potential value of incorporating climate information in the public health decision-making process in Ecuador. European Union FP7, Royal Society, and National Science Foundation. Copyright © 2017 The Author(s). Published by Elsevier Ltd. This is an Open Access article under the CC BY-NC-ND 4.0 license.

  3. SU-F-R-04: Radiomics for Survival Prediction in Glioblastoma (GBM)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, H; Molitoris, J; Bhooshan, N

    Purpose: To develop a quantitative radiomics approach for survival prediction of glioblastoma (GBM) patients treated with chemoradiotherapy (CRT). Methods: 28 GBM patients who received CRT at our institution were retrospectively studied. 255 radiomic features were extracted from 3 gadolinium-enhanced T1-weighted MRIs for 2 regions of interest (ROIs) (the surgical cavity and its surrounding enhancement rim). The 3 MRIs were acquired pre-treatment and at 1 month and 3 months post-CRT. The imaging features comprehensively quantified the intensity, spatial variation (texture), geometric properties and the spatial-temporal changes of the 2 ROIs. 3 demographic features (age, race, gender) and 12 clinical parameters (KPS, extent of resection, whether concurrent temozolomide was adjusted/stopped, and radiotherapy-related information) were also included. 4 machine learning models (logistic regression (LR), support vector machine (SVM), decision tree (DT), neural network (NN)) were applied to predict overall survival (OS) and progression-free survival (PFS). The number and percentage of cases predicted correctly were collected, and AUCs (area under the receiver operating characteristic (ROC) curve) were determined after leave-one-out cross-validation. Results: From univariate analysis, 27 features (1 demographic, 1 clinical and 25 imaging) were statistically significant (p<0.05) for both OS and PFS. Two sets of features (each containing 24 features) were algorithmically selected from all features to predict OS and PFS. High prediction accuracy of OS was achieved by using NN (96%, 27 of 28 cases correctly predicted, AUC = 0.99), LR (93%, 26 of 28 cases, AUC = 0.95) and SVM (93%, 26 of 28 cases, AUC = 0.90). When predicting PFS, NN obtained the highest prediction accuracy (89%, 25 of 28 cases correctly predicted, AUC = 0.92). Conclusion: A radiomics approach combined with patients' demographics and clinical parameters can accurately predict survival in GBM patients treated with CRT.

  4. Predicting 30-Day Hospital Readmissions in Acute Myocardial Infarction: The AMI "READMITS" (Renal Function, Elevated Brain Natriuretic Peptide, Age, Diabetes Mellitus, Nonmale Sex, Intervention with Timely Percutaneous Coronary Intervention, and Low Systolic Blood Pressure) Score.

    PubMed

    Nguyen, Oanh Kieu; Makam, Anil N; Clark, Christopher; Zhang, Song; Das, Sandeep R; Halm, Ethan A

    2018-04-17

    Readmissions after hospitalization for acute myocardial infarction (AMI) are common. However, the few currently available AMI readmission risk prediction models have poor-to-modest predictive ability and are not readily actionable in real time. We sought to develop an actionable and accurate AMI readmission risk prediction model to identify high-risk patients as early as possible during hospitalization. We used electronic health record data from consecutive AMI hospitalizations from 6 hospitals in north Texas from 2009 to 2010 to derive and validate models predicting all-cause nonelective 30-day readmissions, using stepwise backward selection and 5-fold cross-validation. Of 826 patients hospitalized with AMI, 13% had a 30-day readmission. The first-day AMI model (the AMI "READMITS" score) included 7 predictors: renal function, elevated brain natriuretic peptide, age, diabetes mellitus, nonmale sex, intervention with timely percutaneous coronary intervention, and low systolic blood pressure, had an optimism-corrected C-statistic of 0.73 (95% confidence interval, 0.71-0.74) and was well calibrated. The full-stay AMI model, which included 3 additional predictors (use of intravenous diuretics, anemia on discharge, and discharge to postacute care), had an optimism-corrected C-statistic of 0.75 (95% confidence interval, 0.74-0.76) with minimally improved net reclassification and calibration. Both AMI models outperformed corresponding multicondition readmission models. The parsimonious AMI READMITS score enables early prospective identification of high-risk AMI patients for targeted readmissions reduction interventions within the first 24 hours of hospitalization. A full-stay AMI readmission model only modestly outperformed the AMI READMITS score in terms of discrimination, but surprisingly did not meaningfully improve reclassification. © 2018 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley.

  5. Multiscale Quantum Mechanics/Molecular Mechanics Simulations with Neural Networks.

    PubMed

    Shen, Lin; Wu, Jingheng; Yang, Weitao

    2016-10-11

    Molecular dynamics simulation with multiscale quantum mechanics/molecular mechanics (QM/MM) methods is a very powerful tool for understanding the mechanism of chemical and biological processes in solution or enzymes. However, its computational cost can be too high for many biochemical systems because of the large number of ab initio QM calculations. Semiempirical QM/MM simulations have much higher efficiency, and their accuracy can be improved with a correction to reach the ab initio QM/MM level. The computational cost of the ab initio calculations needed for the correction determines the overall efficiency. In this paper we developed a neural network method for QM/MM calculation as an extension of the neural-network representation reported by Behler and Parrinello. With this approach, the potential energy of any configuration along the reaction path for a given QM/MM system can be predicted at the ab initio QM/MM level based on the semiempirical QM/MM simulations. We further applied this method to three reactions in water to calculate the free energy changes. The free-energy profile obtained from the semiempirical QM/MM simulation is corrected to the ab initio QM/MM level with the potential energies predicted by the constructed neural network. The results are in excellent agreement with the reference data obtained from ab initio QM/MM molecular dynamics simulation or corrected with direct ab initio QM/MM potential energies. Compared with the correction using direct ab initio QM/MM potential energies, our method shows a speed-up of one to two orders of magnitude. It demonstrates that the neural network method combined with semiempirical QM/MM calculation can be an efficient and reliable strategy for chemical reaction simulations.
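
    The essential "delta learning" step can be sketched as follows (Python/NumPy; a toy one-hidden-layer net on random stand-in data, not the Behler-Parrinello architecture or the paper's descriptors):

        import numpy as np

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 10))   # configuration descriptors (stand-in)
        y = rng.normal(size=200)         # E_ab_initio - E_semiempirical (stand-in)

        # One hidden layer, trained by plain gradient descent on mean squared error.
        W1 = rng.normal(scale=0.1, size=(10, 32)); b1 = np.zeros(32)
        w2 = rng.normal(scale=0.1, size=32);       b2 = 0.0
        lr = 1e-2
        for _ in range(500):
            h = np.tanh(X @ W1 + b1)
            err = (h @ w2 + b2) - y
            g = 2 * err / len(y)                  # dLoss/dPrediction
            hg = np.outer(g, w2) * (1 - h ** 2)   # backprop through tanh
            W1 -= lr * (X.T @ hg); b1 -= lr * hg.sum(axis=0)
            w2 -= lr * (h.T @ g);  b2 -= lr * g.sum()

        def corrected_energy(e_semiempirical, descriptor):
            # Ab initio-level estimate = semiempirical energy + learned correction
            return e_semiempirical + (np.tanh(descriptor @ W1 + b1) @ w2 + b2)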

  6. The effects of speech production and vocabulary training on different components of spoken language performance.

    PubMed

    Paatsch, Louise E; Blamey, Peter J; Sarant, Julia Z; Bow, Catherine P

    2006-01-01

    A group of 21 hard-of-hearing and deaf children attending primary school were trained by their teachers on the production of selected consonants and on the meanings of selected words. Speech production, vocabulary knowledge, reading aloud, and speech perception measures were obtained before and after each type of training. The speech production training produced a small but significant improvement in the percentage of consonants correctly produced in words. The vocabulary training improved knowledge of word meanings substantially. Performance on speech perception and reading aloud were significantly improved by both types of training. These results were in accord with the predictions of a mathematical model put forward to describe the relationships between speech perception, speech production, and language measures in children (Paatsch, Blamey, Sarant, Martin, & Bow, 2004). These training data demonstrate that the relationships between the measures are causal. In other words, improvements in speech production and vocabulary performance produced by training will carry over into predictable improvements in speech perception and reading scores. Furthermore, the model will help educators identify the most effective methods of improving receptive and expressive spoken language for individual children who are deaf or hard of hearing.

  7. Quantifying Improved Visual Performance Through Vision Training

    DTIC Science & Technology

    1991-02-22

    Eibschitz, N., Friedman, Z. and Neuman, E. (1978) Comparative results of amblyopia treatment. Metab Opthalmol, 2, 111-112. Evans, D.W. and Ginsburg, A… treatment. Am Orthopt J, 5, 61-64. Garzia, R.P. (1987) The efficacy of visual training in amblyopia: A literature review. Am J Optom Physiol Opt, 64, 393…predicts pilots’ performance in aircraft simulators. Am. J. Opt. Physiol. Opt., 59(1), 105-109. Gortz, H. (1960) The corrective treatment of amblyopia

  8. Ensemble of classifiers for confidence-rated classification of NDE signal

    NASA Astrophysics Data System (ADS)

    Banerjee, Portia; Safdarnejad, Seyed; Udpa, Lalita; Udpa, Satish

    2016-02-01

    Ensembles of classifiers, in general, aim to improve classification accuracy by combining results from multiple weak hypotheses into a single strong classifier through weighted majority voting. Improved versions of ensemble classifiers generate self-rated confidence scores, which estimate the reliability of each prediction, and boost the classifier using these confidence-rated predictions. However, such a confidence metric is based only on the rate of correct classification. Although ensembles of classifiers have been widely used in computational intelligence, the effect of all factors of unreliability on the confidence of classification is largely overlooked in existing work. With relevance to NDE, classification results are affected by the inherent ambiguity of classification, non-discriminative features, inadequate training samples and measurement noise. In this paper, we extend existing ensemble classification by maximizing the confidence of every classification decision in addition to minimizing the classification error. Initial results of the approach on data from eddy current inspection show improvement in the classification performance of defect and non-defect indications.
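
    A minimal sketch of confidence-rated weighted majority voting (Python; the AdaBoost-style weight formula is one standard choice, not necessarily the one extended in this paper):

        import numpy as np

        def weighted_majority_vote(predictions, train_errors):
            """predictions: (n_classifiers, n_samples) array of +1/-1 labels.
            Weights follow the AdaBoost rule alpha = 0.5*ln((1-err)/err)."""
            alpha = 0.5 * np.log((1 - train_errors) / train_errors)
            score = np.tensordot(alpha, predictions, axes=1)
            confidence = np.abs(score) / np.sum(np.abs(alpha))  # margin-based
            return np.sign(score), confidence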

  9. The Theory and Practice of Estimating the Accuracy of Dynamic Flight-Determined Coefficients

    NASA Technical Reports Server (NTRS)

    Maine, R. E.; Iliff, K. W.

    1981-01-01

    Means of assessing the accuracy of maximum likelihood parameter estimates obtained from dynamic flight data are discussed. The most commonly used analytical predictors of accuracy are derived and compared from both statistical and simplified geometric standpoints. The accuracy predictions are evaluated with real and simulated data, with an emphasis on practical considerations such as modeling error. Improved computations of the Cramer-Rao bound that correct large discrepancies due to colored noise and modeling error are presented. The corrected Cramer-Rao bound is shown to be the best available analytical predictor of accuracy, and several practical examples of its use are given. Engineering judgment, aided by such analytical tools, is the final arbiter of accuracy estimation.
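
    For a model with additive white Gaussian measurement noise, the uncorrected Cramer-Rao computation is short (a sketch; jacobian is the sensitivity matrix d(model output)/d(parameters) evaluated along the maneuver):

        import numpy as np

        def cramer_rao_bound(jacobian, noise_var):
            """1-sigma Cramer-Rao lower bounds on the parameter estimates.
            Fisher information F = J^T J / noise_var for iid Gaussian noise."""
            fisher = jacobian.T @ jacobian / noise_var
            return np.sqrt(np.diag(np.linalg.inv(fisher)))

    The paper's corrected bound addresses the fact that real flight-data residuals are colored, which this idealized white-noise formula ignores.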

  10. Vector boson production in pPb and PbPb collisions at the LHC and its impact on nCTEQ15 PDFs

    NASA Astrophysics Data System (ADS)

    Kusina, A.; Lyonnet, F.; Clark, D. B.; Godat, E.; Ježo, T.; Kovařík, K.; Olness, F. I.; Schienbein, I.; Yu, J. Y.

    2017-07-01

    We provide a comprehensive comparison of W^± / Z vector boson production data in pPb and PbPb collisions at the LHC with predictions obtained using the nCTEQ15 PDFs. We identify the measurements which have the largest potential impact on the PDFs, and estimate the effect of including these data using a Bayesian reweighting method. We find this data set can provide information on both the nuclear corrections and the heavy flavor (strange quark) PDF components. As for the proton, the parton flavor determination/separation depends on nuclear corrections (from heavy-target DIS, for example), so this information can also help improve the proton PDFs.
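
    The reweighting step itself is compact (a sketch; whether the analysis uses this simple exp(-chi2/2) weight or the Giele-Keller-Kosower variant is not stated in the abstract):

        import numpy as np

        def reweight(chi2):
            """Weights for PDF replicas (or Hessian-sampled error sets) given
            their chi-square against the new data; shifted by the minimum
            for numerical stability."""
            w = np.exp(-0.5 * (chi2 - chi2.min()))
            return w / w.sum()

        # A reweighted observable is then the weighted average over replicas:
        # O_new = sum_k w[k] * O[k]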

  11. Prediction of the space adaptation syndrome

    NASA Technical Reports Server (NTRS)

    Reschke, M. F.; Homick, J. L.; Ryan, P.; Moseley, E. C.

    1984-01-01

    The univariate and multivariate relationships of provocative measures used to produce motion sickness symptoms were described. Normative subjects were used to develop and cross-validate sets of linear equations that optimally predict motion sickness in parabolic flights. The possibility of reducing the number of measurements required for prediction was assessed. After describing the variables verbally and statistically for 159 subjects, a factor analysis of 27 variables was completed to improve understanding of the relationships between variables and to reduce the number of measures for prediction purposes. The results of this analysis show that none of the variables is significantly related to the responses to parabolic flights. A set of variables was selected to predict responses to KC-135 flights. A series of discriminant analyses was completed. The results indicate that low, moderate, or severe susceptibility could be correctly predicted 64 percent and 53 percent of the time in the original and cross-validation samples, respectively. Both the factor analysis and the discriminant analysis provided no basis for reducing the number of tests.

  12. Modified linear predictive coding approach for moving target tracking by Doppler radar

    NASA Astrophysics Data System (ADS)

    Ding, Yipeng; Lin, Xiaoyi; Sun, Ke-Hui; Xu, Xue-Mei; Liu, Xi-Yao

    2016-07-01

    Doppler radar is a cost-effective tool for moving target tracking, which can support a large range of civilian and military applications. A modified linear predictive coding (LPC) approach is proposed to increase the target localization accuracy of Doppler radar. Based on time-frequency analysis of the received echo, the proposed approach first estimates the noise statistics in real time and constructs an adaptive filter to suppress the noise interference. Then, a linear predictive model is applied to extend the available data, which helps improve the resolution of the target localization result. Whereas the traditional LPC method decides the extension data length empirically, the proposed approach develops an error array to evaluate the prediction accuracy and thus adjusts the optimum extension length adaptively. Finally, the prediction error array is superimposed on the predictor output to correct the prediction error. A series of experiments illustrate the validity and performance of the proposed techniques.
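
    A bare-bones version of the LPC data-extension step (Python; the paper's adaptive noise filter and error-array length selection are not reproduced here):

        import numpy as np

        def lpc_extend(signal, order=8, n_extend=64):
            """Fit AR coefficients by least squares, then extrapolate the signal."""
            N = len(signal)
            # Row n solves s[n] = sum_k a[k] * s[n-1-k], for n = order..N-1
            A = np.column_stack([signal[order - 1 - k : N - 1 - k]
                                 for k in range(order)])
            a, *_ = np.linalg.lstsq(A, signal[order:], rcond=None)
            out = list(signal)
            for _ in range(n_extend):
                # Last `order` samples in reverse match the a[k] ordering.
                out.append(np.dot(a, out[-1 : -order - 1 : -1]))
            return np.asarray(out)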

  13. Measurements of top-quark pair differential cross-sections in the lepton+jets channel in pp collisions at [Formula: see text] using the ATLAS detector.

    PubMed

    Aad, G; Abbott, B; Abdallah, J; et al. (ATLAS Collaboration)
K; Terron, J; Terzo, S; Testa, M; Teuscher, R J; Theveneaux-Pelzer, T; Thomas, J P; Thomas-Wilsker, J; Thompson, E N; Thompson, P D; Thompson, R J; Thompson, A S; Thomsen, L A; Thomson, E; Thomson, M; Thun, R P; Tibbetts, M J; Torres, R E Ticse; Tikhomirov, V O; Tikhonov, Yu A; Timoshenko, S; Tiouchichine, E; Tipton, P; Tisserant, S; Todome, K; Todorov, T; Todorova-Nova, S; Tojo, J; Tokár, S; Tokushuku, K; Tollefson, K; Tolley, E; Tomlinson, L; Tomoto, M; Tompkins, L; Toms, K; Torrence, E; Torres, H; Torró Pastor, E; Toth, J; Touchard, F; Tovey, D R; Trefzger, T; Tremblet, L; Tricoli, A; Trigger, I M; Trincaz-Duvoid, S; Tripiana, M F; Trischuk, W; Trocmé, B; Troncon, C; Trottier-McDonald, M; Trovatelli, M; Truong, L; Trzebinski, M; Trzupek, A; Tsarouchas, C; Tseng, J C-L; Tsiareshka, P V; Tsionou, D; Tsipolitis, G; Tsirintanis, N; Tsiskaridze, S; Tsiskaridze, V; Tskhadadze, E G; Tsui, K M; Tsukerman, I I; Tsulaia, V; Tsuno, S; Tsybychev, D; Tudorache, A; Tudorache, V; Tuna, A N; Tupputi, S A; Turchikhin, S; Turecek, D; Turra, R; Turvey, A J; Tuts, P M; Tykhonov, A; Tylmad, M; Tyndel, M; Ueda, I; Ueno, R; Ughetto, M; Ukegawa, F; Unal, G; Undrus, A; Unel, G; Ungaro, F C; Unno, Y; Unverdorben, C; Urban, J; Urquijo, P; Urrejola, P; Usai, G; Usanova, A; Vacavant, L; Vacek, V; Vachon, B; Valderanis, C; Valencic, N; Valentinetti, S; Valero, A; Valery, L; Valkar, S; Vallecorsa, S; Valls Ferrer, J A; Van Den Wollenberg, W; Van Der Deijl, P C; van der Geer, R; van der Graaf, H; van Eldik, N; van Gemmeren, P; Van Nieuwkoop, J; van Vulpen, I; van Woerden, M C; Vanadia, M; Vandelli, W; Vanguri, R; Vaniachine, A; Vannucci, F; Vardanyan, G; Vari, R; Varnes, E W; Varol, T; Varouchas, D; Vartapetian, A; Varvell, K E; Vazeille, F; Vazquez Schroeder, T; Veatch, J; Veloce, L M; Veloso, F; Velz, T; Veneziano, S; Ventura, A; Ventura, D; Venturi, M; Venturi, N; Venturini, A; Vercesi, V; Verducci, M; Verkerke, W; Vermeulen, J C; Vest, A; Vetterli, M C; Viazlo, O; Vichou, I; Vickey, T; Vickey Boeriu, O E; Viehhauser, G H A; Viel, S; Vigne, R; Villa, M; Villaplana Perez, M; Vilucchi, E; Vincter, M G; Vinogradov, V B; Vivarelli, I; Vlachos, S; Vladoiu, D; Vlasak, M; Vogel, M; Vokac, P; Volpi, G; Volpi, M; von der Schmitt, H; von Radziewski, H; von Toerne, E; Vorobel, V; Vorobev, K; Vos, M; Voss, R; Vossebeld, J H; Vranjes, N; Milosavljevic, M Vranjes; Vrba, V; Vreeswijk, M; Vuillermet, R; Vukotic, I; Vykydal, Z; Wagner, P; Wagner, W; Wahlberg, H; Wahrmund, S; Wakabayashi, J; Walder, J; Walker, R; Walkowiak, W; Wang, C; Wang, F; Wang, H; Wang, H; Wang, J; Wang, J; Wang, K; Wang, R; Wang, S M; Wang, T; Wang, T; Wang, X; Wanotayaroj, C; Warburton, A; Ward, C P; Wardrope, D R; Washbrook, A; Wasicki, C; Watkins, P M; Watson, A T; Watson, I J; Watson, M F; Watts, G; Watts, S; Waugh, B M; Webb, S; Weber, M S; Weber, S W; Webster, J S; Weidberg, A R; Weinert, B; Weingarten, J; Weiser, C; Weits, H; Wells, P S; Wenaus, T; Wengler, T; Wenig, S; Wermes, N; Werner, M; Werner, P; Wessels, M; Wetter, J; Whalen, K; Wharton, A M; White, A; White, M J; White, R; White, S; Whiteson, D; Wickens, F J; Wiedenmann, W; Wielers, M; Wienemann, P; Wiglesworth, C; Wiik-Fuchs, L A M; Wildauer, A; Wilkens, H G; Williams, H H; Williams, S; Willis, C; Willocq, S; Wilson, A; Wilson, J A; Wingerter-Seez, I; Winklmeier, F; Winter, B T; Wittgen, M; Wittkowski, J; Wollstadt, S J; Wolter, M W; Wolters, H; Wosiek, B K; Wotschack, J; Woudstra, M J; Wozniak, K W; Wu, M; Wu, M; Wu, S L; Wu, X; Wu, Y; Wyatt, T R; Wynne, B M; Xella, S; Xu, D; Xu, L; Yabsley, 
B; Yacoob, S; Yakabe, R; Yamada, M; Yamaguchi, D; Yamaguchi, Y; Yamamoto, A; Yamamoto, S; Yamanaka, T; Yamauchi, K; Yamazaki, Y; Yan, Z; Yang, H; Yang, H; Yang, Y; Yao, W-M; Yap, Y C; Yasu, Y; Yatsenko, E; Yau Wong, K H; Ye, J; Ye, S; Yeletskikh, I; Yen, A L; Yildirim, E; Yorita, K; Yoshida, R; Yoshihara, K; Young, C; Young, C J S; Youssef, S; Yu, D R; Yu, J; Yu, J M; Yu, J; Yuan, L; Yuen, S P Y; Yurkewicz, A; Yusuff, I; Zabinski, B; Zaidan, R; Zaitsev, A M; Zalieckas, J; Zaman, A; Zambito, S; Zanello, L; Zanzi, D; Zeitnitz, C; Zeman, M; Zemla, A; Zeng, J C; Zeng, Q; Zengel, K; Zenin, O; Ženiš, T; Zerwas, D; Zhang, D; Zhang, F; Zhang, G; Zhang, H; Zhang, J; Zhang, L; Zhang, R; Zhang, X; Zhang, Z; Zhao, X; Zhao, Y; Zhao, Z; Zhemchugov, A; Zhong, J; Zhou, B; Zhou, C; Zhou, L; Zhou, L; Zhou, M; Zhou, N; Zhu, C G; Zhu, H; Zhu, J; Zhu, Y; Zhuang, X; Zhukov, K; Zibell, A; Zieminska, D; Zimine, N I; Zimmermann, C; Zimmermann, S; Zinonos, Z; Zinser, M; Ziolkowski, M; Živković, L; Zobernig, G; Zoccoli, A; Nedden, M Zur; Zurzolo, G; Zwalinski, L

    2016-01-01

    Measurements of normalized differential cross-sections of top-quark pair production are presented as a function of top-quark, tt̄-system and event-level kinematic observables in proton-proton collisions at a centre-of-mass energy of √s = 8 TeV. The observables have been chosen to emphasize the tt̄ production process and to be sensitive to effects of initial- and final-state radiation, to the different parton distribution functions, and to non-resonant processes and higher-order corrections. The dataset corresponds to an integrated luminosity of 20.3 fb⁻¹, recorded in 2012 with the ATLAS detector at the CERN Large Hadron Collider. Events are selected in the lepton+jets channel, requiring exactly one charged lepton and at least four jets, with at least two of the jets tagged as originating from a b-quark. The measured spectra are corrected for detector effects and are compared to several Monte Carlo simulations. The results are in fair agreement with the predictions over a wide kinematic range. Nevertheless, most generators predict a harder top-quark transverse momentum distribution at high values than what is observed in the data. Predictions beyond NLO accuracy improve the agreement with data at high top-quark transverse momenta. Using the current settings and parton distribution functions, the rapidity distributions are not well modelled by any generator under consideration. However, the level of agreement is improved when more recent sets of parton distribution functions are used.

  14. Implications of improved Higgs mass calculations for supersymmetric models.

    PubMed

    Buchmueller, O; Dolan, M J; Ellis, J; Hahn, T; Heinemeyer, S; Hollik, W; Marrouche, J; Olive, K A; Rzehak, H; de Vries, K J; Weiglein, G

    We discuss the allowed parameter spaces of supersymmetric scenarios in light of improved Higgs mass predictions provided by FeynHiggs 2.10.0. The Higgs mass predictions combine Feynman-diagrammatic results with a resummation of leading and subleading logarithmic corrections from the stop/top sector, which yield a significant improvement in the region of large stop masses. Scans in the pMSSM parameter space show that, for given values of the soft supersymmetry-breaking parameters, the new logarithmic contributions beyond the two-loop order implemented in FeynHiggs tend to give larger values of the light CP-even Higgs mass, M_h, in the region of large stop masses than previous predictions that were based on a fixed-order Feynman-diagrammatic result, though the differences are generally consistent with the previous estimates of theoretical uncertainties. We re-analyse the parameter spaces of the CMSSM, NUHM1 and NUHM2, taking into account also the constraints from CMS and LHCb measurements of BR(B_s → μ⁺μ⁻) and ATLAS searches for events with missing transverse energy using 20/fb of LHC data at 8 TeV. Within the CMSSM, the Higgs mass constraint disfavours tan β ≲ 10, though not in the NUHM1 or NUHM2.

  15. Requirements for Predictive Density Functional Theory Methods for Heavy Materials Equation of State

    NASA Astrophysics Data System (ADS)

    Mattsson, Ann E.; Wills, John M.

    2012-02-01

    The difficulties in experimentally determining the Equation of State of actinide and lanthanide materials have driven the development of many computational approaches with varying degrees of empiricism and predictive power. While Density Functional Theory (DFT) based on the Schrödinger Equation (possibly with relativistic corrections, including the scalar relativistic approach) combined with local and semi-local functionals has proven to be a successful and predictive approach for many materials, it does not provide sufficient accuracy for the actinides and in some cases fails completely. To remedy this failure, both an improved fundamental description based on the Dirac Equation (DE) and improved functionals are needed. Based on results obtained using the appropriate fundamental approach of DFT based on the DE, we discuss the performance of available semi-local functionals, the requirements for improved functionals for actinide/lanthanide materials, and the similarities in how functionals behave in transition metal oxides. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  16. DDR: efficient computational method to predict drug-target interactions using graph mining and machine learning approaches.

    PubMed

    Olayan, Rawan S; Ashoor, Haitham; Bajic, Vladimir B

    2018-04-01

    Computationally identifying drug-target interactions (DTIs) is a convenient strategy to discover new DTIs at low cost with reasonable accuracy. However, current DTI prediction methods suffer from high false-positive prediction rates. We developed DDR, a novel method that improves DTI prediction accuracy. DDR is based on the use of a heterogeneous graph that contains known DTIs with multiple similarities between drugs and multiple similarities between target proteins. DDR applies a non-linear similarity fusion method to combine different similarities. Before fusion, DDR performs a pre-processing step in which a subset of similarities is selected in a heuristic process to obtain an optimized combination of similarities. Then, DDR applies a random forest model using different graph-based features extracted from the DTI heterogeneous graph. Using 5 repeats of 10-fold cross-validation, three testing setups, and the weighted average of area under the precision-recall curve (AUPR) scores, we show that DDR significantly reduces the AUPR score error relative to the next best state-of-the-art method for predicting DTIs by 34% when the drugs are new, by 23% when the targets are new, and by 34% when the drugs and the targets are known but not all DTIs between them are known. Using independent sources of evidence, we verify as correct 22 out of the top 25 DDR novel predictions. This suggests that DDR can be used as an efficient method to identify correct DTIs. The data and code are provided at https://bitbucket.org/RSO24/ddr/. vladimir.bajic@kaust.edu.sa. Supplementary data are available at Bioinformatics online.
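
    For illustration only, a minimal Python sketch of the kind of pipeline described above: fuse several similarity matrices, derive graph-based features for each drug-target pair, and rank pairs with a random forest. The fusion rule and the two pair features are simplified stand-ins, not DDR's actual definitions.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        def fuse(sims):
            # Placeholder fusion: element-wise mean of the selected similarity
            # matrices (DDR itself uses a non-linear fusion step).
            return np.mean(np.stack(sims), axis=0)

        rng = np.random.default_rng(0)
        n_d, n_t = 30, 20                                # toy problem size
        S_drug = fuse([rng.random((n_d, n_d)) for _ in range(3)])
        S_target = fuse([rng.random((n_t, n_t)) for _ in range(3)])
        Y = (rng.random((n_d, n_t)) < 0.1).astype(int)   # known DTI matrix

        def pair_features(i, j):
            # Illustrative graph-based features: max similarity of drug i to
            # drugs known to hit target j, and of target j to targets of drug i.
            return [(S_drug[i] * Y[:, j]).max(), (S_target[j] * Y[i, :]).max()]

        X = np.array([pair_features(i, j) for i in range(n_d) for j in range(n_t)])
        clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, Y.ravel())
        scores = clf.predict_proba(X)[:, 1]              # scores for ranking candidate DTIs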

  17. An Empirical Correction Method for Improving off-Axes Response Prediction in Component Type Flight Mechanics Helicopter Models

    NASA Technical Reports Server (NTRS)

    Mansur, M. Hossein; Tischler, Mark B.

    1997-01-01

    Historically, component-type flight mechanics simulation models of helicopters have been unable to satisfactorily predict the off-axis responses: the roll response to pitch stick input and the pitch response to roll stick input. In the study presented here, simple first-order low-pass filtering of the elemental lift and drag forces was considered as a means of improving the correlation. The method was applied to a blade-element model of the AH-64 Apache, and responses of the modified model were compared with flight data in hover and forward flight. Results indicate that significant improvement in the off-axis responses can be achieved in hover. In forward flight, however, the best correlation in the longitudinal and lateral off-axis responses required different values of the filter time constant for each axis. A compromise value was selected and was shown to result in good overall improvement in the off-axis responses. The paper describes both the method and the model used for its implementation, and presents results obtained in hover and in forward flight.

  18. 34 CFR 200.49 - SEA responsibilities for school improvement, corrective action, and restructuring.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    34 Education 1, 2010-07-01: SEA responsibilities for school improvement, corrective action... Agencies, LEA and School Improvement § 200.49 SEA responsibilities for school improvement, corrective action... subject to corrective action on January 7, 2002, the SEA must ensure that the LEA for that school provides...

  19. Using stable isotopes to associate migratory shorebirds with their wintering locations in Argentina

    USGS Publications Warehouse

    Farmer, A.H.; Abril, M.; Fernandez, M.; Torres, J.; Kester, C.; Bern, C.

    2004-01-01

    We are evaluating the use of stable isotopes to identify the wintering areas of Neotropical migratory shorebirds in Argentina. Our goal is to associate individual birds, captured on the breeding grounds or in migration, with specific winter sites, thereby helping to identify distinct areas used by different subpopulations. In January and February 2002 and 2003, we collected flight feathers from shorebirds at 23 wintering sites distributed across seven provinces in Argentina (n = 170). Feather samples were prepared and analyzed for δ13C, δ15N, δ34S, δ18O and δD by continuous-flow methods. A discriminant function based on deuterium alone was not an accurate predictor of a shorebird's province of origin, ranging from 8% correct (Santiago del Estero) to 80% correct (Santa Cruz). When other isotopes were included, the prediction accuracy increased substantially (from 56% in Buenos Aires to 100% in Tucumán). The improvement in accuracy was due to C/N, which separated D-depleted sites in the Andes from those in the south, and to the inclusion of S, which separated sites with respect to their distance from the Atlantic. We were also able to correctly discriminate shorebirds from two closely spaced sites within the province of Tierra del Fuego. These results suggest the feasibility of identifying the origin of a shorebird at a provincial level of accuracy, as well as uniquely identifying birds from some closely spaced sites. There is a high degree of intra- and inter-bird variability, especially in the Pampas region, where there is a wide variety of wetland/water conditions. In that important shorebird region, the variability itself may in fact be the “signature.” Future addition of trace elements to the analyses may improve predictions based solely on stable isotopes.
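
    A minimal sketch of the multi-isotope discriminant analysis described above, on toy stand-in data (the real analysis used the measured δ13C, δ15N, δ34S, δ18O and δD values with province labels):

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        rng = np.random.default_rng(1)
        isotopes = rng.normal(size=(170, 5))     # columns: d13C, d15N, d34S, d18O, dD
        province = rng.integers(0, 7, size=170)  # labels for the seven provinces

        lda = LinearDiscriminantAnalysis().fit(isotopes, province)
        print("training accuracy:", lda.score(isotopes, province))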

  20. Power System Transient Stability Based on Data Mining Theory

    NASA Astrophysics Data System (ADS)

    Cui, Zhen; Shi, Jia; Wu, Runsheng; Lu, Dan; Cui, Mingde

    2018-01-01

    In order to study the stability of power systems, a transient stability assessment method based on data mining theory is designed. By introducing association rules analysis from data mining theory, an association classification method for transient stability assessment is presented, and a mathematical model of transient stability assessment based on data mining technology is established. Combining rule reasoning with classification prediction, the association classification method is used to perform transient stability assessment. The transient stability index is used to identify the samples that cannot be correctly classified by association classification. Then, according to the critical stability of each sample, the time domain simulation method is used to determine the state, so as to ensure the accuracy of the final results. The results show that this stability assessment system can improve the speed of operation while keeping the analysis results correct, and that the improved algorithm can uncover the inherent relation between changes in the power system operation mode and changes in the degree of transient stability.

  1. Smoothing-based compressed state Kalman filter for joint state-parameter estimation: Applications in reservoir characterization and CO2 storage monitoring

    NASA Astrophysics Data System (ADS)

    Li, Y. J.; Kokkinaki, Amalia; Darve, Eric F.; Kitanidis, Peter K.

    2017-08-01

    The operation of most engineered hydrogeological systems relies on simulating physical processes using numerical models with uncertain parameters and initial conditions. Predictions by such uncertain models can be greatly improved by Kalman-filter techniques that sequentially assimilate monitoring data. Each assimilation constitutes a nonlinear optimization, which is solved by linearizing an objective function about the model prediction and applying a linear correction to this prediction. However, if model parameters and initial conditions are uncertain, the optimization problem becomes strongly nonlinear and a linear correction may yield unphysical results. In this paper, we investigate the utility of one-step-ahead smoothing, a variant of the traditional filtering process, to eliminate nonphysical results and reduce estimation artifacts caused by nonlinearities. We present the smoothing-based compressed state Kalman filter (sCSKF), an algorithm that combines one-step-ahead smoothing, in which current observations are used to correct the state and parameters one step back in time, with a nonensemble covariance compression scheme that reduces the computational cost by efficiently exploring the high-dimensional state and parameter space. Numerical experiments show that when model parameters are uncertain and the states exhibit hyperbolic behavior with sharp fronts, as in CO2 storage applications, one-step-ahead smoothing reduces overshooting errors and, by design, gives physically consistent state and parameter estimates. We compared sCSKF with commonly used data assimilation methods and showed that, for the same computational cost, combining one-step-ahead smoothing and nonensemble compression is advantageous for real-time characterization and monitoring of large-scale hydrogeological systems with sharp moving fronts.
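
    A minimal sketch of the one-step-ahead smoothing update under linear-Gaussian, ensemble-based assumptions: observations at time k correct the ensemble at time k-1 through the cross-covariance. This is not the authors' sCSKF, which additionally applies a nonensemble covariance compression scheme (and a stochastic filter would also perturb the observations per member).

        import numpy as np

        def one_step_ahead_update(X_prev, X_curr, y, H, R):
            # X_prev, X_curr: (n_state, n_ens) ensembles at times k-1 and k;
            # y: observations at time k with y = H x_k + noise, noise cov R.
            A_prev = X_prev - X_prev.mean(axis=1, keepdims=True)
            A_curr = X_curr - X_curr.mean(axis=1, keepdims=True)
            n_ens = X_prev.shape[1]
            C_pc = A_prev @ A_curr.T / (n_ens - 1)   # cov(x_{k-1}, x_k)
            P_cc = A_curr @ A_curr.T / (n_ens - 1)   # cov(x_k)
            S = H @ P_cc @ H.T + R                   # innovation covariance
            K = C_pc @ H.T @ np.linalg.inv(S)        # smoother gain
            return X_prev + K @ (y[:, None] - H @ X_curr)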

  2. Predicting Rotator Cuff Tears Using Data Mining and Bayesian Likelihood Ratios

    PubMed Central

    Lu, Hsueh-Yi; Huang, Chen-Yuan; Su, Chwen-Tzeng; Lin, Chen-Chiang

    2014-01-01

    Objectives Rotator cuff tear is a common cause of shoulder diseases. Correct diagnosis of rotator cuff tears can save patients from further invasive, costly and painful tests. This study used predictive data mining and Bayesian theory to improve the accuracy of diagnosing rotator cuff tears by clinical examination alone. Methods In this retrospective study, 169 patients who had a preliminary diagnosis of rotator cuff tear on the basis of clinical evaluation followed by confirmatory MRI between 2007 and 2011 were identified. MRI was used as a reference standard to classify rotator cuff tears. The predictor variable was the clinical assessment result, which consisted of 16 attributes. This study employed two data mining methods (ANN and the decision tree) and a statistical method (logistic regression) to classify the rotator cuff diagnosis into “tear” and “no tear” groups. Likelihood ratios and Bayesian theory were applied to estimate the probability of rotator cuff tears based on the results of the prediction models. Results Our proposed data mining procedures outperformed the classic statistical method. The correct classification rate, sensitivity, specificity and area under the ROC curve for predicting a rotator cuff tear were statistically better in the ANN and decision tree models than in logistic regression. Based on likelihood ratios derived from our prediction models, Fagan's nomogram could be constructed to assess the probability that a patient has a rotator cuff tear using a pretest probability and a prediction result (tear or no tear). Conclusions Our predictive data mining models, combined with likelihood ratios and Bayesian theory, appear to be good tools to classify rotator cuff tears as well as to determine the probability of the presence of the disease, enhancing diagnostic decision making for rotator cuff tears. PMID:24733553
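
    The Fagan-nomogram arithmetic mentioned above is simply Bayes' rule on the odds scale; a small helper makes it explicit:

        def post_test_probability(pretest_p, likelihood_ratio):
            # Convert probability to odds, multiply by the likelihood ratio
            # of the observed prediction result, convert back to probability.
            pretest_odds = pretest_p / (1.0 - pretest_p)
            post_odds = pretest_odds * likelihood_ratio
            return post_odds / (1.0 + post_odds)

        # e.g. a 50% pre-test probability and a positive likelihood ratio of 4
        print(post_test_probability(0.50, 4.0))   # -> 0.8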

  3. A 9-protein biomarker molecular signature for predicting histologic type in endometrial carcinoma by immunohistochemistry.

    PubMed

    Santacana, Maria; Maiques, Oscar; Valls, Joan; Gatius, Sònia; Abó, Ana Isabel; López-García, María Ángeles; Mota, Alba; Reventós, Jaume; Moreno-Bueno, Gema; Palacios, Jose; Bartosch, Carla; Dolcet, Xavier; Matias-Guiu, Xavier

    2014-12-01

    Histologic typing may be difficult in a subset of endometrial carcinoma (EC) cases. In these cases, interobserver agreement improves when immunohistochemistry (IHC) is used. A series of endometrioid type (EEC) grades 1, 2, and 3 and serous type (SC) cases were immunostained for p53, p16, estrogen receptor, PTEN, IMP2, IMP3, HER2, cyclins B2 and E1, HMGA2, FolR1, MSLN, Claudins 3 and 4, and NRF2. Nine biomarkers showed significant differences between the two types, with thresholds on the IHC value scale (p53 ≥ 20, IMP2 ≥ 115, IMP3 ≥ 2, cyclin E1 ≥ 220, HMGA2 ≥ 30, FolR1 ≥ 50, p16 ≥ 170, nuclear PTEN ≥ 2 and estrogen receptor ≤ 50; P < .005). This combination led to increased discrimination when cases satisfying 0 to 5 conditions were predicted as EEC and those satisfying 6 to 9 conditions were predicted as SC. This signature correctly predicted all 48 EEC grade 1-2 cases and 18 SC cases, but 3 SC cases were wrongly predicted as EEC. Sensitivity was 86% (95% confidence interval [CI], 64%-97%), and specificity was 100% (95% CI, 89%-100%). The classifier correctly predicted all 28 EEC grade 3 cases but only identified the EEC and SC components in 4 of 9 mixed EEC-SC cases. An independent validation series (29 EEC grades 1-2, 28 EEC grade 3, and 31 SC) showed 100% sensitivity (95% CI, 84%-100%) and 83% specificity (95% CI, 64%-94%). We propose an internally and externally validated 9-protein biomarker signature to predict the histologic type of EC (EEC or SC) by IHC. The results also suggest that mixed EEC-SC is molecularly ambiguous. Copyright © 2014 Elsevier Inc. All rights reserved.
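
    The decision rule above can be written directly as code; the thresholds are those quoted in the abstract (marker names and score encoding are illustrative):

        CONDITIONS = {
            "p53":          lambda v: v >= 20,
            "IMP2":         lambda v: v >= 115,
            "IMP3":         lambda v: v >= 2,
            "cyclin_E1":    lambda v: v >= 220,
            "HMGA2":        lambda v: v >= 30,
            "FolR1":        lambda v: v >= 50,
            "p16":          lambda v: v >= 170,
            "nuclear_PTEN": lambda v: v >= 2,
            "ER":           lambda v: v <= 50,   # estrogen receptor
        }

        def predict_histotype(ihc_scores):
            # Count satisfied conditions: 0-5 predicts EEC, 6-9 predicts SC.
            n = sum(rule(ihc_scores[marker]) for marker, rule in CONDITIONS.items())
            return "SC" if n >= 6 else "EEC"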

  4. Station Correction Uncertainty in Multiple Event Location Algorithms and the Effect on Error Ellipses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Erickson, Jason P.; Carlson, Deborah K.; Ortiz, Anne

    Accurate location of seismic events is crucial for nuclear explosion monitoring. There are several sources of error in seismic location that must be taken into account to obtain high confidence results. Most location techniques account for uncertainties in the phase arrival times (measurement error) and the bias of the velocity model (model error), but they do not account for the uncertainty of the velocity model bias. By determining and incorporating this uncertainty in the location algorithm we seek to improve the accuracy of the calculated locations and uncertainty ellipses. In order to correct for deficiencies in the velocity model, it is necessary to apply station-specific corrections to the predicted arrival times. Both master event and multiple event location techniques assume that the station corrections are known perfectly, when in reality there is an uncertainty associated with these corrections. For multiple event location algorithms that calculate station corrections as part of the inversion, it is possible to determine the variance of the corrections. The variance can then be used to weight the arrivals associated with each station, thereby giving more influence to stations with consistent corrections. We have modified an existing multiple event location program (based on PMEL, Pavlis and Booker, 1983). We are exploring weighting arrivals with the inverse of the station correction standard deviation, as well as using the conditional probability of the calculated station corrections. This is in addition to the weighting already given to the measurement and modeling error terms. We re-locate a group of mining explosions that occurred at Black Thunder, Wyoming, and compare the results to those generated without accounting for station correction uncertainty.
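
    A minimal sketch of the inverse-variance weighting idea (an assumed form; the study also explores weights based on the conditional probability of the calculated corrections):

        import numpy as np

        def weighted_misfit(residuals, meas_var, model_var, corr_var):
            # Each arrival is down-weighted by the total variance of its error
            # terms, so stations with uncertain corrections have less influence.
            w = 1.0 / (meas_var + model_var + corr_var)
            return np.sum(w * residuals**2)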

  5. Predicting the Velocity Dispersions of the Dwarf Satellite Galaxies of Andromeda

    NASA Astrophysics Data System (ADS)

    McGaugh, Stacy S.

    2016-05-01

    Dwarf Spheroidal galaxies in the Local Group are the faintest and most diffuse stellar systems known. They exhibit large mass discrepancies, making them popular laboratories for studying the missing mass problem. The PAndAS survey of M31 revealed dozens of new examples of such dwarfs. As these systems were discovered, it was possible to use the observed photometric properties to predict their stellar velocity dispersions with the modified gravity theory MOND. These predictions, made in advance of the observations, have since been largely confirmed. A unique feature of MOND is that a structurally identical dwarf will behave differently when it is or is not subject to the external field of a massive host like Andromeda. The role of this "external field effect" is critical in correctly predicting the velocity dispersions of dwarfs that deviate from empirical scaling relations. With continued improvement in the observational data, these systems could provide a test of the strong equivalence principle.
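
    In the isolated deep-MOND regime, the predicted line-of-sight velocity dispersion of a pressure-supported system follows σ⁴ = (4/81) G M a₀; a small sketch assuming this standard relation and a₀ ≈ 1.2 × 10⁻¹⁰ m s⁻² (the external field effect lowers the prediction for dwarfs embedded in a host's field):

        G = 6.674e-11      # m^3 kg^-1 s^-2
        A0 = 1.2e-10       # MOND acceleration scale, m s^-2
        M_SUN = 1.989e30   # kg

        def sigma_isolated_kms(m_star_solar):
            # Deep-MOND dispersion for an isolated dwarf of stellar mass
            # m_star_solar (in solar masses), returned in km/s.
            return (4.0 * G * m_star_solar * M_SUN * A0 / 81.0) ** 0.25 / 1e3

        print(sigma_isolated_kms(1e7))   # ~9 km/s for a 10^7 solar-mass dwarf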

  6. Improving operational flood ensemble prediction by the assimilation of satellite soil moisture: comparison between lumped and semi-distributed schemes

    NASA Astrophysics Data System (ADS)

    Alvarez-Garreton, C.; Ryu, D.; Western, A. W.; Su, C.-H.; Crow, W. T.; Robertson, D. E.; Leahy, C.

    2014-09-01

    Assimilation of remotely sensed soil moisture data (SM-DA) to correct soil water stores of rainfall-runoff models has shown skill in improving streamflow prediction. In the case of large and sparsely monitored catchments, SM-DA is a particularly attractive tool. Within this context, we assimilate active and passive satellite soil moisture (SSM) retrievals using an ensemble Kalman filter to improve operational flood prediction within a large semi-arid catchment in Australia (>40 000 km²). We assess the importance of accounting for channel routing and the spatial distribution of forcing data by applying SM-DA to a lumped and a semi-distributed scheme of the probability distributed model (PDM). Our scheme also accounts for model error representation and for seasonal biases and errors in the satellite data. Before assimilation, the semi-distributed model provided more accurate streamflow prediction (Nash-Sutcliffe efficiency, NS = 0.77) than the lumped model (NS = 0.67) at the catchment outlet. However, this did not ensure good performance at the "ungauged" inner catchments. After SM-DA, the streamflow ensemble prediction at the outlet was improved in both the lumped and the semi-distributed schemes: the root mean square error of the ensemble was reduced by 27 and 31%, respectively; the NS of the ensemble mean increased by 7 and 38%, respectively; the false alarm ratio was reduced by 15 and 25%, respectively; and the ensemble prediction spread was reduced while its reliability was maintained. Our findings imply that even when rainfall is the main driver of flooding in semi-arid catchments, adequately processed SSM can be used to reduce errors in the model soil moisture, which in turn provides better streamflow ensemble prediction. We demonstrate that SM-DA efficacy is enhanced when the spatial distribution of forcing data and routing processes are accounted for. At ungauged locations, SM-DA is effective at improving streamflow ensemble prediction; however, the updated prediction is still poor, since SM-DA does not address systematic errors in the model.
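
    The Nash-Sutcliffe efficiency used above as the skill score is straightforward to compute; a small helper for reference:

        import numpy as np

        def nash_sutcliffe(obs, sim):
            # NS = 1 is a perfect fit; NS <= 0 means the simulation is no
            # better than predicting the observed mean.
            obs, sim = np.asarray(obs, float), np.asarray(sim, float)
            return 1.0 - np.sum((sim - obs)**2) / np.sum((obs - obs.mean())**2)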

  7. Thermal stability of mullite RMn₂O₅ (R  =  Bi, Y, Pr, Sm or Gd): combined density functional theory and experimental study.

    PubMed

    Li, Chenzhe; Thampy, Sampreetha; Zheng, Yongping; Kweun, Joshua M; Ren, Yixin; Chan, Julia Y; Kim, Hanchul; Cho, Maenghyo; Kim, Yoon Young; Hsu, Julia W P; Cho, Kyeongjae

    2016-03-31

    Understanding and effectively predicting the thermal stability of ternary transition metal oxides containing heavy elements using first-principles simulations is vital for understanding the performance of advanced materials. In this work, we have investigated the thermal stability of mullite RMn2O5 (R = Bi, Pr, Sm, or Gd) structures by constructing temperature phase diagrams using an efficient mixed generalized gradient approximation (GGA) and GGA + U method. Simulation-predicted stability regions without corrections on the heavy elements show a 4-200 K underestimation compared to our experimental results. We have found that the number of d/f electrons in the heavy elements shows a linear relationship with the prediction deviation. Further correction of the strongly correlated electrons in heavy elements could significantly reduce the prediction deviations. Our corrected simulation results demonstrate that correcting the R-site elements in RMn2O5 can reduce the underestimation of the density functional theory-predicted decomposition temperature to within 30 K. Therefore, the approach can produce accurate thermal stability predictions for complex ternary transition metal oxide compounds containing heavy elements.

  8. Vehicle characteristics associated with LATCH use and correct use in real-world child restraint installations.

    PubMed

    Cicchino, Jessica B; Jermakian, Jessica S

    2015-06-01

    The objective of this study was to determine if vehicle features associated with LATCH ease-of-use in laboratory studies with volunteers predict LATCH use and misuse in real-world child restraint installations. Vehicle characteristics were extracted from prior surveys of more than 100 top-selling 2010-13 vehicles. Use and correct use of LATCH was determined from records of more than 14,000 child restraint installations in these vehicles that were inspected by child passenger safety technicians at Safe Kids car seat checkup events during 2010-12. Logistic regression was used to examine the association between vehicle features and use and correct use of lower anchors and top tethers, controlling for other relevant installation features. Lower anchors were more likely to be used and correctly used when the clearance angle around them was greater than 54°, the force required to attach them to the lower anchors was less than 178 N, and their depth within the seat bight was less than 4 cm. Restraints were more likely to be attached correctly when installed with the lower anchors than with the seat belt. After controlling for lower anchor use and other installation features, the likelihood of tether use and correct use in installations of forward-facing restraints was significantly higher when there was no hardware present that could potentially be confused with the tether anchor or when the tether anchor was located on the rear deck, which is typical in sedans. There is converging evidence from laboratory studies with volunteers and real-world child restraint installations that vehicle features are associated with correct LATCH use. Vehicle designs that improve the ease of installing child restraints with LATCH could improve LATCH use rates and reduce child restraint misuse. Copyright © 2015 Elsevier Ltd and National Safety Council. All rights reserved.
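
    A toy sketch of the kind of logistic regression described above, with binary vehicle features built from the reported thresholds (data and coefficients are simulated, not the study's):

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(2)
        # Columns: clearance angle > 54 deg, attachment force < 178 N,
        # anchor depth in seat bight < 4 cm (1 = condition met).
        X = rng.integers(0, 2, size=(500, 3))
        logit = 0.8 * X[:, 0] + 0.7 * X[:, 1] + 0.5 * X[:, 2] - 1.0
        y = (rng.random(500) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

        model = LogisticRegression().fit(X, y)   # correct-use outcome vs. features
        print(np.exp(model.coef_))               # odds ratios per feature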

  9. Adaptive Optics Analysis of Visual Benefit with Higher-order Aberrations Correction of Human Eye - Poster Paper

    NASA Astrophysics Data System (ADS)

    Xue, Lixia; Dai, Yun; Rao, Xuejun; Wang, Cheng; Hu, Yiyun; Liu, Qian; Jiang, Wenhan

    2008-01-01

    Higher-order aberration correction can improve the visual performance of the human eye to some extent. To evaluate how much visual benefit can be obtained with higher-order aberration correction, we developed an adaptive optics vision simulator (AOVS). Dynamic real-time optimized modal compensation was used to implement various customized higher-order ocular aberration correction strategies. The experimental results indicate that higher-order aberration correction can improve the visual performance of the human eye compared with lower-order aberration correction alone, but the degree of improvement and the best higher-order correction strategy differ between individuals. Some subjects acquired great visual benefit when higher-order aberrations were corrected, while others acquired little visual benefit even when all higher-order aberrations were corrected. Therefore, relative to a general lower-order aberration correction strategy, a customized higher-order aberration correction strategy is needed to obtain optimal visual improvement for each individual. The AOVS provides an effective tool for higher-order ocular aberration optometry and for customized ocular aberration correction.

  10. The robust corrective action priority-an improved approach for selecting competing corrective actions in FMEA based on principle of robust design

    NASA Astrophysics Data System (ADS)

    Sutrisno, Agung; Gunawan, Indra; Vanany, Iwan

    2017-11-01

    Although it is an integral part of risk-based quality improvement efforts, the selection of corrective action priorities using the FMEA technique has received limited study in the literature, and existing work does not consider the robustness and risk of competing improvement initiatives. This study proposes a theoretical model for selecting among competing risk-based corrective actions that accounts for their robustness and risk. We incorporate the principle of robust design in computing the preference score among corrective action candidates. Along with the cost and benefit of competing corrective actions, we also incorporate their risk and robustness. An example is provided to demonstrate the applicability of the proposed model.

  11. Multi-step-ahead Method for Wind Speed Prediction Correction Based on Numerical Weather Prediction and Historical Measurement Data

    NASA Astrophysics Data System (ADS)

    Wang, Han; Yan, Jie; Liu, Yongqian; Han, Shuang; Li, Li; Zhao, Jing

    2017-11-01

    Increasing the accuracy of wind speed prediction lays a solid foundation for reliable wind power forecasting. Most traditional correction methods for wind speed prediction establish a mapping relationship between the wind speed of the numerical weather prediction (NWP) and the historical measurement data (HMD) at the corresponding time slot, ignoring the time-dependent structure of the wind speed time series. In this paper, a multi-step-ahead wind speed prediction correction method is proposed that accounts for the carry-over effect of wind speed at the previous time slot. To this end, the proposed method employs both NWP and HMD as model inputs and training labels. First, a probabilistic analysis of the NWP deviation for different wind speed bins is presented to illustrate the inadequacy of the traditional time-independent mapping strategy. Then, a support vector machine (SVM) is used as an example to implement the proposed mapping strategy and to establish the correction model for all the wind speed bins. A wind farm in northern China is taken as an example to validate the proposed method, with three benchmark wind speed prediction methods used for comparison. The results show that the proposed model has the best performance over different time horizons.
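
    A minimal sketch of the proposed mapping strategy under stated assumptions: the regressor sees both the NWP forecast for the current hour and the measurement from the previous hour, so the learned correction is no longer time-independent (data are simulated; the actual model is trained per wind speed bin):

        import numpy as np
        from sklearn.svm import SVR

        rng = np.random.default_rng(3)
        meas = np.cumsum(rng.normal(0, 0.5, 500)) % 12 + 3   # toy measured speeds (m/s)
        nwp = meas + rng.normal(0, 1.0, 500)                 # biased, noisy forecast

        X = np.column_stack([nwp[1:], meas[:-1]])   # NWP at t plus measurement at t-1
        y = meas[1:]
        model = SVR(kernel="rbf", C=10.0).fit(X[:400], y[:400])
        corrected = model.predict(X[400:])          # corrected wind speed forecasts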

  12. Pulmonary function tests correlated with thoracic volumes in adolescent idiopathic scoliosis.

    PubMed

    Ledonio, Charles Gerald T; Rosenstein, Benjamin E; Johnston, Charles E; Regelmann, Warren E; Nuckley, David J; Polly, David W

    2017-01-01

    Scoliosis deformity has been linked with deleterious changes in the thoracic cavity that affect pulmonary function. The causal relationship between spinal deformity and pulmonary function has yet to be fully defined. It has been hypothesized that deformity correction improves pulmonary function by restoring respiratory muscle efficiency and increasing the space available to the lungs. This research aims to correlate pulmonary function and thoracic volume before and after scoliosis correction. A retrospective correlational analysis between thoracic volumes modeled from plain x-rays and pulmonary function tests was conducted. Adolescent idiopathic scoliosis patients enrolled in a multicenter database were sorted by pre-operative total lung capacity (TLC) % predicted values from their pulmonary function tests (PFTs). Ten patients with the best and ten patients with the worst TLC values were included. Modeled thoracic volume and TLC values were compared before and 2 years after surgery. Scoliosis correction resulted in an increase in the thoracic volume for patients with the worst initial TLCs (11.7%) and those with the best initial TLCs (12.5%). In the adolescents with the most severe pulmonary restriction prior to surgery, post-operative changes in total lung capacity and thoracic volume were strongly correlated (r² = 0.839; p < 0.001). The mean increase in thoracic volume in this group was 373.1 cm³ (11.7%), which correlated with a 21.2% improvement in TLC. Scoliosis correction in adolescents was found to increase thoracic volume and is strongly correlated with improved TLC in cases with severe restrictive pulmonary function, but no correlation was found in cases with normal pulmonary function. © 2016 Orthopaedic Research Society. Published by Wiley Periodicals, Inc. J Orthop Res 35:175-182, 2017.

  13. A Hertzian contact mechanics based formulation to improve ultrasound elastography assessment of uterine cervical tissue stiffness.

    PubMed

    Briggs, Brandi N; Stender, Michael E; Muljadi, Patrick M; Donnelly, Meghan A; Winn, Virginia D; Ferguson, Virginia L

    2015-06-25

    Clinical practice requires improved techniques to assess human cervical tissue properties, especially at the internal os, or orifice, of the uterine cervix. Ultrasound elastography (UE) holds promise for non-invasively monitoring cervical stiffness throughout pregnancy. However, this technique provides qualitative strain images that cannot be linked to a material property (e.g., Young's modulus) without knowledge of the contact pressure under a rounded transvaginal transducer probe and correction for the resulting non-uniform strain dissipation. One technique to standardize elastogram images incorporates a material of known properties and uses one-dimensional, uniaxial Hooke's law to calculate Young's modulus within the compressed material half-space. However, this method does not account for strain dissipation and the strains that evolve in three-dimensional space. We demonstrate that an analytical approach based on 3D Hertzian contact mechanics provides a reasonable first approximation to correct for UE strain dissipation underneath a round transvaginal transducer probe and thus improves UE-derived estimates of tissue modulus. We validate the proposed analytical solution and evaluate sources of error using a finite element model. As compared to 1D uniaxial Hooke's law, the Hertzian contact-based solution yields significantly improved Young's modulus predictions in three homogeneous gelatin tissue phantoms possessing different moduli. We also demonstrate the feasibility of using this technique to image human cervical tissue, where UE-derived moduli estimations for the uterine cervix anterior lip agreed well with published, experimentally obtained values. Overall, UE with an attached reference standard and a Hertzian contact-based correction holds promise for improving quantitative estimates of cervical tissue modulus. Copyright © 2015 Elsevier Ltd. All rights reserved.
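
    The correction rests on classical Hertz theory; a small sketch of the standard relations for a rigid sphere pressed into an elastic half-space (a simplification of the paper's formulation):

        import numpy as np

        def hertz_sphere(F, R, E, nu):
            # Contact radius a, peak contact pressure p0, and the decay of
            # normal stress with depth z along the contact axis; the depth
            # decay is what motivates correcting elastogram strains.
            E_star = E / (1.0 - nu**2)                    # effective modulus, rigid indenter
            a = (3.0 * F * R / (4.0 * E_star)) ** (1.0 / 3.0)
            p0 = 3.0 * F / (2.0 * np.pi * a**2)
            sigma_z = lambda z: -p0 / (1.0 + (z / a) ** 2)
            return a, p0, sigma_z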

  14. IFACEwat: the interfacial water-implemented re-ranking algorithm to improve the discrimination of near native structures for protein rigid docking

    PubMed Central

    2014-01-01

    Background Protein-protein docking is an in silico method to predict the formation of protein complexes. Due to limited computational resources, the protein-protein docking approach has been developed under the assumption of rigid docking, in which one of the two protein partners remains rigid during the protein association and the contribution of water is ignored or represented only implicitly. Despite producing a number of acceptable complex predictions, most initial rigid docking algorithms to date still find it difficult, or even fail, to discriminate successfully the correct predictions from incorrect or false-positive ones. To improve rigid docking results, re-ranking is one of the effective methods that help re-locate the correct predictions in the top ranks, discriminating them from the other incorrect ones. In this paper, we propose a new re-ranking technique using a new energy-based scoring function, namely IFACEwat, a combined Interface Atomic Contact Energy (IFACE) and water effect term. The IFACEwat aims to further improve the discrimination of near-native structures produced by the initial rigid docking algorithm ZDOCK3.0.2. Unlike other re-ranking techniques, the IFACEwat explicitly places interfacial water at the protein interfaces to account for water-mediated contacts during protein interactions. Results Our results showed that the IFACEwat both increased the number of near-native structures and improved their ranks compared to the initial rigid docking ZDOCK3.0.2. In fact, the IFACEwat achieved a success rate of 83.8% for Antigen/Antibody complexes, which is 10% better than ZDOCK3.0.2. Compared to another re-ranking technique, ZRANK, the IFACEwat obtains success rates of 92.3% (8% better) and 90% (5% better) for medium and difficult cases, respectively. When compared with the recently published re-ranking method F2Dock, the IFACEwat performed equivalently well or even better for several Antigen/Antibody complexes. Conclusions With the inclusion of interfacial water, the IFACEwat improves most results of the initial rigid docking, especially for Antigen/Antibody complexes. The improvement is achieved by explicitly taking into account the contribution of water during protein interactions, which was ignored or not fully represented by the initial rigid docking and other re-ranking techniques. In addition, the IFACEwat maintains the computational efficiency of the initial docking algorithm, yet improves both the ranks and the number of near-native structures found. As our implementation has so far targeted the results of ZDOCK3.0.2, and particularly Antigen/Antibody complexes, we expect future implementations to extend the approach to other initial rigid docking algorithms. PMID:25521441

  15. A comparative study of the effects of cone-plate and parallel-plate geometries on rheological properties under oscillatory shear flow

    NASA Astrophysics Data System (ADS)

    Song, Hyeong Yong; Salehiyan, Reza; Li, Xiaolei; Lee, Seung Hak; Hyun, Kyu

    2017-11-01

    In this study, the effects of cone-plate (C/P) and parallel-plate (P/P) geometries on the rheological properties of various complex fluids were investigated, covering single-phase systems (polymer melts and solutions) and multiphase systems (a polymer blend, a nanocomposite, and a suspension). Small amplitude oscillatory shear (SAOS) tests were carried out to compare linear rheological responses, while nonlinear responses were compared using large amplitude oscillatory shear (LAOS) tests at different frequencies. Moreover, the Fourier-transform (FT) rheology method was used to analyze the nonlinear responses under LAOS flow. Experimental results were compared with predictions obtained by single-point correction and shear rate correction. For all systems, SAOS data measured by C/P and P/P coincide with each other, but the results showed discordance between C/P and P/P measurements in the nonlinear regime. For all systems except xanthan gum solutions, first-harmonic moduli were corrected using a single horizontal shift factor, whereas FT rheology-based nonlinear parameters (I3/1, I5/1, Q3, and Q5) were corrected using vertical shift factors that are well predicted by single-point correction. Xanthan gum solutions exhibited anomalous corrections. Their first-harmonic Fourier moduli were superposed using a horizontal shift factor predicted by the shear rate correction applicable to highly shear-thinning fluids. Distinct corrections were observed for the FT rheology-based nonlinear parameters: I3/1 and I5/1 were superposed by horizontal shifts, while the other systems displayed vertical shifts of I3/1 and I5/1. Q3 and Q5 of the xanthan gum solutions were corrected using both horizontal and vertical shift factors. In particular, the obtained vertical shift factors for Q3 and Q5 were twice as large as the predictions made by single-point correction. Such larger values are rationalized by the definitions of Q3 and Q5. These results highlight the significance of horizontal shift corrections in nonlinear oscillatory shear data.

  16. Predicting ambient aerosol Thermal Optical Reflectance (TOR) measurements from infrared spectra: organic carbon

    NASA Astrophysics Data System (ADS)

    Dillner, A. M.; Takahama, S.

    2014-11-01

    Organic carbon (OC) can constitute 50% or more of the mass of atmospheric particulate matter. Typically, the organic carbon concentration is measured using thermal methods such as Thermal-Optical Reflectance (TOR) from quartz fiber filters. Here, methods are presented whereby Fourier Transform Infrared (FT-IR) absorbance spectra from polytetrafluoroethylene (PTFE or Teflon) filters are used to accurately predict TOR OC. Transmittance FT-IR analysis is rapid, inexpensive, and non-destructive to the PTFE filters. To develop and test the method, FT-IR absorbance spectra are obtained from 794 samples from seven Interagency Monitoring of PROtected Visual Environment (IMPROVE) sites sampled during 2011. Partial least squares regression is used to calibrate sample FT-IR absorbance spectra to artifact-corrected TOR OC. The FT-IR spectra are divided into calibration and test sets by sampling site and date, which leads to precise and accurate OC predictions by FT-IR, as indicated by a high coefficient of determination (R² = 0.96), low bias (0.02 μg m⁻³, all μg m⁻³ values based on the nominal IMPROVE sample volume of 32.8 m³), low error (0.08 μg m⁻³) and low normalized error (11%). These performance metrics can be achieved with various degrees of spectral pretreatment (e.g., including or excluding substrate contributions to the absorbances) and are comparable in precision and accuracy to collocated TOR measurements. FT-IR spectra are also divided into calibration and test sets by OC mass and by OM/OC, which reflects the organic composition of the particulate matter and is obtained from organic functional group composition; this division also leads to precise and accurate OC predictions. Low OC concentrations have higher bias and normalized error due to TOR analytical errors and artifact correction errors, not due to the range of OC mass of the samples in the calibration set. However, samples with low OC mass can be used to predict samples with high OC mass, indicating that the calibration is linear. Using samples in the calibration set that have different OM/OC or ammonium/OC distributions than the test set leads to only a modest increase in bias and normalized error in the predicted samples. We conclude that FT-IR analysis with partial least squares regression is a robust method for accurately predicting TOR OC in IMPROVE network samples, providing complementary information to the organic functional group composition and organic aerosol mass estimated previously from the same set of sample spectra (Ruthenburg et al., 2014).
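
    A minimal sketch of the calibration step, assuming a matrix of absorbance spectra and artifact-corrected TOR OC values are at hand (toy data; the real work uses 794 IMPROVE samples and selects components by cross-validation):

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        rng = np.random.default_rng(4)
        spectra = rng.normal(size=(794, 300))                   # FT-IR absorbances
        tor_oc = spectra[:, :10].sum(axis=1) + rng.normal(0, 0.1, 794)

        pls = PLSRegression(n_components=10).fit(spectra[:600], tor_oc[:600])
        pred = pls.predict(spectra[600:]).ravel()
        resid = pred - tor_oc[600:]
        r2 = 1 - np.sum(resid**2) / np.sum((tor_oc[600:] - tor_oc[600:].mean())**2)
        print("test R^2:", round(r2, 3))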

  17. Improving strand pairing prediction through exploring folding cooperativity

    PubMed Central

    Jeong, Jieun; Berman, Piotr; Przytycka, Teresa M.

    2008-01-01

    The topology of β-sheets is defined by the pattern of hydrogen-bonded strand pairing. Therefore, predicting hydrogen-bonded strand partners is a fundamental step towards predicting β-sheet topology. At the same time, finding the correct partners is very difficult due to the long range interactions involved in strand pairing. Additionally, the patterns of amino acids observed in β-sheet formations are very general and therefore difficult to use for computational recognition of specific contacts between strands. In this work, we report a new strand pairing algorithm. To address the above-mentioned difficulties, our algorithm attempts to mimic elements of the folding process. Namely, in addition to ensuring that the predicted hydrogen-bonded strand pairs satisfy basic global consistency constraints, it takes into account hypothetical folding pathways. Consistent with this view, introducing hydrogen bonds between a pair of strands changes the probabilities of forming hydrogen bonds between other pairs of strands. We demonstrate that this approach provides an improvement over previously proposed algorithms. We also compare the performance of this method to that of a global optimization algorithm that poses the problem as an integer linear programming optimization problem and solves it using the ILOG CPLEX™ package. PMID:18989036

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dall'Anese, Emiliano; Simonetto, Andrea

    This paper focuses on the design of online algorithms based on prediction-correction steps to track the optimal solution of a time-varying constrained problem. Existing prediction-correction methods have been shown to work well for unconstrained convex problems and for settings where obtaining the inverse of the Hessian of the cost function is computationally affordable. The prediction-correction algorithm proposed in this paper addresses the limitations of existing methods by tackling constrained problems and by designing a first-order prediction step that relies on the Hessian of the cost function (and does not require the computation of its inverse). Analytical results are established to quantify the tracking error. Numerical simulations corroborate the analytical results and showcase the performance and benefits of the algorithms.
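
    To make the prediction-correction pattern concrete, here is a hedged sketch on a toy time-varying constrained least-squares problem (the problem, box constraint, step sizes, and iteration counts are all invented for illustration, not taken from the paper). The prediction step uses the Hessian only through matrix-vector products, never its inverse, in the spirit of the first-order step described above.

```python
# Prediction-correction tracking sketch for min_x ||A x - b(t)||^2 over a box.
import numpy as np

rng = np.random.default_rng(1)
n = 5
A = rng.normal(size=(n, n))
H = 2 * A.T @ A                                # Hessian of the cost (constant here)

def b(t):                                      # drifting data term
    return np.sin(t + np.arange(n))

def db_dt(t):
    return np.cos(t + np.arange(n))

def grad(x, t):
    return 2 * A.T @ (A @ x - b(t))

def project(x):                                # feasible set: the box [-2, 2]^n
    return np.clip(x, -2.0, 2.0)

x = np.zeros(n)
h, alpha = 0.05, 0.01                          # sampling period, step size
for k in range(400):
    t = k * h
    # prediction: quadratic model of the cost at t+h, built at time t from
    # the gradient and its time derivative; descend on the model with plain
    # gradient steps, so H enters only via products H @ (y - x)
    g0 = grad(x, t) - 2.0 * h * (A.T @ db_dt(t))
    y = x.copy()
    for _ in range(5):
        y = project(y - alpha * (g0 + H @ (y - x)))
    x = y
    # correction: projected gradient on the newly observed cost at t+h
    for _ in range(5):
        x = project(x - alpha * grad(x, t + h))

print("tracked point at final time:", np.round(x, 3))
```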

  19. Electron mobility in mercury cadmium telluride

    NASA Technical Reports Server (NTRS)

    Patterson, James D.

    1988-01-01

    A previously developed program, which includes all electronic interactions thought to be important, does not correctly predict the value of electron mobility in mercury cadmium telluride, particularly near room temperature. Part of the reason for this discrepancy is thought to be the way screening is handled. It seems likely that there are a number of contributors to errors in the calculation. The objective is to survey the calculation, locate reasons for differences between experiment and calculation, and suggest improvements.

  20. A comparison of prognostic significance of strong ion gap (SIG) with other acid-base markers in the critically ill: a cohort study.

    PubMed

    Ho, Kwok M; Lan, Norris S H; Williams, Teresa A; Harahsheh, Yusra; Chapman, Andrew R; Dobb, Geoffrey J; Magder, Sheldon

    2016-01-01

    This cohort study compared the prognostic significance of the strong ion gap (SIG) with other acid-base markers in the critically ill. The relationships between SIG, lactate, anion gap (AG), albumin-corrected anion gap (AG-corrected), base excess or strong ion difference-effective (SIDe), all obtained within the first hour of intensive care unit (ICU) admission, and the hospital mortality of 6878 patients were analysed. The prognostic significance of each acid-base marker, both alone and in combination with the Admission Mortality Prediction Model (MPM0 III) predicted mortality, was assessed by the area under the receiver operating characteristic curve (AUROC). Of the 6878 patients included in the study, 924 patients (13.4 %) died after ICU admission. Except for plasma chloride concentrations, all acid-base markers were significantly different between the survivors and non-survivors. SIG (with lactate: AUROC 0.631, 95 % confidence interval [CI] 0.611-0.652; without lactate: AUROC 0.521, 95 % CI 0.500-0.542) had only a modest ability to predict hospital mortality, and this was no better than using the lactate concentration alone (AUROC 0.701, 95 % CI 0.682-0.721). Adding AG-corrected or SIG to a combination of lactate and MPM0 III predicted risks also did not substantially improve the latter's ability to differentiate between survivors and non-survivors. Arterial lactate concentrations explained about 11 % of the variability in the observed mortality, and lactate was more important than SIG (0.6 %) and SIDe (0.9 %) in predicting hospital mortality after adjusting for MPM0 III predicted risks. Lactate remained the strongest predictor of mortality in a sensitivity multivariate analysis allowing for non-linearity of all acid-base markers. The prognostic significance of SIG was modest and inferior to that of the arterial lactate concentration in the critically ill. The lactate concentration should always be considered, regardless of whether a physiological, base excess or physical-chemical approach is used to interpret acid-base disturbances in critically ill patients.
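
    For illustration, the kind of single-marker AUROC comparison reported above can be sketched in a few lines (synthetic marker distributions and effect sizes, not patient data):

```python
# Compare how well two markers discriminate mortality, by AUROC.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
n = 2000
died = rng.random(n) < 0.134                   # ~13.4% hospital mortality
# a marker with real signal (stand-in for lactate) and a weak one (stand-in for SIG)
lactate = rng.normal(loc=np.where(died, 3.0, 2.0), scale=1.0)
sig = rng.normal(loc=np.where(died, 5.2, 5.0), scale=2.0)

print("lactate AUROC:", round(roc_auc_score(died, lactate), 3))
print("SIG AUROC:    ", round(roc_auc_score(died, sig), 3))
```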

  1. Algorithms for selecting informative marker panels for population assignment.

    PubMed

    Rosenberg, Noah A

    2005-11-01

    Given a set of potential source populations, genotypes of an individual of unknown origin at a collection of markers can be used to predict the correct source population of the individual. For improved efficiency, informative markers can be chosen from a larger set of markers to maximize the accuracy of this prediction. However, selecting the loci that are individually most informative does not necessarily produce the optimal panel. Here, using genotypes from eight species--carp, cat, chicken, dog, fly, grayling, human, and maize--this univariate accumulation procedure is compared to new multivariate "greedy" and "maximin" algorithms for choosing marker panels. The procedures generally suggest similar panels, although the greedy method often recommends inclusion of loci that are not chosen by the other algorithms. In seven of the eight species, when applied to five or more markers, all methods achieve at least 94% assignment accuracy on simulated individuals, with one species--dog--producing this level of accuracy with only three markers, and the eighth species--human--requiring approximately 13-16 markers. The new algorithms produce substantial improvements over use of randomly selected markers; where differences among the methods are noticeable, the greedy algorithm leads to slightly higher probabilities of correct assignment. Although none of the approaches necessarily chooses the panel with optimal performance, the algorithms all likely select panels with performance near enough to the maximum that they all are suitable for practical use.
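
    The greedy idea lends itself to a compact sketch: repeatedly add the marker that most improves cross-validated assignment accuracy of the current panel. The simulated genotypes and the naive Bayes classifier below are illustrative stand-ins, not the study's data or assignment method.

```python
# Greedy forward selection of an informative marker panel (toy data).
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_ind, n_markers, n_pops = 300, 20, 4
pops = rng.integers(n_pops, size=n_ind)                  # true source populations
freqs = rng.random((n_pops, n_markers))                  # allele frequencies
# diploid genotypes: sum of two Bernoulli draws per marker
genotypes = (rng.random((n_ind, n_markers)) < freqs[pops]).astype(float) \
          + (rng.random((n_ind, n_markers)) < freqs[pops]).astype(float)

def panel_accuracy(panel):
    return cross_val_score(GaussianNB(), genotypes[:, panel], pops, cv=5).mean()

panel = []
while len(panel) < 5:
    best = max((m for m in range(n_markers) if m not in panel),
               key=lambda m: panel_accuracy(panel + [m]))
    panel.append(best)
    print(f"panel={panel}  accuracy={panel_accuracy(panel):.3f}")
```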

  2. Matching phenotypes to whole genomes: Lessons learned from four iterations of the personal genome project community challenges.

    PubMed

    Cai, Binghuang; Li, Biao; Kiga, Nikki; Thusberg, Janita; Bergquist, Timothy; Chen, Yun-Ching; Niknafs, Noushin; Carter, Hannah; Tokheim, Collin; Beleva-Guthrie, Violeta; Douville, Christopher; Bhattacharya, Rohit; Yeo, Hui Ting Grace; Fan, Jean; Sengupta, Sohini; Kim, Dewey; Cline, Melissa; Turner, Tychele; Diekhans, Mark; Zaucha, Jan; Pal, Lipika R; Cao, Chen; Yu, Chen-Hsin; Yin, Yizhou; Carraro, Marco; Giollo, Manuel; Ferrari, Carlo; Leonardi, Emanuela; Tosatto, Silvio C E; Bobe, Jason; Ball, Madeleine; Hoskins, Roger A; Repo, Susanna; Church, George; Brenner, Steven E; Moult, John; Gough, Julian; Stanke, Mario; Karchin, Rachel; Mooney, Sean D

    2017-09-01

    The advent of next-generation sequencing has dramatically decreased the cost for whole-genome sequencing and increased the viability for its application in research and clinical care. The Personal Genome Project (PGP) provides unrestricted access to genomes of individuals and their associated phenotypes. This resource enabled the Critical Assessment of Genome Interpretation (CAGI) to create a community challenge to assess the bioinformatics community's ability to predict traits from whole genomes. In the CAGI PGP challenge, researchers were asked to predict whether an individual had a particular trait or profile based on their whole genome. Several approaches were used to assess submissions, including ROC AUC (area under receiver operating characteristic curve), probability rankings, the number of correct predictions, and statistical significance simulations. Overall, we found that prediction of individual traits is difficult, relying on a strong knowledge of trait frequency within the general population, whereas matching genomes to trait profiles relies heavily upon a small number of common traits including ancestry, blood type, and eye color. When a rare genetic disorder is present, profiles can be matched when one or more pathogenic variants are identified. Prediction accuracy has improved substantially over the last 6 years due to improved methodology and a better understanding of features. © 2017 Wiley Periodicals, Inc.

  3. Improving the annotation of the Heterorhabditis bacteriophora genome.

    PubMed

    McLean, Florence; Berger, Duncan; Laetsch, Dominik R; Schwartz, Hillel T; Blaxter, Mark

    2018-04-01

    Genome assembly and annotation remain exacting tasks. As the tools available for these tasks improve, it is useful to return to data produced with earlier techniques to assess their credibility and correctness. The entomopathogenic nematode Heterorhabditis bacteriophora is widely used to control insect pests in horticulture. The genome sequence for this species was reported to encode an unusually high proportion of unique proteins and a paucity of secreted proteins compared to other related nematodes. We revisited the H. bacteriophora genome assembly and gene predictions to determine whether these unusual characteristics were biological or methodological in origin. We mapped an independent resequencing dataset to the genome and used the blobtools pipeline to identify potential contaminants. While present (0.2% of the genome span, 0.4% of predicted proteins), assembly contamination was not significant. Re-prediction of the gene set using BRAKER1 and published transcriptome data generated a predicted proteome that was very different from the published one. The new gene set had a much reduced complement of unique proteins, better completeness values that were in line with other related species' genomes, and an increased number of proteins predicted to be secreted. It is thus likely that methodological issues drove the apparent uniqueness of the initial H. bacteriophora genome annotation and that similar contamination and misannotation issues affect other published genome assemblies.

  4. Advanced numerical models and material characterisation techniques for composite materials subject to impact and shock wave loading

    NASA Astrophysics Data System (ADS)

    Clegg, R. A.; White, D. M.; Hayhurst, C.; Ridel, W.; Harwick, W.; Hiermaier, S.

    2003-09-01

    The development and validation of an advanced material model for orthotropic materials, such as fibre reinforced composites, is described. The model is specifically designed to facilitate the numerical simulation of impact and shock wave propagation through orthotropic materials and the prediction of subsequent material damage. Initial development of the model concentrated on correctly representing shock wave propagation in composite materials under high and hypervelocity impact conditions [1]. This work has now been extended to concentrate on the development of improved numerical models and material characterisation techniques for the prediction of damage, including residual strength, in fibre reinforced composite materials. The work is focused on Kevlar-epoxy; however, materials such as CFRP are also being considered. The paper describes our most recent activities in relation to the implementation of advanced material modelling options in this area. These enable refined non-linear directional characteristics of composite materials to be modelled, in addition to the correct thermodynamic response under shock wave loading. The numerical work is backed by an extensive experimental programme covering a wide range of static and dynamic tests to facilitate derivation of model input data and to validate the predicted material response. Finally, the capability of the developing composite material model is discussed in relation to a hypervelocity impact problem.

  5. Precise predictions for V+jets dark matter backgrounds

    NASA Astrophysics Data System (ADS)

    Lindert, J. M.; Pozzorini, S.; Boughezal, R.; Campbell, J. M.; Denner, A.; Dittmaier, S.; Gehrmann-De Ridder, A.; Gehrmann, T.; Glover, N.; Huss, A.; Kallweit, S.; Maierhöfer, P.; Mangano, M. L.; Morgan, T. A.; Mück, A.; Petriello, F.; Salam, G. P.; Schönherr, M.; Williams, C.

    2017-12-01

    High-energy jets recoiling against missing transverse energy (MET) are powerful probes of dark matter at the LHC. Searches based on large MET signatures require a precise control of the Z(νν̄)+jet background in the signal region. This can be achieved by taking accurate data in control regions dominated by Z(ℓ+ℓ-)+jet, W(ℓν)+jet and γ+jet production, and extrapolating to the Z(νν̄)+jet background by means of precise theoretical predictions. In this context, recent advances in perturbative calculations open the door to significant sensitivity improvements in dark matter searches. In this spirit, we present a combination of state-of-the-art calculations for all relevant V+jets processes, including throughout NNLO QCD corrections and NLO electroweak corrections supplemented by Sudakov logarithms at two loops. Predictions at parton level are provided together with detailed recommendations for their usage in experimental analyses based on the reweighting of Monte Carlo samples. Particular attention is devoted to the estimate of theoretical uncertainties in the framework of dark matter searches, where subtle aspects such as correlations across different V+jet processes play a key role. The anticipated theoretical uncertainty in the Z(νν̄)+jet background is at the few percent level up to the TeV range.

  6. The improvement of a simple theoretical model for the prediction of the sound insulation of double leaf walls.

    PubMed

    Davy, John L

    2010-02-01

    This paper presents a revised theory for predicting the sound insulation of double leaf cavity walls that removes an approximation, which is usually made when deriving the sound insulation of a double leaf cavity wall above the critical frequencies of the wall leaves due to the airborne transmission across the wall cavity. This revised theory is also used as a correction below the critical frequencies of the wall leaves instead of a correction due to Sewell [(1970). J. Sound Vib. 12, 21-32]. It is found necessary to include the "stud" borne transmission of the window frames when modeling wide air gap double glazed windows. A minimum value of stud transmission is introduced for use with resilient connections such as steel studs. Empirical equations are derived for predicting the effective sound absorption coefficient of wall cavities without sound absorbing material. The theory is compared with experimental results for double glazed windows and gypsum plasterboard cavity walls with and without sound absorbing material in their cavities. The overall mean, standard deviation, maximum, and minimum of the differences between experiment and theory are -0.6 dB, 3.1 dB, 10.9 dB at 1250 Hz, and -14.9 dB at 160 Hz, respectively.

  7. Long palatal connective tissue rolled pedicle graft with demineralized freeze-dried bone allograft plus platelet-rich fibrin combination: A novel technique for ridge augmentation - Three case reports

    PubMed Central

    Reddy, Pathakota Krishnajaneya; Bolla, Vijayalakshmi; Koppolu, Pradeep; Srujan, Peruka

    2015-01-01

    Replacement of a missing maxillary anterior tooth with a localized residual alveolar ridge defect is challenging, considering the high esthetic demand. Various soft and hard tissue procedures have been proposed to correct alveolar ridge deformities. Novel techniques have evolved in treating these ridge defects to improve function and esthetics. In the present case reports, a novel technique using a long palatal connective tissue rolled pedicle graft with demineralized freeze-dried bone allograft (DFDBA) plus platelet-rich fibrin (PRF) combination was proposed to correct localized Class III anterior maxillary alveolar ridge defects. The present technique resulted in predictable ridge augmentation, which can be attributed to the soft and hard tissue augmentation with a connective tissue pedicle and the DFDBA plus PRF combination. This technique suggests a variation on the roll technique with DFDBA plus PRF and appears promising for gaining predictable volume in residual ridge defects; it can be considered for the treatment of moderate to severe maxillary anterior ridge defects. PMID:26015679

  8. Establishing a NORM based radiation calibration facility.

    PubMed

    Wallace, J

    2016-05-01

    An environmental radiation calibration facility has been constructed by the Radiation and Nuclear Sciences unit of Queensland Health at the Forensic and Scientific Services Coopers Plains campus in Brisbane. This facility consists of five low-density concrete pads, spiked with a NORM source, to simulate soil and effectively provide a number of semi-infinite uniformly distributed sources for improved energy-response calibrations of radiation equipment used in NORM measurements. The pads have been sealed with an environmental epoxy compound to restrict radon loss and so enhance the quality of secular equilibrium achieved. Monte Carlo models (MCNP), used to establish suitable design parameters and identify appropriate geometric correction factors linking the air kerma measured above these calibration pads to that predicted for an infinite plane using adjusted ICRU53 data, are discussed. Use of these correction factors, as well as adjustments for cosmic radiation and the impact of surrounding low levels of NORM in the soil, allows for good agreement between the radiation fields predicted and measured above the pads at both 0.15 m and 1 m. Copyright © 2016 Elsevier Ltd. All rights reserved.

  9. Turbulent flow in a 180 deg bend: Modeling and computations

    NASA Technical Reports Server (NTRS)

    Kaul, Upender K.

    1989-01-01

    A low Reynolds number k-epsilon turbulence model was presented which yields accurate predictions of the kinetic energy near the wall. The model is validated with the experimental channel flow data of Kreplin and Eckelmann. The predictions are also compared with earlier results from direct simulation of turbulent channel flow. The model is especially useful for internal flows where the inflow boundary condition of epsilon is not easily prescribed. The model partly derives from some observations based on earlier direct simulation results of near-wall turbulence. The low Reynolds number turbulence model together with an existing curvature correction appropriate to spinning cylinder flows was used to simulate the flow in a U-bend with the same radius of curvature as the Space Shuttle Main Engine (SSME) Turn-Around Duct (TAD). The present computations indicate a space varying curvature correction parameter as opposed to a constant parameter as used in the spinning cylinder flows. Comparison with limited available experimental data is made. The comparison is favorable, but detailed experimental data is needed to further improve the curvature model.

  10. Leading non-Gaussian corrections for diffusion orientation distribution function.

    PubMed

    Jensen, Jens H; Helpern, Joseph A; Tabesh, Ali

    2014-02-01

    An analytical representation of the leading non-Gaussian corrections for a class of diffusion orientation distribution functions (dODFs) is presented. This formula is constructed from the diffusion and diffusional kurtosis tensors, both of which may be estimated with diffusional kurtosis imaging (DKI). By incorporating model-independent non-Gaussian diffusion effects, it improves on the Gaussian approximation used in diffusion tensor imaging (DTI). This analytical representation therefore provides a natural foundation for DKI-based white matter fiber tractography, which has potential advantages over conventional DTI-based fiber tractography in generating more accurate predictions for the orientations of fiber bundles and in being able to directly resolve intra-voxel fiber crossings. The formula is illustrated with numerical simulations for a two-compartment model of fiber crossings and for human brain data. These results indicate that the inclusion of the leading non-Gaussian corrections can significantly affect fiber tractography in white matter regions, such as the centrum semiovale, where fiber crossings are common. 2013 John Wiley & Sons, Ltd.

  12. Automation of Physiologic Data Presentation and Alarms in the Post Anesthesia Care Unit

    PubMed Central

    Aukburg, S.J.; Ketikidis, P.H.; Kitz, D.S.; Mavrides, T.G.; Matschinsky, B.B.

    1989-01-01

    The routine use of pulse oximeters, non-invasive blood pressure monitors and electrocardiogram monitors has considerably improved patient care in the post anesthesia period. Using an automated data collection system, we investigated the occurrence of several adverse events frequently revealed by these monitors. We found that the incidence of hypoxia was 35%, hypertension 12%, hypotension 8%, tachycardia 25% and bradycardia 1%. Discriminant analysis was able to correctly predict classification of about 90% of patients into normal vs. hypertensive or hypotensive groups. The system software minimizes artifact, validates data for epidemiologic studies, and is able to identify variables that predict adverse events through application of appropriate statistical and artificial intelligence techniques.

  13. On turbulent friction in straight ducts with complex cross-section: the wall law and the hydraulic diameter

    NASA Astrophysics Data System (ADS)

    Pirozzoli, Sergio

    2018-07-01

    We develop predictive formulas for friction resistance in ducts with complex cross-sectional shape based on the use of the log law and neglect of wall shear stress nonuniformities. The traditional hydraulic diameter naturally emerges from the analysis as the controlling length scale for common duct shapes such as triangles and regular polygons. The analysis also suggests that a new effective diameter should be used in more general cases, yielding corrections of a few percent to friction estimates based on the traditional hydraulic diameter. Fair but consistent predictive improvement is shown for duct geometries of practical relevance, including rectangular and annular ducts, and circular rod bundles.
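
    To illustrate the role of the hydraulic diameter in log-law friction estimates, the sketch below solves the classical Prandtl smooth-pipe relation 1/sqrt(f) = 2 log10(Re sqrt(f)) - 0.8 by fixed-point iteration for an equilateral triangular duct. The Prandtl law is a standard textbook result used here as a stand-in; it is not the paper's new effective-diameter formula.

```python
# Hydraulic-diameter friction estimate from a log-law-based relation.
import math

def hydraulic_diameter(area, perimeter):
    return 4.0 * area / perimeter

def friction_factor(re, iters=50):
    f = 0.02                                   # initial guess
    for _ in range(iters):
        f = (2.0 * math.log10(re * math.sqrt(f)) - 0.8) ** -2
    return f

# example: water-like flow in an equilateral triangular duct, side 0.1 m
side = 0.1
area = math.sqrt(3) / 4 * side**2
perim = 3 * side
d_h = hydraulic_diameter(area, perim)
u, nu = 2.0, 1e-6                              # bulk velocity (m/s), kinematic viscosity (m^2/s)
re = u * d_h / nu
print(f"D_h={d_h:.4f} m  Re={re:.3g}  f={friction_factor(re):.5f}")
```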

  14. Combined linear theory/impact theory method for analysis and design of high speed configurations

    NASA Technical Reports Server (NTRS)

    Brooke, D.; Vondrasek, D. V.

    1980-01-01

    Pressure distributions on a wing body at Mach 4.63 are calculated. The combined theory is shown to give improved predictions over either linear theory or impact theory alone. The combined theory is also applied in the inverse design mode to calculate optimum camber slopes at Mach 4.63. Comparisons with optimum camber slopes obtained from unmodified linear theory show large differences. Analysis of the results indicates that the combined theory correctly predicts the effect of thickness on the loading distributions at high Mach numbers, and that finite thickness wings optimized at high Mach numbers using unmodified linear theory will not achieve the minimum drag characteristics for which they are designed.

  15. Bunch mode specific rate corrections for PILATUS3 detectors

    DOE PAGES

    Trueb, P.; Dejoie, C.; Kobas, M.; ...

    2015-04-09

    PILATUS X-ray detectors are in operation at many synchrotron beamlines around the world. This article reports on the characterization of the new PILATUS3 detector generation at high count rates. As for all counting detectors, the measured intensities have to be corrected for the dead-time of the counting mechanism at high photon fluxes. The large number of different bunch modes at these synchrotrons as well as the wide range of detector settings presents a challenge for providing accurate corrections. To avoid the intricate measurement of the count rate behaviour for every bunch mode, a Monte Carlo simulation of the counting mechanism has been implemented, which is able to predict the corrections for arbitrary bunch modes and a wide range of detector settings. This article compares the simulated results with experimental data acquired at different synchrotrons. It is found that the usage of bunch mode specific corrections based on this simulation improves the accuracy of the measured intensities by up to 40% for high photon rates and highly structured bunch modes. For less structured bunch modes, the instant retrigger technology of PILATUS3 detectors substantially reduces the dependency of the rate correction on the bunch mode. The acquired data also demonstrate that the instant retrigger technology allows for data acquisition up to 15 million photons per second per pixel.
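
    A toy Monte Carlo in the spirit of the simulation described above can show why the bunch mode matters (the real PILATUS3 counting logic and synchrotron fill patterns are more involved; the rate, dead-time, and bunch periods below are invented):

```python
# Toy dead-time simulation: photons arrive only at bunch times; a
# non-paralyzable counter registers a bunch only when it is not busy.
import numpy as np

rng = np.random.default_rng(4)

def simulate_counts(rate, dead_time, bunch_period, n_bunches):
    photons = rng.poisson(rate * bunch_period, size=n_bunches)
    counted, last = 0, -1e9
    for i in np.nonzero(photons)[0]:           # only bunches carrying photons
        t = i * bunch_period
        if t - last >= dead_time:
            counted += 1
            last = t
    return counted

true_rate = 5e6                                # photons / s / pixel
dead_time, duration = 120e-9, 1e-3             # seconds
for period in (2e-9, 100e-9):                  # quasi-uniform vs sparse bunch mode
    n_bunches = int(duration / period)
    measured = simulate_counts(true_rate, dead_time, period, n_bunches) / duration
    print(f"bunch period {period:.0e} s: measured {measured:.3g} Hz, "
          f"correction factor {true_rate / measured:.2f}")
```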

  16. The effect of surgical titanium rods on proton therapy delivered for cervical bone tumors: experimental validation using an anthropomorphic phantom

    NASA Astrophysics Data System (ADS)

    Dietlicher, Isabelle; Casiraghi, Margherita; Ares, Carmen; Bolsi, Alessandra; Weber, Damien C.; Lomax, Antony J.; Albertini, Francesca

    2014-12-01

    To investigate the effect of metal implants in proton radiotherapy, dose distributions of different, clinically relevant treatment plans have been measured in an anthropomorphic phantom and compared to treatment planning predictions. The anthropomorphic phantom, which is sliced into four segments in the cranio-caudal direction, is composed of tissue equivalent materials and contains a titanium implant in a vertebral body in the cervical region. GafChromic® films were laid between the different segments to measure the 2D delivered dose. Three different four-field plans have then been applied: a Single-Field-Uniform-Dose (SFUD) plan, both with and without artifact correction implemented, and an Intensity-Modulated-Proton-Therapy (IMPT) plan with the artifacts corrected. For corrections, the artifacts were manually outlined and the Hounsfield Units manually set to an average value for soft tissue. Results show a surprisingly good agreement between prescribed and delivered dose distributions when artifacts have been corrected, with > 97% and 98% of points fulfilling the gamma criterion of 3%/3 mm for both SFUD and the IMPT plans, respectively. In contrast, without artifact corrections, up to 18% of measured points fail the gamma criterion of 3%/3 mm for the SFUD plan. These measurements indicate that correcting manually for the reconstruction artifacts resulting from metal implants substantially improves the accuracy of the calculated dose distribution.

  17. Femtosecond-LASIK outcomes using the VisuMax®-MEL® 80 platform for mixed astigmatism refractive surgery.

    PubMed

    Stanca, Horia Tudor; Munteanu, Mihnea; Jianu, Dragoş Cătălin; Motoc, Andrei Gheorghe Marius; Jecan, Cristian Radu; Tăbăcaru, Bogdana; Stanca, Simona; Preda, Maria Alexandra

    2018-01-01

    To evaluate the predictability, efficacy and safety of Femtosecond-laser-assisted in situ keratomileusis (LASIK) procedure for mixed astigmatism. We prospectively evaluated for 12 months 74 eyes (52 patients) with mixed astigmatism that underwent Femtosecond-LASIK treatment. The preoperative mean refractive sphere value was +1.879±1.313 diopters (D) and the mean refractive cylinder value was -4.169±1.091 D. The anterior corneal flap was cut using the VisuMax® femtosecond laser and then the stromal ablation was done using the MEL® 80 excimer laser. Mean age was 30.22±6.421 years with 61.53% female patients. Postoperative spherical equivalent at 12 months was within ±0.5D of emmetropia in 75.8% of eyes and within ±1D in 97.3% of eyes. Postoperative uncorrected distance visual acuity was equivalent to or better than the preoperative corrected distance visual acuity in 91.9% of eyes. Compared to the preoperative corrected distance visual acuity (CDVA), 8.1% of eyes gained one line, 2.7% gained two lines and 2.7% gained three lines of visual acuity. Femtosecond-LASIK using the VisuMax®-MEL® 80 platform appears to have safe, effective and predictable results in mixed astigmatic eyes. The results are impressive for high refractive error treatment and for improvement of both uncorrected and corrected distance visual acuity.

  18. SpliceRover: Interpretable Convolutional Neural Networks for Improved Splice Site Prediction.

    PubMed

    Zuallaert, Jasper; Godin, Fréderic; Kim, Mijung; Soete, Arne; Saeys, Yvan; De Neve, Wesley

    2018-06-21

    During the last decade, improvements in high-throughput sequencing have generated a wealth of genomic data. Functionally interpreting these sequences and finding the biological signals that are hallmarks of gene function and regulation is currently mostly done using automated genome annotation platforms, which mainly rely on integrated machine learning frameworks to identify different functional sites of interest, including splice sites. Splicing is an essential step in the gene regulation process, and the correct identification of splice sites is a major cornerstone in a genome annotation system. In this paper, we present SpliceRover, a predictive deep learning approach that outperforms the state-of-the-art in splice site prediction. SpliceRover uses convolutional neural networks (CNNs), which have been shown to obtain cutting edge performance on a wide variety of prediction tasks. We adapted this approach to deal with genomic sequence inputs, and show it consistently outperforms already existing approaches, with relative improvements in prediction effectiveness of up to 80.9% when measured in terms of false discovery rate. However, a major criticism of CNNs concerns their "black box" nature, as mechanisms to obtain insight into their reasoning processes are limited. To facilitate interpretability of the SpliceRover models, we introduce an approach to visualize the biologically relevant information learnt. We show that our visualization approach is able to recover features known to be important for splice site prediction (binding motifs around the splice site, presence of polypyrimidine tracts and branch points), as well as reveal new features (e.g., several types of exclusion patterns near splice sites). SpliceRover is available as a web service. The prediction tool and instructions can be found at http://bioit2.irc.ugent.be/splicerover/. Supplementary materials are available at Bioinformatics online.

  19. Differential distributions for t-channel single top-quark production and decay at next-to-next-to-leading order in QCD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berger, Edmond L.; Gao, Jun; Zhu, Hua Xing

    We present a detailed phenomenological study of the next-to-next-to-leading order (NNLO) QCD corrections for t-channel single top (anti-)quark production and its semi-leptonic decay at the CERN Large Hadron Collider (LHC). We find the NNLO corrections for the total inclusive rates at the LHC with different center of mass energies are generally smaller than the NLO corrections, indicative of improved convergence. However, they can be large for differential distributions, reaching a level of 10% or more in certain regions of the transverse momentum distributions of the top (anti-)quark and the pseudo-rapidity distributions of the leading jet in the event. In all cases the perturbative hard-scale uncertainties are greatly reduced after the NNLO corrections are included. We also show a comparison of the normalized parton-level distributions to recent data from the 8 TeV measurement of the ATLAS collaboration. The NNLO corrections tend to shift the theoretical predictions closer to the measured transverse momentum distribution of the top (anti)-quark. Importantly, for the LHC at 13 TeV, we present NNLO cross sections in a fiducial volume with decays of the top quark included.

  20. Towards a universal method for calculating hydration free energies: a 3D reference interaction site model with partial molar volume correction.

    PubMed

    Palmer, David S; Frolov, Andrey I; Ratkova, Ekaterina L; Fedorov, Maxim V

    2010-12-15

    We report a simple universal method to systematically improve the accuracy of hydration free energies calculated using an integral equation theory of molecular liquids, the 3D reference interaction site model. A strong linear correlation is observed between the difference of the experimental and (uncorrected) calculated hydration free energies and the calculated partial molar volume for a data set of 185 neutral organic molecules from different chemical classes. By using the partial molar volume as a linear empirical correction to the calculated hydration free energy, we obtain predictions of hydration free energies in excellent agreement with experiment (R = 0.94, σ = 0.99 kcal mol-1 for a test set of 120 organic molecules).
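
    The correction itself is simple to sketch: regress the experiment-minus-calculation residual on the calculated partial molar volume and add the fitted line back to the raw predictions. The numbers below are synthetic placeholders, not the 185-molecule data set.

```python
# Linear partial-molar-volume (PMV) correction to hydration free energies.
import numpy as np

rng = np.random.default_rng(5)
n = 185
pmv = rng.uniform(50, 300, n)                            # cm^3/mol, say
dg_exp = rng.normal(-5, 3, n)                            # kcal/mol
dg_calc = dg_exp + 0.03 * pmv + rng.normal(0, 0.8, n)    # systematic PMV-linked error

a, b = np.polyfit(pmv, dg_exp - dg_calc, 1)              # residual ~ a*PMV + b
dg_corrected = dg_calc + a * pmv + b

rmse = np.sqrt(np.mean((dg_corrected - dg_exp) ** 2))
r = np.corrcoef(dg_corrected, dg_exp)[0, 1]
print(f"a={a:.4f}  b={b:.3f}  R={r:.3f}  sigma={rmse:.2f} kcal/mol")
```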

  1. Vector boson production in pPb and PbPb collisions at the LHC and its impact on nCTEQ15 PDFs

    DOE PAGES

    Kusina, A.; Lyonnet, F.; Clark, D. B.; ...

    2017-07-21

    Here, we provide a comprehensive comparison of W±/Z vector boson production data in pPb and PbPb collisions at the LHC with predictions obtained using the nCTEQ15 PDFs. We also identify the measurements which have the largest potential impact on the PDFs, and estimate the effect of including these data using a Bayesian reweighting method. We find this data set can provide information regarding both the nuclear corrections and the heavy flavor (strange quark) PDF components. As the parton flavor determination/separation for the proton depends on nuclear corrections (from heavy-target DIS, for example), this information can also help improve the proton PDFs.

  3. A new empirical potential energy function for Ar2

    NASA Astrophysics Data System (ADS)

    Myatt, Philip T.; Dham, Ashok K.; Chandrasekhar, Pragna; McCourt, Frederick R. W.; Le Roy, Robert J.

    2018-06-01

    A critical re-analysis of all available spectroscopic and virial coefficient data for Ar2 has been used to determine an improved empirical analytic potential energy function that has been 'tuned' to optimise its agreement with viscosity, diffusion and thermal diffusion data, and whose short-range behaviour is in reasonably good agreement with the most recent ab initio calculations for this system. The recommended Morse/long-range potential function is smooth and differentiable at all distances, and incorporates both the correct theoretically predicted long-range behaviour and the correct limiting short-range functional behaviour. The resulting value of the well depth is ? cm-1 and the associated equilibrium distance is re = 3.766 (±0.002) Å, while the 40Ar s-wave scattering length is -714 Å.

  4. Second derivatives for approximate spin projection methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thompson, Lee M.; Hratchian, Hrant P., E-mail: hhratchian@ucmerced.edu

    2015-02-07

    The use of broken-symmetry electronic structure methods is required in order to obtain correct behavior of electronically strained open-shell systems, such as transition states, biradicals, and transition metals. This approach often has issues with spin contamination, which can lead to significant errors in predicted energies, geometries, and properties. Approximate projection schemes are able to correct for spin contamination and can often yield improved results. To fully make use of these methods and to carry out exploration of the potential energy surface, it is desirable to develop an efficient second energy derivative theory. In this paper, we formulate the analytical second derivatives for the Yamaguchi approximate projection scheme, building on recent work that has yielded an efficient implementation of the analytical first derivatives.

  5. Adsorption energies of benzene on close packed transition metal surfaces using the random phase approximation

    NASA Astrophysics Data System (ADS)

    Garrido Torres, José A.; Ramberger, Benjamin; Früchtl, Herbert A.; Schaub, Renald; Kresse, Georg

    2017-11-01

    The adsorption energy of benzene on various metal substrates is predicted using the random phase approximation (RPA) for the correlation energy. Agreement with available experimental data is systematically better than 10% for both coinage and reactive metals. The results are also compared with more approximate methods, including van der Waals density functional theory (DFT), as well as dispersion-corrected DFT functionals. Although dispersion-corrected DFT can yield accurate results, for instance, on coinage metals, the adsorption energies are clearly overestimated on more reactive transition metals. Furthermore, coverage dependent adsorption energies are well described by the RPA. This shows that for the description of aromatic molecules on metal surfaces further improvements in density functionals are necessary, or more involved many-body methods such as the RPA are required.

  6. First principles study of pressure induced polymorphic phase transition in KNO3

    NASA Astrophysics Data System (ADS)

    Yedukondalu, N.; Vaitheeswaran, G.

    2015-06-01

    We report the structural, elastic, electronic, and vibrational properties of the polymorphic phases II and III of KNO3 based on density functional theory (DFT). Using the semi-empirical dispersion correction (DFT-D2) method, we predicted the correct thermodynamic ground state of KNO3, and the obtained ground state properties of the polymorphs are in good agreement with the experiments. We further used this method to calculate the elastic constants, IR and Raman spectra, vibrational frequencies and their assignment for these polymorphs. The calculated Tran Blaha-modified Becke Johnson (TB-mBJ) electronic structure shows that both polymorphic phases are direct band gap insulators with mixed ionic and covalent bonding. The TB-mBJ band gaps are also improved over those of standard DFT functionals and are comparable with the available experiments.

  7. Predicting the extent of metabolism using in vitro permeability rate measurements and in silico permeability rate predictions

    PubMed Central

    Hosey, Chelsea M; Benet, Leslie Z

    2015-01-01

    The Biopharmaceutics Drug Disposition Classification System (BDDCS) can be utilized to predict drug disposition, including interactions with other drugs and transporter or metabolizing enzyme effects based on the extent of metabolism and solubility of a drug. However, defining the extent of metabolism relies upon clinical data. Drugs exhibiting high passive intestinal permeability rates are extensively metabolized. Therefore, we aimed to determine if in vitro measures of permeability rate or in silico permeability rate predictions could predict the extent of metabolism, to determine a reference compound representing the permeability rate above which compounds would be expected to be extensively metabolized, and to predict the major route of elimination of compounds in a two-tier approach utilizing permeability rate and a previously published model predicting the major route of elimination of parent drug. Twenty-two in vitro permeability rate measurement data sets in Caco-2 and MDCK cell lines and PAMPA were collected from the literature, while in silico permeability rate predictions were calculated using ADMET Predictor™ or VolSurf+. The potential for permeability rate to differentiate between extensively and poorly metabolized compounds was analyzed with receiver operating characteristic curves. Compounds that yielded the highest sensitivity-specificity average were selected as permeability rate reference standards. The major route of elimination of poorly permeable drugs was predicted by our previously published model and the accuracies and predictive values were calculated. The areas under the receiver operating curves were >0.90 for in vitro measures of permeability rate and >0.80 for the VolSurf+ model of permeability rate, indicating they were able to predict the extent of metabolism of compounds. Labetalol and zidovudine predicted greater than 80% of extensively metabolized drugs correctly and greater than 80% of poorly metabolized drugs correctly in Caco-2 and MDCK, respectively, while theophylline predicted greater than 80% of extensively and poorly metabolized drugs correctly in PAMPA. A two-tier approach predicting elimination route predicts 72±9%, 49±10%, and 66±7% of extensively metabolized, biliarily eliminated, and renally eliminated parent drugs correctly when the permeability rate is predicted in silico and 74±7%, 85±2%, and 73±8% of extensively metabolized, biliarily eliminated, and renally eliminated parent drugs correctly, respectively when the permeability rate is determined in vitro. PMID:25816851
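
    The reference-compound selection amounts to a threshold search that maximizes the average of sensitivity and specificity for the "extensively metabolized" label; below is a sketch with synthetic permeability rates and class labels, not the study's data sets.

```python
# Threshold search maximizing the sensitivity-specificity average.
import numpy as np

rng = np.random.default_rng(6)
n = 200
extensive = rng.random(n) < 0.5                # extensively metabolized labels
log_perm = rng.normal(loc=np.where(extensive, -4.8, -5.8), scale=0.5)

best = None
for thr in np.sort(log_perm):                  # candidate reference compounds
    pred = log_perm >= thr
    sens = np.mean(pred[extensive])            # fraction of extensive called positive
    spec = np.mean(~pred[~extensive])          # fraction of poor called negative
    score = (sens + spec) / 2
    if best is None or score > best[0]:
        best = (score, thr, sens, spec)

score, thr, sens, spec = best
print(f"threshold={thr:.2f}  sens={sens:.2f}  spec={spec:.2f}  avg={score:.2f}")
```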

  8. North Atlantic climate model bias influence on multiyear predictability

    NASA Astrophysics Data System (ADS)

    Wu, Y.; Park, T.; Park, W.; Latif, M.

    2018-01-01

    The influences of North Atlantic biases on multiyear predictability of unforced surface air temperature (SAT) variability are examined in the Kiel Climate Model (KCM). By employing a freshwater flux correction over the North Atlantic to the model, which strongly alleviates both North Atlantic sea surface salinity (SSS) and sea surface temperature (SST) biases, the freshwater flux-corrected integration depicts significantly enhanced multiyear SAT predictability in the North Atlantic sector in comparison to the uncorrected one. The enhanced SAT predictability in the corrected integration is due to a stronger and more variable Atlantic Meridional Overturning Circulation (AMOC) and its enhanced influence on North Atlantic SST. Results obtained from preindustrial control integrations of models participating in the Coupled Model Intercomparison Project Phase 5 (CMIP5) support the findings obtained from the KCM: models with large North Atlantic biases tend to have a weak AMOC influence on SAT and exhibit a smaller SAT predictability over the North Atlantic sector.

  9. Usefulness of 3-dimensional stereotactic surface projection FDG PET images for the diagnosis of dementia

    PubMed Central

    Kim, Jahae; Cho, Sang-Geon; Song, Minchul; Kang, Sae-Ryung; Kwon, Seong Young; Choi, Kang-Ho; Choi, Seong-Min; Kim, Byeong-Chae; Song, Ho-Chun

    2016-01-01

    To compare the diagnostic performance and confidence of a standard visual reading and combined 3-dimensional stereotactic surface projection (3D-SSP) results to discriminate between Alzheimer disease (AD)/mild cognitive impairment (MCI), dementia with Lewy bodies (DLB), and frontotemporal dementia (FTD). [18F]fluorodeoxyglucose (FDG) PET brain images were obtained from 120 patients (64 AD/MCI, 38 DLB, and 18 FTD) who were clinically confirmed over 2 years of follow-up. Three nuclear medicine physicians performed the diagnosis and rated diagnostic confidence twice: once by standard visual methods, and once with the addition of 3D-SSP. Diagnostic performance and confidence were compared between the 2 methods. 3D-SSP showed higher sensitivity, specificity, accuracy, and positive and negative predictive values for discriminating different types of dementia compared with the visual method alone, except for AD/MCI specificity and FTD sensitivity. Correction of misdiagnosis after adding 3D-SSP images was greatest for AD/MCI (56%), followed by DLB (13%) and FTD (11%). Diagnostic confidence also increased in DLB (visual: 3.2; 3D-SSP: 4.1; P < 0.001), followed by AD/MCI (visual: 3.1; 3D-SSP: 3.8; P = 0.002) and FTD (visual: 3.5; 3D-SSP: 4.2; P = 0.022). Overall, 154/360 (43%) cases had a corrected misdiagnosis or improved diagnostic confidence for the correct diagnosis. The addition of 3D-SSP images to visual analysis helped to discriminate different types of dementia in FDG PET scans, by correcting misdiagnoses and enhancing diagnostic confidence in the correct diagnosis. Improvement of diagnostic accuracy and confidence by 3D-SSP images might help to determine the cause of dementia and appropriate treatment. PMID:27930593

  10. Theoretical prediction of crystallization kinetics of a supercooled Lennard-Jones fluid

    NASA Astrophysics Data System (ADS)

    Gunawardana, K. G. S. H.; Song, Xueyu

    2018-05-01

    The first order curvature correction to the crystal-liquid interfacial free energy is calculated using a theoretical model based on the interfacial excess thermodynamic properties. The correction parameter (δ), which is analogous to the Tolman length at a liquid-vapor interface, is found to be 0.48 ± 0.05 for a Lennard-Jones (LJ) fluid. We show that this curvature correction is crucial in predicting the nucleation barrier when the size of the crystal nucleus is small. The thermodynamic driving force (Δμ) corresponding to available simulated nucleation conditions is also calculated by combining the simulated data with a classical density functional theory. In this paper, we show that the classical nucleation theory is capable of predicting the nucleation barrier with excellent agreement to the simulated results when the curvature correction to the interfacial free energy is accounted for.
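
    A sketch of the underlying arithmetic: classical nucleation theory with a Tolman-like curvature correction gamma(r) = gamma_inf / (1 + 2*delta/r). Only delta = 0.48 is taken from the abstract; the other reduced-unit parameters are invented, and the critical radius is found by a fixed-point iteration that neglects the d(gamma)/dr term.

```python
# Classical nucleation theory with a Tolman-like curvature correction.
import numpy as np

gamma_inf = 0.36   # planar crystal-liquid interfacial free energy (illustrative)
delta = 0.48       # curvature-correction length reported above
rho_s = 0.95       # crystal number density (illustrative)
dmu = 0.30         # thermodynamic driving force per particle (illustrative)

def gamma(r):
    return gamma_inf / (1.0 + 2.0 * delta / r)

# critical radius: iterate r = 2*gamma(r) / (rho_s*dmu); this neglects the
# d(gamma)/dr contribution to dG/dr = 0, adequate for a qualitative sketch
r = 2.0 * gamma_inf / (rho_s * dmu)
for _ in range(100):
    r = 2.0 * gamma(r) / (rho_s * dmu)

# at this fixed point the barrier reduces to (4*pi/3) * r^2 * gamma(r)
barrier = (4.0 / 3.0) * np.pi * r**2 * gamma(r)
flat = 16.0 * np.pi * gamma_inf**3 / (3.0 * (rho_s * dmu) ** 2)
print(f"r* = {r:.3f}, corrected barrier = {barrier:.2f}, flat-gamma barrier = {flat:.2f}")
```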

  11. Analyzing the uncertainty of suspended sediment load prediction using sequential data assimilation

    NASA Astrophysics Data System (ADS)

    Leisenring, Marc; Moradkhani, Hamid

    2012-10-01

    A first step in understanding the impacts of sediment and controlling the sources of sediment is to quantify the mass loading. Since mass loading is the product of flow and concentration, the quantification of loads first requires the quantification of runoff volume. Using the National Weather Service's SNOW-17 and the Sacramento Soil Moisture Accounting (SAC-SMA) models, this study employed particle filter based Bayesian data assimilation methods to predict seasonal snow water equivalent (SWE) and runoff within a small watershed in the Lake Tahoe Basin located in California, USA. A procedure was developed to scale the variance multipliers (also known as hyperparameters) for model parameters and predictions based on the accuracy of the mean predictions relative to the ensemble spread. In addition, an online bias correction algorithm based on the lagged average bias was implemented to detect and correct for systematic bias in model forecasts prior to updating with the particle filter. Both of these methods significantly improved the performance of the particle filter without requiring excessively wide prediction bounds. The flow ensemble was linked to a non-linear regression model that was used to predict suspended sediment concentrations (SSCs) based on runoff rate and time of year. Runoff volumes and SSCs were then combined to produce an ensemble of suspended sediment load estimates. Annual suspended sediment loads for the 5 years of simulation were finally computed along with 95% prediction intervals that account for uncertainty in both the SSC regression model and flow rate estimates. Understanding the uncertainty associated with annual suspended sediment load predictions is critical for making sound watershed management decisions aimed at maintaining the exceptional clarity of Lake Tahoe. The computational methods developed and applied in this research could assist with similar studies where it is important to quantify the predictive uncertainty of pollutant load estimates.
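
    The lagged-average bias-correction idea couples naturally to a bootstrap particle filter; the sketch below applies it to a toy biased random walk rather than the SNOW-17/SAC-SMA models (lag length, noise scales, and the imposed bias are invented):

```python
# Particle filter with an online lagged-average bias correction (toy model).
import numpy as np
from collections import deque

rng = np.random.default_rng(7)
n_particles, lag = 500, 10
particles = rng.normal(0.0, 1.0, n_particles)
recent_errors = deque(maxlen=lag)              # lagged forecast errors

truth = 0.0
for step in range(100):
    truth += 0.2                               # true dynamics
    # model forecast carries a systematic +0.15 bias
    particles += 0.2 + 0.15 + rng.normal(0.0, 0.1, n_particles)
    obs = truth + rng.normal(0.0, 0.2)

    # subtract the lagged-average bias before the filter update
    bias_hat = np.mean(recent_errors) if recent_errors else 0.0
    particles -= bias_hat
    # record the pre-correction forecast error for future bias estimates
    recent_errors.append(particles.mean() + bias_hat - obs)

    # bootstrap update: weight by observation likelihood, then resample
    w = np.exp(-0.5 * ((obs - particles) / 0.2) ** 2)
    w /= w.sum()
    particles = rng.choice(particles, size=n_particles, p=w)

print(f"final ensemble mean {particles.mean():.2f} vs truth {truth:.2f}")
```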

  12. Characterizing bias correction uncertainty in wheat yield predictions

    NASA Astrophysics Data System (ADS)

    Ortiz, Andrea Monica; Jones, Julie; Freckleton, Robert; Scaife, Adam

    2017-04-01

    Farming systems are under increased pressure due to current and future climate change, variability and extremes. Research on the impacts of climate change on crop production typically relies on the output of complex Global and Regional Climate Models, which are used as input to crop impact models. Yield predictions from these top-down approaches can have high uncertainty for several reasons, including diverse model construction and parameterization, future emissions scenarios, and inherent or response uncertainty. These uncertainties propagate down each step of the 'cascade of uncertainty' that flows from climate input to impact predictions, leading to yield predictions that may be too complex for their intended use in practical adaptation options. In addition to uncertainty from impact models, uncertainty can also stem from the intermediate steps that are used in impact studies to make climate model simulations more realistic when compared to observations, or to correct the spatial or temporal resolution of climate simulations, which are often not directly applicable as input into impact models. These important steps of bias correction or calibration also add uncertainty to final yield predictions, given the various approaches that exist to correct climate model simulations. In order to address how much uncertainty the choice of bias correction method can add to yield predictions, we use several evaluation runs from Regional Climate Models from the Coordinated Regional Downscaling Experiment over Europe (EURO-CORDEX) at different resolutions together with different bias correction methods (linear and variance scaling, power transformation, quantile-quantile mapping) as input to a statistical crop model for wheat, a staple European food crop. The objective of our work is to compare the resulting simulation-driven hindcasted wheat yields to climate observation-driven wheat yield hindcasts from the UK and Germany in order to determine ranges of yield uncertainty that result from different climate model simulation input and bias correction methods. We simulate wheat yields using a General Linear Model that includes the effects of seasonal maximum temperatures and precipitation, since wheat is sensitive to heat stress during important developmental stages. We use the same statistical model to predict future wheat yields using the recently available bias-corrected simulations of EURO-CORDEX-Adjust. While statistical models are often criticized for their lack of complexity, an advantage is that we are here able to consider only the effect of the choice of climate model, resolution or bias correction method on yield. Initial results using both past and future bias-corrected climate simulations with a process-based model will also be presented. Through these methods, we make recommendations for preparing climate model output for crop models.
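
    Of the bias-correction methods compared, empirical quantile-quantile mapping is the easiest to sketch: map each simulated value to the observed value at the same empirical quantile of the reference period. Gamma-distributed synthetic series stand in for the climate variables below.

```python
# Empirical quantile-quantile mapping bias correction (toy series).
import numpy as np

rng = np.random.default_rng(8)
obs_hist = rng.gamma(shape=2.0, scale=1.5, size=3000)    # "observed" climate
sim_hist = rng.gamma(shape=2.5, scale=1.2, size=3000)    # biased simulation, reference period
sim_fut = rng.gamma(shape=2.5, scale=1.3, size=3000)     # future simulation to correct

def quantile_map(x, sim_ref, obs_ref):
    # empirical CDF value of x within the simulated reference period,
    # then invert the observed CDF at the same probability
    q = np.searchsorted(np.sort(sim_ref), x) / len(sim_ref)
    return np.quantile(obs_ref, np.clip(q, 0.0, 1.0))

corrected = quantile_map(sim_fut, sim_hist, obs_hist)
print("raw future mean:      ", sim_fut.mean().round(2))
print("corrected future mean:", corrected.mean().round(2))
print("observed hist mean:   ", obs_hist.mean().round(2))
```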

  13. Punishing an error improves learning: the influence of punishment magnitude on error-related neural activity and subsequent learning.

    PubMed

    Hester, Robert; Murphy, Kevin; Brown, Felicity L; Skilleter, Ashley J

    2010-11-17

    Punishing an error to shape subsequent performance is a major tenet of individual and societal level behavioral interventions. Recent work examining error-related neural activity has identified that the magnitude of activity in the posterior medial frontal cortex (pMFC) is predictive of learning from an error, whereby greater activity in this region predicts adaptive changes in future cognitive performance. It remains unclear how punishment influences error-related neural mechanisms to effect behavior change, particularly in key regions such as pMFC, which previous work has demonstrated to be insensitive to punishment. Using an associative learning task that provided monetary reward and punishment for recall performance, we observed that when recall errors were categorized by subsequent performance--whether the failure to accurately recall a number-location association was corrected at the next presentation of the same trial--the magnitude of error-related pMFC activity predicted future correction. However, the pMFC region was insensitive to the magnitude of punishment an error received and it was the left insula cortex that predicted learning from the most aversive outcomes. These findings add further evidence to the hypothesis that error-related pMFC activity may reflect more than a prediction error in representing the value of an outcome. The novel role identified here for the insular cortex in learning from punishment appears particularly compelling for our understanding of psychiatric and neurologic conditions that feature both insular cortex dysfunction and a diminished capacity for learning from negative feedback or punishment.

  14. Effects of correcting in situ ruminal microbial colonization of feed particles on the relationship between ruminally undegraded and intestinally digested crude protein in concentrate feeds.

    PubMed

    González, Javier; Mouhbi, Rabiaa; Guevara-González, Jesús Alberto; Arroyo, José María

    2018-02-01

    In situ estimates of ruminally undegraded protein (RUP) and intestinally digested protein (IDP) of ten concentrates, uncorrected or corrected for ruminal microbial colonization, were used to examine the effects of this correction on the relationship between IDP and RUP values. Both variables were established for three rumen- and duodenum-cannulated wethers using 15N-labeling techniques and considering measured rates of ruminal particle comminution (kc) and outflow (kp). A covariance analysis showed that the close relationship found between both variables (IDP = (-0.0132 ± 0.00679) + (0.776 ± 0.0002) × RUP; n = 60; P < 0.001; r = 0.960) is not affected by correcting for microbial colonization (P = 0.682). The IDP content in concentrates and industrial by-products can be predicted from RUP values, thus avoiding the laborious and complex procedure of determining intestinal digestibility; however, a larger sample of feeds is necessary to achieve more accurate predictions. The lack of influence of the correction for microbial contamination on the prediction observed in the present study increases the data available for this prediction. However, only the use of corrected values may provide an accurate evaluation. © 2017 Society of Chemical Industry.

  15. Using the Real-Ear-to-Coupler Difference within the American Academy of Audiology Pediatric Amplification Guideline: Protocols for Applying and Predicting Earmold RECDs.

    PubMed

    Moodie, Sheila; Pietrobon, Jonathan; Rall, Eileen; Lindley, George; Eiten, Leisha; Gordey, Dave; Davidson, Lisa; Moodie, K Shane; Bagatto, Marlene; Haluschak, Meredith Magathan; Folkeard, Paula; Scollie, Susan

    2016-03-01

    Real-ear-to-coupler difference (RECD) measurements are used for the purposes of estimating degree and configuration of hearing loss (in dB SPL ear canal) and predicting hearing aid output from coupler-based measures. Accurate measurements of hearing threshold, derivation of hearing aid fitting targets, and predictions of hearing aid output in the ear canal assume consistent matching of the RECD coupling procedure (i.e., foam tip or earmold) with that used during assessment and in verification of the hearing aid fitting. When there is a mismatch between these coupling procedures, errors are introduced. The goal of this study was to quantify the systematic difference in measured RECD values obtained when using a foam tip versus an earmold with various tube lengths. Assuming that systematic errors exist, the second goal was to investigate the use of a foam tip to earmold correction for the purposes of improving fitting accuracy when mismatched RECD coupling conditions occur (e.g., foam tip at assessment, earmold at verification). Eighteen adults and 17 children (age range: 3-127 mo) participated in this study. Data were obtained using simulated ears of various volumes and earmold tubing lengths and from patients using their own earmolds. Derived RECD values based on simulated ear measurements were compared with RECD values obtained for adult and pediatric ears for foam tip and earmold coupling. Results indicate that differences between foam tip and earmold RECDs are consistent across test ears for adults and children, which supports the development of a correction between foam tip and earmold couplings for RECDs that can be applied across individuals. The foam tip to earmold correction values developed in this study can be used to provide improved estimations of earmold RECDs. This may support better accuracy in acoustic transforms related to transforming thresholds and/or hearing aid coupler responses to ear canal sound pressure level for the purposes of fitting behind-the-ear hearing aids. American Academy of Audiology.

  16. Ocular Chromatic Aberrations and Their Effects on Polychromatic Retinal Image Quality

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaoxiao

    Previous studies of ocular chromatic aberrations have concentrated on the chromatic difference of focus (CDF). Less is known about the chromatic difference of image position (CDP) in the peripheral retina, and no experimental attempt has been made to measure the ocular chromatic difference of magnification (CDM). Consequently, theoretical modelling of human eyes is incomplete. This insufficient knowledge of ocular chromatic aberrations is partially responsible for two unsolved applied vision problems: (1) how can vision be improved by correcting ocular chromatic aberration, and (2) what is the impact of ocular chromatic aberration on the use of isoluminance gratings as a tool in spatial-color vision? Using optical ray tracing methods, MTF analysis methods of image quality, and psychophysical methods, I have developed a more complete model of ocular chromatic aberrations and their effects on vision. The ocular CDM was determined psychophysically by measuring the tilt in the apparent frontal parallel plane (AFPP) induced by an interocular difference in image wavelength. This experimental result was then used to verify a theoretical relationship between the ocular CDM, the ocular CDF, and the entrance pupil of the eye. In the retinal image obtained after correcting the ocular CDF with existing achromatizing methods, two forms of chromatic aberration (CDM and chromatic parallax) were examined. The CDM was predicted by theoretical ray tracing and measured with the same method used to determine the ocular CDM. The chromatic parallax was predicted with a nodal ray model and measured with the two-color vernier alignment method. The influence of these two aberrations on the polychromatic MTF was calculated. Using this improved model of ocular chromatic aberration, luminance artifacts in the images of isoluminance gratings were calculated. The predicted luminance artifacts were then compared with experimental data from previous investigators. The results show that: (1) a simple relationship exists between two major chromatic aberrations and the location of the pupil; (2) the ocular CDM is measurable and varies among individuals; (3) all existing methods to correct ocular chromatic aberration face another aberration, chromatic parallax, which is inherent in the methodology; and (4) ocular chromatic aberrations have the potential to contaminate psychophysical experimental results on human spatial-color vision.

  17. Assessment of specific characteristics of abnormal general movements: does it enhance the prediction of cerebral palsy?

    PubMed

    Hamer, Elisa G; Bos, Arend F; Hadders-Algra, Mijna

    2011-08-01

    Abnormal general movements at around 3 months corrected age indicate a high risk of cerebral palsy (CP). We aimed to determine whether specific movement characteristics can improve the predictive power of definitely abnormal general movements. Video recordings of 46 infants with definitely abnormal general movements at 9 to 13 weeks corrected age (20 males; 26 females; median gestational age 30wks; median birthweight 1200g) were analysed for the following characteristics: presence of fidgety, cramped synchronized, stiff, or jerky movements and asymmetrical tonic neck reflex pattern. Neurological condition (presence or absence of CP), gross motor development (Alberta Infant Motor Scales), quality of motor behaviour (Infant Motor Profile), functional mobility (Pediatric Evaluation of Disability Inventory), and Mental Developmental Index (Bayley Scales) were assessed at 18 months corrected age. Infants were excluded from participating in the study if they had severe congenital anomalies or if their caregivers had an insufficient knowledge of the Dutch language. Of the 46 assessed infants, 10 developed spastic CP (Gross Motor Function Classification System levels I to V; eight bilateral spastic CP, two unilateral spastic CP). The absence of fidgety movements and the presence of predominantly stiff movements were associated with CP (Fisher's exact test, p=0.018 and p=0.007 respectively) and lower Infant Motor Profile scores (Mann-Whitney U test, p=0.015 and p=0.022 respectively); stiff and predominantly stiff movements were associated with lower Alberta Infant Motor Scales scores (Mann-Whitney U test, p=0.01 and p=0.004 respectively). Cramped synchronized movements and the asymmetrical tonic neck reflex pattern were not related to outcome. None of the movement characteristics were associated with Pediatric Evaluation of Disability Inventory scores or the Mental Developmental Index. The assessment of fidgety movements and movement stiffness may improve the predictive power of definitely abnormal general movements for developmental outcome. However, the presence of fidgety movements does not preclude the development of CP. © The Authors. Developmental Medicine & Child Neurology © 2011 Mac Keith Press.

  18. Towards Cooperative Predictive Data Mining in Competitive Environments

    NASA Astrophysics Data System (ADS)

    Lisý, Viliam; Jakob, Michal; Benda, Petr; Urban, Štěpán; Pěchouček, Michal

    We study the problem of predictive data mining in a competitive multi-agent setting, in which each agent is assumed to have some partial knowledge required for correctly classifying a set of unlabelled examples. The agents are self-interested and therefore need to reason about the trade-offs between increasing their classification accuracy by collaborating with other agents and disclosing their private classification knowledge to other agents through such collaboration. We analyze the problem and propose a set of components which can enable cooperation in this otherwise competitive task. These components include measures for quantifying private knowledge disclosure, data-mining models suitable for multi-agent predictive data mining, and a set of strategies by which agents can improve their classification accuracy through collaboration. The overall framework and its individual components are validated on a synthetic experimental domain.

  19. Evaluation of 3D-Jury on CASP7 models.

    PubMed

    Kaján, László; Rychlewski, Leszek

    2007-08-21

    3D-Jury, the structure prediction consensus method publicly available in the Meta Server http://meta.bioinfo.pl/, was evaluated using models gathered in the 7th round of the Critical Assessment of Techniques for Protein Structure Prediction (CASP7). 3D-Jury is an automated expert process that generates protein structure meta-predictions from sets of models obtained from partner servers. The performance of 3D-Jury was analysed for three aspects. First, we examined the correlation between the 3D-Jury score and a model quality measure: the number of correctly predicted residues. The 3D-Jury score was shown to correlate significantly with the number of correctly predicted residues; the correlation is good enough to be used for prediction. 3D-Jury was also found to improve upon the competing servers' choice of the best structure model in most cases. The value of the 3D-Jury score as a generic reliability measure was also examined. We found that the 3D-Jury score separates bad models from good models better than the reliability score of the original server in 27 cases and falls short of it in only 5 cases out of a total of 38. We report the release of a new Meta Server feature: instant 3D-Jury scoring of uploaded user models. The 3D-Jury score continues to be a good indicator of structural model quality. It also provides a generic reliability score, especially important for models that were not assigned one by the original server. Individual structure modellers can also benefit from the 3D-Jury scoring system by testing their models in the new instant scoring feature http://meta.bioinfo.pl/compare_your_model_example.pl available in the Meta Server.
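
    The core idea of a consensus score can be sketched in a few lines: each candidate model is scored by its average structural similarity to the other submitted models. The sketch below is our simplification of that idea, not the 3D-Jury implementation.

    ```python
    # Hedged sketch of consensus scoring in the spirit of 3D-Jury: models that
    # resemble the ensemble consensus receive high scores.
    import numpy as np

    def consensus_scores(similarity: np.ndarray) -> np.ndarray:
        """similarity[i, j] is the pairwise similarity between models i and j."""
        n = similarity.shape[0]
        off_diagonal_sums = similarity.sum(axis=1) - np.diag(similarity)
        return off_diagonal_sums / (n - 1)

    # Hypothetical pairwise similarities between four server models
    S = np.array([[1.0, 0.8, 0.7, 0.2],
                  [0.8, 1.0, 0.6, 0.3],
                  [0.7, 0.6, 1.0, 0.1],
                  [0.2, 0.3, 0.1, 1.0]])
    best = int(np.argmax(consensus_scores(S)))
    print(f"consensus pick: model {best}")
    ```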

  1. Comparison of Different Attitude Correction Models for ZY-3 Satellite Imagery

    NASA Astrophysics Data System (ADS)

    Song, Wenping; Liu, Shijie; Tong, Xiaohua; Niu, Changling; Ye, Zhen; Zhang, Han; Jin, Yanmin

    2018-04-01

    ZY-3 satellite, launched in 2012, is the first civilian high-resolution stereo mapping satellite of China. This paper analyzed the positioning errors of ZY-3 satellite imagery and carried out compensation for improved geo-positioning accuracy using different correction models, including attitude quaternion correction, attitude angle offset correction, and attitude angle linear correction. The experimental results revealed that systematic errors exist in the ZY-3 attitude observations and that positioning accuracy can be improved after attitude correction with the aid of ground control. There is no significant difference between the results of the attitude quaternion correction method and the attitude angle correction method. However, the attitude angle offset correction model produced steadier improvement than the linear correction model when limited ground control points are available for a single scene.
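
    The two attitude-angle models compared above differ only in whether a drift term is estimated. A minimal sketch, with hypothetical residuals standing in for ground-control measurements:

    ```python
    # Hedged sketch: fit an offset-only and an offset-plus-drift (linear)
    # correction to attitude-angle residuals derived from ground control.
    import numpy as np

    t = np.array([0.0, 0.2, 0.4, 0.6, 0.8])         # hypothetical image-line times (s)
    residual = np.array([3.1, 3.0, 3.3, 2.9, 3.2])  # hypothetical pitch residuals (arcsec)

    # Offset model: residual ~ a
    a_offset = residual.mean()

    # Linear model: residual ~ a + b * t, via least squares
    A = np.vstack([np.ones_like(t), t]).T
    (a_lin, b_lin), *_ = np.linalg.lstsq(A, residual, rcond=None)

    print(f"offset correction: {a_offset:.2f} arcsec")
    print(f"linear correction: {a_lin:.2f} + {b_lin:.2f} t arcsec")
    ```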

  2. Sequence fingerprints distinguish erroneous from correct predictions of intrinsically disordered protein regions.

    PubMed

    Saravanan, Konda Mani; Dunker, A Keith; Krishnaswamy, Sankaran

    2017-12-27

    More than 60 prediction methods for intrinsically disordered proteins (IDPs) have been developed over the years, many of which are accessible on the World Wide Web. Nearly all of these predictors give balanced accuracies in the ~65%-~80% range. Since predictors are not perfect, further studies are required to uncover the role of amino acid residues in native IDP regions as compared to predicted IDP regions. In the present work, we make use of sequences of 100% predicted IDP regions, false positive disorder predictions, and experimentally determined IDP regions to distinguish the characteristics of native versus predicted IDP regions. A higher occurrence of asparagine is observed in sequences of native IDP regions but not in sequences of false positive predictions of IDP regions. The occurrences of certain combinations of amino acids at the pentapeptide level provide a distinguishing feature of IDPs with respect to globular proteins. The distinguishing features presented in this paper provide insights into the sequence fingerprints of amino acid residues in experimentally determined as compared to predicted IDP regions. These observations and additional work along these lines should enable improvements in the accuracy of disorder prediction algorithms.

  3. Robust inference of population structure for ancestry prediction and correction of stratification in the presence of relatedness.

    PubMed

    Conomos, Matthew P; Miller, Michael B; Thornton, Timothy A

    2015-05-01

    Population structure inference with genetic data has been motivated by a variety of applications in population genetics and genetic association studies. Several approaches have been proposed for the identification of genetic ancestry differences in samples where study participants are assumed to be unrelated, including principal components analysis (PCA), multidimensional scaling (MDS), and model-based methods for proportional ancestry estimation. Many genetic studies, however, include individuals with some degree of relatedness, and existing methods for inferring genetic ancestry fail in related samples. We present a method, PC-AiR, for robust population structure inference in the presence of known or cryptic relatedness. PC-AiR utilizes genome-screen data and an efficient algorithm to identify a diverse subset of unrelated individuals that is representative of all ancestries in the sample. The PC-AiR method directly performs PCA on the identified ancestry representative subset and then predicts components of variation for all remaining individuals based on genetic similarities. In simulation studies and in applications to real data from Phase III of the HapMap Project, we demonstrate that PC-AiR provides a substantial improvement over existing approaches for population structure inference in related samples. We also demonstrate significant efficiency gains, where a single axis of variation from PC-AiR provides better prediction of ancestry in a variety of structure settings than using 10 (or more) components of variation from widely used PCA and MDS approaches. Finally, we illustrate that PC-AiR can provide improved population stratification correction over existing methods in genetic association studies with population structure and relatedness. © 2015 WILEY PERIODICALS, INC.
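
    The two-stage structure of the method lends itself to a compact sketch: perform PCA on the unrelated subset only, then project the remaining (related) individuals onto the same axes. This is our reading of the record, not the released PC-AiR software.

    ```python
    # Hedged sketch of the PC-AiR idea: PCA on an ancestry-representative
    # unrelated subset, then projection of related individuals via SNP loadings.
    import numpy as np

    rng = np.random.default_rng(0)
    G = rng.standard_normal((100, 500))   # hypothetical standardized genotype matrix
    unrelated = np.arange(60)             # indices of the unrelated subset (assumed given)
    related = np.arange(60, 100)

    # PCA on the unrelated subset only
    mean_u = G[unrelated].mean(axis=0)
    Gu = G[unrelated] - mean_u
    U, S, Vt = np.linalg.svd(Gu, full_matrices=False)
    pcs_unrelated = U[:, :10] * S[:10]    # top 10 components of variation

    # Predict the same components of variation for the related individuals
    pcs_related = (G[related] - mean_u) @ Vt[:10].T
    ```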

  4. Multivariate Bias Correction Procedures for Improving Water Quality Predictions from the SWAT Model

    NASA Astrophysics Data System (ADS)

    Arumugam, S.; Libera, D.

    2017-12-01

    Water quality observations are usually not available on a continuous basis for longer than 1-2 years at a time over a decadal period, given the labor requirements, making calibration and validation of mechanistic models difficult. Further, any physical model prediction inherently has bias (i.e., under/over-estimation) and requires post-simulation techniques to preserve the long-term mean monthly attributes. This study suggests a multivariate bias-correction technique and compares it to a common technique for improving the performance of the SWAT model in predicting daily streamflow and total nitrogen (TN) loads across the southeast, based on split-sample validation. The proposed approach is a dimension-reduction technique, canonical correlation analysis (CCA), which regresses the observed multivariate attributes on the SWAT-simulated values. The common approach is a regression-based technique that uses ordinary least squares to adjust the model values. The observed cross-correlation between loadings and streamflow is better preserved when using canonical correlation, while individual biases are simultaneously reduced. Additionally, canonical correlation analysis does a better job of preserving the observed joint likelihood of streamflow and loadings. These procedures were applied to three watersheds chosen from the Water Quality Network in the Southeast Region; specifically, watersheds with sufficiently large drainage areas and numbers of observed data points. The performance of the two approaches is compared for the observed period and over a multi-decadal period using loading estimates from the USGS LOADEST model. Lastly, the CCA technique is applied in a forecasting sense by using 1-month-ahead forecasts of precipitation and temperature (P & T) from ECHAM4.5 as forcings in the SWAT model. Skill in using the SWAT model for forecasting loadings and streamflow at the monthly and seasonal timescale is also discussed.
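
    A minimal sketch of the multivariate idea, assuming paired simulated and observed series are available (synthetic data below; this is not the study's code):

    ```python
    # Hedged sketch: CCA-based bias correction mapping SWAT-simulated
    # [streamflow, TN load] pairs onto their observed counterparts, so the
    # cross-correlation between the two variables is respected.
    import numpy as np
    from sklearn.cross_decomposition import CCA

    rng = np.random.default_rng(1)
    sim = rng.lognormal(size=(365, 2))                 # synthetic simulated [flow, load]
    obs = 0.8 * sim + rng.normal(0.1, 0.2, sim.shape)  # synthetic observations

    cca = CCA(n_components=2)
    cca.fit(sim, obs)
    corrected = cca.predict(sim)  # bias-corrected simulations in observation space
    ```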

  5. Measurements of top-quark pair differential cross-sections in the lepton+jets channel in pp collisions at √s = 8 TeV using the ATLAS detector

    DOE PAGES

    Aad, G.; Abbott, B.; Abdallah, J.; ...

    2016-10-03

    Measurements of normalized differential cross-sections of top-quark pair production are presented as a function of top-quark, tt̄-system, and event-level kinematic observables in proton–proton collisions at a centre-of-mass energy of √s = 8 TeV. The observables have been chosen to emphasize the tt̄ production process and to be sensitive to effects of initial- and final-state radiation, to the different parton distribution functions, and to non-resonant processes and higher-order corrections. The dataset corresponds to an integrated luminosity of 20.3 fb⁻¹, recorded in 2012 with the ATLAS detector at the CERN Large Hadron Collider. Events are selected in the lepton+jets channel, requiring exactly one charged lepton and at least four jets, with at least two of the jets tagged as originating from a b-quark. The measured spectra are corrected for detector effects and are compared to several Monte Carlo simulations. The results are in fair agreement with the predictions over a wide kinematic range. Nevertheless, most generators predict a harder top-quark transverse momentum distribution at high values than what is observed in the data. Predictions beyond NLO accuracy improve the agreement with data at high top-quark transverse momenta. Using the current settings and parton distribution functions, the rapidity distributions are not well modelled by any generator under consideration. However, the level of agreement is improved when more recent sets of parton distribution functions are used.

  6. A 3D correction method for predicting the readings of a PinPoint chamber on the CyberKnife® M6™ machine

    NASA Astrophysics Data System (ADS)

    Zhang, Yongqian; Brandner, Edward; Ozhasoglu, Cihat; Lalonde, Ron; Heron, Dwight E.; Saiful Huq, M.

    2018-02-01

    The use of small fields in radiation therapy techniques has increased substantially, in particular in stereotactic radiosurgery (SRS) and stereotactic body radiation therapy (SBRT). However, as field size decreases, the response of the detector changes more rapidly with field size, and the effects of measurement uncertainties become increasingly significant due to the lack of lateral charged particle equilibrium, spectral changes as a function of field size, detector choice, and the resulting perturbations of the charged particle fluence. This work presents a novel 3D dose volume-to-point correction method to predict the readings of a 0.015 cc PinPoint chamber (PTW 31014) for both small static-field and composite-field dosimetry formed by fixed cones on the CyberKnife® M6™ machine. A 3D correction matrix is introduced to link the 3D dose distribution to the response of the PinPoint chamber in water. The parameters of the correction matrix are determined by modeling its 3D dose response in circular fields created using the 12 fixed cones (5-60 mm) on a CyberKnife® M6™ machine. A penalized least-squares optimization problem is defined by fitting the calculated detector reading to the experimental measurement data to generate the optimal correction matrix; the simulated annealing algorithm is used to solve the inverse optimization problem. All the experimental measurements are acquired for every 2 mm chamber shift in the horizontal planes for each field size. The 3D dose distributions for the measurements are calculated using Monte Carlo calculation with the MultiPlan® treatment planning system (Accuray Inc., Sunnyvale, CA, USA). The performance of the 3D correction matrix is evaluated by comparing predictions of output factors (OFs), off-axis ratios (OARs), and percentage depth dose (PDD) data to the experimental measurement data. The discrepancy between measurement and prediction for composite fields is also assessed for clinical SRS plans. The optimization algorithm used for generating the optimal correction factors is stable, and the resulting correction factors are smooth in the spatial domain. The measured and predicted OFs agree closely, with percentage differences of less than 1.9% for all 12 cones. The discrepancies between the predicted and measured PDD readings at 50 mm and 80 mm depth are 1.7% and 1.9%, respectively. The percentage differences of OARs between measurement and prediction are less than 2% in the low dose gradient region, and 2%/1 mm discrepancies are observed within the high dose gradient regions. The differences between the measurement and prediction data for all the CyberKnife-based SRS plans are less than 1%. These results demonstrate the validity and efficiency of the novel 3D correction method for small field dosimetry. The 3D correction matrix links the 3D dose distribution and the reading of the PinPoint chamber. The comparison between the predicted readings and the measurement data for static small fields (OFs, OARs and PDDs) yields discrepancies within 2% for low dose gradient regions and 2%/1 mm for high dose gradient regions, and the discrepancies between the predicted and measured data are less than 1% for all the SRS plans. The 3D correction method provides a means to evaluate clinical measurement data and can be applied to point dose verification of non-standard composite fields in intensity-modulated radiation therapy.
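
    The central fitting step, relating chamber readings to a weighted sum of dose over the detector volume, can be sketched with a ridge-regularized least-squares stand-in for the paper's penalized optimization (the study itself used simulated annealing; everything below is synthetic):

    ```python
    # Hedged sketch: fit a volume-to-point correction vector w so that the
    # chamber reading is modelled as a weighted sum of 3D dose values over
    # voxels covering the chamber volume.
    import numpy as np

    rng = np.random.default_rng(2)
    n_fields, n_vox = 200, 50
    D = rng.random((n_fields, n_vox))        # synthetic dose in voxels around the chamber
    reading = D @ np.full(n_vox, 1 / n_vox)  # synthetic "measured" readings
    reading += rng.normal(0, 1e-3, n_fields)

    lam = 0.1  # penalty strength (plays the role of the paper's penalty term)
    w = np.linalg.solve(D.T @ D + lam * np.eye(n_vox), D.T @ reading)
    predicted = D @ w  # predicted chamber readings for new fields
    ```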

  7. Development and Validation of a Multidisciplinary Tool for Accurate and Efficient Rotorcraft Noise Prediction (MUTE)

    NASA Technical Reports Server (NTRS)

    Liu, Yi; Anusonti-Inthra, Phuriwat; Diskin, Boris

    2011-01-01

    A physics-based, systematically coupled, multidisciplinary prediction tool (MUTE) for rotorcraft noise was developed and validated with a wide range of flight configurations and conditions. MUTE is an aggregation of multidisciplinary computational tools that accurately and efficiently model the physics of the sources of rotorcraft noise and predict the noise at far-field observer locations. It uses systematic coupling approaches among multiple disciplines, including Computational Fluid Dynamics (CFD), Computational Structural Dynamics (CSD), and high-fidelity acoustics. Within MUTE, advanced high-order CFD tools are used around the rotor blade to predict the transonic flow (shock wave) effects, which generate the high-speed impulsive noise. Predictions of the blade-vortex interaction noise in low-speed flight are also improved by using the Particle Vortex Transport Method (PVTM), which preserves the wake flow details required for blade/wake and fuselage/wake interactions. The accuracy of the source noise prediction is further improved by utilizing a coupling approach between CFD and CSD, so that the effects of key structural dynamics, elastic blade deformations, and trim solutions are correctly represented in the analysis. The blade loading information and/or the flow field parameters around the rotor blade predicted by the CFD/CSD coupling approach are used to predict the acoustic signatures at far-field observer locations with a high-fidelity noise propagation code (WOPWOP3). The predicted results from the MUTE tool for rotor blade aerodynamic loading and far-field acoustic signatures are compared and validated with a variety of experimental data sets, such as UH-60A data, DNW test data, and HART II test data.

  8. Assessment of the forecast skill of spring onset in the NMME experiment

    NASA Astrophysics Data System (ADS)

    Carrillo, C. M.; Ault, T.

    2017-12-01

    This study assesses the predictability of spring onset using an index of its interannual variability. We use the North American Multi-Model Ensemble (NMME) experiment to assess this predictability. The input datasets used to compute the spring onset index (SI-x) were treated with a daily joint bias correction (JBC) approach, and the SI-x outputs were post-processed using three ensemble model output statistics (EMOS) approaches: logistic regression, Gaussian ensemble dressing, and non-homogeneous Gaussian regression. These EMOS approaches quantify the effect of training period length and ensemble size on forecast skill. The highest range of predictability for the timing of spring onset is 10 to 60 days, located along a narrow band between 35° and 45°N in the US. Using rank probability scores based on quantiles (q), a forecast threshold of q = 0.5 provides a range of predictability that falls into two categories, 10-40 and 40-60 days, which seems to reflect the influence of the intra-seasonal scale. Using higher thresholds (q = 0.6 and 0.7), the range of predictability is lower, around 10-30 days. The post-processing with JBC improves the prediction skill by 13% relative to uncorrected results. Using EMOS, a significant positive change in the skill score is noted in regions where the skill with JBC shows evidence of improvement. The consensus of these techniques shows that regions of better predictability can be expanded.

  9. Empirical parameterization of a model for predicting peptide helix/coil equilibrium populations.

    PubMed Central

    Andersen, N. H.; Tong, H.

    1997-01-01

    A modification of the Lifson-Roig formulation of helix/coil transitions is presented; it (1) incorporates end-capping and coulombic effects (salt bridges, hydrogen bonding, and side-chain interactions with charged termini and the helix dipole), (2) incorporates helix-stabilizing hydrophobic clustering, (3) allows for different inherent termination probabilities of individual residues, and (4) differentiates helix elongation in the first versus subsequent turns of a helix. Each residue is characterized by six parameters governing helix formation. The formulation of the conditional probability of helix initiation and termination that we developed is essentially the same as one presented previously (Shalongo W, Stellwagen E. 1995. Protein Sci 4:1161-1166) and nearly the mathematical equivalent of the new capping formulation incorporated in the model presented by Rohl et al. (1996. Protein Sci 5:2623-2637). Side-chain/side-chain interactions are, in most cases, incorporated as context-dependent modifications of propagation rather than nucleation parameters. An alternative procedure for converting [θ]221 values to experimental fractional helicities is presented. Tests of the program predictions suggest this method may have some advantages both for designed peptides and for the analysis of secondary structure preferences that could drive the formation of molten-globule intermediates on protein folding pathways. The model predicts the fractional helicity of 385 peptides with a root-mean-square deviation (RMSD) of 0.050 and locates helices in proteins (with precise definition of the termini in many cases) as well as competing methods do. The propagation and nucleation parameters were derived from NMR data and from the CD data for a 79-peptide "learning set", for which an excellent fit resulted (RMSD = 0.0295). The current set of parameter corrections for capping boxes, helix dipole interactions, and side-chain/side-chain interactions (coulombic, hydrogen bonding, and hydrophobic clustering), although still under development, provides a significant improvement in both helix/coil equilibrium prediction for peptides and helix location in protein sequences. This is clearly evident in the RMS deviations between CD measures and calculated values of fractional helicity for different classes of peptides before and after applying the corrections: RMSD = 0.044 (n = 164) for peptides lacking capping boxes and i/i+3 and i/i+4 side-chain/side-chain interactions, versus RMSD = 0.054 (0.172 without the corrections, n = 221) for peptides that required context-dependent corrections of the parameters. If we restrict the analysis to N-acylated peptides with helix-stabilizing side-chain/side-chain interactions (including N-capping boxes), the degree to which our corrections account for the stabilizing interactions can be judged from the change in helicity underestimation (calc - CD): -0.15 ± 0.10, which is reduced to -0.018 ± 0.048 (n = 191) upon applying the corrections. PMID:9300492

  10. The Algorithm Theoretical Basis Document for Tidal Corrections

    NASA Technical Reports Server (NTRS)

    Fricker, Helen A.; Ridgway, Jeff R.; Minster, Jean-Bernard; Yi, Donghui; Bentley, Charles R.

    2012-01-01

    This Algorithm Theoretical Basis Document deals with the tidal corrections that need to be applied to range measurements made by the Geoscience Laser Altimeter System (GLAS). These corrections result from the action of ocean tides and Earth tides, which lead to deviations from an equilibrium surface. Since the effect of tides depends on the time of measurement, it is necessary to remove the instantaneous tide components when processing altimeter data, so that all measurements are referenced to the equilibrium surface. The three main tide components to consider are the ocean tide, the solid-earth tide, and the ocean loading tide. There are also long-period ocean tides and the pole tide. The approximate magnitudes of these components are illustrated in Table 1, together with estimates of their uncertainties (i.e., the residual error after correction). All of these components are important for GLAS measurements over the ice sheets, since centimeter-level accuracy is required for surface elevation change detection. The effect of each tidal component is removed by approximating its magnitude using tidal prediction models. Conversely, assimilation of GLAS measurements into tidal models will help to improve them, especially at high latitudes.

  11. Positioning performance of the NTCM model driven by GPS Klobuchar model parameters

    NASA Astrophysics Data System (ADS)

    Hoque, Mohammed Mainul; Jakowski, Norbert; Berdermann, Jens

    2018-03-01

    Users of the Global Positioning System (GPS) utilize the Ionospheric Correction Algorithm (ICA), also known as the Klobuchar model, for correcting ionospheric signal delay or range error. Recently, we developed an ionosphere correction algorithm called the NTCM-Klobpar model for single-frequency GNSS applications. The model is driven by a parameter computed from the GPS Klobuchar model and can consequently be used instead of the GPS Klobuchar model for ionospheric corrections. In the presented work we compare the positioning solutions obtained using NTCM-Klobpar with those using the Klobuchar model. Our investigation, using worldwide ground GPS data from a quiet and a perturbed ionospheric and geomagnetic activity period of 17 days each, shows that the 24-hour prediction performance of NTCM-Klobpar is better than that of the GPS Klobuchar model in the global average. The root mean squared deviation of the 3D position errors is found to be about 0.24 and 0.45 m less for NTCM-Klobpar compared to the GPS Klobuchar model during quiet and perturbed conditions, respectively. The presented algorithm has the potential to continuously improve the accuracy of GPS single-frequency mass-market devices with only a small software modification.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sonzogni, A. A.; McCutchan, E. A.; Johnson, T. D.

    Fission yields form an integral part of the prediction of antineutrino spectra generated by nuclear reactors, but little attention has been paid to the quality and reliability of the data used in current calculations. Following a critical review of the thermal and fast ENDF/B-VII.1 235U fission yields, deficiencies are identified and improved yields are obtained, based on corrections of erroneous yields, consistency between decay and fission yield data, and updated isomeric ratios. These corrected yields are used to calculate antineutrino spectra using the summation method. An anomalous value for the thermal fission yield of 86Ge generates an excess of antineutrinos at 5–7 MeV, a feature which is no longer present when the corrected yields are used. Thermal spectra calculated with two distinct fission yield libraries (corrected ENDF/B and JEFF) differ by up to 6% in the 0–7 MeV energy window, allowing for a basic estimate of the uncertainty involved in the fission yield component of summation calculations. Lastly, the fast-neutron antineutrino spectrum is calculated, which at the moment can only be obtained with the summation method and may be relevant for short-baseline reactor experiments using highly enriched uranium fuel.

  13. Interference correction by extracting the information of interference dominant regions: Application to near-infrared spectra

    NASA Astrophysics Data System (ADS)

    Bi, Yiming; Tang, Liang; Shan, Peng; Xie, Qiong; Hu, Yong; Peng, Silong; Tan, Jie; Li, Changwen

    2014-08-01

    Interference such as baseline drift and light scattering can degrade model predictability in multivariate analysis of near-infrared (NIR) spectra. Usually interference can be represented by an additive and a multiplicative factor. In order to eliminate these interferences, correction parameters need to be estimated from the spectra. However, the spectra are often a mixture of physical light-scattering effects and chemical light-absorbance effects, which makes parameter estimation difficult. Herein, a novel algorithm is proposed to automatically find a spectral region in which the chemical absorbance of interest and the noise are both low, that is, an interference-dominant region (IDR). Based on the definition of the IDR, a two-step method is proposed to find the optimal IDR and the corresponding correction parameters estimated from it. Finally, the correction is applied to the full spectral range using the previously obtained parameters, for the calibration set and test set respectively. The method can be applied to multi-target systems, with one IDR suitable for all targeted analytes. Tested on two benchmark near-infrared data sets, the proposed method provided considerable improvement compared with full-spectrum estimation methods and was comparable with other state-of-the-art methods.
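
    In the additive-plus-multiplicative model, each spectrum x is treated as x ≈ a + b·r, with r a reference spectrum; once a and b are estimated from the IDR, the whole spectrum is corrected as (x - a)/b. A sketch under those assumptions (synthetic data; the paper's algorithm for selecting the IDR is not reproduced):

    ```python
    # Hedged sketch: estimate additive (a) and multiplicative (b) interference
    # factors within an interference-dominant region, then correct the full
    # spectrum, in the spirit of multiplicative scatter correction.
    import numpy as np

    rng = np.random.default_rng(3)
    wavelengths = np.linspace(1100, 2500, 700)
    reference = np.exp(-((wavelengths - 1700) / 80) ** 2)   # synthetic mean spectrum
    x = 1.3 * reference + 0.05 + rng.normal(0, 0.002, 700)  # one distorted spectrum

    idr = slice(0, 100)  # assumed IDR: low analyte absorbance, low noise

    # Regress the spectrum on the reference within the IDR: x ~ a + b * reference
    b, a = np.polyfit(reference[idr], x[idr], deg=1)
    corrected = (x - a) / b  # correction applied over the full spectral range
    ```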

  14. Antarctic contribution to sea level rise observed by GRACE with improved GIA correction

    NASA Astrophysics Data System (ADS)

    Ivins, Erik R.; James, Thomas S.; Wahr, John; Schrama, Ernst J. O.; Landerer, Felix W.; Simon, Karen M.

    2013-06-01

    Antarctic volume changes during the past 21 thousand years are smaller than previously thought, and here we construct an ice sheet history that drives a forward model prediction of the glacial isostatic adjustment (GIA) gravity signal. The new model, in turn, should give predictions that are constrained by recent uplift data. The impact of the GIA signal on a Gravity Recovery and Climate Experiment (GRACE) Antarctic mass balance estimate depends on the specific GRACE analysis method used. For the method described in this paper, the GIA contribution to the apparent surface mass change is re-evaluated to be +55±13 Gt/yr by considering a revised ice history model and a parameter search for vertical motion predictions that best fit the GPS observations at 18 high-quality stations. Although the GIA model spans a range of possible Earth rheological structure values, the data are not yet sufficient to solve for a preferred value of upper- and lower-mantle viscosity or for a preferred lithospheric thickness. GRACE monthly solutions from the Center for Space Research Release 04 (CSR-RL04) time series from January 2003 to the beginning of January 2012, uncorrected for GIA, yield an ice mass rate of +2.9±29 Gt/yr. The new GIA correction increases the solved-for ice mass imbalance of Antarctica to -57±34 Gt/yr. The revised GIA correction is smaller than past GRACE estimates by about 50 to 90 Gt/yr. The new upper bound on sea level rise from the Antarctic ice sheet, averaged over the time span 2003.0-2012.0, is about 0.16±0.09 mm/yr.

  15. Evaluation of artificial neural network algorithms for predicting METs and activity type from accelerometer data: validation on an independent sample.

    PubMed

    Freedson, Patty S; Lyden, Kate; Kozey-Keadle, Sarah; Staudenmayer, John

    2011-12-01

    Previous work from our laboratory provided a "proof of concept" for the use of artificial neural networks (nnets) to estimate metabolic equivalents (METs) and identify activity type from accelerometer data (Staudenmayer J, Pober D, Crouter S, Bassett D, Freedson P, J Appl Physiol 107: 1300-1307, 2009). The purpose of this study was to develop new nnets based on a larger, more diverse training data set and to apply these nnet prediction models to an independent sample to evaluate the robustness and flexibility of this machine-learning modeling technique. The nnet training data set (University of Massachusetts) included 277 participants who each completed 11 activities. The independent validation sample (n = 65) (University of Tennessee) completed one of three activity routines. Criterion measures were 1) measured METs assessed using open-circuit indirect calorimetry, and 2) observed activity to identify activity type. The nnet input variables included five accelerometer count distribution features and the lag-1 autocorrelation. The bias and root mean square error for the nnet MET model trained on the University of Massachusetts data and applied to the University of Tennessee sample were +0.32 and 1.90 METs, respectively. Seventy-seven percent of the activities were correctly classified as sedentary/light, moderate, or vigorous intensity. For activity type, household and locomotion activities were correctly classified by the nnet 98.1% and 89.5% of the time, respectively, and sport was correctly classified 23.7% of the time. This machine-learning technique operates reasonably well when applied to an independent sample. We propose the creation of an open-access activity dictionary, including accelerometer data from a broad array of activities, leading to further improvements in prediction accuracy for METs, activity intensity, and activity type.
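
    The input features named above (count-distribution features plus the lag-1 autocorrelation) are straightforward to compute. The sketch below assumes percentiles as the five distribution features, which is our guess at a reasonable choice rather than the paper's exact definition.

    ```python
    # Hedged sketch: feature extraction for one window of accelerometer counts,
    # producing inputs of the kind described for the nnet models.
    import numpy as np

    def nnet_features(counts: np.ndarray) -> np.ndarray:
        """Five count-distribution features plus the lag-1 autocorrelation."""
        percentiles = np.percentile(counts, [10, 25, 50, 75, 90])
        c = counts - counts.mean()
        lag1 = (c[:-1] @ c[1:]) / (c @ c)  # lag-1 autocorrelation
        return np.append(percentiles, lag1)

    window = np.random.default_rng(4).poisson(300, size=60)  # hypothetical 1-min window
    print(nnet_features(window))
    ```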

  16. Horizontal Contraction of Oceanic Lithosphere Tested Using Azimuths of Transform Faults

    NASA Astrophysics Data System (ADS)

    Gordon, R. G.; Mishra, J. K.

    2012-12-01

    A central hypothesis or approximation of plate tectonics is that the plates are rigid, which implies that oceanic lithosphere does not contract horizontally as it cools (hereinafter "no contraction"). An alternative hypothesis is that vertically averaged tensional thermal stress in the competent lithosphere is fully relieved by horizontal thermal contraction (hereinafter "full contraction"). These two hypotheses predict different azimuths for transform faults. We build on prior predictions of horizontal thermal contraction of oceanic lithosphere as a function of age to predict the bias induced in transform-fault azimuths by full contraction for 140 azimuths of transform faults that are globally distributed among 15 plate pairs. Predicted bias increases with the length of adjacent segments of mid-ocean ridges and depends on whether the adjacent ridges are stepped, crenellated, or a combination of the two. All else being equal, the bias decreases with the length of a transform fault and modestly decreases with increasing spreading rate. The value of the bias varies along a transform fault. To correct the observed transform-fault azimuths for the biases, we average the predicted values over the insonified portions of each transform fault. We find the bias to be as large as 2.5°, but it is more typically ≤ 1.0°. We test whether correcting for the predicted biases improves the fit to plate motion data. To do so, we determine the sum-squared normalized misfit for various values of γ, which we define to be the fractional multiple of the bias predicted for full contraction: γ = 1 corresponds to full contraction, while γ = 0 corresponds to no contraction. We find that the minimum in sum-squared normalized misfit is obtained for γ = 0.9 ± 0.4 (95% confidence limits), which excludes the hypothesis of no contraction but is consistent with the hypothesis of full contraction. Application of the correction reduces but does not eliminate the longstanding misfit between the azimuth of the Kane transform fault and those of the other North America-Nubia transform faults. We conclude that significant ridge-parallel horizontal thermal contraction occurs in young oceanic lithosphere and that it is accommodated by widening of transform-fault valleys, which biases transform-fault azimuths by up to 2.5°.

  17. Accuracy Improvement Capability of Advanced Projectile Based on Course Correction Fuze Concept

    PubMed Central

    Elsaadany, Ahmed; Wen-jun, Yi

    2014-01-01

    Improvement in terminal accuracy is an important objective for future artillery projectiles. It is often associated with range extension. Various concepts and modifications, such as the course correction fuze, have been proposed to correct the range and drift of artillery projectiles. Course correction fuze concepts could provide an attractive and cost-effective solution for munitions accuracy improvement. In this paper, trajectory correction has been obtained using two kinds of course correction modules: one devoted to range correction (drag ring brake) and the second devoted to drift correction (canard-based correction fuze). The course correction modules have been characterized by aerodynamic computations and flight dynamic investigations in order to analyze the effects of the projectile aerodynamic parameters on deflection. The simulation results show that the impact accuracy of a conventional projectile using these course correction modules can be improved. The drag ring brake is found to be highly capable of range correction. Deploying the drag brake in an early stage of the trajectory results in a large range correction. The deployment time can be predefined depending on the required range correction. On the other hand, the canard-based correction fuze is found to have a stronger effect on the projectile drift by modifying its roll rate. In addition, the canard extension induces a high-frequency incidence angle as the canards reciprocate with the roll motion. PMID:25097873

  19. Thermodynamic heuristics with case-based reasoning: combined insights for RNA pseudoknot secondary structure.

    PubMed

    Al-Khatib, Ra'ed M; Rashid, Nur'Aini Abdul; Abdullah, Rosni

    2011-08-01

    The secondary structure of RNA pseudoknots has been extensively inferred and scrutinized by computational approaches. Experimental methods for determining RNA structure are time consuming and tedious; therefore, predictive computational approaches are required. Predicting the most accurate and energy-stable pseudoknot RNA secondary structure has been proven to be an NP-hard problem. In this paper, a new RNA folding approach, termed MSeeker, is presented; it includes KnotSeeker (a heuristic method) and Mfold (a thermodynamic algorithm). The global optimization of this thermodynamic heuristic approach was further enhanced by using a case-based reasoning technique as a local optimization method. MSeeker is a proposed algorithm for predicting RNA pseudoknot structure from individual sequences, especially long ones. This research demonstrates that MSeeker improves the sensitivity and specificity of existing RNA pseudoknot structure predictions. The performance and structural results from this proposed method were evaluated against seven other state-of-the-art pseudoknot prediction methods. The MSeeker method had better sensitivity than the DotKnot, FlexStem, HotKnots, pknotsRG, ILM, NUPACK and pknotsRE methods, with 79% of the predicted pseudoknot base-pairs being correct.

  20. Predicting termination of atrial fibrillation based on the structure and quantification of the recurrence plot.

    PubMed

    Sun, Rongrong; Wang, Yuanyuan

    2008-11-01

    Predicting the spontaneous termination of atrial fibrillation (AF) leads not only to a better understanding of the mechanisms of the arrhythmia but also to improved treatment of sustained AF. A novel method is proposed to characterize AF based on the structure and quantification of the recurrence plot (RP) in order to predict the termination of AF. The RP of the electrocardiogram (ECG) signal is first obtained, and eleven features are extracted to characterize its three basic patterns. Then the sequential forward search (SFS) algorithm and the Davies-Bouldin criterion are utilized to select the feature subset that predicts AF termination most effectively. Finally, a multilayer perceptron (MLP) neural network is applied to predict AF termination. An AF database that includes one training set and two testing sets (A and B) of Holter ECG recordings is studied. Experimental results show that 97% of testing set A and 95% of testing set B are correctly classified. This demonstrates that the algorithm has the ability to predict the spontaneous termination of AF effectively.
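
    A recurrence plot itself is easy to construct from a time-delay embedding, and simple quantification measures follow directly from the binary matrix. The sketch below uses the standard textbook definition with arbitrary parameters, not the eleven features of this paper.

    ```python
    # Hedged sketch: build a recurrence plot from a time-delay embedding and
    # compute one basic quantification measure, the recurrence rate.
    import numpy as np

    def recurrence_plot(x: np.ndarray, dim: int = 3, delay: int = 2,
                        eps: float = 0.2) -> np.ndarray:
        """Binary recurrence matrix of a time-delay-embedded signal."""
        n = len(x) - (dim - 1) * delay
        emb = np.column_stack([x[i * delay : i * delay + n] for i in range(dim)])
        dist = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
        return (dist < eps).astype(int)

    signal = np.sin(np.linspace(0, 20 * np.pi, 500))  # stand-in for an ECG-derived series
    rp = recurrence_plot(signal)
    print(f"recurrence rate: {rp.mean():.3f}")        # fraction of recurrent point pairs
    ```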

  1. Probabilistic Forecasting of Coastal Morphodynamic Storm Response at Fire Island, New York

    NASA Astrophysics Data System (ADS)

    Wilson, K.; Adams, P. N.; Hapke, C. J.; Lentz, E. E.; Brenner, O.

    2013-12-01

    Site-specific probabilistic models of shoreline change are useful because they are derived from direct observations so that local factors, which greatly influence coastal response, are inherently considered by the model. Fire Island, a 50-km barrier island off Long Island, New York, is periodically subject to large storms, whose waves and storm surge dramatically alter beach morphology. Nor'Ida, which impacted the Fire Island coast in 2009, was one of the larger storms to occur in the early 2000s. In this study, we improve upon a Bayesian Network (BN) model informed with historical data to predict shoreline change from Nor'Ida. We present two BN models, referred to as 'original' model (BNo) and 'revised' model (BNr), designed to predict the most probable magnitude of net shoreline movement (NSM), as measured at 934 cross-shore transects, spanning 46 km. Both are informed with observational data (wave impact hours, shoreline and dune toe change rates, pre-storm beach width, and measured NSM) organized within five nodes, but the revised model contains a sixth node to represent the distribution of material added during an April 2009 nourishment project. We evaluate model success by examining the percentage of transects on which the model chooses the correct (observed) bin value of NSM. Comparisons of observed to model-predicted NSM show BNr has slightly higher predictive success over the total study area and significantly higher success at nourished locations. The BNo, which neglects anthropogenic modification history, correctly predicted the most probable NSM in 66.6% of transects, with ambiguous prediction at 12.7% of the locations. BNr, which incorporates anthropogenic modification history, resulted in 69.4% predictive accuracy and 13.9% ambiguity. However, across nourished transects, BNr reported 72.9% predictive success, while BNo reported 61.5% success. Further, at nourished transects, BNr reported higher ambiguity of 23.5% compared to 9.9% in BNo. These results demonstrate that BNr recognizes that nourished transects may behave differently from the expectation derived from historical data and therefore is more 'cautious' in its predictions at these locations. In contrast, BNo is more confident, but less accurate, demonstrating the risk of ignoring the influences of anthropogenic modification in a probabilistic model. Over the entire study region, both models produced greatest predictive accuracy for low retreat observations (BNo: 77.6%; BNr: 76.0%) and least success at predicting low advance observations, although BNr shows considerable improvement over BNo (39.4% vs. 28.6%, respectively). BNr also was significantly more accurate at predicting observations of no shoreline change (BNo: 56.2%; BNr: 68.93%). Both models were accurate for 60% of high advance observations, and reported high predictive success for high retreat observations (BNo: 69.1%; BNr: 67.6%), the scenario of greatest concern to coastal managers.

  2. Climatic extremes improve predictions of spatial patterns of tree species

    USGS Publications Warehouse

    Zimmermann, N.E.; Yoccoz, N.G.; Edwards, T.C.; Meier, E.S.; Thuiller, W.; Guisan, Antoine; Schmatz, D.R.; Pearman, P.B.

    2009-01-01

    Understanding niche evolution, dynamics, and the response of species to climate change requires knowledge of the determinants of the environmental niche and species range limits. Mean values of climatic variables are often used in such analyses. In contrast, the increasing frequency of climate extremes suggests the importance of understanding their additional influence on range limits. Here, we assess how measures representing climate extremes (i.e., interannual variability in climate parameters) explain and predict spatial patterns of 11 tree species in Switzerland. We find clear, although comparatively small, improvement (+20% in adjusted D², +8% and +3% in cross-validated True Skill Statistic and area under the receiver operating characteristic curve values) in models that use measures of extremes in addition to means. The primary effect of including information on climate extremes is a correction of local overprediction and underprediction. Our results demonstrate that measures of climate extremes are important for understanding the climatic limits of tree species and assessing species niche characteristics. The inclusion of climate variability will likely improve models of species range limits under future conditions, where changes in mean climate and increased variability are expected.

  3. Large-extent digital soil mapping approaches for total soil depth

    NASA Astrophysics Data System (ADS)

    Mulder, Titia; Lacoste, Marine; Saby, Nicolas P. A.; Arrouays, Dominique

    2015-04-01

    Total soil depth (SDt) plays a key role in supporting various ecosystem services and properties, including plant growth, water availability, and carbon stocks. Therefore, predictive mapping of SDt has been included as one of the deliverables of the GlobalSoilMap project. In this work, SDt was predicted for France following the directions of GlobalSoilMap, which requires modelling at 90 m resolution. The first method, referred to hereafter as DM, consisted of modelling the deterministic trend in SDt using data mining, followed by a bias correction and ordinary kriging of the residuals. Considering that the total surface area of France is about 540,000 km², the methods employed need to be able to deal with large data sets. Therefore, a second method, multi-resolution kriging (MrK) for large datasets, was implemented. This method consisted of modelling the deterministic trend with a linear model, followed by interpolation of the residuals. For both methods, the general trend was assumed to be explained by the biotic and abiotic environmental conditions, as described by the soil-landscape paradigm. Mapping accuracy was evaluated by internal validation and by concordance with previous soil maps. In addition, the prediction interval for DM and the confidence interval for MrK were determined. Finally, the opportunities and limitations of both approaches were evaluated. The results showed consistency in the mapped spatial patterns and good prediction of mean values. DM was better at predicting extreme values, due to the bias correction, and was more powerful in capturing the deterministic trend than the linear model of the MrK approach. However, MrK was found to be more straightforward and flexible in delivering spatially explicit uncertainty measures. The validation indicated that DM was more accurate than MrK. Improvements for DM may be expected by predicting soil depth classes. MrK shows potential for modelling beyond the country level, at high resolution. Large-extent digital soil mapping approaches for SDt may be improved by (1) taking into account SDt observations that are censored and (2) using high-resolution biotic and abiotic environmental data. The latter may improve the modelling of the soil-landscape interactions influencing soil pedogenesis. In conclusion, this work provided a robust and reproducible method (DM) for high-resolution soil property modelling in accordance with the GlobalSoilMap requirements, as well as an efficient alternative (MrK) for large-extent digital soil mapping.

  4. Updating the Skating Multistage Aerobic Test and Correction for V̇O2max Prediction Using a New Skating Economy Index in Elite Youth Ice Hockey Players.

    PubMed

    Allisse, Maxime; Bui, Hung Tien; Léger, Luc; Comtois, Alain-Steve; Leone, Mario

    2018-05-07

    Allisse, M, Bui, HT, Léger, L, Comtois, A-S, and Leone, M. Updating the skating multistage aerobic test and correction for V̇O2max prediction using a new skating economy index in elite youth ice hockey players. J Strength Cond Res XX(X): 000-000, 2018-A number of field tests, including the skating multistage aerobic test (SMAT), have been developed to predict V̇O2max in ice hockey players. The SMAT, like most field tests, assumes that participants who reach a given stage have the same oxygen uptake, which is not usually true. Thus, the objectives of this research are to update the V̇O2 values during the SMAT using a portable breath-by-breath metabolic analyzer and to propose a simple index of skating economy to improve the prediction of oxygen uptake. Twenty-six elite hockey players (age 15.8 ± 1.3 years) participated in this study. The oxygen uptake was assessed using a portable metabolic analyzer (K4b) during an on-ice maximal shuttle skate test. To develop an index of skating economy called the skating stride index (SSI), the number of skating strides was compiled for each stage of the test. The SMAT enabled the prediction of V̇O2max (ml·kg⁻¹·min⁻¹) from the maximal velocity (m·s⁻¹) and the SSI (skating strides·kg⁻¹) using the following regression equation: V̇O2max = (14.94 × maximal velocity) + (3.68 × SSI) - 24.98 (r = 0.95, SEE = 1.92). This research allowed for the update of the oxygen uptake values of the SMAT and proposed a simple measure of skating efficiency for a more accurate evaluation of V̇O2max in elite youth hockey players. By comparing the highest and lowest observed SSI scores in our sample, it was noted that the V̇O2 values can vary by up to 5 ml·kg⁻¹·min⁻¹. Our results suggest that skating economy should be included in the prediction of V̇O2max to improve prediction accuracy.
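
    The published equation can be applied directly; the sketch below wraps it in a function (the wrapper and example inputs are ours, not the authors').

    ```python
    # Hedged sketch: the record's regression for predicting V̇O2max
    # (ml·kg⁻¹·min⁻¹) from maximal skating velocity (m·s⁻¹) and the skating
    # stride index SSI (skating strides·kg⁻¹). Coefficients from the abstract.

    def predict_vo2max(max_velocity: float, ssi: float) -> float:
        """SMAT V̇O2max prediction (r = 0.95, SEE = 1.92)."""
        return 14.94 * max_velocity + 3.68 * ssi - 24.98

    # Hypothetical test result: 5.2 m/s maximal velocity, SSI of 0.9
    print(f"{predict_vo2max(5.2, 0.9):.1f} ml/kg/min")
    ```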

  5. Prostate Health Index improves multivariable risk prediction of aggressive prostate cancer.

    PubMed

    Loeb, Stacy; Shin, Sanghyuk S; Broyles, Dennis L; Wei, John T; Sanda, Martin; Klee, George; Partin, Alan W; Sokoll, Lori; Chan, Daniel W; Bangma, Chris H; van Schaik, Ron H N; Slawin, Kevin M; Marks, Leonard S; Catalona, William J

    2017-07-01

    To examine the use of the Prostate Health Index (PHI) as a continuous variable in multivariable risk assessment for aggressive prostate cancer in a large multicentre US study. The study population included 728 men, with prostate-specific antigen (PSA) levels of 2-10 ng/mL and a negative digital rectal examination, enrolled in a prospective, multi-site early detection trial. The primary endpoint was aggressive prostate cancer, defined as biopsy Gleason score ≥7. First, we evaluated whether the addition of PHI improves the performance of currently available risk calculators (the Prostate Cancer Prevention Trial [PCPT] and European Randomised Study of Screening for Prostate Cancer [ERSPC] risk calculators). We also designed and internally validated a new PHI-based multivariable predictive model, and created a nomogram. Of 728 men undergoing biopsy, 118 (16.2%) had aggressive prostate cancer. The PHI predicted the risk of aggressive prostate cancer across the spectrum of values. Adding PHI significantly improved the predictive accuracy of the PCPT and ERSPC risk calculators for aggressive disease. A new model was created using age, previous biopsy, prostate volume, PSA and PHI, with an area under the curve of 0.746. The bootstrap-corrected model showed good calibration with observed risk for aggressive prostate cancer and had net benefit on decision-curve analysis. Using PHI as part of multivariable risk assessment leads to a significant improvement in the detection of aggressive prostate cancer, potentially reducing harms from unnecessary prostate biopsy and overdiagnosis. © 2016 The Authors BJU International © 2016 BJU International Published by John Wiley & Sons Ltd.

  6. Further Improvements to Linear Mixed Models for Genome-Wide Association Studies

    PubMed Central

    Widmer, Christian; Lippert, Christoph; Weissbrod, Omer; Fusi, Nicolo; Kadie, Carl; Davidson, Robert; Listgarten, Jennifer; Heckerman, David

    2014-01-01

    We examine improvements to the linear mixed model (LMM) that better correct for population structure and family relatedness in genome-wide association studies (GWAS). LMMs rely on the estimation of a genetic similarity matrix (GSM), which encodes the pairwise similarity between every two individuals in a cohort. These similarities are estimated from single nucleotide polymorphisms (SNPs) or other genetic variants. Traditionally, all available SNPs are used to estimate the GSM. In empirical studies across a wide range of synthetic and real data, we find that modifications to this approach improve GWAS performance as measured by type I error control and power. Specifically, when only population structure is present, a GSM constructed from SNPs that well predict the phenotype in combination with principal components as covariates controls type I error and yields more power than the traditional LMM. In any setting, with or without population structure or family relatedness, a GSM consisting of a mixture of two component GSMs, one constructed from all SNPs and another constructed from SNPs that well predict the phenotype again controls type I error and yields more power than the traditional LMM. Software implementing these improvements and the experimental comparisons are available at http://microsoft.com/science. PMID:25387525

  7. Further Improvements to Linear Mixed Models for Genome-Wide Association Studies

    NASA Astrophysics Data System (ADS)

    Widmer, Christian; Lippert, Christoph; Weissbrod, Omer; Fusi, Nicolo; Kadie, Carl; Davidson, Robert; Listgarten, Jennifer; Heckerman, David

    2014-11-01

    We examine improvements to the linear mixed model (LMM) that better correct for population structure and family relatedness in genome-wide association studies (GWAS). LMMs rely on the estimation of a genetic similarity matrix (GSM), which encodes the pairwise similarity between every two individuals in a cohort. These similarities are estimated from single nucleotide polymorphisms (SNPs) or other genetic variants. Traditionally, all available SNPs are used to estimate the GSM. In empirical studies across a wide range of synthetic and real data, we find that modifications to this approach improve GWAS performance as measured by type I error control and power. Specifically, when only population structure is present, a GSM constructed from SNPs that well predict the phenotype in combination with principal components as covariates controls type I error and yields more power than the traditional LMM. In any setting, with or without population structure or family relatedness, a GSM consisting of a mixture of two component GSMs, one constructed from all SNPs and another constructed from SNPs that well predict the phenotype again controls type I error and yields more power than the traditional LMM. Software implementing these improvements and the experimental comparisons are available at http://microsoft.com/science.

  8. Further improvements to linear mixed models for genome-wide association studies.

    PubMed

    Widmer, Christian; Lippert, Christoph; Weissbrod, Omer; Fusi, Nicolo; Kadie, Carl; Davidson, Robert; Listgarten, Jennifer; Heckerman, David

    2014-11-12

    We examine improvements to the linear mixed model (LMM) that better correct for population structure and family relatedness in genome-wide association studies (GWAS). LMMs rely on the estimation of a genetic similarity matrix (GSM), which encodes the pairwise similarity between every two individuals in a cohort. These similarities are estimated from single nucleotide polymorphisms (SNPs) or other genetic variants. Traditionally, all available SNPs are used to estimate the GSM. In empirical studies across a wide range of synthetic and real data, we find that modifications to this approach improve GWAS performance as measured by type I error control and power. Specifically, when only population structure is present, a GSM constructed from SNPs that well predict the phenotype in combination with principal components as covariates controls type I error and yields more power than the traditional LMM. In any setting, with or without population structure or family relatedness, a GSM consisting of a mixture of two component GSMs, one constructed from all SNPs and another constructed from SNPs that well predict the phenotype again controls type I error and yields more power than the traditional LMM. Software implementing these improvements and the experimental comparisons are available at http://microsoft.com/science.
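
    The GSM construction and the two-component mixture described in these records can be sketched as follows. This is a minimal illustration under an assumed 0/1/2 genotype coding; the selection of phenotype-predictive SNPs (done in the authors' software) is taken as given here:

      import numpy as np

      def gsm(snps):
          # Genetic similarity matrix from an (individuals x SNPs) 0/1/2 matrix
          std = snps.std(axis=0)
          std[std == 0] = 1.0                      # guard monomorphic SNPs
          Z = (snps - snps.mean(axis=0)) / std     # standardize each SNP
          return Z @ Z.T / Z.shape[1]

      def mixture_gsm(snps_all, snps_selected, a):
          # Two-component GSM: weight a on the all-SNP GSM and (1 - a) on the
          # GSM built only from SNPs that well predict the phenotype
          return a * gsm(snps_all) + (1.0 - a) * gsm(snps_selected)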

  9. The use of the logistic model in space motion sickness prediction

    NASA Technical Reports Server (NTRS)

    Lin, Karl K.; Reschke, Millard F.

    1987-01-01

    The one-equation and the two-equation logistic models were used to predict subjects' susceptibility to motion sickness in KC-135 parabolic flights using data from other ground-based motion sickness tests. The results show that the logistic models correctly predicted substantially more cases (an average of 13 percent more) in the data subset used for model building. Overall, the logistic models achieved 53 to 65 percent correct predictions for the three endpoint parameters, whereas the Bayes linear discriminant procedure ranged from 48 to 65 percent correct for the cross-validation sample.
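
    For reference, a one-equation logistic susceptibility model has the form sketched below; the coefficients shown are placeholders, not those fitted in the study:

      import numpy as np

      def logistic_susceptibility(score, b0=-1.0, b1=0.8):
          # One-equation logistic model: P = 1 / (1 + exp(-(b0 + b1 * score)));
          # score is a ground-based test result, b0 and b1 are placeholders
          return 1.0 / (1.0 + np.exp(-(b0 + b1 * np.asarray(score))))

      # classify as susceptible when the predicted probability exceeds 0.5
      susceptible = logistic_susceptibility([0.2, 1.5, 3.0]) > 0.5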

  10. Adherence to the gluten-free diet can achieve the therapeutic goals in almost all patients with coeliac disease: A 5-year longitudinal study from diagnosis.

    PubMed

    Newnham, Evan D; Shepherd, Susan J; Strauss, Boyd J; Hosking, Patrick; Gibson, Peter R

    2016-02-01

    Key aims of treatment of coeliac disease are to heal the intestinal mucosa and correct nutritional abnormalities. We aim to determine prospectively the degree of success and the time course of achieving those goals with a gluten-free diet. Ninety-nine patients were enrolled at diagnosis and taught the diet. The first 52 were reassessed at 1 year and 46 at 5 years, with 25 assessed at all three time points regarding dietary compliance (dietitian-assessed), coeliac serology, bone mineral density and body composition analysis by dual energy X-ray absorptiometry, and intestinal histology. Mean age (range) was 40 (18-71) years and 48 (76%) were female. Dietary compliance was very good to excellent in all but one. Tissue transglutaminase IgA was persistently elevated in 44% at 1 year and 30% at 5 years and was poorly predictive of mucosal disease. Rates of mucosal remission (Marsh 0) and response (Marsh 0/1) were 37% and 54% at 1 year, and 50% and 85% at 5 years, respectively. Fat mass increased significantly over the first year in those with normal/reduced body mass index. Lean body mass indices improved more slowly, irrespective of status at diagnosis, with significant improvement at 5 years. Bone mass increased only in those with osteopenia or osteoporosis, mostly in year 1. Dietary compliance is associated with a high chance of healing the intestinal lesion and correction of specific body compositional abnormalities. The time course differed, with body fat improving within 1 year, and correction of the mucosal lesion and improvement in lean mass and bone mass taking longer. © 2015 Journal of Gastroenterology and Hepatology Foundation and John Wiley & Sons Australia, Ltd.

  11. A review of propeller noise prediction methodology: 1919-1994

    NASA Technical Reports Server (NTRS)

    Metzger, F. Bruce

    1995-01-01

    This report summarizes a review of the literature on propeller noise prediction methods. The review is divided into six sections: (1) early methods; (2) more recent methods based on earlier theory; (3) more recent methods based on the Acoustic Analogy; (4) more recent methods based on Computational Acoustics; (5) empirical methods; and (6) broadband methods. The report concludes that a large number of noise prediction procedures are available, varying markedly in complexity. Deficiencies in the accuracy of methods may in many cases be related not to the methods themselves, but to the accuracy and detail of the aerodynamic inputs used to calculate noise. The steps recommended in the report to provide accurate and easy-to-use prediction methods are: (1) identify reliable test data; (2) define and conduct test programs to fill gaps in the existing data base; (3) identify the most promising prediction methods; (4) evaluate promising prediction methods against the data base; (5) identify and correct the weaknesses in the prediction methods, including lack of user friendliness, and include features now available only in research codes; (6) confirm the accuracy of improved prediction methods against the data base; and (7) make the methods widely available and provide training in their use.

  12. Hypothesis, Prediction, and Conclusion: Using Nature of Science Terminology Correctly

    ERIC Educational Resources Information Center

    Eastwell, Peter

    2012-01-01

    This paper defines the terms "hypothesis," "prediction," and "conclusion" and shows how to use the terms correctly in scientific investigations in both the school and science education research contexts. The scientific method, or hypothetico-deductive (HD) approach, is described and it is argued that an understanding of the scientific method,…

  13. Comparison of full field and anomaly initialisation for decadal climate prediction: towards an optimal consistency between the ocean and sea-ice anomaly initialisation state

    NASA Astrophysics Data System (ADS)

    Volpi, Danila; Guemas, Virginie; Doblas-Reyes, Francisco J.

    2017-08-01

    Decadal prediction exploits sources of predictability from both the internal variability, through the initialisation of the climate model from observational estimates, and the external radiative forcings. When a model is initialised with the observed state at the initial time step (Full Field Initialisation, FFI), the forecast run drifts towards the biased model climate. Distinguishing between the climate signal to be predicted and the model drift is a challenging task, because the application of a posteriori bias correction risks removing part of the variability signal. The anomaly initialisation (AI) technique aims at addressing the drift issue by answering the following question: if the model is allowed to start close to its own attractor (i.e. its biased world), but the phase of the simulated variability is constrained toward the contemporaneous observed one at the initialisation time, does the prediction skill improve? The relative merits of the FFI and AI techniques, applied respectively to the ocean component alone and to the ocean and sea-ice components simultaneously in the EC-Earth global coupled model, are assessed. For both strategies the initialised hindcasts show better skill than historical simulations for ocean heat content and the AMOC over the first two forecast years, and for sea ice and the PDO over the first forecast year, while for the AMO the improvements are statistically significant for the first two forecast years. The AI in the ocean and sea-ice components significantly improves the skill of the Arctic sea surface temperature over the FFI.
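
    The contrast between the two initialisation strategies reduces to a simple recombination of climatologies. A schematic sketch (array names are ours):

      def full_field_init(obs_state):
          # FFI: start from the observed state itself; the run then
          # drifts toward the model's own biased climate
          return obs_state

      def anomaly_init(obs_state, obs_clim, model_clim):
          # AI: stay near the model attractor but phase the variability
          # with the observed anomaly at initialisation time
          return model_clim + (obs_state - obs_clim)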

  14. Accurate Prediction of Contact Numbers for Multi-Spanning Helical Membrane Proteins

    PubMed Central

    Li, Bian; Mendenhall, Jeffrey; Nguyen, Elizabeth Dong; Weiner, Brian E.; Fischer, Axel W.; Meiler, Jens

    2017-01-01

    Prediction of the three-dimensional (3D) structures of proteins by computational methods is acknowledged as an unsolved problem. Accurate prediction of important structural characteristics such as contact number is expected to accelerate the otherwise slow progress being made in the prediction of 3D structure of proteins. Here, we present a dropout neural network-based method, TMH-Expo, for predicting the contact number of transmembrane helix (TMH) residues from sequence. Neuronal dropout is a strategy where certain neurons of the network are excluded from back-propagation to prevent co-adaptation of hidden-layer neurons. By using neuronal dropout, overfitting was significantly reduced and performance was noticeably improved. For multi-spanning helical membrane proteins, TMH-Expo achieved a remarkable Pearson correlation coefficient of 0.69 between predicted and experimental values and a mean absolute error of only 1.68. In addition, among those membrane protein–membrane protein interface residues, 76.8% were correctly predicted. Mapping of predicted contact numbers onto structures indicates that contact numbers predicted by TMH-Expo reflect the exposure patterns of TMHs and reveal membrane protein–membrane protein interfaces, reinforcing the potential of predicted contact numbers to be used as restraints for 3D structure prediction and protein–protein docking. TMH-Expo can be accessed via a Web server at www.meilerlab.org. PMID:26804342
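
    Neuronal dropout, as described above, amounts to randomly zeroing hidden units during training. A minimal inverted-dropout forward pass, independent of the authors' TMH-Expo implementation:

      import numpy as np

      rng = np.random.default_rng(0)

      def dense_relu_dropout(x, W, b, p_drop=0.5, training=True):
          # One hidden layer with ReLU and inverted dropout: during training
          # each hidden neuron is dropped with probability p_drop and the
          # survivors are rescaled, so prediction needs no extra scaling
          h = np.maximum(0.0, x @ W + b)
          if training:
              mask = rng.random(h.shape) >= p_drop
              h = h * mask / (1.0 - p_drop)
          return h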

  15. Using discharge data to reduce structural deficits in a hydrological model with a Bayesian inference approach and the implications for the prediction of critical source areas

    NASA Astrophysics Data System (ADS)

    Frey, M. P.; Stamm, C.; Schneider, M. K.; Reichert, P.

    2011-12-01

    A distributed hydrological model was used to simulate the distribution of fast runoff formation as a proxy for critical source areas for herbicide pollution in a small agricultural catchment in Switzerland. We tested to what degree predictions based on prior knowledge without local measurements could be improved by conditioning on observed discharge. This learning process consisted of five steps: For the prior prediction (step 1), knowledge of the model parameters was coarse and predictions were fairly uncertain. In the second step, discharge data were used to update the prior parameter distribution. Effects of uncertainty in input data and model structure were accounted for by an autoregressive error model. This step decreased the width of the marginal distributions of parameters describing the lower boundary (percolation rates) but hardly affected soil hydraulic parameters. Residual analysis (step 3) revealed model structure deficits. We modified the model, and in the subsequent Bayesian updating (step 4) the widths of the posterior marginal distributions were reduced for most parameters compared to those of the prior. This incremental procedure led to a strong reduction in the uncertainty of the spatial prediction. Thus, despite using only spatially integrated data (discharge), the improved model structure can be expected to improve the spatially distributed predictions as well. The fifth step consisted of a test with independent spatial data on herbicide losses and revealed ambiguous results. The comparison depended critically on the ratio of event to pre-event water that was discharged. This ratio cannot be estimated from hydrological data alone. The results demonstrate that the value of local data is strongly dependent on a correct model structure. An iterative procedure of Bayesian updating, model testing, and model modification is suggested.

  16. Seasonal predictions of equatorial Atlantic SST in a low-resolution CGCM with surface heat flux correction

    NASA Astrophysics Data System (ADS)

    Dippe, Tina; Greatbatch, Richard; Ding, Hui

    2016-04-01

    The dominant mode of interannual variability in tropical Atlantic sea surface temperatures (SSTs) is the Atlantic Niño or Zonal Mode. Akin to the El Niño-Southern Oscillation in the Pacific sector, it is able to impact the climate both of the adjacent equatorial African continent and of remote regions. Due to heavy biases in the mean state climate of the equatorial-to-subtropical Atlantic, however, most state-of-the-art coupled global climate models (CGCMs) are unable to realistically simulate equatorial Atlantic variability. In this study, the Kiel Climate Model (KCM) is used to investigate the impact of a simple bias alleviation technique on the predictability of equatorial Atlantic SSTs. Two sets of seasonal forecasting experiments are performed: an experiment using the standard KCM (STD), and an experiment with additional surface heat flux correction (FLX) that efficiently removes the SST bias from the simulations. Initial conditions for both experiments are generated by the KCM run in partially coupled mode, a simple assimilation technique that forces the KCM with observed wind stress anomalies and preserves SST as a fully prognostic variable. Seasonal predictions for both sets of experiments are run four times yearly for 1981-2012. Results: Heat flux correction substantially improves the simulated variability in the initialization runs for boreal summer and fall (June-October). In boreal spring (March-May), however, the initialization runs of neither the STD nor the FLX experiment are able to capture the observed variability. FLX predictions show no consistent enhancement of skill relative to the predictions of the STD experiment over the course of the year. The skill of persistence forecasts is hardly beaten by either of the two experiments in any season, limiting the usefulness of the few forecasts that show significant skill. However, FLX forecasts initialized in May recover skill in July and August, the peak season of the Atlantic Niño (anomaly correlation coefficients of about 0.3). Further study is necessary to determine the mechanism that drives this potentially useful recovery.

  17. Sexing adult black-legged kittiwakes by DNA, behavior, and morphology

    USGS Publications Warehouse

    Jodice, P.G.R.; Lanctot, Richard B.; Gill, V.A.; Roby, D.D.; Hatch, Shyla A.

    2000-01-01

    We sexed adult Black-legged Kittiwakes (Rissa tridactyla) using DNA-based genetic techniques, behavior, and morphology, and compared the results from these techniques. Genetic and morphology data were collected on 605 breeding kittiwakes, and sex-specific behaviors were recorded for a sub-sample of 285 of these individuals. We compared sex classification based on both genetic and behavioral techniques for this sub-sample to assess the accuracy of the genetic technique. DNA-based techniques correctly sexed 97.2% of this sub-sample, and sex-specific behaviors 96.5%. We used the corrected genetic classifications from this sub-sample and the genetic classifications for the remaining birds, under the assumption they were correct, to develop predictive morphometric discriminant function models for all 605 birds. These models accurately predicted the sex of 73-96% of individuals examined, depending on the sample of birds used and the characters included. The most accurate single measurement for determining sex was length of head plus bill, which correctly classified 88% of individuals tested. When both members of a pair were measured, classification levels improved and approached the accuracy of both behavioral observations and genetic analyses. Morphometric techniques were only slightly less accurate than genetic techniques but were easier to implement in the field and less costly. Behavioral observations, while highly accurate, required that birds be easily observable during the breeding season and that birds be identifiable. As such, sex-specific behaviors may best be applied as a confirmation of sex for previously marked birds. All three techniques thus have the potential to be highly accurate, and the selection of one or more will depend on the circumstances of any particular field study.
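
    A morphometric discriminant function of the kind described is straightforward to reproduce with standard tools. A sketch using scikit-learn; the measurements and labels below are illustrative, not the study's data:

      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

      # Columns: head-plus-bill length (mm), wing chord (mm), body mass (g);
      # values and genetic-sex labels are made up for illustration
      X = np.array([[92.1, 310.0, 425.0],
                    [87.6, 301.0, 360.0],
                    [93.4, 314.0, 440.0],
                    [88.2, 303.0, 370.0]])
      y = np.array(["male", "female", "male", "female"])

      lda = LinearDiscriminantAnalysis().fit(X, y)
      print(lda.predict([[90.5, 308.0, 400.0]]))   # predicted sex for a new bird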

  18. Do Current Recommendations for Upper Instrumented Vertebra Predict Shoulder Imbalance? An Attempted Validation of Level Selection for Adolescent Idiopathic Scoliosis.

    PubMed

    Bjerke, Benjamin T; Cheung, Zoe B; Shifflett, Grant D; Iyer, Sravisht; Derman, Peter B; Cunningham, Matthew E

    2015-10-01

    Shoulder balance in adolescent idiopathic scoliosis (AIS) patients is associated with patient satisfaction and self-image. However, few validated systems exist for selecting the upper instrumented vertebra (UIV) to achieve post-surgical shoulder balance. The purpose of this study was to examine existing UIV selection criteria and correlate them with post-surgical shoulder balance in AIS patients. Patients who underwent spinal fusion at age 10-18 years for AIS over a 6-year period were reviewed. All patients with a minimum of 1 year of radiographic follow-up were included. Imbalance was defined as a radiographic shoulder height |RSH| ≥ 15 mm at latest follow-up. Three UIV selection methods were considered: Lenke, Ilharreborde, and Trobisch. A recommended UIV was determined using each method from pre-surgical radiographs. The recommended UIV for each method was compared to the UIV actually instrumented; concordance between these levels was defined as "Correct" UIV selection, and discordance as "Incorrect" selection. One hundred seventy-one patients were included, with 2.3 ± 1.1 years of follow-up. For all methods, "Correct" UIV selection resulted in more shoulder imbalance than "Incorrect" UIV selection. Overall shoulder imbalance incidence improved from 31.0% (53/171) to 15.2% (26/171). The incidence of new shoulder imbalance in patients with previously level shoulders was 8.8%. We could not identify a set of UIV selection criteria that accurately predicted post-surgical shoulder balance. Further validated measures are needed in this area. The complexity of proximal thoracic curve correction is underscored in a case example, where shoulder imbalance occurred despite "Correct" UIV selection by all methods.

  19. Lipid correction model of carbon stable isotopes for a cosmopolitan predator, spiny dogfish Squalus acanthias.

    PubMed

    Reum, J C P

    2011-12-01

    Three lipid correction models were evaluated for liver and white dorsal muscle from Squalus acanthias. For muscle, all three models performed well, based on the Akaike Information Criterion value corrected for small sample sizes (AICc), and predicted similar lipid corrections to δ¹³C that were up to 2.8 ‰ higher than those predicted using previously published models based on multispecies data. For liver, which possessed higher bulk C:N values compared to that of white muscle, all three models performed poorly, and lipid-corrected δ¹³C values were best approximated by simply adding 5.74 ‰ to bulk δ¹³C values. © 2011 The Author. Journal of Fish Biology © 2011 The Fisheries Society of the British Isles.
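
    For liver, the reported correction is a constant offset, which is trivial to apply; the muscle models depend on fitted parameters not quoted in the abstract, so only the liver rule is sketched:

      def lipid_correct_liver_d13c(d13c_bulk):
          # Lipid-corrected δ13C for S. acanthias liver: the abstract reports
          # that adding 5.74 ‰ to the bulk value outperformed the fitted models
          return d13c_bulk + 5.74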

  20. The impact of missing trauma data on predicting massive transfusion

    PubMed Central

    Trickey, Amber W.; Fox, Erin E.; del Junco, Deborah J.; Ning, Jing; Holcomb, John B.; Brasel, Karen J.; Cohen, Mitchell J.; Schreiber, Martin A.; Bulger, Eileen M.; Phelan, Herb A.; Alarcon, Louis H.; Myers, John G.; Muskat, Peter; Cotton, Bryan A.; Wade, Charles E.; Rahbar, Mohammad H.

    2013-01-01

    INTRODUCTION Missing data are inherent in clinical research and may be especially problematic for trauma studies. This study describes a sensitivity analysis to evaluate the impact of missing data on clinical risk prediction algorithms. Three blood transfusion prediction models were evaluated utilizing an observational trauma dataset with valid missing data. METHODS The PRospective Observational Multi-center Major Trauma Transfusion (PROMMTT) study included patients requiring ≥ 1 unit of red blood cells (RBC) at 10 participating U.S. Level I trauma centers from July 2009 – October 2010. Physiologic, laboratory, and treatment data were collected prospectively up to 24h after hospital admission. Subjects who received ≥ 10 RBC units within 24h of admission were classified as massive transfusion (MT) patients. Correct classification percentages for three MT prediction models were evaluated using complete case analysis and multiple imputation. A sensitivity analysis for missing data was conducted to determine the upper and lower bounds for correct classification percentages. RESULTS PROMMTT enrolled 1,245 subjects. MT was received by 297 patients (24%). Missing data percentages ranged from 2.2% (heart rate) to 45% (respiratory rate). Proportions of complete cases utilized in the MT prediction models ranged from 41% to 88%. All models demonstrated similar correct classification percentages using complete case analysis and multiple imputation. In the sensitivity analysis, the upper-lower bound ranges of correct classification per model were 4%, 10%, and 12%. Predictive accuracy for all models using PROMMTT data was lower than reported in the original datasets. CONCLUSIONS Evaluating the accuracy of clinical prediction models with missing data can be misleading, especially with many predictor variables and moderate levels of missingness per variable. The proposed sensitivity analysis describes the influence of missing data on risk prediction algorithms. Reporting upper/lower bounds for percent correct classification may be more informative than multiple imputation, which provided similar results to complete case analysis in this study. PMID:23778514
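
    The upper/lower-bound idea can be made concrete: re-impute the missing predictor values at plausible extremes and recompute the percent correctly classified. A schematic sketch, not the PROMMTT analysis code:

      import numpy as np

      def classification_bounds(X, y, predict_mt, fills):
          # For each candidate fill-in of missing predictors (np.nan entries),
          # recompute the percent correctly classified; the min/max over the
          # fills bound the possible impact of missingness. predict_mt maps a
          # complete predictor matrix to 0/1 massive-transfusion predictions;
          # each fill may be a scalar or a per-column vector of extremes.
          rates = [np.mean(predict_mt(np.where(np.isnan(X), f, X)) == y)
                   for f in fills]
          return min(rates), max(rates)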

  1. Improving Flood Prediction By the Assimilation of Satellite Soil Moisture in Poorly Monitored Catchments.

    NASA Astrophysics Data System (ADS)

    Alvarez-Garreton, C. D.; Ryu, D.; Western, A. W.; Crow, W. T.; Su, C. H.; Robertson, D. E.

    2014-12-01

    Flood prediction in poorly monitored catchments is among the greatest challenges faced by hydrologists. To address this challenge, an increasing number of studies in the last decade have explored methods to integrate various existing observations from the ground and from satellites. One approach in particular is the assimilation of satellite soil moisture (SM-DA) into rainfall-runoff models. The rationale is that satellite soil moisture (SSM) can be used to correct model soil water states, enabling more accurate prediction of catchment response to precipitation and thus better streamflow. However, there is still no consensus on the most effective SM-DA scheme and how this might depend on catchment scale, climate characteristics, runoff mechanisms, the model and SSM products used, etc. In this work, an operational SM-DA scheme was set up in the poorly monitored, large (>40,000 km2), semi-arid Warrego catchment situated in eastern Australia. We assimilated passive and active SSM products into the probability distributed model (PDM) using an ensemble Kalman filter. We explored factors influencing the SM-DA framework, including relatively new techniques to remove model-observation bias, estimate observation errors and represent model errors. Furthermore, we explored the advantages of accounting for the spatial distribution of forcing and channel routing processes within the catchment by implementing and comparing lumped and semi-distributed model setups. Flood prediction is improved by SM-DA, with a 30% reduction of the average root-mean-squared difference of the ensemble prediction, a 20% reduction of the false alarm ratio and a 40% increase of the ensemble mean Nash-Sutcliffe efficiency. SM-DA skill does not significantly change with different observation error assumptions, but the skill strongly depends on the observational bias correction technique used, and more importantly, on the performance of the open-loop model before assimilation. Our findings imply that proper pre-processing of SSM is important for the efficacy of the SM-DA and that assimilation performance is critically affected by the quality of model calibration. We therefore recommend focusing efforts on these two factors, while further evaluating the trade-offs between model complexity and data availability.
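
    The core of such an SM-DA scheme is the ensemble Kalman filter analysis step applied to the model soil-water states after the satellite product has been bias-corrected. A minimal stochastic-EnKF sketch (shapes kept generic, not the authors' code):

      import numpy as np

      def enkf_update(ens, obs, obs_var, H, rng=np.random.default_rng(0)):
          # Stochastic ensemble Kalman filter analysis step.
          # ens: (n_state, n_members) soil-water states; obs: bias-corrected
          # satellite soil moisture; H: observation operator (n_obs, n_state)
          obs = np.asarray(obs, float)
          n = ens.shape[1]
          A = ens - ens.mean(axis=1, keepdims=True)        # ensemble anomalies
          HA = H @ A
          K = (A @ HA.T) @ np.linalg.inv(
              HA @ HA.T + (n - 1) * obs_var * np.eye(len(obs)))  # Kalman gain
          # perturb the observation per member, then nudge each member
          pert = obs[:, None] + np.sqrt(obs_var) * rng.standard_normal((len(obs), n))
          return ens + K @ (pert - H @ ens)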

  2. Professionals' and laypersons' appreciation of various options for Class III surgical correction.

    PubMed

    Fabré, M; Mossaz, C; Christou, P; Kiliaridis, S

    2010-08-01

    The objectives of this study were to evaluate the assessments of maxillofacial surgeons, orthodontists, and laypersons on the predicted aesthetic outcome of various surgical options in Class III correction and the associations between certain initial cephalometric values and the judges' preferred option. Pre-surgical lateral headfilms and coloured profile photographs of 18 skeletal Class III Caucasian adult patients (10 males and 8 females) with a mean age of 24.5 years were used. The headfilms were hand traced and digitized. Conventional cephalometric analysis was performed. Computerized predictions of three surgical options, mandibular setback, Le Fort I advancement, and bimaxillary surgery, were made. For each case, the pre-surgical profile photograph with the three predictions was presented on a printed page. The questionnaire was sent to 51 maxillofacial surgeons (response rate 45.1 per cent), 78 orthodontists (response rate 71.8 per cent), and 61 laypersons (response rate 100 per cent) to aesthetically evaluate the pre-surgical photographs and the surgical predictions by placing a mark along a 10-graded visual analogue scale (VAS) using a standard profile for calibration. Confidence interval was calculated for each patient. An independent samples t-test was used to detect initial cephalometric values associated with the judges' preferred option and analysis of variance/Tukey's honestly significant differences to evaluate differences between judges. Intra-observer reliability was assessed with a paired t-test. All treatment predictions led to improved scoring of facial aesthetics with the exception of the setback option for three patients. For 14 patients, general agreement for the preferred option existed between the three groups of judges. Laypersons tended to give lower improvement scores than professionals. Overjet, nasofacial, and nasomental angles were important in decision making between the mandibular setback and Le Fort I options (the more negative the overjet, the larger the nasofacial angle, the smaller the nasomental angle, the greater the preference for the Le Fort I option). Wits appraisal seemed to be important in decision making between the mandibular setback and bimaxillary options (the more negative the Wits appraisal, the greater the preference for the latter option).

  3. Fiducial marker-based correction for involuntary motion in weight-bearing C-arm CT scanning of knees. II. Experiment.

    PubMed

    Choi, Jang-Hwan; Maier, Andreas; Keil, Andreas; Pal, Saikat; McWalter, Emily J; Beaupré, Gary S; Gold, Garry E; Fahrig, Rebecca

    2014-06-01

    A C-arm CT system has been shown to be capable of scanning a single cadaver leg under loaded conditions by virtue of its highly flexible acquisition trajectories. In Part I of this study, using the 4D XCAT-based numerical simulation, the authors predicted that the involuntary motion in the lower body of subjects in weight-bearing positions would seriously degrade image quality and the authors suggested three motion compensation methods by which the reconstructions could be corrected to provide diagnostic image quality. Here, the authors demonstrate that a flat-panel angiography system is appropriate for scanning both legs of subjects in vivo under weight-bearing conditions and further evaluate the three motion-correction algorithms using in vivo data. The geometry of a C-arm CT system for a horizontal scan trajectory was calibrated using the PDS-2 phantom. The authors acquired images of two healthy volunteers while lying supine on a table, standing, and squatting at several knee flexion angles. In order to identify the involuntary motion of the lower body, nine 1-mm-diameter tantalum fiducial markers were attached around the knee. The static mean marker position in 3D, a reference for motion compensation, was estimated by back-projecting detected markers in multiple projections using calibrated projection matrices and identifying the intersection points in 3D of the back-projected rays. Motion was corrected using three different methods (described in detail previously): (1) 2D projection shifting, (2) 2D deformable projection warping, and (3) 3D rigid body warping. For quantitative image quality analysis, SSIM indices for the three methods were compared using the supine data as a ground truth. A 2D Euclidean distance-based metric of subjects' motion ranged from 0.85 mm (±0.49 mm) to 3.82 mm (±2.91 mm) (corresponding to 2.76 to 12.41 pixels) resulting in severe motion artifacts in 3D reconstructions. Shifting in 2D, 2D warping, and 3D warping improved the SSIM in the central slice by 20.22%, 16.83%, and 25.77% in the data with the largest motion among the five datasets (SCAN5); improvement in off-center slices was 18.94%, 29.14%, and 36.08%, respectively. The authors showed that C-arm CT control can be implemented for nonstandard horizontal trajectories which enabled us to scan and successfully reconstruct both legs of volunteers in weight-bearing positions. As predicted using theoretical models, the proposed motion correction methods improved image quality by reducing motion artifacts in reconstructions; 3D warping performed better than the 2D methods, especially in off-center slices.
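
    The static marker reference described here is essentially a least-squares triangulation: each detected 2D marker defines a back-projected ray, and the 3D point closest to all rays is taken as the intersection. A generic sketch of that step (not the calibrated C-arm pipeline):

      import numpy as np

      def nearest_point_to_rays(origins, directions):
          # Least-squares intersection of back-projected rays: minimizes
          # sum_i ||(I - d_i d_i^T)(p - o_i)||^2 over 3D points p, the
          # standard closed form for triangulating a fiducial marker
          A = np.zeros((3, 3))
          b = np.zeros(3)
          for o, d in zip(origins, directions):
              d = d / np.linalg.norm(d)
              M = np.eye(3) - np.outer(d, d)   # projector orthogonal to ray
              A += M
              b += M @ o
          return np.linalg.solve(A, b)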

  4. Fiducial marker-based correction for involuntary motion in weight-bearing C-arm CT scanning of knees. II. Experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Choi, Jang-Hwan; Maier, Andreas; Keil, Andreas

    2014-06-15

    Purpose: A C-arm CT system has been shown to be capable of scanning a single cadaver leg under loaded conditions by virtue of its highly flexible acquisition trajectories. In Part I of this study, using the 4D XCAT-based numerical simulation, the authors predicted that the involuntary motion in the lower body of subjects in weight-bearing positions would seriously degrade image quality and the authors suggested three motion compensation methods by which the reconstructions could be corrected to provide diagnostic image quality. Here, the authors demonstrate that a flat-panel angiography system is appropriate for scanning both legs of subjects in vivo under weight-bearing conditions and further evaluate the three motion-correction algorithms using in vivo data. Methods: The geometry of a C-arm CT system for a horizontal scan trajectory was calibrated using the PDS-2 phantom. The authors acquired images of two healthy volunteers while lying supine on a table, standing, and squatting at several knee flexion angles. In order to identify the involuntary motion of the lower body, nine 1-mm-diameter tantalum fiducial markers were attached around the knee. The static mean marker position in 3D, a reference for motion compensation, was estimated by back-projecting detected markers in multiple projections using calibrated projection matrices and identifying the intersection points in 3D of the back-projected rays. Motion was corrected using three different methods (described in detail previously): (1) 2D projection shifting, (2) 2D deformable projection warping, and (3) 3D rigid body warping. For quantitative image quality analysis, SSIM indices for the three methods were compared using the supine data as a ground truth. Results: A 2D Euclidean distance-based metric of subjects’ motion ranged from 0.85 mm (±0.49 mm) to 3.82 mm (±2.91 mm) (corresponding to 2.76 to 12.41 pixels) resulting in severe motion artifacts in 3D reconstructions. Shifting in 2D, 2D warping, and 3D warping improved the SSIM in the central slice by 20.22%, 16.83%, and 25.77% in the data with the largest motion among the five datasets (SCAN5); improvement in off-center slices was 18.94%, 29.14%, and 36.08%, respectively. Conclusions: The authors showed that C-arm CT control can be implemented for nonstandard horizontal trajectories which enabled us to scan and successfully reconstruct both legs of volunteers in weight-bearing positions. As predicted using theoretical models, the proposed motion correction methods improved image quality by reducing motion artifacts in reconstructions; 3D warping performed better than the 2D methods, especially in off-center slices.

  5. Feedback-related brain activity predicts learning from feedback in multiple-choice testing.

    PubMed

    Ernst, Benjamin; Steinhauser, Marco

    2012-06-01

    Different event-related potentials (ERPs) have been shown to correlate with learning from feedback in decision-making tasks and with learning in explicit memory tasks. In the present study, we investigated which ERPs predict learning from corrective feedback in a multiple-choice test, which combines elements from both paradigms. Participants worked through sets of multiple-choice items of a Swahili-German vocabulary task. Whereas the initial presentation of an item required the participants to guess the answer, corrective feedback could be used to learn the correct response. Initial analyses revealed that corrective feedback elicited components related to reinforcement learning (FRN), as well as to explicit memory processing (P300) and attention (early frontal positivity). However, only the P300 and early frontal positivity were positively correlated with successful learning from corrective feedback, whereas the FRN was even larger when learning failed. These results suggest that learning from corrective feedback crucially relies on explicit memory processing and attentional orienting to corrective feedback, rather than on reinforcement learning.

  6. Simulation of cryogenic turbopump annular seals

    NASA Astrophysics Data System (ADS)

    Palazzolo, Alan B.

    1992-12-01

    The goal of the current work is to develop software that can accurately predict the dynamic coefficients, forces, leakage and horsepower loss for annular seals which have a potential for affecting the rotordynamic behavior of the pumps. The fruit of last year's research was the computer code SEALPAL, which included capabilities for linear tapered geometry, Moody friction factor and inlet pre-swirl. This code produced results which in most cases compared very well with check cases presented in the literature. The TAMUSEAL I code, which was written to improve SEALPAL by correcting a bug and by adding more accurate integration algorithms and additional capabilities, was then used to predict dynamic coefficients and leakage for the NASA/Pratt and Whitney Alternate Turbopump Development (ATD) LOX pump's seal.

  7. Verification of sub-grid filtered drag models for gas-particle fluidized beds with immersed cylinder arrays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sarkar, Avik; Sun, Xin; Sundaresan, Sankaran

    2014-04-23

    The accuracy of coarse-grid multiphase CFD simulations of fluidized beds may be improved via the inclusion of filtered constitutive models. In our previous study (Sarkar et al., Chem. Eng. Sci., 104, 399-412), we developed such a set of filtered drag relationships for beds with immersed arrays of cooling tubes. Verification of these filtered drag models is addressed in this work. Predictions from coarse-grid simulations with the sub-grid filtered corrections are compared against accurate, highly-resolved simulations of full-scale turbulent and bubbling fluidized beds. The filtered drag models offer a computationally efficient yet accurate alternative for obtaining macroscopic predictions, but the spatial resolution of meso-scale clustering heterogeneities is sacrificed.

  8. Using Unified Modelling Language (UML) as a process-modelling technique for clinical-research process improvement.

    PubMed

    Kumarapeli, P; De Lusignan, S; Ellis, T; Jones, B

    2007-03-01

    The Primary Care Data Quality programme (PCDQ) is a quality-improvement programme which processes routinely collected general practice computer data. Patient data collected from a wide range of different brands of clinical computer systems are aggregated, processed, and fed back to practices in an educational context to improve the quality of care. Process modelling is a well-established approach used to gain understanding of, systematically appraise, and identify areas of improvement in a business process. Unified modelling language (UML) is a general-purpose modelling technique used for this purpose. We used UML to appraise the PCDQ process to see if the efficiency and predictability of the process could be improved. Activity analysis and thinking-aloud sessions were used to collect data to generate UML diagrams. The UML model highlighted the sequential nature of the current process as a barrier to efficiency gains. It also identified the uneven distribution of process controls, lack of symmetric communication channels, critical dependencies among processing stages, and failure to implement all the lessons learned in the piloting phase. It also suggested that improved structured reporting at each stage (especially from the pilot phase), parallel processing of data, and correctly positioned process controls should improve the efficiency and predictability of research projects. Process modelling provided a rational basis for the critical appraisal of a clinical data processing system; its potential may be underutilized within health care.

  9. An Improved K-Epsilon Model for Near-Wall Turbulence and Comparison with Direct Numerical Simulation

    NASA Technical Reports Server (NTRS)

    Shih, T. H.

    1990-01-01

    An improved k-epsilon model for low Reynolds number turbulence near a wall is presented. The near-wall asymptotic behavior of the eddy viscosity and the pressure transport term in the turbulent kinetic energy equation is analyzed. Based on this analysis, a modified eddy viscosity model, having correct near-wall behavior, is suggested, and a model for the pressure transport term in the k-equation is proposed. In addition, a modeled dissipation rate equation is reformulated. Fully developed channel flows were used for model testing. The calculations using various k-epsilon models are compared with direct numerical simulations. The results show that the present k-epsilon model performs well in predicting the behavior of near-wall turbulence. Significant improvement over previous k-epsilon models is obtained.

  10. Can bias correction and statistical downscaling methods improve the skill of seasonal precipitation forecasts?

    NASA Astrophysics Data System (ADS)

    Manzanas, R.; Lucero, A.; Weisheimer, A.; Gutiérrez, J. M.

    2018-02-01

    Statistical downscaling methods are popular post-processing tools which are widely used in many sectors to adapt the coarse-resolution biased outputs from global climate simulations to the regional-to-local scale typically required by users. They range from simple and pragmatic Bias Correction (BC) methods, which directly adjust the model outputs of interest (e.g. precipitation) according to the available local observations, to more complex Perfect Prognosis (PP) ones, which indirectly derive local predictions (e.g. precipitation) from appropriate upper-air large-scale model variables (predictors). Statistical downscaling methods have been extensively used and critically assessed in climate change applications; however, their advantages and limitations in seasonal forecasting are not well understood yet. In particular, a key problem in this context is whether they serve to improve the forecast quality/skill of raw model outputs beyond the adjustment of their systematic biases. In this paper we analyze this issue by applying two state-of-the-art BC and two PP methods to downscale precipitation from a multimodel seasonal hindcast in a challenging tropical region, the Philippines. To properly assess the potential added value beyond the reduction of model biases, we consider two validation scores which are not sensitive to changes in the mean (correlation and reliability categories). Our results show that, whereas BC methods maintain or worsen the skill of the raw model forecasts, PP methods can yield significant skill improvement (worsening) in cases for which the large-scale predictor variables considered are better (worse) predicted by the model than precipitation. For instance, PP methods are found to increase (decrease) model reliability in nearly 40% of the stations considered in boreal summer (autumn). Therefore, the choice of a convenient downscaling approach (either BC or PP) depends on the region and the season.
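
    Of the two families compared, BC methods are the simpler: empirical quantile mapping, for instance, replaces each forecast value by the observed climatological value at the same model quantile, adjusting the distribution but not the temporal signal (which is one reason correlation-based skill is largely unchanged). A minimal sketch:

      import numpy as np

      def quantile_map(forecast, model_clim, obs_clim):
          # Empirical quantile mapping: locate each forecast value's quantile
          # in the model climatology, then read the observed climatology at
          # that same quantile; the climatologies are historical samples
          # (e.g. hindcasts and station precipitation for the same season)
          q = np.searchsorted(np.sort(model_clim), forecast) / len(model_clim)
          return np.quantile(obs_clim, np.clip(q, 0.0, 1.0))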

  11. Predicting Statistical Response and Extreme Events in Uncertainty Quantification through Reduced-Order Models

    NASA Astrophysics Data System (ADS)

    Qi, D.; Majda, A.

    2017-12-01

    A low-dimensional reduced-order statistical closure model is developed for quantifying the uncertainty in statistical sensitivity and intermittency in the principal model directions with largest variability in high-dimensional turbulent systems and turbulent transport models. Imperfect model sensitivity is improved through a recent mathematical strategy for calibrating model errors in a training phase, where information theory and linear statistical response theory are combined in a systematic fashion to achieve optimal model performance. The idea in the reduced-order method comes from a self-consistent mathematical framework for general systems with quadratic nonlinearity, where crucial high-order statistics are approximated by a systematic model calibration procedure. Model efficiency is improved through additional damping and noise corrections that replace the expensive energy-conserving nonlinear interactions. Model errors due to the imperfect nonlinear approximation are corrected by tuning the model parameters using linear response theory with an information metric in a training phase before prediction. A statistical energy principle is adopted to introduce a global scaling factor in characterizing the higher-order moments in a consistent way to improve model sensitivity. Stringent test models of barotropic and baroclinic turbulence are used to demonstrate the feasibility of the reduced-order methods. Principal statistical responses in mean and variance can be captured by the reduced-order models with accuracy and efficiency. In addition, the reduced-order models are used to capture the crucial passive tracer field that is advected by the baroclinic turbulent flow. It is demonstrated that crucial principal statistical quantities, like the tracer spectrum and the fat tails in the tracer probability density functions at the most important large scales, can be captured efficiently and accurately using the reduced-order tracer model in various dynamical regimes of the flow field with distinct statistical structures.

  12. Improving the accuracy of flood forecasting with transpositions of ensemble NWP rainfall fields considering orographic effects

    NASA Astrophysics Data System (ADS)

    Yu, Wansik; Nakakita, Eiichi; Kim, Sunmin; Yamaguchi, Kosei

    2016-08-01

    The use of meteorological ensembles to produce sets of hydrological predictions has increased the capability to issue flood warnings. However, the spatial scale of the hydrological domain is still much finer than that of meteorological models, and NWP models have challenges with displacement errors. The main objective of this study is to enhance the transposition method proposed in Yu et al. (2014) and to suggest a post-processing ensemble flood forecasting method for the real-time updating and accuracy improvement of flood forecasts that considers the separation of orographic rainfall and the correction of misplaced rain distributions using additional ensemble information obtained through the transposition of rain distributions. In the first step of the proposed method, ensemble forecast rainfalls from a numerical weather prediction (NWP) model are separated into orographic and non-orographic rainfall fields using atmospheric variables and the extraction of the topographic effect. The non-orographic rainfall fields are then passed through the transposition scheme to produce additional ensemble information, and new ensemble NWP rainfall fields are calculated by recombining the transposed non-orographic rain fields with the separated orographic rainfall fields to generate place-corrected ensemble information. This additional ensemble information is then applied in a hydrologic model for post-processed flood forecasting at a 6-h interval. The newly proposed method has a clear advantage in improving the accuracy of the mean value of the ensemble flood forecast. Our study is carried out and verified using the largest flood event, caused by typhoon 'Talas' in 2011, over two catchments, the Futatsuno (356.1 km2) and Nanairo (182.1 km2) dam catchments of the Shingu river basin (2360 km2), located in the Kii peninsula, Japan.

  13. Global seasonal climate predictability in a two tiered forecast system: part I: boreal summer and fall seasons

    NASA Astrophysics Data System (ADS)

    Misra, Vasubandhu; Li, H.; Wu, Z.; DiNapoli, S.

    2014-03-01

    This paper shows demonstrable improvement in the global seasonal climate predictability of boreal summer (at zero lead) and fall (at one-season lead) seasonal mean precipitation and surface temperature from a two-tiered seasonal hindcast forced with forecasted SST, relative to two other contemporary operational coupled ocean-atmosphere climate models. The results from an extensive set of seasonal hindcasts are analyzed to come to this conclusion. This improvement is attributed to: (1) the multi-model bias-corrected SST used to force the atmospheric model; (2) the global atmospheric model, which is run at a relatively high grid resolution of 50 km compared to the two other coupled ocean-atmosphere models; and (3) the physics of the atmospheric model, especially that related to the convective parameterization scheme. The results of the seasonal hindcast are analyzed for both deterministic and probabilistic skill. The probabilistic skill analysis shows that significantly more forecast skill can be harvested from these seasonal hindcasts than the deterministic skill analysis suggests. The paper concludes that the coupled ocean-atmosphere seasonal hindcasts have reached a reasonable fidelity to exploit their SST anomaly forecasts for forcing such relatively higher-resolution two-tier prediction experiments to glean further boreal summer and fall seasonal prediction skill.

  14. Control Strategy of Active Power Filter Based on Modular Multilevel Converter

    NASA Astrophysics Data System (ADS)

    Xie, Xifeng

    2018-03-01

    To improve the capacity, voltage withstand capability, and equivalent switching frequency of the active power filter (APF), a control strategy for an APF based on the Modular Multilevel Converter (MMC) is presented. In this control strategy, an indirect current control method is used to achieve decoupled control of the active and reactive currents; a voltage balance control strategy stabilizes the sub-module capacitor voltages; and a predictive current control method is used to track and control the harmonic currents. As a result, the harmonic current is suppressed and power quality is improved. Finally, a simulation model of the MMC-based active power filter controller is established in Matlab/Simulink; the simulation demonstrates that the proposed strategy is feasible and correct.

  15. Virtual simulation of the postsurgical cosmetic outcome in patients with Pectus Excavatum

    NASA Astrophysics Data System (ADS)

    Vilaça, João L.; Moreira, António H. J.; L-Rodrigues, Pedro; Rodrigues, Nuno; Fonseca, Jaime C.; Pinho, A. C. M.; Correia-Pinto, Jorge

    2011-03-01

    Pectus excavatum is the most common congenital deformity of the anterior chest wall, in which several ribs and the sternum grow abnormally. Nowadays, the surgical correction is carried out in children and adults through the Nuss technique. This technique has been shown to be safe, with cosmesis and the prevention of psychological problems and social stress as major drivers. To date, no application is known to predict the cosmetic outcome of pectus excavatum surgical correction. Such a tool could be used to help the surgeon and the patient at the moment of deciding the need for surgical correction. This work is a first step towards predicting the postsurgical outcome of pectus excavatum surgery correction. To this end, a point cloud of the skin surface along the thoracic wall was first determined using Computed Tomography (before surgical correction) and the Polhemus FastSCAN (after surgical correction). Then, a surface mesh was reconstructed from each of the two point clouds using a Radial Basis Function algorithm, followed by affine registration between the meshes. After registration, the surgical correction influence area (SCIA) of the thoracic wall was studied. This SCIA was used to train, test and validate artificial neural networks (ANNs) in order to predict the surgical outcome of pectus excavatum correction and to determine the degree of convergence of the SCIA in different patients. Often, the ANNs did not converge to a satisfactory solution (each patient had their own deformity characteristics), thus preventing the creation of a mathematical model capable of estimating the postsurgical outcome with satisfactory results.

  16. The relationship between change in subjective outcome and change in disease: a potential paradox.

    PubMed

    Kievit, Wietske; Hendrikx, Jos; Stalmeier, Peep F M; van de Laar, Mart A F J; Van Riel, Piet L C M; Adang, Eddy M

    2010-09-01

    Response shift theory suggests that improvements in health lead patients to change their internal standards and re-assess former health states as worse than initially rated when using retrospective ratings via the then-test. The predictions of response shift theory can be illustrated using prospect theory, whereby a change in current health causes a change in reference frame. Therefore, if health deteriorates, the former health state will receive a better rating, whereas if it improves, the former health state will receive a worse rating. To explore the predictions of response shift and prospect theory by relating subjective change to objective change. Baseline and 3-month follow-up data from a cohort of rheumatoid arthritis patients (N = 197) starting on TNFalpha-blocking agents were used. Objective disease change was classified according to a disease-specific clinical outcome measure (DAS28). Visual analogue scales (VAS) for general health (GH) and pain were used as self-reported measures. Three months after starting on anti-TNFalpha, patients used the then-test to re-rate their baseline health with regard to general health and pain. Differences between then-test value and baseline values were calculated and tested between improved, non-improved and deteriorated patients by the Student t-test. At 3 months, 51 (25.9%) patients had good improvement in health, 83 (42.1%) had moderate improvement, and 63 (32.0%) had no improvement or deteriorated in health. All patients no matter whether they improved, did not improve, or even became worse rated their health as worse retrospectively. The difference between the then-test rating and the baseline value was similarly sized in all groups. More positive ratings of retrospective health are independent of disease change. This suggests that patients do not necessarily change their standards in line with their disease change, and therefore it is inappropriate to use the then-test to correct for such a change. If a then-test is used to correct for shifts in internal standards, it might lead to the paradoxical result that patients who do not improve or even deteriorate increase significantly on self-reported health and pain. An alternative explanation for differences in retrospective and prospective ratings of health is the implicit theory of change which is more successful in explaining our results than prospect theory.

  17. Automatic gender detection of dream reports: A promising approach.

    PubMed

    Wong, Christina; Amini, Reza; De Koninck, Joseph

    2016-08-01

    A computer program was developed in an attempt to differentiate the dreams of males from females. Hypothesized gender predictors were based on previous literature concerning both dream content and written language features. Dream reports from home-collected dream diaries of 100 male (144 dreams) and 100 female (144 dreams) adolescent Anglophones were matched for equal length. They were first scored with the Hall and Van de Castle (HVDC) scales and quantified using DreamSAT. Two male and two female undergraduate students were asked to read all dreams and predict the dreamer's gender. They averaged a pairwise percent correct gender prediction of 75.8% (κ=0.516), while the Automatic Analysis showed that the computer program's accuracy was 74.5% (κ=0.492), both of which were higher than chance of 50% (κ=0.00). The prediction levels were maintained when dreams containing obvious gender identifiers were eliminated and integration of HVDC scales did not improve prediction. Copyright © 2016 Elsevier Inc. All rights reserved.

  18. Advanced error-prediction LDPC with temperature compensation for highly reliable SSDs

    NASA Astrophysics Data System (ADS)

    Tokutomi, Tsukasa; Tanakamaru, Shuhei; Iwasaki, Tomoko Ogura; Takeuchi, Ken

    2015-09-01

    To improve the reliability of NAND Flash memory based solid-state drives (SSDs), error-prediction LDPC (EP-LDPC) has been proposed for multi-level-cell (MLC) NAND Flash memory (Tanakamaru et al., 2012, 2013), which is effective for long retention times. However, EP-LDPC is not as effective for triple-level cell (TLC) NAND Flash memory, because TLC NAND Flash has higher error rates and is more sensitive to program-disturb error. Therefore, advanced error-prediction LDPC (AEP-LDPC) has been proposed for TLC NAND Flash memory (Tokutomi et al., 2014). AEP-LDPC can correct errors more accurately by precisely describing the error phenomena. In this paper, the effects of AEP-LDPC are investigated in a 2×nm TLC NAND Flash memory with temperature characterization. Compared with LDPC-with-BER-only, the SSD's data-retention time is increased by 3.4× and 9.5× at room-temperature (RT) and 85 °C, respectively. Similarly, the acceptable BER is increased by 1.8× and 2.3×, respectively. Moreover, AEP-LDPC can correct errors with pre-determined tables made at higher temperatures to shorten the measurement time before shipping. Furthermore, it is found that one table can cover behavior over a range of temperatures in AEP-LDPC. As a result, the total table size can be reduced to 777 kBytes, which makes this approach more practical.

  19. Models for H₃ receptor antagonist activity of sulfonylurea derivatives.

    PubMed

    Khatri, Naveen; Madan, A K

    2014-03-01

    The histamine H₃ receptor has been perceived as an auspicious target for the treatment of various central and peripheral nervous system diseases. In the present study, a wide variety of 60 2D and 3D molecular descriptors (MDs) were successfully utilized for the development of models for the prediction of antagonist activity of sulfonylurea derivatives for histamine H₃ receptors. Models were developed through decision tree (DT), random forest (RF) and moving average analysis (MAA). Dragon software version 6.0.28 was employed for calculation of values of diverse MDs of each analogue involved in the data set. The DT classified and correctly predicted the input data with an impressive non-error rate of 94% in the training set and 82.5% during cross validation. RF correctly classified the analogues into active and inactive with a non-error rate of 79.3%. The MAA based models predicted the antagonist histamine H₃ receptor activity with a non-error rate of up to 90%. Active ranges of the proposed MAA based models not only exhibited high potency but also showed improved safety as indicated by relatively high values of selectivity index. The statistical significance of the models was assessed through sensitivity, specificity, non-error rate, Matthews correlation coefficient and intercorrelation analysis. The proposed models offer vast potential for providing lead structures for development of potent but safe H₃ receptor antagonist sulfonylurea derivatives. Copyright © 2013 Elsevier Inc. All rights reserved.
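    For readers unfamiliar with the reported statistics, the snippet below computes sensitivity, specificity, the non-error rate (overall accuracy) and the Matthews correlation coefficient for a hypothetical active/inactive split; it is purely illustrative and does not reproduce the Dragon/decision-tree pipeline.

    ```python
    from sklearn.metrics import confusion_matrix, matthews_corrcoef

    y_true = [1, 1, 1, 0, 0, 0, 1, 0, 1, 0]   # hypothetical activity labels
    y_pred = [1, 1, 0, 0, 0, 1, 1, 0, 1, 0]   # hypothetical model output

    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    non_error_rate = (tp + tn) / (tp + tn + fp + fn)  # i.e. overall accuracy
    mcc = matthews_corrcoef(y_true, y_pred)
    print(sensitivity, specificity, non_error_rate, mcc)
    ```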

  20. Method and apparatus for sensor fusion

    NASA Technical Reports Server (NTRS)

    Krishen, Kumar (Inventor); Shaw, Scott (Inventor); Defigueiredo, Rui J. P. (Inventor)

    1991-01-01

    Method and apparatus for fusion of data from optical and radar sensors by error minimization procedure is presented. The method was applied to the problem of shape reconstruction of an unknown surface at a distance. The method involves deriving an incomplete surface model from an optical sensor. The unknown characteristics of the surface are represented by some parameter. The correct value of the parameter is computed by iteratively generating theoretical predictions of the radar cross sections (RCS) of the surface, comparing the predicted and the observed values for the RCS, and improving the surface model from results of the comparison. Theoretical RCS may be computed from the surface model in several ways. One RCS prediction technique is the method of moments. The method of moments can be applied to an unknown surface only if some shape information is available from an independent source. The optical image provides the independent information.
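    A minimal sketch of the iterative loop described in this record, with a stand-in forward model in place of a real method-of-moments RCS computation; the parameter, bounds and quadratic toy model are hypothetical.

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar

    observed_rcs = 3.7                        # hypothetical radar measurement

    def predicted_rcs(p):
        # Placeholder for a method-of-moments RCS computation on the
        # optically derived surface model with unknown parameter p.
        return 2.0 + 0.5 * p + 0.1 * p**2

    # Error minimization: choose p so predicted and observed RCS agree.
    result = minimize_scalar(lambda p: (predicted_rcs(p) - observed_rcs) ** 2,
                             bounds=(0.0, 10.0), method="bounded")
    print("estimated surface parameter:", result.x)
    ```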

  1. An exponential filter model predicts lightness illusions

    PubMed Central

    Zeman, Astrid; Brooks, Kevin R.; Ghebreab, Sennay

    2015-01-01

    Lightness, or perceived reflectance of a surface, is influenced by surrounding context. This is demonstrated by the Simultaneous Contrast Illusion (SCI), where a gray patch is perceived lighter against a black background and vice versa. Conversely, assimilation is where the lightness of the target patch moves toward that of the bounding areas and can be demonstrated in White's effect. Blakeslee and McCourt (1999) introduced an oriented difference-of-Gaussian (ODOG) model that is able to account for both contrast and assimilation in a number of lightness illusions and that has been subsequently improved using localized normalization techniques. We introduce a model inspired by image statistics that is based on a family of exponential filters, with kernels spanning across multiple sizes and shapes. We include an optional second stage of normalization based on contrast gain control. Our model was tested on a well-known set of lightness illusions that have previously been used to evaluate ODOG and its variants, and model lightness values were compared with typical human data. We investigate whether predictive success depends on filters of a particular size or shape and whether pooling information across filters can improve performance. The best single filter correctly predicted the direction of lightness effects for 21 out of 27 illusions. Combining two filters together increased the best performance to 23, with asymptotic performance at 24 for an arbitrarily large combination of filter outputs. While normalization improved prediction magnitudes, it only slightly improved overall scores in direction predictions. The prediction performance of 24 out of 27 illusions equals that of the best performing ODOG variant, with greater parsimony. Our model shows that V1-style orientation-selectivity is not necessary to account for lightness illusions and that a low-level model based on image statistics is able to account for a wide range of both contrast and assimilation effects. PMID:26157381
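    A compact sketch of the core mechanism, assuming isotropic exponential kernels and simple averaging across scales (the paper's filter family and pooling rules are richer): filter a simultaneous-contrast-style image with several exponential kernels and read the pooled response at the patch centre as a lightness prediction.

    ```python
    import numpy as np
    from scipy.ndimage import convolve

    def exponential_kernel(size, scale):
        ax = np.arange(size) - size // 2
        r = np.hypot(*np.meshgrid(ax, ax))    # radial distance from kernel centre
        k = np.exp(-r / scale)
        return k / k.sum()                    # normalize so filtering preserves mean

    image = np.zeros((65, 65))
    image[20:45, 20:45] = 0.5                 # gray patch on a black background

    # Family of exponential filters spanning multiple spatial scales.
    responses = [convolve(image, exponential_kernel(33, s)) for s in (2, 4, 8, 16)]
    pooled = np.mean(responses, axis=0)       # pooling information across filters
    print("predicted lightness at patch centre:", pooled[32, 32])
    ```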

  2. Multi-scale enhancement of climate prediction over land by increasing the model sensitivity to vegetation variability in EC-Earth

    NASA Astrophysics Data System (ADS)

    Alessandri, Andrea; Catalano, Franco; De Felice, Matteo; Van Den Hurk, Bart; Doblas Reyes, Francisco; Boussetta, Souhail; Balsamo, Gianpaolo; Miller, Paul

    2016-04-01

    The EC-Earth earth system model has been recently developed to include the dynamics of vegetation. In its original formulation, vegetation variability is simply operated by the Leaf Area Index (LAI), which affects climate basically by changing the vegetation physiological resistance to evapotranspiration. This coupling has been found to have only a weak effect on the surface climate modeled by EC-Earth. In reality, the effective sub-grid vegetation fractional coverage will vary seasonally and at interannual time-scales in response to leaf-canopy growth, phenology and senescence. Therefore it affects biophysical parameters such as the albedo, surface roughness and soil field capacity. To adequately represent this effect in EC-Earth, we included an exponential dependence of the vegetation cover on the LAI. By comparing two sets of simulations performed with and without the new variable fractional-coverage parameterization, spanning retrospective predictions at the decadal (5-years), seasonal and sub-seasonal time-scales, we show for the first time a significant multi-scale enhancement of vegetation impacts in climate simulation and prediction over land. Particularly large effects at multiple time scales are shown over boreal winter middle-to-high latitudes over Canada, West US, Eastern Europe, Russia and eastern Siberia due to the implemented time-varying shadowing effect by tree-vegetation on snow surfaces. Over Northern Hemisphere boreal forest regions the improved representation of vegetation cover tends to correct the winter warm biases, improves the climate change sensitivity, the decadal potential predictability as well as the skill of forecasts at seasonal and sub-seasonal time-scales. Significant improvements of the prediction of 2m temperature and rainfall are also shown over transitional land surface hot spots. Both the potential predictability at decadal time-scale and seasonal-forecasts skill are enhanced over Sahel, North American Great Plains, Nordeste Brazil and South East Asia, mainly related to improved performance in the surface evapotranspiration.
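    The "exponential dependence of the vegetation cover on the LAI" is commonly written in a Lambert-Beer form; the sketch below uses an illustrative extinction coefficient k = 0.5, which is an assumption rather than the EC-Earth setting.

    ```python
    import numpy as np

    def vegetation_fraction(lai, k=0.5):
        # Effective sub-grid vegetation cover saturating with leaf area index.
        return 1.0 - np.exp(-k * lai)

    print(vegetation_fraction(np.array([0.5, 2.0, 5.0])))  # sparse -> dense canopy
    ```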

  3. Multi-scale enhancement of climate prediction over land by increasing the model sensitivity to vegetation variability in EC-Earth

    NASA Astrophysics Data System (ADS)

    Alessandri, A.; Catalano, F.; De Felice, M.; van den Hurk, B.; Doblas-Reyes, F. J.; Boussetta, S.; Balsamo, G.; Miller, P. A.

    2016-12-01

    The European consortium earth system model (EC-Earth; http://www.ec-earth.org) has been recently developed to include the dynamics of vegetation. In its original formulation, vegetation variability is simply operated by the Leaf Area Index (LAI), which affects climate basically by changing the vegetation physiological resistance to evapotranspiration. This coupling has been found to have only a weak effect on the surface climate modeled by EC-Earth. In reality, the effective sub-grid vegetation fractional coverage will vary seasonally and at interannual time-scales in response to leaf-canopy growth, phenology and senescence. Therefore it affects biophysical parameters such as the albedo, surface roughness and soil field capacity. To adequately represent this effect in EC-Earth, we included an exponential dependence of the vegetation cover on the LAI. By comparing two sets of simulations performed with and without the new variable fractional-coverage parameterization, spanning from centennial (20th Century) simulations and retrospective predictions to the decadal (5-years), seasonal and weather time-scales, we show for the first time a significant multi-scale enhancement of vegetation impacts in climate simulation and prediction over land. Particularly large effects at multiple time scales are shown over boreal winter middle-to-high latitudes over Canada, West US, Eastern Europe, Russia and eastern Siberia due to the implemented time-varying shadowing effect by tree-vegetation on snow surfaces. Over Northern Hemisphere boreal forest regions the improved representation of vegetation cover tends to correct the winter warm biases, improves the climate change sensitivity, the decadal potential predictability as well as the skill of forecasts at seasonal and weather time-scales. Significant improvements of the prediction of 2m temperature and rainfall are also shown over transitional land surface hot spots. Both the potential predictability at decadal time-scale and seasonal-forecasts skill are enhanced over Sahel, North American Great Plains, Nordeste Brazil and South East Asia, mainly related to improved performance in the surface evapotranspiration.

  4. Multi-scale enhancement of climate prediction over land by increasing the model sensitivity to vegetation variability in EC-Earth

    NASA Astrophysics Data System (ADS)

    Alessandri, Andrea; Catalano, Franco; De Felice, Matteo; Van Den Hurk, Bart; Doblas Reyes, Francisco; Boussetta, Souhail; Balsamo, Gianpaolo; Miller, Paul A.

    2017-08-01

    The EC-Earth earth system model has been recently developed to include the dynamics of vegetation. In its original formulation, vegetation variability is simply operated by the Leaf Area Index (LAI), which affects climate basically by changing the vegetation physiological resistance to evapotranspiration. This coupling has been found to have only a weak effect on the surface climate modeled by EC-Earth. In reality, the effective sub-grid vegetation fractional coverage will vary seasonally and at interannual time-scales in response to leaf-canopy growth, phenology and senescence. Therefore it affects biophysical parameters such as the albedo, surface roughness and soil field capacity. To adequately represent this effect in EC-Earth, we included an exponential dependence of the vegetation cover on the LAI. By comparing two sets of simulations performed with and without the new variable fractional-coverage parameterization, spanning from centennial (twentieth century) simulations and retrospective predictions to the decadal (5-years), seasonal and weather time-scales, we show for the first time a significant multi-scale enhancement of vegetation impacts in climate simulation and prediction over land. Particularly large effects at multiple time scales are shown over boreal winter middle-to-high latitudes over Canada, West US, Eastern Europe, Russia and eastern Siberia due to the implemented time-varying shadowing effect by tree-vegetation on snow surfaces. Over Northern Hemisphere boreal forest regions the improved representation of vegetation cover tends to correct the winter warm biases, improves the climate change sensitivity, the decadal potential predictability as well as the skill of forecasts at seasonal and weather time-scales. Significant improvements of the prediction of 2 m temperature and rainfall are also shown over transitional land surface hot spots. Both the potential predictability at decadal time-scale and seasonal-forecasts skill are enhanced over Sahel, North American Great Plains, Nordeste Brazil and South East Asia, mainly related to improved performance in the surface evapotranspiration.

  5. Multi-scale enhancement of climate prediction over land by increasing the model sensitivity to vegetation variability in EC-Earth

    NASA Astrophysics Data System (ADS)

    Alessandri, Andrea; Catalano, Franco; De Felice, Matteo; Van Den Hurk, Bart; Doblas Reyes, Francisco; Boussetta, Souhail; Balsamo, Gianpaolo; Miller, Paul A.

    2017-04-01

    The EC-Earth earth system model has been recently developed to include the dynamics of vegetation. In its original formulation, vegetation variability is simply operated by the Leaf Area Index (LAI), which affects climate basically by changing the vegetation physiological resistance to evapotranspiration. This coupling has been found to have only a weak effect on the surface climate modeled by EC-Earth. In reality, the effective sub-grid vegetation fractional coverage will vary seasonally and at interannual time-scales in response to leaf-canopy growth, phenology and senescence. Therefore it affects biophysical parameters such as the albedo, surface roughness and soil field capacity. To adequately represent this effect in EC-Earth, we included an exponential dependence of the vegetation cover on the LAI. By comparing two sets of simulations performed with and without the new variable fractional-coverage parameterization, spanning from centennial (20th Century) simulations and retrospective predictions to the decadal (5-years), seasonal and weather time-scales, we show for the first time a significant multi-scale enhancement of vegetation impacts in climate simulation and prediction over land. Particularly large effects at multiple time scales are shown over boreal winter middle-to-high latitudes over Canada, West US, Eastern Europe, Russia and eastern Siberia due to the implemented time-varying shadowing effect by tree-vegetation on snow surfaces. Over Northern Hemisphere boreal forest regions the improved representation of vegetation cover tends to correct the winter warm biases, improves the climate change sensitivity, the decadal potential predictability as well as the skill of forecasts at seasonal and weather time-scales. Significant improvements of the prediction of 2m temperature and rainfall are also shown over transitional land surface hot spots. Both the potential predictability at decadal time-scale and seasonal-forecasts skill are enhanced over Sahel, North American Great Plains, Nordeste Brazil and South East Asia, mainly related to improved performance in the surface evapotranspiration.

  6. Corrected ROC analysis for misclassified binary outcomes.

    PubMed

    Zawistowski, Matthew; Sussman, Jeremy B; Hofer, Timothy P; Bentley, Douglas; Hayward, Rodney A; Wiitala, Wyndy L

    2017-06-15

    Creating accurate risk prediction models from Big Data resources such as Electronic Health Records (EHRs) is a critical step toward achieving precision medicine. A major challenge in developing these tools is accounting for imperfect aspects of EHR data, particularly the potential for misclassified outcomes. Misclassification, the swapping of case and control outcome labels, is well known to bias effect size estimates for regression prediction models. In this paper, we study the effect of misclassification on accuracy assessment for risk prediction models and find that it leads to bias in the area under the curve (AUC) metric from standard ROC analysis. The extent of the bias is determined by the false positive and false negative misclassification rates as well as disease prevalence. Notably, we show that simply correcting for misclassification while building the prediction model is not sufficient to remove the bias in AUC. We therefore introduce an intuitive misclassification-adjusted ROC procedure that accounts for uncertainty in observed outcomes and produces bias-corrected estimates of the true AUC. The method requires that misclassification rates are either known or can be estimated, quantities typically required for the modeling step. The computational simplicity of our method is a key advantage, making it ideal for efficiently comparing multiple prediction models on very large datasets. Finally, we apply the correction method to a hospitalization prediction model from a cohort of over 1 million patients from the Veterans Health Administration's EHR. Implementations of the ROC correction are provided for Stata and R. Published 2017. This article is a U.S. Government work and is in the public domain in the USA.
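    The paper's exact estimator is not reproduced here, but the following sketch conveys the idea of "accounting for uncertainty in observed outcomes": each subject enters a pairwise (Wilcoxon-type) AUC computation as a case or control, weighted by the posterior probability that its observed label is correct given assumed misclassification rates and prevalence.

    ```python
    import numpy as np

    def label_posterior(obs_label, fpr, fnr, prev):
        # Probability the subject is a true case, given its observed label,
        # the label sensitivity/specificity and the disease prevalence.
        se, sp = 1 - fnr, 1 - fpr
        if obs_label == 1:
            return se * prev / (se * prev + (1 - sp) * (1 - prev))
        return (1 - se) * prev / ((1 - se) * prev + sp * (1 - prev))

    def weighted_auc(scores, labels, fpr=0.05, fnr=0.10, prev=0.2):
        w_case = np.array([label_posterior(l, fpr, fnr, prev) for l in labels])
        w_ctrl = 1 - w_case
        num = den = 0.0
        for i in range(len(scores)):          # Wilcoxon statistic over all pairs
            for j in range(len(scores)):
                pair_w = w_case[i] * w_ctrl[j]
                num += pair_w * ((scores[i] > scores[j]) + 0.5 * (scores[i] == scores[j]))
                den += pair_w
        return num / den

    scores = np.array([0.9, 0.8, 0.3, 0.7, 0.2, 0.4])
    labels = np.array([1, 1, 0, 1, 0, 0])     # possibly misclassified outcomes
    print(weighted_auc(scores, labels))
    ```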

  7. Comparison of techniques for correction of magnification of pelvic X-rays for hip surgery planning.

    PubMed

    The, Bertram; Kootstra, Johan W J; Hosman, Anton H; Verdonschot, Nico; Gerritsma, Carina L E; Diercks, Ron L

    2007-12-01

    The aim of this study was to develop an accurate method for correction of magnification of pelvic X-rays to enhance the accuracy of hip surgery planning. All investigated methods aim at estimating the anteroposterior location of the hip joint in the supine position to correctly position a reference object for correction of magnification. An existing method, currently used in clinical practice in our clinics, is based on estimating the position of the hip joint by palpation of the greater trochanter. It is only moderately accurate and difficult to execute reliably in clinical practice. To develop a new method, 99 patients who already had a hip implant in situ were included; this enabled the true location of the hip joint to be deduced from the magnification of the prosthesis. Physical examination was used to obtain predictor variables possibly associated with the height of the hip joint. This included a simple dynamic hip joint examination to estimate the position of the center of rotation. Prediction equations were then constructed using regression analysis. The performance of these prediction equations was compared with the performance of the existing protocol. The mean absolute error in predicting the height of the hip joint center using the old method was 20 mm (range -79 mm to +46 mm). This was 11 mm for the new method (-32 mm to +39 mm). The prediction equation is: height (mm) = 34 + 1/2 abdominal circumference (cm). The newly developed prediction equation is a superior method for predicting the height of the hip joint center for correction of magnification of pelvic X-rays. We recommend its implementation in the departments of radiology and orthopedic surgery.
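    The final prediction equation translates directly into code; for example, a 100 cm abdominal circumference predicts a hip joint height of 84 mm.

    ```python
    def hip_joint_height_mm(abdominal_circumference_cm):
        # Prediction equation reported above: height (mm) = 34 + 1/2 circumference (cm).
        return 34 + 0.5 * abdominal_circumference_cm

    print(hip_joint_height_mm(100))   # -> 84.0 mm
    ```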

  8. Piggyback intraocular lens implantation to correct pseudophakic refractive error after segmental multifocal intraocular lens implantation.

    PubMed

    Venter, Jan A; Oberholster, Andre; Schallhorn, Steven C; Pelouskova, Martina

    2014-04-01

    To evaluate refractive and visual outcomes of secondary piggyback intraocular lens implantation in patients diagnosed as having residual ametropia following segmental multifocal lens implantation. Data of 80 pseudophakic eyes with ametropia that underwent Sulcoflex aspheric 653L intraocular lens implantation (Rayner Intraocular Lenses Ltd., East Sussex, United Kingdom) to correct residual refractive error were analyzed. All eyes previously had in-the-bag zonal refractive multifocal intraocular lens implantation (Lentis Mplus MF30, models LS-312 and LS-313; Oculentis GmbH, Berlin, Germany) and required residual refractive error correction. Outcome measurements included uncorrected distance visual acuity, corrected distance visual acuity, uncorrected near visual acuity, distance-corrected near visual acuity, manifest refraction, and complications. One-year data are presented in this study. The mean spherical equivalent ranged from -1.75 to +3.25 diopters (D) preoperatively (mean: +0.58 ± 1.15 D) and reduced to -1.25 to +0.50 D (mean: -0.14 ± 0.28 D; P < .01). Postoperatively, 93.8% of eyes were within ±0.50 D and 98.8% were within ±1.00 D of emmetropia. The mean uncorrected distance visual acuity improved significantly from 0.28 ± 0.16 to 0.01 ± 0.10 logMAR and 78.8% of eyes achieved 6/6 (Snellen 20/20) or better postoperatively. The mean uncorrected near visual acuity changed from 0.43 ± 0.28 to 0.19 ± 0.15 logMAR. There was no significant change in corrected distance visual acuity or distance-corrected near visual acuity. No serious intraoperative or postoperative complications requiring secondary intraocular lens removal occurred. Sulcoflex lenses proved to be a predictable and safe option for correcting residual refractive error in patients diagnosed as having pseudophakia. Copyright 2014, SLACK Incorporated.

  9. Predicting Intervention Effectiveness from Reading Accuracy and Rate Measures through the Instructional Hierarchy: Evidence for a Skill-by-Treatment Interaction

    ERIC Educational Resources Information Center

    Szadokierski, Isadora; Burns, Matthew K.; McComas, Jennifer J.

    2017-01-01

    The current study used the learning hierarchy/instructional hierarchy phases of acquisition and fluency to predict intervention effectiveness based on preintervention reading skills. Preintervention reading accuracy (percentage of words read correctly) and rate (number of words read correctly per minute) were assessed for 49 second- and…

  10. Directivity in NGA earthquake ground motions: Analysis using isochrone theory

    USGS Publications Warehouse

    Spudich, P.; Chiou, B.S.J.

    2008-01-01

    We present correction factors that may be applied to the ground motion prediction relations of Abrahamson and Silva, Boore and Atkinson, Campbell and Bozorgnia, and Chiou and Youngs (all in this volume) to model the azimuthally varying distribution of the GMRotI50 component of ground motion (commonly called 'directivity') around earthquakes. Our correction factors may be used for planar or nonplanar faults having any dip or slip rake (faulting mechanism). Our correction factors predict directivity-induced variations of spectral acceleration that are roughly half of the strike-slip variations predicted by Somerville et al. (1997), and use of our factors reduces record-to-record sigma by about 2-20% at 5 sec or greater period. © 2008, Earthquake Engineering Research Institute.

  11. Lock-in amplifier error prediction and correction in frequency sweep measurements.

    PubMed

    Sonnaillon, Maximiliano Osvaldo; Bonetto, Fabian Jose

    2007-01-01

    This article proposes an analytical algorithm for predicting errors in lock-in amplifiers (LIAs) working with time-varying reference frequency. Furthermore, a simple method for correcting such errors is presented. The reference frequency can be swept in order to measure the frequency response of a system within a given spectrum. The continuous variation of the reference frequency produces a measurement error that depends on three factors: the sweep speed, the LIA low-pass filters, and the frequency response of the measured system. The proposed error prediction algorithm is based on the final value theorem of the Laplace transform. The correction method uses a double-sweep measurement. A mathematical analysis is presented and validated with computational simulations and experimental measurements.
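    The article's Laplace-transform analysis is not reproduced here, but a toy simulation shows why a double-sweep correction works: a low-pass-filtered tracking of a swept response lags the true curve, the lag has opposite sign for upward and downward sweeps, and averaging the two largely cancels it. The filter constant and test system are assumptions.

    ```python
    import numpy as np

    f = np.linspace(10.0, 1000.0, 2000)                    # swept reference frequency
    true_response = 1.0 / np.sqrt(1.0 + (f / 300.0) ** 2)  # system under test

    def lowpass_track(x, alpha=0.01):
        # First-order low-pass filter standing in for the LIA output stage.
        y = np.empty_like(x)
        y[0] = x[0]
        for n in range(1, len(x)):
            y[n] = y[n - 1] + alpha * (x[n] - y[n - 1])
        return y

    up = lowpass_track(true_response)                  # sweep low -> high
    down = lowpass_track(true_response[::-1])[::-1]    # sweep high -> low
    corrected = 0.5 * (up + down)                      # double-sweep correction
    print(np.abs(up - true_response).max(), np.abs(corrected - true_response).max())
    ```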

  12. Quantitative Characterizations of Ultrashort Echo (UTE) Images for Supporting Air-Bone Separation in the Head

    PubMed Central

    Hsu, Shu-Hui; Cao, Yue; Lawrence, Theodore S.; Tsien, Christina; Feng, Mary; Grodzki, David M.; Balter, James M.

    2015-01-01

    Accurate separation of air and bone is critical for creating synthetic CT from MRI to support Radiation Oncology workflow. This study compares two different ultrashort echo-time sequences in the separation of air from bone, and evaluates post-processing methods that correct intensity nonuniformity of images and account for intensity gradients at tissue boundaries to improve this discriminatory power. CT and MRI scans were acquired on 12 patients under an institution review board-approved prospective protocol. The two MRI sequences tested were ultra-short TE imaging using 3D radial acquisition (UTE), and using pointwise encoding time reduction with radial acquisition (PETRA). Gradient nonlinearity correction was applied to both MR image volumes after acquisition. MRI intensity nonuniformity was corrected by vendor-provided normalization methods, and then further corrected using the N4itk algorithm. To overcome the intensity-gradient at air-tissue boundaries, spatial dilations, from 0 to 4 mm, were applied to threshold-defined air regions from MR images. Receiver operating characteristic (ROC) analyses, by comparing predicted (defined by MR images) versus “true” regions of air and bone (defined by CT images), were performed with and without residual bias field correction and local spatial expansion. The post-processing corrections increased the areas under the ROC curves (AUC) from 0.944 ± 0.012 to 0.976 ± 0.003 for UTE images, and from 0.850 ± 0.022 to 0.887 ± 0.012 for PETRA images, compared to without corrections. When expanding the threshold-defined air volumes, as expected, sensitivity of air identification decreased with an increase in specificity of bone discrimination, but in a non-linear fashion. A 1-mm air mask expansion yielded AUC increases of 1% and 4% for UTE and PETRA images, respectively. UTE images had significantly greater discriminatory power in separating air from bone than PETRA images. Post-processing strategies improved the discriminatory power of air from bone for both UTE and PETRA images, and reduced the difference between the two imaging sequences. Both postprocessed UTE and PETRA images demonstrated sufficient power to discriminate air from bone to support synthetic CT generation from MRI data. PMID:25776205

  13. Improved Accuracy of the Inherent Shrinkage Method for Fast and More Reliable Welding Distortion Calculations

    NASA Astrophysics Data System (ADS)

    Mendizabal, A.; González-Díaz, J. B.; San Sebastián, M.; Echeverría, A.

    2016-07-01

    This paper describes the implementation of a simple strategy adopted for the inherent shrinkage method (ISM) to predict welding-induced distortion. This strategy not only makes it possible for the ISM to reach accuracy levels similar to the detailed transient analysis method (considered the most reliable technique for calculating welding distortion) but also significantly reduces the time required for these types of calculations. This strategy is based on the sequential activation of welding blocks to account for welding direction and transient movement of the heat source. As a result, a significant improvement in distortion prediction is achieved. This is demonstrated by experimentally measuring and numerically analyzing distortions in two case studies: a vane segment subassembly of an aero-engine, represented with 3D-solid elements, and a car body component, represented with 3D-shell elements. The proposed strategy proves to be a good alternative for quickly estimating the correct behaviors of large welded components and may have important practical applications in the manufacturing industry.

  14. Understanding the DayCent model: Calibration, sensitivity, and identifiability through inverse modeling

    USGS Publications Warehouse

    Necpálová, Magdalena; Anex, Robert P.; Fienen, Michael N.; Del Grosso, Stephen J.; Castellano, Michael J.; Sawyer, John E.; Iqbal, Javed; Pantoja, Jose L.; Barker, Daniel W.

    2015-01-01

    The ability of biogeochemical ecosystem models to represent agro-ecosystems depends on their correct integration with field observations. We report simultaneous calibration of 67 DayCent model parameters using multiple observation types through inverse modeling using the PEST parameter estimation software. Parameter estimation reduced the total sum of weighted squared residuals by 56% and improved model fit to crop productivity, soil carbon, volumetric soil water content, soil temperature, N2O, and soil NO3− compared to the default simulation. Inverse modeling substantially reduced predictive model error relative to the default model for all model predictions, except for soil NO3− and NH4+. Post-processing analyses provided insights into parameter–observation relationships based on parameter correlations, sensitivity and identifiability. Inverse modeling tools are shown to be a powerful way to systematize and accelerate the process of biogeochemical model interrogation, improving our understanding of model function and the underlying ecosystem biogeochemical processes that they represent.
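    PEST itself is an external package, but the calibration step it performs can be sketched generically as weighted nonlinear least squares; the two-parameter stand-in model and data below are hypothetical.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    obs = np.array([3.1, 0.9, 22.0])           # hypothetical mixed observations
    weights = 1.0 / np.array([0.3, 0.1, 2.0])  # 1/sigma weighting per observation

    def simulate(params):
        # Stand-in for a biogeochemical model run with candidate parameters.
        a, b = params
        return np.array([a + b, a * b, 10 * a + b])

    def weighted_residuals(params):
        return weights * (simulate(params) - obs)

    # Minimize the total sum of weighted squared residuals, as PEST does.
    fit = least_squares(weighted_residuals, x0=[1.0, 1.0])
    print("calibrated parameters:", fit.x)
    ```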

  15. T1- and T2*-dominant extravasation correction in DSC-MRI: Part I—theoretical considerations and implications for assessment of tumor hemodynamic properties

    PubMed Central

    Bjornerud, Atle; Sorensen, A Gregory; Mouridsen, Kim; Emblem, Kyrre E

    2011-01-01

    We present a novel contrast agent (CA) extravasation-correction method based on analysis of the tissue residue function for assessment of multiple hemodynamic parameters. The method enables semiquantitative determination of the transfer constant and can be used to distinguish between T1- and T2*-dominant extravasation effects, while being insensitive to variations in tissue mean transit time (MTT). Results in 101 patients with confirmed glioma suggest that leakage-corrected absolute cerebral blood volume (CBV) values obtained with the proposed method provide improved overall survival prediction compared with normalized CBV values combined with an established leakage-correction method. Using a standard gradient-echo echo-planar imaging sequence, ∼60% and 10% of tumors with detectable CA extravasation mainly exhibited T1- and T2*-dominant leakage effects, respectively. The remaining 30% of leaky tumors had mixed T1- and T2*-dominant effects. Using an MTT-sensitive correction method, our results show that CBV is underestimated when tumor MTT is significantly longer than MTT in the reference tissue. Furthermore, results from our simulations suggest that the relative contribution of T1- versus T2*-dominant extravasation effects is strongly dependent on the effective transverse relaxivity in the extravascular space and may thus be a potential marker for cellular integrity and tissue structure. PMID:21505483
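    For context, the "established leakage-correction method" used as the comparison baseline is typically the linear-fit approach of Boxerman and colleagues (distinct from the residue-function method proposed here); a minimal version, with assumed synthetic inputs, looks like this:

    ```python
    import numpy as np

    def leakage_correct(dR2s_voxel, dR2s_ref, dt=1.0):
        # Fit voxel curve = K1 * reference - K2 * integral(reference),
        # then remove the extravasation (leakage) term.
        integ = np.cumsum(dR2s_ref) * dt
        A = np.column_stack([dR2s_ref, -integ])
        k1, k2 = np.linalg.lstsq(A, dR2s_voxel, rcond=None)[0]
        corrected = dR2s_voxel + k2 * integ
        return corrected, k1, k2

    t = np.arange(60.0)
    ref = np.exp(-((t - 20) / 5) ** 2)          # non-leaky reference curve
    voxel = 0.8 * ref - 0.02 * np.cumsum(ref)   # toy voxel with T1-dominant leakage
    corrected, k1, k2 = leakage_correct(voxel, ref)
    print(k1, k2)
    ```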

  16. Can we ease the financial burden of colonoscopy? Using real-time endoscopic assessment of polyp histology to predict surveillance intervals.

    PubMed

    Chandran, S; Parker, F; Lontos, S; Vaughan, R; Efthymiou, M

    2015-12-01

    Polyps identified at colonoscopy are predominantly diminutive (<5 mm) with a small risk (<1%) of high-grade dysplasia or carcinoma; however, the cost of histological assessment is substantial. The aim of this study was to determine whether prediction of colonoscopy surveillance intervals based on real-time endoscopic assessment of polyp histology is accurate and cost effective. A prospective cohort study was conducted across a tertiary care and private community hospital. Ninety-four patients underwent colonoscopy and polypectomy of diminutive (≤5 mm) polyps from October 2012 to July 2013, yielding a total of 159 polyps. Polyps were examined and classified according to the Sano-Emura classification system. The endoscopic assessment (optical diagnosis) of polyp histology was used to predict appropriate colonoscopy surveillance intervals. The main outcome measure was the accuracy of optical diagnosis of diminutive colonic polyps against the gold standard of histological assessment. Optical diagnosis was correct in 105/108 (97.2%) adenomas. This yielded sensitivity, specificity and positive and negative predictive values (with 95% CI) of 97.2% (92.1-99.4%), 78.4% (64.7-88.7%), 90.5% (83.7-95.2%) and 93% (80.9-98.5%) respectively. Ninety-two (98%) patients were correctly triaged to their repeat surveillance colonoscopy. Based on these findings, a cut and discard approach would have resulted in a saving of $319.77 per patient. Endoscopists within a tertiary care setting can accurately predict diminutive polyp histology and confer an appropriate surveillance interval with an associated financial benefit to the healthcare system. However, limitations to its application in the community setting exist, which may improve with further training and high-definition colonoscopes. © 2015 Royal Australasian College of Physicians.

  17. Embedded Model Error Representation and Propagation in Climate Models

    NASA Astrophysics Data System (ADS)

    Sargsyan, K.; Ricciuto, D. M.; Safta, C.; Thornton, P. E.

    2017-12-01

    Over the last decade, parametric uncertainty quantification (UQ) methods have reached a level of maturity, while the same cannot be said about representation and quantification of structural or model errors. Lack of characterization of model errors, induced by physical assumptions, phenomenological parameterizations or constitutive laws, is a major handicap in predictive science. In climate models, for example, significant computational resources are dedicated to model calibration without gaining improvement in predictive skill. Neglecting model errors during calibration/tuning will lead to overconfident and biased model parameters. At the same time, the most advanced methods accounting for model error merely correct output biases, augmenting model outputs with statistical error terms that can potentially violate physical laws, or make the calibrated model ineffective for extrapolative scenarios. This work will overview a principled path for representing and quantifying model errors, as well as propagating them together with the rest of the predictive uncertainty budget, including data noise, parametric uncertainties and surrogate-related errors. Namely, the model error terms will be embedded in select model components rather than as external corrections. Such embedding ensures consistency with physical constraints on model predictions, and renders calibrated model predictions meaningful and robust with respect to model errors. Besides, in the presence of observational data, the approach can effectively differentiate model structural deficiencies from those of data acquisition. The methodology is implemented in UQ Toolkit (www.sandia.gov/uqtoolkit), relying on a host of available forward and inverse UQ tools. We will demonstrate the application of the technique on a few applications of interest, including ACME Land Model calibration via a wide range of measurements obtained at select sites.

  18. Lung Cancer Risk Prediction Model Incorporating Lung Function: Development and Validation in the UK Biobank Prospective Cohort Study.

    PubMed

    Muller, David C; Johansson, Mattias; Brennan, Paul

    2017-03-10

    Purpose Several lung cancer risk prediction models have been developed, but none to date have assessed the predictive ability of lung function in a population-based cohort. We sought to develop and internally validate a model incorporating lung function using data from the UK Biobank prospective cohort study. Methods This analysis included 502,321 participants without a previous diagnosis of lung cancer, predominantly between 40 and 70 years of age. We used flexible parametric survival models to estimate the 2-year probability of lung cancer, accounting for the competing risk of death. Models included predictors previously shown to be associated with lung cancer risk, including sex, variables related to smoking history and nicotine addiction, medical history, family history of lung cancer, and lung function (forced expiratory volume in 1 second [FEV1]). Results During accumulated follow-up of 1,469,518 person-years, there were 738 lung cancer diagnoses. A model incorporating all predictors had excellent discrimination (concordance (c)-statistic [95% CI] = 0.85 [0.82 to 0.87]). Internal validation suggested that the model will discriminate well when applied to new data (optimism-corrected c-statistic = 0.84). The full model, including FEV1, also had modestly superior discriminatory power compared with one designed solely on the basis of questionnaire variables (c-statistic = 0.84 [0.82 to 0.86]; optimism-corrected c-statistic = 0.83; p for FEV1 = 3.4 × 10^-13). The full model had better discrimination than standard lung cancer screening eligibility criteria (c-statistic = 0.66 [0.64 to 0.69]). Conclusion A risk prediction model that includes lung function has strong predictive ability, which could improve eligibility criteria for lung cancer screening programs.

  19. A Pilot Study Combining a GC-Sensor Device with a Statistical Model for the Identification of Bladder Cancer from Urine Headspace

    PubMed Central

    Khalid, Tanzeela; White, Paul; De Lacy Costello, Ben; Persad, Raj; Ewen, Richard; Johnson, Emmanuel; Probert, Chris S.; Ratcliffe, Norman

    2013-01-01

    There is a need to reduce the number of cystoscopies on patients with haematuria. Presently there are no reliable biomarkers to screen for bladder cancer. In this paper, we evaluate a new, simple, in-house fabricated GC-sensor device in the diagnosis of bladder cancer based on volatiles. Sensor outputs from 98 urine samples were used to build and test diagnostic models. Samples were taken from 24 patients with transitional (urothelial) cell carcinoma (age 27-91 years, median 71 years) and 74 controls presenting with urological symptoms, but without a urological malignancy (age 29-86 years, median 64 years); results were analysed using two statistical approaches to assess the robustness of the methodology. A two-group linear discriminant analysis method using a total of 9 time points (which equates to 9 biomarkers) correctly assigned 24/24 (100%) of cancer cases and 70/74 (94.6%) controls. Under leave-one-out cross-validation 23/24 (95.8%) of cancer cases were correctly predicted with 69/74 (93.2%) of controls. For partial least squares discriminant analysis, the correct leave-one-out cross-validation prediction values were 95.8% (cancer cases) and 94.6% (controls). These data are an improvement on those reported by other groups studying headspace gases and also superior to current clinical techniques. This new device shows potential for the diagnosis of bladder cancer, but the data must be reproduced in a larger study. PMID:23861976
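    The reported analysis pattern, two-group LDA validated by leave-one-out cross-validation, can be sketched as follows on synthetic data standing in for the sensor outputs (9 features per sample, 24 cases vs 74 controls):

    ```python
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import LeaveOneOut, cross_val_predict

    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(0, 1, (24, 9)),    # 24 "cancer" samples, 9 time points
                   rng.normal(1, 1, (74, 9))])   # 74 "control" samples
    y = np.array([1] * 24 + [0] * 74)

    # Leave-one-out cross-validated predictions from a two-group LDA.
    pred = cross_val_predict(LinearDiscriminantAnalysis(), X, y, cv=LeaveOneOut())
    print("LOO accuracy:", (pred == y).mean())
    ```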

  20. A comparison of methods to estimate future sub-daily design rainfall

    NASA Astrophysics Data System (ADS)

    Li, J.; Johnson, F.; Evans, J.; Sharma, A.

    2017-12-01

    Warmer temperatures are expected to increase extreme short-duration rainfall due to the increased moisture-holding capacity of the atmosphere. While attention has been paid to the impacts of climate change on future design rainfalls at daily or longer time scales, the potential changes in short duration design rainfalls have been often overlooked due to the limited availability of sub-daily projections and observations. This study uses a high-resolution regional climate model (RCM) to predict the changes in sub-daily design rainfalls for the Greater Sydney region in Australia. Sixteen methods for predicting changes to sub-daily future extremes are assessed based on different options for bias correction, disaggregation and frequency analysis. A Monte Carlo cross-validation procedure is employed to evaluate the skill of each method in estimating the design rainfall for the current climate. It is found that bias correction significantly improves the accuracy of the design rainfall estimated for the current climate. For 1 h events, bias correcting the hourly annual maximum rainfall simulated by the RCM produces design rainfall closest to observations, whereas for multi-hour events, disaggregating the daily rainfall total is recommended. This suggests that the RCM fails to simulate the observed multi-duration rainfall persistence, which is a common issue for most climate models. Despite the significant differences in the estimated design rainfalls between different methods, all methods lead to an increase in design rainfalls across the majority of the study region.
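    One common flavor of the bias-correction step compared in such studies is empirical quantile mapping; the sketch below assumes this method and synthetic gamma-distributed maxima, since the paper evaluates sixteen method combinations rather than a single one.

    ```python
    import numpy as np

    def quantile_map(simulated, obs_ref, sim_ref):
        # Rank of each simulated value within the simulated reference climate ...
        q = np.clip(np.searchsorted(np.sort(sim_ref), simulated) / len(sim_ref), 0, 1)
        # ... replaced by the observed value at the same empirical quantile.
        return np.quantile(obs_ref, q)

    rng = np.random.default_rng(2)
    obs_ref = rng.gamma(2.0, 10.0, 1000)   # observed hourly annual maxima (toy)
    sim_ref = rng.gamma(2.0, 7.0, 1000)    # RCM output, biased low (toy)
    print(quantile_map(np.array([40.0, 80.0]), obs_ref, sim_ref))
    ```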

  1. A web-based Tamsui River flood early-warning system with correction of real-time water stage using monitoring data

    NASA Astrophysics Data System (ADS)

    Liao, H. Y.; Lin, Y. J.; Chang, H. K.; Shang, R. K.; Kuo, H. C.; Lai, J. S.; Tan, Y. C.

    2017-12-01

    Taiwan frequently encounters heavy rainfall, with three to four typhoons striking the island every year. To provide lead time for reducing flood damage, this study attempts to build a flood early-warning system (FEWS) for the Tamsui River using time-series correction techniques. The predicted rainfall is used as the input to a rainfall-runoff model, and the discharges calculated by the rainfall-runoff model are passed to a 1-D river routing model, which outputs simulated water stages at 487 cross sections for the next 48 hours. The downstream water stage at the estuary in the 1-D river routing model is provided by a storm surge simulation. Next, the water stages at the 487 cross sections are corrected by a time-series model, such as an autoregressive (AR) model, using real-time water stage measurements to improve prediction accuracy. The resulting simulated water stages are displayed on a web-based platform. In addition, the models can be run remotely by any user with a web browser through a user interface. On-line video surveillance images, real-time monitored water stages, and rainfall can also be shown on this platform. If a simulated water stage exceeds the embankments of the Tamsui River, the alerting lights of the FEWS flash on the screen. The platform runs periodically and automatically to generate simulation graphics of flood water stages for flood disaster prevention and decision making.
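    A minimal sketch of the AR-based real-time correction: fit an autoregressive model to recent residuals (observed minus simulated stage) and add the forecast residuals to the routing model's future stages. The lag order, record length and stand-in future simulation are assumptions.

    ```python
    import numpy as np
    from statsmodels.tsa.ar_model import AutoReg

    rng = np.random.default_rng(3)
    t = np.linspace(0, 6, 120)
    observed = np.sin(t) + 0.1 * rng.normal(size=120)   # gauge measurements (toy)
    simulated = np.sin(t)                               # 1-D routing model output (toy)

    residuals = observed - simulated
    ar = AutoReg(residuals, lags=3).fit()
    future_residual = ar.forecast(steps=48)             # next 48 hours of model error

    future_sim = np.full(48, simulated[-1])             # stand-in future simulation
    corrected = future_sim + future_residual            # corrected stage forecast
    ```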

  2. Determination of surgical variables for a brain shift correction pipeline using an Android application

    NASA Astrophysics Data System (ADS)

    Vijayan, Rohan; Conley, Rebekah H.; Thompson, Reid C.; Clements, Logan W.; Miga, Michael I.

    2016-03-01

    Brain shift describes the deformation that the brain undergoes from mechanical and physiological effects typically during a neurosurgical or neurointerventional procedure. With respect to image guidance techniques, brain shift has been shown to compromise the fidelity of these approaches. In recent work, a computational pipeline has been developed to predict "brain shift" based on preoperatively determined surgical variables (such as head orientation), and subsequently correct preoperative images to more closely match the intraoperative state of the brain. However, a clinical workflow difficulty in the execution of this pipeline has been acquiring the surgical variables by the neurosurgeon prior to surgery. In order to simplify and expedite this process, an Android, Java-based application designed for tablets was developed to provide the neurosurgeon with the ability to orient 3D computer graphic models of the patient's head, determine expected location and size of the craniotomy, and provide the trajectory into the tumor. These variables are exported for use as inputs for the biomechanical models of the preoperative computing phase for the brain shift correction pipeline. The accuracy of the application's exported data was determined by comparing it to data acquired from the physical execution of the surgeon's plan on a phantom head. Results indicated good overlap of craniotomy predictions, craniotomy centroid locations, and estimates of patient's head orientation with respect to gravity. However, improvements in the app interface and mock surgical setup are needed to minimize error.

  3. The ophthalmic implications of the correction of late enophthalmos following severe midfacial trauma.

    PubMed Central

    Iliff, N T

    1991-01-01

    Severe midfacial trauma presents several challenges to the reconstructive surgeon. Acute rigid fixation of the facial skeleton accompanied by bone grafting to restore the confines and volume of the orbit provides the best opportunity for acceptable aesthetic results. The severity of the trauma causes the late postoperative complication of enophthalmos. Injury to orbital structures with subsequent cicatricial change results in significant alteration in extraocular motility with resultant diplopia. There are no reports in the literature which critically evaluate the effect of late enophthalmos correction on extraocular motility, diplopia, and vision in patients who have suffered Le Fort or NOE fractures. A retrospective study is presented which reviews the results of late surgery for the correction of enophthalmos in 40 patients, all of whom had severe "impure" orbital fractures. This study addresses the following questions: (1) Can the globe effectively be repositioned?, (2) Is there a change in subjective diplopia?, (3) Does a change in extraocular motility occur, and if it does, is it predictable?, (4) Is there a risk to visual acuity? and finally, (5) Do the answers to questions 1 through 4 suggest that late surgical intervention for the correction of enophthalmos should be recommended for this patient population? During a 9-year period, 44 patients with severe midfacial trauma received surgery for enophthalmos correction. A review of 40 patients on whom 56 operations were performed is presented. Thirty-eight patients had enophthalmos and 35 had inferior displacement of the globe. Medial displacement of the globe occurred in 11 patients. Twenty-nine patients had diplopia. Six patients had vision too poor on the injured side to have diplopia. Enophthalmos was improved in 32 patients. Dystopia of the globe was improved in 31 cases. However, neither enophthalmos nor dystopia of the globe could be improved with every operation. Only 35 of the 48 operations for enophthalmos for which measurements were available produced an improvement; in 1 case the enophthalmos was thought to be worse postoperatively. Dystopia operations resulted in improvement in 40 of 48 operations; in 2 instances dystopia was worse postoperatively. Diplopia was unchanged by 33 operations, improved by 11 procedures, and worsened by 6. If patients are considered before and after their total reconstruction course, diplopia was improved in 9 of the 29 patients. In seven of these nine, diplopia was eliminated. There was no change in or production of diplopia in 19 patients, and 5 patients had worsening of their double vision. (ABSTRACT TRUNCATED AT 400 WORDS) PMID:1808816

  4. Estimating lift from unsteady wakes by using the Kutta-Joukowski theorem with vorticity-weighted wake width

    NASA Astrophysics Data System (ADS)

    Wang, Shizhao; He, Guowei; Liu, Tianshu

    2017-11-01

    The Kutta-Joukowski (KJ) theorem usually leads to puzzling results when it is applied to estimating the lift from the unsteady wakes generated by flapping wings. We investigate this problem by using a prevalent flapping rectangular wing model, where the unsteady wakes are obtained by numerically solving the Navier-Stokes equations at a low Reynolds number. It is found that neither the unsteady nor the time-averaged lift coefficient is correctly predicted when the parameters for the KJ theorem are selected according to the widely accepted ways in the literature. We propose a vorticity-weighted wake width model based on the vortex impulse theory to improve the prediction of the time-averaged lift. Furthermore, we investigate the phase difference of unsteady lift caused by the quasi-steady assumption of the application of the KJ theorem to the flapping flight and quantitatively link the phase difference to the local fluid acceleration. We show the phase difference can be corrected by using an added mass lift model. This work is helpful to clarify the error in estimating the lift of animal flight. Supported by the National Natural Science Foundation of China (No. 11672305).
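    For reference, the classical KJ relation the paper starts from gives the sectional lift from circulation; the numerical values below are hypothetical.

    ```python
    # Kutta-Joukowski theorem: lift per unit span L' = rho * U * Gamma.
    rho = 1.225     # air density, kg/m^3
    U = 10.0        # free-stream speed, m/s
    gamma = 0.8     # circulation, m^2/s (hypothetical value)

    lift_per_span = rho * U * gamma
    print(lift_per_span, "N/m")
    ```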

  5. Tools based on multivariate statistical analysis for classification of soil and groundwater in Apulian agricultural sites.

    PubMed

    Ielpo, Pierina; Leardi, Riccardo; Pappagallo, Giuseppe; Uricchio, Vito Felice

    2017-06-01

    In this paper, the results obtained from multivariate statistical techniques such as PCA (Principal component analysis) and LDA (Linear discriminant analysis) applied to a wide soil data set are presented. The results have been compared with those obtained on a groundwater data set, whose samples were collected together with soil ones, within the project "Improvement of the Regional Agro-meteorological Monitoring Network (2004-2007)". LDA, applied to soil data, made it possible to distinguish the geographical origin of a sample between the two macroareas: Bari and Foggia provinces vs Brindisi, Lecce and Taranto provinces, with a percentage of correct predictions in cross validation of 87%. In the case of the groundwater data set, the best classification was obtained when the samples were grouped into three macroareas: Foggia province, Bari province and Brindisi, Lecce and Taranto provinces, reaching a percentage of correct predictions in cross validation of 84%. The obtained information can be very useful in supporting soil and water resource management, such as the reduction of water consumption and the reduction of energy and chemical (nutrients and pesticides) inputs in agriculture.

  6. A new symmetry model for hohlraum-driven capsule implosion experiments on the NIF

    NASA Astrophysics Data System (ADS)

    Jones, O.; Rygg, R.; Tomasini, R.; Eder, D.; Kritcher, A.; Milovich, J.; Peterson, L.; Thomas, C.; Barrios, M.; Benedetti, R.; Doeppner, T.; Ma, T.; Nagel, S.; Pak, A.; Field, J.; Izumi, N.; Glenn, S.; Town, R.; Bradley, D.

    2016-03-01

    We have developed a new model for predicting the time-dependent radiation drive asymmetry in laser-heated hohlraums. The model consists of integrated Hydra capsule-hohlraum calculations coupled to a separate model for calculating the crossbeam energy transfer between the inner and outer cones of the National Ignition Facility (NIF) indirect drive configuration. The time-dependent crossbeam transfer model parameters were adjusted in order to best match the P2 component of the shape of the inflight shell inferred from backlit radiographs of the capsule taken when the shell was at a radius of 150-250 μm. The adjusted model correctly predicts the observed inflight P2 and P4 components of the shape of the inflight shell, and also the P2 component of the shape of the hotspot inferred from x-ray self-emission images at the time of peak emission. It also correctly captures the scaling of the inflight P4 as the hohlraum length is varied. We then applied the newly benchmarked model to quantify the improved symmetry of the N130331 layered deuterium-tritium (DT) experiment in a re-optimized longer hohlraum.

  7. A study of fault prediction and reliability assessment in the SEL environment

    NASA Technical Reports Server (NTRS)

    Basili, Victor R.; Patnaik, Debabrata

    1986-01-01

    An empirical study on estimation and prediction of faults, prediction of fault detection and correction effort, and reliability assessment in the Software Engineering Laboratory environment (SEL) is presented. Fault estimation using empirical relationships and fault prediction using curve fitting method are investigated. Relationships between debugging efforts (fault detection and correction effort) in different test phases are provided, in order to make an early estimate of future debugging effort. This study concludes with the fault analysis, application of a reliability model, and analysis of a normalized metric for reliability assessment and reliability monitoring during development of software.

  8. Genetic determinants of freckle occurrence in the Spanish population: Towards ephelides prediction from human DNA samples.

    PubMed

    Hernando, Barbara; Ibañez, Maria Victoria; Deserio-Cuesta, Julio Alberto; Soria-Navarro, Raquel; Vilar-Sastre, Inca; Martinez-Cadenas, Conrado

    2018-03-01

    Prediction of human pigmentation traits, one of the most differentiable externally visible characteristics among individuals, from biological samples represents a useful tool in the field of forensic DNA phenotyping. In spite of freckling being a relatively common pigmentation characteristic in Europeans, little is known about the genetic basis of this largely genetically determined phenotype in southern European populations. In this work, we explored the predictive capacity of eight freckle and sunlight sensitivity-related genes in 458 individuals (266 non-freckled controls and 192 freckled cases) from Spain. Four loci were associated with freckling (MC1R, IRF4, ASIP and BNC2), and female sex was also found to be a predictive factor for having a freckling phenotype in our population. After identifying the most informative genetic variants responsible for human ephelides occurrence in our sample set, we developed a DNA-based freckle prediction model using a multivariate regression approach. Once developed, the capabilities of the prediction model were tested by a repeated 10-fold cross-validation approach. The proportion of correctly predicted individuals using the DNA-based freckle prediction model was 74.13%. The implementation of sex into the DNA-based freckle prediction model slightly improved the overall prediction accuracy by 2.19% (76.32%). Further evaluation of the newly-generated prediction model was performed by assessing the model's performance in a new cohort of 212 Spanish individuals, reaching a classification success rate of 74.61%. Validation of this prediction model may be carried out in larger populations, including samples from different European populations. Further research to validate and improve this newly-generated freckle prediction model will be needed before its forensic application. Together with DNA tests already validated for eye and hair colour prediction, this freckle prediction model may lead to a substantially more detailed physical description of unknown individuals from DNA found at the crime scene. Copyright © 2017 Elsevier B.V. All rights reserved.

  9. An approach for reduction of false predictions in reverse engineering of gene regulatory networks.

    PubMed

    Khan, Abhinandan; Saha, Goutam; Pal, Rajat Kumar

    2018-05-14

    A gene regulatory network discloses the regulatory interactions amongst genes at a particular condition of the human body. The accurate reconstruction of such networks from time-series genetic expression data using computational tools offers a stiff challenge for contemporary computer scientists, yet it is crucial for understanding the proper functioning of a living organism. Unfortunately, computational methods produce many false predictions along with the correct ones. Investigations in the domain therefore focus on identifying as many correct regulations as possible in the reverse engineering of gene regulatory networks, to make the result more reliable and biologically relevant; one way to achieve this is to reduce the number of incorrect predictions in the reconstructed networks. In the present investigation, we propose a novel scheme that decreases the number of false predictions by suitably combining several metaheuristic techniques, and we have also implemented it with a dataset-ensemble approach (i.e. combining multiple datasets). We have employed the proposed methodology on real-world experimental datasets of the SOS DNA Repair network of Escherichia coli and the IMRA network of Saccharomyces cerevisiae. Subsequently, we have experimented upon somewhat larger, in silico networks, namely the DREAM3 and DREAM4 Challenge networks, and 15-gene and 20-gene networks extracted from the GeneNetWeaver database. To study the effect of multiple datasets on the quality of the inferred networks, we used four datasets in each experiment. The results are encouraging: the proposed methodology significantly reduces the number of false predictions without using any supplementary prior biological information for larger gene regulatory networks, and when a small amount of prior biological information is incorporated, the results improve further with respect to the prediction of true positives.
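
    The paper's exact combination scheme is not reproduced here; one simple way to suppress false positives once several independent inferences are available is a majority vote over the predicted adjacency matrices. A sketch under that assumption, with random stand-in networks:

        import numpy as np

        # Hypothetical gene-gene adjacency matrices inferred by four separate
        # runs (or datasets); 1 = predicted regulation.
        rng = np.random.default_rng(1)
        nets = [rng.integers(0, 2, size=(15, 15)) for _ in range(4)]

        # Keep only edges supported by a majority of the individual inferences;
        # weakly supported edges are the likeliest false positives.
        votes = np.sum(nets, axis=0)
        consensus = (votes >= 3).astype(int)
        print(f"mean edges per run: {np.mean([n.sum() for n in nets]):.0f}, "
              f"consensus edges: {consensus.sum()}")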

  10. Innovative Liner Concepts: Experiments and Impedance Modeling of Liners Including the Effect of Bias Flow

    NASA Technical Reports Server (NTRS)

    Kelly, Jeff; Betts, Juan Fernando; Fuller, Chris

    2000-01-01

    The normal impedance of perforated-plate acoustic liners, including the effect of bias flow, was studied. Two impedance models were developed by modeling the internal flows of perforate orifices as infinite tubes, with end corrections included to handle finite-length effects. These models assumed incompressible and compressible flow, respectively, between the far field and the perforate orifice. The incompressible model was used to predict impedance results for perforated plates with percent open areas ranging from 5% to 15%; the predicted resistance results showed better agreement with experiments for the higher percent-open-area samples, and the agreement tended to deteriorate as bias flow was increased. For perforated plates with percent open areas ranging from 1% to 5%, the compressible model was used; its predictions were closest to the experimental resistance results for the 2% to 3% open-area samples and likewise deteriorated as bias flow was increased. The reactance results were well predicted by the models for the higher percent open areas, but deteriorated as the percent open area was lowered (5%) and bias flow was increased. The incompressible model was then fitted to the experimental database using an optimization routine that found the set of multiplication coefficients on the non-dimensional groups minimizing the least-squares slope error between predictions and experiments. The fit indicated that terms not associated with bias flow required a greater degree of correction than the bias-flow terms. The fitted model improved agreement with experiments by nearly 15% for the low percent-open-area (5%) samples compared to the unfitted model, while the fitted and unfitted models performed equally well for the higher percent open areas (10% and 15%).
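
    A minimal sketch of the fitting step described at the end of the abstract, assuming the modeled resistance is a linear combination of non-dimensional groups and that the fit seeks per-group multiplication coefficients by least squares (the groups, baseline coefficients and measurements below are invented):

        import numpy as np
        from scipy.optimize import least_squares

        # Invented non-dimensional groups (columns) and measured resistances.
        rng = np.random.default_rng(2)
        groups = rng.uniform(0.1, 1.0, size=(40, 3))
        measured = groups @ np.array([1.2, 0.8, 2.0]) + rng.normal(0.0, 0.05, 40)

        # Find multiplication coefficients c so that sum_i c_i * k_i * group_i
        # best matches the measurements (k holds the unfitted model coefficients).
        k = np.array([1.0, 1.0, 1.5])
        fit = least_squares(lambda c: groups @ (c * k) - measured, x0=np.ones(3))
        print("multiplication coefficients:", np.round(fit.x, 2))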

  11. The Effects of Writing Anxiety and Motivation on EFL College Students' Self-Evaluative Judgments of Corrective Feedback.

    PubMed

    Tsao, Jui-Jung; Tseng, Wen-Ta; Wang, Chaochang

    2017-04-01

    Feedback is regarded as a way to foster students' motivation and to ensure linguistic accuracy. However, mixed findings are reported in the research on written corrective feedback because of its multifaceted nature and its correlations with learners' individual differences. It is necessary, therefore, to conduct further research on corrective feedback from the student's perspective and to examine how individual differences in factors such as writing anxiety and motivation predict learners' self-evaluative judgments of both teacher-corrected and peer-corrected feedback. For this study, 158 Taiwanese college sophomores participated in a survey comprising three questionnaires. Results demonstrated that intrinsic motivation and different types of writing anxiety predicted English-as-a-foreign-language learners' evaluative judgments of teacher and peer feedback. The findings have implications for English-writing instruction.

  12. Parameters leading to a successful radiographic outcome following surgical treatment for Lenke 2 curves.

    PubMed

    Koller, Heiko; Meier, Oliver; McClung, Anna; Hitzl, Wolfgang; Mayer, Michael; Sucato, Daniel

    2015-07-01

    In Lenke 2 curves, there are conflicting data on when to include the proximal thoracic curve (PTC) in the fusion, and studies focusing on Lenke 2 curves are scant. The number of patients with significant postoperative shoulder height difference (SHD) or trunk shift (TS) is as high as 30 %, indicating the need for further research. The purpose of the current study was therefore to improve understanding of curve resolution and shoulder balance following surgical correction of Lenke 2 curves, and to identify radiographic parameters predicting postoperative curve resolution, shoulder and trunk balance with respect to inclusion or exclusion of the PTC. This is a retrospective study of 158 Lenke 2 curves. Serial radiographs were analyzed for the main thoracic curve (MTC), PTC, lumbar curve (LC), SHD, clavicle angle (CA), T1 tilt, and deviation of the central sacral vertical line (CSVL) off the C7 plumb line. Patients were stratified by whether the PTC was included in the fusion (+PTC group, n = 60) or not (-PTC group, n = 98), and intergroup results were studied. Compensatory mechanisms for SHD were studied in detail. Distal adding-on was defined as an increase of the lowest instrumented vertebra adjacent disc angle (LIVDA) >3°. Stepwise regression analyses were performed to establish predictive radiographic parameters. At follow-up averaging 24 months, significant differences between the +PTC and -PTC groups existed for the PTC (24° vs 28°, p < .01), PTC correction (42 vs 29 %, p < .01), rate of MTC loss >5° (27 vs 53 %, p < .01), and spontaneous LC correction in patients with a selective thoracic fusion (STF) (80/93 %, p = .04). The number of patients with a new trunk shift (CSVL > 2 cm) was 9 (6 %): 7 in the -PTC vs 2 in the +PTC group (p = .03). Utilization of compensatory mechanisms (99 vs 83 %, p < .01) and adding-on (35 vs 20 %, p < .05) occurred more often in the +PTC than the -PTC group. Postoperative SHD improved in both groups, with no significant differences in SHD, CA or T1 tilt between groups; however, only in the -PTC group was there a significant change between postoperative and follow-up SHD (p = .02). A preoperative 'left shoulder up' (p < .01) and CSVL (p = .03) were identified as predictive of a follow-up SHD ≥1.5 cm. A statistical model for the -PTC group alone showed 9 parameters highly predictive of a follow-up SHD ≥1.5 cm, with the highest prediction strength for a PTC >40° (p = .01), a preoperative 'left shoulder up' (p < .01) and anterior fusion (p = .02). To account for baseline differences between the +PTC and -PTC groups, 49 matched pairs were studied; postoperative differences remained significant between the groups for the PTC (p < .01), MTC (p = .03) and the rate of MTC loss >5° (p < .01). Prediction of a successful surgical outcome for Lenke 2 curves depends on multiple variables, in particular a preoperative left shoulder up, preoperative PTC >40°, MTC correction, and surgical approach. Shoulder balance is not significantly different whether the PTC is included in the fusion or not; however, the powerful compensation mechanisms used to balance the shoulders in the -PTC group can impose changes in trunk alignment and in the main and compensatory lumbar curves.

  13. Goldmann tonometer error correcting prism: clinical evaluation.

    PubMed

    McCafferty, Sean; Lim, Garrett; Duncan, William; Enikov, Eniko T; Schwiegerling, Jim; Levine, Jason; Kew, Corin

    2017-01-01

    To clinically evaluate a modified applanating-surface Goldmann tonometer prism designed to substantially negate errors due to patient variability in biomechanics. A modified Goldmann prism with a correcting applanation tonometry surface (CATS) was mathematically optimized to minimize the intraocular pressure (IOP) measurement error due to patient variability in corneal thickness, stiffness, curvature, and tear film adhesion force. A comparative clinical study of 109 eyes measured IOP with the CATS and Goldmann prisms, and the IOP measurement differences between the two prisms were correlated with corneal thickness, hysteresis, and curvature. In correcting for the Goldmann central corneal thickness (CCT) error, the CATS prism reduced the error to <±2 mmHg in 97% of a standard CCT population, compared with only 54% using the Goldmann prism. Similar reductions of ~50% in errors due to corneal rigidity and curvature were also demonstrated. The results validate the CATS prism's improved accuracy and its expected reduced sensitivity to Goldmann errors, without IOP bias, as predicted by mathematical modeling. The CATS replacement for the Goldmann prism does not change the Goldmann measurement technique or its interpretation.

  14. Kalman/Map filtering-aided fast normalized cross correlation-based Wi-Fi fingerprinting location sensing.

    PubMed

    Sun, Yongliang; Xu, Yubin; Li, Cheng; Ma, Lin

    2013-11-13

    A Kalman/map filtering (KMF)-aided fast normalized cross correlation (FNCC)-based Wi-Fi fingerprinting location sensing system is proposed in this paper. Compared with conventional neighbor selection algorithms that calculate localization results with received signal strength (RSS) mean samples, the proposed FNCC algorithm makes use of all the on-line RSS samples and reference point RSS variations to achieve higher fingerprinting accuracy. The FNCC computes efficiently while maintaining the same accuracy as the basic normalized cross correlation. Additionally, a KMF is also proposed to process fingerprinting localization results. It employs a new map matching algorithm to nonlinearize the linear location prediction process of Kalman filtering (KF) that takes advantage of spatial proximities of consecutive localization results. With a calibration model integrated into an indoor map, the map matching algorithm corrects unreasonable prediction locations of the KF according to the building interior structure. Thus, more accurate prediction locations are obtained. Using these locations, the KMF considerably improves fingerprinting algorithm performance. Experimental results demonstrate that the FNCC algorithm with reduced computational complexity outperforms other neighbor selection algorithms and the KMF effectively improves location sensing accuracy by using indoor map information and spatial proximities of consecutive localization results.
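
    A minimal sketch of the normalized cross correlation matching at the core of FNCC (the radio map and on-line sample below are invented; the real system also exploits reference point RSS variations, computational shortcuts and the Kalman/map filter):

        import numpy as np

        def ncc(a, b):
            # Normalized cross correlation between two RSS vectors.
            a = a - a.mean()
            b = b - b.mean()
            return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

        # Invented radio map: mean RSS (dBm) from four APs at three reference points.
        radio_map = {
            (0.0, 0.0): np.array([-40.0, -55.0, -70.0, -62.0]),
            (5.0, 0.0): np.array([-52.0, -48.0, -66.0, -58.0]),
            (0.0, 5.0): np.array([-60.0, -50.0, -45.0, -70.0]),
        }

        online = np.array([-51.0, -49.0, -64.0, -59.0])  # on-line RSS sample
        best = max(radio_map, key=lambda rp: ncc(online, radio_map[rp]))
        print("best-matching reference point:", best)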

  15. Kalman/Map Filtering-Aided Fast Normalized Cross Correlation-Based Wi-Fi Fingerprinting Location Sensing

    PubMed Central

    Sun, Yongliang; Xu, Yubin; Li, Cheng; Ma, Lin

    2013-01-01

    A Kalman/map filtering (KMF)-aided fast normalized cross correlation (FNCC)-based Wi-Fi fingerprinting location sensing system is proposed in this paper. Compared with conventional neighbor selection algorithms that calculate localization results with received signal strength (RSS) mean samples, the proposed FNCC algorithm makes use of all the on-line RSS samples and reference point RSS variations to achieve higher fingerprinting accuracy. The FNCC computes efficiently while maintaining the same accuracy as the basic normalized cross correlation. Additionally, a KMF is also proposed to process fingerprinting localization results. It employs a new map matching algorithm to nonlinearize the linear location prediction process of Kalman filtering (KF) that takes advantage of spatial proximities of consecutive localization results. With a calibration model integrated into an indoor map, the map matching algorithm corrects unreasonable prediction locations of the KF according to the building interior structure. Thus, more accurate prediction locations are obtained. Using these locations, the KMF considerably improves fingerprinting algorithm performance. Experimental results demonstrate that the FNCC algorithm with reduced computational complexity outperforms other neighbor selection algorithms and the KMF effectively improves location sensing accuracy by using indoor map information and spatial proximities of consecutive localization results. PMID:24233027

  16. Improvement of Storm Forecasts Using Gridded Bayesian Linear Regression for Northeast United States

    NASA Astrophysics Data System (ADS)

    Yang, J.; Astitha, M.; Schwartz, C. S.

    2017-12-01

    Bayesian linear regression (BLR) is a post-processing technique in which regression coefficients are derived from pairs of observation-model values and used to correct raw forecasts. This study presents the development and application of a gridded Bayesian linear regression (GBLR) as a new post-processing technique to improve numerical weather prediction (NWP) of rain and wind storm forecasts over the northeast United States. Ten control variables produced from ten ensemble members of the National Center for Atmospheric Research (NCAR) real-time prediction system are used for the GBLR model. In the GBLR framework, leave-one-storm-out cross-validation is utilized to study the performance of the post-processing technique on a database composed of 92 storms. To estimate the regression coefficients of the GBLR, optimization procedures that minimize the systematic and random error of predicted atmospheric variables (wind speed, precipitation, etc.) are implemented for the modeled-observed pairs of training storms, and the regression coefficients calculated for meteorological stations of the National Weather Service are interpolated back to the model domain. An analysis of forecast improvements based on error reductions during the storms will demonstrate the value of the GBLR approach. This presentation will also illustrate how the variances are optimized for the training partition in GBLR and discuss the verification strategy for grid points where no observations are available. The new post-processing technique is successful in improving wind speed and precipitation storm forecasts using past event-based data and has the potential to be implemented in real time.
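
    A minimal sketch of the correction idea, assuming a conjugate Bayesian linear regression of observations on raw forecasts with known noise and prior variances (the training pairs below are synthetic; the actual GBLR operates per grid point with ensemble-derived predictors):

        import numpy as np

        # Synthetic training pairs: raw wind-speed forecasts vs observations.
        rng = np.random.default_rng(3)
        raw = rng.uniform(5.0, 25.0, 100)
        obs = 1.5 + 0.8 * raw + rng.normal(0.0, 1.2, 100)

        # Conjugate BLR posterior mean with prior N(0, tau^2 I) and noise sigma^2:
        #   w = (X^T X + (sigma^2 / tau^2) I)^{-1} X^T y
        X = np.column_stack([np.ones_like(raw), raw])
        sigma2, tau2 = 1.2 ** 2, 10.0 ** 2
        w = np.linalg.solve(X.T @ X + (sigma2 / tau2) * np.eye(2), X.T @ obs)

        corrected = w[0] + w[1] * 18.0  # correct a new raw forecast of 18 m/s
        print(f"coefficients {np.round(w, 2)}, corrected forecast {corrected:.1f} m/s")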

  17. Risk reclassification analysis investigating the added value of fatigue to sickness absence predictions.

    PubMed

    Roelen, Corné A M; Bültmann, Ute; Groothoff, Johan W; Twisk, Jos W R; Heymans, Martijn W

    2015-11-01

    Prognostic models including age, self-rated health and prior sickness absence (SA) have been found to predict high (≥ 30) SA days and high (≥ 3) SA episodes during 1-year follow-up. More predictors of high SA are needed to improve these SA prognostic models. The purpose of this study was to investigate fatigue as a new predictor in SA prognostic models by using risk reclassification methods and measures. This was a prospective cohort study with 1-year follow-up of 1,137 office workers. Fatigue was measured at baseline with the 20-item Checklist Individual Strength and added to the existing SA prognostic models. SA days and episodes during 1-year follow-up were retrieved from an occupational health service register. The added value of fatigue was investigated with the net reclassification index (NRI) and integrated discrimination improvement (IDI) measures. In total, 579 (51 %) office workers had complete data for analysis. Fatigue was prospectively associated with both high SA days and high SA episodes. The NRI revealed that adding fatigue to the SA days model correctly reclassified workers with high SA days, but incorrectly reclassified workers without high SA days; the IDI indicated no improvement in risk discrimination by the SA days model. Both NRI and IDI showed that the prognostic model predicting high SA episodes did not improve when fatigue was added as a predictor variable. In the present study, fatigue increased false-positive rates, which may reduce the cost-effectiveness of interventions for preventing SA.
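
    A minimal sketch of the category-based NRI used in such reclassification analyses, assuming a single risk cutoff and synthetic predicted risks from the models with and without the new predictor (the study itself used register-based SA outcomes):

        import numpy as np

        def nri(risk_old, risk_new, outcome, cutoff=0.5):
            # Net proportion of events moved up a risk category plus net
            # proportion of non-events moved down.
            up = (risk_new >= cutoff) & (risk_old < cutoff)
            down = (risk_new < cutoff) & (risk_old >= cutoff)
            ev, ne = outcome == 1, outcome == 0
            return (up[ev].mean() - down[ev].mean()) + (down[ne].mean() - up[ne].mean())

        # Synthetic risks before and after adding a new predictor.
        rng = np.random.default_rng(4)
        outcome = rng.integers(0, 2, 200)
        risk_old = np.clip(0.4 + 0.2 * outcome + rng.normal(0.0, 0.2, 200), 0.0, 1.0)
        risk_new = np.clip(risk_old + rng.normal(0.0, 0.1, 200), 0.0, 1.0)
        print(f"NRI = {nri(risk_old, risk_new, outcome):+.3f}")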

  18. Sequential search leads to faster, more efficient fragment-based de novo protein structure prediction.

    PubMed

    de Oliveira, Saulo H P; Law, Eleanor C; Shi, Jiye; Deane, Charlotte M

    2018-04-01

    Most current de novo structure prediction methods randomly sample protein conformations and thus require large amounts of computational resource. Here, we consider a sequential sampling strategy, building on ideas from recent experimental work which shows that many proteins fold cotranslationally. We have investigated whether a pseudo-greedy search approach, which begins sequentially from one of the termini, can improve the performance and accuracy of de novo protein structure prediction. We observed that our sequential approach converges when fewer than 20 000 decoys have been produced, fewer than commonly expected. Using our software, SAINT2, we also compared the run time and quality of models produced in a sequential fashion against a standard, non-sequential approach. Sequential prediction produces an individual decoy 1.5-2.5 times faster than non-sequential prediction. When considering the quality of the best model, sequential prediction led to a better model being produced for 31 out of 41 soluble protein validation cases and for 18 out of 24 transmembrane protein cases. Correct models (TM-score > 0.5) were produced for 29 of these cases by the sequential mode and for only 22 by the non-sequential mode. Our comparison reveals that a sequential search strategy can drastically reduce the computational time of de novo protein structure prediction and improve accuracy. Data are available for download from: http://opig.stats.ox.ac.uk/resources. SAINT2 is available for download from: https://github.com/sauloho/SAINT2. Contact: saulo.deoliveira@dtc.ox.ac.uk. Supplementary data are available at Bioinformatics online.
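
    A toy sketch of sequential, pseudo-greedy build-up (not SAINT2 code; the fragment alphabet, scoring function and purely greedy rule are stand-ins) showing how a decoy can be grown one position at a time from one terminus instead of being sampled globally:

        # Stand-ins for fragment choices and for an energy function.
        FRAGMENTS = ["H", "E", "C"]

        def score(state):
            # Toy "energy": penalize identical neighbouring fragments.
            return sum(a == b for a, b in zip(state, state[1:]))

        state = []
        for _ in range(30):  # grow a 30-position decoy from the N-terminus
            candidates = [state + [f] for f in FRAGMENTS]
            state = min(candidates, key=score)  # keep the best extension
        print("".join(state), "score:", score(state))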

  19. COUSCOus: improved protein contact prediction using an empirical Bayes covariance estimator.

    PubMed

    Rawi, Reda; Mall, Raghvendra; Kunji, Khalid; El Anbari, Mohammed; Aupetit, Michael; Ullah, Ehsan; Bensmail, Halima

    2016-12-15

    The post-genomic era, with its wealth of sequences, gave rise to a broad range of methods for detecting protein residue-residue contacts. Although various coevolution methods such as PSICOV, DCA and plmDCA provide correct contact predictions, their predictions do not completely overlap. Hence, new approaches and improvements of existing methods are needed to motivate further development and progress in the field. We present a new contact detecting method, COUSCOus, combining the best shrinkage approach, the empirical Bayes covariance estimator, with GLasso. Using the original PSICOV benchmark dataset, COUSCOus achieves mean accuracies of 0.74, 0.62 and 0.55 for the top L/10 predicted long, medium and short range contacts, respectively. In addition, COUSCOus attains mean areas under the precision-recall curves of 0.25, 0.29 and 0.30 for long, medium and short contacts, outperforming PSICOV. We also observed that COUSCOus outperforms PSICOV with respect to the Matthews correlation coefficient on the full list of residue contacts. Furthermore, COUSCOus achieves on average 10% more gain in prediction accuracy compared to PSICOV on an independent test set composed of CASP11 protein targets. Finally, we showed that when using a simple random forest meta-classifier that combines contact detecting techniques and sequence-derived features, PSICOV predictions should be replaced by the more accurate COUSCOus predictions. We conclude that the consideration of superior covariance shrinkage approaches will boost several research fields that apply the GLasso procedure, among them the residue-residue contact prediction presented here as well as fields such as gene network reconstruction.
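
    A minimal sketch of the two-stage idea, substituting a simple diagonal-target shrinkage for the empirical Bayes covariance estimator used by COUSCOus (the data are random placeholders, not alignment-derived features):

        import numpy as np
        from sklearn.covariance import graphical_lasso

        rng = np.random.default_rng(5)
        X = rng.normal(size=(200, 20))  # placeholder feature matrix
        S = np.cov(X, rowvar=False)

        # Shrink the sample covariance toward its diagonal before GLasso.
        rho = 0.2
        S_shrunk = (1.0 - rho) * S + rho * np.diag(np.diag(S))

        # Sparse inverse covariance; strong off-diagonal couplings would be
        # the contact candidates in the real application.
        cov, precision = graphical_lasso(S_shrunk, alpha=0.1)
        n_offdiag = int((np.abs(precision) > 1e-8).sum()) - precision.shape[0]
        print("nonzero off-diagonal couplings:", n_offdiag)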

  20. DOT2: Macromolecular Docking With Improved Biophysical Models

    PubMed Central

    Roberts, Victoria A.; Thompson, Elaine E.; Pique, Michael E.; Perez, Martin S.; Eyck, Lynn Ten

    2015-01-01

    Computational docking is a useful tool for predicting macromolecular complexes, which are often difficult to determine experimentally. Here we present the DOT2 software suite, an updated version of the DOT intermolecular docking program. DOT2 provides straightforward, automated construction of improved biophysical models based on molecular coordinates, offering checkpoints that guide the user to include critical features. DOT has been updated to run more quickly, allow flexibility in grid size and spacing, and generate a complete list of favorable candidate configurations. Output can be filtered by experimental data and rescored by the sum of electrostatic and atomic desolvation energies. We show that this rescoring method improves the ranking of correct complexes for a wide range of macromolecular interactions, and demonstrate that biologically relevant models are essential for biologically relevant results. The flexibility and versatility of DOT2 accommodate realistic models of complex biological systems, improving the likelihood of a successful docking outcome. PMID:23695987
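
    A minimal sketch of the rescoring step, assuming each candidate configuration carries precomputed electrostatic and atomic desolvation energies (the values below are random stand-ins for DOT2 output):

        import numpy as np

        rng = np.random.default_rng(6)
        candidates = [{"id": i,
                       "electrostatic": rng.normal(-20.0, 5.0),
                       "desolvation": rng.normal(-5.0, 3.0)}
                      for i in range(1000)]

        # Rescore by the sum of the two energy terms and re-rank
        # (lower energy = more favorable configuration).
        for c in candidates:
            c["score"] = c["electrostatic"] + c["desolvation"]
        ranked = sorted(candidates, key=lambda c: c["score"])
        best = ranked[0]
        print(f"top configuration {best['id']} with score {best['score']:.1f}")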
