Extended Glauert tip correction to include vortex rollup effects
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maniaci, David; Schmitz, Sven
2016-10-03
Wind turbine loads predictions by blade-element momentum theory using the standard tip-loss correction have been shown to over-predict loading near the blade tip in comparison to experimental data. This over-prediction is theorized to be due to the assumption of light rotor loading, inherent in the standard tip-loss correction model of Glauert. A higher-order free-wake method, WindDVE, is used to compute the rollup process of the trailing vortex sheets downstream of wind turbine blades. Results obtained serve as an exact correction function to the Glauert tip correction used in blade-element momentum methods. Lastly, it is found that accounting for the effects of tip vortex rollup within the Glauert tip correction indeed results in improved prediction of blade tip loads computed by blade-element momentum methods.
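As an illustration of the quantity being corrected, the sketch below computes the standard Prandtl/Glauert tip-loss factor used in blade-element momentum codes; the rollup-based correction itself is specific to the paper and is represented only by a hypothetical placeholder argument (`rollup_correction`).

```python
# Minimal sketch of the standard Prandtl/Glauert tip-loss factor F, assuming the
# usual BEM formulation; `rollup_correction` is a hypothetical hook for a
# rollup-derived correction function and is NOT the paper's actual correction.
import numpy as np

def prandtl_tip_loss(r, R, B, phi):
    """Tip-loss factor F at local radius r (m), tip radius R (m), B blades, inflow angle phi (rad)."""
    f = 0.5 * B * (R - r) / (r * np.sin(phi))
    return (2.0 / np.pi) * np.arccos(np.exp(-f))

def corrected_tip_loss(r, R, B, phi, rollup_correction=lambda r_over_R: 1.0):
    """Tip-loss factor scaled by a user-supplied (here hypothetical) rollup correction."""
    return prandtl_tip_loss(r, R, B, phi) * rollup_correction(r / R)

# Example: near the tip of a 3-bladed rotor the factor drops well below 1,
# reducing the predicted loading there.
print(prandtl_tip_loss(r=44.0, R=45.0, B=3, phi=np.deg2rad(5.0)))
```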
Prediction-Correction Algorithms for Time-Varying Constrained Optimization
Simonetto, Andrea; Dall'Anese, Emiliano
2017-07-26
This article develops online algorithms to track solutions of time-varying constrained optimization problems. Particularly, resembling workhorse Kalman filtering-based approaches for dynamical systems, the proposed methods involve prediction-correction steps to provably track the trajectory of the optimal solutions of time-varying convex problems. The merits of existing prediction-correction methods have been shown for unconstrained problems and for setups where computing the inverse of the Hessian of the cost function is computationally affordable. This paper addresses the limitations of existing methods by tackling constrained problems and by designing first-order prediction steps that rely on the Hessian of the cost function (and do not require the computation of its inverse). In addition, the proposed methods are shown to improve the convergence speed of existing prediction-correction methods when applied to unconstrained problems. Numerical simulations corroborate the analytical results and showcase performance and benefits of the proposed algorithms. A realistic application of the proposed method to real-time control of energy resources is presented.
NASA Astrophysics Data System (ADS)
Wang, Qianxin; Hu, Chao; Xu, Tianhe; Chang, Guobin; Hernández Moraleda, Alberto
2017-12-01
Analysis centers (ACs) for global navigation satellite systems (GNSSs) cannot accurately obtain real-time Earth rotation parameters (ERPs). Thus, the prediction of ultra-rapid orbits in the international terrestrial reference system (ITRS) has to utilize the predicted ERPs issued by the International Earth Rotation and Reference Systems Service (IERS) or the International GNSS Service (IGS). In this study, the accuracy of ERPs predicted by IERS and IGS is analyzed. The error of the ERPs predicted for one day can reach 0.15 mas and 0.053 ms in the polar motion and UT1-UTC components, respectively. Then, the impact of ERP errors on ultra-rapid orbit prediction by GNSS is studied. The methods for orbit integration and frame transformation in orbit prediction with introduced ERP errors dominate the accuracy of the predicted orbit. Experimental results show that the transformation from the geocentric celestial reference system (GCRS) to ITRS exerts the strongest effect on the accuracy of the predicted ultra-rapid orbit. To obtain the most accurate predicted ultra-rapid orbit, a corresponding real-time orbit correction method is developed. First, orbits without ERP-related errors are predicted on the basis of the observed part of the ultra-rapid orbit in ITRS, for use as a reference. Then, the corresponding predicted orbit is transformed from GCRS to ITRS to adjust for the predicted ERPs. Finally, the corrected ERPs with error slopes are re-introduced to correct the predicted orbit in ITRS. To validate the proposed method, three experimental schemes are designed: function extrapolation, simulation experiments, and experiments with predicted ultra-rapid orbits and international GNSS Monitoring and Assessment System (iGMAS) products. Experimental results show that using the proposed correction method with IERS products considerably improved the accuracy of ultra-rapid orbit prediction (except for the geosynchronous BeiDou orbits). The accuracy of orbit prediction is enhanced by at least 50% (error related to ERP) when a highly accurate observed orbit is used with the correction method. For iGMAS-predicted orbits, the accuracy improvement ranges from 8.5% for the inclined BeiDou orbits to 17.99% for the GPS orbits. This demonstrates that the correction method proposed in this study can improve ultra-rapid orbit prediction.
Predicting the helix packing of globular proteins by self-correcting distance geometry.
Mumenthaler, C; Braun, W
1995-05-01
A new self-correcting distance geometry method for predicting the three-dimensional structure of small globular proteins was assessed with a test set of 8 helical proteins. With the knowledge of the amino acid sequence and the helical segments, our completely automated method calculated the correct backbone topology of six proteins. The accuracy of the predicted structures ranged from 2.3 Å to 3.1 Å for the helical segments compared to the experimentally determined structures. For two proteins, the predicted constraints were not restrictive enough to yield a conclusive prediction. The method can be applied to all small globular proteins, provided the secondary structure is known from NMR analysis or can be predicted with high reliability.
Comparison of four statistical and machine learning methods for crash severity prediction.
Iranitalab, Amirfarrokh; Khattak, Aemal
2017-11-01
Crash severity prediction models enable different agencies to predict the severity of a reported crash with unknown severity or the severity of crashes that may be expected to occur sometime in the future. This paper had three main objectives: comparison of the performance of four statistical and machine learning methods, including Multinomial Logit (MNL), Nearest Neighbor Classification (NNC), Support Vector Machines (SVM) and Random Forests (RF), in predicting traffic crash severity; developing a crash costs-based approach for comparison of crash severity prediction methods; and investigating the effects of data clustering methods, comprising K-means Clustering (KC) and Latent Class Clustering (LCC), on the performance of crash severity prediction models. Reported crash data from Nebraska, United States (2012-2015) were obtained and two-vehicle crashes were extracted as the analysis data. The dataset was split into training/estimation (2012-2014) and validation (2015) subsets. The four prediction methods were trained/estimated using the training/estimation dataset, and the correct prediction rates for each crash severity level, the overall correct prediction rate and a proposed crash costs-based accuracy measure were obtained for the validation dataset. The correct prediction rates and the proposed approach showed that NNC had the best prediction performance overall and for more severe crashes. RF and SVM showed the next-best performance, and MNL was the weakest method. Data clustering did not affect the prediction results of SVM, but KC improved the prediction performance of MNL, NNC and RF, while LCC caused improvement in MNL and RF but weakened the performance of NNC. The overall correct prediction rate gave almost exactly the opposite ranking compared to the proposed approach, showing that neglecting crash costs can lead to misjudgment in choosing the right prediction method.
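The crash-costs-based accuracy measure is not specified in the abstract; the sketch below shows one plausible form of such a measure, weighting each correctly predicted crash by a per-severity cost. The cost values and severity coding are placeholders, not those used in the study.

```python
# Minimal sketch of a cost-weighted accuracy measure, assuming each severity
# level carries a unit crash cost; the dollar values below are hypothetical.
import numpy as np

SEVERITY_COST = {0: 5_000, 1: 60_000, 2: 250_000, 3: 1_500_000}   # hypothetical $ per severity level

def cost_weighted_accuracy(y_true, y_pred):
    """Share of total crash cost attached to correctly predicted crashes."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    costs = np.array([SEVERITY_COST[y] for y in y_true], dtype=float)
    return float(costs[y_true == y_pred].sum() / costs.sum())

y_true = [0, 0, 1, 2, 3, 1]
y_pred = [0, 1, 1, 2, 1, 1]
print("cost-weighted accuracy :", cost_weighted_accuracy(y_true, y_pred))
print("plain correct-pred rate:", float(np.mean(np.array(y_true) == np.array(y_pred))))
```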
NASA Astrophysics Data System (ADS)
Wang, Han; Yan, Jie; Liu, Yongqian; Han, Shuang; Li, Li; Zhao, Jing
2017-11-01
Increasing the accuracy of wind speed prediction lays a solid foundation for the reliability of wind power forecasting. Most traditional correction methods for wind speed prediction establish the mapping relationship between the wind speed of the numerical weather prediction (NWP) and the historical measurement data (HMD) at the corresponding time slot, which ignores the time-dependent structure of the wind speed time series. In this paper, a multi-step-ahead wind speed prediction correction method is proposed with consideration of the passing effects from wind speed at the previous time slot. To this end, the proposed method employs both NWP and HMD as model inputs and training labels. First, a probabilistic analysis of the NWP deviation for different wind speed bins is carried out to illustrate the inadequacy of the traditional time-independent mapping strategy. Then, a support vector machine (SVM) is used as an example to implement the proposed mapping strategy and to establish the correction model for all the wind speed bins. A wind farm in northern China is taken as an example to validate the proposed method. Three benchmark methods of wind speed prediction are used to compare the performance. The results show that the proposed model has the best performance under different time horizons.
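A minimal sketch of the time-dependent mapping strategy described above, assuming the corrected speed at time t is regressed on the NWP forecast at t and the measured speed at t-1; the bin-wise models, feature set, and SVM settings of the paper are simplified here and the data are synthetic.

```python
# Minimal sketch: SVR correction of NWP wind speed using the previous measured
# speed as an extra feature. Data are synthetic; hyper-parameters are illustrative.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
hmd = np.cumsum(rng.normal(0.0, 0.5, 500)) % 15 + 3     # synthetic measured wind speed (m/s)
nwp = hmd + rng.normal(0.8, 1.2, 500)                    # biased, noisy NWP forecast

X = np.column_stack([nwp[1:], hmd[:-1]])                 # features: NWP(t), measured speed(t-1)
y = hmd[1:]                                              # target: measured speed(t)

model = SVR(kernel="rbf", C=10.0, epsilon=0.1).fit(X[:400], y[:400])
corrected = model.predict(X[400:])
print("raw NWP RMSE  :", float(np.sqrt(np.mean((nwp[401:] - y[400:]) ** 2))))
print("corrected RMSE:", float(np.sqrt(np.mean((corrected - y[400:]) ** 2))))
```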
Comparison of techniques for correction of magnification of pelvic X-rays for hip surgery planning.
The, Bertram; Kootstra, Johan W J; Hosman, Anton H; Verdonschot, Nico; Gerritsma, Carina L E; Diercks, Ron L
2007-12-01
The aim of this study was to develop an accurate method for correction of magnification of pelvic X-rays to enhance the accuracy of hip surgery planning. All investigated methods aim at estimating the anteroposterior location of the hip joint in the supine position to correctly position a reference object for correction of magnification. An existing method, currently used in clinical practice in our clinics, is based on estimating the position of the hip joint by palpation of the greater trochanter. It is only moderately accurate and difficult to execute reliably in clinical practice. To develop a new method, 99 patients who already had a hip implant in situ were included; this enabled determining the true location of the hip joint, deduced from the magnification of the prosthesis. Physical examination was used to obtain predictor variables possibly associated with the height of the hip joint. This included a simple dynamic hip joint examination to estimate the position of the center of rotation. Prediction equations were then constructed using regression analysis. The performance of these prediction equations was compared with the performance of the existing protocol. The mean absolute error in predicting the height of the hip joint center using the old method was 20 mm (range -79 mm to +46 mm). This was 11 mm for the new method (range -32 mm to +39 mm). The prediction equation is: height (mm) = 34 + 0.5 × abdominal circumference (cm). The newly developed prediction equation is a superior method for predicting the height of the hip joint center for correction of magnification of pelvic X-rays. We recommend its implementation in the departments of radiology and orthopedic surgery.
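A worked example of the reported prediction equation, height (mm) = 34 + 0.5 × abdominal circumference (cm); the subsequent radiographic magnification correction using a reference object at that height is not reproduced here.

```python
def hip_joint_height_mm(abdominal_circumference_cm: float) -> float:
    """Reported regression: height (mm) = 34 + 0.5 * abdominal circumference (cm)."""
    return 34.0 + 0.5 * abdominal_circumference_cm

# A patient with a 100 cm abdominal circumference: reference object placed ~84 mm above the table.
print(hip_joint_height_mm(100.0))
```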
Hypothesis, Prediction, and Conclusion: Using Nature of Science Terminology Correctly
ERIC Educational Resources Information Center
Eastwell, Peter
2012-01-01
This paper defines the terms "hypothesis," "prediction," and "conclusion" and shows how to use the terms correctly in scientific investigations in both the school and science education research contexts. The scientific method, or hypothetico-deductive (HD) approach, is described and it is argued that an understanding of the scientific method,…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Patton, T; Du, K; Bayouth, J
Purpose: Ventilation change caused by radiation therapy (RT) can be predicted using four-dimensional computed tomography (4DCT) and image registration. This study tested the dependency of predicted post-RT ventilation on effort correction and pre-RT lung function. Methods: Pre-RT and 3 month post-RT 4DCT images were obtained for 13 patients. The 4DCT images were used to create ventilation maps using a deformable image registration based Jacobian expansion calculation. The post-RT ventilation maps were predicted in four different ways using the dose delivered, pre-RT ventilation, and effort correction. The pre-RT ventilation and effort correction were toggled to determine dependency. The four different predicted ventilation maps were compared to the post-RT ventilation map calculated from image registration to establish the best prediction method. Gamma pass rates were used to compare the different maps with the criteria of 2 mm distance-to-agreement and 6% ventilation difference. Paired t-tests of gamma pass rates were used to determine significant differences between the maps. Additional gamma pass rates were calculated using only voxels receiving over 20 Gy. Results: The predicted post-RT ventilation maps were in agreement with the actual post-RT maps in the following percentage of voxels averaged over all subjects: 71% with pre-RT ventilation and effort correction, 69% with no pre-RT ventilation and effort correction, 60% with pre-RT ventilation and no effort correction, and 58% with no pre-RT ventilation and no effort correction. When analyzing only voxels receiving over 20 Gy, the gamma pass rates were respectively 74%, 69%, 65%, and 55%. The prediction including both pre-RT ventilation and effort correction was the only prediction with significant improvement over using no prediction (p<0.02). Conclusion: Post-RT ventilation is best predicted using both pre-RT ventilation and effort correction. This is the only prediction that provided a significant improvement in agreement. Research support from NIH grants CA166119 and CA166703, a gift from Roger Koch, and a Pilot Grant from University of Iowa Carver College of Medicine.
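A minimal sketch of a gamma pass-rate comparison with the 2 mm / 6% criteria mentioned above, using a brute-force local search over a 3D ventilation map. Voxel spacing, the search radius, and treating 6% as an absolute ventilation-difference criterion are illustrative assumptions, and the data are synthetic.

```python
# Minimal gamma pass-rate sketch: for each reference voxel, search nearby voxels
# of the evaluated map and keep the smallest combined distance/difference metric.
import numpy as np

def gamma_pass_rate(ref, ev, spacing_mm=2.0, dta_mm=2.0, diff=0.06, search=2):
    """Fraction of voxels with gamma <= 1 when comparing two same-shaped 3D maps."""
    pad = np.pad(ev, search, mode="edge")
    gamma2 = np.full(ref.shape, np.inf)
    n0, n1, n2 = ref.shape
    for di in range(-search, search + 1):
        for dj in range(-search, search + 1):
            for dk in range(-search, search + 1):
                shifted = pad[search + di:search + di + n0,
                              search + dj:search + dj + n1,
                              search + dk:search + dk + n2]
                dist2 = (di * di + dj * dj + dk * dk) * spacing_mm ** 2
                g2 = dist2 / dta_mm ** 2 + ((shifted - ref) / diff) ** 2
                gamma2 = np.minimum(gamma2, g2)
    return float(np.mean(gamma2 <= 1.0))

rng = np.random.default_rng(0)
ref = rng.random((20, 20, 20)) * 0.3                 # synthetic ventilation map (e.g. Jacobian - 1)
pred = ref + rng.normal(0.0, 0.02, ref.shape)        # synthetic "predicted" map
print(gamma_pass_rate(ref, pred))
```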
A Class of Prediction-Correction Methods for Time-Varying Convex Optimization
NASA Astrophysics Data System (ADS)
Simonetto, Andrea; Mokhtari, Aryan; Koppel, Alec; Leus, Geert; Ribeiro, Alejandro
2016-09-01
This paper considers unconstrained convex optimization problems with time-varying objective functions. We propose algorithms with a discrete time-sampling scheme to find and track the solution trajectory based on prediction and correction steps, while sampling the problem data at a constant rate of $1/h$, where $h$ is the length of the sampling interval. The prediction step is derived by analyzing the iso-residual dynamics of the optimality conditions. The correction step adjusts for the distance between the current prediction and the optimizer at each time step, and consists of one or multiple gradient steps or Newton steps, which respectively correspond to the gradient trajectory tracking (GTT) or Newton trajectory tracking (NTT) algorithms. Under suitable conditions, we establish that the asymptotic error incurred by both proposed methods behaves as $O(h^2)$, and in some cases as $O(h^4)$, which outperforms the state-of-the-art error bound of $O(h)$ for correction-only methods based on gradient steps. Moreover, when the characteristics of the objective function variation are not available, we propose approximate gradient and Newton tracking algorithms (AGT and ANT, respectively) that still attain these asymptotic error bounds. Numerical simulations demonstrate the practical utility of the proposed methods and that they improve upon existing techniques by several orders of magnitude.
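A minimal sketch of the prediction-correction structure on a toy time-varying objective f(x; t) = 0.5‖x − r(t)‖², sampled every h seconds. The prediction here simply extrapolates the optimizer motion (closer in spirit to the approximate AGT variant than to the paper's derivative-based prediction), and the correction applies a few gradient steps on the newly sampled objective; step sizes and the toy problem are illustrative.

```python
# Minimal prediction-correction tracking loop for a moving optimizer r(t).
import numpy as np

h = 0.1                                   # sampling interval
alpha = 0.5                               # gradient step size (unit curvature here)
r = lambda t: np.array([np.cos(t), np.sin(2.0 * t)])   # optimizer trajectory x*(t) = r(t)
grad = lambda x, t: x - r(t)              # gradient of f(x; t) = 0.5 * ||x - r(t)||^2

x_prev = r(0.0)
x = r(0.0)
errors = []
for k in range(1, 200):
    t = k * h
    x_pred = x + (x - x_prev)             # prediction: extrapolate the optimizer's motion
    x_prev, x = x, x_pred
    for _ in range(3):                    # correction: a few gradient steps on f(.; t_k)
        x = x - alpha * grad(x, t)
    errors.append(np.linalg.norm(x - r(t)))

print("mean tracking error:", float(np.mean(errors)))
```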
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dall-Anese, Emiliano; Simonetto, Andrea
This paper focuses on the design of online algorithms based on prediction-correction steps to track the optimal solution of a time-varying constrained problem. Existing prediction-correction methods have been shown to work well for unconstrained convex problems and for settings where obtaining the inverse of the Hessian of the cost function can be computationally affordable. The prediction-correction algorithm proposed in this paper addresses the limitations of existing methods by tackling constrained problems and by designing a first-order prediction step that relies on the Hessian of the cost function (and does not require the computation of its inverse). Analytical results are established to quantify the tracking error. Numerical simulations corroborate the analytical results and showcase performance and benefits of the algorithms.
Innovation in prediction planning for anterior open bite correction.
Almuzian, Mohammed; Almukhtar, Anas; O'Neil, Michael; Benington, Philip; Al Anezi, Thamer; Ayoub, Ashraf
2015-05-01
This study applies recent advances in 3D virtual imaging to the prediction planning of dentofacial deformity correction. Stereo-photogrammetry has been used to create virtual and physical models, which are creatively combined in planning the surgical correction of anterior open bite. The application of these novel methods is demonstrated through the surgical correction of a case.
Improved Density Functional Tight Binding Potentials for Metalloid Aluminum Clusters
2016-06-01
Simulations of the oxidation of Al4Cp*4 show reasonable comparison with ab initio molecular dynamics using a DFT-based Car-Parrinello method, including correct prediction of hydride transfers from Cp* to the metal centers during the oxidation.
An improved method to detect correct protein folds using partial clustering.
Zhou, Jianjun; Wishart, David S
2013-01-16
Structure-based clustering is commonly used to identify correct protein folds among candidate folds (also called decoys) generated by protein structure prediction programs. However, traditional clustering methods exhibit a poor runtime performance on large decoy sets. We hypothesized that a more efficient "partial" clustering approach in combination with an improved scoring scheme could significantly improve both the speed and performance of existing candidate selection methods. We propose a new scheme that performs rapid but incomplete clustering on protein decoys. Our method detects structurally similar decoys (measured using either C(α) RMSD or GDT-TS score) and extracts representatives from them without assigning every decoy to a cluster. We integrated our new clustering strategy with several different scoring functions to assess both the performance and speed in identifying correct or near-correct folds. Experimental results on 35 Rosetta decoy sets and 40 I-TASSER decoy sets show that our method can improve the correct fold detection rate as assessed by two different quality criteria. This improvement is significantly better than two recently published clustering methods, Durandal and Calibur-lite. Speed and efficiency testing shows that our method can handle much larger decoy sets and is up to 22 times faster than Durandal and Calibur-lite. The new method, named HS-Forest, avoids the computationally expensive task of clustering every decoy, yet still allows superior correct-fold selection. Its improved speed, efficiency and decoy-selection performance should enable structure prediction researchers to work with larger decoy sets and significantly improve their ab initio structure prediction performance.
An improved method to detect correct protein folds using partial clustering
2013-01-01
Background Structure-based clustering is commonly used to identify correct protein folds among candidate folds (also called decoys) generated by protein structure prediction programs. However, traditional clustering methods exhibit a poor runtime performance on large decoy sets. We hypothesized that a more efficient "partial" clustering approach in combination with an improved scoring scheme could significantly improve both the speed and performance of existing candidate selection methods. Results We propose a new scheme that performs rapid but incomplete clustering on protein decoys. Our method detects structurally similar decoys (measured using either Cα RMSD or GDT-TS score) and extracts representatives from them without assigning every decoy to a cluster. We integrated our new clustering strategy with several different scoring functions to assess both the performance and speed in identifying correct or near-correct folds. Experimental results on 35 Rosetta decoy sets and 40 I-TASSER decoy sets show that our method can improve the correct fold detection rate as assessed by two different quality criteria. This improvement is significantly better than two recently published clustering methods, Durandal and Calibur-lite. Speed and efficiency testing shows that our method can handle much larger decoy sets and is up to 22 times faster than Durandal and Calibur-lite. Conclusions The new method, named HS-Forest, avoids the computationally expensive task of clustering every decoy, yet still allows superior correct-fold selection. Its improved speed, efficiency and decoy-selection performance should enable structure prediction researchers to work with larger decoy sets and significantly improve their ab initio structure prediction performance. PMID:23323835
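A minimal sketch of the "partial clustering" idea: decoys are scanned once and greedily attached to the first representative within a distance cutoff, so no full pairwise distance matrix is built and not every decoy is assigned to a cluster. The distance used here is coordinate RMSD without superposition, a simplification of the Cα-RMSD / GDT-TS measures used by HS-Forest, and the cutoff and data are illustrative.

```python
# Minimal one-pass "partial clustering" of decoys with greedy representative extraction.
import numpy as np

def rmsd(a, b):
    """Coordinate RMSD without superposition (simplification of Ca-RMSD)."""
    return float(np.sqrt(np.mean(np.sum((a - b) ** 2, axis=1))))

def partial_cluster(decoys, cutoff):
    """One pass over the decoys: return representative indices and cluster sizes, largest first."""
    reps, counts = [], []
    for i, d in enumerate(decoys):
        for j, r in enumerate(reps):
            if rmsd(d, decoys[r]) < cutoff:
                counts[j] += 1               # attach decoy to an existing representative
                break
        else:
            reps.append(i)                   # decoy becomes a new representative
            counts.append(1)
    order = np.argsort(counts)[::-1]
    return [reps[k] for k in order], [counts[k] for k in order]

rng = np.random.default_rng(2)
decoys = [rng.normal(scale=3.0, size=(50, 3)) for _ in range(200)]   # synthetic "structures"
reps, counts = partial_cluster(decoys, cutoff=7.5)
print(len(reps), "representatives; largest cluster size:", counts[0])
```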
Characterizing bias correction uncertainty in wheat yield predictions
NASA Astrophysics Data System (ADS)
Ortiz, Andrea Monica; Jones, Julie; Freckleton, Robert; Scaife, Adam
2017-04-01
Farming systems are under increased pressure due to current and future climate change, variability and extremes. Research on the impacts of climate change on crop production typically relies on the output of complex Global and Regional Climate Models, which are used as input to crop impact models. Yield predictions from these top-down approaches can have high uncertainty for several reasons, including diverse model construction and parameterization, future emissions scenarios, and inherent or response uncertainty. These uncertainties propagate down each step of the 'cascade of uncertainty' that flows from climate input to impact predictions, leading to yield predictions that may be too complex for their intended use in practical adaptation options. In addition to uncertainty from impact models, uncertainty can also stem from the intermediate steps that are used in impact studies to adjust climate model simulations to become more realistic when compared to observations, or to correct the spatial or temporal resolution of climate simulations, which are often not directly applicable as input into impact models. These important steps of bias correction or calibration also add uncertainty to final yield predictions, given the various approaches that exist to correct climate model simulations. In order to address how much uncertainty the choice of bias correction method can add to yield predictions, we use several evaluation runs from Regional Climate Models from the Coordinated Regional Downscaling Experiment over Europe (EURO-CORDEX) at different resolutions, together with different bias correction methods (linear and variance scaling, power transformation, quantile-quantile mapping), as input to a statistical crop model for wheat, a staple European food crop. The objective of our work is to compare the resulting simulation-driven hindcast wheat yields to climate observation-driven wheat yield hindcasts from the UK and Germany, in order to determine the ranges of yield uncertainty that result from different climate model simulation inputs and bias correction methods. We simulate wheat yields using a General Linear Model that includes the effects of seasonal maximum temperatures and precipitation, since wheat is sensitive to heat stress during important developmental stages. We use the same statistical model to predict future wheat yields using the recently available bias-corrected simulations of EURO-CORDEX-Adjust. While statistical models are often criticized for their lack of complexity, an advantage is that we are here able to consider only the effect of the choice of climate model, resolution or bias correction method on yield. Initial results using both past and future bias-corrected climate simulations with a process-based model will also be presented. Through these methods, we make recommendations for preparing climate model output for crop models.
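For reference, two of the bias-correction methods named above (linear scaling and variance scaling) follow the standard textbook definitions sketched below; the crop-model step and the other methods are not shown, and the data are synthetic.

```python
# Minimal sketch of linear scaling and variance scaling bias correction.
import numpy as np

def linear_scaling(sim_cal, obs_cal, sim_fut):
    """Shift the simulation by the calibration-period mean bias."""
    return sim_fut + (obs_cal.mean() - sim_cal.mean())

def variance_scaling(sim_cal, obs_cal, sim_fut):
    """Match both the mean and the standard deviation of the observations."""
    return (sim_fut - sim_cal.mean()) * (obs_cal.std() / sim_cal.std()) + obs_cal.mean()

rng = np.random.default_rng(1)
obs_cal = rng.normal(22.0, 3.0, 360)     # observed seasonal maximum temperature (deg C)
sim_cal = rng.normal(24.5, 4.0, 360)     # biased climate-model simulation, same period
sim_fut = rng.normal(26.0, 4.0, 360)     # simulation to be corrected before the crop model

print("linear scaling mean  :", float(linear_scaling(sim_cal, obs_cal, sim_fut).mean()))
print("variance scaling std :", float(variance_scaling(sim_cal, obs_cal, sim_fut).std()))
```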
A maintenance time prediction method considering ergonomics through virtual reality simulation.
Zhou, Dong; Zhou, Xin-Xin; Guo, Zi-Yue; Lv, Chuan
2016-01-01
Maintenance time is a critical quantitative index in maintainability prediction. An efficient maintenance time measurement methodology plays an important role in the early stage of maintainability design. However, the traditional way of measuring maintenance time ignores the differences between line production and maintenance actions. This paper proposes a corrective MOD method that considers several important ergonomics factors to predict maintenance time. With the help of the DELMIA analysis tools, the influence coefficients of several factors are discussed to correct the MOD value, and designers can measure maintenance time by calculating the sum of the corrected MOD times of each maintenance therblig. Finally, a case study is introduced: by maintaining the virtual prototype of an APU motor starter in DELMIA, the designer obtains the actual maintenance time using the proposed method, and the result verifies the effectiveness and accuracy of the method.
NASA Astrophysics Data System (ADS)
Hurst, N. W.; Kusznir, N. J.
2005-05-01
A new method of inverting satellite gravity at rifted continental margins to give crustal thickness, incorporating a lithosphere thermal correction, has been developed which does not use a priori information about the location of the ocean-continent transition (OCT) and provides an independent prediction of OCT location. Satellite derived gravity anomaly data (Sandwell and Smith 1997) and bathymetry data (Gebco 2003) are used to derive the mantle residual gravity anomaly which is inverted in 3D in the spectral domain to give Moho depth. Oceanic lithosphere and stretched continental margin lithosphere produce a large negative residual thermal gravity anomaly (up to -380 mgal), which must be corrected for in order to determine Moho depth. This thermal gravity correction may be determined for oceanic lithosphere using oceanic isochron data, and for the thinned continental margin lithosphere using margin rift age and beta stretching estimates iteratively derived from crustal basement thickness determined from the gravity inversion. The gravity inversion using the thermal gravity correction predicts oceanic crustal thicknesses consistent with seismic observations, while that without the thermal correction predicts much too great oceanic crustal thicknesses. Predicted Moho depth and crustal thinning across the Hatton and Faroes rifted margins, using the gravity inversion with embedded thermal correction, compare well with those produced by wide-angle seismology. A new gravity inversion method has been developed in which no isochrons are used to define the thermal gravity correction. The new method assumes all lithosphere to be initially continental and a uniform lithosphere stretching age is used corresponding to the time of continental breakup. The thinning factor produced by the gravity inversion is used to predict the thickness of oceanic crust. This new modified form of gravity inversion with embedded thermal correction provides an improved estimate of rifted continental margin crustal thinning and an improved (and isochron independent) prediction of OCT location. The new method uses an empirical relationship to predict the thickness of oceanic crust as a function of lithosphere thinning factor controlled by two input parameters: a critical thinning factor for the start of ocean crust production and the maximum oceanic crustal thickness produced when the thinning factor = 1, corresponding to infinite lithosphere stretching. The disadvantage of using a uniform stretching age corresponding to the age of continental breakup is that the inversion fails to predict increasing thermal gravity correction towards the ocean ridge and incorrectly predicts thickening of oceanic crust with decreasing oceanic age. The new gravity inversion method has been applied to N. Atlantic rifted margins. This work forms part of the NERC Margins iSIMM project. iSIMM investigators are from Liverpool and Cambridge Universities, Badley Geoscience & Schlumberger Cambridge Research supported by the NERC, the DTI, Agip UK, BP, Amerada Hess Ltd, Anadarko, ConocoPhillips, Shell, Statoil and WesternGeco. The iSIMM team comprises NJ Kusznir, RS White, AM Roberts, PAF Christie, A Chappell, J Eccles, R Fletcher, D Healy, N Hurst, ZC Lunnon, CJ Parkin, AW Roberts, LK Smith, V Tymms & R Spitzer.
Lock-in amplifier error prediction and correction in frequency sweep measurements.
Sonnaillon, Maximiliano Osvaldo; Bonetto, Fabian Jose
2007-01-01
This article proposes an analytical algorithm for predicting errors in lock-in amplifiers (LIAs) working with time-varying reference frequency. Furthermore, a simple method for correcting such errors is presented. The reference frequency can be swept in order to measure the frequency response of a system within a given spectrum. The continuous variation of the reference frequency produces a measurement error that depends on three factors: the sweep speed, the LIA low-pass filters, and the frequency response of the measured system. The proposed error prediction algorithm is based on the final value theorem of the Laplace transform. The correction method uses a double-sweep measurement. A mathematical analysis is presented and validated with computational simulations and experimental measurements.
Lu, Liqiang; Liu, Xiaowen; Li, Tingwen; ...
2017-08-12
For this study, gas–solids flow in a three-dimensional periodic domain was numerically investigated by direct numerical simulation (DNS), computational fluid dynamic-discrete element method (CFD-DEM) and two-fluid model (TFM). DNS data obtained by finely resolving the flow around every particle are used as a benchmark to assess the validity of coarser DEM and TFM approaches. The CFD-DEM predicts the correct cluster size distribution and under-predicts the macro-scale slip velocity even with a grid size as small as twice the particle diameter. The TFM approach predicts larger cluster size and lower slip velocity with a homogeneous drag correlation. Although the slip velocity can be matched by a simple modification to the drag model, the predicted voidage distribution is still different from DNS: Both CFD-DEM and TFM over-predict the fraction of particles in dense regions and under-predict the fraction of particles in regions of intermediate void fractions. Also, the cluster aspect ratio of DNS is smaller than CFD-DEM and TFM. Since a simple correction to the drag model can predict a correct slip velocity, it is hopeful that drag corrections based on more elaborate theories that consider voidage gradient and particle fluctuations may be able to improve the current predictions of cluster distribution.
Comparison of stochastic optimization methods for all-atom folding of the Trp-Cage protein.
Schug, Alexander; Herges, Thomas; Verma, Abhinav; Lee, Kyu Hwan; Wenzel, Wolfgang
2005-12-09
The performances of three different stochastic optimization methods for all-atom protein structure prediction are investigated and compared. We use the recently developed all-atom free-energy force field (PFF01), which was demonstrated to correctly predict the native conformation of several proteins as the global optimum of the free energy surface. The trp-cage protein (PDB-code 1L2Y) is folded with the stochastic tunneling method, a modified parallel tempering method, and the basin-hopping technique. All the methods correctly identify the native conformation, and their relative efficiency is discussed.
Fekete, Attila; Komáromi, István
2016-12-07
A proteolytic reaction of papain with a simple peptide model substrate, N-methylacetamide, has been studied. Our aim was twofold: (i) we proposed a plausible reaction mechanism with the aid of potential energy surface scans and second geometrical derivatives calculated at the stationary points, and (ii) we investigated the applicability of dispersion-corrected density functional methods, in comparison with the popular hybrid generalized gradient approximation (GGA) method (B3LYP) without such a correction, in the QM/MM calculations for this particular problem. In the resting state of papain, the ion pair and neutral forms of the Cys-His catalytic dyad have approximately the same energy and they are separated by only a small barrier. Zero point vibrational energy correction shifted this equilibrium slightly to the neutral form. On the other hand, the electrostatic solvation free energy corrections, calculated using the Poisson-Boltzmann method for the structures sampled from molecular dynamics simulation trajectories, resulted in a more stable ion-pair form. All methods we applied predicted an acylation process of at least two elementary steps via a zwitterionic tetrahedral intermediate. Using dispersion-corrected DFT methods, the thioester S-C bond formation and the proton transfer from histidine occur in the same elementary step, although not synchronously. The proton transfer lags behind (or at least does not precede) the S-C bond formation. The predicted transition state corresponds mainly to the S-C bond formation while the proton is still on the histidine Nδ atom. In contrast, the B3LYP method using larger basis sets predicts a transition state in which the S-C bond is almost fully formed and which is mainly characterized by the Nδ(histidine) to N(amide) proton transfer. Considerably lower activation energy was predicted (especially by the B3LYP method) for the next, amide bond breaking, elementary step of acyl-enzyme formation. Deacylation appeared to be a single elementary step process in all the methods we applied.
Multi-Stage Target Tracking with Drift Correction and Position Prediction
NASA Astrophysics Data System (ADS)
Chen, Xin; Ren, Keyan; Hou, Yibin
2018-04-01
Most existing tracking methods struggle to combine accuracy and performance, and do not consider the shifts between clarity and blur that often occur. In this paper, we propose a multi-stage tracking framework with two particular modules: position prediction and corrective measure. We conduct tracking based on a correlation filter with a corrective measure module to increase both performance and accuracy. Specifically, a convolutional network is used to address the blur problem in realistic scenes; it is trained on a dataset containing blurred images generated by three blur algorithms. Then, we propose a position prediction module to reduce the computation cost and make the tracker more capable of handling fast motion. Experimental results show that our tracking method is more robust than others and more accurate on the benchmark sequences.
Corrected ROC analysis for misclassified binary outcomes.
Zawistowski, Matthew; Sussman, Jeremy B; Hofer, Timothy P; Bentley, Douglas; Hayward, Rodney A; Wiitala, Wyndy L
2017-06-15
Creating accurate risk prediction models from Big Data resources such as Electronic Health Records (EHRs) is a critical step toward achieving precision medicine. A major challenge in developing these tools is accounting for imperfect aspects of EHR data, particularly the potential for misclassified outcomes. Misclassification, the swapping of case and control outcome labels, is well known to bias effect size estimates for regression prediction models. In this paper, we study the effect of misclassification on accuracy assessment for risk prediction models and find that it leads to bias in the area under the curve (AUC) metric from standard ROC analysis. The extent of the bias is determined by the false positive and false negative misclassification rates as well as disease prevalence. Notably, we show that simply correcting for misclassification while building the prediction model is not sufficient to remove the bias in AUC. We therefore introduce an intuitive misclassification-adjusted ROC procedure that accounts for uncertainty in observed outcomes and produces bias-corrected estimates of the true AUC. The method requires that misclassification rates are either known or can be estimated, quantities typically required for the modeling step. The computational simplicity of our method is a key advantage, making it ideal for efficiently comparing multiple prediction models on very large datasets. Finally, we apply the correction method to a hospitalization prediction model from a cohort of over 1 million patients from the Veterans Health Administration's EHR. Implementations of the ROC correction are provided for Stata and R.
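A minimal simulation of the phenomenon described above: flipping a fraction of case/control labels at assumed false-positive and false-negative rates pulls the apparent AUC of a fixed risk score toward 0.5. The analytic correction from the paper is not reproduced; the generating model and rates are illustrative.

```python
# Minimal demonstration that outcome misclassification biases the observed AUC.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 50_000
risk = rng.normal(size=n)                                        # risk score of a fixed model
y_true = rng.binomial(1, 1.0 / (1.0 + np.exp(-(risk - 2.0))))    # true outcomes

fpr_label, fnr_label = 0.05, 0.20                                # assumed label-error rates
flip = np.where(y_true == 1, rng.random(n) < fnr_label, rng.random(n) < fpr_label)
y_obs = np.where(flip, 1 - y_true, y_true)                       # observed (misclassified) outcomes

print("AUC vs true labels    :", round(roc_auc_score(y_true, risk), 3))
print("AUC vs observed labels:", round(roc_auc_score(y_obs, risk), 3))   # biased toward 0.5
```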
A velocity-correction projection method based immersed boundary method for incompressible flows
NASA Astrophysics Data System (ADS)
Cai, Shanggui
2014-11-01
In the present work we propose a novel direct forcing immersed boundary method based on the velocity-correction projection method of [J.L. Guermond, J. Shen, Velocity-correction projection methods for incompressible flows, SIAM J. Numer. Anal., 41 (1) (2003) 112]. The principal idea of the immersed boundary method is to correct the velocity in the vicinity of the immersed object by using an artificial force to mimic the presence of the physical boundaries. Therefore, the velocity-correction projection method is preferred to its pressure-correction counterpart in the present work. Since the velocity-correction projection method can be considered the dual of the pressure-correction method, the proposed method can also be interpreted as follows: first, the pressure is predicted by treating the viscous term explicitly without considering the immersed boundary, and the solenoidal velocity is used to determine the volume force on the Lagrangian points; then, the no-slip boundary condition is enforced by correcting the velocity with the implicit viscous term. To demonstrate the efficiency and accuracy of the proposed method, several numerical simulations are performed and compared with results in the literature. China Scholarship Council.
Fourcade, Yoan; Engler, Jan O; Rödder, Dennis; Secondi, Jean
2014-01-01
MAXENT is now a common species distribution modeling (SDM) tool used by conservation practitioners for predicting the distribution of a species from a set of records and environmental predictors. However, datasets of species occurrence used to train the model are often biased in the geographical space because of unequal sampling effort across the study area. This bias may be a source of strong inaccuracy in the resulting model and could lead to incorrect predictions. Although a number of sampling bias correction methods have been proposed, there is no consensual guideline to account for it. We compared here the performance of five methods of bias correction on three datasets of species occurrence: one "virtual" derived from a land cover map, and two actual datasets for a turtle (Chrysemys picta) and a salamander (Plethodon cylindraceus). We subjected these datasets to four types of sampling biases corresponding to potential types of empirical biases. We applied five correction methods to the biased samples and compared the outputs of distribution models to unbiased datasets to assess the overall correction performance of each method. The results revealed that the ability of methods to correct the initial sampling bias varied greatly depending on bias type, bias intensity and species. However, the simple systematic sampling of records consistently ranked among the best performing across the range of conditions tested, whereas other methods performed more poorly in most cases. The strong effect of initial conditions on correction performance highlights the need for further research to develop a step-by-step guideline to account for sampling bias. However, this method seems to be the most efficient in correcting sampling bias and should be advised in most cases.
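A minimal sketch of the systematic sampling correction that ranked among the best above, assuming the common implementation of thinning occurrences to at most one record per cell of a regular geographic grid before model training; the cell size and coordinates are illustrative.

```python
# Minimal grid-based thinning ("systematic sampling") of occurrence records.
import numpy as np

def systematic_sample(lon, lat, cell_deg=0.5, seed=0):
    """Keep at most one randomly chosen record per grid cell; returns kept indices."""
    rng = np.random.default_rng(seed)
    cells = {}
    for i in rng.permutation(len(lon)):
        key = (int(np.floor(lon[i] / cell_deg)), int(np.floor(lat[i] / cell_deg)))
        cells.setdefault(key, int(i))      # first record seen in a cell (random order) wins
    return sorted(cells.values())

rng = np.random.default_rng(1)
lon = rng.uniform(-5.0, 5.0, 1000)         # synthetic occurrence coordinates
lat = rng.uniform(40.0, 50.0, 1000)
kept = systematic_sample(lon, lat)
print(len(kept), "records kept out of", len(lon))
```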
First Principle Predictions of Isotopic Shifts in H2O
NASA Technical Reports Server (NTRS)
Schwenke, David W.; Kwak, Dochan (Technical Monitor)
2002-01-01
We compute isotope independent first and second order corrections to the Born-Oppenheimer approximation for water and use them to predict isotopic shifts. For the diagonal correction, we use icMRCI wavefunctions and derivatives with respect to mass dependent, internal coordinates to generate the mass independent correction functions. For the non-adiabatic correction, we use scaled SCF/CIS wave functions and a generalization of the Handy method to obtain mass independent correction functions. We find that including the non-adiabatic correction gives significantly improved results compared to just including the diagonal correction when the Born-Oppenheimer potential energy surface is optimized for H2O-16. The agreement with experimental results for deuterium and tritium containing isotopes is nearly as good as our best empirical correction, however, the present correction is expected to be more reliable for higher, uncharacterized levels.
NASA Astrophysics Data System (ADS)
Zhang, Yongqian; Brandner, Edward; Ozhasoglu, Cihat; Lalonde, Ron; Heron, Dwight E.; Saiful Huq, M.
2018-02-01
The use of small fields in radiation therapy techniques has increased substantially in particular in stereotactic radiosurgery (SRS) and stereotactic body radiation therapy (SBRT). However, as field size reduces further still, the response of the detector changes more rapidly with field size, and the effects of measurement uncertainties become increasingly significant due to the lack of lateral charged particle equilibrium, spectral changes as a function of field size, detector choice, and subsequent perturbations of the charged particle fluence. This work presents a novel 3D dose volume-to-point correction method to predict the readings of a 0.015 cc PinPoint chamber (PTW 31014) for both small static-fields and composite-field dosimetry formed by fixed cones on the CyberKnife® M6™ machine. A 3D correction matrix is introduced to link the 3D dose distribution to the response of the PinPoint chamber in water. The parameters of the correction matrix are determined by modeling its 3D dose response in circular fields created using the 12 fixed cones (5 mm-60 mm) on a CyberKnife® M6™ machine. A penalized least-square optimization problem is defined by fitting the calculated detector reading to the experimental measurement data to generate the optimal correction matrix; the simulated annealing algorithm is used to solve the inverse optimization problem. All the experimental measurements are acquired for every 2 mm chamber shift in the horizontal planes for each field size. The 3D dose distributions for the measurements are calculated using the Monte Carlo calculation with the MultiPlan® treatment planning system (Accuray Inc., Sunnyvale, CA, USA). The performance evaluation of the 3D conversion matrix is carried out by comparing the predictions of the output factors (OFs), off-axis ratios (OARs) and percentage depth dose (PDD) data to the experimental measurement data. The discrepancy of the measurement and the prediction data for composite fields is also performed for clinical SRS plans. The optimization algorithm used for generating the optimal correction factors is stable, and the resulting correction factors were smooth in the spatial domain. The measurement and prediction of OFs agree closely with percentage differences of less than 1.9% for all the 12 cones. The discrepancies between the prediction and the measurement PDD readings at 50 mm and 80 mm depth are 1.7% and 1.9%, respectively. The percentage differences of OARs between measurement and prediction data are less than 2% in the low dose gradient region, and 2%/1 mm discrepancies are observed within the high dose gradient regions. The differences between the measurement and prediction data for all the CyberKnife based SRS plans are less than 1%. These results demonstrate the existence and efficiency of the novel 3D correction method for small field dosimetry. The 3D correction matrix links the 3D dose distribution and the reading of the PinPoint chamber. The comparison between the predicted reading and the measurement data for static small fields (OFs, OARs and PDDs) yield discrepancies within 2% for low dose gradient regions and 2%/1 mm for high dose gradient regions; the discrepancies between the predicted and the measurement data are less than 1% for all the SRS plans. The 3D correction method provides an access to evaluate the clinical measurement data and can be applied to non-standard composite fields intensity modulated radiation therapy point dose verification.
Brandenburg, Jan Gerit; Grimme, Stefan
2014-01-01
We present and evaluate dispersion corrected Hartree-Fock (HF) and Density Functional Theory (DFT) based quantum chemical methods for organic crystal structure prediction. The necessity of correcting for missing long-range electron correlation, also known as van der Waals (vdW) interaction, is pointed out and some methodological issues such as inclusion of three-body dispersion terms are discussed. One of the most efficient and widely used methods is the semi-classical dispersion correction D3. Its applicability for the calculation of sublimation energies is investigated for the benchmark set X23 consisting of 23 small organic crystals. For PBE-D3 the mean absolute deviation (MAD) is below the estimated experimental uncertainty of 1.3 kcal/mol. For two larger π-systems, the equilibrium crystal geometry is investigated and very good agreement with experimental data is found. Since these calculations are carried out with huge plane-wave basis sets they are rather time consuming and routinely applicable only to systems with less than about 200 atoms in the unit cell. Aiming at crystal structure prediction, which involves screening of many structures, a pre-sorting with faster methods is mandatory. Small, atom-centered basis sets can speed up the computation significantly but they suffer greatly from basis set errors. We present the recently developed geometrical counterpoise correction gCP. It is a fast semi-empirical method which corrects for most of the inter- and intramolecular basis set superposition error. For HF calculations with nearly minimal basis sets, we additionally correct for short-range basis incompleteness. We combine all three terms in the HF-3c denoted scheme which performs very well for the X23 sublimation energies with an MAD of only 1.5 kcal/mol, which is close to the huge basis set DFT-D3 result.
Structural reliability analysis under evidence theory using the active learning kriging model
NASA Astrophysics Data System (ADS)
Yang, Xufeng; Liu, Yongshou; Ma, Panke
2017-11-01
Structural reliability analysis under evidence theory is investigated. It is rigorously proved that a surrogate model providing only correct sign prediction of the performance function can meet the accuracy requirement of evidence-theory-based reliability analysis. Accordingly, a method based on the active learning kriging model which only correctly predicts the sign of the performance function is proposed. Interval Monte Carlo simulation and a modified optimization method based on Karush-Kuhn-Tucker conditions are introduced to make the method more efficient in estimating the bounds of failure probability based on the kriging model. Four examples are investigated to demonstrate the efficiency and accuracy of the proposed method.
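A minimal sketch of active-learning kriging for sign classification of a performance function g(x), in the spirit of the well-known AK-MCS scheme: the learning function U = |μ|/σ targets the Monte Carlo sample whose sign is most uncertain, and only the predicted sign is used to estimate the failure probability. The toy g, kernel, stopping rule, and the omission of the evidence-theory / interval Monte Carlo layer are simplifications of the paper's method.

```python
# Minimal AK-MCS-style active-learning kriging loop for sign classification of g(x).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

g = lambda X: 3.0 - X[:, 0] ** 2 - X[:, 1]            # toy performance function (g < 0 = failure)
rng = np.random.default_rng(0)
X_mc = rng.normal(size=(10_000, 2))                    # Monte Carlo population

X_doe = rng.normal(size=(12, 2))                       # initial design of experiments
y_doe = g(X_doe)
for _ in range(30):                                    # active-learning iterations
    gp = GaussianProcessRegressor(kernel=RBF(1.0), normalize_y=True).fit(X_doe, y_doe)
    mu, sigma = gp.predict(X_mc, return_std=True)
    U = np.abs(mu) / np.maximum(sigma, 1e-12)          # sign-misclassification risk per sample
    if U.min() >= 2.0:                                 # all signs known with high confidence
        break
    i = int(np.argmin(U))                              # most ambiguous sample
    X_doe = np.vstack([X_doe, X_mc[i]])
    y_doe = np.append(y_doe, g(X_mc[i:i + 1]))

print("estimated failure probability:", float(np.mean(mu < 0.0)))
```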
Predicting photoyellowing behaviour of mechanical pulp containing papers
Umesh P. Agarwal
2005-01-01
It is well known that paper produced from mechanical-pulp-containing fiber furnish yellows upon exposure to light. Although the accelerated light-aging test method has been used to compare papers and predict long term performance, the reliability of the light-aging method has been questioned. Therefore, a method that can correctly predict a paper's light stability is...
Intelligent monitoring and control of semiconductor manufacturing equipment
NASA Technical Reports Server (NTRS)
Murdock, Janet L.; Hayes-Roth, Barbara
1991-01-01
The use of AI methods to monitor and control semiconductor fabrication in a state-of-the-art manufacturing environment called the Rapid Thermal Multiprocessor is described. Semiconductor fabrication involves many complex processing steps with limited opportunities to measure process and product properties. By applying additional process and product knowledge to that limited data, AI methods augment classical control methods by detecting abnormalities and trends, predicting failures, diagnosing, planning corrective action sequences, explaining diagnoses or predictions, and reacting to anomalous conditions that classical control systems typically would not correct. Research methodology and issues are discussed, and two diagnosis scenarios are examined.
XenoSite: accurately predicting CYP-mediated sites of metabolism with neural networks.
Zaretzki, Jed; Matlock, Matthew; Swamidass, S Joshua
2013-12-23
Understanding how xenobiotic molecules are metabolized is important because it influences the safety, efficacy, and dose of medicines and how they can be modified to improve these properties. The cytochrome P450s (CYPs) are proteins responsible for metabolizing 90% of drugs on the market, and many computational methods can predict which atomic sites of a molecule--sites of metabolism (SOMs)--are modified during CYP-mediated metabolism. This study improves on prior methods of predicting CYP-mediated SOMs by using new descriptors and machine learning based on neural networks. The new method, XenoSite, is faster to train and more accurate by as much as 4% or 5% for some isozymes. Furthermore, some "incorrect" predictions made by XenoSite were subsequently validated as correct predictions by re-evaluation of the source literature. Moreover, XenoSite output is interpretable as a probability, which reflects both the confidence of the model that a particular atom is metabolized and the statistical likelihood that its prediction for that atom is correct.
A method for the in vivo measurement of americium-241 at long times post-exposure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Neton, J.W.
1988-01-01
This study investigated an improved method for the quantitative measurement, calibration and calculation of ²⁴¹Am organ burdens in humans. The techniques developed correct for cross-talk, or count-rate contributions from surrounding and adjacent organ burdens, and assure the proper assignment of activity to the lungs, liver and skeleton. In order to predict the net count-rates for the measurement geometries of the skull, liver and lung, a background prediction method was developed. This method utilizes data obtained from the measurement of a group of control subjects. Based on these data, a linear prediction equation was developed for each measurement geometry. In order to correct for the cross-contributions among the various deposition loci, a series of surrogate human phantom structures was measured. The results of measurements of ²⁴¹Am depositions in six exposure cases have been evaluated using these new techniques and indicate that lung burden estimates could be in error by as much as 100 percent when corrections are not made for contributions to the count-rate from other organs.
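One way to read the cross-talk correction is as the solution of a small linear mixing problem: the net count rate in each measurement geometry is modeled as a weighted sum of the organ burdens, with off-diagonal weights (derived from phantom measurements in the study) describing cross-talk. The sketch below uses entirely hypothetical calibration values for illustration.

```python
# Minimal cross-talk unfolding sketch: counts = A @ burdens, solved for the burdens.
import numpy as np

# Rows: lung, liver and skull counting geometries; columns: lung, liver and skeleton burdens.
# Entries are detection efficiencies (counts per second per Bq); all values are hypothetical.
A = np.array([
    [4.0e-3, 9.0e-4, 3.0e-4],
    [7.0e-4, 5.0e-3, 2.0e-4],
    [2.0e-4, 1.0e-4, 2.5e-3],
])

net_counts = np.array([1.8, 2.6, 0.9])           # background-corrected count rates (cps)
burdens = np.linalg.solve(A, net_counts)         # unfold cross-talk, assign activity per organ (Bq)
print(dict(zip(["lung", "liver", "skeleton"], np.round(burdens, 1))))
```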
Predicting hepatitis B monthly incidence rates using weighted Markov chains and time series methods.
Shahdoust, Maryam; Sadeghifar, Majid; Poorolajal, Jalal; Javanrooh, Niloofar; Amini, Payam
2015-01-01
Hepatitis B (HB) is a major cause of global mortality. Accurately predicting the trend of the disease can provide an appropriate basis for health policy on disease prevention. This paper aimed to apply three different methods to predict monthly incidence rates of HB. This historical cohort study was conducted on the HB incidence data of Hamadan Province, in the west of Iran, from 2004 to 2012. The Weighted Markov Chain (WMC) method, based on Markov chain theory, and two time series models, Holt Exponential Smoothing (HES) and SARIMA, were applied to the data. The results of the applied methods were compared by the percentage of correctly predicted incidence rates. The monthly incidence rates were clustered into two clusters serving as the states of the Markov chain. The percentages of correct predictions for the first and second clusters were (100, 0) for WMC, (84, 67) for HES and (79, 47) for SARIMA. The overall incidence rate of HBV is estimated to decrease over time. The comparison of the three models indicated that, given the existing seasonality and non-stationarity, HES gave the most accurate prediction of the incidence rates.
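As an illustration of the time series side of the comparison, a minimal SARIMA fit to a monthly incidence series using statsmodels; the series here is synthetic and the (p,d,q)(P,D,Q,s) orders are assumptions, not the ones selected in the study:

import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

rates = np.random.default_rng(0).gamma(2.0, 1.0, size=108)  # placeholder monthly incidence rates

model = SARIMAX(rates, order=(1, 0, 1), seasonal_order=(1, 0, 1, 12))
fit = model.fit(disp=False)
print(fit.forecast(steps=12))  # predicted incidence for the next 12 months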
Examination of multi-model ensemble seasonal prediction methods using a simple climate system
NASA Astrophysics Data System (ADS)
Kang, In-Sik; Yoo, Jin Ho
2006-02-01
A simple climate model was designed as a proxy for the real climate system, and a number of prediction models were generated by slightly perturbing the physical parameters of the simple model. A set of long (240 years) historical hindcast predictions was performed with the various prediction models and used to examine several issues of multi-model ensemble seasonal prediction, such as the best ways of blending multiple models and the selection of models. Based on these results, we suggest a feasible way of maximizing the benefit of using multiple models in seasonal prediction. In particular, three types of multi-model ensemble prediction systems, i.e., the simple composite, the superensemble, and the composite after statistically correcting individual predictions (corrected composite), are examined and compared to each other. The superensemble has more of an overfitting problem than the others, especially for small training samples and/or weak external forcing, and the corrected composite produces the best prediction skill among the multi-model systems.
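A minimal sketch, under assumed array shapes, of the two simpler combination strategies named above: the simple composite (plain multi-model mean) and the corrected composite, which applies a per-model linear correction estimated over the training period before averaging (the superensemble would instead fit all model weights jointly):

import numpy as np

def simple_composite(test_preds):
    """test_preds: array of shape (n_models, n_times)."""
    return np.mean(test_preds, axis=0)

def corrected_composite(train_preds, train_obs, test_preds):
    corrected = []
    for m in range(train_preds.shape[0]):
        slope, intercept = np.polyfit(train_preds[m], train_obs, 1)  # per-model correction
        corrected.append(slope * test_preds[m] + intercept)
    return np.mean(corrected, axis=0)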
NASA Astrophysics Data System (ADS)
Moghim, S.; Hsu, K.; Bras, R. L.
2013-12-01
General Circulation Models (GCMs) are used to predict circulation and energy transfers between the atmosphere and the land. It is known that these models produce biased results, which affects their use. This work proposes a new method for bias correction: the equidistant cumulative distribution function-artificial neural network (EDCDFANN) procedure. The method uses artificial neural networks (ANNs) as a surrogate model to estimate bias-corrected temperature, given an identification of the system derived from GCM output variables. A two-layer feed-forward neural network is trained with observations during a historical period, and the adjusted network can then be used to predict bias-corrected temperature for future periods. To capture extreme values, this method is combined with the equidistant CDF matching method (EDCDF, Li et al. 2010). The proposed method is tested with Community Climate System Model (CCSM3) outputs, using air and skin temperature, specific humidity, and shortwave and longwave radiation as inputs to the ANN. The method decreases the mean square error and increases the spatial correlation between the modeled temperature and the observed one. The results indicate that EDCDFANN has the potential to remove the biases of the model outputs.
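A minimal sketch of the two ingredients, under stated assumptions: an ANN surrogate (here scikit-learn's MLPRegressor) mapping GCM output variables to observed temperature, plus an equidistant CDF matching helper for the tails; the synthetic data and single hidden layer are illustrative only:

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X_hist = rng.normal(size=(200, 5))  # placeholders for air/skin temperature, humidity, SW/LW radiation
t_obs = X_hist @ np.array([0.8, 0.3, 0.1, 0.05, 0.05]) + rng.normal(0, 0.1, 200)

ann = MLPRegressor(hidden_layer_sizes=(10,), max_iter=3000, random_state=0)
ann.fit(X_hist, t_obs)  # train on the historical period

def edcdf(x_future, x_hist, obs_hist):
    # equidistant CDF matching: add the quantile-dependent offset (obs - model)
    q = np.clip(np.searchsorted(np.sort(x_hist), x_future) / len(x_hist), 0.01, 0.99)
    return x_future + (np.quantile(obs_hist, q) - np.quantile(x_hist, q))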
A New Global Regression Analysis Method for the Prediction of Wind Tunnel Model Weight Corrections
NASA Technical Reports Server (NTRS)
Ulbrich, Norbert Manfred; Bridge, Thomas M.; Amaya, Max A.
2014-01-01
A new global regression analysis method is discussed that predicts wind tunnel model weight corrections for strain-gage balance loads during a wind tunnel test. The method determines corrections by combining "wind-on" model attitude measurements with least squares estimates of the model weight and center of gravity coordinates that are obtained from "wind-off" data points. The method treats the least squares fit of the model weight separately from the fit of the center of gravity coordinates. Therefore, it performs two fits of "wind-off" data points and uses the least squares estimator of the model weight as an input for the fit of the center of gravity coordinates. Explicit equations for the least squares estimators of the weight and center of gravity coordinates are derived that simplify the implementation of the method in the data system software of a wind tunnel. In addition, recommendations for sets of "wind-off" data points are made that take typical model support system constraints into account. Explicit equations of the confidence intervals on the model weight and center of gravity coordinates and two different error analyses of the model weight prediction are also discussed in the appendices of the paper.
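A toy numerical sketch of the two-step fit described above, using a simplified planar force/moment model (an assumption made here for illustration, not the paper's full equations): the weight W is estimated first from wind-off force components, and is then held fixed while the center-of-gravity coordinates are fit from the wind-off pitching moments:

import numpy as np

theta = np.radians([-10.0, -5.0, 0.0, 5.0, 10.0])  # wind-off pitch angles
W_true, x_true, z_true = 500.0, 0.10, 0.02         # synthetic "truth"
N = W_true * np.cos(theta)                         # normal-force tare
A = -W_true * np.sin(theta)                        # axial-force tare
PM = W_true * (x_true * np.cos(theta) + z_true * np.sin(theta))  # moment tare

# Fit 1: least-squares estimate of the model weight from the force equations
g = np.concatenate([np.cos(theta), -np.sin(theta)])
f = np.concatenate([N, A])
W_hat = (g @ f) / (g @ g)

# Fit 2: with W fixed, least-squares estimate of the CG coordinates from the moments
B = W_hat * np.column_stack([np.cos(theta), np.sin(theta)])
(x_hat, z_hat), *_ = np.linalg.lstsq(B, PM, rcond=None)
print(W_hat, x_hat, z_hat)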
Asmadi, Aldi; Neumann, Marcus A; Kendrick, John; Girard, Pascale; Perrin, Marc-Antoine; Leusen, Frank J J
2009-12-24
In the 2007 blind test of crystal structure prediction hosted by the Cambridge Crystallographic Data Centre (CCDC), a hybrid DFT/MM method correctly ranked each of the four experimental structures as having the lowest lattice energy of all the crystal structures predicted for each molecule. The work presented here further validates this hybrid method by optimizing the crystal structures (experimental and submitted) of the first three CCDC blind tests held in 1999, 2001, and 2004. Except for the crystal structures of compound IX, all structures were reminimized and ranked according to their lattice energies. The hybrid method computes the lattice energy of a crystal structure as the sum of the DFT total energy and a van der Waals (dispersion) energy correction. Considering all four blind tests, the crystal structure with the lowest lattice energy corresponds to the experimentally observed structure for 12 out of 14 molecules. Moreover, good geometrical agreement is observed between the structures determined by the hybrid method and those measured experimentally. In comparison with the correct submissions made by the blind test participants, all hybrid optimized crystal structures (apart from compound II) have the smallest calculated root mean squared deviations from the experimentally observed structures. It is predicted that a new polymorph of compound V exists under pressure.
Empirical source strength correlations for rans-based acoustic analogy methods
NASA Astrophysics Data System (ADS)
Kube-McDowell, Matthew Tyndall
JeNo is a jet noise prediction code based on an acoustic analogy method developed by Mani, Gliebe, Balsa, and Khavaran. Using the flow predictions from a standard Reynolds-averaged Navier-Stokes computational fluid dynamics solver, JeNo predicts the overall sound pressure level and angular spectra for high-speed hot jets over a range of observer angles, with a processing time suitable for rapid design purposes. JeNo models the noise from hot jets as a combination of two types of noise sources: quadrupole sources dependent on velocity fluctuations, which represent the major noise of turbulent mixing, and dipole sources dependent on enthalpy fluctuations, which represent the effects of thermal variation. These two sources are modeled by JeNo as propagating independently into the far field, with no cross-correlation at the observer location. However, high-fidelity computational fluid dynamics solutions demonstrate that this assumption is false. In this thesis, the theory, assumptions, and limitations of the JeNo code are briefly discussed, and a modification to the acoustic analogy method is proposed in which the cross-correlation of the two primary noise sources is allowed to vary with the speed of the jet and the observer location. As a proof-of-concept implementation, an empirical correlation correction function is derived from comparisons between JeNo's noise predictions and a set of experimental measurements taken for the Air Force Aero-Propulsion Laboratory. The empirical correlation correction is then applied to JeNo's predictions for a separate data set of hot jets tested at NASA's Glenn Research Center. Metrics are derived to measure the qualitative and quantitative performance of JeNo's acoustic predictions, and the empirical correction is shown to provide a quantitative improvement in the noise prediction at low observer angles with no freestream flow, and a qualitative improvement in the presence of freestream flow. However, the results also demonstrate that there are underlying flaws in JeNo's ability to predict the behavior of a hot jet's acoustic signature at certain rear observer angles, and that this correlation correction is not able to correct these flaws.
NASA Astrophysics Data System (ADS)
Rogers, Jeffrey N.; Parrish, Christopher E.; Ward, Larry G.; Burdick, David M.
2018-03-01
Salt marsh vegetation tends to increase vertical uncertainty in light detection and ranging (lidar) derived elevation data, often causing the data to become ineffective for analysis of topographic features governing tidal inundation or vegetation zonation. Previous attempts at improving lidar data collected in salt marsh environments range from simply computing and subtracting the global elevation bias to more complex methods such as computing vegetation-specific, constant correction factors. The vegetation specific corrections can be used along with an existing habitat map to apply separate corrections to different areas within a study site. It is hypothesized here that correcting salt marsh lidar data by applying location-specific, point-by-point corrections, which are computed from lidar waveform-derived features, tidal-datum based elevation, distance from shoreline and other lidar digital elevation model based variables, using nonparametric regression will produce better results. The methods were developed and tested using full-waveform lidar and ground truth for three marshes in Cape Cod, Massachusetts, U.S.A. Five different model algorithms for nonparametric regression were evaluated, with TreeNet's stochastic gradient boosting algorithm consistently producing better regression and classification results. Additionally, models were constructed to predict the vegetative zone (high marsh and low marsh). The predictive modeling methods used in this study estimated ground elevation with a mean bias of 0.00 m and a standard deviation of 0.07 m (0.07 m root mean square error). These methods appear very promising for correction of salt marsh lidar data and, importantly, do not require an existing habitat map, biomass measurements, or image based remote sensing data such as multi/hyperspectral imagery.
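A minimal sketch of the regression step using scikit-learn's gradient boosting in place of TreeNet; the features and synthetic data are placeholders for the waveform- and DEM-derived variables named above:

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 4))  # e.g. waveform feature, tidal-datum elevation, shore distance, DEM slope
err = 0.3 * X[:, 0] + 0.1 * X[:, 1] + rng.normal(0, 0.05, 500)  # lidar-minus-ground-truth error

model = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05, max_depth=3)
model.fit(X, err)  # point-by-point error model

def corrected_elevation(lidar_z, features):
    return lidar_z - model.predict(features)  # subtract the predicted vegetation bias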
Building a Better Fragment Library for De Novo Protein Structure Prediction
de Oliveira, Saulo H. P.; Shi, Jiye; Deane, Charlotte M.
2015-01-01
Fragment-based approaches are the current standard for de novo protein structure prediction. These approaches rely on accurate and reliable fragment libraries to generate good structural models. In this work, we describe a novel method for structure fragment library generation and its application in fragment-based de novo protein structure prediction. The importance of correct testing procedures in assessing the quality of fragment libraries is demonstrated, in particular the exclusion of homologs to the target from the libraries to correctly simulate a de novo protein structure prediction scenario, something which surprisingly is not always done. We demonstrate that fragments presenting different predominant predicted secondary structures should be treated differently during the fragment library generation step and that exhaustive and random search strategies should both be used. This information was used to develop a novel method, Flib. On a validation set of 41 structurally diverse proteins, Flib libraries present both higher precision and coverage than two of the state-of-the-art methods, NNMake and HHFrag. Flib also achieves better precision and coverage on the set of 275 protein domains used in the two previous experiments of the Critical Assessment of Structure Prediction (CASP9 and CASP10). We compared Flib libraries against NNMake libraries in a structure prediction context. Of the 13 cases in which a correct answer was generated, Flib models were more accurate than NNMake models for 10. Flib is available for download at: http://www.stats.ox.ac.uk/research/proteins/resources. PMID:25901595
Patlewicz, Grace; Casati, Silvia; Basketter, David A; Asturiol, David; Roberts, David W; Lepoittevin, Jean-Pierre; Worth, Andrew P; Aschberger, Karin
2016-12-01
Predictive testing to characterize substances for their skin sensitization potential has historically been based on animal tests such as the Local Lymph Node Assay (LLNA). In recent years, regulations in the cosmetics and chemicals sectors have provided strong impetus to develop non-animal alternatives. Three test methods have undergone OECD validation: the direct peptide reactivity assay (DPRA), the KeratinoSens™ and the human Cell Line Activation Test (h-CLAT). Whilst these methods perform relatively well in predicting LLNA results, a concern raised is their ability to predict chemicals that need activation to be sensitizing (pre- or pro-haptens). This current study reviewed an EURL ECVAM dataset of 127 substances for which information was available in the LLNA and three non-animal test methods. Twenty eight of the sensitizers needed to be activated, with the majority being pre-haptens. These were correctly identified by 1 or more of the test methods. Six substances were categorized exclusively as pro-haptens, but were correctly identified by at least one of the cell-based assays. The analysis here showed that skin metabolism was not likely to be a major consideration for assessing sensitization potential and that sensitizers requiring activation could be identified correctly using one or more of the current non-animal methods. Published by Elsevier Inc.
Predicting helix orientation for coiled-coil dimers
Apgar, James R.; Gutwin, Karl N.; Keating, Amy E.
2008-01-01
The alpha-helical coiled coil is a structurally simple protein oligomerization or interaction motif consisting of two or more alpha helices twisted into a supercoiled bundle. Coiled coils can differ in their stoichiometry, helix orientation and axial alignment. Because of the near degeneracy of many of these variants, coiled coils pose a challenge to fold recognition methods for structure prediction. Whereas distinctions between some protein folds can be discriminated on the basis of hydrophobic/polar patterning or secondary structure propensities, the sequence differences that encode important details of coiled-coil structure can be subtle. This is emblematic of a larger problem in the field of protein structure and interaction prediction: that of establishing specificity between closely similar structures. We tested the behavior of different computational models on the problem of recognizing the correct orientation - parallel vs. antiparallel - of pairs of alpha helices that can form a dimeric coiled coil. For each of 131 examples of known structure, we constructed a large number of both parallel and antiparallel structural models and used these to assess the ability of five energy functions to recognize the correct fold. We also developed and tested three sequence-based approaches that make use of varying degrees of implicit structural information. The best structural methods performed similarly to the best sequence methods, correctly categorizing ∼81% of dimers. Steric compatibility with the fold was important for some coiled coils we investigated. For many examples, the correct orientation was determined by smaller energy differences between parallel and antiparallel structures distributed over many residues and energy components. Prediction methods that used structure but incorporated varying approximations and assumptions showed quite different behaviors when used to investigate energetic contributions to orientation preference. Sequence-based methods were sensitive to the choice of residue-pair interactions scored. PMID:18506779
A study of fault prediction and reliability assessment in the SEL environment
NASA Technical Reports Server (NTRS)
Basili, Victor R.; Patnaik, Debabrata
1986-01-01
An empirical study on estimation and prediction of faults, prediction of fault detection and correction effort, and reliability assessment in the Software Engineering Laboratory environment (SEL) is presented. Fault estimation using empirical relationships and fault prediction using curve fitting method are investigated. Relationships between debugging efforts (fault detection and correction effort) in different test phases are provided, in order to make an early estimate of future debugging effort. This study concludes with the fault analysis, application of a reliability model, and analysis of a normalized metric for reliability assessment and reliability monitoring during development of software.
Putz, A M; Tiezzi, F; Maltecca, C; Gray, K A; Knauer, M T
2018-02-01
The objective of this study was to compare and determine the optimal validation method when comparing accuracy from single-step GBLUP (ssGBLUP) to traditional pedigree-based BLUP. Field data included six litter size traits. Simulated data included ten replicates designed to mimic the field data in order to determine the method that was closest to the true accuracy. Data were split into training and validation sets. The methods used were as follows: (i) theoretical accuracy derived from the prediction error variance (PEV) of the direct inverse (iLHS), (ii) approximated accuracies from the accf90(GS) program in the BLUPF90 family of programs (Approx), (iii) correlation between predictions and the single-step GEBVs from the full data set (GEBV_Full), (iv) correlation between predictions and the corrected phenotypes of females from the full data set (Y_c), (v) correlation from method iv divided by the square root of the heritability (Y_ch) and (vi) correlation between sire predictions and the average of their daughters' corrected phenotypes (Y_cs). Accuracies from iLHS increased from 0.27 to 0.37 (37%) in the Large White. Approximation accuracies were very consistent and close in absolute value (0.41 to 0.43). Both iLHS and Approx were much less variable than the corrected phenotype methods (ranging from 0.04 to 0.27). On average, simulated data showed an increase in accuracy from 0.34 to 0.44 (29%) using ssGBLUP. Both iLHS and Y_ch approximated the increase well, 0.30 to 0.46 and 0.36 to 0.45, respectively. GEBV_Full performed poorly in both data sets and is not recommended. Results suggest that for within-breed selection, theoretical accuracy using PEV was consistent and accurate. When direct inversion is infeasible to get the PEV, correlating predictions to the corrected phenotypes divided by the square root of heritability is adequate given a large enough validation data set. © 2017 Blackwell Verlag GmbH.
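A minimal sketch of the two phenotype-based validation statistics (Y_c and Y_ch) under the stated definitions; inputs are assumed to be aligned NumPy arrays:

import numpy as np

def acc_yc(pred, y_corrected):
    # correlation between predictions and corrected phenotypes (Y_c)
    return np.corrcoef(pred, y_corrected)[0, 1]

def acc_ych(pred, y_corrected, h2):
    # the same correlation divided by sqrt(heritability) (Y_ch)
    return acc_yc(pred, y_corrected) / np.sqrt(h2)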
An empirical approach to improving tidal predictions using recent real-time tide gauge data
NASA Astrophysics Data System (ADS)
Hibbert, Angela; Royston, Samantha; Horsburgh, Kevin J.; Leach, Harry
2014-05-01
Classical harmonic methods of tidal prediction are often problematic in estuarine environments due to the distortion of tidal fluctuations in shallow water, which results in a disparity between predicted and observed sea levels. This is of particular concern in the Bristol Channel, where the error associated with tidal predictions is potentially greater due to an unusually large tidal range of around 12 m. As such predictions are fundamental to the short-term forecasting of High Water (HW) extremes, it is vital that alternative solutions are found. In a pilot study, using a year-long observational sea level record from the Port of Avonmouth in the Bristol Channel, the UK National Tidal and Sea Level Facility (NTSLF) tested the potential for reducing tidal prediction errors, using three alternatives to the Harmonic Method of tidal prediction. The three methods evaluated were (1) the use of Artificial Neural Network (ANN) models, (2) the Species Concordance technique and (3) a simple empirical procedure for correcting Harmonic Method High Water predictions based upon a few recent observations (referred to as the Empirical Correction Method). This latter method was then successfully applied to sea level records from an additional 42 of the 45 tide gauges that comprise the UK Tide Gauge Network. Consequently, it is to be incorporated into the operational systems of the UK Coastal Monitoring and Forecasting Partnership in order to improve short-term sea level predictions for the UK and in particular, the accurate estimation of HW extremes.
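A minimal sketch of an empirical correction of the kind the pilot study describes, under the assumption that each harmonic high-water prediction is shifted by the mean error of the last few observed high waters (the window length here is illustrative):

import numpy as np

def corrected_hw(predicted_hw, recent_predictions, recent_observations, window=5):
    recent_predictions = np.asarray(recent_predictions[-window:])
    recent_observations = np.asarray(recent_observations[-window:])
    bias = np.mean(recent_observations - recent_predictions)  # recent harmonic-method error
    return predicted_hw + bias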
Walkowski, Slawomir; Lundin, Mikael; Szymas, Janusz; Lundin, Johan
2015-01-01
The way of viewing whole slide images (WSI) can be tracked and analyzed. In particular, it can be useful to learn how medical students view WSIs during exams and how their viewing behavior is correlated with the correctness of the answers they give. We used a software-based view path tracking method that enabled gathering data about the viewing behavior of multiple simultaneous WSI users. This approach was implemented and applied during two practical exams in oral pathology in 2012 (88 students) and 2013 (91 students), which were based on questions with attached WSIs. Gathered data were visualized and analyzed in multiple ways. As a part of an extended analysis, we attempted to use machine learning approaches to predict the correctness of students' answers based on how they viewed WSIs. We compared the results of analyses for 2012 and 2013 - done for a single question, for student groups, and for a set of questions. The overall patterns were generally consistent across the two years. Moreover, viewing behavior data appeared to have some potential for predicting answer correctness, and some outcomes of the machine learning approaches were in the right direction. However, the general prediction results were not satisfactory in terms of precision and recall. Our work confirmed that the view path tracking method is useful for discovering the viewing behavior of students analyzing WSIs. It provided multiple useful insights in this area, and the general results of our analyses were consistent across the two exams. On the other hand, predicting answer correctness appeared to be a difficult task - students' answers seem to be often unpredictable.
Campbell-Washburn, Adrienne E; Xue, Hui; Lederman, Robert J; Faranesh, Anthony Z; Hansen, Michael S
2015-01-01
Purpose: MRI-guided interventions demand high frame-rate imaging, making fast imaging techniques such as spiral imaging and echo planar imaging (EPI) appealing. In this study, we implemented a real-time distortion correction framework to enable the use of these fast acquisitions for interventional MRI. Methods: Distortions caused by gradient waveform inaccuracies were corrected using the gradient impulse response function (GIRF), which was measured by standard equipment and saved as a calibration file on the host computer. This file was used at runtime to calculate the predicted k-space trajectories for image reconstruction. Additionally, the off-resonance reconstruction frequency was modified in real-time to interactively de-blur spiral images. Results: Real-time distortion correction for arbitrary image orientations was achieved in phantoms and healthy human volunteers. The GIRF-predicted k-space trajectories matched measured k-space trajectories closely for spiral imaging. Spiral and EPI image distortion was visibly improved using the GIRF-predicted trajectories. The GIRF calibration file showed no systematic drift in 4 months and was demonstrated to correct distortions after 30 minutes of continuous scanning despite gradient heating. Interactive off-resonance reconstruction was used to sharpen anatomical boundaries during continuous imaging. Conclusions: This real-time distortion correction framework will enable the use of these high frame-rate imaging methods for MRI-guided interventions. PMID:26114951
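A minimal sketch of the trajectory-prediction step, assuming the GIRF is stored as a complex frequency response sampled to match the FFT of the nominal gradient waveform (one gradient axis shown; gamma_bar in Hz/T):

import numpy as np

def predicted_kspace(g_nominal, girf, dt, gamma_bar=42.576e6):
    # distort the nominal waveform by the measured gradient impulse response
    g_actual = np.real(np.fft.ifft(np.fft.fft(g_nominal) * girf))
    # integrate the actual gradient to get the predicted trajectory (cycles/m)
    return gamma_bar * np.cumsum(g_actual) * dt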
NASA Astrophysics Data System (ADS)
Nepal, Niraj K.; Ruzsinszky, Adrienn; Bates, Jefferson E.
2018-03-01
The ground state structural and energetic properties for rocksalt and cesium chloride phases of the cesium halides were explored using the random phase approximation (RPA) and beyond-RPA methods to benchmark the nonempirical SCAN meta-GGA and its empirical dispersion corrections. The importance of nonadditivity and higher-order multipole moments of dispersion in these systems is discussed. RPA generally predicts the equilibrium volume for these halides within 2.4% of the experimental value, while beyond-RPA methods utilizing the renormalized adiabatic LDA (rALDA) exchange-correlation kernel are typically within 1.8%. The zero-point vibrational energy is small and shows that the stability of these halides is purely due to electronic correlation effects. The rAPBE kernel as a correction to RPA overestimates the equilibrium volume and could not predict the correct phase ordering in the case of cesium chloride, while the rALDA kernel consistently predicted results in agreement with the experiment for all of the halides. However, due to its reasonable accuracy with lower computational cost, SCAN+rVV10 proved to be a good alternative to the RPA-like methods for describing the properties of these ionic solids.
A Sensor Dynamic Measurement Error Prediction Model Based on NAPSO-SVM.
Jiang, Minlan; Jiang, Lan; Jiang, Dingde; Li, Fei; Song, Houbing
2018-01-15
Dynamic measurement error correction is an effective way to improve sensor precision. Dynamic measurement error prediction is an important part of error correction, and support vector machine (SVM) is often used for predicting the dynamic measurement errors of sensors. Traditionally, the SVM parameters were always set manually, which cannot ensure the model's performance. In this paper, a SVM method based on an improved particle swarm optimization (NAPSO) is proposed to predict the dynamic measurement errors of sensors. Natural selection and simulated annealing are added in the PSO to raise the ability to avoid local optima. To verify the performance of NAPSO-SVM, three types of algorithms are selected to optimize the SVM's parameters: the particle swarm optimization algorithm (PSO), the improved PSO optimization algorithm (NAPSO), and the glowworm swarm optimization (GSO). The dynamic measurement error data of two sensors are applied as the test data. The root mean squared error and mean absolute percentage error are employed to evaluate the prediction models' performances. The experimental results show that among the three tested algorithms the NAPSO-SVM method has higher prediction precision and smaller prediction errors, and that it is an effective method for predicting the dynamic measurement errors of sensors.
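A minimal sketch of the core idea, with plain PSO (no natural selection or simulated annealing) tuning the SVR parameters (C, gamma) by cross-validated error; it stands in for the paper's NAPSO variant and all settings are assumptions:

import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

def fitness(params, X, y):
    C, gamma = np.exp(params)  # search in log space to keep parameters positive
    mse = -cross_val_score(SVR(C=C, gamma=gamma), X, y,
                           scoring="neg_mean_squared_error", cv=3).mean()
    return mse

def pso(X, y, n_particles=10, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-3, 3, size=(n_particles, 2))  # log(C), log(gamma)
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_f = np.array([fitness(p, X, y) for p in pos])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, 1))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = pos + vel
        f = np.array([fitness(p, X, y) for p in pos])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = pos[improved], f[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return np.exp(gbest)  # best (C, gamma)

# usage: C, gamma = pso(X_train, y_train) on a sensor-error training set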
Improve the prediction of RNA-binding residues using structural neighbours.
Li, Quan; Cao, Zanxia; Liu, Haiyan
2010-03-01
The interactions of RNA-binding proteins (RBPs) with RNA play key roles in managing some of the cell's basic functions. The identification and prediction of RNA binding sites is important for understanding the RNA-binding mechanism. Computational approaches are being developed to predict RNA-binding residues based on sequence- or structure-derived features. To achieve higher prediction accuracy, improvements on current prediction methods are necessary. We identified that the structural neighbors of RNA-binding and non-RNA-binding residues have different amino acid compositions. Combining this structure-derived feature with evolutionary (PSSM) and other structural information (secondary structure and solvent accessibility) significantly improves the predictions over existing methods. Using a multiple linear regression approach and 6-fold cross validation, our best model achieves an overall correct rate of 87.8% and an MCC of 0.47, with a specificity of 93.4%, correctly predicting 52.4% of the RNA-binding residues for a dataset containing 107 non-homologous RNA-binding proteins. Compared with existing methods, including the amino acid compositions of structural neighbors leads to a clear improvement. A web server was developed for predicting RNA-binding residues in a protein sequence (or structure), which is available at http://mcgill.3322.org/RNA/.
Higher Order Corrections in the CoLoRFulNNLO Framework
NASA Astrophysics Data System (ADS)
Somogyi, G.; Kardos, A.; Szőr, Z.; Trócsányi, Z.
We discuss the CoLoRFulNNLO method for computing higher order radiative corrections to jet cross sections in perturbative QCD. We apply our method to the calculation of event shapes and jet rates in three-jet production in electron-positron annihilation. We validate our code by comparing our predictions to previous results in the literature and present the jet cone energy fraction distribution at NNLO accuracy. We also present preliminary NNLO results for the three-jet rate using the Durham jet clustering algorithm matched to resummed predictions at NLL accuracy, and a comparison to LEP data.
Appalakondaiah, S; Vaitheeswaran, G; Lebègue, S
2015-06-18
We have performed ab initio calculations for a series of energetic solids to explore their structural and electronic properties. To evaluate the ground state volume of these molecular solids, different dispersion correction methods were taken into account in DFT, namely the Tkatchenko-Scheffler method (with and without self-consistent screening), Grimme's methods (D2, D3(BJ)), and the vdW-DF method. Our results reveal that dispersion correction methods are essential in understanding these complex structures with van der Waals interactions and hydrogen bonding. The calculated ground state volumes and bulk moduli show that the performance of each method is not unique, and therefore a careful examination is mandatory for interpreting theoretical predictions. This work also emphasizes the importance of quasiparticle calculations in predicting the band gap, which is obtained here with the GW approximation. We find that the obtained band gaps range from 4 to 7 eV for the different compounds, indicating their insulating nature. In addition, we show the essential role of quasiparticle band structure calculations in correlating the gap with the energetic properties.
Predicting chaos in memristive oscillator via harmonic balance method.
Wang, Xin; Li, Chuandong; Huang, Tingwen; Duan, Shukai
2012-12-01
This paper studies the possible chaotic behaviors in a memristive oscillator with cubic nonlinearities via the harmonic balance method, also called the describing function method. This method was originally proposed to detect chaos in the classical Chua's circuit. We first transform the memristive oscillator system under consideration into a Lur'e model and present the prediction of the existence of chaotic behaviors. To ensure that the prediction result is correct, the distortion index is also measured. Numerical simulations are presented to show the effectiveness of the theoretical results.
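For reference, a hedged LaTeX sketch of the standard harmonic balance setup for a Lur'e system $\dot{x} = Ax + Bu$, $y = Cx$, $u = -f(y)$; a predicted periodic solution of amplitude $a$ and frequency $\omega$ solves the describing-function condition:

\begin{align}
  G(s) &= C\,(sI - A)^{-1} B, \\
  N(a) &= \frac{1}{\pi a}\int_0^{2\pi} f(a\sin\phi)\,\sin\phi\,\mathrm{d}\phi, \\
  1 + N(a)\,G(j\omega) &= 0.
\end{align}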
The power-proportion method for intracranial volume correction in volumetric imaging analysis.
Liu, Dawei; Johnson, Hans J; Long, Jeffrey D; Magnotta, Vincent A; Paulsen, Jane S
2014-01-01
In volumetric brain imaging analysis, volumes of brain structures are typically assumed to be proportional or linearly related to intracranial volume (ICV). However, evidence abounds that many brain structures have power law relationships with ICV. To take this relationship into account in volumetric imaging analysis, we propose a power-law-based method, the power-proportion method, for ICV correction. The performance of the new method is demonstrated using data from the PREDICT-HD study.
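A minimal sketch of a power-law ICV correction of this kind, assuming the exponent is estimated from a log-log fit in a reference group and volumes are then rescaled to a reference ICV:

import numpy as np

def power_proportion_correct(volumes, icv, icv_ref=None):
    b = np.polyfit(np.log(icv), np.log(volumes), 1)[0]  # fitted power-law exponent
    icv_ref = np.mean(icv) if icv_ref is None else icv_ref
    return volumes * (icv_ref / icv) ** b  # volumes adjusted to the reference ICV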
The Recalibrated Sunspot Number: Impact on Solar Cycle Predictions
NASA Astrophysics Data System (ADS)
Clette, F.; Lefevre, L.
2017-12-01
Recently, and for the first time since their creation, the sunspot number and group number series were entirely revisited, and a first fully recalibrated version was officially released in July 2015 by the World Data Center SILSO (Brussels). Those reference long-term series are widely used as input data or as a calibration reference by various solar cycle prediction methods. Therefore, past predictions may now need to be redone using the new sunspot series, and methods already used for predicting cycle 24 will require adaptations before attempting predictions of the next cycles. In order to clarify the nature of the applied changes, we describe the different corrections applied to the sunspot and group number series, which affect extended time periods and can reach up to 40%. While some changes simply involve constant scale factors, other corrections vary with time or follow the solar cycle modulation. Depending on the prediction method and on the selected time interval, this can lead to different responses and biases. Moreover, together with the new series, standard error estimates are progressively being added to the new sunspot numbers, which may help derive more accurate uncertainties for predicted activity indices. We conclude with the new round of recalibration now being undertaken in the framework of a broad multi-team collaboration articulated around upcoming ISSI workshops, and outline the corrections that can still be expected as part of a permanent upgrading and quality-control process. From now on, sunspot-based predictive models should be made more adaptable, and regular updates of predictions should become common practice in order to track periodic upgrades of the sunspot number series, just as is done when using other modern solar observational series.
Li, Chenzhe; Thampy, Sampreetha; Zheng, Yongping; Kweun, Joshua M; Ren, Yixin; Chan, Julia Y; Kim, Hanchul; Cho, Maenghyo; Kim, Yoon Young; Hsu, Julia W P; Cho, Kyeongjae
2016-03-31
Understanding and effectively predicting the thermal stability of ternary transition metal oxides with heavy elements using first-principles simulations is vital for understanding the performance of advanced materials. In this work, we have investigated the thermal stability of mullite RMn2O5 (R = Bi, Pr, Sm, or Gd) structures by constructing temperature phase diagrams using an efficient mixed generalized gradient approximation (GGA) and GGA + U method. Simulation-predicted stability regions without corrections on the heavy elements show a 4-200 K underestimation compared to our experimental results. We have found that the number of d/f electrons in the heavy elements shows a linear relationship with the prediction deviation. Further correction of the strongly correlated electrons in the heavy elements could significantly reduce the prediction deviations. Our corrected simulation results demonstrate that further correction of R-site elements in RMn2O5 could effectively reduce the underestimation of the density functional theory-predicted decomposition temperature to within 30 K. Therefore, it can produce an accurate thermal stability prediction for complex ternary transition metal oxide compounds with heavy elements.
Illias, Hazlee Azil; Chai, Xin Rui; Abu Bakar, Ab Halim; Mokhlis, Hazlie
2015-01-01
It is important to predict the incipient fault in transformer oil accurately so that maintenance of the transformer oil can be performed correctly, reducing the cost of maintenance and minimising errors. Dissolved gas analysis (DGA) has been widely used to predict incipient faults in power transformers. However, the existing DGA methods sometimes yield inaccurate predictions of the incipient fault in transformer oil because each method is only suitable for certain conditions. Many previous works have reported on the use of intelligence methods to predict transformer faults. However, it is believed that the accuracy of the previously proposed methods can still be improved. Since artificial neural network (ANN) and particle swarm optimisation (PSO) techniques have never been used in the previously reported work, this work proposes a combination of ANN and various PSO techniques to predict the transformer incipient fault. The advantages of PSO are simplicity and easy implementation. The effectiveness of the various PSO techniques in combination with ANN is validated by comparison with the results from the actual fault diagnosis, an existing diagnosis method and ANN alone. Comparison of the results from the proposed methods with the previously reported work was also performed to show the improvement of the proposed methods. It was found that the proposed ANN-Evolutionary PSO method yields a higher percentage of correct transformer fault type identification than the existing diagnosis method and previously reported works.
A two-dimensional matrix correction for off-axis portal dose prediction errors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bailey, Daniel W.; Department of Radiation Medicine, Roswell Park Cancer Institute, Buffalo, New York 14263; Kumaraswamy, Lalith
2013-05-15
Purpose: This study presents a follow-up to a modified calibration procedure for portal dosimetry published by Bailey et al. ['An effective correction algorithm for off-axis portal dosimetry errors,' Med. Phys. 36, 4089-4094 (2009)]. A commercial portal dose prediction system exhibits disagreement of up to 15% (calibrated units) between measured and predicted images as off-axis distance increases. The previous modified calibration procedure accounts for these off-axis effects in most regions of the detecting surface, but is limited by the simplistic assumption of radial symmetry. Methods: We find that a two-dimensional (2D) matrix correction, applied to each calibrated image, accounts for off-axis prediction errors in all regions of the detecting surface, including those still problematic after the radial correction is performed. The correction matrix is calculated by quantitative comparison of predicted and measured images that span the entire detecting surface. The correction matrix was verified for dose-linearity, and its effectiveness was verified on a number of test fields. The 2D correction was employed to retrospectively examine 22 off-axis, asymmetric electronic-compensation breast fields, five intensity-modulated brain fields (moderate-high modulation) manipulated for far off-axis delivery, and 29 intensity-modulated clinical fields of varying complexity in the central portion of the detecting surface. Results: Employing the matrix correction to the off-axis test fields and clinical fields, predicted vs measured portal dose agreement improves by up to 15%, producing up to 10% better agreement than the radial correction in some areas of the detecting surface. Gamma evaluation analyses (3 mm, 3% global, 10% dose threshold) of predicted vs measured portal dose images demonstrate pass rate improvement of up to 75% with the matrix correction, producing pass rates that are up to 30% higher than those resulting from the radial correction technique alone. As in the 1D correction case, the 2D algorithm leaves the portal dosimetry process virtually unchanged in the central portion of the detector, and thus these correction algorithms are not needed for centrally located fields of moderate size (at least, in the case of 6 MV beam energy). Conclusion: The 2D correction improves the portal dosimetry results for those fields for which the 1D correction proves insufficient, especially in the inplane, off-axis regions of the detector. This 2D correction neglects the relatively smaller discrepancies that may be caused by backscatter from nonuniform machine components downstream from the detecting layer.
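A minimal sketch of how such a 2D matrix correction could be formed and applied, assuming stacks of co-registered measured and predicted calibration images that together span the detecting surface (the per-pixel median ratio is an illustrative choice, not the paper's exact procedure):

import numpy as np

def build_correction_matrix(measured_stack, predicted_stack):
    # stacks of shape (n_fields, ny, nx); per-pixel measured/predicted ratio
    return np.median(measured_stack / predicted_stack, axis=0)

def correct_predicted_image(predicted_image, correction_matrix):
    return predicted_image * correction_matrix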
Blind predictions of protein interfaces by docking calculations in CAPRI.
Lensink, Marc F; Wodak, Shoshana J
2010-11-15
Reliable prediction of the amino acid residues involved in protein-protein interfaces can provide valuable insight into protein function and inform mutagenesis studies and drug design applications. A fast-growing number of methods are being proposed for predicting protein interfaces, using structural information, energetic criteria, or sequence conservation, or by integrating multiple criteria and approaches. Overall, however, their performance remains limited, especially when applied to nonobligate protein complexes, where the individual components are also stable on their own. Here, we evaluate interface predictions derived from protein-protein docking calculations. To this end we measure the overlap between the interfaces in models of protein complexes submitted by 76 participants in CAPRI (Critical Assessment of Predicted Interactions) and those of 46 observed interfaces in 20 CAPRI targets corresponding to nonobligate complexes. Our evaluation considers multiple models for each target interface, submitted by different participants, using a variety of docking methods. Although this results in a substantial variability in the prediction performance across participants and targets, clear trends emerge. Docking methods that perform best in our evaluation predict interfaces with average recall and precision levels of about 60%, for a small majority (60%) of the analyzed interfaces. These levels are significantly higher than those obtained for nonobligate complexes by most extant interface prediction methods. We find furthermore that a sizable fraction (24%) of the interfaces in models ranked as incorrect in the CAPRI assessment are actually correctly predicted (recall and precision ≥50%), and that these models contribute to 70% of the correct docking-based interface predictions overall. Our analysis proves that docking methods are much more successful in identifying interfaces than in predicting complexes, and suggests that these methods have an excellent potential of addressing the interface prediction challenge. © 2010 Wiley-Liss, Inc.
NASA Technical Reports Server (NTRS)
Macwilkinson, D. G.; Blackerby, W. T.; Paterson, J. H.
1974-01-01
The degree of cruise drag correlation on the C-141A aircraft is determined between predictions based on wind tunnel test data and flight test results. An analysis of wind tunnel tests on a 0.0275 scale model at Reynolds numbers up to 3.05 million, based on the mean aerodynamic chord (MAC), is reported. Model support interference corrections are evaluated through a series of tests, and fully corrected model data are analyzed to provide details on model component interference factors. It is shown that predicted minimum profile drag for the complete configuration agrees within 0.75% of flight test data, using a wind tunnel extrapolation method based on flat plate skin friction and component shape factors. An alternative method of extrapolation, based on computed profile drag from a subsonic viscous theory, results in a prediction four percent lower than flight test data.
Development of a Jet Noise Prediction Method for Installed Jet Configurations
NASA Technical Reports Server (NTRS)
Hunter, Craig A.; Thomas, Russell H.
2003-01-01
This paper describes the development of the Jet3D noise prediction method and its application to heated jets with complex three-dimensional flow fields and installation effects. Noise predictions were made for four separate bypass-ratio-five nozzle configurations tested in the NASA Langley Jet Noise Laboratory. These configurations consist of a round core and fan nozzle with and without a pylon, and an eight-chevron core nozzle and round fan nozzle with and without a pylon. Predicted SPL data were in good agreement with experimental noise measurements up to a 121 deg inlet angle, beyond which Jet3D under-predicted low frequency levels. This is due to inherent limitations in the formulation of Lighthill's Acoustic Analogy used in Jet3D, and will be corrected in ongoing development. Jet3D did an excellent job predicting full-scale EPNL for non-chevron configurations, and captured the effect of the pylon, correctly predicting a reduction in EPNL. EPNL predictions for chevron configurations were not in good agreement with measured data, likely due to the lower mixing and longer potential cores in the CFD simulations of these cases.
The Stokes-Einstein relation at moderate Schmidt number.
Balboa Usabiaga, Florencio; Xie, Xiaoyi; Delgado-Buscalioni, Rafael; Donev, Aleksandar
2013-12-07
The Stokes-Einstein relation for the self-diffusion coefficient of a spherical particle suspended in an incompressible fluid is an asymptotic result in the limit of large Schmidt number, that is, when momentum diffuses much faster than the particle. When the Schmidt number is moderate, which happens in most particle methods for hydrodynamics, deviations from the Stokes-Einstein prediction are expected. We study these corrections computationally using a recently developed minimally resolved method for coupling particles to an incompressible fluctuating fluid in both two and three dimensions. We find that for moderate Schmidt numbers the diffusion coefficient is reduced relative to the Stokes-Einstein prediction by an amount inversely proportional to the Schmidt number in both two and three dimensions. We find, however, that the Einstein formula is obeyed at all Schmidt numbers, consistent with linear response theory. The mismatch arises because thermal fluctuations affect the drag coefficient for a particle due to the nonlinear nature of the fluid-particle coupling. The numerical data are in good agreement with an approximate self-consistent theory, which can be used to estimate finite-Schmidt number corrections in a variety of methods. Our results indicate that the corrections to the Stokes-Einstein formula come primarily from the fact that the particle itself diffuses together with the momentum. Our study separates effects coming from corrections to no-slip hydrodynamics from those of finite separation of time scales, allowing for a better understanding of widely observed deviations from the Stokes-Einstein prediction in particle methods such as molecular dynamics.
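For reference, a hedged LaTeX summary of the scaling discussed: the Stokes-Einstein value for a no-slip sphere in three dimensions, and a finite-Schmidt-number correction in which $\alpha$ is an order-one constant left unspecified here:

\begin{align}
  D_{\mathrm{SE}} &= \frac{k_B T}{6\pi\eta R}, \\
  D(\mathrm{Sc}) &\approx D_{\mathrm{SE}}\left(1 - \frac{\alpha}{\mathrm{Sc}}\right),
  \qquad \mathrm{Sc} = \frac{\nu}{D}.
\end{align}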
A Review of Computational Intelligence Methods for Eukaryotic Promoter Prediction.
Singh, Shailendra; Kaur, Sukhbir; Goel, Neelam
2015-01-01
In past decades, the prediction of genes in DNA sequences has attracted the attention of many researchers, but the complex structure of DNA makes it extremely intricate to locate gene positions correctly. A large number of regulatory regions in DNA help in the transcription of a gene. The promoter is one such region, and finding its location is a challenging problem. Various computational methods for promoter prediction have been developed over the past few years. This paper reviews these promoter prediction methods. Several difficulties and pitfalls encountered by these methods are also detailed, along with future research directions.
Hu, Meng; Müller, Erik; Schymanski, Emma L; Ruttkies, Christoph; Schulze, Tobias; Brack, Werner; Krauss, Martin
2018-03-01
In nontarget screening, structure elucidation of small molecules from high resolution mass spectrometry (HRMS) data is challenging, particularly the selection of the most likely candidate structure among the many retrieved from compound databases. Several fragmentation and retention prediction methods have been developed to improve this candidate selection. In order to evaluate their performance, we compared two in silico fragmenters (MetFrag and CFM-ID) and two retention time prediction models (based on the chromatographic hydrophobicity index (CHI) and on log D). A set of 78 known organic micropollutants was analyzed by liquid chromatography coupled to a LTQ Orbitrap HRMS with electrospray ionization (ESI) in positive and negative mode using two fragmentation techniques with different collision energies. Both fragmenters (MetFrag and CFM-ID) performed well for most compounds, on average ranking the correct candidate structure within the top 25% for ESI+ mode and within the top 22 to 37% for ESI- mode. The rank of the correct candidate structure improved slightly when MetFrag and CFM-ID were combined. For unknown compounds detected in both ESI+ and ESI-, positive mode mass spectra were generally better for further structure elucidation. Both retention prediction models performed reasonably well for the more hydrophobic compounds but not for early eluting hydrophilic substances. The log D prediction showed better accuracy than the CHI model. Although the two fragmentation prediction methods are more diagnostic and sensitive for candidate selection, the inclusion of retention prediction by calculating a consensus score with optimized weighting can improve the ranking of correct candidates compared to the individual methods. Graphical abstract: Consensus workflow for combining fragmentation and retention prediction in LC-HRMS-based micropollutant identification.
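A minimal sketch of a weighted consensus score of this kind; the 0-1 rescaling and the weight value are assumptions, not the optimized weighting from the paper:

import numpy as np

def consensus_rank(frag_scores, rt_scores, w=0.8):
    f = (frag_scores - frag_scores.min()) / np.ptp(frag_scores)  # rescale to 0-1
    r = (rt_scores - rt_scores.min()) / np.ptp(rt_scores)
    consensus = w * f + (1 - w) * r
    return np.argsort(-consensus)  # candidate indices, best first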
Dessimoz, Christophe; Boeckmann, Brigitte; Roth, Alexander C J; Gonnet, Gaston H
2006-01-01
Correct orthology assignment is a critical prerequisite of numerous comparative genomics procedures, such as function prediction, construction of phylogenetic species trees and genome rearrangement analysis. We present an algorithm for the detection of non-orthologs that arise by mistake in current orthology classification methods based on genome-specific best hits, such as the COGs database. The algorithm works with pairwise distance estimates, rather than computationally expensive and error-prone tree-building methods. The accuracy of the algorithm is evaluated through verification of the distribution of predicted cases, case-by-case phylogenetic analysis and comparisons with predictions from other projects using independent methods. Our results show that a very significant fraction of the COG groups include non-orthologs: using conservative parameters, the algorithm detects non-orthology in a third of all COG groups. Consequently, sequence analysis sensitive to correct orthology assignments will greatly benefit from these findings.
Transient Spectra in TDDFT: Corrections and Correlations
NASA Astrophysics Data System (ADS)
Parkhill, John; Nguyen, Triet
We introduce an atomistic, all-electron, black-box electronic structure code to simulate transient absorption (TA) spectra and apply it to pyrazole and a GFP chromophore derivative. The method is an application of OSCF2, our dissipative extension of time-dependent density functional theory. We compare our simulated spectra directly with recent ultrafast spectroscopic experiments, showing that they are usefully predicted. We also relate bleaches in the TA signal to Fermi blocking, which would be missed in a simplified model. An important ingredient in the method is the stationary-TDDFT correction scheme recently put forward by Fischer, Govind, and Cramer, which allows us to overcome a limitation of adiabatic TDDFT. We demonstrate that OSCF2 is able to predict both the energies of bleaches and induced absorptions, as well as the decay of the transient spectrum, with only the molecular structure as input. We will also discuss corrections that resolve the non-resonant behavior of driven TDDFT, and correlated corrections to mean-field dynamics.
On the calibration process of film dosimetry: OLS inverse regression versus WLS inverse prediction.
Crop, F; Van Rompaye, B; Paelinck, L; Vakaet, L; Thierens, H; De Wagter, C
2008-07-21
The purpose of this study was both to put forward a statistically correct model for film calibration and to optimize this process. A reliable calibration is needed in order to perform accurate reference dosimetry with radiographic (Gafchromic) film. Sometimes an ordinary least squares simple linear (in the parameters) regression is applied to the dose-optical-density (OD) curve, with the dose as a function of OD (inverse regression) or OD as a function of dose (inverse prediction). The application of a simple linear regression fit is an invalid method because the heteroscedasticity of the data is not taken into account. This can lead to erroneous results originating from the calibration process itself, and thus to a lower accuracy. In this work, we compare the ordinary least squares (OLS) inverse regression method with the correct weighted least squares (WLS) inverse prediction method for creating calibration curves. We found that the OLS inverse regression method could lead to a prediction bias of up to 7.3 cGy at 300 cGy and total prediction errors of 3% or more for Gafchromic EBT film. Application of the WLS inverse prediction method resulted in a maximum prediction bias of 1.4 cGy and total prediction errors below 2% in a 0-400 cGy range. We developed a Monte-Carlo-based process to optimize calibrations, depending on the needs of the experiment. This type of thorough analysis can lead to a higher accuracy for film dosimetry.
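A minimal sketch of the WLS inverse prediction idea, under an assumed variance model for the OD readings (all numbers synthetic): fit OD as a function of dose with statsmodels WLS, then invert the fitted line to predict dose:

import numpy as np
import statsmodels.api as sm

dose = np.array([0.0, 50, 100, 150, 200, 300, 400])  # calibration doses (cGy)
sigma = 0.002 + 1e-5 * dose                          # assumed OD noise model
od = 0.002 * dose + 0.05 + np.random.default_rng(2).normal(0, sigma)

# WLS fit of OD = a + b*dose, weighted by the inverse OD variance
fit = sm.WLS(od, sm.add_constant(dose), weights=1.0 / sigma**2).fit()
a, b = fit.params

def predict_dose(od_new):
    return (od_new - a) / b  # inverse prediction

print(predict_dose(0.45))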
Wang, Zhenghe; Fu, Lianguo; Yang, Yide; Wang, Shuo; Ma, Jun
2016-05-01
To compare the consistency of bone mineral content (BMC, kg) assessed by multi-frequency bioelectrical impedance analysis (MF-BIA) and by dual-energy X-ray absorptiometry (DXA), providing evidence for the accurate application of MF-BIA in Chinese overweight/obese adults. A total of 1323 overweight/obese adults aged 22-55 years were recruited voluntarily. All subjects underwent measurement of BMC by both MF-BIA and DXA. The agreement of BMC measured by MF-BIA and DXA was evaluated using intraclass correlation coefficients (ICC), and correction prediction models were then established. The mean difference in BMC between the two methods was significantly different from 0: 0.28 kg for overweight males, 0.38 kg for obese males, 0.24 kg for overweight females and 0.36 kg for obese females (P < 0.05). The ICC of BMC between MF-BIA and DXA was statistically significant in all subgroups (P < 0.01): 0.787 for overweight males, 0.796 for obese males, 0.741 for overweight females and 0.788 for obese females. The correction prediction models were: overweight males, BMC (DXA) = -0.297 + 1.005 x BMC (MF-BIA); obese males, BMC (DXA) = 0.302 + 0.799 x BMC (MF-BIA); overweight females, BMC (DXA) = 0.780 + 0.598 x BMC (MF-BIA); obese females, BMC (DXA) = 0.755 + 0.597 x BMC (MF-BIA). Upon examination, the correction prediction models performed well. The correlation and agreement of BMC measured by BIA and DXA are weak in Chinese overweight/obese adults; BMC measured by BIA in this population should therefore be corrected or adjusted to reduce errors relative to the DXA method.
NASA Technical Reports Server (NTRS)
1973-01-01
An analysis of Very Low Frequency propagation in the atmosphere in the 10-14 kHz range leads to a discussion of some of the more significant causes of phase perturbation. The method of generating sky-wave corrections to predict the Omega phase is discussed. Composite Omega is considered as a means of lane identification and of reducing Omega navigation error. A simple technique for generating trapezoidal model (T-model) phase prediction is presented and compared with the Navy predictions and actual phase measurements. The T-model prediction analysis illustrates the ability to account for the major phase shift created by the diurnal effects on the lower ionosphere. An analysis of the Navy sky-wave correction table is used to provide information about spatial and temporal correlation of phase correction relative to the differential mode of operation.
Wall Interference Study of the NTF Slotted Tunnel Using Bodies of Revolution Wall Signature Data
NASA Technical Reports Server (NTRS)
Iyer, Venkit; Kuhl, David D.; Walker, Eric L.
2004-01-01
This paper is a description of the analysis of blockage corrections for bodies of revolution for the slotted-wall configuration of the National Transonic Facility (NTF) at the NASA Langley Research Center (LaRC). A wall correction method based on the measured wall signature is used. Test data from three different-sized blockage bodies and four wall ventilation settings were analyzed at various Mach numbers and unit Reynolds numbers. The results indicate that with the proper selection of the boundary condition parameters, the wall correction method can predict blockage corrections consistent with the wall measurements for Mach numbers as high as 0.95.
A Sensor Dynamic Measurement Error Prediction Model Based on NAPSO-SVM
Jiang, Minlan; Jiang, Lan; Jiang, Dingde; Li, Fei
2018-01-01
Dynamic measurement error correction is an effective way to improve sensor precision. Dynamic measurement error prediction is an important part of error correction, and support vector machine (SVM) is often used for predicting the dynamic measurement errors of sensors. Traditionally, the SVM parameters were always set manually, which cannot ensure the model’s performance. In this paper, a SVM method based on an improved particle swarm optimization (NAPSO) is proposed to predict the dynamic measurement errors of sensors. Natural selection and simulated annealing are added in the PSO to raise the ability to avoid local optima. To verify the performance of NAPSO-SVM, three types of algorithms are selected to optimize the SVM’s parameters: the particle swarm optimization algorithm (PSO), the improved PSO optimization algorithm (NAPSO), and the glowworm swarm optimization (GSO). The dynamic measurement error data of two sensors are applied as the test data. The root mean squared error and mean absolute percentage error are employed to evaluate the prediction models’ performances. The experimental results show that among the three tested algorithms the NAPSO-SVM method has a better prediction precision and a less prediction errors, and it is an effective method for predicting the dynamic measurement errors of sensors. PMID:29342942
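A hedged sketch of the general idea: tuning SVM (here SVR) hyperparameters with a plain particle swarm evaluated by cross-validation. The NAPSO additions (natural selection, simulated annealing) are omitted, and the toy error signal, search bounds, and swarm settings are illustrative assumptions rather than the paper's setup.

```python
# Hedged sketch: PSO-tuned support vector regression for a toy sensor error signal.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

rng = np.random.default_rng(1)
t = np.linspace(0, 10, 200)
error_signal = 0.05 * np.sin(2.0 * t) + 0.01 * rng.normal(size=t.size)  # toy dynamic error
X, y = t.reshape(-1, 1), error_signal

def fitness(pos):
    """Negative CV mean squared error for SVR with log10(C), log10(gamma) in pos."""
    c, gamma = 10.0 ** pos[0], 10.0 ** pos[1]
    return cross_val_score(SVR(C=c, gamma=gamma), X, y, cv=5,
                           scoring="neg_mean_squared_error").mean()

lo, hi = np.array([-2.0, -3.0]), np.array([3.0, 1.0])   # search bounds (log10 scale)
n_particles, n_iter, w, c1, c2 = 10, 20, 0.7, 1.5, 1.5

pos = rng.uniform(lo, hi, size=(n_particles, 2))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[np.argmax(pbest_val)].copy()

for _ in range(n_iter):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    vals = np.array([fitness(p) for p in pos])
    improved = vals > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[np.argmax(pbest_val)].copy()

print("best log10(C), log10(gamma):", gbest, "| CV score:", pbest_val.max())
```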
Correction of Dual-PRF Doppler Velocity Outliers in the Presence of Aliasing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Altube, Patricia; Bech, Joan; Argemí, Oriol
2017-07-18
In Doppler weather radars, the presence of unfolding errors or outliers is a well-known quality issue for radial velocity fields estimated using the dual-pulse repetition frequency (PRF) technique. Postprocessing methods have been developed to correct dual-PRF outliers, but these need prior application of a dealiasing algorithm for an adequate correction. Our paper presents an alternative procedure based on circular statistics that corrects dual-PRF errors in the presence of extended Nyquist aliasing. The correction potential of the proposed method is quantitatively tested by means of velocity field simulations and is exemplified in the application to real cases, including severe storm events. The comparison with two other existing correction methods indicates an improved performance in the correction of clustered outliers. The technique we propose is well suited for real-time applications requiring high-quality Doppler radar velocity fields, such as wind shear and mesocyclone detection algorithms, or assimilation in numerical weather prediction models.
N3LO corrections to jet production in deep inelastic scattering using the Projection-to-Born method
NASA Astrophysics Data System (ADS)
Currie, J.; Gehrmann, T.; Glover, E. W. N.; Huss, A.; Niehues, J.; Vogt, A.
2018-05-01
Computations of higher-order QCD corrections for processes with exclusive final states require a subtraction method for real-radiation contributions. We present the first-ever generalisation of a subtraction method for third-order (N3LO) QCD corrections. The Projection-to-Born method is used to combine inclusive N3LO coefficient functions with an exclusive second-order (NNLO) calculation for a final state with an extra jet. The input requirements, advantages, and potential applications of the method are discussed, and validations at lower orders are performed. As a test case, we compute the N3LO corrections to kinematical distributions and production rates for single-jet production in deep inelastic scattering in the laboratory frame, and compare them with data from the ZEUS experiment at HERA. The corrections are small in the central rapidity region, where they stabilise the predictions at the sub-per-cent level. The corrections increase substantially towards forward rapidity, where large logarithmic effects are expected, thereby yielding an improved description of the data in this region.
Benchmarking protein-protein interface predictions: why you should care about protein size.
Martin, Juliette
2014-07-01
A number of predictive methods have been developed to predict protein-protein binding sites. Each new method is traditionally benchmarked using sets of protein structures of various sizes, and global statistics are used to assess the quality of the prediction. Little attention has been paid to the potential bias due to protein size on these statistics. Indeed, small proteins involve proportionally more residues at interfaces than large ones. If a predictive method is biased toward small proteins, this can lead to an over-estimation of its performance. Here, we investigate the bias due to the size effect when benchmarking protein-protein interface prediction on the widely used docking benchmark 4.0. First, we simulate random scores that favor small proteins over large ones. Instead of the 0.5 AUC (Area Under the Curve) value expected by chance, these biased scores result in an AUC equal to 0.6 using hypergeometric distributions, and up to 0.65 using constant scores. We then use real prediction results to illustrate how to detect the size bias by shuffling, and subsequently correct it using a simple conversion of the scores into normalized ranks. In addition, we investigate the scores produced by eight published methods and show that they are all affected by the size effect, which can change their relative ranking. The size effect also has an impact on linear combination scores by modifying the relative contributions of each method. In the future, systematic corrections should be applied when benchmarking predictive methods using data sets with mixed protein sizes. © 2014 Wiley Periodicals, Inc.
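A minimal sketch of the rank-based correction mentioned above: converting each protein's raw interface scores into normalized ranks within that protein before pooling, so that small and large proteins contribute comparably to the benchmark statistics. The toy scores and names are illustrative assumptions.

```python
# Hedged sketch: per-protein conversion of interface scores to normalized ranks
# to remove the protein-size bias when pooling predictions across a benchmark.
import numpy as np
from scipy.stats import rankdata

def normalized_ranks(scores):
    """Map one protein's raw scores to (0, 1] ranks; 1 = highest score."""
    r = rankdata(scores, method="average")          # ranks 1 .. n
    return r / len(scores)

small_protein_scores = np.array([0.9, 0.2, 0.4])              # 3 surface residues
large_protein_scores = np.random.default_rng(0).random(300)   # 300 surface residues

pooled = np.concatenate([normalized_ranks(small_protein_scores),
                         normalized_ranks(large_protein_scores)])
print(pooled[:5])
```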
NASA Astrophysics Data System (ADS)
He, Shixuan; Xie, Wanyi; Zhang, Wei; Zhang, Liqun; Wang, Yunxia; Liu, Xiaoling; Liu, Yulong; Du, Chunlei
2015-02-01
A novel strategy which combines an iteratively cubic spline fitting baseline correction method with discriminant partial least squares qualitative analysis is employed to analyze the surface enhanced Raman scattering (SERS) spectroscopy of banned food additives, such as Sudan I dye and Rhodamine B in food, and Malachite green residues in aquaculture fish. Multivariate qualitative analysis methods, combining spectral preprocessing by iteratively cubic spline fitting (ICSF) baseline correction with principal component analysis (PCA) and discriminant partial least squares (DPLS) classification, respectively, are applied to investigate the effectiveness of SERS spectroscopy for predicting the class assignments of unknown banned food additives. PCA cannot be used to predict the class assignments of unknown samples. However, DPLS classification can discriminate the class assignment of unknown banned additives using the information of differences in relative intensities. The results demonstrate that SERS spectroscopy combined with the ICSF baseline correction method and the exploratory analysis methodology of DPLS classification can potentially be used for distinguishing banned food additives in the field of food safety.
SU-F-R-04: Radiomics for Survival Prediction in Glioblastoma (GBM)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, H; Molitoris, J; Bhooshan, N
Purpose: To develop a quantitative radiomics approach for survival prediction of glioblastoma (GBM) patients treated with chemoradiotherapy (CRT). Methods: 28 GBM patients who received CRT at our institution were retrospectively studied. 255 radiomic features were extracted from 3 gadolinium-enhanced T1-weighted MRIs for 2 regions of interest (ROIs) (the surgical cavity and its surrounding enhancement rim). The 3 MRIs were at pre-treatment, 1 month and 3 months post-CRT. The imaging features comprehensively quantified the intensity, spatial variation (texture), geometric properties and their spatial-temporal changes for the 2 ROIs. 3 demographic features (age, race, gender) and 12 clinical parameters (KPS, extent of resection, whether concurrent temozolomide was adjusted/stopped, and radiotherapy-related information) were also included. 4 machine learning models (logistic regression (LR), support vector machine (SVM), decision tree (DT), neural network (NN)) were applied to predict overall survival (OS) and progression-free survival (PFS). The number and percentage of cases predicted correctly were collected, and the AUC (area under the receiver operating characteristic (ROC) curve) was determined after leave-one-out cross-validation. Results: From univariate analysis, 27 features (1 demographic, 1 clinical and 25 imaging) were statistically significant (p<0.05) for both OS and PFS. Two sets of features (each containing 24 features) were algorithmically selected from all features to predict OS and PFS. High prediction accuracy of OS was achieved by using NN (96%, 27 of 28 cases correctly predicted, AUC = 0.99), LR (93%, 26 of 28 cases correctly predicted, AUC = 0.95) and SVM (93%, 26 of 28 cases correctly predicted, AUC = 0.90). When predicting PFS, NN obtained the highest prediction accuracy (89%, 25 of 28 cases correctly predicted, AUC = 0.92). Conclusion: The radiomics approach combined with patients' demographics and clinical parameters can accurately predict survival in GBM patients treated with CRT.
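A hedged sketch of the evaluation loop described above: leave-one-out cross-validation with one of the listed models (logistic regression), scored by accuracy and ROC AUC. The synthetic feature matrix, label construction, and model settings are assumptions; the study's actual features and feature-selection step are not reproduced.

```python
# Hedged sketch: leave-one-out cross-validation with AUC on a toy radiomics-sized dataset.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(28, 24))                                 # 28 patients, 24 selected features
y = (X[:, 0] + 0.5 * rng.normal(size=28) > 0).astype(int)     # toy survival label

probs = np.empty(len(y), dtype=float)
for train, test in LeaveOneOut().split(X):
    clf = LogisticRegression(max_iter=1000).fit(X[train], y[train])
    probs[test] = clf.predict_proba(X[test])[:, 1]

print("LOO accuracy:", ((probs > 0.5).astype(int) == y).mean())
print("LOO AUC     :", roc_auc_score(y, probs))
```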
Grain growth prediction based on data assimilation by implementing 4DVar on multi-phase-field model
NASA Astrophysics Data System (ADS)
Ito, Shin-ichi; Nagao, Hiromichi; Kasuya, Tadashi; Inoue, Junya
2017-12-01
We propose a method to predict grain growth based on data assimilation by using a four-dimensional variational method (4DVar). When implemented on a multi-phase-field model, the proposed method allows us to calculate the predicted grain structures and uncertainties in them that depend on the quality and quantity of the observational data. We confirm through numerical tests involving synthetic data that the proposed method correctly reproduces the true phase-field assumed in advance. Furthermore, it successfully quantifies uncertainties in the predicted grain structures, where such uncertainty quantifications provide valuable information to optimize the experimental design.
Correcting the lobule in otoplasty using the fillet technique.
Sadick, Haneen; Artinger, Verena M; Haubner, Frank; Gassner, Holger G
2014-01-01
Correction of the protruded lobule in otoplasty continues to represent an important challenge. The lack of skeletal elements within the lobule makes controlled lobule repositioning less predictable. The objective was to present a new surgical technique for lobule correction in otoplasty. Human cadaver studies were performed for detailed anatomical analysis of lobule deformities. In addition, we evaluated a novel algorithmic approach to correction of the lobule in 12 consecutive patients, who underwent otoplasty with surgical correction of the lobule using the fillet technique. The surgical outcome in the 12 most recent consecutive patients with at least 3 months of follow-up was assessed retrospectively. The postsurgical results were independently reviewed by a panel of noninvolved experts. The 3 major anatomic components of lobular deformities are the axial angular protrusion, the coronal angular protrusion, and the inherent shape. The fillet technique described in the present report addresses all 3 aspects in an effective way. Clinical data analysis revealed no immediate or long-term complications associated with this new surgical method. The patients' subjective ratings and the panel's objective ratings revealed "good" to "very good" postoperative results. This newly described fillet technique represents a safe and efficient method to correct protruded ear lobules in otoplasty. It allows precise and predictable positioning of the lobule with an excellent safety profile. Level of evidence: 4.
Novel approaches to assess the quality of fertility data stored in dairy herd management software.
Hermans, K; Waegeman, W; Opsomer, G; Van Ranst, B; De Koster, J; Van Eetvelde, M; Hostens, M
2017-05-01
Scientific journals and popular press magazines are littered with articles in which the authors use data from dairy herd management software. Almost none of these papers include data cleaning and data quality assessment in their study design, despite this being a very critical step during data mining. This paper presents 2 novel data cleaning methods that permit identification of animals with good and bad data quality. The first method is a deterministic or rule-based data cleaning method. Reproduction and mutation or life-changing events such as birth and death were converted to a symbolic (alphabetical letter) representation and split into triplets (3-letter codes). The triplets were manually labeled as physiologically correct, suspicious, or impossible. The deterministic data cleaning method was applied to assess the quality of data stored in the dairy herd management software of 26 farms enrolled in the herd health management program of the Faculty of Veterinary Medicine, Ghent University, Belgium. In total, 150,443 triplets were created, of which 65.4% were labeled as correct, 17.4% as suspicious, and 17.2% as impossible. The second method, a probabilistic method, uses a machine learning algorithm (random forests) to predict the correctness of fertility and mutation events in an early stage of data cleaning. The prediction accuracy of the random forests algorithm was compared with a classical linear statistical method (penalized logistic regression), outperforming the latter substantially, with a superior receiver operating characteristic curve and a higher accuracy (89 vs. 72%). From those results, we conclude that the triplet method can be used to assess the quality of reproduction data stored in dairy herd management software and that a machine learning technique such as random forests is capable of predicting the correctness of fertility data. Copyright © 2017 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
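A minimal sketch of the triplet idea: encode an animal's life and fertility events as letters, slide a window of three, and flag each triplet against labeled sets. The letter codes, the example labels, and the event string are invented for illustration; the paper's actual coding scheme is not reproduced.

```python
# Hedged sketch: rule-based triplet screening of an event history for data quality.
IMPOSSIBLE = {"CIC"}       # e.g. two calvings with no insemination in between (assumed)
SUSPICIOUS = {"ICI"}       # e.g. insemination-calving-insemination too close (assumed)

def triplets(event_string):
    return [event_string[i:i + 3] for i in range(len(event_string) - 2)]

def assess(event_string):
    """Return per-triplet labels: correct, suspicious, or impossible."""
    labels = []
    for t in triplets(event_string):
        if t in IMPOSSIBLE:
            labels.append((t, "impossible"))
        elif t in SUSPICIOUS:
            labels.append((t, "suspicious"))
        else:
            labels.append((t, "correct"))
    return labels

# B = birth, I = insemination, C = calving, D = death (assumed letter codes)
print(assess("BICICID"))
```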
Moore, D F; Harwood, V J; Ferguson, D M; Lukasik, J; Hannah, P; Getrich, M; Brownell, M
2005-01-01
The accuracy of ribotyping and antibiotic resistance analysis (ARA) for prediction of sources of faecal bacterial pollution in an urban southern California watershed was determined using blinded proficiency samples. Antibiotic resistance patterns and HindIII ribotypes of Escherichia coli (n = 997), and antibiotic resistance patterns of Enterococcus spp. (n = 3657) were used to construct libraries from sewage samples and from faeces of seagulls, dogs, cats, horses and humans within the watershed. The three libraries were analysed to determine the accuracy of host source prediction. The internal accuracy of the libraries (average rate of correct classification, ARCC) with six source categories was 44% for E. coli ARA, 69% for E. coli ribotyping and 48% for Enterococcus ARA. Each library's predictive ability towards isolates that were not part of the library was determined using a blinded proficiency panel of 97 E. coli and 99 Enterococcus isolates. Twenty-eight per cent (by ARA) and 27% (by ribotyping) of the E. coli proficiency isolates were assigned to the correct source category. Sixteen per cent were assigned to the same source category by both methods, and 6% were assigned to the correct category. Addition of 2480 E. coli isolates to the ARA library did not improve the ARCC or proficiency accuracy. In contrast, 45% of Enterococcus proficiency isolates were correctly identified by ARA. None of the methods performed well enough on the proficiency panel to be judged ready for application to environmental samples. Most microbial source tracking (MST) studies published have demonstrated library accuracy solely by the internal ARCC measurement. Low rates of correct classification for E. coli proficiency isolates compared with the ARCCs of the libraries indicate that testing of bacteria from samples that are not represented in the library, such as blinded proficiency samples, is necessary to accurately measure predictive ability. The library-based MST methods used in this study may not be suited for determination of the source(s) of faecal pollution in large, urban watersheds.
Tan, Kok Chooi; Lim, Hwee San; Matjafri, Mohd Zubir; Abdullah, Khiruddin
2012-06-01
Atmospheric corrections for multi-temporal optical satellite images are necessary, especially in change detection analyses, such as normalized difference vegetation index (NDVI) rationing. Abrupt change detection analysis using remote-sensing techniques requires radiometric congruity and atmospheric correction to monitor terrestrial surfaces over time. Two atmospheric correction methods were used for this study: relative radiometric normalization and the simplified method for atmospheric correction (SMAC) in the solar spectrum. A multi-temporal data set consisting of two sets of Landsat images from the period between 1991 and 2002 of Penang Island, Malaysia, was used to compare NDVI maps generated using the proposed atmospheric correction methods. Land surface temperature (LST) was retrieved using ATCOR3_T in PCI Geomatica 10.1 image processing software. Linear regression analysis was utilized to analyze the relationship between NDVI and LST. This study reveals that both of the proposed atmospheric correction methods yielded high accuracy, as judged by the linear correlation coefficients. To check the accuracy of the equation obtained through linear regression analysis for each satellite image, 20 points were randomly chosen. The results showed that the SMAC method yielded a consistent error when predicting the NDVI value from the equation derived by linear regression analysis. The average errors from both proposed atmospheric correction methods were less than 10%.
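A hedged sketch of the two quantities related above: NDVI computed from red and near-infrared reflectance, followed by a simple linear regression of land surface temperature on NDVI. All reflectance and temperature values are synthetic, and the coefficients have no connection to the Penang Island results.

```python
# Hedged sketch: NDVI from corrected reflectance and an NDVI-LST linear regression.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
red = rng.uniform(0.03, 0.25, 500)      # atmospherically corrected red reflectance
nir = rng.uniform(0.20, 0.60, 500)      # atmospherically corrected NIR reflectance

ndvi = (nir - red) / (nir + red)
lst = 310.0 - 15.0 * ndvi + rng.normal(0.0, 1.0, ndvi.size)   # toy LST in kelvin

fit = stats.linregress(ndvi, lst)
print(f"LST = {fit.intercept:.1f} + {fit.slope:.1f} * NDVI, r = {fit.rvalue:.2f}")

lst_pred = fit.intercept + fit.slope * ndvi
print("RMSE of LST predicted from NDVI:", np.sqrt(np.mean((lst_pred - lst) ** 2)))
```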
Carluccio, Giuseppe; Bruno, Mary; Collins, Christopher M.
2015-01-01
Purpose: To present a novel method for rapid prediction of temperature in vivo for a series of pulse sequences with differing levels and distributions of specific energy absorption rate (SAR). Methods: After the temperature response to a brief period of heating is characterized, a rapid estimate of temperature during a series of periods at different heating levels is made using a linear heat equation and impulse-response (IR) concepts. Here the initial characterization and long-term prediction for a complete spine exam are made with the Pennes bioheat equation where, at first, core body temperature is allowed to increase and local perfusion is not. Then corrections through time allowing variation in local perfusion are introduced. Results: The fast IR-based method predicted maximum temperature increase within 1% of that with a full finite difference simulation, but required less than 3.5% of the computation time. Even higher accelerations are possible depending on the time step size chosen, with loss in temporal resolution. Correction for temperature-dependent perfusion requires negligible additional time, and can be adjusted to be more or less conservative than the corresponding finite difference simulation. Conclusion: With appropriate methods, it is possible to rapidly predict temperature increase throughout the body for actual MR examinations. PMID:26096947
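A minimal sketch of the impulse-response idea: once the temperature rise caused by a brief unit-SAR heating pulse has been characterized, the response to an arbitrary sequence of SAR levels follows by linear superposition, i.e. a convolution. The exponential impulse response, time constant, and SAR schedule below are illustrative assumptions, not the characterized response from the paper, and the perfusion correction step is not reproduced.

```python
# Hedged sketch: linear impulse-response prediction of temperature rise from a SAR schedule.
import numpy as np

dt = 1.0                                   # time step, s
t = np.arange(0, 1800, dt)                 # 30-minute exam
tau = 300.0                                # assumed thermal washout time constant, s
h = (1.0 / tau) * np.exp(-t / tau)         # assumed impulse response per unit SAR

# SAR time course for a series of pulse sequences at different levels (W/kg)
sar = np.concatenate([np.full(600, 3.2), np.full(600, 1.5), np.full(600, 2.4)])

delta_T = np.convolve(sar, h)[: t.size] * dt   # predicted temperature rise (toy units, ~K)
print("peak predicted temperature rise: %.2f" % delta_T.max())
```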
Lee, Chong Suh; Chung, Sung Soo; Park, Se Jun; Kim, Dong Min; Shin, Seong Kee
2014-01-01
This study aimed to derive a lordosis predictive equation using the pelvic incidence and to establish a simple prediction method of lumbar lordosis for planning lumbar corrective surgery in Asians. Eighty-six asymptomatic volunteers were enrolled in the study. The maximal lumbar lordosis (MLL), lower lumbar lordosis (LLL), pelvic incidence (PI), and sacral slope (SS) were measured. The correlations between the parameters were analyzed using Pearson correlation analysis. Predictive equations of lumbar lordosis were derived through simple regression analysis of the parameters, together with simple predictive values of lumbar lordosis based on PI. The PI strongly correlated with the SS (r = 0.78), and a strong correlation was found between the SS and LLL (r = 0.89), and between the SS and MLL (r = 0.83). Based on these correlations, the predictive equations of lumbar lordosis were found: SS = 0.80 + 0.74 PI (r = 0.78, R2 = 0.61), LLL = 5.20 + 0.87 SS (r = 0.89, R2 = 0.80), MLL = 17.41 + 0.96 SS (r = 0.83, R2 = 0.68). When PI was between 30° and 35°, 40° and 50°, and 55° and 60°, the equations predicted that MLL would be PI + 10°, PI + 5° and PI, and LLL would be PI - 5°, PI - 10° and PI - 15°, respectively. This simple calculation method can provide a more appropriate and simpler prediction of lumbar lordosis for Asian populations. The prediction of lumbar lordosis should be used as a reference for surgeons planning to restore the lumbar lordosis in lumbar corrective surgery.
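The regression equations quoted in the abstract can be chained directly. The sketch below wraps them as functions so a target lordosis can be computed from a measured pelvic incidence; function names and the example PI values are assumptions, and angles are in degrees.

```python
# Hedged sketch: predicted sacral slope and lumbar lordosis from pelvic incidence (degrees).
def sacral_slope(pi):
    return 0.80 + 0.74 * pi

def lower_lumbar_lordosis(pi):
    return 5.20 + 0.87 * sacral_slope(pi)

def maximal_lumbar_lordosis(pi):
    return 17.41 + 0.96 * sacral_slope(pi)

for pi in (35.0, 50.0, 60.0):
    print(pi, round(lower_lumbar_lordosis(pi), 1), round(maximal_lumbar_lordosis(pi), 1))
```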
Cluster-based analysis improves predictive validity of spike-triggered receptive field estimates
Malone, Brian J.
2017-01-01
Spectrotemporal receptive field (STRF) characterization is a central goal of auditory physiology. STRFs are often approximated by the spike-triggered average (STA), which reflects the average stimulus preceding a spike. In many cases, the raw STA is subjected to a threshold defined by gain values expected by chance. However, such correction methods have not been universally adopted, and the consequences of specific gain-thresholding approaches have not been investigated systematically. Here, we evaluate two classes of statistical correction techniques, using the resulting STRF estimates to predict responses to a novel validation stimulus. The first, more traditional technique eliminated STRF pixels (time-frequency bins) with gain values expected by chance. This correction method yielded significant increases in prediction accuracy, including when the threshold setting was optimized for each unit. The second technique was a two-step thresholding procedure wherein clusters of contiguous pixels surviving an initial gain threshold were then subjected to a cluster mass threshold based on summed pixel values. This approach significantly improved upon even the best gain-thresholding techniques. Additional analyses suggested that allowing threshold settings to vary independently for excitatory and inhibitory subfields of the STRF resulted in only marginal additional gains, at best. In summary, augmenting reverse correlation techniques with principled statistical correction choices increased prediction accuracy by over 80% for multi-unit STRFs and by over 40% for single-unit STRFs, furthering the interpretational relevance of the recovered spectrotemporal filters for auditory systems analysis. PMID:28877194
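A minimal sketch of the two-step procedure described above: threshold STA pixels at a gain level expected by chance, group surviving pixels into contiguous clusters, and keep only clusters whose summed absolute gain exceeds a cluster-mass threshold. The toy STA, both threshold values, and the embedded "receptive field" are illustrative assumptions; the study's chance-level estimation and validation-stimulus prediction are not reproduced.

```python
# Hedged sketch: pixel-gain thresholding followed by cluster-mass thresholding of an STA.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
sta = rng.normal(0.0, 1.0, size=(40, 60))     # time-frequency STA gains (z-scored)
sta[10:14, 20:28] += 4.0                      # embedded "receptive field" for illustration

pixel_thresh = 2.0                            # per-pixel gain expected by chance (assumed)
mass_thresh = 30.0                            # minimum summed |gain| per cluster (assumed)

mask = np.abs(sta) > pixel_thresh
labels, n = ndimage.label(mask)               # contiguous clusters of suprathreshold pixels
cleaned = np.zeros_like(sta)
for k in range(1, n + 1):
    cluster = labels == k
    if np.abs(sta[cluster]).sum() > mass_thresh:
        cleaned[cluster] = sta[cluster]       # keep the whole surviving cluster

print("clusters found:", n, "| pixels kept:", int((cleaned != 0).sum()))
```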
TMSEG: Novel prediction of transmembrane helices.
Bernhofer, Michael; Kloppmann, Edda; Reeb, Jonas; Rost, Burkhard
2016-11-01
Transmembrane proteins (TMPs) are important drug targets because they are essential for signaling, regulation, and transport. Despite important breakthroughs, experimental structure determination remains challenging for TMPs. Various methods have bridged the gap by predicting transmembrane helices (TMHs), but room for improvement remains. Here, we present TMSEG, a novel method identifying TMPs and accurately predicting their TMHs and their topology. The method combines machine learning with empirical filters. Testing it on a non-redundant dataset of 41 TMPs and 285 soluble proteins, and applying strict performance measures, TMSEG outperformed the state-of-the-art in our hands. TMSEG correctly distinguished helical TMPs from other proteins with a sensitivity of 98 ± 2% and a false positive rate as low as 3 ± 1%. Individual TMHs were predicted with a precision of 87 ± 3% and recall of 84 ± 3%. Furthermore, in 63 ± 6% of helical TMPs the placement of all TMHs and their inside/outside topology was correctly predicted. There are two main features that distinguish TMSEG from other methods. First, the errors in finding all helical TMPs in an organism are significantly reduced. For example, in human this leads to 200 and 1600 fewer misclassifications compared to the second and third best method available, and 4400 fewer mistakes than by a simple hydrophobicity-based method. Second, TMSEG provides an add-on improvement for any existing method to benefit from. Proteins 2016; 84:1706-1716. © 2016 Wiley Periodicals, Inc.
Matsuda, Atsushi; Schermelleh, Lothar; Hirano, Yasuhiro; Haraguchi, Tokuko; Hiraoka, Yasushi
2018-05-15
Correction of chromatic shift is necessary for precise registration of multicolor fluorescence images of biological specimens. New emerging technologies in fluorescence microscopy with increasing spatial resolution and penetration depth have prompted the need for more accurate methods to correct chromatic aberration. However, the amount of chromatic shift of the region of interest in biological samples often deviates from the theoretical prediction because of unknown dispersion in the biological samples. To measure and correct chromatic shift in biological samples, we developed a quadrisection phase correlation approach to computationally calculate translation, rotation, and magnification from reference images. Furthermore, to account for local chromatic shifts, images are split into smaller elements, for which the phase correlation between channels is measured individually and corrected accordingly. We implemented this method in an easy-to-use open-source software package, called Chromagnon, that is able to correct shifts with a 3D accuracy of approximately 15 nm. Applying this software, we quantified the level of uncertainty in chromatic shift correction, depending on the imaging modality used, and for different existing calibration methods, along with the proposed one. Finally, we provide guidelines to choose the optimal chromatic shift registration method for any given situation.
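A hedged sketch of the core ingredient named above, phase correlation between two color channels: the normalized cross-power spectrum yields a sharp peak at the translational (chromatic) shift. The rotation/magnification estimation and the per-element local correction performed by Chromagnon are not reproduced; the test images and shift are synthetic.

```python
# Hedged sketch: integer-pixel chromatic shift between two channels via phase correlation.
import numpy as np

def phase_correlation_shift(ref, mov):
    """Integer-pixel (dy, dx) translation of `mov` relative to `ref`."""
    R = np.fft.fft2(mov) * np.conj(np.fft.fft2(ref))
    R /= np.abs(R) + 1e-12                    # keep phase information only
    corr = np.fft.ifft2(R).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    if dy > ref.shape[0] // 2:                # unwrap negative shifts
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return dy, dx

rng = np.random.default_rng(0)
green = rng.random((128, 128))
red = np.roll(np.roll(green, 3, axis=0), -2, axis=1)   # simulate a chromatic shift
print(phase_correlation_shift(green, red))             # expect (3, -2)
```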
NASA Astrophysics Data System (ADS)
Song, Hyeong Yong; Salehiyan, Reza; Li, Xiaolei; Lee, Seung Hak; Hyun, Kyu
2017-11-01
In this study, the effects of cone-plate (C/P) and parallel-plate (P/P) geometries on the rheological properties of various complex fluids were investigated, e.g. single-phase systems (polymer melts and solutions) and multiphase systems (a polymer blend and nanocomposite, and a suspension). Small amplitude oscillatory shear (SAOS) tests were carried out to compare linear rheological responses, while nonlinear responses were compared using large amplitude oscillatory shear (LAOS) tests at different frequencies. Moreover, the Fourier-transform (FT) rheology method was used to analyze the nonlinear responses under LAOS flow. Experimental results were compared with predictions obtained by single-point correction and shear rate correction. For all systems, SAOS data measured by C/P and P/P coincide with each other, but the results showed discordance between C/P and P/P measurements in the nonlinear regime. For all systems except the xanthan gum solutions, first-harmonic moduli were corrected using a single horizontal shift factor, whereas the FT rheology-based nonlinear parameters (I3/1, I5/1, Q3, and Q5) were corrected using vertical shift factors that are well predicted by single-point correction. Xanthan gum solutions exhibited anomalous corrections. Their first-harmonic Fourier moduli were superposed using a horizontal shift factor predicted by the shear rate correction applicable to highly shear-thinning fluids. Distinct corrections were observed for the FT rheology-based nonlinear parameters: I3/1 and I5/1 were superposed by horizontal shifts, while the other systems displayed vertical shifts of I3/1 and I5/1. Q3 and Q5 of the xanthan gum solutions were corrected using both horizontal and vertical shift factors. In particular, the obtained vertical shift factors for Q3 and Q5 were twice as large as the predictions made by single-point correction. Such larger values are rationalized by the definitions of Q3 and Q5. These results highlight the significance of horizontal shift corrections in nonlinear oscillatory shear data.
Iturriaga, H; Hirsch, S; Bunout, D; Díaz, M; Kelly, M; Silva, G; de la Maza, M P; Petermann, M; Ugarte, G
1993-04-01
Looking for a noninvasive method to predict liver histologic alterations in alcoholic patients without clinical signs of liver failure, we studied 187 recently abstinent chronic alcoholics, divided into 2 series. In the model series (n = 94), several clinical variables and the results of common laboratory tests were compared with the findings of liver biopsies. These were classified into 3 groups: 1. normal liver; 2. moderate alterations; 3. marked alterations, including alcoholic hepatitis and cirrhosis. The multivariate methods used were logistic regression analysis and a classification and regression tree (CART). Both methods entered gamma-glutamyltransferase (GGT), aspartate-aminotransferase (AST), weight and age as significant and independent variables. Univariate analyses with GGT and AST at different cutoffs were also performed. To predict the presence of any kind of damage (Groups 2 and 3), CART and AST > 30 IU showed the highest sensitivity, specificity and correct prediction, both in the model and validation series. For prediction of marked liver damage, a score based on logistic regression and GGT > 110 IU had the highest efficiency. It is concluded that GGT and AST are good markers of alcoholic liver damage and that, using simple cutoffs, the histologic diagnosis can be correctly predicted in 80% of recently abstinent asymptomatic alcoholics.
NASA Technical Reports Server (NTRS)
Ko, William L.; Fleischer, Van Tran
2014-01-01
To eliminate the need to use finite-element modeling for structure shape predictions, a new method was invented. This method uses Displacement Transfer Functions to transform measured surface strains into deflections for mapping out overall structural deformed shapes. The Displacement Transfer Functions are expressed in terms of rectilinearly distributed surface strains, and contain no material properties. This report applies the patented method to the shape predictions of non-symmetrically loaded slender curved structures with different curvatures up to a full circle. Because measured surface strains were not available, finite-element analysis had to be used to analytically generate the surface strains. Previously formulated straight-beam Displacement Transfer Functions were modified by introducing curvature-effect correction terms. Through single-point or dual-point collocations with finite-element-generated deflection curves, functional forms of the curvature-effect correction terms were empirically established. The resulting modified Displacement Transfer Functions can then provide quite accurate shape predictions. Also, the uniform straight-beam Displacement Transfer Function was applied to the shape predictions of a section-cut of a generic capsule (GC) outer curved sandwich wall. The resulting GC shape predictions are quite accurate in partial regions where the radius of curvature does not change sharply.
Zimmerman, Tammy M.
2008-01-01
The Lake Erie beaches in Pennsylvania are a valuable recreational resource for Erie County. Concentrations of Escherichia coli (E. coli) at monitored beaches in Presque Isle State Park in Erie, Pa., occasionally exceed the single-sample bathing-water standard of 235 colonies per 100 milliliters resulting in potentially unsafe swimming conditions and prompting beach managers to post public advisories or to close beaches to recreation. To supplement the current method for assessing recreational water quality (E. coli concentrations from the previous day), a predictive regression model for E. coli concentrations at Presque Isle Beach 2 was developed from data collected during the 2004 and 2005 recreational seasons. Model output included predicted E. coli concentrations and exceedance probabilities--the probability that E. coli concentrations would exceed the standard. For this study, E. coli concentrations and other water-quality and environmental data were collected during the 2006 recreational season at Presque Isle Beach 2. The data from 2006, an independent year, were used to test (validate) the 2004-2005 predictive regression model and compare the model performance to the current method. Using 2006 data, the 2004-2005 model yielded more correct responses and better predicted exceedances of the standard than the use of E. coli concentrations from the previous day. The differences were not pronounced, however, and more data are needed. For example, the model correctly predicted exceedances of the standard 11 percent of the time (1 out of 9 exceedances that occurred in 2006) whereas using the E. coli concentrations from the previous day did not result in any correctly predicted exceedances. After validation, new models were developed by adding the 2006 data to the 2004-2005 dataset and by analyzing the data in 2- and 3-year combinations. Results showed that excluding the 2004 data (using 2005 and 2006 data only) yielded the best model. Explanatory variables in the 2005-2006 model were log10 turbidity, bird count, and wave height. The 2005-2006 model correctly predicted when the standard would not be exceeded (specificity) with a response of 95.2 percent (178 out of 187 nonexceedances) and correctly predicted when the standard would be exceeded (sensitivity) with a response of 64.3 percent (9 out of 14 exceedances). In all cases, the results from predictive modeling produced higher percentages of correct predictions than using E. coli concentrations from the previous day. Additional data collected each year can be used to test and possibly improve the model. The results of this study will aid beach managers in more rapidly determining when waters are not safe for recreational use and, subsequently, when to close a beach or post an advisory.
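A hedged sketch of a nowcast-style model in the spirit described above, using log10 turbidity, bird count, and wave height to estimate the probability that E. coli exceeds the 235 CFU/100 mL single-sample standard. The data are simulated and the logistic form is an assumption; the report's actual regression coefficients and model form are not reproduced.

```python
# Hedged sketch: exceedance-probability model from simulated beach monitoring data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200
log_turb = rng.normal(1.0, 0.4, n)          # log10 turbidity (NTU)
birds = rng.poisson(30, n)                  # bird count on the beach
wave = rng.gamma(2.0, 0.15, n)              # wave height (m)

# Simulated exceedance of the 235 CFU/100 mL single-sample standard
logit = -6.0 + 3.0 * log_turb + 0.03 * birds + 2.0 * wave
exceed = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

X = np.column_stack([log_turb, birds, wave])
model = LogisticRegression(max_iter=1000).fit(X, exceed)

today = np.array([[1.3, 45, 0.6]])          # this morning's observations (assumed)
print("probability of exceeding the standard:", model.predict_proba(today)[0, 1])
```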
Incorporating Measurement Error from Modeled Air Pollution Exposures into Epidemiological Analyses.
Samoli, Evangelia; Butland, Barbara K
2017-12-01
Outdoor air pollution exposures used in epidemiological studies are commonly predicted from spatiotemporal models incorporating limited measurements, temporal factors, geographic information system variables, and/or satellite data. Measurement error in these exposure estimates leads to imprecise estimation of health effects and their standard errors. We reviewed methods for measurement error correction that have been applied in epidemiological studies that use model-derived air pollution data. We identified seven cohort studies and one panel study that have employed measurement error correction methods. These methods included regression calibration, risk set regression calibration, regression calibration with instrumental variables, the simulation extrapolation approach (SIMEX), and methods under the non-parametric or parameter bootstrap. Corrections resulted in small increases in the absolute magnitude of the health effect estimate and its standard error under most scenarios. Limited application of measurement error correction methods in air pollution studies may be attributed to the absence of exposure validation data and the methodological complexity of the proposed methods. Future epidemiological studies should consider in their design phase the requirements for the measurement error correction method to be later applied, while methodological advances are needed under the multi-pollutants setting.
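A hedged sketch of regression calibration, the simplest of the correction methods listed above: in a validation subset where both the "true" exposure and the error-prone modeled exposure are available, fit the expectation of the true exposure given the modeled one, then substitute that prediction into the health model. All data, the linear health model, and the effect sizes are synthetic assumptions.

```python
# Hedged sketch: regression calibration versus a naive fit under exposure measurement error.
import numpy as np

rng = np.random.default_rng(0)
n, beta_true = 2000, 0.05
x = rng.normal(10.0, 3.0, n)                      # true exposure (e.g., PM2.5)
w = x + rng.normal(0.0, 2.0, n)                   # modeled exposure with error
y = 1.0 + beta_true * x + rng.normal(0.0, 1.0, n) # continuous health outcome

naive_beta = np.polyfit(w, y, 1)[0]               # attenuated by measurement error

val = rng.choice(n, size=300, replace=False)      # validation subset with both X and W
calib = np.polyfit(w[val], x[val], 1)             # fit E[X | W]
x_hat = np.polyval(calib, w)                      # calibrated exposure for everyone

rc_beta = np.polyfit(x_hat, y, 1)[0]
print(f"true {beta_true:.3f}  naive {naive_beta:.3f}  regression-calibrated {rc_beta:.3f}")
```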
Izarzugaza, Jose MG; Juan, David; Pons, Carles; Pazos, Florencio; Valencia, Alfonso
2008-01-01
Background: It has repeatedly been shown that interacting protein families tend to have similar phylogenetic trees. These similarities can be used to predict the mapping between two families of interacting proteins (i.e. which proteins from one family interact with which members of the other). The correct mapping will be the one that maximizes the similarity between the trees. The two families may eventually comprise orthologs and paralogs, if members of the two families are present in more than one organism. This fact can be exploited to restrict the possible mappings, simply by disallowing links between proteins of different organisms. We present here an algorithm to predict the mapping between families of interacting proteins which is able to incorporate information regarding orthologues, or any other assignment of proteins to "classes" that may restrict possible mappings. Results: For the first time in methods for predicting mappings, we have tested this new approach on a large number of interacting protein domains in order to statistically assess its performance. The method accurately predicts around 80% in the most favourable cases. We also analysed in detail the results of the method for a well-defined case of interacting families, the sensor and kinase components of the Ntr-type two-component system, for which up to 98% of the pairings predicted by the method were correct. Conclusion: Based on the well-established relationship between tree similarity and interactions, we developed a method for predicting the mapping between two interacting families using genomic information alone. The program is available through a web interface. PMID:18215279
Bao, Yu; Hayashida, Morihiro; Akutsu, Tatsuya
2016-11-25
Dicer is necessary for the process of mature microRNA (miRNA) formation because the Dicer enzyme cleaves pre-miRNA correctly to generate miRNA with correct seed regions. Nonetheless, the mechanism underlying the selection of a Dicer cleavage site is still not fully understood. To date, several studies have been conducted to solve this problem, for example, a recent discovery indicates that the loop/bulge structure plays a central role in the selection of Dicer cleavage sites. In accordance with this breakthrough, a support vector machine (SVM)-based method called PHDCleav was developed to predict Dicer cleavage sites which outperforms other methods based on random forest and naive Bayes. PHDCleav, however, tests only whether a position in the shift window belongs to a loop/bulge structure. In this paper, we used the length of loop/bulge structures (in addition to their presence or absence) to develop an improved method, LBSizeCleav, for predicting Dicer cleavage sites. To evaluate our method, we used 810 empirically validated sequences of human pre-miRNAs and performed fivefold cross-validation. In both 5p and 3p arms of pre-miRNAs, LBSizeCleav showed greater prediction accuracy than PHDCleav did. This result suggests that the length of loop/bulge structures is useful for prediction of Dicer cleavage sites. We developed a novel algorithm for feature space mapping based on the length of a loop/bulge for predicting Dicer cleavage sites. The better performance of our method indicates the usefulness of the length of loop/bulge structures for such predictions.
Bias and uncertainty in regression-calibrated models of groundwater flow in heterogeneous media
Cooley, R.L.; Christensen, S.
2006-01-01
Groundwater models need to account for detailed but generally unknown spatial variability (heterogeneity) of the hydrogeologic model inputs. To address this problem we replace the large, m-dimensional stochastic vector β that reflects both small and large scales of heterogeneity in the inputs by a lumped or smoothed m-dimensional approximation Kβ*, where K is an interpolation matrix and β* is a stochastic vector of parameters. Vector β* has small enough dimension to allow its estimation with the available data. The consequence of the replacement is that the model function f(Kβ*) written in terms of the approximate inputs is in error with respect to the same model function written in terms of β, f(β), which is assumed to be nearly exact. The difference f(β) - f(Kβ*), termed model error, is spatially correlated, generates prediction biases, and causes standard confidence and prediction intervals to be too small. Model error is accounted for in the weighted nonlinear regression methodology developed to estimate β* and assess model uncertainties by incorporating the second-moment matrix of the model errors into the weight matrix. Techniques developed by statisticians to analyze classical nonlinear regression methods are extended to analyze the revised method. The analysis develops analytical expressions for bias terms reflecting the interaction of model nonlinearity and model error, for correction factors needed to adjust the sizes of confidence and prediction intervals for this interaction, and for correction factors needed to adjust the sizes of confidence and prediction intervals for possible use of a diagonal weight matrix in place of the correct one. If terms expressing the degree of intrinsic nonlinearity for f(β) and f(Kβ*) are small, then most of the biases are small and the correction factors are reduced in magnitude. Biases, correction factors, and confidence and prediction intervals were obtained for a test problem for which model error is large to test robustness of the methodology. Numerical results conform with the theoretical analysis. © 2005 Elsevier Ltd. All rights reserved.
The Perturbational MO Method for Saturated Systems.
ERIC Educational Resources Information Center
Herndon, William C.
1979-01-01
Summarizes a theoretical approach using nonbonding MO's and perturbation theory to correlate properties of saturated hydrocarbons. Discussion is limited to properties that are correctly predicted using this method. Suggests that such calculations can be carried out quickly in organic chemistry. (Author/SA)
NASA Technical Reports Server (NTRS)
Halford, G. R.
1983-01-01
The presentation focuses primarily on the progress made at NASA Lewis Research Center in understanding the phenomenological processes of high-temperature fatigue of metals for the purpose of calculating the lives of turbine engine hot-section components. Improved understanding resulted in the development of accurate and physically correct life prediction methods, such as Strainrange Partitioning for calculating creep-fatigue interactions and the Double Linear Damage Rule for predicting potentially severe interactions between high- and low-cycle fatigue. Examples of other life prediction methods are also discussed. Previously announced in STAR as A83-12159.
Olayan, Rawan S; Ashoor, Haitham; Bajic, Vladimir B
2018-04-01
Computationally finding drug-target interactions (DTIs) is a convenient strategy to identify new DTIs at low cost with reasonable accuracy. However, current DTI prediction methods suffer from high false-positive prediction rates. We developed DDR, a novel method that improves DTI prediction accuracy. DDR is based on the use of a heterogeneous graph that contains known DTIs with multiple similarities between drugs and multiple similarities between target proteins. DDR applies a non-linear similarity fusion method to combine different similarities. Before fusion, DDR performs a pre-processing step where a subset of similarities is selected in a heuristic process to obtain an optimized combination of similarities. Then, DDR applies a random forest model using different graph-based features extracted from the DTI heterogeneous graph. Using 5 repeats of 10-fold cross-validation, three testing setups, and the weighted average of area under the precision-recall curve (AUPR) scores, we show that DDR significantly reduces the AUPR score error relative to the next best state-of-the-art method for predicting DTIs by 34% when the drugs are new, by 23% when the targets are new, and by 34% when the drugs and the targets are known but not all DTIs between them are known. Using independent sources of evidence, we verify as correct 22 out of the top 25 DDR novel predictions. This suggests that DDR can be used as an efficient method to identify correct DTIs. The data and code are provided at https://bitbucket.org/RSO24/ddr/. vladimir.bajic@kaust.edu.sa. Supplementary data are available at Bioinformatics online.
Vickers, Andrew J; Cronin, Angel M; Elkin, Elena B; Gonen, Mithat
2008-01-01
Background: Decision curve analysis is a novel method for evaluating diagnostic tests, prediction models and molecular markers. It combines the mathematical simplicity of accuracy measures, such as sensitivity and specificity, with the clinical applicability of decision analytic approaches. Most critically, decision curve analysis can be applied directly to a data set, and does not require the sort of external data on costs, benefits and preferences typically required by traditional decision analytic techniques. Methods: In this paper we present several extensions to decision curve analysis, including correction for overfit, confidence intervals, application to censored data (including competing risk) and calculation of decision curves directly from predicted probabilities. All of these extensions are based on straightforward methods that have previously been described in the literature for application to analogous statistical techniques. Results: Simulation studies showed that repeated 10-fold cross-validation provided the best method for correcting a decision curve for overfit. The method for applying decision curves to censored data had little bias and coverage was excellent; for competing risk, decision curves were appropriately affected by the incidence of the competing risk and the association between the competing risk and the predictor of interest. Calculation of decision curves directly from predicted probabilities led to a smoothing of the decision curve. Conclusion: Decision curve analysis can be easily extended to many of the applications common to performance measures for prediction models. Software to implement decision curve analysis is provided. PMID:19036144
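A hedged sketch of the quantity a decision curve plots: the net benefit of treating patients whose predicted risk exceeds a threshold probability, computed directly from predicted probabilities and observed outcomes. The synthetic outcomes and risks are assumptions, and the paper's overfit correction and censored-data extensions are not reproduced here.

```python
# Hedged sketch: net benefit across threshold probabilities for a prediction model
# and for the "treat all" strategy.
import numpy as np

def net_benefit(y, p, thresholds):
    """Net benefit of treating patients with predicted risk >= threshold."""
    n = len(y)
    nb = []
    for t in thresholds:
        treat = p >= t
        tp = np.sum(treat & (y == 1))
        fp = np.sum(treat & (y == 0))
        nb.append(tp / n - fp / n * t / (1.0 - t))
    return np.array(nb)

rng = np.random.default_rng(0)
y = rng.binomial(1, 0.3, 500)                                              # observed outcomes
p = np.clip(0.3 + 0.4 * (y - 0.3) + rng.normal(0, 0.15, 500), 0.01, 0.99)  # a model's risks

thresholds = np.linspace(0.05, 0.60, 12)
print("model    :", np.round(net_benefit(y, p, thresholds), 3))
print("treat all:", np.round(net_benefit(y, np.ones_like(p), thresholds), 3))
```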
Postprocessing for Air Quality Predictions
NASA Astrophysics Data System (ADS)
Delle Monache, L.
2017-12-01
In recent years, air quality (AQ) forecasting has made significant progress towards better predictions, with the goal of protecting the public from harmful pollutants. This progress is the result of improvements in weather and chemical transport models, their coupling, and more accurate emission inventories (e.g., with the development of new algorithms to account in near real-time for fires). Nevertheless, AQ predictions are still affected at times by significant biases which stem from limitations in both weather and chemistry transport models. Those are the result of numerical approximations and the poor representation (and understanding) of important physical and chemical processes. Moreover, although the quality of emission inventories has been significantly improved, they are still one of the main sources of uncertainty in AQ predictions. For operational real-time AQ forecasting, a significant portion of these biases can be reduced with the implementation of postprocessing methods. We will review some of the techniques that have been proposed to reduce both systematic and random errors of AQ predictions, and improve the correlation between predictions and observations of ground-level ozone and surface particulate matter less than 2.5 µm in diameter (PM2.5). These methods, which can be applied to both deterministic and probabilistic predictions, include simple bias-correction techniques, corrections inspired by the Kalman filter, regression methods, and the more recently developed analog-based algorithms. These approaches will be compared and contrasted, and the strengths and weaknesses of each will be discussed.
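A minimal sketch of one of the simplest postprocessing ideas named above, a Kalman-filter-inspired running bias estimate: each day the bias estimate is updated from the latest forecast-observation pair and subtracted from the next raw forecast. The gain value and the synthetic ozone series are illustrative assumptions, not any operational configuration.

```python
# Hedged sketch: recursive bias correction of a daily ozone forecast series.
import numpy as np

rng = np.random.default_rng(0)
days = 120
obs = 45 + 10 * np.sin(np.arange(days) / 10.0) + rng.normal(0, 3, days)   # observed ozone, ppb
raw = obs + 8.0 + rng.normal(0, 3, days)          # model forecast with a persistent +8 ppb bias

gain = 0.2                                        # how fast the bias estimate adapts (assumed)
bias = 0.0
corrected = np.empty(days)
for d in range(days):
    corrected[d] = raw[d] - bias                  # correct today's forecast with yesterday's bias
    bias = (1 - gain) * bias + gain * (raw[d] - obs[d])   # update after verification

print("raw RMSE      :", np.sqrt(np.mean((raw - obs) ** 2)))
print("corrected RMSE:", np.sqrt(np.mean((corrected - obs) ** 2)))
```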
Evaluating the utility of mid-infrared spectral subspaces for predicting soil properties.
Sila, Andrew M; Shepherd, Keith D; Pokhariyal, Ganesh P
2016-04-15
We propose four methods for finding local subspaces in large spectral libraries: (a) cosine angle spectral matching; (b) hit quality index spectral matching; (c) self-organizing maps; and (d) archetypal analysis. We then evaluate prediction accuracies for global and subspace calibration models. These methods were tested on a mid-infrared spectral library containing 1907 soil samples collected from 19 different countries under the Africa Soil Information Service project. Calibration models for pH, Mehlich-3 Ca, Mehlich-3 Al, total carbon and clay soil properties were developed for the whole library and for the subspaces. Root mean square error of prediction was used to evaluate the predictive performance of the subspace and global models, and was computed using a one-third-holdout validation set. The effect of pretreating spectra with different methods was tested for the 1st and 2nd derivative Savitzky-Golay algorithms, multiplicative scatter correction, standard normal variate, and standard normal variate followed by detrending. In summary, the results show that the global models outperformed the subspace models. We therefore conclude that global models are more accurate than local models except in a few cases. For instance, sand and clay root mean square error values from local models based on the archetypal analysis method were 50% poorer than those from the global models, except for subspace models obtained using multiplicative scatter-corrected spectra, which were 12% better. However, the subspace approach provides novel methods for discovering data patterns that may exist in large spectral libraries.
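A hedged sketch of option (a) above, cosine-angle spectral matching: for each new sample, the library spectra most similar in cosine angle form a local calibration subspace. The random spectra, subspace size, and variable names are assumptions; the AfSIS library itself is not used here.

```python
# Hedged sketch: selecting a local calibration subspace by cosine-angle spectral matching.
import numpy as np

rng = np.random.default_rng(0)
library = rng.random((1907, 1700))        # stand-in for 1907 library spectra, 1700 wavenumbers
new_spectrum = rng.random(1700)

def cosine_similarity(lib, s):
    lib_n = lib / np.linalg.norm(lib, axis=1, keepdims=True)
    s_n = s / np.linalg.norm(s)
    return lib_n @ s_n

sim = cosine_similarity(library, new_spectrum)
subspace_idx = np.argsort(sim)[::-1][:200]        # 200 most similar library samples (assumed size)
print("subspace size:", subspace_idx.size,
      "| min cosine in subspace:", sim[subspace_idx].min())
```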
Whittleton, Sarah R; Otero-de-la-Roza, A; Johnson, Erin R
2017-11-14
The crystal structure prediction (CSP) of a given compound from its molecular diagram is a fundamental challenge in computational chemistry with implications in relevant technological fields. A key component of CSP is the method to calculate the lattice energy of a crystal, which allows the ranking of candidate structures. This work is the second part of our investigation to assess the potential of the exchange-hole dipole moment (XDM) dispersion model for crystal structure prediction. In this article, we study the relatively large, nonplanar, mostly flexible molecules in the first five blind tests held by the Cambridge Crystallographic Data Centre. Four of the seven experimental structures are predicted as the energy minimum, and thermal effects are demonstrated to have a large impact on the ranking of at least another compound. As in the first part of this series, delocalization error affects the results for a single crystal (compound X), in this case by detrimentally overstabilizing the π-conjugated conformation of the monomer. Overall, B86bPBE-XDM correctly predicts 16 of the 21 compounds in the five blind tests, a result similar to the one obtained using the best CSP method available to date (dispersion-corrected PW91 by Neumann et al.). Perhaps more importantly, the systems for which B86bPBE-XDM fails to predict the experimental structure as the energy minimum are mostly the same as with Neumann's method, which suggests that similar difficulties (absence of vibrational free energy corrections, delocalization error,...) are not limited to B86bPBE-XDM but affect GGA-based DFT-methods in general. Our work confirms B86bPBE-XDM as an excellent option for crystal energy ranking in CSP and offers a guide to identify crystals (organic salts, conjugated flexible systems) where difficulties may appear.
Predictive modeling and reducing cyclic variability in autoignition engines
Hellstrom, Erik; Stefanopoulou, Anna; Jiang, Li; Larimore, Jacob
2016-08-30
Methods and systems are provided for controlling a vehicle engine to reduce cycle-to-cycle combustion variation. A predictive model is applied to predict cycle-to-cycle combustion behavior of an engine based on observed engine performance variables. Conditions are identified, based on the predicted cycle-to-cycle combustion behavior, that indicate high cycle-to-cycle combustion variation. Corrective measures are then applied to prevent the predicted high cycle-to-cycle combustion variation.
HESS Opinions "Should we apply bias correction to global and regional climate model data?"
NASA Astrophysics Data System (ADS)
Ehret, U.; Zehe, E.; Wulfmeyer, V.; Warrach-Sagi, K.; Liebert, J.
2012-04-01
Despite considerable progress in recent years, output of both Global and Regional Circulation Models is still afflicted with biases to a degree that precludes its direct use, especially in climate change impact studies. This is well known, and to overcome this problem bias correction (BC), i.e. the correction of model output towards observations in a post-processing step for its subsequent application in climate change impact studies, has now become a standard procedure. In this paper we argue that bias correction, which has a considerable influence on the results of impact studies, is not a valid procedure in the way it is currently used: it impairs the advantages of Circulation Models, which are based on established physical laws, by altering spatiotemporal field consistency and relations among variables, and by violating conservation principles. Bias correction largely neglects feedback mechanisms, and it is unclear whether bias correction methods are time-invariant under climate change conditions. Applying bias correction increases agreement of Climate Model output with observations in hindcasts and hence narrows the uncertainty range of simulations and predictions without, however, providing a satisfactory physical justification. This is in most cases not transparent to the end user. We argue that this masks rather than reduces uncertainty, which may lead to avoidable forejudging of end users and decision makers. We present here a brief overview of state-of-the-art bias correction methods, discuss the related assumptions and implications, draw conclusions on the validity of bias correction and propose ways to cope with biased output of Circulation Models in the short term and how to reduce the bias in the long term. The most promising strategy for improved future Global and Regional Circulation Model simulations is the increase in model resolution to the convection-permitting scale in combination with ensemble predictions based on sophisticated approaches for ensemble perturbation. With this article, we advocate communicating the entire uncertainty range associated with climate change predictions openly and hope to stimulate a lively discussion on bias correction among the atmospheric and hydrological community and end users of climate change impact studies.
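For concreteness, a hedged sketch of one widely used bias correction criticized above, empirical quantile mapping: model values are mapped onto the observed distribution by matching empirical quantiles over a common calibration period. This illustrates the mechanics only, not an endorsement; the gamma-distributed precipitation samples and quantile grid are synthetic assumptions.

```python
# Hedged sketch: empirical quantile mapping of biased model output onto observations.
import numpy as np

rng = np.random.default_rng(0)
obs_cal = rng.gamma(2.0, 4.0, 3000)          # observed precipitation, calibration period
mod_cal = rng.gamma(2.0, 5.5, 3000)          # model output, same period (wet bias)
mod_fut = rng.gamma(2.0, 6.0, 1000)          # model output to be corrected

q = np.linspace(0.01, 0.99, 99)
mod_q, obs_q = np.quantile(mod_cal, q), np.quantile(obs_cal, q)

corrected = np.interp(mod_fut, mod_q, obs_q)  # transfer function by linear interpolation
print("raw model mean   :", mod_fut.mean())
print("corrected mean   :", corrected.mean())
print("observed cal mean:", obs_cal.mean())
```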
Kaus, Joseph W; Harder, Edward; Lin, Teng; Abel, Robert; McCammon, J Andrew; Wang, Lingle
2015-06-09
Recent advances in improved force fields and sampling methods have made it possible to calculate protein–ligand binding free energies accurately. Alchemical free energy perturbation (FEP) using an explicit solvent model is one of the most rigorous methods to calculate relative binding free energies. However, for cases where there are high energy barriers separating the relevant conformations that are important for ligand binding, the calculated free energy may depend on the initial conformation used in the simulation due to the lack of complete sampling of all the important regions in phase space. This is particularly true for ligands with multiple possible binding modes separated by high energy barriers, making it difficult to sample all relevant binding modes even with modern enhanced sampling methods. In this paper, we apply a previously developed method that provides a corrected binding free energy for ligands with multiple binding modes by combining the free energy results from multiple alchemical FEP calculations starting from all enumerated poses, and the results are compared with Glide docking and MM-GBSA calculations. From these calculations, the dominant ligand binding mode can also be predicted. We apply this method to a series of ligands that bind to c-Jun N-terminal kinase-1 (JNK1) and obtain improved free energy results. The dominant ligand binding modes predicted by this method agree with the available crystallography, while both Glide docking and MM-GBSA calculations incorrectly predict the binding modes for some ligands. The method also helps separate the force field error from the ligand sampling error, such that deviations in the predicted binding free energy from the experimental values likely indicate possible inaccuracies in the force field. An error in the force field for a subset of the ligands studied was identified using this method, and improved free energy results were obtained by correcting the partial charges assigned to the ligands. This improved the root-mean-square error (RMSE) for the predicted binding free energy from 1.9 kcal/mol with the original partial charges to 1.3 kcal/mol with the corrected partial charges.
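One common way to pool per-pose FEP results into a single binding free energy is to Boltzmann-weight the modes; the sketch below illustrates that general idea and is not necessarily the exact weighting used by the authors.

```python
# Hedged sketch: combine per-binding-mode free energies by Boltzmann weighting.
import math

KB = 0.0019872041  # Boltzmann constant, kcal/(mol*K)

def combined_free_energy(dg_per_mode, temperature=298.15):
    """Return an effective binding free energy (kcal/mol) from a list of
    per-binding-mode free energies, plus the predicted mode populations."""
    beta = 1.0 / (KB * temperature)
    weights = [math.exp(-beta * dg) for dg in dg_per_mode]
    z = sum(weights)
    dg_effective = -math.log(z) / beta
    populations = [w / z for w in weights]      # dominant mode has largest weight
    return dg_effective, populations

# Example: two enumerated poses; the lower-free-energy pose dominates.
dg_eff, pops = combined_free_energy([-8.2, -6.5])
print(round(dg_eff, 2), [round(p, 3) for p in pops])
```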
A review of propeller noise prediction methodology: 1919-1994
NASA Technical Reports Server (NTRS)
Metzger, F. Bruce
1995-01-01
This report summarizes a review of the literature regarding propeller noise prediction methods. The review is divided into six sections: (1) early methods; (2) more recent methods based on earlier theory; (3) more recent methods based on the Acoustic Analogy; (4) more recent methods based on Computational Acoustics; (5) empirical methods; and (6) broadband methods. The report concludes that there are a large number of noise prediction procedures available which vary markedly in complexity. Deficiencies in the accuracy of methods may in many cases be related, not to the methods themselves, but to the accuracy and detail of the aerodynamic inputs used to calculate noise. The steps recommended in the report to provide accurate and easy-to-use prediction methods are: (1) identify reliable test data; (2) define and conduct test programs to fill gaps in the existing data base; (3) identify the most promising prediction methods; (4) evaluate promising prediction methods relative to the data base; (5) identify and correct the weaknesses in the prediction methods, including lack of user friendliness, and include features now available only in research codes; (6) confirm the accuracy of improved prediction methods against the data base; and (7) make the methods widely available and provide training in their use.
Detection of trans–cis flips and peptide-plane flips in protein structures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Touw, Wouter G., E-mail: wouter.touw@radboudumc.nl; Joosten, Robbie P.; Vriend, Gert, E-mail: wouter.touw@radboudumc.nl
A method is presented to detect peptide bonds that need either a trans–cis flip or a peptide-plane flip. A coordinate-based method is presented to detect peptide bonds that need correction either by a peptide-plane flip or by a trans–cis inversion of the peptide bond. When applied to the whole Protein Data Bank, the method predicts 4617 trans–cis flips and many thousands of hitherto unknown peptide-plane flips. A few examples are highlighted for which a correction of the peptide-plane geometry leads to a correction of the understanding of the structure–function relation. All data, including 1088 manually validated cases, are freely available and the method is available from a web server, a web-service interface and through WHAT-CHECK.
NASA Astrophysics Data System (ADS)
Zakaria, M. A.; Majeed, A. P. P. A.; Taha, Z.; Alim, M. M.; Baarath, K.
2018-03-01
The movement of a lower limb exoskeleton requires a reasonably accurate control method for an effective gait therapy session to take place. Trajectory tracking is a nontrivial passive rehabilitation technique for correcting the motion of a patient's impaired limb. This paper proposes an inverse predictive model that is coupled with the forward kinematics of the exoskeleton to estimate the behaviour of the system. A conventional PID control system is used to drive the joint angles towards the desired input from the inverse predictive model. The present study demonstrates that the inverse predictive model is capable of meeting the trajectory demand within an acceptable error tolerance. The findings further suggest that the predictive model of the exoskeleton can predict a correct joint angle command for the system.
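A minimal sketch of the control loop described above: the predictive model is assumed to supply the desired joint angle at each step, and a conventional PID controller drives the joint toward it. The gains and the first-order joint response used here are illustrative placeholders, not values from the paper.

```python
# Illustrative PID trajectory-tracking loop for a single exoskeleton joint.
def pid_step(error, state, kp=20.0, ki=5.0, kd=2.0, dt=0.01):
    """One PID update. 'state' carries the error integral and previous error."""
    integral, prev_error = state
    integral += error * dt
    derivative = (error - prev_error) / dt
    command = kp * error + ki * integral + kd * derivative
    return command, (integral, error)

def track_trajectory(desired_angles, dt=0.01):
    """Track a sequence of desired joint angles with a crude first-order joint model."""
    angle, state, history = 0.0, (0.0, 0.0), []
    for target in desired_angles:
        command, state = pid_step(target - angle, state, dt=dt)
        angle += command * dt          # simplified joint response (assumption)
        history.append(angle)
    return history

# Example: step the joint from 0 to 0.5 rad over 200 control cycles.
print(round(track_trajectory([0.5] * 200)[-1], 3))
```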
Adaptive correction of ensemble forecasts
NASA Astrophysics Data System (ADS)
Pelosi, Anna; Battista Chirico, Giovanni; Van den Bergh, Joris; Vannitsem, Stephane
2017-04-01
Forecasts from numerical weather prediction (NWP) models often suffer from both systematic and non-systematic errors. These are present in both deterministic and ensemble forecasts, and originate from various sources such as model error and subgrid variability. Statistical post-processing techniques can partly remove such errors, which is particularly important when NWP outputs concerning surface weather variables are employed for site-specific applications. Many different post-processing techniques have been developed. For deterministic forecasts, adaptive methods such as the Kalman filter are often used, which sequentially post-process the forecasts by continuously updating the correction parameters as new ground observations become available. These methods are especially valuable when long training data sets do not exist. For ensemble forecasts, well-known techniques are ensemble model output statistics (EMOS), and so-called "member-by-member" approaches (MBM). Here, we introduce a new adaptive post-processing technique for ensemble predictions. The proposed method is a sequential Kalman filtering technique that fully exploits the information content of the ensemble. One correction equation is retrieved and applied to all members; however, the parameters of the regression equation are estimated by exploiting the second-order statistics of the forecast ensemble. We compare our new method with two other techniques: a simple method that makes use of a running bias correction of the ensemble mean, and an MBM post-processing approach that rescales the ensemble mean and spread, based on minimization of the Continuous Ranked Probability Score (CRPS). We perform a verification study for the region of Campania in southern Italy. We use two years (2014-2015) of daily meteorological observations of 2-meter temperature and 10-meter wind speed from 18 ground-based automatic weather stations distributed across the region, comparing them with the corresponding COSMO-LEPS ensemble forecasts. Deterministic verification scores (e.g., mean absolute error, bias) and probabilistic scores (e.g., CRPS) are used to evaluate the post-processing techniques. We conclude that the new adaptive method outperforms the simpler running bias correction. The proposed adaptive method often outperforms the MBM method in removing bias. The MBM method has the advantage of correcting the ensemble spread, although it needs more training data.
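A hedged sketch of the sequential (Kalman-filter-style) idea described above: a single additive bias estimate is updated as each new observation arrives and the same correction is applied to every ensemble member. This is a deliberately simplified illustration, not the authors' full scheme, which also uses the ensemble second-order statistics to set the regression parameters.

```python
# Simplified adaptive bias correction of an ensemble forecast.
import numpy as np

class AdaptiveBias:
    def __init__(self, bias=0.0, var=1.0, process_var=0.01, obs_var=1.0):
        self.bias, self.var = bias, var
        self.process_var, self.obs_var = process_var, obs_var

    def update(self, ensemble, observation):
        """Update the bias estimate from (ensemble mean - observation)."""
        self.var += self.process_var                     # prediction step
        innovation = np.mean(ensemble) - observation - self.bias
        gain = self.var / (self.var + self.obs_var)      # Kalman gain
        self.bias += gain * innovation                   # correction step
        self.var *= (1.0 - gain)

    def correct(self, ensemble):
        """Apply the same correction to all members."""
        return np.asarray(ensemble) - self.bias
```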
A Comparison of Two Approaches to Correction of Restriction of Range in Correlation Analysis
ERIC Educational Resources Information Center
Wiberg, Marie; Sundstrom, Anna
2009-01-01
A common problem in predictive validity studies in the educational and psychological fields, e.g. in educational and employment selection, is restriction in range of the predictor variables. There are several methods for correcting correlations for restriction of range. The aim of this paper was to examine the usefulness of two approaches to…
ERIC Educational Resources Information Center
Pfaffel, Andreas; Schober, Barbara; Spiel, Christiane
2016-01-01
A common methodological problem in the evaluation of the predictive validity of selection methods, e.g. in educational and employment selection, is that the correlation between predictor and criterion is biased. Thorndike's (1949) formulas are commonly used to correct for this biased correlation. An alternative approach is to view the selection…
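For reference, the classical Thorndike (1949) Case 2 correction for direct range restriction on the predictor can be written down directly; a small sketch, with the standard deviations supplied by the analyst:

```python
# Thorndike Case 2 correction: estimate the unrestricted predictor-criterion
# correlation from the restricted-sample correlation and the ratio of
# unrestricted to restricted predictor standard deviations.
import math

def correct_range_restriction(r_restricted, sd_unrestricted, sd_restricted):
    u = sd_unrestricted / sd_restricted
    return (r_restricted * u) / math.sqrt(1.0 + r_restricted**2 * (u**2 - 1.0))

# Example: r = 0.30 in the selected group, predictor SD shrinks from 10 to 6.
print(round(correct_range_restriction(0.30, 10.0, 6.0), 3))   # about 0.46
```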
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Yulan; Hu, Shenyang; Sun, Xin
Here, complex microstructure changes occur in nuclear fuel and structural materials due to the extreme environments of intense irradiation and high temperature. This paper evaluates the role of the phase field method in predicting the microstructure evolution of irradiated nuclear materials and the impact on their mechanical, thermal, and magnetic properties. The paper starts with an overview of the important physical mechanisms of defect evolution and the significant gaps in simulating microstructure evolution in irradiated nuclear materials. Then, the phase field method is introduced as a powerful and predictive tool and its applications to microstructure and property evolution in irradiated nuclear materials are reviewed. The review shows that (1) phase field models can correctly describe important phenomena such as spatially dependent generation, migration, and recombination of defects, radiation-induced dissolution, the Soret effect, strong interfacial energy anisotropy, and elastic interaction; (2) the phase field method can qualitatively and quantitatively simulate two-dimensional and three-dimensional microstructure evolution, including radiation-induced segregation, second phase nucleation, void migration, void and gas bubble superlattice formation, interstitial loop evolution, hydrate formation, and grain growth; and (3) the phase field method correctly predicts the relationships between microstructures and properties. The final section is dedicated to a discussion of the strengths and limitations of the phase field method, as applied to irradiation effects in nuclear materials.
Development of Spatiotemporal Bias-Correction Techniques for Downscaling GCM Predictions
NASA Astrophysics Data System (ADS)
Hwang, S.; Graham, W. D.; Geurink, J.; Adams, A.; Martinez, C. J.
2010-12-01
Accurately representing the spatial variability of precipitation is an important factor for predicting watershed response to climatic forcing, particularly in small, low-relief watersheds affected by convective storm systems. Although Global Circulation Models (GCMs) generally preserve spatial relationships between large-scale and local-scale mean precipitation trends, most GCM downscaling techniques focus on preserving only observed temporal variability on a point-by-point basis, not spatial patterns of events. Downscaled GCM results (e.g., CMIP3 ensembles) have been widely used to predict hydrologic implications of climate variability and climate change in large snow-dominated river basins in the western United States (Diffenbaugh et al., 2008; Adam et al., 2009). However, fewer applications to smaller rain-driven river basins in the southeastern US (where preserving spatial variability of rainfall patterns may be more important) have been reported. In this study a new method was developed to bias-correct GCMs to preserve both the long-term temporal mean and variance of the precipitation data and the spatial structure of daily precipitation fields. Forty-year retrospective simulations (1960-1999) from 16 GCMs were collected (IPCC, 2007; WCRP CMIP3 multi-model database: https://esg.llnl.gov:8443/), and the daily precipitation data at coarse resolution (i.e., 280 km) were interpolated to 12 km spatial resolution and bias corrected using gridded observations over the state of Florida (Maurer et al., 2002; Wood et al., 2002; Wood et al., 2004). In this method, spatial random fields were generated that preserve the observed spatial correlation structure of the historical gridded observations and the spatial mean corresponding to the coarse-scale GCM daily rainfall. The spatiotemporal variability of the spatiotemporally bias-corrected GCMs was evaluated against gridded observations and compared to the original temporally bias-corrected and downscaled CMIP3 data for central Florida. The hydrologic response of two southwest Florida watersheds to the gridded observation data, the original bias-corrected CMIP3 data, and the new spatiotemporally corrected CMIP3 predictions was compared using an integrated surface-subsurface hydrologic model developed by Tampa Bay Water.
Multiscale Quantum Mechanics/Molecular Mechanics Simulations with Neural Networks.
Shen, Lin; Wu, Jingheng; Yang, Weitao
2016-10-11
Molecular dynamics simulation with multiscale quantum mechanics/molecular mechanics (QM/MM) methods is a very powerful tool for understanding the mechanism of chemical and biological processes in solution or in enzymes. However, its computational cost can be too high for many biochemical systems because of the large number of ab initio QM calculations. Semiempirical QM/MM simulations are much more efficient, and their accuracy can be improved with a correction to reach the ab initio QM/MM level; the computational cost of the ab initio calculations required for this correction then determines the overall efficiency. In this paper we developed a neural network method for QM/MM calculations as an extension of the neural-network representation reported by Behler and Parrinello. With this approach, the potential energy of any configuration along the reaction path for a given QM/MM system can be predicted at the ab initio QM/MM level based on the semiempirical QM/MM simulations. We further applied this method to three reactions in water to calculate the free energy changes. The free-energy profile obtained from the semiempirical QM/MM simulation is corrected to the ab initio QM/MM level with the potential energies predicted with the constructed neural network. The results are in excellent agreement with the reference data that are obtained from the ab initio QM/MM molecular dynamics simulation or corrected with direct ab initio QM/MM potential energies. Compared with the correction using direct ab initio QM/MM potential energies, our method shows a speed-up of one to two orders of magnitude. It demonstrates that the neural network method combined with the semiempirical QM/MM calculation can be an efficient and reliable strategy for chemical reaction simulations.
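A hedged sketch of the delta-learning idea described above: a network is trained on the difference between ab initio and semiempirical energies for a set of configurations and then used to promote semiempirical energies along the reaction path to the ab initio level. scikit-learn's MLPRegressor is used here only as a simple stand-in for a Behler-Parrinello-style network, and "descriptors" is any fixed-length representation of a configuration.

```python
# Delta-learning correction of semiempirical QM/MM energies (illustrative).
import numpy as np
from sklearn.neural_network import MLPRegressor

def train_correction(descriptors, e_semiempirical, e_ab_initio):
    """Fit a network to the ab initio minus semiempirical energy difference."""
    delta = np.asarray(e_ab_initio) - np.asarray(e_semiempirical)
    model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000)
    model.fit(np.asarray(descriptors), delta)
    return model

def corrected_energies(model, descriptors, e_semiempirical):
    """Semiempirical energies plus the learned ab initio correction."""
    return np.asarray(e_semiempirical) + model.predict(np.asarray(descriptors))
```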
Statistical bias correction modelling for seasonal rainfall forecast for the case of Bali island
NASA Astrophysics Data System (ADS)
Lealdi, D.; Nurdiati, S.; Sopaheluwakan, A.
2018-04-01
Rainfall is an element of climate that strongly influences the agricultural sector. Rainfall pattern and distribution largely determine the sustainability of agricultural activities. Therefore, information on rainfall is very useful for the agriculture sector and for farmers in anticipating the possibility of extreme events, which often cause failures of agricultural production. This research aims to identify the biases in seasonal rainfall forecast products from ECMWF (European Centre for Medium-Range Weather Forecasts) and to build a transfer function that corrects the distribution biases, yielding a new prediction model based on a quantile mapping approach. We apply this approach to the case of Bali Island and find that correcting the systematic biases of the model improves the forecasts: the corrected prediction model outperforms the raw forecasts. In general, the bias correction approach performs better during the rainy season than during the dry season.
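A minimal empirical quantile-mapping sketch, assuming daily rainfall arrays for a common historical period are available: each forecast value is mapped to the observed value at the same empirical quantile of the model climatology. The parametric transfer function used in the study may differ from this empirical version.

```python
# Empirical quantile mapping for rainfall bias correction.
import numpy as np

def quantile_map(forecast, model_hist, obs_hist):
    """Bias-correct 'forecast' using the model and observed climatologies."""
    model_sorted = np.sort(np.asarray(model_hist))
    obs_sorted = np.sort(np.asarray(obs_hist))
    # Empirical CDF value of each forecast within the model climatology
    quantiles = np.searchsorted(model_sorted, forecast) / len(model_sorted)
    quantiles = np.clip(quantiles, 0.0, 1.0)
    # Map onto the observed distribution at the same quantile
    return np.quantile(obs_sorted, quantiles)

# Example: a model climatology wetter than observations is scaled down.
rng = np.random.default_rng(0)
model_hist = rng.gamma(2.0, 6.0, size=1000)   # model climatology (mm/day)
obs_hist = rng.gamma(2.0, 4.0, size=1000)     # observed climatology (mm/day)
print(quantile_map(np.array([5.0, 20.0]), model_hist, obs_hist).round(1))
```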
Application of Pressure-Based Wall Correction Methods to Two NASA Langley Wind Tunnels
NASA Technical Reports Server (NTRS)
Iyer, V.; Everhart, J. L.
2001-01-01
This paper is a description and status report on the implementation and application of the WICS wall interference method to the National Transonic Facility (NTF) and the 14 x 22-ft subsonic wind tunnel at the NASA Langley Research Center. The method calculates free-air corrections to the measured parameters and aerodynamic coefficients for full span and semispan models when the tunnels are in the solid-wall configuration. From a data quality point of view, these corrections remove predictable bias errors in the measurement due to the presence of the tunnel walls. At the NTF, the method is operational in the off-line and on-line modes, with three tests already computed for wall corrections. At the 14 x 22-ft tunnel, initial implementation has been done based on a test on a full span wing. This facility is currently scheduled for an upgrade to its wall pressure measurement system. With the addition of new wall orifices and other instrumentation upgrades, a significant improvement in the wall correction accuracy is expected.
Adaptable gene-specific dye bias correction for two-channel DNA microarrays.
Margaritis, Thanasis; Lijnzaad, Philip; van Leenen, Dik; Bouwmeester, Diane; Kemmeren, Patrick; van Hooff, Sander R; Holstege, Frank C P
2009-01-01
DNA microarray technology is a powerful tool for monitoring gene expression or for finding the location of DNA-bound proteins. DNA microarrays can suffer from gene-specific dye bias (GSDB), causing some probes to be affected more by the dye than by the sample. This results in large measurement errors, which vary considerably for different probes and also across different hybridizations. GSDB is not corrected by conventional normalization and has been difficult to address systematically because of its variance. We show that GSDB is influenced by label incorporation efficiency, explaining the variation of GSDB across different hybridizations. A correction method (Gene- And Slide-Specific Correction, GASSCO) is presented, whereby sequence-specific corrections are modulated by the overall bias of individual hybridizations. GASSCO outperforms earlier methods and works well on a variety of publicly available datasets covering a range of platforms, organisms and applications, including ChIP on chip. A sequence-based model is also presented, which predicts which probes will suffer most from GSDB, useful for microarray probe design and correction of individual hybridizations. Software implementing the method is publicly available.
Prediction of ground effects on aircraft noise
NASA Technical Reports Server (NTRS)
Pao, S. P.; Wenzel, A. R.; Oncley, P. B.
1978-01-01
A unified method is recommended for predicting ground effects on noise. This method may be used in flyover noise predictions and in correcting static test-stand data to free-field conditions. The recommendation is based on a review of recent progress in the theory of ground effects and of the experimental evidence which supports this theory. It is shown that a surface wave must sometimes be included in the prediction method. Prediction equations are collected conveniently in a single section of the paper. Methods of measuring ground impedance and the resulting ground-impedance data are also reviewed because the recommended method is based on a locally reactive impedance boundary model. Current practices for estimating ground effects are reviewed, and consideration is given to practical problems in applying the recommended method. These problems include finite frequency-band filters, finite source dimension, wind and temperature gradients, and signal incoherence.
Carluccio, Giuseppe; Bruno, Mary; Collins, Christopher M
2016-05-01
To present a novel method for rapid prediction of temperature in vivo for a series of pulse sequences with differing levels and distributions of specific energy absorption rate (SAR). After the temperature response to a brief period of heating is characterized, a rapid estimate of temperature during a series of periods at different heating levels is made using a linear heat equation and impulse-response (IR) concepts. Here the initial characterization and long-term prediction for a complete spine exam are made with Pennes' bioheat equation, in which, at first, core body temperature is allowed to increase and local perfusion is not. Then corrections through time allowing variation in local perfusion are introduced. The fast IR-based method predicted the maximum temperature increase to within 1% of that from a full finite difference simulation, but required less than 3.5% of the computation time. Even higher accelerations are possible depending on the time step size chosen, at the cost of temporal resolution. Correction for temperature-dependent perfusion requires negligible additional time and can be adjusted to be more or less conservative than the corresponding finite difference simulation. With appropriate methods, it is possible to rapidly predict temperature increase throughout the body for actual MR examinations. © 2015 Wiley Periodicals, Inc.
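A hedged sketch of the impulse-response idea described above: once the temperature response to a brief unit-SAR heating period has been characterized, the temperature rise for any piecewise-constant SAR time course follows from linear superposition, i.e. a discrete convolution. The arrays and units below are illustrative only.

```python
# Impulse-response (IR) based temperature prediction by linear superposition.
import numpy as np

def temperature_rise(sar_levels, impulse_response):
    """Temperature-rise time course for a sequence of per-period SAR levels.

    impulse_response[k] is the temperature rise k periods after a single
    period of unit-SAR heating (obtained once from a full bioheat simulation).
    """
    return np.convolve(sar_levels, impulse_response)[: len(sar_levels)]

# Example: an exam modeled as alternating high- and low-SAR sequences.
impulse_response = np.array([0.10, 0.06, 0.03, 0.015, 0.007])  # degC per (W/kg)
sar_levels = np.array([2.0, 2.0, 0.5, 3.0, 3.0, 0.5])          # W/kg per period
print(temperature_rise(sar_levels, impulse_response).round(3))
```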
Simulating the electrohydrodynamics of a viscous droplet
NASA Astrophysics Data System (ADS)
Theillard, Maxime; Saintillan, David
2016-11-01
We present a novel numerical approach for the simulation of a viscous drop placed in an electric field, in two and three spatial dimensions. Our method is constructed as a stable projection method on Quad/Octree grids. Using a modified pressure correction, we are able to alleviate the standard time-step restriction incurred by capillary forces. In weak electric fields, our results match remarkably well with the predictions of the Taylor-Melcher leaky dielectric model. In strong electric fields the so-called Quincke rotation is correctly reproduced.
Kouri, Donald J [Houston, TX; Vijay, Amrendra [Houston, TX; Zhang, Haiyan [Houston, TX; Zhang, Jingfeng [Houston, TX; Hoffman, David K [Ames, IA
2007-05-01
A method and system for solving the inverse acoustic scattering problem using an iterative approach with consideration of half-off-shell transition matrix element (near-field) information. The Volterra inverse series correctly predicts the first two moments of the interaction, while the Fredholm inverse series is correct only for the first moment; the Volterra approach also provides a method for exactly obtaining interactions that can be written as a sum of delta functions.
Vickers, Andrew J; Cronin, Angel M; Elkin, Elena B; Gonen, Mithat
2008-11-26
Decision curve analysis is a novel method for evaluating diagnostic tests, prediction models and molecular markers. It combines the mathematical simplicity of accuracy measures, such as sensitivity and specificity, with the clinical applicability of decision analytic approaches. Most critically, decision curve analysis can be applied directly to a data set, and does not require the sort of external data on costs, benefits and preferences typically required by traditional decision analytic techniques. In this paper we present several extensions to decision curve analysis including correction for overfit, confidence intervals, application to censored data (including competing risk) and calculation of decision curves directly from predicted probabilities. All of these extensions are based on straightforward methods that have previously been described in the literature for application to analogous statistical techniques. Simulation studies showed that repeated 10-fold crossvalidation provided the best method for correcting a decision curve for overfit. The method for applying decision curves to censored data had little bias and coverage was excellent; for competing risk, decision curves were appropriately affected by the incidence of the competing risk and the association between the competing risk and the predictor of interest. Calculation of decision curves directly from predicted probabilities led to a smoothing of the decision curve. Decision curve analysis can be easily extended to many of the applications common to performance measures for prediction models. Software to implement decision curve analysis is provided.
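The core decision-curve calculation can be written down compactly: at each threshold probability pt, net benefit is the true-positive rate minus the false-positive rate weighted by the odds of the threshold. A minimal sketch, assuming binary outcomes and model-predicted probabilities are available (the extensions for overfit correction, censoring, and competing risks described above are not shown):

```python
# Net benefit across threshold probabilities (basic decision curve analysis).
import numpy as np

def net_benefit(y, p, thresholds):
    """y: 0/1 outcomes; p: predicted probabilities; thresholds: values in (0, 1)."""
    y = np.asarray(y, dtype=float)
    p = np.asarray(p, dtype=float)
    n = len(y)
    out = []
    for pt in thresholds:
        treat = p >= pt
        tp = np.sum(treat & (y == 1))
        fp = np.sum(treat & (y == 0))
        out.append(tp / n - fp / n * pt / (1.0 - pt))
    return np.array(out)

def net_benefit_treat_all(y, thresholds):
    """Reference strategy of treating everyone, for comparison on the same curve."""
    prevalence = np.mean(y)
    return np.array([prevalence - (1 - prevalence) * pt / (1 - pt)
                     for pt in thresholds])
```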
LeBlanc, Julia K; DeWitt, Jon; Johnson, Cynthia; Okumu, Wycliffe; McGreevy, Kathleen; Symms, Michelle; McHenry, Lee; Sherman, Stuart; Imperiale, Thomas
2009-04-01
The efficacy of 1-injection versus a 2-injections method of EUS-guided celiac plexus block (EUS-CPB) in patients with chronic pancreatitis is not known. To compare the clinical effectiveness and safety of EUS-CPB by using 1 versus 2 injections in patients with chronic pancreatitis and pain. The secondary aim is to identify factors that predict responsiveness. A prospective randomized study. EUS-CPB was performed by using bupivacaine and triamcinolone injected into 1 or 2 sites at the level of the celiac trunk during a single EUS-CPB procedure. Duration of pain relief, onset of pain relief, and complications. Fifty [corrected] subjects were enrolled (23 received 1 injection, 27 [corrected] received 2 injections). The median duration of pain relief in the 31 responders was 28 days (range 1-673 days). [corrected] Fifteen [corrected] of 23 (65%) [corrected] subjects who received 1 injection [corrected] had relief from pain compared with 16 of 27 (59%) [corrected] subjects who received 2 injections [corrected] (P = .67). [corrected] The median times to onset in the 1-injection and 2-injections groups were 21 and 14 days, respectively (P = .99). No correlation existed between duration of pain relief and time to onset of pain relief or onset within 24 hours. Age, sex, race, prior EUS-CPB, and smoking or alcohol history did not predict duration of pain relief. Telephone interviewers were not blinded. There was no difference in duration of pain relief or onset of pain relief in subjects with chronic pancreatitis and pain when the same total amount of medication was delivered in 1 or 2 injections during a single EUS-CPB procedure. Both methods were safe.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nozirov, Farhod, E-mail: teobaldk@gmail.com, E-mail: farhod.nozirov@gmail.com; Stachów, Michał, E-mail: michal.stachow@gmail.com; Kupka, Teobald, E-mail: teobaldk@gmail.com, E-mail: farhod.nozirov@gmail.com
2014-04-14
A theoretical prediction of nuclear magnetic shieldings and indirect spin-spin coupling constants in 1,1-, cis- and trans-1,2-difluoroethylenes is reported. The results obtained using density functional theory (DFT) combined with large basis sets and gauge-independent atomic orbital calculations were critically compared with experiment and with conventional, higher-level correlated electronic structure methods. Accurate structural, vibrational, and NMR parameters of difluoroethylenes were obtained using several density functionals combined with dedicated basis sets. B3LYP/6-311++G(3df,2pd) optimized structures of difluoroethylenes closely reproduced experimental geometries and earlier reported benchmark coupled cluster results, while BLYP/6-311++G(3df,2pd) produced accurate harmonic vibrational frequencies. The most accurate vibrations were obtained using B3LYP/6-311++G(3df,2pd) with correction for anharmonicity. The Becke half-and-half (BHandH) density functional predicted more accurate 19F isotropic shieldings, and van Voorhis and Scuseria's τ-dependent gradient-corrected correlation functional yielded better carbon shieldings than B3LYP. A surprisingly good performance of the Hartree-Fock (HF) method in predicting nuclear shieldings in these molecules was observed. Inclusion of the zero-point vibrational correction markedly improved agreement with experiment for nuclear shieldings calculated by the HF, MP2, CCSD, and CCSD(T) methods but worsened the DFT results. A threefold improvement in accuracy when predicting 2J(FF) in 1,1-difluoroethylene was observed for the BHandH density functional compared to B3LYP (the deviations from experiment were −46 vs. −115 Hz).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lutsker, V.; Niehaus, T. A., E-mail: thomas.niehaus@physik.uni-regensburg.de; Aradi, B.
2015-11-14
Bridging the gap between first principles methods and empirical schemes, the density functional based tight-binding method (DFTB) has become a versatile tool in predictive atomistic simulations over the past years. One of the major restrictions of this method is the limitation to local or gradient corrected exchange-correlation functionals. This excludes the important class of hybrid or long-range corrected functionals, which are advantageous in thermochemistry, as well as in the computation of vibrational, photoelectron, and optical spectra. The present work provides a detailed account of the implementation of DFTB for a long-range corrected functional in generalized Kohn-Sham theory. We apply the method to a set of organic molecules and compare ionization potentials and electron affinities with the original DFTB method and higher level theory. The new scheme cures the significant overpolarization in electric fields found for local DFTB, which parallels the functional dependence in first principles density functional theory (DFT). At the same time, the computational savings with respect to full DFT calculations are not compromised as evidenced by numerical benchmark data.
Wilkoff, B L; Kühlkamp, V; Volosin, K; Ellenbogen, K; Waldecker, B; Kacet, S; Gillberg, J M; DeSouza, C M
2001-01-23
One of the perceived benefits of dual-chamber implantable cardioverter-defibrillators (ICDs) is the reduction in inappropriate therapy due to new detection algorithms. It was the purpose of the present investigation to propose methods to minimize bias during such comparisons and to report the arrhythmia detection clinical results of the PR Logic dual-chamber detection algorithm in the GEM DR ICD in the context of these methods. Between November 1997 and October 1998, 933 patients received the GEM DR ICD in this prospective multicenter study. A total of 4856 sustained arrhythmia episodes (n=311) with stored electrogram and marker channel were classified by the investigators; 3488 episodes (n=232) were ventricular tachycardia (VT)/ventricular fibrillation (VF), and 1368 episodes (n=149) were supraventricular tachycardia (SVT). The overall detection results were corrected for multiple episodes within a patient with the generalized estimating equations (GEE) method with an exchangeable correlation structure between episodes. The relative sensitivity for detection of sustained VT and/or VF was 100.0% (3488 of 3488, n=232; 95% CI 98.3% to 100%), the VT/VF positive predictivity was 88.4% uncorrected (3488 of 3945, n=278) and 78.1% corrected (95% CI 73.3% to 82.3%) with the GEE method, and the SVT positive predictivity was 100.0% (911 of 911, n=101; 95% CI 96% to 100%). A structured approach to analysis limits the bias inherent in the evaluation of tachycardia discrimination algorithms through the use of relative VT/VF sensitivity, VT/VF positive predictivity, and SVT positive predictivity along with corrections for multiple tachycardia episodes in a single patient.
Effects of Barometric Fluctuations on Well Water-Level Measurements and Aquifer Test Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spane, Frank A.
1999-12-16
This report examines the effects of barometric fluctuations on well water-level measurements and evaluates adjustment and removal methods for determining areal aquifer head conditions and aquifer test analysis. Two examples of Hanford Site unconfined aquifer tests are examined that demonstrate barometric response analysis and illustrate the predictive/removal capabilities of various methods for well water-level and aquifer total head values. Good predictive/removal characteristics were demonstrated, with the best corrective results provided by multiple-regression deconvolution methods.
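A hedged sketch of the multiple-regression deconvolution idea mentioned above: water-level changes are regressed on current and lagged barometric-pressure changes to estimate a barometric response function, and the predicted barometric response is then subtracted from the record. The lag count and arrays are illustrative and not taken from the report.

```python
# Multiple-regression deconvolution for barometric correction of well records.
import numpy as np

def barometric_correction(water_level, baro_pressure, n_lags=12):
    water_level = np.asarray(water_level, dtype=float)
    baro_pressure = np.asarray(baro_pressure, dtype=float)
    dwl = np.diff(water_level)
    dbp = np.diff(baro_pressure)
    rows = len(dwl) - n_lags
    # Design matrix of current and lagged barometric-pressure changes
    X = np.column_stack([dbp[n_lags - k : n_lags - k + rows]
                         for k in range(n_lags + 1)])
    y = dwl[n_lags:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)   # barometric response function
    corrected = water_level.copy()
    corrected[n_lags + 1:] -= np.cumsum(X @ coeffs)  # remove predicted response
    return corrected, coeffs
```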
Model-Based Control of Observer Bias for the Analysis of Presence-Only Data in Ecology
Warton, David I.; Renner, Ian W.; Ramp, Daniel
2013-01-01
Presence-only data, where information is available concerning species presence but not species absence, are subject to bias due to observers being more likely to visit and record sightings at some locations than others (hereafter “observer bias”). In this paper, we describe and evaluate a model-based approach to accounting for observer bias directly – by modelling presence locations as a function of known observer bias variables (such as accessibility variables) in addition to environmental variables, then conditioning on a common level of bias to make predictions of species occurrence free of such observer bias. We implement this idea using point process models with a LASSO penalty, a new presence-only method related to maximum entropy modelling, that implicitly addresses the “pseudo-absence problem” of where to locate pseudo-absences (and how many). The proposed method of bias-correction is evaluated using systematically collected presence/absence data for 62 plant species endemic to the Blue Mountains near Sydney, Australia. It is shown that modelling and controlling for observer bias significantly improves the accuracy of predictions made using presence-only data, and usually improves predictions as compared to pseudo-absence or “inventory” methods of bias correction based on absences from non-target species. Future research will consider the potential for improving the proposed bias-correction approach by estimating the observer bias simultaneously across multiple species. PMID:24260167
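A hedged sketch of the bias-correction idea described above, using an L1-penalised logistic regression on presence points versus background (pseudo-absence) points as a simple stand-in for the penalised point process model. Accessibility ("observer bias") covariates are included when fitting and then held at a common reference level when predicting, so that the predictions reflect environment only. Column layouts and the penalty strength are illustrative.

```python
# Presence-background model with observer-bias covariates held fixed at prediction time.
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_presence_model(X_env, X_bias, y_presence):
    """y_presence: 1 for presence records, 0 for background points."""
    X = np.column_stack([X_env, X_bias])
    model = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
    return model.fit(X, y_presence)

def predict_bias_free(model, X_env, bias_reference):
    """Predict occurrence scores with bias covariates set to a common level."""
    X_bias = np.tile(bias_reference, (len(X_env), 1))
    return model.predict_proba(np.column_stack([X_env, X_bias]))[:, 1]
```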
LASIK versus photorefractive keratectomy for high myopic (> 3 diopter) astigmatism.
Katz, Toam; Wagenfeld, Lars; Galambos, Peter; Darrelmann, Benedikt Große; Richard, Gisbert; Linke, Stephan Johannes
2013-12-01
To compare the efficacy, safety, predictability, and vector analysis indices of LASIK and photorefractive keratectomy (PRK) for correction of high cylinder of greater than 3 diopters (D) in myopic eyes. The efficacy, safety, and predictability of LASIK or PRK performed in 114 consecutive randomly selected myopic eyes with an astigmatism of greater than 3 D were retrospectively analyzed at the 2- to 6-month follow-up visits. Vector analysis of the cylindrical correction was compared between the treatment groups. A total of 57 eyes receiving PRK and 57 eyes receiving LASIK of 114 refractive surgery candidates were enrolled in the study. No statistically significant difference in efficacy [efficacy index = 0.76 (±0.32) for PRK vs 0.74 (±0.19) for LASIK (P = .82)], safety [safety index = 1.10 (±0.26) for PRK vs 1.01 (±0.17) for LASIK (P = .121)], or predictability [achieved astigmatism < 1 D in 39% of PRK- and 54% of LASIK-treated eyes, and < 2 D in 88% of PRK- and 89% of LASIK-treated eyes (P = .218)] was demonstrated. Using Alpins vector analysis, the surgically induced astigmatism and difference vector were not significantly different between the surgery methods, whereas the correction index showed a slight and significant advantage of LASIK over PRK (1.25 for PRK and 1.06 for LASIK, P < .001). LASIK and PRK are comparably safe, effective, and predictable procedures for excimer laser correction of high astigmatism of greater than 3 D in myopic eyes. Predictability of the correction of the cylindrical component is lower than that of the spherical equivalent. Copyright 2013, SLACK Incorporated.
Predicting Correctness of Problem Solving from Low-Level Log Data in Intelligent Tutoring Systems
ERIC Educational Resources Information Center
Cetintas, Suleyman; Si, Luo; Xin, Yan Ping; Hord, Casey
2009-01-01
This paper proposes a learning based method that can automatically determine how likely a student is to give a correct answer to a problem in an intelligent tutoring system. Only log files that record students' actions with the system are used to train the model, therefore the modeling process doesn't require expert knowledge for identifying…
Chance-corrected classification for use in discriminant analysis: Ecological applications
Titus, K.; Mosher, J.A.; Williams, B.K.
1984-01-01
A method for evaluating the classification table from a discriminant analysis is described. The statistic, kappa, is useful to ecologists in that it removes the effects of chance. It is useful even with equal group sample sizes although the need for a chance-corrected measure of prediction becomes greater with more dissimilar group sample sizes. Examples are presented.
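The chance-corrected statistic referred to above is computed directly from the classification table: kappa = (po − pe) / (1 − pe), where po is the observed proportion correctly classified and pe the proportion expected by chance from the table margins. A small sketch:

```python
# Cohen's kappa for a discriminant-analysis classification table.
import numpy as np

def cohen_kappa(confusion):
    """confusion[i, j]: number of cases of group i classified into group j."""
    confusion = np.asarray(confusion, dtype=float)
    total = confusion.sum()
    po = np.trace(confusion) / total
    pe = np.sum(confusion.sum(axis=0) * confusion.sum(axis=1)) / total**2
    return (po - pe) / (1.0 - pe)

# Example: 75% raw accuracy but kappa = 0.5 once chance agreement is removed.
print(round(cohen_kappa([[45, 5], [20, 30]]), 3))
```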
NASA Astrophysics Data System (ADS)
Tian, D.; Medina, H.
2017-12-01
Post-processing of medium-range reference evapotranspiration (ETo) forecasts based on numerical weather prediction (NWP) models has the potential to improve the quality and utility of these forecasts. This work compares the performance of several post-processing methods for correcting ETo forecasts over the continental U.S. generated from The Observing System Research and Predictability Experiment (THORPEX) Interactive Grand Global Ensemble (TIGGE) database using data from Europe (EC), the United Kingdom (MO), and the United States (NCEP). The post-processing techniques considered are: simple bias correction, the use of multimodels, Ensemble Model Output Statistics (EMOS, Gneiting et al., 2005), and Bayesian Model Averaging (BMA, Raftery et al., 2005). ETo estimates based on quality-controlled U.S. Regional Climate Reference Network measurements, computed with the FAO 56 Penman-Monteith equation, are adopted as the baseline. EMOS and BMA are generally the most efficient post-processing techniques for the ETo forecasts. Nevertheless, the simple bias correction of the best model is commonly much more rewarding than using multimodel raw forecasts. Our results demonstrate the potential of different forecasting and post-processing frameworks in operational evapotranspiration and irrigation advisory systems at the national scale.
Campbell-Washburn, Adrienne E; Xue, Hui; Lederman, Robert J; Faranesh, Anthony Z; Hansen, Michael S
2016-06-01
MRI-guided interventions demand high frame rate imaging, making fast imaging techniques such as spiral imaging and echo planar imaging (EPI) appealing. In this study, we implemented a real-time distortion correction framework to enable the use of these fast acquisitions for interventional MRI. Distortions caused by gradient waveform inaccuracies were corrected using the gradient impulse response function (GIRF), which was measured by standard equipment and saved as a calibration file on the host computer. This file was used at runtime to calculate the predicted k-space trajectories for image reconstruction. Additionally, the off-resonance reconstruction frequency was modified in real time to interactively deblur spiral images. Real-time distortion correction for arbitrary image orientations was achieved in phantoms and healthy human volunteers. The GIRF-predicted k-space trajectories matched measured k-space trajectories closely for spiral imaging. Spiral and EPI image distortion was visibly improved using the GIRF-predicted trajectories. The GIRF calibration file showed no systematic drift in 4 months and was demonstrated to correct distortions after 30 min of continuous scanning despite gradient heating. Interactive off-resonance reconstruction was used to sharpen anatomical boundaries during continuous imaging. This real-time distortion correction framework will enable the use of these high frame rate imaging methods for MRI-guided interventions. Magn Reson Med 75:2278-2285, 2016. © 2015 Wiley Periodicals, Inc.
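A hedged sketch of how a GIRF-based trajectory prediction can be applied in practice: the nominal gradient waveform is filtered by the measured gradient transfer function in the frequency domain, and the predicted k-space trajectory is the scaled cumulative integral of the resulting gradient. The sampling of the GIRF and all values below are illustrative assumptions, not the authors' implementation.

```python
# GIRF-predicted gradient waveform and k-space trajectory (single axis).
import numpy as np

GAMMA = 42.577e6  # gyromagnetic ratio of 1H, Hz/T

def predicted_trajectory(nominal_gradient, girf_freq_response, dt):
    """nominal_gradient: gradient samples in T/m; girf_freq_response: complex
    transfer function sampled on np.fft.fftfreq(len(nominal_gradient), dt)."""
    g_pred = np.fft.ifft(np.fft.fft(nominal_gradient) * girf_freq_response).real
    k_pred = GAMMA * np.cumsum(g_pred) * dt   # trajectory in 1/m
    return g_pred, k_pred
```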
The impact of missing trauma data on predicting massive transfusion
Trickey, Amber W.; Fox, Erin E.; del Junco, Deborah J.; Ning, Jing; Holcomb, John B.; Brasel, Karen J.; Cohen, Mitchell J.; Schreiber, Martin A.; Bulger, Eileen M.; Phelan, Herb A.; Alarcon, Louis H.; Myers, John G.; Muskat, Peter; Cotton, Bryan A.; Wade, Charles E.; Rahbar, Mohammad H.
2013-01-01
INTRODUCTION Missing data are inherent in clinical research and may be especially problematic for trauma studies. This study describes a sensitivity analysis to evaluate the impact of missing data on clinical risk prediction algorithms. Three blood transfusion prediction models were evaluated utilizing an observational trauma dataset with valid missing data. METHODS The PRospective Observational Multi-center Major Trauma Transfusion (PROMMTT) study included patients requiring ≥ 1 unit of red blood cells (RBC) at 10 participating U.S. Level I trauma centers from July 2009 – October 2010. Physiologic, laboratory, and treatment data were collected prospectively up to 24h after hospital admission. Subjects who received ≥ 10 RBC units within 24h of admission were classified as massive transfusion (MT) patients. Correct classification percentages for three MT prediction models were evaluated using complete case analysis and multiple imputation. A sensitivity analysis for missing data was conducted to determine the upper and lower bounds for correct classification percentages. RESULTS PROMMTT enrolled 1,245 subjects. MT was received by 297 patients (24%). The percentage of missing data ranged from 2.2% (heart rate) to 45% (respiratory rate). Proportions of complete cases utilized in the MT prediction models ranged from 41% to 88%. All models demonstrated similar correct classification percentages using complete case analysis and multiple imputation. In the sensitivity analysis, correct classification upper-lower bound ranges per model were 4%, 10%, and 12%. Predictive accuracy for all models using PROMMTT data was lower than reported in the original datasets. CONCLUSIONS Evaluating the accuracy of clinical prediction models with missing data can be misleading, especially with many predictor variables and moderate levels of missingness per variable. The proposed sensitivity analysis describes the influence of missing data on risk prediction algorithms. Reporting upper and lower bounds for the percentage of correct classification may be more informative than multiple imputation, which provided similar results to complete case analysis in this study. PMID:23778514
Vikramaditya, Talapunur; Lin, Shiang-Tai
2017-06-05
Accurate determination of ionization potentials (IPs), electron affinities (EAs), fundamental gaps (FGs), and HOMO and LUMO energy levels of organic molecules plays an important role in modeling and predicting the efficiencies of organic photovoltaics, OLEDs, etc. In this work, we investigate the effects of Hartree-Fock (HF) exchange, correlation energy, and long-range corrections on the prediction of IPs and EAs with hybrid functionals. We observe that an increase in the percentage of HF exchange results in an increase in IPs and a decrease in EAs. Contrary to general expectations, inclusion of both HF exchange and correlation energy (from second-order perturbation theory, MP2) leads to poor predictions. Range-separated hybrid functionals are found to be the most reliable among the various DFT functionals investigated. DFT functionals predict accurate IPs, whereas post-HF methods predict accurate EAs. © 2017 Wiley Periodicals, Inc.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mashouf, Shahram; Department of Radiation Oncology, Sunnybrook Odette Cancer Centre, Toronto, Ontario; Fleury, Emmanuelle
Purpose: The inhomogeneity correction factor (ICF) method provides heterogeneity correction for the fast calculation TG43 formalism in seed brachytherapy. This study compared ICF-corrected plans to their standard TG43 counterparts, looking at their capacity to assess inadequate coverage and/or risk of any skin toxicities for patients who received permanent breast seed implant (PBSI). Methods and Materials: Two-month postimplant computed tomography scans and plans of 140 PBSI patients were used to calculate dose distributions by using the TG43 and the ICF methods. Multiple dose-volume histogram (DVH) parameters of clinical target volume (CTV) and skin were extracted and compared for both ICF and TG43 dose distributions. Short-term (desquamation and erythema) and long-term (telangiectasia) skin toxicity data were available on 125 and 110 of the patients, respectively, at the time of the study. The predictive value of each DVH parameter of skin was evaluated using the area under the receiver operating characteristic (ROC) curve for each toxicity endpoint. Results: Dose-volume histogram parameters of CTV, calculated using the ICF method, showed an overall decrease compared to TG43, whereas those of skin showed an increase, confirming previously reported findings of the impact of heterogeneity with low-energy sources. The ICF methodology enabled us to distinguish patients for whom the CTV V100 and V90 are up to 19% lower compared to TG43, which could present a risk of recurrence not detected when heterogeneities are not accounted for. The ICF method also led to an increase in the prediction of desquamation, erythema, and telangiectasia for 91% of skin DVH parameters studied. Conclusions: The ICF methodology has the advantage of distinguishing any inadequate dose coverage of CTV due to breast heterogeneity, which can be missed by TG43. Use of ICF correction also led to an increase in prediction accuracy of skin toxicities in most cases.
NASA Astrophysics Data System (ADS)
Jochimsen, Thies H.; Schulz, Jessica; Busse, Harald; Werner, Peter; Schaudinn, Alexander; Zeisig, Vilia; Kurch, Lars; Seese, Anita; Barthel, Henryk; Sattler, Bernhard; Sabri, Osama
2015-06-01
This study explores the possibility of using simultaneous positron emission tomography—magnetic resonance imaging (PET-MRI) to estimate the lean body mass (LBM) in order to obtain a standardized uptake value (SUV) which is less dependent on the patients' adiposity. This approach is compared to (1) the commonly-used method based on a predictive equation for LBM, and (2) to using an LBM derived from PET-CT data. It is hypothesized that an MRI-based correction of SUV provides a robust method due to the high soft-tissue contrast of MRI. A straightforward approach to calculate an MRI-derived LBM is presented. It is based on the fat and water images computed from the two-point Dixon MRI primarily used for attenuation correction in PET-MRI. From these images, a water fraction was obtained for each voxel. Averaging over the whole body yielded the weight-normalized LBM. Performance of the new approach in terms of reducing variations of 18F-Fludeoxyglucose SUVs in brain and liver across 19 subjects was compared with results using predictive methods and PET-CT data to estimate the LBM. The MRI-based method reduced the coefficient of variation of SUVs in the brain by 41 ± 10% which is comparable to the reduction by the PET-CT method (35 ± 10%). The reduction of the predictive LBM method was 29 ± 8%. In the liver, the reduction was less clear, presumably due to other sources of variation. In conclusion, employing the Dixon data in simultaneous PET-MRI for calculation of lean body mass provides a brain SUV which is less dependent on patient adiposity. The reduced dependency is comparable to that obtained by CT and predictive equations. Therefore, it is more comparable across patients. The technique does not impose an overhead in measurement time and is straightforward to implement.
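A straightforward sketch of the calculation described above: a per-voxel water fraction from the Dixon fat and water images, a weight-normalised lean body mass from its whole-body average, and an LBM-normalised SUV. Array shapes, masks, and units are illustrative assumptions.

```python
# Dixon-based lean body mass and LBM-normalised SUV (illustrative).
import numpy as np

def lean_body_mass(water_img, fat_img, body_mask, body_weight_kg):
    """Whole-body average water fraction times body weight, in kg."""
    frac = water_img / np.clip(water_img + fat_img, 1e-9, None)  # water fraction
    lbm_fraction = frac[body_mask].mean()        # weight-normalised LBM
    return lbm_fraction * body_weight_kg

def suv_lbm(activity_bq_per_ml, injected_dose_bq, lbm_kg):
    """SUV normalised by lean body mass (grams) instead of total body weight."""
    return activity_bq_per_ml / (injected_dose_bq / (lbm_kg * 1000.0))
```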
Huh, Yeamin; Smith, David E.; Feng, Meihau Rose
2014-01-01
Human clearance prediction for small- and macro-molecule drugs was evaluated and compared using various scaling methods and statistical analysis. Human clearance is generally well predicted using single or multiple species simple allometry for macro- and small-molecule drugs excreted renally. The prediction error is higher for hepatically eliminated small molecules using single or multiple species simple allometry scaling, and it appears that the prediction error is mainly associated with drugs with a low hepatic extraction ratio (Eh). The error in human clearance prediction for hepatically eliminated small molecules was reduced using scaling methods with a correction for maximum life span (MLP) or brain weight (BRW). Human clearance of both small- and macro-molecule drugs is well predicted using the monkey liver blood flow method. Predictions using liver blood flow from other species did not work as well, especially for the small-molecule drugs. PMID:21892879
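As a rough illustration of the scaling approaches compared above, the sketch below fits simple allometry (CL = a·BW^b) across species and adds a maximum-life-span (MLP) correction; all species values are placeholders, not data from the study.

```python
import numpy as np

# Placeholder preclinical data: body weight (kg), clearance (mL/min), max life span (yr)
bw  = np.array([0.25, 2.5, 10.0])
cl  = np.array([2.0, 12.0, 40.0])
mlp = np.array([4.7, 8.0, 20.0])

def simple_allometry(bw, cl, human_bw=70.0):
    # Fit log CL = log a + b log BW and extrapolate to the human body weight
    b, log_a = np.polyfit(np.log(bw), np.log(cl), 1)
    return np.exp(log_a) * human_bw ** b

def mlp_corrected_allometry(bw, cl, mlp, human_bw=70.0, human_mlp=93.4):
    # Scale the product CL * MLP, then divide by the human maximum life span
    b, log_a = np.polyfit(np.log(bw), np.log(cl * mlp), 1)
    return np.exp(log_a) * human_bw ** b / human_mlp

print(simple_allometry(bw, cl), mlp_corrected_allometry(bw, cl, mlp))
```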
Generalized quantum kinetic expansion: Higher-order corrections to multichromophoric Förster theory
NASA Astrophysics Data System (ADS)
Wu, Jianlan; Gong, Zhihao; Tang, Zhoufei
2015-08-01
For a general two-cluster energy transfer network, a new methodology of the generalized quantum kinetic expansion (GQKE) method is developed, which predicts an exact time-convolution equation for the cluster population evolution under the initial condition of the local cluster equilibrium state. The cluster-to-cluster rate kernel is expanded over the inter-cluster couplings. The lowest second-order GQKE rate recovers the multichromophoric Förster theory (MCFT) rate. The higher-order corrections to the MCFT rate are systematically included using the continued fraction resummation form, resulting in the resummed GQKE method. The reliability of the GQKE methodology is verified in two model systems, revealing the relevance of higher-order corrections.
Assessment and Mapping of Forest Parcel Sizes
Brett J. Butler; Susan L. King
2005-01-01
A method for analyzing and mapping forest parcel sizes in the Northeastern United States is presented. A decision tree model was created that predicts forest parcel size from spatially explicit predictor variables: population density, State, percentage forest land cover, and road density. The model correctly predicted parcel size for 60 percent of the observations in a...
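A hedged scikit-learn sketch of the kind of decision-tree classification described above; the column names echo the predictors listed in the abstract, but the rows are invented placeholders, not the study's data.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Invented example records; the real model was trained on spatially explicit data
df = pd.DataFrame({
    "population_density": [12, 300, 45, 800, 5, 150],
    "state":              [0, 1, 0, 2, 1, 2],          # encoded State identifier
    "pct_forest_cover":   [80, 35, 60, 20, 90, 50],
    "road_density":       [0.5, 3.2, 1.1, 4.0, 0.2, 2.5],
    "parcel_size_class":  ["large", "small", "large", "small", "large", "medium"],
})

X = df.drop(columns="parcel_size_class")
y = df["parcel_size_class"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.33, random_state=0)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)
print("fraction of observations predicted correctly:",
      accuracy_score(y_te, tree.predict(X_te)))
```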
Some Empirical Evidence for Latent Trait Model Selection.
ERIC Educational Resources Information Center
Hutten, Leah R.
The results of this study suggest that for purposes of estimating ability by latent trait methods, the Rasch model compares favorably with the three-parameter logistic model. Using estimated parameters to make predictions about 25 actual number-correct score distributions with samples of 1,000 cases each, those predicted by the Rasch model fit the…
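For readers unfamiliar with the Rasch model mentioned above, a small sketch of its response probability (the one-parameter logistic form) and the implied expected number-correct score; the ability and difficulty values are illustrative only.

```python
import numpy as np

def rasch_p_correct(theta, b):
    """Rasch (1PL) probability of a correct response: ability theta, item difficulty b."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

item_difficulties = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
theta = 0.3
# Expected number-correct score for this examinee over the item set
print(rasch_p_correct(theta, item_difficulties).sum())
```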
Ainslie, Michael A; Leighton, Timothy G
2009-11-01
The scattering cross-section σ_s of a gas bubble of equilibrium radius R_0 in liquid can be written in the form σ_s = 4πR_0²/[(ω_1²/ω² - 1)² + δ²], where ω is the excitation frequency, ω_1 is the resonance frequency, and δ is a frequency-dependent dimensionless damping coefficient. A persistent discrepancy in the frequency dependence of the contribution to δ from radiation damping, denoted δ_rad, is identified and resolved, as follows. Wildt's [Physics of Sound in the Sea (Washington, DC, 1946), Chap. 28] pioneering derivation predicts a linear dependence of δ_rad on frequency, a result which Medwin [Ultrasonics 15, 7-13 (1977)] reproduces using a different method. Weston [Underwater Acoustics, NATO Advanced Study Institute Series Vol. II, 55-88 (1967)], using ostensibly the same method as Wildt, predicts the opposite relationship, i.e., that δ_rad is inversely proportional to frequency. Weston's version of the derivation of the scattering cross-section is shown here to be the correct one, thus resolving the discrepancy. Further, a correction to Weston's model is derived that amounts to a shift in the resonance frequency. A new, corrected, expression for the extinction cross-section is also derived. The magnitudes of the corrections are illustrated using examples from oceanography, volcanology, planetary acoustics, neutron spallation, and biomedical ultrasound. The corrections become significant when the bulk modulus of the gas is not negligible relative to that of the surrounding liquid.
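A small sketch evaluating the scattering cross-section formula quoted above; the bubble radius, frequencies and damping value are illustrative numbers, not values from the paper.

```python
import numpy as np

def scattering_cross_section(R0, omega, omega1, delta):
    """sigma_s = 4*pi*R0^2 / [(omega1^2/omega^2 - 1)^2 + delta^2]."""
    return 4.0 * np.pi * R0**2 / ((omega1**2 / omega**2 - 1.0)**2 + delta**2)

# Example: a 100-micron bubble driven just below its resonance frequency
print(scattering_cross_section(R0=100e-6, omega=2*np.pi*30e3,
                               omega1=2*np.pi*32e3, delta=0.1))
```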
Correcting Classifiers for Sample Selection Bias in Two-Phase Case-Control Studies
Theis, Fabian J.
2017-01-01
Epidemiological studies often utilize stratified data in which rare outcomes or exposures are artificially enriched. This design can increase precision in association tests but distorts predictions when applying classifiers on nonstratified data. Several methods correct for this so-called sample selection bias, but their performance remains unclear especially for machine learning classifiers. With an emphasis on two-phase case-control studies, we aim to assess which corrections to perform in which setting and to obtain methods suitable for machine learning techniques, especially the random forest. We propose two new resampling-based methods to resemble the original data and covariance structure: stochastic inverse-probability oversampling and parametric inverse-probability bagging. We compare all techniques for the random forest and other classifiers, both theoretically and on simulated and real data. Empirical results show that the random forest profits from only the parametric inverse-probability bagging proposed by us. For other classifiers, correction is mostly advantageous, and methods perform uniformly. We discuss consequences of inappropriate distribution assumptions and reason for different behaviors between the random forest and other classifiers. In conclusion, we provide guidance for choosing correction methods when training classifiers on biased samples. For random forests, our method outperforms state-of-the-art procedures if distribution assumptions are roughly fulfilled. We provide our implementation in the R package sambia. PMID:29312464
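A much-simplified sketch of inverse-probability resampling before fitting a random forest, in the spirit of the corrections discussed above; it is a single weighted bootstrap, not the paper's stochastic inverse-probability oversampling or parametric inverse-probability bagging, and it assumes X, y and the per-record sampling probabilities are NumPy arrays.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def ip_resampled_forest(X, y, sampling_prob, n_trees=100, seed=0):
    """Fit a random forest on one bootstrap sample drawn with inverse-probability
    weights, so the resample approximates the source population rather than the
    artificially enriched case-control sample."""
    rng = np.random.default_rng(seed)
    w = 1.0 / np.asarray(sampling_prob)
    idx = rng.choice(len(y), size=len(y), replace=True, p=w / w.sum())
    return RandomForestClassifier(n_estimators=n_trees, random_state=seed).fit(X[idx], y[idx])
```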
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, Peter C.; Schreibmann, Eduard; Roper, Justin
2015-03-15
Purpose: Computed tomography (CT) artifacts can severely degrade dose calculation accuracy in proton therapy. Prompted by the recently increased popularity of magnetic resonance imaging (MRI) in the radiation therapy clinic, we developed an MRI-based CT artifact correction method for improving the accuracy of proton range calculations. Methods and Materials: The proposed method replaces corrupted CT data by mapping CT Hounsfield units (HU number) from a nearby artifact-free slice, using a coregistered MRI. MRI and CT volumetric images were registered with use of 3-dimensional (3D) deformable image registration (DIR). The registration was fine-tuned on a slice-by-slice basis by using 2D DIR. Based on the intensity of paired MRI pixel values and HU from an artifact-free slice, we performed a comprehensive analysis to predict the correct HU for the corrupted region. For a proof-of-concept validation, metal artifacts were simulated on a reference data set. Proton range was calculated using reference, artifactual, and corrected images to quantify the reduction in proton range error. The correction method was applied to 4 unique clinical cases. Results: The correction method resulted in substantial artifact reduction, both quantitatively and qualitatively. On respective simulated brain and head and neck CT images, the mean error was reduced from 495 and 370 HU to 108 and 92 HU after correction. Correspondingly, the absolute mean proton range errors of 2.4 cm and 1.7 cm were reduced to less than 2 mm in both cases. Conclusions: Our MRI-based CT artifact correction method can improve CT image quality and proton range calculation accuracy for patients with severe CT artifacts.
Revisiting Hansen Solubility Parameters by Including Thermodynamics.
Louwerse, Manuel J; Maldonado, Ana; Rousseau, Simon; Moreau-Masselon, Chloe; Roux, Bernard; Rothenberg, Gadi
2017-11-03
The Hansen solubility parameter approach is revisited by implementing the thermodynamics of dissolution and mixing. Hansen's pragmatic approach has earned its spurs in predicting solvents for polymer solutions, but for molecular solutes improvements are needed. By going into the details of entropy and enthalpy, several corrections are suggested that make the methodology thermodynamically sound without losing its ease of use. The most important corrections include accounting for the solvent molecules' size, the destruction of the solid's crystal structure, and the specificity of hydrogen-bonding interactions, as well as opportunities to predict the solubility at extrapolated temperatures. Testing the original and the improved methods on a large industrial dataset including solvent blends, fit qualities improved from 0.89 to 0.97 and the percentage of correct predictions rose from 54 % to 78 %. Full Matlab scripts are included in the Supporting Information, allowing readers to implement these improvements on their own datasets. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
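For orientation, the classical (uncorrected) Hansen calculation that the paper refines is the distance Ra in (δD, δP, δH) space; the sketch below uses commonly quoted parameters for acetone and an invented solute, so the numbers are illustrative only.

```python
import numpy as np

def hansen_distance(a, b):
    """Classical Hansen distance Ra between two (deltaD, deltaP, deltaH) triples (MPa^0.5)."""
    return np.sqrt(4.0 * (a[0] - b[0])**2 + (a[1] - b[1])**2 + (a[2] - b[2])**2)

acetone = (15.5, 10.4, 7.0)     # commonly quoted Hansen parameters
solute  = (18.0, 8.0, 9.0)      # hypothetical solute
R0 = 8.0                        # hypothetical interaction radius
print("RED =", hansen_distance(acetone, solute) / R0)   # RED < 1 suggests a good solvent
```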
Measurement Error Correction for Predicted Spatiotemporal Air Pollution Exposures.
Keller, Joshua P; Chang, Howard H; Strickland, Matthew J; Szpiro, Adam A
2017-05-01
Air pollution cohort studies are frequently analyzed in two stages, first modeling exposure then using predicted exposures to estimate health effects in a second regression model. The difference between predicted and unobserved true exposures introduces a form of measurement error in the second stage health model. Recent methods for spatial data correct for measurement error with a bootstrap and by requiring the study design ensure spatial compatibility, that is, monitor and subject locations are drawn from the same spatial distribution. These methods have not previously been applied to spatiotemporal exposure data. We analyzed the association between fine particulate matter (PM2.5) and birth weight in the US state of Georgia using records with estimated date of conception during 2002-2005 (n = 403,881). We predicted trimester-specific PM2.5 exposure using a complex spatiotemporal exposure model. To improve spatial compatibility, we restricted to mothers residing in counties with a PM2.5 monitor (n = 180,440). We accounted for additional measurement error via a nonparametric bootstrap. Third trimester PM2.5 exposure was associated with lower birth weight in the uncorrected (-2.4 g per 1 μg/m difference in exposure; 95% confidence interval [CI]: -3.9, -0.8) and bootstrap-corrected (-2.5 g, 95% CI: -4.2, -0.8) analyses. Results for the unrestricted analysis were attenuated (-0.66 g, 95% CI: -1.7, 0.35). This study presents a novel application of measurement error correction for spatiotemporal air pollution exposures. Our results demonstrate the importance of spatial compatibility between monitor and subject locations and provide evidence of the association between air pollution exposure and birth weight.
Wu, Yao; Yang, Wei; Lu, Lijun; Lu, Zhentai; Zhong, Liming; Huang, Meiyan; Feng, Yanqiu; Feng, Qianjin; Chen, Wufan
2016-10-01
Attenuation correction is important for PET reconstruction. In PET/MR, MR intensities are not directly related to attenuation coefficients that are needed in PET imaging. The attenuation coefficient map can be derived from CT images. Therefore, prediction of CT substitutes from MR images is desired for attenuation correction in PET/MR. This study presents a patch-based method for CT prediction from MR images, generating attenuation maps for PET reconstruction. Because no global relation exists between MR and CT intensities, we propose local diffeomorphic mapping (LDM) for CT prediction. In LDM, we assume that MR and CT patches are located on 2 nonlinear manifolds, and the mapping from the MR manifold to the CT manifold approximates a diffeomorphism under a local constraint. Locality is important in LDM and is constrained by the following techniques. The first is local dictionary construction, wherein, for each patch in the testing MR image, a local search window is used to extract patches from training MR/CT pairs to construct MR and CT dictionaries. The k-nearest neighbors and an outlier detection strategy are then used to constrain the locality in MR and CT dictionaries. Second is local linear representation, wherein, local anchor embedding is used to solve MR dictionary coefficients when representing the MR testing sample. Under these local constraints, dictionary coefficients are linearly transferred from the MR manifold to the CT manifold and used to combine CT training samples to generate CT predictions. Our dataset contains 13 healthy subjects, each with T1- and T2-weighted MR and CT brain images. This method provides CT predictions with a mean absolute error of 110.1 Hounsfield units, Pearson linear correlation of 0.82, peak signal-to-noise ratio of 24.81 dB, and Dice in bone regions of 0.84 as compared with real CTs. CT substitute-based PET reconstruction has a regression slope of 1.0084 and R² of 0.9903 compared with real CT-based PET. In this method, no image segmentation or accurate registration is required. Our method demonstrates superior performance in CT prediction and PET reconstruction compared with competing methods. © 2016 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
NASA Astrophysics Data System (ADS)
Tchitchekova, Deyana S.; Morthomas, Julien; Ribeiro, Fabienne; Ducher, Roland; Perez, Michel
2014-07-01
A novel method for accurate and efficient evaluation of the change in energy barriers for carbon diffusion in ferrite under heterogeneous stress is introduced. This method, called Linear Combination of Stress States, is based on the knowledge of the effects of simple stresses (uniaxial or shear) on these diffusion barriers. Then, it is assumed that the change in energy barriers under a complex stress can be expressed as a linear combination of these already known simple stress effects. The modifications of energy barriers by either uniaxial traction/compression or shear stress are determined by means of atomistic simulations with the Climbing Image-Nudged Elastic Band method and are stored as a set of functions. The results of this method are compared to the predictions of anisotropic elasticity theory. It is shown that linear anisotropic elasticity fails to predict the correct energy barrier variation with stress (especially with shear stress) whereas the proposed method provides correct energy barrier variation for stresses up to ~3 GPa. This study provides a basis for the development of multiscale models of diffusion under non-uniform stress.
Tchitchekova, Deyana S; Morthomas, Julien; Ribeiro, Fabienne; Ducher, Roland; Perez, Michel
2014-07-21
A novel method for accurate and efficient evaluation of the change in energy barriers for carbon diffusion in ferrite under heterogeneous stress is introduced. This method, called Linear Combination of Stress States, is based on the knowledge of the effects of simple stresses (uniaxial or shear) on these diffusion barriers. Then, it is assumed that the change in energy barriers under a complex stress can be expressed as a linear combination of these already known simple stress effects. The modifications of energy barriers by either uniaxial traction/compression or shear stress are determined by means of atomistic simulations with the Climbing Image-Nudged Elastic Band method and are stored as a set of functions. The results of this method are compared to the predictions of anisotropic elasticity theory. It is shown that linear anisotropic elasticity fails to predict the correct energy barrier variation with stress (especially with shear stress) whereas the proposed method provides correct energy barrier variation for stresses up to ∼3 GPa. This study provides a basis for the development of multiscale models of diffusion under non-uniform stress.
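A schematic sketch of the Linear Combination of Stress States idea described above: barrier changes tabulated for simple stresses are interpolated and summed for a complex stress state. Only two stress components are shown and the tabulated values are placeholders, not CI-NEB results.

```python
import numpy as np

stress_grid   = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])        # GPa
dE_uniaxial_x = np.array([0.04, 0.02, 0.0, -0.02, -0.04])     # placeholder barrier changes (eV)
dE_shear_xy   = np.array([0.01, 0.005, 0.0, -0.005, -0.01])

def barrier_change(sigma_xx, sigma_xy):
    """Sum the independently tabulated barrier changes of each simple stress component."""
    return (np.interp(sigma_xx, stress_grid, dE_uniaxial_x)
            + np.interp(sigma_xy, stress_grid, dE_shear_xy))

print(barrier_change(1.5, -0.5))
```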
The SAMI Galaxy Survey: can we trust aperture corrections to predict star formation?
NASA Astrophysics Data System (ADS)
Richards, S. N.; Bryant, J. J.; Croom, S. M.; Hopkins, A. M.; Schaefer, A. L.; Bland-Hawthorn, J.; Allen, J. T.; Brough, S.; Cecil, G.; Cortese, L.; Fogarty, L. M. R.; Gunawardhana, M. L. P.; Goodwin, M.; Green, A. W.; Ho, I.-T.; Kewley, L. J.; Konstantopoulos, I. S.; Lawrence, J. S.; Lorente, N. P. F.; Medling, A. M.; Owers, M. S.; Sharp, R.; Sweet, S. M.; Taylor, E. N.
2016-01-01
In the low-redshift Universe (z < 0.3), our view of galaxy evolution is primarily based on fibre optic spectroscopy surveys. Elaborate methods have been developed to address aperture effects when fixed aperture sizes only probe the inner regions for galaxies of ever decreasing redshift or increasing physical size. These aperture corrections rely on assumptions about the physical properties of galaxies. The adequacy of these aperture corrections can be tested with integral-field spectroscopic data. We use integral-field spectra drawn from 1212 galaxies observed as part of the SAMI Galaxy Survey to investigate the validity of two aperture correction methods that attempt to estimate a galaxy's total instantaneous star formation rate. We show that biases arise when assuming that instantaneous star formation is traced by broad-band imaging, and when the aperture correction is built only from spectra of the nuclear region of galaxies. These biases may be significant depending on the selection criteria of a survey sample. Understanding the sensitivities of these aperture corrections is essential for correct handling of systematic errors in galaxy evolution studies.
Bjerke, Benjamin T; Cheung, Zoe B; Shifflett, Grant D; Iyer, Sravisht; Derman, Peter B; Cunningham, Matthew E
2015-10-01
Shoulder balance for adolescent idiopathic scoliosis (AIS) patients is associated with patient satisfaction and self-image. However, few validated systems exist for selecting the upper instrumented vertebra (UIV) to achieve post-surgical shoulder balance. The purpose is to examine the existing UIV selection criteria and correlate them with post-surgical shoulder balance in AIS patients. Patients who underwent spinal fusion at age 10-18 years for AIS over a 6-year period were reviewed. All patients with a minimum of 1-year radiographic follow-up were included. Imbalance was determined to be radiographic shoulder height |RSH| ≥ 15 mm at latest follow-up. Three UIV selection methods were considered: Lenke, Ilharreborde, and Trobisch. A recommended UIV was determined using each method from pre-surgical radiographs. The recommended UIV for each method was compared to the actual UIV instrumented for all three methods; concordance between these levels was defined as "Correct" UIV selection, and discordance was defined as "Incorrect" selection. One hundred seventy-one patients were included with 2.3 ± 1.1 year follow-up. For all methods, "Correct" UIV selection resulted in more shoulder imbalance than "Incorrect" UIV selection. Overall shoulder imbalance incidence was improved from 31.0% (53/171) to 15.2% (26/171). New shoulder imbalance incidence for patients with previously level shoulders was 8.8%. We could not identify a set of UIV selection criteria that accurately predicted post-surgical shoulder balance. Further validated measures are needed in this area. The complexity of proximal thoracic curve correction is underscored in a case example, where shoulder imbalance occurred despite "Correct" UIV selection by all methods.
NASA Astrophysics Data System (ADS)
Passow, Christian; Donner, Reik
2017-04-01
Quantile mapping (QM) is an established concept that allows correction of systematic biases in multiple quantiles of the distribution of a climatic observable. It shows remarkable results in correcting biases in historical simulations against observational data and outperforms simpler correction methods which relate only to the mean or variance. Since it has been shown that bias correction of future predictions or scenario runs with basic QM can result in misleading trends in the projection, adjusted, trend-preserving versions of QM were introduced in the form of detrended quantile mapping (DQM) and quantile delta mapping (QDM) (Cannon, 2015, 2016). Still, all previous versions and applications of QM-based bias correction rely on the assumption of time-independent quantiles over the investigated period, which can be misleading in the context of a changing climate. Here, we propose a novel combination of linear quantile regression (QR) with the classical QM method to introduce a consistent, time-dependent and trend-preserving approach to bias correction for historical and future projections. Since QR is a regression method, it is possible to estimate quantiles at the same resolution as the given data and to include trends or other dependencies. We demonstrate the performance of the new method of linear regression quantile mapping (RQM) in correcting biases of temperature and precipitation products from historical runs (1959 - 2005) of the COSMO model in climate mode (CCLM) from the Euro-CORDEX ensemble relative to gridded E-OBS data of the same spatial and temporal resolution. A thorough comparison with established bias correction methods highlights the strengths and potential weaknesses of the new RQM approach. References: A.J. Cannon, S.R. Sorbie, T.Q. Murdock: Bias Correction of GCM Precipitation by Quantile Mapping - How Well Do Methods Preserve Changes in Quantiles and Extremes? Journal of Climate, 28, 6038, 2015; A.J. Cannon: Multivariate Bias Correction of Climate Model Outputs - Matching Marginal Distributions and Inter-variable Dependence Structure. Journal of Climate, 29, 7045, 2016.
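As background for the QM variants discussed above, a minimal sketch of plain empirical quantile mapping (not the trend-preserving DQM/QDM or the proposed RQM); the gamma-distributed series stand in for modelled and observed precipitation.

```python
import numpy as np

def empirical_qm(model_hist, obs_hist, values):
    """Map each value through the modelled CDF, then read off the observed quantile."""
    q = np.linspace(0.01, 0.99, 99)
    model_q, obs_q = np.quantile(model_hist, q), np.quantile(obs_hist, q)
    p = np.interp(values, model_q, q)       # probability under the modelled distribution
    return np.interp(p, q, obs_q)           # observed value at that probability

rng = np.random.default_rng(1)
model_hist = rng.gamma(2.0, 2.0, 5000)      # biased simulated precipitation (illustrative)
obs_hist   = rng.gamma(2.0, 1.5, 5000)      # reference observations (illustrative)
print(empirical_qm(model_hist, obs_hist, np.array([1.0, 5.0, 12.0])))
```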
NASA Astrophysics Data System (ADS)
Jin, N.; Yang, F.; Shang, S. Y.; Tao, T.; Liu, J. S.
2016-08-01
To address the limitations of the low voltage ride through (LVRT) technology of traditional photovoltaic inverters, this paper proposes an LVRT control method based on model current predictive control (MCPC). This method can effectively improve the output characteristics and response speed of the photovoltaic inverter. In the MCPC method designed for the photovoltaic grid-connected inverter, the sum of the absolute values of the errors between the predicted currents and the given reference currents is adopted as the cost function, and the optimal space voltage vector is selected accordingly. The photovoltaic inverter automatically switches between two control modes, giving priority to either active or reactive power control according to the operating state, which effectively improves its LVRT capability. The simulation and experimental results prove that the proposed method is correct and effective.
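A toy single-step sketch of finite-set model current predictive control with the absolute-error cost described above; the first-order filter model, the two-level voltage vectors and all parameter values are illustrative assumptions, not the controller from the paper.

```python
import numpy as np

def select_voltage_vector(i_now, i_ref, v_grid, v_candidates, R, L, Ts):
    """Predict the next current for each candidate vector (forward Euler on an R-L
    filter) and keep the vector minimizing the sum of absolute current errors."""
    best_cost, best_v = np.inf, None
    for v in v_candidates:
        i_pred = i_now + (Ts / L) * (v - v_grid - R * i_now)
        cost = abs(i_ref.real - i_pred.real) + abs(i_ref.imag - i_pred.imag)
        if cost < best_cost:
            best_cost, best_v = cost, v
    return best_v

Vdc = 700.0     # eight space voltage vectors of a two-level inverter (alpha-beta plane)
vectors = [0.0 + 0.0j] + [2.0 / 3.0 * Vdc * np.exp(1j * k * np.pi / 3.0) for k in range(6)]
print(select_voltage_vector(4 + 1j, 10 + 0j, 311 + 0j, vectors, R=0.1, L=5e-3, Ts=1e-4))
```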
NASA Technical Reports Server (NTRS)
Green, S.; Cochrane, D. L.; Truhlar, D. G.
1986-01-01
The utility of the energy-corrected sudden (ECS) scaling method is evaluated on the basis of how accurately it predicts the entire matrix of state-to-state rate constants, when the fundamental rate constants are independently known. It is shown for the case of Ar-CO collisions at 500 K that when a critical impact parameter is about 1.75-2.0 A, the ECS method yields excellent excited state rates on the average and has an rms error of less than 20 percent.
Using a bias aware EnKF to account for unresolved structure in an unsaturated zone model
NASA Astrophysics Data System (ADS)
Erdal, D.; Neuweiler, I.; Wollschläger, U.
2014-01-01
When predicting flow in the unsaturated zone, any method for modeling the flow will have to define how, and to what level, the subsurface structure is resolved. In this paper, we use the Ensemble Kalman Filter to assimilate local soil water content observations from both a synthetic layered lysimeter and a real field experiment in layered soil in an unsaturated water flow model. We investigate the use of colored noise bias corrections to account for unresolved subsurface layering in a homogeneous model and compare this approach with a fully resolved model. In both models, we use a simplified model parameterization in the Ensemble Kalman Filter. The results show that the use of bias corrections can increase the predictive capability of a simplified homogeneous flow model if the bias corrections are applied to the model states. If correct knowledge of the layering structure is available, the fully resolved model performs best. However, if no, or erroneous, layering is used in the model, the use of a homogeneous model with bias corrections can be the better choice for modeling the behavior of the system.
Predicting Rotator Cuff Tears Using Data Mining and Bayesian Likelihood Ratios
Lu, Hsueh-Yi; Huang, Chen-Yuan; Su, Chwen-Tzeng; Lin, Chen-Chiang
2014-01-01
Objectives Rotator cuff tear is a common cause of shoulder diseases. Correct diagnosis of rotator cuff tears can save patients from further invasive, costly and painful tests. This study used predictive data mining and Bayesian theory to improve the accuracy of diagnosing rotator cuff tears by clinical examination alone. Methods In this retrospective study, 169 patients who had a preliminary diagnosis of rotator cuff tear on the basis of clinical evaluation followed by confirmatory MRI between 2007 and 2011 were identified. MRI was used as a reference standard to classify rotator cuff tears. The predictor variable was the clinical assessment results, which consisted of 16 attributes. This study employed 2 data mining methods (ANN and the decision tree) and a statistical method (logistic regression) to classify the rotator cuff diagnosis into "tear" and "no tear" groups. Likelihood ratio and Bayesian theory were applied to estimate the probability of rotator cuff tears based on the results of the prediction models. Results Our proposed data mining procedures outperformed the classic statistical method. The correction rate, sensitivity, specificity and area under the ROC curve of predicting a rotator cuff tear were statistically better in the ANN and decision tree models compared to logistic regression. Based on likelihood ratios derived from our prediction models, Fagan's nomogram could be constructed to assess the probability of a patient who has a rotator cuff tear using a pretest probability and a prediction result (tear or no tear). Conclusions Our predictive data mining models, combined with likelihood ratios and Bayesian theory, appear to be good tools to classify rotator cuff tears as well as determine the probability of the presence of the disease to enhance diagnostic decision making for rotator cuff tears. PMID:24733553
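The likelihood-ratio step behind the Fagan nomogram mentioned above is simple enough to show directly; the pre-test probability, sensitivity and specificity below are illustrative, not the study's figures.

```python
def post_test_probability(pretest_p, sensitivity, specificity, test_positive=True):
    """Bayes via likelihood ratios: post-test odds = pre-test odds * LR."""
    lr = sensitivity / (1 - specificity) if test_positive else (1 - sensitivity) / specificity
    pre_odds = pretest_p / (1 - pretest_p)
    post_odds = pre_odds * lr
    return post_odds / (1 + post_odds)

# Illustrative: 40% pre-test probability and a positive ("tear") model prediction
print(post_test_probability(0.40, sensitivity=0.85, specificity=0.80))
```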
Huang, David; Tang, Maolong; Wang, Li; Zhang, Xinbo; Armour, Rebecca L.; Gattey, Devin M.; Lombardi, Lorinna H.; Koch, Douglas D.
2013-01-01
Purpose: To use optical coherence tomography (OCT) to measure corneal power and improve the selection of intraocular lens (IOL) power in cataract surgeries after laser vision correction. Methods: Patients with previous myopic laser vision corrections were enrolled in this prospective study from two eye centers. Corneal thickness and power were measured by Fourier-domain OCT. Axial length, anterior chamber depth, and automated keratometry were measured by a partial coherence interferometer. An OCT-based IOL formula was developed. The mean absolute error of the OCT-based formula in predicting postoperative refraction was compared to two regression-based IOL formulae for eyes with previous laser vision correction. Results: Forty-six eyes of 46 patients all had uncomplicated cataract surgery with monofocal IOL implantation. The mean arithmetic prediction error of postoperative refraction was 0.05 ± 0.65 diopter (D) for the OCT formula, 0.14 ± 0.83 D for the Haigis-L formula, and 0.24 ± 0.82 D for the no-history Shammas-PL formula. The mean absolute error was 0.50 D for OCT compared to a mean absolute error of 0.67 D for Haigis-L and 0.67 D for Shammas-PL. The adjusted mean absolute error (average prediction error removed) was 0.49 D for OCT, 0.65 D for Haigis-L (P=.031), and 0.62 D for Shammas-PL (P=.044). For OCT, 61% of the eyes were within 0.5 D of prediction error, whereas 46% were within 0.5 D for both Haigis-L and Shammas-PL (P=.034). Conclusions: The predictive accuracy of OCT-based IOL power calculation was better than Haigis-L and Shammas-PL formulas in eyes after laser vision correction. PMID:24167323
Al-Khatib, Ra'ed M; Rashid, Nur'Aini Abdul; Abdullah, Rosni
2011-08-01
The secondary structure of RNA pseudoknots has been extensively inferred and scrutinized by computational approaches. Experimental methods for determining RNA structure are time consuming and tedious; therefore, predictive computational approaches are required. Predicting the most accurate and energy-stable pseudoknot RNA secondary structure has been proven to be an NP-hard problem. In this paper, a new RNA folding approach, termed MSeeker, is presented; it includes KnotSeeker (a heuristic method) and Mfold (a thermodynamic algorithm). The global optimization of this thermodynamic heuristic approach was further enhanced by using a case-based reasoning technique as a local optimization method. MSeeker is a proposed algorithm for predicting RNA pseudoknot structure from individual sequences, especially long ones. This research demonstrates that MSeeker improves the sensitivity and specificity of existing RNA pseudoknot structure predictions. The performance and structural results from this proposed method were evaluated against seven other state-of-the-art pseudoknot prediction methods. The MSeeker method had better sensitivity than the DotKnot, FlexStem, HotKnots, pknotsRG, ILM, NUPACK and pknotsRE methods, with 79% of the predicted pseudoknot base-pairs being correct.
An algorithm for direct causal learning of influences on patient outcomes.
Rathnam, Chandramouli; Lee, Sanghoon; Jiang, Xia
2017-01-01
This study aims at developing and introducing a new algorithm, called direct causal learner (DCL), for learning the direct causal influences of a single target. We applied it to both simulated and real clinical and genome wide association study (GWAS) datasets and compared its performance to classic causal learning algorithms. The DCL algorithm learns the causes of a single target from passive data using Bayesian-scoring, instead of using independence checks, and a novel deletion algorithm. We generate 14,400 simulated datasets and measure the number of datasets for which DCL correctly and partially predicts the direct causes. We then compare its performance with the constraint-based path consistency (PC) and conservative PC (CPC) algorithms, the Bayesian-score based fast greedy search (FGS) algorithm, and the partial ancestral graphs algorithm fast causal inference (FCI). In addition, we extend our comparison of all five algorithms to both a real GWAS dataset and real breast cancer datasets over various time-points in order to observe how effective they are at predicting the causal influences of Alzheimer's disease and breast cancer survival. DCL consistently outperforms FGS, PC, CPC, and FCI in discovering the parents of the target for the datasets simulated using a simple network. Overall, DCL predicts significantly more datasets correctly (McNemar's test significance: p<0.0001) than any of the other algorithms for these network types. For example, when assessing overall performance (simple and complex network results combined), DCL correctly predicts approximately 1400 more datasets than the top FGS method, 1600 more datasets than the top CPC method, 4500 more datasets than the top PC method, and 5600 more datasets than the top FCI method. Although FGS did correctly predict more datasets than DCL for the complex networks, and DCL correctly predicted only a few more datasets than CPC for these networks, there is no significant difference in performance between these three algorithms for this network type. However, when we use a more continuous measure of accuracy, we find that all the DCL methods are able to better partially predict more direct causes than FGS and CPC for the complex networks. In addition, DCL consistently had faster runtimes than the other algorithms. In the application to the real datasets, DCL identified rs6784615, located on the NISCH gene, and rs10824310, located on the PRKG1 gene, as direct causes of late onset Alzheimer's disease (LOAD) development. In addition, DCL identified ER category as a direct predictor of breast cancer mortality within 5 years, and HER2 status as a direct predictor of 10-year breast cancer mortality. These predictors have been identified in previous studies to have a direct causal relationship with their respective phenotypes, supporting the predictive power of DCL. When the other algorithms discovered predictors from the real datasets, these predictors were either also found by DCL or could not be supported by previous studies. Our results show that DCL outperforms FGS, PC, CPC, and FCI in almost every case, demonstrating its potential to advance causal learning. Furthermore, our DCL algorithm effectively identifies direct causes in the LOAD and Metabric GWAS datasets, which indicates its potential for clinical applications. Copyright © 2016 Elsevier B.V. All rights reserved.
ERIC Educational Resources Information Center
Parrish, Jared W.; Gessner, Bradford D.
2010-01-01
Objectives: To accurately count the number of infant maltreatment-related fatalities and to use information from the birth certificates to predict infant maltreatment-related deaths. Methods: A population-based retrospective cohort study of infants born in Alaska for the years 1992 through 2005 was conducted. Risk factor variables were ascertained…
Can small field diode correction factors be applied universally?
Liu, Paul Z Y; Suchowerska, Natalka; McKenzie, David R
2014-09-01
Diode detectors are commonly used in dosimetry, but have been reported to over-respond in small fields. Diode correction factors have been reported in the literature. The purpose of this study is to determine whether correction factors for a given diode type can be universally applied over a range of irradiation conditions including beams of different qualities. A mathematical relation of diode over-response as a function of the field size was developed using previously published experimental data in which diodes were compared to an air core scintillation dosimeter. Correction factors calculated from the mathematical relation were then compared with those available in the literature. The mathematical relation established between diode over-response and the field size was found to predict the measured diode correction factors for fields between 5 and 30 mm in width. The average deviation between measured and predicted over-response was 0.32% for IBA SFD and PTW Type E diodes. Diode over-response was found to be not strongly dependent on the type of linac, the method of collimation or the measurement depth. The mathematical relation was found to agree with published diode correction factors derived from Monte Carlo simulations and measurements, indicating that correction factors are robust in their transportability between different radiation beams. Copyright © 2014. Published by Elsevier Ireland Ltd.
Mashouf, Shahram; Fleury, Emmanuelle; Lai, Priscilla; Merino, Tomas; Lechtman, Eli; Kiss, Alex; McCann, Claire; Pignol, Jean-Philippe
2016-03-15
The inhomogeneity correction factor (ICF) method provides heterogeneity correction for the fast calculation TG43 formalism in seed brachytherapy. This study compared ICF-corrected plans to their standard TG43 counterparts, looking at their capacity to assess inadequate coverage and/or risk of any skin toxicities for patients who received permanent breast seed implant (PBSI). Two-month postimplant computed tomography scans and plans of 140 PBSI patients were used to calculate dose distributions by using the TG43 and the ICF methods. Multiple dose-volume histogram (DVH) parameters of clinical target volume (CTV) and skin were extracted and compared for both ICF and TG43 dose distributions. Short-term (desquamation and erythema) and long-term (telangiectasia) skin toxicity data were available on 125 and 110 of the patients, respectively, at the time of the study. The predictive value of each DVH parameter of skin was evaluated using the area under the receiver operating characteristic (ROC) curve for each toxicity endpoint. Dose-volume histogram parameters of CTV, calculated using the ICF method, showed an overall decrease compared to TG43, whereas those of skin showed an increase, confirming previously reported findings of the impact of heterogeneity with low-energy sources. The ICF methodology enabled us to distinguish patients for whom the CTV V100 and V90 are up to 19% lower compared to TG43, which could present a risk of recurrence not detected when heterogeneities are not accounted for. The ICF method also led to an increase in the prediction of desquamation, erythema, and telangiectasia for 91% of skin DVH parameters studied. The ICF methodology has the advantage of distinguishing any inadequate dose coverage of CTV due to breast heterogeneity, which can be missed by TG43. Use of ICF correction also led to an increase in prediction accuracy of skin toxicities in most cases. Copyright © 2016 Elsevier Inc. All rights reserved.
On the accuracy of Whitham's method. [for steady ideal gas flow past cones
NASA Technical Reports Server (NTRS)
Zahalak, G. I.; Myers, M. K.
1974-01-01
The steady flow of an ideal gas past a conical body is studied by the method of matched asymptotic expansions and by Whitham's method in order to assess the accuracy of the latter. It is found that while Whitham's method does not yield a correct asymptotic representation of the perturbation field to second order in regions where the flow ahead of the Mach cone of the apex is disturbed, it does correctly predict the changes of the second-order perturbation quantities across a shock (the first-order shock strength). The results of the analysis are illustrated by a special case of a flat, rectangular plate at incidence.
Study on SOC wavelet analysis for LiFePO4 battery
NASA Astrophysics Data System (ADS)
Liu, Xuepeng; Zhao, Dongmei
2017-08-01
Improving the prediction accuracy of the state of charge (SOC) can reduce the conservatism and complexity of control strategies for a LiFePO4 battery system, such as scheduling, optimization and planning. Based on an analysis of the relationship between historical SOC data and external stress factors, an SOC estimation-correction prediction model based on wavelet analysis is established. A wavelet neural network provides the high-precision prediction step, while measured external stress data are used to update the parameter estimates of the model in the correction step, so that the forecast model can adapt to the LiFePO4 battery as its operating point varies under rated charge and discharge conditions. The test results show that the method yields a higher-precision prediction model even when the input and output of the LiFePO4 battery change frequently.
Computer program to predict aircraft noise levels
NASA Technical Reports Server (NTRS)
Clark, B. J.
1981-01-01
Methods developed at the NASA Lewis Research Center for predicting the noise contributions from various aircraft noise sources were programmed to predict aircraft noise levels either in flight or in ground tests. The noise sources include fan inlet and exhaust, jet, flap (for powered lift), core (combustor), turbine, and airframe. Noise propagation corrections are available for atmospheric attenuation, ground reflections, extra ground attenuation, and shielding. Outputs can include spectra, overall sound pressure level, perceived noise level, tone-weighted perceived noise level, and effective perceived noise level at locations specified by the user. Footprint contour coordinates and approximate footprint areas can also be calculated. Inputs and outputs can be in either System International or U.S. customary units. The subroutines for each noise source and propagation correction are described. A complete listing is given.
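One of the outputs listed above, the overall sound pressure level, is an energy sum over the band spectrum; a minimal sketch follows (band levels are illustrative, and this is not code from the NASA program).

```python
import numpy as np

def overall_spl(band_spl_db):
    """Overall SPL from band levels: 10*log10 of the summed mean-square pressures."""
    return 10.0 * np.log10(np.sum(10.0 ** (np.asarray(band_spl_db) / 10.0)))

print(overall_spl([78.0, 82.0, 85.0, 80.0, 74.0]))
```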
Automated prediction of protein function and detection of functional sites from structure.
Pazos, Florencio; Sternberg, Michael J E
2004-10-12
Current structural genomics projects are yielding structures for proteins whose functions are unknown. Accordingly, there is a pressing requirement for computational methods for function prediction. Here we present PHUNCTIONER, an automatic method for structure-based function prediction using automatically extracted functional sites (residues associated with functions). The method relates proteins with the same function through structural alignments and extracts 3D profiles of conserved residues. Functional features to train the method are extracted from the Gene Ontology (GO) database. The method extracts these features from the entire GO hierarchy and hence is applicable across the whole range of function specificity. 3D profiles associated with 121 GO annotations were extracted. We tested the power of the method both for the prediction of function and for the extraction of functional sites. The success of function prediction by our method was compared with the standard homology-based method. In the zone of low sequence similarity (approximately 15%), our method assigns the correct GO annotation in 90% of the protein structures considered, approximately 20% higher than inheritance of function from the closest homologue.
Process-conditioned bias correction for seasonal forecasting: a case-study with ENSO in Peru
NASA Astrophysics Data System (ADS)
Manzanas, R.; Gutiérrez, J. M.
2018-05-01
This work assesses the suitability of a first simple attempt for process-conditioned bias correction in the context of seasonal forecasting. To do this, we focus on the northwestern part of Peru and bias correct 1- and 4-month lead seasonal predictions of boreal winter (DJF) precipitation from the ECMWF System4 forecasting system for the period 1981-2010. In order to include information about the underlying large-scale circulation which may help to discriminate between precipitation affected by different processes, we introduce here an empirical quantile-quantile mapping method which runs conditioned on the state of the Southern Oscillation Index (SOI), which is accurately predicted by System4 and is known to affect the local climate. Beyond the reduction of model biases, our results show that the SOI-conditioned method yields better ROC skill scores and reliability than the raw model output over the entire region of study, whereas the standard unconditioned implementation provides no added value for any of these metrics. This suggests that conditioning the bias correction on simple but well-simulated large-scale processes relevant to the local climate may be a suitable approach for seasonal forecasting. Yet, further research on the suitability of the application of similar approaches to the one considered here for other regions, seasons and/or variables is needed.
Salmingo, Remel A; Tadano, Shigeru; Fujisaki, Kazuhiro; Abe, Yuichiro; Ito, Manabu
2012-05-01
Scoliosis is defined as a spinal pathology characterized as a three-dimensional deformity of the spine combined with vertebral rotation. Treatment for severe scoliosis is achieved when the scoliotic spine is surgically corrected and fixed using implanted rods and screws. Several studies performed biomechanical modeling and corrective force measurements of scoliosis correction. These studies were able to predict the clinical outcome and measured the corrective forces acting on screws; however, they were not able to measure the intraoperative three-dimensional geometry of the spinal rod. In effect, the results of biomechanical modeling might not be so realistic, and the corrective forces during the surgical correction procedure were difficult to measure intra-operatively. Projective geometry has been shown to be successful in the reconstruction of a three-dimensional structure using a series of images obtained from different views. In this study, we propose a new method to measure the three-dimensional geometry of an implant rod using two cameras. The reconstruction method requires only a few parameters: the included angle θ between the two cameras, the actual length of the rod in mm, and the location of points for curve fitting. The implant rod utilized in spine surgery was used to evaluate the accuracy of the current method. The three-dimensional geometry of the rod was measured from the image obtained by a scanner and compared to the proposed method using two cameras. The mean error in the reconstruction measurements ranged from 0.32 to 0.45 mm. The method presented here demonstrated the possibility of intra-operatively measuring the three-dimensional geometry of a spinal rod. The proposed method could be used in surgical procedures to better understand the biomechanics of scoliosis correction through real-time measurement of three-dimensional implant rod geometry in vivo.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ajami, N K; Duan, Q; Gao, X
2005-04-11
This paper examines several multi-model combination techniques: the Simple Multi-model Average (SMA), the Multi-Model Super Ensemble (MMSE), Modified Multi-Model Super Ensemble (M3SE) and the Weighted Average Method (WAM). These model combination techniques were evaluated using the results from the Distributed Model Intercomparison Project (DMIP), an international project sponsored by the National Weather Service (NWS) Office of Hydrologic Development (OHD). All of the multi-model combination results were obtained using uncalibrated DMIP model outputs and were compared against the best uncalibrated as well as the best calibrated individual model results. The purpose of this study is to understand how different combination techniques affect the skill levels of the multi-model predictions. This study revealed that the multi-model predictions obtained from uncalibrated single model predictions are generally better than any single member model predictions, even the best calibrated single model predictions. Furthermore, more sophisticated multi-model combination techniques that incorporated bias correction steps work better than simple multi-model average predictions or multi-model predictions without bias correction.
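A small sketch of a bias-corrected, skill-weighted multi-model average in the spirit of the WAM-type combinations above; member weights are inverse mean-squared errors on a calibration period, and the synthetic data are placeholders for the DMIP simulations.

```python
import numpy as np

def bias_corrected_weighted_average(preds, obs_calib, preds_calib):
    """Remove each member's mean bias (from a calibration period), then combine
    members with weights proportional to their inverse mean-squared error."""
    bias = preds_calib.mean(axis=1, keepdims=True) - obs_calib.mean()
    corrected_calib = preds_calib - bias
    mse = ((corrected_calib - obs_calib) ** 2).mean(axis=1)
    w = (1.0 / mse) / (1.0 / mse).sum()
    return w @ (preds - bias)

rng = np.random.default_rng(2)
obs = rng.gamma(2.0, 1.0, 100)                                   # synthetic "observed" flow
members = obs + rng.normal([[0.5], [-0.3], [0.1]], [[0.4], [0.8], [0.6]], (3, 100))
print(bias_corrected_weighted_average(members[:, 90:], obs[:90], members[:, :90]))
```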
Exploring Mouse Protein Function via Multiple Approaches.
Huang, Guohua; Chu, Chen; Huang, Tao; Kong, Xiangyin; Zhang, Yunhua; Zhang, Ning; Cai, Yu-Dong
2016-01-01
Although the number of available protein sequences is growing exponentially, functional protein annotations lag far behind. Therefore, accurate identification of protein functions remains one of the major challenges in molecular biology. In this study, we presented a novel approach to predict mouse protein functions. The approach was a sequential combination of a similarity-based approach, an interaction-based approach and a pseudo amino acid composition-based approach. The method achieved an accuracy of about 0.8450 for the 1st-order predictions in the leave-one-out and ten-fold cross-validations. For the results yielded by the leave-one-out cross-validation, although the similarity-based approach alone achieved an accuracy of 0.8756, it was unable to predict the functions of proteins with no homologues. Comparatively, the pseudo amino acid composition-based approach alone reached an accuracy of 0.6786. Although the accuracy was lower than that of the previous approach, it could predict the functions of almost all proteins, even proteins with no homologues. Therefore, the combined method balanced the advantages and disadvantages of both approaches to achieve efficient performance. Furthermore, the results yielded by the ten-fold cross-validation indicate that the combined method is still effective and stable when no close homologs are available. However, the accuracy of the predicted functions can only be determined according to known protein functions based on current knowledge. Many protein functions remain unknown. By exploring the functions of proteins for which the 1st-order predicted functions are wrong but the 2nd-order predicted functions are correct, the 1st-order wrongly predicted functions were shown to be closely associated with the genes encoding the proteins. The so-called wrongly predicted functions could also potentially be correct upon future experimental verification. Therefore, the accuracy of the presented method may be much higher in reality.
Exploring Mouse Protein Function via Multiple Approaches
Huang, Tao; Kong, Xiangyin; Zhang, Yunhua; Zhang, Ning
2016-01-01
Although the number of available protein sequences is growing exponentially, functional protein annotations lag far behind. Therefore, accurate identification of protein functions remains one of the major challenges in molecular biology. In this study, we presented a novel approach to predict mouse protein functions. The approach was a sequential combination of a similarity-based approach, an interaction-based approach and a pseudo amino acid composition-based approach. The method achieved an accuracy of about 0.8450 for the 1st-order predictions in the leave-one-out and ten-fold cross-validations. For the results yielded by the leave-one-out cross-validation, although the similarity-based approach alone achieved an accuracy of 0.8756, it was unable to predict the functions of proteins with no homologues. Comparatively, the pseudo amino acid composition-based approach alone reached an accuracy of 0.6786. Although the accuracy was lower than that of the previous approach, it could predict the functions of almost all proteins, even proteins with no homologues. Therefore, the combined method balanced the advantages and disadvantages of both approaches to achieve efficient performance. Furthermore, the results yielded by the ten-fold cross-validation indicate that the combined method is still effective and stable when no close homologs are available. However, the accuracy of the predicted functions can only be determined according to known protein functions based on current knowledge. Many protein functions remain unknown. By exploring the functions of proteins for which the 1st-order predicted functions are wrong but the 2nd-order predicted functions are correct, the 1st-order wrongly predicted functions were shown to be closely associated with the genes encoding the proteins. The so-called wrongly predicted functions could also potentially be correct upon future experimental verification. Therefore, the accuracy of the presented method may be much higher in reality. PMID:27846315
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yin, L; Lin, A; Ahn, P
Purpose: To utilize online CBCT scans to develop models for predicting DVH metrics in proton therapy of head and neck tumors. Methods: Nine patients with locally advanced oropharyngeal cancer were retrospectively selected in this study. Deformable image registration was applied to the simulation CT, target volumes, and organs at risk (OARs) contours onto each weekly CBCT scan. Intensity modulated proton therapy (IMPT) treatment plans were created on the simulation CT and forward calculated onto each corrected CBCT scan. Thirty six potentially predictive metrics were extracted from each corrected CBCT. These features include minimum/maximum/mean over and under-ranges at the proximal and distal surface of PTV volumes, and geometrical and water equivalent distance between PTV and each OARs. Principal component analysis (PCA) was used to reduce the dimension of the extracted features. Three principal components were found to account for over 90% of variances in those features. Datasets from eight patients were used to train a machine learning model to fit these principal components with DVH metrics (dose to 95% and 5% of PTV, mean dose or max dose to OARs) from the forward calculated dose on each corrected CBCT. The accuracy of this model was verified on the datasets from the 9th patient. Results: The predicted changes of DVH metrics from the model were in good agreement with actual values calculated on corrected CBCT images. Median differences were within 1 Gy for most DVH metrics except for larynx and constrictor mean dose. However, a large spread of the differences was observed, indicating additional training datasets and predictive features are needed to improve the model. Conclusion: Intensity corrected CBCT scans hold the potential to be used for online verification of proton therapy and prediction of delivered dose distributions.
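A hedged scikit-learn sketch of the PCA-plus-regression pipeline implied above, with random stand-in data for the 36 CBCT-derived features and a single DVH metric; the actual features, model family and training procedure from the abstract are not reproduced here.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(4)
X = rng.normal(size=(40, 36))                            # stand-in weekly CBCT features
y = X[:, :3] @ np.array([1.5, -0.8, 0.4]) + rng.normal(0, 0.1, 40)   # stand-in DVH metric

# Three principal components (as in the abstract) feeding a simple regression model
model = make_pipeline(PCA(n_components=3), LinearRegression()).fit(X[:32], y[:32])
print(model.predict(X[32:]))                             # predictions for held-out scans
```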
Seismic wavefield propagation in 2D anisotropic media: Ray theory versus wave-equation simulation
NASA Astrophysics Data System (ADS)
Bai, Chao-ying; Hu, Guang-yi; Zhang, Yan-teng; Li, Zhong-sheng
2014-05-01
Although ray theory is based on the high-frequency assumption of the elastic wave equation, ray theory and wave-equation simulation methods should serve as mutual checks on each other and hence be developed jointly; in practice, however, they have progressed largely in parallel and independently. For this reason, in this paper we try an alternative way to mutually verify and test the computational accuracy and the solution correctness of both the ray theory (the multistage irregular shortest-path method) and the wave-equation simulation method (both the staggered finite difference method and the pseudo-spectral method) in anisotropic VTI and TTI media. Through the analysis and comparison of wavefield snapshots, common source gather profiles and synthetic seismograms, we are able not only to verify the accuracy and correctness of each of the methods, at least for kinematic features, but also to thoroughly understand the kinematic and dynamic features of the wave propagation in anisotropic media. The results show that both the staggered finite difference method and the pseudo-spectral method are able to yield the same results even for complex anisotropic media (such as a fault model); the multistage irregular shortest-path method is capable of predicting kinematic features similar to those of the wave-equation simulation method, so the two can be used to mutually test each other for methodology accuracy and solution correctness. In addition, with the aid of the ray tracing results, it is easy to identify the multi-phases (or multiples) in the wavefield snapshot, common source point gather seismic section and synthetic seismogram predicted by the wave-equation simulation method, which is a key issue for later seismic application.
NASA Astrophysics Data System (ADS)
Bi, Yiming; Tang, Liang; Shan, Peng; Xie, Qiong; Hu, Yong; Peng, Silong; Tan, Jie; Li, Changwen
2014-08-01
Interference such as baseline drift and light scattering can degrade the model predictability in multivariate analysis of near-infrared (NIR) spectra. Usually interference can be represented by an additive and a multiplicative factor. In order to eliminate these interferences, correction parameters need to be estimated from the spectra. However, the spectra often mix physical light scattering effects with chemical light absorbance effects, making parameter estimation difficult. Herein, a novel algorithm was proposed to automatically find a spectral region in which the chemical absorbance of interest and the noise are low, that is, an interference-dominant region (IDR). Based on the definition of the IDR, a two-step method was proposed to find the optimal IDR and the corresponding correction parameters estimated from it. Finally, the correction was applied to the full spectral range using the previously obtained parameters for the calibration set and the test set, respectively. The method can be applied to multi-target systems with one IDR suitable for all targeted analytes. Tested on two benchmark data sets of near-infrared spectra, the proposed method provided considerable improvement compared with full-spectrum estimation methods and was comparable with other state-of-the-art methods.
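A simplified sketch of an additive/multiplicative correction estimated from a fixed interference-dominant window and applied to the full spectrum; the automatic two-step IDR search described above is not implemented, and the spectra are synthetic.

```python
import numpy as np

def additive_multiplicative_correction(spectra, reference, idr):
    """Estimate offset a and gain b of each spectrum against the reference within
    the IDR window by least squares, then apply (x - a) / b to the full range."""
    corrected = np.empty_like(spectra)
    A = np.column_stack([np.ones_like(reference[idr]), reference[idr]])
    for i, x in enumerate(spectra):
        a, b = np.linalg.lstsq(A, x[idr], rcond=None)[0]
        corrected[i] = (x - a) / b
    return corrected

rng = np.random.default_rng(3)
ref = np.sin(np.linspace(0, 3, 200)) + 2.0
spectra = np.array([0.8 * ref + 0.3, 1.2 * ref - 0.1]) + rng.normal(0, 0.01, (2, 200))
print(additive_multiplicative_correction(spectra, ref, slice(150, 200)))
```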
NASA Technical Reports Server (NTRS)
Carlson, Harry W.; Mann, Michael J.
1992-01-01
A survey of research on drag-due-to-lift minimization at supersonic speeds, including a study of the effectiveness of current design and analysis methods was conducted. The results show that a linearized theory analysis with estimated attainable thrust and vortex force effects can predict with reasonable accuracy the lifting efficiency of flat wings. Significantly better wing performance can be achieved through the use of twist and camber. Although linearized theory methods tend to overestimate the amount of twist and camber required for a given application and provide an overly optimistic performance prediction, these deficiencies can be overcome by implementation of recently developed empirical corrections. Numerous examples of the correlation of experiment and theory are presented to demonstrate the applicability and limitations of linearized theory methods with and without empirical corrections. The use of an Euler code for the estimation of aerodynamic characteristics of a twisted and cambered wing and its application to design by iteration are discussed.
ERIC Educational Resources Information Center
Hancock, Thomas E.; And Others
1995-01-01
In machine-mediated learning environments, there is a need for more reliable methods of calculating the probability that a learner's response will be correct in future trials. The combination of domain-independent response-state measures of cognition with two instructional variables that yields maximum predictive ability is demonstrated. (Author/LRW)
ERIC Educational Resources Information Center
Hamer, Elisa G.; Bos, Arend F.; Hadders-Algra, Mijna
2011-01-01
Aim: Abnormal general movements at around 3 months corrected age indicate a high risk of cerebral palsy (CP). We aimed to determine whether specific movement characteristics can improve the predictive power of definitely abnormal general movements. Method: Video recordings of 46 infants with definitely abnormal general movements at 9 to 13 weeks…
An improved method for predicting brittleness of rocks via well logs in tight oil reservoirs
NASA Astrophysics Data System (ADS)
Wang, Zhenlin; Sun, Ting; Feng, Cheng; Wang, Wei; Han, Chuang
2018-06-01
There can be no industrial oil production in tight oil reservoirs until fracturing is undertaken, and under such conditions the brittleness of the rocks is a very important factor; it has, however, been difficult to predict. In this paper, the study area is the tight oil reservoirs of the Permian Lucaogou Formation, Jimusaer sag, Junggar basin. Based on the transformation between dynamic and static rock-mechanics parameters and a correction for confining pressure, an improved method is proposed for quantitatively predicting rock brittleness from well logs in tight oil reservoirs. First, 19 typical tight oil core samples were selected in the study area, and their static Young's modulus, static Poisson's ratio and petrophysical parameters were measured; in addition, the static brittleness indices of four other tight oil cores were measured under different confining pressures. Second, the dynamic Young's modulus, Poisson's ratio and brittleness index were calculated from the compressional- and shear-wave velocities. By combining the measured and calculated results, a transformation model between the dynamic and static brittleness index was built that accounts for the influence of porosity and clay content; comparison of the predicted brittleness indices with the measured results shows that the model has high accuracy. Third, on the basis of the experimental data obtained under different confining pressures, an amplifying factor of the brittleness index is proposed to correct for the influence of confining pressure on the brittleness index. Finally, the improved models are applied to formation evaluation from well logs. Compared with the results before correction, the results of the improved models agree better with the experimental data, indicating better applicability. The brittleness-index prediction method for tight oil reservoirs is thus improved, which is of great importance for optimizing the selection of fracturing intervals and fracturing construction schemes and for improving oil recovery.
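For orientation only, the dynamic-moduli step mentioned above follows standard elasticity relations between the P- and S-wave velocities, density, Young's modulus and Poisson's ratio; the sketch below uses those relations together with a generic Rickman-style normalization for the brittleness index. The normalization bounds are placeholders, and the paper's fitted dynamic-to-static and confining-pressure corrections are not reproduced here.

```python
# Sketch under stated assumptions: dynamic moduli from sonic logs and a
# generic brittleness-index normalization (placeholder bounds, not the
# paper's calibrated dynamic-to-static transform).
import numpy as np

def dynamic_moduli(vp, vs, rho):
    """vp, vs in m/s, rho in kg/m^3 -> (E_dyn in GPa, nu_dyn)."""
    nu = (vp**2 - 2.0 * vs**2) / (2.0 * (vp**2 - vs**2))
    E = rho * vs**2 * (3.0 * vp**2 - 4.0 * vs**2) / (vp**2 - vs**2) / 1e9
    return E, nu

def brittleness_index(E, nu, E_min=10.0, E_max=80.0, nu_min=0.1, nu_max=0.4):
    """Brittle rock: high Young's modulus, low Poisson's ratio (result in %)."""
    e_term = (E - E_min) / (E_max - E_min)
    n_term = (nu_max - nu) / (nu_max - nu_min)
    return 50.0 * (e_term + n_term)
```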
Ablation algorithms and corneal asphericity in myopic correction with excimer lasers
NASA Astrophysics Data System (ADS)
Iroshnikov, Nikita G.; Larichev, Andrey V.; Yablokov, Michail G.
2007-06-01
The purpose of this work is to study the change in corneal asphericity after myopic refractive correction by means of excimer lasers. Because the shape of the ablation profile plays a key role in the post-operative corneal asphericity, the ablation profiles of recent lasers should be studied. The other task of this research was to analyze LASIK outcomes for one of the lasers with a generic spherical ablation profile and to compare the change in asphericity with theoretical predictions. Several correction methods, such as custom-generated aspherical profiles, may be used to mitigate the unwanted effects of the asphericity change. We also present preliminary results of such a correction for one of the excimer lasers.
Improved Use of Satellite Imagery to Forecast Hurricanes
NASA Technical Reports Server (NTRS)
Louis, Jean-Francois
2001-01-01
This project tested a novel method that uses satellite imagery to correct phase errors in the initial state for numerical weather prediction, applied to hurricane forecasts. The system was tested on hurricanes Guillermo (1997), Felicia (1997) and Iniki (1992). We compared the performance of the system with and without phase correction to a procedure that uses bogus data in the initial state, similar to current operational procedures. The phase correction keeps the hurricane on track in the analysis and is far superior to a system without phase correction. Compared to the operational procedure, phase correction generates somewhat worse 3-day forecasts of the hurricane track but better forecasts of intensity. It is believed that the phase correction module would work best in the context of 4-dimensional variational data assimilation. Very little modification to 4DVar would be required.
NASA Astrophysics Data System (ADS)
Zhang, DaDi; Yang, Xiaolong; Zheng, Xiao; Yang, Weitao
2018-04-01
Electron affinity (EA) is the energy released when an additional electron is attached to an atom or a molecule. EA is a fundamental thermochemical property, and it is closely pertinent to other important properties such as electronegativity and hardness. However, accurate prediction of EA is difficult with density functional theory methods. The somewhat large error of the calculated EAs originates mainly from the intrinsic delocalisation error associated with the approximate exchange-correlation functional. In this work, we employ a previously developed non-empirical global scaling correction approach, which explicitly imposes the Perdew-Parr-Levy-Balduz condition to the approximate functional, and achieve a substantially improved accuracy for the calculated EAs. In our approach, the EA is given by the scaling corrected Kohn-Sham lowest unoccupied molecular orbital energy of the neutral molecule, without the need to carry out the self-consistent-field calculation for the anion.
Predictive Formula for Refraction of Autologous Lenticule Implantation for Hyperopia Correction.
Li, Meng; Li, Meiyan; Sun, Ling; Ni, Katherine; Zhou, Xingtao
2017-12-01
To create a formula to predict the refractive correction of autologous lenticule implantation for correction of hyperopia (in patients with myopia in one eye and hyperopia in the contralateral eye). In this prospective study, 10 consecutive patients (20 eyes) who had myopia in one eye and hyperopia in the contralateral eye were included. The preoperative spherical equivalent was -3.31 ± 1.73 diopters (D) for the myopic eyes and +4.46 ± 1.97 D for the hyperopic eyes. For each patient, the myopic eye was treated with small incision lenticule extraction and the lenticule was subsequently implanted into the contralateral hyperopic eye. The average length of follow-up was 17 months. All of the operations were successful without complications. At the last visit, the efficacy index (postoperative uncorrected distance visual acuity/preoperative corrected distance visual acuity [CDVA]) of the hyperopic eyes was 0.94 ± 0.35 and the safety index (postoperative CDVA/preoperative CDVA) was 1.36 ± 0.38. No eyes lost any lines of visual acuity. Six of 10 (60%) of the implanted eyes were within ±1.00 D of the intended refractive target. A predictive formula was derived: lenticule implantation achieved correction (LAC, D) = 1.224 × lenticule refractive power (LRP, D) - 0.063 (R² = 0.92, P < .001). On corneal topography, there was a significant increase in the corneal anterior surface keratometry value postoperatively, whereas the posterior surface keratometry value remained stable (P > .05). Autologous lenticule implantation could provide a reliable method of correcting hyperopia. The refractive correction formula may require further verification and adjustment. [J Refract Surg. 2017;33(12):827-833.]. Copyright 2017, SLACK Incorporated.
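For illustration, the published regression can be evaluated directly; the sample value below is simply arithmetic applied to the reported coefficients, not an additional clinical result.

```python
def lenticule_achieved_correction(lrp_diopters):
    """Reported regression: LAC = 1.224 * LRP - 0.063 (R^2 = 0.92)."""
    return 1.224 * lrp_diopters - 0.063

# A +4.00 D lenticule would be predicted to achieve about 4.83 D of correction.
print(round(lenticule_achieved_correction(4.00), 2))
```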
Lingner, Thomas; Kataya, Amr R; Antonicelli, Gerardo E; Benichou, Aline; Nilssen, Kjersti; Chen, Xiong-Yan; Siemsen, Tanja; Morgenstern, Burkhard; Meinicke, Peter; Reumann, Sigrun
2011-04-01
In the postgenomic era, accurate prediction tools are essential for identification of the proteomes of cell organelles. Prediction methods have been developed for peroxisome-targeted proteins in animals and fungi but are missing specifically for plants. For development of a predictor for plant proteins carrying peroxisome targeting signals type 1 (PTS1), we assembled more than 2500 homologous plant sequences, mainly from EST databases. We applied a discriminative machine learning approach to derive two different prediction methods, both of which showed high prediction accuracy and recognized specific targeting-enhancing patterns in the regions upstream of the PTS1 tripeptides. Upon application of these methods to the Arabidopsis thaliana genome, 392 gene models were predicted to be peroxisome targeted. These predictions were extensively tested in vivo, resulting in a high experimental verification rate of Arabidopsis proteins previously not known to be peroxisomal. The prediction methods were able to correctly infer novel PTS1 tripeptides, which even included novel residues. Twenty-three newly predicted PTS1 tripeptides were experimentally confirmed, and a high variability of the plant PTS1 motif was discovered. These prediction methods will be instrumental in identifying low-abundance and stress-inducible peroxisomal proteins and defining the entire peroxisomal proteome of Arabidopsis and agronomically important crop plants.
Study of the integration of wind tunnel and computational methods for aerodynamic configurations
NASA Technical Reports Server (NTRS)
Browne, Lindsey E.; Ashby, Dale L.
1989-01-01
A study was conducted to determine the effectiveness of using a low-order panel code to estimate wind tunnel wall corrections. The corrections were found by two computations. The first computation included the test model and the surrounding wind tunnel walls, while in the second computation the wind tunnel walls were removed. The difference between the force and moment coefficients obtained by comparing these two cases allowed the determination of the wall corrections. The technique was verified by matching the test-section, wall-pressure signature from a wind tunnel test with the signature predicted by the panel code. To prove the viability of the technique, two cases were considered. The first was a two-dimensional high-lift wing with a flap that was tested in the 7- by 10-foot wind tunnel at NASA Ames Research Center. The second was a 1/32-scale model of the F/A-18 aircraft which was tested in the low-speed wind tunnel at San Diego State University. The panel code used was PMARC (Panel Method Ames Research Center). Results of this study indicate that the proposed wind tunnel wall correction method is comparable to other methods and that it also inherently includes the corrections due to model blockage and wing lift.
Local sharpening and subspace wavefront correction with predictive dynamic digital holography
NASA Astrophysics Data System (ADS)
Sulaiman, Sennan; Gibson, Steve
2017-09-01
Digital holography holds several advantages over conventional imaging and wavefront sensing, chief among them significantly fewer and simpler optical components and the retrieval of the complex field. Consequently, many imaging and sensing applications, including microscopy and optical tweezing, have turned to digital holography. A significant obstacle for digital holography in real-time applications, such as wavefront sensing for high-energy laser systems and high-speed imaging for target tracking, is that digital holography is computationally intensive; it requires iterative virtual wavefront propagation and hill-climbing to optimize some sharpness criterion. It has been shown recently that minimum-variance wavefront prediction can be integrated with digital holography and image sharpening to significantly reduce the large number of costly sharpening iterations required to achieve near-optimal wavefront correction. This paper demonstrates further gains in computational efficiency from localized sharpening in conjunction with predictive dynamic digital holography for real-time applications. The method optimizes the sharpness of local regions in a detector plane by parallel independent wavefront correction on reduced-dimension subspaces of the complex field in a spectral plane.
A consistent transported PDF model for treating differential molecular diffusion
NASA Astrophysics Data System (ADS)
Wang, Haifeng; Zhang, Pei
2016-11-01
Differential molecular diffusion is a fundamentally significant phenomenon in all multi-component turbulent reacting or non-reacting flows caused by the different rates of molecular diffusion of energy and species concentrations. In the transported probability density function (PDF) method, the differential molecular diffusion can be treated by using a mean drift model developed by McDermott and Pope. This model correctly accounts for the differential molecular diffusion in the scalar mean transport and yields a correct DNS limit of the scalar variance production. The model, however, misses the molecular diffusion term in the scalar variance transport equation, which yields an inconsistent prediction of the scalar variance in the transported PDF method. In this work, a new model is introduced to remedy this problem that can yield a consistent scalar variance prediction. The model formulation along with its numerical implementation is discussed, and the model validation is conducted in a turbulent mixing layer problem.
NASA Astrophysics Data System (ADS)
Qian, Xiaoshan
2018-01-01
Traditional models of evaporation-process parameters suffer from relatively large prediction errors because of the continuity and cumulative characteristics of the process. On this basis, an adaptive particle-swarm-optimized neural-network forecasting method for the process parameters is proposed, in which an autoregressive moving average (ARMA) error-correction procedure compensates the neural-network predictions to improve prediction accuracy. Production data from an alumina plant evaporation process were used for validation; compared with the traditional model, the prediction accuracy of the new model is greatly improved, and the model can be used for dynamic prediction of the components of sodium aluminate solution in the evaporation process.
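The ARMA error-compensation step can be sketched generically: fit an ARMA model to the residuals of any base forecaster and add the predicted residual to the next base forecast. The sketch below uses statsmodels with a placeholder ARMA order; it is not the paper's particle-swarm-tuned network.

```python
# Hedged sketch of ARMA residual compensation for a base forecaster.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

def arma_corrected_forecast(y_true, base_fitted, base_next, order=(2, 0, 1)):
    """y_true, base_fitted: historical observations and the base model's fitted
    values; base_next: the base model's forecast for the next step."""
    residuals = np.asarray(y_true) - np.asarray(base_fitted)
    arma = ARIMA(residuals, order=order).fit()
    resid_next = arma.forecast(steps=1)[0]   # predicted systematic error
    return base_next + resid_next            # compensated prediction
```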
A comparison of methods to estimate future sub-daily design rainfall
NASA Astrophysics Data System (ADS)
Li, J.; Johnson, F.; Evans, J.; Sharma, A.
2017-12-01
Warmer temperatures are expected to increase extreme short-duration rainfall due to the increased moisture-holding capacity of the atmosphere. While attention has been paid to the impacts of climate change on future design rainfalls at daily or longer time scales, the potential changes in short-duration design rainfalls have often been overlooked due to the limited availability of sub-daily projections and observations. This study uses a high-resolution regional climate model (RCM) to predict the changes in sub-daily design rainfalls for the Greater Sydney region in Australia. Sixteen methods for predicting changes to sub-daily future extremes are assessed based on different options for bias correction, disaggregation and frequency analysis. A Monte Carlo cross-validation procedure is employed to evaluate the skill of each method in estimating the design rainfall for the current climate. It is found that bias correction significantly improves the accuracy of the design rainfall estimated for the current climate. For 1 h events, bias correcting the hourly annual maximum rainfall simulated by the RCM produces design rainfall closest to observations, whereas for multi-hour events, disaggregating the daily rainfall total is recommended. This suggests that the RCM fails to simulate the observed multi-duration rainfall persistence, which is a common issue for most climate models. Despite the significant differences in the estimated design rainfalls between different methods, all methods lead to an increase in design rainfalls across the majority of the study region.
Sound radiation of a railway rail in close proximity to the ground
NASA Astrophysics Data System (ADS)
Zhang, Xianying; Squicciarini, Giacomo; Thompson, David J.
2016-02-01
The sound radiation of a railway rail in close proximity to the ground (both rigid and absorptive) is predicted by the boundary element method (BEM) in two dimensions (2D). Results are given in terms of the radiation ratio for both vertical and lateral motion of the rail, when the effects of the acoustic boundary conditions due to the sleepers and ballast are taken into account in the numerical models. Allowance is made for the effect of wave propagation along the rail by applying a correction in the 2D modelling. It is shown that the 2D correction is necessary at low frequency, for both vertical and lateral motion of an unsupported rail, especially in the vicinity of the corresponding critical frequency. However, this correction is not applicable for a supported rail; for vertical motion no correction is needed to the 2D result, while for lateral motion the corresponding correction would depend on the pad stiffness. Finally, the corresponding numerical predictions of the sound radiation from a rail are verified by comparison with experimental results obtained using a 1/5 scale rail model in different configurations.
Near Real-Time Optimal Prediction of Adverse Events in Aviation Data
NASA Technical Reports Server (NTRS)
Martin, Rodney Alexander; Das, Santanu
2010-01-01
The prediction of anomalies or adverse events is a challenging task, and there are a variety of methods which can be used to address the problem. In this paper, we demonstrate how to recast the anomaly prediction problem into a form whose solution is accessible as a level-crossing prediction problem. The level-crossing prediction problem has an elegant, optimal, yet untested solution under certain technical constraints, and only when the appropriate modeling assumptions are made. As such, we will thoroughly investigate the resilience of these modeling assumptions, and show how they affect final performance. Finally, the predictive capability of this method will be assessed by quantitative means, using both validation and test data containing anomalies or adverse events from real aviation data sets that have previously been identified as operationally significant by domain experts. It will be shown that the formulation proposed yields a lower false alarm rate on average than competing methods based on similarly advanced concepts, and a higher correct detection rate than a standard method based upon exceedances that is commonly used for prediction.
Beta value coupled wave theory for nonslanted reflection gratings.
Neipp, Cristian; Francés, Jorge; Gallego, Sergi; Bleda, Sergio; Martínez, Francisco Javier; Pascual, Inmaculada; Beléndez, Augusto
2014-01-01
We present a modified coupled wave theory to describe the properties of nonslanted reflection volume diffraction gratings. The method is based on the beta value coupled wave theory, which will be corrected by using appropriate boundary conditions. The use of this correction allows predicting the efficiency of the reflected order for nonslanted reflection gratings embedded in two media with different refractive indices. The results obtained by using this method will be compared to those obtained using a matrix method, which gives exact solutions in terms of Mathieu functions, and also to Kogelnik's coupled wave theory. As will be demonstrated, the technique presented in this paper means a significant improvement over Kogelnik's coupled wave theory.
Beta Value Coupled Wave Theory for Nonslanted Reflection Gratings
Neipp, Cristian; Francés, Jorge; Gallego, Sergi; Bleda, Sergio; Martínez, Francisco Javier; Pascual, Inmaculada; Beléndez, Augusto
2014-01-01
We present a modified coupled wave theory to describe the properties of nonslanted reflection volume diffraction gratings. The method is based on the beta value coupled wave theory, which will be corrected by using appropriate boundary conditions. The use of this correction allows predicting the efficiency of the reflected order for nonslanted reflection gratings embedded in two media with different refractive indices. The results obtained by using this method will be compared to those obtained using a matrix method, which gives exact solutions in terms of Mathieu functions, and also to Kogelnik's coupled wave theory. As will be demonstrated, the technique presented in this paper means a significant improvement over Kogelnik's coupled wave theory. PMID:24723811
Bjornerud, Atle; Sorensen, A Gregory; Mouridsen, Kim; Emblem, Kyrre E
2011-01-01
We present a novel contrast agent (CA) extravasation-correction method based on analysis of the tissue residue function for assessment of multiple hemodynamic parameters. The method enables semiquantitative determination of the transfer constant and can be used to distinguish between T1- and T2*-dominant extravasation effects, while being insensitive to variations in tissue mean transit time (MTT). Results in 101 patients with confirmed glioma suggest that leakage-corrected absolute cerebral blood volume (CBV) values obtained with the proposed method provide improved overall survival prediction compared with normalized CBV values combined with an established leakage-correction method. Using a standard gradient-echo echo-planar imaging sequence, ∼60% and 10% of tumors with detectable CA extravasation mainly exhibited T1- and T2*-dominant leakage effects, respectively. The remaining 30% of leaky tumors had mixed T1- and T2*-dominant effects. Using an MTT-sensitive correction method, our results show that CBV is underestimated when tumor MTT is significantly longer than MTT in the reference tissue. Furthermore, results from our simulations suggest that the relative contribution of T1- versus T2*-dominant extravasation effects is strongly dependent on the effective transverse relaxivity in the extravascular space and may thus be a potential marker for cellular integrity and tissue structure. PMID:21505483
NASA Astrophysics Data System (ADS)
Darazi, R.; Gouze, A.; Macq, B.
2009-01-01
Reproducing a natural and real scene as we see in the real world everyday is becoming more and more popular. Stereoscopic and multi-view techniques are used for this end. However due to the fact that more information are displayed requires supporting technologies such as digital compression to ensure the storage and transmission of the sequences. In this paper, a new scheme for stereo image coding is proposed. The original left and right images are jointly coded. The main idea is to optimally exploit the existing correlation between the two images. This is done by the design of an efficient transform that reduces the existing redundancy in the stereo image pair. This approach was inspired by Lifting Scheme (LS). The novelty in our work is that the prediction step is been replaced by an hybrid step that consists in disparity compensation followed by luminance correction and an optimized prediction step. The proposed scheme can be used for lossless and for lossy coding. Experimental results show improvement in terms of performance and complexity compared to recently proposed methods.
NASA Technical Reports Server (NTRS)
Schredder, J. M.
1988-01-01
A comparative analysis was performed, using both the Geometrical Theory of Diffraction (GTD) and traditional pathlength error analysis techniques, for predicting RF antenna gain performance and pointing corrections. The NASA/JPL 70 meter antenna with its shaped surface was analyzed for gravity loading over the range of elevation angles. Also analyzed were the effects of lateral and axial displacements of the subreflector. Significant differences were noted between the predictions of the two methods, in the effect of subreflector displacements, and in the optimal subreflector positions to focus a gravity-deformed main reflector. The results are of relevance to future design procedure.
NASA Astrophysics Data System (ADS)
Li, Zhe; Feng, Jinchao; Liu, Pengyu; Sun, Zhonghua; Li, Gang; Jia, Kebin
2018-05-01
Temperature is usually treated as a source of fluctuation in near-infrared spectral measurement, and chemometric methods have been extensively studied to correct the effect of temperature variations. However, temperature can also be considered a constructive parameter that provides detailed chemical information when systematically changed during the measurement. Our group has researched the relationship between temperature-induced spectral variation (TSVC) and normalized squared temperature. In this study, we focused on the influence of the temperature distribution in the calibration set. A multi-temperature calibration set selection (MTCS) method is proposed to improve prediction accuracy by considering the temperature distribution of calibration samples. Furthermore, a double-temperature calibration set selection (DTCS) method is proposed based on the MTCS method and the relationship between TSVC and normalized squared temperature. We compare the prediction performance of PLS models based on the random-sampling method and the proposed methods. The results from experimental studies show that the prediction performance is improved by using the proposed methods. Therefore, the MTCS and DTCS methods are alternative ways to improve prediction accuracy in near-infrared spectral measurement.
Permutation importance: a corrected feature importance measure.
Altmann, André; Toloşi, Laura; Sander, Oliver; Lengauer, Thomas
2010-05-15
In life sciences, interpretability of machine learning models is as important as their prediction accuracy. Linear models are probably the most frequently used methods for assessing feature relevance, despite their relative inflexibility. However, in the past years effective estimators of feature relevance have been derived for highly complex or non-parametric models such as support vector machines and RandomForest (RF) models. Recently, it has been observed that RF models are biased in such a way that categorical variables with a large number of categories are preferred. In this work, we introduce a heuristic for normalizing feature importance measures that can correct the feature importance bias. The method is based on repeated permutations of the outcome vector for estimating the distribution of measured importance for each variable in a non-informative setting. The P-value of the observed importance provides a corrected measure of feature importance. We apply our method to simulated data and demonstrate that (i) non-informative predictors do not receive significant P-values, (ii) informative variables can successfully be recovered among non-informative variables and (iii) P-values computed with permutation importance (PIMP) are very helpful for deciding the significance of variables, and therefore improve model interpretability. Furthermore, PIMP was used to correct RF-based importance measures for two real-world case studies. We propose an improved RF model that uses the significant variables with respect to the PIMP measure and show that its prediction accuracy is superior to that of other existing models. R code for the method presented in this article is available at http://www.mpi-inf.mpg.de/~altmann/download/PIMP.R. Contact: altmann@mpi-inf.mpg.de, laura.tolosi@mpi-inf.mpg.de. Supplementary data are available at Bioinformatics online.
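A minimal sketch of the permutation-of-the-outcome idea follows, using scikit-learn's random forest importances; the original authors provide R code (PIMP.R), so this Python version is only illustrative and omits the parametric null-distribution fits discussed in the article.

```python
# Illustrative PIMP-style P-values: permute the outcome, re-measure importances,
# and compare the observed importance with the permutation null distribution.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def pimp_p_values(X, y, n_permutations=100, random_state=0):
    rng = np.random.default_rng(random_state)
    rf = RandomForestRegressor(n_estimators=200, random_state=random_state)
    observed = rf.fit(X, y).feature_importances_
    null = np.empty((n_permutations, X.shape[1]))
    for k in range(n_permutations):
        null[k] = rf.fit(X, rng.permutation(y)).feature_importances_
    # P-value: fraction of permuted importances at least as large as observed.
    return (null >= observed).mean(axis=0)
```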
New model for burnout prediction in channels of various cross-section
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bobkov, V.P.; Kozina, N.V.; Vinogrado, V.N.
1995-09-01
The model developed to predict critical heat flux (CHF) in channels of various cross-sections is presented, together with the results of the data analysis. The model is a realization of a relative method for describing CHF, based on data for a round tube and a system of correction factors. The data descriptions presented here are for rectangular and triangular channels, annuli and rod bundles.
Method and apparatus for sensor fusion
NASA Technical Reports Server (NTRS)
Krishen, Kumar (Inventor); Shaw, Scott (Inventor); Defigueiredo, Rui J. P. (Inventor)
1991-01-01
Method and apparatus for fusion of data from optical and radar sensors by an error-minimization procedure are presented. The method was applied to the problem of reconstructing the shape of an unknown surface at a distance. The method involves deriving an incomplete surface model from an optical sensor. The unknown characteristics of the surface are represented by some parameter. The correct value of the parameter is computed by iteratively generating theoretical predictions of the radar cross section (RCS) of the surface, comparing the predicted and observed RCS values, and improving the surface model from the results of the comparison. The theoretical RCS may be computed from the surface model in several ways; one RCS prediction technique is the method of moments, which can be applied to an unknown surface only if some shape information is available from an independent source. The optical image provides that independent information.
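The iterative fusion loop can be sketched as a one-parameter fit: the radar cross-section predictor below stands in for the method-of-moments solver and is a hypothetical callable, as is the scalar shape parameter; only the compare-and-update structure reflects the abstract.

```python
# Hypothetical sketch of the optical/radar fusion loop: adjust a shape
# parameter until the predicted RCS matches the observed RCS.
import numpy as np
from scipy.optimize import minimize_scalar

def fuse_optical_and_radar(rcs_observed, predict_rcs, bounds=(0.0, 1.0)):
    """predict_rcs(p) returns the RCS predicted for shape parameter p,
    e.g. from a solver seeded by the optical surface model."""
    def mismatch(p):
        return np.sum((predict_rcs(p) - rcs_observed) ** 2)
    result = minimize_scalar(mismatch, bounds=bounds, method="bounded")
    return result.x  # parameter value that reconciles the two sensors
```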
MO-G-18C-05: Real-Time Prediction in Free-Breathing Perfusion MRI
DOE Office of Scientific and Technical Information (OSTI.GOV)
Song, H; Liu, W; Ruan, D
Purpose: The aim is to minimize frame-wise difference errors caused by respiratory motion and to eliminate the need for breath-holds in magnetic resonance imaging (MRI) sequences with long acquisition and repeat times (TRs). The technique is being applied to perfusion MRI using arterial spin labeling (ASL). Methods: Respiratory motion prediction (RMP) using navigator echoes was implemented in ASL. A least-squares method was used to extract the respiratory motion information from the 1D navigator. A generalized artificial neural network (ANN) with three layers was developed to simultaneously predict 10 time points forward in time and correct for respiratory motion during MRI acquisition. During the training phase, the parameters of the ANN were optimized to minimize the aggregated prediction error based on acquired navigator data. During real-time prediction, the trained ANN was applied to the most recent estimated displacement trajectory to determine, in real time, the amount of spatial correction required. Results: The respiratory motion information extracted by the least-squares method accurately represented the navigator profiles, with a normalized chi-square value of 0.037 ± 0.015 across the training phase. During the 60-second training phase, the ANN successfully learned the respiratory motion pattern from the navigator training data. During real-time prediction, the ANN received displacement estimates and predicted the motion over the continuum of a 1.0 s prediction window. The ANN prediction provided corrections for different respiratory states (i.e., inhalation/exhalation) during real-time scanning with a mean absolute error of < 1.8 mm. Conclusion: A new technique enabling free-breathing acquisition during MRI is being developed. A generalized ANN has demonstrated its efficacy in predicting a continuum of motion profiles for volumetric imaging based on navigator inputs. Future work will enhance the robustness of the ANN and verify its effectiveness with human subjects. Research supported by National Institutes of Health National Cancer Institute Grant R01 CA159471-01.
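The training idea (a small network mapping a window of recent navigator-derived displacements to several future samples) can be sketched as below; the window length, horizon and network size are placeholders, not the abstract's three-layer design.

```python
# Hedged sketch: multi-step-ahead respiratory displacement prediction with a
# small feed-forward network (placeholder hyperparameters).
import numpy as np
from sklearn.neural_network import MLPRegressor

def train_motion_predictor(displacement, window=20, horizon=10):
    X, Y = [], []
    for t in range(window, len(displacement) - horizon):
        X.append(displacement[t - window:t])
        Y.append(displacement[t:t + horizon])  # predict several points at once
    model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
    model.fit(np.asarray(X), np.asarray(Y))
    return model

# Usage: model.predict(latest_window.reshape(1, -1)) gives the predicted trajectory.
```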
Suzuki, Yutaka; Urashima, Mitsuyoshi; Yoshida, Hideki; Iwase, Tsuyoshi; Kura, Toshiroh; Imazato, Shin; Kudo, Michiaki; Ohta, Tomoyuki; Mizuhara, Akihiro; Tamamori, Yutaka; Muramatsu, Hirohito; Nishiguchi, Yukio; Nishiyama, Yorihiro; Takahashi, Mikako; Nishiwaki, Shinji; Matsumoto, Masami; Goshi, Satoshi; Sakamoto, Shigeo; Uchida, Nobuyuki; Ijima, Masashi; Ogawa, Tetsushi; Shimazaki, Makoto; Takei, Shinichi; Kimura, Chikou; Yamashita, Satoyoshi; Endo, Takao; Nakahori, Masato; Itoh, Akihiko; Kusakabe, Toshiro; Ishizuka, Izumi; Iiri, Takao; Fukasawa, Shingo; Arimoto, Yukitsugu; Kajitani, Nobuaki; Ishida, Kazuhiko; Onishi, Koji; Taira, Akihiko; Kobayashi, Makoto; Itano, Yasuto; Kobuke, Toshiya
2009-01-01
During tube exchange for percutaneous endoscopic gastrostomy (PEG), a misplaced tube can cause peritonitis and death. Thus, endoscopic or radiologic observation is required at tube exchange to make sure the tube is placed correctly. However, these procedures require considerable time and money to perform in all patients at the time of tube exchange. Therefore, we developed the "sky blue method" as a screening test to detect misplacement of the PEG tube during tube exchange. First, a sky blue solution consisting of indigo carmine diluted with saline was injected into the gastric space via the old PEG tube just before the tube exchange. Next, the tube was exchanged using a standard method. Then, we checked whether the sky blue solution could be collected through the new tube or not. Finally, we confirmed correct placement of the tube by endoscopic or radiologic observation in all patients. A total of 961 patients were enrolled. Each tube exchange took 1 to 3 minutes, and there were no adverse effects. Four patients experienced a misplaced tube, all of which were detectable with the sky blue method. Diagnostic parameters of the sky blue method were as follows: sensitivity, 94% (95%CI: 92-95%); specificity, 100% (95%CI: 40-100%); positive predictive value, 100% (95%CI: 100-100%); negative predictive value, 6% (95%CI: 2-16%). These results suggest that the number of endoscopic or radiologic observations needed to confirm correct replacement of the PEG tube may be reduced to one fifteenth using the sky blue method.
NASA Astrophysics Data System (ADS)
Gogler, Slawomir; Bieszczad, Grzegorz; Krupinski, Michal
2013-10-01
Thermal imagers and the infrared array sensors used in them are subject to a calibration procedure, and their voltage sensitivity to incident radiation is evaluated during the manufacturing process. The calibration procedure is especially important in so-called radiometric cameras, where accurate radiometric quantities, given in physical units, are of concern. Even though non-radiometric cameras are not expected to meet such elevated standards, it is still important that the image faithfully represent temperature variations across the scene. The detectors in a thermal camera are illuminated by infrared radiation transmitted through an infrared-transmitting optical system. Often an optical system, when exposed to a uniform Lambertian source, forms a non-uniform irradiation distribution in its image plane. In order to carry out an accurate non-uniformity correction, it is essential to correctly predict the irradiation distribution produced by a uniform source. In this article, a non-uniformity correction method is presented that takes the optical system's radiometry into account. Predictions of the irradiation distribution have been compared with measured irradiance values. The presented radiometric model allows fast and accurate non-uniformity correction to be carried out.
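A simple way to use a predicted irradiance map in non-uniformity correction is a gain/offset (flat-field) scheme in which the expected flat-field image is the radiometric model's prediction rather than an assumed-uniform field; the sketch below illustrates that idea under these assumptions and is not the article's exact procedure.

```python
# Sketch, assuming a gain/offset detector model: per-pixel gain is estimated
# from a flat-field frame divided by the predicted irradiance distribution.
import numpy as np

def nonuniformity_correct(raw, flat_measured, dark, irradiance_pred):
    """raw, flat_measured, dark: detector frames; irradiance_pred: the optical
    model's relative irradiance map (same shape, strictly positive)."""
    gain = (flat_measured - dark) / irradiance_pred  # per-pixel responsivity
    gain /= gain.mean()                              # preserve overall scale
    return (raw - dark) / gain
```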
Wang, Yuezong; Zhao, Zhizhong; Wang, Junshuai
2016-04-01
We present a novel and high-precision microscopic vision modeling method, which can be used for 3D data reconstruction in micro-gripping system with stereo light microscope. This method consists of four parts: image distortion correction, disparity distortion correction, initial vision model and residual compensation model. First, the method of image distortion correction is proposed. Image data required by image distortion correction comes from stereo images of calibration sample. The geometric features of image distortions can be predicted though the shape deformation of lines constructed by grid points in stereo images. Linear and polynomial fitting methods are applied to correct image distortions. Second, shape deformation features of disparity distribution are discussed. The method of disparity distortion correction is proposed. Polynomial fitting method is applied to correct disparity distortion. Third, a microscopic vision model is derived, which consists of two models, i.e., initial vision model and residual compensation model. We derive initial vision model by the analysis of direct mapping relationship between object and image points. Residual compensation model is derived based on the residual analysis of initial vision model. The results show that with maximum reconstruction distance of 4.1mm in X direction, 2.9mm in Y direction and 2.25mm in Z direction, our model achieves a precision of 0.01mm in X and Y directions and 0.015mm in Z direction. Comparison of our model with traditional pinhole camera model shows that two kinds of models have a similar reconstruction precision of X coordinates. However, traditional pinhole camera model has a lower precision of Y and Z coordinates than our model. The method proposed in this paper is very helpful for the micro-gripping system based on SLM microscopic vision. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Li, Da-Wei; Meng, Dan; Brüschweiler, Rafael
2015-05-01
A robust NMR resonance assignment method is introduced for proteins whose 3D structure has previously been determined by X-ray crystallography. The goal of the method is to obtain a subset of correct assignments from a parsimonious set of 3D NMR experiments of 15N, 13C labeled proteins. Chemical shifts of sequential residue pairs are predicted from static protein structures using PPM_One, which are then compared with the corresponding experimental shifts. Globally optimized weighted matching identifies the assignments that are robust with respect to small changes in NMR cross-peak positions. The method, termed PASSPORT, is demonstrated for 4 proteins with 100-250 amino acids using 3D NHCA and 3D CBCA(CO)NH experiments as input, producing correct assignments with high reliability for 22% of the residues. The method, which works best for Gly, Ala, Ser, and Thr residues, provides assignments that serve as anchor points for additional assignments by both manual and semi-automated methods, or they can be directly used for further studies, e.g. on ligand binding, protein dynamics, or post-translational modification, such as phosphorylation.
Li, Da-Wei; Meng, Dan; Brüschweiler, Rafael
2015-01-01
A robust NMR resonance assignment method is introduced for proteins whose 3D structure has previously been determined by X-ray crystallography. The goal of the method is to obtain a subset of correct assignments from a parsimonious set of 3D NMR experiments of 15N, 13C labeled proteins. Chemical shifts of sequential residue pairs are predicted from static protein structures using PPM_One, which are then compared with the corresponding experimental shifts. Globally optimized weighted matching identifies the assignments that are robust with respect to small changes in NMR cross-peak positions. The method, termed PASSPORT, is demonstrated for 4 proteins with 100-250 amino acids using 3D NHCA and 3D CBCA(CO)NH experiments as input, producing correct assignments with high reliability for 22% of the residues. The method, which works best for Gly, Ala, Ser, and Thr residues, provides assignments that serve as anchor points for additional assignments by both manual and semi-automated methods, or they can be directly used for further studies, e.g. on ligand binding, protein dynamics, or post-translational modification, such as phosphorylation. PMID:25863893
Numerical study of combustion processes in afterburners
NASA Technical Reports Server (NTRS)
Zhou, Xiaoqing; Zhang, Xiaochun
1986-01-01
Mathematical models and numerical methods are presented for computer modeling of aeroengine afterburners. A computer code GEMCHIP is described briefly. The algorithms SIMPLER, for gas flow predictions, and DROPLET, for droplet flow calculations, are incorporated in this code. The block correction technique is adopted to facilitate convergence. The method of handling irregular shapes of combustors and flameholders is described. The predicted results for a low-bypass-ratio turbofan afterburner in the cases of gaseous combustion and multiphase spray combustion are provided and analyzed, and engineering guides for afterburner optimization are presented.
Chavez, P.S.
1988-01-01
Digital analysis of remotely sensed data has become an important component of many earth-science studies. These data are often processed through a set of preprocessing or "clean-up" routines that includes a correction for atmospheric scattering, often called haze. Various methods to correct or remove the additive haze component have been developed, including the widely used dark-object subtraction technique. A problem with most of these methods is that the haze values for each spectral band are selected independently. This can create problems because atmospheric scattering is highly wavelength-dependent in the visible part of the electromagnetic spectrum and the scattering values are correlated with each other. Therefore, multispectral data such as from the Landsat Thematic Mapper and Multispectral Scanner must be corrected with haze values that are spectral band dependent. An improved dark-object subtraction technique is demonstrated that allows the user to select a relative atmospheric scattering model to predict the haze values for all the spectral bands from a selected starting band haze value. The improved method normalizes the predicted haze values for the different gain and offset parameters used by the imaging system. Examples of haze value differences between the old and improved methods for Thematic Mapper Bands 1, 2, 3, 4, 5, and 7 are 40.0, 13.0, 12.0, 8.0, 5.0, and 2.0 vs. 40.0, 13.2, 8.9, 4.9, 16.7, and 3.3, respectively, using a relative scattering model of a clear atmosphere. In one Landsat multispectral scanner image the haze value differences for Bands 4, 5, 6, and 7 were 30.0, 50.0, 50.0, and 40.0 for the old method vs. 30.0, 34.4, 43.6, and 6.4 for the new method using a relative scattering model of a hazy atmosphere. © 1988.
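The band-to-band prediction step can be sketched with a power-law relative scattering model: convert the starting-band haze value to radiance, scale it to the other bands by a wavelength-dependent factor, and convert back to digital numbers with each band's gain and offset. The calibration arrays, exponent and DN model below are placeholders, not the paper's values.

```python
# Illustrative sketch of haze-value prediction from a starting band using a
# relative scattering model; gains/offsets assume DN = gain * radiance + offset.
import numpy as np

def predict_haze_dn(start_band, start_dn, wavelengths, gains, offsets, power=-2.0):
    """power: scattering exponent (roughly -4 very clear, -2 clear, -0.7 hazy)."""
    start_rad = (start_dn - offsets[start_band]) / gains[start_band]
    scale = start_rad / wavelengths[start_band] ** power
    radiances = scale * wavelengths ** power         # haze radiance in each band
    return gains * radiances + offsets               # back to DN per band
```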
NASA Astrophysics Data System (ADS)
Ahn, J. B.; Hur, J.
2015-12-01
Seasonal predictions of both the surface air temperature and the first-flowering date (FFD) over South Korea are produced using dynamical downscaling (Hur and Ahn, 2015). Dynamical downscaling is performed using the Weather Research and Forecasting (WRF) model v3.0 with lateral forcing from hourly outputs of the Pusan National University (PNU) coupled general circulation model (CGCM) v1.1. Gridded surface air temperature data with high spatial (3 km) and temporal (daily) resolution are obtained using the physically based dynamical models. To reduce systematic bias, a simple statistical correction method is then applied to the model output. The FFDs of cherry, peach and pear in South Korea are predicted for the decade 1999-2008 by applying the corrected daily temperature predictions to a phenological thermal-time model. The WRF v3.0 results reflect the detailed topographical effect, despite having cold and warm biases for the warm and cold seasons, respectively. After applying the correction, the mean temperature for early spring (February to April) represents the general pattern of the observations well, while preserving the advantages of dynamical downscaling. The FFD predictabilities for the three species of trees are evaluated in terms of qualitative, quantitative and categorical estimations. Although the FFDs derived from the corrected WRF results reproduce the spatial distribution and variation of the observations well, the prediction performance does not reach statistical significance or adequate predictability. The approach used in the study may be helpful in obtaining detailed and useful information about FFD and regional temperature by accounting for physically based atmospheric dynamics, although the seasonal predictability of flowering phenology is not yet high enough. Acknowledgements This work was carried out with the support of the Rural Development Administration Cooperative Research Program for Agriculture Science and Technology Development under Grant Project No. PJ009953 and Project No. PJ009353, Republic of Korea. Reference Hur, J., J.-B. Ahn, 2015. Seasonal Prediction of Regional Surface Air Temperature and First-flowering Date over South Korea, Int. J. Climatol., DOI: 10.1002/joc.4323.
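A thermal-time phenology model of the kind used above can be sketched as a growing-degree-day accumulation that flags the first day on which the heat requirement is met; the base temperature, start day and degree-day threshold below are illustrative values, not the calibrated parameters for cherry, peach or pear.

```python
# Hedged sketch of a thermal-time first-flowering-date model (placeholder
# parameters; real applications calibrate t_base and gdd_req per species).
def predict_first_flowering(daily_mean_temp, start_doy=32, t_base=5.0, gdd_req=180.0):
    """daily_mean_temp: sequence indexed by day of year; returns predicted FFD."""
    gdd = 0.0
    for doy in range(start_doy, len(daily_mean_temp)):
        gdd += max(daily_mean_temp[doy] - t_base, 0.0)
        if gdd >= gdd_req:
            return doy
    return None  # heat requirement not met within the year
```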
Mocz, G.
1995-01-01
Fuzzy cluster analysis has been applied to the 20 amino acids, using 65 physicochemical properties as a basis for classification. The clustering products, the fuzzy sets (i.e., classical sets with associated membership functions), have provided a new measure of amino acid similarities for use in protein folding studies. This work demonstrates that fuzzy sets of simple molecular attributes, when assigned to amino acid residues in a protein's sequence, can predict the secondary structure of the sequence with reasonable accuracy. An approach is presented for discriminating standard folding states, using near-optimum information splitting in half-overlapping segments of the sequence of assigned membership functions. The method is applied to a nonredundant set of 252 proteins and yields approximately 73% matching for correctly predicted and correctly rejected residues, with an approximately 60% overall success rate for correctly recognized residues in three folding states: alpha-helix, beta-strand, and coil. The most useful attributes for discriminating these states appear to be related to size, polarity, and thermodynamic factors. Van der Waals volume, apparent average thickness of surrounding molecular free volume, and a measure of dimensionless surface electron density can explain approximately 95% of the prediction results. Hydrogen-bonding and hydrophobicity indices do not yet enable clear clustering and prediction. PMID:7549882
NASA Astrophysics Data System (ADS)
Sahu, Jyoti; Juvekar, Vinay A.
2018-05-01
Prediction of the osmotic coefficient of concentrated electrolytes is needed in a wide variety of industrial applications, and there is a need to correctly segregate the electrostatic contribution to the osmotic coefficient from the non-electrostatic contribution. This is achieved in a rational way in this work. Using the Robinson-Stokes-Glueckauf hydrated-ion model to predict the non-electrostatic contribution to the osmotic coefficient, it is shown that the hydration number should be independent of concentration so that the observed linear dependence of the osmotic coefficient on electrolyte concentration in the high-concentration range can be predicted. The hydration numbers of several electrolytes (LiCl, NaCl, KCl, MgCl2, and MgSO4) have been estimated by this method, and the hydration number predicted by this model shows the correct dependence on temperature. It is also shown that the electrostatic contribution to the osmotic coefficient is underpredicted by the Debye-Hückel theory at concentrations beyond 0.1 m. The Debye-Hückel theory is modified by introducing a concentration-dependent hydrated ionic size. Using the present analysis, it is possible to correctly estimate the electrostatic contribution to the osmotic coefficient beyond the range of validity of the D-H theory. This would allow development of a more fundamental model for electrostatic interaction at high electrolyte concentrations.
Brady, Amie M.G.; Bushon, Rebecca N.; Plona, Meg B.
2009-01-01
The Cuyahoga River within Cuyahoga Valley National Park (CVNP) in Ohio is often impaired for recreational use because of elevated concentrations of bacteria, which are indicators of fecal contamination. During the recreational seasons (May through August) of 2004 through 2007, samples were collected at two river sites, one upstream of and one centrally-located within CVNP. Bacterial concentrations and turbidity were determined, and streamflow at time of sampling and rainfall amounts over the previous 24 hours prior to sampling were ascertained. Statistical models to predict Escherichia coli (E. coli) concentrations were developed for each site (with data from 2004 through 2006) and tested during an independent year (2007). At Jaite, a sampling site near the center of CVNP, the predictive model performed better than the traditional method of determining the current day's water quality using the previous day's E. coli concentration. During 2007, the Jaite model, based on turbidity, produced more correct responses (81 percent) and fewer false negatives (3.2 percent) than the traditional method (68 and 26 percent, respectively). At Old Portage, a sampling site just upstream from CVNP, a predictive model with turbidity and rainfall as explanatory variables did not perform as well as the traditional method. The Jaite model was used to estimate water quality at three other sites in the park; although it did not perform as well as the traditional method, it performed well - yielding between 68 and 91 percent correct responses. Further research would be necessary to determine whether using the Jaite model to predict recreational water quality elsewhere on the river would provide accurate results.
Liu, Yan; Cai, Wensheng; Shao, Xueguang
2016-12-05
Calibration transfer is essential for practical applications of near-infrared (NIR) spectroscopy because spectra may be measured on different instruments and the differences between the instruments must be corrected. For most calibration transfer methods, standard samples are needed to construct the transfer model from the spectra of the same samples measured on the two instruments, referred to as the master and slave instruments, respectively. In this work, a method named linear model correction (LMC) is proposed for calibration transfer without standard samples. The method is based on the fact that, for samples with similar physical and chemical properties, the spectra measured on different instruments are linearly correlated, which makes the coefficients of linear models constructed from spectra measured on different instruments similar in profile. Therefore, by using a constrained optimization method, the coefficients of the master model can be transferred into those of the slave model with only a few spectra measured on the slave instrument. Two NIR data sets of corn and plant-leaf samples measured with different instruments are used to test the performance of the method. The results show that, for both data sets, the spectra can be correctly predicted using the transferred partial least squares (PLS) models. Because standard samples are not necessary, the method may be more useful in practical applications. Copyright © 2016 Elsevier B.V. All rights reserved.
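The transfer step can be approximated, though not in the authors' exact constrained formulation, by a regularized least-squares problem that fits a few slave-instrument spectra while pulling the coefficient vector toward the master model's profile, as sketched below.

```python
# Stand-in for linear model correction: solve
#   min_b ||X_slave b - y_slave||^2 + lam * ||b - b_master||^2
# so the slave coefficients stay close in profile to the master coefficients.
import numpy as np

def transfer_coefficients(b_master, X_slave, y_slave, lam=1.0):
    n_vars = X_slave.shape[1]
    A = X_slave.T @ X_slave + lam * np.eye(n_vars)
    rhs = X_slave.T @ y_slave + lam * b_master
    return np.linalg.solve(A, rhs)
```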
On some methods for assessing earthquake predictions
NASA Astrophysics Data System (ADS)
Molchan, G.; Romashkova, L.; Peresan, A.
2017-09-01
A regional approach to the problem of assessing earthquake predictions inevitably faces a deficit of data. We point out some basic limits of assessment methods reported in the literature, considering the practical case of the performance of the CN pattern recognition method in the prediction of large Italian earthquakes. Along with classical hypothesis testing, a new game approach, the so-called parimutuel gambling (PG) method, is examined. The PG, originally proposed for the evaluation of probabilistic earthquake forecasts, has recently been adapted for the case of 'alarm-based' CN prediction. The PG approach is a non-standard method; therefore it deserves careful examination and theoretical analysis. We show that the alarm-based PG version leads to an almost complete loss of information about predicted earthquakes (even for a large sample). As a result, any conclusions based on the alarm-based PG approach are not to be trusted. We also show that the original probabilistic PG approach does not necessarily identify the genuine forecast correctly among competing seismicity-rate models, even when applied to extensive data.
2016-01-01
Background As more and more researchers are turning to big data for new opportunities of biomedical discoveries, machine learning models, as the backbone of big data analysis, are mentioned more often in biomedical journals. However, owing to the inherent complexity of machine learning methods, they are prone to misuse. Because of the flexibility in specifying machine learning models, the results are often insufficiently reported in research articles, hindering reliable assessment of model validity and consistent interpretation of model outputs. Objective To attain a set of guidelines on the use of machine learning predictive models within clinical settings to make sure the models are correctly applied and sufficiently reported so that true discoveries can be distinguished from random coincidence. Methods A multidisciplinary panel of machine learning experts, clinicians, and traditional statisticians were interviewed, using an iterative process in accordance with the Delphi method. Results The process produced a set of guidelines that consists of (1) a list of reporting items to be included in a research article and (2) a set of practical sequential steps for developing predictive models. Conclusions A set of guidelines was generated to enable correct application of machine learning models and consistent reporting of model specifications and results in biomedical research. We believe that such guidelines will accelerate the adoption of big data analysis, particularly with machine learning methods, in the biomedical research community. PMID:27986644
Application of distance correction to ChemCam laser-induced breakdown spectroscopy measurements
Mezzacappa, A.; Melikechi, N.; Cousin, A.; ...
2016-04-04
Laser-induced breakdown spectroscopy (LIBS) provides chemical information from atomic, ionic, and molecular emissions from which geochemical composition can be deciphered. Analysis of LIBS spectra in cases where targets are observed at different distances, as is the case for the ChemCam instrument on the Mars rover Curiosity, which performs analyses at distances between 2 and 7.4 m, is not a simple task. Previously, we showed that spectral distance correction based on a proxy spectroscopic standard created from first-shot dust observations on Mars targets ameliorates the distance bias in multivariate-based elemental-composition predictions of laboratory data. In this work, we correct an expanded set of neutral and ionic spectral emissions for distance bias in the ChemCam data set. By using and testing different selection criteria to generate multiple proxy standards, we find a correction that minimizes the difference in spectral intensity measured at two different distances and increases spectral reproducibility. When the quantitative performance of the distance correction is assessed, there is improvement for SiO2, Al2O3, CaO, FeOT, Na2O and K2O, that is, for most of the major rock-forming elements, and for the total major-element weight percent predicted. For MgO, however, the method does not provide improvement, while for TiO2 it yields inconsistent results. Additionally, we observed that many emission lines do not behave consistently with distance, as evidenced from laboratory analogue measurements and ChemCam data, which limits the effectiveness of the method.
[Discrimination of donkey meat by NIR and chemometrics].
Niu, Xiao-Ying; Shao, Li-Min; Dong, Fang; Zhao, Zhi-Lei; Zhu, Yan
2014-10-01
Donkey meat samples (n = 167) from different parts of the donkey body (neck, costalia, rump, and tendon), together with beef (n = 47), pork (n = 51) and mutton (n = 32) samples, were used to establish near-infrared reflectance spectroscopy (NIR) classification models in the spectral range of 4,000-12,500 cm-1. The accuracies of classification models constructed by Mahalanobis distance analysis, soft independent modeling of class analogy (SIMCA) and least squares-support vector machine (LS-SVM), each combined with pretreatments of Savitzky-Golay smoothing (5, 15 and 25 points) and derivatives (first and second), multiplicative scatter correction, and standard normal variate, were compared. The optimal models for intact samples were obtained by Mahalanobis distance analysis with the first 11 principal components (PCs) from the original spectra as inputs and by LS-SVM with the first 6 PCs as inputs, which correctly classified 100% of the calibration set and 98.96% of the prediction set. For minced samples of 7 mm diameter, the optimal result was attained by LS-SVM with the first 5 PCs from the original spectra as inputs, which gained an accuracy of 100% for calibration and 97.53% for prediction. For a minced diameter of 5 mm, the SIMCA model with the first 8 PCs from the original spectra as inputs correctly classified 100% of the calibration and prediction sets, and for a minced diameter of 3 mm, Mahalanobis distance analysis and SIMCA models both achieved 100% accuracy for calibration and prediction, with the first 7 and 9 PCs from the original spectra as inputs, respectively. In these models, all donkey meat samples were correctly classified in both calibration and prediction. The results show that NIR combined with chemometric methods is feasible for discriminating donkey meat from other meats.
A Review of Spectral Methods for Variable Amplitude Fatigue Prediction and New Results
NASA Technical Reports Server (NTRS)
Larsen, Curtis E.; Irvine, Tom
2013-01-01
A comprehensive review of the available methods for estimating fatigue damage from variable amplitude loading is presented. The dependence of fatigue damage accumulation on the power spectral density (PSD) is investigated for random processes relevant to real structures such as in offshore or aerospace applications. Beginning with the Rayleigh (or narrow-band) approximation, attempts at improved approximations or corrections to the Rayleigh approximation are examined by comparison to rainflow analysis of time histories simulated from PSD functions representative of simple theoretical and real-world applications. Spectral methods investigated include corrections by Wirsching and Light, Ortiz and Chen, the Dirlik formula, and the Single-Moment method, among other more recently proposed methods. Good agreement is obtained between the spectral methods and the time-domain rainflow identification for most cases, with some limitations. Guidelines are given for using the several spectral methods to increase confidence in the damage estimate.
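As a concrete reference point for the spectral methods discussed above, the sketch below computes spectral moments of a one-sided PSD and the narrow-band (Rayleigh) damage rate for an S-N curve of the form N = C*S^(-b); the PSD, the amplitude-versus-range convention, and the material constants are illustrative assumptions, and corrections such as Wirsching-Light or Dirlik would scale or replace this baseline.

```python
import numpy as np
from scipy.special import gamma

def spectral_moment(f, psd, n):
    # n-th spectral moment m_n = integral of f**n * G(f) df, with f in Hz
    return np.trapz(f**n * psd, f)

def rayleigh_damage_rate(f, psd, b, C):
    """Narrow-band (Rayleigh) damage per unit time for an S-N curve
    N = C * S**(-b), with S taken here as stress amplitude (range-based
    conventions differ by a factor of 2**b)."""
    m0 = spectral_moment(f, psd, 0)
    m2 = spectral_moment(f, psd, 2)
    nu0 = np.sqrt(m2 / m0)                        # zero-upcrossing rate [Hz]
    return nu0 / C * (np.sqrt(2.0 * m0))**b * gamma(1.0 + b / 2.0)

# Illustrative band-limited white PSD between 5 and 50 Hz
f = np.linspace(5.0, 50.0, 500)
psd = np.full_like(f, 100.0)                      # (stress unit)^2 / Hz
print(rayleigh_damage_rate(f, psd, b=3.0, C=1.0e12))
```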
Analyzing the uncertainty of suspended sediment load prediction using sequential data assimilation
NASA Astrophysics Data System (ADS)
Leisenring, Marc; Moradkhani, Hamid
2012-10-01
A first step in understanding the impacts of sediment and controlling the sources of sediment is to quantify the mass loading. Since mass loading is the product of flow and concentration, the quantification of loads first requires the quantification of runoff volume. Using the National Weather Service's SNOW-17 and the Sacramento Soil Moisture Accounting (SAC-SMA) models, this study employed particle filter based Bayesian data assimilation methods to predict seasonal snow water equivalent (SWE) and runoff within a small watershed in the Lake Tahoe Basin located in California, USA. A procedure was developed to scale the variance multipliers (a.k.a. hyperparameters) for model parameters and predictions based on the accuracy of the mean predictions relative to the ensemble spread. In addition, an online bias correction algorithm based on the lagged average bias was implemented to detect and correct for systematic bias in model forecasts prior to updating with the particle filter. Both of these methods significantly improved the performance of the particle filter without requiring excessively wide prediction bounds. The flow ensemble was linked to a non-linear regression model that was used to predict suspended sediment concentrations (SSCs) based on runoff rate and time of year. Runoff volumes and SSCs were then combined to produce an ensemble of suspended sediment load estimates. Annual suspended sediment loads for the 5 years of simulation were finally computed along with 95% prediction intervals that account for uncertainty in both the SSC regression model and the flow rate estimates. Understanding the uncertainty associated with annual suspended sediment load predictions is critical for making sound watershed management decisions aimed at maintaining the exceptional clarity of Lake Tahoe. The computational methods developed and applied in this research could assist with similar studies where it is important to quantify the predictive uncertainty of pollutant load estimates.
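The core particle-filter update used in this kind of assimilation can be sketched as below; the hydrologic forward step is represented by a placeholder `propagate` function standing in for SNOW-17/SAC-SMA, and the Gaussian observation likelihood and systematic resampling are common choices rather than the specific configuration used in the study.

```python
import numpy as np

def particle_filter_step(particles, weights, observation, propagate, obs_std):
    """One bootstrap particle-filter cycle: propagate each particle through
    the forward model, weight by the likelihood of the observed flow, and
    systematically resample to avoid weight degeneracy."""
    particles = np.array([propagate(p) for p in particles])
    likelihood = np.exp(-0.5 * ((observation - particles) / obs_std) ** 2)
    weights = weights * likelihood
    weights /= weights.sum()
    n = len(particles)
    positions = (np.arange(n) + np.random.uniform()) / n
    idx = np.searchsorted(np.cumsum(weights), positions)
    return particles[idx], np.full(n, 1.0 / n)
```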
Gulliver, Kristina; Yoder, Bradley A
2018-05-09
To determine the effect of altitude correction on bronchopulmonary dysplasia (BPD) rates and to assess the validity of the NICHD "Neonatal BPD Outcome Estimator" for predicting BPD with and without altitude correction. Retrospective analysis included neonates born <30 weeks gestational age (GA) between 2010 and 2016. "Effective" FiO2 requirements were determined at 36 weeks corrected GA. Altitude correction was performed via the ratio of barometric pressure (BP) in our unit to sea-level BP. Probability of death and/or moderate-to-severe BPD was calculated using the NICHD BPD Outcome Estimator. Five hundred and sixty-one infants were included. The rate of moderate-to-severe BPD decreased from 71% to 40% following altitude correction. Receiver-operating characteristic curves indicated high predictability of the BPD Outcome Estimator for the altitude-corrected moderate-to-severe BPD diagnosis. Correction for altitude reduced the moderate-to-severe BPD rate by almost 50%, to a rate consistent with recently published values. The NICHD BPD Outcome Estimator is a valid tool for predicting the risk of moderate-to-severe BPD following altitude correction.
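A minimal sketch of the altitude correction as described, assuming the effective FiO2 is the measured FiO2 scaled by the ratio of local to sea-level barometric pressure; the pressure values in the example are illustrative, not those of the study unit.

```python
def effective_fio2(measured_fio2, local_bp_mmhg, sea_level_bp_mmhg=760.0):
    """Scale the measured FiO2 by the ratio of local to sea-level
    barometric pressure (illustrative form of the altitude correction)."""
    return measured_fio2 * (local_bp_mmhg / sea_level_bp_mmhg)

# Example: 0.30 FiO2 at roughly 640 mmHg corresponds to about 0.25 at sea level
print(round(effective_fio2(0.30, 640.0), 3))
```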
HIV-1 protease cleavage site prediction based on two-stage feature selection method.
Niu, Bing; Yuan, Xiao-Cheng; Roeper, Preston; Su, Qiang; Peng, Chun-Rong; Yin, Jing-Yuan; Ding, Juan; Li, HaiPeng; Lu, Wen-Cong
2013-03-01
Knowledge of the mechanism of HIV protease cleavage specificity is critical to the design of specific and effective HIV inhibitors. Searching for an accurate, robust, and rapid method to correctly predict the cleavage sites in proteins is crucial when searching for possible HIV inhibitors. In this article, HIV-1 protease specificity was studied using the correlation-based feature subset (CfsSubset) selection method combined with a genetic algorithm. Thirty important biochemical features were found based on a jackknife test from the original data set containing 4,248 features. Using the AdaBoost method with the thirty selected features, the prediction model yields an accuracy of 96.7% for the jackknife test and 92.1% for an independent set test, an increase in accuracy over the original dataset of 6.7% and 77.4%, respectively. Our feature selection scheme could be a useful technique for finding effective competitive inhibitors of HIV protease.
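A brief sketch of the final classification stage, assuming the thirty selected feature indices are already available from the CfsSubset/genetic-algorithm step; 10-fold cross-validation stands in for the jackknife test here, and the estimator settings are illustrative.

```python
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score

def fit_on_selected_features(X, y, selected_idx):
    """Train AdaBoost on a pre-selected feature subset (X is a NumPy array
    of samples x features) and report a cross-validated accuracy as a rough
    proxy for the jackknife estimate."""
    X_sel = X[:, selected_idx]
    clf = AdaBoostClassifier(n_estimators=100, random_state=0)
    cv_accuracy = cross_val_score(clf, X_sel, y, cv=10).mean()
    return clf.fit(X_sel, y), cv_accuracy
```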
Kyme, Andre; Meikle, Steven; Baldock, Clive; Fulton, Roger
2012-08-01
Motion-compensated radiotracer imaging of fully conscious rodents represents an important paradigm shift for preclinical investigations. In such studies, if motion tracking is performed through a transparent enclosure containing the awake animal, light refraction at the interface will introduce errors in stereo pose estimation. We have performed a thorough investigation of how this impacts the accuracy of pose estimates and the resulting motion correction, and developed an efficient method to predict and correct for refraction-based error. The refraction model underlying this study was validated using a state-of-the-art motion tracking system. Refraction-based error was shown to be dependent on tracking marker size, working distance, and interface thickness and tilt. Correcting for refraction error improved the spatial resolution and quantitative accuracy of motion-corrected positron emission tomography images. Since the methods are general, they may also be useful in other contexts where data are corrupted by refraction effects. Crown Copyright © 2012. Published by Elsevier B.V. All rights reserved.
Fourier-based classification of protein secondary structures.
Shu, Jian-Jun; Yong, Kian Yan
2017-04-15
The correct prediction of protein secondary structures is one of the key issues in predicting the correct protein folded shape, which is used for determining gene function. Existing methods make use of amino acid properties as indices to classify protein secondary structures, but are faced with a significant number of misclassifications. The paper presents a technique for the classification of protein secondary structures based on protein "signal-plotting" and the use of the Fourier technique for digital signal processing. New indices are proposed to classify protein secondary structures by analyzing hydrophobicity profiles. The approach is simple and straightforward. Results show that more types of protein secondary structures can be classified by means of these newly proposed indices. Copyright © 2017 Elsevier Inc. All rights reserved.
SU-E-T-472: Improvement of IMRT QA Passing Rate by Correcting Angular Dependence of MatriXX
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Q; Watkins, W; Kim, T
2015-06-15
Purpose: Multi-channel planar detector arrays utilized for IMRT QA, such as the MatriXX, exhibit an incident-beam angular dependent response which can result in false-positive gamma-based QA results, especially for helical tomotherapy plans, which encompass the full range of beam angles. Although the MatriXX can be used with a gantry angle sensor to provide automatic angular correction, this sensor does not work with tomotherapy. The purpose of the study is to reduce IMRT QA false-positives by correcting for the MatriXX angular dependence. Methods: The MatriXX angular dependence was characterized by comparing multiple fixed-angle irradiation measurements with the corresponding TPS-computed doses. For 81 Tomo-helical IMRT QA measurements, two different correction schemes were tested: (1) A Monte Carlo dose engine was used to compute the MatriXX signal based on the angular-response curve, and the computed signal was then compared with measurement. (2) The uncorrected computed signal was compared with measurements uniformly scaled to account for the average angular dependence; three scaling factors (+2%, +2.5%, +3%) were tested. Results: The MatriXX response is 8% less than predicted for a PA beam even when the couch is fully accounted for. Without angular correction, only 67% of the cases pass the criterion of >90% of points with γ<1 (3%, 3 mm). After full angular correction, 96% of the cases pass the criterion. Of the three scaling factors, +2% gave the highest passing rate (89%), which is still less than that of the full angular correction method. With a stricter γ (2%, 3 mm) criterion, the full angular correction method was still able to achieve a 90% passing rate while the scaling method only gives a 53% passing rate. Conclusion: Correction for the MatriXX angular dependence reduced the false-positive rate of our IMRT QA process. It is necessary to correct for the angular dependence to achieve the IMRT passing criteria specified in TG129.
Pandit, Jaideep J; Tavare, Aniket
2011-07-01
It is important that a surgical list is planned to utilise as much of the scheduled time as possible while not over-running, because this can lead to cancellation of operations. We wished to assess whether, theoretically, the known duration of individual operations could be used quantitatively to predict the likely duration of the operating list. In a university hospital setting, we first assessed the extent to which the current ad-hoc method of operating list planning was able to match the scheduled operating list times for 153 consecutive historical lists. Using receiver operating curve analysis, we assessed the ability of an alternative method to predict operating list duration for the same operating lists. This method uses a simple formula: the sum of individual operation times and a pooled standard deviation of these times. We used the operating list duration estimated from this formula to generate a probability that the operating list would finish within its scheduled time. Finally, we applied the simple formula prospectively to 150 operating lists, 'shadowing' the current ad-hoc method, to confirm the predictive ability of the formula. The ad-hoc method was very poor at planning: 50% of historical operating lists were under-booked and 37% over-booked. In contrast, the simple formula predicted the correct outcome (under-run or over-run) for 76% of these operating lists. The calculated probability that a planned series of operations will over-run or under-run was found useful in developing an algorithm to adjust the planned cases optimally. In the prospective series, 65% of operating lists were over-booked and 10% were under-booked. The formula predicted the correct outcome for 84% of operating lists. A simple quantitative method of estimating operating list duration for a series of operations leads to an algorithm (readily created on an Excel spreadsheet, http://links.lww.com/EJA/A19) that can potentially improve operating list planning.
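The planning formula can be sketched as follows, assuming the list duration is treated as approximately normal with mean equal to the sum of the mean operation times and variance equal to the sum of the individual variances (one plausible reading of the pooled standard deviation); the times in the example are illustrative.

```python
import numpy as np
from scipy.stats import norm

def prob_list_finishes(mean_times, sd_times, scheduled_minutes):
    """Probability that the booked operations finish within the scheduled
    time, modeling the total list duration as a normal variable."""
    total_mean = np.sum(mean_times)
    total_sd = np.sqrt(np.sum(np.square(sd_times)))
    return norm.cdf((scheduled_minutes - total_mean) / total_sd)

# Example: three operations booked into a 480-minute list
print(round(prob_list_finishes([120, 150, 90], [30, 40, 25], 480), 2))
```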
Testa, Alison C; Hane, James K; Ellwood, Simon R; Oliver, Richard P
2015-03-11
The impact of gene annotation quality on functional and comparative genomics makes gene prediction an important process, particularly in non-model species, including many fungi. Sets of homologous protein sequences are rarely complete with respect to the fungal species of interest and are often small or unreliable, especially when closely related species have not been sequenced or annotated in detail. In these cases, protein homology-based evidence fails to correctly annotate many genes, or significantly improve ab initio predictions. Generalised hidden Markov models (GHMM) have proven to be invaluable tools in gene annotation and, recently, RNA-seq has emerged as a cost-effective means to significantly improve the quality of automated gene annotation. As these methods do not require sets of homologous proteins, improving gene prediction from these resources is of benefit to fungal researchers. While many pipelines now incorporate RNA-seq data in training GHMMs, there has been relatively little investigation into additionally combining RNA-seq data at the point of prediction, and room for improvement in this area motivates this study. CodingQuarry is a highly accurate, self-training GHMM fungal gene predictor designed to work with assembled, aligned RNA-seq transcripts. RNA-seq data informs annotations both during gene-model training and in prediction. Our approach capitalises on the high quality of fungal transcript assemblies by incorporating predictions made directly from transcript sequences. Correct predictions are made despite transcript assembly problems, including those caused by overlap between the transcripts of adjacent gene loci. Stringent benchmarking against high-confidence annotation subsets showed CodingQuarry predicted 91.3% of Schizosaccharomyces pombe genes and 90.4% of Saccharomyces cerevisiae genes perfectly. These results are 4-5% better than those of AUGUSTUS, the next best performing RNA-seq driven gene predictor tested. Comparisons against whole genome Sc. pombe and S. cerevisiae annotations further substantiate a 4-5% improvement in the number of correctly predicted genes. We demonstrate the success of a novel method of incorporating RNA-seq data into GHMM fungal gene prediction. This shows that a high quality annotation can be achieved without relying on protein homology or a training set of genes. CodingQuarry is freely available ( https://sourceforge.net/projects/codingquarry/ ), and suitable for incorporation into genome annotation pipelines.
NASA Astrophysics Data System (ADS)
Chang, Guobin; Xu, Tianhe; Yao, Yifei; Wang, Qianxin
2018-01-01
In order to incorporate the time smoothness of the ionospheric delay to aid cycle slip detection, an adaptive Kalman filter is developed based on variance component estimation. The correlations between measurements at neighboring epochs are fully considered in developing a filtering algorithm for colored measurement noise. Within this filtering framework, epoch-differenced ionospheric delays are predicted. Using this prediction, potential cycle slips are repaired for triple-frequency signals of global navigation satellite systems. Cycle slips are repaired in a stepwise manner, i.e., first for two extra-wide-lane combinations and then for the third frequency. In the estimation for the third frequency, a stochastic model is followed in which the correlations between the ionospheric delay prediction errors and the errors in the epoch-differenced phase measurements are considered. The implementation details of the proposed method are tabulated. A real BeiDou Navigation Satellite System data set is used to check the performance of the proposed method. Most cycle slips, whether trivial or nontrivial, can be estimated as float values with satisfactorily high accuracy, and their integer values can hence be correctly obtained by simple rounding. To be more specific, all manually introduced nontrivial cycle slips are correctly repaired.
Prediction of beta-turns in proteins using the first-order Markov models.
Lin, Thy-Hou; Wang, Ging-Ming; Wang, Yen-Tseng
2002-01-01
We present a method based on first-order Markov models for predicting simple beta-turns and loops containing multiple turns in proteins. Sequences of 338 proteins in a database are divided, using the published turn criteria, into three regions: the turn, the boundary, and the nonturn regions. A transition probability matrix is constructed for either the turn or the nonturn region using the weighted transition probabilities computed for dipeptides identified from each region. Two such matrices are constructed for the boundary region, since the transition probabilities for dipeptides immediately preceding or following a turn are different. The window used for scanning a protein sequence from the amino (N-) to the carboxyl (C-) terminal is a hexapeptide, since the transition probability computed for a turn tetrapeptide is capped at both the N- and C-termini with a boundary transition probability indexed, respectively, from the two boundary transition matrices. A sum of the averaged product of the transition probabilities of all the hexapeptides involving each residue is computed. This is then weighted with a probability computed by assuming that all the hexapeptides are from the nonturn region to give the final prediction quantity. Both simple beta-turns and loops containing multiple turns in a protein are then identified by the rise of the computed prediction quantity. The performance of the prediction scheme, measured as the percentage of correct predictions, is evaluated by computing the Matthews correlation coefficient for each protein predicted. The prediction method is found to give results with a better correlation between the percentage of correct predictions and the Matthews correlation coefficients for a group of test proteins than those obtained using some secondary-structure prediction methods. The prediction accuracy for about 40% of proteins in the database, or 50% of proteins in the test set, is better than 70%. This percentage for the test set is reduced to 30% if the structures of all the proteins in the set are treated as unknown.
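One possible reading of the hexapeptide scoring step is sketched below: the central tetrapeptide transitions are scored with the turn matrix and capped by the two boundary matrices, and the scores are averaged over every window containing a residue. The dictionaries of weighted dipeptide transition probabilities are placeholders; building them from the 338-protein database, and the final weighting against the nonturn probability, are not reproduced here.

```python
import numpy as np

def hexapeptide_turn_score(hexa, p_nbound, p_turn, p_cbound):
    """Score a hexapeptide window from dipeptide transition probabilities:
    N-boundary cap, three turn transitions, C-boundary cap."""
    pairs = [hexa[i:i + 2] for i in range(5)]
    probs = ([p_nbound[pairs[0]]]
             + [p_turn[p] for p in pairs[1:4]]
             + [p_cbound[pairs[4]]])
    return np.prod(probs)

def residue_profile(seq, p_nbound, p_turn, p_cbound):
    """Average the hexapeptide scores over every window containing each
    residue; rises in this profile flag candidate turns and loops."""
    scores = [hexapeptide_turn_score(seq[i:i + 6], p_nbound, p_turn, p_cbound)
              for i in range(len(seq) - 5)]
    profile = np.zeros(len(seq))
    counts = np.zeros(len(seq))
    for i, s in enumerate(scores):
        profile[i:i + 6] += s
        counts[i:i + 6] += 1
    return profile / np.maximum(counts, 1)
```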
Bazarian, Jeffrey J; Beck, Christopher; Blyth, Brian; von Ahsen, Nicolas; Hasselblatt, Martin
2006-01-01
To validate a correction factor for the extracranial release of the astroglial protein S-100B based on concomitant creatine kinase (CK) levels. The CK-S-100B relationship in non-head-injured marathon runners was used to derive a correction factor for the extracranial release of S-100B. This factor was then applied to a separate cohort of 96 mild traumatic brain injury (TBI) patients in whom both CK and S-100B levels were measured. Corrected S-100B was compared to uncorrected S-100B for the prediction of initial head CT, three-month headache and three-month post-concussive syndrome (PCS). Corrected S-100B resulted in a statistically significant improvement in the prediction of 3-month headache (area under the curve [AUC] 0.46 vs 0.52, p=0.02), but not PCS or initial head CT. Using a cutoff that maximizes sensitivity (> or = 90%), corrected S-100B improved the prediction of the initial head CT scan (negative predictive value from 75% [95% CI, 2.6%, 67.0%] to 96% [95% CI: 83.5%, 99.8%]). Although S-100B is overall poorly predictive of outcome, a correction factor using CK is a valid means of accounting for extracranial release. By increasing the proportion of mild TBI patients correctly categorized as low risk for an abnormal head CT, CK-corrected S-100B can further reduce the number of unnecessary brain CT scans performed after this injury.
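A hedged sketch of how such a CK-based correction could be applied, assuming a linear CK-S-100B relationship; the slope and intercept are placeholders, not the correction factor published for the marathon-runner cohort.

```python
import numpy as np

def ck_corrected_s100b(s100b, ck, slope, intercept):
    """Subtract the extracranial S-100B contribution predicted from CK via
    an assumed linear relationship, flooring the result at zero."""
    extracranial = slope * np.asarray(ck, dtype=float) + intercept
    return np.maximum(np.asarray(s100b, dtype=float) - extracranial, 0.0)
```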
Evaluation of different flamelet tabulation methods for laminar spray combustion
NASA Astrophysics Data System (ADS)
Luo, Yujuan; Wen, Xu; Wang, Haiou; Luo, Kun; Fan, Jianren
2018-05-01
In this work, three different flamelet tabulation methods for spray combustion are evaluated. Major differences among these methods lie in the treatment of the temperature boundary conditions of the flamelet equations. Particularly, in the first tabulation method ("M1"), both the fuel and oxidizer temperature boundary conditions are set to be fixed. In the second tabulation method ("M2"), the fuel temperature boundary condition is varied while the oxidizer temperature boundary condition is fixed. In the third tabulation method ("M3"), both the fuel and oxidizer temperature boundary conditions are varied and set to be equal. The focus of this work is to investigate whether the heat transfer between the droplet phase and gas phase can be represented by the studied tabulation methods through a priori analyses. To this end, spray flames stabilized in a three-dimensional counterflow are first simulated with detailed chemistry. Then, the trajectory variables are calculated from the detailed chemistry solutions. Finally, the tabulated thermo-chemical quantities are compared to the corresponding values from the detailed chemistry solutions. The comparisons show that the gas temperature cannot be predicted by "M1" with only a mixture fraction and reaction progress variable being the trajectory variables. The gas temperature can be correctly predicted by both "M2" and "M3," in which the total enthalpy is introduced as an additional manifold. In "M2," variations of the oxidizer temperature are considered with a temperature modification technique, which is not required in "M3." Interestingly, it is found that the mass fractions of the reactants and major products are not sensitive to the representation of the interphase heat transfer in the flamelet chemtables, and they can be correctly predicted by all tabulation methods. By contrast, the intermediate species CO and H2 in the premixed flame reaction zone are over-predicted by all tabulation methods.
Stagnation Point Nonequilibrium Radiative Heating and the Influence of Energy Exchange Models
NASA Technical Reports Server (NTRS)
Hartung, Lin C.; Mitcheltree, Robert A.; Gnoffo, Peter A.
1991-01-01
A nonequilibrium radiative heating prediction method has been used to evaluate several energy exchange models used in nonequilibrium computational fluid dynamics methods. The radiative heating measurements from the FIRE II flight experiment supply an experimental benchmark against which different formulations for these exchange models can be judged. The models which predict the lowest radiative heating are found to give the best agreement with the flight data. Examination of the spectral distribution of radiation indicates that despite close agreement of the total radiation, many of the models examined predict excessive molecular radiation. It is suggested that a study of the nonequilibrium chemical kinetics may lead to a correction for this problem.
A linearized Euler analysis of unsteady flows in turbomachinery
NASA Technical Reports Server (NTRS)
Hall, Kenneth C.; Crawley, Edward F.
1987-01-01
A method for calculating unsteady flows in cascades is presented. The model, which is based on the linearized unsteady Euler equations, accounts for blade loading, shock motion, wake motion, and blade geometry. The mean flow through the cascade is determined by solving the full nonlinear Euler equations. Assuming the unsteadiness in the flow is small, the Euler equations are linearized about the mean flow to obtain a set of linear variable-coefficient equations which describe the small-amplitude, harmonic motion of the flow. These equations are discretized on a computational grid via a finite volume operator and solved directly subject to an appropriate set of linearized boundary conditions. The steady flow, which is calculated prior to the unsteady flow, is found via a Newton iteration procedure. An important feature of the analysis is the use of shock fitting to model steady and unsteady shocks. Use of the Euler equations with the unsteady Rankine-Hugoniot shock jump conditions correctly models the generation of steady and unsteady entropy and vorticity at shocks. In particular, the low-frequency shock displacement is correctly predicted. Results of this method are presented for a variety of test cases. Predicted unsteady transonic flows in channels are compared to full nonlinear Euler solutions obtained using time-accurate, time-marching methods. The agreement between the two methods is excellent for small to moderate levels of flow unsteadiness. The method is also used to predict unsteady flows in cascades due to blade motion (flutter problem) and incoming disturbances (gust response problem).
Dispersion- and Exchange-Corrected Density Functional Theory for Sodium Ion Hydration.
Soniat, Marielle; Rogers, David M; Rempe, Susan B
2015-07-14
A challenge in density functional theory is developing functionals that simultaneously describe intermolecular electron correlation and electron delocalization. Recent exchange-correlation functionals address those two issues by adding corrections important at long ranges: an atom-centered pairwise dispersion term to account for correlation and a modified long-range component of the electron exchange term to correct for delocalization. Here we investigate how those corrections influence the accuracy of binding free energy predictions for sodium-water clusters. We find that the dual-corrected ωB97X-D functional gives cluster binding energies closest to high-level ab initio methods (CCSD(T)). Binding energy decomposition shows that the ωB97X-D functional predicts the smallest ion-water (pairwise) interaction energy and larger multibody contributions for a four-water cluster than most other functionals - a trend consistent with CCSD(T) results. Also, ωB97X-D produces the smallest amounts of charge transfer and the least polarizable waters of the density functionals studied, which mimics the lower polarizability of CCSD. When compared with experimental binding free energies, however, the exchange-corrected CAM-B3LYP functional performs best (error <1 kcal/mol), possibly because of its parametrization to experimental formation enthalpies. For clusters containing more than four waters, "split-shell" coordination must be considered to obtain accurate free energies in comparison with experiment.
Murray, Christopher J L
2007-03-10
Health statistics are at the centre of an increasing number of worldwide health controversies. Several factors are sharpening the tension between the supply and demand for high quality health information, and the health-related Millennium Development Goals (MDGs) provide a high-profile example. With thousands of indicators recommended but few measured well, the worldwide health community needs to focus its efforts on improving measurement of a small set of priority areas. Priority indicators should be selected on the basis of public-health significance and several dimensions of measurability. Health statistics can be divided into three types: crude, corrected, and predicted. Health statistics are necessary inputs to planning and strategic decision making, programme implementation, monitoring progress towards targets, and assessment of what works and what does not. Crude statistics that are biased have no role in any of these steps; corrected statistics are preferred. For strategic decision making, when corrected statistics are unavailable, predicted statistics can play an important part. For monitoring progress towards agreed targets and assessment of what works and what does not, however, predicted statistics should not be used. Perhaps the most effective method to decrease controversy over health statistics and to encourage better primary data collection and the development of better analytical methods is a strong commitment to provision of an explicit data audit trail. This initiative would make available the primary data, all post-data collection adjustments, models including covariates used for farcasting and forecasting, and necessary documentation to the public.
Ołdziej, S; Czaplewski, C; Liwo, A; Chinchio, M; Nanias, M; Vila, J A; Khalili, M; Arnautova, Y A; Jagielska, A; Makowski, M; Schafroth, H D; Kaźmierkiewicz, R; Ripoll, D R; Pillardy, J; Saunders, J A; Kang, Y K; Gibson, K D; Scheraga, H A
2005-05-24
Recent improvements in the protein-structure prediction method developed in our laboratory, based on the thermodynamic hypothesis, are described. The conformational space is searched extensively at the united-residue level by using our physics-based UNRES energy function and the conformational space annealing method of global optimization. The lowest-energy coarse-grained structures are then converted to an all-atom representation and energy-minimized with the ECEPP/3 force field. The procedure was assessed in two recent blind tests of protein-structure prediction. During the first blind test, we predicted large fragments of alpha and alpha+beta proteins [60-70 residues with C(alpha) rms deviation (rmsd) <6 A]. However, for alpha+beta proteins, significant topological errors occurred despite low rmsd values. In the second exercise, we predicted whole structures of five proteins (two alpha and three alpha+beta, with sizes of 53-235 residues) with remarkably good accuracy. In particular, for the genomic target TM0487 (a 102-residue alpha+beta protein from Thermotoga maritima), we predicted the complete, topologically correct structure with 7.3-A C(alpha) rmsd. So far this protein is the largest alpha+beta protein predicted based solely on the amino acid sequence and a physics-based potential-energy function and search procedure. For target T0198, a phosphate transport system regulator PhoU from T. maritima (a 235-residue mainly alpha-helical protein), we predicted the topology of the whole six-helix bundle correctly within 8 A rmsd, except the 32 C-terminal residues, most of which form a beta-hairpin. These and other examples described in this work demonstrate significant progress in physics-based protein-structure prediction.
Van Driel, Robin; Trask, Catherine; Johnson, Peter W; Callaghan, Jack P; Koehoorn, Mieke; Teschke, Kay
2013-01-01
Measuring trunk posture in the workplace commonly involves subjective observation or self-report methods or the use of costly and time-consuming motion analysis systems (current gold standard). This work compared trunk inclination measurements using a simple data-logging inclinometer with trunk flexion measurements using a motion analysis system, and evaluated adding measures of subject anthropometry to exposure prediction models to improve the agreement between the two methods. Simulated lifting tasks (n=36) were performed by eight participants, and trunk postures were simultaneously measured with each method. There were significant differences between the two methods, with the inclinometer initially explaining 47% of the variance in the motion analysis measurements. However, adding one key anthropometric parameter (lower arm length) to the inclinometer-based trunk flexion prediction model reduced the differences between the two systems and accounted for 79% of the motion analysis method's variance. Although caution must be applied when generalizing lower-arm length as a correction factor, the overall strategy of anthropometric modeling is a novel contribution. In this lifting-based study, by accounting for subject anthropometry, a single, simple data-logging inclinometer shows promise for trunk posture measurement and may have utility in larger-scale field studies where similar types of tasks are performed.
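The anthropometric-adjustment idea can be sketched as a two-covariate regression, with the motion analysis measurement as the gold-standard response; the variable names are illustrative and the fitted coefficients would of course come from the study data.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def fit_flexion_model(inclinometer_deg, lower_arm_length_cm, motion_capture_deg):
    """Regress gold-standard trunk flexion on the inclinometer reading plus
    one anthropometric covariate (lower-arm length) and report the variance
    explained on the same data."""
    X = np.column_stack([inclinometer_deg, lower_arm_length_cm])
    model = LinearRegression().fit(X, motion_capture_deg)
    return model, model.score(X, motion_capture_deg)
```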
NASA Technical Reports Server (NTRS)
Milesi, Cristina; Costa-Cabral, Mariza; Rath, John; Mills, William; Roy, Sujoy; Thrasher, Bridget; Wang, Weile; Chiang, Felicia; Loewenstein, Max; Podolske, James
2014-01-01
Water resource managers planning for the adaptation to future events of extreme precipitation now have access to high resolution downscaled daily projections derived from statistical bias correction and constructed analogs. We also show that along the Pacific Coast the Northern Oscillation Index (NOI) is a reliable predictor of storm likelihood, and therefore a predictor of seasonal precipitation totals and likelihood of extremely intense precipitation. Such time series can be used to project intensity duration curves into the future or input into stormwater models. However, few climate projection studies have explored the impact of the type of downscaling method used on the range and uncertainty of predictions for local flood protection studies. Here we present a study of the future climate flood risk at NASA Ames Research Center, located in South Bay Area, by comparing the range of predictions in extreme precipitation events calculated from three sets of time series downscaled from CMIP5 data: 1) the Bias Correction Constructed Analogs method dataset downscaled to a 1/8 degree grid (12km); 2) the Bias Correction Spatial Disaggregation method downscaled to a 1km grid; 3) a statistical model of extreme daily precipitation events and projected NOI from CMIP5 models. In addition, predicted years of extreme precipitation are used to estimate the risk of overtopping of the retention pond located on the site through simulations of the EPA SWMM hydrologic model. Preliminary results indicate that the intensity of extreme precipitation events is expected to increase and flood the NASA Ames retention pond. The results from these estimations will assist flood protection managers in planning for infrastructure adaptations.
A new method for the prediction of combustion instability
NASA Astrophysics Data System (ADS)
Flanagan, Steven Meville
This dissertation presents a new approach to the prediction of combustion instability in solid rocket motors. Previous attempts at developing computational tools to solve this problem have been largely unsuccessful, showing very poor agreement with experimental results and having little or no predictive capability. This is due primarily to deficiencies in the linear stability theory upon which these efforts have been based. Recent advances in linear instability theory by Flandro have demonstrated the importance of including unsteady rotational effects, previously considered negligible. Previous versions of the theory also neglected corrections to the unsteady flow field of the first order in the mean flow Mach number. This research explores the stability implications of extending the solution to include these corrections. Also, the corrected linear stability theory based upon a rotational unsteady flow field extended to first order in mean flow Mach number has been implemented in two computer programs developed for the Macintosh platform. A quasi one-dimensional version of the program has been developed which is based upon an approximate solution to the cavity acoustics problem. The three-dimensional program applies Green's Function Discretization (GFD) to the solution for the acoustic mode shapes and frequency. GFD is a recently developed numerical method for finding fully three-dimensional solutions for this class of problems. The analysis of complex motor geometries, previously a tedious and time-consuming task, has also been greatly simplified through the development of a drawing package designed specifically to facilitate the specification of typical motor geometries. The combination of the drawing package, improved acoustic solutions, and new analysis results in a tool which is capable of producing more accurate and meaningful predictions than have been possible in the past.
NASA Astrophysics Data System (ADS)
Sitha, Sanyasi; Jewell, Linda L.; Piketh, Stuart J.; Fourie, Gerhard
2011-01-01
The formation of HOSO2 from OH and SO2 has been thoroughly investigated using several different methods (MP2=Full, MP2=FC, B3LYP, HF and composite G* methods) and basis sets (6-31G(d,p), 6-31++G(d,p), 6-31++G(2d,2p), 6-31++G(2df,2p) and aug-cc-pVnZ). We have found two different possible transition state structures, one of which is a true transition state since it has a higher energy than the reactants and products (MP2=Full, MP2=FC and HF), while the other is not a true transition state since it has an energy which lies between that of the reactants and products (B3LYP and B3LYP-based methods). The transition state structure (from MP2) has a twist angle of the OH fragment relative to the SO bond of the SO2 fragment of -50.0°, whereas this angle is 26.7° in the product molecule. Examination of the displacement vectors confirms that this is a true transition state structure. The MP2=Full method with a larger basis set (MP2=Full/6-31++G(2df,2p)) predicts the enthalpy of reaction to be -112.8 kJ mol⁻¹, which is close to the experimental value of -113.3 ± 6 kJ mol⁻¹, and predicts a rather high barrier of 20.0 kJ mol⁻¹. When the TS structure obtained by the MP2 method is used as the input for calculating the energetics using the QCISD/6-31++G(2df,2p) method, a barrier of 4.1 kJ mol⁻¹ is obtained (ZPE corrected). The rate constant calculated from this barrier is 1.3 × 10⁻¹³ cm³ molecule⁻¹ s⁻¹. We conclude that while the MP2 methods correctly predict the TS from a structural point of view, higher-level energy corrections are needed for estimation of the exact barrier height.
Ocular Chromatic Aberrations and Their Effects on Polychromatic Retinal Image Quality
NASA Astrophysics Data System (ADS)
Zhang, Xiaoxiao
Previous studies of ocular chromatic aberrations have concentrated on chromatic difference of focus (CDF). Less is known about the chromatic difference of image position (CDP) in the peripheral retina and no experimental attempt has been made to measure the ocular chromatic difference of magnification (CDM). Consequently, theoretical modelling of human eyes is incomplete. The insufficient knowledge of ocular chromatic aberrations is partially responsible for two unsolved applied vision problems: (1) how to improve vision by correcting ocular chromatic aberration? (2) what is the impact of ocular chromatic aberration on the use of isoluminance gratings as a tool in spatial-color vision? Using optical ray tracing methods, MTF analysis methods of image quality, and psychophysical methods, I have developed a more complete model of ocular chromatic aberrations and their effects on vision. The ocular CDM was determined psychophysically by measuring the tilt in the apparent frontal parallel plane (AFPP) induced by interocular difference in image wavelength. This experimental result was then used to verify a theoretical relationship between the ocular CDM, the ocular CDF and the entrance pupil of the eye. In the retinal image after correcting the ocular CDF with existing achromatizing methods, two forms of chromatic aberration (CDM and chromatic parallax) were examined. The CDM was predicted by theoretical ray tracing and measured with the same method used to determine ocular CDM. The chromatic parallax was predicted with a nodal ray model and measured with the two-color vernier alignment method. The influence of these two aberrations on the polychromatic MTF was calculated. Using this improved model of ocular chromatic aberration, luminance artifacts in the images of isoluminance gratings were calculated. The predicted luminance artifacts were then compared with experimental data from previous investigators. The results show that: (1) A simple relationship exists between two major chromatic aberrations and the location of the pupil; (2) The ocular CDM is measurable and varies among individuals; (3) All existing methods to correct ocular chromatic aberration face another aberration, chromatic parallax, which is inherent in the methodology; (4) Ocular chromatic aberrations have the potential to contaminate psychophysical experimental results on human spatial-color vision.
Kuniya, Toshikazu; Sano, Hideki
2016-05-10
In mathematical epidemiology, age-structured epidemic models have usually been formulated as boundary-value problems of partial differential equations. On the other hand, in engineering, the backstepping method has recently been developed and widely studied by many authors. Using the backstepping method, we obtained a boundary feedback control which plays the role of a threshold criterion for predicting the increase or decrease of the newly infected population. Under the assumption that the period of infectiousness is the same for all infected individuals (that is, the recovery rate is given by the Dirac delta function multiplied by a sufficiently large positive constant), the prediction method simplifies to a comparison of the numbers of reported cases at the current and previous time steps. Our prediction method was applied to the reported cases per sentinel of influenza in Japan from 2006 to 2015 and its accuracy was 0.81 (404 correct predictions out of 500). This was higher than that of ARIMA models with different orders of the autoregressive part, differencing and moving-average process. In addition, a proposed method for estimating the number of reported cases, which is consistent with our prediction method, was better than the best-fitted ARIMA model ARIMA(1,1,0) in the sense of mean square error. Our prediction method based on the backstepping method can be simplified to a comparison of the numbers of reported cases at the current and previous time steps. In spite of its simplicity, it can provide a good prediction of the spread of influenza in Japan.
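The simplified rule stated above amounts to comparing consecutive reported counts; a short sketch, with illustrative weekly sentinel counts, is given below.

```python
import numpy as np

def predict_increase(cases):
    """Predict an increase at the next step whenever reported cases at the
    current step exceed those at the previous step (decrease otherwise)."""
    cases = np.asarray(cases, dtype=float)
    return cases[1:] > cases[:-1]

def direction_accuracy(cases):
    """Fraction of steps where the predicted direction matches the observed
    change at the following step."""
    cases = np.asarray(cases, dtype=float)
    predicted = predict_increase(cases)[:-1]
    observed = cases[2:] > cases[1:-1]
    return np.mean(predicted == observed)

print(direction_accuracy([3, 5, 9, 12, 10, 7, 6, 8]))  # illustrative counts
```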
Prediction of the production of nitrogen oxide (NOx) in turbojet engines
NASA Astrophysics Data System (ADS)
Tsague, Louis; Tsogo, Joseph; Tatietse, Thomas Tamo
Gaseous nitrogen oxides (NO + NO2 = NOx) are known atmospheric trace constituents. These gases remain a major concern despite advances in low-NOx emission technology because they play a critical role in regulating the oxidizing capacity of the atmosphere according to Crutzen [1995. My life with O3, NOx and other YZOxs; Nobel Lecture; Chemistry 1995; pp 195; December 8, 1995]. Aircraft emissions of nitrogen oxides (NOx) are regulated by the International Civil Aviation Organization. The prediction of NOx emissions in turbojet engines by combining combustion operational data produced information showing correlation between the analytical and empirical results. There is close similarity between the calculated emission index and experimental data. The correlation shows improved accuracy when the 2124 experimental data points from 11 gas turbine engines are evaluated, compared with a previous semi-empirical correlation approach proposed by Pearce et al. [1993. The prediction of thermal NOx in gas turbine exhausts. Eleventh International Symposium on Air Breathing Engines, Tokyo, 1993, pp. 6-9]. The new method we propose predicts the production of NOx with far better accuracy than previous methods. Since a turbojet engine operates in an atmosphere where temperature, pressure and humidity change frequently, a correction factor is developed from standard atmospheric laws and correlations taken from the scientific literature [Swartwelder, M., 2000. Aerospace engineering 410 Term Project performance analysis, November 17, 2000, pp. 2-5; Reed, J.A. Java Gas Turbine Simulator Documentation. pp. 4-5]. The new correction factor is validated with experimental observations from 19 turbojet engines cruising at altitudes of 9 and 13 km given in the ICAO repertory [Middleton, D., 1992. Appendix K (FAA/SETA). Section 1: Boeing Method Two Indices, 1992, pp. 2-3]. This correction factor will enable the prediction of NOx emissions of turbojet engines at cruise. The ICAO database [Goehlich, R.A., 2000. Investigation into the applicability of pollutant emission models for computer aided preliminary aircraft design, Book number 175654, 4.2.2000, pp. 57-79] can now be completed using the approach we propose to cover whole-mission flight NOx emissions.
Determining spherical lens correction for astronaut training underwater
Porter, Jason; Gibson, C. Robert; Strauss, Samuel
2013-01-01
Purpose: To develop a model that will accurately predict the distance spherical lens correction needed to be worn by National Aeronautics and Space Administration (NASA) astronauts while training underwater. The replica space suit's helmet contains curved visors that induce refractive power when submersed in water. Methods: Anterior surface powers and thicknesses were measured for the helmet's protective and inside visors. The impact of each visor on the helmet's refractive power in water was analyzed using thick lens calculations and Zemax optical design software. Using geometrical optics approximations, a model was developed to determine the optimal distance spherical power needed to be worn underwater based on the helmet's total induced spherical power underwater and the astronaut's manifest spectacle plane correction in air. The validity of the model was tested using data from both eyes of 10 astronauts who trained underwater. Results: The helmet visors induced a total power of −2.737 D when placed underwater. The required underwater spherical correction (FW) was linearly related to the spectacle plane spherical correction in air (FAir): FW = FAir + 2.356 D. The mean magnitude of the difference between the actual correction worn underwater and the calculated underwater correction was 0.20 ± 0.11 D. The actual and calculated values were highly correlated (R = 0.971), with 70% of eyes having a difference in magnitude of < 0.25 D between values. Conclusions: We devised a model to calculate the spherical spectacle lens correction needed to be worn underwater by National Aeronautics and Space Administration astronauts. The model accurately predicts the actual values worn underwater and can be applied more generally to determine a suitable spectacle lens correction to be worn behind other types of masks when submerged underwater. PMID:21623249
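The reported linear relationship can be applied directly; a small sketch follows, with an illustrative air prescription (the 2.356 D offset is the value quoted in the abstract).

```python
def underwater_correction(spectacle_air_diopters):
    """Distance spherical correction to wear underwater, using the reported
    relationship F_W = F_Air + 2.356 D."""
    return spectacle_air_diopters + 2.356

# Example: a -3.00 D prescription in air maps to about -0.64 D underwater
print(round(underwater_correction(-3.00), 2))
```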
Cognitive Demands of Lower Paleolithic Toolmaking
Stout, Dietrich; Hecht, Erin; Khreisheh, Nada; Bradley, Bruce; Chaminade, Thierry
2015-01-01
Stone tools provide some of the most abundant, continuous, and high resolution evidence of behavioral change over human evolution, but their implications for cognitive evolution have remained unclear. We investigated the neurophysiological demands of stone toolmaking by training modern subjects in known Paleolithic methods (“Oldowan”, “Acheulean”) and collecting structural and functional brain imaging data as they made technical judgments (outcome prediction, strategic appropriateness) about planned actions on partially completed tools. Results show that this task affected neural activity and functional connectivity in dorsal prefrontal cortex, that effect magnitude correlated with the frequency of correct strategic judgments, and that the frequency of correct strategic judgments was predictive of success in Acheulean, but not Oldowan, toolmaking. This corroborates hypothesized cognitive control demands of Acheulean toolmaking, specifically including information monitoring and manipulation functions attributed to the "central executive" of working memory. More broadly, it develops empirical methods for assessing the differential cognitive demands of Paleolithic technologies, and expands the scope of evolutionary hypotheses that can be tested using the available archaeological record. PMID:25875283
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reeve, Samuel Temple; Strachan, Alejandro, E-mail: strachan@purdue.edu
We use functional, Fréchet, derivatives to quantify how thermodynamic outputs of a molecular dynamics (MD) simulation depend on the potential used to compute atomic interactions. Our approach quantifies the sensitivity of the quantities of interest with respect to the input functions as opposed to its parameters as is done in typical uncertainty quantification methods. We show that the functional sensitivity of the average potential energy and pressure in isothermal, isochoric MD simulations using Lennard–Jones two-body interactions can be used to accurately predict those properties for other interatomic potentials (with different functional forms) without re-running the simulations. This is demonstrated under three different thermodynamic conditions, namely a crystal at room temperature, a liquid at ambient pressure, and a high pressure liquid. The method provides accurate predictions as long as the change in potential can be reasonably described to first order and does not significantly affect the region in phase space explored by the simulation. The functional uncertainty quantification approach can be used to estimate the uncertainties associated with constitutive models used in the simulation and to correct predictions if a more accurate representation becomes available.
Genomic selection for slaughter age in pigs using the Cox frailty model.
Santos, V S; Martins Filho, S; Resende, M D V; Azevedo, C F; Lopes, P S; Guimarães, S E F; Glória, L S; Silva, F F
2015-10-19
The aim of this study was to compare genomic selection methodologies using a linear mixed model and the Cox survival model. We used data from an F2 population of pigs, in which the response variable was the time in days from birth to the culling of the animal and the covariates were 238 markers [237 single nucleotide polymorphism (SNP) plus the halothane gene]. The data were corrected for fixed effects, and the accuracy of the method was determined based on the correlation of the ranks of predicted genomic breeding values (GBVs) in both models with the corrected phenotypic values. The analysis was repeated with a subset of SNP markers with largest absolute effects. The results were in agreement with the GBV prediction and the estimation of marker effects for both models for uncensored data and for normality. However, when considering censored data, the Cox model with a normal random effect (S1) was more appropriate. Since there was no agreement between the linear mixed model and the imputed data (L2) for the prediction of genomic values and the estimation of marker effects, the model S1 was considered superior as it took into account the latent variable and the censored data. Marker selection increased correlations between the ranks of predicted GBVs by the linear and Cox frailty models and the corrected phenotypic values, and 120 markers were required to increase the predictive ability for the characteristic analyzed.
Efficient Semi-Automatic 3D Segmentation for Neuron Tracing in Electron Microscopy Images
Jones, Cory; Liu, Ting; Cohan, Nathaniel Wood; Ellisman, Mark; Tasdizen, Tolga
2015-01-01
Background: In the area of connectomics, there is a significant gap between the time required for data acquisition and dense reconstruction of the neural processes contained in the same dataset. Automatic methods are able to eliminate this timing gap, but the state-of-the-art accuracy so far is insufficient for use without user corrections. If completed naively, this process of correction can be tedious and time consuming. New Method: We present a new semi-automatic method that can be used to perform 3D segmentation of neurites in EM image stacks. It utilizes an automatic method that creates a hierarchical structure for recommended merges of superpixels. The user is then guided through each predicted region to quickly identify errors and establish correct links. Results: We tested our method on three datasets with both novice and expert users. Accuracy and timing were compared with published automatic, semi-automatic, and manual results. Comparison with Existing Methods: Post-automatic correction methods have also been used in [1] and [2]. These methods do not provide navigation or suggestions in the manner we present. Other semi-automatic methods require user input prior to the automatic segmentation, such as [3] and [4], and are inherently different from our method. Conclusion: Using this method on the three datasets, novice users achieved accuracy exceeding state-of-the-art automatic results, and expert users achieved accuracy on par with full manual labeling but with a 70% time improvement when compared with other examples in publication. PMID:25769273
Wu, Zhenkai; Ding, Jing; Zhao, Dahang; Zhao, Li; Li, Hai; Liu, Jianlin
2017-07-10
The multiplier method was introduced by Paley to calculate the timing for temporary hemiepiphysiodesis. However, this method has not been verified in terms of clinical outcome measures. We aimed to (1) predict the rate of angular correction per year (ACPY) at the various corresponding ages by means of the multiplier method and verify its reliability based on data from published studies and (2) screen out risk factors for deviation of prediction. A comprehensive search was performed in the following electronic databases: Cochrane, PubMed, and EMBASE™. A total of 22 studies met the inclusion criteria. If the actual value of ACPY from the collected data fell outside the range of the value predicted by the multiplier method, it was considered a deviation of prediction (DOP). The associations of patient characteristics with DOP were assessed with the use of univariate logistic regression. Only one article was evaluated as moderate evidence; the remaining articles were evaluated as poor quality. The rate of DOP was 31.82%. In the detailed individual data of the included studies, the rate of DOP was 55.44%. The multiplier method is not reliable in predicting the timing for temporary hemiepiphysiodesis, even though it tends to be more reliable for younger patients with idiopathic genu coronal deformity.
Improved model quality assessment using ProQ2.
Ray, Arjun; Lindahl, Erik; Wallner, Björn
2012-09-10
Employing methods to assess the quality of modeled protein structures is now standard practice in bioinformatics. In a broad sense, the techniques can be divided into methods relying on consensus prediction on the one hand, and single-model methods on the other. Consensus methods frequently perform very well when there is a clear consensus, but this is not always the case. In particular, they frequently fail in selecting the best possible model in the hard cases (lacking consensus) or in the easy cases where models are very similar. In contrast, single-model methods do not suffer from these drawbacks and could potentially be applied on any protein of interest to assess quality or as a scoring function for sampling-based refinement. Here, we present a new single-model method, ProQ2, based on ideas from its predecessor, ProQ. ProQ2 is a model quality assessment algorithm that uses support vector machines to predict local as well as global quality of protein models. Improved performance is obtained by combining previously used features with updated structural and predicted features. The most important contribution can be attributed to the use of profile weighting of the residue-specific features and the use of features averaged over the whole model, even though the prediction is still local. ProQ2 is significantly better than its predecessors at detecting high-quality models, improving the sum of Z-scores for the selected first-ranked models by 20% and 32% compared to the second-best single-model method in CASP8 and CASP9, respectively. The absolute quality assessment of the models at both local and global level is also improved. The Pearson's correlation between the correct and locally predicted score is improved from 0.59 to 0.70 on CASP8 and from 0.62 to 0.68 on CASP9; for the global score against the correct GDT_TS, from 0.75 to 0.80 and from 0.77 to 0.80, again compared to the second-best single-model methods in CASP8 and CASP9, respectively. ProQ2 is available at http://proq2.wallnerlab.org.
Airframe noise prediction evaluation
NASA Technical Reports Server (NTRS)
Yamamoto, Kingo J.; Donelson, Michael J.; Huang, Shumei C.; Joshi, Mahendra C.
1995-01-01
The objective of this study is to evaluate the accuracy and adequacy of current airframe noise prediction methods using available airframe noise measurements from tests of a narrow body transport (DC-9) and a wide body transport (DC-10) in addition to scale model test data. General features of the airframe noise from these aircraft and models are outlined. The results of the assessment of two airframe prediction methods, Fink's and Munson's methods, against flight test data of these aircraft and scale model wind tunnel test data are presented. These methods were extensively evaluated against measured data from several configurations including clean, slat deployed, landing gear-deployed, flap deployed, and landing configurations of both DC-9 and DC-10. They were also assessed against a limited number of configurations of scale models. The evaluation was conducted in terms of overall sound pressure level (OASPL), tone corrected perceived noise level (PNLT), and one-third-octave band sound pressure level (SPL).
Automatic Train Operation Using Autonomic Prediction of Train Runs
NASA Astrophysics Data System (ADS)
Asuka, Masashi; Kataoka, Kenji; Komaya, Kiyotoshi; Nishida, Syogo
In this paper, we present an automatic train control method adaptable to disturbed train traffic conditions. The proposed method presumes transmission of the detected time of a home-track clearance to trains approaching the station, using Digital ATC (Automatic Train Control) equipment. Using this information, each train controls its acceleration by a method that consists of two approaches. First, by setting a designated restricted speed, the train controls its running time so as to arrive at the next station in accordance with the predicted delay. Second, the train predicts the time at which it will reach the current braking pattern generated by Digital ATC, along with the time when the braking pattern will transition ahead. By comparing them, the train chooses the coasting drive mode in advance to avoid deceleration due to the current braking pattern. We evaluated the effectiveness of the proposed method in terms of driving conditions, energy consumption, and delay reduction by simulation.
Rotman, Oren Moshe; Weiss, Dar; Zaretsky, Uri; Shitzer, Avraham; Einav, Shmuel
2015-09-18
High accuracy differential pressure measurements are required in various biomedical and medical applications, such as in fluid-dynamic test systems, or in the cath-lab. Differential pressure measurements using fluid-filled catheters are relatively inexpensive, yet may be subjected to common mode pressure errors (CMP), which can significantly reduce the measurement accuracy. Recently, a novel correction method for high accuracy differential pressure measurements was presented, and was shown to effectively remove CMP distortions from measurements acquired in rigid tubes. The purpose of the present study was to test the feasibility of this correction method inside compliant tubes, which effectively simulate arteries. Two tubes with varying compliance were tested under dynamic flow and pressure conditions to cover the physiological range of radial distensibility in coronary arteries. A third, compliant model, with a 70% stenosis severity was additionally tested. Differential pressure measurements were acquired over a 3 cm tube length using a fluid-filled double-lumen catheter, and were corrected using the proposed CMP correction method. Validation of the corrected differential pressure signals was performed by comparison to differential pressure recordings taken via a direct connection to the compliant tubes, and by comparison to predicted differential pressure readings of matching fluid-structure interaction (FSI) computational simulations. The results show excellent agreement between the experimentally acquired and computationally determined differential pressure signals. This validates the application of the CMP correction method in compliant tubes of the physiological range for up to intermediate size stenosis severity of 70%. Copyright © 2015 Elsevier Ltd. All rights reserved.
A Multidimensional B-Spline Correction for Accurate Modeling Sugar Puckering in QM/MM Simulations.
Huang, Ming; Dissanayake, Thakshila; Kuechler, Erich; Radak, Brian K; Lee, Tai-Sung; Giese, Timothy J; York, Darrin M
2017-09-12
The computational efficiency of approximate quantum mechanical methods allows their use for the construction of multidimensional reaction free energy profiles. It has recently been demonstrated that quantum models based on the neglect of diatomic differential overlap (NDDO) approximation have difficulty modeling deoxyribose and ribose sugar ring puckers and thus limit their predictive value in the study of RNA and DNA systems. A method has been introduced in our previous work to improve the description of the sugar puckering conformational landscape that uses a multidimensional B-spline correction map (BMAP correction) for systems involving intrinsically coupled torsion angles. This method greatly improved the adiabatic potential energy surface profiles of DNA and RNA sugar rings relative to high-level ab initio methods even for highly problematic NDDO-based models. In the present work, a BMAP correction is developed, implemented, and tested in molecular dynamics simulations using the AM1/d-PhoT semiempirical Hamiltonian for biological phosphoryl transfer reactions. Results are presented for gas-phase adiabatic potential energy surfaces of RNA transesterification model reactions and condensed-phase QM/MM free energy surfaces for nonenzymatic and RNase A-catalyzed transesterification reactions. The results show that the BMAP correction is stable, efficient, and leads to improvement in both the potential energy and free energy profiles for the reactions studied, as compared with ab initio and experimental reference data. Exploration of the effect of the size of the quantum mechanical region indicates the best agreement with experimental reaction barriers occurs when the full CpA dinucleotide substrate is treated quantum mechanically with the sugar pucker correction.
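A minimal sketch of the correction-map idea follows (not the actual BMAP implementation, grid, or AM1/d-PhoT data): an energy difference between a high-level reference and the semiempirical model, tabulated on a grid of two coupled torsions, is fit with a bivariate spline and added to the low-level energy. The grid values and the corrected_energy helper are placeholders.

```python
# Sketch of a 2D spline correction map over two coupled sugar-pucker torsions:
# a (high-level minus semiempirical) energy difference tabulated on a grid of
# the two angles is fit with a bivariate cubic spline and added to the
# low-level energy whenever it is evaluated. Grid values are placeholders.
import numpy as np
from scipy.interpolate import RectBivariateSpline

t1 = np.linspace(-180.0, 180.0, 37)            # torsion 1 grid, degrees
t2 = np.linspace(-180.0, 180.0, 37)            # torsion 2 grid, degrees
T1, T2 = np.meshgrid(t1, t2, indexing="ij")
delta_e = 0.5 * np.cos(np.radians(T1)) * np.sin(np.radians(T2))   # placeholder, kcal/mol

bmap = RectBivariateSpline(t1, t2, delta_e, kx=3, ky=3)

def corrected_energy(e_low_level, torsion1, torsion2):
    """Hypothetical helper: low-level energy plus the spline-interpolated correction."""
    return e_low_level + float(bmap.ev(torsion1, torsion2))

print(corrected_energy(-120.0, 35.0, -140.0))
```

Forces during dynamics would additionally require the spline's derivatives, which this interface exposes through the dx/dy arguments of ev.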
Holmes, John B; Dodds, Ken G; Lee, Michael A
2017-03-02
An important issue in genetic evaluation is the comparability of random effects (breeding values), particularly between pairs of animals in different contemporary groups. This is usually referred to as genetic connectedness. While various measures of connectedness have been proposed in the literature, there is general agreement that the most appropriate measure is some function of the prediction error variance-covariance matrix. However, obtaining the prediction error variance-covariance matrix is computationally demanding for large-scale genetic evaluations. Many alternative statistics have been proposed that avoid the computational cost of obtaining the prediction error variance-covariance matrix, such as counts of genetic links between contemporary groups, gene flow matrices, and functions of the variance-covariance matrix of estimated contemporary group fixed effects. In this paper, we show that a correction to the variance-covariance matrix of estimated contemporary group fixed effects will produce the exact prediction error variance-covariance matrix averaged by contemporary group for univariate models in the presence of single or multiple fixed effects and one random effect. We demonstrate the correction for a series of models and show that approximations to the prediction error matrix based solely on the variance-covariance matrix of estimated contemporary group fixed effects are inappropriate in certain circumstances. Our method allows for the calculation of a connectedness measure based on the prediction error variance-covariance matrix by calculating only the variance-covariance matrix of estimated fixed effects. Since the number of fixed effects in genetic evaluation is usually orders of magnitude smaller than the number of random effect levels, the computational requirements for our method should be reduced.
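For context, the quantity at the centre of this argument can be illustrated with a toy single-trait model: the prediction error variance-covariance (PEV) matrix is the residual variance times the random-effect block of the inverse of Henderson's mixed-model coefficient matrix. The design matrices and variance ratio below are invented; the sketch shows only where the exact PEV comes from, not the authors' correction.

```python
# Toy computation of the PEV matrix from Henderson's mixed model equations for
# y = Xb + Zu + e, with one fixed contemporary-group effect and one random
# animal effect. Design matrices and the variance ratio are invented.
import numpy as np

X = np.array([[1, 0], [1, 0], [0, 1], [0, 1], [0, 1]], dtype=float)  # 2 groups
Z = np.eye(5)                                   # one record per animal
A_inv = np.eye(5)                               # unrelated animals, for simplicity
lam = 2.0                                       # sigma_e^2 / sigma_u^2

# Coefficient matrix of the mixed model equations.
C = np.block([[X.T @ X, X.T @ Z],
              [Z.T @ X, Z.T @ Z + lam * A_inv]])
C_inv = np.linalg.pinv(C)

sigma_e2 = 1.0
n_fixed = X.shape[1]
# PEV of the random effects is sigma_e^2 times the random-effect block of C^-1;
# connectedness measures are functions of (averages over) this matrix.
pev = sigma_e2 * C_inv[n_fixed:, n_fixed:]
print(np.round(pev, 3))
```

The point of the paper is that the same group-averaged PEV can be recovered from the much smaller variance-covariance matrix of the estimated fixed effects, which this direct inversion does not exploit.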
[Evaluation of Sugar Content of Huanghua Pear on Trees by Visible/Near Infrared Spectroscopy].
Liu, Hui-jun; Ying, Yi-bin
2015-11-01
A method of ambient light correction was proposed to evaluate the sugar content of Huanghua pears on the tree by visible/near-infrared diffuse reflectance spectroscopy (Vis/NIRS). Owing to strong interference from ambient light, it is difficult to collect usable spectra of pears on the tree. In the field, covering the fruit with a bag that blocks ambient light can give better results, but the efficiency is fairly low; instrument corrections using dark and reference spectra may help to reduce model error, but they cannot effectively eliminate the interference of ambient light. To reduce the effect of ambient light, a shutter was attached to the front of the probe. With the shutter open, spot spectra were obtained, on which instrument light and ambient light acted at the same time. With the shutter closed, background spectra were obtained, on which only ambient light acted; the ambient light spectrum was then subtracted from the spot spectrum. Prediction models were built by partial least squares (PLS) using data collected on the tree (before and after ambient light correction) and after harvest. The correlation coefficients (R) are 0.1, 0.69, and 0.924; the root mean square errors of prediction (SEP) are 0.89°Brix, 0.42°Brix, and 0.27°Brix; and the ratios of standard deviation (SD) to SEP (RPD) are 0.79, 1.69, and 2.58, respectively. The results indicate that the background correction method used in the experiment can efficiently reduce the effect of ambient lighting on spectral acquisition of Huanghua pears in the field. This method can be used to collect visible/near-infrared spectra of fruit in the field, and may allow visible/near-infrared spectroscopy to be fully exploited in preharvest management and maturity testing of fruit in the field.
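A schematic of the two-step procedure is sketched below with synthetic spectra (not the study's data): the shutter-closed background spectrum is subtracted from the shutter-open spot spectrum, and a PLS model is then calibrated on the corrected spectra.

```python
# Sketch of the ambient-light correction and PLS calibration: the background
# spectrum recorded with the shutter closed (ambient light only) is subtracted
# from the shutter-open spectrum, and the corrected spectra are used to build
# a partial least squares model for sugar content. All spectra are synthetic.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(1)
n_samples, n_wavelengths = 60, 200

true_signal = rng.normal(size=(n_samples, n_wavelengths))
ambient = rng.normal(scale=0.5, size=(n_samples, n_wavelengths))   # ambient light
spot = true_signal + ambient                                       # shutter open
background = ambient + rng.normal(scale=0.01, size=ambient.shape)  # shutter closed

corrected = spot - background                  # ambient-light correction
brix = true_signal[:, :5].sum(axis=1) + rng.normal(scale=0.1, size=n_samples)

pls = PLSRegression(n_components=5)
pls.fit(corrected, brix)
print("R on training data:", np.corrcoef(pls.predict(corrected).ravel(), brix)[0, 1])
```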
Gamma model and its analysis for phase measuring profilometry.
Liu, Kai; Wang, Yongchang; Lau, Daniel L; Hao, Qi; Hassebrook, Laurence G
2010-03-01
Phase measuring profilometry is a method of structured light illumination whose three-dimensional reconstructions are susceptible to error from nonunitary gamma in the associated optical devices. While the effects of this distortion diminish with an increasing number of employed phase-shifted patterns, gamma distortion may be unavoidable in real-time systems where the number of projected patterns is limited by the presence of target motion. A mathematical model is developed for predicting the effects of nonunitary gamma on phase measuring profilometry, while also introducing an accurate gamma calibration method and two strategies for minimizing gamma's effect on phase determination. These phase correction strategies include phase corrections with and without gamma calibration. With the reduction in noise, for three-step phase measuring profilometry, analysis of the root mean squared error of the corrected phase will show a 60x reduction in phase error when the proposed gamma calibration is performed versus 33x reduction without calibration.
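As a minimal illustration of gamma pre-correction in three-step phase measuring profilometry (not the paper's full gamma model or its two correction strategies), the sketch below linearizes the captured intensities with an assumed calibrated gamma before the arctangent phase computation.

```python
# Three-step phase-shifting with a simple gamma pre-correction: captured
# intensities are linearized with a calibrated gamma value before the phase
# computation. Illustrative only; the gamma value and images are synthetic.
import numpy as np

def wrapped_phase(i1, i2, i3):
    """Three-step phase from intensities with shifts of -120, 0, +120 degrees."""
    return np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)

gamma = 2.2                                  # assumed value from gamma calibration
phi_true = np.linspace(-3.0, 3.0, 512)       # ground-truth phase, radians
deltas = [-2.0 * np.pi / 3.0, 0.0, 2.0 * np.pi / 3.0]

# Synthetic captured images distorted by the projector/camera gamma.
captured = [(0.5 + 0.5 * np.cos(phi_true + d)) ** gamma for d in deltas]

# Linearize with the calibrated gamma, then compute the phase.
linearized = [img ** (1.0 / gamma) for img in captured]
phi = wrapped_phase(*linearized)
print("max phase error (rad):", np.max(np.abs(phi - phi_true)))
```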
Wu, Jingheng; Shen, Lin; Yang, Weitao
2017-10-28
Ab initio quantum mechanics/molecular mechanics (QM/MM) molecular dynamics simulation is a useful tool for calculating thermodynamic properties such as the potential of mean force for chemical reactions, but it is intensely time consuming. In this paper, we developed a new method using an internal force correction for low-level semiempirical QM/MM molecular dynamics samplings with a predefined reaction coordinate. As a correction term, the internal force was predicted with a machine learning scheme, which provides a sophisticated force field, and added to the atomic forces on the reaction-coordinate-related atoms at each integration step. We applied this method to two reactions in aqueous solution and reproduced potentials of mean force at the ab initio QM/MM level. The saving in computational cost is about 2 orders of magnitude. The present work reveals great potential for machine learning in QM/MM simulations to study complex chemical processes.
Modeling coherent errors in quantum error correction
NASA Astrophysics Data System (ADS)
Greenbaum, Daniel; Dutton, Zachary
2018-01-01
Analysis of quantum error correcting codes is typically done using a stochastic, Pauli channel error model for describing the noise on physical qubits. However, it was recently found that coherent errors (systematic rotations) on physical data qubits result in both physical and logical error rates that differ significantly from those predicted by a Pauli model. Here we examine the accuracy of the Pauli approximation for noise containing coherent errors (characterized by a rotation angle ɛ) under the repetition code. We derive an analytic expression for the logical error channel as a function of arbitrary code distance d and concatenation level n, in the small error limit. We find that coherent physical errors result in logical errors that are partially coherent and therefore non-Pauli. However, the coherent part of the logical error is negligible at fewer than ɛ^(-(d^n-1)) error correction cycles when the decoder is optimized for independent Pauli errors, thus providing a regime of validity for the Pauli approximation. Above this number of correction cycles, the persistent coherent logical error will cause logical failure more quickly than the Pauli model would predict, and this may need to be combated with coherent suppression methods at the physical level or larger codes.
An Improved Method of AGM for High Precision Geolocation of SAR Images
NASA Astrophysics Data System (ADS)
Zhou, G.; He, C.; Yue, T.; Huang, W.; Huang, Y.; Li, X.; Chen, Y.
2018-05-01
In order to take full advantage of SAR images, it is necessary to obtain high-precision locations for the images. During geometric correction of images, precise image geolocation is important to ensure the accuracy of the correction and to extract effective mapping information from the images. This paper presents an improved analytical geolocation method (IAGM) that determines the high-precision geolocation of each pixel in a digital SAR image. This method is based on the analytical geolocation method (AGM) proposed by X. K. Yuan, which aims at solving the range-Doppler (RD) model. Tests were conducted using a RADARSAT-2 SAR image. Comparing the predicted feature geolocations with the positions determined from a high-precision orthophoto, the results indicate that an accuracy of 50 m is attainable with this method. Error sources are analyzed and some recommendations for improving image location accuracy in future spaceborne SARs are given.
Jet production in the CoLoRFulNNLO method: Event shapes in electron-positron collisions
NASA Astrophysics Data System (ADS)
Del Duca, Vittorio; Duhr, Claude; Kardos, Adam; Somogyi, Gábor; Szőr, Zoltán; Trócsányi, Zoltán; Tulipánt, Zoltán
2016-10-01
We present the CoLoRFulNNLO method to compute higher order radiative corrections to jet cross sections in perturbative QCD. We apply our method to the computation of event shape observables in electron-positron collisions at NNLO accuracy and validate our code by comparing our predictions to previous results in the literature. We also calculate for the first time jet cone energy fraction at NNLO.
New correction procedures for the fast field program which extend its range
NASA Technical Reports Server (NTRS)
West, M.; Sack, R. A.
1990-01-01
A fast field program (FFP) algorithm was developed based on the method of Lee et al., for the prediction of sound pressure level from low frequency, high intensity sources. In order to permit accurate predictions at distances greater than 2 km, new correction procedures have had to be included in the algorithm. Certain functions, whose Hankel transforms can be determined analytically, are subtracted from the depth dependent Green's function. The distance response is then obtained as the sum of these transforms and the Fast Fourier Transformation (FFT) of the residual k dependent function. One procedure, which permits the elimination of most complex exponentials, has allowed significant changes in the structure of the FFP algorithm, which has resulted in a substantial reduction in computation time.
Milles, Julien; Zhu, Yue Min; Gimenez, Gérard; Guttmann, Charles R G; Magnin, Isabelle E
2007-03-01
A novel approach for correcting intensity nonuniformity in magnetic resonance imaging (MRI) is presented. This approach is based on the simultaneous use of spatial and gray-level histogram information. Spatial information about intensity nonuniformity is obtained using cubic B-spline smoothing. Gray-level histogram information of the image corrupted by intensity nonuniformity is exploited from a frequential point of view. The proposed correction method is illustrated using both physical phantom and human brain images. The results are consistent with theoretical prediction, and demonstrate a new way of dealing with intensity nonuniformity problems. They are all the more significant as the ground truth on intensity nonuniformity is unknown in clinical images.
NASA Astrophysics Data System (ADS)
Sun, Yu; Zhao, Yingjun; Qin, Kai; Tian, Feng
2016-04-01
Hyperspectral remote sensing is a frontier of remote sensing. Because it integrates imagery with spectra, it enables object identification, which is superior to the object classification offered by multispectral remote sensing. Taking the Mingshujing area in Gansu Province, China, as an example, this study extracted alteration minerals and carried out metallogenic prediction using CASI/SASI airborne hyperspectral data. The Mingshujing area, located in the Liuyuan region of Gansu Province, is dominated by middle Variscan granites and Indosinian granites, with well-developed EW- and NE-trending faults. In July 2012, our project team obtained CASI/SASI hyperspectral data of the Liuyuan region by aerial survey. The CASI data have 32 bands and the SASI data have 88 bands, both with a spectral resolution of 15 nm. The raw hyperspectral data were first preprocessed, including radiometric correction and geometric correction. We then conducted atmospheric correction using the empirical line method based on synchronously measured ground spectra to obtain hyperspectral reflectance data. The spectral dimension of the data was reduced by the minimum noise fraction transformation, and pure pixels were then selected. After these steps, image endmember spectra were obtained. We used an endmember spectrum selection method based on expert knowledge to analyze the image endmember spectra. Then, the mixture tuned matched filter (MTMF) mapping method was used to extract mineral information, including limonite, Al-rich sericite, Al-poor sericite, and chlorite. Finally, the distribution of minerals in the Mingshujing area was mapped. According to the distribution of limonite and Al-rich sericite mapped from the CASI/SASI data, we delineated five gold prospecting areas and conducted field verification in these areas. Significant gold mineralization anomalies were found at the surface in the Baixianishan and Xitan prospecting areas. The application of CASI/SASI airborne hyperspectral remote sensing data to metallogenic prediction in the Mingshujing area has achieved good results, indicating its wide application potential in geological research.
Second derivatives for approximate spin projection methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thompson, Lee M.; Hratchian, Hrant P., E-mail: hhratchian@ucmerced.edu
2015-02-07
The use of broken-symmetry electronic structure methods is required in order to obtain correct behavior of electronically strained open-shell systems, such as transition states, biradicals, and transition metals. This approach often has issues with spin contamination, which can lead to significant errors in predicted energies, geometries, and properties. Approximate projection schemes are able to correct for spin contamination and can often yield improved results. To fully make use of these methods and to carry out exploration of the potential energy surface, it is desirable to develop an efficient second energy derivative theory. In this paper, we formulate the analytical second derivatives for the Yamaguchi approximate projection scheme, building on recent work that has yielded an efficient implementation of the analytical first derivatives.
Kobashi, Hidenaga; Kamiya, Kazutaka; Ali, Mohamed A.; Igarashi, Akihito; Elewa, Mohamed Ehab M.; Shimizu, Kimiya
2015-01-01
Purpose: To compare postoperative astigmatic correction between femtosecond lenticule extraction (FLEx) and small-incision lenticule extraction (SMILE) in eyes with myopic astigmatism. Methods: We examined 26 eyes of 26 patients undergoing FLEx and 26 eyes of 26 patients undergoing SMILE to correct myopic astigmatism (manifest astigmatism of 1 diopter (D) or more). Visual acuity, cylindrical refraction, the predictability of the astigmatic correction, and the astigmatic vector components obtained with the Alpins method were compared between the two groups 3 months postoperatively. Results: We found no statistically significant difference in manifest cylindrical refraction (p=0.74) or in the percentage of eyes within ±0.50 D of their refraction (p=0.47) after the two surgical procedures. Moreover, no statistically significant difference was detected between the groups in the astigmatic vector components, namely, surgically induced astigmatism (p=0.80), target induced astigmatism (p=0.87), astigmatic correction index (p=0.77), angle of error (p=0.24), difference vector (p=0.76), index of success (p=0.91), flattening effect (p=0.79), and flattening index (p=0.84). Conclusions: Both the FLEx and SMILE procedures are essentially equivalent in correcting myopic astigmatism according to vector analysis, suggesting that lifting or not lifting the flap does not significantly affect astigmatic outcomes after these surgical procedures. PMID:25849381
Blind prediction of noncanonical RNA structure at atomic accuracy.
Watkins, Andrew M; Geniesse, Caleb; Kladwang, Wipapat; Zakrevsky, Paul; Jaeger, Luc; Das, Rhiju
2018-05-01
Prediction of RNA structure from nucleotide sequence remains an unsolved grand challenge of biochemistry and requires distinct concepts from protein structure prediction. Despite extensive algorithmic development in recent years, modeling of noncanonical base pairs of new RNA structural motifs has not been achieved in blind challenges. We report a stepwise Monte Carlo (SWM) method with a unique add-and-delete move set that enables predictions of noncanonical base pairs of complex RNA structures. A benchmark of 82 diverse motifs establishes the method's general ability to recover noncanonical pairs ab initio, including multistrand motifs that have been refractory to prior approaches. In a blind challenge, SWM models predicted nucleotide-resolution chemical mapping and compensatory mutagenesis experiments for three in vitro selected tetraloop/receptors with previously unsolved structures (C7.2, C7.10, and R1). As a final test, SWM blindly and correctly predicted all noncanonical pairs of a Zika virus double pseudoknot during a recent community-wide RNA-Puzzle. Stepwise structure formation, as encoded in the SWM method, enables modeling of noncanonical RNA structure in a variety of previously intractable problems.
Li, Xinpeng; Li, Hong; Liu, Yun; Xiong, Wei; Fang, Sheng
2018-03-05
The release rate of atmospheric radionuclide emissions is a critical factor in the emergency response to nuclear accidents. However, there are unavoidable biases in radionuclide transport models, leading to inaccurate estimates. In this study, a method that simultaneously corrects these biases and estimates the release rate is developed. Our approach provides a more complete measurement-by-measurement correction of the biases with a coefficient matrix that considers both deterministic and stochastic deviations. This matrix and the release rate are jointly solved by the alternating minimization algorithm. The proposed method is generic because it does not rely on specific features of transport models or scenarios. It is validated against wind tunnel experiments that simulate accidental releases at a heterogeneous and densely built nuclear power plant site. The sensitivities to the position, number, and quality of measurements and the extendibility of the method are also investigated. The results demonstrate that this method effectively corrects the model biases, and therefore outperforms Tikhonov's method in both release rate estimation and model prediction. The proposed approach is robust to uncertainties and extendible with various center estimators, thus providing a flexible framework for robust source inversion in real accidents, even if large uncertainties exist in multiple factors. Copyright © 2017 Elsevier B.V. All rights reserved.
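The joint-estimation idea can be sketched under a deliberately simplified model (not the paper's cost function or regularization): each measurement y_i is treated as the unit-release transport prediction a_i scaled by the release rate q and a per-measurement bias factor c_i, and the two are updated alternately in log space with a ridge pull of the bias factors toward one.

```python
# Schematic alternating minimization for a release rate q and measurement-wise
# model-bias factors c_i under the simplified model y_i = c_i * a_i * q.
# Data, noise levels, and the regularization are invented for illustration.
import numpy as np

rng = np.random.default_rng(2)
n = 40
a = rng.uniform(0.5, 2.0, size=n)                      # unit-release predictions
true_q = 3.0e6                                         # invented release rate
true_c = rng.lognormal(mean=0.0, sigma=0.3, size=n)    # multiplicative model biases
y = true_c * a * true_q * rng.lognormal(0.0, 0.02, size=n)   # measurements

lam = 1.0                      # strength of the pull of the bias factors toward 1
log_c = np.zeros(n)
r = np.log(y / a)
for _ in range(100):
    log_q = np.mean(r - log_c)                 # release-rate step
    log_c = (r - log_q) / (1.0 + lam)          # bias-correction step (ridge on log c)

print(f"estimated release rate: {np.exp(log_q):.3e}  (true {true_q:.3e})")
```

The ridge term is what keeps the bias factors from absorbing the release rate entirely; the actual method constrains its coefficient matrix differently, so this is only a shape of the algorithm, not its content.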
Thermal properties of nuclear matter in a variational framework with relativistic corrections
NASA Astrophysics Data System (ADS)
Zaryouni, S.; Hassani, M.; Moshfegh, H. R.
2014-01-01
The properties of hot symmetric nuclear matter for a wide range of densities and temperatures are investigated by employing the AV14 potential within the lowest order constrained variational (LOCV) method with the inclusion of a phenomenological three-body force as well as relativistic corrections. The relativistic corrections to the many-body kinetic energies as well as the boost interaction corrections are presented for a wide range of densities and temperatures. The free energy, pressure, incompressibility, and other thermodynamic quantities of symmetric nuclear matter are obtained and discussed. The critical temperature is found, and the liquid-gas phase transition is analyzed both with and without the inclusion of three-body forces and relativistic corrections in the LOCV approach. It is shown that the critical temperature is strongly affected by the three-body forces but does not depend on the relativistic corrections. Finally, the results obtained in the present study are compared with other many-body calculations and experimental predictions.
The effect of monitor raster latency on VEPs, ERPs and Brain-Computer Interface performance.
Nagel, Sebastian; Dreher, Werner; Rosenstiel, Wolfgang; Spüler, Martin
2018-02-01
Visual neuroscience experiments and Brain-Computer Interface (BCI) control often require strict timing on a millisecond scale. As most experiments are performed using a personal computer (PC), the latencies introduced by the setup should be taken into account and corrected. Because a standard computer monitor uses rastering to update each line of the image sequentially, it introduces a raster latency that depends on the position on the monitor and the refresh rate. We technically measured the raster latencies of different monitors and present the effects on visual evoked potentials (VEPs) and event-related potentials (ERPs). Additionally, we present a method for correcting the monitor raster latency and analyze the performance difference of a code-modulated VEP BCI speller when the latency is corrected. There are currently no other methods validating the effects of monitor raster latency on VEPs and ERPs. The timings of VEPs and ERPs are directly affected by the raster latency. Furthermore, correcting the raster latency resulted in a significant reduction of the target prediction error from 7.98% to 4.61% and also in a more reliable classification of targets, significantly increasing the distance between the most probable and the second most probable target by 18.23%. The monitor raster latency affects the timings of VEPs and ERPs, and correcting it resulted in a significant error reduction of 42.23%. It is recommended to correct the raster latency for increased BCI performance and methodological correctness. Copyright © 2017 Elsevier B.V. All rights reserved.
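A minimal sketch of the position-dependent part of such a correction, assuming a simple top-to-bottom raster and ignoring the monitor's fixed input lag: a stimulus drawn at vertical position y appears roughly y/height of a refresh period after the frame onset, so event markers can be shifted to the actual on-screen time. All numbers are illustrative.

```python
# Shift frame-onset trigger timestamps to the actual on-screen time of a
# stimulus, given its vertical position and the refresh rate (top-to-bottom
# raster assumed; fixed panel input lag ignored).
refresh_rate_hz = 60.0
frame_period_ms = 1000.0 / refresh_rate_hz   # ~16.7 ms per refresh
screen_height_px = 1080

def raster_latency_ms(y_px: float) -> float:
    """Approximate extra delay before a stimulus at vertical pixel y is drawn."""
    return (y_px / screen_height_px) * frame_period_ms

stimulus_y = 810                              # stimulus in the lower part of the screen
frame_onsets_ms = [1000.0, 2500.0, 4000.0]    # logged frame-onset trigger times
true_onsets_ms = [t + raster_latency_ms(stimulus_y) for t in frame_onsets_ms]
print([round(t, 2) for t in true_onsets_ms])  # each marker shifted by ~12.5 ms
```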
A novel method for structure-based prediction of ion channel conductance properties.
Smart, O S; Breed, J; Smith, G R; Sansom, M S
1997-01-01
A rapid and easy-to-use method of predicting the conductance of an ion channel from its three-dimensional structure is presented. The method combines the pore dimensions of the channel, as measured in the HOLE program, with an Ohmic model of conductance. An empirically based correction factor is then applied. The method yielded good results for six experimental channel structures (none of which were included in the training set), with predictions accurate to within an average factor of 1.62 of the true values. The predictive r2 was equal to 0.90, which is indicative of good predictive ability. The procedure is used to validate model structures of alamethicin and phospholamban. Two genuine predictions for the conductance of channels with known structure but without reported conductances are given. A modification of the procedure that calculates the expected effect of the addition of nonelectrolyte polymers on conductance is set out. Results for a cholera toxin B-subunit crystal structure agree well with the measured values. The difficulty in interpreting such studies is discussed, with the conclusion that measurements on channels of known structure are required. PMID:9138559
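The Ohmic part of such a prediction can be sketched as a series sum of thin cylindrical slabs along a HOLE-style radius profile, with an empirical factor applied afterwards; the radius profile, conductivity, and factor below are placeholders rather than the published calibration.

```python
# Ohmic conductance estimate from a pore-radius profile: the channel is treated
# as a stack of thin cylindrical slabs of resistivity 1/kappa, the slab
# resistances add in series, and an empirical correction factor is applied.
import numpy as np

kappa = 1.5                                  # bulk solution conductivity, S/m
z = np.linspace(0.0, 4.0e-9, 200)            # pore axis, 4 nm long
radius = 0.6e-9 + 0.4e-9 * np.sin(np.pi * z / z[-1])   # pore radius profile, m

area = np.pi * radius ** 2
integrand = 1.0 / (kappa * area)             # resistance per unit length of each slab
resistance = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(z))
g_ohmic = 1.0 / resistance                   # Ohmic conductance, S

empirical_factor = 0.25                      # placeholder for the empirical correction
print(f"predicted conductance: {empirical_factor * g_ohmic * 1e12:.0f} pS")
```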
Crystal structure prediction supported by incomplete experimental data
NASA Astrophysics Data System (ADS)
Tsujimoto, Naoto; Adachi, Daiki; Akashi, Ryosuke; Todo, Synge; Tsuneyuki, Shinji
2018-05-01
We propose an efficient theoretical scheme for structure prediction based on the idea of combining methods that optimize theoretical calculations and experimental data simultaneously. In this scheme, we formulate a cost function as a weighted sum of interatomic potential energies and a penalty function defined with partial experimental data that are totally insufficient for conventional structure analysis. In particular, we define the cost function using a "crystallinity" formulated with only the peak positions within a small range of the x-ray diffraction pattern. We apply this method to well-known polymorphs of SiO2 and C with up to 108 atoms in the simulation cell and show that it reproduces the correct structures efficiently with very limited diffraction-peak information. This scheme opens a new avenue for determining and predicting structures that are difficult to determine by conventional methods.
NASA Astrophysics Data System (ADS)
Hamdi, H.; Qausar, A. M.; Srigutomo, W.
2016-08-01
Controlled source audio-frequency magnetotellurics (CSAMT) is a frequency-domain electromagnetic sounding technique which uses a fixed grounded dipole as an artificial signal source. Measurement of CSAMT at a finite distance between transmitter and receiver produces a complex, non-plane wave. Shifts in the electric field due to the static effect move the apparent resistivity curve up or down and affect the measurement results. The objective of this study was to obtain data corrected for source and static effects, so that they have the same characteristics as MT data, which are assumed to exhibit plane-wave properties. The corrected CSAMT data were inverted to reveal a subsurface resistivity model. A source-effect correction was applied to eliminate the effect of the signal source, and the static effect was corrected using a spatial filtering technique. The inversion method used in this study is Occam's 2D inversion. The inversion produces smooth models with small misfit values, indicating that the models describe subsurface conditions well. Based on the inversion results, the measurement area is predicted to consist of rock with high permeability that is rich in hot fluid.
Surface Depletion Correction to Carrier Profiles by Hall Measurements.
1985-12-01
deviations much larger than those predicted by the LSS theory. There are several advantages of the differential Hall method over the C-V method.
Mackie, Iain D; DiLabio, Gino A
2011-10-07
The first-principles calculation of non-covalent (particularly dispersion) interactions between molecules is a considerable challenge. In this work we studied the binding energies for ten small non-covalently bonded dimers with several combinations of correlation methods (MP2, coupled-cluster single double, coupled-cluster single double (triple) (CCSD(T))), correlation-consistent basis sets (aug-cc-pVXZ, X = D, T, Q), two-point complete basis set energy extrapolations, and counterpoise corrections. For this work, complete basis set results were estimated from averaged counterpoise and non-counterpoise-corrected CCSD(T) binding energies obtained from extrapolations with aug-cc-pVQZ and aug-cc-pVTZ basis sets. It is demonstrated that, in almost all cases, binding energies converge more rapidly to the basis set limit by averaging the counterpoise and non-counterpoise corrected values than by using either counterpoise or non-counterpoise methods alone. Examination of the effect of basis set size and electron correlation shows that the triples contribution to the CCSD(T) binding energies is fairly constant with the basis set size, with a slight underestimation with CCSD(T)∕aug-cc-pVDZ compared to the value at the (estimated) complete basis set limit, and that contributions to the binding energies obtained by MP2 generally overestimate the analogous CCSD(T) contributions. Taking these factors together, we conclude that the binding energies for non-covalently bonded systems can be accurately determined using a composite method that combines CCSD(T)∕aug-cc-pVDZ with energy corrections obtained using basis set extrapolated MP2 (utilizing aug-cc-pVQZ and aug-cc-pVTZ basis sets), if all of the components are obtained by averaging the counterpoise and non-counterpoise energies. With such an approach, binding energies for the set of ten dimers are predicted with a mean absolute deviation of 0.02 kcal/mol, a maximum absolute deviation of 0.05 kcal/mol, and a mean percent absolute deviation of only 1.7%, relative to the (estimated) complete basis set CCSD(T) results. Use of this composite approach to an additional set of eight dimers gave binding energies to within 1% of previously published high-level data. It is also shown that binding within parallel and parallel-crossed conformations of naphthalene dimer is predicted by the composite approach to be 9% greater than that previously reported in the literature. The ability of some recently developed dispersion-corrected density-functional theory methods to predict the binding energies of the set of ten small dimers was also examined. © 2011 American Institute of Physics
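The arithmetic of the composite scheme described above can be sketched as follows, using a common two-point X⁻³ extrapolation for the MP2 part; the energies are placeholders, the helper names (cp_average, two_point_cbs) are hypothetical, and the extrapolation formula actually used in the paper may differ from this standard choice.

```python
# Sketch of the composite binding-energy arithmetic: average the counterpoise
# and non-counterpoise values at each basis set, extrapolate MP2 to the basis
# set limit, and add the CCSD(T)-versus-MP2 difference at the small basis.
# All energies (kcal/mol) are placeholders.
def cp_average(e_cp, e_nocp):
    """Average the counterpoise- and non-counterpoise-corrected values."""
    return 0.5 * (e_cp + e_nocp)

def two_point_cbs(e_small, e_large, x_small=3, x_large=4):
    """Two-point X^-3 extrapolation (aug-cc-pVTZ/aug-cc-pVQZ -> basis-set limit)."""
    return (x_large**3 * e_large - x_small**3 * e_small) / (x_large**3 - x_small**3)

# Hypothetical binding energies for one dimer (negative = bound), kcal/mol.
mp2_tz   = cp_average(-2.95, -3.25)     # MP2/aug-cc-pVTZ
mp2_qz   = cp_average(-3.05, -3.20)     # MP2/aug-cc-pVQZ
mp2_dz   = cp_average(-2.70, -3.30)     # MP2/aug-cc-pVDZ
ccsdt_dz = cp_average(-2.55, -3.15)     # CCSD(T)/aug-cc-pVDZ

mp2_cbs = two_point_cbs(mp2_tz, mp2_qz)
composite = ccsdt_dz + (mp2_cbs - mp2_dz)   # CCSD(T)/aVDZ + MP2 basis-set correction
print(f"composite CCSD(T)/CBS estimate: {composite:.2f} kcal/mol")
```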
Modeling ready biodegradability of fragrance materials.
Ceriani, Lidia; Papa, Ester; Kovarich, Simona; Boethling, Robert; Gramatica, Paola
2015-06-01
In the present study, quantitative structure activity relationships were developed for predicting ready biodegradability of approximately 200 heterogeneous fragrance materials. Two classification methods, classification and regression tree (CART) and k-nearest neighbors (kNN), were applied to perform the modeling. The models were validated with multiple external prediction sets, and the structural applicability domain was verified by the leverage approach. The best models had good sensitivity (internal ≥80%; external ≥68%), specificity (internal ≥80%; external 73%), and overall accuracy (≥75%). Results from the comparison with BIOWIN global models, based on group contribution method, show that specific models developed in the present study perform better in prediction than BIOWIN6, in particular for the correct classification of not readily biodegradable fragrance materials. © 2015 SETAC.
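A generic sketch of the two classifiers and the external-set sensitivity/specificity evaluation follows, on synthetic descriptors rather than the fragrance dataset.

```python
# Sketch of the two classification approaches used (a classification tree and
# k-nearest neighbours) with an external prediction set and the sensitivity /
# specificity metrics reported. Descriptors and labels here are synthetic.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 8))                        # molecular descriptors
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

X_tr, X_ext, y_tr, y_ext = train_test_split(X, y, test_size=0.3, random_state=0)

for name, clf in [("CART", DecisionTreeClassifier(max_depth=4, random_state=0)),
                  ("kNN", KNeighborsClassifier(n_neighbors=5))]:
    clf.fit(X_tr, y_tr)
    tn, fp, fn, tp = confusion_matrix(y_ext, clf.predict(X_ext)).ravel()
    print(f"{name}: sensitivity={tp / (tp + fn):.2f} specificity={tn / (tn + fp):.2f}")
```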
SAMPL4 & DOCK3.7: lessons for automated docking procedures
NASA Astrophysics Data System (ADS)
Coleman, Ryan G.; Sterling, Teague; Weiss, Dahlia R.
2014-03-01
The SAMPL4 challenges were used to test current automated methods for solvation energy, virtual screening, pose and affinity prediction of the molecular docking pipeline DOCK 3.7. Additionally, first-order models of binding affinity were proposed as milestones for any method predicting binding affinity. Several important discoveries about the molecular docking software were made during the challenge: (1) Solvation energies of ligands were five-fold worse than those of any other method used in SAMPL4, including methods that were similarly fast, (2) HIV Integrase is a challenging target, but automated docking on the correct allosteric site performed well in terms of virtual screening and pose prediction (compared to other methods), although affinity prediction, as expected, was very poor, (3) Molecular docking grid sizes can be very important; serious errors were discovered with default settings that have been adjusted for all future work. Overall, lessons from SAMPL4 suggest many changes to molecular docking tools, not just DOCK 3.7, that could improve the state of the art. Future difficulties and projects will be discussed.
Prediction of Body Fluids where Proteins are Secreted into Based on Protein Interaction Network
Hu, Le-Le; Huang, Tao; Cai, Yu-Dong; Chou, Kuo-Chen
2011-01-01
Determining the body fluids into which a secreted protein can be secreted is important for protein function annotation and disease biomarker discovery. In this study, we developed a network-based method to predict which kinds of body fluids human proteins can be secreted into. For a newly constructed benchmark dataset that consists of 529 human secreted proteins, the prediction accuracy for the most probable body fluid location predicted by our method via the jackknife test was 79.02%, significantly higher than the success rate of a random guess (29.36%). The likelihood that the predicted body fluids of the first four orders contain all the true body fluids into which the proteins can be secreted is 62.94%. Our method was further demonstrated with two independent datasets: one contains 57 proteins that can be secreted into blood, while the other contains 61 proteins that can be secreted into plasma/serum and were possible biomarkers associated with various cancers. For the 57 proteins in the first dataset, 55 were correctly predicted as blood-secreted proteins. For the 61 proteins in the second dataset, 58 were predicted to be most probably in plasma/serum. These encouraging results indicate that the network-based prediction method is quite promising. It is anticipated that the method will benefit the relevant areas of both basic research and drug development. PMID:21829572
Universality of quantum gravity corrections.
Das, Saurya; Vagenas, Elias C
2008-11-28
We show that the existence of a minimum measurable length and the related generalized uncertainty principle (GUP), predicted by theories of quantum gravity, influence all quantum Hamiltonians. Thus, they predict quantum gravity corrections to various quantum phenomena. We compute such corrections to the Lamb shift, the Landau levels, and the tunneling current in a scanning tunneling microscope. We show that these corrections can be interpreted in two ways: (a) either that they are exceedingly small, beyond the reach of current experiments, or (b) that they predict upper bounds on the quantum gravity parameter in the GUP, compatible with experiments at the electroweak scale. Thus, more accurate measurements in the future should either be able to test these predictions, or further tighten the above bounds and predict an intermediate length scale between the electroweak and the Planck scale.
Predicting Fog in the Nocturnal Boundary Layer
NASA Astrophysics Data System (ADS)
Izett, Jonathan; van de Wiel, Bas; Baas, Peter; van der Linden, Steven; van Hooft, Antoon; Bosveld, Fred
2017-04-01
Fog is a global phenomenon that presents a hazard to navigation and human safety, resulting in significant economic impacts for the air and shipping industries and causing numerous road traffic accidents. Accurate prediction of fog events, however, remains elusive, both in timing and in occurrence itself. Statistical methods based on set threshold criteria for key variables such as wind speed have been developed, but high rates of correct prediction of fog events are still accompanied by similarly high rates of "false alarms", in which conditions appear favourable but no fog forms. Using data from the CESAR meteorological observatory in the Netherlands, we analyze specific cases and perform statistical analyses of event climatology in order to identify the necessary conditions for correct prediction of fog. We also identify potential "missing ingredients" in current analyses that could help to reduce the number of false alarms. New variables considered include indicators of boundary layer stability, as well as the presence of aerosols conducive to droplet formation. The poster presents initial findings of new research as well as plans for continued research.
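A minimal sketch of the threshold-criteria approach and its hit/false-alarm accounting (invented thresholds and observations) illustrates how a high hit rate can coexist with a high false-alarm rate.

```python
# Threshold-based fog "predictor" and simple verification statistics.
# Thresholds, variables, and observations are invented for illustration.
import numpy as np

rng = np.random.default_rng(4)
n = 500
wind = rng.gamma(2.0, 1.5, n)          # 10-m wind speed, m/s
rh = rng.uniform(60, 100, n)           # relative humidity, %
fog = (rh > 95) & (wind < 2.0) & (rng.random(n) < 0.6)   # "observed" fog events

predicted = (rh > 93) & (wind < 3.0)   # simple threshold criteria

hits = np.sum(predicted & fog)
misses = np.sum(~predicted & fog)
false_alarms = np.sum(predicted & ~fog)
hit_rate = hits / max(hits + misses, 1)
false_alarm_ratio = false_alarms / max(hits + false_alarms, 1)
print(f"hit rate={hit_rate:.2f}  false-alarm ratio={false_alarm_ratio:.2f}")
```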
Xu, Dong; Zhang, Yang
2013-01-01
Genome-wide protein structure prediction and structure-based function annotation have been a long-term goal in molecular biology but have not yet become possible due to difficulties in modeling distant-homology targets. We developed a hybrid pipeline combining ab initio folding and template-based modeling for genome-wide structure prediction, applied to the Escherichia coli genome. The pipeline was tested on 43 known sequences, where QUARK-based ab initio folding simulations generated models with TM-scores 17% higher than those from traditional comparative modeling methods. For 495 unknown hard sequences, 72 are predicted to have a correct fold (TM-score > 0.5) and 321 have a substantial portion of structure correctly modeled (TM-score > 0.35). 317 sequences can be reliably assigned to a SCOP fold family based on structural analogy to existing proteins in the PDB. The presented results, as a case study of E. coli, represent promising progress towards genome-wide structure modeling and fold family assignment using state-of-the-art ab initio folding algorithms. PMID:23719418
Methods for predicting unsteady takeoff and landing trajectories of the aircraft
NASA Astrophysics Data System (ADS)
Shevchenko, A.; Pavlov, B.; Nachinkina, G.
2017-01-01
Informational and situational awareness of the aircrew greatly affects the probability of accidents, during takeoff and landing in particular. To assess the current state and predict the future state of an aircraft, the energy approach to flight control is used. The key energy balance equation is generalized to the ground phases. The equation describes the accumulation of the total energy of the aircraft along the entire trajectory, including the segment ahead. The length of this segment is defined by the required terminal energy state. For the takeoff phase, the prediction algorithm calculates the position on the runway from which it is still possible to accelerate to steady level-flight speed and to reach an altitude sufficient to clear tall obstacles. For the landing phase, the braking distance is determined. To increase prediction reliability, a correction to the algorithm is introduced. Results are given from modeling many takeoffs and landings of a passenger airliner at different weights, with an obstacle ahead and with engine failure. The operability of the algorithm correction is demonstrated.
A study of pressure-based methodology for resonant flows in non-linear combustion instabilities
NASA Technical Reports Server (NTRS)
Yang, H. Q.; Pindera, M. Z.; Przekwas, A. J.; Tucker, K.
1992-01-01
This paper presents a systematic assessment of a large variety of spatial and temporal differencing schemes on nonstaggered grids with pressure-based methods for problems of fast transient flows. The observation from the present study is that for steady-state flow problems, pressure-based methods can be very competitive with density-based methods. For transient flow problems, pressure-based methods utilizing the same differencing scheme are less accurate, even though the wave speeds are correctly predicted.
NASA Astrophysics Data System (ADS)
Noyes, Ben F.; Mokaberi, Babak; Oh, Jong Hun; Kim, Hyun Sik; Sung, Jun Ha; Kea, Marc
2016-03-01
One of the keys to successful mass production of sub-20nm nodes in the semiconductor industry is the development of an overlay correction strategy that can meet specifications, reduce the number of layers that require dedicated chuck overlay, and minimize measurement time. Three important aspects of this strategy are: correction per exposure (CPE), integrated metrology (IM), and the prioritization of automated correction over manual subrecipes. The first and third aspects are accomplished through an APC system that uses measurements from production lots to generate CPE corrections that are dynamically applied to future lots. The drawback of this method is that production overlay sampling must be extremely high in order to provide the system with enough data to generate CPE. That drawback makes IM particularly difficult because of the throughput impact that can be created on expensive bottleneck photolithography process tools. The goal is to realize the cycle time and feedback benefits of IM coupled with the enhanced overlay correction capability of automated CPE without impacting process tool throughput. This paper will discuss the development of a system that sends measured data with reduced sampling via an optimized layout to the exposure tool's computational modelling platform to predict and create "upsampled" overlay data in a customizable output layout that is compatible with the fab user CPE APC system. The result is dynamic CPE without the burden of extensive measurement time, which leads to increased utilization of IM.
1997-01-01
Nomenclature (all strain quantities in [L/L]): perturbed strain; constrained strain; eigenstrain; corrected eigenstrain of phase-r material; uncorrected eigenstrain of phase-r material; correction matrix of phase-r material. ... eigenstrains (Eq. 2), where S_ijkl is known as the Eshelby tensor. The tensor is a function of the matrix Poisson ratio and the shape of the inclusion.
Correcting Memory Improves Accuracy of Predicted Task Duration
ERIC Educational Resources Information Center
Roy, Michael M.; Mitten, Scott T.; Christenfeld, Nicholas J. S.
2008-01-01
People are often inaccurate in predicting task duration. The memory bias explanation holds that this error is due to people having incorrect memories of how long previous tasks have taken, and these biased memories cause biased predictions. Therefore, the authors examined the effect on increasing predictive accuracy of correcting memory through…
Investigation of Super Learner Methodology on HIV-1 Small Sample: Application on Jaguar Trial Data.
Houssaïni, Allal; Assoumou, Lambert; Marcelin, Anne Geneviève; Molina, Jean Michel; Calvez, Vincent; Flandre, Philippe
2012-01-01
Background. Many statistical models have been tested to predict phenotypic or virological response from genotypic data. A statistical framework called Super Learner has been introduced either to compare different methods/learners (discrete Super Learner) or to combine them in a Super Learner prediction method. Methods. The Jaguar trial is used to apply the Super Learner framework. The Jaguar study is an "add-on" trial comparing the efficacy of adding didanosine to an ongoing failing regimen. Our aim was also to investigate the impact of using different cross-validation strategies and different loss functions. Four different splits between training and validation sets were tested with two loss functions. Six statistical methods were compared. We assess performance by evaluating R(2) values and accuracy by calculating the rates of patients being correctly classified. Results. Our results indicated that the more recent Super Learner methodology of building a new predictor based on a weighted combination of different methods/learners provided good performance. A simple linear model provided results similar to those of this new predictor. Slight discrepancies arose between the two loss functions investigated, and also between results based on cross-validated risks and results from the full dataset. The Super Learner methodology and the linear model correctly classified around 80% of patients. The difference between the lowest and highest rates is around 10 percent. The number of mutations retained by different learners also varies from 1 to 41. Conclusions. The more recent Super Learner methodology, combining the predictions of many learners, provided good performance on our small dataset.
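A generic sketch of the Super Learner construction follows, with stand-in learners and data rather than the Jaguar genotypes: out-of-fold predictions from each candidate learner form a level-one matrix, and non-negative weights minimizing the cross-validated squared error combine them.

```python
# Sketch of a Super Learner: cross-validated predictions from several candidate
# learners are combined with non-negative weights fit to the observed outcome.
# The learners, descriptors, and outcome here are generic stand-ins.
import numpy as np
from scipy.optimize import nnls
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(5)
X = rng.integers(0, 2, size=(100, 30)).astype(float)          # mutation indicators
y = X[:, :5].sum(axis=1) + rng.normal(scale=0.5, size=100)    # response variable

learners = [LinearRegression(), Ridge(alpha=1.0),
            RandomForestRegressor(n_estimators=100, random_state=0)]

# Level-one matrix: out-of-fold predictions from each candidate learner.
Z = np.column_stack([cross_val_predict(m, X, y, cv=5) for m in learners])

weights, _ = nnls(Z, y)                       # non-negative combination weights
if weights.sum() > 0:
    weights = weights / weights.sum()

# Refit each learner on all data; the Super Learner prediction is the weighted
# combination of their predictions.
preds = np.column_stack([m.fit(X, y).predict(X) for m in learners])
super_learner_pred = preds @ weights
print("weights:", np.round(weights, 2))
```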
The CO 2 with dimethylamine reaction: ab initio predicted vibrational spectra
NASA Astrophysics Data System (ADS)
Jamróz, M. H.; Dobrowolski, J. Cz.; Borowiak, M. A.
1999-05-01
The IR spectra of CO2, dimethylamine (DMA), (DMA)2 dimers, the DMA⋯CO2 (2:1) complex, dimethylcarbamic acid (DMCA), the DMCA⋯DMA (1:1) complex, DMCA-, and DMA(H)+ were calculated at the B3PW91/6-31G** level. The potential energy distribution (PED) was calculated for the predicted spectra to form a basis for the elucidation of experimental IR data. The stabilisation energy of the studied complexes was corrected by the counterpoise method.
Prediction of Ionizing Radiation Resistance in Bacteria Using a Multiple Instance Learning Model.
Aridhi, Sabeur; Sghaier, Haïtham; Zoghlami, Manel; Maddouri, Mondher; Nguifo, Engelbert Mephu
2016-01-01
Ionizing-radiation-resistant bacteria (IRRB) are important in biotechnology. In this context, in silico methods of phenotypic prediction and genotype-phenotype relationship discovery are limited. In this work, we analyzed basal DNA repair proteins of most known proteome sequences of IRRB and ionizing-radiation-sensitive bacteria (IRSB) in order to learn a classifier that correctly predicts this bacterial phenotype. We formulated the problem of predicting bacterial ionizing radiation resistance (IRR) as a multiple-instance learning (MIL) problem, and we proposed a novel approach for this purpose. We provide a MIL-based prediction system that classifies a bacterium as either IRRB or IRSB. The experimental results of the proposed system are satisfactory, with 91.5% successful predictions.
Garrett, Adia J.; Mazzocco, Michèle M. M.; Baker, Linda
2009-01-01
Metacognition refers to knowledge about one’s own cognition. The present study was designed to assess metacognitive skills that either precede or follow task engagement, rather than the processes that occur during a task. Specifically, we examined prediction and evaluation skills among children with (n = 17) or without (n = 179) mathematics learning disability (MLD), from grades 2 to 4. Children were asked to predict which of several math problems they could solve correctly; later, they were asked to solve those problems. They were asked to evaluate whether their solution to each of another set of problems was correct. Children’s ability to evaluate their answers to math problems improved from grade 2 to grade 3, whereas there was no change over time in the children’s ability to predict which problems they could solve correctly. Children with MLD were less accurate than children without MLD in evaluating both their correct and incorrect solutions, and they were less accurate at predicting which problems they could solve correctly. However, children with MLD were as accurate as their peers in correctly predicting that they could not solve specific math problems. The findings have implications for the usefulness of children’s self-review during mathematics problem solving. PMID:20084181
De Buck, Stefan S; Sinha, Vikash K; Fenu, Luca A; Nijsen, Marjoleen J; Mackie, Claire E; Gilissen, Ron A H J
2007-10-01
The aim of this study was to evaluate different physiologically based modeling strategies for the prediction of human pharmacokinetics. Plasma profiles after intravenous and oral dosing were simulated for 26 clinically tested drugs. Two mechanism-based predictions of human tissue-to-plasma partitioning (P(tp)) from physicochemical input (method Vd1) were evaluated for their ability to describe human volume of distribution at steady state (V(ss)). This method was compared with a strategy that combined predicted and experimentally determined in vivo rat P(tp) data (method Vd2). Best V(ss) predictions were obtained using method Vd2, providing that rat P(tp) input was corrected for interspecies differences in plasma protein binding (84% within 2-fold). V(ss) predictions from physicochemical input alone were poor (32% within 2-fold). Total body clearance (CL) was predicted as the sum of scaled rat renal clearance and hepatic clearance projected from in vitro metabolism data. Best CL predictions were obtained by disregarding both blood and microsomal or hepatocyte binding (method CL2, 74% within 2-fold), whereas strong bias was seen using both blood and microsomal or hepatocyte binding (method CL1, 53% within 2-fold). The physiologically based pharmacokinetics (PBPK) model, which combined methods Vd2 and CL2 yielded the most accurate predictions of in vivo terminal half-life (69% within 2-fold). The Gastroplus advanced compartmental absorption and transit model was used to construct an absorption-disposition model and provided accurate predictions of area under the plasma concentration-time profile, oral apparent volume of distribution, and maximum plasma concentration after oral dosing, with 74%, 70%, and 65% within 2-fold, respectively. This evaluation demonstrates that PBPK models can lead to reasonable predictions of human pharmacokinetics.
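A simplified sketch of how a steady-state volume of distribution can be assembled from tissue-to-plasma partition coefficients, including the kind of plasma-protein-binding scaling of rat Ptp values mentioned above, is shown below; the tissue volumes, Ptp values, and unbound fractions are placeholders, and this is not the paper's exact model.

```python
# Assemble a human Vss from tissue volumes and tissue-to-plasma partition
# coefficients, with rat Ptp values corrected for interspecies differences in
# plasma protein binding by scaling with the ratio of unbound fractions (fu).
# All parameter values are placeholders.
fu_rat, fu_human = 0.10, 0.05          # fraction unbound in plasma (assumed)

tissue_volumes_l = {"muscle": 30.0, "adipose": 13.0, "liver": 1.7, "brain": 1.4}
ptp_rat = {"muscle": 1.2, "adipose": 4.0, "liver": 3.5, "brain": 0.8}

# Correct rat partition coefficients for the plasma-binding difference.
ptp_human = {t: kp * (fu_human / fu_rat) for t, kp in ptp_rat.items()}

plasma_volume_l = 3.0
vss_l = plasma_volume_l + sum(tissue_volumes_l[t] * ptp_human[t] for t in ptp_human)
print(f"predicted human Vss ~ {vss_l:.0f} L")
```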
George, Joanne M; Boyd, Roslyn N; Colditz, Paul B; Rose, Stephen E; Pannek, Kerstin; Fripp, Jurgen; Lingwood, Barbara E; Lai, Melissa M; Kong, Annice H T; Ware, Robert S; Coulthard, Alan; Finn, Christine M; Bandaranayake, Sasaka E
2015-09-16
More than 50 percent of all infants born very preterm will experience significant motor and cognitive impairment. Provision of early intervention is dependent upon accurate, early identification of infants at risk of adverse outcomes. Magnetic resonance imaging at term equivalent age combined with General Movements assessment at 12 weeks corrected age is currently the most accurate method for early prediction of cerebral palsy at 12 months corrected age. To date no studies have compared the use of earlier magnetic resonance imaging combined with neuromotor and neurobehavioural assessments (at 30 weeks postmenstrual age) to predict later motor and neurodevelopmental outcomes including cerebral palsy (at 12-24 months corrected age). This study aims to investigate i) the relationship between earlier brain imaging and neuromotor/neurobehavioural assessments at 30 and 40 weeks postmenstrual age, and ii) their ability to predict motor and neurodevelopmental outcomes at 3 and 12 months corrected age. This prospective cohort study will recruit 80 preterm infants born at ≤ 30 weeks' gestation and a reference group of 20 healthy term-born infants from the Royal Brisbane & Women's Hospital in Brisbane, Australia. Infants will undergo brain magnetic resonance imaging at approximately 30 and 40 weeks postmenstrual age to develop our understanding of very early brain structure at 30 weeks and the maturation that occurs between 30 and 40 weeks postmenstrual age. A combination of neurological (Hammersmith Neonatal Neurologic Examination), neuromotor (General Movements, Test of Infant Motor Performance), neurobehavioural (NICU Network Neurobehavioural Scale, Premie-Neuro) and visual assessments will be performed at 30 and 40 weeks postmenstrual age to improve our understanding of the relationship between brain structure and function. These data will be compared to motor assessments at 12 weeks corrected age and motor and neurodevelopmental outcomes at 12 months corrected age (neurological assessment by paediatrician, Bayley Scales of Infant and Toddler Development, Alberta Infant Motor Scale, Neurosensory Motor Developmental Assessment) to differentiate atypical development (including cerebral palsy and/or motor delay). Earlier identification of those very preterm infants at risk of adverse neurodevelopmental and motor outcomes provides an additional period for intervention to optimise outcomes. Australian New Zealand Clinical Trials Registry ACTRN12613000280707. Registered 8 March 2013.
Calculation and measurement of radiation corrections for plasmon resonances in nanoparticles
NASA Astrophysics Data System (ADS)
Hung, L.; Lee, S. Y.; McGovern, O.; Rabin, O.; Mayergoyz, I.
2013-08-01
The problem of plasmon resonances in metallic nanoparticles can be formulated as an eigenvalue problem under the condition that the wavelengths of the incident radiation are much larger than the particle dimensions. As the nanoparticle size increases, the quasistatic condition is no longer valid. For this reason, the accuracy of the electrostatic approximation may be compromised and appropriate radiation corrections for the calculation of resonance permittivities and resonance wavelengths are needed. In this paper, we present the radiation corrections in the framework of the eigenvalue method for plasmon mode analysis and demonstrate that the computational results accurately match analytical solutions (for nanospheres) and experimental data (for nanorings and nanocubes). We also demonstrate that the optical spectra of silver nanocube suspensions can be fully assigned to dipole-type resonance modes when radiation corrections are introduced. Finally, our method is used to predict the resonance wavelengths for face-to-face silver nanocube dimers on glass substrates. These results may be useful for the indirect measurements of the gaps in the dimers from extinction cross-section observations.
Heyman, Gene M.; Grisanzio, Katherine A.; Liang, Victor
2016-01-01
We tested whether principles that describe the allocation of overt behavior, as in choice experiments, also describe the allocation of cognition, as in attention experiments. Our procedure is a cognitive version of the “two-armed bandit choice procedure.” The two-armed bandit procedure has been of interest to psychologists and economists because it tends to support patterns of responding that are suboptimal. Each of two alternatives provides rewards according to fixed probabilities. The optimal solution is to choose the alternative with the higher probability of reward on each trial. However, subjects often allocate responses so that the probability of a response approximates its probability of reward. Although it is this result that has attracted the most interest, probability matching is not always observed. As a function of monetary incentives, practice, and individual differences, subjects tend to deviate from probability matching toward exclusive preference, as predicted by maximizing. In our version of the two-armed bandit procedure, the monitor briefly displayed two small, adjacent stimuli that predicted correct responses according to fixed probabilities, as in a two-armed bandit procedure. We show that in this setting, a simple linear equation describes the relationship between attention and correct responses, and that the equation’s solution is the allocation of attention between the two stimuli. The calculations showed that attention allocation varied as a function of the degree to which the stimuli predicted correct responses. Linear regression revealed a strong correlation (r = 0.99) between the predictiveness of a stimulus and the probability of attending to it. Nevertheless, there were deviations from probability matching, and although small, they were systematic and statistically significant. As in choice studies, attention allocation deviated toward maximizing as a function of practice, feedback, and incentives. Our approach also predicts the frequency of correct guesses and the relationship between attention allocation and response latencies. The results were consistent with these two predictions, the assumptions of the equations used to calculate attention allocation, and recent studies which show that predictiveness and reward are important determinants of attention. PMID:27014109
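The two allocation rules contrasted above are easy to illustrate outside the attention task. Below is a minimal, hypothetical simulation of a standard two-armed bandit (not the authors' procedure), comparing probability matching with the maximizing rule; the reward probabilities 0.7 and 0.3 are arbitrary placeholders.

```python
import random

def simulate(p_left=0.7, p_right=0.3, trials=10000, policy="match"):
    """Average reward rate under two allocation rules in a two-armed bandit.

    policy="match"    : pick each arm with probability proportional to its
                        reward probability (probability matching)
    policy="maximize" : always pick the richer arm (the optimal rule)
    """
    earned = 0
    for _ in range(trials):
        if policy == "match":
            pick_left = random.random() < p_left / (p_left + p_right)
        else:
            pick_left = p_left >= p_right
        p = p_left if pick_left else p_right
        earned += random.random() < p
    return earned / trials

print("matching  :", simulate(policy="match"))      # about 0.58 for these probabilities
print("maximizing:", simulate(policy="maximize"))   # about 0.70
```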
A viable method to predict acoustic streaming in presence of cavitation.
Louisnard, O
2017-03-01
The steady liquid flow observed under ultrasonic emitters generating acoustic cavitation can be successfully predicted by a standard turbulent flow calculation. The flow is driven by the classical averaged volumetric force density calculated from the acoustic field, but the inertial term in the Navier-Stokes equations must be kept, and a turbulent solution must be sought. The acoustic field must be computed with a realistic model, properly accounting for dissipation by the cavitation bubbles [Louisnard, Ultrason. Sonochem., 19, (2012) 56-65]. Comparison with 20 kHz experiments, involving the combination of acoustic streaming and a perpendicular forced flow in a duct, shows reasonably good agreement. Moreover, the persistence of the cavitation effects on the wall facing the emitter, in spite of the deflection of the streaming jet, is correctly reproduced by the model. It is also shown that predictions based either on linear acoustics with the correct turbulent solution, or on Louisnard's model combined with Eckart-Nyborg theory, yield unrealistic results. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Ferguson, D. R.
1972-01-01
The streamtube curvature program (STC) has been developed to predict the inviscid flow field and the pressure distribution about nacelles at transonic speeds. The effects of boundary layer are to displace the inviscid flow and effectively change the body shape. Thus, the body shape must be corrected by the displacement thickness in order to calculate the correct pressure distribution. This report describes the coupling of the Stratford and Beavers boundary layer solution with the inviscid STC analysis so that all nacelle pressure forces, friction drag, and incipient separation may be predicted. The usage of the coupled STC-SAB computer program is outlined and the program input and output are defined. Included in this manual are descriptions of the principal boundary layer tables and other revisions to the STC program. The use of the viscous option is controlled by the engineer during program input definition.
Sine Rotation Vector Method for Attitude Estimation of an Underwater Robot
Ko, Nak Yong; Jeong, Seokki; Bae, Youngchul
2016-01-01
This paper describes a method for estimating the attitude of an underwater robot. The method employs a new concept, the sine rotation vector, and uses both an attitude and heading reference system (AHRS) and a Doppler velocity log (DVL) for measurement. First, the acceleration and magnetic-field measurements are transformed into sine rotation vectors and combined. The combined sine rotation vector is then transformed into the differences between the Euler angles of the measured attitude and the predicted attitude; the differences are used to correct the predicted attitude. The method was evaluated against field-test data and simulation data and compared to existing methods that calculate angular differences directly without a preceding sine rotation vector transformation. The comparison verifies that the proposed method improves the attitude estimation performance. PMID:27490549
A New Methodology for the Extension of the Impact of Data Assimilation on Ocean Wave Prediction
2008-07-01
Assimilation method: the analysis fields were corrected by an assimilation scheme developed at the Norwegian Meteorological Institute (Breivik and Reistad 1994). The iterative analysis converges to the solution obtained by optimal interpolation (see Bratseth 1986 and Breivik and Reistad 1994), with the fields updated accordingly at each iteration; a more detailed description of the assimilation method is given in Breivik and Reistad (1994). Kolmogorov-Zurbenko filters are also employed (section 2.3).
Brady, Amie M.G.; Plona, Meg B.
2012-01-01
The Cuyahoga River within Cuyahoga Valley National Park (CVNP) is at times impaired for recreational use due to elevated concentrations of Escherichia coli (E. coli), a fecal-indicator bacterium. During the recreational seasons of mid-May through September during 2009–11, samples were collected 4 days per week and analyzed for E. coli concentrations at two sites within CVNP. Other water-quality and environmental data, including turbidity, rainfall, and streamflow, were measured and (or) tabulated for analysis. Regression models developed to predict recreational water quality in the river were implemented during the recreational seasons of 2009–11 for one site within CVNP–Jaite. For the 2009 and 2010 seasons, the regression models were better at predicting exceedances of Ohio's single-sample standard for primary-contact recreation compared to the traditional method of using the previous day's E. coli concentration. During 2009, the regression model was based on data collected during 2005 through 2008, excluding available 2004 data. The resulting model for 2009 did not perform as well as expected (based on the calibration data set) and tended to overestimate concentrations (correct responses at 69 percent). During 2010, the regression model was based on data collected during 2004 through 2009, including all of the available data. The 2010 model performed well, correctly predicting 89 percent of the samples above or below the single-sample standard, even though the predictions tended to be lower than actual sample concentrations. During 2011, the regression model was based on data collected during 2004 through 2010 and tended to overestimate concentrations. The 2011 model did not perform as well as the traditional method or as expected, based on the calibration dataset (correct responses at 56 percent). At a second site—Lock 29, approximately 5 river miles upstream from Jaite, a regression model based on data collected at the site during the recreational seasons of 2008–10 also did not perform as well as the traditional method or as well as expected (correct responses at 60 percent). Above normal precipitation in the region and a delayed start to the 2011 sampling season (sampling began mid-June) may have affected how well the 2011 models performed. With these new data, however, updated regression models may be better able to predict recreational water quality conditions due to the increased amount of diverse water quality conditions included in the calibration data. Daily recreational water-quality predictions for Jaite were made available on the Ohio Nowcast Web site at www.ohionowcast.info. Other public outreach included signage at trailheads in the park, articles in the park's quarterly-published schedule of events and volunteer newsletters. A U.S. Geological Survey Fact Sheet was also published to bring attention to water-quality issues in the park.
He, Hua; McDermott, Michael P.
2012-01-01
Sensitivity and specificity are common measures of the accuracy of a diagnostic test. The usual estimators of these quantities are unbiased if data on the diagnostic test result and the true disease status are obtained from all subjects in an appropriately selected sample. In some studies, verification of the true disease status is performed only for a subset of subjects, possibly depending on the result of the diagnostic test and other characteristics of the subjects. Estimators of sensitivity and specificity based on this subset of subjects are typically biased; this is known as verification bias. Methods have been proposed to correct verification bias under the assumption that the missing data on disease status are missing at random (MAR), that is, the probability of missingness depends on the true (missing) disease status only through the test result and observed covariate information. When some of the covariates are continuous, or the number of covariates is relatively large, the existing methods require parametric models for the probability of disease or the probability of verification (given the test result and covariates), and hence are subject to model misspecification. We propose a new method for correcting verification bias based on the propensity score, defined as the predicted probability of verification given the test result and observed covariates. This is estimated separately for those with positive and negative test results. The new method classifies the verified sample into several subsamples that have homogeneous propensity scores and allows correction for verification bias. Simulation studies demonstrate that the new estimators are more robust to model misspecification than existing methods, but still perform well when the models for the probability of disease and probability of verification are correctly specified. PMID:21856650
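As a rough illustration of the stratification idea, the sketch below estimates verification-bias-corrected sensitivity on synthetic data. The column names, covariates, logistic propensity model, and quintile stratification are all assumptions made for the example; this is not the authors' implementation.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def corrected_sensitivity(df, n_strata=5):
    """Verification-bias-corrected sensitivity via propensity-score strata.

    Assumed (hypothetical) columns: 'test' (0/1), 'verified' (0/1),
    'disease' (0/1, meaningful only when verified == 1), covariates 'x1', 'x2'.
    Within each test group, subjects are split into strata of similar predicted
    verification probability; the verified subjects in a stratum stand in for
    the whole stratum when estimating P(disease | test, stratum).
    """
    cells = []  # (test result, stratum size, P(disease | test, stratum))
    for t, grp in df.groupby("test"):
        ps = LogisticRegression().fit(grp[["x1", "x2"]], grp["verified"])
        grp = grp.assign(ps=ps.predict_proba(grp[["x1", "x2"]])[:, 1])
        grp["stratum"] = pd.qcut(grp["ps"], n_strata, labels=False, duplicates="drop")
        for _, s in grp.groupby("stratum"):
            verified = s[s["verified"] == 1]
            if len(verified):
                cells.append((t, len(s), verified["disease"].mean()))
    diseased_and_positive = sum(n * p for t, n, p in cells if t == 1)
    diseased_total = sum(n * p for t, n, p in cells)
    return diseased_and_positive / diseased_total

# Tiny synthetic illustration: verification is more likely after a positive test.
rng = np.random.default_rng(0)
n = 2000
x1, x2 = rng.normal(size=n), rng.normal(size=n)
disease = rng.random(n) < 1 / (1 + np.exp(-(x1 + x2)))
test = np.where(disease, rng.random(n) < 0.8, rng.random(n) < 0.2)
verified = rng.random(n) < np.where(test, 0.9, 0.3)
df = pd.DataFrame({"x1": x1, "x2": x2, "disease": disease.astype(int),
                   "test": test.astype(int), "verified": verified.astype(int)})
print(corrected_sensitivity(df))
```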
3D Markov Process for Traffic Flow Prediction in Real-Time.
Ko, Eunjeong; Ahn, Jinyoung; Kim, Eun Yi
2016-01-25
Recently, the correct estimation of traffic flow has begun to be considered an essential component in intelligent transportation systems. In this paper, a new statistical method to predict traffic flows using time series analyses and geometric correlations is proposed. The novelty of the proposed method is two-fold: (1) a 3D heat map is designed to describe the traffic conditions between roads, which can effectively represent the correlations between spatially- and temporally-adjacent traffic states; and (2) the relationship between the adjacent roads on the spatiotemporal domain is represented by cliques in MRF and the clique parameters are obtained by example-based learning. In order to assess the validity of the proposed method, it is tested using data from expressway traffic that are provided by the Korean Expressway Corporation, and the performance of the proposed method is compared with existing approaches. The results demonstrate that the proposed method can predict traffic conditions with an accuracy of 85%, and this accuracy can be improved further.
3D Markov Process for Traffic Flow Prediction in Real-Time
Ko, Eunjeong; Ahn, Jinyoung; Kim, Eun Yi
2016-01-01
Recently, the correct estimation of traffic flow has begun to be considered an essential component in intelligent transportation systems. In this paper, a new statistical method to predict traffic flows using time series analyses and geometric correlations is proposed. The novelty of the proposed method is two-fold: (1) a 3D heat map is designed to describe the traffic conditions between roads, which can effectively represent the correlations between spatially- and temporally-adjacent traffic states; and (2) the relationship between the adjacent roads on the spatiotemporal domain is represented by cliques in MRF and the clique parameters are obtained by example-based learning. In order to assess the validity of the proposed method, it is tested using data from expressway traffic that are provided by the Korean Expressway Corporation, and the performance of the proposed method is compared with existing approaches. The results demonstrate that the proposed method can predict traffic conditions with an accuracy of 85%, and this accuracy can be improved further. PMID:26821025
Primordial spectra of slow-roll inflation at second-order with the Gauss-Bonnet correction
NASA Astrophysics Data System (ADS)
Wu, Qiang; Zhu, Tao; Wang, Anzhong
2018-05-01
The slow-roll inflation for a single scalar field that couples to the Gauss-Bonnet (GB) term represents an important higher-order curvature correction inspired by string theory. With the arrival of the era of precision cosmology, it is expected that the high-order corrections become more and more important. In this paper we study the observational predictions of the slow-roll inflation with the GB term by using the third-order uniform asymptotic approximation method. We calculate explicitly the primordial power spectra, spectral indices, running of the spectral indices for both scalar and tensor perturbations, and the ratio between tensor and scalar spectra. These expressions are all written in terms of the Hubble and GB coupling flow parameters and expanded up to the next-to-leading order in the slow-roll expansions so they represent the most accurate results obtained so far in the literature. In addition, by studying the theoretical predictions of the scalar spectral index and the tensor-to-scalar ratio with the Planck 2015 constraints in a model with power-law potential and GB coupling, we show that the second-order corrections are important in the future measurements. We expect that the understanding of the GB corrections in the primordial spectra and their constraints by forthcoming observational data will provide clues for the UV complete theory of quantum gravity, such as the string/M-theory.
The Cognitive and Perceptual Laws of the Inclined Plane.
Masin, Sergio Cesare
2016-09-01
The study explored whether laypersons tacitly know Galileo's law of the inclined plane correctly and what the basis of such knowledge could be. Participants predicted the time a ball would take to roll down a slope with a factorial combination of ball travel distance and slope angle. The resulting pattern of factorial curves relating the square of predicted time to travel distance for each slope angle was identical to that implied by Galileo's law, indicating a correct cognitive representation of this law. Intuitive physics research suggests that this cognitive representation may result from memories of past perceptions of objects rolling down a slope. Such a basis and the correct cognitive representation of Galileo's law led to the hypothesis that Galileo's law is also perceptually represented correctly. To test this hypothesis, participants were asked to judge the perceived travel time of a ball actually rolling down a slope, with perceived travel distance and perceived slope angle varied in a factorial design. The obtained pattern of factorial curves was equal to that implied by Galileo's law, indicating that the functional relationships defined in this law were perceptually represented correctly. The results foster the idea that laypersons may tacitly know both linear and nonlinear multiplicative physical laws of the everyday world. As a practical implication, the awareness of this conclusion may help develop more effective methods for teaching physics and for improving human performance in the physical environment.
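For reference, the functional form being probed can be written as follows, assuming a uniform solid ball rolling from rest without slipping; the 14/5 factor is specific to that assumption, while the law itself only asserts the proportionality between the squared time and distance divided by the sine of the slope angle.

```latex
% Time for a uniform solid ball (I = (2/5) m r^2) to roll a distance L from
% rest down an incline of angle theta, assuming rolling without slipping:
\[
  t = \sqrt{\frac{2L\left(1 + \frac{2}{5}\right)}{g\sin\theta}}
  \quad\Longrightarrow\quad
  t^{2} = \frac{14}{5}\,\frac{L}{g\sin\theta},
\]
% i.e. t^2 grows linearly with travel distance, with a slope set only by the
% incline angle -- the fan of factorial curves the judgments were compared with.
```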
An analysis of the ArcCHECK-MR diode array's performance for ViewRay quality assurance.
Ellefson, Steven T; Culberson, Wesley S; Bednarz, Bryan P; DeWerd, Larry A; Bayouth, John E
2017-07-01
The ArcCHECK-MR diode array utilizes a correction system with a virtual inclinometer to correct the angular response dependencies of the diodes. However, this correction system cannot be applied to measurements on the ViewRay MR-IGRT system due to the virtual inclinometer's incompatibility with the ViewRay's multiple simultaneous beams. Additionally, the ArcCHECK's current correction factors were determined without magnetic field effects taken into account. In the course of performing ViewRay IMRT quality assurance with the ArcCHECK, measurements were observed to be consistently higher than the ViewRay TPS predictions. The goals of this study were to quantify the observed discrepancies and test whether applying the current factors improves the ArcCHECK's accuracy for measurements on the ViewRay. Gamma and frequency analysis were performed on 19 ViewRay patient plans. Ion chamber measurements were performed at a subset of diode locations using a PMMA phantom with the same dimensions as the ArcCHECK. A new method for applying directionally dependent factors utilizing beam information from the ViewRay TPS was developed in order to analyze the current ArcCHECK correction factors. To test the current factors, nine ViewRay plans were altered to be delivered with only a single simultaneous beam and were measured with the ArcCHECK. The current correction factors were applied using both the new and current methods. The new method was also used to apply corrections to the original 19 ViewRay plans. It was found the ArcCHECK systematically reports doses higher than those actually delivered by the ViewRay. Application of the current correction factors by either method did not consistently improve measurement accuracy. As dose deposition and diode response have both been shown to change under the influence of a magnetic field, it can be concluded the current ArcCHECK correction factors are invalid and/or inadequate to correct measurements on the ViewRay system. © 2017 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
NASA Astrophysics Data System (ADS)
Stisen, S.; Højberg, A. L.; Troldborg, L.; Refsgaard, J. C.; Christensen, B. S. B.; Olsen, M.; Henriksen, H. J.
2012-11-01
Precipitation gauge catch correction is often given very little attention in hydrological modelling compared to model parameter calibration. This is critical because significant precipitation biases often make the calibration exercise pointless, especially when supposedly physically-based models are in play. This study addresses the general importance of appropriate precipitation catch correction through a detailed modelling exercise. An existing precipitation gauge catch correction method addressing solid and liquid precipitation is applied, both as national mean monthly correction factors based on a historic 30 yr record and as gridded daily correction factors based on local daily observations of wind speed and temperature. The two methods, named the historic mean monthly (HMM) and the time-space variable (TSV) correction, resulted in different winter precipitation rates for the period 1990-2010. The resulting precipitation datasets were evaluated through the comprehensive Danish National Water Resources model (DK-Model), revealing major differences in both model performance and optimised model parameter sets. Simulated stream discharge is improved significantly when introducing the TSV correction, whereas the simulated hydraulic heads and multi-annual water balances performed similarly due to recalibration adjusting model parameters to compensate for input biases. The resulting optimised model parameters are much more physically plausible for the model based on the TSV correction of precipitation. A proxy-basin test where calibrated DK-Model parameters were transferred to another region without site specific calibration showed better performance for parameter values based on the TSV correction. Similarly, the performances of the TSV correction method were superior when considering two single years with a much dryer and a much wetter winter, respectively, as compared to the winters in the calibration period (differential split-sample tests). We conclude that TSV precipitation correction should be carried out for studies requiring a sound dynamic description of hydrological processes, and it is of particular importance when using hydrological models to make predictions for future climates when the snow/rain composition will differ from the past climate. This conclusion is expected to be applicable for mid to high latitudes, especially in coastal climates where winter precipitation types (solid/liquid) fluctuate significantly, causing climatological mean correction factors to be inadequate.
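A minimal sketch of the two correction strategies follows, assuming hypothetical daily series and placeholder factor forms; the operational Danish correction model referred to above is not reproduced here.

```python
import numpy as np
import pandas as pd

def catch_corrected(precip, temp, wind, monthly_k):
    """Contrast the two gauge catch-correction strategies on daily data.

    precip, temp, wind : daily pandas Series sharing a DatetimeIndex (hypothetical).
    monthly_k          : dict {month: factor} of historic mean monthly (HMM) factors.
    The time-space variable (TSV) factor below is only a placeholder that grows
    with wind speed and is larger for solid precipitation (temp < 0 degC); the
    operational correction model is more elaborate.
    """
    hmm = precip * precip.index.month.map(monthly_k).to_numpy()

    solid = temp.to_numpy() < 0.0
    k_tsv = np.where(solid, 1.10 + 0.05 * wind.to_numpy(),   # assumed solid-phase form
                            1.02 + 0.01 * wind.to_numpy())   # assumed liquid-phase form
    tsv = precip * k_tsv
    return pd.DataFrame({"HMM": hmm, "TSV": tsv})
```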
NASA Astrophysics Data System (ADS)
Rosenbaum, Joyce E.
2011-12-01
Commercial air traffic is anticipated to increase rapidly in the coming years. The impact of aviation noise on communities surrounding airports is, therefore, a growing concern. Accurate prediction of noise can help to mitigate the impact on communities and foster smoother integration of aerospace engineering advances. The problem of accurate sound level prediction requires careful inclusion of all mechanisms that affect propagation, in addition to correct source characterization. Terrain, ground type, meteorological effects, and source directivity can have a substantial influence on the noise level. Because they are difficult to model, these effects are often included only by rough approximation. This dissertation presents a model designed for sound propagation over uneven terrain, with mixed ground type and realistic meteorological conditions. The model is a hybrid of two numerical techniques: the parabolic equation (PE) and fast field program (FFP) methods, which allow for physics-based inclusion of propagation effects and ensure the low frequency content, a factor in community impact, is predicted accurately. Extension of the hybrid model to a pseudo-three-dimensional representation allows it to produce aviation noise contour maps in the standard form. In order for the model to correctly characterize aviation noise sources, a method of representing arbitrary source directivity patterns was developed for the unique form of the parabolic equation starting field. With this advancement, the model can represent broadband, directional moving sound sources, traveling along user-specified paths. This work was prepared for possible use in the research version of the sound propagation module in the Federal Aviation Administration's new standard predictive tool.
Biomarker Surrogates Do Not Accurately Predict Sputum Eosinophils and Neutrophils in Asthma
Hastie, Annette T.; Moore, Wendy C.; Li, Huashi; Rector, Brian M.; Ortega, Victor E.; Pascual, Rodolfo M.; Peters, Stephen P.; Meyers, Deborah A.; Bleecker, Eugene R.
2013-01-01
Background Sputum eosinophils (Eos) are a strong predictor of airway inflammation, exacerbations, and aid asthma management, whereas sputum neutrophils (Neu) indicate a different severe asthma phenotype, potentially less responsive to TH2-targeted therapy. Variables such as blood Eos, total IgE, fractional exhaled nitric oxide (FeNO) or FEV1% predicted, may predict airway Eos, while age, FEV1%predicted, or blood Neu may predict sputum Neu. Availability and ease of measurement are useful characteristics, but accuracy in predicting airway Eos and Neu, individually or combined, is not established. Objectives To determine whether blood Eos, FeNO, and IgE accurately predict sputum eosinophils, and age, FEV1% predicted, and blood Neu accurately predict sputum neutrophils (Neu). Methods Subjects in the Wake Forest Severe Asthma Research Program (N=328) were characterized by blood and sputum cells, healthcare utilization, lung function, FeNO, and IgE. Multiple analytical techniques were utilized. Results Despite significant association with sputum Eos, blood Eos, FeNO and total IgE did not accurately predict sputum Eos, and combinations of these variables failed to improve prediction. Age, FEV1%predicted and blood Neu were similarly unsatisfactory for prediction of sputum Neu. Factor analysis and stepwise selection found FeNO, IgE and FEV1% predicted, but not blood Eos, correctly predicted 69% of sputum Eos
NASA Astrophysics Data System (ADS)
Li, Xiaoli; Zeng, Zhi; Shen, Jingling; Zhang, Cunlin; Zhao, Yuejin
2018-03-01
The logarithmic peak second derivative (LPSD) method is the most popular method for depth prediction in pulsed thermography. It is widely accepted that this method is independent of defect size. The theoretical model for the LPSD method is based on the one-dimensional solution of heat conduction, which does not consider the effect of defect size. When a decay term that accounts for the defect aspect ratio is introduced into the solution to correct for the three-dimensional thermal diffusion effect, the analytical model shows that the LPSD method is in fact affected by defect size. Furthermore, we constructed the relation between the characteristic time of the LPSD method and the defect aspect ratio, which was verified with experimental results from stainless steel and glass fiber reinforced plate (GFRP) samples. We also proposed an improved LPSD method for depth prediction that takes the effect of defect size into account, and the rectification results for the stainless steel and GFRP samples are presented and discussed.
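For orientation, a minimal sketch of the standard (size-independent) LPSD recipe is given below: locate the peak of the second derivative of log-temperature with respect to log-time and map the characteristic time to a depth. The relation t* = C·L²/α and the constant C are stated only schematically; the point made above is that C also varies with the defect aspect ratio.

```python
import numpy as np

def lpsd_peak_time(t, T, T0):
    """Characteristic time of the LPSD method: the peak of the second
    derivative of log(T - T0) with respect to log(t) after the flash.
    t  : time samples after the flash (s)
    T  : surface temperature samples
    T0 : pre-flash (ambient) temperature
    """
    log_t = np.log(t)
    log_T = np.log(T - T0)
    d1 = np.gradient(log_T, log_t)
    d2 = np.gradient(d1, log_t)
    return t[np.argmax(d2)]

def depth_from_peak(t_star, alpha, C=1.0):
    """Depth from the characteristic time under an assumed 1-D relation
    t* = C * L**2 / alpha; C is treated as a calibration constant here,
    whereas the paper argues it also depends on the defect aspect ratio."""
    return np.sqrt(alpha * t_star / C)
```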
Predicting plantar fasciitis in runners.
Warren, B L; Jones, C J
1987-02-01
Ninety-one runners were studied to determine whether specific variables were indicative of runners who had suffered with plantar fasciitis either presently or formerly vs runners who had never suffered with plantar fasciitis. Each runner was asked to complete a running history, was subjected to several anatomical measurements, and was asked to run on a treadmill in both a barefoot and shoe condition at a speed of 3.35 mps (8 min mile pace). Factor coefficients were used in a discriminant function analysis which revealed that, when group membership was predicted, 63% of the runners could be correctly assigned to their group. Considering that 76% of the control group was correctly predicted, it was concluded that the predictor variables were able to correctly predict membership of the control group, but not able to correctly predict the presently or formerly injured sufferers of plantar fasciitis.
Quantitative CT based radiomics as predictor of resectability of pancreatic adenocarcinoma
NASA Astrophysics Data System (ADS)
van der Putten, Joost; Zinger, Svitlana; van der Sommen, Fons; de With, Peter H. N.; Prokop, Mathias; Hermans, John
2018-02-01
In current clinical practice, the resectability of pancreatic ductal adenocarcinoma (PDA) is determined subjectively by a physician, which is an error-prone procedure. In this paper, we present a method for automated determination of resectability of PDA from a routine abdominal CT, to reduce such decision errors. The tumor features are extracted from a group of patients with both hypo- and iso-attenuating tumors, of which 29 were resectable and 21 were not. The tumor contours are supplied by a medical expert. We present an approach that uses intensity, shape, and texture features to determine tumor resectability. The best classification results are obtained with fine Gaussian SVM and the L0 Feature Selection algorithms. Compared to expert predictions made on the same dataset, our method achieves better classification results. We obtain significantly better results on correctly predicting non-resectability (+17%) compared to an expert, which is essential for patient treatment (negative predictive value). Moreover, our predictions of resectability exceed expert predictions by approximately 3% (positive predictive value).
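A hedged sketch of such a classification pipeline on synthetic features is shown below; SelectKBest is used as a stand-in for the L0 feature selection named above, and an RBF-kernel SVC as the "fine Gaussian SVM" analogue.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for the radiomics feature matrix: 50 tumours, 40
# intensity/shape/texture features, binary label (1 = resectable).
X, y = make_classification(n_samples=50, n_features=40, n_informative=8,
                           random_state=0)

pipe = Pipeline([
    ("scale", StandardScaler()),
    ("select", SelectKBest(f_classif, k=10)),   # stand-in for L0 feature selection
    ("svm", SVC(kernel="rbf", gamma="scale")),  # "fine Gaussian SVM" analogue
])
print(cross_val_score(pipe, X, y, cv=5).mean())  # cross-validated accuracy
```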
NASA Astrophysics Data System (ADS)
Samadi; Wajizah, S.; Munawar, A. A.
2018-02-01
Feed quality plays an important role in animal production. The purpose of this study is to apply the NIRS method to determining feed values. NIRS spectra were acquired for feed samples in the wavelength range of 1000-2500 nm with 32 scans at 0.2 nm wavelength intervals. Spectral data were corrected by de-trending (DT) and standard normal variate (SNV) methods. Prediction models for in vitro dry matter digestibility (IVDMD) and in vitro organic matter digestibility (IVOMD) were established using principal component regression (PCR) and validated using leave-one-out cross validation (LOOCV). Prediction performance was quantified using the correlation coefficient (r) and the residual predictive deviation (RPD) index. The results showed that IVDMD and IVOMD can be predicted from SNV-corrected spectra with r and RPD of 0.93 and 2.78 for IVDMD, and 0.90 and 2.35 for IVOMD, respectively. In conclusion, the NIRS technique appears feasible for predicting animal feed nutritive values.
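The pre-processing and calibration steps described above can be sketched as follows on synthetic spectra; the SNV transform, PCR model, leave-one-out validation, and r/RPD summary mirror the stated workflow, while the de-trending step is omitted for brevity.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.pipeline import make_pipeline

def snv(spectra):
    """Standard normal variate: centre and scale each spectrum individually."""
    mu = spectra.mean(axis=1, keepdims=True)
    sd = spectra.std(axis=1, keepdims=True)
    return (spectra - mu) / sd

# Hypothetical data: 30 samples x 700 wavelengths, plus reference IVDMD values.
rng = np.random.default_rng(0)
X = snv(rng.random((30, 700)))
y = rng.random(30)

pcr = make_pipeline(PCA(n_components=5), LinearRegression())
y_cv = cross_val_predict(pcr, X, y, cv=LeaveOneOut())     # leave-one-out predictions

r = np.corrcoef(y, y_cv)[0, 1]                            # correlation coefficient
rpd = y.std() / np.sqrt(np.mean((y - y_cv) ** 2))         # SD(reference) / RMSEP
print(r, rpd)
```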
Using GPS, GIS, and Accelerometer Data to Predict Transportation Modes.
Brondeel, Ruben; Pannier, Bruno; Chaix, Basile
2015-12-01
Active transportation is a substantial source of physical activity, which has a positive influence on many health outcomes. A survey of transportation modes for each trip is challenging, time-consuming, and requires substantial financial investments. This study proposes a passive collection method and the prediction of modes at the trip level using random forests. The RECORD GPS study collected real-life trip data from 236 participants over 7 d, including the transportation mode, global positioning system, geographical information systems, and accelerometer data. A prediction model of transportation modes was constructed using the random forests method. Finally, we investigated the performance of models on the basis of a limited number of participants/trips to predict transportation modes for a large number of trips. The full model had a correct prediction rate of 90%. A simpler model of global positioning system explanatory variables combined with geographical information systems variables performed nearly as well. Relatively good predictions could be made using a model based on the 991 trips of the first 30 participants. This study uses real-life data from a large sample set to test a method for predicting transportation modes at the trip level, thereby providing a useful complement to time unit-level prediction methods. By enabling predictions on the basis of a limited number of observations, this method may decrease the workload for participants/researchers and provide relevant trip-level data to investigate relations between transportation and health.
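A minimal sketch of trip-level mode prediction with a random forest on hypothetical features is shown below; the RECORD GPS feature set itself is not reproduced here.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical trip-level features (e.g. median speed, 95th-percentile speed,
# accelerometer counts, GIS distance to the nearest transit line, ...) and one
# label per trip (walking, cycling, private vehicle, public transport).
X, y = make_classification(n_samples=1000, n_features=12, n_informative=6,
                           n_classes=4, random_state=0)

rf = RandomForestClassifier(n_estimators=500, random_state=0)
print(cross_val_score(rf, X, y, cv=5).mean())  # fraction of trips predicted correctly
```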
Avoiding drift related to linear analysis update with Lagrangian coordinate models
NASA Astrophysics Data System (ADS)
Wang, Yiguo; Counillon, Francois; Bertino, Laurent
2015-04-01
When applying data assimilation to Lagrangian coordinate models, it is profitable to correct the grid itself (position, volume). In an isopycnal-coordinate ocean model, such information is provided by the layer thickness, which can be massless but must remain positive (a truncated Gaussian distribution). A linear Gaussian analysis does not ensure positivity for such a variable. Existing methods have been proposed to handle this issue - e.g. post-processing, anamorphosis or resampling - but none ensures conservation of the mean, which is imperative in climate applications. Here, a framework is introduced to test a new method, which proceeds as follows. First, layers for which the analysis yields negative values are iteratively grouped with neighbouring layers, resulting in a probability density function with a larger mean and smaller standard deviation that prevents the appearance of negative values. Second, the analysis increments of each grouped layer are uniformly distributed, which prevents massless layers from becoming filled and vice versa. The new method is proved fully conservative with e.g. OI or 3DVAR, but a small drift remains with ensemble-based methods (e.g. EnKF, DEnKF, …) during the update of the ensemble anomaly. However, the resulting drift with the latter is small (an order of magnitude smaller than with post-processing) and the increase in computational cost is moderate. The new method is demonstrated with a realistic application in the Norwegian Climate Prediction Model (NorCPM), which provides climate predictions by assimilating sea surface temperature with the Ensemble Kalman Filter in a fully coupled Earth System model (NorESM) with an isopycnal ocean model (MICOM). Over the 25-year analysis period, the new method does not impair the predictive skill of the system but corrects the artificial steric drift introduced by data assimilation, and provides estimates in good agreement with IPCC AR5.
Application of Exactly Linearized Error Transport Equations to AIAA CFD Prediction Workshops
NASA Technical Reports Server (NTRS)
Derlaga, Joseph M.; Park, Michael A.; Rallabhandi, Sriram
2017-01-01
The computational fluid dynamics (CFD) prediction workshops sponsored by the AIAA have created invaluable opportunities in which to discuss the predictive capabilities of CFD in areas in which it has struggled, e.g., cruise drag, high-lift, and sonic boom prediction. While there are many factors that contribute to disagreement between simulated and experimental results, such as modeling or discretization error, quantifying the errors contained in a simulation is important for those who make decisions based on the computational results. The linearized error transport equations (ETE) combined with a truncation error estimate is a method to quantify one source of errors. The ETE are implemented with a complex-step method to provide an exact linearization with minimal source code modifications to CFD and multidisciplinary analysis methods. The equivalency of adjoint and linearized ETE functional error correction is demonstrated. Uniformly refined grids from a series of AIAA prediction workshops demonstrate the utility of ETE for multidisciplinary analysis with a connection between estimated discretization error and (resolved or under-resolved) flow features.
Theoretical prediction of welding distortion in large and complex structures
NASA Astrophysics Data System (ADS)
Deng, De-An
2010-06-01
Welding technology is widely used to assemble large thin plate structures such as ships, automobiles, and passenger trains because of its high productivity. However, it is impossible to avoid welding-induced distortion during the assembly process. Welding distortion not only reduces the fabrication accuracy of a weldment, but also decreases the productivity due to correction work. If welding distortion can be predicted using a practical method beforehand, the prediction will be useful for taking appropriate measures to control the dimensional accuracy to an acceptable limit. In this study, a two-step computational approach, which is a combination of a thermoelastic-plastic finite element method (FEM) and an elastic finite element with consideration for large deformation, is developed to estimate welding distortion for large and complex welded structures. Welding distortions in several representative large complex structures, which are often used in shipbuilding, are simulated using the proposed method. By comparing the predictions and the measurements, the effectiveness of the two-step computational approach is verified.
NASA Astrophysics Data System (ADS)
Cowie, Leanne; Kusznir, Nick
2014-05-01
Subsidence analysis of sedimentary basins and rifted continental margins requires a correction for the anomalous uplift or subsidence arising from mantle dynamic topography. Whilst different global model predictions of mantle dynamic topography may give a broadly similar pattern at long wavelengths, they differ substantially in the predicted amplitude and at shorter wavelengths. As a consequence the accuracy of predicted mantle dynamic topography is not sufficiently good to provide corrections for subsidence analysis. Measurements of present day anomalous subsidence, which we attribute to mantle dynamic topography, have been made for three rifted continental margins; offshore Iberia, the Gulf of Aden and southern Angola. We determine residual depth anomaly (RDA), corrected for sediment loading and crustal thickness variation for 2D profiles running from unequivocal oceanic crust across the continental ocean boundary onto thinned continental crust. Residual depth anomalies (RDA), corrected for sediment loading using flexural backstripping and decompaction, have been calculated by comparing observed and age predicted oceanic bathymetries at these margins. Age predicted bathymetric anomalies have been calculated using the thermal plate model predictions from Crosby & McKenzie (2009). Non-zero sediment corrected RDAs may result from anomalous oceanic crustal thickness with respect to the global average or from anomalous uplift or subsidence. Gravity anomaly inversion incorporating a lithosphere thermal gravity anomaly correction and sediment thickness from 2D seismic reflection data has been used to determine Moho depth, calibrated using seismic refraction, and oceanic crustal basement thickness. Crustal basement thicknesses derived from gravity inversion together with Airy isostasy have been used to correct for variations of crustal thickness from a standard oceanic thickness of 7km. The 2D profiles of RDA corrected for both sediment loading and non-standard crustal thickness provide a measurement of anomalous uplift or subsidence which we attribute to mantle dynamic topography. We compare our sediment and crustal thickness corrected RDA analysis results with published predictions of mantle dynamic topography from global models.
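One plausible way to write the chain of corrections described above is given below, under assumed notation and an Airy-isostasy crustal correction relative to the 7 km reference thickness; the sign convention (positive = anomalously shallow) and the density symbols are assumptions for illustration, not the authors' exact formulation.

```latex
% Assumed notation: b_obs = observed bathymetry, b_plate(t) = plate-model
% age-depth prediction, Delta b_sed = flexural backstripping (sediment-loading)
% correction, t_c = crustal thickness from gravity inversion, and
% rho_m, rho_c, rho_w = mantle, crust and water densities. The last term is
% the Airy correction for crust thicker or thinner than the 7 km reference.
\[
  \mathrm{RDA}_{\mathrm{corr}}
  = \bigl(b_{\mathrm{obs}} - b_{\mathrm{plate}}(t)\bigr)
    - \Delta b_{\mathrm{sed}}
    - \bigl(t_{c} - 7\,\mathrm{km}\bigr)\,
      \frac{\rho_{m} - \rho_{c}}{\rho_{m} - \rho_{w}} .
\]
```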
Reconstructing ice-age palaeoclimates: Quantifying low-CO2 effects on plants
NASA Astrophysics Data System (ADS)
Prentice, I. C.; Cleator, S. F.; Huang, Y. H.; Harrison, S. P.; Roulstone, I.
2017-02-01
We present a novel method to quantify the ecophysiological effects of changes in CO2 concentration during the reconstruction of climate changes from fossil pollen assemblages. The method does not depend on any particular vegetation model. Instead, it makes use of general equations from ecophysiology and hydrology that link moisture index (MI) to transpiration and the ratio of leaf-internal to ambient CO2 (χ). Statistically reconstructed MI values are corrected post facto for effects of CO2 concentration. The correction is based on the principle that e, the rate of water loss per unit carbon gain, should be inversely related to effective moisture availability as sensed by plants. The method involves solving a non-linear equation that relates e to MI, temperature and CO2 concentration via the Fu-Zhang relation between evapotranspiration and MI, Monteith's empirical relationship between vapour pressure deficit and evapotranspiration, and recently developed theory that predicts the response of χ to vapour pressure deficit and temperature. The solution to this equation provides a correction term for MI. The numerical value of the correction depends on the reconstructed MI. It is slightly sensitive to temperature, but primarily sensitive to CO2 concentration. Under low LGM CO2 concentration the correction is always positive, implying that LGM climate was wetter than it would seem from vegetation composition. A statistical reconstruction of last glacial maximum (LGM, 21±1 kyr BP) palaeoclimates, based on a new compilation of modern and LGM pollen assemblage data from Australia, is used to illustrate the method in practice. Applying the correction brings pollen-reconstructed LGM moisture availability in southeastern Australia better into line with palaeohydrological estimates of LGM climate.
Electrode effects in dielectric spectroscopy of colloidal suspensions
NASA Astrophysics Data System (ADS)
Cirkel, P. A.; van der Ploeg, J. P. M.; Koper, G. J. M.
1997-02-01
We present a simple model to account for electrode polarization in colloidal suspensions. Apart from correctly predicting the ω^{-3/2} dependence for the dielectric permittivity at low frequencies ω, the model provides an explicit dependence of the effect on electrode spacing. The predictions are tested for the sodium bis(2-ethylhexyl) sulfosuccinate (AOT) water-in-oil microemulsion with iso-octane as continuous phase. In particular, the dependence of electrode polarization effects on electrode spacing has been measured and is found to be in accordance with the model prediction. Methods to reduce or account for electrode polarization are briefly discussed.
Eighteen- and 24-Month-Old Infants Correct Others in Anticipation of Action Mistakes
ERIC Educational Resources Information Center
Knudsen, Birgit; Liszkowski, Ulf
2012-01-01
Much of human communication and collaboration is predicated on making predictions about others' actions. Humans frequently use predictions about others' action mistakes to correct others and spare them mistakes. Such anticipatory correcting reveals a social motivation for unsolicited helping. Cognitively, it requires forward inferences about…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, P; Schreibmann, E; Fox, T
2014-06-15
Purpose: Severe CT artifacts can impair our ability to accurately calculate proton range, thereby resulting in a clinically unacceptable treatment plan. In this work, we investigated a novel CT artifact correction method based on a coregistered MRI and investigated its ability to estimate CT HU and proton range in the presence of severe CT artifacts. Methods: The proposed method corrects corrupted CT data using a coregistered MRI to guide the mapping of CT values from a nearby artifact-free region. First, patient MRI and CT images were registered using 3D deformable image registration software based on B-spline and mutual information. The CT slice with severe artifacts was selected, as well as a nearby slice free of artifacts (e.g., 1 cm away from the artifact). The two sets of paired MRI and CT images at different slice locations were further registered by applying 2D deformable image registration. Based on the artifact-free paired MRI and CT images, a comprehensive geospatial analysis was performed to predict the correct CT HU of the CT image with severe artifacts. For a proof of concept, a known artifact was introduced that changed the ground-truth CT HU value by up to 30% and produced up to 5 cm of error in proton range. The ability of the proposed method to recover the ground truth was quantified using a selected head and neck case. Results: A significant improvement in image quality was observed visually. Our proof-of-concept study showed that 90% of the area that had 30% errors in CT HU was corrected to within 3% of its ground-truth value. Furthermore, the maximum proton range error of up to 5 cm was reduced to a 4 mm error. Conclusion: The MRI-based CT artifact correction method can improve CT image quality and proton range calculation for patients with severe CT artifacts.
An approach to adjustment of relativistic mean field model parameters
NASA Astrophysics Data System (ADS)
Bayram, Tuncay; Akkoyun, Serkan
2017-09-01
The Relativistic Mean Field (RMF) model with a small number of adjusted parameters is a powerful tool for correct predictions of various ground-state properties of nuclei. Its success in describing nuclear properties is directly related to the adjustment of its parameters using experimental data. In the present study, the Artificial Neural Network (ANN) method, which mimics brain functionality, has been employed to improve the RMF model parameters. In particular, the ability of the ANN method to capture the relations between the RMF model parameters and their predictions for the binding energies (BEs) of 58Ni and 208Pb has been found to be in agreement with the literature values.
NASA Astrophysics Data System (ADS)
Yang, GuanYa; Wu, Jiang; Chen, ShuGuang; Zhou, WeiJun; Sun, Jian; Chen, GuanHua
2018-06-01
A neural network-based first-principles method for predicting heat of formation (HOF) was previously demonstrated to be able to achieve chemical accuracy in a broad spectrum of target molecules [L. H. Hu et al., J. Chem. Phys. 119, 11501 (2003)]. However, its accuracy deteriorates with the increase in molecular size. A closer inspection reveals a systematic correlation between the prediction error and the molecular size, which appears correctable by further statistical analysis, calling for a more sophisticated machine learning algorithm. Despite the apparent difference between simple and complex molecules, all the essential physical information is already present in a carefully selected set of small molecule representatives. A model that can capture the fundamental physics would be able to predict large and complex molecules from information extracted only from a small-molecules database. To this end, a size-independent, multi-step multi-variable linear regression-neural network-B3LYP method is developed in this work, which successfully improves the overall prediction accuracy by training with smaller molecules only. In particular, the calculation errors for larger molecules are drastically reduced to the same magnitudes as those of the smaller molecules. Specifically, the method is based on a 164-molecule database that consists of molecules made of hydrogen and carbon elements. Four molecular descriptors were selected to encode each molecule's characteristics, among which are the raw HOF calculated from B3LYP and the molecular size. Upon the size-independent machine learning correction, the mean absolute deviation (MAD) of the B3LYP/6-311+G(3df,2p)-calculated HOF is reduced from 16.58 to 1.43 kcal/mol and from 17.33 to 1.69 kcal/mol for the training and testing sets (small molecules), respectively. Furthermore, the MAD of the testing set (large molecules) is reduced from 28.75 to 1.67 kcal/mol.
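A schematic of the "regress, then learn the residual" idea on synthetic numbers follows; the descriptors, network size, and data are placeholders, not the 164-molecule database or the exact multi-step regression used in the work above.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

# Hypothetical descriptors per molecule: raw B3LYP HOF, molecular size
# (heavy-atom count) and two other descriptors; target = experimental HOF.
rng = np.random.default_rng(1)
X = rng.random((164, 4))
y_exp = rng.random(164)

# Step 1: multi-variable linear regression removes the size-correlated bias.
lin = LinearRegression().fit(X, y_exp)
residual = y_exp - lin.predict(X)

# Step 2: a small neural network learns whatever structure the linear step missed.
nn = MLPRegressor(hidden_layer_sizes=(5,), max_iter=5000,
                  random_state=0).fit(X, residual)

y_corrected = lin.predict(X) + nn.predict(X)
print(np.mean(np.abs(y_corrected - y_exp)))  # mean absolute deviation on training data
```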
NASA Astrophysics Data System (ADS)
O'Carroll, Jack P. J.; Kennedy, Robert; Ren, Lei; Nash, Stephen; Hartnett, Michael; Brown, Colin
2017-10-01
The INFOMAR (Integrated Mapping For the Sustainable Development of Ireland's Marine Resource) initiative has acoustically mapped and classified a significant proportion of Ireland's Exclusive Economic Zone (EEZ), and is likely to be an important tool in Ireland's efforts to meet the criteria of the MSFD. In this study, open source and relic data were used in combination with new grab survey data to model EUNIS level 4 biotope distributions in Galway Bay, Ireland. The correct prediction rates of two artificial neural networks (ANNs) were compared to assess the effectiveness of acoustic sediment classifications versus sediments that were visually classified by an expert in the field as predictor variables. To test for autocorrelation between predictor variables the RELATE routine with Spearman rank correlation method was used. Optimal models were derived by iteratively removing predictor variables and comparing the correct prediction rates of each model. The models with the highest correct prediction rates were chosen as optimal. The optimal models each used a combination of salinity (binary; 0 = polyhaline and 1 = euhaline), proximity to reef (binary; 0 = within 50 m and 1 = outside 50 m), depth (continuous; metres) and a sediment descriptor (acoustic or observed) as predictor variables. As the status of benthic habitats is required to be assessed under the MSFD the Ecological Status (ES) of the subtidal sediments of Galway Bay was also assessed using the Infaunal Quality Index. The ANN that used observed sediment classes as predictor variables could correctly predict the distribution of biotopes 67% of the time, compared to 63% for the ANN using acoustic sediment classes. Acoustic sediment ANN predictions were affected by local sediment heterogeneity, and the lack of a mixed sediment class. The all-round poor performance of ANNs is likely to be a result of the temporally variable and sparsely distributed data within the study area.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fleury, Leesa M.; Moore, Guy D.
2016-05-03
If the axion exists and if the initial axion field value is uncorrelated at causally disconnected points, then it should be possible to predict the efficiency of cosmological axion production, relating the axionic dark matter density to the axion mass. The main obstacle to making this prediction is correctly treating the axion string cores. We develop a new algorithm for treating the axionic string cores correctly in 2+1 dimensions. When the axionic string cores are given their full physical string tension, axion production is about twice as efficient as in previous simulations. We argue that the string network in 2+1 dimensions should behave very differently than in 3+1 dimensions, so this result cannot be simply carried over to the physical case. We outline how to extend our method to 3+1D axion string dynamics.
NASA Astrophysics Data System (ADS)
Iveson, Simon M.
2003-06-01
Pietruszczak and coworkers (Internat. J. Numer. Anal. Methods Geomech. 1994; 18(2):93-105; Comput. Geotech. 1991; 12( ):55-71) have presented a continuum-based model for predicting the dynamic mechanical response of partially saturated granular media with viscous interstitial liquids. In their model they assume that the gas phase is distributed uniformly throughout the medium as discrete spherical air bubbles occupying the voids between the particles. However, their derivation of the air pressure inside these gas bubbles is inconsistent with their stated assumptions. In addition the resultant dependence of gas pressure on liquid saturation lies outside of the plausible range of possible values for discrete air bubbles. This results in an over-prediction of the average bulk modulus of the void phase. Corrected equations are presented.
Methods for assessing wall interference in the 2- by 2-foot adaptive-wall wind tunnel
NASA Technical Reports Server (NTRS)
Schairer, E. T.
1986-01-01
Discussed are two methods for assessing two-dimensional wall interference in the adaptive-wall test section of the NASA Ames 2 x 2-Foot Transonic Wind Tunnel: (1) a method for predicting free-air conditions near the walls of the test section (adaptive-wall methods); and (2) a method for estimating wall-induced velocities near the model (correction methods), both of which are based on measurements of either one or two components of flow velocity near the walls of the test section. Each method is demonstrated using simulated wind tunnel data and is compared with other methods of the same type. The two-component adaptive-wall and correction methods were found to be preferable to the corresponding one-component methods because: (1) they are more sensitive to, and give a more complete description of, wall interference; (2) they require measurements at fewer locations; (3) they can be used to establish free-stream conditions; and (4) they are independent of a description of the model and constants of integration.
Unified treatment of the luminosity distance in cosmology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoo, Jaiyul; Scaccabarozzi, Fulvio, E-mail: jyoo@physik.uzh.ch, E-mail: fulvio@physik.uzh.ch
Comparing the luminosity distance measurements to their theoretical predictions is one of the cornerstones in establishing modern cosmology. However, as shown in Biern and Yoo, the theoretical predictions in the literature are often plagued with infrared divergences and gauge-dependences. This trend calls into question the sanity of the methods used to derive the luminosity distance. Here we critically investigate four different methods—the geometric approach, the Sachs approach, the Jacobi mapping approach, and the geodesic light cone (GLC) approach to modeling the luminosity distance, and we present a unified treatment of such methods, facilitating the comparison among the methods and checking their sanity. All of these four methods, if exercised properly, can be used to reproduce the correct description of the luminosity distance.
NASA Technical Reports Server (NTRS)
Munteanu, M. J.; Piraino, P.; Jakubowicz, O.
1984-01-01
A total of 1575 radiosondes and the corresponding simulated brightness temperatures were used in an effort to derive a temperature retrieval based on the clusters of brightness temperatures. The 8 simulated channels, namely 3 MSU and 5 IR channels of the TIROS-N satellite, are used by the GLAS temperature retrieval method. The 3 MSU and 5 IR brightness temperatures were clustered into 17 cluster groups and a regression for the prediction of the tropopause height in mb was generated. The overall r.m.s. for the tropopause prediction is excellent, namely, around 16 mb for the summer and 23 mb for the winter. The correct cluster of brightness temperatures can be identified 98% of the time by the method of discriminatory classification if it is approximately a normal distribution or, in general, by the method of the nearest neighbor.
Inspection Robot Based Mobile Sensing and Power Line Tracking for Smart Grid
Byambasuren, Bat-erdene; Kim, Donghan; Oyun-Erdene, Mandakh; Bold, Chinguun; Yura, Jargalbaatar
2016-01-01
Smart sensing and power line tracking is very important in a smart grid system. Illegal electricity usage can be detected by remote current measurement on overhead power lines using an inspection robot. There is a need for accurate detection methods of illegal electricity usage. Stable and correct power line tracking is a very prominent issue. In order to correctly track and make accurate measurements, the swing path of a power line should be previously fitted and predicted by a mathematical function using an inspection robot. After this, the remote inspection robot can follow the power line and measure the current. This paper presents a new power line tracking method using parabolic and circle fitting algorithms for illegal electricity detection. We demonstrate the effectiveness of the proposed tracking method by simulation and experimental results. PMID:26907274
Inspection Robot Based Mobile Sensing and Power Line Tracking for Smart Grid.
Byambasuren, Bat-Erdene; Kim, Donghan; Oyun-Erdene, Mandakh; Bold, Chinguun; Yura, Jargalbaatar
2016-02-19
Smart sensing and power line tracking is very important in a smart grid system. Illegal electricity usage can be detected by remote current measurement on overhead power lines using an inspection robot. There is a need for accurate detection methods of illegal electricity usage. Stable and correct power line tracking is a very prominent issue. In order to correctly track and make accurate measurements, the swing path of a power line should be previously fitted and predicted by a mathematical function using an inspection robot. After this, the remote inspection robot can follow the power line and measure the current. This paper presents a new power line tracking method using parabolic and circle fitting algorithms for illegal electricity detection. We demonstrate the effectiveness of the proposed tracking method by simulation and experimental results.
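The parabolic-fitting step mentioned above can be illustrated with a few hypothetical conductor positions: a least-squares parabola (a good small-sag approximation to a catenary) is fitted to the observed swing path and then evaluated ahead of the robot. The circle-fitting variant and the current-measurement logic are not shown.

```python
import numpy as np

# Hypothetical conductor positions sampled by the robot while the line swings
# (x: horizontal distance in metres, y: height in metres).
x = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])
y = np.array([5.00, 4.42, 4.05, 3.92, 4.03, 4.40])

# Least-squares parabola; the fitted curve predicts where the line will be
# at the next position along the span.
a, b, c = np.polyfit(x, y, 2)
x_ahead = 12.0
print(a * x_ahead**2 + b * x_ahead + c)
```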
Day, Ryan; Qu, Xiaotao; Swanson, Rosemarie; Bohannan, Zach; Bliss, Robert
2011-01-01
Most current template-based structure prediction methods concentrate on finding the correct backbone conformation and then packing sidechains within that backbone. Our packing-based method derives distance constraints from conserved relative packing groups (RPGs). In our refinement approach, the RPGs provide a level of resolution that restrains global topology while allowing conformational sampling. In this study, we test our template-based structure prediction method using 51 prediction units from CASP7 experiments. RPG-based constraints are able to substantially improve approximately two-thirds of starting templates. Upon deeper investigation, we find that true positive spatial constraints, especially those non-local in sequence, derived from the RPGs were important to building nearer native models. Surprisingly, the fraction of incorrect or false positive constraints does not strongly influence the quality of the final candidate. This result indicates that our RPG-based true positive constraints sample the self-consistent, cooperative interactions of the native structure. The lack of such reinforcing cooperativity explains the weaker effect of false positive constraints. Generally, these findings are encouraging indications that RPGs will improve template-based structure prediction. PMID:21210729
NASA Astrophysics Data System (ADS)
Greenhalgh, E. E.; Kusznir, N. J.
2006-12-01
Satellite gravity inversion incorporating a lithosphere thermal gravity correction has been used to map crustal thickness and lithosphere thinning factor for the N.E. Atlantic. The inversion of gravity data to determine crustal thickness incorporates a lithosphere thermal gravity anomaly correction for both oceanic and continental margin lithosphere. Predicted crustal thicknesses in the Norwegian Basin are between 4 and 7 km on the extinct Aegir oceanic ridge, which ceased sea-floor spreading in the Oligocene. Crustal thickness estimates do not include a correction for sediment thickness and are upper bounds. Crustal thicknesses determined by gravity inversion for the Aegir Ridge are consistent with recent estimates derived using refraction seismology by Breivik et al. (2006). Failure to incorporate a lithosphere thermal gravity anomaly correction produces an over-estimate of crustal thickness. Oceanic crustal thicknesses within the Norwegian Basin are predicted by the gravity inversion to increase to 9-10 km eastwards towards the Norwegian (Moere) margin and westwards towards the Jan Mayen micro-continent, consistent with volcanic margin continental breakup at the end of the Palaeocene. The observation (from gravity inversion and seismic refraction studies) of thin oceanic crust produced by the Aegir ocean ridge in the Oligocene has implications for the temporal evolution of asthenosphere temperature under the N.E. Atlantic during the Tertiary. Thin Oligocene oceanic crust may imply cool (normal) asthenosphere temperatures during the Oligocene, in contrast to elevated asthenosphere temperatures in the Palaeocene and Miocene-Recent as indicated by volcanic margin formation and the formation of Iceland respectively. Gravity inversion also predicts a region of thin oceanic crust to the west of the northern part of the Jan Mayen micro-continent and to the east of the thicker oceanic crust currently being formed at the Kolbeinsey Ridge. Thicker crust (cf. ocean basins) is predicted for the Jan Mayen micro-continent south of Jan Mayen Island, with crust of the order of 20 km thickness extending southwards to connect with both the Faroes-Iceland Ridge and N.E. Iceland. Predicted crustal thicknesses under the Faroes-Iceland Ridge are approximately 25 km. The lithosphere thermal model used to predict the lithosphere thermal gravity anomaly correction may be conditioned using magnetic isochron data to provide the age of oceanic lithosphere. The resulting crustal thickness determination and the location of ocean-continent transition (OCT) are however sensitive to errors in the magnetic isochron data. An alternative method of inverting satellite gravity to give crustal thickness, incorporating a lithosphere thermal correction, has been used which does not use magnetic isochron data and provides an independent prediction of crustal thickness and OCT location. The crustal thickness estimates and OCT locations detailed above are robust to these sensitivity tests.
Outcomes of planetary close encounters - A systematic comparison of methodologies
NASA Technical Reports Server (NTRS)
Greenberg, Richard; Carusi, Andrea; Valsecchi, G. B.
1988-01-01
Several methods for estimating the outcomes of close planetary encounters are compared on the basis of the numerical integration of a range of encounter types. An attempt is made to lay the foundation for the development of predictive rules concerning the encounter outcomes applicable to the refinement of the statistical mechanics that apply to planet-formation and similar problems concerning planetary swarms. Attention is given to Oepik's (1976) formulation of the two-body approximation, whose predicted motion differs from the correct three-body behavior.
A quantum theoretical study of polyimides
NASA Technical Reports Server (NTRS)
Burke, Luke A.
1987-01-01
One of the most important contributions of theoretical chemistry is the correct prediction of properties of materials before any costly experimental work begins. This is especially true in the field of electrically conducting polymers. Development of the Valence Effective Hamiltonian (VEH) technique for the calculation of the band structure of polymers was initiated. The necessary VEH potentials were developed for sulfur and oxygen atoms within the relevant molecular environments, and the reasons for the success of this approximate method in predicting the optical properties of conducting polymers were explored.
Huang, Ai-Mei; Nguyen, Truong
2009-04-01
In this paper, we address the problems of unreliable motion vectors that cause visual artifacts but cannot be detected by high residual energy or bidirectional prediction difference in motion-compensated frame interpolation. A correlation-based motion vector processing method is proposed to detect and correct those unreliable motion vectors by explicitly considering motion vector correlation in the motion vector reliability classification, motion vector correction, and frame interpolation stages. Since our method gradually corrects unreliable motion vectors based on their reliability, we can effectively identify areas where no reliable motion vector is available, such as occlusions and deformed structures. We also propose an adaptive frame interpolation scheme for the occlusion areas based on the analysis of their surrounding motion distribution. As a result, the frames interpolated using the proposed scheme have clearer structure edges, and ghost artifacts are also greatly reduced. Experimental results show that our interpolated results have better visual quality than other methods. In addition, the proposed scheme is robust even for those video sequences that contain multiple and fast motions.
Space sickness predictors suggest fluid shift involvement and possible countermeasures
NASA Technical Reports Server (NTRS)
Simanonok, K. E.; Moseley, E. C.; Charles, J. B.
1992-01-01
Preflight data from 64 first time Shuttle crew members were examined retrospectively to predict space sickness severity (NONE, MILD, MODERATE, or SEVERE) by discriminant analysis. From 9 input variables relating to fluid, electrolyte, and cardiovascular status, 8 variables were chosen by discriminant analysis that correctly predicted space sickness severity with 59 pct. success by one method of cross validation on the original sample and 67 pct. by another method. The 8 variables in order of their importance for predicting space sickness severity are sitting systolic blood pressure, serum uric acid, calculated blood volume, serum phosphate, urine osmolality, environmental temperature at the launch site, red cell count, and serum chloride. These results suggest the presence of predisposing physiologic factors to space sickness that implicate a fluid shift etiology. Addition of a 10th input variable, hours spent in the Weightless Environment Training Facility (WETF), improved the prediction of space sickness severity to 66 pct. success by the first method of cross validation on the original sample and to 71 pct. by the second method. The data suggest that WETF training may reduce space sickness severity.
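The discriminant-analysis step can be sketched with scikit-learn as below. The data are synthetic stand-ins rather than the 64 crew members' preflight measurements, and the cross-validated correct-classification rate plays the role of the success percentages quoted above.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 8))          # 8 preflight predictors per crew member (synthetic)
y = np.tile([0, 1, 2, 3], 16)         # severity classes: NONE, MILD, MODERATE, SEVERE

lda = LinearDiscriminantAnalysis()
scores = cross_val_score(lda, X, y, cv=5)   # cross-validated correct-classification rate
print("mean correct-prediction rate: %.2f" % scores.mean())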
2014-01-01
Background National data on body mass index (BMI), computed from self-reported height and weight, is readily available for many populations including the Canadian population. Because self-reported weight is found to be systematically under-reported, it has been proposed that the bias in self-reported BMI can be corrected using equations derived from data sets which include both self-reported and measured height and weight. Such correction equations have been developed and adopted. We aim to evaluate the usefulness (i.e., distributional similarity; sensitivity and specificity; and predictive utility vis-à-vis disease outcomes) of existing and new correction equations in population-based research. Methods The Canadian Community Health Surveys from 2005 and 2008 include both measured and self-reported values of height and weight, which allows for construction and evaluation of correction equations. We focused on adults age 18–65, and compared three correction equations (two correcting weight only, and one correcting BMI) against self-reported and measured BMI. We first compared population distributions of BMI. Second, we compared the sensitivity and specificity of self-reported BMI and corrected BMI against measured BMI. Third, we compared the self-reported and corrected BMI in terms of association with health outcomes using logistic regression. Results All corrections outperformed self-report when estimating the full BMI distribution; the weight-only correction outperformed the BMI-only correction for females in the 23–28 kg/m2 BMI range. In terms of sensitivity/specificity, when estimating obesity prevalence, corrected values of BMI (from any equation) were superior to self-report. In terms of modelling BMI-disease outcome associations, findings were mixed, with no correction proving consistently superior to self-report. Conclusions If researchers are interested in modelling the full population distribution of BMI, or estimating the prevalence of obesity in a population, then a correction of any kind included in this study is recommended. If the researcher is interested in using BMI as a predictor variable for modelling disease, then both self-reported and corrected BMI result in biased estimates of association. PMID:24885210
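A minimal sketch of the correction-equation idea follows: a hypothetical linear adjustment of self-reported weight is applied before computing BMI, and obesity sensitivity and specificity are then evaluated against measured BMI. The coefficients and the synthetic data are placeholders, not the published Canadian correction equations.

import numpy as np

def corrected_bmi(sr_weight_kg, height_m, a=1.02, b=1.5):
    # Hypothetical weight-only correction w_corr = a*w_sr + b applied before computing BMI.
    return (a * sr_weight_kg + b) / height_m**2

def sensitivity_specificity(pred_obese, true_obese):
    tp = np.sum(pred_obese & true_obese)
    tn = np.sum(~pred_obese & ~true_obese)
    fn = np.sum(~pred_obese & true_obese)
    fp = np.sum(pred_obese & ~true_obese)
    return tp / (tp + fn), tn / (tn + fp)

rng = np.random.default_rng(1)
height = rng.normal(1.70, 0.10, 1000)                          # measured height [m] (synthetic)
true_weight = rng.normal(75.0, 15.0, 1000)                     # measured weight [kg] (synthetic)
sr_weight = true_weight - np.abs(rng.normal(1.5, 1.0, 1000))   # systematic under-reporting
measured_bmi = true_weight / height**2
corr_bmi = corrected_bmi(sr_weight, height)
sens, spec = sensitivity_specificity(corr_bmi >= 30, measured_bmi >= 30)
print("obesity sensitivity %.2f, specificity %.2f" % (sens, spec))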
Framework for making better predictions by directly estimating variables' predictivity.
Lo, Adeline; Chernoff, Herman; Zheng, Tian; Lo, Shaw-Hwa
2016-12-13
We propose approaching prediction from a framework grounded in the theoretical correct prediction rate of a variable set as a parameter of interest. This framework allows us to define a measure of predictivity that enables assessing variable sets for, preferably high, predictivity. We first define the prediction rate for a variable set and consider, and ultimately reject, the naive estimator, a statistic based on the observed sample data, due to its inflated bias for moderate sample size and its sensitivity to noisy useless variables. We demonstrate that the I-score of the PR method of VS yields a relatively unbiased estimate of a parameter that is not sensitive to noisy variables and is a lower bound to the parameter of interest. Thus, the PR method using the I-score provides an effective approach to selecting highly predictive variables. We offer simulations and an application of the I-score on real data to demonstrate the statistic's predictive performance on sample data. We conjecture that using the partition retention and I-score can aid in finding variable sets with promising prediction rates; however, further research in the avenue of sample-based measures of predictivity is much desired.
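For concreteness, the sketch below computes the naive sample-based estimate of the correct prediction rate for a discrete variable set (majority-class prediction within each cell of the variable set). This is the estimator the authors ultimately reject because of its inflated bias; the I-score itself is not reproduced here, and the data are synthetic.

import numpy as np
from collections import defaultdict

def naive_correct_rate(X, y):
    # Predict the majority class within each cell (unique combination of the
    # variables in the set) and report the in-sample fraction predicted correctly.
    cells = defaultdict(list)
    for row, label in zip(map(tuple, X), y):
        cells[row].append(label)
    correct = sum(max(np.bincount(labels)) for labels in cells.values())
    return correct / len(y)

rng = np.random.default_rng(2)
X = rng.integers(0, 2, size=(200, 3))                                 # three binary explanatory variables
y = ((X[:, 0] ^ X[:, 1]) | rng.integers(0, 2, size=200)).astype(int)  # noisy binary outcome
print("naive correct-prediction rate:", naive_correct_rate(X, y))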
Shao, Xu; Milner, Ben
2005-08-01
This work proposes a method to reconstruct an acoustic speech signal solely from a stream of mel-frequency cepstral coefficients (MFCCs) as may be encountered in a distributed speech recognition (DSR) system. Previous methods for speech reconstruction have required, in addition to the MFCC vectors, fundamental frequency and voicing components. In this work the voicing classification and fundamental frequency are predicted from the MFCC vectors themselves using two maximum a posteriori (MAP) methods. The first method enables fundamental frequency prediction by modeling the joint density of MFCCs and fundamental frequency using a single Gaussian mixture model (GMM). The second scheme uses a set of hidden Markov models (HMMs) to link together a set of state-dependent GMMs, which enables a more localized modeling of the joint density of MFCCs and fundamental frequency. Experimental results on speaker-independent male and female speech show that accurate voicing classification and fundamental frequency prediction is attained when compared to hand-corrected reference fundamental frequency measurements. The use of the predicted fundamental frequency and voicing for speech reconstruction is shown to give very similar speech quality to that obtained using the reference fundamental frequency and voicing.
NASA Astrophysics Data System (ADS)
Brereton, Carol A.; Joynes, Ian M.; Campbell, Lucy J.; Johnson, Matthew R.
2018-05-01
Fugitive emissions are important sources of greenhouse gases and lost product in the energy sector that can be difficult to detect, but are often easily mitigated once they are known, located, and quantified. In this paper, a scalar transport adjoint-based optimization method is presented to locate and quantify unknown emission sources from downstream measurements. This emission characterization approach correctly predicted locations to within 5 m and magnitudes to within 13% of experimental release data from Project Prairie Grass. The method was further demonstrated on simulated simultaneous releases in a complex 3-D geometry based on an Alberta gas plant. Reconstructions were performed using both the complex 3-D transient wind field used to generate the simulated release data and using a sequential series of steady-state RANS wind simulations (SSWS) representing 30 s intervals of physical time. Both the detailed transient and the simplified wind field series could be used to correctly locate major sources and predict their emission rates within 10%, while predicting total emission rates from all sources within 24%. This SSWS case would be much easier to implement in a real-world application, and gives rise to the possibility of developing pre-computed databases of both wind and scalar transport adjoints to reduce computational time.
Bao, Yidan; Kong, Wenwen; Liu, Fei; Qiu, Zhengjun; He, Yong
2012-01-01
Amino acids are important indices of the growth status of oilseed rape under herbicide stress. Near infrared (NIR) spectroscopy combined with chemometrics was applied for fast determination of glutamic acid in oilseed rape leaves. The optimal spectral preprocessing method was obtained after comparing Savitzky-Golay smoothing, standard normal variate, multiplicative scatter correction, first and second derivatives, detrending and direct orthogonal signal correction. Linear and nonlinear calibration methods were developed, including partial least squares (PLS) and least squares-support vector machine (LS-SVM). The most effective wavelengths (EWs) were determined by the successive projections algorithm (SPA), and these wavelengths were used as the inputs of the PLS and LS-SVM models. The best prediction results were achieved by the SPA-LS-SVM (Raw) model, with correlation coefficient r = 0.9943 and root mean square error of prediction (RMSEP) = 0.0569 for the prediction set. These results indicated that NIR spectroscopy combined with SPA-LS-SVM is feasible for the fast and effective detection of glutamic acid in oilseed rape leaves. The selected EWs could be used to develop spectral sensors, and such fundamental amino acid data are helpful for studying the mechanism of herbicide action. PMID:23203052
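The PLS calibration step can be sketched with scikit-learn as below, using synthetic spectra in place of the NIR measurements; the SPA wavelength selection and the LS-SVM model reported above are not reproduced.

import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
X = rng.normal(size=(120, 300))                                   # 120 NIR spectra x 300 wavelengths (synthetic)
y = X @ rng.normal(size=300) + rng.normal(scale=0.5, size=120)    # glutamic acid content (synthetic)

X_cal, X_val, y_cal, y_val = train_test_split(X, y, test_size=0.3, random_state=0)
pls = PLSRegression(n_components=10).fit(X_cal, y_cal)
y_pred = pls.predict(X_val).ravel()
rmsep = np.sqrt(np.mean((y_pred - y_val) ** 2))
r = np.corrcoef(y_pred, y_val)[0, 1]
print("prediction set: r = %.3f, RMSEP = %.3f" % (r, rmsep))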
Monitoring apparatus and method for battery power supply
Martin, Harry L.; Goodson, Raymond E.
1983-01-01
A monitoring apparatus and method are disclosed for monitoring and/or indicating energy that a battery power source has then remaining and/or can deliver for utilization purposes as, for example, to an electric vehicle. A battery mathematical model forms the basis for monitoring with a capacity prediction determined from measurement of the discharge current rate and stored battery parameters. The predicted capacity is used to provide a state-of-charge indication. Self-calibration over the life of the battery power supply is enacted through use of a feedback voltage based upon the difference between predicted and measured voltages to correct the battery mathematical model. Through use of a microprocessor with central information storage of temperature, current and voltage, system behavior is monitored, and system flexibility is enhanced.
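A schematic predict/correct loop of the kind described, coulomb-counting prediction of the state of charge plus a voltage-residual feedback that re-calibrates the model, is sketched below. The linear open-circuit-voltage model, parameter values, and feedback gain are illustrative assumptions, not the patented battery model.

def monitor_step(soc, capacity_ah, current_a, v_measured, dt_h,
                 k_fb=0.05, v_full=12.8, v_empty=10.5, r_int=0.05):
    # Predict: coulomb-counting update of the state of charge.
    soc = soc - current_a * dt_h / capacity_ah
    # Predict the terminal voltage from a simple linear open-circuit-voltage model.
    v_pred = v_empty + (v_full - v_empty) * soc - r_int * current_a
    # Correct: feed the voltage residual back to re-calibrate the capacity estimate.
    capacity_ah += k_fb * (v_measured - v_pred)
    return soc, capacity_ah

soc, cap = 0.9, 50.0
for step in range(5):
    soc, cap = monitor_step(soc, cap, current_a=10.0, v_measured=12.1, dt_h=0.1)
    print("SOC %.3f  estimated capacity %.2f Ah" % (soc, cap))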
Classical least squares multivariate spectral analysis
Haaland, David M.
2002-01-01
An improved classical least squares (CLS) multivariate spectral analysis method is presented that adds spectral shapes describing non-calibrated components and system effects (other than baseline corrections) present in the analyzed mixture to the prediction phase of the method, an approach termed prediction-augmented classical least squares (PACLS). These improvements decrease or eliminate many of the restrictions of CLS-type methods and greatly extend their capabilities, accuracy, and precision. One new application of PACLS is the ability to accurately predict unknown sample concentrations when new unmodeled spectral components are present in the unknown samples. Other applications of PACLS include the incorporation of spectrometer drift into the quantitative multivariate model and the maintenance of a calibration on a drifting spectrometer. Finally, the ability of PACLS to transfer a multivariate model between spectrometers is demonstrated.
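The prediction-phase augmentation can be illustrated with a few lines of linear algebra: pure-component spectra are calibrated by ordinary CLS, and extra spectral shapes (here a synthetic drift shape) are appended when solving for an unknown sample. This is a minimal sketch of the idea under assumed synthetic data, not the patented implementation.

import numpy as np

def cls_calibrate(C, A):
    # Estimate pure-component spectra K from known concentrations C and spectra A (A ~ C @ K).
    K, *_ = np.linalg.lstsq(C, A, rcond=None)
    return K

def pacls_predict(a_unknown, K, extra_shapes):
    # Augment the prediction step with spectral shapes for un-modeled components
    # (e.g., drift); only the analyte coefficients are returned.
    K_aug = np.vstack([K, extra_shapes])
    coef, *_ = np.linalg.lstsq(K_aug.T, a_unknown, rcond=None)
    return coef[: K.shape[0]]

rng = np.random.default_rng(4)
K_true = rng.normal(size=(2, 100))                      # two analyte spectra (synthetic)
drift = np.linspace(0.0, 1.0, 100)                      # un-modeled drift shape (synthetic)
C_cal = rng.uniform(0.1, 1.0, size=(10, 2))
A_cal = C_cal @ K_true + 0.01 * rng.normal(size=(10, 100))
K_est = cls_calibrate(C_cal, A_cal)
a_new = np.array([0.4, 0.7]) @ K_true + 0.3 * drift     # unknown sample containing drift
print("predicted concentrations:", pacls_predict(a_new, K_est, drift[None, :]))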
Williams, Gary E.; Wood, P.B.
2002-01-01
We used miniature infrared video cameras to monitor Wood Thrush (Hylocichla mustelina) nests during 1998–2000. We documented nest predators and examined whether evidence at nests can be used to predict predator identities and nest fates. Fifty-six nests were monitored; 26 failed, with 3 abandoned and 23 depredated. We predicted predator class (avian, mammalian, snake) prior to review of video footage and were incorrect 57% of the time. Birds and mammals were underrepresented whereas snakes were over-represented in our predictions. We documented ≥9 nest-predator species, with the southern flying squirrel (Glaucomys volans) taking the most nests (n = 8). During 2000, we predicted fate (fledge or fail) of 27 nests; 23 were classified correctly. Traditional methods of monitoring nests appear to be effective for classifying success or failure of nests, but ineffective at classifying nest predators.
NASA Astrophysics Data System (ADS)
Howard, J. E.
2014-12-01
This study focuses on improving methods of accounting for atmospheric effects on infrasound amplitudes observed on arrays at regional distances in the southwestern United States. Recordings at ranges of 150 to nearly 300 km from a repeating ground truth source of small HE explosions are used. The explosions range in actual weight from approximately 2000-4000 lbs. and are detonated year-round, which provides signals for a wide range of atmospheric conditions. Three methods of correcting the observed amplitudes for atmospheric effects are investigated with the data set. The first corrects amplitudes for upper stratospheric wind as developed by Mutschlecner and Whitaker (1999) and uses the average wind speed between 45-55 km altitudes in the direction of propagation to derive an empirical correction formula. This approach was developed using large chemical and nuclear explosions and is tested with the smaller explosions, for which shorter wavelengths cause the energy to be scattered by the smaller scale structure of the atmosphere. The second approach is a semi-empirical method using ray tracing to determine wind speed at ray turning heights, where the wind estimates replace the wind values in the existing formula. Finally, parabolic equation (PE) modeling is used to predict the amplitudes at the arrays at 1 Hz. The PE amplitudes are compared to the observed amplitudes with a narrow band filter centered at 1 Hz. An analysis is performed of the conditions under which the empirical and semi-empirical methods fail and full wave methods must be used.
Prediction of microstructure, residual stress, and deformation in laser powder bed fusion process
NASA Astrophysics Data System (ADS)
Yang, Y. P.; Jamshidinia, M.; Boulware, P.; Kelly, S. M.
2018-05-01
The laser powder bed fusion (L-PBF) process has been investigated extensively to build production parts with complex shapes. Modeling tools that can be used at the part level are essential to allow engineers to fine-tune the shape design and process parameters for additive manufacturing. This study focuses on developing modeling methods to predict microstructure, hardness, residual stress, and deformation in large L-PBF built parts. A transient, sequentially coupled thermal and metallurgical analysis method was developed to predict microstructure and hardness of L-PBF built high-strength, low-alloy steel parts. A moving heat-source model was used in this analysis to accurately predict the temperature history. A kinetics-based model, originally developed to predict microstructure in the heat-affected zone of a welded joint, was extended to predict the microstructure and hardness in an L-PBF build by inputting the predicted temperature history. The tempering effect of subsequently built layers on the current layer's microstructural phases was modeled, which is key to predicting the final hardness correctly. It was also found that the top layers of a built part have higher hardness because of the lack of this tempering effect. A sequentially coupled thermal and mechanical analysis method was developed to predict residual stress and deformation for an L-PBF built part. It was found that a line-heating model is not suitable for analyzing a large L-PBF built part; the layer-heating method is a potential method for such analyses. Experiments were conducted to validate the model predictions.
Exemplar-based human action pose correction.
Shen, Wei; Deng, Ke; Bai, Xiang; Leyvand, Tommer; Guo, Baining; Tu, Zhuowen
2014-07-01
The launch of Xbox Kinect has built a very successful computer vision product and made a big impact on the gaming industry. This sheds light on a wide variety of potential applications related to action recognition. The accurate estimation of human poses from the depth image is universally a critical step. However, existing pose estimation systems exhibit failures when facing severe occlusion. In this paper, we propose an exemplar-based method to learn to correct the initially estimated poses. We learn an inhomogeneous systematic bias by leveraging the exemplar information within a specific human action domain. Furthermore, as an extension, we learn a conditional model by incorporating pose tags to further increase the accuracy of pose correction. In the experiments, significant improvements on both joint-based skeleton correction and tag prediction are observed over the contemporary approaches, including what is delivered by the current Kinect system. Our experiments for the facial landmark correction also illustrate that our algorithm can improve the accuracy of other detection/estimation systems.
Excimer laser correction of hyperopia, hyperopic and mixed astigmatism: past, present, and future.
Lukenda, Adrian; Martinović, Zeljka Karaman; Kalauz, Miro
2012-06-01
The broad acceptance of "spot scanning" or "flying spot" excimer lasers in the last decade has enabled the domination of corneal ablative laser surgery over other refractive surgical procedures for the correction of hyperopia, hyperopic and mixed astigmatism. This review outlines the most important reasons why the ablative laser correction of hyperopia, hyperopic and mixed astigmatism for many years lagged behind that of myopia. Most of today's scanning laser systems, used in the LASIK and PRK procedures, can safely and effectively perform low, moderate and high hyperopic and hyperopic astigmatic corrections. The introduction of these laser platforms has also significantly improved the long-term refractive stability of hyperopic treatments. In the future, further improvements in femtosecond and nanosecond technology, eye-tracker systems, and the development of new customized algorithms, such as the ray-tracing method, could additionally increase the upper limit for the safe and predictable corneal ablative laser correction of hyperopia, hyperopic and mixed astigmatism.
Lingner, Thomas; Kataya, Amr R. A.; Reumann, Sigrun
2012-01-01
We recently developed the first algorithms specifically for plants to predict proteins carrying peroxisome targeting signals type 1 (PTS1) from genome sequences. As validated experimentally, the prediction methods are able to correctly predict unknown peroxisomal Arabidopsis proteins and to infer novel PTS1 tripeptides. The high prediction performance is primarily determined by the large number and sequence diversity of the underlying positive example sequences, which were mainly derived from EST databases. However, a few constructs remained cytosolic in experimental validation studies, indicating sequencing errors in some ESTs. To identify erroneous sequences, we validated subcellular targeting of additional positive example sequences in the present study. Moreover, we analyzed the distribution of prediction scores separately for each orthologous group of PTS1 proteins, which generally resembled normal distributions with group-specific mean values. The cytosolic sequences commonly represented outliers of low prediction scores and were located at the very tail of a fitted normal distribution. Three statistical methods for identifying outliers were compared in terms of sensitivity and specificity. Their combined application allows elimination of erroneous ESTs from positive example data sets. This new post-validation method will further improve the prediction accuracy of both PTS1 and PTS2 protein prediction models for plants, fungi, and mammals. PMID:22415050
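One simple tail-based outlier test of the kind compared in the study can be sketched as follows: scores in the extreme low tail of a normal distribution fitted to an orthologous group are flagged as likely erroneous. The cutoff level is illustrative, and this z-score style test may differ from the three statistical methods actually compared.

import numpy as np
from scipy import stats

def low_tail_outliers(scores, alpha=0.01):
    # Flag scores below the alpha-quantile of a normal distribution fitted to the group.
    mu, sigma = np.mean(scores), np.std(scores, ddof=1)
    return scores < stats.norm.ppf(alpha, loc=mu, scale=sigma)

scores = np.concatenate([np.random.default_rng(5).normal(0.8, 0.05, 50), [0.35]])
print("indices flagged as likely erroneous ESTs:", np.where(low_tail_outliers(scores))[0])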
Vosough, Maryam; Salemi, Amir
2007-08-15
In the present work, two second-order calibration methods, the generalized rank annihilation method (GRAM) and multivariate curve resolution-alternating least squares (MCR-ALS), have been applied to standard addition data matrices obtained by gas chromatography-mass spectrometry (GC-MS) to characterize and quantify four unsaturated fatty acids, cis-9-hexadecenoic acid (C16:1omega7c), cis-9-octadecenoic acid (C18:1omega9c), cis-11-eicosenoic acid (C20:1omega9) and cis-13-docosenoic acid (C22:1omega9), in fish oil, considering matrix interferences. With these methods, the peak area does not need to be measured directly and predictions are more accurate. Because of the non-trilinear nature of the GC-MS data matrices, MCR-ALS and GRAM were first applied to uncorrected data matrices. In comparison with MCR-ALS, biased and imprecise concentrations (%R.S.D. = 27.3) were obtained using GRAM without correcting the retention time shift. As trilinearity is the essential requirement for implementing GRAM, the data need to be corrected. Multivariate rank alignment objectively corrects the run-to-run retention time variations between a sample GC-MS data matrix and a standard addition GC-MS data matrix. The two second-order algorithms were then compared with each other. Both algorithms provided similar mean predictions, pure concentrations and spectral profiles. The results were validated using standard mass spectra of the target compounds. In addition, some of the quantification results were compared with the concentration values obtained using selected mass chromatograms. Because the classical univariate method of determining analyte peak areas fails in cases of strong peak overlap and matrix effects, the "second-order advantage" solved this problem successfully.
Prediction of light aircraft interior noise
NASA Technical Reports Server (NTRS)
Howlett, J. T.; Morales, D. A.
1976-01-01
At the present time, predictions of aircraft interior noise depend heavily on empirical correction factors derived from previous flight measurements. However, to design for acceptable interior noise levels and to optimize acoustic treatments, analytical techniques which do not depend on empirical data are needed. This paper describes a computerized interior noise prediction method for light aircraft. An existing analytical program (developed for commercial jets by Cockburn and Jolly in 1968) forms the basis of some modal analysis work which is described. The accuracy of this modal analysis technique for predicting low-frequency coupled acoustic-structural natural frequencies is discussed along with trends indicating the effects of varying parameters such as fuselage length and diameter, structural stiffness, and interior acoustic absorption.
How Conformational Dynamics of DNA Polymerase Select Correct Substrates: Experiments and Simulations
Kirmizialtin, Serdal; Nguyen, Virginia; Johnson, Kenneth A.; Elber, Ron
2012-01-01
Nearly every enzyme undergoes a significant change in structure after binding its substrate. New experimental and theoretical analyses of the role of changes in HIV reverse transcriptase structure in selecting a correct substrate are presented. Atomically detailed simulations using the Milestoning method predict a rate and free energy profile of the conformational change commensurate with experimental data. A large conformational change occurring on a ms timescale locks the correct nucleotide at the active site, but promotes release of a mismatched nucleotide. The positions along the reaction coordinate that decide the yield of the reaction are not determined by the chemical step. Rather, the initial steps of weak substrate binding and protein conformational transition significantly enrich the yield of a reaction with a correct substrate, while the same steps diminish the reaction probability of an incorrect substrate. PMID:22483109
Yan, Yumeng; Wen, Zeyu; Wang, Xinxiang; Huang, Sheng-You
2017-03-01
Protein-protein docking is an important computational tool for predicting protein-protein interactions. With the rapid development of proteomics projects, more and more experimental binding information, ranging from mutagenesis data to three-dimensional structures of protein complexes, is becoming available. Therefore, how to appropriately incorporate the biological information into traditional ab initio docking has been an important issue and challenge in the field of protein-protein docking. To address these challenges, we have developed a Hybrid DOCKing protocol of template-based and template-free approaches, referred to as HDOCK. The basic procedure of HDOCK is to model the structures of individual components based on the template complex by a template-based method if a template is available; otherwise, the component structures will be modeled based on monomer proteins by regular homology modeling. Then, the complex structure of the component models is predicted by traditional protein-protein docking. With the HDOCK protocol, we have participated in the CAPRI experiment for rounds 28-35. Out of the 25 CASP-CAPRI targets for oligomer modeling, our HDOCK protocol predicted correct models for 16 targets, ranking among the top algorithms in this challenge. Our docking method also made correct predictions on other CAPRI challenges such as protein-peptide binding for 6 out of 8 targets and water predictions for 2 out of 2 targets. The advantage of our hybrid docking approach over pure template-based docking was further confirmed by a comparative evaluation on 20 CASP-CAPRI targets. Proteins 2017; 85:497-512. © 2016 Wiley Periodicals, Inc.
Guo, Song; Liu, Chunhua; Zhou, Peng; Li, Yanling
2016-01-01
Tyrosine sulfation is one of the ubiquitous protein posttranslational modifications, where some sulfate groups are added to the tyrosine residues. It plays significant roles in various physiological processes in eukaryotic cells. To explore the molecular mechanism of tyrosine sulfation, one of the prerequisites is to correctly identify possible protein tyrosine sulfation residues. In this paper, a novel method was presented to predict protein tyrosine sulfation residues from primary sequences. By means of informative feature construction and elaborate feature selection and parameter optimization scheme, the proposed predictor achieved promising results and outperformed many other state-of-the-art predictors. Using the optimal features subset, the proposed method achieved mean MCC of 94.41% on the benchmark dataset, and a MCC of 90.09% on the independent dataset. The experimental performance indicated that our new proposed method could be effective in identifying the important protein posttranslational modifications and the feature selection scheme would be powerful in protein functional residues prediction research fields.
Pseudo CT estimation from MRI using patch-based random forest
NASA Astrophysics Data System (ADS)
Yang, Xiaofeng; Lei, Yang; Shu, Hui-Kuo; Rossi, Peter; Mao, Hui; Shim, Hyunsuk; Curran, Walter J.; Liu, Tian
2017-02-01
Recently, MR simulators have gained popularity because they avoid the radiation exposure incurred by the CT simulators used in radiation therapy planning. We propose a method for pseudo CT estimation from MR images based on a patch-based random forest. Patient-specific anatomical features are extracted from the aligned training images and adopted as signatures for each voxel. The most robust and informative features are identified using feature selection to train the random forest. The well-trained random forest is used to predict the pseudo CT of a new patient. This prediction technique was tested with human brain images and the prediction accuracy was assessed using the original CT images. Peak signal-to-noise ratio (PSNR) and feature similarity (FSIM) indexes were used to quantify the differences between the pseudo and original CT images. The experimental results showed the proposed method could accurately generate pseudo CT images from MR images. In summary, we have developed a new pseudo CT prediction method based on patch-based random forest, demonstrated its clinical feasibility, and validated its prediction accuracy. This pseudo CT prediction technique could be a useful tool for MRI-based radiation treatment planning and attenuation correction in a PET/MRI scanner.
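The core regression step can be sketched as below: patch features extracted from an aligned MR image train a random forest that predicts CT intensities. The patch extraction, 2-D synthetic data, and hyperparameters are illustrative only; the authors' pipeline additionally includes feature selection and evaluation against the original CT with PSNR and FSIM.

import numpy as np
from sklearn.ensemble import RandomForestRegressor

def extract_patches(img, size=3):
    # Flatten size x size patches around every interior pixel of a 2-D slice.
    half = size // 2
    patches, centers = [], []
    for i in range(half, img.shape[0] - half):
        for j in range(half, img.shape[1] - half):
            patches.append(img[i - half:i + half + 1, j - half:j + half + 1].ravel())
            centers.append((i, j))
    return np.array(patches), centers

rng = np.random.default_rng(6)
mr = rng.normal(size=(32, 32))                            # synthetic MR slice
ct = 2.0 * mr + rng.normal(scale=0.1, size=mr.shape)      # synthetic aligned CT slice

X, centers = extract_patches(mr)
y = np.array([ct[i, j] for i, j in centers])
forest = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)
pseudo_ct = forest.predict(X)                             # in practice, predict for a new patient
print("mean absolute error on the training slice: %.3f" % np.mean(np.abs(pseudo_ct - y)))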
Li, Wen-xia; Li, Feng; Zhao, Guo-liang; Tang, Shi-jun; Liu, Xiao-ying
2014-12-01
A series of 376 cotton-polyester (PET) blend fabrics was studied with a portable near-infrared (NIR) spectrometer. A NIR semi-quantitative-qualitative calibration model was established by the Partial Least Squares (PLS) method combined with a qualitative identification coefficient. In this process, the PLS method in a quantitative analysis was used as the correction method, and the qualitative identification coefficient was set by the cotton and polyester content of the blend fabrics. Cotton-polyester blend fabrics were identified qualitatively by the model and their relative contents were obtained quantitatively, so the model can be used for semi-quantitative identification analysis. In the course of establishing the model, the noise and baseline drift of the spectra were eliminated by the Savitzky-Golay (S-G) derivative. The influence of waveband selection and of different pre-processing methods on the qualitative calibration model was also studied. The major absorption bands of 100% cotton samples were in the 1400~1600 nm region, those of 100% polyester were around 1600~1800 nm, and the absorption intensity increased with increasing cotton or polyester content. Therefore, the cotton-polyester major absorption region was selected as the base waveband, and the optimal waveband (1100~2500 nm) was found by expanding the waveband in both directions (the correlation coefficient was 0.6, and the number of wavelength points was 934). The validation samples were predicted by the calibration model; the results showed that the model evaluation parameters were optimal in the 1100~2500 nm region, with the combination of the S-G derivative, multiplicative scatter correction (MSC) and mean centering as the pre-processing method. The RC (correlation coefficient of calibration) value was 0.978, the RP (correlation coefficient of prediction) value was 0.940, the SEC (standard error of calibration) value was 1.264, the SEP (standard error of prediction) value was 1.590, and the sample recognition accuracy was up to 93.4%. This showed that cotton-polyester blend fabrics can be predicted by the semi-quantitative-qualitative calibration model.
Solving Upwind-Biased Discretizations: Defect-Correction Iterations
NASA Technical Reports Server (NTRS)
Diskin, Boris; Thomas, James L.
1999-01-01
This paper considers defect-correction solvers for a second order upwind-biased discretization of the 2D convection equation. The following important features are reported: (1) The asymptotic convergence rate is about 0.5 per defect-correction iteration. (2) If the operators involved in defect-correction iterations have different approximation order, then the initial convergence rates may be very slow. The number of iterations required to get into the asymptotic convergence regime might grow on fine grids as a negative power of h. In the case of a second order target operator and a first order driver operator, this number of iterations is roughly proportional to h^(-1/3). (3) If both the operators have the second approximation order, the defect-correction solver demonstrates the asymptotic convergence rate after three iterations at most. The same three iterations are required to converge algebraic error below the truncation error level. A novel comprehensive half-space Fourier mode analysis (which, by the way, can take into account the influence of discretized outflow boundary conditions as well) for the defect-correction method is developed. This analysis explains many phenomena observed in solving non-elliptic equations and provides a close prediction of the actual solution behavior. It predicts the convergence rate for each iteration and the asymptotic convergence rate. As a result of this analysis, a new very efficient adaptive multigrid algorithm solving the discrete problem to within a given accuracy is proposed. Numerical simulations confirm the accuracy of the analysis and the efficiency of the proposed algorithm. The results of the numerical tests are reported.
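The defect-correction iteration analyzed here can be illustrated on a 1-D model problem: a first-order upwind driver operator L1 corrects toward the solution of a second-order upwind-biased target discretization L2 of u_x = f with a Dirichlet inflow value. The model problem, grid, and right-hand side below are assumptions chosen only to show the iteration u <- u + L1^{-1}(f - L2 u).

import numpy as np

def upwind_matrices(n, h):
    # First-order (driver) and second-order upwind-biased (target) operators for u_x = f,
    # with the boundary row enforcing the inflow value u[0].
    L1 = np.zeros((n, n)); L2 = np.zeros((n, n))
    for i in range(1, n):
        L1[i, i], L1[i, i - 1] = 1.0 / h, -1.0 / h
        if i >= 2:
            L2[i, i], L2[i, i - 1], L2[i, i - 2] = 1.5 / h, -2.0 / h, 0.5 / h
        else:
            L2[i, i], L2[i, i - 1] = 1.0 / h, -1.0 / h
    L1[0, 0] = L2[0, 0] = 1.0
    return L1, L2

n, h = 65, 1.0 / 64
x = np.linspace(0.0, 1.0, n)
rhs = np.cos(x); rhs[0] = 0.0          # f = cos(x), inflow value u(0) = 0, so u_exact = sin(x)
exact = np.sin(x)

L1, L2 = upwind_matrices(n, h)
u = np.zeros(n)
for k in range(6):                      # defect-correction iterations: L1 du = f - L2 u
    u = u + np.linalg.solve(L1, rhs - L2 @ u)
    print("iteration %d: max error %.2e" % (k + 1, np.max(np.abs(u - exact))))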
Empirical parameterization of a model for predicting peptide helix/coil equilibrium populations.
Andersen, N. H.; Tong, H.
1997-01-01
A modification of the Lifson-Roig formulation of helix/coil transitions is presented; it (1) incorporates end-capping and coulombic effects (salt bridges, hydrogen bonding, and side-chain interactions with charged termini and the helix dipole), (2) incorporates helix-stabilizing hydrophobic clustering, (3) allows for different inherent termination probabilities of individual residues, and (4) differentiates helix elongation in the first versus subsequent turns of a helix. Each residue is characterized by six parameters governing helix formation. The formulation of the conditional probability of helix initiation and termination that we developed is essentially the same as one presented previously (Shalongo W, Stellwagen E. 1995. Protein Sci 4:1161-1166) and nearly the mathematical equivalent of the new capping formulation incorporated in the model presented by Rohl et al. (1996. Protein Sci 5:2623-2637). Side-chain/side-chain interactions are, in most cases, incorporated as context-dependent modifications of propagation rather than nucleation parameters. An alternative procedure for converting [θ]221 values to experimental fractional helicities (
2010-01-01
Background The binding of peptide fragments of extracellular peptides to class II MHC is a crucial event in the adaptive immune response. Each MHC allotype generally binds a distinct subset of peptides and the enormous number of possible peptide epitopes prevents their complete experimental characterization. Computational methods can utilize the limited experimental data to predict the binding affinities of peptides to class II MHC. Results We have developed the Regularized Thermodynamic Average, or RTA, method for predicting the affinities of peptides binding to class II MHC. RTA accounts for all possible peptide binding conformations using a thermodynamic average and includes a parameter constraint for regularization to improve accuracy on novel data. RTA was shown to achieve higher accuracy, as measured by AUC, than SMM-align on the same data for all 17 MHC allotypes examined. RTA also gave the highest accuracy on all but three allotypes when compared with results from 9 different prediction methods applied to the same data. In addition, the method correctly predicted the peptide binding register of 17 out of 18 peptide-MHC complexes. Finally, we found that suboptimal peptide binding registers, which are often ignored in other prediction methods, made significant contributions of at least 50% of the total binding energy for approximately 20% of the peptides. Conclusions The RTA method accurately predicts peptide binding affinities to class II MHC and accounts for multiple peptide binding registers while reducing overfitting through regularization. The method has potential applications in vaccine design and in understanding autoimmune disorders. A web server implementing the RTA prediction method is available at http://bordnerlab.org/RTA/. PMID:20089173
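The thermodynamic-average step can be sketched as a Boltzmann-weighted (log-sum-exp) combination of per-register binding energies, as below. The energies and temperature are illustrative, and the regularization of parameters that gives RTA its name is not shown.

import numpy as np

def thermodynamic_average_energy(register_energies_kcal, T=298.15):
    # Effective binding free energy G = -RT ln sum_i exp(-E_i / RT) over all candidate
    # binding registers, evaluated with a numerically stable log-sum-exp.
    RT = 0.0019872 * T                           # kcal/mol
    E = np.asarray(register_energies_kcal, dtype=float)
    m = E.min()
    return m - RT * np.log(np.sum(np.exp(-(E - m) / RT)))

energies = np.array([-7.2, -6.8, -3.1, -2.5])    # per-register energies, kcal/mol (illustrative)
print("effective binding free energy: %.2f kcal/mol" % thermodynamic_average_energy(energies))
print("dominant register index:", int(np.argmin(energies)))

In this toy example the second-best register still lowers the effective free energy noticeably below the single best register, mirroring the finding above that suboptimal registers can contribute a substantial share of the total binding energy.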
NASA Astrophysics Data System (ADS)
Moise Famien, Adjoua; Defrance, Dimitri; Sultan, Benjamin; Janicot, Serge; Vrac, Mathieu
2017-04-01
Different CMIP exercises show that simulations of current and future temperature and precipitation are complex and carry a high degree of uncertainty. For example, the African monsoon system is not correctly simulated and most of the CMIP5 models underestimate the precipitation. Global Climate Models (GCMs) therefore show significant systematic biases that require bias correction before they can be used in impact studies. Several bias-correction methods have been developed over the years and increasingly use more complex statistical techniques. The aims of this work are to show the interest of the CDFt (Cumulative Distribution Function transform; Michelangeli et al., 2009) method for reducing the bias of data from 29 CMIP5 GCMs over Africa and to assess the impact of bias-corrected data on crop yield predictions by the end of the 21st century. In this work, we apply the CDFt to daily data covering the period from 1950 to 2099 (Historical and RCP8.5) and correct the climate variables (temperature, precipitation, solar radiation, wind) using the new daily database from the EU project WATer and global CHange (WATCH), available from 1979 to 2013, as reference data. The performance of the method is assessed in several cases. First, data are corrected based on different calibration periods and are compared, on the one hand, with observations to estimate the sensitivity of the method to the calibration period and, on the other hand, with another bias-correction method used in the ISIMIP project. We find that, whatever the calibration period used, CDFt corrects the mean state of the variables well and preserves their trend, as well as daily rainfall occurrence and intensity distributions. However, some differences appear when compared with the outputs obtained with the method used in ISIMIP and show that the quality of the correction is strongly related to the reference data. Secondly, we validate the bias-correction method with agronomic simulations (SARRA-H model; Kouressy et al., 2008) by comparison with FAO crop yield estimates over West Africa. Impact simulations show that the crop model is sensitive to input data and indicate decreasing crop yields by the end of this century. Michelangeli, P. A., Vrac, M., & Loukos, H. (2009). Probabilistic downscaling approaches: Application to wind cumulative distribution functions. Geophysical Research Letters, 36(11). Kouressy, M., Dingkuhn, M., Vaksmann, M., and Heinemann, A. B. (2008). Adaptation to diverse semi-arid environments of sorghum genotypes having different plant type and sensitivity to photoperiod. Agric. Forest Meteorol., http://dx.doi.org/10.1016/j.agrformet.2007.09.009
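A minimal empirical quantile-mapping sketch of the CDF-based correction idea is given below with synthetic rainfall; the actual CDFt transform additionally models how the model's CDF changes between the calibration and projection periods, which is not reproduced here.

import numpy as np

def quantile_map(model_future, model_hist, obs_hist):
    # Replace each model value by the observed value at the same quantile of the
    # historical model distribution (plain quantile mapping; CDFt goes further by
    # also transforming the CDF itself for the future period).
    q = np.searchsorted(np.sort(model_hist), model_future) / float(len(model_hist))
    return np.quantile(obs_hist, np.clip(q, 0.0, 1.0))

rng = np.random.default_rng(7)
obs_hist = rng.gamma(2.0, 3.0, 5000)       # reference daily rainfall (WATCH-like, synthetic)
model_hist = rng.gamma(2.0, 2.0, 5000)     # biased historical GCM rainfall (synthetic)
model_future = rng.gamma(2.2, 2.0, 5000)   # future GCM rainfall (synthetic)
corrected = quantile_map(model_future, model_hist, obs_hist)
print("raw future mean %.2f, bias-corrected mean %.2f" % (model_future.mean(), corrected.mean()))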
A General Simulation Method for Multiple Bodies in Proximate Flight
NASA Technical Reports Server (NTRS)
Meakin, Robert L.
2003-01-01
Methods of unsteady aerodynamic simulation for an arbitrary number of independent bodies flying in close proximity are considered. A novel method to efficiently detect collision contact points is described. A method to compute body trajectories in response to aerodynamic loads, applied loads, and inter-body collisions is also given. The physical correctness of the methods is verified by comparison to a set of analytic solutions. The methods, combined with a Navier-Stokes solver, are used to demonstrate the possibility of predicting the unsteady aerodynamics and flight trajectories of moving bodies that involve rigid-body collisions.
NASA Technical Reports Server (NTRS)
Wornom, S. F.
1971-01-01
This technique has been applied to study such effects on incompressible flow around cylinders at moderate to low Reynolds numbers and for compression ramps at hypersonic Mach numbers by employing a finite difference method to obtain numerical solutions. The results indicate that the technique can be applied successfully in both regimes and does predict the correct trend in regions of large curvature and displacement body effects. It was concluded that curvature corrections should only be attempted in cases where all displacement effects can be fully accounted for.
Conomos, Matthew P; Miller, Michael B; Thornton, Timothy A
2015-05-01
Population structure inference with genetic data has been motivated by a variety of applications in population genetics and genetic association studies. Several approaches have been proposed for the identification of genetic ancestry differences in samples where study participants are assumed to be unrelated, including principal components analysis (PCA), multidimensional scaling (MDS), and model-based methods for proportional ancestry estimation. Many genetic studies, however, include individuals with some degree of relatedness, and existing methods for inferring genetic ancestry fail in related samples. We present a method, PC-AiR, for robust population structure inference in the presence of known or cryptic relatedness. PC-AiR utilizes genome-screen data and an efficient algorithm to identify a diverse subset of unrelated individuals that is representative of all ancestries in the sample. The PC-AiR method directly performs PCA on the identified ancestry representative subset and then predicts components of variation for all remaining individuals based on genetic similarities. In simulation studies and in applications to real data from Phase III of the HapMap Project, we demonstrate that PC-AiR provides a substantial improvement over existing approaches for population structure inference in related samples. We also demonstrate significant efficiency gains, where a single axis of variation from PC-AiR provides better prediction of ancestry in a variety of structure settings than using 10 (or more) components of variation from widely used PCA and MDS approaches. Finally, we illustrate that PC-AiR can provide improved population stratification correction over existing methods in genetic association studies with population structure and relatedness. © 2015 WILEY PERIODICALS, INC.
Paini, Dean R.; Bianchi, Felix J. J. A.; Northfield, Tobin D.; De Barro, Paul J.
2011-01-01
Predicting future species invasions presents significant challenges to researchers and government agencies. Simply considering the vast number of potential species that could invade an area can be insurmountable. One method, recently suggested, which can analyse large datasets of invasive species simultaneously is that of a self organising map (SOM), a form of artificial neural network which can rank species by establishment likelihood. We used this method to analyse the worldwide distribution of 486 fungal pathogens and then validated the method by creating a virtual world of invasive species in which to test the SOM. This novel validation method allowed us to test SOM's ability to rank those species that can establish above those that can't. Overall, we found the SOM highly effective, having on average, a 96–98% success rate (depending on the virtual world parameters). We also found that regions with fewer species present (i.e. 1–10 species) were more difficult for the SOM to generate an accurately ranked list, with success rates varying from 100% correct down to 0% correct. However, we were able to combine the numbers of species present in a region with clustering patterns in the SOM, to further refine confidence in lists generated from these sparsely populated regions. We then used the results from the virtual world to determine confidences for lists generated from the fungal pathogen dataset. Specifically, for lists generated for Australia and its states and territories, the reliability scores were between 84–98%. We conclude that a SOM analysis is a reliable method for analysing a large dataset of potential invasive species and could be used by biosecurity agencies around the world resulting in a better overall assessment of invasion risk. PMID:22016773
NASA Astrophysics Data System (ADS)
Kuzmiakova, Adele; Dillner, Ann M.; Takahama, Satoshi
2016-06-01
A growing body of research on statistical applications for characterization of atmospheric aerosol Fourier transform infrared (FT-IR) samples collected on polytetrafluoroethylene (PTFE) filters (e.g., Russell et al., 2011; Ruthenburg et al., 2014) and a rising interest in analyzing FT-IR samples collected by air quality monitoring networks call for an automated PTFE baseline correction solution. The existing polynomial technique (Takahama et al., 2013) is not scalable to a project with a large number of aerosol samples because it contains many parameters and requires expert intervention. Therefore, the question of how to develop an automated method for baseline correcting hundreds to thousands of ambient aerosol spectra, given the variability in both environmental mixture composition and PTFE baselines, remains open. This study approaches the question by detailing the statistical protocol, which allows for the precise definition of analyte and background subregions, applies nonparametric smoothing splines to reproduce sample-specific PTFE variations, and integrates performance metrics from atmospheric aerosol and blank samples alike in the smoothing parameter selection. Referencing 794 atmospheric aerosol samples from seven Interagency Monitoring of PROtected Visual Environments (IMPROVE) sites collected during 2011, we start by identifying key FT-IR signal characteristics, such as non-negative absorbance or analyte segment transformation, to capture sample-specific transitions between background and analyte. While referring to qualitative properties of the PTFE background, the goal of smoothing spline interpolation is to learn the baseline structure in the background region in order to predict the baseline structure in the analyte region. We then validate the model by comparing smoothing splines baseline-corrected spectra with uncorrected and polynomial baseline (PB)-corrected equivalents via three statistical applications: (1) clustering analysis, (2) functional group quantification, and (3) thermal optical reflectance (TOR) organic carbon (OC) and elemental carbon (EC) predictions. The discrepancy rate for a four-cluster solution is 10 %. For all functional groups but carboxylic COH the discrepancy is ≤ 10 %. Performance metrics obtained from TOR OC and EC predictions (R2 ≥ 0.94, bias ≤ 0.01 µg m-3, and error ≤ 0.04 µg m-3) are on a par with those obtained from uncorrected and PB-corrected spectra. The proposed protocol leads to visually and analytically similar estimates to those generated by the polynomial method. More importantly, the automated solution allows us and future users to evaluate its analytical reproducibility while minimizing reducible user bias. We anticipate the protocol will enable FT-IR researchers and data analysts to quickly and reliably analyze a large amount of data and connect them to a variety of available statistical learning methods to be applied to analyte absorbances isolated in atmospheric aerosol samples.
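As a rough illustration of the spline-based idea described above, the sketch below fits a smoothing spline to user-defined background channels of a single spectrum and subtracts its prediction everywhere. It uses scipy's UnivariateSpline; the analyte wavenumber bounds, smoothing parameter, and synthetic spectrum are assumptions, not the published protocol or its blank-sample-based selection of the smoothing parameter.

```python
# Sketch of spline-based baseline correction for a single FT-IR spectrum (illustrative).
# The background/analyte wavenumber bounds and the smoothing parameter are assumptions,
# not the exact values of the published protocol.
import numpy as np
from scipy.interpolate import UnivariateSpline

def baseline_correct(wavenumber, absorbance, analyte_regions, smoothing=1e-4):
    """Fit a smoothing spline to background channels and subtract it everywhere."""
    background = np.ones_like(wavenumber, dtype=bool)
    for lo, hi in analyte_regions:                      # mask out analyte sub-regions
        background &= ~((wavenumber >= lo) & (wavenumber <= hi))
    order = np.argsort(wavenumber[background])          # spline needs increasing x
    x, y = wavenumber[background][order], absorbance[background][order]
    spline = UnivariateSpline(x, y, s=smoothing * len(x))
    return absorbance - spline(wavenumber)

# Example with synthetic data: a broad PTFE-like baseline plus two analyte bands.
wn = np.linspace(4000, 1000, 1500)
ptfe = 0.02 + 1e-5 * (wn - 2500) ** 2 / 1000
bands = 0.3 * np.exp(-((wn - 2920) / 30) ** 2) + 0.2 * np.exp(-((wn - 1650) / 40) ** 2)
spectrum = ptfe + bands
corrected = baseline_correct(wn, spectrum, analyte_regions=[(2800, 3000), (1500, 1800)])
```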
DOE Office of Scientific and Technical Information (OSTI.GOV)
Albert, J; Labarbe, R; Sterpin, E
2016-06-15
Purpose: To understand the extent to which prompt gamma camera measurements can be used to predict the residual proton range due to setup errors and errors in the calibration curve. Methods: We generated ten variations on a default calibration curve (CC) and ten corresponding range maps (RM). Starting with the default RM, we chose a square array of N beamlets, which were then rotated by a random angle θ and shifted by a random vector s. We added a 5% distal Gaussian noise to each beamlet in order to introduce discrepancies that exist between the ranges predicted from the prompt gamma measurements and those simulated with Monte Carlo algorithms. For each RM, s and θ, along with an offset u in the CC, were optimized using a simple Euclidean distance between the default ranges and the ranges produced by the given RM. Results: The application of our method led to a maximal overrange of 2.0 mm and an underrange of 0.6 mm on average. Compared to the situations where s, θ, and u were ignored, these values were larger: 2.1 mm and 4.3 mm. In order to quantify the need for setup error corrections, we also performed computations in which u was corrected for, but s and θ were not. This yielded 3.2 mm and 3.2 mm. The average computation time for 170 beamlets was 65 seconds. Conclusion: These results emphasize the necessity of correcting for setup errors and for errors in the calibration curve. The simplicity and speed of our method make it a good candidate for implementation as a tool for in-room adaptive therapy. This work also demonstrates that prompt gamma range measurements can indeed be useful in the effort to reduce range errors. Given these results, and barring further refinements, this approach is a promising step towards adaptive proton radiotherapy.
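A highly simplified, two-dimensional sketch of the fitting step is given below: a rotation, a shift, and a calibration offset are recovered by minimizing the Euclidean distance between reference beamlet ranges and the ranges produced by the transformed range map. The range-map shape, noise model, and optimizer are illustrative assumptions rather than the authors' implementation.

```python
# Simplified sketch of fitting a setup rotation/shift and a calibration offset by
# minimizing the Euclidean distance between reference and candidate beamlet ranges.
# The range-map interpolation and parameterization are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize
from scipy.interpolate import RegularGridInterpolator

rng = np.random.default_rng(2)
x = y = np.linspace(-50.0, 50.0, 101)                        # mm grid
true_map = 150.0 + 0.3 * x[:, None] + 0.1 * y[None, :]       # synthetic range map (mm)
range_map = RegularGridInterpolator((x, y), true_map, bounds_error=False, fill_value=None)

pts = np.array([[i, j] for i in np.linspace(-20, 20, 5) for j in np.linspace(-20, 20, 5)])
reference = range_map(pts)                                    # "planned" ranges

def transformed_ranges(params, points):
    theta, sx, sy, u = params
    c, s = np.cos(theta), np.sin(theta)
    rotated = points @ np.array([[c, -s], [s, c]]).T + np.array([sx, sy])
    return range_map(rotated) + u                             # u: calibration-curve offset

# Simulated "measured" ranges: an unknown transform plus 5% distal noise
measured = transformed_ranges([0.05, 3.0, -2.0, 1.5], pts)
measured += rng.normal(0.0, 0.05 * measured.std(), measured.size)

cost = lambda p: np.sum((measured - transformed_ranges(p, pts)) ** 2)
fit = minimize(cost, x0=[0.0, 0.0, 0.0, 0.0], method="Nelder-Mead")
print("recovered (theta, sx, sy, u):", np.round(fit.x, 3))
```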
Space vehicle acoustics prediction improvement for payloads. [space shuttle
NASA Technical Reports Server (NTRS)
Dandridge, R. E.
1979-01-01
The modal analysis method was extensively modified for the prediction of space vehicle noise reduction in the shuttle payload enclosure, and this program was adapted to the IBM 360 computer. The predicted noise reduction levels for two test cases were compared with experimental results to determine the validity of the analytical model for predicting space vehicle payload noise environments in the 10 Hz one-third octave band regime. The prediction approach for the two test cases generally gave reasonable magnitudes and trends when compared with the measured noise reduction spectra. The discrepancies in the predictions could be corrected primarily by improved modeling of the vehicle structural walls and of the enclosed acoustic space to obtain a more accurate assessment of normal modes. Techniques for improving and expanding the noise prediction for a payload environment are also suggested.
Bias correction for selecting the minimal-error classifier from many machine learning models.
Ding, Ying; Tang, Shaowu; Liao, Serena G; Jia, Jia; Oesterreich, Steffi; Lin, Yan; Tseng, George C
2014-11-15
Supervised machine learning is commonly applied in genomic research to construct a classifier from the training data that is generalizable to predict independent testing data. When test datasets are not available, cross-validation is commonly used to estimate the error rate. Many machine learning methods are available, and it is well known that no universally best method exists in general. It has been a common practice to apply many machine learning methods and report the method that produces the smallest cross-validation error rate. Theoretically, such a procedure produces a selection bias. Consequently, many clinical studies with moderate sample sizes (e.g. n = 30-60) risk reporting a falsely small cross-validation error rate that could not be validated later in independent cohorts. In this article, we illustrated the probabilistic framework of the problem and explored the statistical and asymptotic properties. We proposed a new bias correction method based on learning curve fitting by inverse power law (IPL) and compared it with three existing methods: nested cross-validation, weighted mean correction and the Tibshirani-Tibshirani procedure. All methods were compared in simulation datasets, five moderate size real datasets and two large breast cancer datasets. The results showed that IPL outperforms the other methods in bias correction, with smaller variance, and it has the additional advantage of extrapolating error estimates for larger sample sizes, a practical feature for recommending whether more samples should be recruited to improve the classifier accuracy. An R package 'MLbias' and all source files are publicly available. tsenglab.biostat.pitt.edu/software.htm. ctseng@pitt.edu Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
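The learning-curve idea behind the IPL correction can be sketched as follows: cross-validation error rates estimated at several training-set sizes are fitted to err(n) = a·n^(-b) + c, and the fitted curve is extrapolated to larger sample sizes. The data points and starting values below are synthetic placeholders, not results from the study.

```python
# Sketch of inverse-power-law (IPL) learning-curve fitting: cross-validation error
# rates measured at several training-set sizes are fitted to err(n) = a * n**(-b) + c
# and extrapolated to larger sample sizes. Data here are synthetic placeholders.
import numpy as np
from scipy.optimize import curve_fit

def ipl(n, a, b, c):
    return a * n ** (-b) + c

# Hypothetical cross-validation error rates at increasing training sizes
sizes = np.array([20, 30, 40, 50, 60])
errors = np.array([0.38, 0.33, 0.30, 0.285, 0.275])

params, _ = curve_fit(ipl, sizes, errors, p0=[1.0, 0.5, 0.2], maxfev=10000)
a, b, c = params
print(f"asymptotic error estimate (n -> inf): {c:.3f}")
print(f"predicted error at n = 120: {ipl(120, a, b, c):.3f}")
```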
Antarctic contribution to sea level rise observed by GRACE with improved GIA correction
NASA Astrophysics Data System (ADS)
Ivins, Erik R.; James, Thomas S.; Wahr, John; Schrama, Ernst J. O.; Landerer, Felix W.; Simon, Karen M.
2013-06-01
Antarctic volume changes during the past 21 thousand years are smaller than previously thought, and here we construct an ice sheet history that drives a forward model prediction of the glacial isostatic adjustment (GIA) gravity signal. The new model, in turn, should give predictions that are constrained by recent uplift data. The impact of the GIA signal on a Gravity Recovery and Climate Experiment (GRACE) Antarctic mass balance estimate depends on the specific GRACE analysis method used. For the method described in this paper, the GIA contribution to the apparent surface mass change is re-evaluated to be +55±13 Gt/yr by considering a revised ice history model and a parameter search for vertical motion predictions that best fit the GPS observations at 18 high-quality stations. Although the GIA model spans a range of possible Earth rheological structure values, the data are not yet sufficient for solving for a preferred value of upper and lower mantle viscosity nor for a preferred lithospheric thickness. GRACE monthly solutions from the Center for Space Research Release 04 (CSR-RL04) time series from January 2003 to the beginning of January 2012, uncorrected for GIA, yield an ice mass rate of +2.9±29 Gt/yr. The new GIA correction increases the solved-for ice mass imbalance of Antarctica to -57±34 Gt/yr. The revised GIA correction is smaller than past GRACE estimates by about 50 to 90 Gt/yr. The new upper bound to the sea level rise from the Antarctic ice sheet, averaged over the time span 2003.0-2012.0, is about 0.16±0.09 mm/yr.
Juan-Albarracín, Javier; Fuster-Garcia, Elies; Pérez-Girbés, Alexandre; Aparici-Robles, Fernando; Alberich-Bayarri, Ángel; Revert-Ventura, Antonio; Martí-Bonmatí, Luis; García-Gómez, Juan M
2018-06-01
Purpose: To determine if preoperative vascular heterogeneity of glioblastoma is predictive of overall survival of patients undergoing standard-of-care treatment by using an unsupervised multiparametric perfusion-based habitat-discovery algorithm. Materials and Methods: Preoperative magnetic resonance (MR) imaging including dynamic susceptibility-weighted contrast material-enhanced perfusion studies in 50 consecutive patients with glioblastoma were retrieved. Perfusion parameters of glioblastoma were analyzed and used to automatically draw four reproducible habitats that describe the tumor vascular heterogeneity: high-angiogenic and low-angiogenic regions of the enhancing tumor, potentially tumor-infiltrated peripheral edema, and vasogenic edema. Kaplan-Meier and Cox proportional hazard analyses were conducted to assess the prognostic potential of the hemodynamic tissue signature to predict patient survival. Results: Cox regression analysis yielded a significant correlation between patients' survival and maximum relative cerebral blood volume (rCBVmax) and maximum relative cerebral blood flow (rCBFmax) in high-angiogenic and low-angiogenic habitats (P < .01, false discovery rate-corrected P < .05). Moreover, rCBFmax in the potentially tumor-infiltrated peripheral edema habitat was also significantly correlated (P < .05, false discovery rate-corrected P < .05). Kaplan-Meier analysis demonstrated significant differences between the observed survival of populations divided according to the median of the rCBVmax or rCBFmax at the high-angiogenic and low-angiogenic habitats (log-rank test P < .05, false discovery rate-corrected P < .05), with an average survival increase of 230 days. Conclusion: Preoperative perfusion heterogeneity contains relevant information about overall survival in patients who undergo standard-of-care treatment. The hemodynamic tissue signature method automatically describes this heterogeneity, providing a set of vascular habitats with high prognostic capabilities. © RSNA, 2018.
DOE Office of Scientific and Technical Information (OSTI.GOV)
DiLabio, Gino A., E-mail: Gino.DiLabio@nrc.ca; Department of Chemistry, University of British Columbia, Okanagan, 3333 University Way, Kelowna, British Columbia V1V 1V7; Koleini, Mohammad
2014-05-14
Dispersion-correcting potentials (DCPs) are atom-centered Gaussian functions that are applied in a manner that is similar to effective core potentials. Previous work on DCPs has focussed on their use as a simple means of improving the ability of conventional density-functional theory methods to predict the binding energies of noncovalently bonded molecular dimers. We show in this work that DCPs developed for use with the LC-ωPBE functional along with 6-31+G(2d,2p) basis sets are capable of simultaneously improving predicted noncovalent binding energies of van der Waals dimer complexes and covalent bond dissociation enthalpies in molecules. Specifically, the DCPs developed herein for the C, H, N, and O atoms provide binding energies for a set of 66 noncovalently bonded molecular dimers (the "S66" set) with a mean absolute error (MAE) of 0.21 kcal/mol, which represents an improvement of more than a factor of 10 over unadorned LC-ωPBE/6-31+G(2d,2p) and almost a factor of two improvement over LC-ωPBE/6-31+G(2d,2p) used in conjunction with the "D3" pairwise dispersion energy corrections. In addition, the DCPs reduce the MAE of calculated X-H and X-Y (X,Y = C, H, N, O) bond dissociation enthalpies for a set of 40 species from 3.2 kcal/mol obtained with unadorned LC-ωPBE/6-31+G(2d,2p) to 1.6 kcal/mol. Our findings demonstrate that broad improvements to the performance of DFT methods may be achievable through the use of DCPs.
Scheeres, Korine; Knoop, Hans; van der Meer, Jos; Bleijenberg, Gijs
2009-04-01
Effective treatment of chronic fatigue syndrome (CFS) with cognitive behavioural therapy (CBT) relies on a correct classification of so-called 'fluctuating active' versus 'passive' patients. For successful treatment with CBT it is especially important to recognise the passive patients and give them a tailored treatment protocol. In the present study we evaluated whether a CFS patient's physical activity pattern can be assessed most accurately with the 'Activity Pattern Interview' (API), the International Physical Activity Questionnaire (IPAQ) or the CFS-Activity Questionnaire (CFS-AQ). The three instruments were validated against actometers. Actometers are currently the best and most objective instruments for measuring physical activity, but they are too expensive and time-consuming for most clinical practice settings. In total, 226 CFS patients enrolled for CBT answered the API at intake and filled in the two questionnaires. Directly after intake they wore the actometer for two weeks. Based on receiver operating characteristic (ROC) curves, the validity of the three methods was assessed and compared. The API and both questionnaires had acceptable validity (0.64 to 0.71). None of the three instruments was significantly better than the others. The proportion of false predictions was rather high for all three instruments. The IPAQ had the highest proportion of correct passive predictions (sensitivity 70.1%). The validity of all three instruments appeared to be fair, and all showed rather high proportions of false classifications; hence none of the tested instruments could really be called satisfactory. Because the IPAQ proved best at correctly identifying 'passive' CFS patients, which is most relevant to treatment results, it was concluded that the IPAQ is the preferable alternative to an actometer when treating CFS patients in clinical practice.
Nijenhuis, Cynthia M; Huitema, Alwin D R; Marchetti, Serena; Blank, Christian; Haanen, John B A G; van Thienen, Johannes V; Rosing, Hilde; Schellens, Jan H M; Beijnen, Jos H
2016-10-01
Pharmacokinetic monitoring is increasingly becoming an important part of clinical care of tyrosine kinase inhibitor treatment. Vemurafenib is an oral tyrosine kinase inhibitor that inhibits mutated serine/threonine protein kinase B-Raf (BRAF) and is approved for the treatment of adult patients with BRAF V600 mutation-positive unresectable or metastatic melanoma. The aim of this study was to establish the relationship between dried blood spot (DBS) and plasma concentrations of vemurafenib to enable the use of DBS sampling, which is a minimally invasive form of sample collection. In total, 43 paired plasma and DBS samples (in duplicate) were obtained from 8 melanoma patients on vemurafenib therapy and were analyzed using high-performance liquid chromatography-tandem mass spectrometry. Plasma concentrations were predicted from the DBS concentrations using 2 methods: (1) individual hematocrit correction and blood cell-to-plasma partitioning and (2) the calculated slope explaining the relationship between DBS and plasma concentrations (without individual hematocrit correction). Vemurafenib DBS concentrations and plasma concentrations showed a strong correlation (r = 0.964), and the relationship could be described by ([vemurafenib]plasma = [vemurafenib]DBS /0.64). The predicted plasma concentrations were within ±20% of the analyzed plasma concentrations in 97% and 100% of the samples for the methods with and without hematocrit correction, respectively. In conclusion, DBS concentrations and plasma concentrations of vemurafenib are highly correlated. Plasma concentrations can be predicted from DBS concentration using the blood cell-to-plasma partition and the average hematocrit value of this cohort (0.40 L/L). DBS sampling for pharmacokinetic monitoring of vemurafenib treatment can be used in clinical practice. © 2016, The American College of Clinical Pharmacology.
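The two conversions described above can be written as small helper functions. The 0.64 slope and the mean hematocrit of 0.40 L/L are taken from the abstract; the blood cell-to-plasma partition coefficient (rho) and the example concentration below are illustrative placeholders, not reported values.

```python
# Sketch of the two DBS-to-plasma conversions described above. The 0.64 slope and the
# mean hematocrit of 0.40 L/L come from the abstract; the blood cell-to-plasma
# partition coefficient (rho) is an illustrative placeholder, not a reported value.
def plasma_from_dbs_slope(c_dbs, slope=0.64):
    """Method 2: fixed slope relating DBS and plasma concentrations."""
    return c_dbs / slope

def plasma_from_dbs_hematocrit(c_dbs, hematocrit=0.40, rho=0.1):
    """Method 1: correct for hematocrit and blood cell-to-plasma partitioning.
    Assumes C_blood = C_plasma * (1 - Hct + rho * Hct)."""
    return c_dbs / (1.0 - hematocrit + rho * hematocrit)

c_dbs = 32.0  # hypothetical vemurafenib DBS concentration, ug/mL
print(plasma_from_dbs_slope(c_dbs), plasma_from_dbs_hematocrit(c_dbs))
```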
Correcting for Optimistic Prediction in Small Data Sets
Smith, Gordon C. S.; Seaman, Shaun R.; Wood, Angela M.; Royston, Patrick; White, Ian R.
2014-01-01
The C statistic is a commonly reported measure of screening test performance. Optimistic estimation of the C statistic is a frequent problem because of overfitting of statistical models in small data sets, and methods exist to correct for this issue. However, many studies do not use such methods, and those that do correct for optimism use diverse methods, some of which are known to be biased. We used clinical data sets (United Kingdom Down syndrome screening data from Glasgow (1991–2003), Edinburgh (1999–2003), and Cambridge (1990–2006), as well as Scottish national pregnancy discharge data (2004–2007)) to evaluate different approaches to adjustment for optimism. We found that sample splitting, cross-validation without replication, and leave-1-out cross-validation produced optimism-adjusted estimates of the C statistic that were biased and/or associated with greater absolute error than other available methods. Cross-validation with replication, bootstrapping, and a new method (leave-pair-out cross-validation) all generated unbiased optimism-adjusted estimates of the C statistic and had similar absolute errors in the clinical data set. Larger simulation studies confirmed that all 3 methods performed similarly with 10 or more events per variable, or when the C statistic was 0.9 or greater. However, with lower events per variable or lower C statistics, bootstrapping tended to be optimistic but with lower absolute and mean squared errors than both methods of cross-validation. PMID:24966219
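One of the unbiased approaches discussed above, bootstrap optimism correction, can be sketched as follows: the apparent C statistic of a model fitted to the full data set is reduced by the average optimism estimated over bootstrap resamples. The data, model choice, and number of resamples below are illustrative; this is a generic Harrell-style sketch, not the exact procedures compared in the study.

```python
# Sketch of bootstrap optimism correction for the C statistic (AUC): the apparent AUC
# of a model fitted to the full data is reduced by the average optimism estimated over
# bootstrap resamples. Data and model choice are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
X = rng.normal(size=(60, 10))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=60) > 0).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)
apparent = roc_auc_score(y, model.decision_function(X))

optimism = []
for _ in range(200):
    idx = rng.integers(0, len(y), len(y))                # bootstrap resample
    if len(np.unique(y[idx])) < 2:
        continue                                         # skip degenerate resamples
    m = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
    auc_boot = roc_auc_score(y[idx], m.decision_function(X[idx]))
    auc_orig = roc_auc_score(y, m.decision_function(X))  # same model on original data
    optimism.append(auc_boot - auc_orig)

corrected = apparent - np.mean(optimism)
print(f"apparent C = {apparent:.3f}, optimism-corrected C = {corrected:.3f}")
```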
Bias correction of satellite-based rainfall data
NASA Astrophysics Data System (ADS)
Bhattacharya, Biswa; Solomatine, Dimitri
2015-04-01
Limitations in hydro-meteorological data availability in many catchments restrict the possibility of reliable hydrological analyses, especially for near-real-time predictions. However, the variety of satellite-based and meteorological model products for rainfall provides new opportunities. Often the accuracy of these rainfall products, when compared to rain gauge measurements, is not impressive. The systematic differences of these rainfall products from gauge observations can be partially compensated by adopting a bias (error) correction. Many such methods correct the satellite-based rainfall data by comparing their mean value to the mean value of rain gauge data. Refined approaches may first identify a suitable time scale at which the different data products are more comparable and then employ a bias correction at that time scale. More elegant methods use quantile-to-quantile bias correction, which, however, assumes that the available (often limited) sample size is sufficient for comparing probabilities of the different rainfall products. Analysis of rainfall data and understanding of the process of its generation reveal that the bias in different rainfall data varies in space and time. The time aspect is sometimes taken into account by considering the seasonality. In this research we have adopted a bias correction approach that takes into account the variation of rainfall in space and time. A clustering-based approach is employed in which every new data point (e.g. from the Tropical Rainfall Measuring Mission (TRMM)) is first assigned to a specific cluster of that data product; then, by identifying the corresponding cluster of gauge data, the bias correction specific to that cluster is applied. The presented approach considers the space-time variation of rainfall and, as a result, the corrected data are more realistic. Keywords: bias correction, rainfall, TRMM, satellite rainfall
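A minimal sketch of the cluster-wise correction might look like the following: satellite rainfall samples are clustered, a per-cluster multiplicative factor is learned from co-located gauge data, and each new satellite value is corrected with its cluster's factor. The synthetic data and the use of rainfall intensity as the only clustering feature are assumptions; the actual approach would also draw on spatial and temporal features.

```python
# Sketch of a cluster-wise bias correction: satellite (e.g. TRMM-like) rainfall samples
# are clustered, a per-cluster correction factor is learned from co-located gauge data,
# and each new satellite value is corrected with its cluster's factor. All data are
# synthetic placeholders; the real feature set (location, season, intensity) differs.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)
satellite = rng.gamma(2.0, 5.0, size=1000)          # mm/day, synthetic
gauge = np.clip(satellite * (0.7 + 0.3 * (satellite > 10)) + rng.normal(0, 1, 1000), 0, None)

features = satellite.reshape(-1, 1)                 # could also include space/time features
km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(features)

# Multiplicative bias factor per cluster (gauge mean / satellite mean)
factors = {c: gauge[km.labels_ == c].mean() / satellite[km.labels_ == c].mean()
           for c in range(km.n_clusters)}

new_satellite = rng.gamma(2.0, 5.0, size=10)
labels = km.predict(new_satellite.reshape(-1, 1))
corrected = new_satellite * np.array([factors[c] for c in labels])
```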
Philipp, Bodo; Hoff, Malte; Germa, Florence; Schink, Bernhard; Beimborn, Dieter; Mersch-Sundermann, Volker
2007-02-15
Prediction of the biodegradability of organic compounds is an ecologically desirable and economically feasible tool for estimating the environmental fate of chemicals. We combined quantitative structure-activity relationships (QSAR) with the systematic collection of biochemical knowledge to establish rules for the prediction of aerobic biodegradation of N-heterocycles. Validated biodegradation data of 194 N-heterocyclic compounds were analyzed using the MULTICASE method, which delivered two QSAR models based on 17 activating (QSAR 1) and on 16 inactivating molecular fragments (QSAR 2), which were statistically significantly linked to efficient or poor biodegradability, respectively. The percentages of correct classifications were over 99% for both models, and cross-validation resulted in 67.9% (QSAR 1) and 70.4% (QSAR 2) correct predictions. Biochemical interpretation of the activating and inactivating characteristics of the molecular fragments delivered plausible mechanistic interpretations and enabled us to establish the following biodegradation rules: (1) Target sites for amidohydrolases and for cytochrome P450 monooxygenases enhance biodegradation of nonaromatic N-heterocycles. (2) Target sites for molybdenum hydroxylases enhance biodegradation of aromatic N-heterocycles. (3) Target sites for hydration by an urocanase-like mechanism enhance biodegradation of imidazoles. Our complementary approach represents a feasible strategy for generating concrete rules for the prediction of biodegradability of organic compounds.
Multi-jet merged top-pair production including electroweak corrections
NASA Astrophysics Data System (ADS)
Gütschow, Christian; Lindert, Jonas M.; Schönherr, Marek
2018-04-01
We present theoretical predictions for the production of top-quark pairs in association with jets at the LHC including electroweak (EW) corrections. First, we present and compare differential predictions at the fixed-order level for tt̄ and tt̄ + jet production at the LHC, considering the dominant NLO EW corrections of order O(α_s^2 α) and O(α_s^3 α), respectively, together with all additional subleading Born and one-loop contributions. The NLO EW corrections are enhanced at large energies and in particular alter the shape of the top transverse momentum distribution, whose reliable modelling is crucial for many searches for new physics at the energy frontier. Based on the fixed-order results we motivate an approximation of the EW corrections valid at the percent level, which allows us to readily incorporate the EW corrections in the MEPS@NLO framework of Sherpa combined with OpenLoops. Subsequently, we present multi-jet merged parton-level predictions for inclusive top-pair production incorporating NLO QCD + EW corrections to tt̄ and tt̄ + jet. Finally, we compare at the particle level against a recent 8 TeV measurement of the top transverse momentum distribution performed by ATLAS in the lepton + jet channel. We find very good agreement between the Monte Carlo prediction and the data when the EW corrections are included.
Genkawa, Takuma; Shinzawa, Hideyuki; Kato, Hideaki; Ishikawa, Daitaro; Murayama, Kodai; Komiyama, Makoto; Ozaki, Yukihiro
2015-12-01
An alternative baseline correction method for diffuse reflection near-infrared (NIR) spectra, searching region standard normal variate (SRSNV), was proposed. Standard normal variate (SNV) is an effective pretreatment method for baseline correction of diffuse reflection NIR spectra of powder and granular samples; however, its baseline correction performance depends on the NIR region used for SNV calculation. To search for an optimal NIR region for baseline correction using SNV, SRSNV employs moving window partial least squares regression (MWPLSR), and an optimal NIR region is identified based on the root mean square error (RMSE) of cross-validation of the partial least squares regression (PLSR) models with the first latent variable (LV). The performance of SRSNV was evaluated using diffuse reflection NIR spectra of mixture samples consisting of wheat flour and granular glucose (0–100% glucose at 5% intervals). From the obtained NIR spectra of the mixtures in the 10 000–4000 cm⁻¹ region at 4 cm⁻¹ intervals (1501 spectral channels), a series of spectral windows consisting of 80 spectral channels was constructed, and then SNV spectra were calculated for each spectral window. Using these SNV spectra, a series of PLSR models with the first LV for glucose concentration was built. A plot of RMSE versus spectral window position obtained using the PLSR models revealed that the 8680–8364 cm⁻¹ region was optimal for baseline correction using SNV. In the SNV spectra calculated using the 8680–8364 cm⁻¹ region (SRSNV spectra), a remarkable relative intensity change between a band due to wheat flour at 8500 cm⁻¹ and that due to glucose at 8364 cm⁻¹ was observed owing to successful baseline correction using SNV. A PLSR model with the first LV based on the SRSNV spectra yielded a coefficient of determination (R2) of 0.999 and an RMSE of 0.70%, while a PLSR model with three LVs based on SNV spectra calculated over the full spectral region gave an R2 of 0.995 and an RMSE of 2.29%. Additional evaluation of SRSNV was carried out using diffuse reflection NIR spectra of marzipan and corn samples, and PLSR models based on SRSNV spectra showed good prediction results. These evaluation results indicate that SRSNV is effective for baseline correction of diffuse reflection NIR spectra and provides regression models with good prediction accuracy.
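The moving-window search can be sketched as below: SNV is computed within each candidate window, a one-latent-variable PLSR model is cross-validated, and the window with the smallest cross-validation RMSE is selected. The synthetic spectra, window step, and scikit-learn tooling are illustrative assumptions, not the original implementation.

```python
# Sketch of searching-region SNV (SRSNV): a window of spectral channels slides over the
# spectrum, SNV is computed within each window, a 1-LV PLSR model is cross-validated,
# and the window with the lowest RMSECV is selected. Spectra here are synthetic.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

def snv(block):
    return (block - block.mean(axis=1, keepdims=True)) / block.std(axis=1, keepdims=True)

rng = np.random.default_rng(5)
n_samples, n_channels, width = 60, 600, 80
concentration = rng.uniform(0, 100, n_samples)                 # e.g. % glucose
offset = rng.normal(0, 0.05, (n_samples, 1)) * np.ones((1, n_channels))
peak = np.exp(-((np.arange(n_channels) - 300) / 15.0) ** 2)
spectra = offset + np.outer(concentration / 100.0, peak) + rng.normal(0, 0.01, (n_samples, n_channels))

best = None
for start in range(0, n_channels - width + 1, 10):             # moving window
    Xw = snv(spectra[:, start:start + width])
    pred = cross_val_predict(PLSRegression(n_components=1), Xw, concentration, cv=5)
    rmse = np.sqrt(np.mean((pred.ravel() - concentration) ** 2))
    if best is None or rmse < best[0]:
        best = (rmse, start)

print(f"optimal window starts at channel {best[1]}, RMSECV = {best[0]:.2f}%")
```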
Open EFTs, IR effects & late-time resummations: systematic corrections in stochastic inflation
Burgess, C. P.; Holman, R.; Tasinato, G.
2016-01-26
Though simple inflationary models describe the CMB well, their corrections are often plagued by infrared effects that obstruct a reliable calculation of late-time behaviour. Here we adapt to cosmology tools designed to address similar issues in other physical systems with the goal of making reliable late-time inflationary predictions. The main such tool is Open EFTs, which reduce in the inflationary case to Stochastic Inflation plus calculable corrections. We apply this to a simple inflationary model that is complicated enough to have dangerous IR behaviour yet simple enough to allow the inference of late-time behaviour. We find corrections to standard Stochastic Inflationary predictions for the noise and drift, and we find these corrections ensure the IR finiteness of both these quantities. The late-time probability distribution, P(Φ), for super-Hubble field fluctuations is obtained as a function of the noise and drift and so it too is IR finite. We compare our results to other methods (such as large-N models) and find they agree when these models are reliable. In all cases we can explore in detail we find IR secular effects describe the slow accumulation of small perturbations to give a big effect: a significant distortion of the late-time probability distribution for the field. But the energy density associated with this is only of order H⁴ at late times and so does not generate a dramatic gravitational back-reaction.
Open EFTs, IR effects & late-time resummations: systematic corrections in stochastic inflation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burgess, C. P.; Holman, R.; Tasinato, G.
Though simple inflationary models describe the CMB well, their corrections are often plagued by infrared effects that obstruct a reliable calculation of late-time behaviour. Here we adapt to cosmology tools designed to address similar issues in other physical systems with the goal of making reliable late-time inflationary predictions. The main such tool is Open EFTs, which reduce in the inflationary case to Stochastic Inflation plus calculable corrections. We apply this to a simple inflationary model that is complicated enough to have dangerous IR behaviour yet simple enough to allow the inference of late-time behaviour. We find corrections to standard Stochastic Inflationary predictions for the noise and drift, and we find these corrections ensure the IR finiteness of both these quantities. The late-time probability distribution, P(Φ), for super-Hubble field fluctuations is obtained as a function of the noise and drift and so it too is IR finite. We compare our results to other methods (such as large-N models) and find they agree when these models are reliable. In all cases we can explore in detail we find IR secular effects describe the slow accumulation of small perturbations to give a big effect: a significant distortion of the late-time probability distribution for the field. But the energy density associated with this is only of order H⁴ at late times and so does not generate a dramatic gravitational back-reaction.
Adriaens, E; Guest, R; Willoughby, J A; Fochtman, P; Kandarova, H; Verstraelen, S; Van Rompay, A R
2018-06-01
Assessment of ocular irritancy is an international regulatory requirement in the safety evaluation of industrial and consumer products. Although many in vitro ocular irritation assays exist, alone they are incapable of fully categorizing chemicals. The objective of the CEFIC-LRI-AIMT6-VITO CON4EI (CONsortium for in vitro Eye Irritation testing strategy) project was to develop tiered testing strategies for eye irritation assessment that can lead to complete replacement of the in vivo Draize rabbit eye test (OECD TG 405). A set of 80 reference chemicals was tested with seven test methods, one of which was the Slug Mucosal Irritation (SMI) test method. The method measures the amount of mucus produced (MP) during a single 1-hour contact with a 1% and a 10% dilution of the chemical. Based on the total MP, a classification (Cat 1, Cat 2, or No Cat) is predicted. The SMI test method correctly identified 65.8% of the Cat 1 chemicals with a specificity of 90.5% (a low over-prediction rate for in vivo Cat 2 and No Cat chemicals). Mispredictions were predominantly unidirectional towards lower classifications, with 26.7% of the liquids and 40% of the solids being underpredicted. In general, the performance was better for liquids than for solids, with 76.5% vs 57.1% (Cat 1), 61.5% vs 50% (Cat 2), and 87.5% vs 85.7% (No Cat) identified correctly, respectively. Copyright © 2017 Elsevier Ltd. All rights reserved.
Anesthetic level prediction using a QCM based E-nose.
Saraoğlu, H M; Ozmen, A; Ebeoğlu, M A
2008-06-01
Anesthetic level measurement is a real-time process. This paper presents a new method to measure anesthesia level in hospital operating rooms using a QCM-based E-nose. The E-nose system contains an array of eight QCM sensors with different coatings. In this work, the sensor with the most linear response is selected from the array and used in the experiments. The sensor response time was observed to be about 15 min using the classical method, which is impractical for on-line anesthetic level detection during surgery. The sensor transition data are therefore analyzed to reach a decision earlier than the classical method allows. As a result, it is found that the slope of the transition data gives valuable information for predicting the anesthetic level. With this new method, we were able to determine the correct anesthetic level within 100 s.
Computational Prediction of Protein-Protein Interactions
Ehrenberger, Tobias; Cantley, Lewis C.; Yaffe, Michael B.
2015-01-01
The prediction of protein-protein interactions and kinase-specific phosphorylation sites on individual proteins is critical for correctly placing proteins within signaling pathways and networks. The importance of this type of annotation continues to increase with the continued explosion of genomic and proteomic data, particularly with emerging data categorizing posttranslational modifications on a large scale. A variety of computational tools are available for this purpose. In this chapter, we review the general methodologies for these types of computational predictions and present a detailed user-focused tutorial of one such method and computational tool, Scansite, which is freely available to the entire scientific community over the Internet. PMID:25859943
Analytical methods to predict liquid congealing in ram air heat exchangers during cold operation
NASA Astrophysics Data System (ADS)
Coleman, Kenneth; Kosson, Robert
1989-07-01
Ram air heat exchangers used to cool liquids such as lube oils or Ethylene-Glycol/water solutions can be subject to congealing in very cold ambients, resulting in a loss of cooling capability. Two-dimensional, transient analytical models have been developed to explore this phenomenon with both continuous and staggered fin cores. Staggered fin predictions are compared to flight test data from the E-2C Allison T56 engine lube oil system during winter conditions. For simpler calculations, a viscosity ratio correction was introduced and found to provide reasonable cold ambient performance predictions for the staggered fin core, using a one-dimensional approach.
Transition Studies on a Swept-Wing Model
NASA Technical Reports Server (NTRS)
Saric, William S.
1996-01-01
The present investigation contributes to the understanding of boundary-layer stability and transition by providing detailed measurements of carefully-produced stationary crossflow vortices. It is clear that a successful prediction of transition in swept-wing flows must include an understanding of the detailed physics involved. Receptivity and nonlinear effects must not be ignored. Linear stability theory correctly predicts the expected wavelengths and mode shapes for stationary crossflow, but fails to predict the growth rates, even for low amplitudes. As new computational and analytical methods are developed to deal with three-dimensional boundary layers, the data provided by this experiment will serve as a useful benchmark for comparison.
Predictive sensor method and apparatus
NASA Technical Reports Server (NTRS)
Cambridge, Vivien J.; Koger, Thomas L.
1993-01-01
A microprocessor and electronics package employing predictive methodology was developed to accelerate the response time of slowly responding hydrogen sensors. The system developed improved sensor response time from approximately 90 seconds to 8.5 seconds. The microprocessor works in real time, providing accurate hydrogen concentrations corrected for fluctuations in sensor output resulting from changes in atmospheric pressure and temperature. Following the successful development of the hydrogen sensor system, the system and predictive methodology were adapted to a commercial medical thermometer probe. Results of the experiment indicate that, with some customization of hardware and software, response time improvements are possible for medical thermometers as well as other slowly responding sensors.
Correction on the distortion of Scheimpflug imaging for dynamic central corneal thickness
NASA Astrophysics Data System (ADS)
Li, Tianjie; Tian, Lei; Wang, Like; Hon, Ying; Lam, Andrew K. C.; Huang, Yifei; Wang, Yuanyuan; Zheng, Yongping
2015-05-01
The measurement of central corneal thickness (CCT) is important in ophthalmology. Most studies have concerned its value at the normal, unloaded state, while few have examined its dynamic change. The Corvis ST is the only commercial device currently available to visualize the two-dimensional image of dynamic corneal profiles during an air puff indentation. However, the directly observed CCT involves the Scheimpflug distortion, which can mislead clinical diagnosis. This study aimed to correct the distortion for better measurement of the dynamic CCT. The optical path was first derived to account for the factors influencing the use of the Corvis ST. A correction method was then proposed to estimate the CCT at any time during air puff indentation. Simulation results demonstrated the feasibility of the intuitive calibration for measuring the stationary CCT and indicated the necessity of correction under air puff. Experiments on three contact lenses and four human corneas verified the prediction that the CCT would be underestimated when the calibration was improperly conducted for air, and overestimated when it was conducted on contact lenses made of polymethylmethacrylate. Using the proposed method, the CCT was observed to increase by 66±34 μm at highest concavity in 48 normal human corneas.
Pile-up correction by Genetic Algorithm and Artificial Neural Network
NASA Astrophysics Data System (ADS)
Kafaee, M.; Saramad, S.
2009-08-01
Pile-up distortion is a common problem for high-counting-rate radiation spectroscopy in many fields such as industrial, nuclear and medical applications. It is possible to reduce pulse pile-up using hardware-based pile-up rejection. However, this phenomenon may not be eliminated completely by this approach, and the spectrum distortion caused by pile-up rejection can be increased as well. In addition, inaccurate correction or rejection of pile-up artifacts in applications such as energy dispersive X-ray (EDX) spectrometers can lead to loss of counts, give poor quantitative results and even cause false element identification. Therefore, it is highly desirable to use software-based models to predict and correct any recognized pile-up signals in data acquisition systems. The present paper describes two new intelligent approaches for pile-up correction: the Genetic Algorithm (GA) and Artificial Neural Networks (ANNs). The validation and testing results of these new methods have been compared, showing excellent agreement with data measured with a 60Co source and a NaI detector. Monte Carlo simulation of these new intelligent algorithms also shows their advantages over hardware-based pulse pile-up rejection methods.
Turboprop IDEAL: a motion-resistant fat-water separation technique.
Huo, Donglai; Li, Zhiqiang; Aboussouan, Eric; Karis, John P; Pipe, James G
2009-01-01
Suppression of the fat signal in MRI is very important for many clinical applications. Multi-point water-fat separation methods, such as IDEAL (Iterative Decomposition of water and fat with Echo Asymmetry and Least-squares estimation), can robustly separate water and fat signal, but inevitably increase scan time, making separated images more easily affected by patient motions. PROPELLER (Periodically Rotated Overlapping ParallEL Lines with Enhanced Reconstruction) and Turboprop techniques offer an effective approach to correct for motion artifacts. By combining these techniques together, we demonstrate that the new TP-IDEAL method can provide reliable water-fat separation with robust motion correction. The Turboprop sequence was modified to acquire source images, and motion correction algorithms were adjusted to assure the registration between different echo images. Theoretical calculations were performed to predict the optimal shift and spacing of the gradient echoes. Phantom images were acquired, and results were compared with regular FSE-IDEAL. Both T1- and T2-weighted images of the human brain were used to demonstrate the effectiveness of motion correction. TP-IDEAL images were also acquired for pelvis, knee, and foot, showing great potential of this technique for general clinical applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sonzogni, A. A.; McCutchan, E. A.; Johnson, T. D.
Fission yields form an integral part of the prediction of antineutrino spectra generated by nuclear reactors, but little attention has been paid to the quality and reliability of the data used in current calculations. Following a critical review of the thermal and fast ENDF/B-VII.1 235U fission yields, deficiencies are identified and improved yields are obtained, based on corrections of erroneous yields, consistency between decay and fission yield data, and updated isomeric ratios. These corrected yields are used to calculate antineutrino spectra using the summation method. An anomalous value for the thermal fission yield of 86Ge generates an excess of antineutrinos at 5–7 MeV, a feature which is no longer present when the corrected yields are used. Thermal spectra calculated with two distinct fission yield libraries (corrected ENDF/B and JEFF) differ by up to 6% in the 0–7 MeV energy window, allowing for a basic estimate of the uncertainty involved in the fission yield component of summation calculations. Lastly, the fast neutron antineutrino spectrum is calculated, which at the moment can only be obtained with the summation method and may be relevant for short baseline reactor experiments using highly enriched uranium fuel.
An analysis of USSPACECOM's space surveillance network sensor tasking methodology
NASA Astrophysics Data System (ADS)
Berger, Jeff M.; Moles, Joseph B.; Wilsey, David G.
1992-12-01
This study provides the basis for the development of a cost/benefit assessment model to determine the effects of alterations to the Space Surveillance Network (SSN) on orbital element (OE) set accuracy. It provides a review of current methods used by NORAD and the SSN to gather and process observations, an alternative to the current Gabbard classification method, and the development of a model to determine the effects of observation rate and correction interval on OE set accuracy. The proposed classification scheme is based on satellite J2 perturbations. Specifically, classes were established based on mean motion, eccentricity, and inclination, since J2 perturbation effects are functions of only these elements. Model development began by creating representative sensor observations using a highly accurate orbital propagation model. These observations were compared to predicted observations generated using the NORAD Simplified General Perturbation (SGP4) model and differentially corrected using a Bayes sequential estimation algorithm. A 10-run Monte Carlo analysis was performed with this model on 12 satellites using 16 different observation rate/correction interval combinations. An ANOVA and confidence interval analysis of the results shows that this model does demonstrate the differences in steady-state position error based on varying observation rate and correction interval.
Hu, Min-Chun; Cheng, Ming-Hsun; Lan, Kun-Chan
2016-01-01
An automatic tongue diagnosis framework is proposed to analyze tongue images taken by smartphones. Unlike those used in conventional tongue diagnosis systems, our input tongue images are usually of low resolution and taken under unknown lighting conditions. Consequently, existing tongue diagnosis methods cannot be directly applied to give accurate results. We use an SVM (support vector machine) to predict the lighting condition and the corresponding color correction matrix according to the color difference of images taken with and without flash. We also modify the state-of-the-art approach to fur and fissure detection in tongue images by taking hue information into consideration and adding a denoising step. Our method is able to correct the color of tongue images under different lighting conditions (e.g. fluorescent, incandescent, and halogen illuminants) and provides better accuracy in tongue feature detection with less processing complexity than the prior work. In this work, we proposed an automatic tongue diagnosis framework which can be applied to smartphones. Unlike prior work, which can only operate in a controlled environment, our system can adapt to different lighting conditions by employing a novel color correction parameter estimation scheme.
NASA Astrophysics Data System (ADS)
Jia, Song; Xu, Tian-he; Sun, Zhang-zhen; Li, Jia-jing
2017-02-01
UT1-UTC is an important part of the Earth Orientation Parameters (EOP). High-precision predictions of UT1-UTC play a key role in practical applications such as deep space exploration, spacecraft tracking, and satellite navigation and positioning. In this paper, a new prediction method combining the Grey Model (GM(1, 1)) and the Autoregressive Integrated Moving Average (ARIMA) model is developed. The main idea is as follows. First, the UT1-UTC data are preprocessed by removing leap seconds and the Earth's zonal harmonic tidal terms to obtain UT1R-TAI data. Periodic terms are estimated and removed by least squares to obtain UT2R-TAI. Then the linear terms of the UT2R-TAI data are modeled by GM(1, 1), and the residual terms are modeled by ARIMA. Finally, the UT2R-TAI prediction is performed based on the combined GM(1, 1) and ARIMA model, and the UT1-UTC predictions are obtained by adding back the corresponding periodic terms, the leap second correction and the Earth's zonal harmonic tidal correction. The results show that the proposed model can be used to predict UT1-UTC effectively, with higher medium- and long-term (32 to 360 days) accuracy than that of LS + AR, LS + MAR and WLS + MAR.
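The GM(1, 1) + ARIMA combination can be sketched as follows: a grey model captures the slowly varying trend, an ARIMA model is fitted to its residuals, and forecasts are the sum of the two. The synthetic series below stands in for UT2R-TAI, the preprocessing described above (leap seconds, zonal tides, periodic terms) is omitted, and the ARIMA order is an arbitrary illustrative choice.

```python
# Sketch of the GM(1,1) + ARIMA combination: a grey model captures the slowly varying
# trend and an ARIMA model is fitted to its residuals; forecasts are the sum of both.
# The series is a synthetic placeholder and the real preprocessing is omitted.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

def gm11_fit_predict(x, n_forecast):
    """Classic GM(1,1): fit on series x, return in-sample fit plus n_forecast steps."""
    x1 = np.cumsum(x)                                    # accumulated generating operation
    z = 0.5 * (x1[1:] + x1[:-1])                         # background values
    B = np.column_stack([-z, np.ones(len(z))])
    a, b = np.linalg.lstsq(B, x[1:], rcond=None)[0]
    k = np.arange(len(x) + n_forecast)
    x1_hat = (x[0] - b / a) * np.exp(-a * k) + b / a
    return np.concatenate([[x[0]], np.diff(x1_hat)])     # back to the original series

rng = np.random.default_rng(6)
t = np.arange(400)
series = 1.0 + 0.002 * t + 1e-6 * t**2 + 0.002 * rng.standard_normal(400)  # synthetic drift

horizon = 60
trend = gm11_fit_predict(series, horizon)
residuals = series - trend[:len(series)]
arima = ARIMA(residuals, order=(2, 0, 1)).fit()
forecast = trend[len(series):] + arima.forecast(steps=horizon)
```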
Komsa, Darya N; Staroverov, Viktor N
2016-11-08
Standard density-functional approximations often incorrectly predict that heteronuclear diatomic molecules dissociate into fractionally charged atoms. We demonstrate that these spurious charges can be eliminated by adapting the shape-correction method for Kohn-Sham potentials that was originally introduced to improve Rydberg excitation energies [Phys. Rev. Lett. 2012, 108, 253005]. Specifically, we show that if a suitably determined fraction of electron charge is added to or removed from a frontier Kohn-Sham orbital level, the approximate Kohn-Sham potential of a stretched molecule self-corrects by developing a semblance of step structure; if this potential is used to obtain the electron density of the neutral molecule, charge delocalization is blocked and spurious fractional charges disappear beyond a certain internuclear distance.
Flores, David I; Sotelo-Mundo, Rogerio R; Brizuela, Carlos A
2014-01-01
The automatic identification of catalytic residues still remains an important challenge in structural bioinformatics. Sequence-based methods are good alternatives when the query shares a high percentage of identity with a well-annotated enzyme. However, when the homology is not apparent, which occurs with many structures from the structural genome initiative, structural information should be exploited. A local structural comparison is preferred to a global structural comparison when predicting functional residues. CMASA is a recently proposed method for predicting catalytic residues based on a local structure comparison. The method achieves high accuracy and a high value for the Matthews correlation coefficient. However, point substitutions or a lack of relevant data strongly affect the performance of the method. In the present study, we propose a simple extension to the CMASA method to overcome this difficulty. Extensive computational experiments are shown as proof of concept instances, as well as for a few real cases. The results show that the extension performs well when the catalytic site contains mutated residues or when some residues are missing. The proposed modification could correctly predict the catalytic residues of a mutant thymidylate synthase, 1EVF. It also successfully predicted the catalytic residues for 3HRC despite the lack of information for a relevant side chain atom in the PDB file.
Palmer, David S; Frolov, Andrey I; Ratkova, Ekaterina L; Fedorov, Maxim V
2010-12-15
We report a simple universal method to systematically improve the accuracy of hydration free energies calculated using an integral equation theory of molecular liquids, the 3D reference interaction site model. A strong linear correlation is observed between the difference of the experimental and (uncorrected) calculated hydration free energies and the calculated partial molar volume for a data set of 185 neutral organic molecules from different chemical classes. By using the partial molar volume as a linear empirical correction to the calculated hydration free energy, we obtain predictions of hydration free energies in excellent agreement with experiment (R = 0.94, σ = 0.99 kcal mol⁻¹ for a test set of 120 organic molecules).
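The correction amounts to a linear fit, sketched below: the error of the uncorrected hydration free energy is regressed on the calculated partial molar volume, and the fitted line is added back as a correction. All numbers are synthetic placeholders standing in for the 185-molecule data set.

```python
# Sketch of the partial-molar-volume (PMV) correction: the error of the uncorrected
# 3D-RISM hydration free energy is regressed linearly on the calculated PMV, and the
# fitted line is then used as an additive correction. Values below are synthetic.
import numpy as np

rng = np.random.default_rng(7)
pmv = rng.uniform(50, 300, 185)                        # calculated partial molar volumes (cm^3/mol)
dg_exp = rng.normal(-5, 3, 185)                        # "experimental" hydration free energies
dg_calc = dg_exp + 0.03 * pmv + 1.0 + rng.normal(0, 0.5, 185)   # uncorrected, PMV-biased

a, b = np.polyfit(pmv, dg_exp - dg_calc, 1)            # error ~ a*PMV + b
dg_corrected = dg_calc + a * pmv + b                   # apply the linear correction

rmse = np.sqrt(np.mean((dg_corrected - dg_exp) ** 2))
print(f"slope = {a:.3f} kcal/mol per cm^3/mol, intercept = {b:.2f}, RMSE = {rmse:.2f} kcal/mol")
```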
NASA Astrophysics Data System (ADS)
Garrido Torres, José A.; Ramberger, Benjamin; Früchtl, Herbert A.; Schaub, Renald; Kresse, Georg
2017-11-01
The adsorption energy of benzene on various metal substrates is predicted using the random phase approximation (RPA) for the correlation energy. Agreement with available experimental data is systematically better than 10% for both coinage and reactive metals. The results are also compared with more approximate methods, including van der Waals density functional theory (DFT), as well as dispersion-corrected DFT functionals. Although dispersion-corrected DFT can yield accurate results, for instance, on coinage metals, the adsorption energies are clearly overestimated on more reactive transition metals. Furthermore, coverage dependent adsorption energies are well described by the RPA. This shows that for the description of aromatic molecules on metal surfaces further improvements in density functionals are necessary, or more involved many-body methods such as the RPA are required.
First principles study of pressure induced polymorphic phase transition in KNO3
NASA Astrophysics Data System (ADS)
Yedukondalu, N.; Vaitheeswaran, G.
2015-06-01
We report the structural, elastic, electronic, and vibrational properties of polymorphic phases II and III of KNO3 based on density functional theory (DFT). Using the semi-empirical dispersion correction (DFT-D2) method, we predicted the correct thermodynamic ground state of KNO3, and the obtained ground state properties of the polymorphs are in good agreement with experiment. We further used this method to calculate the elastic constants, the IR and Raman spectra, and the vibrational frequencies and their assignments for these polymorphs. The calculated Tran Blaha-modified Becke Johnson (TB-mBJ) electronic structure shows that both polymorphic phases are direct band gap insulators with mixed ionic and covalent bonding. The TB-mBJ band gaps are also improved over those from standard DFT functionals and are comparable with the available experiments.
Measures of Kindergarten Spelling and Their Relations to Later Spelling Performance.
Treiman, Rebecca; Kessler, Brett; Pollo, Tatiana Cury; Byrne, Brian; Olson, Richard K
2016-01-01
Learning the orthographic forms of words is important for both spelling and reading. To determine whether some methods of scoring children's early spellings predict later spelling performance better than do other methods, we analyzed data from 374 U.S. and Australian children who took a 10-word spelling test at the end of kindergarten (mean age 6 years, 2 months) and a standardized spelling test approximately two years later. Surprisingly, scoring methods that took account of phonological plausibility did not outperform methods that were based only on orthographic correctness. The scoring method that is most widely used in research with young children, which allots a certain number of points to each word and which considers both orthographic and phonological plausibility, did not rise to the top as a predictor. Prediction of Grade 2 spelling performance was improved to a small extent by considering children's tendency to reverse letters in kindergarten.
Measures of Kindergarten Spelling and Their Relations to Later Spelling Performance
Treiman, Rebecca; Kessler, Brett; Pollo, Tatiana Cury; Byrne, Brian; Olson, Richard K.
2016-01-01
Learning the orthographic forms of words is important for both spelling and reading. To determine whether some methods of scoring children’s early spellings predict later spelling performance better than do other methods, we analyzed data from 374 U.S. and Australian children who took a 10-word spelling test at the end of kindergarten (mean age 6 years, 2 months) and a standardized spelling test approximately two years later. Surprisingly, scoring methods that took account of phonological plausibility did not outperform methods that were based only on orthographic correctness. The scoring method that is most widely used in research with young children, which allots a certain number of points to each word and which considers both orthographic and phonological plausibility, did not rise to the top as a predictor. Prediction of Grade 2 spelling performance was improved to a small extent by considering children’s tendency to reverse letters in kindergarten. PMID:27761101
Quicksilver: Fast predictive image registration - A deep learning approach.
Yang, Xiao; Kwitt, Roland; Styner, Martin; Niethammer, Marc
2017-09-01
This paper introduces Quicksilver, a fast deformable image registration method. Quicksilver registration for image-pairs works by patch-wise prediction of a deformation model based directly on image appearance. A deep encoder-decoder network is used as the prediction model. While the prediction strategy is general, we focus on predictions for the Large Deformation Diffeomorphic Metric Mapping (LDDMM) model. Specifically, we predict the momentum-parameterization of LDDMM, which facilitates a patch-wise prediction strategy while maintaining the theoretical properties of LDDMM, such as guaranteed diffeomorphic mappings for sufficiently strong regularization. We also provide a probabilistic version of our prediction network which can be sampled during the testing time to calculate uncertainties in the predicted deformations. Finally, we introduce a new correction network which greatly increases the prediction accuracy of an already existing prediction network. We show experimental results for uni-modal atlas-to-image as well as uni-/multi-modal image-to-image registrations. These experiments demonstrate that our method accurately predicts registrations obtained by numerical optimization, is very fast, achieves state-of-the-art registration results on four standard validation datasets, and can jointly learn an image similarity measure. Quicksilver is freely available as an open-source software. Copyright © 2017 Elsevier Inc. All rights reserved.
Evaluating approaches to find exon chains based on long reads.
Kuosmanen, Anna; Norri, Tuukka; Mäkinen, Veli
2018-05-01
Transcript prediction can be modeled as a graph problem where exons are modeled as nodes and reads spanning two or more exons are modeled as exon chains. Pacific Biosciences third-generation sequencing technology produces significantly longer reads than earlier second-generation sequencing technologies, which gives valuable information about longer exon chains in a graph. However, with the high error rates of third-generation sequencing, aligning long reads correctly around the splice sites is a challenging task. Incorrect alignments lead to spurious nodes and arcs in the graph, which in turn lead to incorrect transcript predictions. We survey several approaches to find the exon chains corresponding to long reads in a splicing graph, and experimentally study the performance of these methods using simulated data to allow for sensitivity/precision analysis. Our experiments show that short reads from second-generation sequencing can be used to significantly improve exon chain correctness either by error-correcting the long reads before splicing graph creation, or by using them to create a splicing graph on which the long-read alignments are then projected. We also study the memory and time consumption of various modules, and show that accurate exon chains lead to significantly increased transcript prediction accuracy. The simulated data and in-house scripts used for this article are available at http://www.cs.helsinki.fi/group/gsa/exon-chains/exon-chains-bib.tar.bz2.
Liu, Ya; Pan, Xianzhang; Wang, Changkun; Li, Yanli; Shi, Rongjie
2015-01-01
Robust models for predicting soil salinity that use visible and near-infrared (vis–NIR) reflectance spectroscopy are needed to better quantify soil salinity in agricultural fields. Currently available models are not sufficiently robust for variable soil moisture contents. Thus, we used external parameter orthogonalization (EPO), which effectively projects spectra onto the subspace orthogonal to unwanted variation, to remove the variations caused by an external factor, e.g., the influences of soil moisture on spectral reflectance. In this study, 570 spectra between 380 and 2400 nm were obtained from soils with various soil moisture contents and salt concentrations in the laboratory; 3 soil types × 10 salt concentrations × 19 soil moisture levels were used. To examine the effectiveness of EPO, we compared the partial least squares regression (PLSR) results established from spectra with and without EPO correction. The EPO method effectively removed the effects of moisture, and the accuracy and robustness of the soil salt contents (SSCs) prediction model, which was built using the EPO-corrected spectra under various soil moisture conditions, were significantly improved relative to the spectra without EPO correction. This study contributes to the removal of soil moisture effects from soil salinity estimations when using vis–NIR reflectance spectroscopy and can assist others in quantifying soil salinity in the future. PMID:26468645
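To make the projection step concrete, here is a minimal numpy sketch of an EPO-style correction under its usual formulation (the difference-spectra input, component count, and variable names are assumptions, not the authors' exact implementation):

```python
import numpy as np

def epo_projection(difference_spectra, n_components):
    """Build an EPO projection matrix from difference spectra that capture the
    unwanted (moisture-related) variation; the leading right-singular vectors
    span the subspace to be removed."""
    _, _, vt = np.linalg.svd(difference_spectra, full_matrices=False)
    v = vt[:n_components].T                  # (n_wavelengths, n_components)
    return np.eye(v.shape[0]) - v @ v.T      # projector onto the orthogonal complement

# Usage sketch (arrays are assumed): correct the spectra, then fit PLSR on them.
# X_epo = X @ epo_projection(X_moist - X_dry, n_components=3)
```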
DOE Office of Scientific and Technical Information (OSTI.GOV)
Knill, C; Wayne State University School of Medicine, Detroit, MI; Snyder, M
Purpose: PTW’s Octavius 1000 SRS array performs IMRT QA measurements with liquid-filled ionization chambers (LICs). Collection efficiencies of LICs have been shown to change during IMRT delivery as a function of LINAC pulse frequency and pulse dose, which affects QA results. In this study, two methods were developed to correct changes in collection efficiencies during IMRT QA measurements, and the effects of these corrections on QA pass rates were compared. Methods: For the first correction, Matlab software was developed that calculates pulse frequency and pulse dose for each detector, using measurement and DICOM RT Plan files. Pulse information is converted to collection efficiency and measurements are corrected by multiplying detector dose by ratios of calibration to measured collection efficiencies. For the second correction, MU/min in daily 1000 SRS calibration was chosen to match average MU/min of the VMAT plan. The usefulness of the derived corrections was evaluated using 6MV and 10FFF SBRT RapidArc plans delivered to the OCTAVIUS 4D system using a TrueBeam equipped with an HD-MLC. Effects of the two corrections on QA results were examined by performing 3D gamma analysis comparing predicted to measured dose, with and without corrections. Results: After complex Matlab corrections, average 3D gamma pass rates improved by [0.07%,0.40%,1.17%] for 6MV and [0.29%,1.40%,4.57%] for 10FFF using [3%/3mm,2%/2mm,1%/1mm] criteria. Maximum changes in gamma pass rates were [0.43%,1.63%,3.05%] for 6MV and [1.00%,4.80%,11.2%] for 10FFF using [3%/3mm,2%/2mm,1%/1mm] criteria. On average, pass rates of simple daily calibration corrections were within 1% of complex Matlab corrections. Conclusion: Ion recombination effects can potentially be clinically significant for OCTAVIUS 1000 SRS measurements, especially for higher pulse dose unflattened beams when using tighter gamma tolerances. Matching daily 1000 SRS calibration MU/min to average planned MU/min is a simple correction that greatly reduces ion recombination effects, improving measurement accuracy and gamma pass rates. This work was supported by PTW.
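The first correction described above reduces, numerically, to scaling each detector reading by a ratio of collection efficiencies. A hedged sketch of that final step (the conversion from pulse dose and pulse frequency to collection efficiency is assumed to happen upstream, and the function name is invented):

```python
import numpy as np

def recombination_corrected_dose(measured_dose, ce_calibration, ce_measurement):
    """Scale each detector reading by the ratio of the collection efficiency at
    calibration to the collection efficiency estimated for the actual delivery
    (per-detector pulse dose and frequency are assumed to have already been
    converted to collection efficiencies)."""
    measured_dose = np.asarray(measured_dose, dtype=float)
    return measured_dose * (np.asarray(ce_calibration) / np.asarray(ce_measurement))
```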
An entropy and viscosity corrected potential method for rotor performance prediction
NASA Technical Reports Server (NTRS)
Bridgeman, John O.; Strawn, Roger C.; Caradonna, Francis X.
1988-01-01
An unsteady Full-Potential Rotor code (FPR) has been enhanced with modifications directed at improving its drag prediction capability. The shock-generated entropy has been included to provide solutions comparable to the Euler equations. A weakly interacting integral boundary layer method has also been coupled to FPR in order to estimate skin-friction drag. Pressure distributions, shock positions, and drag comparisons are made with various data sets derived from two-dimensional airfoil, hovering, and advancing high-speed rotor tests. In all these comparisons, the nonisentropic modification improves the prediction by weakening the shock strength and wave drag. In addition, the boundary layer method yields reasonable estimates of skin-friction drag. Airfoil drag and hover torque data comparisons are excellent, as are predicted shock strengths and positions for a high-speed advancing rotor.
Brady, Amie M. G.; Plona, Meg B.
2015-07-30
A computer program was developed to manage the nowcasts by running the predictive models and posting the results to a publicly accessible Web site daily by 9 a.m. The nowcasts were able to correctly predict E. coli concentrations above or below the water-quality standard at Jaite for 79 percent of the samples compared with the measured concentrations. In comparison, the persistence model (using the previous day’s sample concentration) correctly predicted concentrations above or below the water-quality standard in only 68 percent of the samples. To determine if the Jaite nowcast could be used for the stretch of the river between Lock 29 and Jaite, the model predictions for Jaite were compared with the measured concentrations at Lock 29. The Jaite nowcast provided correct responses for 77 percent of the Lock 29 samples, which was a greater percentage than the percentage of correct responses (58 percent) from the persistence model at Lock 29.
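The nowcast-versus-persistence comparison reduces to counting how often each model lands on the correct side of the standard. A small illustrative helper, assuming arrays of predicted and observed concentrations and a placeholder standard value:

```python
import numpy as np

def percent_correct(predicted, observed, standard=235.0):
    """Share of days on which a model places E. coli on the correct side of the
    water-quality standard (the 235 CFU/100 mL value is a placeholder)."""
    predicted, observed = np.asarray(predicted), np.asarray(observed)
    return 100.0 * np.mean((predicted >= standard) == (observed >= standard))

# Persistence baseline: yesterday's measurement is today's prediction.
# percent_correct(observed[:-1], observed[1:])
```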
Protein docking prediction using predicted protein-protein interface.
Li, Bin; Kihara, Daisuke
2012-01-10
Many important cellular processes are carried out by protein complexes. To provide physical pictures of interacting proteins, many computational protein-protein prediction methods have been developed in the past. However, it is still difficult to identify the correct docking complex structure within top ranks among alternative conformations. We present a novel protein docking algorithm that utilizes imperfect protein-protein binding interface prediction for guiding protein docking. Since the accuracy of protein binding site prediction varies depending on cases, the challenge is to develop a method which does not deteriorate but improves docking results by using a binding site prediction which may not be 100% accurate. The algorithm, named PI-LZerD (using Predicted Interface with Local 3D Zernike descriptor-based Docking algorithm), is based on a pairwise protein docking prediction algorithm, LZerD, which we have developed earlier. PI-LZerD starts by performing docking prediction using the provided protein-protein binding interface prediction as constraints, which is followed by a second round of docking with updated docking interface information to further improve the docking conformation. Benchmark results on bound and unbound cases show that PI-LZerD consistently improves the docking prediction accuracy as compared with docking without using binding site prediction or using the binding site prediction as post-filtering. We have developed PI-LZerD, a pairwise docking algorithm, which uses imperfect protein-protein binding interface prediction to improve docking accuracy. PI-LZerD consistently showed better prediction accuracy than alternative methods in a series of benchmark experiments, including docking using actual docking interface site predictions as well as unbound docking cases.
Tzetzis, George; Votsis, Evandros; Kourtessis, Thomas
2008-01-01
This experiment investigated the effects of three corrective feedback methods, using different combinations of correction cues, error cues, and positive feedback, on learning two badminton skills of different difficulty (forehand clear - low difficulty, backhand clear - high difficulty). Outcome and self-confidence scores were used as dependent variables. The 48 participants were randomly assigned into four groups. Group A received correction cues and positive feedback. Group B received cues on errors of execution. Group C received positive feedback, correction cues and error cues. Group D was the control group. Pre-, post- and retention tests were conducted. A three-way analysis of variance (ANOVA; 4 groups X 2 task difficulty X 3 measures) with repeated measures on the last factor revealed significant interactions for each dependent variable. All corrective feedback groups increased their outcome scores over time for the easy skill, but only groups A and C did so for the difficult skill. Groups A and B had significantly better outcome scores than group C and the control group for the easy skill on the retention test. However, for the difficult skill, group C was better than groups A, B and D. The self-confidence scores of groups A and C improved over time for the easy skill, but those of groups B and D did not. Again, for the difficult skill, only group C improved over time. Finally, a regression analysis showed that the improvement in performance predicted a proportion of the improvement in self-confidence for both the easy and the difficult skill. It was concluded that when young athletes are taught skills of different difficulty, different types of instruction might be more appropriate in order to improve outcome and self-confidence. A more integrated approach to teaching will assist coaches and physical education teachers in being more efficient and effective. Key points: The type of the skill is a critical factor in determining the effectiveness of the feedback types. Different instructional methods of corrective feedback can have beneficial effects on the outcome and self-confidence of young athletes. Instructions focusing on correct cues or errors increase performance of easy skills. Positive feedback or correction cues increase self-confidence for easy skills, but only the combination of error and correction cues increases self-confidence and outcome scores for difficult skills. PMID:24149905
Capiau, Sara; Wilk, Leah S; De Kesel, Pieter M M; Aalders, Maurice C G; Stove, Christophe P
2018-02-06
The hematocrit (Hct) effect is one of the most important hurdles currently preventing more widespread implementation of quantitative dried blood spot (DBS) analysis in a routine context. Indeed, the Hct may affect both the accuracy of DBS methods as well as the interpretation of DBS-based results. We previously developed a method to determine the Hct of a DBS based on its hemoglobin content using noncontact diffuse reflectance spectroscopy. Despite the ease with which the analysis can be performed (i.e., mere scanning of the DBS) and the good results that were obtained, the method did require a complicated algorithm to derive the total hemoglobin content from the DBS's reflectance spectrum. As the total hemoglobin was calculated as the sum of oxyhemoglobin, methemoglobin, and hemichrome, the three main hemoglobin derivatives formed in DBS upon aging, the reflectance spectrum needed to be unmixed to determine the quantity of each of these derivatives. We have now simplified the method by only using the reflectance at a single wavelength, located at a quasi-isosbestic point in the reflectance curve. At this wavelength, assuming 1-to-1 stoichiometry of the aging reaction, the reflectance is insensitive to the hemoglobin degradation and only scales with the total amount of hemoglobin and, hence, the Hct. This simplified method was successfully validated. At each quality control level as well as at the limits of quantitation (i.e., 0.20 and 0.67), bias and intra- and interday imprecision were within 10%. Method reproducibility was excellent based on incurred sample reanalysis and surpassed the reproducibility of the original method. Furthermore, the influence of the volume spotted, the measurement location within the spot, as well as storage time and temperature were evaluated, showing no relevant impact of these parameters. Application to 233 patient samples revealed a good correlation between the Hct determined on whole blood and the predicted Hct determined on venous DBS. The bias obtained with Bland and Altman analysis was -0.015 and the limits of agreement were -0.061 and 0.031, indicating that the simplified, noncontact Hct prediction method even outperforms the original method. In addition, using caffeine as a model compound, it was demonstrated that this simplified Hct prediction method can effectively be used to implement a Hct-dependent correction factor to DBS-based results to alleviate the Hct bias.
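Conceptually, the simplified method amounts to a one-variable calibration from reflectance at the quasi-isosbestic wavelength to Hct. A minimal sketch of such a calibration, with the linear form and all names assumed for illustration rather than taken from the paper:

```python
import numpy as np

def fit_hct_calibration(reflectance, hematocrit):
    """Least-squares line mapping single-wavelength reflectance to Hct;
    the calibration pairs and the wavelength choice are assumed inputs."""
    slope, intercept = np.polyfit(reflectance, hematocrit, deg=1)
    return slope, intercept

def predict_hct(reflectance, slope, intercept):
    """Predict Hct for new DBS scans from the fitted line."""
    return slope * np.asarray(reflectance) + intercept
```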
A Novel Quasi-3D Method for Cascade Flow Considering Axial Velocity Density Ratio
NASA Astrophysics Data System (ADS)
Chen, Zhiqiang; Zhou, Ming; Xu, Quanyong; Huang, Xudong
2018-03-01
A novel quasi-3D Computational Fluid Dynamics (CFD) method for mid-span flow simulation of compressor cascades is proposed. The two-dimensional (2D) Reynolds-Averaged Navier-Stokes (RANS) method is shown to face challenges in predicting mid-span flow with a unity Axial Velocity Density Ratio (AVDR). The three-dimensional (3D) RANS solution also shows distinct discrepancies if the AVDR is not predicted correctly. In this paper, the discrepancies between 2D and 3D CFD results are analyzed and a novel quasi-3D CFD method is proposed. The new quasi-3D model is derived by reducing the 3D RANS Finite Volume Method (FVM) discretization over a one-spanwise-layer structured mesh cell. The sidewall effect is accounted for by two parts: the first is explicit interface fluxes of mass, momentum and energy as well as turbulence; the second is a cell-boundary scaling factor representing sidewall boundary-layer contraction. The performance of the novel quasi-3D method is validated on mid-span pressure distribution, pressure loss and shock prediction for two typical cascades. The results show good agreement with the experimental data for cascade SJ301-20 and cascade AC6-10 at all test conditions. The proposed quasi-3D method shows superior accuracy over the traditional 2D RANS method and the 3D RANS method in performance prediction of compressor cascades.
NASA Astrophysics Data System (ADS)
Dyar, M. Darby; Giguere, Stephen; Carey, CJ; Boucher, Thomas
2016-12-01
This project examines the causes, effects, and optimization of continuum removal in laser-induced breakdown spectroscopy (LIBS) to produce the best possible prediction accuracy of elemental composition in geological samples. We compare prediction accuracy resulting from several different techniques for baseline removal, including asymmetric least squares (ALS), adaptive iteratively reweighted penalized least squares (Air-PLS), fully automatic baseline correction (FABC), continuous wavelet transformation, median filtering, polynomial fitting, the iterative thresholding Dietrich method, convex hull/rubber band techniques, and a newly-developed technique for Custom baseline removal (BLR). We assess the predictive performance of these methods using partial least-squares analysis for 13 elements of geological interest, expressed as the weight percentages of SiO2, Al2O3, TiO2, FeO, MgO, CaO, Na2O, K2O, and the parts per million concentrations of Ni, Cr, Zn, Mn, and Co. We find that previously published methods for baseline subtraction generally produce equivalent prediction accuracies for major elements. When those pre-existing methods are used, automated optimization of their adjustable parameters is always necessary to wring the best predictive accuracy out of a data set; ideally, it should be done for each individual variable. The new technique of Custom BLR produces significant improvements in prediction accuracy over existing methods across varying geological data sets, instruments, and varying analytical conditions. These results also demonstrate the dual objectives of the continuum removal problem: removing a smooth underlying signal to fit individual peaks (univariate analysis) versus using feature selection to select only those channels that contribute to best prediction accuracy for multivariate analyses. Overall, the current practice of using generalized, one-method-fits-all-spectra baseline removal results in poorer predictive performance for all methods. The extra steps needed to optimize baseline removal for each predicted variable and empower multivariate techniques with the best possible input data for optimal prediction accuracy are shown to be well worth the slight increase in necessary computations and complexity.
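As one concrete example of the baseline-removal family compared here, a generic asymmetric least squares (ALS) implementation in the Eilers style is sketched below; the lam and p defaults are common starting points, not the optimized per-variable values the study recommends:

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

def als_baseline(y, lam=1e5, p=0.01, n_iter=10):
    """Asymmetric least squares baseline: penalize curvature with weight lam
    and down-weight points lying above the current fit with asymmetry p."""
    y = np.asarray(y, dtype=float)
    n = y.size
    ones = np.ones(n - 2)
    d = sparse.diags([ones, -2.0 * ones, ones], [0, -1, -2], shape=(n, n - 2))
    w = np.ones(n)
    z = y
    for _ in range(n_iter):
        W = sparse.spdiags(w, 0, n, n)
        z = spsolve((W + lam * d @ d.T).tocsc(), w * y)
        w = p * (y > z) + (1 - p) * (y < z)
    return z

# continuum_removed = spectrum - als_baseline(spectrum)
```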
Kolozsvári, Bence L; Losonczy, Gergely; Pásztor, Dorottya; Fodor, Mariann
2017-01-13
Toric intraocular lens (IOL) implantation can be an effective method for correcting corneal astigmatism in patients with vitreoretinal diseases and cataract. Our purpose is to report the outcome of toric IOL implantation in two cases - a patient with scleral-buckle-induced regular corneal astigmatism and a patient with keratoconus following pars plana vitrectomy. As far as we are aware, there are no reported cases of toric IOL implantation in a vitrectomized eye with keratoconus nor of toric IOL implantation in patients with scleral-buckle-induced regular corneal astigmatism. Two patients with myopia and high corneal astigmatism underwent cataract operation with toric IOL implantation after posterior segment surgery. Myopia and high astigmatism (>2.5 diopters) were caused by previous scleral buckling in one case and by keratoconus in the other case. Pre- and postoperative examinations during follow-up included uncorrected and spectacle-corrected distance visual acuity (UCDVA/CDVA), automated kerato-refractometry (Topcon), Pentacam HR, IOL Master (Zeiss) axial length measurements, and fundus optical coherence tomography (Zeiss). One year postoperatively, the UCDVA and CDVA were 20/25 and 20/20 in both cases, respectively. The absolute residual refractive astigmatism was 1.0 and 0.75 diopters, respectively. The IOL rotation was within 3° in both eyes, therefore IOL repositioning was not necessary. No complications were observed in our cases. These cases demonstrate that toric IOL implantation is a predictable and safe method for the correction of high corneal astigmatism in complicated cases with different origins. Irregular corneal astigmatism in keratoconus and scleral-buckle-induced regular astigmatism can be equally well corrected with a toric IOL during cataract surgery. Previous scleral buckling or pars plana vitrectomy seems to have no impact on the success of the toric IOL implantation, even in keratoconus. IOL rotational stability and refractive predictability in patients with a previous vitreoretinal surgery can be as good as in uncomplicated cases.
Early prediction of extreme stratospheric polar vortex states based on causal precursors
NASA Astrophysics Data System (ADS)
Kretschmer, Marlene; Runge, Jakob; Coumou, Dim
2017-08-01
Variability in the stratospheric polar vortex (SPV) can influence the tropospheric circulation and thereby winter weather. Early predictions of extreme SPV states are thus important to improve forecasts of winter weather including cold spells. However, dynamical models are usually restricted in lead time because they poorly capture low-frequency processes. Empirical models often suffer from overfitting problems as the relevant physical processes and time lags are often not well understood. Here we introduce a novel empirical prediction method by uniting a response-guided community detection scheme with a causal discovery algorithm. This way, we objectively identify causal precursors of the SPV at subseasonal lead times and find them to be in good agreement with known physical drivers. A linear regression prediction model based on the causal precursors can explain most SPV variability (r2 = 0.58), and our scheme correctly predicts 58% (46%) of extremely weak SPV states for lead times of 1-15 (16-30) days with false-alarm rates of only approximately 5%. Our method can be applied to any variable relevant for (sub)seasonal weather forecasts and could thus help improving long-lead predictions.
Diaz-Rodriguez, Sebastian; Bozada, Samantha M; Phifer, Jeremy R; Paluch, Andrew S
2016-11-01
We present blind predictions using the solubility parameter based method MOSCED submitted for the SAMPL5 challenge on calculating cyclohexane/water distribution coefficients at 298 K. Reference data to parameterize MOSCED was generated with knowledge only of chemical structure by performing solvation free energy calculations using electronic structure calculations in the SMD continuum solvent. To maintain simplicity and use only a single method, we approximate the distribution coefficient with the partition coefficient of the neutral species. Over the final SAMPL5 set of 53 compounds, we achieved an average unsigned error of [Formula: see text] log units (ranking 15 out of 62 entries), the correlation coefficient (R) was [Formula: see text] (ranking 35), and [Formula: see text] of the predictions had the correct sign (ranking 30). While used here to predict cyclohexane/water distribution coefficients at 298 K, MOSCED is broadly applicable, allowing one to predict temperature dependent infinite dilution activity coefficients in any solvent for which parameters exist, and provides a means by which an excess Gibbs free energy model may be parameterized to predict composition dependent phase-equilibrium.
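The conversion from continuum-solvation free energies to a neutral-species partition coefficient is a one-line thermodynamic relation. A sketch, assuming free energies in kJ/mol and the log D ≈ log P approximation stated above:

```python
import numpy as np

R = 8.31446e-3   # gas constant, kJ mol^-1 K^-1
T = 298.15       # K

def log10_partition_coefficient(dg_solv_water, dg_solv_cyclohexane):
    """Neutral-species cyclohexane/water partition coefficient from continuum
    solvation free energies (kJ/mol); as in the abstract, log D is approximated
    by log P of the neutral species."""
    return (dg_solv_water - dg_solv_cyclohexane) / (np.log(10) * R * T)

# Example with made-up free energies: a solute solvated 5 kJ/mol more
# favourably in cyclohexane than in water gives log P of about 0.88.
print(log10_partition_coefficient(-20.0, -25.0))
```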
NASA Astrophysics Data System (ADS)
Wang, Ruichen; Lu, Jingyang; Xu, Yiran; Shen, Dan; Chen, Genshe; Pham, Khanh; Blasch, Erik
2018-05-01
Due to the progressive expansion of public mobile networks and the dramatic growth in the number of wireless users in recent years, researchers are motivated to study radio propagation in urban environments and develop reliable and fast path loss prediction models. Over the last decades, different types of propagation models have been developed for urban path loss prediction, such as the Hata model and the COST 231 model. In this paper, the path loss prediction model is thoroughly investigated using machine learning approaches. Different non-linear feature selection methods are deployed and investigated to reduce the computational complexity. Simulation results are provided to demonstrate the validity of the machine learning based path loss prediction engine, which can correctly determine the signal propagation in a wireless urban setting.
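As a rough, self-contained illustration of the workflow described (non-linear feature selection feeding a learned path-loss regressor), the sketch below uses synthetic features and a generic model; the feature set, learner, and sizes are assumptions, not the paper's configuration:

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# Synthetic stand-in features (e.g. distance, frequency, antenna heights, clutter descriptors).
rng = np.random.default_rng(0)
X = rng.uniform(size=(400, 8))
path_loss_db = 120 + 35 * X[:, 0] + 20 * X[:, 1] + rng.normal(0, 2, 400)

# Mutual-information feature selection followed by a non-linear regressor.
model = make_pipeline(SelectKBest(mutual_info_regression, k=4),
                      RandomForestRegressor(random_state=0))
print(cross_val_score(model, X, path_loss_db, cv=5,
                      scoring="neg_mean_absolute_error").mean())
```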
Artificial neural network EMG classifier for functional hand grasp movements prediction.
Gandolla, Marta; Ferrante, Simona; Ferrigno, Giancarlo; Baldassini, Davide; Molteni, Franco; Guanziroli, Eleonora; Cotti Cottini, Michele; Seneci, Carlo; Pedrocchi, Alessandra
2017-12-01
Objective To design and implement an electromyography (EMG)-based controller for a hand robotic assistive device, which is able to classify the user's motion intention before the effective kinematic movement execution. Methods Multiple degrees-of-freedom hand grasp movements (i.e. pinching, grasp an object, grasping) were predicted by means of surface EMG signals, recorded from 10 bipolar EMG electrodes arranged in a circular configuration around the forearm 2-3 cm from the elbow. Two cascaded artificial neural networks were then exploited to detect the patient's motion intention from the EMG signal window starting from the electrical activity onset to movement onset (i.e. electromechanical delay). Results The proposed approach was tested on eight healthy control subjects (4 females; age range 25-26 years) and it demonstrated a mean ± SD testing performance of 76% ± 14% for correctly predicting healthy users' motion intention. Two post-stroke patients tested the controller and obtained 79% and 100% of correctly classified movements under testing conditions. Conclusion A task-selection controller was developed to estimate the intended movement from the EMG measured during the electromechanical delay.
A Novel Method for Accurate Operon Predictions in All Sequenced Prokaryotes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Price, Morgan N.; Huang, Katherine H.; Alm, Eric J.
2004-12-01
We combine comparative genomic measures and the distance separating adjacent genes to predict operons in 124 completely sequenced prokaryotic genomes. Our method automatically tailors itself to each genome using sequence information alone, and thus can be applied to any prokaryote. For Escherichia coli K12 and Bacillus subtilis, our method is 85 and 83% accurate, respectively, which is similar to the accuracy of methods that use the same features but are trained on experimentally characterized transcripts. In Halobacterium NRC-1 and in Helicobacter pylori, our method correctly infers that genes in operons are separated by shorter distances than they are in E. coli, and its predictions using distance alone are more accurate than distance-only predictions trained on a database of E. coli transcripts. We use microarray data from six phylogenetically diverse prokaryotes to show that combining intergenic distance with comparative genomic measures further improves accuracy and that our method is broadly effective. Finally, we survey operon structure across 124 genomes, and find several surprises: H. pylori has many operons, contrary to previous reports; Bacillus anthracis has an unusual number of pseudogenes within conserved operons; and Synechocystis PCC6803 has many operons even though it has unusually wide spacings between conserved adjacent genes.
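A toy stand-in for the idea of combining intergenic distance with a comparative-genomics signal is sketched below; the features, thresholds, and classifier are invented for illustration and are not the paper's log-likelihood formulation:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic adjacent gene pairs: intergenic distance (bp) and the fraction of
# genomes in which the pair stays adjacent. Labels are generated from invented rules.
rng = np.random.default_rng(0)
n = 500
distance = rng.normal(60.0, 40.0, n)
conservation = rng.uniform(0.0, 1.0, n)
same_operon = (distance < 50.0) & (conservation > 0.4)

clf = LogisticRegression(max_iter=1000).fit(
    np.column_stack([distance, conservation]), same_operon)
print(clf.predict_proba([[20.0, 0.9]])[0, 1])  # P(same operon) for a close, conserved pair
```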
Xi, Yanwei; Arbabi, Aryan; McNaughton, Amy J M; Hamilton, Alison; Hull, Danna; Perras, Helene; Chiu, Tillie; Morrison, Shawna; Goldsmith, Claire; Creede, Emilie; Anger, Gregory J; Honeywell, Christina; Cloutier, Mireille; Macchio, Natasha; Kiss, Courtney; Liu, Xudong; Crocker, Susan; Davies, Gregory A; Brudno, Michael; Armour, Christine M
2017-01-01
To develop an alternate noninvasive prenatal testing method for the assessment of trisomy 21 (T21) using a targeted semiconductor sequencing approach. A customized AmpliSeq panel was designed with 1,067 primer pairs targeting specific regions on chromosomes 21, 18, 13, and others. A total of 235 samples, including 30 affected with T21, were sequenced with an Ion Torrent Proton sequencer, and a method was developed for assessing the probability of fetal aneuploidy via derivation of a risk score. Application of the derived risk score yields a bimodal distribution, with the affected samples clustering near 1.0 and the unaffected near 0. For a risk score cutoff of 0.345, above which all would be considered at "high risk," all 30 T21-positive pregnancies were correctly predicted to be affected, and 199 of the 205 non-T21 samples were correctly predicted. The average hands-on time spent on library preparation and sequencing was 19 h in total, and the average number of reads of sequence obtained was 3.75 million per sample. With the described targeted sequencing approach on the semiconductor platform using a custom-designed library and a probabilistic statistical approach, we have demonstrated the feasibility of an alternate method of assessment for fetal T21. © 2017 S. Karger AG, Basel.
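For comparison with the paper's probabilistic risk score, the conventional counting-statistics screen is a simple z-score of the chromosome 21 read fraction against a euploid reference set; a hedged sketch:

```python
import numpy as np

def chr21_zscore(chr21_fraction, euploid_reference_fractions):
    """Conventional z-score screen for trisomy 21: compare a sample's fraction of
    reads on chromosome 21 with a euploid reference set. Shown only as a simple
    point of comparison; the paper derives a probabilistic risk score instead."""
    ref = np.asarray(euploid_reference_fractions, dtype=float)
    return (chr21_fraction - ref.mean()) / ref.std(ddof=1)
```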
DOE Office of Scientific and Technical Information (OSTI.GOV)
Folkvord, Sigurd; Flatmark, Kjersti; Department of Cancer and Surgery, Norwegian Radium Hospital, Oslo University Hospital
2010-10-01
Purpose: Tumor response of rectal cancer to preoperative chemoradiotherapy (CRT) varies considerably. In experimental tumor models and clinical radiotherapy, activity of particular subsets of kinase signaling pathways seems to predict radiation response. This study aimed to determine whether tumor kinase activity profiles might predict tumor response to preoperative CRT in locally advanced rectal cancer (LARC). Methods and Materials: Sixty-seven LARC patients were treated with a CRT regimen consisting of radiotherapy, fluorouracil, and, where possible, oxaliplatin. Pretreatment tumor biopsy specimens were analyzed using microarrays with kinase substrates, and the resulting substrate phosphorylation patterns were correlated with tumor response to preoperative treatment as assessed by histomorphologic tumor regression grade (TRG). A predictive model for TRG scores from phosphosubstrate signatures was obtained by partial-least-squares discriminant analysis. Prediction performance was evaluated by leave-one-out cross-validation and use of an independent test set. Results: In the patient population, 73% and 15% were scored as good responders (TRG 1-2) or intermediate responders (TRG 3), whereas 12% were assessed as poor responders (TRG 4-5). In a subset of 7 poor responders and 12 good responders, treatment outcome was correctly predicted for 95%. Application of the prediction model on the remaining patient samples resulted in correct prediction for 85%. Phosphosubstrate signatures generated by poor-responding tumors indicated high kinase activity, which was inhibited by the kinase inhibitor sunitinib, and several discriminating phosphosubstrates represented proteins derived from signaling pathways implicated in radioresistance. Conclusions: Multiplex kinase activity profiling may identify functional biomarkers predictive of tumor response to preoperative CRT in LARC.
Numerical weather prediction model tuning via ensemble prediction system
NASA Astrophysics Data System (ADS)
Jarvinen, H.; Laine, M.; Ollinaho, P.; Solonen, A.; Haario, H.
2011-12-01
This paper discusses a novel approach to tune the predictive skill of numerical weather prediction (NWP) models. NWP models contain tunable parameters which appear in parameterization schemes of sub-grid scale physical processes. Currently, numerical values of these parameters are specified manually. In a recent dual manuscript (QJRMS, revised) we developed a new concept and method for on-line estimation of the NWP model parameters. The EPPES ("Ensemble prediction and parameter estimation system") method requires only minimal changes to the existing operational ensemble prediction infrastructure and it seems very cost-effective because practically no new computations are introduced. The approach provides an algorithmic decision making tool for model parameter optimization in operational NWP. In EPPES, statistical inference about the NWP model tunable parameters is made by (i) generating each member of the ensemble of predictions using different model parameter values, drawn from a proposal distribution, and (ii) feeding back the relative merits of the parameter values to the proposal distribution, based on evaluation of a suitable likelihood function against verifying observations. In the presentation, the method is first illustrated in low-order numerical tests using a stochastic version of the Lorenz-95 model which effectively emulates the principal features of ensemble prediction systems. The EPPES method correctly detects the unknown and wrongly specified parameter values, and leads to an improved forecast skill. Second, results with an atmospheric general circulation model based ensemble prediction system show that the NWP model tuning capacity of EPPES scales up to realistic models and ensemble prediction systems. Finally, a global top-end NWP model tuning exercise with preliminary results is presented.
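A schematic of one EPPES-like cycle, reduced to its two steps (parameter draws for the ensemble, then likelihood-weighted refresh of the proposal); the Gaussian proposal and the moment-matching update below are an illustrative simplification, not the published algorithm:

```python
import numpy as np

def eppes_cycle(rng, mean, cov, n_members, score_fn):
    """One schematic cycle: draw one parameter vector per ensemble member,
    score each member against verifying observations, and refresh the Gaussian
    proposal with likelihood-weighted moments."""
    theta = rng.multivariate_normal(mean, cov, size=n_members)
    w = np.array([score_fn(t) for t in theta])
    w = w / w.sum()
    new_mean = w @ theta
    centered = theta - new_mean
    new_cov = (w[:, None, None] * np.einsum('ni,nj->nij', centered, centered)).sum(axis=0)
    return new_mean, new_cov

# Toy usage: the "likelihood" peaks at the (unknown) true parameters [1.0, 2.0].
rng = np.random.default_rng(0)
score = lambda t: np.exp(-0.5 * np.sum((t - np.array([1.0, 2.0])) ** 2))
mean, cov = np.zeros(2), np.eye(2)
for _ in range(10):
    mean, cov = eppes_cycle(rng, mean, cov, n_members=50, score_fn=score)
print(mean)  # drifts toward [1.0, 2.0]
```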
Accurate Prediction of Contact Numbers for Multi-Spanning Helical Membrane Proteins
Li, Bian; Mendenhall, Jeffrey; Nguyen, Elizabeth Dong; Weiner, Brian E.; Fischer, Axel W.; Meiler, Jens
2017-01-01
Prediction of the three-dimensional (3D) structures of proteins by computational methods is acknowledged as an unsolved problem. Accurate prediction of important structural characteristics such as contact number is expected to accelerate the otherwise slow progress being made in the prediction of 3D structure of proteins. Here, we present a dropout neural network-based method, TMH-Expo, for predicting the contact number of transmembrane helix (TMH) residues from sequence. Neuronal dropout is a strategy where certain neurons of the network are excluded from back-propagation to prevent co-adaptation of hidden-layer neurons. By using neuronal dropout, overfitting was significantly reduced and performance was noticeably improved. For multi-spanning helical membrane proteins, TMH-Expo achieved a remarkable Pearson correlation coefficient of 0.69 between predicted and experimental values and a mean absolute error of only 1.68. In addition, among those membrane protein–membrane protein interface residues, 76.8% were correctly predicted. Mapping of predicted contact numbers onto structures indicates that contact numbers predicted by TMH-Expo reflect the exposure patterns of TMHs and reveal membrane protein–membrane protein interfaces, reinforcing the potential of predicted contact numbers to be used as restraints for 3D structure prediction and protein–protein docking. TMH-Expo can be accessed via a Web server at www.meilerlab.org. PMID:26804342
Light aircraft lift, drag, and moment prediction: A review and analysis
NASA Technical Reports Server (NTRS)
Smetana, F. O.; Summey, D. C.; Smith, N. S.; Carden, R. K.
1975-01-01
The historical development of analytical methods for predicting the lift, drag, and pitching moment of complete light aircraft configurations in cruising flight is reviewed. Theoretical methods, based in part on techniques described in the literature and in part on original work, are developed. These methods form the basis for understanding the computer programs given to: (1) compute the lift, drag, and moment of conventional airfoils, (2) extend these two-dimensional characteristics to three dimensions for moderate-to-high aspect ratio unswept wings, (3) plot complete configurations, (4) convert the fuselage geometric data to the correct input format, (5) compute the fuselage lift and drag, (6) compute the lift and moment of symmetrical airfoils to M = 1.0 by a simplified semi-empirical procedure, and (7) compute, in closed form, the pressure distribution over a prolate spheroid at alpha = 0. Comparisons of the predictions with experiment indicate excellent lift and drag agreement for conventional airfoils and wings. Limited comparisons of body-alone drag characteristics yield reasonable agreement. Also included are discussions for interference effects and techniques for summing the results above to obtain predictions for complete configurations.
Spatial homogenization methods for pin-by-pin neutron transport calculations
NASA Astrophysics Data System (ADS)
Kozlowski, Tomasz
For practical reactor core applications, low-order transport approximations such as SP3 have been shown to provide sufficient accuracy for both static and transient calculations with considerably less computational expense than the discrete ordinates or full spherical harmonics methods. These methods have been applied in several core simulators where homogenization was performed at the level of the pin cell. One of the principal problems has been to recover the error introduced by pin-cell homogenization. Two basic approaches to treat pin-cell homogenization error have been proposed: Superhomogenization (SPH) factors and Pin-Cell Discontinuity Factors (PDF). These methods are based on the well-established Equivalence Theory and Generalized Equivalence Theory to generate appropriate group constants. They are able to treat all sources of error together, allowing even few-group diffusion with one mesh per cell to reproduce the reference solution. A detailed investigation and consistent comparison of both homogenization techniques showed the potential of the PDF approach to improve the accuracy of core calculations, but also revealed its limitations. In principle, the method is applicable only for the boundary conditions at which it was created, i.e. for the boundary conditions considered during the homogenization process, normally zero current. Therefore, there exists a need to improve this method, making it more general and environment independent. The goal of the proposed general homogenization technique is to create a function that is able to correctly predict the appropriate correction factor with only homogeneous information available, i.e. a function, based on the heterogeneous solution, that can approximate PDFs from the homogeneous solution. It has been shown that the PDF can be well approximated by a least-squares polynomial fit of the non-dimensional heterogeneous solution and later used for PDF prediction from the homogeneous solution. This shows promise for PDF prediction at off-reference conditions, such as during reactor transients, which present conditions that cannot typically be anticipated a priori.
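The proposed predictor boils down to a least-squares polynomial fit from a non-dimensional flux quantity to the discontinuity factor, evaluated later on the homogeneous solution. A toy sketch on synthetic data (the functional form and numbers are placeholders, not reactor results):

```python
import numpy as np

# Fit: reference PDFs versus a normalized flux quantity from reference cases,
# then evaluate the fitted polynomial for a new homogeneous solution.
rng = np.random.default_rng(1)
phi_ref = rng.uniform(0.8, 1.2, 50)                        # normalized surface flux, reference cases
pdf_ref = 1.0 + 0.3 * (phi_ref - 1.0) + rng.normal(0.0, 0.01, 50)
coeffs = np.polyfit(phi_ref, pdf_ref, deg=2)
print(np.polyval(coeffs, 1.05))                            # predicted PDF for a new homogeneous solution
```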
Bozkaya, Uğur; Turney, Justin M; Yamaguchi, Yukio; Schaefer, Henry F
2012-04-28
The lowest-lying electronic singlet and triplet potential energy surfaces (PES) for the HNO-NOH system have been investigated employing high level ab initio quantum chemical methods. The reaction energies and barriers have been predicted for two isomerization and four dissociation reactions. Total energies are extrapolated to the complete basis set limit applying focal point analyses. Anharmonic zero-point vibrational energies, diagonal Born-Oppenheimer corrections, relativistic effects, and core correlation corrections are also taken into account. On the singlet PES, the (1)HNO → (1)NOH endothermicity including all corrections is predicted to be 42.23 ± 0.2 kcal mol(-1). For the barrierless decomposition of (1)HNO to H + NO, the dissociation energy is estimated to be 47.48 ± 0.2 kcal mol(-1). For (1)NOH → H + NO, the reaction endothermicity and barrier are 5.25 ± 0.2 and 7.88 ± 0.2 kcal mol(-1). On the triplet PES the reaction energy and barrier including all corrections are predicted to be 7.73 ± 0.2 and 39.31 ± 0.2 kcal mol(-1) for the isomerization reaction (3)HNO → (3)NOH. For the triplet dissociation reaction (to H + NO) the corresponding results are 29.03 ± 0.2 and 32.41 ± 0.2 kcal mol(-1). Analogous results are 21.30 ± 0.2 and 33.67 ± 0.2 kcal mol(-1) for the dissociation reaction of (3)NOH (to H + NO). Unimolecular rate constants for the isomerization and dissociation reactions were obtained utilizing kinetic modeling methods. The tunneling and kinetic isotope effects are also investigated for these reactions. The adiabatic singlet-triplet energy splittings are predicted to be 18.45 ± 0.2 and 16.05 ± 0.2 kcal mol(-1) for HNO and NOH, respectively. Kinetic analyses based on solution of simultaneous first-order ordinary-differential rate equations demonstrate that the singlet NOH molecule will be difficult to prepare at room temperature, while the triplet NOH molecule is viable with respect to isomerization and dissociation reactions up to 400 K. Hence, our theoretical findings clearly explain why (1)NOH has not yet been observed experimentally.
Burakevych, Nataliia; Mckinlay, Christopher Joel Dorman; Alsweiler, Jane Marie; Wouldes, Trecia An; Harding, Jane Elizabeth
2016-01-01
Aim: To determine whether Bayley Scales of Infant and Toddler Development (3rd edition) (Bayley-III) motor scores and neurological examination at 2 years' corrected age predict motor difficulties at 4.5 years' corrected age. Method: A prospective cohort study of children born at risk of neonatal hypoglycaemia in Waikato Hospital, Hamilton, New Zealand. Assessment at 2 years was performed using the Bayley-III motor scale and neurological examination, and at 4.5 years using the Movement Assessment Battery for Children (2nd edition) (MABC-2). Results: Of 333 children, 8 (2%) had Bayley-III motor scores below 85, and 50 (15%) had minor deficits on neurological assessment at 2 years; 89 (27%) scored less than or equal to the 15th centile, and 54 (16%) less than or equal to the 5th centile on MABC-2 at 4.5 years. Motor score, fine and gross motor subtest scores, and neurological assessments at 2 years were poorly predictive of motor difficulties at 4.5 years, explaining 0 to 7% of variance in MABC-2 scores. A Bayley-III motor score below 85 predicted MABC-2 scores less than or equal to the 15th centile with a positive predictive value of 30% and a negative predictive value of 74% (7% sensitivity and 94% specificity). Interpretation: Bayley-III motor scale and neurological examination at 2 years were poorly predictive of motor difficulties at 4.5 years. PMID:27543144
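The predictive values quoted above follow from a standard 2x2 screening table; a small helper, with illustrative counts rather than the study's exact cell values:

```python
def screening_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from a 2x2 screening table."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Illustrative counts only (not the study's data).
print(screening_metrics(tp=6, fp=14, fn=83, tn=230))
```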
Post-processing of a low-flow forecasting system in the Thur basin (Switzerland)
NASA Astrophysics Data System (ADS)
Bogner, Konrad; Joerg-Hess, Stefanie; Bernhard, Luzi; Zappa, Massimiliano
2015-04-01
Low flows and droughts are natural hazards with potentially severe impacts and economic loss or damage in a number of environmental and socio-economic sectors. As droughts develop slowly, there is time to prepare and pre-empt some of these impacts. Real-time information and forecasting of a drought situation can therefore be an effective component of drought management. Although Switzerland has traditionally been more concerned with problems related to floods, in recent years some unprecedented low-flow situations have been experienced. Driven by the climate change debate, a drought information platform has been developed to guide water resources management during situations where water resources drop below critical low-flow levels, characterised by the indices duration (time between onset and offset), severity (cumulative water deficit) and magnitude (severity/duration). However, to gain maximum benefit from such an information system it is essential to remove the bias from the meteorological forecast, to derive optimal estimates of the initial conditions, and to post-process the stream-flow forecasts. Quantile mapping methods for pre-processing the meteorological forecasts and improved data assimilation methods for snow measurements, which account for much of the seasonal stream-flow predictability for the majority of the basins in Switzerland, have been tested previously. The objective of this study is to test post-processing methods in order to remove bias and dispersion errors and to derive the predictive uncertainty of a calibrated low-flow forecast system. Therefore, various stream-flow error correction methods with different degrees of complexity have been applied and combined with the Hydrological Uncertainty Processor (HUP) in order to minimise the differences between the observations and model predictions and to derive posterior probabilities. The complexity of the analysed error correction methods ranges from simple AR(1) models to methods including wavelet transformations and support vector machines. These methods have been combined with forecasts driven by Numerical Weather Prediction (NWP) systems with different temporal and spatial resolutions, lead times and numbers of ensemble members, covering short- to medium- to extended-range forecasts (COSMO-LEPS, 10-15 days, monthly and seasonal ENS), as well as climatological forecasts. Additionally, the suitability of various skill scores and efficiency measures for low-flow predictions will be tested. Amongst others, the novel 2AFC (two alternatives forced choices) score and the quantile skill score and its decompositions will be applied to evaluate the probabilistic forecasts and the effects of post-processing. First results on the performance of the low-flow predictions of the hydrological model PREVAH initialised with different NWPs will be shown.
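At the simple end of the error-correction hierarchy mentioned above sits an AR(1) persistence of the last forecast error; a minimal sketch, with the error sign convention stated in the comment:

```python
import numpy as np

def ar1_corrected_forecast(raw_forecast, past_errors):
    """Estimate a lag-1 autoregressive coefficient from the history of forecast
    errors and persist the latest error into the new forecast.
    past_errors are defined here as simulated minus observed discharge."""
    e = np.asarray(past_errors, dtype=float)
    phi = np.sum(e[1:] * e[:-1]) / np.sum(e[:-1] ** 2)   # lag-1 regression coefficient
    return raw_forecast - phi * e[-1]
```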
Streamflow Prediction based on Chaos Theory
NASA Astrophysics Data System (ADS)
Li, X.; Wang, X.; Babovic, V. M.
2015-12-01
Chaos theory is a popular approach to hydrologic time series prediction. The local model (LM) based on this theory uses time-delay embedding to reconstruct the phase-space diagram. The efficacy of this method depends on the embedding parameters, i.e. embedding dimension, time lag, and number of nearest neighbours. Optimal estimation of these parameters is thus critical to the application of the local model. However, these embedding parameters are conventionally estimated using Average Mutual Information (AMI) and False Nearest Neighbors (FNN) separately. This may lead to locally optimal choices and thus limits prediction accuracy. To address these limitations, this paper applies a local model combined with simulated annealing (SA) to find the global optimum of the embedding parameters. It is also compared with another global optimization approach, the Genetic Algorithm (GA). These proposed hybrid methods are applied to daily and monthly streamflow time series for examination. The results show that global optimization enables the local model to provide more accurate predictions than local optimization. The LM combined with SA shows additional advantages in terms of computational efficiency. The proposed scheme can also be applied to other fields such as prediction of hydro-climatic time series, error correction, etc.
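A compact sketch of the local model itself (time-delay embedding plus nearest-neighbour averaging), whose parameters dim, lag, and k are exactly the quantities the SA/GA search optimizes; the implementation details are generic, not the paper's code:

```python
import numpy as np

def delay_embed(series, dim, lag):
    """Phase-space reconstruction: row t is [x(t), x(t+lag), ..., x(t+(dim-1)*lag)]."""
    series = np.asarray(series, dtype=float)
    n = len(series) - (dim - 1) * lag
    return np.column_stack([series[i * lag:i * lag + n] for i in range(dim)])

def local_model_forecast(series, dim, lag, k):
    """One-step local-model forecast: average the successors of the k
    reconstructed states nearest to the current state."""
    x = delay_embed(series, dim, lag)
    current, history = x[-1], x[:-1]
    successors = np.asarray(series, dtype=float)[(dim - 1) * lag + 1:]
    nearest = np.argsort(np.linalg.norm(history - current, axis=1))[:k]
    return successors[nearest].mean()

# next_flow = local_model_forecast(daily_flows, dim=3, lag=1, k=5)  # arrays assumed
```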
Aboagye-Sarfo, Patrick; Mai, Qun; Sanfilippo, Frank M; Preen, David B; Stewart, Louise M; Fatovich, Daniel M
2015-10-01
To develop multivariate vector-ARMA (VARMA) forecast models for predicting emergency department (ED) demand in Western Australia (WA) and compare them to the benchmark univariate autoregressive moving average (ARMA) and Winters' models. Seven-year monthly WA state-wide public hospital ED presentation data from 2006/07 to 2012/13 were modelled. Graphical and VARMA modelling methods were used for descriptive analysis and model fitting. The VARMA models were compared to the benchmark univariate ARMA and Winters' models to determine their accuracy to predict ED demand. The best models were evaluated by using error correction methods for accuracy. Descriptive analysis of all the dependent variables showed an increasing pattern of ED use with seasonal trends over time. The VARMA models provided a more precise and accurate forecast with smaller confidence intervals and better measures of accuracy in predicting ED demand in WA than the ARMA and Winters' method. VARMA models are a reliable forecasting method to predict ED demand for strategic planning and resource allocation. While the ARMA models are a closely competing alternative, they under-estimated future ED demand. Copyright © 2015 Elsevier Inc. All rights reserved.
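For orientation, a VARMA fit of this kind can be expressed in a few lines with statsmodels; the series below are synthetic stand-ins for the WA presentation counts, and the (1,1) order is an assumption:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.varmax import VARMAX

# Synthetic monthly ED presentation counts for two illustrative series.
rng = np.random.default_rng(0)
index = pd.period_range("2006-07", periods=84, freq="M")
ed = pd.DataFrame(
    {"metro": 100 + np.cumsum(rng.normal(0, 3, 84)),
     "rural": 60 + np.cumsum(rng.normal(0, 2, 84))},
    index=index,
)

fit = VARMAX(ed, order=(1, 1)).fit(disp=False)
print(fit.forecast(steps=12))   # 12-month-ahead demand forecast
```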
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jin, Chengjun; Markussen, Troels; Thygesen, Kristian S., E-mail: thygesen@fysik.dtu.dk
We study the effect of functional groups (CH3*4, OCH3, CH3, Cl, CN, F*4) on the electronic transport properties of 1,4-benzenediamine molecular junctions using the non-equilibrium Green function method. Exchange and correlation effects are included at various levels of theory, namely density functional theory (DFT), energy level-corrected DFT (DFT+Σ), Hartree-Fock and the many-body GW approximation. All methods reproduce the expected trends for the energy of the frontier orbitals according to the electron donating or withdrawing character of the substituent group. However, only the GW method predicts the correct ordering of the conductance amongst the molecules. The absolute GW (DFT) conductance is within a factor of two (three) of the experimental values. Correcting the DFT orbital energies by a simple physically motivated scissors operator, Σ, can bring the DFT conductances close to experiments, but does not improve on the relative ordering. We ascribe this to a too strong pinning of the molecular energy levels to the metal Fermi level by DFT, which suppresses the variation in orbital energy with functional group.
Development of an Analysis and Design Optimization Framework for Marine Propellers
NASA Astrophysics Data System (ADS)
Tamhane, Ashish C.
In this thesis, a framework for the analysis and design optimization of ship propellers is developed. This framework can be utilized as an efficient synthesis tool in order to determine the main geometric characteristics of the propeller but also to provide the designer with the capability to optimize the shape of the blade sections based on their specific criteria. A hybrid lifting-line method with lifting-surface corrections to account for the three-dimensional flow effects has been developed. The prediction of the correction factors is achieved using Artificial Neural Networks and Support Vector Regression. This approach results in increased approximation accuracy compared to existing methods and allows for extrapolation of the correction factor values. The effect of viscosity is implemented in the framework via the coupling of the lifting-line method with the open-source RANSE solver OpenFOAM for the calculation of lift, drag and pressure distribution on the blade sections using a transition k-ω SST turbulence model. Case studies of benchmark high-speed propulsors are utilized in order to validate the proposed framework for propeller operation in open-water conditions but also in a ship's wake.
Peng, Jiangtao; Peng, Silong; Xie, Qiong; Wei, Jiping
2011-04-01
In order to eliminate the lower order polynomial interferences, a new quantitative calibration algorithm "Baseline Correction Combined Partial Least Squares (BCC-PLS)", which combines baseline correction and conventional PLS, is proposed. By embedding baseline correction constraints into PLS weights selection, the proposed calibration algorithm overcomes the uncertainty in baseline correction and can meet the requirement of on-line attenuated total reflectance Fourier transform infrared (ATR-FTIR) quantitative analysis. The effectiveness of the algorithm is evaluated by the analysis of glucose and marzipan ATR-FTIR spectra. BCC-PLS algorithm shows improved prediction performance over PLS. The root mean square error of cross-validation (RMSECV) on marzipan spectra for the prediction of the moisture is found to be 0.53%, w/w (range 7-19%). The sugar content is predicted with a RMSECV of 2.04%, w/w (range 33-68%). Copyright © 2011 Elsevier B.V. All rights reserved.
Embedded Model Error Representation and Propagation in Climate Models
NASA Astrophysics Data System (ADS)
Sargsyan, K.; Ricciuto, D. M.; Safta, C.; Thornton, P. E.
2017-12-01
Over the last decade, parametric uncertainty quantification (UQ) methods have reached a level of maturity, while the same cannot be said about the representation and quantification of structural or model errors. Lack of characterization of model errors, induced by physical assumptions, phenomenological parameterizations or constitutive laws, is a major handicap in predictive science. In climate models, for example, significant computational resources are dedicated to model calibration without gaining improvement in predictive skill. Neglecting model errors during calibration/tuning will lead to overconfident and biased model parameters. At the same time, the most advanced methods accounting for model error merely correct output biases, augmenting model outputs with statistical error terms that can potentially violate physical laws, or make the calibrated model ineffective for extrapolative scenarios. This work will overview a principled path for representing and quantifying model errors, as well as propagating them together with the rest of the predictive uncertainty budget, including data noise, parametric uncertainties and surrogate-related errors. Namely, the model error terms will be embedded in select model components rather than added as external corrections. Such embedding ensures consistency with physical constraints on model predictions, and renders calibrated model predictions meaningful and robust with respect to model errors. Besides, in the presence of observational data, the approach can effectively differentiate model structural deficiencies from those of data acquisition. The methodology is implemented in the UQ Toolkit (www.sandia.gov/uqtoolkit), relying on a host of available forward and inverse UQ tools. We will demonstrate the application of the technique on a few applications of interest, including ACME Land Model calibration via a wide range of measurements obtained at select sites.
Jeon, Jihyoun; Hsu, Li; Gorfine, Malka
2012-07-01
Frailty models are useful for measuring unobserved heterogeneity in risk of failures across clusters, providing cluster-specific risk prediction. In a frailty model, the latent frailties shared by members within a cluster are assumed to act multiplicatively on the hazard function. In order to obtain parameter and frailty variate estimates, we consider the hierarchical likelihood (H-likelihood) approach (Ha, Lee and Song, 2001. Hierarchical-likelihood approach for frailty models. Biometrika 88, 233-243) in which the latent frailties are treated as "parameters" and estimated jointly with other parameters of interest. We find that the H-likelihood estimators perform well when the censoring rate is low, however, they are substantially biased when the censoring rate is moderate to high. In this paper, we propose a simple and easy-to-implement bias correction method for the H-likelihood estimators under a shared frailty model. We also extend the method to a multivariate frailty model, which incorporates complex dependence structure within clusters. We conduct an extensive simulation study and show that the proposed approach performs very well for censoring rates as high as 80%. We also illustrate the method with a breast cancer data set. Since the H-likelihood is the same as the penalized likelihood function, the proposed bias correction method is also applicable to the penalized likelihood estimators.
On the impact of power corrections in the prediction of B → K*μ+μ- observables
NASA Astrophysics Data System (ADS)
Descotes-Genon, Sébastien; Hofer, Lars; Matias, Joaquim; Virto, Javier
2014-12-01
The recent LHCb angular analysis of the exclusive decay B → K*μ+μ- has indicated significant deviations from the Standard Model expectations. Accurate predictions can be achieved at large K*-meson recoil for an optimised set of observables designed to have no sensitivity to hadronic input in the heavy-quark limit at leading order in αs. However, hadronic uncertainties reappear through non-perturbative ΛQCD/mb power corrections, which must be assessed precisely. In the framework of QCD factorisation we present a systematic method to include factorisable power corrections and point out that their impact on angular observables depends on the scheme chosen to define the soft form factors. Associated uncertainties are found to be under control, contrary to earlier claims in the literature. We also discuss the impact of possible non-factorisable power corrections, including an estimate of charm-loop effects. We provide results for angular observables at large recoil for two different sets of inputs for the form factors, spelling out the different sources of theoretical uncertainties. Finally, we comment on a recent proposal to explain the anomaly in B → K*μ+μ- observables through charm-resonance effects, and we propose strategies to test this proposal, identifying observables and kinematic regions where either the charm-loop model can be disentangled from New Physics effects or the two options leave different imprints.
2013-01-01
Background: Malaria rapid diagnostic tests (RDTs) are a useful tool in malaria-endemic countries, where light microscopy is not feasible. In non-endemic countries they can be used as complementary tests to provide timely results in case of microscopy inexperience. This study aims to compare the new VIKIA Malaria Ag Pf/Pan™ RDT with PCR-corrected microscopy results and the commonly used CareStart™ RDT to diagnose falciparum and non-falciparum malaria in the endemic setting of Bamako, Mali and the non-endemic setting of Lyon, France. Methods: Blood samples were collected in 2011 over a 12-month period in Lyon and a six-month period in Bamako from patients suspected of having malaria. The samples were examined by light microscopy, the VIKIA Malaria Ag Pf/Pan™ test and, in Bamako, additionally with the CareStart™ RDT. Discordant results were corrected by real-time PCR. Sensitivity, specificity, positive predictive value and negative predictive value were used to evaluate test performance. Results: Samples of 877 patients from both sites were included. The VIKIA Malaria Ag Pf/Pan™ had a sensitivity of 98% and 96% for Plasmodium falciparum in Lyon and Bamako, respectively, performing similarly to PCR-corrected microscopy. Conclusions: The VIKIA Malaria Ag Pf/Pan™ performs similarly to PCR-corrected microscopy for the detection of P. falciparum, making it a valuable tool in malaria-endemic and non-endemic regions. PMID:23742633
Delport, Johannes Andries; Mohorovic, Ivor; Burn, Sandi; McCormick, John Kenneth; Schaus, David; Lannigan, Robert; John, Michael
2016-07-01
Meticillin-resistant Staphylococcus aureus (MRSA) bloodstream infection is responsible for significant morbidity, with mortality rates as high as 60 % if not treated appropriately. We describe a rapid method to detect MRSA in blood cultures using a combined three-hour short-incubation BRUKER matrix-assisted laser desorption/ionization time-of-flight MS BioTyper protocol and a qualitative immunochromatographic assay, the Alere Culture Colony Test PBP2a detection test. We compared this combined method with a molecular method detecting the nuc and mecA genes currently performed in our laboratory. One hundred and seventeen S. aureus blood cultures were tested of which 35 were MRSA and 82 were meticillin-sensitive S. aureus (MSSA). The rapid combined test correctly identified 100 % (82/82) of the MSSA and 85.7 % (30/35) of the MRSA after 3 h. There were five false negative results where the isolates were correctly identified as S. aureus, but PBP2a was not detected by the Culture Colony Test. The combined method has a sensitivity of 87.5 %, specificity of 100 %, a positive predictive value of 100 % and a negative predictive value of 94.3 % with the prevalence of MRSA in our S. aureus blood cultures. The combined rapid method offers a significant benefit to early detection of MRSA in positive blood cultures.
NASA Astrophysics Data System (ADS)
Babaie Mahani, A.; Eaton, D. W.
2013-12-01
Ground Motion Prediction Equations (GMPEs) are widely used in Probabilistic Seismic Hazard Assessment (PSHA) to estimate ground-motion amplitudes at Earth's surface as a function of magnitude and distance. Certain applications, such as hazard assessment for caprock integrity in the case of underground storage of CO2, waste disposal sites, and underground pipelines, require subsurface estimates of ground motion; at present, such estimates depend upon theoretical modeling and simulations. The objective of this study is to derive correction factors for GMPEs to enable estimation of amplitudes in the subsurface. We use a semi-analytic approach along with finite-difference simulations of ground-motion amplitudes for surface and underground motions. Spectral ratios of underground to surface motions are used to calculate the correction factors. Two predictive methods are used. The first is a semi-analytic approach based on a quarter-wavelength method that is widely used for earthquake site-response investigations; the second is a numerical approach based on elastic finite-difference simulations of wave propagation. Both methods are evaluated using recordings of regional earthquakes by broadband seismometers installed at the surface and at depths of 1400 m and 2100 m in the Sudbury Neutrino Observatory, Canada. Overall, both methods provide a reasonable fit to the peaks and troughs observed in the ratios of real data. The finite-difference method, however, has the capability to simulate ground motion ratios more accurately than the semi-analytic approach.
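As a rough illustration of the spectral-ratio approach described above, the sketch below (not the authors' code; variable names and smoothing choices are assumptions) computes a frequency-dependent underground-to-surface correction factor from co-recorded waveforms.

```python
# Illustrative sketch: derive frequency-dependent correction factors for a GMPE
# from the spectral ratio of underground to surface recordings of the same event.
import numpy as np

def spectral_ratio_correction(surface, underground, dt, smooth_bins=5):
    """Return frequencies and the smoothed |underground|/|surface| spectral ratio."""
    n = min(len(surface), len(underground))
    freqs = np.fft.rfftfreq(n, d=dt)
    spec_surf = np.abs(np.fft.rfft(surface[:n]))
    spec_under = np.abs(np.fft.rfft(underground[:n]))
    ratio = spec_under / np.maximum(spec_surf, 1e-12)   # avoid division by zero
    kernel = np.ones(smooth_bins) / smooth_bins          # simple moving-average smoothing
    ratio_smooth = np.convolve(ratio, kernel, mode="same")
    return freqs, ratio_smooth

# A surface GMPE estimate could then be scaled by the ratio at the frequency
# (or period) of interest to approximate ground motion at depth.
```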
Xie, Dan; Li, Ao; Wang, Minghui; Fan, Zhewen; Feng, Huanqing
2005-01-01
Subcellular location of a protein is one of its key functional characteristics, as proteins must be localized correctly at the subcellular level to have normal biological function. In this paper, a novel method named LOCSVMPSI has been introduced, which is based on the support vector machine (SVM) and the position-specific scoring matrix generated from profiles of PSI-BLAST. With a jackknife test on the RH2427 data set, LOCSVMPSI achieved a high overall prediction accuracy of 90.2%, which is higher than the prediction results of SubLoc and ESLpred on this data set. In addition, the prediction performance of LOCSVMPSI was evaluated with a 5-fold cross-validation test on the PK7579 data set, and the prediction results were consistently better than the previous method based on several SVMs using the composition of both amino acids and amino acid pairs. Further tests on the SWISSPROT new-unique data set showed that LOCSVMPSI also performed better than some widely used prediction methods, such as PSORTII, TargetP and LOCnet. All these results indicate that LOCSVMPSI is a powerful tool for the prediction of eukaryotic protein subcellular localization. An online web server (current version is 1.3) based on this method has been developed and is freely available to both academic and commercial users, which can be accessed at . PMID:15980436
Bezrukov, Ilja; Schmidt, Holger; Mantlik, Frédéric; Schwenzer, Nina; Brendle, Cornelia; Schölkopf, Bernhard; Pichler, Bernd J
2013-10-01
Hybrid PET/MR systems have recently entered clinical practice. Thus, the accuracy of MR-based attenuation correction in simultaneously acquired data can now be investigated. We assessed the accuracy of 4 methods of MR-based attenuation correction in lesions within soft tissue, bone, and MR susceptibility artifacts: 2 segmentation-based methods (SEG1, provided by the manufacturer, and SEG2, a method with atlas-based susceptibility artifact correction); an atlas- and pattern recognition-based method (AT&PR), which also used artifact correction; and a new method combining AT&PR and SEG2 (SEG2wBONE). Attenuation maps were calculated for the PET/MR datasets of 10 patients acquired on a whole-body PET/MR system, allowing for simultaneous acquisition of PET and MR data. Eighty percent iso-contour volumes of interest were placed on lesions in soft tissue (n = 21), in bone (n = 20), near bone (n = 19), and within or near MR susceptibility artifacts (n = 9). Relative mean volume-of-interest differences were calculated with CT-based attenuation correction as a reference. For soft-tissue lesions, none of the methods revealed a significant difference in PET standardized uptake value relative to CT-based attenuation correction (SEG1, -2.6% ± 5.8%; SEG2, -1.6% ± 4.9%; AT&PR, -4.7% ± 6.5%; SEG2wBONE, 0.2% ± 5.3%). For bone lesions, underestimation of PET standardized uptake values was found for all methods, with minimized error for the atlas-based approaches (SEG1, -16.1% ± 9.7%; SEG2, -11.0% ± 6.7%; AT&PR, -6.6% ± 5.0%; SEG2wBONE, -4.7% ± 4.4%). For lesions near bone, underestimations of lower magnitude were observed (SEG1, -12.0% ± 7.4%; SEG2, -9.2% ± 6.5%; AT&PR, -4.6% ± 7.8%; SEG2wBONE, -4.2% ± 6.2%). For lesions affected by MR susceptibility artifacts, quantification errors could be reduced using the atlas-based artifact correction (SEG1, -54.0% ± 38.4%; SEG2, -15.0% ± 12.2%; AT&PR, -4.1% ± 11.2%; SEG2wBONE, 0.6% ± 11.1%). For soft-tissue lesions, none of the evaluated methods showed statistically significant errors. For bone lesions, significant underestimations of -16% and -11% occurred for methods in which bone tissue was ignored (SEG1 and SEG2). In the present attenuation correction schemes, uncorrected MR susceptibility artifacts typically result in reduced attenuation values, potentially leading to highly reduced PET standardized uptake values, rendering lesions indistinguishable from background. While AT&PR and SEG2wBONE show accurate results in both soft tissue and bone, SEG2wBONE uses a two-step approach for tissue classification, which increases the robustness of prediction and can be applied retrospectively if more precision in bone areas is needed.
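The error metric reported above is a simple relative volume-of-interest difference against the CT-based reference. The sketch below restates it as code under the assumption that per-lesion standardized uptake values are already extracted; it is illustrative only.

```python
# Minimal sketch of the reported metric: relative mean VOI difference of PET
# values under an MR-based attenuation correction versus the CT-based
# reference, ((P - M)/M) * 100%, summarized as mean +/- SD over lesions.
import numpy as np

def relative_voi_difference(suv_mr_based, suv_ct_based):
    p = np.asarray(suv_mr_based, dtype=float)
    m = np.asarray(suv_ct_based, dtype=float)
    diff = (p - m) / m * 100.0
    return diff.mean(), diff.std(ddof=1)
```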
Effects of ionic strength and ion pairing on (plant-wide) modelling of anaerobic digestion.
Solon, Kimberly; Flores-Alsina, Xavier; Mbamba, Christian Kazadi; Volcke, Eveline I P; Tait, Stephan; Batstone, Damien; Gernaey, Krist V; Jeppsson, Ulf
2015-03-01
Plant-wide models of wastewater treatment (such as the Benchmark Simulation Model No. 2 or BSM2) are gaining popularity for use in holistic virtual studies of treatment plant control and operations. The objective of this study is to show the influence of ionic strength (as activity corrections) and ion pairing on modelling of anaerobic digestion processes in such plant-wide models of wastewater treatment. Using the BSM2 as a case study with a number of model variants and cationic load scenarios, this paper presents the effects of an improved physico-chemical description on model predictions and overall plant performance indicators, namely effluent quality index (EQI) and operational cost index (OCI). The acid-base equilibria implemented in the Anaerobic Digestion Model No. 1 (ADM1) are modified to account for non-ideal aqueous-phase chemistry. The model corrects for ionic strength via the Davies approach to consider chemical activities instead of molar concentrations. A speciation sub-routine based on a multi-dimensional Newton-Raphson (NR) iteration method is developed to address algebraic interdependencies. The model also includes ion pairs that play an important role in wastewater treatment. The paper describes: 1) how the anaerobic digester performance is affected by physico-chemical corrections; 2) the effect on pH and the anaerobic digestion products (CO2, CH4 and H2); and, 3) how these variations are propagated from the sludge treatment to the water line. Results at high ionic strength demonstrate that corrections to account for non-ideal conditions lead to significant differences in predicted process performance (up to 18% for effluent quality and 7% for operational cost) but that for pH prediction, activity corrections are more important than ion pairing effects. Both are likely to be required when precipitation is to be modelled. Copyright © 2014 Elsevier Ltd. All rights reserved.
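For readers unfamiliar with the Davies approach mentioned above, the following hedged sketch shows how molar concentrations are replaced by chemical activities; the exact implementation in the BSM2/ADM1 extension may differ (for example in the value of the Debye-Hückel constant or temperature dependence).

```python
# Hedged sketch of the Davies activity correction: log10(gamma) is computed
# from the ion charge and the ionic strength I, and the activity is
# gamma * molar concentration. A = 0.509 is the Debye-Hückel constant for
# water at 25 °C.
import math

def davies_log10_gamma(charge, ionic_strength, A=0.509):
    """log10 activity coefficient for an ion of given charge at ionic strength I (mol/L)."""
    sqrt_I = math.sqrt(ionic_strength)
    return -A * charge**2 * (sqrt_I / (1.0 + sqrt_I) - 0.3 * ionic_strength)

def activity(concentration, charge, ionic_strength):
    """Chemical activity = activity coefficient * molar concentration."""
    return concentration * 10 ** davies_log10_gamma(charge, ionic_strength)

# Example: activity of bicarbonate (charge -1) at 0.05 mol/L and I = 0.2 mol/L
a_hco3 = activity(0.05, -1, 0.2)
```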
NASA Astrophysics Data System (ADS)
Huang, W. C.; Lai, C. M.; Luo, B.; Tsai, C. K.; Chih, M. H.; Lai, C. W.; Kuo, C. C.; Liu, R. G.; Lin, H. T.
2006-03-01
Optical proximity correction (OPC) is the technique of pre-distorting mask layouts so that the printed patterns are as close to the desired shapes as possible. For model-based optical proximity correction, a lithographic model is needed to predict the edge position (contour) of patterns on the wafer after lithographic processing. Generally, segmentation of edges is performed prior to the correction: pattern edges are dissected into several small segments with corresponding target points. During the correction, the edges are moved back and forth from the initial drawn position, assisted by the lithographic model, to finally settle on the proper positions. When the correction converges, the intensity predicted by the model at every target point hits the model-specific threshold value. Several iterations are required to achieve convergence, and the computation time increases with the number of required iterations. An artificial neural network is an information-processing paradigm inspired by biological nervous systems, such as the way the brain processes information. It is composed of a large number of highly interconnected processing elements (neurons) working in unison to solve specific problems. A neural network can be a powerful data-modeling tool that is able to capture and represent complex input/output relationships; the network can accurately predict the behavior of a system via a learning procedure. A radial basis function network, a variant of the artificial neural network, is an efficient function approximator. In this paper, a radial basis function network was used to build a mapping from segment characteristics to the edge shift from the drawn position. This network provides a good initial guess for each segment on which OPC is carried out. The good initial guess reduces the number of required iterations, so cycle time can be shortened effectively. The radial basis function network for this system was optimized with a genetic algorithm, an artificially intelligent optimization method with a high probability of finding the global optimum. In preliminary results, the required iterations were reduced from five to two for a simple dumbbell-shaped layout.
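A minimal sketch of the core idea, assuming segment features and previously converged edge shifts are available as training data (feature choices and data here are placeholders, not the production OPC flow):

```python
# Illustrative sketch: a radial basis function network mapping segment
# characteristics to an initial edge shift, used as a warm start for OPC.
import numpy as np
from scipy.interpolate import RBFInterpolator

# Training data: rows of segment features (e.g., local pattern density,
# segment length, corner distance) and the converged edge shifts (nm)
# obtained from previous full OPC runs. Values below are placeholders.
X_train = np.random.rand(200, 3)
y_train = np.random.rand(200) * 10.0

rbf = RBFInterpolator(X_train, y_train, kernel="gaussian", epsilon=1.0)

# For a new layout, predict an initial edge shift per segment before iterating,
# so fewer OPC iterations are needed to converge.
X_new = np.random.rand(50, 3)
initial_edge_shift = rbf(X_new)
```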
The QT Interval and Risk of Incident Atrial Fibrillation
Mandyam, Mala C.; Soliman, Elsayed Z.; Alonso, Alvaro; Dewland, Thomas A.; Heckbert, Susan R.; Vittinghoff, Eric; Cummings, Steven R.; Ellinor, Patrick T.; Chaitman, Bernard R.; Stocke, Karen; Applegate, William B.; Arking, Dan E.; Butler, Javed; Loehr, Laura R.; Magnani, Jared W.; Murphy, Rachel A.; Satterfield, Suzanne; Newman, Anne B.; Marcus, Gregory M.
2013-01-01
BACKGROUND Abnormal atrial repolarization is important in the development of atrial fibrillation (AF), but no direct measurement is available in clinical medicine. OBJECTIVE To determine whether the QT interval, a marker of ventricular repolarization, could be used to predict incident AF. METHODS We examined a prolonged QT corrected by the Framingham formula (QTFram) as a predictor of incident AF in the Atherosclerosis Risk in Communities (ARIC) study. The Cardiovascular Health Study (CHS) and Health, Aging, and Body Composition (Health ABC) study were used for validation. Secondary predictors included QT duration as a continuous variable, a short QT interval, and QT intervals corrected by other formulae. RESULTS Among 14,538 ARIC participants, a prolonged QTFram predicted a roughly two-fold increased risk of AF (hazard ratio [HR] 2.05, 95% confidence interval [CI] 1.42–2.96, p<0.001). No substantive attenuation was observed after adjustment for age, race, sex, study center, body mass index, hypertension, diabetes, coronary disease, and heart failure. The findings were validated in CHS and Health ABC and were similar across various QT correction methods. Also in ARIC, each 10-ms increase in QTFram was associated with an increased unadjusted (HR 1.14, 95%CI 1.10–1.17, p<0.001) and adjusted (HR 1.11, 95%CI 1.07–1.14, p<0.001) risk of AF. Findings regarding a short QT were inconsistent across cohorts. CONCLUSIONS A prolonged QT interval is associated with an increased risk of incident AF. PMID:23872693
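For reference, the Framingham linear correction used to define QTFram is commonly written as QTc = QT + 0.154(1 − RR) with QT and RR in seconds; the sketch below is a minimal illustration of that published formula, not of the study's analysis code.

```python
# Hedged sketch of the Framingham (Sagie et al.) linear QT correction.
# QT and RR are in seconds (RR = 60 / heart rate in bpm).
def qt_framingham(qt_sec, rr_sec):
    """Framingham-corrected QT interval in seconds."""
    return qt_sec + 0.154 * (1.0 - rr_sec)

# Example: QT = 0.40 s at a heart rate of 75 bpm (RR = 0.8 s) -> QTc ≈ 0.431 s
qtc = qt_framingham(0.40, 0.8)
```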
A novel artificial neural network method for biomedical prediction based on matrix pseudo-inversion.
Cai, Binghuang; Jiang, Xia
2014-04-01
Biomedical prediction based on clinical and genome-wide data has become increasingly important in disease diagnosis and classification. To solve the prediction problem in an effective manner for the improvement of clinical care, we develop a novel Artificial Neural Network (ANN) method based on Matrix Pseudo-Inversion (MPI) for use in biomedical applications. The MPI-ANN is constructed as a three-layer (i.e., input, hidden, and output layers) feed-forward neural network, and the weights connecting the hidden and output layers are directly determined based on MPI without a lengthy learning iteration. The LASSO (Least Absolute Shrinkage and Selection Operator) method is also presented for comparative purposes. Single Nucleotide Polymorphism (SNP) simulated data and real breast cancer data are employed to validate the performance of the MPI-ANN method via 5-fold cross validation. Experimental results demonstrate the efficacy of the developed MPI-ANN for disease classification and prediction, in view of the significantly superior accuracy (i.e., the rate of correct predictions), as compared with LASSO. The results based on the real breast cancer data also show that the MPI-ANN has better performance than other machine learning methods (including support vector machine (SVM), logistic regression (LR), and an iterative ANN). In addition, experiments demonstrate that our MPI-ANN could be used for bio-marker selection as well. Copyright © 2013 Elsevier Inc. All rights reserved.
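The distinguishing step of the MPI-ANN is that the hidden-to-output weights are obtained in closed form from a matrix pseudo-inverse rather than by iterative training. The sketch below illustrates that idea with a fixed random hidden layer; the architecture details (activation, hidden size, weight initialization) are assumptions, not the paper's exact design.

```python
# Hedged sketch: output-layer weights solved directly via the Moore-Penrose
# pseudo-inverse of the hidden-layer output matrix.
import numpy as np

rng = np.random.default_rng(0)

def train_mpi_ann(X, y, n_hidden=50):
    W_in = rng.normal(size=(X.shape[1], n_hidden))   # input-to-hidden weights (fixed)
    b = rng.normal(size=n_hidden)                     # hidden biases
    H = np.tanh(X @ W_in + b)                         # hidden-layer outputs
    W_out = np.linalg.pinv(H) @ y                     # matrix pseudo-inversion step
    return W_in, b, W_out

def predict(X, W_in, b, W_out):
    return np.tanh(X @ W_in + b) @ W_out
```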
Alignment methods: strategies, challenges, benchmarking, and comparative overview.
Löytynoja, Ari
2012-01-01
Comparative evolutionary analyses of molecular sequences are solely based on the identities and differences detected between homologous characters. Errors in this homology statement, that is errors in the alignment of the sequences, are likely to lead to errors in the downstream analyses. Sequence alignment and phylogenetic inference are tightly connected and many popular alignment programs use the phylogeny to divide the alignment problem into smaller tasks. They then neglect the phylogenetic tree, however, and produce alignments that are not evolutionarily meaningful. The use of phylogeny-aware methods reduces the error but the resulting alignments, with evolutionarily correct representation of homology, can challenge the existing practices and methods for viewing and visualising the sequences. The inter-dependency of alignment and phylogeny can be resolved by joint estimation of the two; methods based on statistical models allow for inferring the alignment parameters from the data and correctly take into account the uncertainty of the solution but remain computationally challenging. Widely used alignment methods are based on heuristic algorithms and unlikely to find globally optimal solutions. The whole concept of one correct alignment for the sequences is questionable, however, as there typically exist vast numbers of alternative, roughly equally good alignments that should also be considered. This uncertainty is hidden by many popular alignment programs and is rarely correctly taken into account in the downstream analyses. The quest for finding and improving the alignment solution is complicated by the lack of suitable measures of alignment goodness. The difficulty of comparing alternative solutions also affects benchmarks of alignment methods and the results strongly depend on the measure used. As the effects of alignment error cannot be predicted, comparing the alignments' performance in downstream analyses is recommended.
Electronic structure properties of UO2 as a Mott insulator
NASA Astrophysics Data System (ADS)
Sheykhi, Samira; Payami, Mahmoud
2018-06-01
In this work, using density functional theory (DFT), we have studied the structural, electronic and magnetic properties of uranium dioxide with antiferromagnetic 1k-, 2k-, and 3k-order structures. Ordinary approximations in DFT, such as the local density approximation (LDA) or the generalized gradient approximation (GGA), usually predict incorrect metallic behavior for this strongly correlated electron system. Using a Hubbard-term correction for the f-electrons (the LDA+U method), as well as the screened Heyd-Scuseria-Ernzerhof (HSE) hybrid functional for the exchange-correlation (XC) energy, we have obtained the correct insulating ground-state behavior, with band gaps in good agreement with experiment.
Karp, Jerome M; Eryilmaz, Ertan; Erylimaz, Ertan; Cowburn, David
2015-01-01
There has been a longstanding interest in being able to accurately predict NMR chemical shifts from structural data. Recent studies have focused on using molecular dynamics (MD) simulation data as input for improved prediction. Here we examine the accuracy of chemical shift prediction for intein systems, which have regions of intrinsic disorder. We find that using MD simulation data as input for chemical shift prediction does not consistently improve prediction accuracy over use of a static X-ray crystal structure. This appears to result from the complex conformational ensemble of the disordered protein segments. We show that using accelerated molecular dynamics (aMD) simulations improves chemical shift prediction, suggesting that methods which better sample the conformational ensemble like aMD are more appropriate tools for use in chemical shift prediction for proteins with disordered regions. Moreover, our study suggests that data accurately reflecting protein dynamics must be used as input for chemical shift prediction in order to correctly predict chemical shifts in systems with disorder.
NASA Astrophysics Data System (ADS)
Yeom, J. M.; Kim, H. O.
2014-12-01
In this study, we estimated rice paddy yield over South Korea with vegetation products from a moderate-resolution geostationary satellite and the GRAMI model. Rice is the most important staple food for Asian populations. In addition, the effects of climate change are becoming stronger, especially in Asia, where most rice is cultivated. Therefore, accurate and timely prediction of rice yield is essential for food security and for preparing for natural disasters such as crop defoliation, drought, and pest infestation. In the present study, GOCI, the world's first geostationary ocean color imager, was used to estimate temporal vegetation indices of the rice paddy by applying atmospheric correction and BRDF modeling. The atmospheric correction used a look-up table (LUT) method based on the Second Simulation of the Satellite Signal in the Solar Spectrum (6S), with MODIS atmospheric products (MOD04, MOD05, and MOD07) from NASA's Earth Observing System Data and Information System (EOSDIS). To correct the surface anisotropy effect, the Ross-Thick Li-Sparse Reciprocal (RTLSR) BRDF model was applied on a daily basis with a 16-day composite period. The estimated multi-temporal vegetation images were combined with crop classification from high-resolution satellite images (RapidEye, KOMPSAT-2, and KOMPSAT-3) to extract the proportional rice paddy area within each GOCI pixel. For the GRAMI crop model, initial conditions were determined from field work performed every two weeks at Chonnam National University, Gwangju, Korea. The corrected GOCI vegetation products were incorporated into the GRAMI model to predict rice yield, and the predicted yield was compared with field measurements.
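The RTLSR anisotropy correction mentioned above is a kernel-driven linear model; the sketch below shows the least-squares fit of its three parameters, assuming the volumetric and geometric kernels have already been computed for each sun-view geometry (kernel computation itself is omitted, and variable names are assumptions).

```python
# Hedged sketch of the kernel-driven RTLSR BRDF fit: reflectance is modelled as
# R = f_iso + f_vol*K_vol + f_geo*K_geo over a 16-day window, and the
# parameters are solved by linear least squares.
import numpy as np

def fit_rtlsr(reflectance, k_vol, k_geo):
    """Solve for (f_iso, f_vol, f_geo) from observed reflectances and kernels."""
    A = np.column_stack([np.ones_like(k_vol), k_vol, k_geo])
    params, *_ = np.linalg.lstsq(A, reflectance, rcond=None)
    return params

def normalized_reflectance(params, k_vol_ref, k_geo_ref):
    """Predict reflectance at a reference (e.g., nadir) sun-view geometry."""
    f_iso, f_vol, f_geo = params
    return f_iso + f_vol * k_vol_ref + f_geo * k_geo_ref
```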
Toric phakic implantable collamer lens for correction of astigmatism: 1-year outcomes
Mertens, Erik L
2011-01-01
Purpose: The purpose of this study was to assess predictability, efficacy, safety and stability in patients who received a toric implantable collamer lens to correct moderate to high myopic astigmatism. Methods: Forty-three eyes of 23 patients underwent implantation of a toric implantable collamer lens (STAAR Surgical Inc) for astigmatism correction. Mean spherical refraction was −4.98 ± 3.49 diopters (D) (range: 0 to −13 D), and mean cylinder was −2.62 ± 0.97 D (range: −1.00 to −5.00 D). Main outcome measures evaluated during a 12-month follow-up included uncorrected visual acuity (UCVA), refraction, best-corrected visual acuity (BCVA), vault, and adverse events. Results: At 12 months the mean Snellen decimal UCVA was 0.87 ± 0.27 and mean BCVA was 0.94 ± 0.21, with an efficacy index of 1.05. More than 60% of the eyes gained ≥1 line of BCVA (17 eyes, safety index of 1.14). The treatment was highly predictable for the spherical equivalent (r2 = 0.99) and the astigmatic components J0 (r2 = 0.99) and J45 (r2 = 0.90). The mean spherical equivalent dropped from −7.29 ± 3.4 D to −0.17 ± 0.40 D at 12 months. Of the attempted spherical equivalent correction, 76.7% of eyes were within ±0.50 D and 97.7% were within ±1.00 D. For J0 and J45, 97.7% and 83.7% were within ±0.50 D, respectively. Conclusion: The results of the present study support the safety, efficacy, and predictability of toric implantable collamer lens implantation to treat moderate to high myopic astigmatism. PMID:21468348
Liu, Xian; Engel, Charles C
2012-12-20
Researchers often encounter longitudinal health data characterized with three or more ordinal or nominal categories. Random-effects multinomial logit models are generally applied to account for potential lack of independence inherent in such clustered data. When parameter estimates are used to describe longitudinal processes, however, random effects, both between and within individuals, need to be retransformed for correctly predicting outcome probabilities. This study attempts to go beyond existing work by developing a retransformation method that derives longitudinal growth trajectories of unbiased health probabilities. We estimated variances of the predicted probabilities by using the delta method. Additionally, we transformed the covariates' regression coefficients on the multinomial logit function, not substantively meaningful, to the conditional effects on the predicted probabilities. The empirical illustration uses the longitudinal data from the Asset and Health Dynamics among the Oldest Old. Our analysis compared three sets of the predicted probabilities of three health states at six time points, obtained from, respectively, the retransformation method, the best linear unbiased prediction, and the fixed-effects approach. The results demonstrate that neglect of retransforming random errors in the random-effects multinomial logit model results in severely biased longitudinal trajectories of health probabilities as well as overestimated effects of covariates on the probabilities. Copyright © 2012 John Wiley & Sons, Ltd.
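The retransformation issue above can be illustrated as follows: marginal outcome probabilities require averaging the multinomial-logit probabilities over the random-effect distribution rather than plugging in zero random effects. The sketch below is an illustration of that idea by simulation, not the paper's delta-method implementation; model dimensions and data are hypothetical.

```python
# Illustrative sketch: retransformed (marginal) vs naive plug-in category
# probabilities from a random-effects multinomial logit model.
import numpy as np

rng = np.random.default_rng(1)

def softmax(eta):
    e = np.exp(eta - eta.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def marginal_probabilities(x, beta, re_cov, n_draws=5000):
    """Average category probabilities over draws of the random effects."""
    u = rng.multivariate_normal(np.zeros(beta.shape[1]), re_cov, size=n_draws)
    eta = x @ beta + u                      # linear predictor per draw, per category
    return softmax(eta).mean(axis=0)

def naive_probabilities(x, beta):
    """Plug-in prediction that ignores the random effects (biased marginally)."""
    return softmax(x @ beta)
```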
A downscaling method for the assessment of local climate change
NASA Astrophysics Data System (ADS)
Bruno, E.; Portoghese, I.; Vurro, M.
2009-04-01
The use of complementary models is necessary to study the impact of climate change scenarios on the hydrological response at different space-time scales. However, the structure of GCMs is such that their spatial resolution (hundreds of kilometres) is too coarse to describe the variability of extreme events at basin scale (Burlando and Rosso, 2002). Bridging the space-time gap between the climate scenarios and the usual scale of inputs to hydrological prediction models is a fundamental requisite for evaluating climate change impacts on water resources. Since models operate a simplification of a complex reality, their results cannot be expected to fit climate observations. Identifying local climate scenarios for impact analysis therefore implies the definition of more detailed local scenarios by downscaling GCM or RCM results. Among the output correction methods we consider the statistical approach by Déqué (2007), reported as a 'variable correction method', in which the correction of model outputs is obtained by a function built from the observation dataset and operating a quantile-quantile transformation (Q-Q transform). However, in the case of daily precipitation fields the Q-Q transform is not able to correct the temporal properties of the model output concerning the dry-wet lacunarity process. An alternative correction method is proposed, based on a stochastic description of the arrival-duration-intensity processes in coherence with the Poissonian Rectangular Pulse (PRP) scheme (Eagleson, 1972). In this proposed approach, the Q-Q transform is applied to the PRP variables derived from the daily rainfall datasets. The corrected PRP parameters are then used for the synthetic generation of statistically homogeneous rainfall time series that mimic the persistence of daily observations for the reference period. The PRP parameters are then forced with the GCM scenarios to generate local-scale rainfall records for the 21st century. The statistical parameters characterizing daily storm occurrence, storm intensity and duration needed to apply the PRP scheme are considered among the STARDEX collection of extreme indices.
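The Q-Q transform underlying the 'variable correction method' can be sketched as empirical quantile mapping; in the proposed variant the same mapping is applied to the PRP variables (storm arrival, duration, intensity) rather than directly to daily precipitation. The code below is a minimal illustration with assumed array inputs.

```python
# Minimal sketch of the quantile-quantile (Q-Q) bias correction: model values
# are mapped onto the observed distribution via empirical quantiles.
import numpy as np

def qq_transform(model_values, obs_reference, model_reference):
    """Map model values into observation space by empirical quantile matching."""
    model_ref_sorted = np.sort(model_reference)
    obs_ref_sorted = np.sort(obs_reference)
    # empirical non-exceedance probability of each model value
    probs = np.searchsorted(model_ref_sorted, model_values) / len(model_ref_sorted)
    probs = np.clip(probs, 0.0, 1.0)
    return np.quantile(obs_ref_sorted, probs)
```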
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ferguson, S; Ahmad, S; Chen, Y
2016-06-15
Purpose: To commission and investigate the accuracy of an output (cGy/MU) prediction model for a compact passively scattered proton therapy system. Methods: A previously published output prediction model (Sahoo et al, Med Phys, 35, 5088–5097, 2008) was commissioned for our Mevion S250 proton therapy system. This model is a correction-based model that multiplies correction factors (D/MU_wnc = ROF × SOBPF × RSF × SOBPOCF × OCR × FSF × ISF). These factors account for changes in output due to options (12 large, 5 deep, and 7 small), modulation width M, range R, off-center, off-axis, field-size, and off-isocenter effects. In this study, the model was modified to ROF × SOBPF × RSF × OCR × FSF × ISF-OCF × GACF by merging SOBPOCF and ISF for simplicity and introducing a gantry angle correction factor (GACF). To commission the model, outputs at over 1,000 data points were taken at the time of the system commissioning. The output was predicted by interpolation (1D for SOBPF, FSF, and GACF; 2D for RSF and OCR) with an inverse-square calculation (ISF-OCR). The outputs of 273 combinations of R and M covering a total of 24 options were measured to test the model. To minimize fluence perturbation, scattered dose from the range compensator and patient was not considered. The percent differences between the predicted (P) and measured (M) outputs were calculated to test the prediction accuracy ((P − M)/M × 100%). Results: GACF was required because of up to 3.5% output variation dependence on the gantry angle. A 2D interpolation was required for OCR because the dose distribution was not radially symmetric, especially for the deep options. The average percent differences were −0.03 ± 0.98% (mean ± SD) and the differences of all the measurements fell within ±3%. Conclusion: It is concluded that the model can be clinically used for the compact passively scattered proton therapy system. However, great care should be taken when the field size is less than 5 × 5 cm², where a direct output measurement is required due to substantial output changes caused by irregular block shapes.
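The correction-factor scheme above multiplies interpolated table values; the sketch below illustrates that structure with placeholder tables standing in for the commissioned beam data (all numbers and interpolation nodes are assumptions, and OCR is folded into a single combined factor for brevity).

```python
# Hedged sketch of the output model: cGy/MU predicted as a product of
# interpolated correction factors.
import numpy as np
from scipy.interpolate import interp1d, RegularGridInterpolator

# Placeholder 1-D tables (e.g., SOBPF vs modulation width M, FSF vs field size,
# GACF vs gantry angle) and a placeholder 2-D RSF table vs range R and M.
sobpf_of_M = interp1d([2, 5, 10, 16], [1.00, 0.97, 0.93, 0.90])
fsf_of_size = interp1d([5, 10, 15, 20, 25], [0.97, 0.99, 1.00, 1.00, 1.01])
gacf_of_angle = interp1d([0, 90, 180, 270, 360], [1.000, 0.985, 1.000, 1.015, 1.000])
rsf_of_RM = RegularGridInterpolator(
    (np.array([5.0, 10.0, 15.0, 20.0]), np.array([2.0, 5.0, 10.0, 16.0])),
    np.ones((4, 4)))

def predicted_output(ROF, M, R, field_size, gantry_deg, isf_ocr=1.0):
    """Output = ROF * SOBPF * RSF * (OCR, ISF-OCF combined) * FSF * GACF."""
    return (ROF
            * float(sobpf_of_M(M))
            * float(rsf_of_RM([[R, M]])[0])
            * float(fsf_of_size(field_size))
            * isf_ocr
            * float(gacf_of_angle(gantry_deg)))
```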
Correcting the anion gap for hypoalbuminaemia does not improve detection of hyperlactataemia
Dinh, C H; Ng, R; Grandinetti, A; Joffe, A; Chow, D C
2006-01-01
Background An elevated lactate level reflects impaired tissue oxygenation and is a predictor of mortality. Studies have shown that the anion gap is inadequate as a screen for hyperlactataemia, particularly in critically ill and trauma patients. A proposed explanation for the anion gap's poor sensitivity and specificity in detecting hyperlactataemia is that the serum albumin is frequently low. This study therefore, sought to compare the predictive values of the anion gap and the anion gap corrected for albumin (cAG) as an indicator of hyperlactataemia as defined by a lactate ⩾2.5 mmol/l. Methods A retrospective review of 639 sets of laboratory values from a tertiary care hospital. Patients' laboratory results were included in the study if serum chemistries and lactate were drawn consecutively. The sensitivity, specificity, and predictive values were obtained. A receiver operator characteristics curve (ROC) was drawn and the area under the curve (AUC) was calculated. Results An anion gap ⩾12 provided a sensitivity, specificity, positive predictive value, and negative predictive value of 39%, 89%, 79%, and 58%, respectively, and a cAG ⩾12 provided a sensitivity, specificity, positive predictive value, and negative predictive value of 75%, 59%, 66%, and 69%, respectively. The ROC curves between anion gap and cAG as a predictor of hyperlactataemia were almost identical. The AUC was 0.757 and 0.750, respectively. Conclusions The sensitivities, specificities, and predictive values of the anion gap and cAG were inadequate in predicting the presence of hyperlactataemia. The cAG provides no additional advantage over the anion gap in the detection of hyperlactataemia. PMID:16858097
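The abstract does not state which albumin correction was applied; a widely cited form (after Figge et al.) adds roughly 2.5 mmol/L per 1 g/dL that albumin falls below a normal value of about 4 g/dL. The sketch below is illustrative only and may not match the study's exact formula.

```python
# Hedged sketch of the anion gap and one commonly used albumin correction (cAG).
def anion_gap(na, cl, hco3):
    """Anion gap in mmol/L from serum sodium, chloride and bicarbonate."""
    return na - (cl + hco3)

def corrected_anion_gap(ag, albumin_g_dl, normal_albumin=4.0, factor=2.5):
    """Albumin-corrected anion gap (cAG); constants are the commonly cited values."""
    return ag + factor * (normal_albumin - albumin_g_dl)
```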
Wiegand, Thorsten; Lehmann, Sebastian; Huth, Andreas; Fortin, Marie‐Josée
2016-01-01
Abstract Aim It has been recently suggested that different ‘unified theories of biodiversity and biogeography’ can be characterized by three common ‘minimal sufficient rules’: (1) species abundance distributions follow a hollow curve, (2) species show intraspecific aggregation, and (3) species are independently placed with respect to other species. Here, we translate these qualitative rules into a quantitative framework and assess if these minimal rules are indeed sufficient to predict multiple macroecological biodiversity patterns simultaneously. Location Tropical forest plots in Barro Colorado Island (BCI), Panama, and in Sinharaja, Sri Lanka. Methods We assess the predictive power of the three rules using dynamic and spatial simulation models in combination with census data from the two forest plots. We use two different versions of the model: (1) a neutral model and (2) an extended model that allowed for species differences in dispersal distances. In a first step we derive model parameterizations that correctly represent the three minimal rules (i.e. the model quantitatively matches the observed species abundance distribution and the distribution of intraspecific aggregation). In a second step we applied the parameterized models to predict four additional spatial biodiversity patterns. Results Species‐specific dispersal was needed to quantitatively fulfil the three minimal rules. The model with species‐specific dispersal correctly predicted the species–area relationship, but failed to predict the distance decay, the relationship between species abundances and aggregations, and the distribution of a spatial co‐occurrence index of all abundant species pairs. These results were consistent over the two forest plots. Main conclusions The three ‘minimal sufficient’ rules only provide an incomplete approximation of the stochastic spatial geometry of biodiversity in tropical forests. The assumption of independent interspecific placements is most likely violated in many forests due to shared or distinct habitat preferences. Furthermore, our results highlight missing knowledge about the relationship between species abundances and their aggregation. PMID:27667967
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, S; Robinson, A; Kiess, A
2015-06-15
Purpose: The purpose of this study is to develop an accurate and effective technique to predict and monitor volume changes of the tumor and organs at risk (OARs) from daily cone-beam CTs (CBCTs). Methods: While CBCT is typically used to minimize the patient setup error, its poor image quality impedes accurate monitoring of daily anatomical changes in radiotherapy. Reconstruction artifacts in CBCT often cause undesirable errors in registration-based contour propagation from the planning CT, a conventional way to estimate anatomical changes. To improve the registration and segmentation accuracy, we developed a new deformable image registration (DIR) that iteratively corrects CBCT intensities using slice-based histogram matching during the registration process. Three popular DIR algorithms (hierarchical B-spline, demons, optical flow) augmented by the intensity correction were implemented on a graphics processing unit for efficient computation, and their performances were evaluated on six head and neck (HN) cancer cases. Four trained scientists manually contoured nodal gross tumor volume (GTV) on the planning CT and every other fraction CBCTs for each case, to which the propagated GTV contours by DIR were compared. The performance was also compared with commercial software, VelocityAI (Varian Medical Systems Inc.). Results: Manual contouring showed significant variations, [-76, +141]% from the mean of all four sets of contours. The volume differences (mean ± std in cc) between the average manual segmentation and the four automatic segmentations are 3.70 ± 2.30 (B-spline), 1.25 ± 1.78 (demons), 0.93 ± 1.14 (optical flow), and 4.39 ± 3.86 (VelocityAI). In comparison to the average volume of the manual segmentations, the proposed approach significantly reduced the estimation error by 9% (B-spline), 38% (demons), and 51% (optical flow) over the conventional mutual information based method (VelocityAI). Conclusion: The proposed CT-CBCT registration with local CBCT intensity correction can accurately predict the tumor volume change with reduced errors. Although demonstrated only on HN nodal GTVs, the results imply improved accuracy for other critical structures. This work was supported by NIH/NCI under grant R42CA137886.
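The slice-based histogram matching step can be sketched as remapping each CBCT slice so that its intensity distribution matches the corresponding planning-CT slice; the code below is an illustration of that idea, not the authors' GPU implementation.

```python
# Illustrative sketch of slice-based histogram matching for CBCT intensity
# correction prior to (or during) deformable registration.
import numpy as np

def match_histogram_slice(cbct_slice, ct_slice):
    """Remap one CBCT slice so its intensity CDF matches the CT slice CDF."""
    cbct_flat = cbct_slice.ravel()
    ct_sorted = np.sort(ct_slice.ravel())
    # empirical CDF value (rank) of each CBCT voxel within its own slice
    ranks = np.argsort(np.argsort(cbct_flat)) / (cbct_flat.size - 1)
    matched = np.interp(ranks, np.linspace(0.0, 1.0, ct_sorted.size), ct_sorted)
    return matched.reshape(cbct_slice.shape)

def correct_cbct_volume(cbct, ct):
    """Apply the per-slice correction along the axial (first) axis."""
    return np.stack([match_histogram_slice(c, p) for c, p in zip(cbct, ct)])
```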
Gobin, Laure; Tassignon, Marie-José; Mathysen, Danny
2011-06-01
To propose a method of calculating the power of the 1-sided posterior chamber toric bag-in-the-lens (BIL) intraocular lens (IOL) and propose a misalignment nomogram to calculate the postoperative rotational misalignment or predict the effect of preoperative existing irregular corneal astigmatism. Antwerp University Hospital, Department of Ophthalmology, Antwerp, Belgium. Cohort study. The new IOL calculation formula uses the steepest corneal meridian and flattest corneal meridian separately (regular spherical IOL formula) followed by a customized A-constant approach based on the changes in the IOL principal plane depending on the spherical and cylindrical powers (thickness) of the IOL. The calculation of the remaining astigmatism (power and axis) in cases of postoperative rotational misalignment resulted in a nomogram that can also be used to predict the degree of tolerance for irregular corneal astigmatism correction at the lenticular plane. The calculation is performed using a worksheet. Because 10 degrees of misalignment would result in 35% refractive inaccuracy, it is the maximum acceptable corneal astigmatic irregularity for correction at the lenticular plane. Calculation of spherocylindrical power is specific to each toric IOL. Because the surgeon must fully understand the optical properties of the toric IOL that is going to be implanted, a comprehensive outline of a new calculation method specific to the toric BIL IOL is proposed. Primary rotational misalignment of the toric BIL IOL can be fine tuned postoperatively. Drs. Gobin and Mathysen have no financial or proprietary interest in any material or method mentioned. Additional disclosures are found in the footnotes. Copyright © 2011 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.
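The 10-degree limit quoted above is consistent with the standard rule of thumb that a toric lens misaligned by an angle θ leaves a residual cylinder of about 2·sin(θ) times the corrected cylinder; the paper's worksheet-based calculation may differ in detail, so the sketch below is illustrative only.

```python
# Hedged sketch: residual fraction of the intended cylinder correction left
# uncorrected after rotational misalignment of a toric IOL.
import math

def residual_cylinder_fraction(misalignment_deg):
    """Fraction of the intended cylinder correction that remains uncorrected."""
    return 2.0 * math.sin(math.radians(misalignment_deg))

print(residual_cylinder_fraction(10.0))  # ≈ 0.347, consistent with the quoted ~35%
```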
Ding, Haiquan; Lu, Qipeng; Gao, Hongzhi; Peng, Zhongqi
2014-01-01
To facilitate non-invasive diagnosis of anemia, dedicated equipment was developed and a non-invasive hemoglobin (HB) detection method based on a back-propagation artificial neural network (BP-ANN) was studied. In this paper, we combined a broadband light source composed of 9 LEDs with a grating spectrograph and a Si photodiode array, and developed a high-performance spectrophotometric system. Using this equipment, fingertip spectra of 109 volunteers were measured. In order to remove the interference of redundant data, principal component analysis (PCA) was applied to reduce the dimensionality of the collected spectra. The principal components of the spectra were then taken as input to the BP-ANN model. On this basis we obtained the optimal network structure, in which the node numbers of the input, hidden, and output layers were 9, 11, and 1. Calibration and correction sample sets were used for analyzing the accuracy of non-invasive hemoglobin measurement, and a prediction sample set was used for testing the adaptability of the model. The correlation coefficient of the network model established by this method is 0.94, and the standard errors of calibration, correction, and prediction are 11.29 g/L, 11.47 g/L, and 11.01 g/L, respectively. The results prove that good correlations exist between the spectra of the three sample sets and actual hemoglobin levels, and that the model has good robustness. This indicates that the developed spectrophotometric system, using BP-ANN combined with PCA, has potential for the non-invasive detection of HB levels. PMID:24761296
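The modelling pipeline (9 principal components feeding a single hidden layer of 11 nodes) can be sketched as below; scikit-learn stands in for the hand-coded back-propagation network, and the data arrays are placeholders.

```python
# Illustrative sketch of the PCA + BP-ANN pipeline for hemoglobin regression.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline

# spectra: (n_samples, n_wavelengths) fingertip spectra; hb: reference HB in g/L.
spectra = np.random.rand(109, 128)          # placeholder data
hb = np.random.uniform(80, 160, size=109)   # placeholder reference values

model = make_pipeline(
    PCA(n_components=9),
    MLPRegressor(hidden_layer_sizes=(11,), max_iter=5000, random_state=0),
)
model.fit(spectra, hb)
hb_pred = model.predict(spectra)
```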
Piper, Rory J; Yoong, Michael M; Pujar, Suresh; Chin, Richard F
2014-01-01
Background Correcting volumetric measurements of brain structures for intracranial volume (ICV) is important in comparing volumes across subjects with different ICV. The aim of this study was to investigate whether intracranial area (ICA) reliably predicts actual ICV in a healthy pediatric cohort and in children with convulsive status epilepticus (CSE). Methods T1-weighted volumetric MRI was performed on 20 healthy children (control group), 10 with CSE with structurally normal MRI (CSE/MR-), and 12 with CSE with structurally abnormal MRI (CSE/MR+). ICA, using a mid-sagittal slice, and the actual ICV were measured. Results A high Spearman correlation was found between the ICA and ICV measurements in the control (r = 0.96; P < 0.0001), CSE/MR− (r = 0.93; P = 0.0003), and CSE/MR+ (r = 0.94; P < 0.0001) groups. On comparison of predicted and actual ICV, there was no significant difference in the CSE/MR− group (P = 0.77). However, the comparison between predicted and actual ICV was significantly different in the CSE/MR+ (P = 0.001) group. Our Bland–Altman plot showed that the ICA method consistently overestimated ICV in children in the CSE/MR+ group, especially in those with small ICV or widespread structural abnormalities. Conclusions After further validation, ICA measurement may be a reliable alternative to measuring actual ICV when correcting volume measurements for ICV, even in children with localized MRI abnormalities. Caution should be applied when the method is used in children with small ICV and those with multilobar brain pathology. PMID:25365798
Luo, Wei; Phung, Dinh; Tran, Truyen; Gupta, Sunil; Rana, Santu; Karmakar, Chandan; Shilton, Alistair; Yearwood, John; Dimitrova, Nevenka; Ho, Tu Bao; Venkatesh, Svetha; Berk, Michael
2016-12-16
As more and more researchers are turning to big data for new opportunities of biomedical discoveries, machine learning models, as the backbone of big data analysis, are mentioned more often in biomedical journals. However, owing to the inherent complexity of machine learning methods, they are prone to misuse. Because of the flexibility in specifying machine learning models, the results are often insufficiently reported in research articles, hindering reliable assessment of model validity and consistent interpretation of model outputs. To attain a set of guidelines on the use of machine learning predictive models within clinical settings to make sure the models are correctly applied and sufficiently reported so that true discoveries can be distinguished from random coincidence. A multidisciplinary panel of machine learning experts, clinicians, and traditional statisticians were interviewed, using an iterative process in accordance with the Delphi method. The process produced a set of guidelines that consists of (1) a list of reporting items to be included in a research article and (2) a set of practical sequential steps for developing predictive models. A set of guidelines was generated to enable correct application of machine learning models and consistent reporting of model specifications and results in biomedical research. We believe that such guidelines will accelerate the adoption of big data analysis, particularly with machine learning methods, in the biomedical research community. ©Wei Luo, Dinh Phung, Truyen Tran, Sunil Gupta, Santu Rana, Chandan Karmakar, Alistair Shilton, John Yearwood, Nevenka Dimitrova, Tu Bao Ho, Svetha Venkatesh, Michael Berk. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 16.12.2016.
Assessment of statistical methods used in library-based approaches to microbial source tracking.
Ritter, Kerry J; Carruthers, Ethan; Carson, C Andrew; Ellender, R D; Harwood, Valerie J; Kingsley, Kyle; Nakatsu, Cindy; Sadowsky, Michael; Shear, Brian; West, Brian; Whitlock, John E; Wiggins, Bruce A; Wilbur, Jayson D
2003-12-01
Several commonly used statistical methods for fingerprint identification in microbial source tracking (MST) were examined to assess the effectiveness of pattern-matching algorithms to correctly identify sources. Although numerous statistical methods have been employed for source identification, no widespread consensus exists as to which is most appropriate. A large-scale comparison of several MST methods, using identical fecal sources, presented a unique opportunity to assess the utility of several popular statistical methods. These included discriminant analysis, nearest neighbour analysis, maximum similarity and average similarity, along with several measures of distance or similarity. Threshold criteria for excluding uncertain or poorly matched isolates from final analysis were also examined for their ability to reduce false positives and increase prediction success. Six independent libraries used in the study were constructed from indicator bacteria isolated from fecal materials of humans, seagulls, cows and dogs. Three of these libraries were constructed using the rep-PCR technique and three relied on antibiotic resistance analysis (ARA). Five of the libraries were constructed using Escherichia coli and one using Enterococcus spp. (ARA). Overall, the outcome of this study suggests a high degree of variability across statistical methods. Despite large differences in correct classification rates among the statistical methods, no single statistical approach emerged as superior. Thresholds failed to consistently increase rates of correct classification and improvement was often associated with substantial effective sample size reduction. Recommendations are provided to aid in selecting appropriate analyses for these types of data.
NASA Astrophysics Data System (ADS)
Rodi, A. R.; Leon, D. C.
2012-05-01
Geometric altitude data from a combined Global Navigation Satellite System (GNSS) and inertial measurement unit (IMU) system on the University of Wyoming King Air research aircraft are used to estimate acceleration effects on static pressure measurement. Using data collected during periods of accelerated flight, comparison of measured pressure with that derived from GNSS/IMU geometric altitude show that errors exceeding 150 Pa can occur which is significant in airspeed and atmospheric air motion determination. A method is developed to predict static pressure errors from analysis of differential pressure measurements from a Rosemount model 858 differential pressure air velocity probe. The method was evaluated with a carefully designed probe towed on connecting tubing behind the aircraft - a "trailing cone" - in steady flight, and shown to have a precision of about ±10 Pa over a wide range of conditions including various altitudes, power settings, and gear and flap extensions. Under accelerated flight conditions, compared to the GNSS/IMU data, this algorithm predicts corrections to a precision of better than ±20 Pa. Some limiting factors affecting the precision of static pressure measurement on a research aircraft are examined.
A novel knowledge-based potential for RNA 3D structure evaluation
NASA Astrophysics Data System (ADS)
Yang, Yi; Gu, Qi; Zhang, Ben-Gong; Shi, Ya-Zhou; Shao, Zhi-Gang
2018-03-01
Ribonucleic acids (RNAs) play a vital role in biology, and knowledge of their three-dimensional (3D) structure is required to understand their biological functions. Recently structural prediction methods have been developed to address this issue, but a series of RNA 3D structures are generally predicted by most existing methods. Therefore, the evaluation of the predicted structures is generally indispensable. Although several methods have been proposed to assess RNA 3D structures, the existing methods are not precise enough. In this work, a new all-atom knowledge-based potential is developed for more accurately evaluating RNA 3D structures. The potential not only includes local and nonlocal interactions but also fully considers the specificity of each RNA by introducing a retraining mechanism. Based on extensive test sets generated from independent methods, the proposed potential correctly distinguished the native state and ranked near-native conformations to effectively select the best. Furthermore, the proposed potential precisely captured RNA structural features such as base-stacking and base-pairing. Comparisons with existing potential methods show that the proposed potential is very reliable and accurate in RNA 3D structure evaluation. Project supported by the National Science Foundation of China (Grants Nos. 11605125, 11105054, 11274124, and 11401448).
Work characteristics as predictors of correctional supervisors’ health outcomes
Buden, Jennifer C.; Dugan, Alicia G.; Namazi, Sara; Huedo-Medina, Tania B.; Cherniack, Martin G.; Faghri, Pouran D.
2016-01-01
Objective This study examined associations among health behaviors, psychosocial work factors, and health status. Methods Correctional supervisors (n=157) completed a survey that assessed interpersonal and organizational views on health. Chi-square and logistic regressions were used to examine relationships among variables. Results Respondents had a higher prevalence of obesity and comorbidities compared to the general U.S. adult population. Burnout was significantly associated with nutrition, physical activity, sleep duration, sleep quality, diabetes, and anxiety/depression. Job meaning, job satisfaction and workplace social support may predict health behaviors and outcomes. Conclusions Correctional supervisors are understudied and have poor overall health status. Improving health behaviors of middle-management employees may have a beneficial effect on the health of the entire workforce. This paper demonstrates the importance of psychosocial work factors that may contribute to health behaviors and outcomes. PMID:27483335
Cramer, C.H.; Kumar, A.
2003-01-01
Engineering seismoscope data collected at distances less than 300 km for the M 7.7 Bhuj, India, mainshock are compatible with ground-motion attenuation in eastern North America (ENA). The mainshock ground-motion data have been corrected to a common geological site condition using the factors of Joyner and Boore (2000) and a classification scheme of Quaternary or Tertiary sediments or rock. We then compare these data to ENA ground-motion attenuation relations. Despite uncertainties in recording method, geological site corrections, common tectonic setting, and the amount of regional seismic attenuation, the corrected Bhuj dataset agrees with the collective predictions by ENA ground-motion attenuation relations within a factor of 2. This level of agreement is within the dataset uncertainties and the normal variance for recorded earthquake ground motions.
Docking screens: right for the right reasons?
Kolb, Peter; Irwin, John J
2009-01-01
Whereas docking screens have emerged as the most practical way to use protein structure for ligand discovery, an inconsistent track record raises questions about how well docking actually works. In its favor, a growing number of publications report the successful discovery of new ligands, often supported by experimental affinity data and controls for artifacts. Few reports, however, actually test the underlying structural hypotheses that docking makes. To be successful and not just lucky, prospective docking must not only rank a true ligand among the top scoring compounds, it must also correctly orient the ligand so the score it receives is biophysically sound. If the correct binding pose is not predicted, a skeptic might well infer that the discovery was serendipitous. Surveying over 15 years of the docking literature, we were surprised to discover how rarely sufficient evidence is presented to establish whether docking actually worked for the right reasons. The paucity of experimental tests of theoretically predicted poses undermines confidence in a technique that has otherwise become widely accepted. Of course, solving a crystal structure is not always possible, and even when it is, it can be a lot of work, and is not readily accessible to all groups. Even when a structure can be determined, investigators may prefer to gloss over an erroneous structural prediction to better focus on their discovery. Still, the absence of a direct test of theory by experiment is a loss for method developers seeking to understand and improve docking methods. We hope this review will motivate investigators to solve structures and compare them with their predictions whenever possible, to advance the field.
A scan-angle correction for thermal infrared multispectral data using side lapping images
Watson, K.
1996-01-01
Thermal infrared multispectral scanner (TIMS) images, acquired with side lapping flight lines, provide dual angle observations of the same area on the ground and can thus be used to estimate variations in the atmospheric transmission with scan angle. The method was tested using TIMS aircraft data for six flight lines with about 30% sidelap for an area within Joshua Tree National Park, California. Generally the results correspond to predictions for the transmission scan-angle coefficient based on a standard atmospheric model although some differences were observed at the longer wavelength channels. A change was detected for the last pair of lines that may indicate either spatial or temporal atmospheric variation. The results demonstrate that the method provides information for correcting regional survey data (requiring multiple adjacent flight lines) that can be important in detecting subtle changes in lithology.
Haarman, Juliet A M; Maartens, Erik; van der Kooij, Herman; Buurke, Jaap H; Reenalda, Jasper; Rietman, Johan S
2017-12-02
During gait training, physical therapists continuously supervise stroke survivors and provide physical support to their pelvis when they judge that the patient is unable to keep his balance. This paper is the first to provide quantitative data about the corrective forces that therapists use during gait training. It is assumed that changes in the acceleration of a patient's COM are a good predictor of therapeutic balance assistance during the training sessions. Therefore, this paper provides a method that predicts the timing of therapeutic balance assistance, based on acceleration data of the sacrum. Eight sub-acute stroke survivors and seven therapists were included in this study. Patients were asked to perform straight-line walking as well as slalom walking in a conventional training setting. Acceleration of the sacrum was captured by an Inertial Magnetic Measurement Unit. Balance-assisting corrective forces applied by the therapist were collected from two force sensors positioned on both sides of the patient's hips. Measures to characterize the therapeutic balance assistance were the amount of force, duration, impulse and the anatomical plane in which the assistance took place. Based on the acceleration data of the sacrum, an algorithm was developed to predict therapeutic balance assistance. To validate the developed algorithm, the events of balance assistance predicted by the algorithm were compared with the actual provided therapeutic assistance. The algorithm was able to predict the actual therapeutic assistance with a Positive Predictive Value of 87% and a True Positive Rate of 81%. Assistance mainly took place over the medio-lateral axis, and corrective forces of about 2% of the patient's body weight (15.9 N (11), median (IQR)) were provided by therapists in this plane. Median duration of balance assistance was 1.1 s (0.6) (median (IQR)) and median impulse was 9.4 Ns (8.2) (median (IQR)). Although therapists were specifically instructed to aim for the force sensors on the iliac crest, a different contact location was reported in 22% of the corrections. This paper presents insights into the behavior of therapists regarding their manual physical assistance during gait training. A quantitative dataset was presented, representing therapeutic balance-assisting force characteristics. Furthermore, an algorithm was developed that predicts events at which therapeutic balance assistance was provided. Prediction scores remain high when different therapists and patients were analyzed with the same algorithm settings. Both the quantitative dataset and the developed algorithm can serve as technical input in the development of (robot-controlled) balance supportive devices.
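The abstract does not specify the detection rule, so the sketch below shows only one plausible threshold-based detector of assistance events from medio-lateral sacral acceleration; thresholds, window lengths and variable names are assumptions, not the published algorithm.

```python
# Illustrative sketch: detect onsets of balance-assistance events from the
# medio-lateral acceleration of the sacrum.
import numpy as np

def detect_assist_events(acc_ml, fs, threshold=1.5, min_gap_s=0.5):
    """Return sample indices where |ML acceleration| (m/s^2) starts exceeding the threshold."""
    above = np.abs(acc_ml) > threshold
    events, last = [], -np.inf
    for i in np.flatnonzero(above):
        if (i - last) / fs >= min_gap_s:   # only count a new event after a quiet gap
            events.append(i)
        last = i
    return np.asarray(events)

# Predicted events would then be compared with the force-sensor record to
# compute the positive predictive value and true positive rate.
```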
Calculation of turbulence-driven secondary motion in ducts with arbitrary cross section
NASA Technical Reports Server (NTRS)
Demuren, A. O.
1989-01-01
Calculation methods for turbulent duct flows are generalized for ducts with arbitrary cross-sections. The irregular physical geometry is transformed into a regular one in computational space, and the flow equations are solved with a finite-volume numerical procedure. The turbulent stresses are calculated with an algebraic stress model derived by simplifying model transport equations for the individual Reynolds stresses. Two variants of such a model are considered. These procedures enable the prediction of both the turbulence-driven secondary flow and the anisotropy of the Reynolds stresses, in contrast to some of the earlier calculation methods. Model predictions are compared to experimental data for developed flow in a triangular duct, a trapezoidal duct and a rod-bundle geometry. The correct trends are predicted, and the quantitative agreement is mostly fair. The simpler variant of the algebraic stress model produced better agreement with the measured data.
Continuum Model of Gas Uptake for Inhomogeneous Fluids
Ihm, Yungok; Cooper, Valentino R.; Vlcek, Lukas; ...
2017-07-20
We describe a continuum model of gas uptake for inhomogeneous fluids (CMGIF) and use it to predict fluid adsorption in porous materials directly from gas-substrate interaction energies determined by first-principles calculations or accurate effective force fields. The method uses a perturbation approach to correct bulk fluid interactions for local inhomogeneities caused by gas-substrate interactions, and predicts the local pressure and density of the adsorbed gas. The accuracy and limitations of the model are tested by comparison with the results of Grand Canonical Monte Carlo simulations of hydrogen uptake in metal-organic frameworks (MOFs). We show that the approach provides accurate predictions at room temperature and at low temperatures for less strongly interacting materials. As a result, the speed of the CMGIF method makes it a promising candidate for high-throughput materials discovery in connection with existing databases of nano-porous materials.
Scene-based nonuniformity correction technique for infrared focal-plane arrays.
Liu, Yong-Jin; Zhu, Hong; Zhao, Yi-Gong
2009-04-20
A scene-based nonuniformity correction algorithm is presented to compensate for the gain and bias nonuniformity in infrared focal-plane array sensors; the algorithm consists of three parts. First, an interframe-prediction method is used to estimate the true scene, since nonuniformity correction is a typical blind-estimation problem in which both the scene values and the detector parameters are unavailable. Second, the estimated scene, along with the corresponding observed data from the detectors, is used to update the gain and the bias by means of a line-fitting technique. Finally, with these nonuniformity parameters, the compensated output of each detector is obtained by evaluating a very simple formula. The advantages of the proposed algorithm lie in its low computational complexity, low storage requirements, and ability to capture temporal drifts in the nonuniformity parameters. The performance of every module is demonstrated with simulated and real infrared image sequences. Experimental results indicate that the proposed algorithm exhibits superior correction performance.
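The interframe scene prediction itself is not reproduced here, but the remaining two steps — a per-pixel line fit of observed output against the estimated scene, and the simple correction formula — can be sketched as follows. This is a generic gain/bias nonuniformity correction under the linear detector model y = gain·x + bias, not the paper's implementation.

```python
import numpy as np

def update_gain_bias(observed, estimated_scene):
    """Per-pixel least-squares line fit y = gain*x + bias between the
    estimated true scene x and the observed detector output y, taken
    over a stack of frames with shape (frames, rows, cols)."""
    x, y = np.asarray(estimated_scene), np.asarray(observed)
    x_mean, y_mean = x.mean(axis=0), y.mean(axis=0)
    cov = ((x - x_mean) * (y - y_mean)).mean(axis=0)
    var = ((x - x_mean) ** 2).mean(axis=0)
    gain = cov / np.where(var == 0, 1, var)
    bias = y_mean - gain * x_mean
    return gain, bias

def correct_frame(raw, gain, bias):
    """Apply the simple per-pixel correction (raw - bias) / gain."""
    return (raw - bias) / np.where(gain == 0, 1, gain)
```

In a streaming implementation the means, covariance, and variance would be updated recursively frame by frame, which is what allows temporal drifts in the parameters to be tracked.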
NASA Astrophysics Data System (ADS)
Saperstein, E. E.; Baldo, M.; Pankratov, S. S.; Tolokonnikov, S. V.
2018-05-01
A method is presented to evaluate particle-phonon coupling (PC) corrections to the single-particle energies in semimagic nuclei, based on a direct solution of the Dyson equation with a PC-corrected mass operator. It is used to find the odd-even mass differences between even Pb and Sn isotopes and their odd-proton neighbors. The Fayans energy density functional DF3-a is used, which already gives highly accurate predictions for these mass differences at the mean-field level. For the lead chain, accounting for the PC corrections induced by the low-lying 2₁⁺ and 3₁⁻ phonons brings the theory into significantly better agreement with the experimental data. For the tin chain, the situation is less clear-cut: the PC corrections improve the agreement for the addition mode but spoil it for the removal mode. We discuss the reason for this discrepancy.
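The Dyson equation referred to above can be written, for the PC-corrected single-particle energies, in the schematic form below. This is the generic pole form of the phonon-induced self-energy, given only to indicate the structure of the calculation; the notation is illustrative and the detailed expressions used with the DF3-a functional are not reproduced.

```latex
% Schematic Dyson equation with a PC-corrected mass operator
% (notation illustrative):
\[
  \det\bigl[(\varepsilon-\varepsilon_{\lambda})\,\delta_{\lambda\lambda'}
            -\delta\Sigma^{\mathrm{PC}}_{\lambda\lambda'}(\varepsilon)\bigr]=0,
  \qquad
  \delta\Sigma^{\mathrm{PC}}_{\lambda\lambda'}(\varepsilon)\;\sim\;
  \sum_{L=2_1^{+},\,3_1^{-}}\ \sum_{\nu}
  \frac{g^{L}_{\lambda\nu}\,\bigl(g^{L}_{\lambda'\nu}\bigr)^{*}}
       {\varepsilon-\varepsilon_{\nu}\mp\omega_{L}},
\]
% where g^L are phonon creation amplitudes, \omega_L the phonon energies,
% and the sign depends on whether the intermediate state \nu is a
% particle or a hole.
```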
Cogo-Moreira, Hugo; Brandão de Ávila, Clara Regina; Ploubidis, George B.; de Jesus Mari, Jair
2013-01-01
Objective: To investigate whether specific domains of musical perception (the temporal and melodic domains) predict the word-level reading skills of eight- to ten-year-old children (n = 235) with reading difficulties, a normal intelligence quotient, and no previous exposure to music education classes. Method: A general-specific solution of the Montreal Battery of Evaluation of Amusia (MBEA), which underlies a musical perception construct and comprises three latent factors (the general, temporal, and melodic domains), was regressed on word-level reading skills (rate of correct isolated words/non-words read per minute). Results: The general and melodic latent domains predicted word-level reading skills. PMID:24358358
NASA Technical Reports Server (NTRS)
Bauer, A. B.; Munson, A. G.
1977-01-01
Airframe noise measurements are reported for the DC-9-31 aircraft flown at several speeds and with a number of flap, landing gear, and slat extension configurations. The data are corrected for atmospheric attenuation and spherical divergence, and are presented for an overhead position normalized to a 1-meter height. The sound pressure levels are found to vary approximately as the fifth power of flight velocity. Both lift and drag dipoles exist as a significant part of the airframe noise. The data are compared with airframe noise predictions using the drag element and the data analysis methods. Although some of the predictions are very good, further work is needed to refine these methods, particularly for the gear-down and flaps-down configurations.
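The reported fifth-power dependence of sound pressure on flight velocity translates into a simple decibel relation; the numerical example below is a worked illustration of that scaling, not a value taken from the report.

```latex
% If mean-square pressure scales as p^2 \propto V^5, the change in
% overall sound pressure level between two flight speeds V_1, V_2 is
\[
  \Delta\mathrm{SPL}
    = 10\log_{10}\!\left[(V_2/V_1)^{5}\right]
    = 50\log_{10}(V_2/V_1)\ \mathrm{dB}.
\]
% Example: a 25% increase in airspeed raises the level by roughly
% 50*log10(1.25) ~ 4.8 dB.
```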
Functional classification of protein structures by local structure matching in graph representation.
Mills, Caitlyn L; Garg, Rohan; Lee, Joslynn S; Tian, Liang; Suciu, Alexandru; Cooperman, Gene; Beuning, Penny J; Ondrechen, Mary Jo
2018-03-31
As a result of high-throughput protein structure initiatives, over 14,400 protein structures have been solved by structural genomics (SG) centers and participating research groups. While the totality of SG data represents a tremendous contribution to genomics and structural biology, reliable functional information for these proteins is generally lacking. Better functional predictions for SG proteins will add substantial value to the structural information already obtained. Our method described herein, Graph Representation of Active Sites for Prediction of Function (GRASP-Func), quickly and accurately predicts the biochemical function of proteins by representing residues at the predicted local active site as graphs rather than in Cartesian coordinates. We compare the GRASP-Func method to our previously reported method, Structurally Aligned Local Sites of Activity (SALSA), using the ribulose phosphate binding barrel (RPBB), 6-hairpin glycosidase (6-HG), and Concanavalin A-like Lectins/Glucanase (CAL/G) superfamilies as test cases. In each of the superfamilies, SALSA and the much faster GRASP-Func method yield similar, correct classifications of previously characterized proteins, providing a validated benchmark for the new method. In addition, we analyzed SG proteins using our SALSA and GRASP-Func methods to predict function. Forty-one SG proteins in the RPBB superfamily, nine SG proteins in the 6-HG superfamily, and one SG protein in the CAL/G superfamily were successfully classified into one of the functional families in their respective superfamily by both methods. This improved, faster, validated computational method can yield more reliable predictions of function that can be used for a wide variety of applications by the community. © 2018 The Authors. Protein Science published by Wiley Periodicals, Inc. on behalf of The Protein Society.
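The core idea — representing a predicted local active site as a graph rather than a set of Cartesian coordinates — can be sketched minimally as below: residues become nodes labeled by type, with edges between residues whose representative atoms fall within a distance cutoff. The cutoff, node labels, and downstream comparison step are illustrative assumptions, not the published GRASP-Func definitions.

```python
import numpy as np
import networkx as nx

def active_site_graph(residues, cutoff=8.0):
    """Build a graph of predicted active-site residues.

    residues: list of (residue_name, xyz) pairs, where xyz is a
    3-vector for a representative atom (e.g. C-alpha).  Nodes carry the
    residue type; edges connect residues within `cutoff` angstroms."""
    g = nx.Graph()
    for i, (name, _) in enumerate(residues):
        g.add_node(i, residue=name)
    for i, (_, xi) in enumerate(residues):
        for j, (_, xj) in enumerate(residues):
            if i < j and np.linalg.norm(np.asarray(xi) - np.asarray(xj)) <= cutoff:
                g.add_edge(i, j)
    return g

# Toy three-residue site, purely for illustration:
site = [("HIS", (0.0, 0.0, 0.0)), ("ASP", (5.0, 1.0, 0.0)), ("SER", (12.0, 0.0, 0.0))]
g = active_site_graph(site)
```

Classification would then compare such graphs (node labels plus connectivity) across characterized and uncharacterized proteins, which is what removes the need for explicit structural superposition.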
Problems With Risk Reclassification Methods for Evaluating Prediction Models
Pepe, Margaret S.
2011-01-01
For comparing the performance of a baseline risk prediction model with one that includes an additional predictor, a risk reclassification analysis strategy has been proposed. The first step is to cross-classify risks calculated according to the 2 models for all study subjects. Summary measures including the percentage of reclassification and the percentage of correct reclassification are calculated, along with 2 reclassification calibration statistics. The author shows that interpretations of the proposed summary measures and P values are problematic. The author's recommendation is to display the reclassification table, because it shows interesting information, but to use alternative methods for summarizing and comparing model performance. The Net Reclassification Index has been suggested as one alternative method. The author argues for reporting components of the Net Reclassification Index because they are more clinically relevant than is the single numerical summary measure. PMID:21555714
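The components of the Net Reclassification Index that the author recommends reporting are simple functions of the reclassification table. The sketch below uses the standard component definitions with made-up counts; it is not code or data from the paper.

```python
def nri_components(up_events, down_events, n_events,
                   up_nonevents, down_nonevents, n_nonevents):
    """Event and non-event components of the Net Reclassification Index.

    'up'/'down' are counts of subjects moved to a higher/lower risk
    category by the expanded model relative to the baseline model."""
    nri_event = (up_events - down_events) / n_events
    nri_nonevent = (down_nonevents - up_nonevents) / n_nonevents
    return nri_event, nri_nonevent, nri_event + nri_nonevent

# Illustrative counts: 30 of 200 events move up and 10 move down;
# 40 of 800 non-events move down and 25 move up.
print(nri_components(30, 10, 200, 25, 40, 800))
```

Reporting the two components separately, as argued above, distinguishes improved risk assignment among events from that among non-events instead of collapsing both into a single number.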
Comparison of online and offline based merging methods for high resolution rainfall intensities
NASA Astrophysics Data System (ADS)
Shehu, Bora; Haberlandt, Uwe
2016-04-01
Accurate rainfall intensities with high spatial and temporal resolution are crucial for urban flow prediction. Commonly, raw or bias-corrected radar fields are used for forecasting, while different merging products are employed for simulation. The merging products have proven adequate for estimating rainfall intensities, but their application in forecasting is limited because they were developed for offline use. This study aims at adapting and refining the offline merging techniques for online implementation, and at comparing the performance of these methods for high-resolution rainfall data. Radar bias correction based on mean fields and on quantile mapping is analyzed individually and also implemented within conditional merging. Special attention is given to the impact of different spatial and temporal filters on the predictive skill of all methods. Raw radar data and kriging interpolation of station data are used as references to check the benefit of the merged products. The methods are applied to several extreme events in the period 2006-2012 caused by different meteorological conditions, and their performance is evaluated by split sampling. The study area lies within the 112 km radius of the Hannover radar in Lower Saxony, Germany, and the data set consists of 80 recording stations at 5 min time steps. The results of this study reveal how the performance of the methods is affected by the adjustment of the radar data, the choice of merging method, and the selected event. Merging techniques can be used to improve the performance of online rainfall estimation, which opens the way to the application of merging products in forecasting.
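The two radar adjustment schemes mentioned — a mean field bias correction and quantile mapping against gauge data — can be sketched generically as below. This is a textbook-style illustration; the study's spatial/temporal filtering and the conditional-merging step are not reproduced, and all names are placeholders.

```python
import numpy as np

def mean_field_bias(radar_at_gauges, gauge_values):
    """Single multiplicative bias factor: total gauge rainfall divided
    by total radar rainfall at the gauge locations."""
    return np.asarray(gauge_values).sum() / max(np.asarray(radar_at_gauges).sum(), 1e-9)

def quantile_mapping(radar_field, radar_at_gauges, gauge_values):
    """Map each radar value onto the gauge distribution by matching
    empirical quantiles estimated at the gauge locations."""
    radar_sorted = np.sort(np.asarray(radar_at_gauges))
    gauge_sorted = np.sort(np.asarray(gauge_values))
    field = np.asarray(radar_field, dtype=float)
    q = np.searchsorted(radar_sorted, field.ravel(), side="right") / radar_sorted.size
    mapped = np.quantile(gauge_sorted, np.clip(q, 0.0, 1.0))
    return mapped.reshape(field.shape)

# Usage: corrected = mean_field_bias(r_g, g) * radar_field
#        or corrected = quantile_mapping(radar_field, r_g, g)
```

In an online setting the gauge-radar pairs would come from a moving window of recent time steps, which is the main adaptation this study investigates.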
Large-extent digital soil mapping approaches for total soil depth
NASA Astrophysics Data System (ADS)
Mulder, Titia; Lacoste, Marine; Saby, Nicolas P. A.; Arrouays, Dominique
2015-04-01
Total soil depth (SDt) plays a key role in supporting various ecosystem services and properties, including plant growth, water availability, and carbon stocks. Therefore, predictive mapping of SDt has been included as one of the deliverables of the GlobalSoilMap project. In this work, SDt was predicted for France following the GlobalSoilMap specifications, which require modelling at 90 m resolution. The first method, further referred to as DM, consisted of modelling the deterministic trend in SDt using data mining, followed by a bias correction and ordinary kriging of the residuals. Given that the total surface area of France is about 540,000 km², the methods employed must be able to deal with large data sets. Therefore, a second method, multi-resolution kriging (MrK) for large datasets, was implemented. This method consisted of modelling the deterministic trend with a linear model, followed by interpolation of the residuals. For both methods, the general trend was assumed to be explained by the biotic and abiotic environmental conditions, as described by the soil-landscape paradigm. The mapping accuracy was evaluated by internal validation and by its concordance with previous soil maps. In addition, the prediction interval for DM and the confidence interval for MrK were determined. Finally, the opportunities and limitations of both approaches were evaluated. The results showed consistency in the mapped spatial patterns and a good prediction of the mean values. DM was better at predicting extreme values owing to the bias correction, and was more powerful in capturing the deterministic trend than the linear model of the MrK approach. However, MrK was found to be more straightforward and more flexible in delivering spatially explicit uncertainty measures. The validation indicated that DM was more accurate than MrK. Improvements for DM may be expected by predicting soil depth classes. MrK shows potential for modelling beyond the country level at high resolution. Large-extent digital soil mapping approaches for SDt may be improved by (1) taking into account SDt observations that are censored and (2) using high-resolution biotic and abiotic environmental data. The latter may improve the modelling of the soil-landscape interactions influencing soil pedogenesis. In conclusion, this work provided a robust and reproducible method (DM) for high-resolution soil property modelling, in accordance with the GlobalSoilMap requirements, and an efficient alternative for large-extent digital soil mapping (MrK).
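The DM approach — a data-mining trend model followed by kriging of the residuals — has the general regression-kriging structure sketched below. Here a random forest stands in for the data-mining step and Gaussian-process regression for ordinary kriging, so this is an illustration of the workflow under those assumptions rather than the GlobalSoilMap implementation; the bias-correction step is omitted.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def fit_regression_kriging(coords, covariates, depth):
    """Deterministic trend from environmental covariates, plus a
    spatial model fitted to the trend residuals."""
    trend = RandomForestRegressor(n_estimators=200, random_state=0)
    trend.fit(covariates, depth)
    residuals = depth - trend.predict(covariates)
    spatial = GaussianProcessRegressor(
        kernel=RBF(length_scale=5000.0) + WhiteKernel(), normalize_y=True)
    spatial.fit(coords, residuals)
    return trend, spatial

def predict_regression_kriging(trend, spatial, coords, covariates):
    """Prediction = trend + interpolated residual."""
    return trend.predict(covariates) + spatial.predict(coords)
```

For country-scale data sets, the residual model would need a sparse or multi-resolution approximation (as in MrK) rather than a dense Gaussian process, which scales cubically with the number of observations.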
Murata, Fernando Henrique Antunes; Ferreira, Marina Neves; Pereira-Chioccola, Vera Lucia; Spegiorin, Lígia Cosentino Junqueira Franco; Meira-Strejevitch, Cristina da Silva; Gava, Ricardo; Silveira-Carvalho, Aparecida Perpétuo; de Mattos, Luiz Carlos; Brandão de Mattos, Cinara Cássia
2017-09-01
Toxoplasmosis during pregnancy can have severe consequences. The use of sensitive and specific serological and molecular methods is extremely important for the correct diagnosis of the disease. We compared the ELISA and ELFA serological methods, conventional PCR (cPCR), nested PCR and quantitative PCR (qPCR) for the diagnosis of Toxoplasma gondii infection in pregnant women without clinical suspicion of toxoplasmosis (G1 = 94) and with clinical suspicion of toxoplasmosis (G2 = 53). The results were compared using the Kappa index, and the sensitivity, specificity, positive predictive value and negative predictive value were calculated. The serological methods showed concordance between ELISA and ELFA, even though ELFA identified more positive cases than ELISA. The molecular methods gave discrepant results, with cPCR using the B22/23 primers showing greater sensitivity and lower specificity than the other molecular methods. Copyright © 2017 Elsevier Inc. All rights reserved.
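The agreement and diagnostic-performance measures reported (Kappa index, sensitivity, specificity, positive and negative predictive values) follow standard definitions; a small sketch of their computation from a 2x2 table is given below, with illustrative counts rather than the study's data.

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV, NPV and Cohen's kappa from a 2x2
    table of test results against a reference standard."""
    n = tp + fp + fn + tn
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    p_obs = (tp + tn) / n                                  # observed agreement
    p_exp = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2  # chance agreement
    kappa = (p_obs - p_exp) / (1 - p_exp)
    return sens, spec, ppv, npv, kappa

# Illustrative counts only (not from the study):
print(diagnostic_metrics(tp=40, fp=5, fn=8, tn=94))
```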
NASA Astrophysics Data System (ADS)
Duan, Fajie; Fu, Xiao; Jiang, Jiajia; Huang, Tingting; Ma, Ling; Zhang, Cong
2018-05-01
In this work, an automatic variable selection method for the quantitative analysis of soil samples using laser-induced breakdown spectroscopy (LIBS) is proposed, based on full spectrum correction (FSC) and modified iterative predictor weighting-partial least squares (mIPW-PLS). The method performs the selection automatically, without manual intervention. To illustrate the feasibility and effectiveness of the method, a comparison with the genetic algorithm (GA) and the successive projections algorithm (SPA) was carried out for the detection of different elements (copper, barium and chromium) in soil. The experimental results showed that all three methods could accomplish variable selection effectively, among which FSC-mIPW-PLS required a significantly shorter computation time (approximately 12 s for 40,000 initial variables) than the others. Moreover, improved quantification models were obtained with the variable selection approaches. The root mean square errors of prediction (RMSEP) of the models using the new method were 27.47 (copper), 37.15 (barium) and 39.70 (chromium) mg/kg, showing prediction performance comparable to GA and SPA.
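Iterative predictor weighting with PLS, as described, repeatedly fits a PLS model, weights spectral variables by their importance, and discards the least informative ones. The loop below is a generic simplification of that idea under assumed ranking and retention rules (the FSC step and the authors' specific modifications are not reproduced), together with the RMSEP used to score the final model.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def rmsep(y_true, y_pred):
    """Root mean square error of prediction."""
    return float(np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)))

def ipw_pls_select(X, y, n_components=5, n_iter=10, keep_fraction=0.5):
    """Generic iterative predictor weighting with PLS: fit a PLS model,
    rank variables by |regression coefficient| * standard deviation,
    keep the highest-ranked fraction, and repeat."""
    selected = np.arange(X.shape[1])
    for _ in range(n_iter):
        pls = PLSRegression(n_components=min(n_components, len(selected)))
        pls.fit(X[:, selected], y)
        importance = np.abs(pls.coef_.ravel()) * X[:, selected].std(axis=0)
        keep = max(int(len(selected) * keep_fraction), n_components)
        selected = selected[np.argsort(importance)[::-1][:keep]]
    return selected
```

Speed comes from the fact that each iteration is a single PLS fit on the surviving variables, whereas GA and SPA explore many candidate subsets.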
Ding, Feng; Sharma, Shantanu; Chalasani, Poornima; Demidov, Vadim V.; Broude, Natalia E.; Dokholyan, Nikolay V.
2008-01-01
RNA molecules with novel functions have revived interest in the accurate prediction of RNA three-dimensional (3D) structure and folding dynamics. However, existing methods are inefficient in automated 3D structure prediction. Here, we report a robust computational approach for rapid folding of RNA molecules. We develop a simplified RNA model for discrete molecular dynamics (DMD) simulations, incorporating base-pairing and base-stacking interactions. We demonstrate correct folding of 150 structurally diverse RNA sequences. The majority of DMD-predicted 3D structures have <4 Å deviations from experimental structures. The secondary structures corresponding to the predicted 3D structures consist of 94% native base-pair interactions. Folding thermodynamics and kinetics of tRNA(Phe), pseudoknots, and mRNA fragments in DMD simulations are in agreement with previous experimental findings. Folding of RNA molecules features transient, non-native conformations, suggesting non-hierarchical RNA folding. Our method allows rapid conformational sampling of RNA folding, with computational time increasing linearly with RNA length. We envision this approach as a promising tool for RNA structural and functional analyses. PMID:18456842
Bryantsev, Vyacheslav S.; Hay, Benjamin P.
2015-03-20
Selective extraction of minor actinides from lanthanides is a critical step in the reduction of radiotoxicity of spent nuclear fuels. However, the design of suitable ligands for separating chemically similar 4f- and 5f-block trivalent metal ions poses a significant challenge. Furthermore, first-principles calculations should play an important role in the design of new separation agents, but their ability to predict metal ion selectivity has not been systematically evaluated. We examine the ability of several density functional theory methods to predict selectivity of Am(III) and Eu(III) with oxygen, mixed oxygen–nitrogen, and sulfur donor ligands. The results establish a computational method capable of predicting the correct order of selectivities obtained from liquid–liquid extraction and aqueous phase complexation studies. To allow reasonably accurate predictions, it was critical to employ sufficiently flexible basis sets and provide proper account of solvation effects. The approach is utilized to estimate the selectivity of novel amide-functionalized diazine and 1,2,3-triazole ligands.
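Selectivity predictions of this kind are commonly cast as the free-energy change of an Am/Eu exchange reaction assembled from computed electronic energies plus solvation corrections. The snippet below shows only that bookkeeping, with hypothetical numbers; it is not the paper's protocol or data.

```python
import math

R = 8.314462618e-3  # gas constant, kJ/(mol*K)

def exchange_selectivity(g_amL, g_eu_aq, g_euL, g_am_aq, T=298.15):
    """Free energy (kJ/mol) of the exchange  Eu(L) + Am(aq) -> Am(L) + Eu(aq)
    and the corresponding equilibrium-constant ratio K_Am/K_Eu.
    Negative dG means the ligand L prefers Am(III) over Eu(III)."""
    dG = (g_amL + g_eu_aq) - (g_euL + g_am_aq)
    return dG, math.exp(-dG / (R * T))

# Hypothetical free energies (kJ/mol), purely for illustration:
dG, ratio = exchange_selectivity(g_amL=-1250.0, g_eu_aq=-3500.0,
                                 g_euL=-1245.0, g_am_aq=-3498.0)
print(dG, ratio)
```

Because the exchange energy is a small difference between large numbers, basis-set flexibility and the solvation treatment dominate whether the predicted ordering is correct, which is the point stressed in the abstract.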
Recurrent Neural Network Applications for Astronomical Time Series
NASA Astrophysics Data System (ADS)
Protopapas, Pavlos
2017-06-01
The benefits of good predictive models in astronomy lie in early event prediction systems and effective resource allocation. Current time series methods, developed for regularly sampled data, have not been generalized to irregular time series. In this talk, I will describe two recurrent neural network methods, Long Short-Term Memory (LSTM) and Echo State Networks (ESNs), for predicting irregular time series. Feature engineering combined with non-linear modeling proved to be an effective predictor. For noisy time series, the prediction is improved by training the network on error realizations generated from the error estimates of astronomical light curves. In addition, we propose a new neural network architecture that removes correlation from the residuals in order to improve prediction and compensate for the noisy data. Finally, I show how to set the hyperparameters correctly for a stable and performant solution: hyperparameter tuning is a known obstacle for ESNs, and we circumvent it by optimizing the ESN hyperparameters using Bayesian optimization with Gaussian Process priors. This automates the tuning procedure, enabling users to employ the power of RNNs without an in-depth understanding of the tuning process.
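A minimal echo state network of the kind described — a fixed random reservoir with a ridge-regression readout, whose spectral radius, leak rate, and regularization are the hyperparameters a Bayesian optimizer would tune — is sketched below. This is a generic ESN under standard assumptions, not the talk's implementation.

```python
import numpy as np

class EchoStateNetwork:
    def __init__(self, n_inputs, n_reservoir=200, spectral_radius=0.9,
                 leak_rate=0.3, ridge=1e-6, seed=0):
        rng = np.random.default_rng(seed)
        self.w_in = rng.uniform(-0.5, 0.5, (n_reservoir, n_inputs))
        w = rng.uniform(-0.5, 0.5, (n_reservoir, n_reservoir))
        w *= spectral_radius / max(abs(np.linalg.eigvals(w)))  # rescale spectral radius
        self.w, self.leak, self.ridge = w, leak_rate, ridge

    def _states(self, inputs):
        x = np.zeros(self.w.shape[0])
        states = []
        for u in inputs:                       # leaky-integrator reservoir update
            pre = np.tanh(self.w_in @ np.atleast_1d(u) + self.w @ x)
            x = (1 - self.leak) * x + self.leak * pre
            states.append(x.copy())
        return np.array(states)

    def fit(self, inputs, targets):
        s = self._states(inputs)
        # ridge-regression readout
        self.w_out = np.linalg.solve(s.T @ s + self.ridge * np.eye(s.shape[1]),
                                     s.T @ np.asarray(targets))
        return self

    def predict(self, inputs):
        return self._states(inputs) @ self.w_out
```

In a Bayesian-optimization loop, spectral_radius, leak_rate, and ridge would be the parameters proposed by the Gaussian-process surrogate and scored by validation error on held-out light-curve segments.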
Gabay, Yafit; Vakil, Eli; Schiff, Rachel; Holt, Lori L.
2015-01-01
Objective: Developmental dyslexia is presumed to arise from specific phonological impairments. However, an emerging theoretical framework suggests that phonological impairments may be symptoms stemming from an underlying dysfunction of procedural learning. Method: We tested procedural learning in adults with dyslexia (n = 15) and matched controls (n = 15) using two versions of the Weather Prediction Task: Feedback (FB) and Paired-associate (PA). In the FB-based task, participants learned associations between cues and outcomes initially by guessing and subsequently through feedback indicating the correctness of the response. In the PA-based learning task, participants viewed the cue and its associated outcome simultaneously, without an overt response or feedback. In both versions, participants trained across 150 trials. Learning was assessed in a subsequent test without presentation of the outcome or corrective feedback. Results: The Dyslexia group exhibited impaired learning compared with the Control group on both the FB and PA versions of the Weather Prediction Task. Conclusions: The results indicate that the ability to learn from feedback is not selectively impaired in dyslexia. Rather, it seems that the probabilistic nature of the task, shared by the FB and PA versions of the Weather Prediction Task, hampers learning in those with dyslexia. Results are discussed in light of procedural learning impairments among participants with dyslexia. PMID:25730732