Springback Simulation and Compensation for High Strength Parts Using JSTAMP
NASA Astrophysics Data System (ADS)
Shindo, Terumasa; Sugitomo, Nobuhiko; Ma, Ninshu
2011-08-01
Stamped parts made from high-strength steel exhibit large springback that is difficult to control. With the development of simulation technology, springback can be accurately predicted using advanced kinematic material models and CAE systems. In this paper, a stamping process for a pillar part made from several classes of high-strength steel was simulated using the Yoshida-Uemori kinematic material model, and the springback was well predicted. To obtain the desired part shape, the CAD surfaces of the stamping tools were compensated by the CAE system JSTAMP. After applying the compensation two or three times, the dimensional accuracy of the simulated part shape reached about 0.5 mm. The compensated CAD surfaces of the stamping tools were exported directly from JSTAMP to CAM for machining. The effectiveness of the compensation was verified by an experiment using the compensated tools.
An Alternate Method to Springback Compensation for Sheet Metal Forming
Omar, Badrul; Jusoff, Kamaruzaman
2014-01-01
The aim of this work is to improve the accuracy of cold stamping products by accommodating springback. A numerical approach is presented that improves the accuracy of springback analysis and the die compensation process by combining the displacement adjustment (DA) method and the spring forward (SF) algorithm. This alternate hybrid method (HM) first applies the DA method and then the SF method, instead of using either method individually. The springback shape and the target part are used to optimize the die surfaces that compensate for springback. The hybrid method algorithm has been coded in Fortran and tested on two- and three-dimensional models. By implementing the HM, the springback error can be decreased and the dimensional deviation falls within the predefined tolerance range. PMID:25165738
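The DA-then-SF idea above can be sketched as a simple node-update loop. The sketch below is a minimal, hypothetical illustration and not the authors' Fortran code: `run_fe` stands in for a full forming-plus-springback FE analysis, and the relaxation factors `alpha_da` and `alpha_sf` are assumed tuning parameters.

```python
import numpy as np

def hybrid_compensation(target, run_fe, alpha_da=1.0, alpha_sf=0.5,
                        tol=0.05, max_iter=10):
    """Minimal sketch of a hybrid DA + SF die-compensation loop.

    run_fe(die) is a stand-in for a forming + springback FE analysis; it is
    assumed to return:
      sprung   -- part node coordinates after springback,
      sf_disp  -- displacement field obtained by applying the reversed
                  internal (residual) forces to the part elastically,
                  i.e. the 'spring forward' correction.
    All arrays share the shape of `target` (n_nodes x 3).
    """
    die = target.copy()                      # start from the nominal geometry
    for _ in range(max_iter):
        sprung, sf_disp = run_fe(die)
        error = sprung - target              # springback shape error
        if np.linalg.norm(error, axis=1).max() < tol:
            break                            # dimensional tolerance reached
        # DA step: shift die nodes opposite to the measured shape error
        die -= alpha_da * error
        # SF step: add the elastic spring-forward correction on top
        die += alpha_sf * sf_disp
    return die
```

In practice the spring-forward displacement comes from an elastic FE solve with the reversed internal forces, and the loop is repeated until the sprung shape lies within tolerance.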
Numerical simulation study on rolling-chemical milling process of aluminum-lithium alloy skin panel
NASA Astrophysics Data System (ADS)
Huang, Z. B.; Sun, Z. G.; Sun, X. F.; Li, X. Q.
2017-09-01
Single-curvature parts such as aircraft fuselage skin panels are usually manufactured by a rolling-chemical milling process, which often suffers from geometric inaccuracy caused by springback. In most cases, manual adjustment and multiple roll bending are used to control or eliminate the springback. However, these methods increase product cost and cycle time and lead to material performance degradation. It is therefore important to precisely control the springback of the rolling-chemical milling process. In this paper, combining experiments and numerical simulation, a simulation model of the rolling-chemical milling process of 2060-T8 aluminum-lithium alloy skin was established and validated by comparing numerical and experimental results. Based on the validated model, the process parameters that influence the curvature of the skin panel were analyzed. Finally, springback prediction and compensation can be realized by controlling the process parameters.
NASA Astrophysics Data System (ADS)
Lee, J.; Bong, H. J.; Ha, J.; Choi, J.; Barlat, F.; Lee, M.-G.
2018-05-01
In this study, a numerical sensitivity analysis of the springback prediction was performed using advanced strain hardening models. In particular, the springback in U-draw bending for dual-phase 780 steel sheets was investigated while focusing on the effect of the initial yield stress determined from the cyclic loading tests. The anisotropic hardening models could reproduce the flow stress behavior under the non-proportional loading condition for the considered parametric cases. However, various identification schemes for determining the yield stress of the anisotropic hardening models significantly influenced the springback prediction. The deviations from the measured springback varied from 4% to 13.5% depending on the identification method.
Experimental investigation of springback in air bending process
NASA Astrophysics Data System (ADS)
Alhammadi, Aysha; Rafique, Hafsa; Alkaabi, Meera; Abu Qudeiri, Jaber
2018-03-01
Bending is one of the important processes in sheet metal forming. One of the challenges in air bending is springback, which occurs due to elastic recovery during the unloading stage. An accurate analysis of springback during the bending process is crucial to achieve the required bend angle. This paper investigates springback experimentally by varying parameters such as the tested material, die opening and thickness, and determining their effect on the amount of springback. Additionally, the paper investigates the effect of the loading time at the end of the loading stage on springback by proposing a multistage bending technique (MBT). In MBT, the loading is paused just before the end of the loading stage and is restarted shortly afterwards. In this study, three sheet metals with different thicknesses are examined: stainless steel, aluminium and brass. An artificial neural network (ANN) is used to develop a model that predicts springback from the experimental results.
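As a rough illustration of how such an ANN prediction model can be set up, the sketch below trains a small multilayer perceptron on tabulated (material, die opening, thickness, loading/dwell time) records. The feature encoding, network size and numerical values are assumptions for illustration only, not the authors' experimental data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Hypothetical records: [material id, die opening (mm), thickness (mm),
# loading/dwell time (s)] -> springback angle (deg)
X = np.array([[0, 20, 1.0, 0.5], [0, 30, 1.0, 0.5], [1, 20, 1.5, 1.0],
              [1, 30, 1.5, 1.0], [2, 20, 2.0, 2.0], [2, 30, 2.0, 2.0]],
             dtype=float)
y = np.array([2.1, 2.8, 1.4, 1.9, 0.9, 1.3])   # illustrative values only

# Scale inputs, then fit a small MLP regressor
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(8, 8),
                                   max_iter=5000, random_state=0))
model.fit(X, y)

# Predict springback for an unseen parameter combination
print(model.predict([[1, 25, 1.5, 0.5]]))
```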
An Investigation and Prediction of Springback of Sheet Metals under Cold Forming Condition
NASA Astrophysics Data System (ADS)
Elsayed, A.; Mohamed, M.; Shazly, M.; Hegazy, A.
2017-12-01
Low formability and springback, especially at room temperature, are known to be major obstacles to advancements in the sheet metal forming industries. The integration of numerical simulation within the R&D activities of the automotive industry provides a significant means of overcoming these drawbacks. The aim of the present work is to model and predict the springback of a galvanized low carbon steel automotive panel part. This part suffers from both positive and negative springback, which was physically measured using a CMM. The objective is to determine the forming process parameters that minimize and compensate the springback through a robust FE model. The springback analysis was carried out using two hardening models (an isotropic model and the Yoshida-Uemori model), which were calibrated against cyclic stress-strain curves. The material data of the galvanized low carbon steel were implemented via lookup tables in the commercial finite element software Pam-Stamp(TM). Firstly, the FE model was validated against the deformed part, which suffers from a springback problem under the same forming conditions. The FE results were compared with the experimental measurements, providing very good agreement. Secondly, the validated FE model was used to determine the forming parameters that could minimise the springback of the deformed part.
NASA Astrophysics Data System (ADS)
Zakaria, M.; Aminanda, Y.; Rashidi, S. A.; Mat Sah, M. A.
2018-04-01
The springback of CFRP after the curing process in autoclave manufacturing results in out-of-tolerance parts for aerospace applications. This paper reports measurements of springback for unidirectional flat laminates as a first step towards springback studies of real aircraft composite laminate structures. Flat laminates with dimensions of 300 mm x 300 mm, 400 mm x 400 mm and 500 mm x 500 mm and different numbers of plies (20, 24 and 28) were manufactured. The choice of dimensions and lay-ups corresponds to those of a rib structure. After processing, the springback was measured using an optical three-dimensional scanner with an accuracy of 42 micrometres to obtain accurate measurements. The effects of dimension and number of plies on the magnitude of springback are analysed within the range of specimens studied in this work.
NASA Astrophysics Data System (ADS)
Nasir, M. N. M.; Mezeix, L.; Aminanda, Y.; Seman, M. A.; Rivai, A.; Ali, K. M.
2016-02-01
This paper presents an original method for predicting the spring-back of composite aircraft structures using non-linear Finite Element Analysis (FEA) and is an extension of a previous accompanying study on flat geometry samples. Firstly, unidirectional prepreg lay-up samples are fabricated on moulds with different corner angles (30°, 45° and 90°) and the effect on spring-back deformation is observed. Then, the FEA model that was developed in the previous study on flat samples is utilized. The model retains the physical mechanisms of spring-back, such as ply stretching and tool-part interface properties, with the additional mechanisms of the corner effect and geometrical changes in the tool, part and tool-part interface components. The comparison between the experimental data and FEA results shows that the FEA model adequately predicts the spring-back deformation within the range of corner angles tested.
Simulation of springback and microstructural analysis of dual phase steels
NASA Astrophysics Data System (ADS)
Kalyan, T. Sri.; Wei, Xing; Mendiguren, Joseba; Rolfe, Bernard
2013-12-01
With increasing demand for weight reduction and better crashworthiness in car development, advanced high strength Dual Phase (DP) steels have been progressively used for automotive parts. Higher strength steels exhibit larger springback and lower dimensional accuracy after stamping. This has necessitated the simulation of each stamped component prior to production to estimate the part's dimensional accuracy. Understanding the micro-mechanical behaviour of AHSS sheet may bring more accuracy to stamping simulations. This work is divided into two parts: first, modelling a standard channel forming process; second, modelling the microstructure of the material. The standard top-hat channel forming process, the NUMISHEET'93 benchmark, is used to investigate the springback of WISCO Dual Phase steels. The second part of this work comprises finite element analysis of microstructures to understand the behaviour of the multi-phase steel at a more fundamental level. The outcomes of this work will help in the dimensional control of steels during the manufacturing stage based on the material's microstructure.
NASA Astrophysics Data System (ADS)
Lee, K. J.; Choi, Y.; Choi, H. J.; Lee, J. Y.; Lee, M. G.
2018-03-01
Finite element simulations and experiments for the split-ring test were conducted to investigate the effect of anisotropic constitutive models on the predictive capability of sheet springback. As an alternative to the commonly employed associated flow rule, a non-associated flow rule for Hill1948 yield function was implemented in the simulations. Moreover, the evolution of anisotropy with plastic deformation was efficiently modeled by identifying equivalent plastic strain-dependent anisotropic coefficients. Comparative study with different yield surfaces and elasticity models showed that the split-ring springback could be best predicted when the anisotropy in both the R value and yield stress, their evolution and variable apparent elastic modulus were taken into account in the simulations. Detailed analyses based on deformation paths superimposed on the anisotropic yield functions predicted by different constitutive models were provided to understand the complex springback response in the split-ring test.
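The "variable apparent elastic modulus" mentioned above is commonly described by a chord modulus that decays from the initial value E0 towards a saturated value Ea with accumulated plastic strain, E(p) = E0 - (E0 - Ea)(1 - exp(-xi*p)). The short sketch below illustrates that relation; the parameter values are illustrative assumptions, not the calibrated values of this study.

```python
import numpy as np

def chord_modulus(eps_p, E0=198e3, Ea=160e3, xi=60.0):
    """Apparent (chord) elastic modulus vs. accumulated plastic strain:
    E(p) = E0 - (E0 - Ea) * (1 - exp(-xi * p)).  E0, Ea in MPa;
    xi controls how quickly the modulus saturates (illustrative values)."""
    return E0 - (E0 - Ea) * (1.0 - np.exp(-xi * eps_p))

# The unloading modulus drops quickly over the first few percent plastic strain
for p in (0.0, 0.02, 0.05, 0.10):
    print(f"p = {p:.2f}:  E = {chord_modulus(p):.0f} MPa")
```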
Influence of the pressure dependent coefficient of friction on deep drawing springback predictions
NASA Astrophysics Data System (ADS)
Gil, Imanol; Galdos, Lander; Mendiguren, Joseba; Mugarra, Endika; Sáenz de Argandoña, Eneko
2016-10-01
This research studies the effect of considering an advanced variable friction coefficient on the springback prediction of stamping processes. Traditional constant-coefficient-of-friction assumptions are being replaced by more advanced friction coefficient definitions. The aim of this work is to show the influence of defining a pressure-dependent friction coefficient on the numerical springback predictions of a DX54D mild steel, an HSLA380 and a DP780 high strength steel. The pressure-dependent friction model of each material was fitted to experimental data obtained from strip drawing tests. These friction models were then implemented in a numerical simulation of a drawing process of an industrial automotive part. The results showed important differences between defining a pressure-dependent friction coefficient and a constant friction coefficient.
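A pressure-dependent friction coefficient of the kind described above is often represented by a decaying function of contact pressure fitted to strip-drawing measurements. The sketch below fits one such assumed form, mu(p) = mu0*(p/p_ref)^(n-1), to hypothetical data points; neither the functional form nor the numbers are taken from this work.

```python
import numpy as np
from scipy.optimize import curve_fit

def mu_pressure(p, mu0, n, p_ref=1.0):
    """Power-law pressure dependence: mu(p) = mu0 * (p / p_ref)**(n - 1).
    With n < 1 the friction coefficient decreases as contact pressure rises."""
    return mu0 * (p / p_ref) ** (n - 1.0)

# Hypothetical strip-drawing results: contact pressure (MPa) vs. measured mu
p_data = np.array([2.0, 5.0, 10.0, 20.0, 40.0])
mu_data = np.array([0.16, 0.14, 0.13, 0.115, 0.105])

popt, _ = curve_fit(mu_pressure, p_data, mu_data, p0=(0.17, 0.9))
mu0_fit, n_fit = popt
print(f"mu0 = {mu0_fit:.3f}, n = {n_fit:.3f}")
print("mu at 30 MPa:", mu_pressure(30.0, *popt))
```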
NASA Astrophysics Data System (ADS)
Yetna n'jock, M.; Houssem, B.; Labergere, C.; Saanouni, K.; Zhenming, Y.
2018-05-01
Springback is an important phenomenon which accompanies the forming of metallic sheets, especially for high strength materials. A quantitative prediction of springback becomes very important for newly developed materials with high mechanical characteristics. In this work, a numerical methodology is developed to quantify this undesirable phenomenon. The methodology is based on the use of both the explicit and implicit finite element solvers of Abaqus®. Its most important ingredient is the use of a highly predictive mechanical model: a thermodynamically consistent, non-associative and fully anisotropic elastoplastic constitutive model strongly coupled with isotropic ductile damage and accounting for distortional hardening. An algorithm for the local integration of the complete set of constitutive equations is developed. This algorithm uses the rotated frame formulation (RFF) to ensure the incremental objectivity of the model in the framework of finite strains, and it is implemented in both the explicit (Abaqus/Explicit®) and implicit (Abaqus/Standard®) solvers of Abaqus® through the user routines VUMAT and UMAT, respectively. The implicit solver of Abaqus® has been used to study springback, as it is generally a quasi-static unloading. In order to compare the efficiency of the methods, the explicit approach (the dynamic relaxation method proposed by Rayleigh) has also been used for springback prediction. The results obtained for the U draw/bending benchmark are studied, discussed and compared with experimental results as a reference. Finally, the purpose of this work is to evaluate the reliability of the different methods in efficiently predicting springback in sheet metal forming.
NASA Astrophysics Data System (ADS)
Shi, Ming F.; Zhang, Li; Zhu, Xinhai
2016-08-01
The Yoshida nonlinear isotropic/kinematic hardening material model is often selected in forming simulations where an accurate springback prediction is required. Many successful applications to industrial-scale automotive components made of advanced high strength steels (AHSS) have been reported to give better springback predictions. Several issues have been raised recently in the use of the model for higher strength AHSS, including the use of two C versus one C material parameters in the Armstrong and Frederick (AF) model, the original Yoshida model versus the Yoshida model with a modified hardening law, and a constant Young's modulus versus a Young's modulus that decays as a function of plastic strain. In this paper, an industrial-scale automotive component using 980 MPa strength materials is selected to study the effect of two C and one C material parameters in the AF model on both forming and springback prediction using the Yoshida model with and without the modified hardening law. The effect of a decayed Young's modulus on the springback prediction for AHSS is also evaluated. In addition, the limitations of material parameters determined from tension and compression tests without multiple-cycle tests are discussed for components undergoing several bending and unbending deformations.
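A minimal way to see what the one-C versus two-C choice means is to reduce the Armstrong-Frederick (AF) backstress law to uniaxial loading and integrate it along a tension-compression path. The sketch below does that with a simple semi-implicit update; the parameter values and strain path are illustrative assumptions, not the 980 MPa material card discussed above.

```python
import numpy as np

def uniaxial_af(strain_path, E=210e3, sigma_y=600.0,
                terms=((50e3, 150.0), (5e3, 0.0))):
    """Uniaxial AF/Chaboche model with N backstress terms.
    terms: sequence of (C_i, gamma_i); gamma_i = 0 gives a linear term.
    Units: MPa; isotropic hardening omitted for clarity."""
    eps_p, alphas, stress = 0.0, np.zeros(len(terms)), []
    for eps in strain_path:
        sig_tr = E * (eps - eps_p)               # elastic trial stress
        alpha = alphas.sum()
        f = abs(sig_tr - alpha) - sigma_y
        if f > 0.0:                              # plastic correction
            n = np.sign(sig_tr - alpha)
            H = sum(C - g * n * a for (C, g), a in zip(terms, alphas))
            dp = f / (E + H)                     # plastic multiplier
            eps_p += n * dp
            alphas += np.array([C * n * dp - g * a * dp
                                for (C, g), a in zip(terms, alphas)])
            stress.append(E * (eps - eps_p))
        else:
            stress.append(sig_tr)
    return np.array(stress)

# Tension followed by compression to reveal the Bauschinger effect
path = np.concatenate([np.linspace(0, 0.05, 200), np.linspace(0.05, -0.05, 400)])
sig_two_c = uniaxial_af(path)
sig_one_c = uniaxial_af(path, terms=((50e3, 150.0),))   # single-backstress case
```

Comparing `sig_one_c` and `sig_two_c` on reverse loading shows how the extra (often linear) backstress term changes the reloading slope and hence the predicted springback.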
An Anisotropic Hardening Model for Springback Prediction
NASA Astrophysics Data System (ADS)
Zeng, Danielle; Xia, Z. Cedric
2005-08-01
As more Advanced High-Strength Steels (AHSS) are heavily used for automotive body structures and closure panels, accurate springback prediction for these components becomes more challenging because of their rapid hardening characteristics and ability to sustain even higher stresses. In this paper, a modified Mroz hardening model is proposed to capture the realistic Bauschinger effect at reverse loading, such as when material passes through die radii or drawbeads during the sheet metal forming process. The model accounts for an anisotropic yield surface and nonlinear isotropic/kinematic hardening behavior. Material tension/compression test data are used to accurately represent the Bauschinger effect. The effectiveness of the model is demonstrated by comparing numerical and experimental springback results for a DP600 straight U-channel test.
Springback evaluation of friction stir welded TWB automotive sheets
NASA Astrophysics Data System (ADS)
Kim, Junehyung; Lee, Wonoh; Chung, Kyung-Hwan; Kim, Daeyong; Kim, Chongmin; Okamoto, Kazutaka; Wagoner, R. H.; Chung, Kwansoo
2011-02-01
The springback behavior of automotive friction stir welded TWB (tailor welded blank) sheets was experimentally investigated, and the springback prediction capability of the constitutive law was numerically validated. Four automotive sheets, aluminum alloy 6111-T4, 5083-H18, 5083-O and dual-phase DP590 steel, each having one or two different thicknesses, were considered. To represent the mechanical properties, the modified Chaboche-type combined isotropic-kinematic hardening law was utilized along with the non-quadratic orthotropic anisotropic yield function Yld2000-2d, while the anisotropy of the weld zone was ignored for simplicity. For numerical simulations, mechanical properties previously characterized [1] were applied. For validation purposes, three springback tests were carried out: unconstrained cylindrical bending, 2-D draw bending and the OSU draw-bend test. The numerical method performed reasonably well in analyzing all verification tests, and it was confirmed that the springback of the TWB as well as of the base samples is significantly affected by the ratio of yield stress to Young's modulus and by the thickness.
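The closing observation, that springback scales with the ratio of yield stress to Young's modulus and with thickness, follows from the textbook estimate for releasing a fully plastic bending moment through the elastic bending stiffness, delta_kappa = 3*sigma_y/(E*t). The sketch below evaluates only that elementary estimate with illustrative material values; it is not the validated TWB simulation of the paper.

```python
def springback_curvature_change(sigma_y, E, t):
    """Curvature released on unloading a fully plastic bend (per unit width):
    delta_kappa = M / (E * I) = (sigma_y * t**2 / 4) / (E * t**3 / 12)
                = 3 * sigma_y / (E * t)    [1/mm, with MPa and mm inputs]"""
    return 3.0 * sigma_y / (E * t)

# Illustrative comparison: a DP590-like steel vs. an AA6111-T4-like aluminium
print(springback_curvature_change(sigma_y=350.0, E=210e3, t=1.2))  # steel
print(springback_curvature_change(sigma_y=160.0, E=69e3,  t=1.2))  # aluminium
```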
Springback optimization in automotive Shock Absorber Cup with Genetic Algorithm
NASA Astrophysics Data System (ADS)
Kakandikar, Ganesh; Nandedkar, Vilas
2018-02-01
Drawing or forming is a process normally used to obtain a required component form from a metal blank by applying a punch which radially draws the blank into the die by mechanical or hydraulic action, or a combination of both. When the component is drawn to a depth greater than its diameter, the process is usually regarded as deep drawing, which involves complicated states of material deformation. Due to the radial drawing of the material as it enters the die, radial drawing stress occurs in the flange together with tangential compressive stress. This compression generates wrinkles in the flange. Wrinkling is an unwanted phenomenon and can be controlled by applying a blank-holding force. Tensile stresses cause thinning in the wall region of the cup. The three main types of defects in such a process are wrinkling, fracture and springback. This paper reports work focused on springback and its control. Due to the complexity of the process, tool try-outs and experimentation can be costly and time-consuming, so numerical simulation is a good option for studying the process and developing a control strategy for reducing springback; finite-element-based simulations are widely used for such purposes. In this study, the springback in the deep drawing of an automotive shock absorber cup is simulated with the finite element method. Taguchi design of experiments and analysis of variance are used to identify the process parameters that influence springback, and mathematical relations are developed to relate the process parameters to the resulting springback. The optimization problem is formulated for the springback, expressed as the displacement magnitude in selected sections, and a genetic algorithm is then applied with the objective of minimizing springback. The results indicate that better springback prediction and process optimization can be achieved with the combined use of these methods and tools.
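To illustrate how a fitted response model and a genetic algorithm fit together, the toy sketch below minimizes a hypothetical quadratic springback response over two normalized process parameters. Both the response surface and the GA settings are assumptions for illustration and are unrelated to the fitted relations of this study.

```python
import numpy as np

rng = np.random.default_rng(0)

def springback_response(x):
    """Hypothetical fitted response surface over two normalized parameters,
    e.g. blank-holder force and die radius (stand-in for the DOE/ANOVA fit)."""
    bhf, radius = x
    return 2.0 - 0.8 * bhf + 0.5 * radius ** 2 + 0.3 * bhf * radius

def genetic_minimize(f, bounds, pop_size=30, generations=60,
                     mutation=0.1, elite=2):
    lo, hi = np.array(bounds).T
    pop = rng.uniform(lo, hi, size=(pop_size, len(bounds)))
    for _ in range(generations):
        fitness = np.array([f(ind) for ind in pop])
        order = np.argsort(fitness)              # lower springback is better
        parents = pop[order[: pop_size // 2]]
        children = []
        while len(children) < pop_size - elite:
            a, b = parents[rng.integers(len(parents), size=2)]
            w = rng.random()
            child = w * a + (1.0 - w) * b        # blend crossover
            child += rng.normal(0.0, mutation, size=child.shape)  # mutation
            children.append(np.clip(child, lo, hi))
        pop = np.vstack([pop[order[:elite]], children])           # elitism
    best = pop[np.argmin([f(ind) for ind in pop])]
    return best, f(best)

best_x, best_s = genetic_minimize(springback_response, bounds=[(0, 1), (0, 1)])
print("optimal normalized parameters:", best_x, "predicted springback:", best_s)
```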
Springback compensation for a vehicle's steel body panel
NASA Astrophysics Data System (ADS)
Bałon, Paweł; Świątoniowski, Andrzej; Szostak, Janusz; Kiełbasa, Bartłomiej
2017-10-01
This paper presents a structural element of a vehicle made from high strength steels. The application of this kind of material considerably reduces the construction mass due to its high strength; however, it results in springback, whose magnitude depends mainly on the material used and the part geometry. Springback deformation therefore becomes a critical problem, especially for HSS steels when the geometry is complex. Springback compensation using Finite Element Method software helps to reach the reference geometry of the element. The authors compared two methods of die shape optimization: the first method compensates the die shape only for the OP-20 operation, and the second, multi-operation method compensates the die shape for both the OP-20 and OP-50 operations. Predicting springback by trial and error is difficult and labor-intensive, and designing dies economically and quickly requires appropriate FEM software; virtual compensation methods make it possible to obtain precise results in a short time. The software-based die compensation was experimentally verified with a prototype die.
Analysis of local warm forming of high strength steel using near infrared ray energy
NASA Astrophysics Data System (ADS)
Yang, W. H.; Lee, K.; Lee, E. H.; Yang, D. Y.
2013-12-01
The automotive industry has been pressed to satisfy more rigorous fuel efficiency requirements to promote energy conservation, safety features and cost containment. To satisfy this need, high strength steel has been developed and used for many different vehicle parts. The use of high strength steels, however, requires careful analysis and creativity in order to accommodate its relatively high springback behavior. An innovative method, called local warm forming with near infrared ray, has been developed to help promote the use of high strength steels in sheet metal forming. For this method, local regions of the work piece are heated using infrared ray energy, thereby promoting the reduction of springback behavior. In this research, a V-bend test is conducted with DP980. After springback, the bend angles for specimens without local heating are compared to those with local heating. Numerical analysis has been performed using the commercial program, DEFORM-2D. This analysis is carried out with the purpose of understanding how changes to the local stress distribution will affect the springback during the unloading process. The results between experimental and computational approaches are evaluated to assure the accuracy of the simulation. Subsequent numerical simulation studies are performed to explore best practices with respect to thermal boundary conditions, timing, and applicability to the production environment.
Modeling and FE Simulation of Quenchable High Strength Steels Sheet Metal Hot Forming Process
NASA Astrophysics Data System (ADS)
Liu, Hongsheng; Bao, Jun; Xing, Zhongwen; Zhang, Dejin; Song, Baoyu; Lei, Chengxi
2011-08-01
The high strength steel (HSS) sheet metal hot forming process is investigated by means of numerical simulations. For a reliable numerical process design, knowledge of the thermal and thermo-mechanical properties is essential. In this article, tensile tests are performed to examine the flow stress of the HSS 22MnB5 at different strains, strain rates and temperatures. A constitutive model based on a phenomenological approach is developed to describe the thermo-mechanical properties of 22MnB5 by fitting the experimental data. A 2D coupled thermo-mechanical finite element (FE) model is developed to simulate the HSS sheet metal hot forming process for a U-channel part. The ABAQUS/Explicit model is used to conduct the hot forming stage simulations, and the ABAQUS/Implicit model is used to accurately predict the springback which occurs at the end of the hot forming stage. Material modeling and FE numerical simulations are carried out to investigate the effect of the processing parameters on the hot forming process. The processing parameters have a significant influence on the microstructure of the U-channel part, and the springback after the hot forming stage is the main factor impairing the shape precision of the hot-formed part. A mechanism of springback is proposed and verified through numerical simulations and tensile loading-unloading tests. Creep strain is found in the tensile loading-unloading test under isothermal conditions and has a distinct effect on springback. According to the numerical and experimental results, it can be concluded that springback is mainly caused by the different cooling rates and the non-homogeneous shrinkage of the material during the hot forming process, and that creep strain is the main factor influencing the amount of springback.
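Flow-stress data measured at several strains, strain rates and temperatures, as described above, are typically passed to the FE code as tabulated curves and interpolated during the simulation. The sketch below shows one way such a lookup can be organized; the table values are placeholders, not measured 22MnB5 data.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Illustrative grid: plastic strain, log10(strain rate), temperature (deg C)
strain = np.array([0.0, 0.05, 0.10, 0.20])
log_rate = np.array([-2.0, -1.0, 0.0])           # 0.01 .. 1.0 1/s
temperature = np.array([500.0, 650.0, 800.0])

# Placeholder flow-stress table in MPa, shape (strain, rate, temperature)
flow = np.empty((len(strain), len(log_rate), len(temperature)))
for i, e in enumerate(strain):
    for j, lr in enumerate(log_rate):
        for k, T in enumerate(temperature):
            flow[i, j, k] = ((300.0 + 900.0 * e ** 0.3)
                             * (1.0 + 0.05 * lr)
                             * (1.0 - 0.0008 * (T - 500.0)))

flow_stress = RegularGridInterpolator((strain, log_rate, temperature), flow)

# Query the table the way a user-material routine would during the simulation
print(flow_stress([[0.08, -0.5, 700.0]]))        # MPa
```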
NASA Astrophysics Data System (ADS)
Karanjule, D. B.; Bhamare, S. S.; Rao, T. H.
2018-04-01
Cold drawing is a widely used deformation process for seamless tube manufacturing, and springback is one of the major problems faced in tube drawing. Springback is due to the elastic energy stored in the tube during the forming process and is found to depend on the Young's modulus of the material. This paper reports mechanical testing of three grades of steel, namely low carbon, medium carbon and high carbon steel, to measure their Young's modulus and the corresponding springback. The results show a 10-20% variation in Young's modulus and an inverse relationship between springback and Young's modulus: the higher the carbon content, the higher the strength, the lower the Young's modulus and the greater the springback. The study further identifies an optimum die semi-angle of 15 degrees, a land width of 10 mm and drawing speeds of 8, 6 and 4 m/min for the least springback in the three grades respectively, with the die semi-angle being the most dominant factor causing springback.
NASA Astrophysics Data System (ADS)
Duc-Toan, Nguyen; Tien-Long, Banh; Young-Suk, Kim; Dong-Won, Jung
2011-08-01
In this study, a modified Johnson-Cook (J-C) model and a new method to determine the J-C material parameters are proposed to more accurately predict the stress-strain curves of tensile tests at elevated temperatures. A MATLAB tool is used to determine the material parameters by fitting curves that follow Ludwick's hardening law at various elevated temperatures; those hardening-law parameters are then utilized to determine the modified J-C model material parameters. The modified J-C model gives better predictions than the conventional one. As a first verification, an FEM tensile test simulation based on the isotropic hardening model for boron sheet steel at elevated temperatures was carried out via a user-material subroutine using an explicit finite element code and compared with the measurements. The temperature decrease of all elements due to air cooling was then calculated with the modified J-C model and coded in a VUMAT subroutine for the tensile test simulation of the cooling process, where the modified J-C model showed good agreement between the simulation results and the corresponding experiments. The second application was the V-bending spring-back prediction of magnesium alloy sheets at elevated temperatures. Here, the proposed J-C model was combined with a modified hardening law that accounts for the unusual plastic behaviour of magnesium alloy sheet, and the FEM simulation of V-bending spring-back showed good agreement with the corresponding experiments.
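The intermediate step of fitting Ludwick's hardening law, sigma = sigma_0 + K*eps_p^n, to a tensile curve at one temperature (before transferring the parameters to the modified J-C form) can be sketched as follows; the data points are placeholders and scipy is used here in place of the authors' MATLAB tool.

```python
import numpy as np
from scipy.optimize import curve_fit

def ludwick(eps_p, sigma0, K, n):
    """Ludwick hardening law: sigma = sigma0 + K * eps_p**n (MPa)."""
    return sigma0 + K * eps_p ** n

# Hypothetical true-stress / plastic-strain points at one elevated temperature
eps_p = np.array([0.002, 0.01, 0.02, 0.05, 0.10, 0.15])
sigma = np.array([210.0, 255.0, 280.0, 325.0, 370.0, 400.0])   # MPa

popt, _ = curve_fit(ludwick, eps_p, sigma, p0=(200.0, 500.0, 0.5))
sigma0, K, n = popt
print(f"sigma0 = {sigma0:.1f} MPa, K = {K:.1f} MPa, n = {n:.3f}")
```

Repeating the fit at each test temperature gives sigma_0(T), K(T) and n(T), which then feed the identification of the modified J-C parameters.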
Springback effects during single point incremental forming: Optimization of the tool path
NASA Astrophysics Data System (ADS)
Giraud-Moreau, Laurence; Belchior, Jérémy; Lafon, Pascal; Lotoing, Lionel; Cherouat, Abel; Courtielle, Eric; Guines, Dominique; Maurine, Patrick
2018-05-01
Incremental sheet forming is an emerging process to manufacture sheet metal parts. This process is more flexible than conventional ones and is well suited for small batch production or prototyping. During the process, the sheet metal blank is clamped by a blank-holder and a small smooth-end hemispherical tool moves along a user-specified path to deform the sheet incrementally. Classical three-axis CNC milling machines, dedicated structures or serial robots can be used to perform the forming operation. Whatever machine is considered, large deviations between the theoretical shape and the real shape can be observed after unclamping the part. These deviations are due to both the lack of stiffness of the machine and the residual stresses in the part at the end of the forming stage. In this paper, an optimization strategy for the tool path is proposed in order to minimize the elastic springback induced by residual stresses after unclamping. A finite element model of the SPIF process allowing the shape of the formed part to be predicted with good accuracy is defined. This model, based on appropriate assumptions, leads to calculation times that remain compatible with an optimization procedure. The proposed optimization method is based on an iterative correction of the tool path. The efficiency of the method is demonstrated by an improvement of the final shape.
Springback Mechanism Analysis and Experiments on Robotic Bending of Rectangular Orthodontic Archwire
NASA Astrophysics Data System (ADS)
Jiang, Jin-Gang; Han, Ying-Shuai; Zhang, Yong-De; Liu, Yan-Jv; Wang, Zhao; Liu, Yi
2017-11-01
Fixed-appliance technology is the most common and effective orthodontic treatment method for malocclusion, and its key step is the bending of the orthodontic archwire. Previous springback calculations for archwire did not consider the movement of the stress-strain neutral layer. To solve this problem, a springback calculation model for rectangular orthodontic archwire is proposed. A bending springback experiment was conducted using an orthodontic archwire bending springback measurement device; the results show that the theoretical calculations using the proposed model coincide better with the experimental measurements than when the movement of the stress-strain neutral layer is not considered. A bending experiment with rectangular orthodontic archwire was conducted using a robotic orthodontic archwire bending system. The patient experiment results show that the maximum and minimum error ratios of the formed orthodontic archwire parameters are 22.46% and 10.23% without considering springback and decrease to 11.35% and 6.13% using the proposed model. The proposed springback calculation model, which considers the movement of the stress-strain neutral layer, greatly improves the orthodontic archwire bending precision.
Springback and diagravitropism in Merit corn roots
NASA Technical Reports Server (NTRS)
Kelly, M. O.; Leopold, A. C.
1992-01-01
Dark-treated Merit corn (Zea mays L.) roots are diagravitropic and lose curvature upon withdrawal of the gravity stimulus (springback). Springback was not detected in a variety of corn that is orthogravitropic in the dark, nor in Merit roots in which tropistic response was enhanced either with red light or with abscisic acid. A possible interpretation is that springback may be associated with a weak growth response of diagravitropic roots.
Designing a Uniaxial Tension/Compression Test for Springback Analysis in High-Strength Steel Sheets
Stoudt, M. R.; Levine, L. E.; Ma, L.
2016-01-01
We describe an innovative design for an in-plane measurement technique that subjects thin sheet metal specimens to bidirectional loading. The goal of this measurement is to provide the critical performance data necessary to validate complex predictions of work hardening behavior during reversed uniaxial deformation. In this approach, all of the principal forces applied to the specimen are continually measured in real time throughout the test, including the lateral forces that are required to prevent out-of-plane displacements that promote buckling. This additional information will, in turn, improve the accuracy of the compensation for the friction generated between the anti-buckling guides and the specimen during compression. The results from an initial series of experiments not only demonstrate that our approach is feasible, but also that it generates data with the accuracy necessary to quantify the directionally dependent changes in yield behavior that occur when the strain path is reversed (i.e., the Bauschinger effect). PMID:28133391
Effect of martensitic transformation on springback behavior of 304L austenitic stainless steel
NASA Astrophysics Data System (ADS)
Fathi, H.; Mohammadian Semnani, H. R.; Emadoddin, E.; Sadeghi, B. Mohammad
2017-09-01
The present paper studies the effect of martensitic transformation on the springback behavior of 304L austenitic stainless steel. The martensite volume fraction at the bent portion was determined after bending tests at various strain rates. Martensitic transformation has a significant effect on the springback behavior of this material: the amount of springback was reduced at low strain rate, while a higher amount of springback was obtained at higher strain rates. The reason is that higher work hardening occurs during forming at the low strain rate due to the higher martensite volume fraction; the formability of the sheet is therefore enhanced, leading to a decreased amount of springback after the bending test. The dependence of springback on martensite volume fraction and strain rate was expressed as formulas derived from the experimental tests and the simulations. Bending tests were simulated using LS-DYNA software with MAT_TRIP to determine the martensite volume fraction and strain at various strain rates. The experimental results show good agreement with the simulations.
Springback of aluminum alloy brazing sheet in warm forming
NASA Astrophysics Data System (ADS)
Han, Kyu Bin; George, Ryan; Kurukuri, Srihari; Worswick, Michael J.; Winkler, Sooky
2017-10-01
The use of aluminum is increasing in the automotive industry due to its high strength-to-weight ratio, recyclability and corrosion resistance. However, aluminum is prone to significant springback due to its low elastic modulus coupled with its high strength. In this paper, a warm forming process is studied to improve the springback characteristics of 0.2 mm thick brazing sheet with an AA3003 core and AA4045 clad. Warm forming decreases springback by lowering the flow stress. The parts formed have complex features and geometries that are representative of automotive heat exchangers. The key objective is to utilize warm forming to control the springback to improve the part flatness which enables the use of harder temper material with improved strength. The experiments are performed by using heated dies at several different temperatures up to 350 °C and the blanks are pre-heated in the dies. The measured springback showed a reduction in curvature and improved flatness after forming at higher temperatures, particularly for the harder temper material conditions.
NASA Astrophysics Data System (ADS)
Zhu, Hong; Huang, Mai; Sadagopan, Sriram; Yao, Hong
2017-09-01
With increasing vehicle fuel economy standards, automotive OEMs are widely using various AHSS grades including DP, TRIP, CP and 3rd Gen AHSS to reduce vehicle weight due to their good combination of strength and formability. As one of the enabling technologies for AHSS application, the requirement for accurate prediction of springback for cold stamped AHSS parts has stimulated a large number of investigations in the past decade into reversed loading paths at large strains and the associated constitutive modeling. With the spectrum of complex loading histories occurring in production stamping processes, there are many challenges in this field, including test data reliability, loading path representability, constitutive model robustness and non-unique constitutive parameter identification. In this paper, various testing approaches and constitutive models are reviewed briefly, and a systematic methodology covering stress-strain characterization and constitutive model parameter identification for material card generation is presented in order to support automotive OEMs' needs in virtual stamping. This methodology features a tension-compression test at large strain with a robust anti-buckling device and concurrent friction force correction, properly selected loading paths to represent material behavior during different springback modes, and the 10-parameter Yoshida model with knowledge-based parameter identification through nonlinear optimization. Validation cases for lab AHSS parts are also discussed to check the applicability of this methodology.
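At its core, the parameter identification mentioned above is a bounded nonlinear least-squares fit of a cyclic hardening model to tension-compression data. The sketch below shows only that generic structure with scipy; the placeholder model, parameter names and data stand in for the actual 10-parameter Yoshida implementation and measured curves.

```python
import numpy as np
from scipy.optimize import least_squares

# Placeholder "measured" tension-compression data
strain_path = np.linspace(0.0, 0.08, 200)
stress_meas = 400.0 + 600.0 * strain_path ** 0.4          # illustrative only

def model_stress(params, strain):
    """Placeholder hardening model; a real identification would call the
    uniaxial Yoshida-Uemori (or Chaboche) integration here instead."""
    sigma_y, Q, b = params
    return sigma_y + Q * (1.0 - np.exp(-b * strain))

def residuals(params):
    # Difference between model response and measured stress at each strain
    return model_stress(params, strain_path) - stress_meas

fit = least_squares(residuals, x0=[350.0, 300.0, 20.0],
                    bounds=([100.0, 0.0, 0.1], [800.0, 1000.0, 200.0]))
print("identified parameters:", fit.x)
```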
14 CFR Appendix D to Part 23 - Wheel Spin-Up and Spring-Back Loads
Code of Federal Regulations, 2014 CFR
2014-01-01
... (0.80 may be used); F_Vmax = maximum vertical force on wheel (pounds) = n_j W_e, where W_e and n_j are ... Appendix D to Part 23—Wheel Spin-Up and Spring-Back Loads. D23.1 Wheel spin-up loads. (a) The ...
Development of a Hybrid Deep Drawing Process to Reduce Springback of AHSS
NASA Astrophysics Data System (ADS)
Boskovic, Vladimir; Sommitsch, Christoph; Kicin, Mustafa
2017-09-01
In the future, steel manufacturers will strive for the implementation of Advanced High Strength Steels (AHSS) in the automotive industry to reduce mass and improve structural performance. A key challenge is the definition of optimal and cost-effective processes and solutions for introducing complex steel products in cold forming. However, the application of these AHSS often leads to formability problems such as springback. One promising approach to minimizing springback is the relaxation of stress through the targeted heating of the material in the radius area after the deep drawing process. In this study, experiments are conducted on a Dual Phase (DP) and a TWinning Induced Plasticity (TWIP) steel to assess process feasibility. This work analyses the influence of various heat treatment temperatures on the springback reduction of deep drawn AHSS.
NASA Astrophysics Data System (ADS)
Nasir, M. N. M.; Seman, M. A.; Mezeix, L.; Aminanda, Y.; Rivai, A.; Ali, K. M.
2017-03-01
The residual stresses that develop within fibre-reinforced laminate composites during autoclave processing lead to dimensional warpage known as spring-back deformation. A number of experiments have been conducted on flat laminate composites with unidirectional fibre orientation to examine the effects of both intrinsic and extrinsic parameters on the warpage. This paper extends the study to the effect of symmetrical lay-up on the spring-back of flat laminate composites. Plies stacked in various symmetrical sequences were fabricated to observe the severity of the resulting warpage. The experimental results demonstrated that the symmetrical lay-ups reduce the laminate stiffness in its principal direction compared to the unidirectional laminate, thus raising the spring-back warpage, with the exception of the [45/-45]S lay-up owing to its quasi-isotropic property.
Evaluation of spinal instrumentation rod bending characteristics for in-situ contouring.
Noshchenko, Andriy; Xianfeng, Yao; Armour, Grant Alan; Baldini, Todd; Patel, Vikas V; Ayers, Reed; Burger, Evalina
2011-07-01
Bending characteristics were studied in rods used for spinal instrumentation at in-situ contouring conditions. Five groups of five 6 mm diameter rods made from: cobalt alloy (VITALLIUM), titanium-aluminum-vanadium alloy (SDI™), β-titanium alloy (TNTZ), cold worked stainless steel (STIFF), and annealed stainless steel (MALLEABLE) were studied. The bending procedure was similar to that typically applied for in-situ contouring in the operating room and included two bending cycles: first--bending to 21-24° under load with further release of loading for 10 min, and second--bending to 34-37° at the previously bent site and release of load for 10 min. Applied load, bending stiffness, and springback effect were studied. Statistical evaluation included ANOVA, correlation and regression analysis. TNTZ and SDI™ rods showed the highest (p < 0.05) springback at both bending cycles. VITALLIUM and STIFF rods showed mild springback (p < 0.05). The least (p < 0.05) springback was observed in the MALLEABLE rods. Springback significantly correlated with the bend angle under load (p < 0.001). To reach the necessary bend angle after unloading, over bending should be 37-40% of the required angle in TNTZ and SDI™ rods, 27-30% in VITALLIUM and STIFF rods, and around 20% in MALLEABLE rods. Copyright © 2011 Wiley Periodicals, Inc.
NASA Technical Reports Server (NTRS)
Benzie, M. A.
1998-01-01
The objective of this research project was to examine processing and design parameters in the fabrication of composite components in order to better understand and minimize the springback associated with composite materials. To accomplish this, both processing and design parameters were included in a Taguchi-designed experiment. Composite angled panels were fabricated by hand lay-up techniques, and the fabricated panels were inspected for springback effects. This experiment yielded several significant results. The confirmation experiment validated the reproducibility of the factorial effects, the recognized error, and the reliability of the experiment. The material used in the design of tooling needs to be a major consideration when fabricating composite components, as expected. The factors dealing with resin flow, however, raise several potentially serious material and design questions that must be dealt with up front in order to minimize springback: the viscosity of the resin, the vacuum bagging of the part for cure, and the curing method selected. These factors directly affect design, material selection, and processing methods.
Springback in root gravitropism
NASA Technical Reports Server (NTRS)
Leopold, A. C.; Wettlaufer, S. H.
1989-01-01
Conditions under which a gravistimulus of Merit corn roots (Zea mays L.) is withdrawn result in a subsequent loss of gravitropic curvature, an effect which we refer to as `springback.' This loss of curvature begins within 1 to 10 minutes after removal of the gravistimulus. It occurs regardless of the presence or absence of the root cap. It is insensitive to inhibitors of auxin transport (2,3,5-triiodobenzoic acid, naphthylphthalamic acid) or to added auxin (2,4-dichlorophenoxyacetic acid). Springback is prevented if a clinostat treatment is interjected to neutralize gravistimulation during germination, which suggests that the change in curvature is a response to a `memory' effect carried over from a prior gravistimulation.
Influence of Forming Conditions on Springback in V-bending Process Using Servo Press
NASA Astrophysics Data System (ADS)
Abe, Shinya; Takahashi, Susumu
To improve fuel efficiency, aluminum alloys and high tensile steel sheets are increasingly being applied to automotive body parts. However, it is difficult to obtain accurate dimensions of the formed parts, so technologies for reducing the springback of press-formed parts are strongly demanded. It is said that the die holding time at the bottom dead center of a servo press slide can affect springback. To clarify the forming mechanisms of this phenomenon, a V-bending test with a servo press was performed using aluminum alloy sheets as specimens. The location of the press slide was measured by linear scales, and it was found that the movement of the slide in a slide motion program differs from the actual movement of the slide; it is therefore important to confirm that the slide is located at the position specified in the program. In addition, a springback angle measurement system is proposed that uses a laser displacement measurement apparatus. Because it avoids human error, the proposed measurement system is more accurate than the image processing method.
Approaches for springback reduction when forming ultra high-strength sheet metals
NASA Astrophysics Data System (ADS)
Radonjic, R.; Liewald, M.
2016-11-01
Nowadays, the automotive industry is constantly challenged by increasing environmental regulations and the continuous tightening of standards with regard to passenger safety (NCAP, Part 1). In order to fulfil these requirements, the use of ultra high-strength steels in research and industrial applications is of high interest. When forming such materials, the main problem results from the large amount of springback which occurs after the release of the part. This paper shows the applicability of several approaches for reducing the amount of springback in the forming of a hat-channel-shaped component. A novel approach for springback reduction based on forming with an alternating blank draw-in is presented as well. In this investigation an ultra high-strength steel of grade DP 980 was used. The part's measurements were taken at significant cross-sections in order to provide a qualitative comparison between the reference geometry and the part's released shape. The obtained results were analysed and used to quantify the success of the particular approaches for springback reduction. Taking a curved hat-channel-shaped component as an example, the results of the investigations showed that it is possible to reduce part shape deviations significantly when using DP 980 as the workpiece material.
New method for springback compensation for the stamping of sheet metal components
NASA Astrophysics Data System (ADS)
Birkert, A.; Hartmann, B.; Straub, M.
2017-09-01
The need for car body structures of higher strength and at the same time lower weight results in serious challenges for the stamping process. In particular, the use of high strength steel and aluminium sheets is causing growing problems with regard to elastic springback. To produce accurate parts, the stamping dies must be adjusted more or less by the amount of the springback in the opposite direction. For this purpose, well-known software solutions use the Displacement Adjustment Method or algorithms closely based on it. A crucial issue of this method is that the generated die surfaces deviate from those of the target geometry with regard to surface area. A new Physical Compensation Method has been developed and validated which takes geometrical nonlinearity into account and creates compensated die geometries whose die surfaces are equal in area to the target. In contrast to the standard mathematical/geometrical approach, the adjusted geometry is generated by a physical approach which makes use of the virtual part stiffness: the target geometry is deformed mechanically in a virtual process based on the springback simulation results by applying virtual forces in an additional elastic simulation. In this way, better part dimensions can be obtained in fewer tool optimization loops.
Forming and Bending of Metal Foams
NASA Astrophysics Data System (ADS)
Nebosky, Paul; Tyszka, Daniel; Niebur, Glen; Schmid, Steven
2004-06-01
This study examines the formability of a porous tantalum foam known as trabecular metal (TM). Used as a bone ingrowth surface on orthopedic implants, TM is desirable due to its combination of high strength, low relative density, and excellent osteoconductive properties. This research aims to develop bend and stretch forming as a cost-effective alternative to net machining and EDM for manufacturing thin parts made of TM. Experimentally, bending about a single axis using a wiping die was studied by observing cracking and measuring springback. It was found that die radius and clearance strongly affect the springback properties of TM, while punch speed, embossings, die radius and clearance all influence cracking. Depending on the combination of die radius and clearance, the springback factor ranged from 0.70 to 0.91. To examine the effect of the foam microstructure, bending was also examined numerically using a horizontal hexagonal mesh. As the hexagonal cells were elongated along the sheet length, elastic springback decreased; this can be explained by the earlier onset of plastic hinging at the vertices of the cells. While the numerical results matched the experimental results for the case of zero clearance, differences at higher clearances arose due to an imprecise characterization of the post-yield properties of tantalum. By changing the material properties of the struts, the models can be modified for use with other open-cell metallic foams.
NASA Astrophysics Data System (ADS)
Singh, Ranjan Kumar; Rinawa, Moti Lal
2018-04-01
The residual stresses arising in fiber-reinforced laminates during curing in closed molds lead to dimensional changes in the composites after their removal from the molds and cooling. One of these dimensional changes of angle sections is called springback. Parameters such as lay-up, stacking sequence, material system, cure temperature and thickness play an important role in it. In the present work, the lay-up and stacking sequence are optimized to maximize flexural stiffness and minimize the springback angle. Search algorithms are employed to obtain the best sequence through a repair strategy such as swapping. A new search algorithm, termed the lay-up search algorithm (LSA), is also proposed as an extension of the permutation search algorithm (PSA). The efficacy of the PSA and LSA is tested on laminates with a range of lay-ups, and a computer code implementing the above schemes is developed in MATLAB. Strategies for multi-objective optimization using the search algorithms are also suggested and tested.
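The swap-based repair strategy used by the permutation/lay-up search algorithms can be illustrated with a small local-search loop over a stacking sequence. The scoring function below is a crude stand-in (it is not classical lamination theory and not the authors' stiffness/springback objective); the ply pool and weights are assumptions for illustration.

```python
import random

random.seed(1)

PLY_ANGLES = [0, 0, 0, 45, -45, 45, -45, 90, 90, 90]   # fixed pool of plies

def score(sequence):
    """Placeholder objective: reward 0-degree plies placed far from the
    mid-plane (a rough proxy for flexural stiffness about the main axis)."""
    n = len(sequence)
    mid = (n - 1) / 2.0
    return sum((abs(i - mid) ** 2) * (1.0 if angle == 0 else 0.3)
               for i, angle in enumerate(sequence))

def swap_search(sequence, iterations=2000):
    """Local search that repeatedly swaps two ply positions (swap repair)
    and keeps the change whenever the objective improves."""
    best = list(sequence)
    best_score = score(best)
    for _ in range(iterations):
        i, j = random.sample(range(len(best)), 2)
        candidate = list(best)
        candidate[i], candidate[j] = candidate[j], candidate[i]
        if score(candidate) > best_score:
            best, best_score = candidate, score(candidate)
    return best, best_score

layup, value = swap_search(PLY_ANGLES)
print("best stacking sequence:", layup, "score:", value)
```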
NASA Astrophysics Data System (ADS)
Kiliclar, Yalin; Laurischkat, Roman; Vladimirov, Ivaylo N.; Reese, Stefanie
2011-08-01
The presented project deals with a robot-based incremental sheet metal forming process called roboforming, which has been developed at the Chair of Production Systems. It is characterized by flexible shaping using a freely programmable, path-synchronous movement of two industrial robots. The final shape is produced by the incremental infeed of the forming tool in the depth direction and its movement along the part contour in the lateral direction. However, the geometries formed in roboforming deviate several millimeters from the reference geometry. This results from the compliance of the involved machine structures and the springback effects of the workpiece. The project aims to predict these deviations caused by resiliences and to carry out a compensative path planning based on this prediction. Therefore, a planning tool is implemented which compensates the robots' compliance and the springback effects of the sheet metal. The forming process is simulated by means of a finite element analysis using a material model developed at the Institute of Applied Mechanics (IFAM). It is based on the multiplicative split of the deformation gradient in the context of hyperelasticity and combines nonlinear kinematic and isotropic hardening. Low-order finite elements used to simulate thin sheet structures, such as those used for the experiments, suffer from locking, a nonphysical stiffening effect. For an efficient finite element analysis, a special solid-shell finite element formulation based on reduced integration with hourglass stabilization has been developed. To circumvent the different locking effects, the enhanced assumed strain (EAS) and assumed natural strain (ANS) concepts are included in this formulation. Having such powerful tools available, we obtain more accurate geometries.
NASA Astrophysics Data System (ADS)
Dang, Van Tuan; Lafon, Pascal; Labergere, Carl
2017-10-01
In this work, a combination of Proper Orthogonal Decomposition (POD) and Radial Basis Functions (RBF) is proposed to build a surrogate model based on the Springback 3D bending benchmark from the Numisheet 2011 congress. The influence of two design parameters, the geometrical parameter of the die radius and the process parameter of the blank holder force, on the springback of the sheet after a stamping operation is analyzed. A classical Design of Experiments (DoE) based on a full factorial design samples the parameter space, and the sample points serve as input data for finite element method (FEM) numerical simulations of the sheet metal stamping process. The basic idea is to consider the design parameters as additional dimensions for the solution of the displacement fields. The order of the resulting high-fidelity model is reduced through the POD method, which performs model space reduction and yields the basis functions of the low-order model. Specifically, the snapshot method is used in our work, in which the basis functions are derived from the deviation snapshots of the matrix of final displacement fields from the FEM numerical simulations. The obtained basis functions are then used to determine the POD coefficients, and RBF is used to interpolate these POD coefficients over the parameter space. Finally, the presented POD-RBF approach, used here for shape optimization, achieves high accuracy.
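A sketch of the snapshot-POD plus RBF-coefficient interpolation idea described above. The array shapes, parameter names and the Gaussian kernel width are assumptions made for illustration; the actual benchmark data and FEM snapshots are not included, and synthetic values stand in for them.

```python
# Snapshot POD of displacement fields + RBF interpolation of POD coefficients.
import numpy as np

def pod_basis(snapshots, rank):
    """snapshots: (n_dof, n_samples) matrix of displacement fields."""
    mean = snapshots.mean(axis=1, keepdims=True)
    U, s, _ = np.linalg.svd(snapshots - mean, full_matrices=False)
    return mean, U[:, :rank]           # POD modes of the deviation snapshots

def rbf_fit(params, coeffs, eps=1.0):
    """Fit Gaussian RBF weights mapping design parameters -> POD coefficients."""
    r = np.linalg.norm(params[:, None, :] - params[None, :, :], axis=-1)
    A = np.exp(-(eps * r) ** 2)
    return np.linalg.solve(A, coeffs)  # (n_samples, rank) weights

def rbf_eval(params_train, weights, p_new, eps=1.0):
    r = np.linalg.norm(params_train - p_new[None, :], axis=-1)
    return np.exp(-(eps * r) ** 2) @ weights

# Usage with synthetic data: 2 design parameters (die radius, holder force),
# 9 snapshots of a 1000-dof displacement field.
rng = np.random.default_rng(0)
params = np.array([[r, f] for r in (5.0, 7.5, 10.0) for f in (10.0, 20.0, 30.0)])
snaps = rng.normal(size=(1000, len(params)))
mean, modes = pod_basis(snaps, rank=3)
coeffs = modes.T @ (snaps - mean)                      # (rank, n_samples)
w = rbf_fit(params, coeffs.T)
u_new = mean[:, 0] + modes @ rbf_eval(params, w, np.array([8.0, 15.0]))
print(u_new.shape)                                     # (1000,)
```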
Comparison of spring characteristics of titanium-molybdenum alloy and stainless steel
Salehi, Anahita; Asatourian, Armen
2017-01-01
Background Titanium-molybdenum alloy (TMA) and stainless steel (SS) wires are commonly used in orthodontics as arch-wires for tooth movement. However, the plastic deformation of these arch-wires is a major concern among orthodontists. This study aimed to compare the mechanical properties of TMA and SS wires of different dimensions. Material and Methods Seventy-two wire samples (36 TMA and 36 SS) of three different sizes (19×25, 17×25 and 16×22) were analyzed in vitro, with 12 samples in each group. Various mechanical properties of the wires, including spring-back, bending moment and stiffness, were determined using a universal testing machine. Student’s t-test showed statistically significant differences in the mean values of all the groups. In addition, a metallographic comparison of SS and TMA wires was conducted under an optical microscope. Results The stiffness of 16×22-sized SS and TMA springs was found to be 12±2 and 5±0.4, respectively, while the bending moment was estimated to be 1927±352 (gm-mm) and 932±16 (gm-mm), respectively; the spring-back index was determined to be 0.61±0.2 and 0.4±0.09, respectively (p<0.001). There were no statistically significant differences in the spring-back index for the larger wire dimensions. Conclusions Systematic analysis indicated that springs made of TMA were superior to those made of SS. The use of TMA is suggested from both economic and functional viewpoints, although further clinical investigations are recommended. Key words:Bending moment, optical microscope, spring-back, stainless steel, stiffness, titanium‒molybdenum alloy. PMID:28149469
Design of Tools for Press-countersinking or Dimpling 0.040-inch-thick-24S-T Sheet
NASA Technical Reports Server (NTRS)
Templin, R L; Fogwell, J W
1942-01-01
A set of dimpling tools was designed for 0.040-inch 24S-T sheet and flush-type rivets 1/8 inch in diameter with 100 degree countersunk heads. The dimples produced under different conditions of pressure, sheet thickness, and drill diameter are presented as cross-sectional photographs magnified 20 times. The most satisfactory values for the dimpling tools were found to be: maximum punch diameter, 0.231 inch; maximum die diameter, 0.223 inch; maximum mandrel diameter, 0.128 inch; dimple angle, 100 degree; punch springback angle, 1 1/2 degree; and die springback angle, 2 degree.
Forming of AHSS using Servo-Presses
NASA Astrophysics Data System (ADS)
Groseclose, Adam Richard
Stamping of Advanced High Strength Steel (AHSS) alloys poses several challenges due to the material's higher strength and lower formability compared to conventional steels, as well as other problems such as (a) inconsistency of incoming material properties, (b) ductile fracture during forming, (c) higher contact pressure and temperature rise during forming, (d) higher die wear leading to reduced tool life, (e) higher forming load/press capacity, and (f) large springback leading to dimensional inaccuracy in the formed part [Palaniswamy et al., 2007]. The use of AHSS has been increasing steadily in automotive stamping. New AHSS alloys (TRIP, TWIP) may replace some hot stamping applications. Stamping of AHSS alloys, especially higher strength materials of 780 MPa and above, presents new challenges in obtaining good part definition (corner and fillet radii), formability (fracture and resulting scrap) and in reducing springback. Servo-drive presses, with infinitely variable and adjustable ram speed and the ability to dwell at bottom dead center (BDC), offer a potential improvement in quality, part definition, and springback reduction, especially when the infinitely adjustable slide motion is used in combination with a CNC hydraulic cushion. Thus, it is desirable to establish a scientific/engineering basis for improving the stamping conditions when forming AHSS on a servo-drive press.
NASA Astrophysics Data System (ADS)
Mitsomwang, Pusit; Borrisutthekul, Rattana; Klaiw-awoot, Ken; Pattalung, Aran
2017-09-01
This research investigates the application of a tip-bottomed tool for bending an advanced ultra-high strength steel sheet. V-die bending experiments on a dual phase steel (DP980) sheet with a thickness of 1.6 mm were carried out using a conventional punch and a tip-bottomed punch. The experimental results revealed that the springback of the bent workpiece was smaller with the tip-bottomed punch than with the conventional punch. To further discuss the bending characteristics, a finite element (FE) model was developed and used to simulate the bending of the workpiece. The FE analysis showed that the tip-bottomed punch promoted plastic deformation at the bending region, and consequently the springback of the workpiece was reduced. In addition, the width of the punch tip was found to affect the deformation at the bending region and thus determine the springback of the bent workpiece. Moreover, the use of the tip-bottomed punch resulted in an apparent increase in the surface hardness of the bent workpiece compared with bending using the conventional punch.
Rebound mechanics of micrometre-scale, spherical particles in high-velocity impacts.
Yildirim, Baran; Yang, Hankang; Gouldstone, Andrew; Müftü, Sinan
2017-08-01
The impact mechanics of micrometre-scale metal particles with flat metal surfaces is investigated for high-velocity impacts ranging from 50 m s⁻¹ to more than 1 km s⁻¹, where impact causes predominantly plastic deformation. A material model that includes high strain rate and temperature effects on the yield stress, heat generation due to plasticity, material damage due to excessive plastic strain and heat transfer is used in the numerical analysis. The coefficient of restitution e is predicted by the classical work, using elastic-plastic deformation analysis with quasi-static impact mechanics, to be proportional to [Formula: see text] and [Formula: see text] for the low and moderate impact velocities that span the ranges of 0-10 and 10-100 m s⁻¹, respectively. In the elastic-plastic and fully plastic deformation regimes the particle rebound is attributed to the elastic spring-back that initiates at the particle-substrate interface. At higher impact velocities (0.1-1 km s⁻¹) e is shown to be proportional to approximately [Formula: see text]. In this deeply plastic deformation regime, various deformation modes that depend on the plastic flow of the material are identified, including the time lag between the rebound instances of the top and bottom points of the particle and the lateral spreading of the particle. In this deformation regime, the elastic spring-back initiates subsurface, in the substrate.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brunet, M.; Sabourin, F.
2005-08-05
This paper is concerned with the effectiveness of a triangular 3-node shell element without rotational d.o.f. and its extension to a new 4-node quadrilateral shell element called S4, with only 3 translational degrees of freedom per node and one-point integration. The curvatures are computed from the surrounding elements. Extension from the rotation-free triangular element to a quadrilateral element requires internal curvatures in order to avoid singular bending stiffness. Two numerical examples with regular and irregular meshes are performed to show the convergence and accuracy. Deep-drawing of a box, spring-back analysis of a U-shaped strip sheet and the crash simulation of a beam-box complete the demonstration of the bending capabilities of the proposed rotation-free triangular and quadrilateral elements.
Cheung, Jason Pui Yin; Cheung, Prudence Wing Hang; Cheung, Amy Yim Ling; Lui, Darren; Cheung, Kenneth M C
2018-06-01
To compare the clinical and radiological outcomes between skipped-level and all-level plating for cervical laminoplasty. Patients with cervical spondylotic myelopathy (CSM) treated by open-door laminoplasty with a minimum 2-year postoperative follow-up were included. All patients had opening from C3-6 or C3-7 and were divided into skipped-level or all-level plating groups. Japanese Orthopaedic Association (JOA) scores and canal measurements were obtained preoperatively, immediately (within 1 week) postoperatively, and at 2 and 6 weeks and 3, 6 and 12 months postoperatively. A paired t test was used for comparative analysis. Receiver operating characteristic analysis was used to determine the canal expansion cutoff for spring-back closure. A total of 74 subjects were included, with a mean age of 66.1 ± 11.3 years at surgery. Of these, 32 underwent skipped-level plating and 42 underwent all-level plating. No significant differences were noted between the two groups at baseline and follow-up. Spring-back closure was observed in up to 50% of the non-plated levels within 3 months postoperatively. The cutoff for developing spring-back closure was 7 mm of canal expansion for C3-6. No differences were observed in JOA scores and recovery rates between the two groups. None of the patients with spring-back required reoperation. There were no significant differences between skipped-level and all-level plating in terms of JOA score, recovery rate, or canal diameter. This has a substantial impact on cost savings in CSM management, as up to two plates per patient undergoing a standard C3-6 laminoplasty may be omitted, instead of applying four plates (one at every level), while achieving similar clinical and radiological outcomes. III. These slides can be retrieved under Electronic Supplementary Material.
Numerical study of multi-point forming of thick sheet using remeshing procedure
NASA Astrophysics Data System (ADS)
Cherouat, A.; Ma, X.; Borouchaki, H.; Zhang, Q.
2018-05-01
Multi-point forming (MPF) is an innovative technology for manufacturing complex thick sheet metal products without the need for solid tools. The central component of this system is a pair of matrices of discrete punches whose die surface is constructed by changing the positions of the punches through CAD and a control system. Because reconfigurable discrete tools are used, part-manufacturing costs are reduced and manufacturing time is shortened substantially. Firstly, in this work we develop constitutive equations which couple isotropic ductile damage to the flow stress based on Continuum Damage Mechanics theory. A modified Johnson-Cook flow model fully coupled with isotropic ductile damage is established using quasi-unilateral damage evolution to account for both the opening and closure of micro-cracks. During the forming process, severe mesh distortion of elements occurs after a few incremental forming steps. Secondly, we introduce a 3D adaptive remeshing procedure based on linear tetrahedral elements and geometrical/physical error estimation to optimize the element quality, to refine the mesh size in the whole model and to adapt the deformed mesh to the tool geometry. Simulation of the MPF process and of the unloading spring-back is carried out with this adaptive remeshing scheme using the commercial finite element package ABAQUS and the OPTIFORM mesher. Subsequently, the factors influencing MPF spring-back are investigated using the proposed remeshing procedure.
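For illustration, a Johnson-Cook-type flow stress weakened by an isotropic damage variable D, in the spirit of the coupled model described above; the modified and quasi-unilateral terms of the actual model are not reproduced, and all parameter values below are placeholders, not identified material data.

```python
# Generic Johnson-Cook flow stress coupled with isotropic damage (sketch).
import math

def jc_flow_stress(eps_p, eps_dot, T, D,
                   A=350e6, B=600e6, n=0.3, C=0.02, m=1.0,
                   eps_dot0=1.0, T_room=293.0, T_melt=1700.0):
    """sigma = (1 - D) * (A + B*eps_p**n) * (1 + C*ln(eps_dot*)) * (1 - T*^m)."""
    rate_term = 1.0 + C * math.log(max(eps_dot / eps_dot0, 1e-12))
    T_star = (T - T_room) / (T_melt - T_room)
    thermal_term = 1.0 - max(T_star, 0.0) ** m
    return (1.0 - D) * (A + B * eps_p ** n) * rate_term * thermal_term

print(jc_flow_stress(eps_p=0.1, eps_dot=1.0, T=293.0, D=0.05) / 1e6, "MPa")
```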
Springback Compensation Process for High Strength Steel Automotive Parts
NASA Astrophysics Data System (ADS)
Onhon, M. Fatih
2016-08-01
This paper describes an advanced stamping simulation methodology used in the automotive industry to shorten total die manufacturing time in a new vehicle project by taking advantage of leading-edge virtual try-out technology.
Massively Parallel Processing for Fast and Accurate Stamping Simulations
NASA Astrophysics Data System (ADS)
Gress, Jeffrey J.; Xu, Siguang; Joshi, Ramesh; Wang, Chuan-tao; Paul, Sabu
2005-08-01
The competitive automotive market drives automotive manufacturers to speed up vehicle development cycles and reduce lead time. Fast tooling development is one of the key areas supporting fast and short vehicle development programs (VDP). In the past ten years, stamping simulation has become the most effective validation tool for predicting and resolving potential formability and quality problems before the dies are physically made. Stamping simulation and formability analysis have become a critical business segment in the GM math-based die engineering process. As simulation becomes one of the major production tools in the engineering factory, simulation speed and accuracy are two of the most important measures of stamping simulation technology. The speed and time-in-system of forming analysis become even more critical to supporting fast VDP and tooling readiness. Since 1997, the General Motors Die Center has been working jointly with our software vendor to develop and implement a parallel version of the simulation software for mass production analysis applications. By 2001, this technology had matured in the form of distributed memory processing (DMP) of draw die simulations in a networked distributed-memory computing environment. In 2004, this technology was refined to massively parallel processing (MPP) and extended to line die forming analysis (draw, trim, flange, and associated spring-back) running on a dedicated computing environment. The evolution of this technology and the insight gained through the implementation of DMP/MPP technology, as well as performance benchmarks, are discussed in this publication.
Aminzahed, Iman; Mashhadi, Mahmoud Mosavi; Sereshk, Mohammad Reza Vaziri
2017-02-01
Micro forming is a manufacturing process for fabricating micro parts with high quality in a cost-effective manner. Deep drawing can be a favorable method for the production of complicated parts at macro and micro scales. In this paper, a piezoelectric actuator is used as a novel approach in the field of micro manufacturing. Investigations are conducted with four rectangular punches and blanks of various thicknesses. The effects of blank holder pressure on thickness distribution, punch force, and springback are studied. According to the results of this work, increasing the blank holder pressure in scaled deep drawing leads to a decrease in the punch force and springback, in contrast to its effect on the thickness of the drawn part. Furthermore, it is shown that in micro deep drawing the effects of holder pressure on the aforementioned parameters can be ignored. Copyright © 2016 Elsevier B.V. All rights reserved.
Physical, mechanical, and flexural properties of 3 orthodontic wires: an in-vitro study.
Juvvadi, Shubhaker Rao; Kailasam, Vignesh; Padmanabhan, Sridevi; Chitharanjan, Arun B
2010-11-01
Understanding the biologic requirements of orthodontic patients requires proper characterization studies of new archwire alloys. The aims of this study were to evaluate properties of wires made of 2 new materials and to compare their properties with those of stainless steel. The sample consisted of 30 straight lengths of 3 types of wires: stainless steel, titanium-molybdenum alloy, and beta-titanium alloy. Eight properties were evaluated: wire dimension, edge bevel, composition, surface characteristics, frictional characteristics, ultimate tensile strength (UTS), modulus of elasticity (E), yield strength (YS), and load deflection characteristics. A toolmaker's microscope was used to measure the edge bevel, and x-ray fluorescence was used for composition analysis. Surface profilometry and scanning electron microscopy were used for surface evaluation. A universal testing machine was used to evaluate frictional characteristics, tensile strength, and 3-point bending. Stainless steel was the smoothest wire; it had the lowest friction and spring-back values and high values for stiffness, E, YS, and UTS. The titanium-molybdenum alloy was the roughest wire; it had high friction and intermediate spring-back, stiffness, and UTS values. The beta-titanium alloy was intermediate for smoothness, friction, and UTS but had the highest spring-back. The beta-titanium alloy with increased UTS and YS had a low E value, suggesting that it would have greater resistance to fracture, thereby overcoming a major disadvantage of titanium-molybdenum alloy wires. The beta-titanium alloy wire would also deliver gentler forces. Copyright © 2010 American Association of Orthodontists. Published by Mosby, Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Vrolijk, Mark; Ogawa, Takayuki; Camanho, Arthur; Biasutti, Manfredi; Lorenz, David
2018-05-01
As a result of the ever-increasing demand to produce lighter vehicles, more and more advanced high-strength materials are used in the automotive industry. Focusing on sheet metal cold forming processes, these materials require high pressing forces and exhibit large springback after forming. Due to the high pressing forces, deformations occur in the tooling geometry, introducing dimensional inaccuracies in the blank and potentially impacting the final springback behavior. As a result, the tool deformations can affect the final assembly or introduce cosmetic defects. Several iterations are often required in try-out to obtain the required tolerances, with costs going up to as much as 30% of the entire product development cost. To investigate sheet metal part feasibility and quality, CAE tools are widely used in the automotive industry. However, in current practice the influence of tool deformations on the final part quality is generally neglected, and simulations are carried out with rigid tools to avoid drastically increased calculation times. If the tool deformation is analyzed through simulation, it is normally done at the end of the drawing process, when contact conditions are mapped onto the die structure and a static analysis is performed to check the deflections of the tool. But this method does not predict the influence of these deflections on the final quality of the part. In order to take tool deformations into account during drawing simulations, ESI has developed the ability to couple solvers efficiently so that tool deformations can be included in real time in the drawing simulation without a large increase in simulation time compared to simulations with rigid tools. In this paper, a study is presented which demonstrates the effect of tool deformations on the final part quality.
Prevention of crack in stretch flanging process using hot stamping technique
NASA Astrophysics Data System (ADS)
Syafiq, Y. Mohd; Hamedon, Z.; Azila Aziz, Wan; Razlan Yusoff, Ahmad
2017-10-01
Demand for enhanced passenger safety as well as weight reduction of automobiles has increased the use of high-strength steel sheets. A lightweight sheet metal with high strength is suitable for producing automotive parts such as body-in-white panels. However, stretch flanging of high-strength steel sheet is problematic due to high springback and a tendency to crack. This study uses three methods to stretch flange the sheets: lubricants, shear-edge polishing and hot stamping. The effectiveness of these methods is evaluated by comparing the flange length each method can achieve. For stretch flanging with lubricant and with a polished sheared edge, the flange length failed to achieve the 15 mm target, while hot stamping improved the formability of the sheet and prevented springback and cracking. Hot stamping not only improved the formability of the sheet but also transformed the microstructure into martensite, thus improving the hardness and strength of the sheet after being quenched in the dies.
14 CFR 25.473 - Landing load conditions and assumptions.
Code of Federal Regulations, 2010 CFR
2010-01-01
... presence of systems or procedures significantly affects the lift. (c) The method of analysis of airplane... dynamic characteristics. (2) Spin-up and springback. (3) Rigid body response. (4) Structural dynamic response of the airframe, if significant. (d) The landing gear dynamic characteristics must be validated by...
14 CFR 23.479 - Level landing conditions.
Code of Federal Regulations, 2011 CFR
2011-01-01
... corresponding instantaneous vertical ground reactions, and the forward-acting horizontal loads resulting from rapid reduction of the spin-up drag loads (spring-back) must be combined with vertical ground reactions... reactions (neglecting wing lift). (c) In the absence of specific tests or a more rational analysis for...
Contact Modelling of Large Radius Air Bending with Geometrically Exact Contact Algorithm
NASA Astrophysics Data System (ADS)
Vorkov, V.; Konyukhov, A.; Vandepitte, D.; Duflou, J. R.
2016-08-01
Usage of high-strength steels in conventional air bending is restricted due to the limited bendability of these metals. Large-radius punches provide a typical approach for decreasing deformations during the bending process. However, as deflection progresses the loading scheme changes gradually. Therefore, modelling of the contact interaction is essential for an accurate description of the loading scheme. In the current contribution, the authors implemented a plane frictional contact element based on the penalty method. The geometrically exact contact algorithm is used for determining the penetration. The implementation is done in OOFEM, an open-source finite element solver. In order to verify the simulation results, experiments have been conducted on a bending press brake for 4 mm Weldox 1300 with a punch radius of 30 mm and a die opening of 80 mm. The maximum error for the springback calculation is 0.87° for a bending angle of 144°. The contact interaction is a crucial part of large-radius bending simulation, and the implementation leads to a reliable solution for the springback angle.
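A minimal sketch of a penalty-type normal contact force of the kind used in such a contact element; the stiffness value and the node/segment geometry are illustrative assumptions, not the OOFEM implementation itself.

```python
# Penalty contact in 2D: penetration below a tool segment generates a restoring force.
import numpy as np

def penalty_contact_force(x_node, seg_a, seg_b, k_pen=1e9):
    """Return the normal contact force on a node penetrating segment (seg_a, seg_b)."""
    t = seg_b - seg_a
    t = t / np.linalg.norm(t)                 # unit tangent of the tool segment
    n = np.array([-t[1], t[0]])               # unit normal (2D)
    gap = np.dot(x_node - seg_a, n)           # signed distance to the segment
    if gap >= 0.0:                            # no penetration -> no force
        return np.zeros(2)
    return -k_pen * gap * n                   # push the node back along +n

# Usage: a node 0.01 mm below a horizontal die surface.
print(penalty_contact_force(np.array([5.0, -0.01]),
                            np.array([0.0, 0.0]), np.array([10.0, 0.0])))
```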
NASA Astrophysics Data System (ADS)
Bang, Sungsik; Rickhey, Felix; Kim, Minsoo; Lee, Hyungyil; Kim, Naksoo
2013-12-01
In this study we establish a process to predict the hardening behavior of Zircaloy-4 sheets considering the Bauschinger effect. When a metal is compressed after tension during forming, its yield strength decreases. For this reason, the Bauschinger effect should be considered in FE simulations of spring-back. We suggested a suitable specimen size and a method for determining the optimum tightening torque for simple shear tests. Shear stress-strain curves were obtained for five materials. We developed a method to convert the shear load-displacement curve to the effective stress-strain curve with FEA. We simulated the simple shear forward/reverse test using a combined isotropic/kinematic hardening model. We also investigated the change in the load-displacement curve when varying the hardening coefficients, and determined the hardening coefficients so that they follow the hardening behavior of Zircaloy-4 observed in the experiments.
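As background, a common first approximation for relating simple-shear quantities to von Mises effective quantities is shown below; the abstract's FE-based conversion is more elaborate, so this is only the textbook relation, included as a hedged sketch.

```python
# Textbook von Mises conversion from simple-shear stress/strain to effective
# stress/strain, often used before an FE-based correction of the kind above.
import math

def effective_from_shear(tau, gamma):
    """tau: shear stress, gamma: engineering shear strain (placeholder inputs)."""
    sigma_eff = math.sqrt(3.0) * tau       # effective (von Mises) stress
    eps_eff = gamma / math.sqrt(3.0)       # effective plastic strain
    return sigma_eff, eps_eff

print(effective_from_shear(tau=200.0, gamma=0.10))   # -> (~346.4, ~0.0577)
```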
NASA Astrophysics Data System (ADS)
Oliveira, M. C.; Baptista, A. J.; Alves, J. L.; Menezes, L. F.; Green, D. E.; Ghaei, A.
2007-05-01
The main purpose of the "Numisheet'05 Benchmark #3: Channel Draw/Cylindrical Cup" was to evaluate the forming characteristics of materials in multi-stage processes. The concept was to verify the strain fields achieved during the two-stage forming process and also to test the ability of numerical models to predict both strain and stress fields. The first stage consisted of forming channel sections in an industrial-scale channel draw die. The material that flows through the drawbead and over the die radii into the channel sidewalls is prestrained by cyclic bending and unbending. The prestrained channel sidewalls are subsequently cut and subjected to a near plane-strain Marciniak-style cup test. This study emphasizes the analysis of the first-stage process, the channel draw, since accurate numerical results for the first-stage forming and springback are essential to guarantee proper initial state variables for the subsequent stage simulation. Four different sheet materials were selected: mild steel AKDQ-HDG, high strength steel HSLA-HDG, dual phase steel DP600-HDG and an aluminium alloy AA6022-T43. The four sheet materials were formed in the same channel draw die, but with drawbead penetrations of 25%, 50% and 100%. This paper describes the testing and measurement procedures for the numerical simulation of these conditions with the DD3IMP FE code. A comparison between experimental and numerical simulation results for the first stage is presented. The experimental results indicate that an increase in drawbead penetration is accompanied by a general decrease in springback, with both the sidewall radius of curvature and the sidewall angle increasing with increasing drawbead penetration. An exception to this trend occurs at the shallowest bead penetration: the radius of curvature in the sidewall is larger than expected. The sequence of cyclic tension and compression is numerically studied for each drawbead penetration in order to investigate this phenomenon.
Influence of roll levelling on material properties and postforming springback
NASA Astrophysics Data System (ADS)
Galdos, Lander; Mendiguren, Joseba; de Argandoña, Eneko Saenz; Otegi, Nagore; Silvestre, Elena
2018-05-01
Roll levelling is commonly used in cut-to-length and blanking lines to flatten initial coils and produce residual-stress-free precuts. A roll straightener is also used to remove coil-set when progressive dies are used and the starting raw material is a coil. Industrial evidence has shown that roll leveler or straightener tuning is crucial to obtaining a robust process and repeatable springback values after stamping. This is even more relevant when using Advanced High Strength Steels and aluminum coils. However, the mechanisms affecting this material behavior are unknown, and how the levelling technology affects the material properties has not yet been reported. In this paper, the influence of the roll levelling process on the final properties of a 6xxx aluminum alloy is studied. For that purpose, as-received coils have been relevelled using two different leveler set-ups, and tensile tests have been performed on both the initial and final material states. Aiming to quantify the effect of the material hardening on a real forming process, a new tangential bending prototype has been developed. As-received and levelled precuts have been bent, and the forming torques and postforming angles have been compared.
NASA Astrophysics Data System (ADS)
Gisario, Annamaria; Barletta, Massimiliano; Venettacci, Simone; Veniali, Francesco
2015-06-01
Achieving sharp bending angles with small fillet radii on stainless steel sheets by mechanical bending requires sophisticated bending devices and troublesome operational procedures, which can involve expensive molds, huge presses and large loads. In addition, springback is always difficult to control, often leading to final parts with limited precision and accuracy. In contrast, laser-assisted bending of metals is an emerging technology, as it often allows difficult and multifaceted manufacturing tasks to be performed with relatively little effort. In the present work, laser-assisted bending of stainless steel sheets to achieve sharp angles is investigated. First, bending trials were performed by combining laser irradiation with an auxiliary bending device triggered by a pneumatic actuator and based on the kinematics of deformable quadrilaterals. Second, the laser operational parameters, that is, scanning speed, power and number of passes, were varied to identify the most suitable processing settings. Bending angles and fillet radii were measured with a coordinate measuring machine. Experimental data were processed by combined Analysis of Means (ANOM) and Analysis of Variance (ANOVA). Based on the experimental findings, the best strategy to produce an aircraft prototype from a stainless steel sheet was designed and implemented.
Parameter Optimization and Electrode Improvement of Rotary Stepper Micromotor
NASA Astrophysics Data System (ADS)
Sone, Junji; Mizuma, Toshinari; Mochizuki, Shunsuke; Sarajlic, Edin; Yamahata, Christophe; Fujita, Hiroyuki
We developed a three-phase electrostatic stepper micromotor and performed numerical simulations to improve its performance for practical use and to optimize its design. A circuit simulation was conducted on a simplified structure, taking into account the springback force generated by the flexure-based support mechanism. A new improvement method for the electrodes was also considered. This improvement, together with other parameter optimizations, achieved low-voltage drive of the micromotor.
NASA Astrophysics Data System (ADS)
Markanday, H.; Nagarajan, D.
2018-02-01
Incremental sheet forming (ISF) is a novel die-less sheet metal forming process which can produce components directly from the CAD geometry using a CNC milling machine with less production time and cost. The formability of the sheet material is greatly affected by the process parameters involved and the tool path adopted. The present study investigates the influence of different process parameter values, using the helical tool path strategy, on the formability of commercially pure Al, with the aim of achieving maximum formability in the material. ISF experiments producing an 80 mm diameter axisymmetric dome were carried out on 2 mm thick commercially pure Al sheets at different tool speeds and feed rates in a CNC milling machine with a 10 mm hemispherical forming tool. The obtained parts were analyzed for springback, amount of thinning and maximum forming depth. The results showed that when the tool speed was increased while keeping the feed rate constant, the forming depth and thinning also increased. On the contrary, when the feed rate was increased while keeping the tool speed constant, the forming depth and thinning decreased. Springback was found to be higher when the feed rate was increased than when the tool speed was increased.
ShinyGPAS: interactive genomic prediction accuracy simulator based on deterministic formulas.
Morota, Gota
2017-12-20
Deterministic formulas for the accuracy of genomic predictions highlight the relationships among prediction accuracy and potential factors influencing prediction accuracy prior to performing computationally intensive cross-validation. Visualizing such deterministic formulas in an interactive manner may lead to a better understanding of how genetic factors control prediction accuracy. The software to simulate deterministic formulas for genomic prediction accuracy was implemented in R and encapsulated as a web-based Shiny application. Shiny genomic prediction accuracy simulator (ShinyGPAS) simulates various deterministic formulas and delivers dynamic scatter plots of prediction accuracy versus genetic factors impacting prediction accuracy, while requiring only mouse navigation in a web browser. ShinyGPAS is available at: https://chikudaisei.shinyapps.io/shinygpas/ . ShinyGPAS is a shiny-based interactive genomic prediction accuracy simulator using deterministic formulas. It can be used for interactively exploring potential factors that influence prediction accuracy in genome-enabled prediction, simulating achievable prediction accuracy prior to genotyping individuals, or supporting in-class teaching. ShinyGPAS is open source software and it is hosted online as a freely available web-based resource with an intuitive graphical user interface.
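One frequently cited deterministic formula of the kind such simulators visualize (attributed in the literature to Daetwyler and colleagues) relates expected prediction accuracy to training population size, heritability and the number of independent chromosome segments; the form below is given only as an illustration, and the exact formulas implemented in ShinyGPAS should be taken from its own documentation.

```latex
% Expected genomic prediction accuracy as a function of training population
% size N, trait heritability h^2 and the effective number of independent
% chromosome segments M_e (illustrative form).
r \;\approx\; \sqrt{\frac{N h^{2}}{N h^{2} + M_{e}}}
```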
Formability of paperboard during deep-drawing with local steam application
NASA Astrophysics Data System (ADS)
Franke, Wilken; Stein, Philipp; Dörsam, Sven; Groche, Peter
2018-05-01
The use of paperboard can significantly improve the environmental compatibility of everyday products such as packages. Nevertheless, most packages are currently made of plastics, since the three-dimensional shaping of paperboard is possible only to a limited extent. In order to increase the forming possibilities, deep drawing of cardboard has been intensively investigated for more than a decade. An improvement with regard to increased forming limits has been achieved by heating the tool parts, which leads to a softening of paperboard constituents such as lignin. A further approach is moistening the samples, whereby the hydrogen bonds between the fibers are weakened, resulting in an increase in formability. It is expected that a combination of both approaches will result in a significant increase in the forming capacity and in the shape accuracy. For this reason, a new tool concept is introduced within the scope of this work which makes it possible to moisten samples during the deep drawing process by means of a steam supply. The conducted investigations show that spring-back in the preferred fiber direction can be reduced by 38 %. Orthogonal to the preferred fiber direction, a spring-back reduction of up to 79 % is determined, which corresponds to a perfect shape. Moreover, the steam duration and the initial moisture content were found to influence the final shape. In addition to the increased dimensional accuracy, an optimized wrinkle compression compared to conventional deep drawing is found. In summary, steam application in the deep drawing of paperboard significantly improves the part quality.
NASA Astrophysics Data System (ADS)
Umezu, Yasuyoshi; Watanabe, Yuko; Ma, Ninshu
2005-08-01
Since 1996, the Japan Research Institute Limited (JRI) has been providing a sheet metal forming simulation system called JSTAMP-Works, which packages the FEM solvers LS-DYNA and JOH/NIKE. It was perhaps the first multistage system at that time and has enjoyed a good reputation among users in Japan. To meet the recent needs of process designers and CAE engineers ("faster, more accurate and easier"), a new metal forming simulation system, JSTAMP-Works/NV, has been developed. JSTAMP-Works/NV packages an automatic CAD healing function and many new capabilities, such as prediction of 3D trimming lines for flanging or hemming, remote control of solver execution for multi-stage forming processes, and shape evaluation between FEM and CAD. In addition, a multi-stage, multi-purpose inverse FEM solver, HYSTAMP, has been developed and will soon be put on the market; it has proven to be very fast, accurate and robust. Lastly, the authors give some application examples of a user-defined ductile damage subroutine in LS-DYNA for the estimation of material failure and springback in metal forming simulation.
NASA Technical Reports Server (NTRS)
1979-01-01
A decade ago, NASA's Ames Research Center developed a new foam material for protective padding of airplane seats. Now known as Temper Foam, the material has become one of the most widely used spinoffs. The latest application is a line of Temper Foam cushioning produced by Edmont-Wilson, Coshocton, Ohio, for office and medical furniture. The example pictured is the Classic Dental Stool, manufactured by Dentsply International, Inc., York, Pennsylvania, one of four models which use Edmont-Wilson Temper Foam. Temper Foam is an open-cell, flame-resistant foam with unique qualities.
Wang, Xueyi; Davidson, Nicholas J.
2011-01-01
Ensemble methods have been widely used to improve prediction accuracy over individual classifiers. In this paper, we establish several results about the prediction accuracy of ensemble methods for binary classification that have been missed or misinterpreted in previous literature. First we show the upper and lower bounds of the prediction accuracies (i.e. the best and worst possible prediction accuracies) of ensemble methods. Next we show that an ensemble method can achieve > 0.5 prediction accuracy while the individual classifiers have < 0.5 prediction accuracies. Furthermore, for individual classifiers with different prediction accuracies, the average of the individual accuracies determines the upper and lower bounds. We perform two experiments to verify these results and show that it is hard to achieve the upper- and lower-bound accuracies with random individual classifiers, so better algorithms need to be developed. PMID:21853162
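A small constructed example illustrating the claim that a majority-vote ensemble can exceed 0.5 accuracy even when every individual classifier is below 0.5; the predictions are hand-picked for illustration and are not taken from the paper's experiments.

```python
# Each classifier is right on only 2 of 5 samples (accuracy 0.4), but the correct
# votes overlap so that majority voting is right on 3 of 5 samples (accuracy 0.6).
import numpy as np

truth = np.array([1, 1, 1, 1, 1])
preds = np.array([
    [1, 1, 0, 0, 0],   # classifier 1: accuracy 0.4
    [0, 1, 1, 0, 0],   # classifier 2: accuracy 0.4
    [1, 0, 1, 0, 0],   # classifier 3: accuracy 0.4
])

majority = (preds.sum(axis=0) >= 2).astype(int)
print("individual accuracies:", (preds == truth).mean(axis=1))   # [0.4 0.4 0.4]
print("ensemble accuracy:    ", (majority == truth).mean())       # 0.6
```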
Liu, Huihong; Niinomi, Mitsuo; Nakai, Masaaki; Cho, Ken; Narita, Kengo; Şen, Mustafa; Shiku, Hitoshi; Matsue, Tomokazu
2015-01-01
In this study, various amounts of oxygen were added to Ti-10Cr (mass%) alloys. It is expected that a large changeable Young's modulus, caused by a deformation-induced ω-phase transformation, can be achieved in Ti-10Cr-O alloys by the appropriate oxygen addition. This "changeable Young's modulus" property can satisfy the otherwise conflicting requirements for use in spinal implant rods: high and low moduli are preferred by surgeons and patients, respectively. The influence of oxygen on the microstructures and mechanical properties of the alloys was examined, as well as the bending springback and cytocompatibility of the optimized alloy. Among the Ti-10Cr-O alloys, Ti-10Cr-0.2O (mass%) alloy shows the largest changeable Young's modulus following cold rolling for a constant reduction ratio. This is the result of two competing factors: increased apparent β-lattice stability and decreased amounts of athermal ω phase, both of which are caused by oxygen addition. The most favorable balance of these factors for the deformation-induced ω-phase transformation occurred at an oxygen concentration of 0.2mass%. Ti-10Cr-0.2O alloy not only exhibits high tensile strength and acceptable elongation, but also possesses a good combination of high bending strength, acceptable bending springback and great cytocompatibility. Therefore, Ti-10Cr-0.2O alloy is a potential material for use in spinal fixture devices. Copyright © 2014 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Prasad, Moyye Devi; Nagarajan, D.
2018-05-01
An axisymmetric dome of 70 mm in diameter and 35 mm in depth was formed by the ISF process using varying proportions (25, 50 and 75%) of spiral (S) and helical (H) tool path combinations as a single tool path strategy, on 2 mm thick commercially pure aluminium sheets. A maximum forming depth of ~30 mm was observed on all the components, irrespective of the tool path combination employed. None of the components fractured for any of the tool path combinations used. The springback was also the same and uniform for all the tool path combinations, except for 75S25H, which showed slightly larger springback. The wall thickness reduced drastically up to a certain forming depth and then increased with further forming depth for all the tool path combinations. The maximum thinning occurred near the maximum wall angle region for all the components. The wall thickness improved significantly (by around 10-15%) near the maximum wall angle region for the 25S75H combination compared with the complete spiral and the other tool path strategies. It is speculated that this improvement in wall thickness may be mainly due to the combined contribution of the simple shear and uniaxial dilatation deformation modes of the helical tool path strategy in the 25S75H combination. This increase in wall thickness will greatly help in reducing plastic instability and postponing early failure of the component.
Niobium superconducting rf cavity fabrication by electrohydraulic forming
NASA Astrophysics Data System (ADS)
Cantergiani, E.; Atieh, S.; Léaux, F.; Perez Fontenla, A. T.; Prunet, S.; Dufay-Chanat, L.; Koettig, T.; Bertinelli, F.; Capatina, O.; Favre, G.; Gerigk, F.; Jeanson, A. C.; Fuzeau, J.; Avrillaud, G.; Alleman, D.; Bonafe, J.; Marty, P.
2016-11-01
Superconducting rf (SRF) cavities are traditionally fabricated from superconducting material sheets or made of copper coated with superconducting material, followed by trim machining and electron-beam welding. An alternative technique to traditional shaping methods, such as deep-drawing and spinning, is electrohydraulic forming (EHF). In EHF, half-cells are obtained through ultrahigh-speed deformation of blank sheets, using shockwaves induced in water by a pulsed electrical discharge. With respect to traditional methods, such a highly dynamic process can yield interesting results in terms of effectiveness, repeatability, final shape precision, higher formability, and reduced springback. In this paper, the first results of EHF on high purity niobium are presented and discussed. The simulations performed in order to master the multiphysics phenomena of EHF and to adjust its process parameters are presented. The microstructures of niobium half-cells produced by EHF and by spinning have been compared in terms of damage created in the material during the forming operation. The damage was assessed through hardness measurements, residual resistivity ratio (RRR) measurements, and electron backscattered diffraction analyses. It was found that EHF does not worsen the damage of the material during forming and instead, some areas of the half-cell have shown lower damage compared to spinning. Moreover, EHF is particularly advantageous to reduce the forming time, preserve roughness, and to meet the final required shape accuracy.
Dynamic, High-Temperature, Flexible Seal
NASA Technical Reports Server (NTRS)
Steinetz, Bruce M.; Sirocky, Paul J.
1989-01-01
New seal consists of multiple plies of braided ceramic sleeves filled with small ceramic balls. Innermost braided sleeve supported by high-temperature-wire-mesh sleeve that provides both springback and preload capabilities. Ceramic balls reduce effect of relatively high porosity of braided ceramic sleeves by acting as labyrinth flow path for gases and thereby greatly increasing pressure gradient seal can sustain. Dynamic, high-temperature, flexible seal employed in hypersonic engines, two-dimensional convergent/divergent and vectorized-thrust exhaust nozzles, reentry vehicle airframes, rocket-motor casings, high-temperature furnaces, and any application requiring non-asbestos high-temperature gaskets.
Systematic bias of correlation coefficient may explain negative accuracy of genomic prediction.
Zhou, Yao; Vales, M Isabel; Wang, Aoxue; Zhang, Zhiwu
2017-09-01
Accuracy of genomic prediction is commonly calculated as the Pearson correlation coefficient between the predicted and observed phenotypes in the inference population by using cross-validation analysis. More frequently than expected, significant negative accuracies of genomic prediction have been reported in genomic selection studies. These negative values are surprising, given that the minimum value for prediction accuracy should hover around zero when randomly permuted data sets are analyzed. We reviewed the two common approaches for calculating the Pearson correlation and hypothesized that these negative accuracy values reflect potential bias owing to artifacts caused by the mathematical formulas used to calculate prediction accuracy. The first approach, Instant accuracy, calculates correlations for each fold and reports prediction accuracy as the mean of the correlations across folds. The other approach, Hold accuracy, predicts all phenotypes across all folds and calculates the correlation between the observed and predicted phenotypes at the end of the cross-validation process. Using simulated and real data, we demonstrated that our hypothesis is true. Both approaches are biased downward under certain conditions. The biases become larger when more folds are employed and when the expected accuracy is low. The bias of Instant accuracy can be corrected using a modified formula. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
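A sketch contrasting the two cross-validation accuracy formulas described above: "Instant" averages per-fold correlations, while "Hold" pools all out-of-fold predictions before computing a single correlation. The ridge-style predictor and the synthetic data are placeholders used only to make the comparison runnable, not the models or data of the paper.

```python
# Instant vs Hold accuracy in k-fold cross-validation (illustrative sketch).
import numpy as np

def kfold_indices(n, k, seed=0):
    idx = np.random.default_rng(seed).permutation(n)
    return np.array_split(idx, k)

def instant_and_hold_accuracy(X, y, k=10, ridge=1.0):
    folds = kfold_indices(len(y), k)
    per_fold_r, y_hat = [], np.empty_like(y)
    for test in folds:
        train = np.setdiff1d(np.arange(len(y)), test)
        # simple ridge regression as a stand-in for a genomic prediction model
        A = X[train].T @ X[train] + ridge * np.eye(X.shape[1])
        beta = np.linalg.solve(A, X[train].T @ y[train])
        y_hat[test] = X[test] @ beta
        per_fold_r.append(np.corrcoef(y[test], y_hat[test])[0, 1])
    instant = float(np.mean(per_fold_r))            # mean of per-fold correlations
    hold = float(np.corrcoef(y, y_hat)[0, 1])       # one pooled correlation
    return instant, hold

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 50))
y = X[:, :5].sum(axis=1) + rng.normal(scale=3.0, size=200)   # low-accuracy trait
print(instant_and_hold_accuracy(X, y))
```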
Assessing the accuracy of predictive models for numerical data: Not r nor r2, why not? Then what?
2017-01-01
Assessing the accuracy of predictive models is critical because predictive models have been increasingly used across various disciplines and predictive accuracy determines the quality of the resulting predictions. The Pearson product-moment correlation coefficient (r) and the coefficient of determination (r2) are among the most widely used measures for assessing predictive models for numerical data, although they are argued to be biased, insufficient and misleading. In this study, geometrical graphs were used to illustrate what is actually used in the calculation of r and r2, and simulations were used to demonstrate the behaviour of r and r2 and to compare three accuracy measures under various scenarios. Relevant confusions about r and r2 have been clarified. The calculation of r and r2 is not based on the differences between the predicted and observed values. The existing error measures suffer various limitations and are unable to indicate the accuracy. Variance explained by predictive models based on cross-validation (VEcv) is free of these limitations and is a reliable accuracy measure. Legates and McCabe’s efficiency (E1) is also an alternative accuracy measure. The r and r2 do not measure the accuracy and are incorrect accuracy measures. The existing error measures suffer limitations. VEcv and E1 are recommended for assessing the accuracy. The application of these accuracy measures would encourage the development of accuracy-improved predictive models to generate predictions for evidence-informed decision-making. PMID:28837692
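For context, VEcv is usually defined as the variance explained by cross-validated predictions; the expression below is the form commonly reported in the accuracy-assessment literature and is included as a hedged reminder rather than a quotation from the paper.

```latex
% VEcv: variance explained by predictive models based on cross-validation,
% with y_i the observed values, \hat{y}_i the cross-validated predictions and
% \bar{y} the observed mean (usual form; expressed as a percentage).
\mathrm{VEcv} \;=\; \left( 1 - \frac{\sum_{i=1}^{n} (y_i - \hat{y}_i)^2}
                                    {\sum_{i=1}^{n} (y_i - \bar{y})^2} \right) \times 100\%
```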
Evaluation of approaches for estimating the accuracy of genomic prediction in plant breeding
Ould Estaghvirou, Sidi Boubacar; Ogutu, Joseph O; Schulz-Streeck, Torben; Knaak, Carsten; Ouzunova, Milena; Gordillo, Andres; Piepho, Hans-Peter
2013-12-06
Background In genomic prediction, an important measure of accuracy is the correlation between the predicted and the true breeding values. Direct computation of this quantity for real datasets is not possible, because the true breeding value is unknown. Instead, the correlation between the predicted breeding values and the observed phenotypic values, called predictive ability, is often computed. In order to indirectly estimate predictive accuracy, this latter correlation is usually divided by an estimate of the square root of heritability. In this study we use simulation to evaluate estimates of predictive accuracy for seven methods, four (1 to 4) of which use an estimate of heritability to divide predictive ability computed by cross-validation. Between them the seven methods cover balanced and unbalanced datasets as well as correlated and uncorrelated genotypes. We propose one new indirect method (4) and two direct methods (5 and 6) for estimating predictive accuracy and compare their performances and those of four other existing approaches (three indirect (1 to 3) and one direct (7)) with simulated true predictive accuracy as the benchmark and with each other. Results The size of the estimated genetic variance and hence heritability exerted the strongest influence on the variation in the estimated predictive accuracy. Increasing the number of genotypes considerably increases the time required to compute predictive accuracy by all the seven methods, most notably for the five methods that require cross-validation (Methods 1, 2, 3, 4 and 6). A new method that we propose (Method 5) and an existing method (Method 7) used in animal breeding programs were the fastest and gave the least biased, most precise and stable estimates of predictive accuracy. Of the methods that use cross-validation Methods 4 and 6 were often the best. Conclusions The estimated genetic variance and the number of genotypes had the greatest influence on predictive accuracy. Methods 5 and 7 were the fastest and produced the least biased, the most precise, robust and stable estimates of predictive accuracy. These properties argue for routinely using Methods 5 and 7 to assess predictive accuracy in genomic selection studies. PMID:24314298
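The indirect estimate referred to in the abstract above is usually written as the ratio of predictive ability to the square root of heritability; the expression below is that standard relationship, given as an illustrative reminder rather than a formula quoted from the paper.

```latex
% Indirect estimate of predictive accuracy from predictive ability:
% r(\hat{g}, y) is the correlation between predicted breeding values and
% observed phenotypes (predictive ability) and h^2 is the trait heritability.
\hat{r}_{\text{accuracy}} \;=\; \frac{r(\hat{g},\, y)}{\sqrt{h^{2}}}
```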
Evaluating the accuracy of SHAPE-directed RNA secondary structure predictions
Sükösd, Zsuzsanna; Swenson, M. Shel; Kjems, Jørgen; Heitsch, Christine E.
2013-01-01
Recent advances in RNA structure determination include using data from high-throughput probing experiments to improve thermodynamic prediction accuracy. We evaluate the extent and nature of improvements in data-directed predictions for a diverse set of 16S/18S ribosomal sequences using a stochastic model of experimental SHAPE data. The average accuracy for 1000 data-directed predictions always improves over the original minimum free energy (MFE) structure. However, the amount of improvement varies with the sequence, exhibiting a correlation with MFE accuracy. Further analysis of this correlation shows that accurate MFE base pairs are typically preserved in a data-directed prediction, whereas inaccurate ones are not. Thus, the positive predictive value of common base pairs is consistently higher than the directed prediction accuracy. Finally, we confirm sequence dependencies in the directability of thermodynamic predictions and investigate the potential for greater accuracy improvements in the worst performing test sequence. PMID:23325843
NASA Astrophysics Data System (ADS)
Zwickl, Titus; Carleer, Bart; Kubli, Waldemar
2005-08-01
In the past decade, sheet metal forming simulation has become a well-established tool to predict the formability of parts. In the automotive industry, this has enabled significant reductions in the cost and time of vehicle design and development, and has helped to improve the quality and performance of vehicle parts. However, production stoppages for troubleshooting and unplanned die maintenance, as well as production quality fluctuations, continue to plague manufacturing cost and time. The focus has therefore shifted in recent times beyond mere feasibility to the robustness of the product and process being engineered. Ensuring robustness is the next big challenge for virtual tryout / simulation technology. We introduce new methods, based on systematic stochastic simulations, to visualize the behavior of the part during the whole forming process, in simulation as well as in production. Sensitivity analysis explains the response of the part to changes in influencing parameters. Virtual tryout allows quick exploration of changed designs and conditions. Robust design and manufacturing guarantees quality and process capability for the production process. While conventional simulations helped to reduce development time and cost by ensuring feasible processes, robustness engineering tools have the potential for far greater cost and time savings. Through examples we illustrate how expected and unexpected behavior of deep drawing parts may be tracked down, identified and assigned to the influential parameters. With this knowledge, defects can be eliminated or springback can be compensated, for example; the response of the part to uncontrollable noise can be predicted and minimized. The newly introduced methods enable more reliable and predictable stamping processes in general.
Saving Material with Systematic Process Designs
NASA Astrophysics Data System (ADS)
Kerausch, M.
2011-08-01
Global competition is forcing the stamping industry to further increase quality, to shorten time-to-market and to reduce total cost. Continuous balancing between these classical time-cost-quality targets throughout the product development cycle is required to ensure future economic success. In today's industrial practice, die layout standards are typically assumed to implicitly ensure the balancing of company-specific time-cost-quality targets. Although die layout standards are a very successful approach, they have two methodical disadvantages. First, the capabilities for tool design have to be continuously adapted to technological innovations, e.g. to take advantage of the full forming capability of new materials. Secondly, the great variety of die design aspects has to be reduced to generic rules or guidelines, e.g. for binder shape, draw-in conditions or the use of drawbeads. Therefore, it is important not to overlook cost or quality opportunities when applying die design standards. This paper describes a systematic workflow with a focus on minimizing material consumption. The starting point of the investigation is a full process plan for a typical structural part. All requirements defined according to a predefined set of die design standards with industrial relevance are fulfilled. In a first step, the binder and addendum geometry is systematically checked for material-saving potential. In a second step, the blank shape and draw-in are adjusted to meet thinning, wrinkling and springback targets for a minimum blank solution. Finally, the identified die layout is validated with respect to production robustness against splits, wrinkles and springback. For all three steps, the applied methodology is based on finite element simulation combined with a stochastic variation of input variables. With the proposed workflow, a well-balanced (time-cost-quality) production process assuring minimal material consumption can be achieved.
Analysis of spatial distribution of land cover maps accuracy
NASA Astrophysics Data System (ADS)
Khatami, R.; Mountrakis, G.; Stehman, S. V.
2017-12-01
Land cover maps have become one of the most important products of remote sensing science. However, classification errors will exist in any classified map and affect the reliability of subsequent map usage. Moreover, classification accuracy often varies over different regions of a classified map. These variations in accuracy will affect the reliability of subsequent analyses of different regions based on the classified maps. The traditional approach to map accuracy assessment based on an error matrix does not capture the spatial variation in classification accuracy. Here, per-pixel accuracy prediction methods are proposed that interpolate accuracy values from a test sample to produce wall-to-wall accuracy maps. Different accuracy prediction methods were developed based on four factors: predictive domain (spatial versus spectral), interpolation function (constant, linear, Gaussian, and logistic), incorporation of class information (interpolating each class separately versus grouping them together), and sample size. This research is the first to use the spectral domain as an explanatory feature space for interpolating classification accuracy. Performance of the prediction methods was evaluated using 26 test blocks, with 10 km × 10 km dimensions, dispersed throughout the United States, and was quantified by the area under the curve (AUC) of the receiver operating characteristic. Relative to existing accuracy prediction methods, our proposed methods resulted in improvements of AUC of 0.15 or greater. Evaluation of the four factors comprising the accuracy prediction methods demonstrated that: i) interpolations should be done separately for each class instead of grouping all classes together; ii) if an all-classes approach is used, the spectral domain will result in substantially greater AUC than the spatial domain; iii) for the smaller sample size and per-class predictions, the spectral and spatial domains yielded similar AUC; iv) for the larger sample size (i.e., very dense spatial sample) and per-class predictions, the spatial domain yielded larger AUC; v) increasing the sample size improved accuracy predictions, with a greater benefit accruing to the spatial domain; and vi) the function used for interpolation had the smallest effect on AUC.
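The following sketch illustrates the general idea of spatial accuracy interpolation under simplified assumptions: sampled pixels carry a 0/1 agreement indicator, a Gaussian-kernel weighted mean predicts per-pixel accuracy, and AUC scores the prediction. The coordinates, bandwidth, and agreement pattern are simulated placeholders, not the paper's data or its exact interpolation functions.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)

# Hypothetical test sample: pixel coordinates and 0/1 agreement with reference.
n_sample = 300
xy_sample = rng.uniform(0, 10_000, size=(n_sample, 2))        # metres
p_correct = 0.6 + 0.3 * (xy_sample[:, 0] > 5_000)             # accuracy varies spatially
correct_sample = (rng.uniform(size=n_sample) < p_correct).astype(int)

def gaussian_interpolate(xy_train, y_train, xy_query, bandwidth=1_000.0):
    """Predict per-pixel probability of correct classification as a
    Gaussian-kernel weighted mean of sampled agreement values."""
    d2 = ((xy_query[:, None, :] - xy_train[None, :, :]) ** 2).sum(axis=2)
    w = np.exp(-0.5 * d2 / bandwidth**2)
    return (w * y_train).sum(axis=1) / w.sum(axis=1)

# Evaluate on held-out sample pixels (here: a random split of the sample).
idx = rng.permutation(n_sample)
train, test = idx[:200], idx[200:]
pred = gaussian_interpolate(xy_sample[train], correct_sample[train], xy_sample[test])
print("AUC of spatial accuracy prediction:", roc_auc_score(correct_sample[test], pred))
```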
Genomic Prediction of Gene Bank Wheat Landraces.
Crossa, José; Jarquín, Diego; Franco, Jorge; Pérez-Rodríguez, Paulino; Burgueño, Juan; Saint-Pierre, Carolina; Vikram, Prashant; Sansaloni, Carolina; Petroli, Cesar; Akdemir, Deniz; Sneller, Clay; Reynolds, Matthew; Tattaris, Maria; Payne, Thomas; Guzman, Carlos; Peña, Roberto J; Wenzl, Peter; Singh, Sukhwinder
2016-07-07
This study examines genomic prediction within 8416 Mexican landrace accessions and 2403 Iranian landrace accessions stored in gene banks. The Mexican and Iranian collections were evaluated in separate field trials, including an optimum environment for several traits, and in two separate environments (drought, D and heat, H) for the highly heritable traits, days to heading (DTH), and days to maturity (DTM). Analyses accounting and not accounting for population structure were performed. Genomic prediction models include genotype × environment interaction (G × E). Two alternative prediction strategies were studied: (1) random cross-validation of the data in 20% training (TRN) and 80% testing (TST) (TRN20-TST80) sets, and (2) two types of core sets, "diversity" and "prediction", including 10% and 20%, respectively, of the total collections. Accounting for population structure decreased prediction accuracy by 15-20% as compared to prediction accuracy obtained when not accounting for population structure. Accounting for population structure gave prediction accuracies for traits evaluated in one environment for TRN20-TST80 that ranged from 0.407 to 0.677 for Mexican landraces, and from 0.166 to 0.662 for Iranian landraces. Prediction accuracy of the 20% diversity core set was similar to accuracies obtained for TRN20-TST80, ranging from 0.412 to 0.654 for Mexican landraces, and from 0.182 to 0.647 for Iranian landraces. The predictive core set gave similar prediction accuracy as the diversity core set for Mexican collections, but slightly lower for Iranian collections. Prediction accuracy when incorporating G × E for DTH and DTM for Mexican landraces for TRN20-TST80 was around 0.60, which is greater than without the G × E term. For Iranian landraces, accuracies were 0.55 for the G × E model with TRN20-TST80. Results show promising prediction accuracies for potential use in germplasm enhancement and rapid introgression of exotic germplasm into elite materials. Copyright © 2016 Crossa et al.
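A minimal sketch of the TRN20-TST80 idea is shown below, using ridge regression as a stand-in for the genomic prediction models (the study's G×E and population-structure terms are omitted) and simulated genotypes and phenotypes; prediction accuracy is taken as the Pearson correlation between predicted and observed values.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(2)

# Simulated stand-in for the real data: SNP matrix and a phenotype (e.g. DTH).
n_lines, n_snps = 1_000, 2_000
X = rng.binomial(2, 0.3, size=(n_lines, n_snps)).astype(float)
beta = rng.normal(0, 0.05, n_snps)
y = X @ beta + rng.normal(0, 1.0, n_lines)

# TRN20-TST80: train on a random 20% of the lines, predict the other 80%.
idx = rng.permutation(n_lines)
trn, tst = idx[: n_lines // 5], idx[n_lines // 5:]

model = Ridge(alpha=n_snps)      # ridge penalty plays the role of a GBLUP-type prior
model.fit(X[trn], y[trn])
pred = model.predict(X[tst])

accuracy = np.corrcoef(pred, y[tst])[0, 1]   # Pearson correlation, as in the paper
print(f"prediction accuracy: {accuracy:.2f}")
```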
Accuracy of Predicted Genomic Breeding Values in Purebred and Crossbred Pigs.
Hidalgo, André M; Bastiaansen, John W M; Lopes, Marcos S; Harlizius, Barbara; Groenen, Martien A M; de Koning, Dirk-Jan
2015-05-26
Genomic selection has been widely implemented in dairy cattle breeding when the aim is to improve performance of purebred animals. In pigs, however, the final product is a crossbred animal. This may affect the efficiency of methods that are currently implemented for dairy cattle. Therefore, the objective of this study was to determine the accuracy of predicted breeding values in crossbred pigs using purebred genomic and phenotypic data. A second objective was to compare the predictive ability of SNPs when training is done in either single or multiple populations for four traits: age at first insemination (AFI); total number of piglets born (TNB); litter birth weight (LBW); and litter variation (LVR). We performed marker-based and pedigree-based predictions. Within-population predictions for the four traits ranged from 0.21 to 0.72. Multi-population prediction yielded accuracies ranging from 0.18 to 0.67. Predictions across purebred populations as well as predicting genetic merit of crossbreds from their purebred parental lines for AFI performed poorly (not significantly different from zero). In contrast, accuracies of across-population predictions and accuracies of purebred to crossbred predictions for LBW and LVR ranged from 0.08 to 0.31 and 0.11 to 0.31, respectively. Accuracy for TNB was zero for across-population prediction, whereas for purebred to crossbred prediction it ranged from 0.08 to 0.22. In general, marker-based outperformed pedigree-based prediction across populations and traits. However, in some cases pedigree-based prediction performed similarly or outperformed marker-based prediction. There was predictive ability when purebred populations were used to predict crossbred genetic merit using an additive model in the populations studied. AFI was the only exception, indicating that predictive ability depends largely on the genetic correlation between PB and CB performance, which was 0.31 for AFI. Multi-population prediction was no better than within-population prediction for the purebred validation set. Accuracy of prediction was very trait-dependent. Copyright © 2015 Hidalgo et al.
The accuracy of Genomic Selection in Norwegian red cattle assessed by cross-validation.
Luan, Tu; Woolliams, John A; Lien, Sigbjørn; Kent, Matthew; Svendsen, Morten; Meuwissen, Theo H E
2009-11-01
Genomic Selection (GS) is a newly developed tool for the estimation of breeding values for quantitative traits through the use of dense markers covering the whole genome. For a successful application of GS, accuracy of the prediction of genomewide breeding value (GW-EBV) is a key issue to consider. Here we investigated the accuracy and possible bias of GW-EBV prediction, using real bovine SNP genotyping (18,991 SNPs) and phenotypic data of 500 Norwegian Red bulls. The study was performed on milk yield, fat yield, protein yield, first lactation mastitis traits, and calving ease. Three methods, best linear unbiased prediction (G-BLUP), Bayesian statistics (BayesB), and a mixture model approach (MIXTURE), were used to estimate marker effects, and their accuracy and bias were estimated by using cross-validation. The accuracies of the GW-EBV prediction were found to vary widely between 0.12 and 0.62. G-BLUP gave overall the highest accuracy. We observed a strong relationship between the accuracy of the prediction and the heritability of the trait. GW-EBV prediction for production traits with high heritability achieved higher accuracy and also lower bias than for health traits with low heritability. To achieve a similar accuracy for the health traits, more records will probably be needed.
EVALUATING RISK-PREDICTION MODELS USING DATA FROM ELECTRONIC HEALTH RECORDS.
Wang, L E; Shaw, Pamela A; Mathelier, Hansie M; Kimmel, Stephen E; French, Benjamin
2016-03-01
The availability of data from electronic health records facilitates the development and evaluation of risk-prediction models, but estimation of prediction accuracy could be limited by outcome misclassification, which can arise if events are not captured. We evaluate the robustness of prediction accuracy summaries, obtained from receiver operating characteristic curves and risk-reclassification methods, if events are not captured (i.e., "false negatives"). We derive estimators for sensitivity and specificity if misclassification is independent of marker values. In simulation studies, we quantify the potential for bias in prediction accuracy summaries if misclassification depends on marker values. We compare the accuracy of alternative prognostic models for 30-day all-cause hospital readmission among 4548 patients discharged from the University of Pennsylvania Health System with a primary diagnosis of heart failure. Simulation studies indicate that if misclassification depends on marker values, then the estimated accuracy improvement is also biased, but the direction of the bias depends on the direction of the association between markers and the probability of misclassification. In our application, 29% of the 1143 readmitted patients were readmitted to a hospital elsewhere in Pennsylvania, which reduced prediction accuracy. Outcome misclassification can result in erroneous conclusions regarding the accuracy of risk-prediction models.
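The simulation below sketches the bias mechanism discussed above under simplified assumptions: true events are generated from a single marker, events are then relabelled as non-events ("false negatives") either with a constant probability or with a probability that depends on the marker, and the apparent AUC is compared with the true AUC. The model and probabilities are illustrative, not the estimators derived in the paper.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
n = 20_000

marker = rng.normal(size=n)                        # risk score / marker value
p_event = 1 / (1 + np.exp(-(-2.0 + 1.0 * marker)))
event = rng.binomial(1, p_event)                   # true outcome

def observed_auc(p_miss):
    """AUC after randomly relabelling true events as non-events
    with per-subject probability p_miss (false negatives)."""
    missed = rng.binomial(1, p_miss)
    observed = event * (1 - missed)
    return roc_auc_score(observed, marker)

print("true AUC:                    ", round(roc_auc_score(event, marker), 3))
print("misclassification ~ constant:", round(observed_auc(np.full(n, 0.3)), 3))
# Misclassification that depends on the marker (high-risk events missed more often).
p_dep = 1 / (1 + np.exp(-(-1.5 + 1.0 * marker)))
print("misclassification ~ marker:  ", round(observed_auc(p_dep), 3))
```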
Improved method for predicting protein fold patterns with ensemble classifiers.
Chen, W; Liu, X; Huang, Y; Jiang, Y; Zou, Q; Lin, C
2012-01-27
Protein folding is recognized as a critical problem in the field of biophysics in the 21st century. Predicting protein-folding patterns is challenging due to the complex structure of proteins. In an attempt to solve this problem, we employed ensemble classifiers to improve prediction accuracy. In our experiments, 188-dimensional features were extracted based on the composition and physical-chemical property of proteins and 20-dimensional features were selected using a coupled position-specific scoring matrix. Compared with traditional prediction methods, these methods were superior in terms of prediction accuracy. The 188-dimensional feature-based method achieved 71.2% accuracy in five cross-validations. The accuracy rose to 77% when we used a 20-dimensional feature vector. These methods were used on recent data, with 54.2% accuracy. Source codes and dataset, together with web server and software tools for prediction, are available at: http://datamining.xmu.edu.cn/main/~cwc/ProteinPredict.html.
Improved Short-Term Clock Prediction Method for Real-Time Positioning.
Lv, Yifei; Dai, Zhiqiang; Zhao, Qile; Yang, Sheng; Zhou, Jinning; Liu, Jingnan
2017-06-06
The application of real-time precise point positioning (PPP) requires real-time precise orbit and clock products that should be predicted within a short time to compensate for the communication delay or data gap. Unlike orbit correction, clock correction is difficult to model and predict. The widely used linear model hardly fits long periodic trends with a small data set and exhibits significant accuracy degradation in real-time prediction when a large data set is used. This study proposes a new prediction model for maintaining short-term satellite clocks to meet the high-precision requirements of real-time clocks and provide clock extrapolation without interrupting the real-time data stream. Fast Fourier transform (FFT) is used to analyze the linear prediction residuals of real-time clocks. The periodic terms obtained through FFT are adopted in the sliding window prediction to achieve a significant improvement in short-term prediction accuracy. This study also analyzes and compares the accuracy of short-term forecasts (less than 3 h) by using different length observations. Experimental results obtained from International GNSS Service (IGS) final products and our own real-time clocks show that the 3-h prediction accuracy is better than 0.85 ns. The new model can replace IGS ultra-rapid products in the application of real-time PPP. It is also found that there is a positive correlation between the prediction accuracy and the short-term stability of on-board clocks. Compared with the accuracy of the traditional linear model, the accuracy of the static PPP using the new model of the 2-h prediction clock in N, E, and U directions is improved by about 50%. Furthermore, the static PPP accuracy of 2-h clock products is better than 0.1 m. When an interruption occurs in the real-time model, the accuracy of the kinematic PPP solution using 1-h clock prediction product is better than 0.2 m, without significant accuracy degradation. This model is of practical significance because it solves the problems of interruption and delay in data broadcast in real-time clock estimation and can meet the requirements of real-time PPP.
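A simplified sketch of the general approach (linear trend removal, FFT of the residuals, and extrapolation of the dominant periodic term) on a simulated clock series is given below; the sampling interval, period, and noise level are assumptions, not IGS data or the paper's exact sliding-window procedure.

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated clock offsets (ns): linear trend + one periodic term + noise.
dt = 30.0                                   # sampling interval [s]
t = np.arange(0, 6 * 3600, dt)              # 6 h of observations
clock = 5.0 + 2e-4 * t + 0.8 * np.sin(2 * np.pi * t / 7200.0) + rng.normal(0, 0.05, t.size)

# 1) Fit and remove the linear trend.
a, b = np.polyfit(t, clock, 1)
resid = clock - (a * t + b)

# 2) FFT of the residuals; keep the dominant periodic term.
spec = np.fft.rfft(resid)
freq = np.fft.rfftfreq(resid.size, d=dt)
k = np.argmax(np.abs(spec[1:])) + 1         # skip the DC bin
amp = 2 * np.abs(spec[k]) / resid.size
phase = np.angle(spec[k])

# 3) Extrapolate trend + periodic term over the next hour.
t_pred = np.arange(t[-1] + dt, t[-1] + 3600, dt)
pred = a * t_pred + b + amp * np.cos(2 * np.pi * freq[k] * t_pred + phase)

truth = 5.0 + 2e-4 * t_pred + 0.8 * np.sin(2 * np.pi * t_pred / 7200.0)
print("RMS of 1-h prediction error [ns]:", np.sqrt(np.mean((pred - truth) ** 2)))
```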
Genomic-Enabled Prediction in Maize Using Kernel Models with Genotype × Environment Interaction
Bandeira e Sousa, Massaine; Cuevas, Jaime; de Oliveira Couto, Evellyn Giselly; Pérez-Rodríguez, Paulino; Jarquín, Diego; Fritsche-Neto, Roberto; Burgueño, Juan; Crossa, Jose
2017-01-01
Multi-environment trials are routinely conducted in plant breeding to select candidates for the next selection cycle. In this study, we compare the prediction accuracy of four developed genomic-enabled prediction models: (1) single-environment, main genotypic effect model (SM); (2) multi-environment, main genotypic effects model (MM); (3) multi-environment, single variance G×E deviation model (MDs); and (4) multi-environment, environment-specific variance G×E deviation model (MDe). Each of these four models were fitted using two kernel methods: a linear kernel Genomic Best Linear Unbiased Predictor, GBLUP (GB), and a nonlinear kernel Gaussian kernel (GK). The eight model-method combinations were applied to two extensive Brazilian maize data sets (HEL and USP data sets), having different numbers of maize hybrids evaluated in different environments for grain yield (GY), plant height (PH), and ear height (EH). Results show that the MDe and the MDs models fitted with the Gaussian kernel (MDe-GK, and MDs-GK) had the highest prediction accuracy. For GY in the HEL data set, the increase in prediction accuracy of SM-GK over SM-GB ranged from 9 to 32%. For the MM, MDs, and MDe models, the increase in prediction accuracy of GK over GB ranged from 9 to 49%. For GY in the USP data set, the increase in prediction accuracy of SM-GK over SM-GB ranged from 0 to 7%. For the MM, MDs, and MDe models, the increase in prediction accuracy of GK over GB ranged from 34 to 70%. For traits PH and EH, gains in prediction accuracy of models with GK compared to models with GB were smaller than those achieved in GY. Also, these gains in prediction accuracy decreased when a more difficult prediction problem was studied. PMID:28455415
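The contrast between a linear (GBLUP-like) kernel and a Gaussian kernel can be sketched with kernel ridge regression on simulated marker data, as below; the penalty, bandwidth, and simulated interaction effect are illustrative choices, and the study's G×E structure is not modelled.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)

# Simulated hybrids: marker matrix with an additive plus interaction signal.
n, p = 600, 1_000
X = rng.binomial(2, 0.4, size=(n, p)).astype(float)
y = X[:, :50] @ rng.normal(0, 0.2, 50) + 0.5 * X[:, 0] * X[:, 1] + rng.normal(0, 1.0, n)

X_trn, X_tst, y_trn, y_tst = train_test_split(X, y, test_size=0.3, random_state=0)

for label, model in [
    ("GB (linear kernel)  ", KernelRidge(kernel="linear", alpha=1.0)),
    ("GK (Gaussian kernel)", KernelRidge(kernel="rbf", gamma=1.0 / p, alpha=1.0)),
]:
    model.fit(X_trn, y_trn)
    acc = np.corrcoef(model.predict(X_tst), y_tst)[0, 1]
    print(f"{label} prediction accuracy: {acc:.2f}")
```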
NASA Astrophysics Data System (ADS)
Tackie, Alan Derek Nii
Computer modeling of Oriented Strand Board (OSB) properties has gained widespread attention, with numerous models created to better understand OSB behavior. Recent models allow researchers to observe multiple variables, such as changes in moisture content, density and resin effects on panel performance. Thickness-swell variation influences panel durability and often has adverse effects on a structural panel's bending stiffness. The prediction of out-of-plane swell under changing moisture conditions was therefore the motivation for developing a model in this research. The finite element model accounted for both vertical and horizontal density variations, i.e., the three-dimensional (3D) density variation of the board. The density variation, resulting from manufacturing processes, affects the uniformity of thickness-swell in OSB and is often exacerbated by continuous sorption of moisture that leads to potentially damaging internal stresses in the panel. The overall thickness-swell (the cumulative swell from the non-uniform horizontal density profile, panel swell from free water, and spring-back from panel compression) was addressed through the finite element model in this research. The goals pursued in this study were, first and foremost, the development of a robust and comprehensive finite element model which integrated several component studies to investigate the effects of moisture variation on the out-of-plane thickness-swell of OSB panels, and second, the extension of the developed model to predict panel stiffness. It is hoped that this paper will encourage researchers to adopt the 3D density distribution approach as a viable approach to analyzing the physical and mechanical properties of OSB.
Outcome Prediction in Mathematical Models of Immune Response to Infection.
Mai, Manuel; Wang, Kun; Huber, Greg; Kirby, Michael; Shattuck, Mark D; O'Hern, Corey S
2015-01-01
Clinicians need to predict patient outcomes with high accuracy as early as possible after disease inception. In this manuscript, we show that patient-to-patient variability sets a fundamental limit on outcome prediction accuracy for a general class of mathematical models for the immune response to infection. However, accuracy can be increased at the expense of delayed prognosis. We investigate several systems of ordinary differential equations (ODEs) that model the host immune response to a pathogen load. Advantages of systems of ODEs for investigating the immune response to infection include the ability to collect data on large numbers of 'virtual patients', each with a given set of model parameters, and obtain many time points during the course of the infection. We implement patient-to-patient variability v in the ODE models by randomly selecting the model parameters from distributions with coefficients of variation v that are centered on physiological values. We use logistic regression with one-versus-all classification to predict the discrete steady-state outcomes of the system. We find that the prediction algorithm achieves near 100% accuracy for v = 0, and the accuracy decreases with increasing v for all ODE models studied. The fact that multiple steady-state outcomes can be obtained for a given initial condition, i.e. the basins of attraction overlap in the space of initial conditions, limits the prediction accuracy for v > 0. Increasing the elapsed time of the variables used to train and test the classifier, increases the prediction accuracy, while adding explicit external noise to the ODE models decreases the prediction accuracy. Our results quantify the competition between early prognosis and high prediction accuracy that is frequently encountered by clinicians.
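A toy version of this pipeline is sketched below: a single bistable ODE stands in for the immune-response models, patient-to-patient variability is introduced by scaling the parameters with coefficient of variation v, and logistic regression predicts the discrete late-time outcome from early observations. The model, thresholds, and parameter values are illustrative assumptions, not the ODE systems studied in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(6)

def pathogen(t, P, r, K, k):
    """Toy bistable model: logistic pathogen growth vs. saturating immune kill."""
    return r * P * (1 - P / K) - k * P / (1 + P)

def simulate_patient(v, t_eval):
    """Draw patient-specific parameters with coefficient of variation v."""
    r, K, k = np.array([1.0, 10.0, 2.0]) * (1 + v * rng.standard_normal(3))
    sol = solve_ivp(pathogen, (0, t_eval[-1]), [1.3], args=(r, K, k), t_eval=t_eval)
    return sol.y[0]

t_eval = np.array([1.0, 2.0, 4.0, 50.0])     # early observations + late outcome time
v = 0.10                                     # patient-to-patient variability
traj = np.array([simulate_patient(v, t_eval) for _ in range(400)])

X = traj[:, :3]                              # early pathogen load as predictors
y = (traj[:, -1] > 3.0).astype(int)          # 1 = chronic infection, 0 = cleared

X_trn, X_tst, y_trn, y_tst = train_test_split(X, y, test_size=0.3, random_state=0)
clf = LogisticRegression().fit(X_trn, y_trn)  # one-vs-all reduces to binary here
print("outcome prediction accuracy:", clf.score(X_tst, y_tst))
```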
Adjusted Clinical Groups: Predictive Accuracy for Medicaid Enrollees in Three States
Adams, E. Kathleen; Bronstein, Janet M.; Raskind-Hood, Cheryl
2002-01-01
Actuarial split-sample methods were used to assess predictive accuracy of adjusted clinical groups (ACGs) for Medicaid enrollees in Georgia, Mississippi (lagging in managed care penetration), and California. Accuracy for two non-random groups—high-cost and located in urban poor areas—was assessed. Measures for random groups were derived with and without short-term enrollees to assess the effect of turnover on predictive accuracy. ACGs improved predictive accuracy for high-cost conditions in all States, but did so only for those in Georgia's poorest urban areas. Higher and more unpredictable expenses of short-term enrollees moderated the predictive power of ACGs. This limitation was significant in Mississippi due in part, to that State's very high proportion of short-term enrollees. PMID:12545598
Chen, L; Schenkel, F; Vinsky, M; Crews, D H; Li, C
2013-10-01
In beef cattle, phenotypic data that are difficult and/or costly to measure, such as feed efficiency, and DNA marker genotypes are usually available on a small number of animals of different breeds or populations. To achieve a maximal accuracy of genomic prediction using the phenotype and genotype data, strategies for forming a training population to predict genomic breeding values (GEBV) of the selection candidates need to be evaluated. In this study, we examined the accuracy of predicting GEBV for residual feed intake (RFI) based on 522 Angus and 395 Charolais steers genotyped on SNP with the Illumina Bovine SNP50 Beadchip for 3 training population forming strategies: within breed, across breed, and by pooling data from the 2 breeds (i.e., combined). Two other scenarios with the training and validation data split by birth year and by sire family within a breed were also investigated to assess the impact of genetic relationships on the accuracy of genomic prediction. Three statistical methods including the best linear unbiased prediction with the relationship matrix defined based on the pedigree (PBLUP), based on the SNP genotypes (GBLUP), and a Bayesian method (BayesB) were used to predict the GEBV. The results showed that the accuracy of the GEBV prediction was the highest when the prediction was within breed and when the validation population had greater genetic relationships with the training population, with a maximum of 0.58 for Angus and 0.64 for Charolais. The within-breed prediction accuracies dropped to 0.29 and 0.38, respectively, when the validation populations had a minimal pedigree link with the training population. When the training population of a different breed was used to predict the GEBV of the validation population, that is, across-breed genomic prediction, the accuracies were further reduced to 0.10 to 0.22, depending on the prediction method used. Pooling data from the 2 breeds to form the training population resulted in accuracies increased to 0.31 and 0.43, respectively, for the Angus and Charolais validation populations. The results suggested that the genetic relationship of selection candidates with the training population has a greater impact on the accuracy of GEBV using the Illumina Bovine SNP50 Beadchip. Pooling data from different breeds to form the training population will improve the accuracy of across breed genomic prediction for RFI in beef cattle.
Research on Improved Depth Belief Network-Based Prediction of Cardiovascular Diseases
Zhang, Hongpo
2018-01-01
Quantitative analysis and prediction can help to reduce the risk of cardiovascular disease. Quantitative prediction based on traditional models has low accuracy, and predictions from shallow neural networks show large variance. In this paper, a cardiovascular disease prediction model based on an improved deep belief network (DBN) is proposed. Using the reconstruction error, the network depth is determined automatically, and unsupervised pre-training is combined with supervised fine-tuning. This ensures the accuracy of model prediction while guaranteeing stability. Thirty experiments were performed independently on the Statlog (Heart) and Heart Disease Database data sets in the UCI database. Experimental results showed that the mean prediction accuracy was 91.26% and 89.78%, respectively. The variance of prediction accuracy was 5.78 and 4.46, respectively. PMID:29854369
The accuracy of new wheelchair users' predictions about their future wheelchair use.
Hoenig, Helen; Griffiths, Patricia; Ganesh, Shanti; Caves, Kevin; Harris, Frances
2012-06-01
This study examined the accuracy of new wheelchair user predictions about their future wheelchair use. This was a prospective cohort study of 84 community-dwelling veterans provided a new manual wheelchair. The association between predicted and actual wheelchair use was strong at 3 mos (ϕ coefficient = 0.56), with 90% of those who anticipated using the wheelchair at 3 mos still using it (i.e., positive predictive value = 0.96) and 60% of those who anticipated not using it indeed no longer using the wheelchair (i.e., negative predictive value = 0.60, overall accuracy = 0.92). Predictive accuracy diminished over time, with overall accuracy declining from 0.92 at 3 mos to 0.66 at 6 mos. At all time points, and for all types of use, patients better predicted use as opposed to disuse, with correspondingly higher positive than negative predictive values. Accuracy of prediction of use in specific indoor and outdoor locations varied according to location. This study demonstrates the importance of better understanding the potential mismatch between the anticipated and actual patterns of wheelchair use. The findings suggest that users can be relied upon to accurately predict their basic wheelchair-related needs in the short-term. Further exploration is needed to identify characteristics that will aid users and their providers in more accurately predicting mobility needs for the long-term.
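For reference, the prediction-accuracy summaries quoted above can be computed from a 2 x 2 table of anticipated versus actual use, as in the sketch below; the counts are hypothetical, not the study's data.

```python
import math

# Hypothetical 2 x 2 table of predicted vs. actual wheelchair use at 3 months.
tp, fp = 72, 3      # predicted use:    actually used / not used
fn, tn = 3, 6       # predicted no use: actually used / not used

ppv = tp / (tp + fp)                      # positive predictive value
npv = tn / (tn + fn)                      # negative predictive value
accuracy = (tp + tn) / (tp + fp + fn + tn)
phi = (tp * tn - fp * fn) / math.sqrt((tp + fp) * (fn + tn) * (tp + fn) * (fp + tn))

print(f"PPV = {ppv:.2f}, NPV = {npv:.2f}, accuracy = {accuracy:.2f}, phi = {phi:.2f}")
```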
Performance of genomic prediction within and across generations in maritime pine.
Bartholomé, Jérôme; Van Heerwaarden, Joost; Isik, Fikret; Boury, Christophe; Vidal, Marjorie; Plomion, Christophe; Bouffier, Laurent
2016-08-11
Genomic selection (GS) is a promising approach for decreasing breeding cycle length in forest trees. Assessment of progeny performance and of the prediction accuracy of GS models over generations is therefore a key issue. A reference population of maritime pine (Pinus pinaster) with an estimated effective inbreeding population size (status number) of 25 was first selected with simulated data. This reference population (n = 818) covered three generations (G0, G1 and G2) and was genotyped with 4436 single-nucleotide polymorphism (SNP) markers. We evaluated the effects on prediction accuracy of both the relatedness between the calibration and validation sets and validation on the basis of progeny performance. Pedigree-based (best linear unbiased prediction, ABLUP) and marker-based (genomic BLUP and Bayesian LASSO) models were used to predict breeding values for three different traits: circumference, height and stem straightness. On average, the ABLUP model outperformed genomic prediction models, with a maximum difference in prediction accuracies of 0.12, depending on the trait and the validation method. A mean difference in prediction accuracy of 0.17 was found between validation methods differing in terms of relatedness. Including the progenitors in the calibration set reduced this difference in prediction accuracy to 0.03. When only genotypes from the G0 and G1 generations were used in the calibration set and genotypes from G2 were used in the validation set (progeny validation), prediction accuracies ranged from 0.70 to 0.85. This study suggests that the training of prediction models on parental populations can predict the genetic merit of the progeny with high accuracy: an encouraging result for the implementation of GS in the maritime pine breeding program.
Morgante, Fabio; Huang, Wen; Maltecca, Christian; Mackay, Trudy F C
2018-06-01
Predicting complex phenotypes from genomic data is a fundamental aim of animal and plant breeding, where we wish to predict genetic merits of selection candidates; and of human genetics, where we wish to predict disease risk. While genomic prediction models work well with populations of related individuals and high linkage disequilibrium (LD) (e.g., livestock), comparable models perform poorly for populations of unrelated individuals and low LD (e.g., humans). We hypothesized that low prediction accuracies in the latter situation may occur when the genetics architecture of the trait departs from the infinitesimal and additive architecture assumed by most prediction models. We used simulated data for 10,000 lines based on sequence data from a population of unrelated, inbred Drosophila melanogaster lines to evaluate this hypothesis. We show that, even in very simplified scenarios meant as a stress test of the commonly used Genomic Best Linear Unbiased Predictor (G-BLUP) method, using all common variants yields low prediction accuracy regardless of the trait genetic architecture. However, prediction accuracy increases when predictions are informed by the genetic architecture inferred from mapping the top variants affecting main effects and interactions in the training data, provided there is sufficient power for mapping. When the true genetic architecture is largely or partially due to epistatic interactions, the additive model may not perform well, while models that account explicitly for interactions generally increase prediction accuracy. Our results indicate that accounting for genetic architecture can improve prediction accuracy for quantitative traits.
Electrophysiological evidence for preserved primacy of lexical prediction in aging.
Dave, Shruti; Brothers, Trevor A; Traxler, Matthew J; Ferreira, Fernanda; Henderson, John M; Swaab, Tamara Y
2018-05-28
Young adults show consistent neural benefits of predictable contexts when processing upcoming words, but these benefits are less clear-cut in older adults. Here we disentangle the neural correlates of prediction accuracy and contextual support during word processing, in order to test current theories that suggest that neural mechanisms underlying predictive processing are specifically impaired in older adults. During a sentence comprehension task, older and younger readers were asked to predict passage-final words and report the accuracy of these predictions. Age-related reductions were observed for N250 and N400 effects of prediction accuracy, as well as for N400 effects of contextual support independent of prediction accuracy. Furthermore, temporal primacy of predictive processing (i.e., earlier facilitation for successful predictions) was preserved across the lifespan, suggesting that predictive mechanisms are unlikely to be uniquely impaired in older adults. In addition, older adults showed prediction effects on frontal post-N400 positivities (PNPs) that were similar in amplitude to PNPs in young adults. Previous research has shown correlations between verbal fluency and lexical prediction in older adult readers, suggesting that the production system may be linked to capacity for lexical prediction, especially in aging. The current study suggests that verbal fluency modulates PNP effects of contextual support, but not prediction accuracy. Taken together, our findings suggest that aging does not result in specific declines in lexical prediction. Copyright © 2018 Elsevier Ltd. All rights reserved.
Genomic prediction of reproduction traits for Merino sheep.
Bolormaa, S; Brown, D J; Swan, A A; van der Werf, J H J; Hayes, B J; Daetwyler, H D
2017-06-01
Economically important reproduction traits in sheep, such as number of lambs weaned and litter size, are expressed only in females and later in life after most selection decisions are made, which makes them ideal candidates for genomic selection. Accurate genomic predictions would lead to greater genetic gain for these traits by enabling accurate selection of young rams with high genetic merit. The aim of this study was to design and evaluate the accuracy of a genomic prediction method for female reproduction in sheep using daughter trait deviations (DTD) for sires and ewe phenotypes (when individual ewes were genotyped) for three reproduction traits: number of lambs born (NLB), litter size (LSIZE) and number of lambs weaned. Genomic best linear unbiased prediction (GBLUP), BayesR and pedigree BLUP analyses of the three reproduction traits measured on 5340 sheep (4503 ewes and 837 sires) with real and imputed genotypes for 510 174 SNPs were performed. The prediction of breeding values using both sire and ewe trait records was validated in Merino sheep. Prediction accuracy was evaluated by across sire family and random cross-validations. Accuracies of genomic estimated breeding values (GEBVs) were assessed as the mean Pearson correlation adjusted by the accuracy of the input phenotypes. The addition of sire DTD into the prediction analysis resulted in higher accuracies compared with using only ewe records in genomic predictions or pedigree BLUP. Using GBLUP, the average accuracy based on the combined records (ewes and sire DTD) was 0.43 across traits, but the accuracies varied by trait and type of cross-validations. The accuracies of GEBVs from random cross-validations (range 0.17-0.61) were higher than were those from sire family cross-validations (range 0.00-0.51). The GEBV accuracies of 0.41-0.54 for NLB and LSIZE based on the combined records were amongst the highest in the study. Although BayesR was not significantly different from GBLUP in prediction accuracy, it identified several candidate genes which are known to be associated with NLB and LSIZE. The approach provides a way to make use of all data available in genomic prediction for traits that have limited recording. © 2017 Stichting International Foundation for Animal Genetics.
Waide, Emily H; Tuggle, Christopher K; Serão, Nick V L; Schroyen, Martine; Hess, Andrew; Rowland, Raymond R R; Lunney, Joan K; Plastow, Graham; Dekkers, Jack C M
2018-02-01
Genomic prediction of the pig's response to the porcine reproductive and respiratory syndrome (PRRS) virus (PRRSV) would be a useful tool in the swine industry. This study investigated the accuracy of genomic prediction based on porcine SNP60 Beadchip data using training and validation datasets from populations with different genetic backgrounds that were challenged with different PRRSV isolates. Genomic prediction accuracy averaged 0.34 for viral load (VL) and 0.23 for weight gain (WG) following experimental PRRSV challenge, which demonstrates that genomic selection could be used to improve response to PRRSV infection. Training on WG data during infection with a less virulent PRRSV, KS06, resulted in poor accuracy of prediction for WG during infection with a more virulent PRRSV, NVSL. Inclusion of single nucleotide polymorphisms (SNPs) that are in linkage disequilibrium with a major quantitative trait locus (QTL) on chromosome 4 was vital for accurate prediction of VL. Overall, SNPs that were significantly associated with either trait in single SNP genome-wide association analysis were unable to predict the phenotypes with an accuracy as high as that obtained by using all genotyped SNPs across the genome. Inclusion of data from close relatives into the training population increased whole genome prediction accuracy by 33% for VL and by 37% for WG but did not affect the accuracy of prediction when using only SNPs in the major QTL region. Results show that genomic prediction of response to PRRSV infection is moderately accurate and, when using all SNPs on the porcine SNP60 Beadchip, is not very sensitive to differences in virulence of the PRRSV in training and validation populations. Including close relatives in the training population increased prediction accuracy when using the whole genome or SNPs other than those near a major QTL.
Genomic Prediction of Seed Quality Traits Using Advanced Barley Breeding Lines.
Nielsen, Nanna Hellum; Jahoor, Ahmed; Jensen, Jens Due; Orabi, Jihad; Cericola, Fabio; Edriss, Vahid; Jensen, Just
2016-01-01
Genomic selection was recently introduced in plant breeding. The objective of this study was to develop genomic prediction for important seed quality parameters in spring barley. The aim was to predict breeding values without expensive phenotyping of large sets of lines. A total of 309 advanced spring barley lines, tested at two locations each with three replicates, were phenotyped, and each line was genotyped with the Illumina iSelect 9K barley chip. The population originated from two different breeding sets, which were phenotyped in two different years. Phenotypic measurements considered were: seed size, protein content, protein yield, test weight and ergosterol content. A leave-one-out cross-validation strategy revealed high prediction accuracies ranging between 0.40 and 0.83. Prediction across breeding sets resulted in reduced accuracies compared to the leave-one-out strategy. Furthermore, predicting across full- and half-sib families resulted in reduced prediction accuracies. Additionally, predictions were performed using reduced marker sets and reduced training population sets. In conclusion, using fewer than 200 lines in the training set can result in low prediction accuracy, and the accuracy will then be highly dependent on the family structure of the selected training set. However, the results also indicate that relatively small training sets (200 lines) are sufficient for genomic prediction in commercial barley breeding. In addition, our results indicate a minimum marker set of 1,000 to decrease the risk of low prediction accuracy for some traits or some families.
Influence of outliers on accuracy estimation in genomic prediction in plant breeding.
Estaghvirou, Sidi Boubacar Ould; Ogutu, Joseph O; Piepho, Hans-Peter
2014-10-01
Outliers often pose problems in analyses of data in plant breeding, but their influence on the performance of methods for estimating predictive accuracy in genomic prediction studies has not yet been evaluated. Here, we evaluate the influence of outliers on the performance of methods for accuracy estimation in genomic prediction studies using simulation. We simulated 1000 datasets for each of 10 scenarios to evaluate the influence of outliers on the performance of seven methods for estimating accuracy. These scenarios are defined by the number of genotypes, marker effect variance, and magnitude of outliers. To mimic outliers, we added to one observation in each simulated dataset, in turn, 5-, 8-, and 10-times the error SD used to simulate small and large phenotypic datasets. The effect of outliers on accuracy estimation was evaluated by comparing deviations in the estimated and true accuracies for datasets with and without outliers. Outliers adversely influenced accuracy estimation, more so at small values of genetic variance or number of genotypes. A method for estimating heritability and predictive accuracy in plant breeding and another used to estimate accuracy in animal breeding were the most accurate and resistant to outliers across all scenarios and are therefore preferable for accuracy estimation in genomic prediction studies. The performances of the other five methods that use cross-validation were less consistent and varied widely across scenarios. The computing time for the methods increased as the size of outliers and sample size increased and the genetic variance decreased. Copyright © 2014 Ould Estaghvirou et al.
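The sketch below mimics the outlier experiment in a simplified form: replicated phenotypes are simulated, accuracy is estimated from an ANOVA-type heritability (one simple estimator, not the seven methods compared in the paper), and an outlier of 5, 8, or 10 times the error SD is added to a single observation.

```python
import numpy as np

rng = np.random.default_rng(7)

n_geno, n_rep = 100, 2
sigma_g, sigma_e = 1.0, 1.0

g = rng.normal(0, sigma_g, n_geno)
y = g[:, None] + rng.normal(0, sigma_e, (n_geno, n_rep))    # replicated phenotypes

def estimated_accuracy(y):
    """ANOVA-type estimate of the accuracy of genotype means, sqrt(h2)."""
    within = y.var(axis=1, ddof=1).mean()                   # error variance
    between = y.mean(axis=1).var(ddof=1)                    # variance of genotype means
    var_g = max(between - within / n_rep, 0.0)
    return np.sqrt(var_g / (var_g + within / n_rep))

true_acc = np.corrcoef(y.mean(axis=1), g)[0, 1]
print(f"true accuracy:              {true_acc:.2f}")
print(f"estimate (clean data):      {estimated_accuracy(y):.2f}")

for size in (5, 8, 10):                                     # outlier = k x error SD
    y_out = y.copy()
    y_out[0, 0] += size * sigma_e
    print(f"estimate ({size:2d} x SD outlier): {estimated_accuracy(y_out):.2f}")
```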
The effect of using genealogy-based haplotypes for genomic prediction.
Edriss, Vahid; Fernando, Rohan L; Su, Guosheng; Lund, Mogens S; Guldbrandtsen, Bernt
2013-03-06
Genomic prediction uses two sources of information: linkage disequilibrium between markers and quantitative trait loci, and additive genetic relationships between individuals. One way to increase the accuracy of genomic prediction is to capture more linkage disequilibrium by regression on haplotypes instead of regression on individual markers. The aim of this study was to investigate the accuracy of genomic prediction using haplotypes based on local genealogy information. A total of 4429 Danish Holstein bulls were genotyped with the 50K SNP chip. Haplotypes were constructed using local genealogical trees. Effects of haplotype covariates were estimated with two types of prediction models: (1) assuming that effects had the same distribution for all haplotype covariates, i.e. the GBLUP method and (2) assuming that a large proportion (π) of the haplotype covariates had zero effect, i.e. a Bayesian mixture method. About 7.5 times more covariate effects were estimated when fitting haplotypes based on local genealogical trees compared to fitting individuals markers. Genealogy-based haplotype clustering slightly increased the accuracy of genomic prediction and, in some cases, decreased the bias of prediction. With the Bayesian method, accuracy of prediction was less sensitive to parameter π when fitting haplotypes compared to fitting markers. Use of haplotypes based on genealogy can slightly increase the accuracy of genomic prediction. Improved methods to cluster the haplotypes constructed from local genealogy could lead to additional gains in accuracy.
Achamrah, Najate; Jésus, Pierre; Grigioni, Sébastien; Rimbert, Agnès; Petit, André; Déchelotte, Pierre; Folope, Vanessa; Coëffier, Moïse
2018-01-01
Predictive equations have been specifically developed for obese patients to estimate resting energy expenditure (REE). Body composition (BC) assessment is needed for some of these equations. We assessed the impact of BC methods on the accuracy of specific predictive equations developed for obese patients. REE was measured (mREE) by indirect calorimetry, and BC was assessed by bioelectrical impedance analysis (BIA) and dual-energy X-ray absorptiometry (DXA). mREE and the percentage of accurate predictions (within ±10% of mREE) were compared. Predictive equations were studied in 2588 obese patients. Mean mREE was 1788 ± 6.3 kcal/24 h. Only the Müller (BIA) and Harris & Benedict (HB) equations provided REE estimates that did not differ from mREE. The Huang, Müller, Horie-Waitzberg, and HB formulas provided accurate predictions in more than 60% of cases. The use of BIA provided better predictions of REE than DXA for the Huang and Müller equations. Conversely, the Horie-Waitzberg and Lazzer formulas provided higher accuracy using DXA. Accuracy decreased when the equations were applied to patients with BMI ≥ 40, except for the Horie-Waitzberg and Lazzer (DXA) formulas. The Müller equations based on BIA provided a marked improvement in REE prediction accuracy compared with equations not based on BC. The value of BC assessment for improving the accuracy of REE predictive equations in obese patients should be confirmed. PMID:29320432
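As an example of the class of equations being compared, the sketch below applies the classic Harris & Benedict formula (using its commonly cited 1919 coefficients) and the ±10% accuracy criterion to a hypothetical patient; the study's BC-based equations (Müller, Huang, Horie-Waitzberg, Lazzer) are not reproduced here.

```python
def harris_benedict(weight_kg, height_cm, age_yr, sex):
    """Classic Harris & Benedict resting energy expenditure (kcal/24 h),
    using the commonly cited 1919 coefficients."""
    if sex == "male":
        return 66.473 + 13.7516 * weight_kg + 5.0033 * height_cm - 6.755 * age_yr
    return 655.0955 + 9.5634 * weight_kg + 1.8496 * height_cm - 4.6756 * age_yr

def is_accurate(predicted, measured, tolerance=0.10):
    """Accuracy criterion used in the abstract: within +/-10% of measured REE."""
    return abs(predicted - measured) <= tolerance * measured

# Illustrative patient (not from the study): 120 kg, 165 cm, 45-year-old woman.
pred = harris_benedict(120, 165, 45, "female")
print(f"predicted REE: {pred:.0f} kcal/24 h")
print("within +/-10% of a measured 1900 kcal/24 h:", is_accurate(pred, 1900))
```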
Beaulieu, Jean; Doerksen, Trevor K; MacKay, John; Rainville, André; Bousquet, Jean
2014-12-02
Genomic selection (GS) may improve selection response over conventional pedigree-based selection if markers capture more detailed information than pedigrees in recently domesticated tree species and/or make it more cost effective. Genomic prediction accuracies using 1748 trees and 6932 SNPs representative of as many distinct gene loci were determined for growth and wood traits in white spruce, within and between environments and breeding groups (BG), each with an effective size of Ne ≈ 20. Marker subsets were also tested. Model fits and/or cross-validation (CV) prediction accuracies for ridge regression (RR) and the least absolute shrinkage and selection operator models approached those of pedigree-based models. With strong relatedness between CV sets, prediction accuracies for RR within environment and BG were high for wood (r = 0.71-0.79) and moderately high for growth (r = 0.52-0.69) traits, in line with trends in heritabilities. For both classes of traits, these accuracies achieved between 83% and 92% of those obtained with phenotypes and pedigree information. Prediction into untested environments remained moderately high for wood (r ≥ 0.61) but dropped significantly for growth (r ≥ 0.24) traits, emphasizing the need to phenotype in all test environments and model genotype-by-environment interactions for growth traits. Removing relatedness between CV sets sharply decreased prediction accuracies for all traits and subpopulations, falling near zero between BGs with no known shared ancestry. For marker subsets, similar patterns were observed but with lower prediction accuracies. Given the need for high relatedness between CV sets to obtain good prediction accuracies, we recommend to build GS models for prediction within the same breeding population only. Breeding groups could be merged to build genomic prediction models as long as the total effective population size does not exceed 50 individuals in order to obtain high prediction accuracy such as that obtained in the present study. A number of markers limited to a few hundred would not negatively impact prediction accuracies, but these could decrease more rapidly over generations. The most promising short-term approach for genomic selection would likely be the selection of superior individuals within large full-sib families vegetatively propagated to implement multiclonal forestry.
2009-01-01
Background: Genomic selection (GS) uses molecular breeding values (MBV) derived from dense markers across the entire genome for selection of young animals. The accuracy of MBV prediction is important for a successful application of GS. Recently, several methods have been proposed to estimate MBV. Initial simulation studies have shown that these methods can accurately predict MBV. In this study we compared the accuracies and possible bias of five different regression methods in an empirical application in dairy cattle. Methods: Genotypes of 7,372 SNP and highly accurate EBV of 1,945 dairy bulls were used to predict MBV for protein percentage (PPT) and a profit index (Australian Selection Index, ASI). Marker effects were estimated by least squares regression (FR-LS), Bayesian regression (Bayes-R), random regression best linear unbiased prediction (RR-BLUP), partial least squares regression (PLSR) and nonparametric support vector regression (SVR) in a training set of 1,239 bulls. Accuracy and bias of MBV prediction were calculated from cross-validation of the training set and tested against a test team of 706 young bulls. Results: For both traits, FR-LS using a subset of SNP was significantly less accurate than all other methods, which used all SNP. Accuracies obtained by Bayes-R, RR-BLUP, PLSR and SVR were very similar for ASI (0.39-0.45) and for PPT (0.55-0.61). Overall, SVR gave the highest accuracy. All methods resulted in biased MBV predictions for ASI; for PPT only RR-BLUP and SVR predictions were unbiased. A significant decrease in accuracy of prediction of ASI was seen in young test cohorts of bulls compared to the accuracy derived from cross-validation of the training set. This reduction was not apparent for PPT. Combining MBV predictions with pedigree-based predictions gave 1.05-1.34 times higher accuracies compared to predictions based on pedigree alone. Some methods have largely different computational requirements, with PLSR and RR-BLUP requiring the least computing time. Conclusions: The four methods which use information from all SNP, namely RR-BLUP, Bayes-R, PLSR and SVR, generate similar accuracies of MBV prediction for genomic selection, and their use in the selection of immediate future generations in dairy cattle will be comparable. The use of FR-LS in genomic selection is not recommended. PMID:20043835
Uribe-Rivera, David E; Soto-Azat, Claudio; Valenzuela-Sánchez, Andrés; Bizama, Gustavo; Simonetti, Javier A; Pliscoff, Patricio
2017-07-01
Climate change is a major threat to biodiversity; the development of models that reliably predict its effects on species distributions is a priority for conservation biogeography. Two of the main issues for accurate temporal predictions from Species Distribution Models (SDM) are model extrapolation and unrealistic dispersal scenarios. We assessed the consequences of these issues for the accuracy of climate-driven SDM predictions for the dispersal-limited Darwin's frog Rhinoderma darwinii in South America. We calibrated models using historical data (1950-1975) and projected them across 40 yr to predict distribution under current climatic conditions, assessing predictive accuracy through the area under the ROC curve (AUC) and the True Skill Statistic (TSS), contrasting binary model predictions against a temporally independent validation data set (i.e., current presences/absences). To assess the effects of incorporating dispersal processes we compared the predictive accuracy of dispersal-constrained models with SDMs without dispersal limitation; and to assess the effects of model extrapolation on the predictive accuracy of SDMs, we compared this between extrapolated and non-extrapolated areas. The incorporation of dispersal processes enhanced predictive accuracy, mainly due to a decrease in the false presence rate of model predictions, which is consistent with discrimination of suitable but inaccessible habitat. This also had consequences for range size changes over time, which is the most used proxy for extinction risk from climate change. The area of current climatic conditions that was absent in the baseline conditions (i.e., extrapolated areas) represents 39% of the study area, leading to a significant decrease in predictive accuracy of model predictions for those areas. Our results highlight that (1) incorporating dispersal processes can improve the predictive accuracy of temporal transference of SDMs and reduce uncertainties of extinction risk assessments from global change; and (2) as geographical areas subjected to novel climates are expected to arise, they must be reported, as they show less accurate predictions under future climate scenarios. Consequently, environmental extrapolation and dispersal processes should be explicitly incorporated to report and reduce uncertainties in temporal predictions of SDMs, respectively. In doing so, we expect to improve the reliability of the information we provide for conservation decision makers under future climate change scenarios. © 2017 by the Ecological Society of America.
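The accuracy metrics used above can be illustrated on simulated validation data as follows; the suitability scores, threshold, and prevalence are placeholders, and TSS is computed as sensitivity + specificity - 1.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

rng = np.random.default_rng(8)

# Hypothetical validation data: current presences/absences and binarized
# habitat-suitability predictions for the same sites.
observed = rng.binomial(1, 0.3, 500)
suitability = np.clip(0.6 * observed + rng.normal(0.2, 0.25, 500), 0, 1)
predicted = (suitability > 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(observed, predicted).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
tss = sensitivity + specificity - 1          # True Skill Statistic
auc = roc_auc_score(observed, suitability)   # threshold-independent accuracy

print(f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")
print(f"TSS = {tss:.2f}, AUC = {auc:.2f}")
```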
Flow and Compaction During the Vacuum Assisted Resin Transfer Molding Process
NASA Technical Reports Server (NTRS)
Grimsley, Brian W.; Hubert, Pascal; Song, Xiao-Lan; Cano, Roberto J.; Loos, Alfred C.; Pipes, R. Byron
2001-01-01
The flow of an epoxy resin and the compaction behavior of a carbon fiber preform during vacuum-assisted resin transfer molding (VARTM) infiltration were measured using an instrumented tool. Composite panels were fabricated by the VARTM process using SAERTEX(R)2 multi-axial non-crimp carbon fiber fabric and the A.T.A.R.D. SI-ZG-5A epoxy resin. Variations in resin pressure and preform thickness were measured during infiltration. The effects of the resin on the compaction behavior of the preform were measured. The local preform compaction during the infiltration is a combination of wetting and spring-back deformations. The flow front position computed by the 3DINFIL model was compared with the experimental data.
Genomic selection across multiple breeding cycles in applied bread wheat breeding.
Michel, Sebastian; Ametz, Christian; Gungor, Huseyin; Epure, Doru; Grausgruber, Heinrich; Löschenberger, Franziska; Buerstmayr, Hermann
2016-06-01
We evaluated genomic selection across five cycles of applied bread wheat breeding. Bias of within-cycle cross-validation and methods for improving the prediction accuracy were assessed. The prospect of genomic selection has been frequently shown by cross-validation studies using the same genetic material across multiple environments, but studies investigating genomic selection across multiple breeding cycles in applied bread wheat breeding are lacking. We estimated the prediction accuracy of grain yield, protein content and protein yield of 659 inbred lines across five independent breeding cycles and assessed the bias of within-cycle cross-validation. We investigated the influence of outliers on the prediction accuracy and predicted protein yield by its component traits. A high average heritability was estimated for protein content, followed by grain yield and protein yield. The bias of the prediction accuracy estimated by fivefold cross-validation within individual cycles was accordingly substantial for protein yield (17-712%) and less pronounced for protein content (8-86%). Cross-validation using the cycles as folds aimed to avoid this bias and reached a maximum prediction accuracy of 0.51 for protein content, 0.38 for grain yield and 0.16 for protein yield. Dropping outlier cycles increased the prediction accuracy of grain yield to 0.41 as estimated by cross-validation, while dropping outlier environments did not have a significant effect on the prediction accuracy. Independent validation suggests, on the other hand, that careful consideration is necessary before undertaking an outlier correction that removes lines from the training population. Predicting protein yield by multiplying genomic estimated breeding values of grain yield and protein content raised the prediction accuracy to 0.19 for this derived trait.
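The contrast drawn above, between within-cycle cross-validation and cross-validation that holds out whole breeding cycles, can be sketched as below; the simulated genotypes, the ridge-regression predictor, and all dimensions are assumptions for illustration only.

```python
# Hypothetical illustration: within-population fivefold CV versus cycles-as-folds CV.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold, LeaveOneGroupOut

rng = np.random.default_rng(2)
n_lines, n_markers = 659, 500
X = rng.integers(0, 3, size=(n_lines, n_markers)).astype(float)
y = X @ rng.normal(0, 0.05, n_markers) + rng.normal(0, 1, n_lines)   # stand-in trait values
cycle = rng.integers(0, 5, n_lines)                                  # five breeding cycles

def cv_accuracy(splitter, groups=None):
    accs = []
    for train, test in splitter.split(X, y, groups):
        pred = Ridge(alpha=float(n_markers)).fit(X[train], y[train]).predict(X[test])
        accs.append(np.corrcoef(pred, y[test])[0, 1])
    return float(np.mean(accs))

print("within-population fivefold CV:", round(cv_accuracy(KFold(5, shuffle=True, random_state=0)), 2))
print("cycles-as-folds CV:           ", round(cv_accuracy(LeaveOneGroupOut(), groups=cycle), 2))
```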
Edwards, T.C.; Cutler, D.R.; Zimmermann, N.E.; Geiser, L.; Moisen, Gretchen G.
2006-01-01
We evaluated the effects of probabilistic (hereafter DESIGN) and non-probabilistic (PURPOSIVE) sample surveys on resultant classification tree models for predicting the presence of four lichen species in the Pacific Northwest, USA. Models derived from both survey forms were assessed using an independent data set (EVALUATION). Measures of accuracy as gauged by resubstitution rates were similar for each lichen species irrespective of the underlying sample survey form. Cross-validation estimates of prediction accuracies were lower than resubstitution accuracies for all species and both design types, and in all cases were closer to the true prediction accuracies based on the EVALUATION data set. We argue that greater emphasis should be placed on calculating and reporting cross-validation accuracy rates rather than simple resubstitution accuracy rates. Evaluation of the DESIGN and PURPOSIVE tree models on the EVALUATION data set shows significantly lower prediction accuracy for the PURPOSIVE tree models relative to the DESIGN models, indicating that non-probabilistic sample surveys may generate models with limited predictive capability. These differences were consistent across all four lichen species, with 11 of the 12 possible species and sample survey type comparisons having significantly lower accuracy rates. Some differences in accuracy were as large as 50%. The classification tree structures also differed considerably both among and within the modelled species, depending on the sample survey form. Overlap in the predictor variables selected by the DESIGN and PURPOSIVE tree models ranged from only 20% to 38%, indicating the classification trees fit the two evaluated survey forms on different sets of predictor variables. The magnitude of these differences in predictor variables throws doubt on ecological interpretation derived from prediction models based on non-probabilistic sample surveys. © 2006 Elsevier B.V. All rights reserved.
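The gap between resubstitution and cross-validation accuracy that the authors emphasize is easy to reproduce with any classification tree, as in the sketch below; the synthetic predictors and response are assumptions and do not represent the lichen data.

```python
# Hypothetical illustration: resubstitution accuracy versus cross-validation accuracy
# for a classification tree. Resubstitution scores the tree on its own training data.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
X = rng.normal(size=(300, 6))                                          # e.g., climate/terrain predictors
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, 300) > 0).astype(int)  # species presence/absence

tree = DecisionTreeClassifier(random_state=0).fit(X, y)
resubstitution = tree.score(X, y)
cross_validation = cross_val_score(DecisionTreeClassifier(random_state=0), X, y, cv=10).mean()
print(f"resubstitution = {resubstitution:.2f}, cross-validation = {cross_validation:.2f}")
```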
Improving transmembrane protein consensus topology prediction using inter-helical interaction.
Wang, Han; Zhang, Chao; Shi, Xiaohu; Zhang, Li; Zhou, You
2012-11-01
Alpha helix transmembrane proteins (αTMPs) represent roughly 30% of all open reading frames (ORFs) in a typical genome and are involved in many critical biological processes. Because of their special physicochemical properties, αTMPs are hard to crystallize and to resolve at high resolution experimentally; thus, sequence-based topology prediction is highly desirable for the study of transmembrane proteins (TMPs), both in structure prediction and in function prediction. Various model-based topology prediction methods have been developed, but the accuracy of these individual predictors remains poor due to limitations of the methods or of the features they use. Thus, consensus topology prediction becomes practical for high-accuracy applications by combining the strengths of the individual predictors. Here, based on the observation that inter-helical interactions are commonly found within transmembrane helices (TMHs) and strongly indicate their existence, we present a novel consensus topology prediction method for αTMPs, CNTOP, which incorporates four leading individual topology predictors and further improves the prediction accuracy by using the predicted inter-helical interactions. The method achieved 87% prediction accuracy on a benchmark dataset and 78% accuracy on a non-redundant dataset composed of polytopic αTMPs. Our method achieves higher topology accuracy than any other individual or consensus predictor; at the same time, TMH lengths and locations are predicted more accurately, with both false positives (FPs) and false negatives (FNs) decreasing dramatically. CNTOP is available at: http://ccst.jlu.edu.cn/JCSB/cntop/CNTOP.html. Copyright © 2012 Elsevier B.V. All rights reserved.
Final Technical Report: Increasing Prediction Accuracy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
King, Bruce Hardison; Hansen, Clifford; Stein, Joshua
2015-12-01
PV performance models are used to quantify the value of PV plants in a given location. They combine the performance characteristics of the system, the measured or predicted irradiance and weather at a site, and the system configuration and design into a prediction of the amount of energy that will be produced by a PV system. These predictions must be as accurate as possible in order for finance charges to be minimized. Higher accuracy equals lower project risk. The Increasing Prediction Accuracy project at Sandia focuses on quantifying and reducing uncertainties in PV system performance models.
Genotyping by sequencing for genomic prediction in a soybean breeding population.
Jarquín, Diego; Kocak, Kyle; Posadas, Luis; Hyma, Katie; Jedlicka, Joseph; Graef, George; Lorenz, Aaron
2014-08-29
Advances in genotyping technology, such as genotyping by sequencing (GBS), are making genomic prediction more attractive to reduce breeding cycle times and costs associated with phenotyping. Genomic prediction and selection have been studied in several crop species, but no reports exist in soybean. The objectives of this study were (i) to evaluate prospects for genomic selection using GBS in a typical soybean breeding program and (ii) to evaluate the effect of GBS marker selection and imputation on genomic prediction accuracy. To achieve these objectives, a set of soybean lines sampled from the University of Nebraska Soybean Breeding Program was genotyped using GBS and evaluated for yield and other agronomic traits at multiple Nebraska locations. Genotyping by sequencing scored 16,502 single nucleotide polymorphisms (SNPs) with minor-allele frequency (MAF) > 0.05 and percentage of missing values ≤ 5% on 301 elite soybean breeding lines. When SNPs with up to 80% missing values were included, 52,349 SNPs were scored. Prediction accuracy for grain yield, assessed using cross-validation, was estimated to be 0.64, indicating good potential for using genomic selection for grain yield in soybean. Filtering SNPs based on missing data percentage had little to no effect on prediction accuracy, especially when random forest imputation was used to impute missing values. The highest accuracies were observed when random forest imputation was used on all SNPs, but differences were not significant. A standard additive G-BLUP model was robust; modeling additive-by-additive epistasis did not provide any improvement in prediction accuracy. The effect of training population size on accuracy began to plateau around 100, but accuracy steadily climbed until the largest possible size was used in this analysis. Including only SNPs with MAF > 0.30 provided higher accuracies when training populations were smaller. Using GBS for genomic prediction in soybean holds good potential to expedite genetic gain. Our results suggest that standard additive G-BLUP models can be used on unfiltered, imputed GBS data without loss in accuracy.
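The marker-handling steps described above (MAF and missing-data filters followed by imputation) can be sketched as below; a simple mean imputation stands in for the random forest imputation used in the study, and the simulated genotype matrix and thresholds are assumptions for illustration.

```python
# Hypothetical illustration: filter GBS SNPs by MAF and missingness, then impute.
import numpy as np

rng = np.random.default_rng(4)
geno = rng.integers(0, 3, size=(301, 2000)).astype(float)   # lines x SNPs, coded 0/1/2
geno[rng.random(geno.shape) < 0.04] = np.nan                # sprinkle in missing calls

allele_freq = np.nanmean(geno, axis=0) / 2.0
maf = np.minimum(allele_freq, 1.0 - allele_freq)            # fold to the minor allele
missing_rate = np.mean(np.isnan(geno), axis=0)
keep = (maf > 0.05) & (missing_rate <= 0.05)                # thresholds from the abstract
filtered = geno[:, keep]

col_means = np.nanmean(filtered, axis=0)                    # mean imputation as a simple stand-in
rows, cols = np.where(np.isnan(filtered))
filtered[rows, cols] = col_means[cols]
print("SNPs retained:", int(keep.sum()), "of", geno.shape[1])
```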
Correa, Katharina; Bangera, Rama; Figueroa, René; Lhorente, Jean P; Yáñez, José M
2017-01-31
Sea lice infestations caused by Caligus rogercresseyi are a main concern to the salmon farming industry due to associated economic losses. Resistance to this parasite was shown to have low to moderate genetic variation and its genetic architecture was suggested to be polygenic. The aim of this study was to compare accuracies of breeding value predictions obtained with pedigree-based best linear unbiased prediction (P-BLUP) methodology against different genomic prediction approaches: genomic BLUP (G-BLUP), Bayesian Lasso, and Bayes C. To achieve this, 2404 individuals from 118 families were measured for C. rogercresseyi count after a challenge and genotyped using 37 K single nucleotide polymorphisms. Accuracies were assessed using fivefold cross-validation and SNP densities of 0.5, 1, 5, 10, 25 and 37 K. Accuracy of genomic predictions increased with increasing SNP density and was higher than pedigree-based BLUP predictions by up to 22%. Both Bayesian and G-BLUP methods can predict breeding values with higher accuracies than pedigree-based BLUP, however, G-BLUP may be the preferred method because of reduced computation time and ease of implementation. A relatively low marker density (i.e. 10 K) is sufficient for maximal increase in accuracy when using G-BLUP or Bayesian methods for genomic prediction of C. rogercresseyi resistance in Atlantic salmon.
Wren, Christopher; Vogel, Melanie; Lord, Stephen; Abrams, Dominic; Bourke, John; Rees, Philip; Rosenthal, Eric
2012-02-01
The aim of this study was to examine the accuracy in predicting pathway location in children with Wolff-Parkinson-White syndrome for each of seven published algorithms. ECGs from 100 consecutive children with Wolff-Parkinson-White syndrome undergoing electrophysiological study were analysed by six investigators using seven published algorithms, six of which had been developed in adult patients. Accuracy and concordance of predictions were adjusted for the number of pathway locations. Accessory pathways were left-sided in 49, septal in 20 and right-sided in 31 children. Overall accuracy of prediction was 30-49% for the exact location and 61-68% including adjacent locations. Concordance between investigators varied between 41% and 86%. No algorithm was better at predicting septal pathways (accuracy 5-35%, improving to 40-78% including adjacent locations), but one was significantly worse. Predictive accuracy was 24-53% for the exact location of right-sided pathways (50-71% including adjacent locations) and 32-55% for the exact location of left-sided pathways (58-73% including adjacent locations). All algorithms were less accurate in our hands than in other authors' own assessment. None performed well in identifying midseptal or right anteroseptal accessory pathway locations.
Heidaritabar, M; Wolc, A; Arango, J; Zeng, J; Settar, P; Fulton, J E; O'Sullivan, N P; Bastiaansen, J W M; Fernando, R L; Garrick, D J; Dekkers, J C M
2016-10-01
Most genomic prediction studies fit only additive effects in models to estimate genomic breeding values (GEBV). However, if dominance genetic effects are an important source of variation for complex traits, accounting for them may improve the accuracy of GEBV. We investigated the effect of fitting dominance and additive effects on the accuracy of GEBV for eight egg production and quality traits in a purebred line of brown layers using pedigree or genomic information (42K single-nucleotide polymorphism (SNP) panel). Phenotypes were corrected for the effect of hatch date. Additive and dominance genetic variances were estimated using genomic-based [genomic best linear unbiased prediction (GBLUP)-REML and BayesC] and pedigree-based (PBLUP-REML) methods. Breeding values were predicted using a model that included both additive and dominance effects and a model that included only additive effects. The reference population consisted of approximately 1800 animals hatched between 2004 and 2009, while approximately 300 young animals hatched in 2010 were used for validation. Accuracy of prediction was computed as the correlation between phenotypes and estimated breeding values of the validation animals divided by the square root of the estimate of heritability in the whole population. The proportion of dominance variance to total phenotypic variance ranged from 0.03 to 0.22 with PBLUP-REML across traits, from 0 to 0.03 with GBLUP-REML and from 0.01 to 0.05 with BayesC. Accuracies of GEBV ranged from 0.28 to 0.60 across traits. Inclusion of dominance effects did not improve the accuracy of GEBV, and differences in their accuracies between genomic-based methods were small (0.01-0.05), with GBLUP-REML yielding higher prediction accuracies than BayesC for egg production, egg colour and yolk weight, while BayesC yielded higher accuracies than GBLUP-REML for the other traits. In conclusion, fitting dominance effects did not impact accuracy of genomic prediction of breeding values in this population. © 2016 Blackwell Verlag GmbH.
Hidalgo, A M; Bastiaansen, J W M; Lopes, M S; Veroneze, R; Groenen, M A M; de Koning, D-J
2015-07-01
Genomic selection is applied to dairy cattle breeding to improve the genetic progress of purebred (PB) animals, whereas in pigs and poultry the target is a crossbred (CB) animal for which a different strategy appears to be needed. The source of information used to estimate the breeding values, i.e., using phenotypes of CB or PB animals, may affect the accuracy of prediction. The objective of our study was to assess the direct genomic value (DGV) accuracy of CB and PB pigs using different sources of phenotypic information. Data used were from 3 populations: 2,078 Dutch Landrace-based, 2,301 Large White-based, and 497 crossbreds from an F1 cross between the 2 lines. Two female reproduction traits were analyzed: gestation length (GLE) and total number of piglets born (TNB). Phenotypes used in the analyses originated from offspring of genotyped individuals. Phenotypes collected on CB and PB animals were analyzed as separate traits using a single-trait model. Breeding values were estimated separately for each trait in a pedigree BLUP analysis and subsequently deregressed. Deregressed EBV for each trait originating from different sources (CB or PB offspring) were used to study the accuracy of genomic prediction. Accuracy of prediction was computed as the correlation between DGV and the DEBV of the validation population. Accuracy of prediction within PB populations ranged from 0.43 to 0.62 across GLE and TNB. Accuracies to predict genetic merit of CB animals with one PB population in the training set ranged from 0.12 to 0.28, with the exception of using the CB offspring phenotype of the Dutch Landrace that resulted in an accuracy estimate around 0 for both traits. Accuracies to predict genetic merit of CB animals with both parental PB populations in the training set ranged from 0.17 to 0.30. We conclude that prediction within population and trait had good predictive ability regardless of the trait being the PB or CB performance, whereas using PB population(s) to predict genetic merit of CB animals had zero to moderate predictive ability. We observed that the DGV accuracy of CB animals when training on PB data was greater than or equal to training on CB data. However, when results are corrected for the different levels of reliabilities in the PB and CB training data, we showed that training on CB data does outperform PB data for the prediction of CB genetic merit, indicating that more CB animals should be phenotyped to increase the reliability and, consequently, accuracy of DGV for CB genetic merit.
ERIC Educational Resources Information Center
Kwon, Heekyung
2011-01-01
The objective of this study is to provide a systematic account of three typical phenomena surrounding absolute accuracy of metacomprehension assessments: (1) the absolute accuracy of predictions is typically quite low; (2) there exist individual differences in absolute accuracy of predictions as a function of reading skill; and (3) postdictions…
Accuracies of univariate and multivariate genomic prediction models in African cassava.
Okeke, Uche Godfrey; Akdemir, Deniz; Rabbi, Ismail; Kulakow, Peter; Jannink, Jean-Luc
2017-12-04
Genomic selection (GS) promises to accelerate genetic gain in plant breeding programs especially for crop species such as cassava that have long breeding cycles. Practically, to implement GS in cassava breeding, it is necessary to evaluate different GS models and to develop suitable models for an optimized breeding pipeline. In this paper, we compared (1) prediction accuracies from a single-trait (uT) and a multi-trait (MT) mixed model for a single-environment genetic evaluation (Scenario 1), and (2) accuracies from a compound symmetric multi-environment model (uE) parameterized as a univariate multi-kernel model to a multivariate (ME) multi-environment mixed model that accounts for genotype-by-environment interaction for multi-environment genetic evaluation (Scenario 2). For these analyses, we used 16 years of public cassava breeding data for six target cassava traits and a fivefold cross-validation scheme with 10-repeat cycles to assess model prediction accuracies. In Scenario 1, the MT models had higher prediction accuracies than the uT models for all traits and locations analyzed, which amounted to on average a 40% improved prediction accuracy. For Scenario 2, we observed that the ME model had on average (across all locations and traits) a 12% improved prediction accuracy compared to the uE model. We recommend the use of multivariate mixed models (MT and ME) for cassava genetic evaluation. These models may be useful for other plant species.
The Use of Linear Programming for Prediction.
ERIC Educational Resources Information Center
Schnittjer, Carl J.
The purpose of the study was to develop a linear programming model to be used for prediction, test the accuracy of the predictions, and compare the accuracy with that produced by curvilinear multiple regression analysis. (Author)
PPCM: Combing multiple classifiers to improve protein-protein interaction prediction
Yao, Jianzhuang; Guo, Hong; Yang, Xiaohan
2015-08-01
Determining protein-protein interaction (PPI) in biological systems is of considerable importance, and prediction of PPI has become a popular research area. Although different classifiers have been developed for PPI prediction, no single classifier seems to be able to predict PPI with high confidence. We postulated that by combining individual classifiers the accuracy of PPI prediction could be improved. We developed a method called protein-protein interaction prediction classifiers merger (PPCM), and this method combines output from two PPI prediction tools, GO2PPI and Phyloprof, using the Random Forests algorithm. The performance of PPCM was tested by area under the curve (AUC) using an assembled Gold Standard database that contains both positive and negative PPI pairs. Our AUC test showed that PPCM significantly improved the PPI prediction accuracy over the corresponding individual classifiers. We found that additional classifiers incorporated into PPCM could lead to further improvement in the PPI prediction accuracy. Furthermore, cross species PPCM could achieve competitive and even better prediction accuracy compared to the single species PPCM. This study established a robust pipeline for PPI prediction by integrating multiple classifiers using the Random Forests algorithm. Ultimately, this pipeline will be useful for predicting PPI in nonmodel species.
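The merging step is conceptually a stacked classifier: scores from the individual predictors become features for a Random Forest trained on gold-standard labels and evaluated by AUC. The sketch below illustrates that idea with simulated scores; the score distributions and variable names are assumptions, not the PPCM pipeline itself.

```python
# Hypothetical illustration: merge two base-classifier scores with a Random Forest and compare AUCs.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(5)
labels = rng.integers(0, 2, 1000)                      # gold-standard PPI (1) / non-PPI (0) pairs
score_a = 0.6 * labels + 0.8 * rng.random(1000)        # output of one base predictor (simulated)
score_b = 0.4 * labels + 1.0 * rng.random(1000)        # output of another base predictor (simulated)
X = np.column_stack([score_a, score_b])

X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=0)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("AUC, base predictor A:", round(roc_auc_score(y_te, X_te[:, 0]), 2))
print("AUC, base predictor B:", round(roc_auc_score(y_te, X_te[:, 1]), 2))
print("AUC, merged classifier:", round(roc_auc_score(y_te, rf.predict_proba(X_te)[:, 1]), 2))
```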
Blanche, Paul; Proust-Lima, Cécile; Loubère, Lucie; Berr, Claudine; Dartigues, Jean-François; Jacqmin-Gadda, Hélène
2015-03-01
Thanks to the growing interest in personalized medicine, joint modeling of longitudinal marker and time-to-event data has recently started to be used to derive dynamic individual risk predictions. Individual predictions are called dynamic because they are updated when information on the subject's health profile grows with time. We focus in this work on statistical methods for quantifying and comparing dynamic predictive accuracy of this kind of prognostic models, accounting for right censoring and possibly competing events. Dynamic area under the ROC curve (AUC) and Brier Score (BS) are used to quantify predictive accuracy. Nonparametric inverse probability of censoring weighting is used to estimate dynamic curves of AUC and BS as functions of the time at which predictions are made. Asymptotic results are established and both pointwise confidence intervals and simultaneous confidence bands are derived. Tests are also proposed to compare the dynamic prediction accuracy curves of two prognostic models. The finite sample behavior of the inference procedures is assessed via simulations. We apply the proposed methodology to compare various prediction models using repeated measures of two psychometric tests to predict dementia in the elderly, accounting for the competing risk of death. Models are estimated on the French Paquid cohort and predictive accuracies are evaluated and compared on the French Three-City cohort. © 2014, The International Biometric Society.
Bio-knowledge based filters improve residue-residue contact prediction accuracy.
Wozniak, P P; Pelc, J; Skrzypecki, M; Vriend, G; Kotulska, M
2018-05-29
Residue-residue contact prediction through direct coupling analysis has reached impressive accuracy, but yet higher accuracy will be needed to allow for routine modelling of protein structures. One way to improve the prediction accuracy is to filter predicted contacts using knowledge about the particular protein of interest or knowledge about protein structures in general. We focus on the latter and discuss a set of filters that can be used to remove false positive contact predictions. Each filter depends on one or a few cut-off parameters for which the filter performance was investigated. Combining all filters while using default parameters resulted for a test-set of 851 protein domains in the removal of 29% of the predictions of which 92% were indeed false positives. All data and scripts are available from http://comprec-lin.iiar.pwr.edu.pl/FPfilter/. malgorzata.kotulska@pwr.edu.pl. Supplementary data are available at Bioinformatics online.
Protein contact prediction using patterns of correlation.
Hamilton, Nicholas; Burrage, Kevin; Ragan, Mark A; Huber, Thomas
2004-09-01
We describe a new method for using neural networks to predict residue contact pairs in a protein. The main inputs to the neural network are a set of 25 measures of correlated mutation between all pairs of residues in two "windows" of size 5 centered on the residues of interest. While the individual pair-wise correlations are a relatively weak predictor of contact, by training the network on windows of correlation the accuracy of prediction is significantly improved. The neural network is trained on a set of 100 proteins and then tested on a disjoint set of 1033 proteins of known structure. An average predictive accuracy of 21.7% is obtained taking the best L/2 predictions for each protein, where L is the sequence length. Taking the best L/10 predictions gives an average accuracy of 30.7%. The predictor is also tested on a set of 59 proteins from the CASP5 experiment. The accuracy is found to be relatively consistent across different sequence lengths, but to vary widely according to the secondary structure. Predictive accuracy is also found to improve by using multiple sequence alignments containing many sequences to calculate the correlations. Copyright 2004 Wiley-Liss, Inc.
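The L/2 and L/10 evaluation convention mentioned above, which scores only the highest-ranked predicted pairs, is sketched below on a simulated contact map; the contact density, scores, and the sequence-separation cut-off are assumptions for illustration.

```python
# Hypothetical illustration: accuracy of the top L/2 and top L/10 ranked contact predictions.
import numpy as np

rng = np.random.default_rng(6)
L = 150                                                     # sequence length
contacts = rng.random((L, L)) < 0.02
contacts = np.triu(contacts | contacts.T, k=6)              # symmetric map, ignore |i - j| < 6
scores = np.triu(rng.random((L, L)) + 0.5 * contacts, k=6)  # predictor output, higher = more likely

pairs = np.argwhere(scores > 0)
ranking = np.argsort(scores[pairs[:, 0], pairs[:, 1]])[::-1]
for n_top in (L // 2, L // 10):
    top = pairs[ranking[:n_top]]
    accuracy = contacts[top[:, 0], top[:, 1]].mean()
    print(f"top {n_top:3d} predictions: accuracy = {accuracy:.2f}")
```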
Karzmark, Peter; Deutsch, Gayle K
2018-01-01
This investigation was designed to determine the predictive accuracy of a comprehensive neuropsychological and a brief neuropsychological test battery with regard to the capacity to perform instrumental activities of daily living (IADLs). Accuracy statistics that included measures of sensitivity, specificity, positive and negative predictive power, and the positive likelihood ratio were calculated for both types of batteries. The sample was drawn from a general neurological group of adults (n = 117) that included a number of older participants (age >55; n = 38). Standardized neuropsychological assessments were administered to all participants and comprised the Halstead-Reitan Battery and portions of the Wechsler Adult Intelligence Scale-III. A comprehensive test battery yielded a moderate increase over base rate in predictive accuracy that generalized to older individuals. There was only limited support for using a brief battery, for although sensitivity was high, specificity was low. We found that a comprehensive neuropsychological test battery provided good classification accuracy for predicting IADL capacity.
Correcting Memory Improves Accuracy of Predicted Task Duration
ERIC Educational Resources Information Center
Roy, Michael M.; Mitten, Scott T.; Christenfeld, Nicholas J. S.
2008-01-01
People are often inaccurate in predicting task duration. The memory bias explanation holds that this error is due to people having incorrect memories of how long previous tasks have taken, and these biased memories cause biased predictions. Therefore, the authors examined the effect on increasing predictive accuracy of correcting memory through…
NASA Astrophysics Data System (ADS)
Dyar, M. Darby; Giguere, Stephen; Carey, CJ; Boucher, Thomas
2016-12-01
This project examines the causes, effects, and optimization of continuum removal in laser-induced breakdown spectroscopy (LIBS) to produce the best possible prediction accuracy of elemental composition in geological samples. We compare prediction accuracy resulting from several different techniques for baseline removal, including asymmetric least squares (ALS), adaptive iteratively reweighted penalized least squares (Air-PLS), fully automatic baseline correction (FABC), continuous wavelet transformation, median filtering, polynomial fitting, the iterative thresholding Dietrich method, convex hull/rubber band techniques, and a newly-developed technique for Custom baseline removal (BLR). We assess the predictive performance of these methods using partial least-squares analysis for 13 elements of geological interest, expressed as the weight percentages of SiO2, Al2O3, TiO2, FeO, MgO, CaO, Na2O, K2O, and the parts per million concentrations of Ni, Cr, Zn, Mn, and Co. We find that previously published methods for baseline subtraction generally produce equivalent prediction accuracies for major elements. When those pre-existing methods are used, automated optimization of their adjustable parameters is always necessary to wring the best predictive accuracy out of a data set; ideally, it should be done for each individual variable. The new technique of Custom BLR produces significant improvements in prediction accuracy over existing methods across varying geological data sets, instruments, and varying analytical conditions. These results also demonstrate the dual objectives of the continuum removal problem: removing a smooth underlying signal to fit individual peaks (univariate analysis) versus using feature selection to select only those channels that contribute to best prediction accuracy for multivariate analyses. Overall, the current practice of using generalized, one-method-fits-all-spectra baseline removal results in poorer predictive performance for all methods. The extra steps needed to optimize baseline removal for each predicted variable and empower multivariate techniques with the best possible input data for optimal prediction accuracy are shown to be well worth the slight increase in necessary computations and complexity.
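One of the baseline-removal techniques compared above, asymmetric least squares (ALS), is sketched below on a synthetic spectrum; the smoothness (lam) and asymmetry (p) settings are exactly the kind of adjustable parameters the text argues should be optimized per predicted variable. This is a generic textbook-style ALS, not the authors' implementation.

```python
# Hypothetical illustration: asymmetric least squares (ALS) continuum removal.
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

def als_baseline(y, lam=1e5, p=0.01, n_iter=10):
    n = len(y)
    D = sparse.diags([1, -2, 1], [0, -1, -2], shape=(n, n - 2))
    penalty = lam * D.dot(D.T)                      # second-difference smoothness penalty
    w = np.ones(n)
    for _ in range(n_iter):
        W = sparse.spdiags(w, 0, n, n)
        z = spsolve((W + penalty).tocsc(), w * y)   # weighted penalized least squares fit
        w = p * (y > z) + (1 - p) * (y < z)         # small weight p for points above the fit (peaks)
    return z

x = np.linspace(0, 1, 500)
spectrum = np.exp(-((x - 0.5) / 0.01) ** 2) + 0.5 * x + 0.2   # one peak on a sloping continuum
corrected = spectrum - als_baseline(spectrum)
print("peak height after continuum removal:", round(float(corrected.max()), 2))
```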
Jiang, Y; Zhao, Y; Rodemann, B; Plieske, J; Kollers, S; Korzun, V; Ebmeyer, E; Argillier, O; Hinze, M; Ling, J; Röder, M S; Ganal, M W; Mette, M F; Reif, J C
2015-03-01
Genome-wide mapping approaches in diverse populations are powerful tools to unravel the genetic architecture of complex traits. The main goals of our study were to investigate the potential and limits to unravel the genetic architecture and to identify the factors determining the accuracy of prediction of the genotypic variation of Fusarium head blight (FHB) resistance in wheat (Triticum aestivum L.) based on data collected with a diverse panel of 372 European varieties. The wheat lines were phenotyped in multi-location field trials for FHB resistance and genotyped with 782 simple sequence repeat (SSR) markers, and 9k and 90k single-nucleotide polymorphism (SNP) arrays. We applied genome-wide association mapping in combination with fivefold cross-validations and observed surprisingly high accuracies of prediction for marker-assisted selection based on the detected quantitative trait loci (QTLs). Using a random sample of markers not selected for marker-trait associations revealed only a slight decrease in prediction accuracy compared with marker-based selection exploiting the QTL information. The same picture was confirmed in a simulation study, suggesting that relatedness is a main driver of the accuracy of prediction in marker-assisted selection of FHB resistance. When the accuracy of prediction of three genomic selection models was contrasted for the three marker data sets, no significant differences in accuracies among marker platforms and genomic selection models were observed. Marker density impacted the accuracy of prediction only marginally. Consequently, genomic selection of FHB resistance can be implemented most cost-efficiently based on low- to medium-density SNP arrays.
Prediction algorithms for urban traffic control
DOT National Transportation Integrated Search
1979-02-01
The objectives of this study are to 1) review and assess the state-of-the-art of prediction algorithms for urban traffic control in terms of their accuracy and application, and 2) determine the prediction accuracy obtainable by examining the performa...
Medium- and Long-term Prediction of LOD Change by the Leap-step Autoregressive Model
NASA Astrophysics Data System (ADS)
Wang, Qijie
2015-08-01
The accuracy of medium- and long-term prediction of length of day (LOD) change based on the combined least-squares and autoregressive (LS+AR) model deteriorates gradually. The leap-step autoregressive (LSAR) model can significantly reduce the edge effect of the observation sequence. In particular, the LSAR model greatly improves the resolution of the low-frequency components of the signal. Therefore, it can improve the efficiency of prediction. In this work, the LSAR model is used to forecast LOD change. The LOD series from EOP 08 C04 provided by the IERS is modeled by both the LSAR and AR models. The results of the two models are analyzed and compared. When the prediction length is between 10 and 30 days, the accuracy improvement is less than 10%. When the prediction length exceeds 30 days, the accuracy improves markedly, with the maximum improvement being around 19%. The results show that the LSAR model has higher prediction accuracy and stability in medium- and long-term prediction.
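As context for the comparison above, a plain autoregressive extrapolation of an LOD-like series is sketched below; the leap-step resampling that defines LSAR is the paper's contribution and is not reproduced here, and the synthetic series, lag order, and horizon are assumptions.

```python
# Hypothetical illustration: AR extrapolation of a synthetic LOD-like series.
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

rng = np.random.default_rng(7)
t = np.arange(2000)
lod = (0.3 * np.sin(2 * np.pi * t / 365.25)        # annual term (ms)
       + 0.1 * np.sin(2 * np.pi * t / 13.66)       # fortnightly tidal term (ms)
       + 0.02 * rng.standard_normal(t.size))       # noise

horizon = 60
train = lod[:-horizon]
model = AutoReg(train, lags=30).fit()
forecast = model.predict(start=len(train), end=len(train) + horizon - 1)
rmse = float(np.sqrt(np.mean((forecast - lod[-horizon:]) ** 2)))
print(f"{horizon}-day prediction RMSE: {rmse:.4f} ms")
```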
Lee, S Hong; Clark, Sam; van der Werf, Julius H J
2017-01-01
Genomic prediction is emerging in a wide range of fields including animal and plant breeding, risk prediction in human precision medicine, and forensics. It is desirable to establish a theoretical framework for genomic prediction accuracy when the reference data consist of information sources with varying degrees of relationship to the target individuals. In genomic prediction, a reference set can contain both close and distant relatives as well as 'unrelated' individuals from the wider population. The various sources of information were modeled as different populations with different effective population sizes (Ne). Both the effective number of chromosome segments (Me) and Ne are considered to be a function of the data used for prediction. We validate our theory with analyses of simulated as well as real data, and illustrate that the variation in genomic relationships with the target is a predictor of the information content of the reference set. With a similar amount of data available for each source, we show that close relatives can have a substantially larger effect on genomic prediction accuracy than less-related individuals. We also illustrate that when prediction relies on closer relatives, there is less improvement in prediction accuracy with an increase in training data or marker panel density. We release software that can estimate the expected prediction accuracy and power when combining different reference sources with various degrees of relationship to the target, which is useful when planning genomic prediction (before or after collecting data) in animal, plant and human genetics.
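The expected-accuracy calculation mentioned at the end is, in spirit, a function of the reference size, the trait heritability, and the effective number of chromosome segments Me. The sketch below uses the widely cited deterministic approximation r = sqrt(N h^2 / (N h^2 + Me)); the authors' own derivation for mixed reference sources may differ, and the numbers are illustrative.

```python
# Hypothetical illustration: expected genomic prediction accuracy as a function of Me.
import numpy as np

def expected_accuracy(n_reference, h2, me):
    # Deterministic approximation: r = sqrt(N h^2 / (N h^2 + Me))
    return np.sqrt(n_reference * h2 / (n_reference * h2 + me))

n_reference, h2 = 10000, 0.3
for me in (500, 5000, 50000):       # closer relatives imply a smaller effective Me
    print(f"Me = {me:6d}: expected accuracy = {expected_accuracy(n_reference, h2, me):.2f}")
```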
On the accuracy of ERS-1 orbit predictions
NASA Technical Reports Server (NTRS)
Koenig, Rolf; Li, H.; Massmann, Franz-Heinrich; Raimondo, J. C.; Rajasenan, C.; Reigber, C.
1993-01-01
Since the launch of ERS-1, the D-PAF (German Processing and Archiving Facility) regularly provides orbit predictions for the worldwide SLR (Satellite Laser Ranging) tracking network. The weekly distributed orbital elements are so-called tuned IRVs and tuned SAO elements. The tuning procedure, designed to improve the accuracy of the recovery of the orbit at the stations, is discussed based on numerical results. This shows that tuning of elements is essential for ERS-1 with the currently applied tracking procedures. The orbital elements are updated by daily distributed time bias functions. The generation of the time bias function is explained. Problems and numerical results are presented. The time bias function increases the prediction accuracy considerably. Finally, the quality assessment of ERS-1 orbit predictions is described. The accuracy is compiled for about 250 days since launch. The average accuracy lies in the range of 50-100 ms and has improved considerably.
Krendl, Anne C; Rule, Nicholas O; Ambady, Nalini
2014-09-01
Young adults can be surprisingly accurate at making inferences about people from their faces. Although these first impressions have important consequences for both the perceiver and the target, it remains an open question whether first impression accuracy is preserved with age. Specifically, could age differences in impressions toward others stem from age-related deficits in accurately detecting complex social cues? Research on aging and impression formation suggests that young and older adults show relative consensus in their first impressions, but it is unknown whether they differ in accuracy. It has been widely shown that aging disrupts emotion recognition accuracy, and that these impairments may predict deficits in other social judgments, such as detecting deceit. However, it is unclear whether general impression formation accuracy (e.g., emotion recognition accuracy, detecting complex social cues) relies on similar or distinct mechanisms. It is important to examine this question to evaluate how, if at all, aging might affect overall accuracy. Here, we examined whether aging impaired first impression accuracy in predicting real-world outcomes and categorizing social group membership. Specifically, we studied whether emotion recognition accuracy and age-related cognitive decline (which has been implicated in exacerbating deficits in emotion recognition) predict first impression accuracy. Our results revealed that emotion recognition accuracy did not predict first impression accuracy, nor did age-related cognitive decline impair it. These findings suggest that domains of social perception outside of emotion recognition may rely on mechanisms that are relatively unimpaired by aging. PsycINFO Database Record (c) 2014 APA, all rights reserved.
Posterior Predictive Checks for Conditional Independence between Response Time and Accuracy
ERIC Educational Resources Information Center
Bolsinova, Maria; Tijmstra, Jesper
2016-01-01
Conditional independence (CI) between response time and response accuracy is a fundamental assumption of many joint models for time and accuracy used in educational measurement. In this study, posterior predictive checks (PPCs) are proposed for testing this assumption. These PPCs are based on three discrepancy measures reflecting different…
The microcomputer scientific software series 4: testing prediction accuracy.
H. Michael Rauscher
1986-01-01
A computer program, ATEST, is described in this combination user's guide / programmer's manual. ATEST provides users with an efficient and convenient tool to test the accuracy of predictors. As input ATEST requires observed-predicted data pairs. The output reports the two components of accuracy, bias and precision.
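The two accuracy components named above can be computed from observed-predicted pairs as in the sketch below; ATEST's exact definitions may differ, and the synthetic data and names are assumptions used only to illustrate bias versus precision.

```python
# Hypothetical illustration: bias (mean error) and precision (spread of errors) of a predictor.
import numpy as np

rng = np.random.default_rng(8)
observed = rng.normal(50.0, 10.0, 100)
predicted = observed + rng.normal(2.0, 4.0, 100)   # predictor with a systematic offset and scatter

errors = predicted - observed
bias = errors.mean()                               # systematic over- or under-prediction
precision = errors.std(ddof=1)                     # scatter of errors about the bias
print(f"bias = {bias:.2f}, precision (SD of errors) = {precision:.2f}")
```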
Belay, T K; Dagnachew, B S; Boison, S A; Ådnøy, T
2018-03-28
Milk infrared spectra are routinely used for phenotyping traits of interest through links developed between the traits and spectra. Predicted individual traits are then used in genetic analyses for estimated breeding value (EBV) or for phenotypic predictions using a single-trait mixed model; this approach is referred to as indirect prediction (IP). An alternative approach [direct prediction (DP)] is a direct genetic analysis of (a reduced dimension of) the spectra using a multitrait model to predict multivariate EBV of the spectral components and, ultimately, also to predict the univariate EBV or phenotype for the traits of interest. We simulated 3 traits under different genetic (low: 0.10 to high: 0.90) and residual (zero to high: ±0.90) correlation scenarios between the 3 traits and assumed the first trait is a linear combination of the other 2 traits. The aim was to compare the IP and DP approaches for predictions of EBV and phenotypes under the different correlation scenarios. We also evaluated relationships between performances of the 2 approaches and the accuracy of calibration equations. Moreover, the effect of using different regression coefficients estimated from simulated phenotypes (β_p), true breeding values (β_g), and residuals (β_r) on performance of the 2 approaches was evaluated. The simulated data contained 2,100 parents (100 sires and 2,000 cows) and 8,000 offspring (4 offspring per cow). Of the 8,000 observations, 2,000 were randomly selected and used to develop links between the first and the other 2 traits using partial least squares (PLS) regression analysis. The different PLS regression coefficients, such as β_p, β_g, and β_r, were used in subsequent predictions following the IP and DP approaches. We used BLUP analyses for the remaining 6,000 observations using the true (co)variance components that had been used for the simulation. Accuracy of prediction (of EBV and phenotype) was calculated as a correlation between predicted and true values from the simulations. The results showed that accuracies of EBV prediction were higher in the DP than in the IP approach. The reverse was true for accuracy of phenotypic prediction when using β_p but not when using β_g and β_r, where accuracy of phenotypic prediction in the DP was slightly higher than in the IP approach. Within the DP approach, accuracies of EBV when using β_g were higher than when using β_p only at the low genetic correlation scenario. However, we found no differences in EBV prediction accuracy between β_p and β_g in the IP approach. Accuracy of the calibration models increased with an increase in genetic and residual correlations between the traits. Performance of both approaches increased with an increase in accuracy of the calibration models. In conclusion, the DP approach is a good strategy for EBV prediction but not for phenotypic prediction, where the classical PLS regression-based equations or the IP approach provided better results. The Authors. Published by FASS Inc. and Elsevier Inc. on behalf of the American Dairy Science Association®. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/3.0/).
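The calibration step behind the IP approach, linking spectra to a trait with partial least squares and then predicting phenotypes for downstream analysis, is sketched below; the simulated spectra, dimensions, and number of PLS components are assumptions, not the study's data.

```python
# Hypothetical illustration: PLS calibration of a trait from milk-spectra-like data.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(9)
n_samples, n_wavenumbers = 2000, 300
spectra = rng.normal(size=(n_samples, n_wavenumbers))
trait = spectra @ rng.normal(0, 0.1, n_wavenumbers) + rng.normal(0, 0.5, n_samples)

X_tr, X_te, y_tr, y_te = train_test_split(spectra, trait, test_size=0.3, random_state=0)
pls = PLSRegression(n_components=20).fit(X_tr, y_tr)
predicted = pls.predict(X_te).ravel()
print("calibration accuracy (r):", round(float(np.corrcoef(predicted, y_te)[0, 1]), 2))
```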
NASA Technical Reports Server (NTRS)
1988-01-01
One innovation developed by a contractor at Ames Research Center was an open-cell polymeric foam material with unusual properties. Intended as padding for aircraft seats, the material offered better impact protection in accidents and also enhanced passenger comfort because it distributed body weight evenly over the entire contact area. Called a slow springback foam, it flows to match the contour of the body pressing against it and returns to its original shape once the pressure is removed. It has many applications, including aircraft cushions and padding, dental stools, and athletic equipment. It is now used by Dynamic Systems, Inc. for medical applications such as wheelchairs for severely disabled people, which allow them to sit comfortably for 3-8 hours where they previously became uncomfortable within 15-30 minutes.
Reshaping of large aeronautical structural parts: A simplified simulation approach
NASA Astrophysics Data System (ADS)
Mena, Ramiro; Aguado, José V.; Guinard, Stéphane; Huerta, Antonio
2018-05-01
Large aeronautical structural parts exhibit significant distortions after machining. This problem is caused by the presence of residual stresses, which develop during previous manufacturing steps (quenching). Before the part is put into service, the nominal geometry is restored by means of mechanical methods. This operation is called reshaping and depends exclusively on the skills of a well-trained and experienced operator. Moreover, the procedure is time-consuming and is currently based only on a trial-and-error approach. Therefore, there is a need at the industrial level to solve this problem with the support of numerical simulation tools. Under a simplifying hypothesis, the springback phenomenon was found to behave linearly, which allows a strategy for implementing reshaping at an industrial level to be developed.
Neurocognitive and Behavioral Predictors of Math Performance in Children with and without ADHD
Antonini, Tanya N.; O’Brien, Kathleen M.; Narad, Megan E.; Langberg, Joshua M.; Tamm, Leanne; Epstein, Jeff N.
2014-01-01
Objective: This study examined neurocognitive and behavioral predictors of math performance in children with and without attention-deficit/hyperactivity disorder (ADHD). Method: Neurocognitive and behavioral variables were examined as predictors of 1) standardized mathematics achievement scores, 2) productivity on an analog math task, and 3) accuracy on an analog math task. Results: Children with ADHD had lower achievement scores but did not significantly differ from controls on math productivity or accuracy. N-back accuracy and parent-rated attention predicted math achievement. N-back accuracy and observed attention predicted math productivity. Alerting scores on the Attentional Network Task predicted math accuracy. Mediation analyses indicated that n-back accuracy significantly mediated the relationship between diagnostic group and math achievement. Conclusion: Neurocognition, rather than behavior, may account for the deficits in math achievement exhibited by many children with ADHD. PMID:24071774
Neurocognitive and Behavioral Predictors of Math Performance in Children With and Without ADHD.
Antonini, Tanya N; Kingery, Kathleen M; Narad, Megan E; Langberg, Joshua M; Tamm, Leanne; Epstein, Jeffery N
2016-02-01
This study examined neurocognitive and behavioral predictors of math performance in children with and without ADHD. Neurocognitive and behavioral variables were examined as predictors of (a) standardized mathematics achievement scores, (b) productivity on an analog math task, and (c) accuracy on an analog math task. Children with ADHD had lower achievement scores but did not significantly differ from controls on math productivity or accuracy. N-back accuracy and parent-rated attention predicted math achievement. N-back accuracy and observed attention predicted math productivity. Alerting scores on the attentional network task predicted math accuracy. Mediation analyses indicated that n-back accuracy significantly mediated the relationship between diagnostic group and math achievement. Neurocognition, rather than behavior, may account for the deficits in math achievement exhibited by many children with ADHD. © The Author(s) 2013.
Artificial neural network prediction of ischemic tissue fate in acute stroke imaging
Huang, Shiliang; Shen, Qiang; Duong, Timothy Q
2010-01-01
Multimodal magnetic resonance imaging of acute stroke provides predictive value that can be used to guide stroke therapy. A flexible artificial neural network (ANN) algorithm was developed and applied to predict ischemic tissue fate on three stroke groups: 30-, 60-minute, and permanent middle cerebral artery occlusion in rats. Cerebral blood flow (CBF), apparent diffusion coefficient (ADC), and spin–spin relaxation time constant (T2) were acquired during the acute phase up to 3 hours and again at 24 hours followed by histology. Infarct was predicted on a pixel-by-pixel basis using only acute (30-minute) stroke data. In addition, neighboring pixel information and infarction incidence were also incorporated into the ANN model to improve prediction accuracy. Receiver-operating characteristic analysis was used to quantify prediction accuracy. The major findings were the following: (1) CBF alone poorly predicted the final infarct across three experimental groups; (2) ADC alone adequately predicted the infarct; (3) CBF+ADC improved the prediction accuracy; (4) inclusion of neighboring pixel information and infarction incidence further improved the prediction accuracy; and (5) prediction was more accurate for permanent occlusion, followed by 60- and 30-minute occlusion. The ANN predictive model could thus provide a flexible and objective framework for clinicians to evaluate stroke treatment options on an individual patient basis. PMID:20424631
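A pixel-wise classifier of infarction from acute imaging values, evaluated by ROC analysis, can be sketched as below; the simulated CBF/ADC values and the small multilayer perceptron are stand-ins and do not reproduce the study's ANN, its neighboring-pixel features, or the infarction-incidence prior.

```python
# Hypothetical illustration: pixel-wise infarct prediction from CBF and ADC with ROC evaluation.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(10)
n_pixels = 5000
cbf = rng.normal(1.0, 0.3, n_pixels)                    # normalized cerebral blood flow
adc = rng.normal(1.0, 0.2, n_pixels)                    # normalized apparent diffusion coefficient
infarct = ((cbf < 0.7) & (adc < 0.9)).astype(int)       # toy rule standing in for 24-h outcome
infarct ^= (rng.random(n_pixels) < 0.05).astype(int)    # add label noise

X = np.column_stack([cbf, adc])
X_tr, X_te, y_tr, y_te = train_test_split(X, infarct, test_size=0.3, random_state=0)
ann = MLPClassifier(hidden_layer_sizes=(10,), max_iter=1000, random_state=0).fit(X_tr, y_tr)
print("ROC AUC:", round(roc_auc_score(y_te, ann.predict_proba(X_te)[:, 1]), 2))
```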
NASA Astrophysics Data System (ADS)
Wang, Chuantao (C. T.)
2005-08-01
In the past decade, sheet metal forming and die development has been transformed from a tryout-based craft into a science-based, technology-driven engineering and manufacturing enterprise. Stamping CAE, especially sheet metal forming simulation, as one of the core components of digital die making and digital stamping, has played a key role in this historical transition. The stamping simulation technology and its industrial applications have greatly impacted automotive sheet metal product design, die development, die construction and tryout, and production stamping. The stamping CAE community has successfully resolved the traditional formability problems such as splits and wrinkles. The evolution of the stamping CAE technology and business demands opens even greater opportunities and challenges to the stamping CAE community in the areas of (1) continuously improving simulation accuracy, drastically reducing simulation time-in-system, and improving operationalability (friendliness), (2) resolving those historically difficult-to-resolve problems such as dimensional quality problems (springback and twist) and surface quality problems (distortion and skid/impact lines), (3) resolving total manufacturability problems in line die operations including blanking, draw/redraw, trim/piercing, and flanging, and (4) overcoming new problems in forming new sheet materials with new forming techniques. In this article, the author first provides an overview of the stamping CAE technology adventures, achievements, and industrial applications in the past decade. Then the author presents a summary of increasing manufacturability needs, from formability to the total quality and total manufacturability of sheet metal stampings. Finally, the paper outlines the new needs and trends for continuous improvements and innovations to meet increasing challenges in line die formability and quality requirements in automotive stamping.
Cross-validation of recent and longstanding resting metabolic rate prediction equations
USDA-ARS?s Scientific Manuscript database
Resting metabolic rate (RMR) measurement is time consuming and requires specialized equipment. Prediction equations provide an easy method to estimate RMR; however, their accuracy likely varies across individuals. Understanding the factors that influence predicted RMR accuracy at the individual lev...
Prospects for Genomic Selection in Cassava Breeding.
Wolfe, Marnin D; Del Carpio, Dunia Pino; Alabi, Olumide; Ezenwaka, Lydia C; Ikeogu, Ugochukwu N; Kayondo, Ismail S; Lozano, Roberto; Okeke, Uche G; Ozimati, Alfred A; Williams, Esuma; Egesi, Chiedozie; Kawuki, Robert S; Kulakow, Peter; Rabbi, Ismail Y; Jannink, Jean-Luc
2017-11-01
Cassava (Manihot esculenta Crantz) is a clonally propagated staple food crop in the tropics. Genomic selection (GS) has been implemented at three breeding institutions in Africa to reduce cycle times. Initial studies provided promising estimates of predictive abilities. Here, we expand on previous analyses by assessing the accuracy of seven prediction models for seven traits in three prediction scenarios: cross-validation within populations, cross-population prediction and cross-generation prediction. We also evaluated the impact of increasing the training population (TP) size by phenotyping progenies selected either at random or with a genetic algorithm. Cross-validation results were mostly consistent across programs, with nonadditive models predicting 10% better on average. Cross-population accuracy was generally low (mean = 0.18) but prediction of cassava mosaic disease increased by up to 57% in one Nigerian population when data from another related population were combined. Accuracy across generations was poorer than within-generation accuracy, as expected, but accuracy for dry matter content and mosaic disease severity should be sufficient for rapid-cycling GS. Selection of a prediction model made some difference across generations, but increasing TP size was more important. With a genetic algorithm, selection of one-third of progeny could achieve an accuracy equivalent to phenotyping all progeny. We are in the early stages of GS for this crop but the results are promising for some traits. General guidelines that are emerging are that TPs need to continue to grow but phenotyping can be done on a cleverly selected subset of individuals, reducing the overall phenotyping burden. Copyright © 2017 Crop Science Society of America.
He, Jun; Xu, Jiaqi; Wu, Xiao-Lin; Bauck, Stewart; Lee, Jungjae; Morota, Gota; Kachman, Stephen D; Spangler, Matthew L
2018-04-01
SNP chips are commonly used for genotyping animals in genomic selection but strategies for selecting low-density (LD) SNPs for imputation-mediated genomic selection have not been addressed adequately. The main purpose of the present study was to compare the performance of eight LD (6K) SNP panels, each selected by a different strategy exploiting a combination of three major factors: evenly-spaced SNPs, increased minor allele frequencies, and SNP-trait associations either for single traits independently or for all the three traits jointly. The imputation accuracies from 6K to 80K SNP genotypes were between 96.2 and 98.2%. Genomic prediction accuracies obtained using imputed 80K genotypes were between 0.817 and 0.821 for daughter pregnancy rate, between 0.838 and 0.844 for fat yield, and between 0.850 and 0.863 for milk yield. The two SNP panels optimized on the three major factors had the highest genomic prediction accuracy (0.821-0.863), and these accuracies were very close to those obtained using observed 80K genotypes (0.825-0.868). Further exploration of the underlying relationships showed that genomic prediction accuracies did not respond linearly to imputation accuracies, but were significantly affected by genotype (imputation) errors of SNPs in association with the traits to be predicted. SNPs optimal for map coverage and MAF were favorable for obtaining accurate imputation of genotypes whereas trait-associated SNPs improved genomic prediction accuracies. Thus, optimal LD SNP panels were the ones that combined both strengths. The present results have practical implications on the design of LD SNP chips for imputation-enabled genomic prediction.
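Two of the low-density panel design strategies compared above, evenly spaced SNPs and highest-MAF SNPs per genomic window, are sketched below; the positions, MAF values, and window scheme are illustrative assumptions rather than the panels used in the study.

```python
# Hypothetical illustration: building 6K low-density panels from an 80K map.
import numpy as np

rng = np.random.default_rng(11)
n_snps, panel_size = 80000, 6000
pos = np.sort(rng.uniform(0, 3e9, n_snps))        # genome coordinates (bp)
maf = rng.uniform(0.0, 0.5, n_snps)

# Strategy 1: evenly spaced panel along the map
even_idx = np.linspace(0, n_snps - 1, panel_size).astype(int)

# Strategy 2: highest-MAF SNP within each of panel_size equal-width windows
edges = np.linspace(pos[0], pos[-1], panel_size + 1)
window = np.digitize(pos, edges[1:-1])
maf_idx = []
for w in range(panel_size):
    in_window = np.flatnonzero(window == w)
    if in_window.size:
        maf_idx.append(in_window[np.argmax(maf[in_window])])
maf_idx = np.array(maf_idx)

print("evenly spaced panel:", even_idx.size, "SNPs; max-MAF panel:", maf_idx.size, "SNPs")
```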
Maden, Orhan; Balci, Kevser Gülcihan; Selcuk, Mehmet Timur; Balci, Mustafa Mücahit; Açar, Burak; Unal, Sefa; Kara, Meryem; Selcuk, Hatice
2015-12-01
The aim of this study was to investigate the accuracy of three algorithms in predicting accessory pathway locations in adult patients with Wolff-Parkinson-White syndrome in a Turkish population. A total of 207 adult patients with Wolff-Parkinson-White syndrome were retrospectively analyzed. The most preexcited 12-lead electrocardiogram in sinus rhythm was used for analysis. Two investigators blinded to the patient data used three algorithms for prediction of accessory pathway location. Among all locations, 48.5% were left-sided, 44% were right-sided, and 7.5% were located in the midseptum or anteroseptum. When only exact locations were accepted as a match, predictive accuracy was 71.5% for Chiang, 72.4% for d'Avila, and 71.5% for Arruda. Predictive accuracy did not differ between the algorithms (p = 1.000; p = 0.875; p = 0.885, respectively). The best algorithm for prediction of right-sided, left-sided, and anteroseptal and midseptal accessory pathways was Arruda (p < 0.001). Arruda was significantly better than d'Avila in predicting adjacent sites (p = 0.035), and the percentage of contralateral-site predictions was higher with d'Avila than with Arruda (p = 0.013). All algorithms were similar in predicting accessory pathway location, and the prediction accuracy was lower than previously reported by their authors. However, according to the accessory pathway site, the algorithm designed by Arruda et al. showed better predictions than the other algorithms, and using this algorithm may provide advantages before a planned ablation.
Accuracy test for link prediction in terms of similarity index: The case of WS and BA models
NASA Astrophysics Data System (ADS)
Ahn, Min-Woo; Jung, Woo-Sung
2015-07-01
Link prediction is a technique that uses the topological information in a given network to infer the missing links in it. Since past research on link prediction has primarily focused on enhancing performance for given empirical systems, negligible attention has been devoted to link prediction with regard to network models. In this paper, we thus apply link prediction to two network models: The Watts-Strogatz (WS) model and Barabási-Albert (BA) model. We attempt to gain a better understanding of the relation between accuracy and each network parameter (mean degree, the number of nodes and the rewiring probability in the WS model) through network models. Six similarity indices are used, with precision and area under the ROC curve (AUC) value as the accuracy metrics. We observe a positive correlation between mean degree and accuracy, and size independence of the AUC value.
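A minimal version of the experiment described above, hiding edges of a Watts-Strogatz graph, scoring pairs with a common-neighbors index, and estimating AUC as the probability that a hidden link outscores a random non-link, is sketched below; the graph parameters, probe fraction, and trial count are illustrative assumptions.

```python
# Hypothetical illustration: common-neighbors link prediction and AUC on a WS model graph.
import random
import networkx as nx

random.seed(12)
G = nx.watts_strogatz_graph(n=500, k=8, p=0.1)            # WS model: n nodes, mean degree k, rewiring p
edges = list(G.edges())
probe = random.sample(edges, int(0.1 * len(edges)))        # hidden "missing" links to recover
G_train = G.copy()
G_train.remove_edges_from(probe)
non_edges = list(nx.non_edges(G))                          # pairs with no link in the full graph

def score(u, v):
    return len(list(nx.common_neighbors(G_train, u, v)))   # similarity index on the observed graph

n_trials, hits = 10000, 0.0
for _ in range(n_trials):
    s_probe = score(*random.choice(probe))
    s_none = score(*random.choice(non_edges))
    hits += 1.0 if s_probe > s_none else 0.5 if s_probe == s_none else 0.0
print("AUC of the common-neighbors index:", round(hits / n_trials, 2))
```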
Effectiveness of link prediction for face-to-face behavioral networks.
Tsugawa, Sho; Ohsaki, Hiroyuki
2013-01-01
Research on link prediction for social networks has been actively pursued. In link prediction for a given social network obtained from time-windowed observation, new link formation in the network is predicted from the topology of the obtained network. In contrast, recent advances in sensing technology have made it possible to obtain face-to-face behavioral networks, which are social networks representing face-to-face interactions among people. However, the effectiveness of link prediction techniques for face-to-face behavioral networks has not yet been explored in depth. To clarify this point, here we investigate the accuracy of conventional link prediction techniques for networks obtained from the history of face-to-face interactions among participants at an academic conference. Our findings were (1) that conventional link prediction techniques predict new link formation with a precision of 0.30-0.45 and a recall of 0.10-0.20, (2) that prolonged observation of social networks often degrades the prediction accuracy, (3) that the proposed decaying weight method leads to higher prediction accuracy than can be achieved by observing all records of communication and simply using them unmodified, and (4) that the prediction accuracy for face-to-face behavioral networks is relatively high compared to that for non-social networks, but not as high as for other types of social networks.
The effect of concurrent hand movement on estimated time to contact in a prediction motion task.
Zheng, Ran; Maraj, Brian K V
2018-04-27
In many activities, we need to predict the arrival of an occluded object. This action is called prediction motion or motion extrapolation. Previous researchers have found that both eye tracking and the internal clocking model are involved in the prediction motion task. Additionally, it has been reported that concurrent hand movement facilitates the eye tracking of an externally generated target in a tracking task, even if the target is occluded. The present study examined the effect of concurrent hand movement on the estimated time to contact (TTC) in a prediction motion task. We found that accurate and inaccurate concurrent hand movements had opposite effects on eye tracking accuracy and estimated TTC in the prediction motion task. That is, accurate concurrent hand tracking enhanced eye tracking accuracy and tended to increase the precision of estimated TTC, whereas inaccurate concurrent hand tracking decreased eye tracking accuracy and disrupted estimated TTC. However, eye tracking accuracy does not determine the precision of estimated TTC.
ERIC Educational Resources Information Center
Hilton, N. Zoe; Harris, Grant T.
2009-01-01
Prediction effect sizes such as ROC area are important for demonstrating a risk assessment's generalizability and utility. How a study defines recidivism might affect predictive accuracy. Nonrecidivism is problematic when predicting specialized violence (e.g., domestic violence). The present study cross-validates the ability of the Ontario…
Improving Fermi Orbit Determination and Prediction in an Uncertain Atmospheric Drag Environment
NASA Technical Reports Server (NTRS)
Vavrina, Matthew A.; Newman, Clark P.; Slojkowski, Steven E.; Carpenter, J. Russell
2014-01-01
Orbit determination and prediction of the Fermi Gamma-ray Space Telescope trajectory is strongly impacted by the unpredictability and variability of atmospheric density and the spacecraft's ballistic coefficient. Operationally, Global Positioning System point solutions are processed with an extended Kalman filter for orbit determination, and predictions are generated for conjunction assessment with secondary objects. When these predictions are compared to Joint Space Operations Center radar-based solutions, the close approach distance between the two predictions can greatly differ ahead of the conjunction. This work explores strategies for improving prediction accuracy and helps to explain the prediction disparities. Namely, a tuning analysis is performed to determine atmospheric drag modeling and filter parameters that can improve orbit determination as well as prediction accuracy. A 45% improvement in three-day prediction accuracy is realized by tuning the ballistic coefficient and atmospheric density stochastic models, measurement frequency, and other modeling and filter parameters.
NASA Astrophysics Data System (ADS)
Wang, Qianxin; Hu, Chao; Xu, Tianhe; Chang, Guobin; Hernández Moraleda, Alberto
2017-12-01
Analysis centers (ACs) for global navigation satellite systems (GNSSs) cannot accurately obtain real-time Earth rotation parameters (ERPs). Thus, the prediction of ultra-rapid orbits in the international terrestrial reference system (ITRS) has to utilize the predicted ERPs issued by the International Earth Rotation and Reference Systems Service (IERS) or the International GNSS Service (IGS). In this study, the accuracy of ERPs predicted by IERS and IGS is analyzed. The error of the ERPs predicted for one day can reach 0.15 mas and 0.053 ms in polar motion and UT1-UTC direction, respectively. Then, the impact of ERP errors on ultra-rapid orbit prediction by GNSS is studied. The methods for orbit integration and frame transformation in orbit prediction with introduced ERP errors dominate the accuracy of the predicted orbit. Experimental results show that the transformation from the geocentric celestial references system (GCRS) to ITRS exerts the strongest effect on the accuracy of the predicted ultra-rapid orbit. To obtain the most accurate predicted ultra-rapid orbit, a corresponding real-time orbit correction method is developed. First, orbits without ERP-related errors are predicted on the basis of ITRS observed part of ultra-rapid orbit for use as reference. Then, the corresponding predicted orbit is transformed from GCRS to ITRS to adjust for the predicted ERPs. Finally, the corrected ERPs with error slopes are re-introduced to correct the predicted orbit in ITRS. To validate the proposed method, three experimental schemes are designed: function extrapolation, simulation experiments, and experiments with predicted ultra-rapid orbits and international GNSS Monitoring and Assessment System (iGMAS) products. Experimental results show that using the proposed correction method with IERS products considerably improved the accuracy of ultra-rapid orbit prediction (except the geosynchronous BeiDou orbits). The accuracy of orbit prediction is enhanced by at least 50% (error related to ERP) when a highly accurate observed orbit is used with the correction method. For iGMAS-predicted orbits, the accuracy improvement ranges from 8.5% for the inclined BeiDou orbits to 17.99% for the GPS orbits. This demonstrates that the correction method proposed by this study can optimize the ultra-rapid orbit prediction.
Protein Secondary Structure Prediction Using AutoEncoder Network and Bayes Classifier
NASA Astrophysics Data System (ADS)
Wang, Leilei; Cheng, Jinyong
2018-03-01
Protein secondary structure prediction belongs to bioinformatics and is an important research area. In this paper, we propose a new method for predicting protein secondary structure using a Bayes classifier and an autoencoder network. Our experiments cover several aspects of the approach, including the construction of the model and the selection of parameters. The data set is the typical CB513 protein data set. Accuracy is assessed by three-fold cross validation, from which the Q3 accuracy is obtained. The results illustrate that the autoencoder network improved the prediction accuracy of protein secondary structure.
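For reference, Q3 is simply the fraction of residues whose predicted three-state label (helix/strand/coil) matches the reference assignment; a minimal sketch with made-up strings:

```python
def q3_accuracy(predicted, reference):
    """Q3: fraction of residues assigned the correct H/E/C state."""
    assert len(predicted) == len(reference)
    correct = sum(p == r for p, r in zip(predicted, reference))
    return correct / len(reference)

# Illustrative 3-state strings (H = helix, E = strand, C = coil)
ref  = "CCHHHHHHCCEEEECC"
pred = "CCHHHHHCCCEEEECC"
print(f"Q3 = {q3_accuracy(pred, ref):.3f}")   # 15/16 residues correct -> 0.938
```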
Tokunaga, Makoto; Watanabe, Susumu; Sonoda, Shigeru
2017-09-01
Multiple linear regression analysis is often used to predict the outcome of stroke rehabilitation. However, the predictive accuracy may not be satisfactory. The objective of this study was to elucidate the predictive accuracy of a method that calculates motor Functional Independence Measure (mFIM) at discharge from the mFIM effectiveness predicted by multiple regression analysis. The subjects were 505 patients with stroke who were hospitalized in a convalescent rehabilitation hospital. The formula "mFIM at discharge = mFIM effectiveness × (91 points - mFIM at admission) + mFIM at admission" was used. By entering the predicted mFIM effectiveness obtained through multiple regression analysis into this formula, we obtained the predicted mFIM at discharge (A). We also used multiple regression analysis to directly predict mFIM at discharge (B). The correlation between the predicted and the measured values of mFIM at discharge was compared between A and B. The correlation coefficients were .916 for A and .878 for B. Calculating mFIM at discharge from the mFIM effectiveness predicted by multiple regression analysis gave a higher predictive accuracy of mFIM at discharge than predicting it directly. Copyright © 2017 National Stroke Association. Published by Elsevier Inc. All rights reserved.
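The conversion in approach A follows directly from the formula quoted above; a small worked sketch (with an invented patient and an invented predicted effectiveness, since the study's regression coefficients are not given here):

```python
def mfim_discharge_from_effectiveness(pred_effectiveness, mfim_admission, max_mfim=91):
    """mFIM at discharge = effectiveness * (91 - mFIM at admission) + mFIM at admission."""
    return pred_effectiveness * (max_mfim - mfim_admission) + mfim_admission

# Illustrative patient: mFIM 40 at admission, predicted mFIM effectiveness 0.6
print(mfim_discharge_from_effectiveness(0.6, 40))   # 0.6 * (91 - 40) + 40 = 70.6
```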
Kvavilashvili, Lia; Ford, Ruth M
2014-11-01
It is well documented that young children greatly overestimate their performance on tests of retrospective memory (RM), but the current investigation is the first to examine children's prediction accuracy for prospective memory (PM). Three studies were conducted, each testing a different group of 5-year-olds. In Study 1 (N=46), participants were asked to predict their success in a simple event-based PM task (remembering to convey a message to a toy mole if they encountered a particular picture during a picture-naming activity). Before naming the pictures, children listened to either a reminder story or a neutral story. Results showed that children were highly accurate in their PM predictions (78% accuracy) and that the reminder story appeared to benefit PM only in children who predicted they would remember the PM response. In Study 2 (N=80), children showed high PM prediction accuracy (69%) regardless of whether the cue was specific or general and despite typical overoptimism regarding their performance on a 10-item RM task using item-by-item prediction. Study 3 (N=35) showed that children were prone to overestimate RM even when asked about their ability to recall a single item-the mole's unusual name. In light of these findings, we consider possible reasons for children's impressive PM prediction accuracy, including the potential involvement of future thinking in performance predictions and PM. Copyright © 2014 Elsevier Inc. All rights reserved.
Auinger, Hans-Jürgen; Schönleben, Manfred; Lehermeier, Christina; Schmidt, Malthe; Korzun, Viktor; Geiger, Hartwig H; Piepho, Hans-Peter; Gordillo, Andres; Wilde, Peer; Bauer, Eva; Schön, Chris-Carolin
2016-11-01
Genomic prediction accuracy can be significantly increased by model calibration across multiple breeding cycles as long as selection cycles are connected by common ancestors. In hybrid rye breeding, application of genome-based prediction is expected to increase selection gain because of long selection cycles in population improvement and development of hybrid components. Essentially two prediction scenarios arise: (1) prediction of the genetic value of lines from the same breeding cycle in which model training is performed and (2) prediction of lines from subsequent cycles. It is the latter from which a reduction in cycle length and consequently the strongest impact on selection gain is expected. We empirically investigated genome-based prediction of grain yield, plant height and thousand kernel weight within and across four selection cycles of a hybrid rye breeding program. Prediction performance was assessed using genomic and pedigree-based best linear unbiased prediction (GBLUP and PBLUP). A total of 1040 S2 lines were genotyped with 16k SNPs and each year testcrosses of 260 S2 lines were phenotyped in seven or eight locations. The performance gap between GBLUP and PBLUP increased significantly for all traits when model calibration was performed on aggregated data from several cycles. Prediction accuracies obtained from cross-validation were in the order of 0.70 for all traits when data from all cycles (N_CS = 832) were used for model training and exceeded within-cycle accuracies in all cases. As long as selection cycles are connected by a sufficient number of common ancestors and prediction accuracy has not reached a plateau when increasing sample size, aggregating data from several preceding cycles is recommended for predicting genetic values in subsequent cycles despite decreasing relatedness over time.
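As a rough illustration of the GBLUP model referenced above (not the authors' implementation), the sketch below builds a VanRaden-type genomic relationship matrix from 0/1/2 genotypes and solves the single-trait mixed-model equations for an assumed heritability:

```python
import numpy as np

def vanraden_grm(M):
    """Genomic relationship matrix from an (n x m) 0/1/2 genotype matrix (VanRaden method 1)."""
    p = M.mean(axis=0) / 2.0                      # allele frequencies
    Z = M - 2.0 * p                               # centered genotypes
    return Z @ Z.T / (2.0 * np.sum(p * (1.0 - p)))

def gblup(y, G, h2):
    """Solve the single-trait GBLUP equations (overall mean + genomic values)."""
    n = len(y)
    lam = (1.0 - h2) / h2                         # residual-to-genomic variance ratio
    X = np.ones((n, 1))
    Ginv = np.linalg.inv(G + 1e-6 * np.eye(n))    # small ridge for numerical stability
    # Mixed-model equations: [[X'X, X'], [X, I + lam*Ginv]] [b, u] = [X'y, y]
    lhs = np.block([[X.T @ X, X.T], [X, np.eye(n) + lam * Ginv]])
    rhs = np.concatenate([X.T @ y, y])
    sol = np.linalg.solve(lhs, rhs)
    return sol[0], sol[1:]                        # fixed mean, genomic breeding values

rng = np.random.default_rng(3)
M = rng.integers(0, 3, size=(200, 500)).astype(float)
y = rng.normal(size=200)
mu, gebv = gblup(y, vanraden_grm(M), h2=0.4)
```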
Weng, Ziqing; Wolc, Anna; Shen, Xia; Fernando, Rohan L; Dekkers, Jack C M; Arango, Jesus; Settar, Petek; Fulton, Janet E; O'Sullivan, Neil P; Garrick, Dorian J
2016-03-19
Genomic estimated breeding values (GEBV) based on single nucleotide polymorphism (SNP) genotypes are widely used in animal improvement programs. It is typically assumed that the larger the number of animals is in the training set, the higher is the prediction accuracy of GEBV. The aim of this study was to quantify genomic prediction accuracy depending on the number of ancestral generations included in the training set, and to determine the optimal number of training generations for different traits in an elite layer breeding line. Phenotypic records for 16 traits on 17,793 birds were used. All parents and some selection candidates from nine non-overlapping generations were genotyped for 23,098 segregating SNPs. An animal model with pedigree relationships (PBLUP) and the BayesB genomic prediction model were applied to predict EBV or GEBV at each validation generation (progeny of the most recent training generation) based on varying numbers of immediately preceding ancestral generations. Prediction accuracy of EBV or GEBV was assessed as the correlation between EBV and phenotypes adjusted for fixed effects, divided by the square root of trait heritability. The optimal number of training generations that resulted in the greatest prediction accuracy of GEBV was determined for each trait. The relationship between optimal number of training generations and heritability was investigated. On average, accuracies were higher with the BayesB model than with PBLUP. Prediction accuracies of GEBV increased as the number of closely-related ancestral generations included in the training set increased, but reached an asymptote or slightly decreased when distant ancestral generations were used in the training set. The optimal number of training generations was 4 or more for high heritability traits but less than that for low heritability traits. For less heritable traits, limiting the training datasets to individuals closely related to the validation population resulted in the best predictions. The effect of adding distant ancestral generations in the training set on prediction accuracy differed between traits and the optimal number of necessary training generations is associated with the heritability of traits.
Omran, Dalia; Zayed, Rania A; Nabeel, Mohammed M; Mobarak, Lamiaa; Zakaria, Zeinab; Farid, Azza; Hassany, Mohamed; Saif, Sameh; Mostafa, Muhammad; Saad, Omar Khalid; Yosry, Ayman
2018-05-01
Stage of liver fibrosis is critical for treatment decisions and prediction of outcomes in chronic hepatitis C (CHC) patients. We evaluated the diagnostic accuracy of transient elastography (TE)-FibroScan and noninvasive serum marker tests in the assessment of liver fibrosis in CHC patients, with liver biopsy as the reference. One hundred treatment-naive CHC patients were subjected to liver biopsy, TE-FibroScan, and eight serum biomarker tests: AST/ALT ratio (AAR), AST to platelet ratio index (APRI), age-platelet index (AP index), fibrosis quotient (FibroQ), fibrosis 4 index (FIB-4), cirrhosis discriminant score (CDS), King score, and Goteborg University Cirrhosis Index (GUCI). Receiver operating characteristic curves were constructed to compare the diagnostic accuracy of these noninvasive methods in predicting significant fibrosis in CHC patients. TE-FibroScan predicted significant fibrosis at a cutoff value of 8.5 kPa with area under the receiver operating characteristic curve (AUROC) 0.90, sensitivity 83%, specificity 91.5%, positive predictive value (PPV) 91.2%, and negative predictive value (NPV) 84.4%. Serum biomarker tests showed that the AP index and FibroQ had the highest diagnostic accuracy in predicting significant liver fibrosis at cutoffs of 4.5 and 2.7; AUROC was 0.8 and 0.8, with sensitivity 73.6% and 73.6%, specificity 70.2% and 68.1%, PPV 71.1% and 69.8%, and NPV 72.9% and 72.3%, respectively. Combined AP index and FibroQ had AUROC 0.83 with sensitivity 73.6%, specificity 80.9%, PPV 79.6%, and NPV 75.7% for predicting significant liver fibrosis. APRI, FIB-4, CDS, King score, and GUCI had intermediate accuracy in predicting significant liver fibrosis with AUROC 0.68, 0.78, 0.74, 0.74, and 0.67, respectively, while AAR had low accuracy in predicting significant liver fibrosis. TE-FibroScan is the most accurate noninvasive alternative to liver biopsy. The AP index and FibroQ, either as individual tests or combined, have good accuracy in predicting significant liver fibrosis, and are better combined for higher specificity.
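Two of the serum indices compared above are simple functions of routine laboratory values; the sketch below uses their commonly published formulas with an invented patient and is not tied to the cutoffs reported in this study.

```python
import math

def apri(ast, ast_upper_limit, platelets_10e9_per_L):
    """AST-to-platelet ratio index (commonly published formula)."""
    return (ast / ast_upper_limit) * 100.0 / platelets_10e9_per_L

def fib4(age_years, ast, alt, platelets_10e9_per_L):
    """Fibrosis-4 index (commonly published formula)."""
    return (age_years * ast) / (platelets_10e9_per_L * math.sqrt(alt))

# Illustrative patient: age 52, AST 64 IU/L (ULN 40), ALT 70 IU/L, platelets 150 x 10^9/L
print(f"APRI  = {apri(64, 40, 150):.2f}")       # ~1.07
print(f"FIB-4 = {fib4(52, 64, 70, 150):.2f}")   # ~2.65
```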
Clark, Samuel A; Hickey, John M; Daetwyler, Hans D; van der Werf, Julius H J
2012-02-09
The theory of genomic selection is based on the prediction of the effects of genetic markers in linkage disequilibrium with quantitative trait loci. However, genomic selection also relies on relationships between individuals to accurately predict genetic value. This study aimed to examine the importance of information on relatives versus that of unrelated or more distantly related individuals for the estimation of genomic breeding values. Simulated and real data were used to examine the effects of various degrees of relationship on the accuracy of genomic selection. Genomic best linear unbiased prediction (gBLUP) was compared to two pedigree-based BLUP methods, one with a shallow one-generation pedigree and the other with a deep ten-generation pedigree. The accuracy of estimated breeding values for different groups of selection candidates that had varying degrees of relationship to a reference data set of 1750 animals was investigated. The gBLUP method predicted breeding values more accurately than BLUP. The most accurate breeding values were estimated using gBLUP for closely related animals. Similarly, the pedigree-based BLUP methods were also accurate for closely related animals; however, when the pedigree-based BLUP methods were used to predict unrelated animals, the accuracy was close to zero. In contrast, gBLUP breeding values for animals that had no pedigree relationship with animals in the reference data set still showed substantial accuracy. An animal's relationship to the reference data set is an important factor for the accuracy of genomic predictions. Animals that share a close relationship to the reference data set had the highest accuracy from genomic predictions. However, a baseline accuracy, driven by the size of the reference data set and the effective population size, enables gBLUP to estimate a breeding value for unrelated animals within a population (breed), using information previously ignored by pedigree-based BLUP methods.
Juliana, Philomin; Singh, Ravi P; Singh, Pawan K; Crossa, Jose; Rutkoski, Jessica E; Poland, Jesse A; Bergstrom, Gary C; Sorrells, Mark E
2017-07-01
The leaf spotting diseases of wheat, which include Septoria tritici blotch (STB), Stagonospora nodorum blotch (SNB), and tan spot (TS), pose challenges to breeding programs selecting for resistance. A promising approach that could enable selection prior to phenotyping is genomic selection, which uses genome-wide markers to estimate breeding values (BVs) for quantitative traits. To evaluate this approach for seedling and/or adult plant resistance (APR) to STB, SNB, and TS, we compared the predictive ability of the least-squares (LS) approach with genomic-enabled prediction models including genomic best linear unbiased predictor (GBLUP), Bayesian ridge regression (BRR), Bayes A (BA), Bayes B (BB), Bayes Cπ (BC), Bayesian least absolute shrinkage and selection operator (BL), and reproducing kernel Hilbert spaces with markers (RKHS-M), a pedigree-based model (RKHS-P), and RKHS with markers and pedigree (RKHS-MP). We observed that LS gave the lowest prediction accuracies and RKHS-MP the highest. The genomic-enabled prediction models and RKHS-P gave similar accuracies. The increase in accuracy using genomic prediction models over LS was 48%. The mean genomic prediction accuracies were 0.45 for STB (APR), 0.55 for SNB (seedling), 0.66 for TS (seedling) and 0.48 for TS (APR). We also compared markers from two whole-genome profiling approaches, genotyping by sequencing (GBS) and diversity arrays technology sequencing (DArTseq), for prediction. While GBS markers performed slightly better than DArTseq, combining markers from the two approaches did not improve accuracies. We conclude that implementing GS in breeding for these diseases would help to achieve higher accuracies and rapid gains from selection. Copyright © 2017 Crop Science Society of America.
Can nutrient status of four woody plant species be predicted using field spectrometry?
NASA Astrophysics Data System (ADS)
Ferwerda, Jelle G.; Skidmore, Andrew K.
This paper demonstrates the potential of hyperspectral remote sensing to predict the chemical composition (i.e., nitrogen, phosphorus, calcium, potassium, sodium, and magnesium) of three tree species (i.e., willow, mopane and olive) and one shrub species (i.e., heather). Reflectance spectra, derivative spectra and continuum-removed spectra were compared in terms of predictive power. Results showed that the best predictions for nitrogen, phosphorus, and magnesium occur when using derivative spectra, and the best predictions for sodium, potassium, and calcium occur when using continuum-removed data. To test whether a general model for multiple species is also valid for individual species, a bootstrapping routine was applied. Prediction accuracies for the individual species were lower than prediction accuracies obtained for the combined dataset for all except one element/species combination, indicating that indices with high prediction accuracies at the landscape scale are less appropriate to detect the chemical content of individual species.
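The two spectral transforms compared above can be sketched in a few lines of numpy: a first-derivative spectrum via finite differences and a continuum-removed spectrum obtained by dividing reflectance by its upper convex hull. The synthetic spectrum and the hull-based continuum are illustrative assumptions, not the authors' processing chain.

```python
import numpy as np

def first_derivative(wavelengths, reflectance):
    """First-derivative spectrum via finite differences over wavelength."""
    return np.gradient(reflectance, wavelengths)

def continuum_removed(wavelengths, reflectance):
    """Divide reflectance by its upper convex hull (the 'continuum')."""
    pts = list(zip(wavelengths, reflectance))
    hull = [pts[0]]
    for p in pts[1:]:
        # pop points that would make the hull dip below the new point (keep the upper hull)
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            if (x2 - x1) * (p[1] - y1) - (p[0] - x1) * (y2 - y1) >= 0:
                hull.pop()
            else:
                break
        hull.append(p)
    hx, hy = zip(*hull)
    continuum = np.interp(wavelengths, hx, hy)
    return reflectance / continuum

# Illustrative spectrum with an absorption feature around 680 nm
wl = np.linspace(400, 900, 251)
refl = 0.4 + 0.0005 * (wl - 400) - 0.15 * np.exp(-((wl - 680) ** 2) / (2 * 30 ** 2))
d1 = first_derivative(wl, refl)
cr = continuum_removed(wl, refl)
```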
The Influence of Delaying Judgments of Learning on Metacognitive Accuracy: A Meta-Analytic Review
ERIC Educational Resources Information Center
Rhodes, Matthew G.; Tauber, Sarah K.
2011-01-01
Many studies have examined the accuracy of predictions of future memory performance solicited through judgments of learning (JOLs). Among the most robust findings in this literature is that delaying predictions serves to substantially increase the relative accuracy of JOLs compared with soliciting JOLs immediately after study, a finding termed the…
NASA Astrophysics Data System (ADS)
Sembiring, J.; Jones, F.
2018-03-01
The red cell distribution width (RDW) to platelet ratio (RPR) can predict liver fibrosis and cirrhosis in chronic hepatitis B with relatively high accuracy. RPR has been reported to be superior to other non-invasive methods for predicting liver fibrosis, such as the AST/ALT ratio, the AST to platelet ratio index and FIB-4. The aim of this study was to assess the diagnostic accuracy of the RDW to platelet ratio for liver fibrosis in chronic hepatitis B patients, compared with FibroScan. This cross-sectional study was conducted at Adam Malik Hospital from January to June 2015. We examined 34 patients with chronic hepatitis B, recording RDW, platelet count, and FibroScan results. Data were statistically analyzed. In the ROC analysis, RPR had an accuracy of 72.3% (95% CI: 84.1%-97%). In this study, the RPR had a moderate ability to predict fibrosis degree (p = 0.029 with AUC > 70%). The cutoff value of RPR was 0.0591, sensitivity and specificity were 71.4% and 60%, the positive predictive value (PPV) was 55.6% and the negative predictive value (NPV) was 75%, and the positive and negative likelihood ratios were 1.79 and 0.48, respectively. RPR has the ability to predict the degree of liver fibrosis in chronic hepatitis B patients with moderate accuracy.
Comparison of Three Risk Scores to Predict Outcomes of Severe Lower Gastrointestinal Bleeding
Camus, Marine; Jensen, Dennis M.; Ohning, Gordon V.; Kovacs, Thomas O.; Jutabha, Rome; Ghassemi, Kevin A.; Machicado, Gustavo A.; Dulai, Gareth S.; Jensen, Mary Ellen; Gornbein, Jeffrey A.
2014-01-01
Background & aims Improved medical decisions through the use of a score at the initial patient triage level may lead to improvements in patient management, outcomes, and resource utilization. Unlike for upper GI bleeding, there is no validated score for the management of lower gastrointestinal bleeding (LGIB). The aim of our study was to compare the accuracies of 3 different prognostic scores (CURE Hemostasis prognosis score, Charlson index and ASA score) for the prediction of 30-day rebleeding, surgery and death in severe LGIB. Methods Data on consecutive patients hospitalized with severe GI bleeding from January 2006 to October 2011 in our two tertiary academic referral centers were prospectively collected. Sensitivities, specificities, accuracies and areas under the receiver operating characteristic curve (AUROC) were computed for the three scores for prediction of rebleeding, surgery and mortality at 30 days. Results 235 consecutive patients with LGIB were included between 2006 and 2011. 23% of patients rebled, 6% had surgery, and 7.7% of patients died. The accuracy of each score never reached 70% for predicting either rebleeding or surgery. The ASA score had the highest accuracy for predicting mortality within 30 days (83.5%), whereas the CURE Hemostasis prognosis score and the Charlson index both had accuracies below 75% for the prediction of death within 30 days. Conclusions The ASA score could be useful to predict death within 30 days. However, a new score is still warranted to predict all 30-day outcomes (rebleeding, surgery and death) in LGIB. PMID:25599218
Wang, Ming; Long, Qi
2016-09-01
Prediction models for disease risk and prognosis play an important role in biomedical research, and evaluating their predictive accuracy in the presence of censored data is of substantial interest. The standard concordance (c) statistic has been extended to provide a summary measure of predictive accuracy for survival models. Motivated by a prostate cancer study, we address several issues associated with evaluating survival prediction models based on c-statistic with a focus on estimators using the technique of inverse probability of censoring weighting (IPCW). Compared to the existing work, we provide complete results on the asymptotic properties of the IPCW estimators under the assumption of coarsening at random (CAR), and propose a sensitivity analysis under the mechanism of noncoarsening at random (NCAR). In addition, we extend the IPCW approach as well as the sensitivity analysis to high-dimensional settings. The predictive accuracy of prediction models for cancer recurrence after prostatectomy is assessed by applying the proposed approaches. We find that the estimated predictive accuracy for the models in consideration is sensitive to NCAR assumption, and thus identify the best predictive model. Finally, we further evaluate the performance of the proposed methods in both settings of low-dimensional and high-dimensional data under CAR and NCAR through simulations. © 2016, The International Biometric Society.
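A minimal sketch of an IPCW-weighted concordance statistic in the spirit described above (weights from a Kaplan-Meier estimate of the censoring distribution, as in Uno-type estimators); it is illustrative only and omits the truncation time and the sensitivity analysis discussed in the abstract.

```python
import numpy as np

def censoring_survival(time, event):
    """Kaplan-Meier estimate G(t) of the censoring survival function (event=1 means failure)."""
    order = np.argsort(time)
    t, e = time[order], event[order]
    uniq = np.unique(t)
    surv, out = 1.0, []
    for ti in uniq:
        at_risk = np.sum(t >= ti)
        d_cens = np.sum((t == ti) & (e == 0))        # censorings are the "events" here
        surv *= 1.0 - d_cens / at_risk
        out.append(surv)
    return uniq, np.array(out)

def ipcw_cindex(time, event, risk):
    """IPCW concordance: weight comparable pairs by 1/G(T_i-)^2 for subjects with events."""
    times, surv = censoring_survival(time, event)
    idx = np.searchsorted(times, time, side="left") - 1   # G just before each subject's time
    G = np.where(idx >= 0, surv[np.maximum(idx, 0)], 1.0)
    num = den = 0.0
    for i in range(len(time)):
        if event[i] == 1 and G[i] > 0:
            w = 1.0 / G[i] ** 2
            later = time > time[i]                   # subject i failed first
            num += w * (np.sum(later & (risk[i] > risk)) + 0.5 * np.sum(later & (risk[i] == risk)))
            den += w * np.sum(later)
    return num / den

# Illustrative usage with invented data
rng = np.random.default_rng(2)
T = rng.exponential(5.0, size=300)
C = rng.exponential(7.0, size=300)
time, event = np.minimum(T, C), (T <= C).astype(int)
risk = -T + rng.normal(scale=2.0, size=300)          # higher risk should mean earlier failure
print(f"IPCW c-index ~ {ipcw_cindex(time, event, risk):.3f}")
```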
Xue, Y.; Liu, S.; Hu, Y.; Yang, J.; Chen, Q.
2007-01-01
To improve the accuracy in prediction, Genetic Algorithm based Adaptive Neural Network Ensemble (GA-ANNE) is presented. Intersections are allowed between different training sets based on the fuzzy clustering analysis, which ensures the diversity as well as the accuracy of individual Neural Networks (NNs). Moreover, to improve the accuracy of the adaptive weights of individual NNs, GA is used to optimize the cluster centers. Empirical results in predicting carbon flux of Duke Forest reveal that GA-ANNE can predict the carbon flux more accurately than Radial Basis Function Neural Network (RBFNN), Bagging NN ensemble, and ANNE. © 2007 IEEE.
Hydrometeorological model for streamflow prediction
Tangborn, Wendell V.
1979-01-01
The hydrometeorological model described in this manual was developed to predict seasonal streamflow from water in storage in a basin using streamflow and precipitation data. The model, as described, applies specifically to the Skokomish, Nisqually, and Cowlitz Rivers, in Washington State, and more generally to streams in other regions that derive seasonal runoff from melting snow. Thus the techniques demonstrated for these three drainage basins can be used as a guide for applying this method to other streams. Input to the computer program consists of daily averages of gaged runoff of these streams, and daily values of precipitation collected at Longmire, Kid Valley, and Cushman Dam. Predictions are based on estimates of the absolute storage of water, predominately as snow: storage is approximately equal to basin precipitation less observed runoff. A pre-forecast test season is used to revise the storage estimate and improve the prediction accuracy. To obtain maximum prediction accuracy for operational applications with this model, a systematic evaluation of several hydrologic and meteorologic variables is first necessary. Six input options to the computer program that control prediction accuracy are developed and demonstrated. Predictions of streamflow can be made at any time and for any length of season, although accuracy is usually poor for early-season predictions (before December 1) or for short seasons (less than 15 days). The coefficient of prediction (CP), the chief measure of accuracy used in this manual, approaches zero during the late autumn and early winter seasons and reaches a maximum of about 0.85 during the spring snowmelt season. (Kosco-USGS)
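A toy sketch of the storage bookkeeping and skill measure described above: storage is approximated as accumulated precipitation minus observed runoff, seasonal runoff is regressed on the storage index across historical years, and the coefficient of prediction is computed here as one minus the ratio of squared prediction error to the variance of observed flows (an assumed definition for this sketch). The numbers are invented.

```python
import numpy as np

def storage_estimate(precip_daily, runoff_daily):
    """Approximate basin storage as accumulated precipitation minus accumulated runoff."""
    return np.sum(precip_daily) - np.sum(runoff_daily)

def coefficient_of_prediction(observed, predicted):
    """CP computed here as 1 - SSE / variance of observed seasonal flows (sketch assumption)."""
    observed, predicted = np.asarray(observed), np.asarray(predicted)
    sse = np.sum((observed - predicted) ** 2)
    return 1.0 - sse / np.sum((observed - np.mean(observed)) ** 2)

# Illustrative record: storage index on Dec 1 vs. subsequent spring runoff, one value per year
storage = np.array([310., 250., 420., 380., 290., 350., 270., 400.])
spring_runoff = np.array([520., 430., 700., 640., 470., 600., 450., 670.])

# Leave-one-out predictions from a simple linear regression of runoff on storage
preds = []
for i in range(len(storage)):
    mask = np.arange(len(storage)) != i
    slope, intercept = np.polyfit(storage[mask], spring_runoff[mask], 1)
    preds.append(slope * storage[i] + intercept)
print(f"CP = {coefficient_of_prediction(spring_runoff, preds):.2f}")
```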
Protein docking prediction using predicted protein-protein interface.
Li, Bin; Kihara, Daisuke
2012-01-10
Many important cellular processes are carried out by protein complexes. To provide physical pictures of interacting proteins, many computational protein-protein docking prediction methods have been developed in the past. However, it is still difficult to identify the correct docking complex structure within the top ranks among alternative conformations. We present a novel protein docking algorithm that utilizes imperfect protein-protein binding interface prediction for guiding protein docking. Since the accuracy of protein binding site prediction varies depending on cases, the challenge is to develop a method which does not deteriorate but improves docking results by using a binding site prediction which may not be 100% accurate. The algorithm, named PI-LZerD (using Predicted Interface with Local 3D Zernike descriptor-based Docking algorithm), is based on a pairwise protein docking prediction algorithm, LZerD, which we developed earlier. PI-LZerD starts by performing docking prediction using the provided protein-protein binding interface prediction as constraints, followed by a second round of docking with updated docking interface information to further improve the docking conformation. Benchmark results on bound and unbound cases show that PI-LZerD consistently improves docking prediction accuracy as compared with docking without using binding site prediction or using the binding site prediction as post-filtering. We have developed PI-LZerD, a pairwise docking algorithm, which uses imperfect protein-protein binding interface prediction to improve docking accuracy. PI-LZerD consistently showed better prediction accuracy than alternative methods in a series of benchmark experiments, including docking using actual docking interface site predictions as well as unbound docking cases.
Li, Jin; Tran, Maggie; Siwabessy, Justy
2016-01-01
Spatially continuous predictions of seabed hardness are important baseline environmental information for sustainable management of Australia’s marine jurisdiction. Seabed hardness is often inferred from multibeam backscatter data with unknown accuracy and can be inferred from underwater video footage at limited locations. In this study, we classified the seabed into four classes based on two new seabed hardness classification schemes (i.e., hard90 and hard70). We developed optimal predictive models to predict seabed hardness using random forest (RF) based on the point data of hardness classes and spatially continuous multibeam data. Five feature selection (FS) methods that are variable importance (VI), averaged variable importance (AVI), knowledge informed AVI (KIAVI), Boruta and regularized RF (RRF) were tested based on predictive accuracy. Effects of highly correlated, important and unimportant predictors on the accuracy of RF predictive models were examined. Finally, spatial predictions generated using the most accurate models were visually examined and analysed. This study confirmed that: 1) hard90 and hard70 are effective seabed hardness classification schemes; 2) seabed hardness of four classes can be predicted with a high degree of accuracy; 3) the typical approach used to pre-select predictive variables by excluding highly correlated variables needs to be re-examined; 4) the identification of the important and unimportant predictors provides useful guidelines for further improving predictive models; 5) FS methods select the most accurate predictive model(s) instead of the most parsimonious ones, and AVI and Boruta are recommended for future studies; and 6) RF is an effective modelling method with high predictive accuracy for multi-level categorical data and can be applied to ‘small p and large n’ problems in environmental sciences. Additionally, automated computational programs for AVI need to be developed to increase its computational efficiency and caution should be taken when applying filter FS methods in selecting predictive models. PMID:26890307
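The variable-importance-based feature selection plus random forest workflow described above can be sketched with scikit-learn on synthetic data; the importance threshold, the four-class construction, and all parameters are assumptions, not the authors' settings (their analysis was not done with this code).

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
n, p = 600, 12
X = rng.normal(size=(n, p))                       # stand-ins for backscatter/bathymetry derivatives
score = X[:, 0] + 0.8 * X[:, 1] - 0.5 * X[:, 2] + rng.normal(scale=0.7, size=n)
y = np.digitize(score, np.quantile(score, [0.25, 0.5, 0.75]))   # four synthetic hardness classes

# Step 1: rank predictors by random-forest variable importance
rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)
importance = rf.feature_importances_
keep = importance > np.mean(importance)           # simple (assumed) importance threshold

# Step 2: refit on the selected predictors and compare cross-validated accuracy
acc_full = cross_val_score(RandomForestClassifier(n_estimators=500, random_state=0), X, y, cv=5).mean()
acc_sel = cross_val_score(RandomForestClassifier(n_estimators=500, random_state=0), X[:, keep], y, cv=5).mean()
print(f"kept {keep.sum()} of {p} predictors; accuracy {acc_full:.3f} -> {acc_sel:.3f}")
```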
Zhao, Y; Mette, M F; Gowda, M; Longin, C F H; Reif, J C
2014-06-01
Based on data from field trials with a large collection of 135 elite winter wheat inbred lines and 1604 F1 hybrids derived from them, we compared the accuracy of prediction of marker-assisted selection and current genomic selection approaches for the model traits heading time and plant height in a cross-validation approach. For heading time, the high accuracy seen with marker-assisted selection severely dropped with genomic selection approaches RR-BLUP (ridge regression best linear unbiased prediction) and BayesCπ, whereas for plant height, accuracy was low with marker-assisted selection as well as RR-BLUP and BayesCπ. Differences in the linkage disequilibrium structure of the functional and single-nucleotide polymorphism markers relevant for the two traits were identified in a simulation study as a likely explanation for the different trends in accuracies of prediction. A new genomic selection approach, weighted best linear unbiased prediction (W-BLUP), designed to treat the effects of known functional markers more appropriately, proved to increase the accuracy of prediction for both traits and thus closes the gap between marker-assisted and genomic selection.
Utsumi, Takanobu; Oka, Ryo; Endo, Takumi; Yano, Masashi; Kamijima, Shuichi; Kamiya, Naoto; Fujimura, Masaaki; Sekita, Nobuyuki; Mikami, Kazuo; Hiruta, Nobuyuki; Suzuki, Hiroyoshi
2015-11-01
The aim of this study is to validate and compare the predictive accuracy of two nomograms predicting the probability of Gleason sum upgrading between biopsy and radical prostatectomy pathology among representative patients with prostate cancer. We previously developed a nomogram, as did Chun et al. In this validation study, patients originated from two centers: Toho University Sakura Medical Center (n = 214) and Chibaken Saiseikai Narashino Hospital (n = 216). We assessed predictive accuracy using area under the curve values and constructed calibration plots to grasp the tendency for each institution. Both nomograms showed a high predictive accuracy in each institution, although the constructed calibration plots of the two nomograms underestimated the actual probability in Toho University Sakura Medical Center. Clinicians need to use calibration plots for each institution to correctly understand the tendency of each nomogram for their patients, even if each nomogram has a good predictive accuracy. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Wu, Cai; Li, Liang
2018-05-15
This paper focuses on quantifying and estimating the predictive accuracy of prognostic models for time-to-event outcomes with competing events. We consider the time-dependent discrimination and calibration metrics, including the receiver operating characteristics curve and the Brier score, in the context of competing risks. To address censoring, we propose a unified nonparametric estimation framework for both discrimination and calibration measures, by weighting the censored subjects with the conditional probability of the event of interest given the observed data. The proposed method can be extended to time-dependent predictive accuracy metrics constructed from a general class of loss functions. We apply the methodology to a data set from the African American Study of Kidney Disease and Hypertension to evaluate the predictive accuracy of a prognostic risk score in predicting end-stage renal disease, accounting for the competing risk of pre-end-stage renal disease death, and evaluate its numerical performance in extensive simulation studies. Copyright © 2018 John Wiley & Sons, Ltd.
Bianchi, Lorenzo; Schiavina, Riccardo; Borghesi, Marco; Bianchi, Federico Mineo; Briganti, Alberto; Carini, Marco; Terrone, Carlo; Mottrie, Alex; Gacci, Mauro; Gontero, Paolo; Imbimbo, Ciro; Marchioro, Giansilvio; Milanese, Giulio; Mirone, Vincenzo; Montorsi, Francesco; Morgia, Giuseppe; Novara, Giacomo; Porreca, Angelo; Volpe, Alessandro; Brunocilla, Eugenio
2018-04-06
To assess the predictive accuracy and the clinical value of a recent nomogram predicting cancer-specific mortality-free survival after surgery in pN1 prostate cancer patients through an external validation. We evaluated 518 prostate cancer patients treated with radical prostatectomy and pelvic lymph node dissection with evidence of nodal metastases at final pathology, at 10 tertiary centers. External validation was carried out using the regression coefficients of the previously published nomogram. The performance characteristics of the model were assessed by quantifying predictive accuracy, according to the area under the receiver operating characteristic curve, and model calibration. Furthermore, we systematically analyzed the specificity, sensitivity, positive predictive value and negative predictive value for each nomogram-derived probability cut-off. Finally, we implemented decision curve analysis in order to quantify the nomogram's clinical value in routine practice. External validation showed inferior predictive accuracy compared with the internal validation (65.8% vs 83.3%, respectively). The discrimination (area under the curve) of the multivariable model was 66.7% (95% CI 60.1-73.0%) in receiver operating characteristic curve analysis. The calibration plot showed an overestimation throughout the range of predicted cancer-specific mortality-free survival probabilities. However, in decision curve analysis, the nomogram's use showed a net benefit when compared with the scenarios of treating all patients or none. In an external setting, the nomogram showed inferior predictive accuracy and suboptimal calibration as compared to that reported in the original population. However, decision curve analysis showed a clinical net benefit, suggesting that the nomogram can help correctly manage pN1 prostate cancer patients after surgery. © 2018 The Japanese Urological Association.
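The decision-curve analysis referred to above rests on the net benefit at a chosen risk threshold, NB = TP/n - FP/n * pt/(1 - pt), compared against the treat-all and treat-none strategies; a minimal sketch with invented predictions (not the study's data):

```python
import numpy as np

def net_benefit(y_true, predicted_prob, threshold):
    """Decision-curve net benefit at a given risk threshold (Vickers-Elkin formulation)."""
    treat = predicted_prob >= threshold
    n = len(y_true)
    tp = np.sum(treat & (y_true == 1))
    fp = np.sum(treat & (y_true == 0))
    return tp / n - fp / n * threshold / (1.0 - threshold)

rng = np.random.default_rng(11)
y = rng.integers(0, 2, size=400)
prob = np.clip(0.25 * y + 0.4 + rng.normal(scale=0.15, size=400), 0.01, 0.99)  # toy model output

for pt in (0.1, 0.2, 0.3):
    nb_model = net_benefit(y, prob, pt)
    nb_all = np.mean(y) - (1 - np.mean(y)) * pt / (1 - pt)     # treat-all strategy
    print(f"pt={pt:.1f}: model {nb_model:+.3f}, treat-all {nb_all:+.3f}, treat-none +0.000")
```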
Assessing and Ensuring GOES-R Magnetometer Accuracy
NASA Technical Reports Server (NTRS)
Kronenwetter, Jeffrey; Carter, Delano R.; Todirita, Monica; Chu, Donald
2016-01-01
The GOES-R magnetometer accuracy requirement is 1.7 nanoteslas (nT). During quiet times (100 nT), accuracy is defined as absolute mean plus 3 sigma. During storms (300 nT), accuracy is defined as absolute mean plus 2 sigma. To achieve this, the sensor itself has better than 1 nT accuracy. Because zero offset and scale factor drift over time, it is also necessary to perform annual calibration maneuvers. To predict performance, we used covariance analysis and attempted to corroborate it with simulations. Although not perfect, the two generally agree and show the expected behaviors. With the annual calibration regimen, these predictions suggest that the magnetometers will meet their accuracy requirements.
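The accuracy metric defined above is easy to evaluate from a residual time series; a minimal sketch with simulated residuals (not flight data):

```python
import numpy as np

def magnetometer_accuracy(residuals_nT, storm=False):
    """Accuracy metric: |mean error| + 2*sigma during storms, |mean error| + 3*sigma otherwise."""
    k = 2.0 if storm else 3.0
    return abs(np.mean(residuals_nT)) + k * np.std(residuals_nT)

rng = np.random.default_rng(5)
residuals = rng.normal(loc=0.1, scale=0.4, size=10_000)     # simulated nT errors
print(f"quiet-time accuracy: {magnetometer_accuracy(residuals):.2f} nT (requirement: 1.7 nT)")
```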
2000-06-30
At dawn on Launch Pad 36A, Cape Canaveral Air Force Station, an Atlas IIA/Centaur rocket is fueled for launch of NASA’s Tracking and Data Relay Satellite (TDRS-H). One of three satellites (labeled H, I and J) being built by the Hughes Space and Communications Company, the latest TDRS uses an innovative springback antenna design. A pair of 15-foot-diameter, flexible mesh antenna reflectors fold up for launch, then spring back into their original cupped circular shape on orbit. The new satellites will augment the TDRS system’s existing Sand Ku-band frequencies by adding Ka-band capability. TDRS will serve as the sole means of continuous, high-data-rate communication with the Space Shuttle, with the International Space Station upon its completion, and with dozens of unmanned scientific satellites in low earth orbit
2000-06-30
NASA’s Tracking and Data Relay Satellite (TDRS-H) sits poised on Launch Pad 36A, Cape Canaveral Air Force Station, before its scheduled launch aboard an Atlas IIA/Centaur rocket. One of three satellites (labeled H, I and J) being built by the Hughes Space and Communications Company, the latest TDRS uses an innovative springback antenna design. A pair of 15-foot-diameter, flexible mesh antenna reflectors fold up for launch, then spring back into their original cupped circular shape on orbit. The new satellites will augment the TDRS system’s existing Sand Ku-band frequencies by adding Ka-band capability. TDRS will serve as the sole means of continuous, high-data-rate communication with the space shuttle, with the International Space Station upon its completion, and with dozens of unmanned scientific satellites in low earth orbit
Experimental analysis of the sheet metal forming behavior of newly developed press hardening steels
NASA Astrophysics Data System (ADS)
Meza-García, Enrique; Kräusel, Verena; Landgrebe, Dirk
2018-05-01
The aim of this work was the characterization of the newly developed press hardening sheet alloys 1800 PHS and 2000 PHS from SSAB with regard to their hot forming behavior, on the basis of the experimental determination of relevant mechanical and technological properties. For this purpose, conventional and non-conventional sheet metal testing methods were used. To determine the friction coefficient, the strip drawing test was applied, while the deep drawing cup test was used to determine the maximum draw depth. Finally, a V-bending test was carried out to evaluate the springback behavior of the investigated alloys by varying the blank temperature and quenching media. This work provides a technological guideline for the production of press hardened sheet parts made of the investigated sheet metals.
Dissolved oxygen content prediction in crab culture using a hybrid intelligent method
Yu, Huihui; Chen, Yingyi; Hassan, ShahbazGul; Li, Daoliang
2016-01-01
A precise predictive model is needed to obtain a clear understanding of the changing dissolved oxygen content in outdoor crab ponds, to assess how to reduce risk and to optimize water quality management. The uncertainties in the data from multiple sensors are a significant factor when building a dissolved oxygen content prediction model. To increase prediction accuracy, a new hybrid dissolved oxygen content forecasting model based on the radial basis function neural networks (RBFNN) data fusion method and a least squares support vector machine (LSSVM) with an optimal improved particle swarm optimization (IPSO) is developed. In the modelling process, the RBFNN data fusion method is used to improve information accuracy and provide more trustworthy training samples for the IPSO-LSSVM prediction model. The LSSVM is a powerful tool for achieving nonlinear dissolved oxygen content forecasting. In addition, an improved particle swarm optimization algorithm is developed to determine the optimal parameters for the LSSVM with high accuracy and generalizability. In this study, the comparison of the prediction results of different traditional models validates the effectiveness and accuracy of the proposed hybrid RBFNN-IPSO-LSSVM model for dissolved oxygen content prediction in outdoor crab ponds. PMID:27270206
Goo, Yeung-Ja James; Chi, Der-Jang; Shen, Zong-De
2016-01-01
The purpose of this study is to establish rigorous and reliable going concern doubt (GCD) prediction models. This study first uses the least absolute shrinkage and selection operator (LASSO) to select variables and then applies data mining techniques to establish prediction models, such as neural network (NN), classification and regression tree (CART), and support vector machine (SVM). The samples of this study include 48 GCD listed companies and 124 NGCD (non-GCD) listed companies from 2002 to 2013 in the TEJ database. We conduct fivefold cross validation in order to identify the prediction accuracy. According to the empirical results, the prediction accuracy of the LASSO-NN model is 88.96 % (Type I error rate is 12.22 %; Type II error rate is 7.50 %), the prediction accuracy of the LASSO-CART model is 88.75 % (Type I error rate is 13.61 %; Type II error rate is 14.17 %), and the prediction accuracy of the LASSO-SVM model is 89.79 % (Type I error rate is 10.00 %; Type II error rate is 15.83 %).
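A minimal scikit-learn sketch of the LASSO-then-SVM pipeline described above, with fivefold cross-validation; the synthetic financial ratios, the LASSO penalty and the SVM settings are placeholder assumptions rather than the study's configuration.

```python
import numpy as np
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import Lasso
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(42)
n, p = 172, 20                                   # 48 GCD + 124 non-GCD firms, 20 candidate ratios
X = rng.normal(size=(n, p))
y = (X[:, 0] - 0.7 * X[:, 3] + rng.normal(scale=0.8, size=n) > 0.6).astype(int)

# LASSO keeps ratios with non-zero coefficients, then an SVM is fitted on the reduced set
model = make_pipeline(
    StandardScaler(),
    SelectFromModel(Lasso(alpha=0.05)),          # alpha is an assumed regularization strength
    SVC(kernel="rbf", C=1.0),
)
scores = cross_val_score(model, X, y, cv=5)      # fivefold cross-validation
print(f"mean accuracy: {scores.mean():.3f}")
```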
Accuracy of taxonomy prediction for 16S rRNA and fungal ITS sequences
2018-01-01
Prediction of taxonomy for marker gene sequences such as 16S ribosomal RNA (rRNA) is a fundamental task in microbiology. Most experimentally observed sequences are diverged from reference sequences of authoritatively named organisms, creating a challenge for prediction methods. I assessed the accuracy of several algorithms using cross-validation by identity, a new benchmark strategy which explicitly models the variation in distances between query sequences and the closest entry in a reference database. When the accuracy of genus predictions was averaged over a representative range of identities with the reference database (100%, 99%, 97%, 95% and 90%), all tested methods had ≤50% accuracy on the currently-popular V4 region of 16S rRNA. Accuracy was found to fall rapidly with identity; for example, better methods were found to have V4 genus prediction accuracy of ∼100% at 100% identity but ∼50% at 97% identity. The relationship between identity and taxonomy was quantified as the probability that a rank is the lowest shared by a pair of sequences with a given pair-wise identity. With the V4 region, 95% identity was found to be a twilight zone where taxonomy is highly ambiguous because the probabilities that the lowest shared rank between pairs of sequences is genus, family, order or class are approximately equal. PMID:29682424
Exploring Mouse Protein Function via Multiple Approaches.
Huang, Guohua; Chu, Chen; Huang, Tao; Kong, Xiangyin; Zhang, Yunhua; Zhang, Ning; Cai, Yu-Dong
2016-01-01
Although the number of available protein sequences is growing exponentially, functional protein annotations lag far behind. Therefore, accurate identification of protein functions remains one of the major challenges in molecular biology. In this study, we presented a novel approach to predict mouse protein functions. The approach was a sequential combination of a similarity-based approach, an interaction-based approach and a pseudo amino acid composition-based approach. The method achieved an accuracy of about 0.8450 for the 1st-order predictions in the leave-one-out and ten-fold cross-validations. For the results yielded by the leave-one-out cross-validation, although the similarity-based approach alone achieved an accuracy of 0.8756, it was unable to predict the functions of proteins with no homologues. Comparatively, the pseudo amino acid composition-based approach alone reached an accuracy of 0.6786. Although the accuracy was lower than that of the previous approach, it could predict the functions of almost all proteins, even proteins with no homologues. Therefore, the combined method balanced the advantages and disadvantages of both approaches to achieve efficient performance. Furthermore, the results yielded by the ten-fold cross-validation indicate that the combined method is still effective and stable when no close homologs are available. However, the accuracy of the predicted functions can only be determined according to known protein functions based on current knowledge. Many protein functions remain unknown. By exploring the functions of proteins for which the 1st-order predicted functions are wrong but the 2nd-order predicted functions are correct, the 1st-order wrongly predicted functions were shown to be closely associated with the genes encoding the proteins. The so-called wrongly predicted functions could also potentially be correct upon future experimental verification. Therefore, the accuracy of the presented method may be much higher in reality.
Revealing how network structure affects accuracy of link prediction
NASA Astrophysics Data System (ADS)
Yang, Jin-Xuan; Zhang, Xiao-Dong
2017-08-01
Link prediction plays an important role in network reconstruction and network evolution. How the network structure affects the accuracy of link prediction is an interesting problem. In this paper we use common neighbors and the Gini coefficient to reveal the relation between them, which can provide a good reference for the choice of a suitable link prediction algorithm according to the network structure. Moreover, the statistical analysis reveals correlations between the common neighbors index, the Gini coefficient index and other indices that describe the network structure, such as Laplacian eigenvalues, clustering coefficient, degree heterogeneity, and assortativity of the network. Furthermore, a new method to predict missing links is proposed. The experimental results show that the proposed algorithm yields better prediction accuracy and robustness to the network structure than currently used methods for a variety of real-world networks.
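The sketch below is a rough illustration rather than the authors' implementation: it scores node pairs with the common-neighbours index and summarises degree heterogeneity with a Gini coefficient on a tiny toy graph; common_neighbors_scores and gini are hypothetical helper names.

```python
# Minimal sketch: common-neighbours link scores and a Gini coefficient of the
# degree sequence for a small toy graph (illustrative only).
import numpy as np

def common_neighbors_scores(adj):
    """adj: symmetric 0/1 matrix; entry (i, j) of adj @ adj counts common neighbours."""
    return adj @ adj

def gini(values):
    """Gini coefficient of a non-negative sequence (e.g. the degree distribution)."""
    x = np.sort(np.asarray(values, dtype=float))
    n = len(x)
    cum = np.cumsum(x)
    return (n + 1 - 2 * np.sum(cum) / cum[-1]) / n

# Tiny illustrative graph: a 5-node path plus one chord.
adj = np.zeros((5, 5), dtype=int)
for i, j in [(0, 1), (1, 2), (2, 3), (3, 4), (1, 3)]:
    adj[i, j] = adj[j, i] = 1

scores = common_neighbors_scores(adj)
degrees = adj.sum(axis=1)
print("common-neighbours score for the non-edge (0, 2):", scores[0, 2])
print("Gini coefficient of the degree sequence:", round(gini(degrees), 3))
```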
Analysis of energy-based algorithms for RNA secondary structure prediction.
Hajiaghayi, Monir; Condon, Anne; Hoos, Holger H
2012-02-01
RNA molecules play critical roles in the cells of organisms, including roles in gene regulation, catalysis, and synthesis of proteins. Since RNA function depends in large part on its folded structures, much effort has been invested in developing accurate methods for prediction of RNA secondary structure from the base sequence. Minimum free energy (MFE) predictions are widely used, based on nearest neighbor thermodynamic parameters of Mathews, Turner et al. or those of Andronescu et al. Some recently proposed alternatives that leverage partition function calculations find the structure with maximum expected accuracy (MEA) or pseudo-expected accuracy (pseudo-MEA) methods. Advances in prediction methods are typically benchmarked using sensitivity, positive predictive value and their harmonic mean, namely F-measure, on datasets of known reference structures. Since such benchmarks document progress in improving accuracy of computational prediction methods, it is important to understand how measures of accuracy vary as a function of the reference datasets and whether advances in algorithms or thermodynamic parameters yield statistically significant improvements. Our work advances such understanding for the MFE and (pseudo-)MEA-based methods, with respect to the latest datasets and energy parameters. We present three main findings. First, using the bootstrap percentile method, we show that the average F-measure accuracy of the MFE and (pseudo-)MEA-based algorithms, as measured on our largest datasets with over 2000 RNAs from diverse families, is a reliable estimate (within a 2% range with high confidence) of the accuracy of a population of RNA molecules represented by this set. However, average accuracy on smaller classes of RNAs such as a class of 89 Group I introns used previously in benchmarking algorithm accuracy is not reliable enough to draw meaningful conclusions about the relative merits of the MFE and MEA-based algorithms. Second, on our large datasets, the algorithm with best overall accuracy is a pseudo MEA-based algorithm of Hamada et al. that uses a generalized centroid estimator of base pairs. However, between MFE and other MEA-based methods, there is no clear winner in the sense that the relative accuracy of the MFE versus MEA-based algorithms changes depending on the underlying energy parameters. Third, of the four parameter sets we considered, the best accuracy for the MFE-, MEA-based, and pseudo-MEA-based methods is 0.686, 0.680, and 0.711, respectively (on a scale from 0 to 1 with 1 meaning perfect structure predictions) and is obtained with a thermodynamic parameter set obtained by Andronescu et al. called BL* (named after the Boltzmann likelihood method by which the parameters were derived). Large datasets should be used to obtain reliable measures of the accuracy of RNA structure prediction algorithms, and average accuracies on specific classes (such as Group I introns and Transfer RNAs) should be interpreted with caution, considering the relatively small size of currently available datasets for such classes. The accuracy of the MEA-based methods is significantly higher when using the BL* parameter set of Andronescu et al. than when using the parameters of Mathews and Turner, and there is no significant difference between the accuracy of MEA-based methods and MFE when using the BL* parameters. The pseudo-MEA-based method of Hamada et al. with the BL* parameter set significantly outperforms all other MFE and MEA-based algorithms on our large data sets.
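To make the benchmarking quantities concrete, here is a minimal sketch with fabricated base-pair sets and per-molecule F-measures (not the paper's datasets or code): sensitivity, positive predictive value and F-measure for predicted base pairs, plus a bootstrap percentile interval for the dataset-average F-measure.

```python
# Minimal sketch: F-measure for predicted RNA base pairs and a bootstrap
# percentile confidence interval for the dataset mean (toy numbers only).
import random

def f_measure(reference_pairs, predicted_pairs):
    ref, pred = set(reference_pairs), set(predicted_pairs)
    tp = len(ref & pred)
    sens = tp / len(ref) if ref else 0.0      # sensitivity
    ppv = tp / len(pred) if pred else 0.0     # positive predictive value
    return 0.0 if tp == 0 else 2 * sens * ppv / (sens + ppv)

def bootstrap_mean_ci(values, n_boot=2000, alpha=0.05, seed=0):
    rng = random.Random(seed)
    means = sorted(sum(rng.choices(values, k=len(values))) / len(values)
                   for _ in range(n_boot))
    return means[int(alpha / 2 * n_boot)], means[int((1 - alpha / 2) * n_boot) - 1]

# Hypothetical per-molecule F-measures standing in for a benchmark dataset.
per_rna_f = [f_measure([(1, 20), (2, 19), (3, 18)], [(1, 20), (2, 19), (4, 17)]),
             0.71, 0.64, 0.80, 0.55, 0.69, 0.73, 0.62]
print("mean F-measure:", round(sum(per_rna_f) / len(per_rna_f), 3))
print("95% bootstrap percentile interval:", bootstrap_mean_ci(per_rna_f))
```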
DOT National Transportation Integrated Search
2015-07-01
Implementing the recommendations of this study is expected to significantly improve the accuracy of camber measurements and predictions and to ultimately help reduce construction delays, improve bridge serviceability, and decrease costs.
Genomic selection for crossbred performance accounting for breed-specific effects.
Lopes, Marcos S; Bovenhuis, Henk; Hidalgo, André M; van Arendonk, Johan A M; Knol, Egbert F; Bastiaansen, John W M
2017-06-26
Breed-specific effects are observed when the same allele of a given genetic marker has a different effect depending on its breed origin, which results in different allele substitution effects across breeds. In such a case, single-breed breeding values may not be the most accurate predictors of crossbred performance. Our aim was to estimate the contribution of alleles from each parental breed to the genetic variance of traits that are measured in crossbred offspring, and to compare the prediction accuracies of estimated direct genomic values (DGV) from a traditional genomic selection model (GS) that are trained on purebred or crossbred data, with accuracies of DGV from a model that accounts for breed-specific effects (BS), trained on purebred or crossbred data. The final dataset was composed of 924 Large White, 924 Landrace and 924 two-way cross (F1) genotyped and phenotyped animals. The traits evaluated were litter size (LS) and gestation length (GL) in pigs. The genetic correlation between purebred and crossbred performance was higher than 0.88 for both LS and GL. For both traits, the additive genetic variance was larger for alleles inherited from the Large White breed compared to alleles inherited from the Landrace breed (0.74 and 0.56 for LS, and 0.42 and 0.40 for GL, respectively). The highest prediction accuracies of crossbred performance were obtained when training was done on crossbred data. For LS, prediction accuracies were the same for GS and BS DGV (0.23), while for GL, prediction accuracy for BS DGV was similar to the accuracy of GS DGV (0.53 and 0.52, respectively). In this study, training on crossbred data resulted in higher prediction accuracy than training on purebred data and evidence of breed-specific effects for LS and GL was demonstrated. However, when training was done on crossbred data, both GS and BS models resulted in similar prediction accuracies. In future studies, traits with a lower genetic correlation between purebred and crossbred performance should be included to further assess the value of the BS model in genomic predictions.
A new method of power load prediction in electrification railway
NASA Astrophysics Data System (ADS)
Dun, Xiaohong
2018-04-01
Aiming at the characteristics of electrified railways, this paper studies the problem of power load prediction for electrified railways. After data preprocessing, similar days are separated on the basis of their statistical characteristics, and the accuracy of different prediction methods is analyzed. The paper provides a new approach to load prediction and a new method for judging prediction accuracy for the power system.
Hengartner, M P; Heekeren, K; Dvorsky, D; Walitza, S; Rössler, W; Theodoridou, A
2017-09-01
The aim of this study was to critically examine the prognostic validity of various clinical high-risk (CHR) criteria alone and in combination with additional clinical characteristics. A total of 188 CHR positive persons from the region of Zurich, Switzerland (mean age 20.5 years; 60.2% male), meeting ultra high-risk (UHR) and/or basic symptoms (BS) criteria, were followed over three years. The test battery included the Structured Interview for Prodromal Syndromes (SIPS), verbal IQ and many other screening tools. Conversion to psychosis was defined according to ICD-10 criteria for schizophrenia (F20) or brief psychotic disorder (F23). Altogether n=24 persons developed manifest psychosis within three years and according to Kaplan-Meier survival analysis, the projected conversion rate was 17.5%. The predictive accuracy of UHR was statistically significant but poor (area under the curve [AUC]=0.65, P<.05), whereas BS did not predict psychosis beyond mere chance (AUC=0.52, P=.730). Sensitivity and specificity were 0.83 and 0.47 for UHR, and 0.96 and 0.09 for BS. UHR plus BS achieved an AUC=0.66, with sensitivity and specificity of 0.75 and 0.56. In comparison, baseline antipsychotic medication yielded a predictive accuracy of AUC=0.62 (sensitivity=0.42; specificity=0.82). A multivariable prediction model comprising continuous measures of positive symptoms and verbal IQ achieved a substantially improved prognostic accuracy (AUC=0.85; sensitivity=0.86; specificity=0.85; positive predictive value=0.54; negative predictive value=0.97). We showed that BS have no predictive accuracy beyond chance, while UHR criteria poorly predict conversion to psychosis. Combining BS with UHR criteria did not improve the predictive accuracy of UHR alone. In contrast, dimensional measures of both positive symptoms and verbal IQ showed excellent prognostic validity. A critical re-thinking of binary at-risk criteria is necessary in order to improve the prognosis of psychotic disorders. Copyright © 2017 Elsevier Masson SAS. All rights reserved.
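For readers less familiar with these statistics, the short sketch below uses entirely hypothetical numbers to compute sensitivity, specificity, positive and negative predictive values for a binary at-risk criterion, and a rank-based AUC for a continuous risk score; none of the values correspond to the cohort above.

```python
# Minimal sketch: 2 x 2 accuracy statistics for a binary criterion and a
# rank-based AUC for a continuous score (all numbers invented).
def binary_accuracy_stats(y_true, y_pred):
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    return {"sensitivity": tp / (tp + fn), "specificity": tn / (tn + fp),
            "PPV": tp / (tp + fp), "NPV": tn / (tn + fn)}

def auc(y_true, score):
    """Probability that a random converter scores above a random non-converter."""
    pos = [s for t, s in zip(y_true, score) if t == 1]
    neg = [s for t, s in zip(y_true, score) if t == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy cohort: 1 = converted to psychosis, 0 = did not (illustrative only).
y = [1, 1, 1, 0, 0, 0, 0, 0, 0, 0]
criterion_positive = [1, 1, 0, 1, 1, 0, 0, 1, 0, 0]
risk_score = [0.9, 0.7, 0.4, 0.6, 0.3, 0.2, 0.5, 0.1, 0.2, 0.3]
print(binary_accuracy_stats(y, criterion_positive))
print("AUC of the continuous score:", round(auc(y, risk_score), 2))
```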
Putz, A M; Tiezzi, F; Maltecca, C; Gray, K A; Knauer, M T
2018-02-01
The objective of this study was to compare and determine the optimal validation method when comparing accuracy from single-step GBLUP (ssGBLUP) to traditional pedigree-based BLUP. Field data included six litter size traits. Simulated data included ten replicates designed to mimic the field data in order to determine the method that was closest to the true accuracy. Data were split into training and validation sets. The methods used were as follows: (i) theoretical accuracy derived from the prediction error variance (PEV) of the direct inverse (iLHS), (ii) approximated accuracies from the accf90(GS) program in the BLUPF90 family of programs (Approx), (iii) correlation between predictions and the single-step GEBVs from the full data set (GEBV_Full), (iv) correlation between predictions and the corrected phenotypes of females from the full data set (Y_c), (v) correlation from method iv divided by the square root of the heritability (Y_ch) and (vi) correlation between sire predictions and the average of their daughters' corrected phenotypes (Y_cs). Accuracies from iLHS increased from 0.27 to 0.37 (37%) in the Large White. Approximation accuracies were very consistent and close in absolute value (0.41 to 0.43). Both iLHS and Approx were much less variable than the corrected phenotype methods (ranging from 0.04 to 0.27). On average, simulated data showed an increase in accuracy from 0.34 to 0.44 (29%) using ssGBLUP. Both iLHS and Y_ch approximated the increase well, 0.30 to 0.46 and 0.36 to 0.45, respectively. GEBV_Full performed poorly in both data sets and is not recommended. Results suggest that for within-breed selection, theoretical accuracy using PEV was consistent and accurate. When direct inversion is infeasible to get the PEV, correlating predictions to the corrected phenotypes divided by the square root of heritability is adequate given a large enough validation data set. © 2017 Blackwell Verlag GmbH.
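A simulation sketch of the corrected-phenotype validation statistic described above (the correlation between predictions and corrected phenotypes divided by the square root of the heritability, in the style of Y_ch) is given below; the genetic model, heritability and all numbers are assumptions made purely for illustration.

```python
# Minimal sketch: estimating prediction accuracy from corrected phenotypes,
# compared with the true accuracy that is only observable in a simulation.
import numpy as np

rng = np.random.default_rng(1)
n, h2 = 500, 0.10                   # validation animals and an assumed heritability

true_bv = rng.normal(0, np.sqrt(h2), n)                     # true breeding values
y_corrected = true_bv + rng.normal(0, np.sqrt(1 - h2), n)   # corrected phenotypes
gebv = 0.6 * true_bv + rng.normal(0, 0.05, n)               # imperfect predictions

r_pred_pheno = np.corrcoef(gebv, y_corrected)[0, 1]
accuracy_estimate = r_pred_pheno / np.sqrt(h2)      # corrects for phenotypic noise
accuracy_true = np.corrcoef(gebv, true_bv)[0, 1]

print(f"cor(GEBV, corrected phenotype):  {r_pred_pheno:.2f}")
print(f"estimated accuracy (Y_ch style): {accuracy_estimate:.2f}")
print(f"true accuracy, cor(GEBV, TBV):   {accuracy_true:.2f}")
```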
Abtahi, Shirin; Abtahi, Farhad; Ellegård, Lars; Johannsson, Gudmundur; Bosaeus, Ingvar
2015-01-01
For several decades electrical bioimpedance (EBI) has been used to assess body fluid distribution and body composition. Despite the development of several different approaches for assessing total body water (TBW), it remains uncertain whether bioimpedance spectroscopic (BIS) approaches are more accurate than single frequency regression equations. The main objective of this study was to answer this question by calculating the expected accuracy of a single measurement for different EBI methods. The results of this study showed that all methods produced similarly high correlation and concordance coefficients, indicating good accuracy as a method. Even the limits of agreement produced from the Bland-Altman analysis indicated that the performance of single frequency, Sun's prediction equations, at population level was close to the performance of both BIS methods; however, when comparing the Mean Absolute Percentage Error value between the single frequency prediction equations and the BIS methods, a significant difference was obtained, indicating slightly better accuracy for the BIS methods. Despite the higher accuracy of BIS methods over 50 kHz prediction equations at both population and individual level, the magnitude of the improvement was small. Such slight improvement in accuracy of BIS methods is suggested insufficient to warrant their clinical use where the most accurate predictions of TBW are required, for example, when assessing over-fluidic status on dialysis. To reach expected errors below 4-5%, novel and individualized approaches must be developed to improve the accuracy of bioimpedance-based methods for the advent of innovative personalized health monitoring applications. PMID:26137489
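The following sketch uses made-up total body water values to show the two accuracy summaries referred to above, Bland-Altman limits of agreement and the mean absolute percentage error, for two hypothetical prediction methods against a reference measurement; method_a and method_b are invented stand-ins, not the equations evaluated in the study.

```python
# Minimal sketch: Bland-Altman limits of agreement and MAPE for two toy
# prediction methods against a reference measurement (litres, invented values).
import numpy as np

def bland_altman_limits(reference, predicted):
    diff = np.asarray(predicted, float) - np.asarray(reference, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

def mape(reference, predicted):
    ref, pred = np.asarray(reference, float), np.asarray(predicted, float)
    return 100.0 * np.mean(np.abs(pred - ref) / ref)

reference_tbw = np.array([38.2, 42.5, 35.7, 50.1, 44.8, 40.3])
method_a = reference_tbw + np.array([0.8, -1.2, 1.5, 0.4, -0.9, 1.1])
method_b = reference_tbw + np.array([2.1, -2.6, 2.9, 1.8, -2.2, 2.4])

for name, pred in [("method A", method_a), ("method B", method_b)]:
    bias, (lo, hi) = bland_altman_limits(reference_tbw, pred)
    print(f"{name}: bias = {bias:+.2f} L, limits of agreement = ({lo:.2f}, {hi:.2f}) L, "
          f"MAPE = {mape(reference_tbw, pred):.1f}%")
```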
Rutkoski, Jessica; Poland, Jesse; Mondal, Suchismita; Autrique, Enrique; Pérez, Lorena González; Crossa, José; Reynolds, Matthew; Singh, Ravi
2016-01-01
Genomic selection can be applied prior to phenotyping, enabling shorter breeding cycles and greater rates of genetic gain relative to phenotypic selection. Traits measured using high-throughput phenotyping based on proximal or remote sensing could be useful for improving pedigree and genomic prediction model accuracies for traits not yet possible to phenotype directly. We tested if using aerial measurements of canopy temperature, and green and red normalized difference vegetation index as secondary traits in pedigree and genomic best linear unbiased prediction models could increase accuracy for grain yield in wheat, Triticum aestivum L., using 557 lines in five environments. Secondary traits on training and test sets, and grain yield on the training set were modeled as multivariate, and compared to univariate models with grain yield on the training set only. Cross validation accuracies were estimated within and across-environment, with and without replication, and with and without correcting for days to heading. We observed that, within environment, with unreplicated secondary trait data, and without correcting for days to heading, secondary traits increased accuracies for grain yield by 56% in pedigree, and 70% in genomic prediction models, on average. Secondary traits increased accuracy slightly more when replicated, and considerably less when models corrected for days to heading. In across-environment prediction, trends were similar but less consistent. These results show that secondary traits measured in high-throughput could be used in pedigree and genomic prediction to improve accuracy. This approach could improve selection in wheat during early stages if validated in early-generation breeding plots. PMID:27402362
ERIC Educational Resources Information Center
Myers, Jamie S.; Grigsby, Jim; Teel, Cynthia S.; Kramer, Andrew M.
2009-01-01
The goals of this study were to evaluate the accuracy of nurses' predictions of rehabilitation potential in older adults admitted to inpatient rehabilitation facilities and to ascertain whether the addition of a measure of executive cognitive function would enhance predictive accuracy. Secondary analysis was performed on prospective data collected…
Psychopathy, IQ, and Violence in European American and African American County Jail Inmates
ERIC Educational Resources Information Center
Walsh, Zach; Swogger, Marc T.; Kosson, David S.
2004-01-01
The accuracy of the prediction of criminal violence may be improved by combining psychopathy with other variables that have been found to predict violence. Research has suggested that assessing intelligence (i.e., IQ) as well as psychopathy improves the accuracy of violence prediction. In the present study, the authors tested this hypothesis by…
Prediction Accuracy: The Role of Feedback in 6th Graders' Recall Predictions
ERIC Educational Resources Information Center
Al-Harthy, Ibrahim S.
2016-01-01
The current study focused on the role of feedback on students' prediction accuracy (calibration). This phenomenon has been widely studied, but questions remain about how best to improve it. In the current investigation, fifty-seven students from sixth grade were randomly assigned to control and experimental groups. Thirty pictures were chosen from…
Luo, Shanhong; Snider, Anthony G
2009-11-01
There has been a long-standing debate about whether having accurate self-perceptions or holding positive illusions of self is more adaptive. This debate has recently expanded to consider the role of accuracy and bias of partner perceptions in romantic relationships. In the present study, we hypothesized that because accuracy, positivity bias, and similarity bias are likely to serve distinct functions in relationships, they should all make independent contributions to the prediction of marital satisfaction. In a sample of 288 newlywed couples, we tested this hypothesis by simultaneously modeling the actor effects and partner effects of accuracy, positivity bias, and similarity bias in predicting husbands' and wives' satisfaction. Findings across several perceptual domains suggest that all three perceptual indices independently predicted the perceiver's satisfaction. Accuracy and similarity bias, but not positivity bias, made unique contributions to the target's satisfaction. No sex differences were found.
Investigation on the Accuracy of Superposition Predictions of Film Cooling Effectiveness
NASA Astrophysics Data System (ADS)
Meng, Tong; Zhu, Hui-ren; Liu, Cun-liang; Wei, Jian-sheng
2018-05-01
Film cooling effectiveness on flat plates with double rows of holes has been studied experimentally and numerically in this paper. This configuration is widely used to simulate multi-row film cooling on turbine vanes. The film cooling effectiveness of double rows of holes and of each single row was used to study the accuracy of superposition predictions. A stable infrared measurement technique was used to measure the surface temperature of the flat plate. This paper analyzed the factors that affect film cooling effectiveness, including hole shape, hole arrangement, row-to-row spacing and blowing ratio. Numerical simulations were performed to analyze the flow structure and film cooling mechanisms between the film cooling rows. Results show that the blowing ratio, within the range of 0.5 to 2, has a significant influence on the accuracy of superposition predictions. At low blowing ratios, results obtained by the superposition method agree well with the experimental data, while at high blowing ratios the accuracy of the superposition prediction decreases. Another significant factor is hole arrangement: results obtained by superposition prediction are nearly the same as the experimental values for staggered arrangements, whereas for in-line configurations the superposition values of film cooling effectiveness are much higher than the experimental data. For the different hole shapes, the accuracy of superposition predictions is better for converging-expanding holes than for cylindrical holes and compound angle holes. For the two hole spacings considered in this paper, predictions show good agreement with the experimental results.
Vathsangam, Harshvardhan; Emken, Adar; Schroeder, E. Todd; Spruijt-Metz, Donna; Sukhatme, Gaurav S.
2011-01-01
This paper describes an experimental study in estimating energy expenditure from treadmill walking using a single hip-mounted triaxial inertial sensor comprising a triaxial accelerometer and a triaxial gyroscope. Typical physical activity characterization using accelerometer-generated counts suffers from two drawbacks: imprecision (due to proprietary counts) and incompleteness (due to incomplete movement description). We address these problems in the context of steady state walking by directly estimating energy expenditure with data from a hip-mounted inertial sensor. We represent the cyclic nature of walking with a Fourier transform of sensor streams and show how one can map this representation to energy expenditure (as measured by VO2 consumption, mL/min) using three regression techniques: Least Squares Regression (LSR), Bayesian Linear Regression (BLR) and Gaussian Process Regression (GPR). We perform a comparative analysis of the accuracy of sensor streams in predicting energy expenditure (measured by RMS prediction accuracy). Triaxial information is more accurate than uniaxial information. LSR-based approaches are prone to outlier sensitivity and overfitting. Gyroscopic information showed equivalent if not better prediction accuracy compared to accelerometers. Combining accelerometer and gyroscopic information provided better accuracy than using either sensor alone. We also analyze the best algorithmic approach among linear and nonlinear methods as measured by RMS prediction accuracy and run time. Nonlinear regression methods showed better prediction accuracy but required an order of magnitude more run time. This paper emphasizes the role of probabilistic techniques in conjunction with joint modeling of triaxial accelerations and rotational rates to improve energy expenditure prediction for steady-state treadmill walking. PMID:21690001
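A rough sketch of this modelling idea, on synthetic signals rather than the study's data: each steady-state window is represented by the magnitudes of its low-order Fourier coefficients and mapped to energy expenditure with ordinary least squares (the simplest of the three regression techniques named above). The toy signal generator and the fourier_features helper are assumptions introduced here.

```python
# Minimal sketch: Fourier-magnitude features of a cyclic sensor window mapped to
# pseudo energy expenditure by least squares (synthetic data only).
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 500)                        # 10 s window sampled at 50 Hz

def fourier_features(signal, n_coeffs=20):
    """Magnitudes of the first few FFT coefficients of a zero-mean window."""
    return np.abs(np.fft.rfft(signal - signal.mean()))[1:n_coeffs + 1]

# Toy generative model: walking intensity scales both the signal and VO2.
X, y = [], []
for _ in range(120):
    intensity = rng.uniform(0.8, 2.0)
    window = intensity * np.sin(2 * np.pi * 1.5 * t) + 0.1 * rng.normal(size=t.size)
    X.append(fourier_features(window))
    y.append(400 + 900 * intensity + rng.normal(0, 40))    # pseudo VO2, mL/min
X, y = np.array(X), np.array(y)

# Ordinary least squares with an intercept; RMS error on held-out windows.
X_design = np.hstack([np.ones((len(X), 1)), X])
train, test = slice(0, 90), slice(90, None)
beta, *_ = np.linalg.lstsq(X_design[train], y[train], rcond=None)
rms = np.sqrt(np.mean((X_design[test] @ beta - y[test]) ** 2))
print(f"RMS prediction error on held-out windows: {rms:.1f} mL/min")
```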
High accuracy operon prediction method based on STRING database scores.
Taboada, Blanca; Verde, Cristina; Merino, Enrique
2010-07-01
We present a simple and highly accurate computational method for operon prediction, based on intergenic distances and functional relationships between the protein products of contiguous genes, as defined by STRING database (Jensen,L.J., Kuhn,M., Stark,M., Chaffron,S., Creevey,C., Muller,J., Doerks,T., Julien,P., Roth,A., Simonovic,M. et al. (2009) STRING 8-a global view on proteins and their functional interactions in 630 organisms. Nucleic Acids Res., 37, D412-D416). These two parameters were used to train a neural network on a subset of experimentally characterized Escherichia coli and Bacillus subtilis operons. Our predictive model was successfully tested on the set of experimentally defined operons in E. coli and B. subtilis, with accuracies of 94.6 and 93.3%, respectively. As far as we know, these are the highest accuracies ever obtained for predicting bacterial operons. Furthermore, in order to evaluate the prediction accuracy of our model when using an organism's data set for the training procedure, and a different organism's data set for testing, we repeated the E. coli operon prediction analysis using a neural network trained with B. subtilis data, and a B. subtilis analysis using a neural network trained with E. coli data. Even for these cases, the accuracies reached with our method were outstandingly high, 91.5 and 93%, respectively. These results show the potential use of our method for accurately predicting the operons of any other organism. Our operon predictions for fully-sequenced genomes are available at http://operons.ibt.unam.mx/OperonPredictor/.
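As a simplified illustration of the two-feature idea, not the authors' neural network or data, the sketch below trains a small logistic-regression classifier on synthetic gene pairs described by an intergenic distance and a STRING-style functional-association score; make_pairs and fit_logistic are hypothetical helpers and the generated distributions are assumptions.

```python
# Minimal sketch: classify adjacent gene pairs as same-operon vs not from two
# features, intergenic distance and a functional-association score (toy data).
import numpy as np

rng = np.random.default_rng(42)

def make_pairs(n):
    """Toy generator: operon pairs tend to have short gaps and high scores."""
    same_operon = rng.integers(0, 2, n)
    distance = np.where(same_operon == 1,
                        rng.normal(20, 30, n),      # bp, often small or negative
                        rng.normal(150, 80, n))
    assoc_score = np.where(same_operon == 1,
                           rng.uniform(0.5, 1.0, n),
                           rng.uniform(0.0, 0.6, n))
    return np.column_stack([distance / 100.0, assoc_score]), same_operon

def fit_logistic(X, y, lr=0.1, epochs=2000):
    Xb = np.hstack([np.ones((len(X), 1)), X])
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - y) / len(y)           # gradient of the log-loss
    return w

X_train, y_train = make_pairs(800)
X_test, y_test = make_pairs(200)
w = fit_logistic(X_train, y_train)
logits = np.hstack([np.ones((len(X_test), 1)), X_test]) @ w
pred = (1.0 / (1.0 + np.exp(-logits)) > 0.5).astype(int)
print("held-out accuracy:", round((pred == y_test).mean(), 3))
```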
Comparison of Three Risk Scores to Predict Outcomes of Severe Lower Gastrointestinal Bleeding.
Camus, Marine; Jensen, Dennis M; Ohning, Gordon V; Kovacs, Thomas O; Jutabha, Rome; Ghassemi, Kevin A; Machicado, Gustavo A; Dulai, Gareth S; Jensen, Mary E; Gornbein, Jeffrey A
2016-01-01
Improving medical decisions by using a score at the initial patient triage level may lead to improvements in patient management, outcomes, and resource utilization. Unlike for upper gastrointestinal bleeding, there is no validated score for the management of lower gastrointestinal bleeding (LGIB). The aim of our study was to compare the accuracies of 3 different prognostic scores [Center for Ulcer Research and Education Hemostasis prognosis score, Charlson index, and American Society of Anesthesiologists (ASA) score] for the prediction of 30-day rebleeding, surgery, and death in severe LGIB. Data on consecutive patients hospitalized with severe gastrointestinal bleeding from January 2006 to October 2011 in our 2 tertiary academic referral centers were prospectively collected. Sensitivities, specificities, accuracies, and area under the receiver operator characteristic curve were computed for 3 scores for predictions of rebleeding, surgery, and mortality at 30 days. Two hundred thirty-five consecutive patients with LGIB were included between 2006 and 2011. Twenty-three percent of patients rebled, 6% had surgery, and 7.7% of patients died. The accuracy of each score never reached 70% for predicting either rebleeding or surgery. The ASA score had the highest accuracy for predicting mortality within 30 days (83.5%), whereas the Center for Ulcer Research and Education Hemostasis prognosis score and the Charlson index both had accuracies <75% for the prediction of death within 30 days. The ASA score could be useful to predict death within 30 days. However, a new score is still warranted to predict all 30-day outcomes (rebleeding, surgery, and death) in LGIB.
Massa, Luiz M; Hoffman, Jeanne M; Cardenas, Diana D
2009-01-01
To determine the validity, accuracy, and predictive value of the signs and symptoms of urinary tract infection (UTI) for individuals with spinal cord injury (SCI) using intermittent catheterization (IC) and the accuracy of individuals with SCI on IC at predicting their own UTI. Prospective cohort based on data from the first 3 months of a 1-year randomized controlled trial to evaluate UTI prevention effectiveness of hydrophilic and standard catheters. Fifty-six community-based individuals on IC. Presence of UTI as defined as bacteriuria with a colony count of at least 10^5 colony-forming units/mL and at least 1 sign or symptom of UTI. Analysis of monthly urine culture and urinalysis data combined with analysis of monthly data collected using a questionnaire that asked subjects to self-report on UTI signs and symptoms and whether or not they felt they had a UTI. Overall, "cloudy urine" had the highest accuracy (83.1%), and "leukocytes in the urine" had the highest sensitivity (82.8%). The highest specificity was for "fever" (99.0%); however, it had a very low sensitivity (6.9%). Subjects were able to predict their own UTI with an accuracy of 66.2%, and the negative predictive value (82.8%) was substantially higher than the positive predictive value (32.6%). The UTI signs and symptoms can predict a UTI more accurately than individual subjects can by using subjective impressions of their own signs and symptoms. Subjects were better at predicting when they did not have a UTI than when they did have a UTI.
Carvalho, Carlos; Gomes, Danielo G.; Agoulmine, Nazim; de Souza, José Neuman
2011-01-01
This paper proposes a method based on multivariate spatial and temporal correlation to improve prediction accuracy in data reduction for Wireless Sensor Networks (WSN). Prediction of data not sent to the sink node is a technique used to save energy in WSNs by reducing the amount of data traffic. However, it may not be very accurate. Simulations were made involving simple linear regression and multiple linear regression functions to assess the performance of the proposed method. The results show a higher correlation between gathered inputs when compared to time, which is an independent variable widely used for prediction and forecasting. Prediction accuracy is lower when simple linear regression is used, whereas multiple linear regression is the most accurate one. In addition to that, our proposal outperforms some current solutions by about 50% in humidity prediction and 21% in light prediction. To the best of our knowledge, this is the first work to address prediction based on multivariate correlation for WSN data reduction. PMID:22346626
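A minimal sketch of the comparison made above, on synthetic readings rather than real WSN traffic: a time-only simple linear regression versus a multiple linear regression that also uses a correlated sensor input when reconstructing humidity values that were not transmitted; the signal model is invented for illustration.

```python
# Minimal sketch: simple (time-only) vs multiple linear regression for
# reconstructing untransmitted sensor values (synthetic data).
import numpy as np

rng = np.random.default_rng(7)
t = np.arange(200, dtype=float)                     # sample index ("time")
temperature = 24 + 3 * np.sin(t / 20) + rng.normal(0, 0.3, t.size)
humidity = 120 - 2.5 * temperature + rng.normal(0, 0.8, t.size)   # correlated

def ols_predict(X_train, y_train, X_test):
    Xb = np.hstack([np.ones((len(X_train), 1)), X_train])
    beta, *_ = np.linalg.lstsq(Xb, y_train, rcond=None)
    return np.hstack([np.ones((len(X_test), 1)), X_test]) @ beta

train, test = slice(0, 150), slice(150, None)

pred_simple = ols_predict(t[train, None], humidity[train], t[test, None])
X_multi = np.column_stack([t, temperature])
pred_multi = ols_predict(X_multi[train], humidity[train], X_multi[test])

for name, pred in [("simple (time only)", pred_simple),
                   ("multiple (time + temperature)", pred_multi)]:
    rmse = np.sqrt(np.mean((pred - humidity[test]) ** 2))
    print(f"{name}: RMSE = {rmse:.2f} %RH")
```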
CPO Prediction: Accuracy Assessment and Impact on UT1 Intensive Results
NASA Technical Reports Server (NTRS)
Malkin, Zinovy
2010-01-01
The UT1 Intensive results heavily depend on the celestial pole offset (CPO) model used during data processing. Since accurate CPO values are available with a delay of two to four weeks, CPO predictions are necessarily applied to the UT1 Intensive data analysis, and errors in the predictions can influence the operational UT1 accuracy. In this paper we assess the real accuracy of CPO prediction using the actual IERS and PUL predictions made in 2007-2009. Also, results of operational processing were analyzed to investigate the actual impact of EOP prediction errors on the rapid UT1 results. It was found that the impact of CPO prediction errors is at a level of several microseconds, whereas the impact of the inaccuracy in the polar motion prediction may be about one order of magnitude larger for ultra-rapid UT1 results. The situation can be amended if the IERS Rapid solution will be updated more frequently.
Analysis of near infrared spectra for age-grading of wild populations of Anopheles gambiae.
Krajacich, Benjamin J; Meyers, Jacob I; Alout, Haoues; Dabiré, Roch K; Dowell, Floyd E; Foy, Brian D
2017-11-07
Understanding the age-structure of mosquito populations, especially malaria vectors such as Anopheles gambiae, is important for assessing the risk of infectious mosquitoes, and how vector control interventions may impact this risk. The use of near-infrared spectroscopy (NIRS) for age-grading has been demonstrated previously on laboratory and semi-field mosquitoes, but to date has not been utilized on wild-caught mosquitoes whose age is externally validated via parity status or parasite infection stage. In this study, we developed regression and classification models using NIRS on datasets of wild An. gambiae (s.l.) reared from larvae collected from the field in Burkina Faso, and two laboratory strains. We compared the accuracy of these models for predicting the ages of wild-caught mosquitoes that had been scored for their parity status as well as for positivity for Plasmodium sporozoites. Regression models utilizing variable selection increased predictive accuracy over the more common full-spectrum partial least squares (PLS) approach for cross-validation of the datasets, validation, and independent test sets. Models produced from datasets that included the greatest range of mosquito samples (i.e. different sampling locations and times) had the highest predictive accuracy on independent testing sets, though overall accuracy on these samples was low. For classification, we found that intramodel accuracy ranged between 73.5-97.0% for grouping of mosquitoes into "early" and "late" age classes, with the highest prediction accuracy found in laboratory colonized mosquitoes. However, this accuracy decreased on test sets, with the highest classification accuracy on an independent set of wild-caught larvae reared to set ages being 69.6%. Variation in NIRS data, likely from dietary, genetic, and other factors, limits the accuracy of this technique with wild-caught mosquitoes. Alternative algorithms may help improve prediction accuracy, but care should be taken to either maximize variety in models or minimize confounders.
Using simple artificial intelligence methods for predicting amyloidogenesis in antibodies.
David, Maria Pamela C; Concepcion, Gisela P; Padlan, Eduardo A
2010-02-08
All polypeptide backbones have the potential to form amyloid fibrils, which are associated with a number of degenerative disorders. However, the likelihood that amyloidosis would actually occur under physiological conditions depends largely on the amino acid composition of a protein. We explore using a naive Bayesian classifier and a weighted decision tree for predicting the amyloidogenicity of immunoglobulin sequences. The average accuracy based on leave-one-out (LOO) cross validation of a Bayesian classifier generated from 143 amyloidogenic sequences is 60.84%. This is consistent with the average accuracy of 61.15% for a holdout test set comprised of 103 AM and 28 non-amyloidogenic sequences. The LOO cross validation accuracy increases to 81.08% when the training set is augmented by the holdout test set. In comparison, the average classification accuracy for the holdout test set obtained using a decision tree is 78.64%. Non-amyloidogenic sequences are predicted with average LOO cross validation accuracies between 74.05% and 77.24% using the Bayesian classifier, depending on the training set size. The accuracy for the holdout test set was 89%. For the decision tree, the non-amyloidogenic prediction accuracy is 75.00%. This exploratory study indicates that both classification methods may be promising in providing straightforward predictions on the amyloidogenicity of a sequence. Nevertheless, the number of available sequences that satisfy the premises of this study are limited, and are consequently smaller than the ideal training set size. Increasing the size of the training set clearly increases the accuracy, and the expansion of the training set to include not only more derivatives, but more alignments, would make the method more sound. The accuracy of the classifiers may also be improved when additional factors, such as structural and physico-chemical data, are considered. The development of this type of classifier has significant applications in evaluating engineered antibodies, and may be adapted for evaluating engineered proteins in general.
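To show the general shape of such a classifier, the sketch below runs leave-one-out cross-validation of a Gaussian naive Bayes model on fabricated sequences described only by their amino-acid composition; this is a stand-in for the descriptors used in the study, and the sequence generator and labels are invented for illustration.

```python
# Minimal sketch: leave-one-out cross-validation of a naive Bayes classifier on
# amino-acid composition features (fabricated sequences and labels).
import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.naive_bayes import GaussianNB

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
rng = np.random.default_rng(3)

def composition(seq):
    """Fraction of each of the 20 amino acids in a sequence."""
    return np.array([seq.count(a) / len(seq) for a in AMINO_ACIDS])

def random_seq(bias_residues, length=60):
    # Hypothetical generator: one class is enriched in a few residues.
    pool = list(AMINO_ACIDS + bias_residues * 6)
    return "".join(rng.choice(pool, size=length).tolist())

sequences = [random_seq("VIF") for _ in range(30)] + [random_seq("KE") for _ in range(30)]
labels = np.array([1] * 30 + [0] * 30)      # 1 = "amyloidogenic" (toy labels)

X = np.array([composition(s) for s in sequences])
scores = cross_val_score(GaussianNB(), X, labels, cv=LeaveOneOut())
print(f"LOO cross-validation accuracy: {scores.mean():.2%}")
```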
Conde-Agudelo, A; Papageorghiou, A T; Kennedy, S H; Villar, J
2013-05-01
Several biomarkers for predicting intrauterine growth restriction (IUGR) have been proposed in recent years. However, the predictive performance of these biomarkers has not been systematically evaluated. To determine the predictive accuracy of novel biomarkers for IUGR in women with singleton gestations. Electronic databases, reference list checking and conference proceedings. Observational studies that evaluated the accuracy of novel biomarkers proposed for predicting IUGR. Data were extracted on characteristics, quality and predictive accuracy from each study to construct 2×2 tables. Summary receiver operating characteristic curves, sensitivities, specificities and likelihood ratios (LRs) were generated. A total of 53 studies, including 39,974 women and evaluating 37 novel biomarkers, fulfilled the inclusion criteria. Overall, the predictive accuracy of angiogenic factors for IUGR was minimal (median pooled positive and negative LRs of 1.7, range 1.0-19.8; and 0.8, range 0.0-1.0, respectively). Two small case-control studies reported high predictive values for placental growth factor and angiopoietin-2 only when IUGR was defined as birthweight centile with clinical or pathological evidence of fetal growth restriction. Biomarkers related to endothelial function/oxidative stress, placental protein/hormone, and others such as serum levels of vitamin D, urinary albumin:creatinine ratio, thyroid function tests and metabolomic profile had low predictive accuracy. None of the novel biomarkers evaluated in this review are sufficiently accurate to recommend their use as predictors of IUGR in routine clinical practice. However, the use of biomarkers in combination with biophysical parameters and maternal characteristics could be more useful and merits further research. © 2013 The Authors BJOG An International Journal of Obstetrics and Gynaecology © 2013 RCOG.
de Saint Laumer, Jean‐Yves; Leocata, Sabine; Tissot, Emeline; Baroux, Lucie; Kampf, David M.; Merle, Philippe; Boschung, Alain; Seyfried, Markus
2015-01-01
We previously showed that the relative response factors of volatile compounds were predictable from either combustion enthalpies or their molecular formulae only [1]. We now extend this prediction to silylated derivatives by adding an increment in the ab initio calculation of combustion enthalpies. The accuracy of the experimental relative response factors database was also improved and its population increased to 490 values. In particular, more brominated compounds were measured, and their prediction accuracy was improved by adding a correction factor in the algorithm. The correlation coefficient between predicted and measured values increased from 0.936 to 0.972, leading to a mean prediction accuracy of ± 6%. Thus, 93% of the relative response factor values were predicted with an accuracy of better than ± 10%. The capabilities of the extended algorithm are exemplified by (i) the quick and accurate quantification of hydroxylated metabolites resulting from a biodegradation test after silylation and prediction of their relative response factors, without having the reference substances available; and (ii) the rapid purity determinations of volatile compounds. This study confirms that gas chromatography with a flame ionization detector, combined with predicted relative response factors, is one of the few techniques that enable quantification of volatile compounds without calibrating the instrument with the pure reference substance. PMID:26179324
Integrated Strategy Improves the Prediction Accuracy of miRNA in Large Dataset
Lipps, David; Devineni, Sree
2016-01-01
MiRNAs are short non-coding RNAs of about 22 nucleotides, which play critical roles in gene expression regulation. The biogenesis of miRNAs is largely determined by the sequence and structural features of their parental RNA molecules. Based on these features, multiple computational tools have been developed to predict if RNA transcripts contain miRNAs or not. Although very successful, these predictors started to face multiple challenges in recent years. Many predictors were optimized using datasets of hundreds of miRNA samples. The sizes of these datasets are much smaller than the number of known miRNAs. Consequently, the prediction accuracy of these predictors on large datasets becomes unknown and needs to be re-tested. In addition, many predictors were optimized for either high sensitivity or high specificity. These optimization strategies may bring in serious limitations in applications. Moreover, to meet continuously rising expectations on these computational tools, improving the prediction accuracy becomes extremely important. In this study, a meta-predictor, mirMeta, was developed by integrating a set of non-linear transformations with a meta-strategy. More specifically, the outputs of five individual predictors were first preprocessed using non-linear transformations, and then fed into an artificial neural network to make the meta-prediction. The prediction accuracy of the meta-predictor was validated using both multi-fold cross-validation and an independent dataset. The final accuracy of the meta-predictor on a newly designed large dataset improved by 7%, to 93%. The meta-predictor also proved to be less dependent on the dataset and to have a more refined balance between sensitivity and specificity. The importance of this study is twofold: first, it shows that the combination of non-linear transformations and artificial neural networks improves the prediction accuracy of individual predictors. Second, a new miRNA predictor with significantly improved prediction accuracy is developed for the community for identifying novel miRNAs and the complete set of miRNAs. Source code is available at: https://github.com/xueLab/mirMeta PMID:28002428
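A schematic sketch of the meta-prediction strategy, using simulated base-predictor scores rather than mirMeta itself: the outputs of several individual predictors are non-linearly transformed and combined by a small neural network, and the result is compared with the best single predictor. The score generator, the logit transform and the network size are all assumptions.

```python
# Minimal sketch: non-linear transformation of base-predictor outputs followed
# by a small neural-network combiner (simulated scores only).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(5)
n = 2000
y = rng.integers(0, 2, n)                  # 1 = true miRNA, 0 = not (toy labels)

# Five base predictors of varying quality: noisy, mis-calibrated probabilities.
base_scores = np.column_stack([
    np.clip(y + rng.normal(0, s, n), 0, 1) for s in (0.35, 0.45, 0.55, 0.65, 0.75)
])

def transform(scores, eps=1e-3):
    """Simple non-linear preprocessing: a logit stretch of each output."""
    p = np.clip(scores, eps, 1 - eps)
    return np.log(p / (1 - p))

idx_train, idx_test = train_test_split(np.arange(n), test_size=0.3, random_state=0)
X = transform(base_scores)

meta = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
meta.fit(X[idx_train], y[idx_train])

best_single = max(
    ((base_scores[idx_test, j] > 0.5).astype(int) == y[idx_test]).mean()
    for j in range(base_scores.shape[1])
)
print(f"best single predictor accuracy: {best_single:.2%}")
print(f"meta-predictor accuracy:        {meta.score(X[idx_test], y[idx_test]):.2%}")
```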
Yao, Chen; Zhu, Xiaojin; Weigel, Kent A
2016-11-07
Genomic prediction for novel traits, which can be costly and labor-intensive to measure, is often hampered by low accuracy due to the limited size of the reference population. As an option to improve prediction accuracy, we introduced a semi-supervised learning strategy known as the self-training model, and applied this method to genomic prediction of residual feed intake (RFI) in dairy cattle. We describe a self-training model that is wrapped around a support vector machine (SVM) algorithm, which enables it to use data from animals with and without measured phenotypes. Initially, a SVM model was trained using data from 792 animals with measured RFI phenotypes. Then, the resulting SVM was used to generate self-trained phenotypes for 3000 animals for which RFI measurements were not available. Finally, the SVM model was re-trained using data from up to 3792 animals, including those with measured and self-trained RFI phenotypes. Incorporation of additional animals with self-trained phenotypes enhanced the accuracy of genomic predictions compared to that of predictions that were derived from the subset of animals with measured phenotypes. The optimal ratio of animals with self-trained phenotypes to animals with measured phenotypes (2.5, 2.0, and 1.8) and the maximum increase achieved in prediction accuracy measured as the correlation between predicted and actual RFI phenotypes (5.9, 4.1, and 2.4%) decreased as the size of the initial training set (300, 400, and 500 animals with measured phenotypes) increased. The optimal number of animals with self-trained phenotypes may be smaller when prediction accuracy is measured as the mean squared error rather than the correlation between predicted and actual RFI phenotypes. Our results demonstrate that semi-supervised learning models that incorporate self-trained phenotypes can achieve genomic prediction accuracies that are comparable to those obtained with models using larger training sets that include only animals with measured phenotypes. Semi-supervised learning can be helpful for genomic prediction of novel traits, such as RFI, for which the size of reference population is limited, in particular, when the animals to be predicted and the animals in the reference population originate from the same herd-environment.
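The self-training mechanics described above can be sketched as follows, using simulated genotypes and phenotypes rather than the study's data or its exact SVM configuration; whether the pseudo-phenotypes actually raise accuracy depends on the setting, so the printed numbers are illustrative only.

```python
# Minimal sketch: a self-training wrapper around a support vector machine, in
# which unphenotyped animals receive pseudo-phenotypes and are added to training.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(11)
n_markers = 200
marker_effects = rng.normal(0, 0.15, n_markers)

def simulate(n_animals):
    """Toy 0/1/2 genotypes and a noisy phenotype driven by the marker effects."""
    geno = rng.integers(0, 3, size=(n_animals, n_markers)).astype(float)
    tbv = geno @ marker_effects
    pheno = tbv + rng.normal(0, tbv.std() * 1.5, n_animals)
    return geno, pheno

X_lab, y_lab = simulate(400)          # animals with measured phenotypes
X_unlab, _ = simulate(1200)           # animals without phenotypes
X_val, y_val = simulate(400)          # validation animals

base = SVR(kernel="rbf", C=10.0).fit(X_lab, y_lab)
acc_base = np.corrcoef(base.predict(X_val), y_val)[0, 1]

# Self-training step: assign pseudo-phenotypes to the unlabeled animals, refit.
pseudo = base.predict(X_unlab)
self_trained = SVR(kernel="rbf", C=10.0).fit(
    np.vstack([X_lab, X_unlab]), np.concatenate([y_lab, pseudo])
)
acc_self = np.corrcoef(self_trained.predict(X_val), y_val)[0, 1]

print(f"accuracy (cor with phenotype), supervised only: {acc_base:.3f}")
print(f"accuracy after adding self-trained phenotypes:  {acc_self:.3f}")
```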
Vallejo, Roger L; Leeds, Timothy D; Gao, Guangtu; Parsons, James E; Martin, Kyle E; Evenhuis, Jason P; Fragomeni, Breno O; Wiens, Gregory D; Palti, Yniv
2017-02-01
Previously, we have shown that bacterial cold water disease (BCWD) resistance in rainbow trout can be improved using traditional family-based selection, but progress has been limited to exploiting only between-family genetic variation. Genomic selection (GS) is a new alternative that enables exploitation of within-family genetic variation. We compared three GS models [single-step genomic best linear unbiased prediction (ssGBLUP), weighted ssGBLUP (wssGBLUP), and BayesB] to predict genomic-enabled breeding values (GEBV) for BCWD resistance in a commercial rainbow trout population, and compared the accuracy of GEBV to traditional estimates of breeding values (EBV) from a pedigree-based BLUP (P-BLUP) model. We also assessed the impact of sampling design on the accuracy of GEBV predictions. For these comparisons, we used BCWD survival phenotypes recorded on 7893 fish from 102 families, of which 1473 fish from 50 families had genotypes [57 K single nucleotide polymorphism (SNP) array]. Naïve siblings of the training fish (n = 930 testing fish) were genotyped to predict their GEBV and mated to produce 138 progeny testing families. In the following generation, 9968 progeny were phenotyped to empirically assess the accuracy of GEBV predictions made on their non-phenotyped parents. The accuracies of GEBV from all tested GS models were substantially higher than those of the P-BLUP model EBV. The highest increase in accuracy relative to the P-BLUP model was achieved with BayesB (97.2 to 108.8%), followed by wssGBLUP at iteration 2 (94.4 to 97.1%) and 3 (88.9 to 91.2%) and ssGBLUP (83.3 to 85.3%). Reducing the training sample size to n = ~1000 had no negative impact on the accuracy (0.67 to 0.72), but with n = ~500 the accuracy dropped to 0.53 to 0.61 if the training and testing fish were full-sibs, and even substantially lower, to 0.22 to 0.25, when they were not full-sibs. Using progeny performance data, we showed that the accuracy of genomic predictions is substantially higher than estimates obtained from the traditional pedigree-based BLUP model for BCWD resistance. Overall, we found that, even with a much smaller training sample size than in similar livestock studies, GS can substantially improve the selection accuracy and genetic gains for this trait in a commercial rainbow trout breeding population.
Assessing and Ensuring GOES-R Magnetometer Accuracy
NASA Technical Reports Server (NTRS)
Carter, Delano R.; Todirita, Monica; Kronenwetter, Jeffrey; Chu, Donald
2016-01-01
The GOES-R magnetometer subsystem accuracy requirement is 1.7 nanoteslas (nT). During quiet times (100 nT), accuracy is defined as absolute mean plus 3 sigma. During storms (300 nT), accuracy is defined as absolute mean plus 2 sigma. Error comes both from outside the magnetometers, e.g. spacecraft fields and misalignments, as well as inside, e.g. zero offset and scale factor errors. Because zero offset and scale factor drift over time, it will be necessary to perform annual calibration maneuvers. To predict performance before launch, we have used Monte Carlo simulations and covariance analysis. Both behave as expected, and their accuracy predictions agree within 30%. With the proposed calibration regimen, both suggest that the GOES-R magnetometer subsystem will meet its accuracy requirements.
Zheng, Leilei; Chai, Hao; Chen, Wanzhen; Yu, Rongrong; He, Wei; Jiang, Zhengyan; Yu, Shaohua; Li, Huichun; Wang, Wei
2011-12-01
Early parental bonding experiences play a role in emotion recognition and expression in later adulthood, and patients with personality disorder frequently experience inappropriate parental bonding styles, therefore the aim of the present study was to explore whether parental bonding style is correlated with recognition of facial emotion in personality disorder patients. The Parental Bonding Instrument (PBI) and the Matsumoto and Ekman Japanese and Caucasian Facial Expressions of Emotion (JACFEE) photo set tests were carried out in 289 participants. Patients scored lower on parental Care but higher on parental Freedom Control and Autonomy Denial subscales, and they displayed less accuracy when recognizing contempt, disgust and happiness than the healthy volunteers. In healthy volunteers, maternal Autonomy Denial significantly predicted accuracy when recognizing fear, and maternal Care predicted the accuracy of recognizing sadness. In patients, paternal Care negatively predicted the accuracy of recognizing anger, paternal Freedom Control predicted the perceived intensity of contempt, maternal Care predicted the accuracy of recognizing sadness, and the intensity of disgust. Parenting bonding styles have an impact on the decoding process and sensitivity when recognizing facial emotions, especially in personality disorder patients. © 2011 The Authors. Psychiatry and Clinical Neurosciences © 2011 Japanese Society of Psychiatry and Neurology.
Alternative evaluation metrics for risk adjustment methods.
Park, Sungchul; Basu, Anirban
2018-06-01
Risk adjustment is instituted to counter risk selection by accurately equating payments with expected expenditures. Traditional risk-adjustment methods are designed to estimate accurate payments at the group level. However, this generates residual risks at the individual level, especially for high-expenditure individuals, thereby inducing health plans to avoid those with high residual risks. To identify an optimal risk-adjustment method, we perform a comprehensive comparison of prediction accuracies at the group level, at the tail distributions, and at the individual level across 19 estimators: 9 parametric regression, 7 machine learning, and 3 distributional estimators. Using the 2013-2014 MarketScan database, we find that no one estimator performs best in all prediction accuracies. Generally, machine learning and distribution-based estimators achieve higher group-level prediction accuracy than parametric regression estimators. However, parametric regression estimators show higher tail distribution prediction accuracy and individual-level prediction accuracy, especially at the tails of the distribution. This suggests that there is a trade-off in selecting an appropriate risk-adjustment method between estimating accurate payments at the group level and lower residual risks at the individual level. Our results indicate that an optimal method cannot be determined solely on the basis of statistical metrics but rather needs to account for simulating plans' risk selective behaviors. Copyright © 2018 John Wiley & Sons, Ltd.
Tan, Cheng; Wu, Zhenfang; Ren, Jiangli; Huang, Zhuolin; Liu, Dewu; He, Xiaoyan; Prakapenka, Dzianis; Zhang, Ran; Li, Ning; Da, Yang; Hu, Xiaoxiang
2017-03-29
The number of teats in pigs is related to a sow's ability to rear piglets to weaning age. Several studies have identified genes and genomic regions that affect teat number in swine but few common results were reported. The objective of this study was to identify genetic factors that affect teat number in pigs, evaluate the accuracy of genomic prediction, and evaluate the contribution of significant genes and genomic regions to genomic broad-sense heritability and prediction accuracy using 41,108 autosomal single nucleotide polymorphisms (SNPs) from genotyping-by-sequencing on 2936 Duroc boars. Narrow-sense heritability and dominance heritability of teat number estimated by genomic restricted maximum likelihood were 0.365 ± 0.030 and 0.035 ± 0.019, respectively. The accuracy of genomic predictions, calculated as the average correlation between the genomic best linear unbiased prediction and phenotype in a tenfold validation study, was 0.437 ± 0.064 for the model with additive and dominance effects and 0.435 ± 0.064 for the model with additive effects only. Genome-wide association studies (GWAS) using three methods of analysis identified 85 significant SNP effects for teat number on chromosomes 1, 6, 7, 10, 11, 12 and 14. The region between 102.9 and 106.0 Mb on chromosome 7, which was reported in several studies, had the most significant SNP effects in or near the PTGR2, FAM161B, LIN52, VRTN, FCF1, AREL1 and LRRC74A genes. This region accounted for 10.0% of the genomic additive heritability and 8.0% of the accuracy of prediction. The second most significant chromosome region not reported by previous GWAS was the region between 77.7 and 79.7 Mb on chromosome 11, where SNPs in the FGF14 gene had the most significant effect and accounted for 5.1% of the genomic additive heritability and 5.2% of the accuracy of prediction. The 85 significant SNPs accounted for 28.5 to 28.8% of the genomic additive heritability and 35.8 to 36.8% of the accuracy of prediction. The three methods used for the GWAS identified 85 significant SNPs with additive effects on teat number, including SNPs in a previously reported chromosomal region and SNPs in novel chromosomal regions. Most significant SNPs with larger estimated effects also had larger contributions to the total genomic heritability and accuracy of prediction than other SNPs.
Rath, Timo; Tontini, Gian E; Nägel, Andreas; Vieth, Michael; Zopf, Steffen; Günther, Claudia; Hoffman, Arthur; Neurath, Markus F; Neumann, Helmut
2015-10-22
Distal diminutive colorectal polyps are common, and accurate endoscopic prediction of hyperplastic or adenomatous polyp histology could reduce the procedural time, costs and potential risks associated with resection. In this study we assessed whether digital chromoendoscopy can accurately predict the histology of distal diminutive colorectal polyps according to the ASGE PIVI statement. In this prospective cohort study, 224 consecutive patients undergoing screening or surveillance colonoscopy were included. Real-time histology of 121 diminutive distal colorectal polyps was evaluated using high-definition endoscopy with digital chromoendoscopy, and the accuracy of predicting histology with digital chromoendoscopy was assessed. The overall accuracy of digital chromoendoscopy for prediction of adenomatous polyp histology was 90.1%. Sensitivity, specificity, positive and negative predictive values were 93.3%, 88.7%, 88.7%, and 93.2%, respectively. In high-confidence predictions, the accuracy increased to 96.3%, while sensitivity, specificity, positive and negative predictive values were 98.1%, 94.4%, 94.5%, and 98.1%, respectively. Surveillance intervals with digital chromoendoscopy were correctly predicted with >90% accuracy. High-definition endoscopy in combination with digital chromoendoscopy allowed real-time in vivo prediction of distal colorectal polyp histology and is accurate enough to leave distal colorectal polyps in place without resection or to resect and discard them without pathologic assessment. This approach has the potential to reduce the costs and risks associated with the redundant removal of diminutive colorectal polyps. ClinicalTrials NCT02217449.
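For reference, the reported sensitivity, specificity and predictive values all follow from a single 2x2 table. The sketch below recomputes them from hypothetical counts chosen only to roughly match the reported percentages for the 121 polyps; it is not the study's actual table.

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard 2x2 diagnostic-accuracy metrics."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return dict(sensitivity=sensitivity, specificity=specificity,
                ppv=ppv, npv=npv, accuracy=accuracy)

# Hypothetical counts for 121 polyps, chosen for illustration only
print(diagnostic_metrics(tp=56, fp=7, fn=4, tn=54))
```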
A Final Approach Trajectory Model for Current Operations
NASA Technical Reports Server (NTRS)
Gong, Chester; Sadovsky, Alexander
2010-01-01
Predicting accurate trajectories with limited intent information is a challenge faced by air traffic management decision support tools in operation today. One such tool is the FAA's Terminal Proximity Alert system which is intended to assist controllers in maintaining safe separation of arrival aircraft during final approach. In an effort to improve the performance of such tools, two final approach trajectory models are proposed; one based on polynomial interpolation, the other on the Fourier transform. These models were tested against actual traffic data and used to study effects of the key final approach trajectory modeling parameters of wind, aircraft type, and weight class, on trajectory prediction accuracy. Using only the limited intent data available to today's ATM system, both the polynomial interpolation and Fourier transform models showed improved trajectory prediction accuracy over a baseline dead reckoning model. Analysis of actual arrival traffic showed that this improved trajectory prediction accuracy leads to improved inter-arrival separation prediction accuracy for longer look ahead times. The difference in mean inter-arrival separation prediction error between the Fourier transform and dead reckoning models was 0.2 nmi for a look ahead time of 120 sec, a 33 percent improvement, with a corresponding 32 percent improvement in standard deviation.
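The paper's exact model formulation (basis order, wind handling) is not given in the abstract; the sketch below only illustrates the contrast it draws between dead reckoning and a low-order polynomial fit extrapolated over the look-ahead horizon, using made-up track data.

```python
import numpy as np

# Recent track history: time (s) and along-path position (nmi); hypothetical values.
t_hist = np.array([0., 10., 20., 30., 40., 50.])
x_hist = np.array([0.00, 0.62, 1.21, 1.77, 2.30, 2.80])   # aircraft decelerating on final

t_pred = np.arange(60., 130., 10.)                          # look-ahead times

# Dead reckoning: extrapolate with the most recent ground speed.
v_last = (x_hist[-1] - x_hist[-2]) / (t_hist[-1] - t_hist[-2])
x_dead = x_hist[-1] + v_last * (t_pred - t_hist[-1])

# Polynomial model: low-order least-squares fit to the history, then extrapolate.
coeffs = np.polyfit(t_hist, x_hist, deg=2)
x_poly = np.polyval(coeffs, t_pred)

for t, xd, xp in zip(t_pred, x_dead, x_poly):
    print(f"t+{t-50:.0f}s  dead reckoning {xd:.2f} nmi   polynomial {xp:.2f} nmi")
```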
Danner, Omar K; Hendren, Sandra; Santiago, Ethel; Nye, Brittany; Abraham, Prasad
2017-04-01
Enhancing the efficiency of diagnosis and treatment of severe sepsis by using physiologically based, predictive analytical strategies has not been fully explored. We hypothesized that assessment of the heart-rate-to-systolic-BP ratio significantly increases the timeliness and accuracy of sepsis prediction after emergency department (ED) presentation. We evaluated the records of 53,313 ED patients from a large, urban teaching hospital between January and June 2015. The HR-to-systolic ratio was compared to SIRS criteria for sepsis prediction. There were 884 patients with discharge diagnoses of sepsis, severe sepsis, and/or septic shock. Variations in three presenting variables, heart rate, systolic BP and temperature, were determined to be primary early predictors of sepsis, with 74% (654/884) accuracy compared to 34% (304/884) using SIRS criteria (p < 0.0001) in confirmed septic patients. Physiologically based predictive analytics improved the accuracy and expediency of sepsis identification via detection of variations in the HR-to-systolic ratio. This approach may lead to earlier sepsis workup and life-saving interventions. Copyright © 2017 Elsevier Inc. All rights reserved.
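A minimal sketch of the screening rule implied by the abstract, assuming the flag is simply the heart rate exceeding the systolic blood pressure; the exact cut-off and the role of temperature are not stated in the abstract and are treated here as assumptions.

```python
def sepsis_flag(heart_rate, systolic_bp, threshold=1.0):
    """Flag a presentation when the HR-to-systolic-BP ratio exceeds a cut-off.

    The threshold of 1.0 is an assumed illustrative value; the paper does not
    state the exact decision rule used.
    """
    return (heart_rate / systolic_bp) > threshold

print(sepsis_flag(heart_rate=118, systolic_bp=96))   # True  -> consider early sepsis workup
print(sepsis_flag(heart_rate=82,  systolic_bp=128))  # False
```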
Lopes, F B; Wu, X-L; Li, H; Xu, J; Perkins, T; Genho, J; Ferretti, R; Tait, R G; Bauck, S; Rosa, G J M
2018-02-01
Reliable genomic prediction of breeding values for quantitative traits requires a sufficient number of animals with genotypes and phenotypes in the training set. As of 31 October 2016, there were 3,797 Brangus animals with genotypes and phenotypes. These Brangus animals were genotyped using different commercial SNP chips. Of them, the largest group consisted of 1,535 animals genotyped by the GGP-LDV4 SNP chip. The remaining 2,262 genotypes were imputed to the SNP content of the GGP-LDV4 chip, so that the number of animals available for training the genomic prediction models was more than doubled. The present study showed that pooling animals with either original or imputed 40K SNP genotypes substantially increased genomic prediction accuracies for the ten traits. By supplementing imputed genotypes, the relative gains in genomic prediction accuracy on estimated breeding values (EBV) ranged from 12.60% to 31.27%, while the relative gains on de-regressed EBV were smaller (0.87%-18.75%). The present study also compared the performance of five genomic prediction models and two cross-validation methods. The five genomic models predicted EBV and de-regressed EBV of the ten traits similarly well. Of the two cross-validation methods, leave-one-out cross-validation maximized the number of animals available at the training stage for genomic prediction. Genomic prediction accuracy (GPA) for the ten quantitative traits was validated in 1,106 newly genotyped Brangus animals based on the SNP effects estimated in the previous set of 3,797 Brangus animals; these accuracies were slightly lower than the GPA in the original data. The present study was the first to leverage currently available genotype and phenotype resources in order to harness genomic prediction in Brangus beef cattle. © 2018 Blackwell Verlag GmbH.
Evaluation of Data-Driven Models for Predicting Solar Photovoltaics Power Output
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moslehi, Salim; Reddy, T. Agami; Katipamula, Srinivas
2017-09-10
This research was undertaken to evaluate different inverse models for predicting power output of solar photovoltaic (PV) systems under different practical scenarios. In particular, we have investigated whether PV power output prediction accuracy can be improved if module/cell temperature was measured in addition to climatic variables, and also the extent to which prediction accuracy degrades if solar irradiation is not measured on the plane of array but only on a horizontal surface. We have also investigated the significance of different independent or regressor variables, such as wind velocity and incident angle modifier in predicting PV power output and cell temperature. The inverse regression model forms have been evaluated both in terms of their goodness-of-fit, and their accuracy and robustness in terms of their predictive performance. Given the accuracy of the measurements, expected CV-RMSE of hourly power output prediction over the year varies between 3.2% and 8.6% when only climatic data are used. Depending on what type of measured climatic and PV performance data is available, different scenarios have been identified and the corresponding appropriate modeling pathways have been proposed. The corresponding models are to be implemented on a controller platform for optimum operational planning of microgrids and integrated energy systems.
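A minimal sketch of how such an inverse regression model can be scored with CV-RMSE, assuming a plain linear model on synthetic climatic drivers; the paper's actual model forms and data are not reproduced here.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold

rng = np.random.default_rng(1)
n = 2000
ghi  = rng.uniform(0, 1000, n)        # horizontal irradiance, W/m^2 (stand-in climatic driver)
tamb = rng.uniform(0, 40, n)          # ambient temperature, C
wind = rng.uniform(0, 10, n)          # wind speed, m/s (candidate regressor)
power = 0.18 * ghi * (1 - 0.004 * (tamb - 25)) + rng.normal(0, 15, n)   # synthetic PV output

X = np.column_stack([ghi, tamb, wind])
y = power

# k-fold cross-validated RMSE, expressed as a coefficient of variation (CV-RMSE)
errs = []
for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    model = LinearRegression().fit(X[train], y[train])
    resid = y[test] - model.predict(X[test])
    errs.append(np.sqrt(np.mean(resid ** 2)))
cv_rmse = 100 * np.mean(errs) / np.mean(y)
print(f"CV-RMSE of hourly power prediction: {cv_rmse:.1f} %")
```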
Analytic Guided-Search Model of Human Performance Accuracy in Target-Localization Search Tasks
NASA Technical Reports Server (NTRS)
Eckstein, Miguel P.; Beutter, Brent R.; Stone, Leland S.
2000-01-01
Current models of human visual search have extended the traditional serial/parallel search dichotomy. Two successful models for predicting human visual search are the Guided Search model and the Signal Detection Theory model. Although these models are inherently different, it has been difficult to compare them because the Guided Search model is designed to predict response time, while Signal Detection Theory models are designed to predict performance accuracy. Moreover, current implementations of the Guided Search model require Monte Carlo simulations, which makes quantitative fitting of the model's performance to human data computationally time consuming. We have extended the Guided Search model to predict human accuracy in target-localization search tasks. We have also developed analytic expressions that simplify simulation of the model to the evaluation of a small set of equations using only three free parameters. This new implementation and extension of the Guided Search model will enable direct quantitative comparisons with human performance in target-localization search experiments and with the predictions of Signal Detection Theory and other search accuracy models.
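The authors' analytic expressions are not given in the abstract; the sketch below shows only the standard signal-detection (max-rule) benchmark for M-alternative target localization that such accuracy models are compared against, P(correct) = ∫ φ(x − d′) Φ(x)^(M−1) dx under independent, unit-variance Gaussian responses, computed by numerical integration rather than Monte Carlo simulation.

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

def p_correct_localization(d_prime, n_locations):
    """Signal-detection (max-rule) probability of correctly localizing a target
    among n_locations, assuming independent unit-variance Gaussian responses."""
    integrand = lambda x: norm.pdf(x - d_prime) * norm.cdf(x) ** (n_locations - 1)
    p, _ = quad(integrand, -np.inf, np.inf)
    return p

for d in (0.5, 1.0, 2.0, 3.0):
    print(f"d' = {d:.1f}:  P(correct) = {p_correct_localization(d, n_locations=8):.3f}")
```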
Subjective Life Expectancy Among College Students.
Rodemann, Alyssa E; Arigo, Danielle
2017-09-14
Establishing healthy habits in college is important for long-term health. Despite existing health promotion efforts, many college students fail to meet recommendations for behaviors such as healthy eating and exercise, which may be due to low perceived risk for health problems. The goals of this study were to examine: (1) the accuracy of life expectancy predictions, (2) potential individual differences in accuracy (i.e., gender and conscientiousness), and (3) potential change in accuracy after inducing awareness of current health behaviors. College students from a small northeastern university completed an electronic survey, including demographics, initial predictions of their life expectancy, and their recent health behaviors. At the end of the survey, participants were asked to predict their life expectancy a second time. Their health data were then submitted to a validated online algorithm to generate calculated life expectancy. Participants significantly overestimated their initial life expectancy, and neither gender nor conscientiousness was related to the accuracy of these predictions. Further, subjective life expectancy decreased from initial to final predictions. These findings suggest that life expectancy perceptions present a unique-and potentially modifiable-psychological process that could influence college students' self-care.
Adeyekun, A A; Orji, M O
2014-04-01
To compare the predictive accuracy of foetal trans-cerebellar diameter (TCD) with that of other biometric parameters in the estimation of gestational age (GA). A cross-sectional study. The University of Benin Teaching Hospital, Nigeria. Four hundred and fifty healthy singleton pregnant women, between 14 and 42 weeks of gestation. Trans-cerebellar diameter (TCD), biparietal diameter (BPD), femur length (FL) and abdominal circumference (AC) values across the gestational age range studied. Correlation and predictive values of TCD compared to those of other biometric parameters. The range of values for TCD was 11.9-59.7 mm (mean = 34.2 ± 14.1 mm). TCD correlated more significantly with menstrual age than the other biometric parameters (r = 0.984, p = 0.000). TCD had a higher predictive accuracy (96.9%, ± 12 days) than BPD (93.8%, ± 14.1 days) and AC (92.7%, ± 15.3 days). TCD has a stronger predictive accuracy for gestational age compared to other routinely used foetal biometric parameters among Nigerian Africans.
2014-01-01
Background: Although the X chromosome is the second largest bovine chromosome, markers on the X chromosome are not used for genomic prediction in some countries and populations. In this study, we presented a method for computing genomic relationships using X chromosome markers, investigated the accuracy of imputation from a low density (7K) to the 54K SNP (single nucleotide polymorphism) panel, and compared the accuracy of genomic prediction with and without using X chromosome markers. Methods: The impact of considering X chromosome markers on prediction accuracy was assessed using data from Nordic Holstein bulls and different sets of SNPs: (a) the 54K SNPs for reference and test animals, (b) SNPs imputed from the 7K to the 54K SNP panel for test animals, (c) SNPs imputed from the 7K to the 54K panel for half of the reference animals, and (d) the 7K SNP panel for all animals. Beagle and Findhap were used for imputation. GBLUP (genomic best linear unbiased prediction) models with or without X chromosome markers and with or without a residual polygenic effect were used to predict genomic breeding values for 15 traits. Results: Averaged over the two imputation datasets, correlation coefficients between imputed and true genotypes for autosomal markers, pseudo-autosomal markers, and X-specific markers were 0.971, 0.831 and 0.935 when using Findhap, and 0.983, 0.856 and 0.937 when using Beagle. Estimated reliabilities of genomic predictions based on the imputed datasets using Findhap or Beagle were very close to those using the real 54K data. Genomic prediction using all markers gave slightly higher reliabilities than predictions without X chromosome markers. Based on our data which included only bulls, using a G matrix that accounted for sex-linked relationships did not improve prediction, compared with a G matrix that did not account for sex-linked relationships. A model that included a polygenic effect did not recover the loss of prediction accuracy from exclusion of X chromosome markers. Conclusions: The results from this study suggest that markers on the X chromosome contribute to accuracy of genomic predictions and should be used for routine genomic evaluation. PMID:25080199
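The paper's specific sex-linked G-matrix is not reproduced in the abstract; the sketch below only shows the standard VanRaden method-1 construction applied with and without X-chromosome markers, with an assumed 0/2 coding of X-specific markers for hemizygous bulls (an assumption for this example, not necessarily the paper's convention).

```python
import numpy as np

def vanraden_g_matrix(genotypes):
    """VanRaden method-1 genomic relationship matrix.

    genotypes: (n_animals, n_markers) array coded as 0/1/2 copies of an allele.
    X-specific markers in bulls are hemizygous and are coded 0/2 here so the
    same formula applies (an illustrative assumption).
    """
    p = genotypes.mean(axis=0) / 2.0              # allele frequencies
    Z = genotypes - 2.0 * p                       # centred genotypes
    denom = 2.0 * np.sum(p * (1.0 - p))
    return Z @ Z.T / denom

rng = np.random.default_rng(2)
auto = rng.integers(0, 3, size=(50, 1000))        # autosomal markers, 0/1/2
x_chr = 2 * rng.integers(0, 2, size=(50, 60))     # X-specific markers in bulls, coded 0/2
G_all = vanraden_g_matrix(np.hstack([auto, x_chr]))
G_auto = vanraden_g_matrix(auto)
print(G_all.shape, np.round(np.diag(G_all)[:3], 2), np.round(np.diag(G_auto)[:3], 2))
```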
NASA Astrophysics Data System (ADS)
Dyar, M. Darby; Fassett, Caleb I.; Giguere, Stephen; Lepore, Kate; Byrne, Sarah; Boucher, Thomas; Carey, CJ; Mahadevan, Sridhar
2016-09-01
This study uses 1356 spectra from 452 geologically-diverse samples, the largest suite of LIBS rock spectra ever assembled, to compare the accuracy of elemental predictions in models that use only spectral regions thought to contain peaks arising from the element of interest versus those that use information in the entire spectrum. Results show that for the elements Si, Al, Ti, Fe, Mg, Ca, Na, K, Ni, Mn, Cr, Co, and Zn, univariate predictions based on single emission lines are by far the least accurate, no matter how carefully the region of channels/wavelengths is chosen and despite the prominence of the selected emission lines. An automated iterative algorithm was developed to sweep through all 5485 channels of data and select the single region that produces the optimal prediction accuracy for each element using univariate analysis. For the eight major elements, use of this technique results in a 35% improvement in prediction accuracy; for minors, the improvement is 13%. The best wavelength region choice for any given univariate analysis is likely to be an inherent property of the specific training set that cannot be generalized. In comparison, multivariate analysis using partial least-squares (PLS) almost universally outperforms univariate analysis. PLS using all the same wavelength regions from the univariate analysis produces results that improve in accuracy by 63% for major elements and 3% for minor elements. This difference is likely a reflection of signal-to-noise ratios, which are far better for major elements than for minor elements and likely limit their prediction accuracy by any technique. We also compare predictions using specific wavelength ranges for each element against those employing all channels. Masking out channels to focus on emission lines from a specific element decreases prediction accuracy for major elements but is useful for minor elements with low signals and proportionally much higher noise; use of PLS rather than univariate analysis is still recommended. Finally, we tested the generalizability of our results by analyzing a second data set from a different instrument. Overall prediction accuracies for the mixed data sets are higher than for either set alone for all major and minor elements except Ni, Cr, and Co, where results are roughly comparable.
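A small synthetic example of the univariate-versus-PLS comparison: one calibration uses only a narrow channel window around a single emission peak, the other uses all channels through partial least-squares. The spectra, peak positions and component count below are invented for illustration and are not the study's data.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(3)
n_samples, n_channels = 300, 500
concentration = rng.uniform(0, 10, n_samples)          # wt% of some element

# Synthetic spectra: one "element" peak plus an interfering peak and noise.
channels = np.arange(n_channels)
peak = np.exp(-0.5 * ((channels - 220) / 3.0) ** 2)
interference = np.exp(-0.5 * ((channels - 260) / 15.0) ** 2)
spectra = (np.outer(concentration, peak)
           + np.outer(rng.uniform(0, 10, n_samples), interference)
           + rng.normal(0, 0.3, (n_samples, n_channels)))

def rmse(y, yhat):
    return np.sqrt(np.mean((y - yhat) ** 2))

# Univariate calibration: regress concentration on a single peak-area window.
window = spectra[:, 215:226].sum(axis=1)
slope, intercept = np.polyfit(window, concentration, 1)
print("univariate RMSE:", round(rmse(concentration, slope * window + intercept), 3))

# Multivariate PLS using all channels, cross-validated.
pls_pred = cross_val_predict(PLSRegression(n_components=8), spectra, concentration, cv=5)
print("PLS RMSE:       ", round(rmse(concentration, pls_pred.ravel()), 3))
```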
Bolormaa, S; Pryce, J E; Kemper, K; Savin, K; Hayes, B J; Barendse, W; Zhang, Y; Reich, C M; Mason, B A; Bunch, R J; Harrison, B E; Reverter, A; Herd, R M; Tier, B; Graser, H-U; Goddard, M E
2013-07-01
The aim of this study was to assess the accuracy of genomic predictions for 19 traits including feed efficiency, growth, and carcass and meat quality traits in beef cattle. The 10,181 cattle in our study had real or imputed genotypes for 729,068 SNP, although not all cattle were measured for all traits. Animals included Bos taurus, Brahman, composite, and crossbred animals. Genomic EBV (GEBV) were calculated using 2 methods of genomic prediction [BayesR and genomic BLUP (GBLUP)] either using a common training dataset for all breeds or using a training dataset comprising only animals of the same breed. Accuracies of GEBV were assessed using 5-fold cross-validation. The accuracy of genomic prediction varied by trait and by method. Traits with a large number of recorded and genotyped animals and with high heritability gave the greatest accuracy of GEBV. Using GBLUP, the average accuracy was 0.27 across traits and breeds, but the accuracies between breeds and between traits varied widely. When the training population was restricted to animals from the same breed as the validation population, GBLUP accuracies declined by an average of 0.04. The greatest decline in accuracy was found for the 4 composite breeds. The BayesR accuracies were greater by an average of 0.03 than GBLUP accuracies, particularly for traits with known mutations of moderate to large effect segregating. The accuracies of 0.43 to 0.48 for IGF-I traits were among the greatest in the study. Although accuracies are low compared with those observed in dairy cattle, genomic selection would still be beneficial for traits that are hard to improve by conventional selection, such as tenderness and residual feed intake. BayesR identified many of the same quantitative trait loci as a genomewide association study but appeared to map them more precisely. All traits appear to be highly polygenic with thousands of SNP independently associated with each trait.
Assessing Predictive Properties of Genome-Wide Selection in Soybeans
Xavier, Alencar; Muir, William M.; Rainey, Katy Martin
2016-01-01
Many economically important traits in plant breeding have low heritability or are difficult to measure. For these traits, genomic selection has attractive features and may boost genetic gains. Our goal was to evaluate alternative scenarios to implement genomic selection for yield components in soybean (Glycine max L. merr). We used a nested association panel with cross validation to evaluate the impacts of training population size, genotyping density, and prediction model on the accuracy of genomic prediction. Our results indicate that training population size was the factor most relevant to improvement in genome-wide prediction, with greatest improvement observed in training sets up to 2000 individuals. We discuss assumptions that influence the choice of the prediction model. Although alternative models had minor impacts on prediction accuracy, the most robust prediction model was the combination of reproducing kernel Hilbert space regression and BayesB. Higher genotyping density marginally improved accuracy. Our study finds that breeding programs seeking efficient genomic selection in soybeans would best allocate resources by investing in a representative training set. PMID:27317786
Assessment of Protein Side-Chain Conformation Prediction Methods in Different Residue Environments
Peterson, Lenna X.; Kang, Xuejiao; Kihara, Daisuke
2016-01-01
Computational prediction of side-chain conformation is an important component of protein structure prediction. Accurate side-chain prediction is crucial for practical applications of protein structure models that need atomic detailed resolution such as protein and ligand design. We evaluated the accuracy of eight side-chain prediction methods in reproducing the side-chain conformations of experimentally solved structures deposited to the Protein Data Bank. Prediction accuracy was evaluated for a total of four different structural environments (buried, surface, interface, and membrane-spanning) in three different protein types (monomeric, multimeric, and membrane). Overall, the highest accuracy was observed for buried residues in monomeric and multimeric proteins. Notably, side-chains at protein interfaces and membrane-spanning regions were better predicted than surface residues even though the methods did not all use multimeric and membrane proteins for training. Thus, we conclude that the current methods are as practically useful for modeling protein docking interfaces and membrane-spanning regions as for modeling monomers. PMID:24619909
A new software for prediction of femoral neck fractures.
Testi, Debora; Cappello, Angelo; Sgallari, Fiorella; Rumpf, Martin; Viceconti, Marco
2004-08-01
Femoral neck fractures are an important clinical, social and economic problem. Although many attempts have been made to improve the accuracy of fracture risk prediction, retrospective studies have shown that the standard clinical protocol achieves an accuracy of about 65%. A new procedure was developed that includes in the prediction not only bone mineral density but also geometric and femoral strength information, achieving an accuracy of about 80% in a previous retrospective study. The aim of the present work was to re-engineer these research-based procedures and develop real-time software for the prediction of femoral fracture risk. The result was an efficient, repeatable and easy-to-use software tool for the evaluation of femoral neck fracture risk, to be inserted into daily clinical practice as a useful aid for improving fracture prediction.
Predicting Earth orientation changes from global forecasts of atmosphere-hydrosphere dynamics
NASA Astrophysics Data System (ADS)
Dobslaw, Henryk; Dill, Robert
2018-02-01
Effective Angular Momentum (EAM) functions obtained from global numerical simulations of atmosphere, ocean, and land surface dynamics are routinely processed by the Earth System Modelling group at Deutsches GeoForschungsZentrum. EAM functions are available since January 1976 with up to 3 h temporal resolution. Additionally, 6 days-long EAM forecasts are routinely published every day. Based on hindcast experiments with 305 individual predictions distributed over 15 months, we demonstrate that EAM forecasts improve the prediction accuracy of the Earth Orientation Parameters at all forecast horizons between 1 and 6 days. At day 6, prediction accuracy improves down to 1.76 mas for the terrestrial pole offset, and 2.6 mas for Δ UT1, which correspond to an accuracy increase of about 41% over predictions published in Bulletin A by the International Earth Rotation and Reference System Service.
Probability of criminal acts of violence: a test of jury predictive accuracy.
Reidy, Thomas J; Sorensen, Jon R; Cunningham, Mark D
2013-01-01
The ability of capital juries to accurately predict future prison violence at the sentencing phase of aggravated murder trials was examined through retrospective review of the disciplinary records of 115 male inmates sentenced to either life (n = 65) or death (n = 50) in Oregon from 1985 through 2008, with a mean post-conviction time at risk of 15.3 years. Violent prison behavior was completely unrelated to predictions made by capital jurors, with bidirectional accuracy simply reflecting the base rate of assaultive misconduct in the group. Rejection of the special issue predicting future violence enjoyed 90% accuracy. Conversely, predictions that future violence was probable had 90% error rates. More than 90% of the assaultive rule violations committed by these offenders resulted in no harm or only minor injuries. Copyright © 2013 John Wiley & Sons, Ltd.
Researches on High Accuracy Prediction Methods of Earth Orientation Parameters
NASA Astrophysics Data System (ADS)
Xu, X. Q.
2015-09-01
The Earth rotation reflects the coupling process among the solid Earth, atmosphere, oceans, mantle, and core of the Earth on multiple spatial and temporal scales. The Earth rotation can be described by the Earth orientation parameters, abbreviated as EOP (mainly including the two polar motion components PM_X and PM_Y, and the variation in the length of day ΔLOD). The EOP are crucial in the transformation between the terrestrial and celestial reference systems, and have important applications in many areas such as deep space exploration, satellite precise orbit determination, and astrogeodynamics. However, the EOP products obtained by space geodetic technologies are generally delayed by several days to two weeks. The growing demands of modern space navigation make high-accuracy EOP prediction a worthy topic. This thesis addresses the following three aspects with the purpose of improving EOP forecast accuracy. (1) We analyze the relation between the length of the basic data series and the EOP forecast accuracy, and compare the EOP prediction accuracy of the linear autoregressive (AR) model and the nonlinear artificial neural network (ANN) method by performing least squares (LS) extrapolations. The results show that high-precision EOP forecasts can be realized by appropriate selection of the basic data series length according to the required time span of the prediction: for short-term prediction, the basic data series should be shorter, while for long-term prediction, the series should be longer. The analysis also shows that the LS+AR model is more suitable for short-term forecasts, while the LS+ANN model shows advantages in medium- and long-term forecasts. (2) We develop for the first time a new method which combines the autoregressive model and the Kalman filter (AR+Kalman) in short-term EOP prediction. The equations of observation and state are established using the EOP series and the autoregressive coefficients, respectively, and are used to improve/re-evaluate the AR model. Compared to the single AR model, the AR+Kalman method performs better in the prediction of UT1-UTC and ΔLOD, and the improvement in the prediction of polar motion is significant. (3) Following the successful Earth Orientation Parameter Prediction Comparison Campaign (EOP PCC), the Earth Orientation Parameter Combination of Prediction Pilot Project (EOPC PPP) was sponsored in 2010. As one of the participants from China, we update and submit short- and medium-term (1 to 90 days) EOP predictions every day. From the current comparative statistics, our prediction accuracy is at a medium level internationally. We will carry out more innovative research to improve the EOP forecast accuracy and enhance our level in EOP forecasting.
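A minimal numpy-only sketch of the LS+AR idea described in point (1): fit a least-squares model with a trend plus annual and Chandler-period harmonics, then fit an autoregressive model to the residuals and extrapolate both parts. The series, periods and AR order here are illustrative, not the thesis' actual data or tuning.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic daily polar-motion-like series: trend + annual + Chandler-like terms + AR(1) noise.
days = np.arange(4000)
annual, chandler = 365.25, 433.0
signal = (5 + 0.001 * days
          + 80 * np.sin(2 * np.pi * days / annual)
          + 120 * np.sin(2 * np.pi * days / chandler + 0.7))
noise = np.zeros_like(days, dtype=float)
for t in range(1, len(days)):
    noise[t] = 0.9 * noise[t - 1] + rng.normal(0, 3)
series = signal + noise                                  # units: mas (illustrative)

train, horizon = series[:-30], 30
t = np.arange(len(train))

# 1) LS part: fit bias, trend and the two periodic terms.
def design(t):
    return np.column_stack([np.ones_like(t, dtype=float), t,
                            np.sin(2 * np.pi * t / annual), np.cos(2 * np.pi * t / annual),
                            np.sin(2 * np.pi * t / chandler), np.cos(2 * np.pi * t / chandler)])

beta, *_ = np.linalg.lstsq(design(t), train, rcond=None)
resid = train - design(t) @ beta

# 2) AR part: fit AR(p) to the LS residuals by least squares and extrapolate.
p = 5
lagged = np.column_stack([resid[p - k - 1:len(resid) - k - 1] for k in range(p)])
phi, *_ = np.linalg.lstsq(lagged, resid[p:], rcond=None)

future_resid = list(resid[-p:])
for _ in range(horizon):
    future_resid.append(np.dot(phi, future_resid[-1:-p - 1:-1]))

t_future = np.arange(len(train), len(train) + horizon)
forecast = design(t_future) @ beta + np.array(future_resid[p:])
print("mean abs error over 30-day horizon:",
      round(np.mean(np.abs(forecast - series[-30:])), 2), "mas")
```

The LS step removes the deterministic part so the AR model only has to capture the short-memory stochastic residual, which is the rationale behind the LS+AR combination discussed above.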
Methods for evaluating the predictive accuracy of structural dynamic models
NASA Technical Reports Server (NTRS)
Hasselman, Timothy K.; Chrostowski, Jon D.
1991-01-01
Modeling uncertainty is defined in terms of the difference between predicted and measured eigenvalues and eigenvectors. Data compiled from 22 sets of analysis/test results were used to create statistical databases for large truss-type space structures and both pretest and posttest models of conventional satellite-type space structures. Modeling uncertainty is propagated through the model to produce intervals of uncertainty on frequency response functions, both amplitude and phase. This methodology was used successfully to evaluate the predictive accuracy of several structures, including the NASA CSI Evolutionary Structure tested at Langley Research Center. Test measurements for this structure were for the most part within the ±1-sigma intervals of predicted accuracy, demonstrating the validity of the methodology and computer code.
Zhou, L; Lund, M S; Wang, Y; Su, G
2014-08-01
This study investigated genomic predictions across Nordic Holstein and Nordic Red using various genomic relationship matrices. Different sources of information, such as consistencies of linkage disequilibrium (LD) phase and marker effects, were used to construct the genomic relationship matrices (G-matrices) across these two breeds. A single-trait genomic best linear unbiased prediction (GBLUP) model and a two-trait GBLUP model were used for single-breed and two-breed genomic predictions. The data included 5215 Nordic Holstein bulls and 4361 Nordic Red bulls, the latter composed of three populations: Danish Red, Swedish Red and Finnish Ayrshire. The bulls were genotyped with a 50,000-SNP chip. Using the two-breed predictions with a joint Nordic Holstein and Nordic Red reference population, accuracies increased slightly for all traits in Nordic Red, but only for some traits in Nordic Holstein. Among the three subpopulations of Nordic Red, accuracies increased more for Danish Red than for Swedish Red and Finnish Ayrshire. This is because closer genetic relationships exist between Danish Red and Nordic Holstein. Among Danish Red, individuals with higher genomic relationship coefficients with Nordic Holstein showed greater increases in accuracy in the two-breed predictions. Weighting the two-breed G-matrices by LD phase consistencies, marker effects or both did not further improve accuracies of the two-breed predictions. © 2014 Blackwell Verlag GmbH.
Dang, Mia; Ramsaran, Kalinda D; Street, Melissa E; Syed, S Noreen; Barclay-Goddard, Ruth; Stratford, Paul W; Miller, Patricia A
2011-01-01
To estimate the predictive accuracy and clinical usefulness of the Chedoke-McMaster Stroke Assessment (CMSA) predictive equations. A longitudinal prognostic study using historical data obtained from 104 patients admitted post cerebrovascular accident was undertaken. Data were abstracted for all patients undergoing rehabilitation post stroke who also had documented admission and discharge CMSA scores. Published predictive equations were used to determine predicted outcomes. To determine the accuracy and clinical usefulness of the predictive model, shrinkage coefficients and predictions with 95% confidence bands were calculated. Complete data were available for 74 patients with a mean age of 65.3±12.4 years. The shrinkage values for the six Impairment Inventory (II) dimensions varied from -0.05 to 0.09; the shrinkage value for the Activity Inventory (AI) was 0.21. The error associated with predictive values was greater than ±1.5 stages for the II dimensions and greater than ±24 points for the AI. This study shows that the large error associated with the predictions (as defined by the confidence band) for the CMSA II and AI limits their clinical usefulness as a predictive measure. Further research to establish predictive models using alternative statistical procedures is warranted.
Development of machine learning models for diagnosis of glaucoma.
Kim, Seong Jae; Cho, Kyong Jin; Oh, Sejong
2017-01-01
The study aimed to develop machine learning models with strong prediction power and interpretability for the diagnosis of glaucoma, based on retinal nerve fiber layer (RNFL) thickness and visual field (VF). We collected various candidate features from examinations of RNFL thickness and VF, and also derived synthesized features from the original features. We then selected the features best suited for classification (diagnosis) through feature evaluation. We used 100 cases of data as a test dataset and 399 cases of data as a training and validation dataset. To develop the glaucoma prediction model, we considered four machine learning algorithms: C5.0, random forest (RF), support vector machine (SVM), and k-nearest neighbor (KNN). We repeatedly composed a learning model using the training dataset, evaluated it using the validation dataset, and finally selected the learning model that produced the highest validation accuracy. We analyzed the quality of the models using several measures. The random forest model shows the best performance, while the C5.0, SVM, and KNN models show similar accuracy. In the random forest model, the classification accuracy is 0.98, sensitivity is 0.983, specificity is 0.975, and AUC is 0.979. The developed prediction models show high accuracy, sensitivity, specificity, and AUC in classifying glaucomatous and healthy eyes, and can be used to predict glaucoma from unseen examination records. Clinicians may reference the prediction results and be able to make better decisions. Multiple learning models may also be combined to increase prediction accuracy. The C5.0 model includes decision rules for prediction and can be used to explain the reasons for specific predictions.
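A hedged sketch of the random-forest pathway described above, using synthetic RNFL/VF features and the same 399/100 train-test split sizes; the feature set, label rule and hyperparameters are stand-ins, not those of the study.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score, confusion_matrix
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
n = 499                                    # 399 train/validation + 100 test, as in the study
rnfl = rng.normal(95, 12, n)               # synthetic RNFL thickness feature (micrometres)
vf_md = rng.normal(-1, 3, n)               # synthetic visual-field mean deviation (dB)
# Hypothetical noisy label rule, for illustration only
glaucoma = ((rnfl < 85) & (vf_md < -2)).astype(int) ^ (rng.random(n) < 0.03).astype(int)

X = np.column_stack([rnfl, vf_md])
X_tr, X_te, y_tr, y_te = train_test_split(X, glaucoma, test_size=100, random_state=0,
                                          stratify=glaucoma)

clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
prob = clf.predict_proba(X_te)[:, 1]
pred = (prob >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_te, pred).ravel()
print("accuracy   ", round((tp + tn) / len(y_te), 3))
print("sensitivity", round(tp / (tp + fn), 3))
print("specificity", round(tn / (tn + fp), 3))
print("AUC        ", round(roc_auc_score(y_te, prob), 3))
```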
Ahnlide, I; Zalaudek, I; Nilsson, F; Bjellerup, M; Nielsen, K
2016-10-01
Prediction of the histopathological subtype of basal cell carcinoma (BCC) is important for tailoring optimal treatment, especially in patients with suspected superficial BCC (sBCC). To assess the accuracy of the preoperative prediction of subtypes of BCC in clinical practice, to evaluate whether dermoscopic examination enhances accuracy and to find dermoscopic criteria for discriminating sBCC from other subtypes. The main presurgical diagnosis was compared with the histopathological, postoperative diagnosis of routinely excised skin tumours in a predominantly fair-skinned patient cohort of northern Europe during a study period of 3 years (2011-13). The study period was split in two: during period 1, dermoscopy was optional (850 cases with a pre- or postoperative diagnosis of BCC), while during period 2 (after an educational dermoscopic update) dermoscopy was mandatory (651 cases). A classification tree based on clinical and dermoscopic features for prediction of sBCC was applied. For a total of 3544 excised skin tumours, the sensitivity for the diagnosis of BCC (any subtype) was 93·3%, specificity 91·8%, and the positive predictive value (PPV) 89·0%. The diagnostic accuracy as well as the PPV and the positive likelihood ratio for sBCC were significantly higher when dermoscopy was mandatory. A flat surface and multiple small erosions predicted sBCC. The study shows a high accuracy for an overall diagnosis of BCC and increased accuracy in prediction of sBCC for the period when dermoscopy was applied in all cases. The most discriminating findings for sBCC, based on clinical and dermoscopic features in this fair-skinned population, were a flat surface and multiple small erosions. © 2016 British Association of Dermatologists.
Steeg, Sarah; Quinlivan, Leah; Nowland, Rebecca; Carroll, Robert; Casey, Deborah; Clements, Caroline; Cooper, Jayne; Davies, Linda; Knipe, Duleeka; Ness, Jennifer; O'Connor, Rory C; Hawton, Keith; Gunnell, David; Kapur, Nav
2018-04-25
Risk scales are used widely in the management of patients presenting to hospital following self-harm. However, there is evidence that their diagnostic accuracy in predicting repeat self-harm is limited. Their predictive accuracy in population settings, and in identifying those at highest risk of suicide is not known. We compared the predictive accuracy of the Manchester Self-Harm Rule (MSHR), ReACT Self-Harm Rule (ReACT), SAD PERSONS Scale (SPS) and Modified SAD PERSONS Scale (MSPS) in an unselected sample of patients attending hospital following self-harm. Data on 4000 episodes of self-harm presenting to Emergency Departments (ED) between 2010 and 2012 were obtained from four established monitoring systems in England. Episodes were assigned a risk category for each scale and followed up for 6 months. The episode-based repeat rate was 28% (1133/4000) and the incidence of suicide was 0.5% (18/3962). The MSHR and ReACT performed with high sensitivity (98% and 94% respectively) and low specificity (15% and 23%). The SPS and the MSPS performed with relatively low sensitivity (24-29% and 9-12% respectively) and high specificity (76-77% and 90%). The area under the curve was 71% for both MSHR and ReACT, 51% for SPS and 49% for MSPS. Differences in predictive accuracy by subgroup were small. The scales were less accurate at predicting suicide than repeat self-harm. The scales failed to accurately predict repeat self-harm and suicide. The findings support existing clinical guidance not to use risk classification scales alone to determine treatment or predict future risk.
Turan, Bulent; Goldstein, Mary K.; Garber, Alan M.; Carstensen, Laura L.
2011-01-01
Objective: At times caregivers make life-and-death decisions for loved ones. Yet very little is known about the factors that make caregivers more or less accurate as surrogate decision makers for their loved ones. Previous research suggests that in low stress situations, individuals with high attachment-related anxiety are attentive to their relationship partners' wishes and concerns, but get overwhelmed by stressful situations. Individuals with high attachment-related avoidance are likely to avoid intimacy and stressful situations altogether. We hypothesized that both of these insecure attachment patterns limit surrogates' ability to process distressing information and should therefore be associated with lower accuracy in the stressful task of predicting their loved ones' end-of-life health care wishes. Methods: Older patients visiting a medical clinic stated their preferences toward end-of-life health care in different health contexts and surrogate decision makers independently predicted those preferences. For comparison purposes, surrogates also predicted patients' perceptions of everyday living conditions so that surrogates' accuracy of their loved ones' perceptions in non-stressful situations could be assessed. Results: Surrogates high on either type of insecure attachment dimension were less accurate in predicting their loved ones' end-of-life health care wishes. Interestingly, even though surrogates' attachment-related anxiety was associated with lower accuracy of end-of-life health care wishes of patients, it was associated with higher accuracy in the non-stressful task of predicting their everyday living conditions. Conclusions: Attachment orientation plays an important role in accuracy about loved ones' end-of-life health care wishes. Interventions may target emotion regulation strategies associated with insecure attachment orientations. PMID:22081941
Wogan, Guinevere O. U.
2016-01-01
A primary assumption of environmental niche models (ENMs) is that models are both accurate and transferable across geography or time; however, recent work has shown that models may be accurate but not highly transferable. While some of this is due to modeling technique, individual species' ecologies may also underlie this phenomenon. Life history traits certainly influence the accuracy of predictive ENMs, but their impact on model transferability is less understood. This study investigated how life history traits influence the predictive accuracy and transferability of ENMs using historically calibrated models for birds. In this study I used historical occurrence and climate data (1950-1990s) to build models for a sample of birds, and then projected them forward to the 'future' (1960-1990s). The models were then validated against models generated from occurrence data at that 'future' time. Internal and external validation metrics, metrics assessing transferability, and Generalized Linear Models were used to identify life history traits that were significant predictors of accuracy and transferability. This study found that the predictive ability of ENMs differs with regard to life history characteristics such as range, migration, and habitat, and that the rarity versus commonness of a species affects the predicted stability and overlap, and hence the transferability, of projected models. Projected ENMs with both high accuracy and transferability scores still sometimes suffered from over- or under-predicted species ranges. Life history traits certainly influenced the accuracy of predictive ENMs for birds, but while aspects of geographic range impact model transferability, the mechanisms underlying this are less understood. PMID:26959979
NASA Astrophysics Data System (ADS)
Qian, Xiaoshan
2018-01-01
Traditional models of evaporation process parameters suffer from large prediction errors because the errors are continuous and cumulative. To address this, an adaptive particle swarm neural network forecasting method is proposed on the basis of the process, combined with an autoregressive moving average (ARMA) error-correction procedure that compensates the neural network predictions to improve prediction accuracy. The method is validated with production data from an alumina plant evaporation process. Compared with the traditional model, the prediction accuracy of the new model is greatly improved, and it can be used to predict the dynamic composition of sodium aluminate solution during evaporation.
NASA Astrophysics Data System (ADS)
Tao, Yulong; Miao, Yunshui; Han, Jiaqi; Yan, Feiyun
2018-05-01
Aiming at the low accuracy of traditional forecasting methods such as linear regression, this paper presents a wavelet neural network method for predicting the displacement of a bridge steel box girder. Compared with traditional forecasting methods, this scheme has better local characteristics and learning ability, which greatly improves the prediction of deformation. Analysis of a case study shows that the wavelet-neural-network prediction of girder deformation is more accurate than that of the traditional method and superior to BP neural network predictions, meeting the practical demands of engineering design.
NASA Astrophysics Data System (ADS)
Davenport, F., IV; Harrison, L.; Shukla, S.; Husak, G. J.; Funk, C. C.
2017-12-01
We evaluate the predictive accuracy of an ensemble of empirical model specifications that use earth observation data to predict sub-national grain yields in Mexico and East Africa. Products that are actively used for seasonal drought monitoring are tested as yield predictors. Our research is driven by the fact that East Africa is a region where decisions regarding agricultural production are critical to preventing the loss of economic livelihoods and human life. Regional grain yield forecasts can be used to anticipate availability and prices of key staples, which in turn can inform decisions about targeting humanitarian response such as food aid. Our objective is to identify, for a given region, grain, and time of year, what type of model and/or earth observation data can most accurately predict end-of-season yields. We fit a set of models to county-level panel data from Mexico, Kenya, Sudan, South Sudan, and Somalia. We then examine out-of-sample predictive accuracy using various linear and non-linear models that incorporate spatial and time-varying coefficients. We compare accuracy within and across models that use predictor variables from remotely sensed measures of precipitation, temperature, soil moisture, and other land surface processes. We also examine at what point in the season a given model or product is most useful for determining predictive accuracy. Finally, we compare predictive accuracy across a variety of agricultural regimes, including high-intensity irrigated commercial agriculture and rain-fed subsistence-level farms.
Assessing genomic selection prediction accuracy in a dynamic barley breeding
USDA-ARS?s Scientific Manuscript database
Genomic selection is a method to improve quantitative traits in crops and livestock by estimating breeding values of selection candidates using phenotype and genome-wide marker data sets. Prediction accuracy has been evaluated through simulation and cross-validation; however, validation based on prog...
Predictive Accuracy of Exercise Stress Testing the Healthy Adult.
ERIC Educational Resources Information Center
Lamont, Linda S.
1981-01-01
Exercise stress testing provides information on the aerobic capacity, heart rate, and blood pressure responses to graded exercises of a healthy adult. The reliability of exercise tests as a diagnostic procedure is discussed in relation to sensitivity and specificity and predictive accuracy. (JN)
Linkage disequilibrium among commonly genotyped SNP and variants detected from bull sequence
USDA-ARS?s Scientific Manuscript database
Genomic prediction utilizing causal variants could increase selection accuracy above that achieved with SNP genotyped by commercial assays. A number of variants detected from sequencing influential sires are likely to be causal, but noticeable improvements in prediction accuracy using imputed sequen...
Bulashevska, Alla; Eils, Roland
2006-06-14
The subcellular location of a protein is closely related to its function. It would be worthwhile to develop a method to predict the subcellular location of a given protein when only its amino acid sequence is known. Although many efforts have been made to predict subcellular location from sequence information alone, further research is needed to improve the accuracy of prediction. A novel method called HensBC is introduced to predict protein subcellular location. HensBC is a recursive algorithm which constructs a hierarchical ensemble of classifiers. The classifiers used are Bayesian classifiers based on Markov chain models. We tested our method on six different datasets, among them a Gram-negative bacteria dataset, a dataset for discriminating outer membrane proteins, and an apoptosis proteins dataset. We observed that our method can predict the subcellular location with high accuracy. Another advantage of the proposed method is that it can improve the accuracy of prediction for classes with few sequences in training and is therefore useful for datasets with an imbalanced distribution of classes. This study introduces an algorithm which uses only the primary sequence of a protein to predict its subcellular location. The proposed recursive scheme represents an interesting methodology for learning and combining classifiers. The method is computationally efficient and competitive with previously reported approaches in terms of prediction accuracy, as empirical results indicate. The code for the software is available upon request.
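HensBC's full hierarchical ensemble is not specified in the abstract; the sketch below shows only the base component it builds on, a Bayesian classifier over first-order Markov chain models of the amino-acid sequence, with toy training data and hypothetical class labels.

```python
import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
IDX = {a: i for i, a in enumerate(AMINO_ACIDS)}

def fit_markov(sequences, alpha=1.0):
    """First-order Markov model of a sequence class: transition matrix with
    Laplace smoothing (alpha)."""
    counts = np.full((20, 20), alpha)
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[IDX[a], IDX[b]] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def log_likelihood(seq, trans):
    return sum(np.log(trans[IDX[a], IDX[b]]) for a, b in zip(seq, seq[1:]))

def classify(seq, models, priors):
    """Bayesian decision: argmax over classes of log prior + log likelihood."""
    scores = {c: np.log(priors[c]) + log_likelihood(seq, m) for c, m in models.items()}
    return max(scores, key=scores.get)

# Toy training data (hypothetical labels, not a real localization dataset)
train = {"cytoplasm": ["MKKLLA", "MKLAVA", "MKKAVL"],
         "membrane":  ["LLIVVF", "VVLLFI", "ILVVFL"]}
models = {c: fit_markov(seqs) for c, seqs in train.items()}
priors = {c: len(s) / sum(len(s) for s in train.values()) for c, s in train.items()}
print(classify("MKKLAV", models, priors))   # expected: cytoplasm
```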
Catto, James W F; Linkens, Derek A; Abbod, Maysam F; Chen, Minyou; Burton, Julian L; Feeley, Kenneth M; Hamdy, Freddie C
2003-09-15
New techniques for the prediction of tumor behavior are needed, because statistical analysis has a poor accuracy and is not applicable to the individual. Artificial intelligence (AI) may provide these suitable methods. Whereas artificial neural networks (ANN), the best-studied form of AI, have been used successfully, their hidden networks remain an obstacle to their acceptance. Neuro-fuzzy modeling (NFM), another AI method, has a transparent functional layer and is without many of the drawbacks of ANN. We have compared the predictive accuracies of NFM, ANN, and traditional statistical methods for the behavior of bladder cancer. Experimental molecular biomarkers, including p53 and the mismatch repair proteins, and conventional clinicopathological data were studied in a cohort of 109 patients with bladder cancer. For all three of the methods, models were produced to predict the presence and timing of a tumor relapse. Both methods of AI predicted relapse with an accuracy ranging from 88% to 95%. This was superior to statistical methods (71-77%; P < 0.0006). NFM appeared better than ANN at predicting the timing of relapse (P = 0.073). The use of AI can accurately predict cancer behavior. NFM has a similar or superior predictive accuracy to ANN. However, unlike the impenetrable "black-box" of a neural network, the rules of NFM are transparent, enabling validation from clinical knowledge and the manipulation of input variables to allow exploratory predictions. This technique could be used widely in a variety of areas of medicine.
Niioka, Takenori; Uno, Tsukasa; Yasui-Furukori, Norio; Takahata, Takenori; Shimizu, Mikiko; Sugawara, Kazunobu; Tateishi, Tomonori
2007-04-01
The aim of this study was to determine the pharmacokinetics of low-dose nedaplatin combined with paclitaxel and radiation therapy in patients having non-small-cell lung carcinoma and establish the optimal dosage regimen for low-dose nedaplatin. We also evaluated predictive accuracy of reported formulas to estimate the area under the plasma concentration-time curve (AUC) of low-dose nedaplatin. A total of 19 patients were administered a constant intravenous infusion of 20 mg/m(2) body surface area (BSA) nedaplatin for an hour, and blood samples were collected at 1, 2, 3, 4, 6, 8, and 19 h after the administration. Plasma concentrations of unbound platinum were measured, and the actual value of platinum AUC (actual AUC) was calculated based on these data. The predicted value of platinum AUC (predicted AUC) was determined by three predictive methods reported in previous studies, consisting of Bayesian method, limited sampling strategies with plasma concentration at a single time point, and simple formula method (SFM) without measured plasma concentration. Three error indices, mean prediction error (ME, measure of bias), mean absolute error (MAE, measure of accuracy), and root mean squared prediction error (RMSE, measure of precision), were obtained from the difference between the actual and the predicted AUC, to compare the accuracy between the three predictive methods. The AUC showed more than threefold inter-patient variation, and there was a favorable correlation between nedaplatin clearance and creatinine clearance (Ccr) (r = 0.832, P < 0.01). In three error indices, MAE and RMSE showed significant difference between the three AUC predictive methods, and the method of SFM had the most favorable results, in which %ME, %MAE, and %RMSE were 5.5, 10.7, and 15.4, respectively. The dosage regimen of low-dose nedaplatin should be established based on Ccr rather than on BSA. Since prediction accuracy of SFM, which did not require measured plasma concentration, was most favorable among the three methods evaluated in this study, SFM could be the most practical method to predict AUC of low-dose nedaplatin in a clinical situation judging from its high accuracy in predicting AUC without measured plasma concentration.
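The three error indices are simple functions of the per-patient percent prediction error; a small sketch with hypothetical AUC values is shown below.

```python
import numpy as np

def error_indices(actual_auc, predicted_auc):
    """Percent prediction-error indices used to compare AUC prediction methods:
    %ME (bias), %MAE (accuracy), %RMSE (precision)."""
    actual = np.asarray(actual_auc, dtype=float)
    pred = np.asarray(predicted_auc, dtype=float)
    pe = 100.0 * (pred - actual) / actual          # percent prediction error per patient
    return {"%ME": pe.mean(), "%MAE": np.abs(pe).mean(), "%RMSE": np.sqrt((pe ** 2).mean())}

# Hypothetical unbound-platinum AUC values for a few patients, for illustration only
actual    = [2.1, 3.4, 1.8, 2.9, 4.2]
predicted = [2.3, 3.1, 1.9, 3.2, 3.8]
print({k: round(v, 1) for k, v in error_indices(actual, predicted).items()})
```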
Paudel, Prakash; Kovai, Vilas; Naduvilath, Thomas; Phuong, Ha Thanh; Ho, Suit May; Giap, Nguyen Viet
2016-01-01
To assess validity of teacher-based vision screening and elicit factors associated with accuracy of vision screening in Vietnam. After brief training, teachers independently measured visual acuity (VA) in 555 children aged 12-15 years in Ba Ria - Vung Tau Province. Teacher VA measurements were compared to those of refractionists. Sensitivity, specificity, positive predictive value and negative predictive value were calculated for uncorrected VA (UVA) and presenting VA (PVA) 20/40 or worse in either eye. Chi-square, Fisher's exact test and multivariate logistic regression were used to assess factors associated with accuracy of vision screening. Level of significance was set at 5%. Trained teachers in Vietnam demonstrated 86.7% sensitivity, 95.7% specificity, 86.7% positive predictive value and 95.7% negative predictive value in identifying children with visual impairment using the UVA measurement. PVA measurement revealed low accuracy for teachers, which was significantly associated with child's age, sex, spectacle wear and myopic status, but UVA measurement showed no such associations. Better accuracy was achieved in measurement of VA and identification of children with visual impairment using UVA measurement compared to PVA. UVA measurement is recommended for teacher-based vision screening programs.
2000-06-30
In the early morning hours, NASA’s Tracking and Data Relay Satellite (TDRS-H) sits poised on Launch Pad 36A, Cape Canaveral Air Force Station, before its scheduled launch aboard an Atlas IIA/Centaur rocket. One of three satellites (labeled H, I and J) being built by the Hughes Space and Communications Company, the latest TDRS uses an innovative springback antenna design. A pair of 15-foot-diameter, flexible mesh antenna reflectors fold up for launch, then spring back into their original cupped circular shape on orbit. The new satellites will augment the TDRS system’s existing S- and Ku-band frequencies by adding Ka-band capability. TDRS will serve as the sole means of continuous, high-data-rate communication with the Space Shuttle, with the International Space Station upon its completion, and with dozens of unmanned scientific satellites in low earth orbit
2000-06-30
After tower rollback just before dawn on Launch Pad 36A, Cape Canaveral Air Force Station, NASA’s Tracking and Data Relay Satellite (TDRS-H) sits bathed in spotlights before liftoff atop an Atlas IIA/Centaur rocket. One of three satellites (labeled H, I and J) being built by the Hughes Space and Communications Company, the latest TDRS uses an innovative springback antenna design. A pair of 15-foot-diameter, flexible mesh antenna reflectors fold up for launch, then spring back into their original cupped circular shape on orbit. The new satellites will augment the TDRS system’s existing S- and Ku-band frequencies by adding Ka-band capability. TDRS will serve as the sole means of continuous, high-data-rate communication with the Space Shuttle, with the International Space Station upon its completion, and with dozens of unmanned scientific satellites in low Earth orbit.
2000-06-30
NASA’s Tracking and Data Relay Satellite (TDRS-H) rises into the blue sky from Pad 36A, Cape Canaveral Air Force Station. Liftoff occurred at 8:56 a.m. EDT aboard an Atlas IIA/Centaur rocket. One of three satellites (labeled H, I and J) being built by the Hughes Space and Communications Company, the latest TDRS uses an innovative springback antenna design. A pair of 15-foot-diameter, flexible mesh antenna reflectors fold up for launch, then spring back into their original cupped circular shape on orbit. The new satellites will augment the TDRS system’s existing S- and Ku-band frequencies by adding Ka-band capability. TDRS will serve as the sole means of continuous, high-data-rate communication with the Space Shuttle, with the International Space Station upon its completion, and with dozens of unmanned scientific satellites in low Earth orbit.
2000-06-30
In the early morning hours on Launch Pad 36A, Cape Canaveral Air Force Station, the tower rolls back from NASA’s Tracking and Data Relay Satellite (TDRS-H) before liftoff atop an Atlas IIA/Centaur rocket. One of three satellites (labeled H, I and J) being built by the Hughes Space and Communications Company, the latest TDRS uses an innovative springback antenna design. A pair of 15-foot-diameter, flexible mesh antenna reflectors fold up for launch, then spring back into their original cupped circular shape on orbit. The new satellites will augment the TDRS system’s existing S- and Ku-band frequencies by adding Ka-band capability. TDRS will serve as the sole means of continuous, high-data-rate communication with the Space Shuttle, with the International Space Station upon its completion, and with dozens of unmanned scientific satellites in low Earth orbit.
Electrical-assisted double side incremental forming and processes thereof
Roth, John; Cao, Jian
2014-06-03
A process for forming a sheet metal component using an electric current passing through the component is provided. The process can include providing a double side incremental forming machine, the machine operable to perform a plurality of double side incremental deformations on the sheet metal component and also apply an electric direct current to the sheet metal component during at least part of the forming. The direct current can be applied before or after the forming has started and/or be terminated before or after the forming has stopped. The direct current can be applied to any portion of the sheet metal. The electrical assistance can reduce the magnitude of force required to produce a given amount of deformation, increase the amount of deformation exhibited before failure and/or reduce any springback typically exhibited by the sheet metal component.
State of Jet Noise Prediction-NASA Perspective
NASA Technical Reports Server (NTRS)
Bridges, James E.
2008-01-01
This presentation covers work done primarily under the Airport Noise Technical Challenge portion of the Supersonics Project in the Fundamental Aeronautics Program. To provide motivation and context, the presentation starts with a brief overview of the Airport Noise Technical Challenge. It then covers the state of NASA's jet noise prediction tools in empirical, RANS-based, and time-resolved categories. The empirical tools require seconds to provide a prediction of noise spectral directivity with an accuracy of a few dB, but only for axisymmetric configurations. The RANS-based tools are able to discern the impact of three-dimensional features, but are currently deficient in predicting noise from heated and high-speed jets, and require hours to produce a prediction. The time-resolved codes are capable of predicting resonances and other time-dependent phenomena, but are very immature, requiring months to deliver predictions of as yet unknown accuracy and dependability. In toto, however, considering the progress being made, aeroacoustic prediction tools appear to be approaching the level of sophistication and accuracy of aerodynamic engineering tools.
Prediction of Spirometric Forced Expiratory Volume (FEV1) Data Using Support Vector Regression
NASA Astrophysics Data System (ADS)
Kavitha, A.; Sujatha, C. M.; Ramakrishnan, S.
2010-01-01
In this work, prediction of forced expiratory volume in 1 second (FEV1) in pulmonary function testing is carried out using spirometry and support vector regression analysis. Pulmonary function data were measured with a flow-volume spirometer in volunteers (N=175) using a standard data acquisition protocol. The acquired data were then used to predict FEV1. Support vector machines with polynomial kernel functions of four different orders were employed to predict the values of FEV1. Performance was evaluated by computing the average prediction accuracy for normal and abnormal cases. Results show that support vector machines are capable of predicting FEV1 in both normal and abnormal cases, and the average prediction accuracy for normal subjects was higher than that for abnormal subjects. Prediction accuracy was found to be highest for a regularization constant of C=10. Since FEV1 is the most significant parameter in the analysis of spirometric data, this method of assessment appears useful for diagnosing pulmonary abnormalities from incomplete or poorly recorded data.
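A hedged sketch of the modeling approach described above, using scikit-learn's SVR with polynomial kernels of four orders on synthetic spirometric data; the authors' actual toolbox, features, and preprocessing are not specified here and are assumptions of the sketch.

    import numpy as np
    from sklearn.svm import SVR
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.normal(size=(175, 5))                          # synthetic spirometric predictors
    y = 2.5 + (X @ rng.normal(size=5)) * 0.3 + rng.normal(scale=0.1, size=175)  # FEV1-like target (L)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    for degree in (1, 2, 3, 4):                            # four polynomial orders, as in the abstract
        model = SVR(kernel="poly", degree=degree, C=10.0)  # C=10 is the value reported as best
        model.fit(X_tr, y_tr)
        print(degree, round(model.score(X_te, y_te), 3))   # R^2 as a stand-in accuracy measure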
Empirical Accuracies of U.S. Space Surveillance Network Reentry Predictions
NASA Technical Reports Server (NTRS)
Johnson, Nicholas L.
2008-01-01
The U.S. Space Surveillance Network (SSN) issues formal satellite reentry predictions for objects which have the potential for generating debris which could pose a hazard to people or property on Earth. These prognostications, known as Tracking and Impact Prediction (TIP) messages, are nominally distributed at daily intervals beginning four days prior to the anticipated reentry and several times during the final 24 hours in orbit. The accuracy of these messages depends on the nature of the satellite's orbit, the characteristics of the space vehicle, solar activity, and many other factors. Despite the many influences on the time and the location of reentry, a useful assessment of the accuracies of TIP messages can be derived and compared with the official accuracies included with each TIP message. This paper summarizes the results of a study of numerous uncontrolled reentries of spacecraft and rocket bodies from nearly circular orbits over a span of several years. Insights are provided into the empirical accuracies and utility of SSN TIP messages.
Gonzalez, Maritza G; Reed, Kathryn L; Center, Katherine E; Hill, Meghan G
2017-05-01
The purpose of this study was to investigate the relationship between the maternal body mass index (BMI) and the accuracy of ultrasound-derived birth weight. A retrospective chart review was performed on women who had an ultrasound examination between 36 and 43 weeks' gestation and had complete delivery data available through electronic medical records. The ultrasound-derived fetal weight was adjusted by 30 g per day of gestation that elapsed between the ultrasound examination and delivery to arrive at the predicted birth weight. A total of 403 pregnant women met inclusion criteria. Age ranged from 13 to 44 years (mean ± SD, 28.38 ± 5.97 years). The mean BMI was 32.62 ± 8.59 kg/m². Most of the women did not have diabetes (n = 300 [74.0%]). The sample was primarily white (n = 165 [40.9%]) and Hispanic (n = 147 [36.5%]). The predicted weight of neonates at delivery (3677.07 ± 540.51 g) was higher than the actual birth weight (3335.92 ± 585.46 g). Based on regression analyses, as the BMI increased, so did the predicted weight (P < .01) and weight at delivery (P < .01). The accuracy of the estimated ultrasound-derived birth weight was not predicted by the maternal BMI (P = .22). Maternal race and diabetes status were not associated with the accuracy of ultrasound in predicting birth weight. Both predicted and actual birth weight increased as the BMI increased. However, the BMI did not affect the accuracy of the estimated ultrasound-derived birth weight. Maternal race and diabetes status did not influence the accuracy of the ultrasound-derived predicted birth weight. © 2017 by the American Institute of Ultrasound in Medicine.
A fast and robust iterative algorithm for prediction of RNA pseudoknotted secondary structures
2014-01-01
Background Improving accuracy and efficiency of computational methods that predict pseudoknotted RNA secondary structures is an ongoing challenge. Existing methods based on free energy minimization tend to be very slow and are limited in the types of pseudoknots that they can predict. Incorporating known structural information can improve prediction accuracy; however, there are not many methods for prediction of pseudoknotted structures that can incorporate structural information as input. There is even less understanding of the relative robustness of these methods with respect to partial information. Results We present a new method, Iterative HFold, for pseudoknotted RNA secondary structure prediction. Iterative HFold takes as input a pseudoknot-free structure, and produces a possibly pseudoknotted structure whose energy is at least as low as that of any (density-2) pseudoknotted structure containing the input structure. Iterative HFold leverages strengths of earlier methods, namely the fast running time of HFold, a method that is based on the hierarchical folding hypothesis, and the energy parameters of HotKnots V2.0. Our experimental evaluation on a large data set shows that Iterative HFold is robust with respect to partial information, with average accuracy on pseudoknotted structures steadily increasing from roughly 54% to 79% as the user provides up to 40% of the input structure. Iterative HFold is much faster than HotKnots V2.0, while having comparable accuracy. Iterative HFold also has significantly better accuracy than IPknot on our HK-PK and IP-pk168 data sets. Conclusions Iterative HFold is a robust method for prediction of pseudoknotted RNA secondary structures, whose accuracy with more than 5% information about true pseudoknot-free structures is better than that of IPknot, and with about 35% information about true pseudoknot-free structures compares well with that of HotKnots V2.0 while being significantly faster. Iterative HFold and all data used in this work are freely available at http://www.cs.ubc.ca/~hjabbari/software.php. PMID:24884954
Evaluating Methods of Updating Training Data in Long-Term Genomewide Selection
Neyhart, Jeffrey L.; Tiede, Tyler; Lorenz, Aaron J.; Smith, Kevin P.
2017-01-01
Genomewide selection is hailed for its ability to facilitate greater genetic gains per unit time. Over breeding cycles, the requisite linkage disequilibrium (LD) between quantitative trait loci and markers is expected to change as a result of recombination, selection, and drift, leading to a decay in prediction accuracy. Previous research has identified the need to update the training population using data that may capture new LD generated over breeding cycles; however, optimal methods of updating have not been explored. In a barley (Hordeum vulgare L.) breeding simulation experiment, we examined prediction accuracy and response to selection when updating the training population each cycle with the best predicted lines, the worst predicted lines, both the best and worst predicted lines, random lines, criterion-selected lines, or no lines. In the short term, we found that updating with the best predicted lines or the best and worst predicted lines resulted in high prediction accuracy and genetic gain, but in the long term, all methods (besides not updating) performed similarly. We also examined the impact of including all data in the training population or only the most recent data. Though patterns among update methods were similar, using a smaller but more recent training population provided a slight advantage in prediction accuracy and genetic gain. In an actual breeding program, a breeder might desire to gather phenotypic data on lines predicted to be the best, perhaps to evaluate possible cultivars. Therefore, our results suggest that an optimal method of updating the training population is also very practical. PMID:28315831
Hahn, Sowon; Buttaccio, Daniel R; Hahn, Jungwon; Lee, Taehun
2015-01-01
The present study demonstrates that levels of extraversion and neuroticism can predict attentional performance during a change detection task. After completing a change detection task built on the flicker paradigm, participants were assessed for personality traits using the Revised Eysenck Personality Questionnaire (EPQ-R). Multiple regression analyses revealed that higher levels of extraversion predict increased change detection accuracies, while higher levels of neuroticism predict decreased change detection accuracies. In addition, neurotic individuals exhibited decreased sensitivity A' and increased fixation dwell times. Hierarchical regression analyses further revealed that eye movement measures mediate the relationship between neuroticism and change detection accuracies. Based on the current results, we propose that neuroticism is associated with decreased attentional control over the visual field, presumably due to decreased attentional disengagement. Extraversion can predict increased attentional performance, but the effect is smaller than the relationship between neuroticism and attention.
New insights from cluster analysis methods for RNA secondary structure prediction
Rogers, Emily; Heitsch, Christine
2016-01-01
A widening gap exists between the best practices for RNA secondary structure prediction developed by computational researchers and the methods used in practice by experimentalists. Minimum free energy (MFE) predictions, although broadly used, are outperformed by methods which sample from the Boltzmann distribution and data mine the results. In particular, moving beyond the single structure prediction paradigm yields substantial gains in accuracy. Furthermore, the largest improvements in accuracy and precision come from viewing secondary structures not at the base pair level but at lower granularity/higher abstraction. This suggests that random errors affecting precision and systematic ones affecting accuracy are both reduced by this “fuzzier” view of secondary structures. Thus experimentalists who are willing to adopt a more rigorous, multilayered approach to secondary structure prediction by iterating through these levels of granularity will be much better able to capture fundamental aspects of RNA base pairing. PMID:26971529
Isma’eel, Hussain A.; Sakr, George E.; Almedawar, Mohamad M.; Fathallah, Jihan; Garabedian, Torkom; Eddine, Savo Bou Zein
2015-01-01
Background High dietary salt intake is directly linked to hypertension and cardiovascular diseases (CVDs). Predicting behaviors regarding salt intake habits is vital to guide interventions and increase their effectiveness. We aim to compare the accuracy of an artificial neural network (ANN) based tool that predicts behavior from key knowledge questions along with clinical data in a high cardiovascular risk cohort relative to the least-squares model (LSM) method. Methods We collected knowledge, attitude and behavior data on 115 patients. A behavior score was calculated to classify patients’ behavior towards reducing salt intake. Accuracy comparison between ANN and regression analysis was calculated using the bootstrap technique with 200 iterations. Results Starting from a 69-item questionnaire, a reduced model was developed that included eight knowledge items found to result in the highest accuracy of 62% CI (58-67%). The best prediction accuracy in the full and reduced models was attained by ANN at 66% and 62%, respectively, compared to full and reduced LSM at 40% and 34%, respectively. The average relative increase in accuracy overall in the full and reduced models is 82% and 102%, respectively. Conclusions Using ANN modeling, we can predict salt reduction behaviors with 66% accuracy. The statistical model has been implemented in an online calculator and can be used in clinics to estimate the patient’s behavior. This will help implementation in future research to further prove the clinical utility of this tool to guide therapeutic salt reduction interventions in high cardiovascular risk individuals. PMID:26090333
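The bootstrap comparison of model accuracies described above can be sketched as follows; the synthetic data, the eight-item feature set, and the logistic-regression stand-in for the least-squares model are all placeholders, not the study's.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(1)
    X = rng.normal(size=(115, 8))                                        # eight knowledge items (synthetic)
    y = (X[:, :3].sum(axis=1) + rng.normal(size=115) > 0).astype(int)    # behavior class (synthetic)

    def bootstrap_accuracy(make_model, n_iter=200):
        n, scores = len(y), []
        for _ in range(n_iter):
            idx = rng.integers(0, n, n)                  # bootstrap training sample
            oob = np.setdiff1d(np.arange(n), idx)        # out-of-bag cases used for testing
            if oob.size == 0:
                continue
            model = make_model().fit(X[idx], y[idx])
            scores.append(accuracy_score(y[oob], model.predict(X[oob])))
        return np.mean(scores)

    print("ANN   :", round(bootstrap_accuracy(lambda: MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000)), 2))
    print("linear:", round(bootstrap_accuracy(lambda: LogisticRegression(max_iter=1000)), 2))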
Integrated Computational Solution for Predicting Skin Sensitization Potential of Molecules
Desai, Aarti; Singh, Vivek K.; Jere, Abhay
2016-01-01
Introduction Skin sensitization forms a major toxicological endpoint for dermatology and cosmetic products. The recent ban on animal testing for cosmetics demands alternative methods. We developed an integrated computational solution (SkinSense) that offers a robust solution and addresses the limitations of existing computational tools, i.e. high false positive rates and/or limited coverage. Results The key components of our solution include: QSAR models selected from a combinatorial set, similarity information and literature-derived sub-structure patterns of known skin protein reactive groups. Its prediction performance on a challenge set of molecules showed accuracy = 75.32%, CCR = 74.36%, sensitivity = 70.00% and specificity = 78.72%, which is better than several existing tools including VEGA (accuracy = 45.00% and CCR = 54.17% with ‘High’ reliability scoring), DEREK (accuracy = 72.73% and CCR = 71.44%) and TOPKAT (accuracy = 60.00% and CCR = 61.67%). Although TIMES-SS showed higher predictive power (accuracy = 90.00% and CCR = 92.86%), the coverage was very low (only 10 out of 77 molecules were predicted reliably). Conclusions Owing to improved prediction performance and coverage, our solution can serve as a useful expert system towards Integrated Approaches to Testing and Assessment for skin sensitization. It would be invaluable to the cosmetic/dermatology industry for pre-screening their molecules, and reducing time, cost and animal testing. PMID:27271321
Nateghi, Roshanak; Guikema, Seth D; Quiring, Steven M
2011-12-01
This article compares statistical methods for modeling power outage durations during hurricanes and examines the predictive accuracy of these methods. Being able to make accurate predictions of power outage durations is valuable because the information can be used by utility companies to plan their restoration efforts more efficiently. This information can also help inform customers and public agencies of the expected outage times, enabling better collective response planning, and coordination of restoration efforts for other critical infrastructures that depend on electricity. In the long run, outage duration estimates for future storm scenarios may help utilities and public agencies better allocate risk management resources to balance the disruption from hurricanes with the cost of hardening power systems. We compare the out-of-sample predictive accuracy of five distinct statistical models for estimating power outage duration times caused by Hurricane Ivan in 2004. The methods compared include both regression models (accelerated failure time (AFT) and Cox proportional hazard models (Cox PH)) and data mining techniques (regression trees, Bayesian additive regression trees (BART), and multivariate additive regression splines). We then validate our models against two other hurricanes. Our results indicate that BART yields the best prediction accuracy and that it is possible to predict outage durations with reasonable accuracy. © 2011 Society for Risk Analysis.
Increased genomic prediction accuracy in wheat breeding using a large Australian panel.
Norman, Adam; Taylor, Julian; Tanaka, Emi; Telfer, Paul; Edwards, James; Martinant, Jean-Pierre; Kuchel, Haydn
2017-12-01
Genomic prediction accuracy within a large panel was found to be substantially higher than that previously observed in smaller populations, and also higher than QTL-based prediction. In recent years, genomic selection for wheat breeding has been widely studied, but this has typically been restricted to population sizes under 1000 individuals. To assess its efficacy in germplasm representative of commercial breeding programmes, we used a panel of 10,375 Australian wheat breeding lines to investigate the accuracy of genomic prediction for grain yield, physical grain quality and other physiological traits. To achieve this, the complete panel was phenotyped in a dedicated field trial and genotyped using a custom Axiom™ Affymetrix SNP array. A high-quality consensus map was also constructed, allowing the linkage disequilibrium present in the germplasm to be investigated. Using the complete SNP array, genomic prediction accuracies were found to be substantially higher than those previously observed in smaller populations and also more accurate compared to prediction approaches using a finite number of selected quantitative trait loci. Multi-trait genetic correlations were also assessed at an additive and residual genetic level, identifying a negative genetic correlation between grain yield and protein as well as a positive genetic correlation between grain size and test weight.
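A minimal sketch of marker-based genomic prediction in the ridge-regression (RR-BLUP-like) spirit, on simulated genotypes, reporting cross-validated accuracy as the correlation between predicted and observed phenotypes; the panel size, SNP count, shrinkage level, and trait are placeholders, not the study's pipeline.

    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import cross_val_predict

    rng = np.random.default_rng(2)
    n_lines, n_snps = 1000, 3000
    X = rng.integers(0, 3, size=(n_lines, n_snps)).astype(float)   # SNP genotypes coded 0/1/2
    beta = rng.normal(scale=0.05, size=n_snps)                     # many small additive effects
    y = X @ beta + rng.normal(scale=1.0, size=n_lines)             # yield-like phenotype

    pred = cross_val_predict(Ridge(alpha=n_snps), X, y, cv=5)      # heavy shrinkage, 5-fold CV
    print("prediction accuracy:", round(np.corrcoef(pred, y)[0, 1], 2))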
Application of GA-SVM method with parameter optimization for landslide development prediction
NASA Astrophysics Data System (ADS)
Li, X. Z.; Kong, J. M.
2013-10-01
Prediction of the landslide development process is always a hot issue in landslide research. So far, many methods for landslide displacement series prediction have been proposed. The support vector machine (SVM) has been proved to be a novel algorithm with good performance. However, the performance strongly depends on the right selection of the parameters (C and γ) of the SVM model. In this study, we present an application of the GA-SVM method with parameter optimization to landslide displacement rate prediction. We selected a typical large-scale landslide in a hydroelectric engineering area of Southwest China as a case study. On the basis of analyzing the basic characteristics and monitoring data of the landslide, a single-factor GA-SVM model and a multi-factor GA-SVM model of the landslide were built. Moreover, the models were compared with single-factor and multi-factor SVM models of the landslide. The results show that the four models all have high prediction accuracies, but the accuracies of the GA-SVM models are slightly higher than those of the SVM models, and the accuracies of the multi-factor models are slightly higher than those of the single-factor models. The accuracy of the multi-factor GA-SVM model is the highest, with the smallest RMSE of 0.0009 and the largest RI of 0.9992.
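A sketch of the parameter-selection problem described above: tuning C and γ for an SVM regressor on a lagged displacement-rate series. A plain grid search stands in here for the genetic-algorithm search used in the paper, and the monitoring series is synthetic.

    import numpy as np
    from sklearn.model_selection import GridSearchCV, TimeSeriesSplit
    from sklearn.svm import SVR

    rng = np.random.default_rng(3)
    t = np.arange(200, dtype=float)
    rate = 0.5 * np.sin(t / 20.0) + 0.01 * t + rng.normal(scale=0.05, size=200)  # displacement rate
    X = np.array([rate[i:i + 5] for i in range(195)])   # five lagged values as predictors
    y = rate[5:]

    search = GridSearchCV(
        SVR(kernel="rbf"),
        param_grid={"C": [1, 10, 100], "gamma": [0.01, 0.1, 1.0]},
        cv=TimeSeriesSplit(n_splits=4),
        scoring="neg_root_mean_squared_error",
    )
    search.fit(X, y)
    print(search.best_params_, round(-search.best_score_, 4))   # tuned (C, gamma) and its RMSE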
Ma, Xin; Guo, Jing; Sun, Xiao
2015-01-01
The prediction of RNA-binding proteins is one of the most challenging problems in computational biology. Although some studies have investigated this problem, the accuracy of prediction is still not sufficient. In this study, a highly accurate method was developed to predict RNA-binding proteins from amino acid sequences using random forests with the minimum redundancy maximum relevance (mRMR) method, followed by incremental feature selection (IFS). We incorporated conjoint triad features and three novel features: binding propensity (BP), nonbinding propensity (NBP), and evolutionary information combined with physicochemical properties (EIPP). The results showed that these novel features play an important role in improving the performance of the predictor. Using the mRMR-IFS method, our predictor achieved the best performance (86.62% accuracy and a 0.737 Matthews correlation coefficient). The high prediction accuracy and successful prediction performance suggest that our method can be a useful approach to identify RNA-binding proteins from sequence information.
NASA Astrophysics Data System (ADS)
Xu, Wenbo; Jing, Shaocai; Yu, Wenjuan; Wang, Zhaoxian; Zhang, Guoping; Huang, Jianxi
2013-11-01
In this study, the areas of Sichuan Province at high risk of debris flow, Panzhihua and the Liangshan Yi Autonomous Prefecture, were taken as the study areas. Using rainfall and environmental factors as predictors, and based on different prior probability combinations of debris flows, the prediction of debris flows in these areas was compared for two statistical methods: logistic regression (LR) and Bayes discriminant analysis (BDA). The comprehensive analysis shows that (a) with a mid-range prior probability, the overall predicting accuracy of BDA is higher than that of LR; (b) with equal and extreme prior probabilities, the overall predicting accuracy of LR is higher than that of BDA; and (c) regional debris-flow prediction models built with rainfall factors only perform worse than those that also introduce environmental factors, and the predicting accuracies for occurrence and nonoccurrence of debris flows change in opposite directions when this additional information is introduced.
Nho, Kwangsik; Shen, Li; Kim, Sungeun; Risacher, Shannon L.; West, John D.; Foroud, Tatiana; Jack, Clifford R.; Weiner, Michael W.; Saykin, Andrew J.
2010-01-01
Mild Cognitive Impairment (MCI) is thought to be a precursor to the development of early Alzheimer’s disease (AD). For early diagnosis of AD, the development of a model that is able to predict the conversion of amnestic MCI to AD is challenging. Using automatic whole-brain MRI analysis techniques and pattern classification methods, we developed a model to differentiate AD from healthy controls (HC), and then applied it to the prediction of MCI conversion to AD. Classification was performed using support vector machines (SVMs) together with a SVM-based feature selection method, which selected a set of most discriminating predictors for optimizing prediction accuracy. We obtained 90.5% cross-validation accuracy for classifying AD and HC, and 72.3% accuracy for predicting MCI conversion to AD. These analyses suggest that a classifier trained to separate HC vs. AD has substantial potential for predicting MCI conversion to AD. PMID:21347037
A Real-time Breakdown Prediction Method for Urban Expressway On-ramp Bottlenecks
NASA Astrophysics Data System (ADS)
Ye, Yingjun; Qin, Guoyang; Sun, Jian; Liu, Qiyuan
2018-01-01
Breakdown occurrence on expressways is considered to be related to various factors. Therefore, to investigate the association between breakdowns and these factors, a Bayesian network (BN) model is adopted in this paper. Based on the breakdown events identified at 10 urban expressway on-ramps in Shanghai, China, 23 parameters before breakdowns are extracted, including dynamic environment conditions aggregated over 5-minute intervals and static geometry features. Data from different time periods are used to predict breakdown. Results indicate that the models using data from 5-10 min prior to breakdown perform best, with prediction accuracies higher than 73%. Moreover, one unified model for all bottlenecks is also built and shows reasonably good prediction performance, with a breakdown classification accuracy of about 75% at best. Additionally, to simplify the model parameter input, the random forests (RF) model is adopted to identify the key variables. Modeling with the selected 7 parameters, the refined BN model can predict breakdown with adequate accuracy.
Perez-Cruz, Pedro E.; dos Santos, Renata; Silva, Thiago Buosi; Crovador, Camila Souza; Nascimento, Maria Salete de Angelis; Hall, Stacy; Fajardo, Julieta; Bruera, Eduardo; Hui, David
2014-01-01
Context Survival prognostication is important during end-of-life. The accuracy of clinician prediction of survival (CPS) over time has not been well characterized. Objectives To examine changes in prognostication accuracy during the last 14 days of life in a cohort of patients with advanced cancer admitted to two acute palliative care units and to compare the accuracy between the temporal and probabilistic approaches. Methods Physicians and nurses prognosticated survival daily for cancer patients in two hospitals until death/discharge using two prognostic approaches: temporal and probabilistic. We assessed accuracy for each method daily during the last 14 days of life comparing accuracy at day −14 (baseline) with accuracy at each time point using a test of proportions. Results 6718 temporal and 6621 probabilistic estimations were provided by physicians and nurses for 311 patients, respectively. Median (interquartile range) survival was 8 (4, 20) days. Temporal CPS had low accuracy (10–40%) and did not change over time. In contrast, probabilistic CPS was significantly more accurate (p<.05 at each time point) but decreased close to death. Conclusion Probabilistic CPS was consistently more accurate than temporal CPS over the last 14 days of life; however, its accuracy decreased as patients approached death. Our findings suggest that better tools to predict impending death are necessary. PMID:24746583
Yu, Shao Hua; Zhu, Jun Peng; Xu, You; Zheng, Lei Lei; Chai, Hao; He, Wei; Liu, Wei Bo; Li, Hui Chun; Wang, Wei
2012-12-01
To study the contribution of executive function to abnormal recognition of facial expressions of emotion in schizophrenia patients. Abnormal recognition of facial expressions of emotion was assayed using the Japanese and Caucasian facial expressions of emotion (JACFEE) set, the Wisconsin card sorting test (WCST), the positive and negative symptom scale, and the Hamilton anxiety and depression scales, respectively, in 88 paranoid schizophrenia patients and 75 healthy volunteers. Patients scored higher on the Positive and Negative Symptom Scale and the Hamilton Anxiety and Depression Scales, and displayed lower JACFEE recognition accuracies and poorer WCST performance. The JACFEE recognition accuracy for contempt and disgust was negatively correlated with the negative symptom scale score, while the recognition accuracy for fear was positively correlated with the positive symptom scale score and the recognition accuracy for surprise was negatively correlated with the general psychopathology score in patients. Moreover, the WCST could predict the JACFEE recognition accuracy for contempt, disgust, and sadness in patients, and the perseverative errors negatively predicted the recognition accuracy for sadness in healthy volunteers. The JACFEE recognition accuracy for sadness could predict the WCST categories in paranoid schizophrenia patients. Recognition accuracy for social/moral emotions, such as contempt, disgust and sadness, is related to executive function in paranoid schizophrenia patients, especially for sadness. Copyright © 2012 The Editorial Board of Biomedical and Environmental Sciences. Published by Elsevier B.V. All rights reserved.
USDA-ARS's Scientific Manuscript database
The effect of moisture content variation on the accuracy of single kernel deoxynivalenol (DON) prediction by near-infrared (NIR) spectroscopy was investigated. Sample moisture content (MC) considerably affected the accuracy of the current NIR DON calibration by underestimating or overestimating DON at high...
Bayesian decision support for coding occupational injury data.
Nanda, Gaurav; Grattan, Kathleen M; Chu, MyDzung T; Davis, Letitia K; Lehto, Mark R
2016-06-01
Studies on autocoding injury data have found that machine learning algorithms perform well for categories that occur frequently but often struggle with rare categories. Therefore, manual coding, although resource-intensive, cannot be eliminated. We propose a Bayesian decision support system to autocode a large portion of the data, filter cases for manual review, and assist human coders by presenting them top k prediction choices and a confusion matrix of predictions from Bayesian models. We studied the prediction performance of Single-Word (SW) and Two-Word-Sequence (TW) Naïve Bayes models on a sample of data from the 2011 Survey of Occupational Injury and Illness (SOII). We used the agreement in prediction results of SW and TW models, and various prediction strength thresholds for autocoding and filtering cases for manual review. We also studied the sensitivity of the top k predictions of the SW model, TW model, and SW-TW combination, and then compared the accuracy of the manually assigned codes to SOII data with that of the proposed system. The accuracy of the proposed system, assuming well-trained coders reviewing a subset of only 26% of cases flagged for review, was estimated to be comparable (86.5%) to the accuracy of the original coding of the data set (range: 73%-86.8%). Overall, the TW model had higher sensitivity than the SW model, and the accuracy of the prediction results increased when the two models agreed, and for higher prediction strength thresholds. The sensitivity of the top five predictions was 93%. The proposed system seems promising for coding injury data as it offers comparable accuracy and less manual coding. Accurate and timely coded occupational injury data is useful for surveillance as well as prevention activities that aim to make workplaces safer. Copyright © 2016 Elsevier Ltd and National Safety Council. All rights reserved.
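A minimal sketch of the autocoding-with-review idea: a Naïve Bayes text classifier whose prediction strength decides whether a narrative is autocoded or flagged for manual review. The tiny corpus, the codes, and the 0.9 threshold are illustrative, not SOII data or the paper's exact SW/TW models.

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    narratives = ["fell from ladder", "cut finger on saw", "slipped on wet floor",
                  "struck by falling box", "burned hand on hot pipe", "fell on stairs"]
    codes = ["fall", "cut", "fall", "struck", "burn", "fall"]

    model = make_pipeline(CountVectorizer(ngram_range=(1, 2)), MultinomialNB())  # word and word-pair features
    model.fit(narratives, codes)

    for text in ["worker fell from scaffold", "hand caught in press"]:
        proba = model.predict_proba([text])[0]
        strength, code = proba.max(), model.classes_[proba.argmax()]
        action = "autocode" if strength >= 0.9 else "manual review"
        print(text, "->", code, round(strength, 2), action)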
Prediction of Industrial Electric Energy Consumption in Anhui Province Based on GA-BP Neural Network
NASA Astrophysics Data System (ADS)
Zhang, Jiajing; Yin, Guodong; Ni, Youcong; Chen, Jinlan
2018-01-01
In order to improve the prediction accuracy of industrial electric energy consumption, a prediction model of industrial electric energy consumption based on a genetic algorithm and a neural network is proposed. The model uses a genetic algorithm to optimize the weights and thresholds of a BP neural network, and it is applied to predict the industrial electric energy consumption of Anhui Province. Comparative experiments between the GA-BP prediction model and a plain BP neural network model show that the GA-BP model is more accurate while using fewer neurons in the hidden layer.
Radiomics-based Prognosis Analysis for Non-Small Cell Lung Cancer
NASA Astrophysics Data System (ADS)
Zhang, Yucheng; Oikonomou, Anastasia; Wong, Alexander; Haider, Masoom A.; Khalvati, Farzad
2017-04-01
Radiomics characterizes tumor phenotypes by extracting large numbers of quantitative features from radiological images. Radiomic features have been shown to provide prognostic value in predicting clinical outcomes in several studies. However, several challenges including feature redundancy, unbalanced data, and small sample sizes have led to relatively low predictive accuracy. In this study, we explore different strategies for overcoming these challenges and improving predictive performance of radiomics-based prognosis for non-small cell lung cancer (NSCLC). CT images of 112 patients (mean age 75 years) with NSCLC who underwent stereotactic body radiotherapy were used to predict recurrence, death, and recurrence-free survival using a comprehensive radiomics analysis. Different feature selection and predictive modeling techniques were used to determine the optimal configuration of prognosis analysis. To address feature redundancy, comprehensive analysis indicated that Random Forest models and Principal Component Analysis were optimum predictive modeling and feature selection methods, respectively, for achieving high prognosis performance. To address unbalanced data, Synthetic Minority Over-sampling technique was found to significantly increase predictive accuracy. A full analysis of variance showed that data endpoints, feature selection techniques, and classifiers were significant factors in affecting predictive accuracy, suggesting that these factors must be investigated when building radiomics-based predictive models for cancer prognosis.
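A sketch of the kind of pipeline the study describes, combining PCA for feature reduction, SMOTE for the unbalanced endpoint, and a random forest classifier, evaluated by cross-validation on synthetic radiomic features; the study's actual feature set, component count, and tuning are not reproduced here.

    import numpy as np
    from imblearn.over_sampling import SMOTE
    from imblearn.pipeline import Pipeline
    from sklearn.decomposition import PCA
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(4)
    X = rng.normal(size=(112, 400))             # synthetic radiomic features for 112 patients
    y = (rng.random(112) < 0.2).astype(int)     # unbalanced endpoint (e.g. recurrence)

    pipe = Pipeline([
        ("pca", PCA(n_components=20)),
        ("smote", SMOTE(random_state=0)),       # oversampling happens only inside each training fold
        ("rf", RandomForestClassifier(n_estimators=500, random_state=0)),
    ])
    print(round(cross_val_score(pipe, X, y, cv=5, scoring="roc_auc").mean(), 2))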
Development of predictive mapping techniques for soil survey and salinity mapping
NASA Astrophysics Data System (ADS)
Elnaggar, Abdelhamid A.
Conventional soil maps represent a valuable source of information about soil characteristics; however, they are subjective, very expensive, and time-consuming to prepare. They also do not include explicit information about the conceptual mental model used in developing them, nor about their accuracy and the error associated with them. Decision tree analysis (DTA) was successfully used in retrieving the expert knowledge embedded in old soil survey data. This knowledge was efficiently used in developing predictive soil maps for the study areas in Benton and Malheur Counties, Oregon, and in assessing their consistency. A soil-landscape model retrieved from a reference area in Harney County was extrapolated to develop a preliminary soil map for the neighboring unmapped part of Malheur County. The resulting map had low prediction accuracy, and only a few soil map units (SMUs) were predicted with significant accuracy, mostly shallow SMUs that either have a lithic contact with the bedrock or developed on a duripan. On the other hand, the soil map developed from field data was predicted with very high accuracy (overall about 97%). Salt-affected areas of the Malheur County study area are indicated by their high spectral reflectance and are easily discriminated in the remote sensing data. However, remote sensing data fail to distinguish between the different classes of soil salinity. Using the DTA method, five classes of soil salinity were successfully predicted with an overall accuracy of about 99%. Moreover, the calculated area of salt-affected soil was overestimated when mapped using remote sensing data compared with that predicted by DTA. Hence, DTA could be a very helpful approach for developing soil survey and soil salinity maps in a more objective, effective, less expensive, and quicker way based on field data.
Karuppiah Ramachandran, Vignesh Raja; Alblas, Huibert J; Le, Duc V; Meratnia, Nirvana
2018-05-24
In the last decade, seizure prediction systems have gained a lot of attention because of their enormous potential to largely improve the quality of life of epileptic patients. The accuracy of prediction algorithms for detecting seizures in real-world applications is largely limited because brain signals are inherently uncertain and affected by various factors, such as environment, age, and drug intake, in addition to the internal artefacts that occur during the recording of brain signals. To deal with such ambiguity, researchers traditionally use active learning, which selects ambiguous data to be annotated by an expert and updates the classification model dynamically. However, selecting the particular data to be labelled by an expert from a pool of large, ambiguous datasets is still a challenging problem. In this paper, we propose an active learning-based prediction framework that aims to improve the accuracy of prediction with a minimum number of labelled data. The core technique of our framework is employing the Bernoulli-Gaussian Mixture model (BGMM) to determine the feature samples that have the most ambiguity to be annotated by an expert. By doing so, our approach facilitates expert intervention as well as increasing medical reliability. We evaluate seven different classifiers in terms of classification time and memory required. An active learning framework built on top of the best performing classifier is evaluated in terms of the annotation effort required to achieve a high level of prediction accuracy. The results show that our approach can achieve the same accuracy as a Support Vector Machine (SVM) classifier using only 20% of the labelled data and also improve the prediction accuracy even under noisy conditions.
Amuzu-Aweh, E N; Bijma, P; Kinghorn, B P; Vereijken, A; Visscher, J; van Arendonk, J Am; Bovenhuis, H
2013-12-01
Prediction of heterosis has a long history with mixed success, partly due to low numbers of genetic markers and/or small data sets. We investigated the prediction of heterosis for egg number, egg weight and survival days in domestic white Leghorns, using ∼400 000 individuals from 47 crosses and allele frequencies on ∼53 000 genome-wide single nucleotide polymorphisms (SNPs). When heterosis is due to dominance, and dominance effects are independent of allele frequencies, heterosis is proportional to the squared difference in allele frequency (SDAF) between parental pure lines (not necessarily homozygous). Under these assumptions, a linear model including regression on SDAF partitions crossbred phenotypes into pure-line values and heterosis, even without pure-line phenotypes. We therefore used models where phenotypes of crossbreds were regressed on the SDAF between parental lines. Accuracy of prediction was determined using leave-one-out cross-validation. SDAF predicted heterosis for egg number and weight with an accuracy of ∼0.5, but did not predict heterosis for survival days. Heterosis predictions allowed preselection of pure lines before field-testing, saving ∼50% of field-testing cost with only 4% loss in heterosis. Accuracies from cross-validation were lower than from the model-fit, suggesting that accuracies previously reported in literature are overestimated. Cross-validation also indicated that dominance cannot fully explain heterosis. Nevertheless, the dominance model had considerable accuracy, clearly greater than that of a general/specific combining ability model. This work also showed that heterosis can be modelled even when pure-line phenotypes are unavailable. We concluded that SDAF is a useful predictor of heterosis in commercial layer breeding.
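Under the dominance assumptions stated above, heterosis can be regressed on the squared difference in allele frequency (SDAF) between parental lines. The sketch below simulates line allele frequencies and crosses, fits that regression, and scores leave-one-out prediction accuracy as a correlation; all quantities (line count, SNP count, effect size) are illustrative, not the layer data.

    import numpy as np

    rng = np.random.default_rng(5)
    n_lines, n_snps, n_crosses = 12, 2000, 47
    freqs = rng.random((n_lines, n_snps))                 # allele frequencies per pure line

    pairs = [(i, j) for i in range(n_lines) for j in range(i + 1, n_lines)][:n_crosses]
    sdaf = np.array([np.mean((freqs[i] - freqs[j]) ** 2) for i, j in pairs])  # mean SDAF per cross
    heterosis = 3.0 * sdaf + rng.normal(scale=0.02, size=n_crosses)           # simulated heterosis

    # Fit heterosis ~ SDAF, then score leave-one-out predictive accuracy.
    slope, intercept = np.polyfit(sdaf, heterosis, 1)
    loo_pred = np.array([
        np.polyval(np.polyfit(np.delete(sdaf, k), np.delete(heterosis, k), 1), sdaf[k])
        for k in range(n_crosses)
    ])
    print("fitted slope:", round(slope, 2),
          "LOO accuracy:", round(np.corrcoef(loo_pred, heterosis)[0, 1], 2))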
Lee, Jae-Hong; Kim, Do-Hyung; Jeong, Seong-Nyum; Choi, Seong-Ho
2018-04-01
The aim of the current study was to develop a computer-assisted detection system based on a deep convolutional neural network (CNN) algorithm and to evaluate the potential usefulness and accuracy of this system for the diagnosis and prediction of periodontally compromised teeth (PCT). Combining pretrained deep CNN architecture and a self-trained network, periapical radiographic images were used to determine the optimal CNN algorithm and weights. The diagnostic and predictive accuracy, sensitivity, specificity, positive predictive value, negative predictive value, receiver operating characteristic (ROC) curve, area under the ROC curve, confusion matrix, and 95% confidence intervals (CIs) were calculated using our deep CNN algorithm, based on a Keras framework in Python. The periapical radiographic dataset was split into training (n=1,044), validation (n=348), and test (n=348) datasets. With the deep learning algorithm, the diagnostic accuracy for PCT was 81.0% for premolars and 76.7% for molars. Using 64 premolars and 64 molars that were clinically diagnosed as severe PCT, the accuracy of predicting extraction was 82.8% (95% CI, 70.1%-91.2%) for premolars and 73.4% (95% CI, 59.9%-84.0%) for molars. We demonstrated that the deep CNN algorithm was useful for assessing the diagnosis and predictability of PCT. Therefore, with further optimization of the PCT dataset and improvements in the algorithm, a computer-aided detection system can be expected to become an effective and efficient method of diagnosing and predicting PCT.
Zanderigo, Francesca; Sparacino, Giovanni; Kovatchev, Boris; Cobelli, Claudio
2007-09-01
The aim of this article was to use continuous glucose error-grid analysis (CG-EGA) to assess the accuracy of two time-series modeling methodologies recently developed to predict glucose levels ahead of time using continuous glucose monitoring (CGM) data. We considered subcutaneous time series of glucose concentration monitored every 3 minutes for 48 hours by the minimally invasive CGM sensor Glucoday® (Menarini Diagnostics, Florence, Italy) in 28 type 1 diabetic volunteers. Two prediction algorithms, based on first-order polynomial and autoregressive (AR) models, respectively, were considered with prediction horizons of 30 and 45 minutes and forgetting factors (ff) of 0.2, 0.5, and 0.8. CG-EGA was used on the predicted profiles to assess their point and dynamic accuracies, using the original CGM profiles as reference. Continuous glucose error-grid analysis showed that the accuracy of both prediction algorithms is overall very good and that their performance is similar from a clinical point of view. However, the AR model seems preferable for hypoglycemia prevention. CG-EGA also suggests that, irrespective of the time-series model, the use of ff = 0.8 yields the most accurate readings in all glucose ranges. For the first time, CG-EGA is proposed as a tool to assess the clinically relevant performance of a prediction method separately at hypoglycemia, euglycemia, and hyperglycemia. In particular, we have shown that CG-EGA can be helpful in comparing different prediction algorithms, as well as in optimizing their parameters.
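A sketch of the first-order-polynomial predictor with an exponential forgetting factor on a synthetic 3-minute glucose series; exactly how the weights enter the fit is an assumption here, and the AR variant would replace the weighted trend fit with a weighted autoregressive fit.

    import numpy as np

    def poly_forecast(glucose, ff=0.8, horizon_min=30, ts_min=3):
        # weighted least-squares line through past samples, extrapolated horizon_min ahead
        glucose = np.asarray(glucose, dtype=float)
        t = np.arange(len(glucose), dtype=float) * ts_min            # time in minutes
        w = ff ** ((t[-1] - t) / ts_min)                             # older samples are down-weighted
        slope, intercept = np.polyfit(t, glucose, 1, w=np.sqrt(w))   # sqrt so squared residuals get weight w
        return slope * (t[-1] + horizon_min) + intercept

    history = [110, 112, 115, 119, 124, 130, 137, 145]               # mg/dL, sampled every 3 minutes
    print(round(poly_forecast(history, ff=0.8, horizon_min=30), 1))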
Genomic Prediction Accounting for Residual Heteroskedasticity
Ou, Zhining; Tempelman, Robert J.; Steibel, Juan P.; Ernst, Catherine W.; Bates, Ronald O.; Bello, Nora M.
2015-01-01
Whole-genome prediction (WGP) models that use single-nucleotide polymorphism marker information to predict genetic merit of animals and plants typically assume homogeneous residual variance. However, variability is often heterogeneous across agricultural production systems and may subsequently bias WGP-based inferences. This study extends classical WGP models based on normality, heavy-tailed specifications and variable selection to explicitly account for environmentally-driven residual heteroskedasticity under a hierarchical Bayesian mixed-models framework. WGP models assuming homogeneous or heterogeneous residual variances were fitted to training data generated under simulation scenarios reflecting a gradient of increasing heteroskedasticity. Model fit was based on pseudo-Bayes factors and also on prediction accuracy of genomic breeding values computed on a validation data subset one generation removed from the simulated training dataset. Homogeneous vs. heterogeneous residual variance WGP models were also fitted to two quantitative traits, namely 45-min postmortem carcass temperature and loin muscle pH, recorded in a swine resource population dataset prescreened for high and mild residual heteroskedasticity, respectively. Fit of competing WGP models was compared using pseudo-Bayes factors. Predictive ability, defined as the correlation between predicted and observed phenotypes in validation sets of a five-fold cross-validation was also computed. Heteroskedastic error WGP models showed improved model fit and enhanced prediction accuracy compared to homoskedastic error WGP models although the magnitude of the improvement was small (less than two percentage points net gain in prediction accuracy). Nevertheless, accounting for residual heteroskedasticity did improve accuracy of selection, especially on individuals of extreme genetic merit. PMID:26564950
Improved accuracy of intraocular lens power calculation with the Zeiss IOLMaster.
Olsen, Thomas
2007-02-01
This study aimed to demonstrate how the level of accuracy in intraocular lens (IOL) power calculation can be improved with optical biometry using partial optical coherence interferometry (PCI) (Zeiss IOLMaster) and current anterior chamber depth (ACD) prediction algorithms. Intraocular lens power in 461 consecutive cataract operations was calculated using both PCI and ultrasound and the accuracy of the results of each technique were compared. To illustrate the importance of ACD prediction per se, predictions were calculated using both a recently published 5-variable method and the Haigis 2-variable method and the results compared. All calculations were optimized in retrospect to account for systematic errors, including IOL constants and other off-set errors. The average absolute IOL prediction error (observed minus expected refraction) was 0.65 dioptres with ultrasound and 0.43 D with PCI using the 5-variable ACD prediction method (p < 0.00001). The number of predictions within +/- 0.5 D, +/- 1.0 D and +/- 2.0 D of the expected outcome was 62.5%, 92.4% and 99.9% with PCI, compared with 45.5%, 77.3% and 98.4% with ultrasound, respectively (p < 0.00001). The 2-variable ACD method resulted in an average error in PCI predictions of 0.46 D, which was significantly higher than the error in the 5-variable method (p < 0.001). The accuracy of IOL power calculation can be significantly improved using calibrated axial length readings obtained with PCI and modern IOL power calculation formulas incorporating the latest generation ACD prediction algorithms.
Gamal El-Dien, Omnia; Ratcliffe, Blaise; Klápště, Jaroslav; Chen, Charles; Porth, Ilga; El-Kassaby, Yousry A
2015-05-09
Genomic selection (GS) in forestry can substantially reduce the length of the breeding cycle and increase gain per unit time through early selection and greater selection intensity, particularly for traits of low heritability and late expression. Affordable next-generation sequencing technologies made it possible to genotype large numbers of trees at a reasonable cost. Genotyping-by-sequencing was used to genotype 1,126 Interior spruce trees representing 25 open-pollinated families planted over three sites in British Columbia, Canada. Four imputation algorithms were compared (mean value (MI), singular value decomposition (SVD), expectation maximization (EM), and a newly derived, family-based k-nearest neighbor (kNN-Fam)). Trees were phenotyped for several yield and wood attributes. Single- and multi-site GS prediction models were developed using the Ridge Regression Best Linear Unbiased Predictor (RR-BLUP) and the Generalized Ridge Regression (GRR) to test different assumptions about trait architecture. Finally, using PCA, multi-trait GS prediction models were developed. The EM and kNN-Fam imputation methods were superior for 30 and 60% missing data, respectively. The RR-BLUP GS prediction model produced better accuracies than GRR, indicating that the genetic architecture for these traits is complex. Multi-site GS prediction accuracies were high and better than those of single sites, while predicting across sites produced the lowest accuracies, reflecting type-b genetic correlations, and was deemed unreliable. The incorporation of genomic information in quantitative genetics analyses produced more realistic heritability estimates, as the half-sib pedigree tended to inflate the additive genetic variance and subsequently both heritability and gain estimates. Principal component scores as representatives of multi-trait GS prediction models produced surprising results where negatively correlated traits could be concurrently selected for using PCA2 and PCA3. The application of GS to open-pollinated family testing, the simplest form of tree improvement evaluation methods, was proven to be effective. Prediction accuracies obtained for all traits greatly support the integration of GS in tree breeding. While the within-site GS prediction accuracies were high, the results clearly indicate that single-site GS models' ability to predict other sites is unreliable, supporting the utilization of a multi-site approach. Principal component scores provided an opportunity for the concurrent selection of traits with different phenotypic optima.
Dang, Mia; Ramsaran, Kalinda D.; Street, Melissa E.; Syed, S. Noreen; Barclay-Goddard, Ruth; Miller, Patricia A.
2011-01-01
ABSTRACT Purpose: To estimate the predictive accuracy and clinical usefulness of the Chedoke–McMaster Stroke Assessment (CMSA) predictive equations. Method: A longitudinal prognostic study using historical data obtained from 104 patients admitted post cerebrovascular accident was undertaken. Data were abstracted for all patients undergoing rehabilitation post stroke who also had documented admission and discharge CMSA scores. Published predictive equations were used to determine predicted outcomes. To determine the accuracy and clinical usefulness of the predictive model, shrinkage coefficients and predictions with 95% confidence bands were calculated. Results: Complete data were available for 74 patients with a mean age of 65.3±12.4 years. The shrinkage values for the six Impairment Inventory (II) dimensions varied from −0.05 to 0.09; the shrinkage value for the Activity Inventory (AI) was 0.21. The error associated with predictive values was greater than ±1.5 stages for the II dimensions and greater than ±24 points for the AI. Conclusions: This study shows that the large error associated with the predictions (as defined by the confidence band) for the CMSA II and AI limits their clinical usefulness as a predictive measure. Further research to establish predictive models using alternative statistical procedures is warranted. PMID:22654239
Discrimination in measures of knowledge monitoring accuracy
Was, Christopher A.
2014-01-01
Knowledge monitoring predicts academic outcomes in many contexts. However, measures of knowledge monitoring accuracy are often incomplete. In the current study, a measure of students’ ability to discriminate known from unknown information as a component of knowledge monitoring was considered. Undergraduate students’ knowledge monitoring accuracy was assessed and used to predict final exam scores in a specific course. It was found that gamma, a measure commonly used as the measure of knowledge monitoring accuracy, accounted for a small, but significant amount of variance in academic performance whereas the discrimination and bias indexes combined to account for a greater amount of variance in academic performance. PMID:25339979
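To make the contrast between gamma and the discrimination and bias indexes concrete, here is a small Python sketch using one common 2x2 formulation of these measures (judged known vs. actually correct); the formulas and simulated responses are illustrative, not taken from the study.

```python
import numpy as np

def monitoring_indices(judged_known, correct):
    """Knowledge-monitoring indices from binary judgments and test outcomes (one common 2x2 formulation)."""
    j, c = np.asarray(judged_known, bool), np.asarray(correct, bool)
    a = np.sum(j & c)        # judged known, answered correctly
    b = np.sum(j & ~c)       # judged known, answered incorrectly
    c_ = np.sum(~j & c)      # judged unknown, answered correctly
    d = np.sum(~j & ~c)      # judged unknown, answered incorrectly
    return {
        "gamma": (a * d - b * c_) / (a * d + b * c_),   # Goodman-Kruskal gamma
        "discrimination": a / (a + c_) - b / (b + d),   # hit rate minus false-alarm rate
        "bias": (a + b) / (a + b + c_ + d),             # overall tendency to claim knowledge
    }

rng = np.random.default_rng(1)
correct = rng.random(60) < 0.6
judged = correct ^ (rng.random(60) < 0.25)   # judgments agree with outcomes ~75% of the time
print(monitoring_indices(judged, correct))
```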
All-atom 3D structure prediction of transmembrane β-barrel proteins from sequences.
Hayat, Sikander; Sander, Chris; Marks, Debora S; Elofsson, Arne
2015-04-28
Transmembrane β-barrels (TMBs) carry out major functions in substrate transport and protein biogenesis but experimental determination of their 3D structure is challenging. Encouraged by successful de novo 3D structure prediction of globular and α-helical membrane proteins from sequence alignments alone, we developed an approach to predict the 3D structure of TMBs. The approach combines the maximum-entropy evolutionary coupling method for predicting residue contacts (EVfold) with a machine-learning approach (boctopus2) for predicting β-strands in the barrel. In a blinded test for 19 TMB proteins of known structure that have a sufficient number of diverse homologous sequences available, this combined method (EVfold_bb) predicts hydrogen-bonded residue pairs between adjacent β-strands at an accuracy of ∼70%. This accuracy is sufficient for the generation of all-atom 3D models. In the transmembrane barrel region, the average 3D structure accuracy [template-modeling (TM) score] of top-ranked models is 0.54 (ranging from 0.36 to 0.85), with a higher (44%) number of residue pairs in correct strand-strand registration than in earlier methods (18%). Although the nonbarrel regions are predicted less accurately overall, the evolutionary couplings identify some highly constrained loop residues and, for FecA protein, the barrel including the structure of a plug domain can be accurately modeled (TM score = 0.68). Lower prediction accuracy tends to be associated with insufficient sequence information and we therefore expect increasing numbers of β-barrel families to become accessible to accurate 3D structure prediction as the number of available sequences increases.
Chung, Hyun Sik; Lee, Yu Jung; Jo, Yun Sung
2017-02-21
BACKGROUND Acute liver failure (ALF) is known to be a rapidly progressive and fatal disease. Various models which could help to estimate the post-transplant outcome for ALF have been developed; however, none has proven to be a definitive, accurate predictive model. We suggest a new predictive model, and investigated which model has the highest predictive accuracy for the short-term outcome in patients who underwent living donor liver transplantation (LDLT) due to ALF. MATERIAL AND METHODS Data from a total of 88 patients were collected retrospectively. King's College Hospital criteria (KCH), Child-Turcotte-Pugh (CTP) classification, and model for end-stage liver disease (MELD) score were calculated. Univariate analysis was performed, and then multivariate statistical adjustment for preoperative variables of ALF prognosis was performed. A new predictive model was developed, called the MELD conjugated serum phosphorus model (MELD-p). The individual diagnostic accuracy and cut-off value of models in predicting 3-month post-transplant mortality were evaluated using the area under the receiver operating characteristic curve (AUC). The difference in AUC between MELD-p and the other models was analyzed. The diagnostic improvement in MELD-p was assessed using the net reclassification improvement (NRI) and integrated discrimination improvement (IDI). RESULTS The MELD-p and MELD scores had high predictive accuracy (AUC >0.9). KCH and serum phosphorus had an acceptable predictive ability (AUC >0.7). The CTP classification failed to show discriminative accuracy in predicting 3-month post-transplant mortality. The difference in AUC between MELD-p and the other models was statistically significant for CTP and KCH. The cut-off value of MELD-p was 3.98 for predicting 3-month post-transplant mortality. The NRI was 9.9% and the IDI was 2.9%. CONCLUSIONS The MELD-p score can predict 3-month post-transplant mortality better than other scoring systems after LDLT due to ALF. The recommended cut-off value of MELD-p is 3.98.
Samad, Manar D; Ulloa, Alvaro; Wehner, Gregory J; Jing, Linyuan; Hartzel, Dustin; Good, Christopher W; Williams, Brent A; Haggerty, Christopher M; Fornwalt, Brandon K
2018-06-09
The goal of this study was to use machine learning to more accurately predict survival after echocardiography. Predicting patient outcomes (e.g., survival) following echocardiography is primarily based on ejection fraction (EF) and comorbidities. However, there may be significant predictive information within additional echocardiography-derived measurements combined with clinical electronic health record data. Mortality was studied in 171,510 unselected patients who underwent 331,317 echocardiograms in a large regional health system. We investigated the predictive performance of nonlinear machine learning models compared with that of linear logistic regression models using 3 different inputs: 1) clinical variables, including 90 cardiovascular-relevant International Classification of Diseases, Tenth Revision, codes, and age, sex, height, weight, heart rate, blood pressures, low-density lipoprotein, high-density lipoprotein, and smoking; 2) clinical variables plus physician-reported EF; and 3) clinical variables and EF, plus 57 additional echocardiographic measurements. Missing data were imputed using the multivariate imputation by chained equations (MICE) algorithm. We compared models with each other and with baseline clinical scoring systems using the mean area under the curve (AUC) over 10 cross-validation folds and across 10 survival durations (6 to 60 months). Machine learning models achieved significantly higher prediction accuracy (all AUC >0.82) over common clinical risk scores (AUC = 0.61 to 0.79), with the nonlinear random forest models outperforming logistic regression (p < 0.01). The random forest model including all echocardiographic measurements yielded the highest prediction accuracy (p < 0.01 across all models and survival durations). Only 10 variables were needed to achieve 96% of the maximum prediction accuracy, with 6 of these variables being derived from echocardiography. Tricuspid regurgitation velocity was more predictive of survival than LVEF. In a subset of studies with complete data for the top 10 variables, multivariate imputation by chained equations yielded slightly reduced predictive accuracies (difference in AUC of 0.003) compared with the original data. Machine learning can fully utilize large combinations of disparate input variables to predict survival after echocardiography with superior accuracy. Copyright © 2018 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.
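The model comparison described here can be approximated with scikit-learn; the sketch below (synthetic features, default-ish hyperparameters, no relation to the study's data) compares a logistic regression and a random forest by mean AUC over 10 cross-validation folds.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for clinical plus echocardiography-derived features and a survival label.
X, y = make_classification(n_samples=2000, n_features=60, n_informative=15, random_state=0)

models = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=300, random_state=0),
}
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=10, scoring="roc_auc")  # AUC per fold
    print(f"{name}: mean AUC = {auc.mean():.3f}")
```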
Response Latency as a Predictor of the Accuracy of Children's Reports
ERIC Educational Resources Information Center
Ackerman, Rakefet; Koriat, Asher
2011-01-01
Researchers have explored various diagnostic cues to the accuracy of information provided by child eyewitnesses. Previous studies indicated that children's confidence in their reports predicts the relative accuracy of these reports, and that the confidence-accuracy relationship generally improves as children grow older. In this study, we examined…
Rapid race perception despite individuation and accuracy goals.
Kubota, Jennifer T; Ito, Tiffany
2017-08-01
Perceivers rapidly process social category information and form stereotypic impressions of unfamiliar others. However, a goal to individuate a target or to accurately predict their behavior can result in individuated impressions. It is unknown how the combination of both accuracy and individuation goals affects perceptual category processing. To explore this, participants were given both the goal to individuate targets and accurately predict behavior. We then recorded event-related brain potentials while participants viewed photos of black and white males along with four pieces of individuating information in the form of descriptions of past behavior. Even with explicit individuation and accuracy task goals, participants rapidly differentiated targets by race within 200 ms. Importantly, this rapid categorical processing did not influence behavioral outcomes as participants made individuated predictions. These findings indicate that individuals engage in category processing even when provided with individuation and accuracy goals, but that this processing does not necessarily result in category-based judgments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Raboin, P J
1998-01-01
The Computational Mechanics thrust area is a vital and growing facet of the Mechanical Engineering Department at Lawrence Livermore National Laboratory (LLNL). This work supports the development of computational analysis tools in the areas of structural mechanics and heat transfer. Over 75 analysts depend on thrust area-supported software running on a variety of computing platforms to meet the demands of LLNL programs. Interactions with the Department of Defense (DOD) High Performance Computing and Modernization Program and the Defense Special Weapons Agency are of special importance as they support our ParaDyn project in its development of new parallel capabilities for DYNA3D. Working with DOD customers has been invaluable to driving this technology in directions mutually beneficial to the Department of Energy. Other projects associated with the Computational Mechanics thrust area include work with the Partnership for a New Generation Vehicle (PNGV) for "Springback Predictability" and with the Federal Aviation Administration (FAA) for the "Development of Methodologies for Evaluating Containment and Mitigation of Uncontained Engine Debris." In this report for FY-97, there are five articles detailing three code development activities and two projects that synthesized new code capabilities with new analytic research in damage/failure and biomechanics. The articles this year are: (1) Energy- and Momentum-Conserving Rigid-Body Contact for NIKE3D and DYNA3D; (2) Computational Modeling of Prosthetics: A New Approach to Implant Design; (3) Characterization of Laser-Induced Mechanical Failure Damage of Optical Components; (4) Parallel Algorithm Research for Solid Mechanics Applications Using Finite Element Analysis; and (5) An Accurate One-Step Elasto-Plasticity Algorithm for Shell Elements in DYNA3D.
Optically Tuned MM-Wave IMPATT Source.
1987-07-01
phase of the work has been extended and generalised. Accuracy of the theory in predicting tuning at the higher oscillator voltage swings has been greatly improved by reformulating the Bessel function... voltage modulation and a peak optically injected locking current of 100 pA, the predicted ftl locking range would be 540 MHz, a practically useful value.
Feasibility of developing LSI microcircuit reliability prediction models
NASA Technical Reports Server (NTRS)
Ryerson, C. M.
1972-01-01
In the proposed modeling approach, when any of the essential key factors are not known initially, they can be approximated in various ways with a known impact on the accuracy of the final predictions. For example, on any program where reliability predictions are started at interim states of project completion, a-priori approximate estimates of the key factors are established for making preliminary predictions. Later these are refined for greater accuracy as subsequent program information of a more definitive nature becomes available. Specific steps to develop, validate and verify these new models are described.
Linden, Ariel
2006-04-01
Diagnostic or predictive accuracy concerns are common in all phases of a disease management (DM) programme, and ultimately play an influential role in the assessment of programme effectiveness. Areas such as the identification of diseased patients, predictive modelling of future health status and costs, and risk stratification are just a few of the domains in which assessment of accuracy is beneficial, if not critical. The most commonly used analytical model for this purpose is the standard 2 x 2 table method in which sensitivity and specificity are calculated. However, there are several limitations to this approach, including the reliance on a single defined criterion or cut-off for determining a true-positive result, use of non-standardized measurement instruments and sensitivity to outcome prevalence. This paper introduces the receiver operating characteristic (ROC) analysis as a more appropriate and useful technique for assessing diagnostic and predictive accuracy in DM. Its advantages include: testing accuracy across the entire range of scores, thereby not requiring a predetermined cut-off point; easily examined visual and statistical comparisons across tests or scores; and independence from outcome prevalence. Therefore, the implementation of ROC as an evaluation tool should be strongly considered in the various phases of a DM programme.
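The contrast drawn here between a single 2 x 2 cut-off and a full ROC analysis can be illustrated in a few lines of Python with scikit-learn; the risk scores below are simulated and the cut-off is arbitrary.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)
disease = (rng.random(500) < 0.2).astype(int)               # outcome with ~20% prevalence
score = rng.normal(loc=np.where(disease == 1, 1.0, 0.0))    # diseased patients tend to score higher

# Single-cutoff view: sensitivity and specificity at one arbitrary threshold.
cutoff = 0.5
sensitivity = np.mean(score[disease == 1] >= cutoff)
specificity = np.mean(score[disease == 0] < cutoff)
print(f"at cutoff {cutoff}: sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")

# ROC view: performance across all thresholds, independent of any predetermined cut-off.
fpr, tpr, thresholds = roc_curve(disease, score)
print(f"area under ROC curve: {roc_auc_score(disease, score):.2f}")
```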
Bridge Structure Deformation Prediction Based on GNSS Data Using Kalman-ARIMA-GARCH Model
Li, Xiaoqing; Wang, Yu
2018-01-01
Bridges are an essential part of the ground transportation system. Health monitoring is fundamentally important for the safety and service life of bridges. A large amount of structural information is obtained from various sensors using sensing technology, and the data processing has become a challenging issue. To improve the prediction accuracy of bridge structure deformation based on data mining and to accurately evaluate the time-varying characteristics of bridge structure performance evolution, this paper proposes a new method for bridge structure deformation prediction, which integrates the Kalman filter, autoregressive integrated moving average model (ARIMA), and generalized autoregressive conditional heteroskedasticity (GARCH). Firstly, the raw deformation data is directly pre-processed using the Kalman filter to reduce the noise. After that, the linear recursive ARIMA model is established to analyze and predict the structure deformation. Finally, the nonlinear recursive GARCH model is introduced to further improve the accuracy of the prediction. Simulation results based on measured sensor data from the Global Navigation Satellite System (GNSS) deformation monitoring system demonstrated that: (1) the Kalman filter is capable of denoising the bridge deformation monitoring data; (2) the prediction accuracy of the proposed Kalman-ARIMA-GARCH model is satisfactory, where the mean absolute error increases only from 3.402 mm to 5.847 mm with the increment of the prediction step; and (3) in comparison with the Kalman-ARIMA model, the Kalman-ARIMA-GARCH model results in superior prediction accuracy as it includes partial nonlinear characteristics (heteroscedasticity); the mean absolute error of five-step prediction using the proposed model is improved by 10.12%. This paper provides a new way for structural behavior prediction based on data processing, which can lay a foundation for the early warning of bridge health monitoring systems based on sensor data using sensing technology. PMID:29351254
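As a rough illustration of the Kalman-ARIMA-GARCH pipeline's structure, the following self-contained Python sketch uses deliberately simplified stand-ins: a 1-D random-walk Kalman filter for denoising, a least-squares AR(1) fit in place of ARIMA, and a recent-residual variance in place of a fitted GARCH model. It is not the authors' implementation, and all parameters and data are synthetic.

```python
import numpy as np

def kalman_denoise(z, q=1e-4, r=1e-2):
    """1-D random-walk Kalman filter used only to smooth the raw deformation series."""
    x, p, out = z[0], 1.0, []
    for zt in z:
        p += q                      # predict step: grow state uncertainty
        k = p / (p + r)             # Kalman gain
        x += k * (zt - x)           # update with the new observation
        p *= (1.0 - k)
        out.append(x)
    return np.array(out)

def ar1_forecast(x, steps):
    """Least-squares AR(1) with intercept, standing in for the linear ARIMA component."""
    phi, c = np.polyfit(x[:-1], x[1:], 1)
    preds, last = [], x[-1]
    for _ in range(steps):
        last = phi * last + c
        preds.append(last)
    return np.array(preds)

rng = np.random.default_rng(0)
t = np.arange(500)
deform = 0.01 * t + 2.0 * np.sin(t / 30.0) + rng.normal(0, 0.5, t.size)  # synthetic deformation (mm)

smooth = kalman_denoise(deform)
forecast = ar1_forecast(smooth, steps=5)
volatility = np.var(np.diff(smooth)[-50:])   # crude recent-volatility proxy in place of GARCH
print("5-step forecast:", np.round(forecast, 2), "volatility proxy:", round(volatility, 4))
```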
Al-Otaibi, H M; Hardman, J G
2011-11-01
Existing methods allow prediction of Pa(O₂) during adjustment of Fi(O₂). However, these are cumbersome and lack sufficient accuracy for use in the clinical setting. The present studies aim to extend the validity of a novel formula designed to predict Pa(O₂) during adjustment of Fi(O₂) and to compare it with the current methods. Sixty-seven new data sets were collected from 46 randomly selected, mechanically ventilated patients. Each data set consisted of two subsets (before and 20 min after Fi(O₂) adjustment) and contained ventilator settings, pH, and arterial blood gas values. We compared the accuracy of Pa(O₂) prediction using a new formula (which utilizes only the pre-adjustment Pa(O₂) and pre- and post-adjustment Fi(O₂)) with prediction using assumptions of constant Pa(O₂)/Fi(O₂) or constant Pa(O₂)/PA(O₂). Subsequently, 20 clinicians predicted Pa(O₂) using the new formula and using Nunn's isoshunt diagram. The accuracy of the clinicians' predictions was examined. The 95% limits of agreement (LA(95%)) between predicted and measured Pa(O₂) in the patient group were: new formula 0.11 (2.0) kPa, Pa(O₂)/Fi(O₂) -1.9 (4.4) kPa, and Pa(O₂)/PA(O₂) -1.0 (3.6) kPa. The LA(95%) of clinicians' predictions of Pa(O₂) were 0.56 (3.6) kPa (new formula) and -2.7 (6.4) kPa (isoshunt diagram). The new formula's prediction of changes in Pa(O₂) is acceptably accurate and reliable and better than any other existing method. Its use by clinicians appears to improve accuracy over the most popular existing method. The simplicity of the new method may allow its regular use in the critical care setting.
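The abstract does not reproduce the new formula itself, so the Python sketch below only illustrates the comparator approach (a constant Pa(O₂)/Fi(O₂) ratio) and the Bland-Altman-style 95% limits of agreement used to judge the predictions; all values are invented for illustration.

```python
import numpy as np

def predict_pao2_constant_ratio(pao2_pre, fio2_pre, fio2_post):
    """Predict post-adjustment PaO2 assuming the PaO2/FiO2 ratio stays constant."""
    return pao2_pre * (fio2_post / fio2_pre)

def limits_of_agreement(predicted, measured):
    """Bland-Altman style bias and 95% limits of agreement: bias +/- 1.96 SD of differences."""
    diff = np.asarray(predicted) - np.asarray(measured)
    bias, sd = diff.mean(), diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Illustrative numbers only (PaO2 in kPa, FiO2 as a fraction).
pao2_pre  = np.array([10.2, 12.5, 9.8, 11.4])
fio2_pre  = np.array([0.40, 0.50, 0.35, 0.45])
fio2_post = np.array([0.30, 0.40, 0.45, 0.35])
pao2_post_measured = np.array([8.4, 10.6, 12.0, 9.5])

predicted = predict_pao2_constant_ratio(pao2_pre, fio2_pre, fio2_post)
print("bias and 95% limits of agreement:", limits_of_agreement(predicted, pao2_post_measured))
```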
Wang, Hue-Yu; Wen, Ching-Feng; Chiu, Yu-Hsien; Lee, I-Nong; Kao, Hao-Yun; Lee, I-Chen; Ho, Wen-Hsien
2013-01-01
Background An adaptive-network-based fuzzy inference system (ANFIS) was compared with an artificial neural network (ANN) in terms of accuracy in predicting the combined effects of temperature (10.5 to 24.5°C), pH level (5.5 to 7.5), sodium chloride level (0.25% to 6.25%) and sodium nitrite level (0 to 200 ppm) on the growth rate of Leuconostoc mesenteroides under aerobic and anaerobic conditions. Methods The ANFIS and ANN models were compared in terms of six statistical indices calculated by comparing their prediction results with actual data: mean absolute percentage error (MAPE), root mean square error (RMSE), standard error of prediction percentage (SEP), bias factor (Bf), accuracy factor (Af), and absolute fraction of variance (R 2). Graphical plots were also used for model comparison. Conclusions The learning-based systems obtained encouraging prediction results. Sensitivity analyses of the four environmental factors showed that temperature and, to a lesser extent, NaCl had the most influence on accuracy in predicting the growth rate of Leuconostoc mesenteroides under aerobic and anaerobic conditions. The observed effectiveness of ANFIS for modeling microbial kinetic parameters confirms its potential use as a supplemental tool in predictive mycology. Comparisons between growth rates predicted by ANFIS and actual experimental data also confirmed the high accuracy of the Gaussian membership function in ANFIS. Comparisons of the six statistical indices under both aerobic and anaerobic conditions also showed that the ANFIS model was better than all ANN models in predicting the four kinetic parameters. Therefore, the ANFIS model is a valuable tool for quickly predicting the growth rate of Leuconostoc mesenteroides under aerobic and anaerobic conditions. PMID:23705023
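For reference, the six fit indices named in this record are easy to compute directly; the Python sketch below uses the definitions most commonly given in the predictive-microbiology literature (they should be checked against the paper before reuse), with invented observed/predicted growth rates.

```python
import numpy as np

def fit_indices(obs, pred):
    """Common goodness-of-fit indices for predictive models of microbial growth rate."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    log_ratio = np.log10(pred / obs)
    rmse = np.sqrt(np.mean((obs - pred) ** 2))
    return {
        "MAPE_%": 100.0 * np.mean(np.abs((obs - pred) / obs)),
        "RMSE": rmse,
        "SEP_%": 100.0 * rmse / obs.mean(),
        "Bf": 10 ** log_ratio.mean(),                              # bias factor
        "Af": 10 ** np.abs(log_ratio).mean(),                      # accuracy factor
        "R2": 1.0 - np.sum((obs - pred) ** 2) / np.sum(obs ** 2),  # absolute fraction of variance
    }

obs = [0.12, 0.20, 0.35, 0.50]    # observed growth rates (illustrative units)
pred = [0.10, 0.22, 0.33, 0.55]   # model-predicted growth rates
print(fit_indices(obs, pred))
```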
Accuracy of Self-Reported Cervical and Breast Cancer Screening by Women with Intellectual Disability
ERIC Educational Resources Information Center
Son, Esther; Parish, Susan L.; Swaine, Jamie G.; Luken, Karen
2013-01-01
This study examines the accuracy of self-report of cervical and breast cancer screening by women with intellectual disability ("n" ?=? 155). Data from face-to-face interviews and medical records were analyzed. Total agreement, sensitivity, specificity, positive predictive value and negative predictive value were calculated. Total…
ERIC Educational Resources Information Center
Borgmeier, Chris; Horner, Robert H.
2006-01-01
Faced with limited resources, schools require tools that increase the accuracy and efficiency of functional behavioral assessment. Yarbrough and Carr (2000) provided evidence that informant confidence ratings of the likelihood of problem behavior in specific situations offered a promising tool for predicting the accuracy of function-based…
Predictive Validity and Accuracy of Oral Reading Fluency for English Learners
ERIC Educational Resources Information Center
Vanderwood, Michael L.; Tung, Catherine Y.; Checca, C. Jason
2014-01-01
The predictive validity and accuracy of an oral reading fluency (ORF) measure for a statewide assessment in English language arts was examined for second-grade native English speakers (NESs) and English learners (ELs) with varying levels of English proficiency. In addition to comparing ELs with native English speakers, the impact of English…
NASA Astrophysics Data System (ADS)
DSuryadi; Delyuzar; Soekimin
2018-03-01
Indonesia carries the second-largest TB (tuberculosis) burden in the world. Improvement in controlling TB and reducing its complications can be accelerated by early diagnosis and correct treatment. The PCR test is the gold standard; however, it is quite expensive for routine diagnosis, so an accurate and cheaper diagnostic method such as fine needle aspiration biopsy is needed. The study aims to determine the accuracy of fine needle aspiration biopsy cytology in the diagnosis of tuberculous lymphadenitis. A cross-sectional analytic study was conducted on samples from patients suspected of tuberculous lymphadenitis. The fine needle aspiration biopsy (FNAB) test was performed and confirmed by PCR, and the sensitivity, specificity, accuracy, positive predictive value and negative predictive value of the two methods were compared. Compared with the gold standard, the FNAB test achieved a sensitivity of 92.50%, specificity of 96.49%, accuracy of 94.85%, positive predictive value of 94.87% and negative predictive value of 94.83%. We conclude that fine needle aspiration biopsy is recommended as a cheaper and accurate diagnostic test for tuberculous lymphadenitis.
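The accuracy measures reported here all follow from the standard 2 x 2 table against the gold standard; a minimal Python sketch with illustrative counts (not the study's raw data):

```python
def diagnostic_accuracy(tp, fp, fn, tn):
    """Standard 2x2 diagnostic accuracy measures against a gold standard."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),            # positive predictive value
        "npv": tn / (tn + fn),            # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

# Counts are illustrative only.
print(diagnostic_accuracy(tp=37, fp=2, fn=3, tn=55))
```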
A review of propeller noise prediction methodology: 1919-1994
NASA Technical Reports Server (NTRS)
Metzger, F. Bruce
1995-01-01
This report summarizes a review of the literature regarding propeller noise prediction methods. The review is divided into six sections: (1) early methods; (2) more recent methods based on earlier theory; (3) more recent methods based on the Acoustic Analogy; (4) more recent methods based on Computational Acoustics; (5) empirical methods; and (6) broadband methods. The report concludes that there are a large number of noise prediction procedures available which vary markedly in complexity. Deficiencies in accuracy of methods in many cases may be related, not to the methods themselves, but the accuracy and detail of the aerodynamic inputs used to calculate noise. The steps recommended in the report to provide accurate and easy to use prediction methods are: (1) identify reliable test data; (2) define and conduct test programs to fill gaps in the existing data base; (3) identify the most promising prediction methods; (4) evaluate promising prediction methods relative to the data base; (5) identify and correct the weaknesses in the prediction methods, including lack of user friendliness, and include features now available only in research codes; (6) confirm the accuracy of improved prediction methods to the data base; and (7) make the methods widely available and provide training in their use.
NASA Astrophysics Data System (ADS)
Niu, Mingfei; Wang, Yufang; Sun, Shaolong; Li, Yongwu
2016-06-01
To enhance prediction reliability and accuracy, a hybrid model based on the promising principle of "decomposition and ensemble" and a recently proposed meta-heuristic called grey wolf optimizer (GWO) is introduced for daily PM2.5 concentration forecasting. Compared with existing PM2.5 forecasting methods, this proposed model has improved the prediction accuracy and hit rates of directional prediction. The proposed model involves three main steps, i.e., decomposing the original PM2.5 series into several intrinsic mode functions (IMFs) via complementary ensemble empirical mode decomposition (CEEMD) for simplifying the complex data; individually predicting each IMF with support vector regression (SVR) optimized by GWO; integrating all predicted IMFs for the ensemble result as the final prediction by another SVR optimized by GWO. Seven benchmark models, including single artificial intelligence (AI) models, other decomposition-ensemble models with different decomposition methods and models with the same decomposition-ensemble method but optimized by different algorithms, are considered to verify the superiority of the proposed hybrid model. The empirical study indicates that the proposed hybrid decomposition-ensemble model is remarkably superior to all considered benchmark models for its higher prediction accuracy and hit rates of directional prediction.
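The "decomposition and ensemble" idea can be sketched compactly, though the Python code below deliberately substitutes much simpler pieces than the paper's: a moving-average split into two bands instead of CEEMD, and a scikit-learn SVR with hand-picked settings instead of a GWO-tuned SVR. The series and parameters are synthetic.

```python
import numpy as np
from sklearn.svm import SVR

def lagged(x, n_lags=5):
    """Turn a series into (lag-feature, next-value) pairs for one-step-ahead regression."""
    X = np.column_stack([x[i:len(x) - n_lags + i] for i in range(n_lags)])
    return X, x[n_lags:]

rng = np.random.default_rng(0)
pm25 = 40 + 10 * np.sin(np.arange(400) / 15.0) + rng.normal(0, 4, 400)   # synthetic PM2.5 series

# Crude decomposition stand-in: a smooth band plus the residual band.
trend = np.convolve(pm25, np.ones(15) / 15, mode="same")
bands = [trend, pm25 - trend]

# Predict each band separately, then sum the band predictions (the "ensemble" step).
prediction = 0.0
for band in bands:
    X, y = lagged(band)
    model = SVR(C=10.0).fit(X[:-1], y[:-1])     # train on all but the final point
    prediction += model.predict(X[-1:])[0]      # predict the held-out final point
print("predicted last value:", round(prediction, 2), "actual:", round(pm25[-1], 2))
```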
BIG DATA ANALYTICS AND PRECISION ANIMAL AGRICULTURE SYMPOSIUM: Data to decisions.
White, B J; Amrine, D E; Larson, R L
2018-04-14
Big data are frequently used in many facets of business and agronomy to enhance knowledge needed to improve operational decisions. Livestock operations collect data of sufficient quantity to perform predictive analytics. Predictive analytics can be defined as a methodology and suite of data evaluation techniques to generate a prediction for specific target outcomes. The objective of this manuscript is to describe the process of using big data and the predictive analytic framework to create tools to drive decisions in livestock production, health, and welfare. The predictive analytic process involves selecting a target variable, managing the data, partitioning the data, then creating algorithms, refining algorithms, and finally comparing accuracy of the created classifiers. The partitioning of the datasets allows model building and refining to occur prior to testing the predictive accuracy of the model with naive data to evaluate overall accuracy. Many different classification algorithms are available for predictive use and testing multiple algorithms can lead to optimal results. Application of a systematic process for predictive analytics using data that is currently collected or that could be collected on livestock operations will facilitate precision animal management through enhanced livestock operational decisions.
Song, Woo-Jung; Kim, Hyun Jung; Shim, Ji-Su; Won, Ha-Kyeong; Kang, Sung-Yoon; Sohn, Kyoung-Hee; Kim, Byung-Keun; Jo, Eun-Jung; Kim, Min-Hye; Kim, Sang-Heon; Park, Heung-Woo; Kim, Sun-Sin; Chang, Yoon-Seok; Morice, Alyn H; Lee, Byung-Jae; Cho, Sang-Heon
2017-09-01
Individual studies have suggested the utility of fractional exhaled nitric oxide (Feno) measurement in detecting cough-variant asthma (CVA) and eosinophilic bronchitis (EB) in patients with chronic cough. We sought to obtain summary estimates of diagnostic test accuracy of Feno measurement in predicting CVA, EB, or both in adults with chronic cough. Electronic databases were searched for studies published until January 2016, without language restriction. Cross-sectional studies that reported the diagnostic accuracy of Feno measurement for detecting CVA or EB were included. Risk of bias was assessed with Quality Assessment of Diagnostic Accuracy Studies 2. Random effects meta-analyses were performed to obtain summary estimates of the diagnostic accuracy of Feno measurement. A total of 15 studies involving 2187 adults with chronic cough were identified. Feno measurement had a moderate diagnostic accuracy in predicting CVA in patients with chronic cough, showing the summary area under the curve to be 0.87 (95% CI, 0.83-0.89). Specificity was higher and more consistent than sensitivity (0.85 [95% CI, 0.81-0.88] and 0.72 [95% CI, 0.61-0.81], respectively). However, in the nonasthmatic population with chronic cough, the diagnostic accuracy to predict EB was found to be relatively lower (summary area under the curve, 0.81 [95% CI, 0.77-0.84]), and specificity was inconsistent. The present meta-analyses indicated the diagnostic potential of Feno measurement as a rule-in test for detecting CVA in adult patients with chronic cough. However, Feno measurement may not be useful to predict EB in nonasthmatic subjects with chronic cough. These findings warrant further studies to validate the roles of Feno measurement in clinical practice of patients with chronic cough. Copyright © 2017 American Academy of Allergy, Asthma & Immunology. Published by Elsevier Inc. All rights reserved.
Accuracy of fetal sex determination on ultrasound examination in the first trimester of pregnancy.
Manzanares, Sebastián; Benítez, Adara; Naveiro-Fuentes, Mariña; López-Criado, María Setefilla; Sánchez-Gila, Mar
2016-06-01
The aim of this study was to evaluate the feasibility and success rate of sex determination on transabdominal sonographic examination at 11-13 weeks' gestation and to identify factors influencing accuracy. In this prospective observational evaluation of 672 fetuses between 11 weeks' and 13 weeks + 6 days' gestational age (GA), we determined fetal sex according to the angle of the genital tubercle viewed on the midsagittal plane. We also analyzed maternal, fetal, and operator factors possibly influencing the accuracy of the determination. Fetal sex determination was feasible in 608 of the 672 fetuses (90.5%), and the prediction was correct in 532 of those 608 cases (87.5%). Fetal sex was more accurately predicted as the fetal crown-rump length (CRL), and GA increased and was less accurately predicted as the maternal body mass index increased. A CRL greater than 55.7 mm, a GA more than 12 weeks + 2 days, and a body mass index below 23.8 were identified as the best cutoff values for sex prediction. None of the other analyzed factors influenced the feasibility or accuracy of sex determination. The sex of a fetus can be accurately determined on sonographic examination in the first trimester of pregnancy; the accuracy of this prediction is influenced by the fetal CRL and GA and by the maternal body mass index. © 2015 Wiley Periodicals, Inc. J Clin Ultrasound 44:272-277, 2016. © 2015 Wiley Periodicals, Inc.
Predictive accuracy of combined genetic and environmental risk scores.
Dudbridge, Frank; Pashayan, Nora; Yang, Jian
2018-02-01
The substantial heritability of most complex diseases suggests that genetic data could provide useful risk prediction. To date the performance of genetic risk scores has fallen short of the potential implied by heritability, but this can be explained by insufficient sample sizes for estimating highly polygenic models. When risk predictors already exist based on environment or lifestyle, two key questions are to what extent can they be improved by adding genetic information, and what is the ultimate potential of combined genetic and environmental risk scores? Here, we extend previous work on the predictive accuracy of polygenic scores to allow for an environmental score that may be correlated with the polygenic score, for example when the environmental factors mediate the genetic risk. We derive common measures of predictive accuracy and improvement as functions of the training sample size, chip heritabilities of disease and environmental score, and genetic correlation between disease and environmental risk factors. We consider simple addition of the two scores and a weighted sum that accounts for their correlation. Using examples from studies of cardiovascular disease and breast cancer, we show that improvements in discrimination are generally small but reasonable degrees of reclassification could be obtained with current sample sizes. Correlation between genetic and environmental scores has only minor effects on numerical results in realistic scenarios. In the longer term, as the accuracy of polygenic scores improves they will come to dominate the predictive accuracy compared to environmental scores. © 2017 WILEY PERIODICALS, INC.
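A toy Python/scikit-learn sketch of the setting discussed here: a polygenic score and a correlated environmental score are combined either by simple addition or by a weighted sum (fitted below with logistic regression, which implicitly accounts for their correlation). The liability model, effect sizes and prevalence are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5000
g = rng.normal(size=n)                         # polygenic score (standardised)
e = 0.3 * g + rng.normal(scale=0.95, size=n)   # environmental score, correlated with g
liability = 0.4 * g + 0.4 * e + rng.normal(size=n)
disease = (liability > np.quantile(liability, 0.9)).astype(int)   # ~10% prevalence

print("genetic score only:   AUC =", round(roc_auc_score(disease, g), 3))
print("environmental only:   AUC =", round(roc_auc_score(disease, e), 3))
print("simple sum g + e:     AUC =", round(roc_auc_score(disease, g + e), 3))

# Weighted sum: weights fitted to the outcome, down-weighting the shared (correlated) component.
weighted = LogisticRegression().fit(np.column_stack([g, e]), disease)
combined = weighted.decision_function(np.column_stack([g, e]))
print("weighted combination: AUC =", round(roc_auc_score(disease, combined), 3))
```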
Meuwissen, Theo H E; Indahl, Ulf G; Ødegård, Jørgen
2017-12-27
Non-linear Bayesian genomic prediction models such as BayesA/B/C/R involve iteration and mostly Markov chain Monte Carlo (MCMC) algorithms, which are computationally expensive, especially when whole-genome sequence (WGS) data are analyzed. Singular value decomposition (SVD) of the genotype matrix can facilitate genomic prediction in large datasets, and can be used to estimate marker effects and their prediction error variances (PEV) in a computationally efficient manner. Here, we developed, implemented, and evaluated a direct, non-iterative method for the estimation of marker effects for the BayesC genomic prediction model. The BayesC model assumes a priori that markers have normally distributed effects with probability [Formula: see text] and no effect with probability (1 - [Formula: see text]). Marker effects and their PEV are estimated by using SVD and the posterior probability of the marker having a non-zero effect is calculated. These posterior probabilities are used to obtain marker-specific effect variances, which are subsequently used to approximate BayesC estimates of marker effects in a linear model. A computer simulation study was conducted to compare alternative genomic prediction methods, where a single reference generation was used to estimate marker effects, which were subsequently used for 10 generations of forward prediction, for which accuracies were evaluated. SVD-based posterior probabilities of markers having non-zero effects were generally lower than MCMC-based posterior probabilities, but for some regions the opposite occurred, resulting in clear signals for QTL-rich regions. The accuracies of breeding values estimated using SVD- and MCMC-based BayesC analyses were similar across the 10 generations of forward prediction. For an intermediate number of generations (2 to 5) of forward prediction, accuracies obtained with the BayesC model tended to be slightly higher than accuracies obtained using the best linear unbiased prediction of SNP effects (SNP-BLUP model). When reducing marker density from WGS data to 30 K, SNP-BLUP tended to yield the highest accuracies, at least in the short term. Based on SVD of the genotype matrix, we developed a direct method for the calculation of BayesC estimates of marker effects. Although SVD- and MCMC-based marker effects differed slightly, their prediction accuracies were similar. Assuming that the SVD of the marker genotype matrix is already performed for other reasons (e.g. for SNP-BLUP), computation times for the BayesC predictions were comparable to those of SNP-BLUP.
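The SVD trick at the heart of this record is easiest to see for the plain SNP-BLUP (ridge) case; the Python sketch below computes ridge marker effects from the SVD of a centred genotype matrix and is only a simplified illustration, not the paper's BayesC posterior-probability machinery. The simulated data and shrinkage value are arbitrary.

```python
import numpy as np

def snp_blup_via_svd(Z, y, lam):
    """Ridge (SNP-BLUP) marker effects via SVD: with Zc = U S V', b = V diag(s/(s^2+lam)) U'y."""
    Zc = Z - Z.mean(axis=0)
    U, s, Vt = np.linalg.svd(Zc, full_matrices=False)
    shrink = s / (s ** 2 + lam)                  # singular-value-wise shrinkage
    return Vt.T @ (shrink * (U.T @ (y - y.mean())))

rng = np.random.default_rng(0)
Z = rng.integers(0, 3, size=(300, 2000)).astype(float)      # 300 animals x 2000 markers
true_b = np.zeros(2000)
causal = rng.choice(2000, size=40, replace=False)
true_b[causal] = rng.normal(0, 0.3, size=40)
y = Z @ true_b + rng.normal(0, 1.0, size=300)

b_hat = snp_blup_via_svd(Z, y, lam=200.0)
print("correlation with simulated marker effects:", round(np.corrcoef(b_hat, true_b)[0, 1], 3))
```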
Conde-Agudelo, Agustin; Romero, Roberto
2015-12-01
To determine the accuracy of changes in transvaginal sonographic cervical length over time in predicting preterm birth in women with singleton and twin gestations. PubMed, Embase, Cinahl, Lilacs, and Medion (all from inception to June 30, 2015), bibliographies, Google scholar, and conference proceedings. Cohort or cross-sectional studies reporting on the predictive accuracy for preterm birth of changes in cervical length over time. Two reviewers independently selected studies, assessed the risk of bias, and extracted the data. Summary receiver-operating characteristic curves, pooled sensitivities and specificities, and summary likelihood ratios were generated. Fourteen studies met the inclusion criteria, of which 7 provided data on singleton gestations (3374 women) and 8 on twin gestations (1024 women). Among women with singleton gestations, the shortening of cervical length over time had a low predictive accuracy for preterm birth at <37 and <35 weeks of gestation with pooled sensitivities and specificities, and summary positive and negative likelihood ratios ranging from 49% to 74%, 44% to 85%, 1.3 to 4.1, and 0.3 to 0.7, respectively. In women with twin gestations, the shortening of cervical length over time had a low to moderate predictive accuracy for preterm birth at <34, <32, <30, and <28 weeks of gestation with pooled sensitivities and specificities, and summary positive and negative likelihood ratios ranging from 47% to 73%, 84% to 89%, 3.8 to 5.3, and 0.3 to 0.6, respectively. There were no statistically significant differences between the predictive accuracies for preterm birth of cervical length shortening over time and the single initial and/or final cervical length measurement in 8 of 11 studies that provided data for making these comparisons. In the largest and highest-quality study, a single measurement of cervical length obtained at 24 or 28 weeks of gestation was significantly more predictive of preterm birth than any decrease in cervical length between these gestational ages. Change in transvaginal sonographic cervical length over time is not a clinically useful test to predict preterm birth in women with singleton or twin gestations. A single cervical length measurement obtained between 18 and 24 weeks of gestation appears to be a better test to predict preterm birth than changes in cervical length over time. Published by Elsevier Inc.
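The pooled likelihood ratios quoted here follow directly from summary sensitivity and specificity; a short Python helper with values chosen inside the reported ranges, purely for illustration:

```python
def likelihood_ratios(sensitivity, specificity):
    """Positive and negative likelihood ratios from (pooled) sensitivity and specificity."""
    return sensitivity / (1 - specificity), (1 - sensitivity) / specificity

# Example values within the ranges reported for twin gestations (illustrative only).
lr_pos, lr_neg = likelihood_ratios(sensitivity=0.60, specificity=0.86)
print(f"LR+ = {lr_pos:.1f}, LR- = {lr_neg:.2f}")
```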
Elkovitch, Natasha; Viljoen, Jodi L; Scalora, Mario J; Ullman, Daniel
2008-01-01
As courts often rely on clinicians when differentiating between sexually abusive youth at a low versus high risk of reoffense, understanding factors that contribute to accuracy in assessment of risk is imperative. The present study built on existing research by examining (1) the accuracy of clinical judgments of risk made after completing risk assessment instruments, (2) whether instrument-informed clinical judgments made with a high degree of confidence are associated with greater accuracy, and (3) the risk assessment instruments and subscales most predictive of clinical judgments. Raters assessed each youth's (n = 166) risk of reoffending after completing the SAVRY and J-SOAP-II. Raters were not able to predict detected cases of either sexual recidivism or nonsexual violent recidivism above chance, and a high degree of rater confidence was not associated with higher levels of accuracy. Total scores on the J-SOAP-II were predictive of instrument-informed clinical judgments of sexual risk, and total scores on the SAVRY of nonsexual risk.
Song, H; Li, L; Ma, P; Zhang, S; Su, G; Lund, M S; Zhang, Q; Ding, X
2018-06-01
This study investigated the efficiency of genomic prediction when adding the markers identified by genome-wide association study (GWAS) using a data set of imputed high-density (HD) markers from 54K markers in Chinese Holsteins. Among 3,056 Chinese Holsteins with imputed HD data, 2,401 individuals born before October 1, 2009, were used for GWAS and a reference population for genomic prediction, and the 220 younger cows were used as a validation population. In total, 1,403, 1,536, and 1,383 significant single nucleotide polymorphisms (SNP; false discovery rate at 0.05) associated with conformation final score, mammary system, and feet and legs were identified, respectively. About 2 to 3% of the genetic variance of the 3 traits was explained by these significant SNP. Only a very small proportion of significant SNP identified by GWAS was included in the 54K marker panel. Three new marker sets (54K+) were herein produced by adding significant SNP obtained by the linear mixed model for each trait into the 54K marker panel. Genomic breeding values were predicted using a Bayesian variable selection (BVS) model. The accuracies of genomic breeding values predicted by BVS based on the 54K+ data were 2.0 to 5.2% higher than those based on the 54K data. The imputed HD markers yielded 1.4% higher accuracy on average (BVS) than the 54K data. Both the 54K+ and HD data generated lower bias of genomic prediction, and the 54K+ data yielded the lowest bias in all situations. Our results show that the imputed HD data were not very useful for improving the accuracy of genomic prediction and that adding the significant markers derived from the imputed HD marker panel could improve the accuracy of genomic prediction and decrease the bias of genomic prediction. Copyright © 2018 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Price, Owen F; Penman, Trent; Bradstock, Ross; Borah, Rittick
2016-10-01
Wildfires are complex adaptive systems, and have been hypothesized to exhibit scale-dependent transitions in the drivers of fire spread. Among other things, this makes the prediction of final fire size from conditions at the ignition difficult. We test this hypothesis by conducting a multi-scale statistical modelling of the factors determining whether fires reached 10 ha, then 100 ha, then 1000 ha and the final size of fires >1000 ha. At each stage, the predictors were measures of weather, fuels, topography and fire suppression. The objectives were to identify differences among the models indicative of scale transitions, assess the accuracy of the multi-step method for predicting fire size (compared to predicting final size from initial conditions) and to quantify the importance of the predictors. The data were 1116 fires that occurred in the eucalypt forests of New South Wales between 1985 and 2010. The models were similar at the different scales, though there were subtle differences. For example, the presence of roads affected whether fires reached 10 ha but not larger scales. Weather was the most important predictor overall, though fuel load, topography and ease of suppression all showed effects. Overall, there was no evidence that fires have scale-dependent transitions in behaviour. The models had a predictive accuracy of 73%, 66%, 72% and 53% at the 10 ha, 100 ha, 1000 ha and final-size scales, respectively. When these steps were combined, the overall accuracy for predicting the size of fires was 62%, while the accuracy of the one-step model was only 20%. Thus, the multi-scale approach was an improvement on the single-scale approach, even though the predictive accuracy was probably insufficient for use as an operational tool. The analysis has also provided further evidence of the important role of weather, compared to fuel, suppression and topography in driving fire behaviour. Copyright © 2016. Published by Elsevier Ltd.
Perez-Cruz, Pedro E; Dos Santos, Renata; Silva, Thiago Buosi; Crovador, Camila Souza; Nascimento, Maria Salete de Angelis; Hall, Stacy; Fajardo, Julieta; Bruera, Eduardo; Hui, David
2014-11-01
Survival prognostication is important during the end of life. The accuracy of clinician prediction of survival (CPS) over time has not been well characterized. The aims of the study were to examine changes in prognostication accuracy during the last 14 days of life in a cohort of patients with advanced cancer admitted to two acute palliative care units and to compare the accuracy between the temporal and probabilistic approaches. Physicians and nurses prognosticated survival daily for cancer patients in two hospitals until death/discharge using two prognostic approaches: temporal and probabilistic. We assessed accuracy for each method daily during the last 14 days of life comparing accuracy at Day -14 (baseline) with accuracy at each time point using a test of proportions. A total of 6718 temporal and 6621 probabilistic estimations were provided by physicians and nurses for 311 patients, respectively. Median (interquartile range) survival was 8 days (4-20 days). Temporal CPS had low accuracy (10%-40%) and did not change over time. In contrast, probabilistic CPS was significantly more accurate (P < .05 at each time point) but decreased close to death. Probabilistic CPS was consistently more accurate than temporal CPS over the last 14 days of life; however, its accuracy decreased as patients approached death. Our findings suggest that better tools to predict impending death are necessary. Copyright © 2014 American Academy of Hospice and Palliative Medicine. Published by Elsevier Inc. All rights reserved.
Labrenz, Franziska; Icenhour, Adriane; Benson, Sven; Elsenbruch, Sigrid
2015-01-01
As a fundamental learning process, fear conditioning promotes the formation of associations between predictive cues and biologically significant signals. In its application to pain, conditioning may provide important insight into mechanisms underlying pain-related fear, although knowledge especially in interoceptive pain paradigms remains scarce. Furthermore, while the influence of contingency awareness on excitatory learning is subject of ongoing debate, its role in pain-related acquisition is poorly understood and essentially unknown regarding extinction as inhibitory learning. Therefore, we addressed the impact of contingency awareness on learned emotional responses to pain- and safety-predictive cues in a combined dataset of two pain-related conditioning studies. In total, 75 healthy participants underwent differential fear acquisition, during which rectal distensions as interoceptive unconditioned stimuli (US) were repeatedly paired with a predictive visual cue (conditioned stimulus; CS+) while another cue (CS−) was presented unpaired. During extinction, both CS were presented without US. CS valence, indicating learned emotional responses, and CS-US contingencies were assessed on visual analog scales (VAS). Based on an integrative measure of contingency accuracy, a median-split was performed to compare groups with low vs. high contingency accuracy regarding learned emotional responses. To investigate predictive value of contingency accuracy, regression analyses were conducted. Highly accurate individuals revealed more pronounced negative emotional responses to CS+ and increased positive responses to CS− when compared to participants with low contingency accuracy. Following extinction, highly accurate individuals had fully extinguished pain-predictive cue properties, while exhibiting persistent positive emotional responses to safety signals. In contrast, individuals with low accuracy revealed equally positive emotional responses to both, CS+ and CS−. Contingency accuracy predicted variance in the formation of positive responses to safety cues while no predictive value was found for danger cues following acquisition and for neither cue following extinction. Our findings underscore specific roles of learned danger and safety in pain-related acquisition and extinction. Contingency accuracy appears to distinctly impact learned emotional responses to safety and danger cues, supporting aversive learning to occur independently from CS-US awareness. The interplay of cognitive and emotional factors in shaping excitatory and inhibitory pain-related learning may contribute to altered pain processing, underscoring its clinical relevance in chronic pain. PMID:26640433
Automated detection of brain atrophy patterns based on MRI for the prediction of Alzheimer's disease
Plant, Claudia; Teipel, Stefan J.; Oswald, Annahita; Böhm, Christian; Meindl, Thomas; Mourao-Miranda, Janaina; Bokde, Arun W.; Hampel, Harald; Ewers, Michael
2010-01-01
Subjects with mild cognitive impairment (MCI) have an increased risk to develop Alzheimer's disease (AD). Voxel-based MRI studies have demonstrated that widely distributed cortical and subcortical brain areas show atrophic changes in MCI, preceding the onset of AD-type dementia. Here we developed a novel data mining framework in combination with three different classifiers including support vector machine (SVM), Bayes statistics, and voting feature intervals (VFI) to derive a quantitative index of pattern matching for the prediction of the conversion from MCI to AD. MRI was collected in 32 AD patients, 24 MCI subjects and 18 healthy controls (HC). Nine out of 24 MCI subjects converted to AD after an average follow-up interval of 2.5 years. Using feature selection algorithms, brain regions showing the highest accuracy for the discrimination between AD and HC were identified, reaching a classification accuracy of up to 92%. The extracted AD clusters were used as a search region to extract those brain areas that are predictive of conversion to AD within MCI subjects. The most predictive brain areas included the anterior cingulate gyrus and orbitofrontal cortex. The best prediction accuracy, which was cross-validated via train-and-test, was 75% for the prediction of the conversion from MCI to AD. The present results suggest that novel multivariate methods of pattern matching reach a clinically relevant accuracy for the a priori prediction of the progression from MCI to AD. PMID:19961938
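The pipeline sketched in this abstract (feature selection followed by an SVM, evaluated by cross-validation) can be illustrated with a minimal scikit-learn example on synthetic data; the feature count, kernel choice and array names below are assumptions for illustration, not the authors' settings.

    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(74, 500))       # voxel-wise atrophy features (synthetic)
    y = rng.integers(0, 2, size=74)      # 0 = control, 1 = AD/converter (synthetic)

    # Feature selection is fitted inside each CV fold to avoid optimistic bias.
    clf = make_pipeline(SelectKBest(f_classif, k=50), SVC(kernel="linear", C=1.0))
    acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
    print(f"cross-validated accuracy: {acc.mean():.2f} +/- {acc.std():.2f}")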
Prediction-Oriented Marker Selection (PROMISE): With Application to High-Dimensional Regression.
Kim, Soyeon; Baladandayuthapani, Veerabhadran; Lee, J Jack
2017-06-01
In personalized medicine, biomarkers are used to select therapies with the highest likelihood of success based on an individual patient's biomarker/genomic profile. Two goals are to choose important biomarkers that accurately predict treatment outcomes and to cull unimportant biomarkers to reduce the cost of biological and clinical verifications. These goals are challenging due to the high dimensionality of genomic data. Variable selection methods based on penalized regression (e.g., the lasso and elastic net) have yielded promising results. However, selecting the right amount of penalization is critical to simultaneously achieving these two goals. Standard approaches based on cross-validation (CV) typically provide high prediction accuracy with high true positive rates but at the cost of too many false positives. Alternatively, stability selection (SS) controls the number of false positives, but at the cost of yielding too few true positives. To circumvent these issues, we propose prediction-oriented marker selection (PROMISE), which combines SS with CV to conflate the advantages of both methods. Our application of PROMISE with the lasso and elastic net in data analysis shows that, compared to CV, PROMISE produces sparse solutions, few false positives, and small type I + type II error, and maintains good prediction accuracy, with a marginal decrease in the true positive rates. Compared to SS, PROMISE offers better prediction accuracy and true positive rates. In summary, PROMISE can be applied in many fields to select regularization parameters when the goals are to minimize false positives and maximize prediction accuracy.
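A rough sketch of the two ingredients PROMISE combines, a cross-validated lasso penalty and subsampling-based selection frequencies, is shown below on synthetic data; the subsample count and selection-frequency threshold are assumed values, and the published algorithm differs in detail.

    import numpy as np
    from sklearn.linear_model import LassoCV, Lasso

    rng = np.random.default_rng(1)
    n, p = 120, 300
    X = rng.normal(size=(n, p))
    beta = np.zeros(p); beta[:5] = 1.5           # 5 true markers (synthetic)
    y = X @ beta + rng.normal(size=n)

    alpha_cv = LassoCV(cv=5).fit(X, y).alpha_    # penalty chosen by cross-validation
    # Stability-selection-style step: refit on random half-samples, count selections.
    counts = np.zeros(p)
    for _ in range(50):
        idx = rng.choice(n, n // 2, replace=False)
        counts += Lasso(alpha=alpha_cv).fit(X[idx], y[idx]).coef_ != 0
    selected = np.where(counts / 50 >= 0.6)[0]   # selection-frequency threshold (assumed)
    print("selected markers:", selected)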
NASA Astrophysics Data System (ADS)
Motoyama, Yuichi; Shiga, Hidetoshi; Sato, Takeshi; Kambe, Hiroshi; Yoshida, Makoto
2017-06-01
Recovery behavior (recovery) and strain-rate dependence of the stress-strain curve (strain-rate dependence) are incorporated into constitutive equations of alloys to predict residual stress and thermal stress during casting. Nevertheless, few studies have systematically investigated the effects of these metallurgical phenomena on the prediction accuracy of thermal stress in a casting. This study compares the thermal stress analysis results with in situ thermal stress measurement results of an Al-Si-Cu specimen during casting. The results underscore the importance for the alloy constitutive equation of incorporating strain-rate dependence to predict thermal stress that develops at high temperatures where the alloy shows strong strain-rate dependence of the stress-strain curve. However, the prediction accuracy of the thermal stress developed at low temperatures did not improve by considering the strain-rate dependence. Incorporating recovery into the constitutive equation improved the accuracy of the simulated thermal stress at low temperatures. Results of comparison implied that the constitutive equation should include strain-rate dependence to simulate defects that develop from thermal stress at high temperatures, such as hot tearing and hot cracking. Recovery should be incorporated into the alloy constitutive equation to predict the casting residual stress and deformation caused by the thermal stress developed mainly in the low temperature range.
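As a purely illustrative aside (not the constitutive equation used in the study), a generic rate-dependent flow-stress form shows how strain-rate dependence enters such a model; all constants below are invented.

    import numpy as np

    def flow_stress(strain, strain_rate, K=150.0, n=0.2, m=0.1, rate_ref=1e-3):
        """Generic rate-dependent flow stress: sigma = K * eps^n * (eps_dot/eps_dot_ref)^m.
        Constants are illustrative only, not fitted to the Al-Si-Cu alloy of the study."""
        return K * np.maximum(strain, 1e-8) ** n * (strain_rate / rate_ref) ** m

    eps = np.linspace(0.001, 0.05, 5)
    for rate in (1e-4, 1e-3, 1e-2):      # three strain rates in 1/s
        print(rate, np.round(flow_stress(eps, rate), 1))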
A fresh look at the predictors of naming accuracy and errors in Alzheimer's disease.
Cuetos, Fernando; Rodríguez-Ferreiro, Javier; Sage, Karen; Ellis, Andrew W
2012-09-01
In recent years, a considerable number of studies have tried to establish which characteristics of objects and their names predict the responses of patients with Alzheimer's disease (AD) in the picture-naming task. The frequency of use of words and their age of acquisition (AoA) have been implicated as two of the most influential variables, with naming being best preserved for objects with high-frequency, early-acquired names. The present study takes a fresh look at the predictors of naming success in Spanish and English AD patients using a range of measures of word frequency and AoA along with visual complexity, imageability, and word length as predictors. Analyses using generalized linear mixed modelling found that naming accuracy was better predicted by AoA ratings taken from older adults than conventional ratings from young adults. Older frequency measures based on written language samples predicted accuracy better than more modern measures based on the frequencies of words in film subtitles. Replacing adult frequency with an estimate of cumulative (lifespan) frequency did not reduce the impact of AoA. Semantic error rates were predicted by both written word frequency and senior AoA while null response errors were only predicted by frequency. Visual complexity, imageability, and word length did not predict naming accuracy or errors. ©2012 The British Psychological Society.
Thermodynamics and proton activities of protic ionic liquids with quantum cluster equilibrium theory
NASA Astrophysics Data System (ADS)
Ingenmey, Johannes; von Domaros, Michael; Perlt, Eva; Verevkin, Sergey P.; Kirchner, Barbara
2018-05-01
We applied the binary Quantum Cluster Equilibrium (bQCE) method to a number of alkylammonium-based protic ionic liquids in order to predict boiling points, vaporization enthalpies, and proton activities. The theory combines statistical thermodynamics of van-der-Waals-type clusters with ab initio quantum chemistry and yields the partition functions (and associated thermodynamic potentials) of binary mixtures over a wide range of thermodynamic phase points. Unlike conventional cluster approaches that are limited to the prediction of thermodynamic properties, dissociation reactions can be effortlessly included into the bQCE formalism, giving access to ionicities, as well. The method is open to quantum chemical methods at any level of theory, but combination with low-cost composite density functional theory methods and the proposed systematic approach to generate cluster sets provides a computationally inexpensive and mostly parameter-free way to predict such properties at good-to-excellent accuracy. Boiling points can be predicted within an accuracy of 50 K, reaching excellent accuracy for ethylammonium nitrate. Vaporization enthalpies are predicted within an accuracy of 20 kJ mol-1 and can be systematically interpreted on a molecular level. We present the first theoretical approach to predict proton activities in protic ionic liquids, with results fitting well into the experimentally observed correlation. Furthermore, enthalpies of vaporization were measured experimentally for some alkylammonium nitrates and an excellent linear correlation with vaporization enthalpies of their respective parent amines is observed.
Genomic Prediction Accounting for Residual Heteroskedasticity.
Ou, Zhining; Tempelman, Robert J; Steibel, Juan P; Ernst, Catherine W; Bates, Ronald O; Bello, Nora M
2015-11-12
Whole-genome prediction (WGP) models that use single-nucleotide polymorphism marker information to predict genetic merit of animals and plants typically assume homogeneous residual variance. However, variability is often heterogeneous across agricultural production systems and may subsequently bias WGP-based inferences. This study extends classical WGP models based on normality, heavy-tailed specifications and variable selection to explicitly account for environmentally-driven residual heteroskedasticity under a hierarchical Bayesian mixed-models framework. WGP models assuming homogeneous or heterogeneous residual variances were fitted to training data generated under simulation scenarios reflecting a gradient of increasing heteroskedasticity. Model fit was based on pseudo-Bayes factors and also on prediction accuracy of genomic breeding values computed on a validation data subset one generation removed from the simulated training dataset. Homogeneous vs. heterogeneous residual variance WGP models were also fitted to two quantitative traits, namely 45-min postmortem carcass temperature and loin muscle pH, recorded in a swine resource population dataset prescreened for high and mild residual heteroskedasticity, respectively. Fit of competing WGP models was compared using pseudo-Bayes factors. Predictive ability, defined as the correlation between predicted and observed phenotypes in validation sets of a five-fold cross-validation was also computed. Heteroskedastic error WGP models showed improved model fit and enhanced prediction accuracy compared to homoskedastic error WGP models although the magnitude of the improvement was small (less than two percentage points net gain in prediction accuracy). Nevertheless, accounting for residual heteroskedasticity did improve accuracy of selection, especially on individuals of extreme genetic merit. Copyright © 2016 Ou et al.
Entropy-based link prediction in weighted networks
NASA Astrophysics Data System (ADS)
Xu, Zhongqi; Pu, Cunlai; Ramiz Sharafat, Rajput; Li, Lunbo; Yang, Jian
2017-01-01
Information entropy has been proved to be an effective tool to quantify the structural importance of complex networks. In previous work (Xu et al., 2016), we measured the contribution of a path in link prediction with information entropy. In this paper, we further quantify the contribution of a path with both path entropy and path weight, and propose a weighted prediction index based on the contributions of paths, namely Weighted Path Entropy (WPE), to improve the prediction accuracy in weighted networks. Empirical experiments on six weighted real-world networks show that WPE achieves higher prediction accuracy than three typical weighted indices.
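A toy illustration of a path-based score on a weighted graph is given below using networkx; the length-damping term merely stands in for the entropy component of WPE, whose exact definition is given in the cited paper.

    import math
    import networkx as nx

    G = nx.Graph()
    G.add_weighted_edges_from([("a", "b", 2.0), ("b", "c", 1.0),
                               ("a", "d", 1.0), ("d", "c", 3.0), ("b", "d", 1.0)])

    def path_score(G, u, v, cutoff=3):
        """Toy score: mean edge weight of each simple path, damped by path length
        (the damping stands in for the entropy term of WPE)."""
        score = 0.0
        for path in nx.all_simple_paths(G, u, v, cutoff=cutoff):
            w = [G[a][b]["weight"] for a, b in zip(path, path[1:])]
            score += (sum(w) / len(w)) / math.log(1 + len(w))
        return score

    print(path_score(G, "a", "c"))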
Ngo, L; Ho, H; Hunter, P; Quinn, K; Thomson, A; Pearson, G
2016-02-01
Post-mortem measurements (cold weight, grade and external carcass linear dimensions) as well as live animal data (age, breed, sex) were used to predict ovine primal and retail cut weights for 792 lamb carcases. Significant levels of variance could be explained using these predictors. The predictive power of those measurements on primal and retail cut weights was studied by using the results from principal component analysis and the absolute value of the t-statistics of the linear regression model. High prediction accuracy for primal cut weight was achieved (adjusted R(2) up to 0.95), as well as moderate accuracy for key retail cut weight: tenderloins (adj-R(2)=0.60), loin (adj-R(2)=0.62), French rack (adj-R(2)=0.76) and rump (adj-R(2)=0.75). The carcass cold weight had the best predictive power, with the accuracy increasing by around 10% after including the next three most significant variables. Copyright © 2015 Elsevier Ltd. All rights reserved.
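A minimal sketch of the underlying regression-with-adjusted-R² workflow on synthetic carcass data follows; the predictor names and coefficients are invented.

    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(2)
    n = 200
    cold_weight = rng.normal(20, 3, n)        # kg (synthetic)
    leg_length = rng.normal(40, 2, n)         # cm (synthetic)
    loin_weight = 0.12 * cold_weight + 0.02 * leg_length + rng.normal(0, 0.3, n)

    X = np.column_stack([cold_weight, leg_length])
    r2 = LinearRegression().fit(X, loin_weight).score(X, loin_weight)
    p = X.shape[1]
    adj_r2 = 1 - (1 - r2) * (n - 1) / (n - p - 1)   # adjusted R^2
    print(round(adj_r2, 3))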
Mancuso, Renzo; Osta, Rosario; Navarro, Xavier
2014-12-01
We assessed the predictive value of electrophysiological tests as a marker of clinical disease onset and survival in superoxide-dismutase 1 (SOD1)(G93A) mice. We evaluated the accuracy of electrophysiological tests in differentiating transgenic versus wild-type mice. We made a correlation analysis of electrophysiological parameters and the onset of symptoms, survival, and number of spinal motoneurons. Presymptomatic electrophysiological tests show great accuracy in differentiating transgenic versus wild-type mice, with the most sensitive parameter being the tibialis anterior compound muscle action potential (CMAP) amplitude. The CMAP amplitude at age 10 weeks correlated significantly with clinical disease onset and survival. Electrophysiological tests increased their survival prediction accuracy when evaluated at later stages of the disease and also predicted the amount of lumbar spinal motoneuron preservation. Electrophysiological tests predict clinical disease onset, survival, and spinal motoneuron preservation in SOD1(G93A) mice. This is a methodological improvement for preclinical studies. © 2014 Wiley Periodicals, Inc.
Accuracy Analysis of a Box-wing Theoretical SRP Model
NASA Astrophysics Data System (ADS)
Wang, Xiaoya; Hu, Xiaogong; Zhao, Qunhe; Guo, Rui
2016-07-01
For the Beidou satellite navigation system (BDS), a high-accuracy SRP model is necessary for high-precision applications, especially with the establishment of the global BDS in the future, and the BDS accuracy of the broadcast ephemeris needs to be improved. Therefore, a box-wing theoretical SRP model with a fine structure, including the conical shadow factors of the Earth and the Moon, was established. We verified this SRP model with the GPS Block IIF satellites. The calculation was done with the data of the PRN 1, 24, 25, and 27 satellites. The results show that the physical SRP model for POD and forecast of the GPS IIF satellites has higher accuracy with respect to the Bern empirical model. The 3D-RMS of the orbit is about 20 centimeters. The POD accuracy for both models is similar, but the prediction accuracy with the physical SRP model is more than doubled. We tested 1-day, 3-day and 7-day orbit predictions; the longer the prediction arc length, the more significant the improvement. The orbit prediction accuracies with the physical SRP model for 1-day, 3-day and 7-day arc lengths are 0.4 m, 2.0 m and 10.0 m, respectively, compared with 0.9 m, 5.5 m and 30 m with the Bern empirical model. We applied this approach to the BDS and derived an SRP model for the Beidou satellites. We then tested and verified the model with one month of Beidou data. Initial results show that the model is good but needs more data for verification and improvement. The orbit residual RMS is similar to that obtained with our empirical force model, which only estimates the forces in the along-track and across-track directions and the y-bias, but the orbit overlap and SLR observation evaluations show some improvement. The remaining empirical force is reduced significantly for the present Beidou constellation.
Chow, Benjamin J W; Freeman, Michael R; Bowen, James M; Levin, Leslie; Hopkins, Robert B; Provost, Yves; Tarride, Jean-Eric; Dennie, Carole; Cohen, Eric A; Marcuzzi, Dan; Iwanochko, Robert; Moody, Alan R; Paul, Narinder; Parker, John D; O'Reilly, Daria J; Xie, Feng; Goeree, Ron
2011-06-13
Computed tomographic coronary angiography (CTCA) has gained clinical acceptance for the detection of obstructive coronary artery disease. Although single-center studies have demonstrated excellent accuracy, multicenter studies have yielded variable results. The true diagnostic accuracy of CTCA in the "real world" remains uncertain. We conducted a field evaluation comparing multidetector CTCA with invasive CA (ICA) to understand CTCA's diagnostic accuracy in a real-world setting. A multicenter cohort study of patients awaiting ICA was conducted between September 2006 and June 2009. All patients had either a low or an intermediate pretest probability for coronary artery disease and underwent CTCA and ICA within 10 days. The results of CTCA and ICA were interpreted visually by local expert observers who were blinded to all clinical data and imaging results. Using a patient-based analysis (diameter stenosis ≥50%) of 169 patients, the sensitivity, specificity, positive predictive value, and negative predictive value were 81.3% (95% confidence interval [CI], 71.0%-89.1%), 93.3% (95% CI, 85.9%-97.5%), 91.6% (95% CI, 82.5%-96.8%), and 84.7% (95% CI, 76.0%-91.2%), respectively; the area under receiver operating characteristic curve was 0.873. The diagnostic accuracy varied across centers (P < .001), with a sensitivity, specificity, positive predictive value, and negative predictive value ranging from 50.0% to 93.2%, 92.0% to 100%, 84.6% to 100%, and 42.9% to 94.7%, respectively. Compared with ICA, CTCA appears to have good accuracy; however, there was variability in diagnostic accuracy across centers. Factors affecting institutional variability need to be better understood before CTCA is universally adopted. Additional real-world evaluations are needed to fully understand the impact of CTCA on clinical care. clinicaltrials.gov Identifier: NCT00371891.
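The patient-level accuracy measures quoted above follow directly from a 2x2 confusion table, as the short helper below illustrates with made-up counts (not the study's data).

    def diagnostic_metrics(tp, fp, fn, tn):
        """Sensitivity, specificity, PPV and NPV from a 2x2 confusion table."""
        return {
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "ppv": tp / (tp + fp),
            "npv": tn / (tn + fn),
        }

    # Illustrative counts only (not the 169-patient cohort reported above).
    print(diagnostic_metrics(tp=65, fp=6, fn=15, tn=83))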
Numerical simulation on chain-die forming of an AHSS top-hat section
NASA Astrophysics Data System (ADS)
Majji, Raju; Xiang, Yang; Ding, Scott; Yang, Chunhui
2018-05-01
The applications of Advanced High-Strength Steels (AHSS) in the automotive industry are rapidly increasing due to a demand for a lightweight material that significantly reduces fuel consumption without compromising passenger safety. Automotive industries and material suppliers are expected by consumers to deliver reliable and affordable products, thus stimulating these manufacturers to research solutions to meet these customer requirements. The primary advantage of AHSS is its extremely high strength to weight ratio, an ideal material for the automotive industry. However, its low ductility is a major disadvantage, in particular, when using traditional cold forming processes such as roll forming and deep drawing process to form profiles. Consequently, AHSS parts frequently fail to form. Thereby, in order to improve quality and reliability on manufacturing AHSS products, a recently-developed incremental cold sheet metal forming technology called Chain-die Forming (CDF) is recognised as a potential solution to the forming process of AHSS. The typical CDF process is a combination of bending and roll forming processes which is equivalent to a roll with a large deforming radius, and incrementally forms the desired shape with split die and segments. This study focuses on manufacturing an AHSS top-hat section with minimum passes without geometrical or surface defects by using finite element modelling and simulations. The developed numerical simulation is employed to investigate the influences on the main control parameter of the CDF process while forming AHSS products and further develop new die-punch sets of compensation design via a numerical optimal process. In addition, the study focuses on the tool design to compensate spring-back and reduce friction between tooling and sheet-metal. This reduces the number of passes, thereby improving productivity and reducing energy consumption and material waste. This numerical study reveals that CDF forms AHSS products of complex profiles with much less residual stress, low spring back, low strain and of higher geometrical accuracy compared to other traditional manufacturing processes.
2000-06-30
Leaving billowing clouds of steam and smoke behind, NASA’s Tracking and Data Relay Satellite (TDRS-H) shoots into the blue sky aboard an Atlas IIA/Centaur rocket from Pad 36A, Cape Canaveral Air Force Station. Liftoff occurred at 8:56 a.m. EDT. One of three satellites (labeled H, I and J) being built by the Hughes Space and Communications Company, the latest TDRS uses an innovative springback antenna design. A pair of 15-foot-diameter, flexible mesh antenna reflectors fold up for launch, then spring back into their original cupped circular shape on orbit. The new satellites will augment the TDRS system’s existing S- and Ku-band frequencies by adding Ka-band capability. TDRS will serve as the sole means of continuous, high-data-rate communication with the space shuttle, with the International Space Station upon its completion, and with dozens of unmanned scientific satellites in low earth orbit
2000-06-01
Workers in KSC’s Spacecraft Assembly and Encapsulation Facility (SAEF-2) prepare the Tracking and Data Relay Satellite (TDRS-H) above them for electrical testing. The TDRS is scheduled to be launched from CCAFS June 29 aboard an Atlas IIA/Centaur rocket. One of three satellites (labeled H, I and J) being built in the Hughes Space and Communications Company Integrated Satellite Factory in El Segundo, Calif., the latest TDRS uses an innovative springback antenna design. A pair of 15-foot-diameter, flexible mesh antenna reflectors fold up for launch, then spring back into their original cupped circular shape on orbit. The new satellites will augment the TDRS system’s existing S- and Ku-band frequencies by adding Ka-band capability. TDRS will serve as the sole means of continuous, high-data-rate communication with the space shuttle, with the International Space Station upon its completion, and with dozens of unmanned scientific satellites in low earth orbit
2000-06-01
Workers in KSC’s Spacecraft Assembly and Encapsulation Facility (SAEF-2) conduct electrical testing on the Tracking and Data Relay Satellite (TDRS-H) above them. The TDRS is scheduled to be launched from CCAFS June 29 aboard an Atlas IIA/Centaur rocket. One of three satellites (labeled H, I and J) being built in the Hughes Space and Communications Company Integrated Satellite Factory in El Segundo, Calif., the latest TDRS uses an innovative springback antenna design. A pair of 15-foot-diameter, flexible mesh antenna reflectors fold up for launch, then spring back into their original cupped circular shape on orbit. The new satellites will augment the TDRS system’s existing S- and Ku-band frequencies by adding Ka-band capability. TDRS will serve as the sole means of continuous, high-data-rate communication with the space shuttle, with the International Space Station upon its completion, and with dozens of unmanned scientific satellites in low earth orbit
2000-06-01
The Tracking and Data Relay Satellite (TDRS-H) sits on a workstand in KSC’s Spacecraft Assembly and Encapsulation Facility (SAEF-2) in order to undergo electrical testing. The TDRS is scheduled to be launched from CCAFS June 29 aboard an Atlas IIA/Centaur rocket. One of three satellites (labeled H, I and J) being built in the Hughes Space and Communications Company Integrated Satellite Factory in El Segundo, Calif., the latest TDRS uses an innovative springback antenna design. A pair of 15-foot-diameter, flexible mesh antenna reflectors fold up for launch, then spring back into their original cupped circular shape on orbit. The new satellites will augment the TDRS system’s existing S- and Ku-band frequencies by adding Ka-band capability. TDRS will serve as the sole means of continuous, high-data-rate communication with the space shuttle, with the International Space Station upon its completion, and with dozens of unmanned scientific satellites in low earth orbit
2000-06-30
Looking like a Roman candle, NASA’s Tracking and Data Relay Satellite (TDRS-H) shoots into the blue sky aboard an Atlas IIA/Centaur rocket from Pad 36A, Cape Canaveral Air Force Station. Liftoff occurred at 8:56 a.m. EDT. One of three satellites (labeled H, I and J) being built by the Hughes Space and Communications Company, the latest TDRS uses an innovative springback antenna design. A pair of 15-foot-diameter, flexible mesh antenna reflectors fold up for launch, then spring back into their original cupped circular shape on orbit. The new satellites will augment the TDRS system’s existing S- and Ku-band frequencies by adding Ka-band capability. TDRS will serve as the sole means of continuous, high-data-rate communication with the space shuttle, with the International Space Station upon its completion, and with dozens of unmanned scientific satellites in low earth orbit
2000-06-13
The Tracking and Data Relay Satellite (TDRS-H) sits fully encapsulated inside the fairing. Next, it will be transported to Launch Pad 36A, Cape Canaveral Air Force Station for launch scheduled June 29 aboard an Atlas IIA/Centaur rocket. One of three satellites (labeled H, I and J) being built in the Hughes Space and Communications Company Integrated Satellite Factory in El Segundo, Calif., the latest TDRS uses an innovative springback antenna design. A pair of 15-foot-diameter, flexible mesh antenna reflectors fold up for launch, then spring back into their original cupped circular shape on orbit. The new satellites will augment the TDRS system’s existing S- and Ku-band frequencies by adding Ka-band capability. TDRS will serve as the sole means of continuous, high-data-rate communication with the space shuttle, with the International Space Station upon its completion, and with dozens of unmanned scientific satellites in low earth orbit
NASA Technical Reports Server (NTRS)
Goodyear, M. D.
1987-01-01
NASA sponsored the Aircraft Energy Efficiency (ACEE) program in 1976 to develop technologies to improve fuel efficiency. Laminar flow control was one such technology. Two approaches for achieving laminar flow were designed and manufactured under NASA-sponsored programs: the perforated skin concept used at McDonnell Douglas and the slotted design used at Lockheed-Georgia. Both achieved laminar flow, with the slotted design to a lesser degree (JetStar flight test program). The latter design had several fabrication problems concerning springback and adhesive flow clogging the air flow passages. The Lockheed-Georgia Company's accomplishments are documented in designing and fabricating a small section of a leading-edge article, addressing a simpler fabrication method to overcome the previous program's manufacturing problems, i.e., design and fabrication using advanced technologies such as diffusion bonding of aluminum, which has not been used on aerospace structures to date, and superplastic forming of aluminum.
Application technologies for effective utilization of advanced high strength steel sheets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Suehiro, Masayoshi, E-mail: suehiro.kp5.masayoshi@jp.nssmc.com
Recently, the application of high-strength steel sheets in automobiles has increased to meet the demand for lightweighting, which reduces the carbon footprint while satisfying collision safety requirements. The formability of steel sheets generally decreases as strength increases; fracture and wrinkles tend to occur easily during forming. The springback phenomenon is also an issue that must be addressed, because it makes it difficult to obtain the desired shape after forming. Advanced high-strength steel sheets with high formability have been developed to overcome these issues, and at the same time application technologies have been developed for their effective utilization. These sheets are normally used for cold forming. As a different type of forming, the hot forming technique has been developed to produce parts with ultra-high strength. In this report, technologies developed at NSSMC in this field are introduced.
Experimental evaluation of radiosity for room sound-field prediction.
Hodgson, Murray; Nosal, Eva-Marie
2006-08-01
An acoustical radiosity model was evaluated for how it performs in predicting real room sound fields. This was done by comparing radiosity predictions with experimental results for three existing rooms--a squash court, a classroom, and an office. Radiosity predictions were also compared with those by ray tracing--a "reference" prediction model--for both specular and diffuse surface reflection. Comparisons were made for detailed and discretized echograms, sound-decay curves, sound-propagation curves, and the variations with frequency of four room-acoustical parameters--EDT, RT, D50, and C80. In general, radiosity and diffuse ray tracing gave very similar predictions. Predictions by specular ray tracing were often very different. Radiosity agreed well with experiment in some cases, less well in others. Definitive conclusions regarding the accuracy with which the rooms were modeled, or the accuracy of the radiosity approach, were difficult to draw. The results suggest that radiosity predicts room sound fields with some accuracy, at least as well as diffuse ray tracing and, in general, better than specular ray tracing. The predictions of detailed echograms are less accurate, those of derived room-acoustical parameters more accurate. The results underline the need to develop experimental methods for accurately characterizing the absorptive and reflective characteristics of room surfaces, possibly including phase.
Kurgan, Lukasz; Cios, Krzysztof; Chen, Ke
2008-05-01
Protein structure prediction methods provide accurate results when a homologous protein is predicted, while poorer predictions are obtained in the absence of homologous templates. However, some protein chains that share twilight-zone pairwise identity can form similar folds and thus determining structural similarity without the sequence similarity would be desirable for the structure prediction. The folding type of a protein or its domain is defined as the structural class. Current structural class prediction methods that predict the four structural classes defined in SCOP provide up to 63% accuracy for the datasets in which sequence identity of any pair of sequences belongs to the twilight-zone. We propose SCPRED method that improves prediction accuracy for sequences that share twilight-zone pairwise similarity with sequences used for the prediction. SCPRED uses a support vector machine classifier that takes several custom-designed features as its input to predict the structural classes. Based on extensive design that considers over 2300 index-, composition- and physicochemical properties-based features along with features based on the predicted secondary structure and content, the classifier's input includes 8 features based on information extracted from the secondary structure predicted with PSI-PRED and one feature computed from the sequence. Tests performed with datasets of 1673 protein chains, in which any pair of sequences shares twilight-zone similarity, show that SCPRED obtains 80.3% accuracy when predicting the four SCOP-defined structural classes, which is superior when compared with over a dozen recent competing methods that are based on support vector machine, logistic regression, and ensemble of classifiers predictors. The SCPRED can accurately find similar structures for sequences that share low identity with sequence used for the prediction. The high predictive accuracy achieved by SCPRED is attributed to the design of the features, which are capable of separating the structural classes in spite of their low dimensionality. We also demonstrate that the SCPRED's predictions can be successfully used as a post-processing filter to improve performance of modern fold classification methods.
Kurgan, Lukasz; Cios, Krzysztof; Chen, Ke
2008-01-01
Background Protein structure prediction methods provide accurate results when a homologous protein is predicted, while poorer predictions are obtained in the absence of homologous templates. However, some protein chains that share twilight-zone pairwise identity can form similar folds and thus determining structural similarity without the sequence similarity would be desirable for the structure prediction. The folding type of a protein or its domain is defined as the structural class. Current structural class prediction methods that predict the four structural classes defined in SCOP provide up to 63% accuracy for the datasets in which sequence identity of any pair of sequences belongs to the twilight-zone. We propose SCPRED method that improves prediction accuracy for sequences that share twilight-zone pairwise similarity with sequences used for the prediction. Results SCPRED uses a support vector machine classifier that takes several custom-designed features as its input to predict the structural classes. Based on extensive design that considers over 2300 index-, composition- and physicochemical properties-based features along with features based on the predicted secondary structure and content, the classifier's input includes 8 features based on information extracted from the secondary structure predicted with PSI-PRED and one feature computed from the sequence. Tests performed with datasets of 1673 protein chains, in which any pair of sequences shares twilight-zone similarity, show that SCPRED obtains 80.3% accuracy when predicting the four SCOP-defined structural classes, which is superior when compared with over a dozen recent competing methods that are based on support vector machine, logistic regression, and ensemble of classifiers predictors. Conclusion The SCPRED can accurately find similar structures for sequences that share low identity with sequence used for the prediction. The high predictive accuracy achieved by SCPRED is attributed to the design of the features, which are capable of separating the structural classes in spite of their low dimensionality. We also demonstrate that the SCPRED's predictions can be successfully used as a post-processing filter to improve performance of modern fold classification methods. PMID:18452616
Noaman, Amin Y.; Jamjoom, Arwa; Al-Abdullah, Nabeela; Nasir, Mahreen; Ali, Anser G.
2017-01-01
Prediction of nosocomial infections among patients is an important part of clinical surveillance programs to enable the related personnel to take preventive actions in advance. Designing a clinical surveillance program with capability of predicting nosocomial infections is a challenging task due to several reasons, including high dimensionality of medical data, heterogenous data representation, and special knowledge required to extract patterns for prediction. In this paper, we present details of six data mining methods implemented using cross industry standard process for data mining to predict central line-associated blood stream infections. For our study, we selected datasets of healthcare-associated infections from US National Healthcare Safety Network and consumer survey data from Hospital Consumer Assessment of Healthcare Providers and Systems. Our experiments show that central line-associated blood stream infections (CLABSIs) can be successfully predicted using AdaBoost method with an accuracy up to 89.7%. This will help in implementing effective clinical surveillance programs for infection control, as well as improving the accuracy detection of CLABSIs. Also, this reduces patients' hospital stay cost and maintains patients' safety. PMID:29085836
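A minimal AdaBoost sketch of the kind of classifier described, with cross-validated accuracy on synthetic features, is shown below; the feature set and settings are assumptions, not those of the study.

    import numpy as np
    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(3)
    X = rng.normal(size=(500, 12))          # e.g., device days, unit type, survey scores (synthetic)
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=500) > 1).astype(int)

    model = AdaBoostClassifier(n_estimators=200, random_state=0)
    print(cross_val_score(model, X, y, cv=5, scoring="accuracy").mean())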
Analysis of model development strategies: predicting ventral hernia recurrence.
Holihan, Julie L; Li, Linda T; Askenasy, Erik P; Greenberg, Jacob A; Keith, Jerrod N; Martindale, Robert G; Roth, J Scott; Liang, Mike K
2016-11-01
There have been many attempts to identify variables associated with ventral hernia recurrence; however, it is unclear which statistical modeling approach results in models with greatest internal and external validity. We aim to assess the predictive accuracy of models developed using five common variable selection strategies to determine variables associated with hernia recurrence. Two multicenter ventral hernia databases were used. Database 1 was randomly split into "development" and "internal validation" cohorts. Database 2 was designated "external validation". The dependent variable for model development was hernia recurrence. Five variable selection strategies were used: (1) "clinical"-variables considered clinically relevant, (2) "selective stepwise"-all variables with a P value <0.20 were assessed in a step-backward model, (3) "liberal stepwise"-all variables were included and step-backward regression was performed, (4) "restrictive internal resampling," and (5) "liberal internal resampling." Variables were included with P < 0.05 for the Restrictive model and P < 0.10 for the Liberal model. A time-to-event analysis using Cox regression was performed using these strategies. The predictive accuracy of the developed models was tested on the internal and external validation cohorts using Harrell's C-statistic where C > 0.70 was considered "reasonable". The recurrence rate was 32.9% (n = 173/526; median/range follow-up, 20/1-58 mo) for the development cohort, 36.0% (n = 95/264, median/range follow-up 20/1-61 mo) for the internal validation cohort, and 12.7% (n = 155/1224, median/range follow-up 9/1-50 mo) for the external validation cohort. Internal validation demonstrated reasonable predictive accuracy (C-statistics = 0.772, 0.760, 0.767, 0.757, 0.763), while on external validation, predictive accuracy dipped precipitously (C-statistic = 0.561, 0.557, 0.562, 0.553, 0.560). Predictive accuracy was equally adequate on internal validation among models; however, on external validation, all five models failed to demonstrate utility. Future studies should report multiple variable selection techniques and demonstrate predictive accuracy on external data sets for model validation. Copyright © 2016 Elsevier Inc. All rights reserved.
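A sketch of the core workflow, fitting a Cox model on a development cohort and checking Harrell's C on a separate cohort, is shown below using the lifelines package on synthetic data; the covariates and follow-up times are hypothetical, and none of the five variable selection strategies is reproduced.

    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter
    from lifelines.utils import concordance_index

    rng = np.random.default_rng(4)
    def make_cohort(n):
        df = pd.DataFrame({"bmi": rng.normal(30, 5, n),
                           "wound_class": rng.integers(0, 2, n)})
        risk = 0.05 * df["bmi"] + 0.8 * df["wound_class"]
        df["time"] = rng.exponential(24 / np.exp(risk - risk.mean()))   # months (synthetic)
        df["recurrence"] = rng.integers(0, 2, n)                        # event indicator (synthetic)
        return df

    dev, ext = make_cohort(500), make_cohort(400)
    cph = CoxPHFitter().fit(dev, duration_col="time", event_col="recurrence")
    # Higher partial hazard means higher risk, so negate it for the C-index.
    c_ext = concordance_index(ext["time"], -cph.predict_partial_hazard(ext), ext["recurrence"])
    print("external C-statistic:", round(c_ext, 3))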
High accuracy prediction of beta-turns and their types using propensities and multiple alignments.
Fuchs, Patrick F J; Alix, Alain J P
2005-06-01
We have developed a method that predicts both the presence and the type of beta-turns, using a straightforward approach based on propensities and multiple alignments. The propensities were calculated classically, but the way to use them for prediction was completely new: starting from a tetrapeptide sequence on which one wants to evaluate the presence of a beta-turn, the propensity for a given residue is modified by taking into account all the residues present in the multiple alignment at this position. The evaluation of a score is then done by weighting these propensities with position-specific scoring matrices generated by PSI-BLAST. The introduction of secondary structure information predicted by PSIPRED or SSPRO2, as well as taking into account the flanking residues around the tetrapeptide, improved the accuracy greatly. The latter, evaluated on a database of 426 reference proteins (previously used in other studies) by sevenfold cross-validation, gave very good results with a Matthews Correlation Coefficient (MCC) of 0.42 and an overall prediction accuracy of 74.8%; this places our method among the best ones. A jackknife test was also done, which gave results within the same range. This shows that it is possible to reach neural-network accuracy with considerably less computational cost and complexity. Furthermore, propensities remain excellent descriptors of amino acid tendencies to belong to beta-turns, which can be useful for peptide or protein engineering and design. For beta-turn type prediction, we reached the best accuracy ever published in terms of MCC (except for the irregular type IV) in the range of 0.25-0.30 for types I, II, and I' and 0.13-0.15 for types VIII, II', and IV. To our knowledge, our method is the only one available on the Web that predicts types I' and II'. The accuracy evaluated on two larger databases of 547 and 823 proteins was not improved significantly. All of this was implemented in a Web server called COUDES (French acronym for: Chercher Ou Une Deviation Existe Surement), which is available at the following URL: http://bioserv.rpbs.jussieu.fr/Coudes/index.html within the new bioinformatics platform RPBS.
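A toy version of propensity-based scoring over sliding tetrapeptide windows is sketched below; the propensity values and threshold are invented, and the real method additionally weights the propensities with PSI-BLAST PSSMs derived from multiple alignments.

    # Toy propensity table: relative tendency of residues to occur in beta-turns.
    # Values are invented for illustration only.
    turn_propensity = {"G": 1.6, "P": 1.5, "N": 1.5, "D": 1.4, "S": 1.3,
                       "A": 0.9, "L": 0.6, "V": 0.5, "I": 0.5, "F": 0.6}

    def score_tetrapeptides(seq, threshold=4.5):
        """Score each 4-residue window and report those above an assumed threshold."""
        hits = []
        for i in range(len(seq) - 3):
            window = seq[i:i + 4]
            score = sum(turn_propensity.get(res, 1.0) for res in window)
            if score >= threshold:
                hits.append((i, window, round(score, 2)))
        return hits

    print(score_tetrapeptides("AVGPNGSDLLFVI"))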
2011-01-01
Background Existing methods of predicting DNA-binding proteins used valuable features of physicochemical properties to design support vector machine (SVM) based classifiers. Generally, selection of physicochemical properties and determination of their corresponding feature vectors rely mainly on known properties of binding mechanism and experience of designers. However, there exists a troublesome problem for designers that some different physicochemical properties have similar vectors of representing 20 amino acids and some closely related physicochemical properties have dissimilar vectors. Results This study proposes a systematic approach (named Auto-IDPCPs) to automatically identify a set of physicochemical and biochemical properties in the AAindex database to design SVM-based classifiers for predicting and analyzing DNA-binding domains/proteins. Auto-IDPCPs consists of 1) clustering 531 amino acid indices in AAindex into 20 clusters using a fuzzy c-means algorithm, 2) utilizing an efficient genetic algorithm based optimization method IBCGA to select an informative feature set of size m to represent sequences, and 3) analyzing the selected features to identify related physicochemical properties which may affect the binding mechanism of DNA-binding domains/proteins. The proposed Auto-IDPCPs identified m=22 features of properties belonging to five clusters for predicting DNA-binding domains with a five-fold cross-validation accuracy of 87.12%, which is promising compared with the accuracy of 86.62% of the existing method PSSM-400. For predicting DNA-binding sequences, the accuracy of 75.50% was obtained using m=28 features, where PSSM-400 has an accuracy of 74.22%. Auto-IDPCPs and PSSM-400 have accuracies of 80.73% and 82.81%, respectively, applied to an independent test data set of DNA-binding domains. Some typical physicochemical properties discovered are hydrophobicity, secondary structure, charge, solvent accessibility, polarity, flexibility, normalized Van Der Waals volume, pK (pK-C, pK-N, pK-COOH and pK-a(RCOOH)), etc. Conclusions The proposed approach Auto-IDPCPs would help designers to investigate informative physicochemical and biochemical properties by considering both prediction accuracy and analysis of binding mechanism simultaneously. The approach Auto-IDPCPs can be also applicable to predict and analyze other protein functions from sequences. PMID:21342579
Jeong, Jae Yoon; Kim, Tae Yeob; Sohn, Joo Hyun; Kim, Yongsoo; Jeong, Woo Kyoung; Oh, Young-Ha; Yoo, Kyo-Sang
2014-01-01
AIM: To evaluate the correlation between liver stiffness measurement (LSM) by real-time shear wave elastography (SWE) and liver fibrosis stage and the accuracy of LSM for predicting significant and advanced fibrosis, in comparison with serum markers. METHODS: We consecutively analyzed 70 patients with various chronic liver diseases. Liver fibrosis was staged from F0 to F4 according to the Batts and Ludwig scoring system. Significant and advanced fibrosis was defined as stage F ≥ 2 and F ≥ 3, respectively. The accuracy of prediction for fibrosis was analyzed using receiver operating characteristic curves. RESULTS: Of the 70 patients, 15 belonged to stage F0-F1, 20 to F2, 13 to F3, and 22 to F4. LSM increased with progression of fibrosis stage (F0-F1: 6.77 ± 1.72, F2: 9.98 ± 3.99, F3: 15.80 ± 7.73, and F4: 22.09 ± 10.09 kPa, P < 0.001). Diagnostic accuracies of LSM for prediction of F ≥ 2 and F ≥ 3 were 0.915 (95%CI: 0.824-0.968, P < 0.001) and 0.913 (95%CI: 0.821-0.967, P < 0.001), respectively. The cut-off values of LSM for prediction of F ≥ 2 and F ≥ 3 were 8.6 kPa with 78.2% sensitivity and 93.3% specificity and 10.46 kPa with 88.6% sensitivity and 80.0% specificity, respectively. However, there were no significant differences between LSM and serum hyaluronic acid and type IV collagen in diagnostic accuracy. CONCLUSION: SWE showed a significant correlation with the severity of liver fibrosis and was useful and accurate in predicting significant and advanced fibrosis, comparable with serum markers. PMID:25320528
Jeong, Jae Yoon; Kim, Tae Yeob; Sohn, Joo Hyun; Kim, Yongsoo; Jeong, Woo Kyoung; Oh, Young-Ha; Yoo, Kyo-Sang
2014-10-14
To evaluate the correlation between liver stiffness measurement (LSM) by real-time shear wave elastography (SWE) and liver fibrosis stage and the accuracy of LSM for predicting significant and advanced fibrosis, in comparison with serum markers. We consecutively analyzed 70 patients with various chronic liver diseases. Liver fibrosis was staged from F0 to F4 according to the Batts and Ludwig scoring system. Significant and advanced fibrosis was defined as stage F ≥ 2 and F ≥ 3, respectively. The accuracy of prediction for fibrosis was analyzed using receiver operating characteristic curves. Of the 70 patients, 15 belonged to stage F0-F1, 20 to F2, 13 to F3, and 22 to F4. LSM increased with progression of fibrosis stage (F0-F1: 6.77 ± 1.72, F2: 9.98 ± 3.99, F3: 15.80 ± 7.73, and F4: 22.09 ± 10.09 kPa, P < 0.001). Diagnostic accuracies of LSM for prediction of F ≥ 2 and F ≥ 3 were 0.915 (95%CI: 0.824-0.968, P < 0.001) and 0.913 (95%CI: 0.821-0.967, P < 0.001), respectively. The cut-off values of LSM for prediction of F ≥ 2 and F ≥ 3 were 8.6 kPa with 78.2% sensitivity and 93.3% specificity and 10.46 kPa with 88.6% sensitivity and 80.0% specificity, respectively. However, there were no significant differences between LSM and serum hyaluronic acid and type IV collagen in diagnostic accuracy. SWE showed a significant correlation with the severity of liver fibrosis and was useful and accurate in predicting significant and advanced fibrosis, comparable with serum markers.
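The ROC analysis described above (area under the curve plus a cut-off chosen, for example, by the Youden index) can be sketched as follows on synthetic stiffness values; the numbers are not the study's data.

    import numpy as np
    from sklearn.metrics import roc_curve, roc_auc_score

    rng = np.random.default_rng(5)
    fibrosis = rng.integers(0, 2, 70)                     # 1 = significant fibrosis (synthetic)
    lsm_kpa = np.where(fibrosis == 1,
                       rng.normal(15, 6, 70), rng.normal(7, 2, 70))

    auc = roc_auc_score(fibrosis, lsm_kpa)
    fpr, tpr, thr = roc_curve(fibrosis, lsm_kpa)
    best = np.argmax(tpr - fpr)                           # Youden index
    print("AUC:", round(auc, 3), "cut-off:", round(thr[best], 2), "kPa")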
Rodríguez-Wong, Laura; Noguera-González, Danny; Esparza-Villalpando, Vicente; Montero-Aguilar, Mauricio
2017-01-01
Introduction The inferior alveolar nerve block (IANB) is the most common anesthetic technique used on mandibular teeth during root canal treatment. Its success in the presence of preoperative inflammation is still controversial. The aim of this study was to evaluate the sensitivity, specificity, predictive values, and accuracy of three diagnostic tests used to predict IANB failure in symptomatic irreversible pulpitis (SIP). Methodology A cross-sectional study was carried out on the mandibular molars of 53 patients with SIP. All patients received a single cartridge of mepivacaine 2% with 1 : 100000 epinephrine using the IANB technique. Three diagnostic clinical tests were performed to detect anesthetic failure. Anesthetic failure was defined as a positive painful response to any of the three tests. Sensitivity, specificity, predictive values, accuracy, and ROC curves were calculated and compared and significant differences were analyzed. Results IANB failure was determined in 71.7% of the patients. The sensitivity scores for the three tests (lip numbness, the cold stimuli test, and responsiveness during endodontic access) were 0.03, 0.35, and 0.55, respectively, and the specificity score was determined as 1 for all of the tests. Clinically, none of the evaluated tests demonstrated a high enough accuracy (0.30, 0.53, and 0.68 for lip numbness, the cold stimuli test, and responsiveness during endodontic access, resp.). A comparison of the areas under the curve in the ROC analyses showed statistically significant differences between the three tests (p < 0.05). Conclusion None of the analyzed tests demonstrated a high enough accuracy to be considered a reliable diagnostic tool for the prediction of anesthetic failure. PMID:28694714
Genomic-Enabled Prediction Kernel Models with Random Intercepts for Multi-environment Trials.
Cuevas, Jaime; Granato, Italo; Fritsche-Neto, Roberto; Montesinos-Lopez, Osval A; Burgueño, Juan; Bandeira E Sousa, Massaine; Crossa, José
2018-03-28
In this study, we compared the prediction accuracy of the main genotypic effect model (MM) without G×E interactions, the multi-environment single variance G×E deviation model (MDs), and the multi-environment environment-specific variance G×E deviation model (MDe) where the random genetic effects of the lines are modeled with the markers (or pedigree). With the objective of further modeling the genetic residual of the lines, we incorporated the random intercepts of the lines (l) and generated another three models. Each of these 6 models was fitted with a linear kernel method (Genomic Best Linear Unbiased Predictor, GB) and a Gaussian Kernel (GK) method. We compared these 12 model-method combinations with another two multi-environment G×E interaction models with unstructured variance-covariances (MUC) using GB and GK kernels (4 model-method combinations). Thus, we compared the genomic-enabled prediction accuracy of a total of 16 model-method combinations on two maize data sets with positive phenotypic correlations among environments, and on two wheat data sets with complex G×E that includes some negative and close to zero phenotypic correlations among environments. The two models (MDs and MDe with the random intercept of the lines and the GK method) were computationally efficient and gave high prediction accuracy in the two maize data sets. Regarding the more complex G×E wheat data sets, the model-method combinations with G×E (MDs and MDe, including the random intercepts of the lines, with the GK method) gave important savings in computing time as compared with the G×E interaction multi-environment models with unstructured variance-covariances, but with lower genomic prediction accuracy. Copyright © 2018 Cuevas et al.
Genomic-Enabled Prediction Kernel Models with Random Intercepts for Multi-environment Trials
Cuevas, Jaime; Granato, Italo; Fritsche-Neto, Roberto; Montesinos-Lopez, Osval A.; Burgueño, Juan; Bandeira e Sousa, Massaine; Crossa, José
2018-01-01
In this study, we compared the prediction accuracy of the main genotypic effect model (MM) without G×E interactions, the multi-environment single variance G×E deviation model (MDs), and the multi-environment environment-specific variance G×E deviation model (MDe) where the random genetic effects of the lines are modeled with the markers (or pedigree). With the objective of further modeling the genetic residual of the lines, we incorporated the random intercepts of the lines (l) and generated another three models. Each of these 6 models was fitted with a linear kernel method (Genomic Best Linear Unbiased Predictor, GB) and a Gaussian Kernel (GK) method. We compared these 12 model-method combinations with another two multi-environment G×E interaction models with unstructured variance-covariances (MUC) using GB and GK kernels (4 model-method combinations). Thus, we compared the genomic-enabled prediction accuracy of a total of 16 model-method combinations on two maize data sets with positive phenotypic correlations among environments, and on two wheat data sets with complex G×E that includes some negative and close to zero phenotypic correlations among environments. The two models (MDs and MDe with the random intercept of the lines and the GK method) were computationally efficient and gave high prediction accuracy in the two maize data sets. Regarding the more complex G×E wheat data sets, the model-method combinations with G×E (MDs and MDe, including the random intercepts of the lines, with the GK method) gave important savings in computing time as compared with the G×E interaction multi-environment models with unstructured variance-covariances, but with lower genomic prediction accuracy. PMID:29476023
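A minimal sketch of the two kernels compared here, a linear GBLUP-type kernel (GB) and a Gaussian kernel (GK), used inside kernel ridge regression on synthetic markers is given below; the bandwidth choice and single-environment setting are simplifying assumptions, and the G×E structure and random intercepts of the full models are not shown.

    import numpy as np
    from sklearn.kernel_ridge import KernelRidge

    rng = np.random.default_rng(6)
    M = rng.integers(0, 3, size=(300, 1000)).astype(float)   # marker matrix coded 0/1/2 (synthetic)
    y = M[:, :20] @ rng.normal(size=20) + rng.normal(size=300)

    Z = (M - M.mean(0)) / (M.std(0) + 1e-9)
    GB = Z @ Z.T / Z.shape[1]                        # linear, GBLUP-type genomic kernel
    sq = (Z ** 2).sum(1)
    D = sq[:, None] + sq[None, :] - 2 * Z @ Z.T      # squared Euclidean distances between lines
    GK = np.exp(-D / np.median(D))                   # Gaussian kernel; median-distance bandwidth (assumed)

    train, test = np.arange(240), np.arange(240, 300)
    for name, K in (("GB", GB), ("GK", GK)):
        model = KernelRidge(alpha=1.0, kernel="precomputed").fit(K[np.ix_(train, train)], y[train])
        pred = model.predict(K[np.ix_(test, train)])
        print(name, "predictive correlation:", round(float(np.corrcoef(pred, y[test])[0, 1]), 3))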
[Comparison of three stand-level biomass estimation methods].
Dong, Li Hu; Li, Feng Ri
2016-12-01
At present, forest biomass estimation at the regional scale attracts considerable attention from researchers, and developing stand-level biomass models is a popular approach. Based on forest inventory data for larch (Larix olgensis) plantations in Jilin Province, we used nonlinear seemingly unrelated regression (NSUR) to estimate the parameters of two additive systems of stand-level biomass equations, i.e., stand-level biomass equations including stand variables and stand biomass equations including the biomass expansion factor (Model system 1 and Model system 2), listed the constant biomass expansion factor for larch plantations, and compared the prediction accuracy of the three stand-level biomass estimation methods. The results indicated that for the two additive systems of biomass equations, the adjusted coefficient of determination (Ra²) of the total and stem equations was greater than 0.95, and the root mean squared error (RMSE), mean prediction error (MPE) and mean absolute error (MAE) were small. The branch and foliage biomass equations performed worse than the total and stem biomass equations, with an adjusted coefficient of determination (Ra²) below 0.95. The prediction accuracy of the constant biomass expansion factor was lower than that of Model system 1 and Model system 2. Overall, although the stand-level biomass equation including the biomass expansion factor belongs to the volume-derived biomass estimation methods and differs in essence from the stand biomass equations including stand variables, the prediction accuracies obtained by the two methods were similar. The constant biomass expansion factor had the lowest prediction accuracy and is therefore inappropriate. In addition, to make parameter estimation more effective, the established stand-level biomass equations should ensure additivity in a system of all tree component biomass and total biomass equations.
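To make the contrast between the two estimation routes concrete, the sketch below computes stand biomass once from a constant biomass expansion factor applied to stand volume and once from a simple allometric function of stand variables; all parameter values (bef, a, b, c) are hypothetical placeholders, not the coefficients fitted in the study.

```python
# Illustrative only: hypothetical parameter values, not those estimated with NSUR in the study.
def biomass_from_bef(stand_volume_m3, bef=0.65):
    """Volume-derived method: stand biomass (t/ha) = biomass expansion factor x stand volume."""
    return bef * stand_volume_m3

def biomass_from_stand_variables(basal_area, mean_height, a=0.6, b=0.95, c=0.85):
    """Stand-variable method: a simple allometric form B = a * G^b * H^c (t/ha)."""
    return a * basal_area**b * mean_height**c

print(biomass_from_bef(250.0))                    # from stand volume (m3/ha)
print(biomass_from_stand_variables(30.0, 18.0))   # from basal area (m2/ha) and mean height (m)
```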
Hsu, David
2015-09-27
Clustering methods are often used to model energy consumption for two reasons. First, clustering is often used to process data and to improve the predictive accuracy of subsequent energy models. Second, stable clusters that are reproducible with respect to non-essential changes can be used to group, target, and interpret observed subjects. However, it is well known that clustering methods are highly sensitive to the choice of algorithms and variables. This can lead to misleading assessments of predictive accuracy and misinterpretation of clusters in policymaking. This paper therefore introduces two methods to the modeling of energy consumption in buildings: clusterwise regression, also known as latent class regression, which integrates clustering and regression simultaneously; and cluster validation methods to measure stability. Using a large dataset of multifamily buildings in New York City, clusterwise regression is compared to common two-stage algorithms that use K-means and model-based clustering with linear regression. Predictive accuracy is evaluated using 20-fold cross validation, and the stability of the perturbed clusters is measured using the Jaccard coefficient. These results show that there seems to be an inherent tradeoff between prediction accuracy and cluster stability. This paper concludes by discussing which clustering methods may be appropriate for different analytical purposes.
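A minimal sketch of the two-stage baseline described above (K-means clustering followed by per-cluster linear regression) and of a bootstrap Jaccard check of cluster stability is shown below; it uses synthetic data and scikit-learn, and does not implement the clusterwise (latent class) regression itself.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))                      # hypothetical building features
y = X @ np.array([1.0, -2.0, 0.5, 0.0]) + rng.normal(scale=0.5, size=300)

# Two-stage baseline: cluster first, then fit one regression per cluster.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
for k in range(3):
    mask = labels == k
    r2 = cross_val_score(LinearRegression(), X[mask], y[mask], cv=5, scoring="r2").mean()
    print(f"cluster {k}: n={mask.sum()}, mean CV R^2={r2:.2f}")

# Cluster stability: Jaccard overlap between original clusters and clusters refit on a bootstrap sample.
boot = rng.choice(len(X), size=len(X), replace=True)
labels_boot = KMeans(n_clusters=3, n_init=10, random_state=1).fit_predict(X[boot])
for k in range(3):
    orig = set(np.where(labels == k)[0])
    best = max(
        len(orig & set(boot[labels_boot == j])) / len(orig | set(boot[labels_boot == j]))
        for j in range(3)
    )
    print(f"cluster {k}: best Jaccard with bootstrap clusters = {best:.2f}")
```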
Jia, Cang-Zhi; He, Wen-Ying; Yao, Yu-Hua
2017-03-01
Hydroxylation of proline or lysine residues in proteins is a common post-translational modification event, and such modifications are found in many physiological and pathological processes. Nonetheless, the exact molecular mechanism of hydroxylation remains under investigation. Because experimental identification of hydroxylation is time-consuming and expensive, bioinformatics tools with high accuracy represent desirable alternatives for large-scale rapid identification of protein hydroxylation sites. In view of this, we developed a support vector machine-based tool, OH-PRED, for the prediction of protein hydroxylation sites using the adapted normal distribution bi-profile Bayes feature extraction in combination with the physicochemical property indexes of the amino acids. In a jackknife cross validation, OH-PRED yields an accuracy of 91.88% and a Matthews correlation coefficient (MCC) of 0.838 for the prediction of hydroxyproline sites, and yields an accuracy of 97.42% and an MCC of 0.949 for the prediction of hydroxylysine sites. These results demonstrate that OH-PRED significantly increased the prediction accuracy of hydroxyproline and hydroxylysine sites by 7.37% and 14.09%, respectively, when compared with the latest predictor PredHydroxy. In independent tests, OH-PRED also outperforms previously published methods.
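For reference, the two reported performance measures can be computed from confusion-matrix counts as follows; the counts used here are hypothetical and only illustrate the formulas.

```python
import math

def accuracy_and_mcc(tp, tn, fp, fn):
    """Accuracy and Matthews correlation coefficient from confusion-matrix counts."""
    acc = (tp + tn) / (tp + tn + fp + fn)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return acc, mcc

# hypothetical counts for illustration only
print(accuracy_and_mcc(tp=450, tn=460, fp=40, fn=40))
```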
Predicting School Enrollments Using the Modified Regression Technique.
ERIC Educational Resources Information Center
Grip, Richard S.; Young, John W.
This report is based on a study in which a regression model was constructed to increase accuracy in enrollment predictions. A model, known as the Modified Regression Technique (MRT), was used to examine K-12 enrollment over the past 20 years in 2 New Jersey school districts of similar size and ethnicity. To test the model's accuracy, MRT was…
ERIC Educational Resources Information Center
Szadokierski, Isadora; Burns, Matthew K.; McComas, Jennifer J.
2017-01-01
The current study used the learning hierarchy/instructional hierarchy phases of acquisition and fluency to predict intervention effectiveness based on preintervention reading skills. Preintervention reading accuracy (percentage of words read correctly) and rate (number of words read correctly per minute) were assessed for 49 second- and…
Developing Local Oral Reading Fluency Cut Scores for Predicting High-Stakes Test Performance
ERIC Educational Resources Information Center
Grapin, Sally L.; Kranzler, John H.; Waldron, Nancy; Joyce-Beaulieu, Diana; Algina, James
2017-01-01
This study evaluated the classification accuracy of a second grade oral reading fluency curriculum-based measure (R-CBM) in predicting third grade state test performance. It also compared the long-term classification accuracy of local and publisher-recommended R-CBM cut scores. Participants were 266 students who were divided into a calibration…
ERIC Educational Resources Information Center
Szadokierski, Isadora Elisabeth
2012-01-01
The current study used the Learning Hierarchy/Instructional Hierarchy (LH/IH) to predict intervention effectiveness based on the reading skills of students who are developing reading fluency. Pre-intervention reading accuracy and rate were assessed for 49 second and third grade participants who then participated in a brief experimental analysis…
Maintenance of equilibrium point control during an unexpectedly loaded rapid limb movement.
Simmons, R W; Richardson, C
1984-06-08
Two experiments investigated whether the equilibrium point hypothesis or the mass-spring model of motor control subserves positioning accuracy during spring-loaded, rapid, bi-articulated movements. For intact preparations, the equilibrium point hypothesis predicts response accuracy to be determined by a mixture of afferent and efferent information, whereas the mass-spring model predicts positioning to be under direct control. Subjects completed a series of load-resisted training trials to a spatial target. The magnitude of a sustained spring load was unexpectedly increased on selected trials. Results indicated that positioning accuracy and applied force varied with increases in load, which suggests that the original efferent commands are modified by afferent information during the movement, as predicted by the equilibrium point hypothesis.
NASA Astrophysics Data System (ADS)
Hänsch, Ronny; Hellwich, Olaf
2018-04-01
Random Forests have continuously proven to be one of the most accurate, robust, and efficient methods for the supervised classification of images in general and of polarimetric synthetic aperture radar data in particular. While the majority of previous work focuses on improving classification accuracy, we aim to accelerate the training of the classifier as well as its use during prediction while maintaining its accuracy. Unlike other approaches, we mainly consider algorithmic changes in order to stay as independent as possible of platform and programming language. The final model achieves approximately 60 times faster training and 500 times faster prediction, while accuracy decreases only marginally, by roughly 1%.
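The paper's algorithmic changes are not reproduced here, but the sketch below illustrates the kind of training-time, prediction-time and accuracy trade-off being measured, using a standard scikit-learn Random Forest with an assumed depth restriction as a crude proxy for a faster model.

```python
import time
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)

for depth in (None, 8):   # unrestricted vs shallow trees as a crude speed/accuracy trade-off
    clf = RandomForestClassifier(n_estimators=200, max_depth=depth, n_jobs=-1, random_state=0)
    t0 = time.perf_counter(); clf.fit(X, y); t_fit = time.perf_counter() - t0
    t0 = time.perf_counter(); clf.predict(X); t_pred = time.perf_counter() - t0
    print(f"max_depth={depth}: fit {t_fit:.2f}s, predict {t_pred:.2f}s, train acc {clf.score(X, y):.3f}")
```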
Park, Jonghyeok; Kim, Hackjin; Sohn, Jeong-Woo; Choi, Jong-ryul; Kim, Sung-Phil
2018-01-01
Humans often attempt to predict what others prefer based on a narrow slice of experience, called thin-slicing. According to the theoretical bases for how humans can predict the preference of others, one tends to estimate the other's preference using a perceived difference between the other and self. Previous neuroimaging studies have revealed that the network of the dorsal medial prefrontal cortex (dmPFC) and the right temporoparietal junction (rTPJ) is related to the ability to predict others' preferences. However, the temporal patterns of neural activity underlying the prediction of others' preferences through thin-slicing remain unknown. To investigate these temporal aspects, we analyzed human electroencephalography (EEG) recorded during a task of predicting the preference of others while only a facial picture of the other person was provided. Twenty participants (all female, average age: 21.86 years) took part in the study. In each trial of the task, participants were shown a picture of either a target person or self for 3 s, followed by the presentation of a movie poster for which participants predicted the target person's preference as liking or disliking. Time-frequency EEG analysis was employed to analyze temporal changes in the amplitudes of brain oscillations. Participants could predict others' preferences for movies with an accuracy of 56.89 ± 3.16%, and 10 out of 20 participants exhibited prediction accuracy higher than chance level (95% interval). There was a significant difference in the power of the parietal alpha (10~13 Hz) oscillation 0.6~0.8 s after the onset of poster presentation between the cases when participants predicted others' preference and when they reported self-preference (p < 0.05). The power of brain oscillations at any frequency band and time period during the trial did not show a significant correlation with individual prediction accuracy. However, when we measured differences in power between trials of predicting others' preference and reporting self-preference, the right temporal beta oscillations 1.6~1.8 s after the onset of facial picture presentation exhibited a significant correlation with individual accuracy. Our results suggest that right temporoparietal beta oscillations may be correlated with one's ability to predict what others prefer with minimal information. PMID:29479312
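A minimal sketch of the kind of band-power measurement described above (alpha power in a 0.6~0.8 s post-stimulus window) is given below; it uses a synthetic one-channel signal and an assumed sampling rate, not the authors' recordings or preprocessing pipeline.

```python
import numpy as np
from scipy.signal import spectrogram

fs = 250                                    # hypothetical sampling rate (Hz)
t = np.arange(0, 3, 1 / fs)
eeg = np.random.default_rng(0).normal(size=t.size) + 0.5 * np.sin(2 * np.pi * 11 * t)

f, times, Sxx = spectrogram(eeg, fs=fs, nperseg=fs // 2, noverlap=fs // 4)
alpha = (f >= 10) & (f <= 13)
win = (times >= 0.6) & (times <= 0.8)
alpha_power = Sxx[alpha][:, win].mean()     # mean alpha power 0.6-0.8 s after stimulus onset
print(f"alpha (10-13 Hz) power in the 0.6-0.8 s window: {alpha_power:.4f}")
```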
Fang, Lingzhao; Sahana, Goutam; Ma, Peipei; Su, Guosheng; Yu, Ying; Zhang, Shengli; Lund, Mogens Sandø; Sørensen, Peter
2017-05-12
A better understanding of the genetic architecture of complex traits can contribute to improving genomic prediction. We hypothesized that genomic variants associated with mastitis and milk production traits in dairy cattle are enriched in hepatic transcriptomic regions that are responsive to intra-mammary infection (IMI). Genomic markers [e.g. single nucleotide polymorphisms (SNPs)] from those regions, if included, may improve the predictive ability of a genomic model. We applied a genomic feature best linear unbiased prediction model (GFBLUP) to implement the above strategy by considering the hepatic transcriptomic regions responsive to IMI as genomic features. GFBLUP, an extension of GBLUP, includes a separate genomic effect of SNPs within a genomic feature, and allows differential weighting of the individual marker relationships in the prediction equation. Since GFBLUP is computationally intensive, we investigated whether a SNP set test could be a computationally fast way to preselect predictive genomic features. The SNP set test assesses the association between a genomic feature and a trait based on single-SNP genome-wide association studies. We applied these two approaches to mastitis and milk production traits (milk, fat and protein yield) in Holstein (HOL, n = 5056) and Jersey (JER, n = 1231) cattle. We observed that a majority of genomic features were enriched in genomic variants that were associated with mastitis and milk production traits. Compared to GBLUP, the accuracy of genomic prediction with GFBLUP was marginally improved (3.2 to 3.9%) in within-breed prediction. The highest increase (164.4%) in prediction accuracy was observed in across-breed prediction. The significance of genomic features based on the SNP set test was correlated with changes in prediction accuracy of GFBLUP (P < 0.05). GFBLUP provides a framework for integrating multiple layers of biological knowledge to provide novel insights into the biological basis of complex traits, and to improve the accuracy of genomic prediction. The SNP set test might be used as a first step to improve GFBLUP models. Approaches like GFBLUP and the SNP set test will become increasingly useful, as the functional annotations of genomes keep accumulating for a range of species and traits.
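The core idea of GFBLUP, giving the SNPs inside a genomic feature their own relationship matrix and variance component, can be sketched as follows; the marker data are synthetic, the relationship matrix is the common VanRaden form, and fitting the resulting two-component mixed model (e.g., by REML) is not shown.

```python
import numpy as np

def grm(M):
    """VanRaden-style genomic relationship matrix from a 0/1/2 marker matrix (animals x SNPs)."""
    p = M.mean(axis=0) / 2.0
    Z = M - 2.0 * p
    return Z @ Z.T / (2.0 * np.sum(p * (1.0 - p)))

rng = np.random.default_rng(0)
M = rng.integers(0, 3, size=(200, 1000)).astype(float)     # synthetic genotypes
feature = rng.choice(M.shape[1], size=100, replace=False)  # SNPs inside the genomic feature
rest = np.setdiff1d(np.arange(M.shape[1]), feature)

G_feature = grm(M[:, feature])   # gets its own variance component in GFBLUP
G_rest = grm(M[:, rest])         # remaining genome, second variance component
print(G_feature.shape, G_rest.shape)
```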
2011-01-01
Background Dementia and cognitive impairment associated with aging are a major medical and social concern. Neuropsychological testing is a key element in the diagnostic procedures for Mild Cognitive Impairment (MCI), but presently has limited value in the prediction of progression to dementia. We advance the hypothesis that newer statistical classification methods derived from data mining and machine learning, such as Neural Networks, Support Vector Machines and Random Forests, can improve the accuracy, sensitivity and specificity of predictions obtained from neuropsychological testing. Seven nonparametric classifiers derived from data mining methods (Multilayer Perceptron Neural Networks, Radial Basis Function Neural Networks, Support Vector Machines, CART, CHAID and QUEST Classification Trees, and Random Forests) were compared to three traditional classifiers (Linear Discriminant Analysis, Quadratic Discriminant Analysis and Logistic Regression) in terms of overall classification accuracy, specificity, sensitivity, area under the ROC curve and Press's Q. Model predictors were 10 neuropsychological tests currently used in the diagnosis of dementia. Statistical distributions of classification parameters obtained from a 5-fold cross-validation were compared using Friedman's nonparametric test. Results Press's Q test showed that all classifiers performed better than chance alone (p < 0.05). Support Vector Machines showed the largest overall classification accuracy (median (Me) = 0.76) and an area under the ROC curve (Me = 0.90). However, this method showed high specificity (Me = 1.0) but low sensitivity (Me = 0.3). Random Forests ranked second in overall accuracy (Me = 0.73), with a high area under the ROC curve (Me = 0.73), specificity (Me = 0.73) and sensitivity (Me = 0.64). Linear Discriminant Analysis also showed acceptable overall accuracy (Me = 0.66), with an acceptable area under the ROC curve (Me = 0.72), specificity (Me = 0.66) and sensitivity (Me = 0.64). The remaining classifiers showed overall classification accuracy above a median value of 0.63, but for most of them sensitivity was around or even lower than a median value of 0.5. Conclusions When taking into account sensitivity, specificity and overall classification accuracy, Random Forests and Linear Discriminant Analysis rank first among all the classifiers tested for the prediction of dementia using several neuropsychological tests. These methods may be used to improve the accuracy, sensitivity and specificity of dementia predictions from neuropsychological testing. PMID:21849043
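A minimal sketch of this kind of comparison, cross-validated accuracies for several classifiers compared with Friedman's nonparametric test, is shown below using synthetic data and scikit-learn; it is illustrative only and does not reproduce the study's ten neuropsychological predictors or full metric set.

```python
import numpy as np
from scipy.stats import friedmanchisquare
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=400, n_features=10, random_state=0)
models = {
    "LDA": LinearDiscriminantAnalysis(),
    "SVM": SVC(),
    "RF": RandomForestClassifier(random_state=0),
}
scores = {name: cross_val_score(m, X, y, cv=5, scoring="accuracy") for name, m in models.items()}
for name, s in scores.items():
    print(name, "median accuracy:", np.median(s).round(3))
print(friedmanchisquare(scores["LDA"], scores["SVM"], scores["RF"]))
```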
Zheng, Jun; Yu, Zhiyuan; Xu, Zhao; Li, Mou; Wang, Xiaoze; Lin, Sen; Li, Hao; You, Chao
2017-05-12
BACKGROUND Hematoma expansion is associated with poor outcome in intracerebral hemorrhage (ICH) patients. The spot sign and the blend sign are reliable tools for predicting hematoma expansion in ICH patients. The aim of this study was to compare the accuracy of the two signs in the prediction of hematoma expansion. MATERIAL AND METHODS Patients with spontaneous ICH were screened for the presence of the computed tomography angiography (CTA) spot sign and the non-contrast CT (NCCT) blend sign within 6 hours after onset of symptoms. The sensitivity, specificity, and positive and negative predictive values of the spot sign and the blend sign in predicting hematoma expansion were calculated. The accuracy of the spot sign and the blend sign in predicting hematoma expansion was analyzed by receiver operating characteristic (ROC) analysis. RESULTS A total of 115 patients were enrolled in this study. The spot sign was observed in 25 (21.74%) patients, whereas the blend sign was observed in 22 (19.13%) patients. Of the 28 patients with hematoma expansion, the CTA spot sign was found on admission CT scans in 16 (57.14%) and the NCCT blend sign in 12 (42.86%), respectively. The sensitivity, specificity, positive predictive value, and negative predictive value of the spot sign for predicting hematoma expansion were 57.14%, 89.66%, 64.00%, and 86.67%, respectively. In contrast, the sensitivity, specificity, positive predictive value, and negative predictive value of the blend sign were 42.86%, 88.51%, 54.55%, and 82.80%, respectively. The area under the curve (AUC) of the spot sign was 0.734, which was higher than that of the blend sign (0.657). CONCLUSIONS Both the spot sign and the blend sign seemed to be good predictors for hematoma expansion, and the spot sign appeared to have better predictive accuracy.
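The reported spot-sign figures can be reproduced from the counts given in the abstract, as the sketch below shows; the 2x2 counts are reconstructed from those numbers (16 true positives, 25 sign-positive patients, 28 expansions, 115 patients in total).

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from 2x2 confusion counts."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Spot-sign counts reconstructed from the abstract.
tp, fn = 16, 28 - 16
fp = 25 - 16
tn = 115 - 28 - fp
print(diagnostic_metrics(tp, fp, fn, tn))   # ~0.571, 0.897, 0.640, 0.867
```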
Prospects and Potential Uses of Genomic Prediction of Key Performance Traits in Tetraploid Potato.
Stich, Benjamin; Van Inghelandt, Delphine
2018-01-01
Genomic prediction is a routine tool in breeding programs of most major animal and plant species. However, its usefulness for potato breeding has not yet been evaluated in detail. The objectives of this study were to (i) examine the prospects of genomic prediction of key performance traits in a diversity panel of tetraploid potato modeling additive, dominance, and epistatic effects, (ii) investigate the effects of size and make up of training set, number of test environments and molecular markers on prediction accuracy, and (iii) assess the effect of including markers from candidate genes on the prediction accuracy. With genomic best linear unbiased prediction (GBLUP), BayesA, BayesCπ, and Bayesian LASSO, four different prediction methods were used for genomic prediction of relative area under disease progress curve after a Phytophthora infestans infection, plant maturity, maturity corrected resistance, tuber starch content, tuber starch yield (TSY), and tuber yield (TY) of 184 tetraploid potato clones or subsets thereof genotyped with the SolCAP 8.3k SNP array. The cross-validated prediction accuracies with GBLUP and the three Bayesian approaches for the six evaluated traits ranged from about 0.5 to about 0.8. For traits with a high expected genetic complexity, such as TSY and TY, we observed an 8% higher prediction accuracy using a model with additive and dominance effects compared with a model with additive effects only. Our results suggest that for oligogenic traits in general and when diagnostic markers are available in particular, the use of Bayesian methods for genomic prediction is highly recommended and that the diagnostic markers should be modeled as fixed effects. The evaluation of the relative performance of genomic prediction vs. phenotypic selection indicated that the former is superior, assuming cycle lengths and selection intensities that are possible to realize in commercial potato breeding programs.
NASA Astrophysics Data System (ADS)
Markov, Yu. G.; Mikhailov, M. V.; Pochukaev, V. N.
2012-07-01
An analysis of perturbing factors influencing the motion of a navigation satellite (NS) is carried out, and the degree of influence of each factor on the GLONASS orbit is estimated. It is found that the fundamental components of the Earth's rotation parameters (ERP) are a substantial factor, commensurable with the maximum perturbations. Algorithms for the calculation of orbital perturbations caused by these parameters are given; these algorithms can be implemented in a consumer's equipment. The daily prediction of NS coordinates is performed on the basis of real GLONASS satellite ephemerides transmitted to a consumer, using the developed prediction algorithms that take the ERP into account. The accuracy obtained for the daily prediction of GLONASS ephemerides is tens of times better than that of the daily prediction performed using the algorithms recommended in the interface control documents.
Short-arc measurement and fitting based on the bidirectional prediction of observed data
NASA Astrophysics Data System (ADS)
Fei, Zhigen; Xu, Xiaojie; Georgiadis, Anthimos
2016-02-01
To measure a short arc is a notoriously difficult problem. In this study, a bidirectional prediction method based on the Radial Basis Function Neural Network (RBFNN) is applied to observed data distributed along a short arc in order to increase the corresponding arc length and thus improve its fitting accuracy. Firstly, the rationality of regarding the observed data as a time series is discussed in accordance with the definition of a time series. Secondly, the RBFNN is constructed to predict the observed data, where an interpolation method is used to enlarge the size of the training set in order to improve the learning accuracy of the RBFNN's parameters. Finally, in the numerical simulation section, we focus on how the size of the training sample and the noise level influence the learning error and prediction error of the built RBFNN. Typically, observed data coming from a 5° short arc are used to evaluate the performance of the Hyper method, known as the 'unbiased fitting method of circle', with different noise levels before and after prediction. A number of simulation experiments reveal that the fitting stability and accuracy of the Hyper method after prediction are far superior to those before prediction.
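To illustrate why short-arc fitting is hard, the sketch below fits a circle to noisy points on a 5° arc with the simple algebraic Kåsa least-squares fit; this is a cruder estimator than the Hyper method used in the paper, and the radius, noise level and arc length are arbitrary choices.

```python
import numpy as np

def kasa_circle_fit(x, y):
    """Simple algebraic (Kasa) least-squares circle fit: x^2 + y^2 + D*x + E*y + F = 0."""
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x**2 + y**2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = -D / 2.0, -E / 2.0
    r = np.sqrt(cx**2 + cy**2 - F)
    return cx, cy, r

rng = np.random.default_rng(0)
theta = np.deg2rad(np.linspace(0, 5, 50))          # points along a 5-degree short arc
x = 100.0 * np.cos(theta) + rng.normal(scale=0.05, size=theta.size)
y = 100.0 * np.sin(theta) + rng.normal(scale=0.05, size=theta.size)
print(kasa_circle_fit(x, y))                       # the radius estimate is unstable for short arcs
```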
High accuracy satellite drag model (HASDM)
NASA Astrophysics Data System (ADS)
Storz, M.; Bowman, B.; Branson, J.
The dominant error source in the force models used to predict low perigee satellite trajectories is atmospheric drag. Errors in operational thermospheric density models cause significant errors in predicted satellite positions, since these models do not account for dynamic changes in atmospheric drag for orbit predictions. The Air Force Space Battlelab's High Accuracy Satellite Drag Model (HASDM) estimates and predicts (out three days) a dynamically varying high-resolution density field. HASDM includes the Dynamic Calibration Atmosphere (DCA) algorithm that solves for the phases and amplitudes of the diurnal, semidiurnal and terdiurnal variations of thermospheric density near real-time from the observed drag effects on a set of Low Earth Orbit (LEO) calibration satellites. The density correction is expressed as a function of latitude, local solar time and altitude. In HASDM, a time series prediction filter relates the extreme ultraviolet (EUV) energy index E10.7 and the geomagnetic storm index ap to the DCA density correction parameters. The E10.7 index is generated by the SOLAR2000 model, the first full spectrum model of solar irradiance. The estimated and predicted density fields will be used operationally to significantly improve the accuracy of predicted trajectories for all low perigee satellites.
High accuracy satellite drag model (HASDM)
NASA Astrophysics Data System (ADS)
Storz, Mark F.; Bowman, Bruce R.; Branson, Major James I.; Casali, Stephen J.; Tobiska, W. Kent
The dominant error source in force models used to predict low-perigee satellite trajectories is atmospheric drag. Errors in operational thermospheric density models cause significant errors in predicted satellite positions, since these models do not account for dynamic changes in atmospheric drag for orbit predictions. The Air Force Space Battlelab's High Accuracy Satellite Drag Model (HASDM) estimates and predicts (out three days) a dynamically varying global density field. HASDM includes the Dynamic Calibration Atmosphere (DCA) algorithm that solves for the phases and amplitudes of the diurnal and semidiurnal variations of thermospheric density near real-time from the observed drag effects on a set of Low Earth Orbit (LEO) calibration satellites. The density correction is expressed as a function of latitude, local solar time and altitude. In HASDM, a time series prediction filter relates the extreme ultraviolet (EUV) energy index E10.7 and the geomagnetic storm index ap, to the DCA density correction parameters. The E10.7 index is generated by the SOLAR2000 model, the first full spectrum model of solar irradiance. The estimated and predicted density fields will be used operationally to significantly improve the accuracy of predicted trajectories for all low-perigee satellites.
NASA Astrophysics Data System (ADS)
Ko, P.; Kurosawa, S.
2014-03-01
The understanding and accurate prediction of the flow behaviour related to cavitation and pressure fluctuation in a Kaplan turbine are important for design work that enhances turbine performance, including extending the operating life span and improving turbine efficiency. In this paper, a high-accuracy turbine and cavitation performance prediction method based on the entire flow passage of a Kaplan turbine is presented and evaluated. The two-phase flow field is predicted by solving the Reynolds-averaged Navier-Stokes equations with a volume-of-fluid method tracking the free surface, combined with a Reynolds stress model. The growth and collapse of cavitation bubbles are modelled by the modified Rayleigh-Plesset equation. The prediction accuracy is evaluated by comparison with the model test results of an Ns 400 Kaplan model turbine. As a result, the experimentally measured data, including turbine efficiency, cavitation performance, and pressure fluctuation, are accurately predicted. Furthermore, the cavitation occurrence on the runner blade surface and its influence on the hydraulic loss of the flow passage are discussed. The evaluated prediction method for turbine flow and performance is introduced to facilitate future design and research work on Kaplan-type turbines.
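For reference, the classical (unmodified) Rayleigh-Plesset equation for the radius R(t) of a spherical bubble in a liquid of density \rho_L and dynamic viscosity \mu_L is

\[ \rho_L\left(R\ddot{R} + \tfrac{3}{2}\dot{R}^2\right) = p_B(t) - p_\infty(t) - \frac{2\sigma}{R} - \frac{4\mu_L\dot{R}}{R}, \]

where p_B is the pressure inside the bubble, p_\infty the far-field pressure and \sigma the surface tension. The specific modification used by the authors is not given in the abstract; the equation above is only the standard starting point.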
Dynamic Filtering Improves Attentional State Prediction with fNIRS
NASA Technical Reports Server (NTRS)
Harrivel, Angela R.; Weissman, Daniel H.; Noll, Douglas C.; Huppert, Theodore; Peltier, Scott J.
2016-01-01
Brain activity can predict a person's level of engagement in an attentional task. However, estimates of brain activity are often confounded by measurement artifacts and systemic physiological noise. The optimal method for filtering this noise - thereby increasing such state prediction accuracy - remains unclear. To investigate this, we asked study participants to perform an attentional task while we monitored their brain activity with functional near infrared spectroscopy (fNIRS). We observed higher state prediction accuracy when noise in the fNIRS hemoglobin [Hb] signals was filtered with a non-stationary (adaptive) model as compared to static regression (84% ± 6% versus 72% ± 15%).
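The abstract does not specify the adaptive filter, but the sketch below illustrates the general point with a synthetic example: a recursive least squares regression with a forgetting factor tracks a drifting noise coupling that a single static regression coefficient cannot, leaving a cleaner signal.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
noise_ref = rng.normal(size=n)                 # measured systemic noise reference
drift = np.linspace(0.5, 2.0, n)               # slowly changing coupling -> non-stationary
signal = 0.3 * np.sin(2 * np.pi * np.arange(n) / 200)
hb = signal + drift * noise_ref + 0.05 * rng.normal(size=n)

# Static regression: one global coefficient for the noise reference.
beta = np.dot(noise_ref, hb) / np.dot(noise_ref, noise_ref)
static_clean = hb - beta * noise_ref

# Adaptive coefficient: recursive least squares with a forgetting factor tracks the drift.
lam, P, w = 0.995, 1e3, 0.0
adaptive_clean = np.empty(n)
for t in range(n):
    x = noise_ref[t]
    k = P * x / (lam + x * P * x)
    w = w + k * (hb[t] - w * x)
    P = (P - k * x * P) / lam
    adaptive_clean[t] = hb[t] - w * x

print("static residual RMS:  ", np.sqrt(np.mean((static_clean - signal) ** 2)))
print("adaptive residual RMS:", np.sqrt(np.mean((adaptive_clean - signal) ** 2)))
```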
Rebound from marital conflict and divorce prediction.
Gottman, J M; Levenson, R W
1999-01-01
Marital interaction has primarily been examined in the context of conflict resolution. This study investigated the predictive ability of couples to rebound from marital conflict in a subsequent positive conversation. Results showed that there was a great deal of consistency in affect across both conversations. Also examined was the ability of affective interaction to predict divorce over a 4-year period, separately in each of the two conversations. It was possible to predict divorce using affective variables from each conversation, with 82.6% accuracy from the conflict conversation and with 92.7% accuracy from the positive rebound conversation.
McDermott, A; Visentin, G; De Marchi, M; Berry, D P; Fenelon, M A; O'Connor, P M; Kenny, O A; McParland, S
2016-04-01
The aim of this study was to evaluate the effectiveness of mid-infrared spectroscopy in predicting milk protein and free amino acid (FAA) composition in bovine milk. Milk samples were collected from 7 Irish research herds and represented cows from a range of breeds, parities, and stages of lactation. Mid-infrared spectral data in the range of 900 to 5,000 cm(-1) were available for 730 milk samples; gold standard methods were used to quantify individual protein fractions and FAA of these samples with a view to predicting these gold standard protein fractions and FAA levels with the available mid-infrared spectroscopy data. Separate prediction equations were developed for each trait using partial least squares regression; accuracy of prediction was assessed using both cross validation on a calibration data set (n=400 to 591 samples) and external validation on an independent data set (n=143 to 294 samples). The accuracy of prediction in external validation was the same irrespective of whether it was undertaken on the entire external validation data set or just within the Holstein-Friesian breed. The strongest coefficient of correlation obtained for protein fractions in external validation was 0.74, 0.69, and 0.67 for total casein, total β-lactoglobulin, and β-casein, respectively. Total proteins (i.e., total casein, total whey, and total lactoglobulin) were predicted with greater accuracy than their respective component traits; prediction accuracy using the infrared spectrum was superior to prediction using just milk protein concentration. Weak to moderate prediction accuracies were observed for FAA. The greatest coefficient of correlation in both cross validation and external validation was for Gly (0.75), indicating a moderate accuracy of prediction. Overall, the FAA prediction models overpredicted the gold standard values. Near-unity correlations existed between total casein and β-casein irrespective of whether the traits were based on the gold standard (0.92) or mid-infrared spectroscopy predictions (0.95). Weaker correlations were observed among FAA than among the protein fractions. Pearson correlations between gold standard protein fractions and the milk processing characteristics of rennet coagulation time, curd firming time, curd firmness, heat coagulation time, pH, and casein micelle size were weak to moderate and ranged from -0.48 (protein and pH) to 0.50 (total casein and a30). Pearson correlations between gold standard FAA and these milk processing characteristics were also weak to moderate and ranged from -0.60 (Val and pH) to 0.49 (Val and K20). Results from this study indicate that mid-infrared spectroscopy has the potential to predict protein fractions and some FAA in milk at a population level. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
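A minimal sketch of the partial least squares workflow described above (calibration/validation split, PLS regression of a protein fraction on the spectral bands, correlation as the accuracy measure) is shown below, with synthetic spectra standing in for the MIR data; the number of components is an arbitrary choice.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_bands = 730, 250                     # sizes loosely taken from the abstract
spectra = rng.normal(size=(n_samples, n_bands))   # synthetic stand-in for MIR spectra
true_w = rng.normal(size=n_bands)
casein = spectra @ true_w * 0.01 + rng.normal(scale=0.1, size=n_samples)

X_cal, X_val, y_cal, y_val = train_test_split(spectra, casein, test_size=0.3, random_state=0)
pls = PLSRegression(n_components=10).fit(X_cal, y_cal)
pred = pls.predict(X_val).ravel()
print("external validation correlation:", np.corrcoef(pred, y_val)[0, 1])
```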
Monteiro-Soares, M; Martins-Mendes, D; Vaz-Carneiro, A; Sampaio, S; Dinis-Ribeiro, M
2014-10-01
We systematically review the available systems used to classify diabetic foot ulcers in order to synthesize their methodological qualitative issues and accuracy to predict lower extremity amputation, as this may represent a critical point in these patients' care. Two investigators searched, in EBSCO, ISI, PubMed and SCOPUS databases, and independently selected studies published until May 2013 and reporting prognostic accuracy and/or reliability of specific systems for patients with diabetic foot ulcer in order to predict lower extremity amputation. We included 25 studies reporting a prevalence of lower extremity amputation between 6% and 78%. Eight different diabetic foot ulcer descriptions and seven prognostic stratification classification systems were addressed with a variable (1-9) number of factors included, specially peripheral arterial disease (n = 12) or infection at the ulcer site (n = 10) or ulcer depth (n = 10). The Meggitt-Wagner, S(AD)SAD and Texas University Classification systems were the most extensively validated, whereas ten classifications were derived or validated only once. Reliability was reported in a single study, and accuracy measures were reported in five studies with another eight allowing their calculation. Pooled accuracy ranged from 0.65 (for gangrene) to 0.74 (for infection). There are numerous classification systems for diabetic foot ulcer outcome prediction, but only few studies evaluated their reliability or external validity. Studies rarely validated several systems simultaneously and only a few reported accuracy measures. Further studies assessing reliability and accuracy of the available systems and their composing variables are needed. Copyright © 2014 John Wiley & Sons, Ltd.
Energy prediction equations are inadequate for obese Hispanic youth.
Klein, Catherine J; Villavicencio, Stephan A; Schweitzer, Amy; Bethepu, Joel S; Hoffman, Heather J; Mirza, Nazrat M
2011-08-01
Assessing energy requirements is a fundamental activity in clinical dietetics practice. A study was designed to determine whether published linear regression equations were accurate for predicting resting energy expenditure (REE) in fasted Hispanic children with obesity (aged 7 to 15 years). REE was measured using indirect calorimetry; body composition was estimated with whole-body air displacement plethysmography. REE was predicted using four equations: Institute of Medicine for healthy-weight children (IOM-HW), IOM for overweight and obese children (IOM-OS), Harris-Benedict, and Schofield. Accuracy of the prediction was calculated as the absolute value of the difference between the measured and predicted REE divided by the measured REE, expressed as a percentage. Predicted values within 85% to 115% of measured were defined as accurate. Participants (n=58; 53% boys) had a mean age of 11.8±2.1 years, 43.5%±5.1% body fat, and a body mass index of 31.5±5.8 (98.6±1.1 body mass index percentile). Measured REE was 2,339±680 kcal/day; predicted REE was 1,815±401 kcal/day (IOM-HW), 1,794±311 kcal/day (IOM-OS), 1,151±300 kcal/day (Harris-Benedict), and 1,771±316 kcal/day (Schofield). Measured REE adjusted for body weight averaged 32.0±8.4 kcal/kg/day (95% confidence interval 29.8 to 34.2). Published equations predicted REE within 15% accuracy for only 36% to 40% of 58 participants, except for Harris-Benedict, which did not achieve accuracy for any participant. The most frequently accurate values were obtained using IOM-HW, which predicted REE within 15% accuracy for 55% (17/31) of boys. Published equations did not accurately predict REE for youth in the study sample. Further studies are warranted to formulate accurate energy prediction equations for this population. Copyright © 2011 American Dietetic Association. Published by Elsevier Inc. All rights reserved.
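The accuracy criterion used in the study is easy to state in code; the sketch below applies it to the reported group means purely as an illustration (individual-level data would be needed to reproduce the actual percentages).

```python
def within_15_percent(measured_kcal, predicted_kcal):
    """Prediction counted as accurate when |measured - predicted| / measured <= 15%."""
    return abs(measured_kcal - predicted_kcal) / measured_kcal <= 0.15

# group means from the abstract: measured REE 2,339 kcal/day vs IOM-HW prediction 1,815 kcal/day
print(within_15_percent(2339, 1815))   # False -> under-prediction by roughly 22%
```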
Predicting human olfactory perception from chemical features of odor molecules.
Keller, Andreas; Gerkin, Richard C; Guan, Yuanfang; Dhurandhar, Amit; Turu, Gabor; Szalai, Bence; Mainland, Joel D; Ihara, Yusuke; Yu, Chung Wen; Wolfinger, Russ; Vens, Celine; Schietgat, Leander; De Grave, Kurt; Norel, Raquel; Stolovitzky, Gustavo; Cecchi, Guillermo A; Vosshall, Leslie B; Meyer, Pablo
2017-02-24
It is still not possible to predict whether a given molecule will have a perceived odor or what olfactory percept it will produce. We therefore organized the crowd-sourced DREAM Olfaction Prediction Challenge. Using a large olfactory psychophysical data set, teams developed machine-learning algorithms to predict sensory attributes of molecules based on their chemoinformatic features. The resulting models accurately predicted odor intensity and pleasantness and also successfully predicted 8 among 19 rated semantic descriptors ("garlic," "fish," "sweet," "fruit," "burnt," "spices," "flower," and "sour"). Regularized linear models performed nearly as well as random forest-based ones, with a predictive accuracy that closely approaches a key theoretical limit. These models help to predict the perceptual qualities of virtually any molecule with high accuracy and also reverse-engineer the smell of a molecule. Copyright © 2017, American Association for the Advancement of Science.
A Deep Learning Network Approach to ab initio Protein Secondary Structure Prediction
Spencer, Matt; Eickholt, Jesse; Cheng, Jianlin
2014-01-01
Ab initio protein secondary structure (SS) predictions are utilized to generate tertiary structure predictions, which are increasingly demanded due to the rapid discovery of proteins. Although recent developments have slightly exceeded previous methods of SS prediction, accuracy has stagnated around 80% and many wonder if prediction cannot be advanced beyond this ceiling. Disciplines that have traditionally employed neural networks are experimenting with novel deep learning techniques in attempts to stimulate progress. Since neural networks have historically played an important role in SS prediction, we wanted to determine whether deep learning could contribute to the advancement of this field as well. We developed an SS predictor that makes use of the position-specific scoring matrix generated by PSI-BLAST and deep learning network architectures, which we call DNSS. Graphical processing units and CUDA software optimize the deep network architecture and efficiently train the deep networks. Optimal parameters for the training process were determined, and a workflow comprising three separately trained deep networks was constructed in order to make refined predictions. This deep learning network approach was used to predict SS for a fully independent test data set of 198 proteins, achieving a Q3 accuracy of 80.7% and a Sov accuracy of 74.2%. PMID:25750595
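For reference, the Q3 measure quoted above is simply the per-residue three-state agreement; a minimal sketch with a made-up prediction string is:

```python
def q3(pred, true):
    """Q3: fraction of residues whose 3-state (H/E/C) secondary structure is predicted correctly."""
    assert len(pred) == len(true)
    return sum(p == t for p, t in zip(pred, true)) / len(true)

print(q3("HHHECCCHH", "HHHECCCCH"))   # 8 of 9 residues correct
```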
Genomic prediction of the polled and horned phenotypes in Merino sheep.
Duijvesteijn, Naomi; Bolormaa, Sunduimijid; Daetwyler, Hans D; van der Werf, Julius H J
2018-05-22
In horned sheep breeds, breeding for polledness has been of interest for decades. The objective of this study was to improve prediction of the horned and polled phenotypes using horn scores classified as polled, scurs, knobs or horns. Derived phenotypes polled/non-polled (P/NP) and horned/non-horned (H/NH) were used to test four different strategies for prediction in 4001 purebred Merino sheep. These strategies include the use of single 'single nucleotide polymorphism' (SNP) genotypes, multiple-SNP haplotypes, genome-wide and chromosome-wide genomic best linear unbiased prediction and information from imputed sequence variants from the region including the RXFP2 gene. Low-density genotypes of these animals were imputed to the Illumina Ovine high-density (600k) chip and the 1.78-kb insertion polymorphism in RXFP2 was included in the imputation process to whole-genome sequence. We evaluated the mode of inheritance and validated models by a fivefold cross-validation and across- and between-family prediction. The most significant SNPs for prediction of P/NP and H/NH were OAR10_29546872.1 and OAR10_29458450, respectively, located on chromosome 10 close to the 1.78-kb insertion at 29.5 Mb. The mode of inheritance included an additive effect and a sex-dependent effect for dominance for P/NP and a sex-dependent additive and dominance effect for H/NH. Models with the highest prediction accuracies for H/NH used either single SNPs or 3-SNP haplotypes and included a polygenic effect estimated based on traditional pedigree relationships. Prediction accuracies for H/NH were 0.323 for females and 0.725 for males. For predicting P/NP, the best models were the same as for H/NH but included a genomic relationship matrix with accuracies of 0.713 for females and 0.620 for males. Our results show that prediction accuracy is high using a single SNP, but does not reach 1 since the causative mutation is not genotyped. Incomplete penetrance or allelic heterogeneity, which can influence expression of the phenotype, may explain why prediction accuracy did not approach 1 with any of the genetic models tested here. Nevertheless, a breeding program to eradicate horns from Merino sheep can be effective by selecting genotypes GG of SNP OAR10_29458450 or TT of SNP OAR10_29546872.1 since all sheep with these genotypes will be non-horned.
Dias, Kaio Olímpio Das Graças; Gezan, Salvador Alejandro; Guimarães, Claudia Teixeira; Nazarian, Alireza; da Costa E Silva, Luciano; Parentoni, Sidney Netto; de Oliveira Guimarães, Paulo Evaristo; de Oliveira Anoni, Carina; Pádua, José Maria Villela; de Oliveira Pinto, Marcos; Noda, Roberto Willians; Ribeiro, Carlos Alexandre Gomes; de Magalhães, Jurandir Vieira; Garcia, Antonio Augusto Franco; de Souza, João Cândido; Guimarães, Lauro José Moreira; Pastina, Maria Marta
2018-07-01
Breeding for drought tolerance is a challenging task that requires costly, extensive, and precise phenotyping. Genomic selection (GS) can be used to maximize selection efficiency and the genetic gains in maize (Zea mays L.) breeding programs for drought tolerance. Here, we evaluated the accuracy of genomic selection (GS) using additive (A) and additive + dominance (AD) models to predict the performance of untested maize single-cross hybrids for drought tolerance in multi-environment trials. Phenotypic data of five drought tolerance traits were measured in 308 hybrids along eight trials under water-stressed (WS) and well-watered (WW) conditions over two years and two locations in Brazil. Hybrids' genotypes were inferred based on their parents' genotypes (inbred lines) using single-nucleotide polymorphism markers obtained via genotyping-by-sequencing. GS analyses were performed using genomic best linear unbiased prediction by fitting a factor analytic (FA) multiplicative mixed model. Two cross-validation (CV) schemes were tested: CV1 and CV2. The FA framework allowed for investigating the stability of additive and dominance effects across environments, as well as the additive-by-environment and the dominance-by-environment interactions, with interesting applications for parental and hybrid selection. Results showed differences in the predictive accuracy between A and AD models, using both CV1 and CV2, for the five traits in both water conditions. For grain yield (GY) under WS and using CV1, the AD model doubled the predictive accuracy in comparison to the A model. Through CV2, GS models benefit from borrowing information of correlated trials, resulting in an increase of 40% and 9% in the predictive accuracy of GY under WS for A and AD models, respectively. These results highlight the importance of multi-environment trial analyses using GS models that incorporate additive and dominance effects for genomic predictions of GY under drought in maize single-cross hybrids.
Predicting grain yield using canopy hyperspectral reflectance in wheat breeding data.
Montesinos-López, Osval A; Montesinos-López, Abelardo; Crossa, José; de Los Campos, Gustavo; Alvarado, Gregorio; Suchismita, Mondal; Rutkoski, Jessica; González-Pérez, Lorena; Burgueño, Juan
2017-01-01
Modern agriculture uses hyperspectral cameras to obtain hundreds of reflectance measurements at discrete narrow bands covering the whole visible light spectrum and part of the infrared and ultraviolet light spectra, depending on the camera. This information is used to construct vegetation indices (VIs) (e.g., the green normalized difference vegetation index or GNDVI, the simple ratio or SRa, etc.), which are used for the prediction of primary traits (e.g., biomass). However, these indices use only some bands and are cultivar-specific; therefore they lose considerable information and are not robust for all cultivars. This study proposes models that use all available bands as predictors to increase prediction accuracy; we compared these approaches with eight conventional vegetation indices (VIs) constructed using only some bands. The data set we used comes from CIMMYT's global wheat program and comprises 1170 genotypes evaluated for grain yield (ton/ha) in five environments (Drought, Irrigated, EarlyHeat, Melgas and Reduced Irrigated); the reflectance data were measured in 250 discrete narrow bands ranging between 392 and 851 nm. The proposed models for the simultaneous analysis of all the bands were ordinary least squares (OLS), Bayes B, principal components with Bayes B, functional B-spline, functional Fourier and functional partial least squares. The results of these models were compared with OLS performed using each of the eight VIs, individually and combined, as predictors. We found that using all bands simultaneously increased prediction accuracy more than using VIs alone. The spline and Fourier models had the best prediction accuracy for each of the nine time-points under study. Combining image data collected at different time-points led to a small increase in prediction accuracy relative to models that use data from a single time-point. Also, using only those bands with heritabilities larger than 0.5 as predictor variables in the Drought environment showed improvements in prediction accuracy.
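As a small illustration of the contrast drawn above, the sketch below computes one conventional index (GNDVI) from two bands and, alternatively, fits an ordinary least squares model on all bands at once; the reflectance values and yields are synthetic, not the CIMMYT data.

```python
import numpy as np

def gndvi(nir, green):
    """Green normalized difference vegetation index from NIR and green reflectance."""
    return (nir - green) / (nir + green)

# All-band alternative: ordinary least squares on every narrow band simultaneously.
rng = np.random.default_rng(0)
bands = rng.uniform(0.05, 0.6, size=(1170, 250))           # genotypes x reflectance bands
yield_tha = bands @ rng.normal(scale=0.02, size=250) + 3.0  # synthetic grain yield (ton/ha)
coef, *_ = np.linalg.lstsq(np.c_[np.ones(len(bands)), bands], yield_tha, rcond=None)
print(gndvi(nir=0.45, green=0.12), coef.shape)
```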
Nuutinen, Mikko; Leskelä, Riikka-Leena; Suojalehto, Ella; Tirronen, Anniina; Komssi, Vesa
2017-04-13
In previous years, a substantial number of studies have identified statistically important predictors of nursing home admission (NHA). However, as far as we know, the analyses have been done at the population level. No prior research has analysed the prediction accuracy of a NHA model for individuals. This study is an analysis of 3056 longer-term home care customers in the city of Tampere, Finland. Data were collected from the records of social and health service usage and the RAI-HC (Resident Assessment Instrument - Home Care) assessment system between January 2011 and September 2015. The aim was to find the most efficient variable subsets to predict NHA for individuals and to validate their accuracy. The variable subsets for predicting NHA were searched using the sequential forward selection (SFS) method, a variable ranking metric and the classifiers logistic regression (LR), support vector machine (SVM) and Gaussian naive Bayes (GNB). The validation of the results was ensured using randomly balanced data sets and cross-validation. The primary performance metrics for the classifiers were prediction accuracy and AUC (average area under the curve). The LR and GNB classifiers achieved 78% accuracy for predicting NHA. The most important variables were RAI MAPLE (Method for Assigning Priority Levels), functional impairment (RAI IADL, Instrumental Activities of Daily Living), cognitive impairment (RAI CPS, Cognitive Performance Scale), memory disorders (diagnoses G30-G32 and F00-F03), and the use of community-based health services and prior hospital use (emergency visits and periods of care). The accuracy of the classifier for individuals was high enough to convince the officials of the city of Tampere to integrate the predictive model based on the findings of this study into the home care information system. Further work needs to be done to evaluate variables that are modifiable and responsive to interventions.
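A minimal sketch of sequential forward selection wrapped around a logistic regression classifier, in the spirit of the variable-subset search described above, is shown below using scikit-learn and synthetic data; the study's RAI-HC and service-usage variables are not reproduced here.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1000, n_features=20, n_informative=5, random_state=0)

lr = LogisticRegression(max_iter=1000)
sfs = SequentialFeatureSelector(lr, n_features_to_select=5, direction="forward", cv=5)
sfs.fit(X, y)

X_sel = sfs.transform(X)                           # keep only the selected variable subset
acc = cross_val_score(lr, X_sel, y, cv=5, scoring="accuracy").mean()
print("selected feature indices:", sfs.get_support(indices=True))
print("cross-validated accuracy:", round(acc, 3))
```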
NASA Technical Reports Server (NTRS)
Eckstein, M. P.; Thomas, J. P.; Palmer, J.; Shimozaki, S. S.
2000-01-01
Recently, quantitative models based on signal detection theory have been successfully applied to the prediction of human accuracy in visual search for a target that differs from distractors along a single attribute (feature search). The present paper extends these models for visual search accuracy to multidimensional search displays in which the target differs from the distractors along more than one feature dimension (conjunction, disjunction, and triple conjunction displays). The model assumes that each element in the display elicits a noisy representation for each of the relevant feature dimensions. The observer combines the representations across feature dimensions to obtain a single decision variable, and the stimulus with the maximum value determines the response. The model accurately predicts human experimental data on visual search accuracy in conjunctions and disjunctions of contrast and orientation. The model accounts for performance degradation without resorting to a limited-capacity spatially localized and temporally serial mechanism by which to bind information across feature dimensions.
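A minimal Monte Carlo sketch of the maximum-rule observer described above is given below: each display element contributes a noisy value per relevant feature dimension, the values are combined by summation, and the element with the maximum combined value is chosen; d' and the set sizes are arbitrary illustrative choices.

```python
import numpy as np

def search_accuracy(d_prime, n_distractors, n_dims=2, n_trials=100_000, rng=None):
    """Monte Carlo accuracy for a max-of-sums observer: each display element yields a noisy
    value per feature dimension, values are summed across dimensions, and the element with
    the maximum combined value is chosen. The trial is correct if that element is the target."""
    rng = rng or np.random.default_rng(0)
    target = rng.normal(loc=d_prime, size=(n_trials, 1, n_dims))        # target shifted by d'
    distractors = rng.normal(size=(n_trials, n_distractors, n_dims))
    combined = np.concatenate([target, distractors], axis=1).sum(axis=2)
    return np.mean(combined.argmax(axis=1) == 0)

for set_size in (4, 8, 16):
    print(set_size, round(search_accuracy(d_prime=1.5, n_distractors=set_size - 1), 3))
```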
Adaptive time-variant models for fuzzy-time-series forecasting.
Wong, Wai-Keung; Bai, Enjian; Chu, Alice Wai-Ching
2010-12-01
Fuzzy time series have been applied to the prediction of enrollment, temperature, stock indices, and other domains. Related studies mainly focus on three factors, namely, the partition of the universe of discourse, the content of the forecasting rules, and the methods of defuzzification, all of which greatly influence the prediction accuracy of forecasting models. These studies use fixed analysis window sizes for forecasting. In this paper, an adaptive time-variant fuzzy-time-series forecasting model (ATVF) is proposed to improve forecasting accuracy. The proposed model automatically adapts the analysis window size of the fuzzy time series based on the prediction accuracy in the training phase and uses heuristic rules to generate forecasting values in the testing phase. The performance of the ATVF model is tested using both simulated and actual time series, including the enrollments at the University of Alabama, Tuscaloosa, and the Taiwan Stock Exchange Capitalization Weighted Stock Index (TAIEX). The experimental results show that the proposed ATVF model achieves a significant improvement in forecasting accuracy compared to other fuzzy-time-series forecasting models.
Accuracy of genomic selection in European maize elite breeding populations.
Zhao, Yusheng; Gowda, Manje; Liu, Wenxin; Würschum, Tobias; Maurer, Hans P; Longin, Friedrich H; Ranc, Nicolas; Reif, Jochen C
2012-03-01
Genomic selection is a promising breeding strategy for rapid improvement of complex traits. The objective of our study was to investigate the prediction accuracy of genomic breeding values through cross validation. The study was based on experimental data of six segregating populations from a half-diallel mating design with 788 testcross progenies from an elite maize breeding program. The plants were intensively phenotyped in multi-location field trials and fingerprinted with 960 SNP markers. We used random regression best linear unbiased prediction in combination with fivefold cross validation. The prediction accuracy across populations was higher for grain moisture (0.90) than for grain yield (0.58). The accuracy of genomic selection realized for grain yield corresponds to the precision of phenotyping at unreplicated field trials in 3-4 locations. As up to three generations per year are feasible for maize, selection gain per unit time is high and, consequently, genomic selection holds great promise for maize breeding programs.
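Random regression BLUP with k-fold cross validation can be sketched with ridge regression on mean-centred marker codes, reporting accuracy as the correlation between predicted and observed phenotypes in each validation fold. The simulated genotypes, effect sizes and shrinkage value below are illustrative assumptions, not the study's maize data.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

rng = np.random.default_rng(2)

# Illustrative marker matrix (0/1/2 codes) and polygenic trait; the real study
# used 960 SNPs on 788 testcross progenies.
n, m = 400, 960
M = rng.integers(0, 3, size=(n, m)).astype(float)
y = M @ rng.normal(scale=0.05, size=m) + rng.normal(size=n)

# Ridge regression on mean-centred markers behaves like random regression BLUP
# (RR-BLUP) with a fixed shrinkage parameter.
Mc = M - M.mean(axis=0)
acc = []
for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(Mc):
    model = Ridge(alpha=0.5 * m).fit(Mc[train], y[train])
    # Genomic prediction accuracy = correlation of predicted and observed
    # phenotypes in the validation fold.
    acc.append(np.corrcoef(model.predict(Mc[test]), y[test])[0, 1])
print(f"mean fivefold prediction accuracy r = {np.mean(acc):.2f}")
```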
Kiel, Elizabeth J; Buss, Kristin A
2011-10-01
Early social withdrawal and protective parenting predict a host of negative outcomes, warranting examination of their development. Mothers' accurate anticipation of their toddlers' fearfulness may facilitate transactional relations between toddler fearful temperament and protective parenting, leading to these outcomes. In the current study, we followed 93 toddlers (42 female; on average 24.76 months old) and their mothers (9% from underrepresented racial/ethnic backgrounds) over 3 years. We gathered laboratory observations of fearful temperament, maternal protective behavior, and maternal accuracy during toddlerhood and a multi-method assessment of children's social withdrawal and mothers' self-reported protective behavior at kindergarten entry. When mothers displayed higher accuracy, toddler fearful temperament significantly related to concurrent maternal protective behavior and indirectly predicted kindergarten social withdrawal and maternal protective behavior. These results highlight the important role of maternal accuracy in linking fearful temperament and protective parenting, which predict further social withdrawal and protection, and point to toddlerhood for efforts at prevention of anxiety-spectrum outcomes.
Accuracy of endoscopic ultrasonography for diagnosing ulcerative early gastric cancers
Park, Jin-Seok; Kim, Hyungkil; Bang, Byongwook; Kwon, Kyesook; Shin, Youngwoon
2016-01-01
Although endoscopic ultrasonography (EUS) is the first-choice imaging modality for predicting the invasion depth of early gastric cancer (EGC), the prediction accuracy of EUS is significantly decreased when EGC is combined with ulceration. The aim of the present study was to compare the accuracy of EUS and conventional endoscopy (CE) for determining the depth of EGC. In addition, the clinicopathologic factors affecting the diagnostic accuracy of EUS, with a particular focus on endoscopic ulcer shapes, were evaluated. We retrospectively reviewed data from 236 consecutive patients with ulcerative EGC. All patients underwent EUS for estimating tumor invasion depth, followed by either curative surgery or endoscopic treatment. The diagnostic accuracy of EUS and CE was evaluated by comparison with the final histologic results of the resected specimens. The correlation between the accuracy of EUS and the characteristics of EGC (tumor size, histology, location in the stomach, tumor invasion depth, and endoscopic ulcer shape) was analyzed. Endoscopic ulcer shapes were classified into 3 groups: definite ulcer, superficial ulcer, and ill-defined ulcer. The overall accuracy of EUS and CE for predicting the invasion depth in ulcerative EGC was 68.6% and 55.5%, respectively. Of the 236 patients, 36 were classified as having definite ulcers, 98 superficial ulcers, and 102 ill-defined ulcers. In univariate analysis, EUS accuracy was associated with invasion depth (P = 0.023), tumor size (P = 0.034), and endoscopic ulcer shape (P = 0.001). In multivariate analysis, there was a significant association between superficial ulcer on CE and EUS accuracy (odds ratio: 2.977; 95% confidence interval: 1.255–7.064; P = 0.013). The accuracy of EUS for determining tumor invasion depth in ulcerative EGC was superior to that of CE. In addition, ulcer shape was an important factor that affected EUS accuracy. PMID:27472672
ERIC Educational Resources Information Center
Tout, Hicham
2013-01-01
The majority of documented phishing attacks have been carried out by email, yet few studies have measured the impact of email headers on the predictive accuracy of machine learning techniques in detecting email phishing attacks. Research has shown that the inclusion of a limited subset of email headers as features in training machine learning…
ERIC Educational Resources Information Center
Ball, Carrie R.; O'Connor, Edward
2016-01-01
This study examined the predictive validity and classification accuracy of two commonly used universal screening measures relative to a statewide achievement test. Results indicated that second-grade performance on oral reading fluency and the Measures of Academic Progress (MAP), together with special education status, explained 68% of the…
ERIC Educational Resources Information Center
Morris, Darrell; Pennell, Ashley M.; Perney, Jan; Trathen, Woodrow
2018-01-01
This study compared reading rate to reading fluency (as measured by a rating scale). After listening to first graders read short passages, we assigned an overall fluency rating (low, average, or high) to each reading. We then used predictive discriminant analyses to determine which of five measures--accuracy, rate (objective); accuracy, phrasing,…
A comprehensive comparison of network similarities for link prediction and spurious link elimination
NASA Astrophysics Data System (ADS)
Zhang, Peng; Qiu, Dan; Zeng, An; Xiao, Jinghua
2018-06-01
Identifying missing interactions in complex networks, known as link prediction, is realized by estimating the likelihood of the existence of a link between two nodes according to the observed links and the nodes' attributes. Similar approaches have also been employed to identify and remove spurious links in networks, which is crucial for improving the reliability of network data. In network science, the likelihood of two nodes having a connection strongly depends on their structural similarity. The key to addressing these two problems thus becomes how to objectively measure the similarity between nodes in networks. In the literature, numerous network similarity metrics have been proposed and their accuracy has been discussed independently in previous works. In this paper, we systematically compare the accuracy of 18 similarity metrics in both link prediction and spurious link elimination when the observed networks are very sparse or contain inaccurate linking information. Interestingly, some methods that have high prediction accuracy tend to have low accuracy in identifying spurious interactions. We further find that the methods can be classified into several clusters according to their behaviors. This work is useful for guiding future use of these similarity metrics for different purposes.
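A compact way to see how such similarity metrics are scored for link prediction is to hide a fraction of the observed links, compute a similarity score for node pairs from the remaining adjacency matrix, and measure how often a hidden link outscores a random non-link (a sampled AUC). The sketch below does this for three classic metrics on a toy random graph; the graph, hold-out fraction and choice of metrics are illustrative, not the 18 metrics or data sets from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy undirected graph as an adjacency matrix (stand-in for real network data).
n = 200
A = (rng.random((n, n)) < 0.05).astype(float)
A = np.triu(A, 1); A = A + A.T

# Hold out 10% of observed links as the "missing" links to recover.
edges = np.column_stack(np.nonzero(np.triu(A, 1)))
held = edges[rng.choice(len(edges), size=len(edges) // 10, replace=False)]
A_obs = A.copy()
A_obs[held[:, 0], held[:, 1]] = A_obs[held[:, 1], held[:, 0]] = 0

deg = A_obs.sum(axis=1)

def scores(metric):
    CN = A_obs @ A_obs                                   # common neighbours
    if metric == "common_neighbours":
        return CN
    if metric == "jaccard":
        union = deg[:, None] + deg[None, :] - CN
        return np.divide(CN, union, out=np.zeros_like(CN), where=union > 0)
    if metric == "resource_allocation":
        w = np.divide(1.0, deg, out=np.zeros_like(deg), where=deg > 0)
        return A_obs @ np.diag(w) @ A_obs

def auc(S, n_samples=5000):
    # Probability that a held-out (true) link outscores a random non-link.
    miss = S[held[:, 0], held[:, 1]]
    i, j = rng.integers(0, n, n_samples), rng.integers(0, n, n_samples)
    keep = (i != j) & (A[i, j] == 0)
    non = S[i[keep], j[keep]]
    m = rng.choice(miss, size=len(non))
    return np.mean(m > non) + 0.5 * np.mean(m == non)

for metric in ("common_neighbours", "jaccard", "resource_allocation"):
    print(f"{metric:>20s}: AUC = {auc(scores(metric)):.2f}")
```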
Chiu, Herng-Chia; Ho, Te-Wei; Lee, King-Teh; Chen, Hong-Yaw; Ho, Wen-Hsien
2013-01-01
The aim of the present study was firstly to compare significant predictors of mortality for hepatocellular carcinoma (HCC) patients undergoing resection between artificial neural network (ANN) and logistic regression (LR) models, and secondly to evaluate the predictive accuracy of ANN and LR in different survival-year estimation models. We constructed a prognostic model for 434 patients with 21 potential input variables using a Cox regression model. Model performance was measured by the number of significant predictors and by predictive accuracy. The results indicated that ANN had two to three times as many significant predictors in the 1-, 3-, and 5-year survival models as the LR models. Scores of accuracy, sensitivity, specificity, and area under the receiver operating characteristic curve (AUROC) of the 1-, 3-, and 5-year survival estimation models using ANN were superior to those of LR in all the training sets and most of the validation sets. The study demonstrated that ANN not only had a greater number of significant predictors of mortality but also provided accurate prediction, as compared with conventional methods. It is suggested that physicians consider using data mining methods as supplemental tools for clinical decision-making and prognostic evaluation. PMID:23737707
Banzato, T; Cherubini, G B; Atzori, M; Zotti, A
2018-05-01
An established deep neural network (DNN) based on transfer learning and a newly designed DNN were tested to predict the grade of meningiomas from magnetic resonance (MR) images in dogs and to determine the accuracy of classification using pre- and post-contrast T1-weighted (T1W) and T2-weighted (T2W) MR images. The images were randomly assigned to a training set, a validation set and a test set, comprising 60%, 10% and 30% of the images, respectively. The combination of DNN and MR sequence displaying the highest discriminating accuracy was used to develop an image classifier to predict the grading of new cases. The algorithm based on transfer learning using the established DNN did not provide satisfactory results, whereas the newly designed DNN had high classification accuracy. On the basis of classification accuracy, an image classifier built on the newly designed DNN using post-contrast T1W images was developed. This image classifier correctly predicted the grading of 8 out of 10 images not included in the data set. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.
Using Time Series Analysis to Predict Cardiac Arrest in a PICU.
Kennedy, Curtis E; Aoki, Noriaki; Mariscalco, Michele; Turley, James P
2015-11-01
To build and test cardiac arrest prediction models in a PICU, using time series analysis as input, and to measure changes in prediction accuracy attributable to different classes of time series data. Retrospective cohort study. Thirty-one bed academic PICU that provides care for medical and general surgical (not congenital heart surgery) patients. Patients experiencing a cardiac arrest in the PICU and requiring external cardiac massage for at least 2 minutes. None. One hundred three cases of cardiac arrest and 109 control cases were used to prepare a baseline dataset that consisted of 1,025 variables in four data classes: multivariate, raw time series, clinical calculations, and time series trend analysis. We trained 20 arrest prediction models using a matrix of five feature sets (combinations of data classes) with four modeling algorithms: linear regression, decision tree, neural network, and support vector machine. The reference model (multivariate data with regression algorithm) had an accuracy of 78% and 87% area under the receiver operating characteristic curve. The best model (multivariate + trend analysis data with support vector machine algorithm) had an accuracy of 94% and 98% area under the receiver operating characteristic curve. Cardiac arrest predictions based on a traditional model built with multivariate data and a regression algorithm misclassified cases 3.7 times more frequently than predictions from a model that included time series trend analysis and was built with a support vector machine algorithm. Although the final model lacks the specificity necessary for clinical application, we have demonstrated how information from time series data can be used to increase the accuracy of clinical prediction models.
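The core idea, that adding time series trend features improves on a snapshot-only model, can be sketched with synthetic vital-sign traces: use the last observed value as the snapshot feature and a fitted slope as the trend feature, then compare cross-validated AUC for a regression classifier and an SVM. Everything below (the drift magnitude, window length, noise level) is an illustrative assumption, not the study's 1,025-variable dataset.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(4)

# Synthetic stand-in: one vital-sign series per patient; "arrest" cases drift downward.
n, t = 400, 60
y = rng.integers(0, 2, n)
drift = np.where(y == 1, -0.03, 0.0)[:, None] * np.arange(t)
series = 100 + drift + rng.normal(scale=2.0, size=(n, t))

last_value = series[:, -1:]                              # "multivariate" snapshot feature
slopes = np.polyfit(np.arange(t), series.T, deg=1)[0]    # trend-analysis feature
with_trend = np.column_stack([last_value, slopes])

for name, X in [("snapshot only", last_value), ("snapshot + trend", with_trend)]:
    for clf in (LogisticRegression(), SVC()):
        auc = cross_val_score(make_pipeline(StandardScaler(), clf),
                              X, y, cv=5, scoring="roc_auc").mean()
        print(f"{name:>17s} | {type(clf).__name__:18s} AUC = {auc:.2f}")
```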
Lado, Bettina; Matus, Ivan; Rodríguez, Alejandra; Inostroza, Luis; Poland, Jesse; Belzile, François; del Pozo, Alejandro; Quincke, Martín; Castro, Marina; von Zitzewitz, Jarislav
2013-12-09
In crop breeding, the interest of predicting the performance of candidate cultivars in the field has increased due to recent advances in molecular breeding technologies. However, the complexity of the wheat genome presents some challenges for applying new technologies in molecular marker identification with next-generation sequencing. We applied genotyping-by-sequencing, a recently developed method to identify single-nucleotide polymorphisms, in the genomes of 384 wheat (Triticum aestivum) genotypes that were field tested under three different water regimes in Mediterranean climatic conditions: rain-fed only, mild water stress, and fully irrigated. We identified 102,324 single-nucleotide polymorphisms in these genotypes, and the phenotypic data were used to train and test genomic selection models intended to predict yield, thousand-kernel weight, number of kernels per spike, and heading date. Phenotypic data showed marked spatial variation. Therefore, different models were tested to correct the trends observed in the field. A mixed-model using moving-means as a covariate was found to best fit the data. When we applied the genomic selection models, the accuracy of predicted traits increased with spatial adjustment. Multiple genomic selection models were tested, and a Gaussian kernel model was determined to give the highest accuracy. The best predictions between environments were obtained when data from different years were used to train the model. Our results confirm that genotyping-by-sequencing is an effective tool to obtain genome-wide information for crops with complex genomes, that these data are efficient for predicting traits, and that correction of spatial variation is a crucial ingredient to increase prediction accuracy in genomic selection models.
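A Gaussian (RBF) kernel genomic prediction model of the kind reported to give the highest accuracy can be sketched with kernel ridge regression and fivefold cross-validation, scoring accuracy as the correlation between predicted and observed values. The simulated genotypes, the median-distance bandwidth heuristic and the penalty below are illustrative assumptions, not the study's GBS data or spatially adjusted phenotypes.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.metrics import pairwise_distances
from sklearn.model_selection import KFold

rng = np.random.default_rng(5)

# Illustrative genotype matrix and trait (the study used ~102k GBS SNPs on 384 lines).
n, m = 384, 500
G = rng.integers(0, 3, size=(n, m)).astype(float)
G -= G.mean(axis=0)                               # centre marker codes
y = G @ rng.normal(scale=0.08, size=m) + rng.normal(size=n)

# Gaussian (RBF) kernel with the median-distance heuristic for the bandwidth.
gamma = 1.0 / np.median(pairwise_distances(G, metric="sqeuclidean"))

acc = []
for train, test in KFold(5, shuffle=True, random_state=0).split(G):
    model = KernelRidge(kernel="rbf", alpha=1.0, gamma=gamma).fit(G[train], y[train])
    # Genomic prediction accuracy = correlation of predicted and observed values.
    acc.append(np.corrcoef(model.predict(G[test]), y[test])[0, 1])
print(f"Gaussian-kernel genomic prediction accuracy r = {np.mean(acc):.2f}")
```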
Takahashi, Masahiko; Saito, Hidetsugu; Higashimoto, Makiko; Atsukawa, Kazuhiro; Ishii, Hiromasa
2005-01-01
A highly sensitive second-generation hepatitis C virus (HCV) core antigen assay has recently been developed. We compared viral disappearance and first-phase kinetics between commercially available core antigen (Ag) assays, Lumipulse Ortho HCV Ag (Lumipulse-Ag), and a quantitative HCV RNA PCR assay, Cobas Amplicor HCV Monitor test, version 2 (Amplicor M), to estimate the predictive benefit of a sustained viral response (SVR) and non-SVR in 44 genotype 1b patients treated with interferon (IFN) and ribavirin. HCV core Ag negativity could predict SVR on day 1 (sensitivity = 100%, specificity = 85.0%, accuracy = 86.4%), whereas RNA negativity could predict SVR on day 7 (sensitivity = 100%, specificity = 87.2%, accuracy = 88.6%). None of the patients who had detectable serum core Ag or RNA on day 14 achieved SVR (specificity = 100%). The predictive accuracy on day 14 was higher for RNA negativity (93.2%) than for core Ag negativity (75.0%). The combined predictive criterion of both viral load decline during the first 24 h and basal viral load was also predictive for SVR; the sensitivities of Lumipulse-Ag and Amplicor-M were 45.5 and 47.6%, respectively, and the specificity was 100%. Amplicor-M had better predictive accuracy than Lumipulse-Ag in 2-week disappearance tests because it had better sensitivity. On the other hand, estimates of kinetic parameters were similar regardless of the detection method. Although the correlations between Lumipulse-Ag and Amplicor-M were good both before and 24 h after IFN administration, HCV core Ag seemed to be relatively lower 24 h after IFN administration than before administration. Lumipulse-Ag seems to be useful for detecting the HCV concentration during IFN therapy; however, we still need to understand the characteristics of the assay.
Multivariate prediction of motor diagnosis in Huntington's disease: 12 years of PREDICT-HD.
Long, Jeffrey D; Paulsen, Jane S
2015-10-01
It is well known in Huntington's disease that cytosine-adenine-guanine expansion and age at study entry are predictive of the timing of motor diagnosis. The goal of this study was to assess whether additional motor, imaging, cognitive, functional, psychiatric, and demographic variables measured at study entry increased the ability to predict the risk of motor diagnosis over 12 years. One thousand seventy-eight Huntington's disease gene-expanded carriers (64% female) from the Neurobiological Predictors of Huntington's Disease study were followed up for up to 12 y (mean = 5, standard deviation = 3.3) covering 2002 to 2014. No one had a motor diagnosis at study entry, but 225 (21%) carriers prospectively received a motor diagnosis. Analysis was performed with random survival forests, which is a machine learning method for right-censored data. Adding 34 variables along with cytosine-adenine-guanine and age substantially increased predictive accuracy relative to cytosine-adenine-guanine and age alone. Adding six of the common motor and cognitive variables (total motor score, diagnostic confidence level, Symbol Digit Modalities Test, three Stroop tests) resulted in lower predictive accuracy than the full set, but still had twice the 5-y predictive accuracy than when using cytosine-adenine-guanine and age alone. Additional analysis suggested interactions and nonlinear effects that were characterized in a post hoc Cox regression model. Measurement of clinical variables can substantially increase the accuracy of predicting motor diagnosis over and above cytosine-adenine-guanine and age (and their interaction). Estimated probabilities can be used to characterize progression level and aid in future studies' sample selection. © 2015 The Authors. Movement Disorders published by Wiley Periodicals, Inc. on behalf of International Parkinson and Movement Disorder Society.
Rutter, Carolyn M; Knudsen, Amy B; Marsh, Tracey L; Doria-Rose, V Paul; Johnson, Eric; Pabiniak, Chester; Kuntz, Karen M; van Ballegooijen, Marjolein; Zauber, Ann G; Lansdorp-Vogelaar, Iris
2016-07-01
Microsimulation models synthesize evidence about disease processes and interventions, providing a method for predicting long-term benefits and harms of prevention, screening, and treatment strategies. Because models often require assumptions about unobservable processes, assessing a model's predictive accuracy is important. We validated 3 colorectal cancer (CRC) microsimulation models against outcomes from the United Kingdom Flexible Sigmoidoscopy Screening (UKFSS) Trial, a randomized controlled trial that examined the effectiveness of one-time flexible sigmoidoscopy screening to reduce CRC mortality. The models incorporate different assumptions about the time from adenoma initiation to development of preclinical and symptomatic CRC. Analyses compare model predictions to study estimates across a range of outcomes to provide insight into the accuracy of model assumptions. All 3 models accurately predicted the relative reduction in CRC mortality 10 years after screening (predicted hazard ratios, with 95% percentile intervals: 0.56 [0.44, 0.71], 0.63 [0.51, 0.75], 0.68 [0.53, 0.83]; estimated with 95% confidence interval: 0.56 [0.45, 0.69]). Two models with longer average preclinical duration accurately predicted the relative reduction in 10-year CRC incidence. Two models with longer mean sojourn time accurately predicted the number of screen-detected cancers. All 3 models predicted too many proximal adenomas among patients referred to colonoscopy. Model accuracy can only be established through external validation. Analyses such as these are therefore essential for any decision model. Results supported the assumptions that the average time from adenoma initiation to development of preclinical cancer is long (up to 25 years), and mean sojourn time is close to 4 years, suggesting the window for early detection and intervention by screening is relatively long. Variation in dwell time remains uncertain and could have important clinical and policy implications. © The Author(s) 2016.
Carlisle, D.M.; Falcone, J.; Meador, M.R.
2009-01-01
We developed and evaluated empirical models to predict biological condition of wadeable streams in a large portion of the eastern USA, with the ultimate goal of prediction for unsampled basins. Previous work had classified (i.e., altered vs. unaltered) the biological condition of 920 streams based on a biological assessment of macroinvertebrate assemblages. Predictor variables were limited to widely available geospatial data, which included land cover, topography, climate, soils, societal infrastructure, and potential hydrologic modification. We compared the accuracy of predictions of biological condition class based on models with continuous and binary responses. We also evaluated the relative importance of specific groups and individual predictor variables, as well as the relationships between the most important predictors and biological condition. Prediction accuracy and the relative importance of predictor variables were different for two subregions for which models were created. Predictive accuracy in the highlands region improved by including predictors that represented both natural and human activities. Riparian land cover and road-stream intersections were the most important predictors. In contrast, predictive accuracy in the lowlands region was best for models limited to predictors representing natural factors, including basin topography and soil properties. Partial dependence plots revealed complex and nonlinear relationships between specific predictors and the probability of biological alteration. We demonstrate a potential application of the model by predicting biological condition in 552 unsampled basins across an ecoregion in southeastern Wisconsin (USA). Estimates of the likelihood of biological condition of unsampled streams could be a valuable tool for screening large numbers of basins to focus targeted monitoring of potentially unaltered or altered stream segments. © Springer Science+Business Media B.V. 2008.
Preciat Gonzalez, German A; El Assal, Lemmer R P; Noronha, Alberto; Thiele, Ines; Haraldsdóttir, Hulda S; Fleming, Ronan M T
2017-06-14
The mechanism of each chemical reaction in a metabolic network can be represented as a set of atom mappings, each of which relates an atom in a substrate metabolite to an atom of the same element in a product metabolite. Genome-scale metabolic network reconstructions typically represent biochemistry at the level of reaction stoichiometry. However, a more detailed representation at the underlying level of atom mappings opens the possibility for a broader range of biological, biomedical and biotechnological applications than with stoichiometry alone. Complete manual acquisition of atom mapping data for a genome-scale metabolic network is a laborious process. However, many algorithms exist to predict atom mappings. How do their predictions compare to each other and to manually curated atom mappings? For more than four thousand metabolic reactions in the latest human metabolic reconstruction, Recon 3D, we compared the atom mappings predicted by six atom mapping algorithms. We also compared these predictions to those obtained by manual curation of atom mappings for over five hundred reactions distributed among all top level Enzyme Commission number classes. Five of the evaluated algorithms had similarly high prediction accuracy of over 91% when compared to manually curated atom mapped reactions. On average, the accuracy of the prediction was highest for reactions catalysed by oxidoreductases and lowest for reactions catalysed by ligases. In addition to prediction accuracy, the algorithms were evaluated on their accessibility, their advanced features, such as the ability to identify equivalent atoms, and their ability to map hydrogen atoms. In addition to prediction accuracy, we found that software accessibility and advanced features were fundamental to the selection of an atom mapping algorithm in practice.
DOE Office of Scientific and Technical Information (OSTI.GOV)
O'Callaghan, Michael E., E-mail: elspeth.raymond@health.sa.gov.au; Freemasons Foundation Centre for Men's Health, University of Adelaide; Urology Unit, Repatriation General Hospital, SA Health, Flinders Centre for Innovation in Cancer
Purpose: To identify, through a systematic review, all validated tools used for the prediction of patient-reported outcome measures (PROMs) in patients being treated with radiation therapy for prostate cancer, and provide a comparative summary of accuracy and generalizability. Methods and Materials: PubMed and EMBASE were searched from July 2007. Title/abstract screening, full text review, and critical appraisal were undertaken by 2 reviewers, whereas data extraction was performed by a single reviewer. Eligible articles had to provide a summary measure of accuracy and undertake internal or external validation. Tools were recommended for clinical implementation if they had been externally validated and found to have accuracy ≥70%. Results: The search strategy identified 3839 potential studies, of which 236 progressed to full text review and 22 were included. From these studies, 50 tools predicted gastrointestinal/rectal symptoms, 29 tools predicted genitourinary symptoms, 4 tools predicted erectile dysfunction, and no tools predicted quality of life. For patients treated with external beam radiation therapy, 3 tools could be recommended for the prediction of rectal toxicity, gastrointestinal toxicity, and erectile dysfunction. For patients treated with brachytherapy, 2 tools could be recommended for the prediction of urinary retention and erectile dysfunction. Conclusions: A large number of tools for the prediction of PROMs in prostate cancer patients treated with radiation therapy have been developed. Only a small minority are accurate and have been shown to be generalizable through external validation. This review provides an accessible catalogue of tools that are ready for clinical implementation as well as which should be prioritized for validation.
Predicting one repetition maximum equations accuracy in paralympic rowers with motor disabilities.
Schwingel, Paulo A; Porto, Yuri C; Dias, Marcelo C M; Moreira, Mônica M; Zoppi, Cláudio C
2009-05-01
Resistance training intensity is prescribed using percentages of maximum strength, defined as the maximum tension generated by a muscle or muscle group. This value is found through the application of the one repetition maximum (1RM) test. The 1RM test demands time and is not appropriate for some populations because of the risk it poses. In recent years, prediction equations for maximal strength have been used to avoid the inconveniences of the 1RM test. The purpose of this study was to verify the accuracy of 12 1RM prediction equations for disabled rowers. Nine male paralympic rowers (7 one-leg amputated rowers and 2 rowers with cerebral palsy; age, 30 +/- 7.9 years; height, 175.1 +/- 5.9 cm; weight, 69 +/- 13.6 kg) performed the 1RM test for the lying T-bar row and flat barbell bench press exercises to determine upper-body strength and the leg press exercise to determine lower-body strength. The 1RM test was performed and, based on submaximal repetition loads, several linear and exponential equation models were tested with regard to their accuracy. We did not find statistical differences between measured and predicted 1RM values for the lying T-bar row and bench press exercises (p = 0.84 and 0.23 for lying T-bar row and flat barbell bench press, respectively); however, the leg press exercise showed a highly significant difference between measured and predicted values (p < 0.01). In conclusion, rowers with motor disabilities tolerate 1RM testing procedures, and 1RM prediction equations are accurate for the bench press and lying T-bar row, but not for the leg press, in this kind of athlete.
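The abstract does not list the 12 equations that were tested, but two widely cited 1RM prediction equations, Epley and Brzycki, illustrate the general form: estimate the one-repetition maximum from a submaximal load and the number of repetitions completed. The load and repetition count below are made-up example inputs, not the rowers' data.

```python
def epley_1rm(load_kg: float, reps: int) -> float:
    """Epley equation: 1RM ~= load * (1 + reps / 30)."""
    return load_kg * (1 + reps / 30)

def brzycki_1rm(load_kg: float, reps: int) -> float:
    """Brzycki equation: 1RM ~= load * 36 / (37 - reps)."""
    return load_kg * 36 / (37 - reps)

# Example: 80 kg lifted for 6 submaximal repetitions.
print(f"Epley:   {epley_1rm(80, 6):.1f} kg")
print(f"Brzycki: {brzycki_1rm(80, 6):.1f} kg")
```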
Achievable accuracy of hip screw holding power estimation by insertion torque measurement.
Erani, Paolo; Baleani, Massimiliano
2018-02-01
To ensure stability of proximal femoral fractures, the hip screw must firmly engage the femoral head. Some studies have suggested that screw holding power in trabecular bone could be evaluated intraoperatively through measurement of the screw insertion torque. However, those studies used synthetic bone, instead of trabecular bone, as the host material, or they did not evaluate the accuracy of the predictions. We determined prediction accuracy, also assessing the impact of screw design and host material. We measured, under highly repeatable experimental conditions disregarding the complexities of the clinical procedure, the insertion torque and pullout strength of four screw designs, in 120 synthetic and 80 trabecular bone specimens of variable density. For both host materials, we calculated the root-mean-square error and the mean absolute percentage error of predictions based on the best-fitting model of the torque-pullout data, for both single-screw and merged datasets. Predictions based on screw-specific regression models were the most accurate. Host material impacts prediction accuracy: the replacement of synthetic with trabecular bone decreased both the root-mean-square errors, from 0.54–0.76 kN to 0.21–0.40 kN, and the mean absolute percentage errors, from 14–21% to 10–12%. However, holding power predicted from low insertion torque remained inaccurate, with errors up to 40% for torques below 1 Nm. In poor-quality trabecular bone, tissue inhomogeneities likely affect pullout strength and insertion torque to different extents, limiting the predictive power of the latter. This bias decreases when the screw engages good-quality bone. Under this condition, predictions become more accurate, although this result must be confirmed by close in-vitro simulation of the clinical procedure. Copyright © 2018 Elsevier Ltd. All rights reserved.
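The error metrics used here (root-mean-square error and mean absolute percentage error of pullout strength predicted from insertion torque) can be illustrated with a small regression sketch. The power-law model form, the simulated torque-pullout pairs and the noise level are assumptions for illustration, not the measured specimen data or the paper's best-fitting model.

```python
import numpy as np

rng = np.random.default_rng(6)

# Illustrative torque (Nm) and pullout strength (kN) pairs; the study fitted
# screw-specific regressions to measured specimens instead.
torque = rng.uniform(0.5, 4.0, 80)
pullout = 0.9 * torque ** 0.8 + rng.normal(scale=0.15, size=80)

# Assumed power-law model, fitted as a straight line in log-log space.
slope, intercept = np.polyfit(np.log(torque), np.log(np.clip(pullout, 1e-6, None)), 1)
predicted = np.exp(intercept) * torque ** slope

rmse = np.sqrt(np.mean((predicted - pullout) ** 2))              # kN
mape = 100.0 * np.mean(np.abs((predicted - pullout) / pullout))  # %
print(f"RMSE = {rmse:.2f} kN, MAPE = {mape:.1f} %")
```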
Data Prediction for Public Events in Professional Domains Based on Improved RNN- LSTM
NASA Astrophysics Data System (ADS)
Song, Bonan; Fan, Chunxiao; Wu, Yuexin; Sun, Juanjuan
2018-02-01
Traditional data services for the prediction of emergency or non-periodic events usually cannot generate satisfactory results or fulfill the intended prediction purpose. However, these events are influenced by external causes, which means that certain a priori information about them can generally be collected through the Internet. This paper studied the above problems and proposed an improved model, an LSTM (Long Short-Term Memory) dynamic prediction and a priori information sequence generation model, by combining RNN-LSTM with a priori information about public events. In prediction tasks, the model is capable of determining trends, and its accuracy is validated. This model achieves better performance and prediction results than the previous one. Using a priori information can increase the accuracy of prediction; LSTM can better adapt to changes in the time sequence; and LSTM can be widely applied to the same type of prediction tasks as well as to other prediction tasks related to time sequences.
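A minimal version of the idea, an LSTM that consumes both the target series and an exogenous a priori signal as input channels, is sketched below with Keras. The toy series, the 3-step lag linking the prior signal to the counts, the window length and the network size are all illustrative assumptions, not the paper's architecture or data.

```python
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(7)

# Toy event-count series partly driven by an exogenous "a priori" signal
# (e.g., related web activity); both series are illustrative, not real data.
T, window = 500, 20
prior = np.clip(rng.normal(size=T).cumsum(), -5, 5)
counts = 10 + 2 * np.roll(prior, 3) + rng.normal(scale=0.5, size=T)

# Each sample is a (window, 2) slice: channel 0 = past counts, channel 1 = prior signal.
X = np.stack([np.column_stack([counts[i:i + window], prior[i:i + window]])
              for i in range(T - window)])
y = counts[window:]

model = tf.keras.Sequential([
    tf.keras.Input(shape=(window, 2)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X[:-50], y[:-50], epochs=5, batch_size=32, verbose=0)
print("held-out MSE:", float(model.evaluate(X[-50:], y[-50:], verbose=0)))
```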
Predicting coronary artery disease using different artificial neural network models.
Colak, M Cengiz; Colak, Cemil; Kocatürk, Hasan; Sağiroğlu, Seref; Barutçu, Irfan
2008-08-01
Eight different learning algorithms used for creating artificial neural network (ANN) models, and the resulting ANN models for the prediction of coronary artery disease (CAD), are introduced. This work was carried out as a retrospective case-control study. Overall, 124 consecutive patients who had been diagnosed with CAD by coronary angiography (at least 1 coronary stenosis > 50% in major epicardial arteries) were enrolled in the work. Angiographically, 113 people (group 2) with normal coronary arteries were taken as control subjects. A multilayer perceptron ANN architecture was applied. The ANN models trained with different learning algorithms were evaluated on 237 records, divided into training (n=171) and testing (n=66) data sets. The performance of prediction was evaluated by sensitivity, specificity and accuracy values based on standard definitions. The results demonstrate that ANN models trained with eight different learning algorithms are promising because of high (greater than 71%) sensitivity, specificity and accuracy values in the prediction of CAD. Accuracy, sensitivity and specificity values varied between 83.63%-100%, 86.46%-100% and 74.67%-100% for training, respectively. For testing, the values were more than 71% for sensitivity, 76% for specificity and 81% for accuracy. It may be proposed that the use of learning algorithms other than backpropagation and larger sample sizes can improve the performance of prediction. The proposed ANN models trained with these learning algorithms could be used as a promising approach for predicting CAD without the need for invasive diagnostic methods and could help in prognostic clinical decision-making.
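A rough analogue of the workflow, a multilayer perceptron evaluated by sensitivity, specificity and accuracy on a held-out test set, is sketched below with scikit-learn. Note that sklearn's MLPClassifier exposes a few solvers (adam, sgd, lbfgs) rather than the eight learning algorithms compared in the paper, and the data here are a synthetic stand-in for the clinical variables.

```python
from sklearn.datasets import make_classification
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for the clinical/angiographic variables (not the study data).
X, y = make_classification(n_samples=237, n_features=20, n_informative=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=171, random_state=0)

clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000,
                                  random_state=0))
clf.fit(X_tr, y_tr)

tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
print(f"sensitivity = {tp / (tp + fn):.2f}")
print(f"specificity = {tn / (tn + fp):.2f}")
print(f"accuracy    = {(tp + tn) / (tp + tn + fp + fn):.2f}")
```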
Bailey, Drew H.; Littlefield, Andrew; Geary, David C.
2012-01-01
The ability to retrieve basic arithmetic facts from long-term memory contributes to individual and perhaps sex differences in mathematics achievement. The current study tracked the co-development of the preference for using retrieval over other strategies to solve single-digit addition problems, independent of accuracy, and skilled use of retrieval (i.e., accuracy and RT) from first to sixth grade, inclusive (n = 311). Accurate retrieval in first grade was related to working memory capacity and intelligence and predicted a preference for retrieval in second grade. In later grades, the relation between skill and preference changed such that preference in one grade predicted accuracy and RT in the next, as RT and accuracy continued to predict future gains in preference. In comparison to girls, boys had a consistent preference for retrieval over other strategies and had faster retrieval speeds, but the sex difference in retrieval accuracy varied across grades. Results indicate that ability influences early skilled retrieval but that practice and skill influence each other in a feedback loop later in development, and provide insights into the source of the sex difference in problem-solving approaches. PMID:22704036
Orbit Determination for the Lunar Reconnaissance Orbiter Using an Extended Kalman Filter
NASA Technical Reports Server (NTRS)
Slojkowski, Steven; Lowe, Jonathan; Woodburn, James
2015-01-01
Orbit determination (OD) analysis results are presented for the Lunar Reconnaissance Orbiter (LRO) using a commercially available Extended Kalman Filter, Analytical Graphics' Orbit Determination Tool Kit (ODTK). Process noise models for lunar gravity and solar radiation pressure (SRP) are described and OD results employing the models are presented. Definitive accuracy using ODTK meets mission requirements and is better than that achieved using the operational LRO OD tool, the Goddard Trajectory Determination System (GTDS). Results demonstrate that a Vasicek stochastic model produces better estimates of the coefficient of solar radiation pressure than a Gauss-Markov model, and prediction accuracy using a Vasicek model meets mission requirements over the analysis span. Modeling the effect of antenna motion on range-rate tracking considerably improves residuals and filter-smoother consistency. Inclusion of off-axis SRP process noise and generalized process noise improves filter performance for both definitive and predicted accuracy. Definitive accuracy from the smoother is better than achieved using GTDS and is close to that achieved by precision OD methods used to generate definitive science orbits. Use of a multi-plate dynamic spacecraft area model with ODTK's force model plugin capability provides additional improvements in predicted accuracy.
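The difference between the two stochastic models compared here can be seen in a short simulation: a first-order Gauss-Markov process is exponentially correlated and reverts toward zero, whereas a Vasicek (Ornstein-Uhlenbeck) process reverts toward a non-zero long-run mean, which suits a parameter such as the solar radiation pressure coefficient. The time step, correlation time, noise strength and mean level below are illustrative values, not LRO's tuned filter parameters.

```python
import numpy as np

rng = np.random.default_rng(8)

# Compare the two stochastic models often used for slowly varying parameters
# such as a solar-radiation-pressure coefficient Cr (values are illustrative).
dt, steps = 60.0, 5000          # seconds, number of steps
tau, sigma = 86400.0, 0.02      # correlation time and noise strength
cr_mean = 1.3                   # long-run Cr level the Vasicek model reverts to

gm = np.empty(steps); vas = np.empty(steps)
gm[0] = vas[0] = 1.3
phi = np.exp(-dt / tau)
for k in range(steps - 1):
    w = rng.normal()
    # First-order Gauss-Markov: exponentially correlated, reverts toward zero.
    gm[k + 1] = phi * gm[k] + sigma * np.sqrt(1 - phi**2) * w
    # Vasicek (Ornstein-Uhlenbeck with non-zero mean): reverts toward cr_mean.
    vas[k + 1] = vas[k] + (cr_mean - vas[k]) * dt / tau + sigma * np.sqrt(dt / tau) * w

print(f"Gauss-Markov mean over run: {gm.mean():.2f}  (decays toward 0)")
print(f"Vasicek mean over run:      {vas.mean():.2f}  (stays near {cr_mean})")
```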
Foley, Alana E; Vasilyeva, Marina; Laski, Elida V
2017-06-01
This study examined the mediating role of children's use of decomposition strategies in the relation between visuospatial memory (VSM) and arithmetic accuracy. Children (N = 78; Age M = 9.36) completed assessments of VSM, arithmetic strategies, and arithmetic accuracy. Consistent with previous findings, VSM predicted arithmetic accuracy in children. Extending previous findings, the current study showed that the relation between VSM and arithmetic performance was mediated by the frequency of children's use of decomposition strategies. Identifying the role of arithmetic strategies in this relation has implications for increasing the math performance of children with lower VSM. Statement of contribution What is already known on this subject? The link between children's visuospatial working memory and arithmetic accuracy is well documented. Frequency of decomposition strategy use is positively related to children's arithmetic accuracy. Children's spatial skill positively predicts the frequency with which they use decomposition. What does this study add? Short-term visuospatial memory (VSM) positively relates to the frequency of children's decomposition use. Decomposition use mediates the relation between short-term VSM and arithmetic accuracy. Children with limited short-term VSM may struggle to use decomposition, decreasing accuracy. © 2016 The British Psychological Society.
Protein location prediction using atomic composition and global features of the amino acid sequence
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cherian, Betsy Sheena, E-mail: betsy.skb@gmail.com; Nair, Achuthsankar S.
2010-01-22
Subcellular location of a protein is constructive information for determining its function, screening for drug candidates, vaccine design, annotation of gene products and selecting relevant proteins for further studies. Computational prediction of subcellular localization deals with predicting the location of a protein from its amino acid sequence. For a computational localization prediction method to be more accurate, it should exploit all possible relevant biological features that contribute to subcellular localization. In this work, we extracted the biological features from the full-length protein sequence to incorporate more biological information. A new biological feature, the distribution of atomic composition, is effectively used together with multiple physicochemical properties, amino acid composition, three-part amino acid composition, and sequence similarity for predicting the subcellular location of the protein. Support vector machines are designed for four modules and the prediction is made by a weighted voting system. Our system makes predictions with accuracies of 100%, 82.47% and 88.81% for the self-consistency test, jackknife test and independent data test, respectively. Our results provide evidence that prediction based on biological features derived from the full-length amino acid sequence gives better accuracy than prediction based on the N-terminal sequence alone. Considering the features as a distribution over the entire sequence brings out the underlying property distribution in greater detail and enhances the prediction accuracy.
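One module of such a system, amino acid composition features over the full-length sequence feeding a support vector machine, can be sketched as follows. The toy sequences, the compositional bias used to create two "locations", and the single-module setup are illustrative assumptions; the paper combines several feature groups (including the atomic composition distribution) through weighted voting.

```python
from collections import Counter

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def aa_composition(seq: str) -> np.ndarray:
    """Fraction of each of the 20 amino acids over the full-length sequence."""
    counts = Counter(seq)
    return np.array([counts.get(a, 0) / len(seq) for a in AMINO_ACIDS])

# Toy sequences for two "locations"; real systems use curated localization data.
rng = np.random.default_rng(9)
def toy_seq(bias):
    probs = np.full(20, 1.0); probs[:5] += bias; probs /= probs.sum()
    return "".join(rng.choice(list(AMINO_ACIDS), size=200, p=probs))

seqs = [toy_seq(0.8) for _ in range(60)] + [toy_seq(0.0) for _ in range(60)]
y = np.array([0] * 60 + [1] * 60)
X = np.array([aa_composition(s) for s in seqs])

# One SVM module over one feature group; a full system would combine several
# such modules through weighted voting.
print("CV accuracy:", cross_val_score(SVC(kernel="rbf", C=10), X, y, cv=5).mean().round(2))
```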
2011-01-01
Background Molecular marker information is a common source to draw inferences about the relationship between genetic and phenotypic variation. Genetic effects are often modelled as additively acting marker allele effects. The true mode of biological action can, of course, be different from this plain assumption. One possibility to better understand the genetic architecture of complex traits is to include intra-locus (dominance) and inter-locus (epistasis) interaction of alleles as well as the additive genetic effects when fitting a model to a trait. Several Bayesian MCMC approaches exist for the genome-wide estimation of genetic effects with high accuracy of genetic value prediction. Including pairwise interaction for thousands of loci would probably go beyond the scope of such a sampling algorithm because then millions of effects are to be estimated simultaneously leading to months of computation time. Alternative solving strategies are required when epistasis is studied. Methods We extended a fast Bayesian method (fBayesB), which was previously proposed for a purely additive model, to include non-additive effects. The fBayesB approach was used to estimate genetic effects on the basis of simulated datasets. Different scenarios were simulated to study the loss of accuracy of prediction, if epistatic effects were not simulated but modelled and vice versa. Results If 23 QTL were simulated to cause additive and dominance effects, both fBayesB and a conventional MCMC sampler BayesB yielded similar results in terms of accuracy of genetic value prediction and bias of variance component estimation based on a model including additive and dominance effects. Applying fBayesB to data with epistasis, accuracy could be improved by 5% when all pairwise interactions were modelled as well. The accuracy decreased more than 20% if genetic variation was spread over 230 QTL. In this scenario, accuracy based on modelling only additive and dominance effects was generally superior to that of the complex model including epistatic effects. Conclusions This simulation study showed that the fBayesB approach is convenient for genetic value prediction. Jointly estimating additive and non-additive effects (especially dominance) has reasonable impact on the accuracy of prediction and the proportion of genetic variation assigned to the additive genetic source. PMID:21867519
Remmers, John E; Topor, Zbigniew; Grosse, Joshua; Vranjes, Nikola; Mosca, Erin V; Brant, Rollin; Bruehlmann, Sabina; Charkhandeh, Shouresh; Zareian Jahromi, Seyed Abdolali
2017-07-15
Mandibular protruding oral appliances represent a potentially important therapy for obstructive sleep apnea (OSA). However, their clinical utility is limited by a less-than-ideal efficacy rate and uncertainty regarding an efficacious mandibular position, pointing to the need for a tool to assist in delivery of the therapy. The current study assesses the ability to prospectively identify therapeutic responders and determine an efficacious mandibular position. Individuals (n = 202) with OSA participated in a blinded, 2-part investigation. A system for identifying therapeutic responders was developed in part 1 (n = 149); the predictive accuracy of this system was prospectively evaluated on a new population in part 2 (n = 53). Each participant underwent a 2-night, in-home feedback-controlled mandibular positioner (FCMP) test, followed by treatment with a custom oral appliance and an outcome study with the oral appliance in place. A machine learning classification system was trained to predict therapeutic outcome on data obtained from FCMP studies on part 1 participants. The accuracy of this trained system was then evaluated on part 2 participants by examining the agreement between prospectively predicted outcome and observed outcome. A predicted efficacious mandibular position was derived from each FCMP study. Predictive accuracy was as follows: sensitivity 85%; specificity 93%; positive predictive value 97%; and negative predictive value 72%. Of participants correctly predicted to respond to therapy, the predicted mandibular protrusive position proved efficacious in 86% of cases. An unattended, in-home FCMP test prospectively identifies individuals with OSA who will respond to oral appliance therapy and provides an efficacious mandibular position. The trial that this study reports on is registered on www.clinicaltrials.gov, ID NCT03011762, study name: Feasibility and Predictive Accuracy of an In-Home Computer Controlled Mandibular Positioner in Identifying Favourable Candidates for Oral Appliance Therapy. © 2017 American Academy of Sleep Medicine
NMRDSP: an accurate prediction of protein shape strings from NMR chemical shifts and sequence data.
Mao, Wusong; Cong, Peisheng; Wang, Zhiheng; Lu, Longjian; Zhu, Zhongliang; Li, Tonghua
2013-01-01
A shape string is a structural sequence and an important representation of protein backbone conformations. Nuclear magnetic resonance chemical shifts show a strong correlation with local protein structure and are exploited to predict protein structures in conjunction with computational approaches. Here we demonstrate a novel approach, NMRDSP, which can accurately predict the protein shape string based on nuclear magnetic resonance chemical shifts and structural profiles obtained from sequence data. NMRDSP uses six chemical shifts (HA, H, N, CA, CB and C) and eight elements of structure profiles as features, a non-redundant set (1,003 entries) as the training set, and a conditional random field as the classification algorithm. For an independent testing set (203 entries), we achieved an accuracy of 75.8% for S8 (the eight-state accuracy) and 87.8% for S3 (the three-state accuracy). This is higher than using only chemical shifts or only sequence data, and confirms that the chemical shift and the structure profile are significant features for shape string prediction and that their combination prominently improves the accuracy of the predictor. We have constructed the NMRDSP web server and believe it could be employed to provide a solid platform to predict other protein structures and functions. The NMRDSP web server is freely available at http://cal.tongji.edu.cn/NMRDSP/index.jsp.
A new self-report inventory of dyslexia for students: criterion and construct validity.
Tamboer, Peter; Vorst, Harrie C M
2015-02-01
The validity of a Dutch self-report inventory of dyslexia was ascertained in two samples of students. Six biographical questions, 20 general language statements and 56 specific language statements were based on dyslexia as a multi-dimensional deficit. Dyslexia and non-dyslexia were assessed with two criteria: identification with test results (Sample 1) and classification using biographical information (both samples). Using discriminant analyses, these criteria were predicted with various groups of statements. All together, 11 discriminant functions were used to estimate classification accuracy of the inventory. In Sample 1, 15 statements predicted the test criterion with classification accuracy of 98%, and 18 statements predicted the biographical criterion with classification accuracy of 97%. In Sample 2, 16 statements predicted the biographical criterion with classification accuracy of 94%. Estimations of positive and negative predictive value were 89% and 99%. Items of various discriminant functions were factor analysed to find characteristic difficulties of students with dyslexia, resulting in a five-factor structure in Sample 1 and a four-factor structure in Sample 2. Answer bias was investigated with measures of internal consistency reliability. Less than 20 self-report items are sufficient to accurately classify students with and without dyslexia. This supports the usefulness of self-assessment of dyslexia as a valid alternative to diagnostic test batteries. Copyright © 2015 John Wiley & Sons, Ltd.
Influence of sex and ethnic tooth-size differences on mixed-dentition space analysis
Altherr, Edward R.; Koroluk, Lorne D.; Phillips, Ceib
2013-01-01
Introduction Most mixed-dentition space analyses were developed using subjects of northwestern European descent and unspecified sex. The purpose of this study was to determine the predictive accuracy of the Tanaka-Johnston analysis in white and black subjects in North Carolina. Methods A total of 120 subjects (30 males and 30 females in each ethnic group) were recruited from clinics at the University of North Carolina School of Dentistry. Ethnicity was verified to 2 previous generations. All subjects were less than 21 years of age and had a full complement of permanent teeth. Digital calipers were used to measure the mesiodistal widths of all teeth on study models fabricated from alginate impressions. The predicted widths of the canines and the premolars in both arches were compared with the actual measured widths. Results In the maxillary arch, there was a significant interaction of ethnicity and sex on the predictive accuracy of the Tanaka-Johnston analysis (P = .03, factorial ANOVA). The predicted widths were significantly overestimated in the white female group (P <.001, least square means). In the mandibular arch, there was no significant interaction between ethnicity and sex (P = .49). Conclusions The Tanaka-Johnston analysis significantly overestimated the widths in females (P <.0001) and underestimated them in blacks (P <.0001) (factorial ANOVA). Regression equations were developed to increase the predictive accuracy in both arches. (Am J Orthod Dentofacial Orthop 2007;132:332-9) PMID:17826601
Austin, Peter C; Lee, Douglas S
2011-01-01
Purpose: Classification trees are increasingly being used to classify patients according to the presence or absence of a disease or health outcome. A limitation of classification trees is their limited predictive accuracy. In the data-mining and machine learning literature, boosting has been developed to improve classification. Boosting with classification trees iteratively grows classification trees in a sequence of reweighted datasets. In a given iteration, subjects that were misclassified in the previous iteration are weighted more highly than subjects that were correctly classified. Classifications from each of the classification trees in the sequence are combined through a weighted majority vote to produce a final classification. The authors' objective was to examine whether boosting improved the accuracy of classification trees for predicting outcomes in cardiovascular patients. Methods: We examined the utility of boosting classification trees for classifying 30-day mortality outcomes in patients hospitalized with either acute myocardial infarction or congestive heart failure. Results: Improvements in the misclassification rate using boosted classification trees were at best minor compared to when conventional classification trees were used. Minor to modest improvements in sensitivity were observed, with only a negligible reduction in specificity. For predicting cardiovascular mortality, boosted classification trees had high specificity but low sensitivity. Conclusions: Gains in predictive accuracy for predicting cardiovascular outcomes were less impressive than the gains in performance observed in the data mining literature. PMID:22254181
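The boosting procedure described here, iteratively reweighting misclassified cases and combining trees by weighted vote, corresponds to AdaBoost-style boosting. The sketch below compares a single pruned classification tree with boosted trees on synthetic imbalanced data; scikit-learn's AdaBoostClassifier boosts shallow trees (decision stumps by default), and the data and tree depth are illustrative assumptions rather than the authors' cardiovascular cohorts.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Synthetic, imbalanced stand-in for the 30-day mortality data used in the paper.
X, y = make_classification(n_samples=2000, n_features=25, n_informative=10,
                           weights=[0.9, 0.1], random_state=0)

single_tree = DecisionTreeClassifier(max_depth=4, random_state=0)
boosted = AdaBoostClassifier(n_estimators=200, random_state=0)   # boosts decision stumps

for name, clf in [("single classification tree", single_tree),
                  ("boosted classification trees", boosted)]:
    auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()
    err = 1 - cross_val_score(clf, X, y, cv=5, scoring="accuracy").mean()
    print(f"{name:>29s}: misclassification = {err:.3f}, AUC = {auc:.3f}")
```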
E-nose based rapid prediction of early mouldy grain using probabilistic neural networks
Ying, Xiaoguo; Liu, Wei; Hui, Guohua; Fu, Jun
2015-01-01
In this paper, early mouldy grain rapid prediction method using probabilistic neural network (PNN) and electronic nose (e-nose) was studied. E-nose responses to rice, red bean, and oat samples with different qualities were measured and recorded. E-nose data was analyzed using principal component analysis (PCA), back propagation (BP) network, and PNN, respectively. Results indicated that PCA and BP network could not clearly discriminate grain samples with different mouldy status and showed poor predicting accuracy. PNN showed satisfying discriminating abilities to grain samples with an accuracy of 93.75%. E-nose combined with PNN is effective for early mouldy grain prediction. PMID:25714125
Dynamic filtering improves attentional state prediction with fNIRS
Harrivel, Angela R.; Weissman, Daniel H.; Noll, Douglas C.; Huppert, Theodore; Peltier, Scott J.
2016-01-01
Brain activity can predict a person’s level of engagement in an attentional task. However, estimates of brain activity are often confounded by measurement artifacts and systemic physiological noise. The optimal method for filtering this noise – thereby increasing such state prediction accuracy – remains unclear. To investigate this, we asked study participants to perform an attentional task while we monitored their brain activity with functional near infrared spectroscopy (fNIRS). We observed higher state prediction accuracy when noise in the fNIRS hemoglobin [Hb] signals was filtered with a non-stationary (adaptive) model as compared to static regression (84% ± 6% versus 72% ± 15%). PMID:27231602
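The filtering comparison above amounts to a static regression against nuisance signals versus a filter whose coefficients track slow drifts. A toy sketch of that contrast using recursive least squares (RLS); the synthetic signals and forgetting factor are assumptions for illustration, not the study's fNIRS pipeline:

```python
# Static regression vs. an adaptive (recursive least squares) filter for
# removing a nuisance regressor whose coupling drifts over time.
# Entirely synthetic; illustrates the non-stationary vs. static contrast only.
import numpy as np

rng = np.random.default_rng(0)
n = 2000
nuisance = np.sin(np.linspace(0, 40, n)) + 0.1 * rng.standard_normal(n)
gain = np.linspace(0.5, 2.0, n)                  # slowly drifting coupling
signal = 0.3 * rng.standard_normal(n)            # "neural" part of interest
observed = signal + gain * nuisance              # measured [Hb]-like series

# Static regression: one coefficient for the whole record
beta_static = np.dot(nuisance, observed) / np.dot(nuisance, nuisance)
resid_static = observed - beta_static * nuisance

# Adaptive RLS filter with forgetting factor lam (tracks the drifting gain)
lam, P, beta = 0.99, 1e3, 0.0
resid_rls = np.empty(n)
for t in range(n):
    x = nuisance[t]
    k = P * x / (lam + x * P * x)                # filter gain
    e = observed[t] - beta * x                   # a priori error
    beta += k * e                                # coefficient update
    P = (P - k * x * P) / lam
    resid_rls[t] = observed[t] - beta * x

print("residual RMS, static  :", np.sqrt(np.mean((resid_static - signal) ** 2)))
print("residual RMS, adaptive:", np.sqrt(np.mean((resid_rls - signal) ** 2)))
```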
Prediction of beta-turns from amino acid sequences using the residue-coupled model.
Guruprasad, K; Shukla, S
2003-04-01
We evaluated the prediction of beta-turns from amino acid sequences using the residue-coupled model with an enlarged representative protein data set selected from the Protein Data Bank. Our results show that the probability values derived from a data set comprising 425 protein chains yielded an overall beta-turn prediction accuracy 68.74%, compared with 94.7% reported earlier on a data set of 30 proteins using the same method. However, we noted that the overall beta-turn prediction accuracy using probability values derived from the 30-protein data set reduces to 40.74% when tested on the data set comprising 425 protein chains. In contrast, using probability values derived from the 425 data set used in this analysis, the overall beta-turn prediction accuracy yielded consistent results when tested on either the 30-protein data set (64.62%) used earlier or a more recent representative data set comprising 619 protein chains (64.66%) or on a jackknife data set comprising 476 representative protein chains (63.38%). We therefore recommend the use of probability values derived from the 425 representative protein chains data set reported here, which gives more realistic and consistent predictions of beta-turns from amino acid sequences.
Bommert, Andrea; Rahnenführer, Jörg; Lang, Michel
2017-01-01
Finding a good predictive model for a high-dimensional data set can be challenging. For genetic data, it is not only important to find a model with high predictive accuracy, but it is also important that this model uses only few features and that the selection of these features is stable. This is because, in bioinformatics, the models are used not only for prediction but also for drawing biological conclusions which makes the interpretability and reliability of the model crucial. We suggest using three target criteria when fitting a predictive model to a high-dimensional data set: the classification accuracy, the stability of the feature selection, and the number of chosen features. As it is unclear which measure is best for evaluating the stability, we first compare a variety of stability measures. We conclude that the Pearson correlation has the best theoretical and empirical properties. Also, we find that for the stability assessment behaviour it is most important that a measure contains a correction for chance or large numbers of chosen features. Then, we analyse Pareto fronts and conclude that it is possible to find models with a stable selection of few features without losing much predictive accuracy.
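Selection stability of the kind evaluated above can be quantified as the average pairwise Pearson correlation between the binary feature-selection indicator vectors obtained across resampling repetitions. A sketch under that assumption (the data, selector and number of features are placeholders, not the paper's benchmark setup):

```python
# Feature-selection stability as the mean pairwise Pearson correlation of the
# binary selection indicator vectors across resampling repetitions.
import numpy as np
from itertools import combinations
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import ShuffleSplit

X, y = make_classification(n_samples=200, n_features=100, n_informative=10,
                           random_state=0)

selections = []
for train_idx, _ in ShuffleSplit(n_splits=10, train_size=0.8,
                                 random_state=0).split(X):
    selector = SelectKBest(f_classif, k=10).fit(X[train_idx], y[train_idx])
    selections.append(selector.get_support().astype(float))  # 0/1 indicator

pairwise = [np.corrcoef(a, b)[0, 1] for a, b in combinations(selections, 2)]
print("stability (mean pairwise Pearson correlation): %.3f" % np.mean(pairwise))
```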
RaptorX-Property: a web server for protein structure property prediction.
Wang, Sheng; Li, Wei; Liu, Shiwang; Xu, Jinbo
2016-07-08
RaptorX Property (http://raptorx2.uchicago.edu/StructurePropertyPred/predict/) is a web server predicting structure property of a protein sequence without using any templates. It outperforms other servers, especially for proteins without close homologs in PDB or with very sparse sequence profile (i.e. carries little evolutionary information). This server employs a powerful in-house deep learning model DeepCNF (Deep Convolutional Neural Fields) to predict secondary structure (SS), solvent accessibility (ACC) and disorder regions (DISO). DeepCNF not only models complex sequence-structure relationship by a deep hierarchical architecture, but also interdependency between adjacent property labels. Our experimental results show that, tested on CASP10, CASP11 and the other benchmarks, this server can obtain ∼84% Q3 accuracy for 3-state SS, ∼72% Q8 accuracy for 8-state SS, ∼66% Q3 accuracy for 3-state solvent accessibility, and ∼0.89 area under the ROC curve (AUC) for disorder prediction. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
Kim, Min Jung; Kim, Eun-Kyung; Park, Seho; Moon, Hee Jung; Kim, Seung Il; Park, Byeong-Woo
2015-09-01
Triple-negative breast cancer (TNBC) which expresses neither hormonal receptors nor HER-2 is associated with poor prognosis and shorter survival. Several studies have suggested that TNBC patients attaining pathological complete response (pCR) after neoadjuvant chemotherapy (NAC) show a longer survival than those without pCR. To assess the accuracy of 3.0-T breast magnetic resonance imaging (MRI) in predicting pCR and to evaluate the clinicoradiologic factors affecting the diagnostic accuracy of 3.0-T breast MRI in TNBC patients treated with anthracycline and taxane (ACD). This retrospective study was approved by the institutional review board; patient consent was not required. Between 2009 and 2012, 35 TNBC patients with 3.0-T breast MRI prior to (n = 26) or after (n = 35) NAC were included. MRI findings were reviewed according to pCR to chemotherapy. The diagnostic accuracy of 3.0-T breast MRI for predicting pCR and the clinicoradiological factors affecting MRI accuracy and response to NAC were analyzed. 3.0-T MRI following NAC with ACD accurately predicted pCR in 91.4% of TNBC patients. The residual tumor size between pathology and 3.0-T MRI in non-pCR cases showed a higher correlation in the Ki-67-positive TNBC group (r = 0.947) than in the Ki-67 negative group (r = 0.375) with statistical trends (P = 0.069). Pre-treatment MRI in the non-pCR group compared to the pCR group showed a larger tumor size (P = 0.030) and non-mass presentation (P = 0.015). 3.0-T MRI in TNBC patients following NAC with ACD showed a high accuracy for predicting pCR to NAC. Ki-67 can affect the diagnostic accuracy of 3.0-T MRI for pCR to NAC with ACD in TNBC patients. © The Foundation Acta Radiologica 2014.
Thandassery, Ragesh B; Al Kaabi, Saad; Soofi, Madiha E; Mohiuddin, Syed A; John, Anil K; Al Mohannadi, Muneera; Al Ejji, Khalid; Yakoob, Rafie; Derbala, Moutaz F; Wani, Hamidullah; Sharma, Manik; Al Dweik, Nazeeh; Butt, Mohammed T; Kamel, Yasser M; Sultan, Khaleel; Pasic, Fuad; Singh, Rajvir
2016-07-01
Many indirect noninvasive scores to predict liver fibrosis are calculated from routine blood investigations. Only limited studies have compared their efficacy head to head. We aimed to compare these scores with liver biopsy fibrosis stages in patients with chronic hepatitis C. From blood investigations of 1602 patients with chronic hepatitis C who underwent a liver biopsy before initiation of antiviral treatment, 19 simple noninvasive scores were calculated. The area under the receiver operating characteristic curves and diagnostic accuracy of each of these scores were calculated (with reference to the Scheuer staging) and compared. The mean age of the patients was 41.8±9.6 years (1365 men). The most common genotype was genotype 4 (65.6%). Significant fibrosis, advanced fibrosis, and cirrhosis were seen in 65.1%, 25.6%, and 6.6% of patients, respectively. All the scores except the aspartate transaminase (AST) to alanine transaminase ratio, Pohl score, mean platelet volume, fibro-alpha, and red cell distribution width to platelet count ratio index showed high predictive accuracy for the stages of fibrosis. King's score (cutoff, 17.5) showed the highest predictive accuracy for significant and advanced fibrosis. King's score, Göteborg university cirrhosis index, APRI (the AST/platelet count ratio index), and Fibrosis-4 (FIB-4) had the highest predictive accuracy for cirrhosis, with the APRI (cutoff, 2) and FIB-4 (cutoff, 3.25) showing the highest diagnostic accuracy. We derived the study score 8.5 - 0.2 × albumin (g/dL) + 0.01 × AST (IU/L) - 0.02 × platelet count (10⁹/L), which at a cutoff of >4.7 had a predictive accuracy of 0.868 (95% confidence interval, 0.833-0.904) for cirrhosis. King's score for significant and advanced fibrosis and the APRI or FIB-4 score for cirrhosis could be the best simple indirect noninvasive scores.
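Written out, the derived score is score = 8.5 - 0.2 × albumin (g/dL) + 0.01 × AST (IU/L) - 0.02 × platelets (10⁹/L), with values above 4.7 flagging cirrhosis. A small worked check with invented laboratory values (not patient data from the study):

```python
# Worked example of the study score derived in the abstract; the laboratory
# values below are invented for illustration only.
def study_score(albumin_g_dl, ast_iu_l, platelets_10e9_l):
    """Score = 8.5 - 0.2*albumin + 0.01*AST - 0.02*platelets (cutoff > 4.7)."""
    return 8.5 - 0.2 * albumin_g_dl + 0.01 * ast_iu_l - 0.02 * platelets_10e9_l

for albumin, ast, platelets in [(4.4, 35, 250), (3.2, 110, 95)]:
    score = study_score(albumin, ast, platelets)
    label = "predicts cirrhosis" if score > 4.7 else "below cirrhosis cutoff"
    print(f"albumin={albumin}, AST={ast}, platelets={platelets}: "
          f"score={score:.2f} -> {label}")
```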
Spittle, Alicia J; Lee, Katherine J; Spencer-Smith, Megan; Lorefice, Lucy E; Anderson, Peter J; Doyle, Lex W
2015-01-01
The primary aim of this study was to investigate the accuracy of the Alberta Infant Motor Scale (AIMS) and Neuro-Sensory Motor Developmental Assessment (NSMDA) over the first year of life for predicting motor impairment at 4 years in preterm children. The secondary aims were to assess the predictive value of serial assessments over the first year and when using a combination of these two assessment tools in follow-up. Children born <30 weeks' gestation were prospectively recruited and assessed at 4, 8 and 12 months' corrected age using the AIMS and NSMDA. At 4 years' corrected age children were assessed for cerebral palsy (CP) and motor impairment using the Movement Assessment Battery for Children 2nd-edition (MABC-2). We calculated accuracy of the AIMS and NSMDA for predicting CP and MABC-2 scores ≤15th (at-risk of motor difficulty) and ≤5th centile (significant motor difficulty) for each test (AIMS and NSMDA) at 4, 8 and 12 months, for delay on one, two or all three of the time points over the first year, and finally for delay on both tests at each time point. Accuracy for predicting motor impairment was good for each test at each age, although false positives were common. Motor impairment on the MABC-2 (scores ≤5th and ≤15th) was most accurately predicted by the AIMS at 4 months, whereas CP was most accurately predicted by the NSMDA at 12 months. In regards to serial assessments, the likelihood ratio for motor impairment increased with the number of delayed assessments. When combining both the NSMDA and AIMS the best accuracy was achieved at 4 months, although results were similar at 8 and 12 months. Motor development during the first year of life in preterm infants assessed with the AIMS and NSMDA is predictive of later motor impairment at preschool age. However, false positives are common and therefore it is beneficial to follow-up children at high risk of motor impairment at more than one time point, or to use a combination of assessment tools. ACTR.org.au ACTRN12606000252516.
Medium- and Long-term Prediction of LOD Change with the Leap-step Autoregressive Model
NASA Astrophysics Data System (ADS)
Liu, Q. B.; Wang, Q. J.; Lei, M. F.
2015-09-01
It is known that the accuracies of medium- and long-term prediction of changes of length of day (LOD) based on the combined least-squares and autoregressive (LS+AR) model decrease gradually. The leap-step autoregressive (LSAR) model is more accurate and stable in medium- and long-term prediction; therefore, it is used to forecast the LOD changes in this work. The LOD series from EOP 08 C04 provided by IERS (International Earth Rotation and Reference Systems Service) is then used to compare the effectiveness of the LSAR and traditional AR methods. The predicted series resulting from the two models show that the prediction accuracy with the LSAR model is better than that from the AR model in medium- and long-term prediction.
Evaluation of a habitat capability model for nongame birds in the Black Hills, South Dakota
Todd R. Mills; Mark A. Rumble; Lester D. Flake
1996-01-01
Habitat models, used to predict consequences of land management decisions on wildlife, can have considerable economic effect on management decisions. The Black Hills National Forest uses such a habitat capability model (HABCAP), but its accuracy is largely unknown. We tested this model's predictive accuracy for nongame birds in 13 vegetative structural stages of...
ERIC Educational Resources Information Center
Decker, Dawn M.; Hixson, Michael D.; Shaw, Amber; Johnson, Gloria
2014-01-01
The purpose of this study was to examine whether using a multiple-measure framework yielded better classification accuracy than oral reading fluency (ORF) or maze alone in predicting pass/fail rates for middle-school students on a large-scale reading assessment. Participants were 178 students in Grades 7 and 8 from a Midwestern school district.…
Leong, Ivone U S; Stuckey, Alexander; Lai, Daniel; Skinner, Jonathan R; Love, Donald R
2015-05-13
Long QT syndrome (LQTS) is an autosomal dominant condition predisposing to sudden death from malignant arrhythmia. Genetic testing identifies many missense single nucleotide variants of uncertain pathogenicity. Establishing genetic pathogenicity is an essential prerequisite to family cascade screening. Many laboratories use in silico prediction tools, either alone or in combination, or metaservers, in order to predict pathogenicity; however, their accuracy in the context of LQTS is unknown. We evaluated the accuracy of five in silico programs and two metaservers in the analysis of LQTS 1-3 gene variants. The in silico tools SIFT, PolyPhen-2, PROVEAN, SNPs&GO and SNAP, either alone or in all possible combinations, and the metaservers Meta-SNP and PredictSNP, were tested on 312 KCNQ1, KCNH2 and SCN5A gene variants that have previously been characterised by either in vitro or co-segregation studies as either "pathogenic" (283) or "benign" (29). The accuracy, sensitivity, specificity and Matthews Correlation Coefficient (MCC) were calculated to determine the best combination of in silico tools for each LQTS gene, and when all genes are combined. The best combination of in silico tools for KCNQ1 is PROVEAN, SNPs&GO and SIFT (accuracy 92.7%, sensitivity 93.1%, specificity 100% and MCC 0.70). The best combination of in silico tools for KCNH2 is SIFT and PROVEAN or PROVEAN, SNPs&GO and SIFT. Both combinations have the same scores for accuracy (91.1%), sensitivity (91.5%), specificity (87.5%) and MCC (0.62). In the case of SCN5A, SNAP and PROVEAN provided the best combination (accuracy 81.4%, sensitivity 86.9%, specificity 50.0%, and MCC 0.32). When all three LQT genes are combined, SIFT, PROVEAN and SNAP is the combination with the best performance (accuracy 82.7%, sensitivity 83.0%, specificity 80.0%, and MCC 0.44). Both metaservers performed better than the single in silico tools; however, they did not perform better than the best performing combination of in silico tools. The combination of in silico tools with the best performance is gene-dependent. The in silico tools reported here may have some value in assessing variants in the KCNQ1 and KCNH2 genes, but caution should be taken when the analysis is applied to SCN5A gene variants.
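The accuracy, sensitivity, specificity and MCC figures quoted for each tool combination follow directly from a 2×2 table of pathogenic versus benign calls. A minimal sketch of those formulas (the example counts are invented, chosen only to roughly match the 283/29 class sizes above):

```python
# Accuracy, sensitivity, specificity and Matthews Correlation Coefficient (MCC)
# from a confusion matrix of pathogenic (positive) vs benign (negative) calls.
# The counts are invented for illustration.
from math import sqrt

def classification_metrics(tp, fp, tn, fn):
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    denom = sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)) or 1.0
    mcc = (tp * tn - fp * fn) / denom
    return accuracy, sensitivity, specificity, mcc

acc, sens, spec, mcc = classification_metrics(tp=260, fp=4, tn=25, fn=23)
print(f"accuracy={acc:.3f} sensitivity={sens:.3f} "
      f"specificity={spec:.3f} MCC={mcc:.2f}")
```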
Schrank, Elisa S; Hitch, Lester; Wallace, Kevin; Moore, Richard; Stanhope, Steven J
2013-10-01
Passive-dynamic ankle-foot orthosis (PD-AFO) bending stiffness is a key functional characteristic for achieving enhanced gait function. However, current orthosis customization methods inhibit objective premanufacture tuning of the PD-AFO bending stiffness, making optimization of orthosis function challenging. We have developed a novel virtual functional prototyping (VFP) process, which harnesses the strengths of computer aided design (CAD) model parameterization and finite element analysis, to quantitatively tune and predict the functional characteristics of a PD-AFO, which is rapidly manufactured via fused deposition modeling (FDM). The purpose of this study was to assess the VFP process for PD-AFO bending stiffness. A PD-AFO CAD model was customized for a healthy subject and tuned to four bending stiffness values via VFP. Two sets of each tuned model were fabricated via FDM using medical-grade polycarbonate (PC-ISO). Dimensional accuracy of the fabricated orthoses was excellent (average 0.51 ± 0.39 mm). Manufacturing precision ranged from 0.0 to 0.74 Nm/deg (average 0.30 ± 0.36 Nm/deg). Bending stiffness prediction accuracy was within 1 Nm/deg using the manufacturer provided PC-ISO elastic modulus (average 0.48 ± 0.35 Nm/deg). Using an experimentally derived PC-ISO elastic modulus improved the optimized bending stiffness prediction accuracy (average 0.29 ± 0.57 Nm/deg). Robustness of the derived modulus was tested by carrying out the VFP process for a disparate subject, tuning the PD-AFO model to five bending stiffness values. For this disparate subject, bending stiffness prediction accuracy was strong (average 0.20 ± 0.14 Nm/deg). Overall, the VFP process had excellent dimensional accuracy, good manufacturing precision, and strong prediction accuracy with the derived modulus. Implementing VFP as part of our PD-AFO customization and manufacturing framework, which also includes fit customization, provides a novel and powerful method to predictably tune and precisely manufacture orthoses with objectively customized fit and functional characteristics.
The predictability of consumer visitation patterns
NASA Astrophysics Data System (ADS)
Krumme, Coco; Llorente, Alejandro; Cebrian, Manuel; Pentland, Alex ("Sandy"); Moro, Esteban
2013-04-01
We consider hundreds of thousands of individual economic transactions to ask: how predictable are consumers in their merchant visitation patterns? Our results suggest that, in the long-run, much of our seemingly elective activity is actually highly predictable. Notwithstanding a wide range of individual preferences, shoppers share regularities in how they visit merchant locations over time. Yet while aggregate behavior is largely predictable, the interleaving of shopping events introduces important stochastic elements at short time scales. These short- and long-scale patterns suggest a theoretical upper bound on predictability, and describe the accuracy of a Markov model in predicting a person's next location. We incorporate population-level transition probabilities in the predictive models, and find that in many cases these improve accuracy. While our results point to the elusiveness of precise predictions about where a person will go next, they suggest the existence, at large time-scales, of regularities across the population.
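A first-order Markov predictor of the kind evaluated here counts merchant-to-merchant transitions and predicts the most probable next merchant, optionally blending in population-level transition counts when an individual's history is sparse. A toy sketch under those assumptions (the visit sequences and blend weight are made up):

```python
# First-order Markov prediction of the next merchant visited, optionally
# smoothed with population-level transition counts. Toy data, not the
# transaction records analysed in the paper.
from collections import Counter, defaultdict

histories = {
    "user_a": ["grocer", "cafe", "grocer", "pharmacy", "grocer", "cafe"],
    "user_b": ["cafe", "gym", "cafe", "grocer", "cafe", "gym"],
}

population = defaultdict(Counter)   # aggregate transitions over all users
personal = {u: defaultdict(Counter) for u in histories}
for user, seq in histories.items():
    for a, b in zip(seq, seq[1:]):
        population[a][b] += 1
        personal[user][a][b] += 1

def predict_next(user, current, blend=0.3):
    """Blend personal and population transition counts, return the argmax."""
    scores = Counter()
    for nxt, c in personal[user][current].items():
        scores[nxt] += (1 - blend) * c
    for nxt, c in population[current].items():
        scores[nxt] += blend * c
    return scores.most_common(1)[0][0] if scores else None

print(predict_next("user_a", "grocer"))   # most likely next stop after grocer
```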
Motor system contribution to action prediction: Temporal accuracy depends on motor experience.
Stapel, Janny C; Hunnius, Sabine; Meyer, Marlene; Bekkering, Harold
2016-03-01
Predicting others' actions is essential for well-coordinated social interactions. In two experiments including an infant population, this study addresses to what extent motor experience of an observer determines prediction accuracy for others' actions. Results show that infants who were proficient crawlers but inexperienced walkers predicted crawling more accurately than walking, whereas age groups mastering both skills (i.e. toddlers and adults) were equally accurate in predicting walking and crawling. Regardless of experience, human movements were predicted more accurately by all age groups than non-human movement control stimuli. This suggests that for predictions to be accurate, the observed act needs to be established in the motor repertoire of the observer. Through the acquisition of new motor skills, we also become better at predicting others' actions. The findings thus stress the relevance of motor experience for social-cognitive development. Copyright © 2015 Elsevier B.V. All rights reserved.
Application of linear regression analysis in accuracy assessment of rolling force calculations
NASA Astrophysics Data System (ADS)
Poliak, E. I.; Shim, M. K.; Kim, G. S.; Choo, W. Y.
1998-10-01
Efficient operation of the computational models employed in process control systems requires periodic assessment of the accuracy of their predictions. Linear regression is proposed as a tool that allows systematic and random prediction errors to be separated from those related to measurements. A quantitative characteristic of the model's predictive ability is introduced in addition to standard statistical tests for model adequacy. Rolling force calculations are considered as an example application. However, the outlined approach can be used to assess the performance of any computational model.
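A minimal sketch of the regression idea: regress measured values on model predictions, read systematic error from how far the slope and intercept are from one and zero, and take the residual scatter as the random component. The rolling force numbers below are synthetic, not mill data:

```python
# Assessing a model's predictive ability by regressing measurements on
# predictions: slope ~ 1 and intercept ~ 0 mean no systematic bias, and the
# residual standard deviation estimates the random error. Synthetic data only.
import numpy as np

rng = np.random.default_rng(1)
predicted = rng.uniform(8.0, 20.0, 200)                        # model output, MN
measured = 1.05 * predicted - 0.4 + rng.normal(0, 0.3, 200)    # "mill" data

slope, intercept = np.polyfit(predicted, measured, 1)
residuals = measured - (slope * predicted + intercept)

print(f"slope = {slope:.3f} (systematic gain error if far from 1)")
print(f"intercept = {intercept:.3f} MN (systematic offset if far from 0)")
print(f"random scatter = {residuals.std(ddof=2):.3f} MN")
```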
Flight Test Results: CTAS Cruise/Descent Trajectory Prediction Accuracy for En route ATC Advisories
NASA Technical Reports Server (NTRS)
Green, S.; Grace, M.; Williams, D.
1999-01-01
The Center/TRACON Automation System (CTAS), under development at NASA Ames Research Center, is designed to assist controllers with the management and control of air traffic transitioning to/from congested airspace. This paper focuses on the transition from the en route environment, to high-density terminal airspace, under a time-based arrival-metering constraint. Two flight tests were conducted at the Denver Air Route Traffic Control Center (ARTCC) to study trajectory-prediction accuracy, the key to accurate Decision Support Tool advisories such as conflict detection/resolution and fuel-efficient metering conformance. In collaboration with NASA Langley Research Center, these tests were part of an overall effort to research systems and procedures for the integration of CTAS and flight management systems (FMS). The Langley Transport Systems Research Vehicle Boeing 737 airplane flew a combined total of 58 cruise-arrival trajectory runs while following CTAS clearance advisories. Actual trajectories of the airplane were compared to CTAS and FMS predictions to measure trajectory-prediction accuracy and identify the primary sources of error for both. The research airplane was used to evaluate several levels of cockpit automation ranging from conventional avionics to a performance-based vertical navigation (VNAV) FMS. Trajectory prediction accuracy was analyzed with respect to both ARTCC radar tracking and GPS-based aircraft measurements. This paper presents detailed results describing the trajectory accuracy and error sources. Although differences were found in both accuracy and error sources, CTAS accuracy was comparable to the FMS in terms of both meter-fix arrival-time performance (in support of metering) and 4D-trajectory prediction (key to conflict prediction). Overall arrival time errors (mean plus standard deviation) were measured to be approximately 24 seconds during the first flight test (23 runs) and 15 seconds during the second flight test (25 runs). The major source of error during these tests was found to be the predicted winds aloft used by CTAS. Position and velocity estimates of the airplane provided to CTAS by the ATC Host radar tracker were found to be a relatively insignificant error source for the trajectory conditions evaluated. Airplane performance modeling errors within CTAS were found to not significantly affect arrival time errors when the constrained descent procedures were used. The most significant effect related to the flight guidance was observed to be the cross-track and turn-overshoot errors associated with conventional VOR guidance. Lateral navigation (LNAV) guidance significantly reduced both the cross-track and turn-overshoot error. Pilot procedures and VNAV guidance were found to significantly reduce the vertical profile errors associated with atmospheric and aircraft performance model errors.
Prediction of beef carcass and meat traits from rearing factors in young bulls and cull cows.
Soulat, J; Picard, B; Léger, S; Monteils, V
2016-04-01
The aim of this study was to predict the beef carcass and LM (thoracis part) characteristics and the sensory properties of the LM from rearing factors applied during the fattening period. Individual data from 995 animals (688 young bulls and 307 cull cows) in 15 experiments were used to establish prediction models. The data concerned rearing factors (13 variables), carcass characteristics (5 variables), LM characteristics (2 variables), and LM sensory properties (3 variables). In this study, 8 prediction models were established: dressing percentage and the proportions of fat tissue and muscle in the carcass to characterize the beef carcass; cross-sectional area of fibers (mean fiber area) and isocitrate dehydrogenase activity to characterize the LM; and, finally, overall tenderness, juiciness, and flavor intensity scores to characterize the LM sensory properties. A random effect was considered in each model: the breed for the prediction models for the carcass and LM characteristics and the trained taste panel for the prediction of the meat sensory properties. To evaluate the quality of prediction models, 3 criteria were measured: robustness, accuracy, and precision. The model was robust when the root mean square errors of prediction of calibration and validation sub-data sets were near to one another. Except for the mean fiber area model, the obtained prediction models were robust. The prediction models were considered to have a high accuracy when the mean prediction error (MPE) was ≤0.10 and to have a high precision when the R² was closest to 1. The prediction of the characteristics of the carcass from the rearing factors had a high precision (R² > 0.70) and a high prediction accuracy (MPE < 0.10), except for the fat percentage model (R² = 0.67, MPE = 0.16). However, the predictions of the LM characteristics and LM sensory properties from the rearing factors were not sufficiently precise (R² < 0.50) and accurate (MPE > 0.10). Only the flavor intensity of the beef score could be satisfactorily predicted from the rearing factors with high precision (R² = 0.72) and accuracy (MPE = 0.10). All the prediction models displayed different effects of the rearing factors according to animal categories (young bulls or cull cows). In consequence, these prediction models show that the rearing factors applied during the fattening period need to be adapted to the animal category in order to optimize the carcass traits.
Yang, Jing; He, Bao-Ji; Jang, Richard; Zhang, Yang; Shen, Hong-Bin
2015-01-01
Abstract Motivation: Cysteine-rich proteins cover many important families in nature but there are currently no methods specifically designed for modeling the structure of these proteins. The accuracy of disulfide connectivity pattern prediction, particularly for the proteins of higher-order connections, e.g. >3 bonds, is too low to effectively assist structure assembly simulations. Results: We propose a new hierarchical order reduction protocol called Cyscon for disulfide-bonding prediction. The most confident disulfide bonds are first identified and bonding prediction is then focused on the remaining cysteine residues based on SVR training. Compared with purely machine learning-based approaches, Cyscon improved the average accuracy of connectivity pattern prediction by 21.9%. For proteins with more than 5 disulfide bonds, Cyscon improved the accuracy by 585% on the benchmark set of PDBCYS. When applied to 158 non-redundant cysteine-rich proteins, Cyscon predictions helped increase (or decrease) the TM-score (or RMSD) of the ab initio QUARK modeling by 12.1% (or 14.4%). This result demonstrates a new avenue to improve the ab initio structure modeling for cysteine-rich proteins. Availability and implementation: http://www.csbio.sjtu.edu.cn/bioinf/Cyscon/ Contact: zhng@umich.edu or hbshen@sjtu.edu.cn Supplementary information: Supplementary data are available at Bioinformatics online. PMID:26254435
Improving orbit prediction accuracy through supervised machine learning
NASA Astrophysics Data System (ADS)
Peng, Hao; Bai, Xiaoli
2018-05-01
Due to the lack of information such as the space environment condition and resident space objects' (RSOs') body characteristics, current orbit predictions that are solely grounded on physics-based models may fail to achieve required accuracy for collision avoidance and have led to satellite collisions already. This paper presents a methodology to predict RSOs' trajectories with higher accuracy than that of the current methods. Inspired by the machine learning (ML) theory through which the models are learned based on large amounts of observed data and the prediction is conducted without explicitly modeling space objects and space environment, the proposed ML approach integrates physics-based orbit prediction algorithms with a learning-based process that focuses on reducing the prediction errors. Using a simulation-based space catalog environment as the test bed, the paper demonstrates three types of generalization capability for the proposed ML approach: (1) the ML model can be used to improve the same RSO's orbit information that is not available during the learning process but shares the same time interval as the training data; (2) the ML model can be used to improve predictions of the same RSO at future epochs; and (3) the ML model based on a RSO can be applied to other RSOs that share some common features.
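One way to read the proposed integration is: propagate with the physics-based model, learn the historical residual (truth minus physics prediction) as a function of simple features, and subtract the learned correction from new predictions. The sketch below follows that reading with placeholder features, data and regressor; it is not the authors' implementation:

```python
# Schematic ML correction of physics-based orbit predictions: learn the
# historical residual as a function of simple features, then apply the learned
# correction to new predictions. Synthetic data and feature choices are
# illustrative assumptions only.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(2)
n = 1000
features = np.column_stack([
    rng.uniform(0, 86400, n),        # prediction horizon [s]
    rng.uniform(0, 2 * np.pi, n),    # argument of latitude at epoch [rad]
])
# Pretend the physics model has a horizon- and geometry-dependent error
true_residual = 1e-4 * features[:, 0] * np.sin(features[:, 1])
physics_error = true_residual + rng.normal(0, 0.5, n)   # observed residual [km]

train, test = slice(0, 800), slice(800, None)
model = GradientBoostingRegressor().fit(features[train], physics_error[train])
corrected = physics_error[test] - model.predict(features[test])

print("RMS error before correction: %.2f km" % np.sqrt(np.mean(physics_error[test] ** 2)))
print("RMS error after correction : %.2f km" % np.sqrt(np.mean(corrected ** 2)))
```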
Improving the Accuracy of Software-Based Energy Analysis for Residential Buildings (Presentation)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Polly, B.
2011-09-01
This presentation describes the basic components of software-based energy analysis for residential buildings, explores the concepts of 'error' and 'accuracy' when analysis predictions are compared to measured data, and explains how NREL is working to continuously improve the accuracy of energy analysis methods.
Apirakviriya, Chayanis; Rungruxsirivorn, Tassawan; Phupong, Vorapong; Wisawasukmongchol, Wirach
2016-05-01
To assess diagnostic accuracy of 3D transvaginal ultrasound (3D-TVS) compared with hysteroscopy in detecting uterine cavity abnormalities in infertile women. This prospective observational cross-sectional study was conducted during the July 2013 to December 2013 study period. Sixty-nine women with infertility were enrolled. In the mid to late follicular phase of each subject's menstrual cycle, 3D transvaginal ultrasound and hysteroscopy were performed on the same day in each patient. Hysteroscopy is widely considered to be the gold standard method for investigation of the uterine cavity. Uterine cavity characteristics and abnormalities were recorded. Diagnostic accuracy, sensitivity, specificity, positive predictive value, negative predictive value, and positive and negative likelihood ratios were evaluated. Hysteroscopy was successfully performed in all subjects. Hysteroscopy diagnosed pathological findings in 22 of 69 cases (31.8%). There were 18 endometrial polyps, 3 submucous myomas, and 1 septate uterus. Three-dimensional transvaginal ultrasound in comparison with hysteroscopy had 84.1% diagnostic accuracy, 68.2% sensitivity, 91.5% specificity, 79% positive predictive value, and 86% negative predictive value. The positive and negative likelihood ratios were 8.01 and 0.3, respectively. 3D-TVS successfully detected every case of submucous myoma and uterine anomaly. For detection of endometrial polyps, 3D-TVS had 61.1% sensitivity, 91.5% specificity, and 83.1% diagnostic accuracy. 3D-TVS demonstrated 84.1% diagnostic accuracy for detecting uterine cavity abnormalities in infertile women. A significant percentage of infertile patients had evidence of uterine cavity pathology. Hysteroscopy is, therefore, recommended for accurate detection and diagnosis of uterine cavity lesion. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
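The quoted figures are consistent with an approximately TP=15, FP=4, FN=7, TN=43 table; those counts are back-calculated here from the reported percentages rather than taken from the paper. A short sketch reproducing the metrics from such a table:

```python
# Diagnostic accuracy metrics from a 2x2 table. The counts are back-calculated
# from the percentages quoted in the abstract (22 abnormal, 47 normal cavities)
# and are therefore an inference, not figures reported by the authors.
def diagnostic_metrics(tp, fp, fn, tn):
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    acc = (tp + tn) / (tp + fp + fn + tn)
    lr_pos = sens / (1 - spec)
    lr_neg = (1 - sens) / spec
    return sens, spec, ppv, npv, acc, lr_pos, lr_neg

sens, spec, ppv, npv, acc, lrp, lrn = diagnostic_metrics(tp=15, fp=4, fn=7, tn=43)
print(f"sensitivity {sens:.1%}, specificity {spec:.1%}, PPV {ppv:.1%}, "
      f"NPV {npv:.1%}, accuracy {acc:.1%}, LR+ {lrp:.1f}, LR- {lrn:.2f}")
```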
Diagnostic accuracy of FEV1/forced vital capacity ratio z scores in asthmatic patients.
Lambert, Allison; Drummond, M Bradley; Wei, Christine; Irvin, Charles; Kaminsky, David; McCormack, Meredith; Wise, Robert
2015-09-01
The FEV1/forced vital capacity (FVC) ratio is used as a criterion for airflow obstruction; however, the test characteristics of spirometry in the diagnosis of asthma are not well established. The accuracy of a test depends on the pretest probability of disease. We wanted to estimate the FEV1/FVC ratio z score threshold with optimal accuracy for the diagnosis of asthma for different pretest probabilities. Asthmatic patients enrolled in 4 trials from the Asthma Clinical Research Centers were included in this analysis. Measured and predicted FEV1/FVC ratios were obtained, with calculation of z scores for each participant. Across a range of asthma prevalences and z score thresholds, the overall diagnostic accuracy was calculated. One thousand six hundred eight participants were included (mean age, 39 years; 71% female; 61% white). The mean FEV1 percent predicted value was 83% (SD, 15%). In a symptomatic population with 50% pretest probability of asthma, optimal accuracy (68%) is achieved with a z score threshold of -1.0 (16th percentile), corresponding to a 6 percentage point reduction from the predicted ratio. However, in a screening population with a 5% pretest probability of asthma, the optimum z score is -2.0 (second percentile), corresponding to a 12 percentage point reduction from the predicted ratio. These findings were not altered by markers of disease control. Reduction of the FEV1/FVC ratio can support the diagnosis of asthma; however, the ratio is neither sensitive nor specific enough for diagnostic accuracy. When interpreting spirometric results, consideration of the pretest probability is an important consideration in the diagnosis of asthma based on airflow limitation. Copyright © 2015 American Academy of Allergy, Asthma & Immunology. Published by Elsevier Inc. All rights reserved.
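The percentile equivalents quoted for the z score thresholds follow from the standard normal reference distribution assumed for spirometry z scores; a quick check:

```python
# z score thresholds expressed as percentiles of a standard normal reference
# distribution, matching the 16th and ~2nd percentiles quoted in the abstract.
from scipy.stats import norm

for z in (-1.0, -2.0):
    print(f"z = {z:+.1f} -> {norm.cdf(z) * 100:.1f}th percentile")
```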
Torres-Dowdall, J.; Farmer, A.H.; Bucher, E.H.; Rye, R.O.; Landis, G.
2009-01-01
Stable isotope analyses have revolutionized the study of migratory connectivity. However, as with all tools, their limitations must be understood in order to derive the maximum benefit of a particular application. The goal of this study was to evaluate the efficacy of stable isotopes of C, N, H, O and S for assigning known-origin feathers to the molting sites of migrant shorebird species wintering and breeding in Argentina. Specific objectives were to: 1) compare the efficacy of the technique for studying shorebird species with different migration patterns, life histories and habitat-use patterns; 2) evaluate the grouping of species with similar migration and habitat use patterns in a single analysis to potentially improve prediction accuracy; and 3) evaluate the potential gains in prediction accuracy that might be achieved from using multiple stable isotopes. The efficacy of stable isotope ratios to determine origin was found to vary with species. While one species (White-rumped Sandpiper, Calidris fuscicollis) had high levels of accuracy assigning samples to known origin (91% of samples correctly assigned), another (Collared Plover, Charadrius collaris) showed low levels of accuracy (52% of samples correctly assigned). Intra-individual variability may account for this difference in efficacy. The prediction model for three species with similar migration and habitat-use patterns performed poorly compared with the model for just one of the species (71% versus 91% of samples correctly assigned). Thus, combining multiple sympatric species may not improve model prediction accuracy. Increasing the number of stable isotopes in the analyses increased the accuracy of assigning shorebirds to their molting origin, but the best combination - involving a subset of all the isotopes analyzed - varied among species.
Predict the fatigue life of crack based on extended finite element method and SVR
NASA Astrophysics Data System (ADS)
Song, Weizhen; Jiang, Zhansi; Jiang, Hui
2018-05-01
The extended finite element method (XFEM) and support vector regression (SVR) are used to predict the fatigue life of a plate crack. First, XFEM is employed to calculate the stress intensity factors (SIFs) for given crack sizes. A prediction model is then built from the functional relationship of the SIFs with the fatigue life or crack length. Finally, the prediction model is used to predict the SIFs at different crack sizes or numbers of cycles. Because the accuracy of the forward Euler method is ensured only by a small step size, a new prediction method is presented to resolve this issue. Numerical examples demonstrate that the proposed method allows a larger step size while retaining high accuracy.
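The step-size issue mentioned above arises when fatigue crack growth, for example Paris' law da/dN = C(ΔK)^m with ΔK depending on the current crack length, is integrated by forward Euler. A generic sketch of that baseline integration (material constants and geometry factor are illustrative assumptions, and this is the plain Euler scheme, not the authors' SVR-based method):

```python
# Forward Euler integration of Paris' law da/dN = C * (dK)^m for a through
# crack, with dK = Y * dSigma * sqrt(pi * a). Constants are illustrative; large
# cycle steps degrade accuracy, which motivates the improved scheme.
import math

C, m = 1e-11, 3.0          # Paris constants (assumed units: m/cycle, MPa*sqrt(m))
Y, d_sigma = 1.12, 100.0   # geometry factor and stress range [MPa]
a0, a_crit = 1e-3, 2e-2    # initial and critical crack lengths [m]

def cycles_to_failure(step):
    a, n = a0, 0
    while a < a_crit:
        dK = Y * d_sigma * math.sqrt(math.pi * a)
        a += step * C * dK ** m     # forward Euler over 'step' cycles
        n += step
    return n

for step in (1_000, 10_000, 100_000):
    print(f"step = {step:>7d} cycles -> predicted life ~ {cycles_to_failure(step):,} cycles")
```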
Da, Yang; Wang, Chunkao; Wang, Shengwen; Hu, Guo
2014-01-01
We established a genomic model of quantitative trait with genomic additive and dominance relationships that parallels the traditional quantitative genetics model, which partitions a genotypic value as breeding value plus dominance deviation and calculates additive and dominance relationships using pedigree information. Based on this genomic model, two sets of computationally complementary but mathematically identical mixed model methods were developed for genomic best linear unbiased prediction (GBLUP) and genomic restricted maximum likelihood estimation (GREML) of additive and dominance effects using SNP markers. These two sets are referred to as the CE and QM sets, where the CE set was designed for large numbers of markers and the QM set was designed for large numbers of individuals. GBLUP and associated accuracy formulations for individuals in training and validation data sets were derived for breeding values, dominance deviations and genotypic values. Simulation study showed that GREML and GBLUP generally were able to capture small additive and dominance effects that each accounted for 0.00005–0.0003 of the phenotypic variance and GREML was able to differentiate true additive and dominance heritability levels. GBLUP of the total genetic value as the summation of additive and dominance effects had higher prediction accuracy than either additive or dominance GBLUP, causal variants had the highest accuracy of GREML and GBLUP, and predicted accuracies were in agreement with observed accuracies. Genomic additive and dominance relationship matrices using SNP markers were consistent with theoretical expectations. The GREML and GBLUP methods can be an effective tool for assessing the type and magnitude of genetic effects affecting a phenotype and for predicting the total genetic value at the whole genome level. PMID:24498162
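A minimal sketch of the additive side of such a model: build a VanRaden-style genomic relationship matrix from centred SNP codes and solve the single-trait mixed-model equations for GBLUP. The marker data, variance components and dimensions below are simulated placeholders, not the CE/QM formulations described by the authors:

```python
# GBLUP sketch: genomic additive relationship matrix from centred SNP genotypes
# and best linear unbiased prediction of breeding values via mixed-model
# equations. Simulated toy data only.
import numpy as np

rng = np.random.default_rng(3)
n_ind, n_snp = 200, 1000
p = rng.uniform(0.05, 0.5, n_snp)                          # allele frequencies
M = rng.binomial(2, p, size=(n_ind, n_snp)).astype(float)  # 0/1/2 genotypes

Z = M - 2 * p                                           # centre by 2p
G = Z @ Z.T / (2 * np.sum(p * (1 - p)))                 # additive relationship

true_u = rng.multivariate_normal(np.zeros(n_ind), 0.3 * G)
y = 10.0 + true_u + rng.normal(0, np.sqrt(0.7), n_ind)  # phenotype

# Mixed-model equations for y = 1*mu + u, with u ~ N(0, G * sigma_u^2)
lam = 0.7 / 0.3                                          # sigma_e^2 / sigma_u^2
X = np.ones((n_ind, 1))
Ginv = np.linalg.inv(G + 1e-6 * np.eye(n_ind))           # jitter for stability
lhs = np.block([[X.T @ X, X.T],
                [X,       np.eye(n_ind) + lam * Ginv]])
rhs = np.concatenate([X.T @ y, y])
u_hat = np.linalg.solve(lhs, rhs)[1:]

print("prediction accuracy (cor true vs GBLUP):",
      round(np.corrcoef(true_u, u_hat)[0, 1], 3))
```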
Sauder, Cara; Bretl, Michelle; Eadie, Tanya
2017-09-01
The purposes of this study were to (1) determine and compare the diagnostic accuracy of a single acoustic measure, smoothed cepstral peak prominence (CPPS), to predict voice disorder status from connected speech samples using two software systems: Analysis of Dysphonia in Speech and Voice (ADSV) and Praat; and (2) to determine the relationship between measures of CPPS generated from these programs. This is a retrospective cross-sectional study. Measures of CPPS were obtained from connected speech recordings of 100 subjects with voice disorders and 70 nondysphonic subjects without vocal complaints using commercially available ADSV and freely downloadable Praat software programs. Logistic regression and receiver operating characteristic (ROC) analyses were used to evaluate and compare the diagnostic accuracy of CPPS measures. Relationships between CPPS measures from the programs were determined. Results showed acceptable overall accuracy rates (75% accuracy, ADSV; 82% accuracy, Praat) and area under the ROC curves (area under the curve [AUC] = 0.81, ADSV; AUC = 0.91, Praat) for predicting voice disorder status, with slight differences in sensitivity and specificity. CPPS measures derived from Praat were uniquely predictive of disorder status above and beyond CPPS measures from ADSV (χ²(1) = 40.71, P < 0.001). CPPS measures from both programs were significantly and highly correlated (r = 0.88, P < 0.001). A single acoustic measure of CPPS was highly predictive of voice disorder status using either program. Clinicians may consider using CPPS to complement clinical voice evaluation and screening protocols. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
Brusselaers, Nele; Labeau, Sonia; Vogelaers, Dirk; Blot, Stijn
2013-03-01
In ventilator-associated pneumonia (VAP), early appropriate antimicrobial therapy may be hampered by involvement of multidrug-resistant (MDR) pathogens. A systematic review and diagnostic test accuracy meta-analysis were performed to analyse whether lower respiratory tract surveillance cultures accurately predict the causative pathogens of subsequent VAP in adult patients. Selection and assessment of eligibility were performed by three investigators by mutual consideration. Of the 525 studies retrieved, 14 were eligible for inclusion (all in English; published since 1994), accounting for 791 VAP episodes. The following data were collected: study and population characteristics; in- and exclusion criteria; diagnostic criteria for VAP; microbiological workup of surveillance and diagnostic VAP cultures. Sub-analyses were conducted for VAP caused by Staphylococcus aureus, Pseudomonas spp., and Acinetobacter spp., MDR microorganisms, frequency of sampling, and consideration of all versus the most recent surveillance cultures. The meta-analysis showed a high accuracy of surveillance cultures, with pooled sensitivities up to 0.75 and specificities up to 0.92 in culture-positive VAP. The area under the curve (AUC) of the hierarchical summary receiver-operating characteristic curve demonstrates moderate accuracy (AUC: 0.90) in predicting multidrug resistance. A sampling frequency of >2/week (sensitivity 0.79; specificity 0.96) and consideration of only the most recent surveillance culture (sensitivity 0.78; specificity 0.96) are associated with a higher accuracy of prediction. This study provides evidence for the benefit of surveillance cultures in predicting MDR bacterial pathogens in VAP. However, clinical and statistical heterogeneity, limited samples sizes, and bias remain important limitations of this meta-analysis.
Improving prediction accuracy of cooling load using EMD, PSR and RBFNN
NASA Astrophysics Data System (ADS)
Shen, Limin; Wen, Yuanmei; Li, Xiaohong
2017-08-01
To increase the accuracy of cooling load demand prediction, this work presents an EMD (empirical mode decomposition)-PSR (phase space reconstruction) based RBFNN (radial basis function neural network) method. First, the chaotic nature of the real cooling load demand is analyzed, and the non-stationary historical cooling load data are transformed into several stationary intrinsic mode functions (IMFs) using EMD. Second, the RBFNN prediction accuracies of the individual IMFs are compared and an IMF combining scheme is proposed: the lower-frequency components are merged (IMF4-IMF6 combined) while the higher-frequency components (IMF1, IMF2, IMF3) and the residual are kept unchanged. Third, the phase space of each combined component is reconstructed separately, the highest-frequency component (IMF1) is processed by a differencing method, and prediction is performed with RBFNN in the reconstructed phase spaces. Real cooling load data from a centralized ice storage cooling system in Guangzhou are used for simulation. The results show that the proposed hybrid method outperforms the traditional methods.
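The reconstruction-plus-RBFNN step can be sketched as a time-delay embedding followed by a kernel regressor; in the sketch below kernel ridge regression with an RBF kernel stands in for the RBFNN, and the series, embedding dimension and delay are arbitrary placeholders rather than the paper's settings:

```python
# Phase space reconstruction (time-delay embedding) of a 1-D series followed by
# an RBF-kernel regressor used as a one-step-ahead predictor. Kernel ridge
# regression stands in for the RBFNN here; all parameters are placeholders.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(4)
t = np.arange(2000)
series = np.sin(0.07 * t) + 0.5 * np.sin(0.31 * t) + 0.05 * rng.standard_normal(t.size)

dim, tau = 6, 3                      # embedding dimension and delay
rows = len(series) - (dim - 1) * tau - 1
X = np.column_stack([series[i * tau: i * tau + rows] for i in range(dim)])
y = series[(dim - 1) * tau + 1: (dim - 1) * tau + 1 + rows]   # next value

split = int(0.8 * rows)
model = KernelRidge(kernel="rbf", alpha=1e-3, gamma=0.5)
model.fit(X[:split], y[:split])
pred = model.predict(X[split:])

rmse = np.sqrt(np.mean((pred - y[split:]) ** 2))
print(f"one-step-ahead RMSE on held-out data: {rmse:.4f}")
```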
Protein Secondary Structure Prediction Using Deep Convolutional Neural Fields.
Wang, Sheng; Peng, Jian; Ma, Jianzhu; Xu, Jinbo
2016-01-11
Protein secondary structure (SS) prediction is important for studying protein structure and function. When only the sequence (profile) information is used as input feature, currently the best predictors can obtain ~80% Q3 accuracy, which has not been improved in the past decade. Here we present DeepCNF (Deep Convolutional Neural Fields) for protein SS prediction. DeepCNF is a Deep Learning extension of Conditional Neural Fields (CNF), which is an integration of Conditional Random Fields (CRF) and shallow neural networks. DeepCNF can model not only complex sequence-structure relationship by a deep hierarchical architecture, but also interdependency between adjacent SS labels, so it is much more powerful than CNF. Experimental results show that DeepCNF can obtain ~84% Q3 accuracy, ~85% SOV score, and ~72% Q8 accuracy, respectively, on the CASP and CAMEO test proteins, greatly outperforming currently popular predictors. As a general framework, DeepCNF can be used to predict other protein structure properties such as contact number, disorder regions, and solvent accessibility.
Caçola, Priscila M; Pant, Mohan D
2014-10-01
The purpose was to use a multi-level statistical technique to analyze how children's age, motor proficiency, and cognitive styles interact to affect accuracy on reach estimation tasks via Motor Imagery and Visual Imagery. Results from the Generalized Linear Mixed Model analysis (GLMM) indicated that only the 7-year-old age group had significant random intercepts for both tasks. Motor proficiency predicted accuracy in reach tasks, and cognitive styles (object scale) predicted accuracy in the motor imagery task. GLMM analysis is suitable to explore age and other parameters of development. In this case, it allowed an assessment of motor proficiency interacting with age to shape how children represent, plan, and act on the environment.
Karp, Jerome M; Eryilmaz, Ertan; Cowburn, David
2015-01-01
There has been a longstanding interest in being able to accurately predict NMR chemical shifts from structural data. Recent studies have focused on using molecular dynamics (MD) simulation data as input for improved prediction. Here we examine the accuracy of chemical shift prediction for intein systems, which have regions of intrinsic disorder. We find that using MD simulation data as input for chemical shift prediction does not consistently improve prediction accuracy over use of a static X-ray crystal structure. This appears to result from the complex conformational ensemble of the disordered protein segments. We show that using accelerated molecular dynamics (aMD) simulations improves chemical shift prediction, suggesting that methods which better sample the conformational ensemble like aMD are more appropriate tools for use in chemical shift prediction for proteins with disordered regions. Moreover, our study suggests that data accurately reflecting protein dynamics must be used as input for chemical shift prediction in order to correctly predict chemical shifts in systems with disorder.
Multivariate prediction of motor diagnosis in Huntington's disease: 12 years of PREDICT‐HD
Long, Jeffrey D.
2015-01-01
Abstract Background It is well known in Huntington's disease that cytosine‐adenine‐guanine expansion and age at study entry are predictive of the timing of motor diagnosis. The goal of this study was to assess whether additional motor, imaging, cognitive, functional, psychiatric, and demographic variables measured at study entry increased the ability to predict the risk of motor diagnosis over 12 years. Methods One thousand seventy‐eight Huntington's disease gene–expanded carriers (64% female) from the Neurobiological Predictors of Huntington's Disease study were followed up for up to 12 y (mean = 5, standard deviation = 3.3) covering 2002 to 2014. No one had a motor diagnosis at study entry, but 225 (21%) carriers prospectively received a motor diagnosis. Analysis was performed with random survival forests, which is a machine learning method for right‐censored data. Results Adding 34 variables along with cytosine‐adenine‐guanine and age substantially increased predictive accuracy relative to cytosine‐adenine‐guanine and age alone. Adding six of the common motor and cognitive variables (total motor score, diagnostic confidence level, Symbol Digit Modalities Test, three Stroop tests) resulted in lower predictive accuracy than the full set, but still had twice the 5‐y predictive accuracy than when using cytosine‐adenine‐guanine and age alone. Additional analysis suggested interactions and nonlinear effects that were characterized in a post hoc Cox regression model. Conclusions Measurement of clinical variables can substantially increase the accuracy of predicting motor diagnosis over and above cytosine‐adenine‐guanine and age (and their interaction). Estimated probabilities can be used to characterize progression level and aid in future studies' sample selection. © 2015 The Authors. Movement Disorders published by Wiley Periodicals, Inc. on behalf of International Parkinson and Movement Disorder Society PMID:26340420
Lado, Bettina; Matus, Ivan; Rodríguez, Alejandra; Inostroza, Luis; Poland, Jesse; Belzile, François; del Pozo, Alejandro; Quincke, Martín; Castro, Marina; von Zitzewitz, Jarislav
2013-01-01
In crop breeding, the interest of predicting the performance of candidate cultivars in the field has increased due to recent advances in molecular breeding technologies. However, the complexity of the wheat genome presents some challenges for applying new technologies in molecular marker identification with next-generation sequencing. We applied genotyping-by-sequencing, a recently developed method to identify single-nucleotide polymorphisms, in the genomes of 384 wheat (Triticum aestivum) genotypes that were field tested under three different water regimes in Mediterranean climatic conditions: rain-fed only, mild water stress, and fully irrigated. We identified 102,324 single-nucleotide polymorphisms in these genotypes, and the phenotypic data were used to train and test genomic selection models intended to predict yield, thousand-kernel weight, number of kernels per spike, and heading date. Phenotypic data showed marked spatial variation. Therefore, different models were tested to correct the trends observed in the field. A mixed-model using moving-means as a covariate was found to best fit the data. When we applied the genomic selection models, the accuracy of predicted traits increased with spatial adjustment. Multiple genomic selection models were tested, and a Gaussian kernel model was determined to give the highest accuracy. The best predictions between environments were obtained when data from different years were used to train the model. Our results confirm that genotyping-by-sequencing is an effective tool to obtain genome-wide information for crops with complex genomes, that these data are efficient for predicting traits, and that correction of spatial variation is a crucial ingredient to increase prediction accuracy in genomic selection models. PMID:24082033
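The moving-means spatial adjustment can be sketched as follows: for each field plot, average the phenotypes of its neighbours and use that average as a covariate that absorbs spatial trends before genomic prediction. The layout, window size and data below are invented placeholders, not the trial design of this study:

```python
# Moving-mean spatial adjustment sketch: the average phenotype of neighbouring
# field plots is used as a covariate to absorb spatial trends before genomic
# prediction. Field layout, window and data are invented placeholders.
import numpy as np

rng = np.random.default_rng(5)
rows, cols = 20, 20
genetic = rng.normal(0, 1.0, (rows, cols))
spatial = np.add.outer(np.linspace(-1, 1, rows), np.linspace(-1, 1, cols))
pheno = genetic + spatial + rng.normal(0, 0.5, (rows, cols))

def moving_mean(y, radius=2):
    """Mean of neighbouring plots within +/- radius, excluding the plot itself."""
    out = np.empty_like(y)
    r, c = y.shape
    for i in range(r):
        for j in range(c):
            block = y[max(0, i - radius): i + radius + 1,
                      max(0, j - radius): j + radius + 1]
            out[i, j] = (block.sum() - y[i, j]) / (block.size - 1)
    return out

cov = moving_mean(pheno).ravel()
y = pheno.ravel()
beta = np.polyfit(cov, y, 1)[0]           # regress phenotype on the covariate
adjusted = y - beta * (cov - cov.mean())  # spatially adjusted phenotype

print("cor(adjusted, genetic):", round(np.corrcoef(adjusted, genetic.ravel())[0, 1], 3))
print("cor(raw,      genetic):", round(np.corrcoef(y, genetic.ravel())[0, 1], 3))
```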
EGASP: the human ENCODE Genome Annotation Assessment Project
Guigó, Roderic; Flicek, Paul; Abril, Josep F; Reymond, Alexandre; Lagarde, Julien; Denoeud, France; Antonarakis, Stylianos; Ashburner, Michael; Bajic, Vladimir B; Birney, Ewan; Castelo, Robert; Eyras, Eduardo; Ucla, Catherine; Gingeras, Thomas R; Harrow, Jennifer; Hubbard, Tim; Lewis, Suzanna E; Reese, Martin G
2006-01-01
Background We present the results of EGASP, a community experiment to assess the state-of-the-art in genome annotation within the ENCODE regions, which span 1% of the human genome sequence. The experiment had two major goals: the assessment of the accuracy of computational methods to predict protein coding genes; and the overall assessment of the completeness of the current human genome annotations as represented in the ENCODE regions. For the computational prediction assessment, eighteen groups contributed gene predictions. We evaluated these submissions against each other based on a 'reference set' of annotations generated as part of the GENCODE project. These annotations were not available to the prediction groups prior to the submission deadline, so that their predictions were blind and an external advisory committee could perform a fair assessment. Results The best methods had at least one gene transcript correctly predicted for close to 70% of the annotated genes. Nevertheless, the multiple transcript accuracy, taking into account alternative splicing, reached only approximately 40% to 50% accuracy. At the coding nucleotide level, the best programs reached an accuracy of 90% in both sensitivity and specificity. Programs relying on mRNA and protein sequences were the most accurate in reproducing the manually curated annotations. Experimental validation shows that only a very small percentage (3.2%) of the selected 221 computationally predicted exons outside of the existing annotation could be verified. Conclusion This is the first such experiment in human DNA, and we have followed the standards established in a similar experiment, GASP1, in Drosophila melanogaster. We believe the results presented here contribute to the value of ongoing large-scale annotation projects and should guide further experimental methods when being scaled up to the entire human genome sequence. PMID:16925836
Dusenberry, Michael W; Brown, Charles K; Brewer, Kori L
2017-02-01
The objective was to construct an artificial neural network (ANN) model that can predict the presence of acute CT findings with both high sensitivity and high specificity when applied to the population of patients ≥ 65 years of age who have incurred minor head injury after a fall. An ANN was created in the Python programming language using a population of 514 patients ≥ 65 years of age presenting to the ED with minor head injury after a fall. The patient dataset was divided into three parts: 60% for "training", 20% for "cross validation", and 20% for "testing". Sensitivity, specificity, positive and negative predictive values, and accuracy were determined by comparing the model's predictions to the actual outcomes for each patient. On the "cross validation" data, the model attained a sensitivity ("recall") of 100.00%, specificity of 78.95%, PPV ("precision") of 78.95%, NPV of 100.00%, and accuracy of 88.24% in detecting the presence of positive head CTs. On the "test" data, the model attained a sensitivity of 97.78%, specificity of 89.47%, PPV of 88.00%, NPV of 98.08%, and accuracy of 93.14% in detecting the presence of positive head CTs. ANNs show great potential for predicting CT findings in the population of patients ≥ 65 years of age presenting with minor head injury after a fall. As a good first step, the ANN showed comparable sensitivity, predictive values, and accuracy, with a much higher specificity than the existing decision rules in clinical usage for predicting head CTs with acute intracranial findings. Copyright © 2016 Elsevier Inc. All rights reserved.
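A minimal sketch of the reported diagnostic metrics follows, assuming binary ground-truth labels (1 = acute finding on CT) and binary model predictions; the toy labels are hypothetical and the network itself is not reproduced.

# Minimal sketch: diagnostic metrics from binary labels and predictions
# (hypothetical data, not the 514-patient cohort above).
import numpy as np

def diagnostic_metrics(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = int(np.sum((y_pred == 1) & (y_true == 1)))
    tn = int(np.sum((y_pred == 0) & (y_true == 0)))
    fp = int(np.sum((y_pred == 1) & (y_true == 0)))
    fn = int(np.sum((y_pred == 0) & (y_true == 1)))
    return {
        "sensitivity": tp / (tp + fn),          # recall
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),                  # precision
        "npv": tn / (tn + fn),
        "accuracy": (tp + tn) / len(y_true),
    }

print(diagnostic_metrics([1, 0, 1, 1, 0, 0], [1, 0, 1, 0, 0, 1]))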
Accuracy of ultrasound for the prediction of placenta accreta.
Bowman, Zachary S; Eller, Alexandra G; Kennedy, Anne M; Richards, Douglas S; Winter, Thomas C; Woodward, Paula J; Silver, Robert M
2014-08-01
Ultrasound has been reported to be greater than 90% sensitive for the diagnosis of accreta. Prior studies may be subject to bias because of single expert observers, suspicion for accreta, and knowledge of risk factors. We aimed to assess the accuracy of ultrasound for the prediction of accreta. Patients with accreta at a single academic center were matched to patients with placenta previa, but no accreta, by year of delivery. Ultrasound studies with views of the placenta were collected, deidentified, blinded to clinical history, and placed in random sequence. Six investigators prospectively interpreted each study for the presence of accreta and findings reported to be associated with its diagnosis. Sensitivity, specificity, positive predictive value, negative predictive value, and accuracy were calculated. Characteristics of accurate findings were compared using univariate and multivariate analyses. Six investigators examined 229 ultrasound studies from 55 patients with accreta and 56 controls for 1374 independent observations. A diagnosis was assigned in 1205/1374 studies (87.7% overall; 90% of controls, 84.9% of cases). There were 371 (27.0%) true positives, 81 (5.9%) false positives, 533 (38.8%) true negatives, 220 (16.0%) false negatives, and 169 (12.3%) with an uncertain diagnosis. Sensitivity, specificity, positive predictive value, negative predictive value, and accuracy were 53.5%, 88.0%, 82.1%, 64.8%, and 64.8%, respectively. In multivariate analysis, true positives were more likely to have placental lacunae (odds ratio [OR], 1.5; 95% confidence interval [CI], 1.4-1.6), loss of retroplacental clear space (OR, 2.4; 95% CI, 1.1-4.9), or abnormalities on color Doppler (OR, 2.1; 95% CI, 1.8-2.4). Ultrasound for the prediction of placenta accreta may not be as sensitive as previously described. Copyright © 2014 Mosby, Inc. All rights reserved.
Prediction of skin sensitization potency using machine learning approaches.
Zang, Qingda; Paris, Michael; Lehmann, David M; Bell, Shannon; Kleinstreuer, Nicole; Allen, David; Matheson, Joanna; Jacobs, Abigail; Casey, Warren; Strickland, Judy
2017-07-01
The replacement of animal use in testing for regulatory classification of skin sensitizers is a priority for US federal agencies that use data from such testing. Machine learning models that classify substances as sensitizers or non-sensitizers without using animal data have been developed and evaluated. Because some regulatory agencies require that sensitizers be further classified into potency categories, we developed statistical models to predict skin sensitization potency for murine local lymph node assay (LLNA) and human outcomes. Input variables for our models included six physicochemical properties and data from three non-animal test methods: direct peptide reactivity assay; human cell line activation test; and KeratinoSens™ assay. Models were built to predict three potency categories using four machine learning approaches and were validated using external test sets and leave-one-out cross-validation. A one-tiered strategy modeled all three categories of response together while a two-tiered strategy modeled sensitizer/non-sensitizer responses and then classified the sensitizers as strong or weak sensitizers. The two-tiered model using the support vector machine with all assay and physicochemical data inputs provided the best performance, yielding accuracy of 88% for prediction of LLNA outcomes (120 substances) and 81% for prediction of human test outcomes (87 substances). The best one-tiered model predicted LLNA outcomes with 78% accuracy and human outcomes with 75% accuracy. By comparison, the LLNA predicts human potency categories with 69% accuracy (60 of 87 substances correctly categorized). These results suggest that computational models using non-animal methods may provide valuable information for assessing skin sensitization potency. Copyright © 2017 John Wiley & Sons, Ltd.
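A minimal sketch of the two-tiered idea is given below using scikit-learn support vector machines on randomly generated features; the feature layout (six physicochemical plus three assay variables) follows the description above, but the data, kernels and labels are hypothetical placeholders, not the authors' models.

# Minimal sketch of a two-tiered potency strategy (hypothetical data and models).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.normal(size=(120, 9))               # e.g. 6 physicochemical + 3 assay features
potency = rng.integers(0, 3, size=120)      # 0 = non-sensitizer, 1 = weak, 2 = strong

# Tier 1: sensitizer (1) vs non-sensitizer (0)
tier1 = SVC(kernel="rbf").fit(X, (potency > 0).astype(int))

# Tier 2: strong vs weak, trained only on the sensitizers
sens = potency > 0
tier2 = SVC(kernel="rbf").fit(X[sens], (potency[sens] == 2).astype(int))

def predict_potency(x):
    x = np.atleast_2d(x)
    if tier1.predict(x)[0] == 0:
        return "non-sensitizer"
    return "strong" if tier2.predict(x)[0] == 1 else "weak"

print(predict_potency(X[0]))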
Elminir, Hamdy K; Own, Hala S; Azzam, Yosry A; Riad, A M
2008-03-28
This paper describes an on-going research effort to assess the applicability of artificial intelligence techniques for predicting the local noon erythemal UV irradiance in the plain areas of Egypt. To this end, we use the bootstrap aggregating (bagging) algorithm to improve the prediction accuracy obtained with a multi-layer perceptron (MLP) network. The results showed that the overall prediction accuracy of the MLP network alone was only 80.9%; when the bagging algorithm was used, the accuracy reached 94.8%, an improvement of about 13.9 percentage points. This improvement demonstrates the efficiency of the bagging procedure, which may serve as a promising tool, at least for the plain areas of Egypt.
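A minimal sketch of the bagging idea with scikit-learn stand-ins appears below; the inputs, target and network size are hypothetical, since the original MLP architecture and the Egyptian irradiance data are not reproduced here.

# Minimal sketch: bagging an MLP regressor (hypothetical data and hyperparameters).
import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
X = rng.uniform(size=(300, 5))                                        # toy meteorological inputs
y = 10 * X[:, 0] + 5 * np.sin(6 * X[:, 1]) + rng.normal(0, 0.5, 300)  # toy irradiance target

base = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
bagged = BaggingRegressor(estimator=base, n_estimators=25, random_state=0)
bagged.fit(X[:240], y[:240])
print("held-out R^2:", round(bagged.score(X[240:], y[240:]), 3))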
Contingency Table Browser - prediction of early stage protein structure.
Kalinowska, Barbara; Krzykalski, Artur; Roterman, Irena
2015-01-01
The Early Stage (ES) intermediate represents the starting structure in protein folding simulations based on the Fuzzy Oil Drop (FOD) model. The accuracy of FOD predictions is greatly dependent on the accuracy of the chosen intermediate. A suitable intermediate can be constructed using the sequence-structure relationship information contained in the so-called contingency table; this table expresses the likelihood of encountering various structural motifs for each tetrapeptide fragment in the amino acid sequence. The limited accuracy with which such structures could previously be predicted provided the motivation for a more in-depth study of the contingency table itself. The Contingency Table Browser is a tool which can visualize, search and analyze the table. Our work presents possible applications of the Contingency Table Browser, among them the analysis of specific protein sequences from the point of view of their structural ambiguity.
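To make the idea of the contingency table concrete, the short sketch below counts how often each structural-motif label is observed for each tetrapeptide in a set of sequence-structure pairs; the motif alphabet, the convention of assigning the motif at the fragment's first residue, and the toy record are assumptions for illustration, not the FOD implementation.

# Minimal sketch: building a tetrapeptide-vs-structural-motif contingency table
# (toy record and motif labels; not the actual FOD/ES data).
from collections import defaultdict

def build_contingency_table(records):
    """records: iterable of (sequence, motif_string) pairs of equal length."""
    table = defaultdict(lambda: defaultdict(int))
    for seq, motifs in records:
        for i in range(len(seq) - 3):
            tetra = seq[i:i + 4]
            table[tetra][motifs[i]] += 1     # motif taken at the fragment's first residue (assumed)
    return table

records = [("MKTAYIAKQR", "CCHHHHHHEC")]     # hypothetical sequence/motif pair
table = build_contingency_table(records)
print({k: dict(v) for k, v in table.items()})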
Pinder, John E; Rowan, David J; Rasmussen, Joseph B; Smith, Jim T; Hinton, Thomas G; Whicker, F W
2014-08-01
Data from published studies and World Wide Web sources were combined to produce and test a regression model to predict Cs concentration ratios for freshwater fish species. The accuracies of predicted concentration ratios, which were computed using 1) species trophic levels obtained from random resampling of known food items and 2) K concentrations in the water for 207 fish from 44 species and 43 locations, were tested against independent observations of ratios for 57 fish from 17 species from 25 locations. Accuracy was assessed as the percent of observed to predicted ratios within factors of 2 or 3. Conservatism, expressed as the lack of under-prediction, was assessed as the percent of observed to predicted ratios that were less than 2 or less than 3. The model's median observed to predicted ratio was 1.26, which was not significantly different from 1, and 50% of the ratios were between 0.73 and 1.85. The percentages of ratios within factors of 2 or 3 were 67 and 82%, respectively. The percentages of ratios that were <2 or <3 were 79 and 88%, respectively. An example for Perca fluviatilis demonstrated that increased prediction accuracy could be obtained when more detailed knowledge of diet was available to estimate trophic level. Copyright © 2014 Elsevier Ltd. All rights reserved.
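The factor-of-2 and factor-of-3 accuracy criterion used above can be written in a few lines; the observed and predicted concentration ratios below are illustrative numbers, not the study data.

# Minimal sketch: share of observed/predicted ratios within a factor of 2 or 3
# (illustrative values, not the 57-fish test set).
import numpy as np

observed = np.array([1200.0, 800.0, 450.0, 2500.0, 95.0])
predicted = np.array([900.0, 850.0, 700.0, 1000.0, 50.0])
ratio = observed / predicted

within_2 = np.mean((ratio >= 0.5) & (ratio <= 2.0)) * 100
within_3 = np.mean((ratio >= 1 / 3) & (ratio <= 3.0)) * 100
under_2 = np.mean(ratio < 2.0) * 100          # conservatism: lack of under-prediction
print(f"within factor of 2: {within_2:.0f}%, within factor of 3: {within_3:.0f}%, ratio < 2: {under_2:.0f}%")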
Meher, Prabina K.; Sahu, Tanmaya K.; Gahoi, Shachi; Rao, Atmakuri R.
2018-01-01
Heat shock proteins (HSPs) play a pivotal role in cell growth and variability. Since conventional approaches are expensive and voluminous protein sequence information is available in the post-genomic era, development of an automated and accurate computational tool is highly desirable for prediction of HSPs, their families and sub-types. Thus, we propose a computational approach for reliable prediction of all these components in a single framework and with higher accuracy as well. The proposed approach achieved an overall accuracy of ~84% in predicting HSPs, ~97% in predicting six different families of HSPs, and ~94% in predicting four types of DnaJ proteins, on benchmark datasets. The developed approach also achieved higher accuracy than most of the existing approaches. For easy prediction of HSPs by experimental scientists, a user-friendly web server ir-HSP is made freely accessible at http://cabgrid.res.in:8080/ir-hsp. The ir-HSP was further evaluated for proteome-wide identification of HSPs by using proteome datasets of eight different species, and ~50% of the predicted HSPs in each species were found to be annotated with InterPro HSP families/domains. Thus, the developed computational method is expected to supplement the currently available approaches for prediction of HSPs, to the extent of their families and sub-types. PMID:29379521
NASA Astrophysics Data System (ADS)
López, Ana María Camacho; Regueras, José María Gutiérrez
2017-10-01
The automotive industry's new goals related to environmental concerns, the reduction of fuel emissions and safety requirements have driven new designs whose main objective is reducing weight. This can be achieved through new materials such as nano-structured materials, fibre-reinforced composites or higher-strength steels, among others. Within the last group, Advanced High Strength Steels (AHSS) and, in particular, dual-phase steels hold a prominent position. However, despite their special characteristics, they present manufacturability issues such as springback, splits and cracks, among others. This work focuses on the deep drawing process of rectangular shapes, a very common forming operation used to manufacture several automotive parts such as oil pans and cases. Two of the main parameters in this process which directly affect the characteristics of the final product are blank thickness (t) and die radius (Rd). The influence of t and Rd on the formability of dual-phase steels has been analysed using finite element modelling and simulation, considering values typically used in industrial manufacturing for a wide range of dual-phase steels; specifically, the influence of these parameters on the percentage of thickness reduction pt(%), an important value for parts manufactured by deep drawing operations, which affects their integrity and service behaviour. The Modified Mohr-Coulomb (MMC) criterion has been used to obtain Fracture Forming Limit Diagrams (FFLD) which take into account an important failure mode in dual-phase steels: shear fracture. Finally, a relation between the thickness reduction percentage and the studied parameters has been established for dual-phase steels, yielding a collection of equations based on the Design of Experiments (D.O.E.) technique, which can be useful for predicting approximate results.
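A minimal sketch of the kind of D.O.E.-based relation mentioned above is given below: a regression of the thickness reduction percentage pt(%) on t, Rd and their interaction. The tabulated FE results, the chosen regression form and the resulting coefficients are purely illustrative, not the equations obtained in the study.

# Minimal sketch: fitting pt(%) = b0 + b1*t + b2*Rd + b3*t*Rd to hypothetical FE results.
import numpy as np

# hypothetical simulation results: (blank thickness t [mm], die radius Rd [mm], pt [%])
data = np.array([
    [0.8,  5.0, 22.0],
    [0.8, 10.0, 17.5],
    [1.2,  5.0, 19.0],
    [1.2, 10.0, 14.8],
    [1.5,  5.0, 17.2],
    [1.5, 10.0, 13.1],
])
t, Rd, pt = data.T
A = np.column_stack([np.ones_like(t), t, Rd, t * Rd])   # D.O.E.-style design matrix
coef, *_ = np.linalg.lstsq(A, pt, rcond=None)

def predict_pt(t_val, rd_val):
    return float(coef @ [1.0, t_val, rd_val, t_val * rd_val])

print(f"predicted pt(%) at t = 1.0 mm, Rd = 8 mm: {predict_pt(1.0, 8.0):.1f}")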
Impact Of The Material Variability On The Stamping Process: Numerical And Analytical Analysis
NASA Astrophysics Data System (ADS)
Ledoux, Yann; Sergent, Alain; Arrieux, Robert
2007-05-01
Finite element simulation is a very useful tool in the deep drawing industry. It is used in particular for the development and validation of new stamping tools, allowing the cost and time of tooling design and set-up to be reduced. However, one of the main difficulties in obtaining good agreement between the simulation and the real process comes from the definition of the numerical conditions (mesh, punch travel speed, boundary conditions, …) and of the parameters that model the material behavior. Indeed, in the press shop, when the sheet batch changes, a variation of the formed part geometry is often observed, reflecting the variability of the material properties between these batches. This variability is probably one of the main sources of process deviation when the process is set up. That is why it is important to study the influence of material data variation on the geometry of a classical stamped part. The chosen geometry is an omega-shaped part because of its simplicity and because it is representative of automotive parts (car body reinforcement). Moreover, it shows significant springback deviations. An isotropic behaviour law is assumed. The impact of statistical deviations of the three law coefficients characterizing the material and of the friction coefficient around their nominal values is tested. A Gaussian distribution is assumed, and their impact on the geometry variation is studied by FE simulation. Another approach is also envisaged, consisting in modeling the process variability by a mathematical model: as a function of the variability of the input parameters, an analytical model is defined that gives the part geometry variability around the nominal shape. These two approaches allow the process capability to be predicted as a function of the material parameter variability.
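A minimal sketch of the statistical approach follows: Gaussian scatter on the hardening-law coefficients and the friction coefficient is propagated through a surrogate model of the springback angle. The Hollomon-type parameters, standard deviations and surrogate sensitivities are assumed values, standing in for the FE model of the omega part.

# Minimal sketch: Monte Carlo propagation of Gaussian material/friction scatter
# through a hypothetical linear surrogate of the springback angle.
import numpy as np

rng = np.random.default_rng(3)
n = 5000

# assumed nominal values and standard deviations: K [MPa], n (hardening exponent), mu
nominal = np.array([600.0, 0.20, 0.12])
sigma   = np.array([ 20.0, 0.01, 0.01])
samples = rng.normal(nominal, sigma, size=(n, 3))

def springback_angle(K, n_hard, mu):
    """Hypothetical linearized surrogate, standing in for the FE model."""
    return 4.0 + 0.004 * (K - 600.0) + 15.0 * (n_hard - 0.20) - 8.0 * (mu - 0.12)

angles = springback_angle(samples[:, 0], samples[:, 1], samples[:, 2])
print(f"springback angle: mean {angles.mean():.2f} deg, std {angles.std():.3f} deg")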
NASA Astrophysics Data System (ADS)
Yi, Peiyun; Deng, Yujun; Shu, Yunyi; Peng, Linfa
2018-08-01
Roll-to-roll (R2R) hot embossing is regarded as a cost-effective replication technology to fabricate microstructures on polymer films. However, the continuous and fast forming characteristics of the R2R hot embossing process limit material flow during the filling stage and result in significant springback during the demolding stage. To resolve this issue, this study proposed a novel R2R powder hot embossing process, which combines the merits of the continuous fabrication of R2R hot embossing and the near-net-shape forming of powder sintering, and also shortens the overall fabrication cycle from film to microstructures. First, the relation between the molten layer thickness and the processing parameters was discussed, and an analytical model was established to predict the feed of the polymeric powder during R2R powder hot embossing. Then, with the use of a micro-pyramid array mold, the impact of the process parameters, including mold temperature, feeding speed and applied force, on the geometrical dimensions of the patterned microstructures was discussed. Finally, based on the response surface analysis, a process window, in terms of a mold temperature of 132–145 °C, a feeding speed of 0.1–1.4 m/min and an applied force of 15–50 kgf, was determined for the continuous fabrication of completely filled micro-pyramid arrays with the R2R powder hot embossing process. This research demonstrated the feasibility and superiority of the proposed R2R powder hot embossing process in continuously fabricating micropatterned structures on polymeric films.
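As a small illustration of how the reported process window can be used downstream, the check below encodes the ranges given above (mold temperature, feeding speed, applied force); in the study itself the window was derived from a response-surface analysis, which is not reproduced here.

# Minimal sketch: membership test for the reported R2R powder hot embossing window.
def in_process_window(mold_temp_c, feed_speed_m_min, force_kgf):
    """True if the parameters lie inside the window reported above."""
    return (132.0 <= mold_temp_c <= 145.0
            and 0.1 <= feed_speed_m_min <= 1.4
            and 15.0 <= force_kgf <= 50.0)

print(in_process_window(138.0, 0.8, 30.0))   # inside the window
print(in_process_window(150.0, 0.8, 30.0))   # outside: mold temperature too high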