Sample records for simple prediction method

  1. A reexamination of the use of simple concepts for predicting the shape and location of detached shock waves

    NASA Technical Reports Server (NTRS)

    Love, Eugene S

    1957-01-01

    A reexamination has been made of the use of simple concepts for predicting the shape and location of detached shock waves. The results show that simple concepts and modifications of existing methods can yield good predictions for many nose shapes and for a wide range of Mach numbers.

  2. A Simple Microsoft Excel Method to Predict Antibiotic Outbreaks and Underutilization.

    PubMed

    Miglis, Cristina; Rhodes, Nathaniel J; Avedissian, Sean N; Zembower, Teresa R; Postelnick, Michael; Wunderink, Richard G; Sutton, Sarah H; Scheetz, Marc H

    2017-07-01

    Benchmarking strategies are needed to promote the appropriate use of antibiotics. We have adapted a simple regressive method in Microsoft Excel that is easily implementable and creates predictive indices. This method trends consumption over time and can identify periods of over- and underuse at the hospital level. Infect Control Hosp Epidemiol 2017;38:860-862.
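
    A minimal sketch of the trend-and-flag idea this abstract describes, in Python rather than Excel; the data, the two-sided 2-SD band, and all variable names are our own illustrative assumptions, not the authors' workbook:

    ```python
    # Fit a linear trend to monthly antibiotic consumption and flag months that
    # fall outside a +/-2 SD band around the fit (candidate outbreaks/underuse).
    import numpy as np

    rng = np.random.default_rng(0)
    months = np.arange(24)                           # 24 months of hypothetical data
    use = 100 + 0.5 * months + rng.normal(0, 5, 24)  # e.g. DDD per 1000 patient-days

    slope, intercept = np.polyfit(months, use, 1)    # simple linear regression
    fitted = intercept + slope * months
    resid_sd = np.std(use - fitted, ddof=2)

    print("possible overuse months:", months[use > fitted + 2 * resid_sd])
    print("possible underuse months:", months[use < fitted - 2 * resid_sd])
    ```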

  3. Interspecies scaling and prediction of human clearance: comparison of small- and macro-molecule drugs

    PubMed Central

    Huh, Yeamin; Smith, David E.; Feng, Meihau Rose

    2014-01-01

    Human clearance prediction for small- and macro-molecule drugs was evaluated and compared using various scaling methods and statistical analysis. Human clearance is generally well predicted using single or multiple species simple allometry for macro- and small-molecule drugs excreted renally. The prediction error is higher for hepatically eliminated small molecules using single or multiple species simple allometry scaling, and it appears that the prediction error is mainly associated with drugs with a low hepatic extraction ratio (Eh). The error in human clearance prediction for hepatically eliminated small molecules was reduced using scaling methods with a correction for maximum life span (MLP) or brain weight (BRW). Human clearance of both small- and macro-molecule drugs is well predicted using the monkey liver blood flow method. Predictions using liver blood flow from other species did not work as well, especially for the small-molecule drugs. PMID:21892879
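
    As a worked illustration of the "simple allometry" scaling mentioned above (CL = a * BW^b, fitted on the log-log scale and extrapolated to a 70-kg human); the species values below are invented, and the MLP/BRW corrections would multiply clearance by maximum life span or brain weight before the same fit:

    ```python
    # Single-exponent allometric scaling of clearance across species (toy data).
    import numpy as np

    bw = np.array([0.02, 0.25, 2.5, 5.0, 12.0])       # body weights, kg (mouse..dog)
    cl = np.array([0.08, 0.60, 3.5, 6.0, 12.0])       # clearances, L/h (hypothetical)

    b, log_a = np.polyfit(np.log(bw), np.log(cl), 1)  # slope b, intercept ln(a)
    cl_human = np.exp(log_a) * 70.0 ** b              # extrapolate to 70 kg
    print(f"exponent b = {b:.2f}, predicted human CL = {cl_human:.1f} L/h")
    ```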

  4. Comparison on genomic predictions using three GBLUP methods and two single-step blending methods in the Nordic Holstein population

    PubMed Central

    2012-01-01

    Background A single-step blending approach allows genomic prediction using information of genotyped and non-genotyped animals simultaneously. However, the combined relationship matrix in a single-step method may need to be adjusted because marker-based and pedigree-based relationship matrices may not be on the same scale. The same may apply when a GBLUP model includes both genomic breeding values and residual polygenic effects. The objective of this study was to compare single-step blending methods and GBLUP methods with and without adjustment of the genomic relationship matrix for genomic prediction of 16 traits in the Nordic Holstein population. Methods The data consisted of de-regressed proofs (DRP) for 5 214 genotyped and 9 374 non-genotyped bulls. The bulls were divided into a training and a validation population by birth date, October 1, 2001. Five approaches for genomic prediction were used: 1) a simple GBLUP method, 2) a GBLUP method with a polygenic effect, 3) an adjusted GBLUP method with a polygenic effect, 4) a single-step blending method, and 5) an adjusted single-step blending method. In the adjusted GBLUP and single-step methods, the genomic relationship matrix was adjusted for the difference of scale between the genomic and the pedigree relationship matrices. A set of weights on the pedigree relationship matrix (ranging from 0.05 to 0.40) was used to build the combined relationship matrix in the single-step blending method and the GBLUP method with a polygenic effect. Results Averaged over the 16 traits, reliabilities of genomic breeding values predicted using the GBLUP method with a polygenic effect (relative weight of 0.20) were 0.3% higher than reliabilities from the simple GBLUP method (without a polygenic effect). The adjusted single-step blending and original single-step blending methods (relative weight of 0.20) had average reliabilities that were 2.1% and 1.8% higher than the simple GBLUP method, respectively. In addition, the GBLUP method with a polygenic effect led to less bias of genomic predictions than the simple GBLUP method, and both single-step blending methods yielded less bias of predictions than all GBLUP methods. Conclusions The single-step blending method is an appealing approach for practical genomic prediction in dairy cattle. Genomic prediction from the single-step blending method can be improved by adjusting the scale of the genomic relationship matrix. PMID:22455934
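
    A minimal sketch of the matrix adjustment and blending steps the abstract refers to; the rescaling shown (matching the diagonal and overall means of G to those of the pedigree block) is one common convention, and the matrices here are stand-ins, not the study's data:

    ```python
    # Adjust a genomic relationship matrix G to the scale of the pedigree matrix
    # A22, then blend them with a relative weight w on the pedigree matrix.
    import numpy as np

    rng = np.random.default_rng(1)
    M = rng.standard_normal((5, 20))
    G = M @ M.T / 20                       # stand-in genomic relationship matrix
    A22 = np.eye(5) + 0.1                  # stand-in pedigree relationships (genotyped block)

    # Solve a + b*mean(G) = mean(A22) and a + b*mean(diag G) = mean(diag A22).
    b = (np.mean(np.diag(A22)) - np.mean(A22)) / (np.mean(np.diag(G)) - np.mean(G))
    a = np.mean(A22) - b * np.mean(G)
    G_adj = a + b * G

    w = 0.20                               # relative weight on the pedigree matrix
    G_w = (1 - w) * G_adj + w * A22        # combined matrix for blending / polygenic GBLUP
    ```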

  5. A study of the limitations of linear theory methods as applied to sonic boom calculations

    NASA Technical Reports Server (NTRS)

    Darden, Christine M.

    1990-01-01

    Current sonic boom minimization theories have been reviewed to emphasize the capabilities and flexibilities of the methods. Flexibility is important because the designer must meet optimized area constraints while reducing the impact on vehicle aerodynamic performance. Preliminary comparisons of sonic booms predicted for two Mach 3 concepts illustrate the benefits of shaping. Finally, for very simple bodies of revolution, sonic boom predictions were made using two methods - a modified linear theory method and a nonlinear method - for signature shapes that were both farfield N-waves and midfield waves. Preliminary analysis on these simple bodies verified that current modified linear theory prediction methods become inadequate for predicting midfield signatures at Mach numbers above 3. The importance of impulse in the sonic boom disturbance, and the importance of three-dimensional effects that could not be simulated with bodies of revolution, will determine the validity of current modified linear theory methods in predicting midfield signatures at lower Mach numbers.

  6. A simple method for HPLC retention time prediction: linear calibration using two reference substances.

    PubMed

    Sun, Lei; Jin, Hong-Yu; Tian, Run-Tao; Wang, Ming-Juan; Liu, Li-Na; Ye, Liu-Ping; Zuo, Tian-Tian; Ma, Shuang-Cheng

    2017-01-01

    Analysis of related substances in pharmaceutical chemicals and of multi-components in traditional Chinese medicines requires a large number of reference substances to identify the chromatographic peaks accurately, but reference substances are costly. Thus, the relative retention (RR) method has been widely adopted in pharmacopoeias and in the literature for characterizing the HPLC behavior of compounds whose reference substances are unavailable. The problem is that the RR is difficult to reproduce on different columns, owing to the error between measured retention time (tR) and predicted tR in some cases. Therefore, it is useful to develop an alternative, simple method for accurate prediction of tR. In the present study, based on the thermodynamic theory of HPLC, a method named linear calibration using two reference substances (LCTRS) was proposed. The method includes three steps: two-point prediction, validation by multiple-point regression, and sequential matching. The tR of compounds on an HPLC column can be calculated from standard retention times and a linear relationship. The method was validated with two medicines on 30 columns. It was demonstrated that the LCTRS method is simple, yet more accurate and more robust across different HPLC columns than the RR method. Hence, quality standards using the LCTRS method are easy to reproduce in different laboratories, with a lower cost of reference substances.
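
    A sketch of the two-point calibration step of LCTRS as we read it from the abstract: map the standard retention times of two reference substances onto the current column, and predict other compounds from the resulting line. All retention times below are invented:

    ```python
    # Two reference substances: standard tR (reference column) vs measured tR here.
    ref_std = (5.2, 18.7)    # standard retention times, min
    ref_obs = (5.6, 19.9)    # measured on the current column, min

    slope = (ref_obs[1] - ref_obs[0]) / (ref_std[1] - ref_std[0])
    intercept = ref_obs[0] - slope * ref_std[0]

    def predict_tr(t_std):
        """Predict a compound's tR on the current column from its standard tR."""
        return intercept + slope * t_std

    print(f"{predict_tr(12.4):.2f} min")   # analyte with standard tR = 12.4 min
    ```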

  7. Examination of multi-model ensemble seasonal prediction methods using a simple climate system

    NASA Astrophysics Data System (ADS)

    Kang, In-Sik; Yoo, Jin Ho

    2006-02-01

    A simple climate model was designed as a proxy for the real climate system, and a number of prediction models were generated by slightly perturbing the physical parameters of the simple model. A set of long (240 years) historical hindcast predictions were performed with various prediction models, which are used to examine various issues of multi-model ensemble seasonal prediction, such as the best ways of blending multi-models and the selection of models. Based on these results, we suggest a feasible way of maximizing the benefit of using multi models in seasonal prediction. In particular, three types of multi-model ensemble prediction systems, i.e., the simple composite, superensemble, and the composite after statistically correcting individual predictions (corrected composite), are examined and compared to each other. The superensemble has more of an overfitting problem than the others, especially for the case of small training samples and/or weak external forcing, and the corrected composite produces the best prediction skill among the multi-model systems.
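
    The three combination schemes compared above can be sketched in a few lines; the synthetic "truth" and biased model errors below are our own stand-ins for the simple-model hindcasts:

    ```python
    # Simple composite, corrected composite, and superensemble on synthetic hindcasts.
    import numpy as np

    rng = np.random.default_rng(2)
    T, M = 200, 4
    truth = np.sin(np.linspace(0, 20, T))
    preds = truth + rng.normal(0, 0.5, (M, T)) + rng.normal(0, 0.3, (M, 1))  # biased models
    train, test = slice(0, 150), slice(150, None)

    simple = preds.mean(axis=0)                         # 1) simple composite

    corrected = np.empty_like(preds)                    # 2) correct each model, then average
    for m in range(M):
        b, a = np.polyfit(preds[m, train], truth[train], 1)
        corrected[m] = a + b * preds[m]
    corr_comp = corrected.mean(axis=0)

    X = np.c_[np.ones(150), preds[:, train].T]          # 3) superensemble (multiple regression)
    coef, *_ = np.linalg.lstsq(X, truth[train], rcond=None)
    super_pred = np.c_[np.ones(50), preds[:, test].T] @ coef

    for name, p in [("simple", simple[test]), ("corrected", corr_comp[test]), ("super", super_pred)]:
        print(name, round(np.corrcoef(p, truth[test])[0, 1], 3))
    ```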

  8. Prediction of thermal cycling induced matrix cracking

    NASA Technical Reports Server (NTRS)

    McManus, Hugh L.

    1992-01-01

    Thermal fatigue has been observed to cause matrix cracking in laminated composite materials. A method is presented to predict transverse matrix cracks in composite laminates subjected to cyclic thermal load. Shear lag stress approximations and a simple energy-based fracture criterion are used to predict crack densities as a function of temperature. Prediction of crack densities as a function of thermal cycling is accomplished by assuming that fatigue degrades the material's inherent resistance to cracking. The method is implemented as a computer program. A simple experiment provides data on progressive cracking of a laminate with decreasing temperature. Existing data on thermal fatigue are also used. Correlations of the analytical predictions to the data are very good. A parametric study using the analytical method is presented which provides insight into material behavior under cyclical thermal loads.

  9. A simple method to predict regional fish abundance: an example in the McKenzie River Basin, Oregon

    Treesearch

    D.J. McGarvey; J.M. Johnston

    2011-01-01

    Regional assessments of fisheries resources are increasingly called for, but tools with which to perform them are limited. We present a simple method that can be used to estimate regional carrying capacity and apply it to the McKenzie River Basin, Oregon. First, we use a macroecological model to predict trout densities within small, medium, and large streams in the...

  10. Prediction of Transport Properties of Permeants through Polymer Films. A Simple Gravimetric Experiment.

    ERIC Educational Resources Information Center

    Britton, L. N.; And Others

    1988-01-01

    Considers the applicability of the simple immersion/weight-gain method for predicting diffusion coefficients, solubilities, and permeation rates of chemicals in polymers that do not undergo physical and chemical deterioration. Presents the theoretical background, procedures, and typical results related to this activity. (CW)
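
    One standard reduction of such weight-gain data (our illustration, not necessarily the paper's exact treatment) uses the early-time Fickian result Mt/Minf = (4/L) * sqrt(D*t/pi) for a film of thickness L exposed on both faces, so D follows from the slope of Mt/Minf against sqrt(t):

    ```python
    # Extract a diffusion coefficient from immersion weight-gain data (toy numbers).
    import numpy as np

    L = 0.05                                    # film thickness, cm
    t = np.array([60, 240, 540, 960, 1500.0])   # immersion times, s
    m_ratio = np.array([0.10, 0.20, 0.30, 0.40, 0.50])  # Mt/Minf, early linear regime

    k = np.polyfit(np.sqrt(t), m_ratio, 1)[0]   # slope of Mt/Minf vs sqrt(t)
    D = np.pi * (k * L / 4) ** 2                # cm^2/s
    print(f"D = {D:.2e} cm^2/s")
    ```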

  11. Simple prediction method of lumbar lordosis for planning of lumbar corrective surgery: radiological analysis in a Korean population.

    PubMed

    Lee, Chong Suh; Chung, Sung Soo; Park, Se Jun; Kim, Dong Min; Shin, Seong Kee

    2014-01-01

    This study aimed at deriving a lordosis predictive equation using the pelvic incidence and at establishing a simple prediction method of lumbar lordosis for planning lumbar corrective surgery in Asians. Eighty-six asymptomatic volunteers were enrolled in the study. The maximal lumbar lordosis (MLL), lower lumbar lordosis (LLL), pelvic incidence (PI), and sacral slope (SS) were measured. The correlations between the parameters were analyzed using Pearson correlation analysis. Predictive equations of lumbar lordosis through simple regression analysis of the parameters and simple predictive values of lumbar lordosis using PI were derived. The PI strongly correlated with the SS (r = 0.78), and a strong correlation was found between the SS and LLL (r = 0.89), and between the SS and MLL (r = 0.83). Based on these correlations, the predictive equations of lumbar lordosis were found: SS = 0.80 + 0.74 PI (r = 0.78, R² = 0.61); LLL = 5.20 + 0.87 SS (r = 0.89, R² = 0.80); MLL = 17.41 + 0.96 SS (r = 0.83, R² = 0.68). When PI was between 30° and 35°, 40° and 50°, and 55° and 60°, the equations predicted that MLL would be PI + 10°, PI + 5°, and PI, and that LLL would be PI - 5°, PI - 10°, and PI - 15°, respectively. This simple calculation method can provide a more appropriate and simpler prediction of lumbar lordosis for Asian populations. The prediction of lumbar lordosis should be used as a reference for surgeons planning to restore lumbar lordosis in corrective surgery.
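
    The banded rule reported above is simple enough to state as a helper function; this transcription (and the fallback to the regression chain outside the tabulated PI bands) is our own packaging of the abstract's numbers:

    ```python
    def predict_lordosis(pi_deg):
        """Return (MLL, LLL) in degrees predicted from pelvic incidence (PI)."""
        if 30 <= pi_deg <= 35:
            return pi_deg + 10, pi_deg - 5
        if 40 <= pi_deg <= 50:
            return pi_deg + 5, pi_deg - 10
        if 55 <= pi_deg <= 60:
            return pi_deg, pi_deg - 15
        ss = 0.80 + 0.74 * pi_deg          # outside the bands, use the fitted equations
        return 17.41 + 0.96 * ss, 5.20 + 0.87 * ss

    print(predict_lordosis(45))            # -> (50, 35)
    ```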

  12. A simple and efficient method for predicting protein-protein interaction sites.

    PubMed

    Higa, R H; Tozzi, C L

    2008-09-23

    Computational methods for predicting protein-protein interaction sites based on structural data are characterized by an accuracy between 70 and 80%. Some experimental studies indicate that only a fraction of the residues, forming clusters in the center of the interaction site, are energetically important for binding. In addition, the analysis of amino acid composition has shown that residues located in the center of the interaction site can be better discriminated from the residues in other parts of the protein surface. In the present study, we implement a simple method to predict interaction site residues exploiting this fact and show that it achieves a very competitive performance compared to other methods using the same dataset and criteria for performance evaluation (success rate of 82.1%).

  13. A simple method to predict body temperature of small reptiles from environmental temperature.

    PubMed

    Vickers, Mathew; Schwarzkopf, Lin

    2016-05-01

    To study behavioral thermoregulation, it is useful to deploy thermal sensors and physical models to collect the environmental temperatures used to predict organism body temperature. Many techniques involve expensive or numerous types of sensors (cast copper models, or temperature, humidity, radiation, and wind speed sensors) to collect the microhabitat data necessary to predict body temperatures. Expense and diversity of requisite sensors can limit sampling resolution and accessibility of these methods. We compare body temperature predictions of small lizards from iButtons, DS18B20 sensors, and simple copper models, in both laboratory and natural conditions. Our aim was to develop an inexpensive yet accurate method for body temperature prediction. Each method was applicable given appropriate parameterization of the heat transfer equation used. The simplest and cheapest method was DS18B20 sensors attached to a small recording computer. There was little if any deficit in precision or accuracy compared to other published methods. We show how the heat transfer equation can be parameterized, and it can also be used to predict body temperature from historically collected data, allowing strong comparisons between current and previous environmental temperatures using the most modern techniques. Our simple method uses very cheap sensors and loggers to extensively sample habitat temperature, improving our understanding of microhabitat structure and thermal variability with respect to small ectotherms. While our method was quite precise, we feel any potential loss in accuracy is offset by the increase in sample resolution, important as it is increasingly apparent that, particularly for small ectotherms, habitat thermal heterogeneity is the strongest influence on transient body temperature.
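
    A minimal sketch of the kind of heat-transfer parameterization the abstract alludes to, treating the body as a Newtonian heater/cooler driven by the logged environmental temperature; the rate constant and sensor readings are invented:

    ```python
    # Predict transient body temperature from logged environmental temperature.
    import numpy as np

    k = 0.15                                    # fitted exchange rate constant, 1/min
    te = np.array([28, 30, 33, 35, 34, 31.0])   # sensor temperatures, deg C, 1-min steps

    tb = np.empty_like(te)
    tb[0] = te[0]                               # start at equilibrium
    for i in range(1, len(te)):
        tb[i] = tb[i-1] + k * (te[i] - tb[i-1]) # body temperature lags the environment

    print(tb.round(2))
    ```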

  14. The IntFOLD server: an integrated web resource for protein fold recognition, 3D model quality assessment, intrinsic disorder prediction, domain prediction and ligand binding site prediction.

    PubMed

    Roche, Daniel B; Buenavista, Maria T; Tetchner, Stuart J; McGuffin, Liam J

    2011-07-01

    The IntFOLD server is a novel independent server that integrates several cutting edge methods for the prediction of structure and function from sequence. Our guiding principles behind the server development were as follows: (i) to provide a simple unified resource that makes our prediction software accessible to all and (ii) to produce integrated output for predictions that can be easily interpreted. The output for predictions is presented as a simple table that summarizes all results graphically via plots and annotated 3D models. The raw machine readable data files for each set of predictions are also provided for developers, which comply with the Critical Assessment of Methods for Protein Structure Prediction (CASP) data standards. The server comprises an integrated suite of five novel methods: nFOLD4, for tertiary structure prediction; ModFOLD 3.0, for model quality assessment; DISOclust 2.0, for disorder prediction; DomFOLD 2.0, for domain prediction; and FunFOLD 1.0, for ligand binding site prediction. Predictions from the IntFOLD server were found to be competitive in several categories in the recent CASP9 experiment. The IntFOLD server is available at the following web site: http://www.reading.ac.uk/bioinf/IntFOLD/.

  15. Using mean duration and variation of procedure times to plan a list of surgical operations to fit into the scheduled list time.

    PubMed

    Pandit, Jaideep J; Tavare, Aniket

    2011-07-01

    It is important that a surgical list is planned to utilise as much of the scheduled time as possible while not over-running, because this can lead to cancellation of operations. We wished to assess whether, theoretically, the known duration of individual operations could be used quantitatively to predict the likely duration of the operating list. In a university hospital setting, we first assessed the extent to which the current ad-hoc method of operating list planning was able to match the scheduled operating list times for 153 consecutive historical lists. Using receiver operating characteristic (ROC) curve analysis, we assessed the ability of an alternative method to predict operating list duration for the same operating lists. This method uses a simple formula: the sum of individual operation times and a pooled standard deviation of these times. We used the operating list duration estimated from this formula to generate a probability that the operating list would finish within its scheduled time. Finally, we applied the simple formula prospectively to 150 operating lists, 'shadowing' the current ad-hoc method, to confirm the predictive ability of the formula. The ad-hoc method was very poor at planning: 50% of historical operating lists were under-booked and 37% over-booked. In contrast, the simple formula predicted the correct outcome (under-run or over-run) for 76% of these operating lists. The calculated probability that a planned series of operations will over-run or under-run was found useful in developing an algorithm to adjust the planned cases optimally. In the prospective series, 65% of operating lists were over-booked and 10% were under-booked. The formula predicted the correct outcome for 84% of operating lists. A simple quantitative method of estimating operating list duration for a series of operations leads to an algorithm (readily created on an Excel spreadsheet, http://links.lww.com/EJA/A19) that can potentially improve operating list planning.
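
    A sketch of the probability calculation the formula implies: treat the list duration as approximately normal, with mean equal to the sum of the mean operation times and variance equal to the sum of their variances. The per-operation means and SDs below are invented (the authors' own spreadsheet is linked in the abstract):

    ```python
    # Probability that a planned list fits its scheduled time (normal approximation).
    from math import erf, sqrt

    ops = [(45, 12), (90, 25), (60, 15), (75, 20)]   # (mean, SD) per operation, min
    scheduled = 300                                  # scheduled list time, min

    mu = sum(m for m, s in ops)
    sd = sqrt(sum(s * s for m, s in ops))
    p_fit = 0.5 * (1 + erf((scheduled - mu) / (sd * sqrt(2))))  # normal CDF
    print(f"P(finish within {scheduled} min) = {p_fit:.2f}")
    ```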

  16. Improving Prediction Accuracy for WSN Data Reduction by Applying Multivariate Spatio-Temporal Correlation

    PubMed Central

    Carvalho, Carlos; Gomes, Danielo G.; Agoulmine, Nazim; de Souza, José Neuman

    2011-01-01

    This paper proposes a method based on multivariate spatial and temporal correlation to improve prediction accuracy in data reduction for Wireless Sensor Networks (WSN). Prediction of data not sent to the sink node is a technique used to save energy in WSNs by reducing the amount of data traffic. However, it may not be very accurate. Simulations were made involving simple linear regression and multiple linear regression functions to assess the performance of the proposed method. The results show a higher correlation between gathered inputs when compared to time, which is an independent variable widely used for prediction and forecasting. Prediction accuracy is lower when simple linear regression is used, whereas multiple linear regression is the most accurate one. In addition, our proposal outperforms some current solutions by about 50% in humidity prediction and 21% in light prediction. To the best of our knowledge, this is the first work to address prediction based on multivariate correlation for WSN data reduction. PMID:22346626

  17. Data-driven forecasting algorithms for building energy consumption

    NASA Astrophysics Data System (ADS)

    Noh, Hae Young; Rajagopal, Ram

    2013-04-01

    This paper introduces two forecasting methods for building energy consumption data recorded by smart meters at high resolution. For utility companies, it is important to reliably forecast the aggregate consumption profile to determine energy supply for the next day and prevent any crisis. The proposed methods forecast individual loads on the basis of their measurement history and weather data, without complicated models of the building systems. The first method is most efficient for very short-term prediction, such as a one-hour horizon, and uses a simple adaptive time-series model. For longer-term prediction, a nonparametric Gaussian process is applied to forecast day-ahead load profiles and their uncertainty bounds. These methods are computationally simple and adaptive, and thus suitable for analyzing large sets of data whose patterns change over time. The forecasting methods were applied to several sets of building energy consumption data for lighting and heating-ventilation-air-conditioning (HVAC) systems collected from a campus building at Stanford University. The measurements are collected every minute, and corresponding weather data are provided hourly. The results show that the proposed algorithms can predict energy consumption with high accuracy.

  18. Individual and population pharmacokinetic compartment analysis: a graphic procedure for quantification of predictive performance.

    PubMed

    Eksborg, Staffan

    2013-01-01

    Pharmacokinetic studies are important for optimizing drug dosing but require proper validation of the pharmacokinetic procedures used. However, simple and reliable statistical methods suitable for evaluating the predictive performance of pharmacokinetic analysis are essentially lacking. The aim of the present study was to construct and evaluate a graphic procedure for quantifying the predictive performance of individual and population pharmacokinetic compartment analysis. Original data from previously published pharmacokinetic compartment analyses after intravenous, oral, and epidural administration, and digitized data obtained from published scatter plots of observed vs predicted drug concentrations from population pharmacokinetic studies using the NPEM algorithm, the NONMEM computer program, and Bayesian forecasting procedures, were used to estimate predictive performance according to the proposed graphical method and by the method of Sheiner and Beal. The graphical plot proposed in the present paper proved to be a useful tool for evaluating the predictive performance of both individual and population compartment pharmacokinetic analysis. The proposed method is simple to use and gives valuable information concerning time- and concentration-dependent inaccuracies that might occur in individual and population pharmacokinetic compartment analysis. Predictive performance can be quantified by the fraction of concentration ratios within arbitrarily specified ranges, e.g. within the range 0.8-1.2.
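
    The proposed quantification reduces to a one-liner once observed and predicted concentrations are paired; the data below are invented:

    ```python
    # Fraction of predicted/observed concentration ratios inside a chosen band.
    import numpy as np

    observed = np.array([1.2, 3.4, 5.6, 2.1, 8.0])    # hypothetical concentrations
    predicted = np.array([1.1, 3.9, 5.2, 2.6, 7.4])

    ratios = predicted / observed
    within = np.mean((ratios >= 0.8) & (ratios <= 1.2))
    print(f"{within:.0%} of ratios within 0.8-1.2")
    ```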

  19. A three-dimensional FEM-DEM technique for predicting the evolution of fracture in geomaterials and concrete

    NASA Astrophysics Data System (ADS)

    Zárate, Francisco; Cornejo, Alejandro; Oñate, Eugenio

    2018-07-01

    This paper extends to three dimensions (3D), the computational technique developed by the authors in 2D for predicting the onset and evolution of fracture in a finite element mesh in a simple manner based on combining the finite element method and the discrete element method (DEM) approach (Zárate and Oñate in Comput Part Mech 2(3):301-314, 2015). Once a crack is detected at an element edge, discrete elements are generated at the adjacent element vertexes and a simple DEM mechanism is considered in order to follow the evolution of the crack. The combination of the DEM with simple four-noded linear tetrahedron elements correctly captures the onset of fracture and its evolution, as shown in several 3D examples of application.

  20. A Simple Plasma Retinol Isotope Ratio Method for Estimating β-Carotene Relative Bioefficacy in Humans: Validation with the Use of Model-Based Compartmental Analysis.

    PubMed

    Ford, Jennifer Lynn; Green, Joanne Balmer; Lietz, Georg; Oxley, Anthony; Green, Michael H

    2017-09-01

    Background: Provitamin A carotenoids are an important source of dietary vitamin A for many populations. Thus, accurate and simple methods for estimating carotenoid bioefficacy are needed to evaluate the vitamin A value of test solutions and plant sources. β-Carotene bioefficacy is often estimated from the ratio of the areas under plasma isotope response curves after subjects ingest labeled β-carotene and a labeled retinyl acetate reference dose [isotope reference method (IRM)], but to our knowledge, the method has not yet been evaluated for accuracy. Objectives: Our objectives were to develop and test a physiologically based compartmental model that includes both absorptive and postabsorptive β-carotene bioconversion and to use the model to evaluate the accuracy of the IRM and a simple plasma retinol isotope ratio [(RIR), labeled β-carotene-derived retinol/labeled reference-dose-derived retinol in one plasma sample] for estimating relative bioefficacy. Methods: We used model-based compartmental analysis (Simulation, Analysis and Modeling software) to develop and apply a model that provided known values for β-carotene bioefficacy. Theoretical data for 10 subjects were generated by the model and used to determine bioefficacy by RIR and IRM; predictions were compared with known values. We also applied RIR and IRM to previously published data. Results: Plasma RIR accurately predicted β-carotene relative bioefficacy at 14 d or later. IRM also accurately predicted bioefficacy by 14 d, except that, when there was substantial postabsorptive bioconversion, IRM underestimated bioefficacy. Based on our model, 1-d predictions of relative bioefficacy include absorptive plus a portion of early postabsorptive conversion. Conclusion: The plasma RIR is a simple tracer method that accurately predicts β-carotene relative bioefficacy based on analysis of one blood sample obtained at ≥14 d after co-ingestion of labeled β-carotene and retinyl acetate. The method also provides information about the contributions of absorptive and postabsorptive conversion to total bioefficacy if an additional sample is taken at 1 d. © 2017 American Society for Nutrition.

  1. Evaluation of SimpleTreat 4.0: Simulations of pharmaceutical removal in wastewater treatment plant facilities.

    PubMed

    Lautz, L S; Struijs, J; Nolte, T M; Breure, A M; van der Grinten, E; van de Meent, D; van Zelm, R

    2017-02-01

    In this study, the removal of pharmaceuticals from wastewater as predicted by SimpleTreat 4.0 was evaluated. Field data from the literature for 43 pharmaceuticals, measured in 51 different activated sludge WWTPs, were used. Based on reported influent concentrations, effluent concentrations were calculated with SimpleTreat 4.0 and compared to measured effluent concentrations. The model predicts effluent concentrations mostly within a factor of 10, using either WWTP-specific parameters or SimpleTreat default parameters, while it systematically underestimates concentrations in secondary sludge. This may be caused by unexpected sorption resulting from variability in WWTP operating conditions, QSAR applicability domain mismatch, and/or background concentrations prior to measurements. Moreover, variability in detection techniques and sampling methods can cause uncertainty in measured concentration levels. To find possible structural improvements, we also evaluated SimpleTreat 4.0 using several specific datasets with different degrees of uncertainty and variability. This evaluation verified that the most influential parameters for water effluent predictions were biodegradation and the hydraulic retention time. The results showed that model performance is highly dependent on the nature and quality, i.e. degree of uncertainty, of the data. The default values for reactor settings in SimpleTreat result in realistic predictions. Copyright © 2016 Elsevier Ltd. All rights reserved.

  2. On the calibration process of film dosimetry: OLS inverse regression versus WLS inverse prediction.

    PubMed

    Crop, F; Van Rompaye, B; Paelinck, L; Vakaet, L; Thierens, H; De Wagter, C

    2008-07-21

    The purpose of this study was both to put forward a statistically correct model for film calibration and to optimize this process. A reliable calibration is needed in order to perform accurate reference dosimetry with radiographic (Gafchromic) film. Sometimes an ordinary least squares simple linear (in the parameters) regression is applied to the dose-optical-density (OD) curve, with dose as a function of OD (inverse regression) or OD as a function of dose (inverse prediction). The application of a simple linear regression fit is an invalid method because the heteroscedasticity of the data is not taken into account. This could lead to erroneous results originating from the calibration process itself, and thus to a lower accuracy. In this work, we compare the ordinary least squares (OLS) inverse regression method with the correct weighted least squares (WLS) inverse prediction method for creating calibration curves. We found that the OLS inverse regression method could lead to a prediction bias of up to 7.3 cGy at 300 cGy and total prediction errors of 3% or more for Gafchromic EBT film. Application of the WLS inverse prediction method resulted in a maximum prediction bias of 1.4 cGy and total prediction errors below 2% in a 0-400 cGy range. We developed a Monte-Carlo-based process to optimize calibrations, depending on the needs of the experiment. This type of thorough analysis can lead to a higher accuracy for film dosimetry.
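
    A sketch of the contrast being drawn: fit the calibration as OD versus dose with weights inversely proportional to the (heteroscedastic) OD variance, then invert the fitted line to read dose from a measured OD. The calibration points and variance model are invented:

    ```python
    # WLS inverse prediction for a film calibration line (toy data).
    import numpy as np

    dose = np.array([0, 50, 100, 200, 300, 400.0])       # cGy
    od = np.array([0.05, 0.12, 0.19, 0.33, 0.46, 0.58])  # net optical density
    var = (0.01 + 0.02 * od) ** 2                        # noise grows with OD

    W = 1 / var
    X = np.c_[np.ones_like(dose), dose]
    beta = np.linalg.solve((X.T * W) @ X, X.T @ (W * od))  # weighted least squares

    def dose_from_od(od_meas):
        return (od_meas - beta[0]) / beta[1]             # invert the calibration

    print(f"{dose_from_od(0.40):.0f} cGy")
    ```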

  3. QSAR modelling using combined simple competitive learning networks and RBF neural networks.

    PubMed

    Sheikhpour, R; Sarram, M A; Rezaeian, M; Sheikhpour, E

    2018-04-01

    The aim of this study was to propose a QSAR modelling approach based on the combination of simple competitive learning (SCL) networks with radial basis function (RBF) neural networks for predicting the biological activity of chemical compounds. The proposed QSAR method consisted of two phases. In the first phase, an SCL network was applied to determine the centres of an RBF neural network. In the second phase, the RBF neural network was used to predict the biological activity of various phenols and Rho kinase (ROCK) inhibitors. The predictive ability of the proposed QSAR models was evaluated and compared with other QSAR models using external validation. The results of this study showed that the proposed QSAR modelling approach leads to better performances than other models in predicting the biological activity of chemical compounds. This indicated the efficiency of simple competitive learning networks in determining the centres of RBF neural networks.
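
    A compact sketch of the two phases described above, with winner-take-all competitive learning standing in for the SCL network and a least-squares output layer for the RBF readout; the data and hyperparameters are invented:

    ```python
    # Phase 1: competitive learning picks RBF centres; phase 2: RBF regression.
    import numpy as np

    rng = np.random.default_rng(3)
    X = rng.normal(0, 1, (100, 5))                          # 100 compounds, 5 descriptors
    y = X[:, 0] - 0.5 * X[:, 1] + rng.normal(0, 0.1, 100)   # toy "activity"

    k, lr = 8, 0.1
    centres = X[rng.choice(100, k, replace=False)].copy()
    for _ in range(20):
        for x in X:
            w = np.argmin(((centres - x) ** 2).sum(axis=1))  # winning centre
            centres[w] += lr * (x - centres[w])              # move winner toward sample

    sigma = 1.5
    Phi = np.exp(-((X[:, None, :] - centres[None]) ** 2).sum(-1) / (2 * sigma ** 2))
    A = np.c_[np.ones(100), Phi]
    w_out, *_ = np.linalg.lstsq(A, y, rcond=None)            # output-layer weights
    pred = A @ w_out
    print("train R^2:", round(1 - ((y - pred) ** 2).sum() / ((y - y.mean()) ** 2).sum(), 3))
    ```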

  4. Predicting the past: a simple reverse stand table projection method

    Treesearch

    Quang V. Cao; Shanna M. McCarty

    2006-01-01

    A stand table gives the number of trees in each diameter class. Future stand tables can be predicted from current stand tables using a stand table projection method. In the simplest form of this method, a future stand table can be expressed as the product of a matrix of transitional proportions (based on diameter growth rates) and a vector of the current stand table. There...
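
    In its simplest form the projection is a matrix-vector product, and "predicting the past" amounts to inverting it; the three-class transition matrix below is invented for illustration:

    ```python
    # Forward and reverse stand table projection with a toy transition matrix.
    import numpy as np

    P = np.array([[0.7, 0.0, 0.0],     # fraction staying in each diameter class
                  [0.3, 0.8, 0.0],     # fraction growing into the next class
                  [0.0, 0.2, 1.0]])
    now = np.array([120, 60, 20.0])    # current trees per ha by diameter class

    future = P @ now                   # projected future stand table
    past = np.linalg.solve(P, now)     # reverse projection (the "past" table)
    print(future, past.round(1))
    ```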

  5. Perendoscopic gastric pH determination. Simple method for increasing accuracy in diagnosing chronic atrophic gastritis.

    PubMed

    Farinati, F; Cardin, F; Di Mario, F; Sava, G A; Piccoli, A; Costa, F; Penon, G; Naccarato, R

    1987-08-01

    The endoscopic diagnosis of chronic atrophic gastritis is often underestimated, and most of the procedures adopted to increase diagnostic accuracy are time consuming and complex. In this study, we evaluated the usefulness of the determination of gastric juice pH by means of litmus paper. Values obtained by this method correlate well with gastric acid secretory capacity as measured by gastric acid analysis (r = -0.64, p less than 0.001) and are not affected by the presence of bile. Gastric juice pH determination increases sensitivity and other diagnostic parameters such as performance index (Youden J test), positive predictive value, and post-test probability difference by 50%. Furthermore, the negative predictive value is very high, the probability of missing a patient with chronic atrophic gastritis with this simple method being 2% for fundic and 15% for antral atrophic change. We conclude that gastric juice pH determination, which substantially increases diagnostic accuracy and is very simple to perform, should be routinely adopted.

  6. Simulation of ground motion using the stochastic method

    USGS Publications Warehouse

    Boore, D.M.

    2003-01-01

    A simple and powerful method for simulating ground motions is to combine parametric or functional descriptions of the ground motion's amplitude spectrum with a random phase spectrum modified such that the motion is distributed over a duration related to the earthquake magnitude and to the distance from the source. This method of simulating ground motions often goes by the name "the stochastic method." It is particularly useful for simulating the higher-frequency ground motions of most interest to engineers (generally, f>0.1 Hz), and it is widely used to predict ground motions for regions of the world in which recordings of motion from potentially damaging earthquakes are not available. This simple method has been successful in matching a variety of ground-motion measures for earthquakes with seismic moments spanning more than 12 orders of magnitude and in diverse tectonic environments. One of the essential characteristics of the method is that it distills what is known about the various factors affecting ground motions (source, path, and site) into simple functional forms. This provides a means by which the results of the rigorous studies reported in other papers in this volume can be incorporated into practical predictions of ground motion.
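
    A minimal sketch of the core recipe (shape windowed white noise to a target amplitude spectrum while keeping the random phase); the toy omega-squared target and window below are illustrative, not a calibrated ground-motion model:

    ```python
    # Stochastic-method flavour: random phase + prescribed amplitude spectrum.
    import numpy as np

    rng = np.random.default_rng(4)
    n, dt = 2048, 0.01
    t = np.arange(n) * dt
    noise = rng.standard_normal(n) * np.exp(-((t - 5) / 3) ** 2)  # duration window

    f = np.fft.rfftfreq(n, dt)
    fc = 1.0                                             # corner frequency, Hz
    target = (2 * np.pi * f) ** 2 / (1 + (f / fc) ** 2)  # toy omega-squared acceleration spectrum

    spec = np.fft.rfft(noise)
    spec *= target / np.maximum(np.abs(spec), 1e-12)     # impose amplitude, keep phase
    accel = np.fft.irfft(spec, n)
    print("peak acceleration (arbitrary units):", round(np.abs(accel).max(), 3))
    ```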

  7. A simple extension to the CMASA method for the prediction of catalytic residues in the presence of single point mutations.

    PubMed

    Flores, David I; Sotelo-Mundo, Rogerio R; Brizuela, Carlos A

    2014-01-01

    The automatic identification of catalytic residues still remains an important challenge in structural bioinformatics. Sequence-based methods are good alternatives when the query shares a high percentage of identity with a well-annotated enzyme. However, when the homology is not apparent, which occurs with many structures from structural genomics initiatives, structural information should be exploited. A local structural comparison is preferred to a global structural comparison when predicting functional residues. CMASA is a recently proposed method for predicting catalytic residues based on a local structure comparison. The method achieves high accuracy and a high value for the Matthews correlation coefficient. However, point substitutions or a lack of relevant data strongly affect the performance of the method. In the present study, we propose a simple extension to the CMASA method to overcome this difficulty. Extensive computational experiments are shown as proof-of-concept instances, as well as for a few real cases. The results show that the extension performs well when the catalytic site contains mutated residues or when some residues are missing. The proposed modification could correctly predict the catalytic residues of a mutant thymidylate synthase, 1EVF. It also successfully predicted the catalytic residues for 3HRC despite the lack of information for a relevant side chain atom in the PDB file.

  8. A method of predicting the energy-absorption capability of composite subfloor beams

    NASA Technical Reports Server (NTRS)

    Farley, Gary L.

    1987-01-01

    A simple method of predicting the energy-absorption capability of composite subfloor beam structure was developed. The method is based upon the weighted sum of the energy-absorption capabilities of the constituent elements of a subfloor beam. An empirical database of energy-absorption results from circular and square cross-section tube specimens was used in the prediction. The procedure is applicable to a wide range of subfloor beam structures. The procedure was demonstrated on three subfloor beam concepts. Agreement between test and prediction was within seven percent for all three cases.

  9. Prediction of missing links and reconstruction of complex networks

    NASA Astrophysics Data System (ADS)

    Zhang, Cheng-Jun; Zeng, An

    2016-04-01

    Predicting missing links in complex networks is of great significance from both theoretical and practical points of view: it not only helps us understand the evolution of real systems but also relates to many applications in social, biological and online systems. In this paper, we study the features of different simple link prediction methods, revealing that they may lead to distortion of networks' structural and dynamical properties. Moreover, we find that high prediction accuracy does not necessarily correspond to high performance in preserving network properties when link prediction methods are used to reconstruct networks. Our work highlights the importance of considering the feedback effect of link prediction methods on network properties when designing the algorithms.
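
    One of the simple scores such studies examine is the common-neighbour count; a self-contained toy example of ranking the missing links of a small graph:

    ```python
    # Common-neighbour link prediction on a toy undirected graph.
    from itertools import combinations

    edges = {(0, 1), (0, 2), (1, 2), (2, 3), (3, 4)}
    nodes = {u for e in edges for u in e}
    nbrs = {v: {b if a == v else a for a, b in edges if v in (a, b)} for v in nodes}

    scores = {
        (u, v): len(nbrs[u] & nbrs[v])
        for u, v in combinations(sorted(nodes), 2)
        if (u, v) not in edges and (v, u) not in edges
    }
    print(sorted(scores.items(), key=lambda kv: -kv[1]))  # highest-scored non-edges first
    ```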

  10. Simple to complex modeling of breathing volume using a motion sensor.

    PubMed

    John, Dinesh; Staudenmayer, John; Freedson, Patty

    2013-06-01

    To compare simple and complex modeling techniques to estimate categories of low, medium, and high ventilation (VE) from ActiGraph™ activity counts. Vertical axis ActiGraph™ GT1M activity counts, oxygen consumption and VE were measured during treadmill walking and running, sports, household chores and labor-intensive employment activities. Categories of low (<19.3 l/min), medium (19.3 to 35.4 l/min) and high (>35.4 l/min) VEs were derived from activity intensity classifications (light <2.9 METs, moderate 3.0 to 5.9 METs and vigorous >6.0 METs). We examined the accuracy of two simple techniques (multiple regression and activity count cut-point analyses) and one complex (random forest technique) modeling technique in predicting VE from activity counts. Prediction accuracy of the complex random forest technique was marginally better than the simple multiple regression method. Both techniques accurately predicted VE categories almost 80% of the time. The multiple regression and random forest techniques were more accurate (85 to 88%) in predicting medium VE. Both techniques predicted the high VE (70 to 73%) with greater accuracy than low VE (57 to 60%). Actigraph™ cut-points for light, medium and high VEs were <1381, 1381 to 3660 and >3660 cpm. There were minor differences in prediction accuracy between the multiple regression and the random forest technique. This study provides methods to objectively estimate VE categories using activity monitors that can easily be deployed in the field. Objective estimates of VE should provide a better understanding of the dose-response relationship between internal exposure to pollutants and disease. Copyright © 2013 Elsevier B.V. All rights reserved.

  11. Development and validation of the SIMPLE endoscopic classification of diminutive and small colorectal polyps.

    PubMed

    Iacucci, Marietta; Trovato, Cristina; Daperno, Marco; Akinola, Oluseyi; Greenwald, David; Gross, Seth A; Hoffman, Arthur; Lee, Jeffrey; Lethebe, Brendan C; Lowerison, Mark; Nayor, Jennifer; Neumann, Helmut; Rath, Timo; Sanduleanu, Silvia; Sharma, Prateek; Kiesslich, Ralf; Ghosh, Subrata; Saltzman, John R

    2018-03-23

    Prediction of the histology of small polyps facilitates colonoscopic treatment. The aims of this study were: 1) to develop a simplified polyp classification, 2) to evaluate its performance in predicting polyp histology, and 3) to evaluate the reproducibility of the classification by trainees using multiplatform endoscopic systems. In phase 1, a new simplified endoscopic classification for polyps - Simplified Identification Method for Polyp Labeling during Endoscopy (SIMPLE) - was created by eight international experts using the new I-SCAN OE system (Pentax, Tokyo, Japan). In phase 2, the accuracy, level of confidence, and interobserver agreement in predicting polyp histology before and after training were assessed, together with univariable/multivariable analysis of the endoscopic features. In phase 3, the reproducibility of SIMPLE by trainees using different endoscopy platforms was evaluated. Using the SIMPLE classification, the accuracy of experts in predicting polyps was 83% (95% confidence interval [CI] 77%-88%) before and 94% (95% CI 89%-97%) after training (P = 0.002). The sensitivity, specificity, positive predictive value, and negative predictive value after training were 97%, 88%, 95%, and 91%. The interobserver agreement of polyp diagnosis improved from 0.46 (95% CI 0.30-0.64) before to 0.66 (95% CI 0.48-0.82) after training. The trainees demonstrated that the SIMPLE classification is applicable across endoscopy platforms, with similar post-training accuracies for the narrow-band imaging (NBI) classification (0.69; 95% CI 0.64-0.73) and SIMPLE (0.71; 95% CI 0.67-0.75). Using the I-SCAN OE system, the new SIMPLE classification demonstrated a high degree of accuracy for adenoma diagnosis, meeting the ASGE PIVI recommendations. We demonstrated that SIMPLE may be used with either I-SCAN OE or NBI. © Georg Thieme Verlag KG Stuttgart · New York.

  12. A simplified indexing of F-region geophysical noise at low latitudes

    NASA Technical Reports Server (NTRS)

    Aggarwal, S.; Lakshmi, D. R.; Reddy, B. M.

    1979-01-01

    A simple method is described for deriving an F-region index that can warn prediction users at low latitudes of the specific months in which they must be especially careful when using long-term predictions.

  13. Children's Verbal Working Memory: Role of Processing Complexity in Predicting Spoken Sentence Comprehension

    ERIC Educational Resources Information Center

    Magimairaj, Beula M.; Montgomery, James W.

    2012-01-01

    Purpose: This study investigated the role of processing complexity of verbal working memory tasks in predicting spoken sentence comprehension in typically developing children. Of interest was whether simple and more complex working memory tasks have similar or different power in predicting sentence comprehension. Method: Sixty-five children (6- to…

  14. DO TIE LABORATORY BASED ASSESSMENT METHODS REALLY PREDICT FIELD EFFECTS?

    EPA Science Inventory

    Sediment Toxicity Identification and Evaluation (TIE) methods have been developed for both porewaters and whole sediments. These relatively simple laboratory methods are designed to identify specific toxicants or classes of toxicants in sediments; however, the question of whether...

  15. Predicting Fluctuations in Cryptocurrency Transactions Based on User Comments and Replies.

    PubMed

    Kim, Young Bin; Kim, Jun Gi; Kim, Wook; Im, Jae Ho; Kim, Tae Hyeong; Kang, Shin Jin; Kim, Chang Hun

    2016-01-01

    This paper proposes a method to predict fluctuations in the prices of cryptocurrencies, which are increasingly used for online transactions worldwide. Little research has been conducted on predicting fluctuations in the price and number of transactions of a variety of cryptocurrencies. Moreover, the few methods proposed to predict fluctuation in currency prices are inefficient because they fail to take into account the differences in attributes between real currencies and cryptocurrencies. This paper analyzes user comments in online cryptocurrency communities to predict fluctuations in the prices of cryptocurrencies and the number of transactions. By focusing on three cryptocurrencies, each with a large market size and user base, this paper attempts to predict such fluctuations by using a simple and efficient method.

  16. Predicting Fluctuations in Cryptocurrency Transactions Based on User Comments and Replies

    PubMed Central

    Kim, Young Bin; Kim, Jun Gi; Kim, Wook; Im, Jae Ho; Kim, Tae Hyeong; Kang, Shin Jin; Kim, Chang Hun

    2016-01-01

    This paper proposes a method to predict fluctuations in the prices of cryptocurrencies, which are increasingly used for online transactions worldwide. Little research has been conducted on predicting fluctuations in the price and number of transactions of a variety of cryptocurrencies. Moreover, the few methods proposed to predict fluctuation in currency prices are inefficient because they fail to take into account the differences in attributes between real currencies and cryptocurrencies. This paper analyzes user comments in online cryptocurrency communities to predict fluctuations in the prices of cryptocurrencies and the number of transactions. By focusing on three cryptocurrencies, each with a large market size and user base, this paper attempts to predict such fluctuations by using a simple and efficient method. PMID:27533113

  17. Multilabel learning via random label selection for protein subcellular multilocations prediction.

    PubMed

    Wang, Xiao; Li, Guo-Zheng

    2013-01-01

    Prediction of protein subcellular localization is an important but challenging problem, particularly when proteins may simultaneously exist at, or move between, two or more different subcellular location sites. Most existing protein subcellular localization methods deal only with single-location proteins. In the past few years, only a few methods have been proposed to tackle proteins with multiple locations. However, they adopt a simple strategy of transforming multilocation proteins into multiple proteins with a single location each, which does not take correlations among different subcellular locations into account. In this paper, a novel method named random label selection (RALS), which extends the simple binary relevance (BR) method, is proposed to learn from multilocation proteins in an effective and efficient way. RALS does not explicitly find the correlations among labels, but rather implicitly attempts to learn the label correlations from data by augmenting the original feature space with randomly selected labels as additional input features. Through a fivefold cross-validation test on a benchmark data set, we demonstrate that our proposed method, which considers label correlations, clearly outperforms the baseline BR method, which does not, indicating that correlations among different subcellular locations really exist and contribute to improved prediction performance. Experimental results on two benchmark data sets also show that our proposed methods achieve significantly higher performance than some other state-of-the-art methods in predicting subcellular multilocations of proteins. The prediction web server is available at http://levis.tongji.edu.cn:8080/bioinfo/MLPred-Euk/ for public usage.

  18. Statistical Approaches for Spatiotemporal Prediction of Low Flows

    NASA Astrophysics Data System (ADS)

    Fangmann, A.; Haberlandt, U.

    2017-12-01

    An adequate assessment of regional climate change impacts on streamflow requires the integration of various sources of information and modeling approaches. This study proposes simple statistical tools for inclusion in model ensembles, which are fast and straightforward in their application, yet able to yield accurate streamflow predictions in time and space. Target variables for all approaches are annual low flow indices derived from a data set of 51 records of average daily discharge for northwestern Germany. The models require input of climatic data in the form of meteorological drought indices, derived from observed daily climatic variables, averaged over the streamflow gauges' catchment areas. Four different modeling approaches are analyzed. All are based on multiple linear regression models that estimate low flows as a function of a set of meteorological indices and/or physiographic and climatic catchment descriptors. In the first method, individual regression models are fitted at each station, predicting annual low flow values from a set of annual meteorological indices, which are subsequently regionalized using a set of catchment characteristics. The second method combines temporal and spatial prediction within a single panel data regression model, allowing estimation of annual low flow values from input of both annual meteorological indices and catchment descriptors. The third and fourth methods represent non-stationary low flow frequency analyses and require fitting of regional distribution functions. Method three involves spatiotemporal prediction of an index value; method four, estimation of L-moments that adapt the regional frequency distribution to the at-site conditions. The results show that method two outperforms successive prediction in time and space. Method three also shows high performance in the near-future period, but since it relies on a stationary distribution, its application for predicting far-future changes may be problematic. Spatiotemporal prediction of L-moments appeared highly uncertain for higher-order moments, resulting in unrealistic future low flow values. All in all, the results promote the inclusion of simple statistical methods in climate change impact assessment.

  19. Crack Path Selection in Thermally Loaded Borosilicate/Steel Bibeam Specimen

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grutzik, Scott Joseph; Reedy, Jr., E. D.

    Here, we have developed a novel specimen for studying crack paths in glass. Under certain conditions, the specimen reaches a state where the crack must select between multiple paths satisfying the K_II = 0 condition. This path selection is a simple but challenging benchmark case for both analytical and numerical methods of predicting crack propagation. We document the development of the specimen, using an uncracked and instrumented test case to study the effect of adhesive choice and validate the accuracy of both a simple beam theory model and a finite element model. In addition, we present preliminary fracture test results and provide a comparison to the path predicted by two numerical methods (mesh restructuring and XFEM). The directional stability of the crack path and differences in kink angle predicted by various crack kinking criteria are analyzed with a finite element model.

  20. Crack Path Selection in Thermally Loaded Borosilicate/Steel Bibeam Specimen

    DOE PAGES

    Grutzik, Scott Joseph; Reedy, Jr., E. D.

    2017-08-04

    Here, we have developed a novel specimen for studying crack paths in glass. Under certain conditions, the specimen reaches a state where the crack must select between multiple paths satisfying the K_II = 0 condition. This path selection is a simple but challenging benchmark case for both analytical and numerical methods of predicting crack propagation. We document the development of the specimen, using an uncracked and instrumented test case to study the effect of adhesive choice and validate the accuracy of both a simple beam theory model and a finite element model. In addition, we present preliminary fracture test results and provide a comparison to the path predicted by two numerical methods (mesh restructuring and XFEM). The directional stability of the crack path and differences in kink angle predicted by various crack kinking criteria are analyzed with a finite element model.

  1. Prediction of drug transport processes using simple parameters and PLS statistics. The use of ACD/logP and ACD/ChemSketch descriptors.

    PubMed

    Osterberg, T; Norinder, U

    2001-01-01

    A method of modelling and predicting biopharmaceutical properties using simple theoretically computed molecular descriptors and multivariate statistics has been investigated for several data sets related to solubility, IAM chromatography, permeability across Caco-2 cell monolayers, human intestinal perfusion, brain-blood partitioning, and P-glycoprotein ATPase activity. The molecular descriptors (e.g. molar refractivity, molar volume, index of refraction, surface tension and density) and logP were computed with ACD/ChemSketch and ACD/logP, respectively. Good statistical models were derived that permit simple computational prediction of biopharmaceutical properties. All final models derived had R² values ranging from 0.73 to 0.95 and Q² values ranging from 0.69 to 0.86. The RMSEP values for the external test sets ranged from 0.24 to 0.85 (log scale).

  2. Simple tool for prediction of parotid gland sparing in intensity-modulated radiation therapy.

    PubMed

    Gensheimer, Michael F; Hummel-Kramer, Sharon M; Cain, David; Quang, Tony S

    2015-01-01

    Sparing one or both parotid glands is a key goal when planning head and neck cancer radiation treatment. If the planning target volume (PTV) overlaps one or both parotid glands substantially, it may not be possible to achieve adequate gland sparing. This finding results in physicians revising their PTV contours after an intensity-modulated radiation therapy (IMRT) plan has been run and reduces workflow efficiency. We devised a simple formula for predicting mean parotid gland dose from the overlap of the parotid gland and isotropically expanded PTV contours. We tested the tool using 44 patients from 2 institutions and found agreement between predicted and actual parotid gland doses (mean absolute error = 5.3 Gy). This simple method could increase treatment planning efficiency by improving the chance that the first plan presented to the physician will have optimal parotid gland sparing. Published by Elsevier Inc.

  3. Simple tool for prediction of parotid gland sparing in intensity-modulated radiation therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gensheimer, Michael F.; Hummel-Kramer, Sharon M., E-mail: sharonhummel@comcast.net; Cain, David

    Sparing one or both parotid glands is a key goal when planning head and neck cancer radiation treatment. If the planning target volume (PTV) overlaps one or both parotid glands substantially, it may not be possible to achieve adequate gland sparing. This finding results in physicians revising their PTV contours after an intensity-modulated radiation therapy (IMRT) plan has been run and reduces workflow efficiency. We devised a simple formula for predicting mean parotid gland dose from the overlap of the parotid gland and isotropically expanded PTV contours. We tested the tool using 44 patients from 2 institutions and found agreement between predicted and actual parotid gland doses (mean absolute error = 5.3 Gy). This simple method could increase treatment planning efficiency by improving the chance that the first plan presented to the physician will have optimal parotid gland sparing.

  4. A simple and novel grading method for retraction and overshoot in Duane retraction syndrome.

    PubMed

    Kekunnaya, Ramesh; Moharana, Ruby; Tibrewal, Shailja; Chhablani, Preeti-Patil; Sachdeva, Virender

    2016-11-01

    Strabismus in Duane retraction syndrome is frequently associated with significant globe retraction and overshoots. However, there is no established method to objectively grade retraction and overshoot. Our purpose is to describe a novel objective grading method. This novel and simple grading method has excellent agreement; it will help standardise measurements and guide the clinician in deciding on surgery and predicting its outcome.

  5. Use of moments of momentum to predict the crystal habit in potassium hydrogen phthalate

    NASA Technical Reports Server (NTRS)

    Barber, Patrick G.; Petty, John T.

    1990-01-01

    A relatively simple calculation of the moments of momentum predicts the morphological order of crystal faces for potassium hydrogen phthalate. The effects on the habit caused by the addition of monomeric, dimeric, and larger aggregates during crystal growth are considered. The first six of the seven observed crystal faces are predicted with this method.

  6. Using Time-Series Regression to Predict Academic Library Circulations.

    ERIC Educational Resources Information Center

    Brooks, Terrence A.

    1984-01-01

    Four methods were used to forecast monthly circulation totals in 15 midwestern academic libraries: dummy time-series regression, lagged time-series regression, simple average (straight-line forecasting), monthly average (naive forecasting). In tests of forecasting accuracy, dummy regression method and monthly mean method exhibited smallest average…
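
    A minimal sketch of the "dummy" time-series regression named in the record — monthly circulation regressed on a linear trend plus 11 month-indicator variables — using synthetic counts (the library data are not available here):

      import numpy as np

      months = 60
      t = np.arange(months)
      rng = np.random.default_rng(1)
      y = 1000 + 2 * t + 150 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 40, months)

      dummies = np.eye(12)[t % 12][:, 1:]                # indicators for months 2..12
      X = np.column_stack([np.ones(months), t, dummies]) # intercept, trend, dummies
      beta, *_ = np.linalg.lstsq(X, y, rcond=None)
      x_next = np.concatenate([[1, months], np.eye(12)[months % 12][1:]])
      print(f"forecast for month {months + 1}: {x_next @ beta:.0f}")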

  7. [The trial of business data analysis at the Department of Radiology by constructing the auto-regressive integrated moving-average (ARIMA) model].

    PubMed

    Tani, Yuji; Ogasawara, Katsuhiko

    2012-01-01

    This study aimed to contribute to the management of a healthcare organization by providing management information through time-series analysis of business data accumulated in the hospital information system, which had not been utilized thus far. We examined the performance of a prediction method based on the auto-regressive integrated moving-average (ARIMA) model, using business data obtained at the Radiology Department. We built the model from the number of radiological examinations over the past 9 years, predicted the number of examinations in the final year, and then compared the actual values with the forecast values. The prediction method proved simple and cost-effective, since it used free software, and pre-processing the data to remove trend components kept the model simple. The difference between predicted and actual values was 10%; however, understanding the chronological change mattered more than the individual time-series values. Furthermore, the method is versatile and adaptable to general time-series data, so other healthcare organizations can use it for the analysis and forecasting of their own business data.
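
    A hedged sketch of the study design — fit an ARIMA model to a monthly count series and hold out the final year — using statsmodels on synthetic data (the model order and parameters are illustrative):

      import numpy as np
      from statsmodels.tsa.arima.model import ARIMA

      rng = np.random.default_rng(2)
      t = np.arange(108)                             # 9 years of monthly exam counts
      y = 500 + 1.5 * t + rng.normal(0, 25, t.size)

      train, test = y[:-12], y[-12:]
      pred = ARIMA(train, order=(1, 1, 1)).fit().forecast(steps=12)
      mape = np.mean(np.abs((test - pred) / test)) * 100
      print(f"mean absolute percentage error over the held-out year: {mape:.1f}%")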

  8. The Optimal Cut-Off Value of Neutrophil-to-Lymphocyte Ratio for Predicting Prognosis in Adult Patients with Henoch–Schönlein Purpura

    PubMed Central

    Park, Chan Hyuk; Han, Dong Soo; Jeong, Jae Yoon; Eun, Chang Soo; Yoo, Kyo-Sang; Jeon, Yong Cheol; Sohn, Joo Hyun

    2016-01-01

    Background The development of gastrointestinal (GI) bleeding and end-stage renal disease (ESRD) can be a concern in the management of Henoch–Schönlein purpura (HSP). We aimed to evaluate whether the neutrophil-to-lymphocyte ratio (NLR) is associated with the prognosis of adult patients with HSP. Methods Clinical data including the NLR of adult patients with HSP were retrospectively analyzed. Patients were classified into three groups as follows: (a) simple recovery, (b) wax & wane without GI bleeding, and (c) development of GI bleeding. The optimal cut-off value was determined using a receiver operating characteristic curve and the Youden index. Results A total of 66 adult patients were enrolled. The NLR was higher in the GI bleeding group than in the simple recovery or wax & wane group (simple recovery vs. wax & wane vs. GI bleeding; median [IQR], 2.32 [1.61–3.11] vs. 3.18 [2.16–3.71] vs. 7.52 [4.91–10.23], P<0.001). For the purpose of predicting simple recovery, the optimal cut-off value of NLR was 3.18, and the sensitivity and specificity were 74.1% and 75.0%, respectively. For predicting development of GI bleeding, the optimal cut-off value was 3.90, and the sensitivity and specificity were 87.5% and 88.6%, respectively. Conclusions The NLR is useful for predicting the development of GI bleeding as well as simple recovery without symptom relapse. Two different cut-off values of NLR, 3.18 for predicting an easy recovery without symptom relapse and 3.90 for predicting GI bleeding, can be used in adult patients with HSP. PMID:27073884
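
    A minimal sketch of the cut-off selection step — maximize the Youden index (sensitivity + specificity - 1) along a ROC curve — with synthetic NLR values standing in for the patient data:

      import numpy as np
      from sklearn.metrics import roc_curve

      rng = np.random.default_rng(3)
      nlr_no_bleed = rng.lognormal(np.log(2.5), 0.4, 50)
      nlr_bleed = rng.lognormal(np.log(7.0), 0.4, 16)

      scores = np.concatenate([nlr_no_bleed, nlr_bleed])
      labels = np.concatenate([np.zeros(50), np.ones(16)])
      fpr, tpr, thresholds = roc_curve(labels, scores)
      best = np.argmax(tpr - fpr)   # Youden J = sensitivity - (1 - specificity)
      print(f"optimal NLR cut-off ~ {thresholds[best]:.2f}")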

  9. Assessing the capability of continuum and discrete particle methods to simulate gas-solids flow using DNS predictions as a benchmark

    DOE PAGES

    Lu, Liqiang; Liu, Xiaowen; Li, Tingwen; ...

    2017-08-12

    For this study, gas–solids flow in a three-dimensional periodic domain was numerically investigated by direct numerical simulation (DNS), computational fluid dynamics-discrete element method (CFD-DEM) and two-fluid model (TFM). DNS data obtained by finely resolving the flow around every particle are used as a benchmark to assess the validity of coarser DEM and TFM approaches. The CFD-DEM predicts the correct cluster size distribution and under-predicts the macro-scale slip velocity even with a grid size as small as twice the particle diameter. The TFM approach predicts larger cluster size and lower slip velocity with a homogeneous drag correlation. Although the slip velocity can be matched by a simple modification to the drag model, the predicted voidage distribution is still different from DNS: both CFD-DEM and TFM over-predict the fraction of particles in dense regions and under-predict the fraction of particles in regions of intermediate void fractions. Also, the cluster aspect ratio of DNS is smaller than that of CFD-DEM and TFM. Since a simple correction to the drag model can predict a correct slip velocity, it is hoped that drag corrections based on more elaborate theories that consider voidage gradient and particle fluctuations may improve the current predictions of cluster distribution.

  10. Assessing the capability of continuum and discrete particle methods to simulate gas-solids flow using DNS predictions as a benchmark

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Liqiang; Liu, Xiaowen; Li, Tingwen

    For this study, gas–solids flow in a three-dimensional periodic domain was numerically investigated by direct numerical simulation (DNS), computational fluid dynamics-discrete element method (CFD-DEM) and two-fluid model (TFM). DNS data obtained by finely resolving the flow around every particle are used as a benchmark to assess the validity of coarser DEM and TFM approaches. The CFD-DEM predicts the correct cluster size distribution and under-predicts the macro-scale slip velocity even with a grid size as small as twice the particle diameter. The TFM approach predicts larger cluster size and lower slip velocity with a homogeneous drag correlation. Although the slip velocity can be matched by a simple modification to the drag model, the predicted voidage distribution is still different from DNS: both CFD-DEM and TFM over-predict the fraction of particles in dense regions and under-predict the fraction of particles in regions of intermediate void fractions. Also, the cluster aspect ratio of DNS is smaller than that of CFD-DEM and TFM. Since a simple correction to the drag model can predict a correct slip velocity, it is hoped that drag corrections based on more elaborate theories that consider voidage gradient and particle fluctuations may improve the current predictions of cluster distribution.

  11. Sonographic Diagnosis of Tubal Cancer with IOTA Simple Rules Plus Pattern Recognition

    PubMed Central

    Tongsong, Theera; Wanapirak, Chanane; Tantipalakorn, Charuwan; Tinnangwattana, Dangcheewan

    2017-01-01

    Objective: To evaluate the diagnostic performance of the IOTA simple rules plus pattern recognition in predicting tubal cancer. Methods: Secondary analysis was performed on the prospective database of our IOTA project. The patients recruited in the project were those who were scheduled for pelvic surgery due to adnexal masses. The patients underwent ultrasound examinations within 24 hours before surgery. On ultrasound examination, the masses were evaluated using the well-established IOTA simple rules plus pattern recognition (sausage-shaped appearance, incomplete septum, visible ipsilateral ovaries) to predict tubal cancer. The gold standard diagnosis was based on histological findings or operative findings. Results: A total of 482 patients, including 15 cases of tubal cancer, were evaluated by ultrasound preoperatively. The IOTA simple rules plus pattern recognition gave a sensitivity of 86.7% (13 in 15) and specificity of 97.4%. Sausage-shaped appearance was identified in nearly all cases (14 in 15). Incomplete septa and normal ovaries could be identified in 33.3% and 40%, respectively. Conclusion: IOTA simple rules plus pattern recognition is relatively effective in predicting tubal cancer. Thus, we propose a simple scheme for the diagnosis of tubal cancer as follows. First, the adnexal masses are evaluated with the IOTA simple rules. If the B-rules can be applied, tubal cancer is reliably excluded. If the M-rules can be applied or the result is inconclusive, careful delineation of the mass with pattern recognition should be performed. PMID:29172273

  12. Sonographic Diagnosis of Tubal Cancer with IOTA Simple Rules Plus Pattern Recognition

    PubMed

    Tongsong, Theera; Wanapirak, Chanane; Tantipalakorn, Charuwan; Tinnangwattana, Dangcheewan

    2017-11-26

    Objective: To evaluate the diagnostic performance of the IOTA simple rules plus pattern recognition in predicting tubal cancer. Methods: Secondary analysis was performed on the prospective database of our IOTA project. The patients recruited in the project were those who were scheduled for pelvic surgery due to adnexal masses. The patients underwent ultrasound examinations within 24 hours before surgery. On ultrasound examination, the masses were evaluated using the well-established IOTA simple rules plus pattern recognition (sausage-shaped appearance, incomplete septum, visible ipsilateral ovaries) to predict tubal cancer. The gold standard diagnosis was based on histological findings or operative findings. Results: A total of 482 patients, including 15 cases of tubal cancer, were evaluated by ultrasound preoperatively. The IOTA simple rules plus pattern recognition gave a sensitivity of 86.7% (13 in 15) and specificity of 97.4%. Sausage-shaped appearance was identified in nearly all cases (14 in 15). Incomplete septa and normal ovaries could be identified in 33.3% and 40%, respectively. Conclusion: IOTA simple rules plus pattern recognition is relatively effective in predicting tubal cancer. Thus, we propose a simple scheme for the diagnosis of tubal cancer as follows. First, the adnexal masses are evaluated with the IOTA simple rules. If the B-rules can be applied, tubal cancer is reliably excluded. If the M-rules can be applied or the result is inconclusive, careful delineation of the mass with pattern recognition should be performed.

  13. Interpretation of atomic mass systematics in terms of the valence shells and a simple scheme for predicting masses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haustein, P.E.; Brenner, D.S.; Casten, R.F.

    1988-07-01

    A new semiempirical method that significantly simplifies atomic mass systematics and which provides a method for making mass predictions by linear interpolation is discussed in the context of the nuclear valence space. In certain regions complicated patterns of mass systematics in traditional plots versus Z, N, or isospin are consolidated and transformed into linear ones extending over long isotopic and isotonic sequences.

  14. Validation of a simple method for predicting the disinfection performance in a flow-through contactor.

    PubMed

    Pfeiffer, Valentin; Barbeau, Benoit

    2014-02-01

    Despite its shortcomings, the T10 method introduced by the United States Environmental Protection Agency (USEPA) in 1989 is currently the method most frequently used in North America to calculate disinfection performance. Other methods (e.g., the Integrated Disinfection Design Framework, IDDF) have been advanced as replacements, and more recently, the USEPA suggested the Extended T10 and Extended CSTR (Continuous Stirred-Tank Reactor) methods to improve the inactivation calculations within ozone contactors. To develop a method that fully considers the hydraulic behavior of the contactor, two models (Plug Flow with Dispersion and N-CSTR) were successfully fitted to five tracer test results derived from four water treatment plants and a pilot-scale contactor. A new method based on the N-CSTR model was defined as the Partially Segregated (Pseg) method. The predictions from all the methods mentioned were compared under conditions of poor and good hydraulic performance, low and high disinfectant decay, and different levels of inactivation. These methods were also compared with experimental results from a chlorine pilot-scale contactor used for Escherichia coli inactivation. The T10 and Extended T10 methods led to large over- and under-estimations. The Segregated Flow Analysis (used in the IDDF) also considerably overestimated the inactivation under high disinfectant decay. Only the Extended CSTR and Pseg methods produced realistic and conservative predictions in all cases. Finally, a simple implementation procedure of the Pseg method was suggested for calculation of disinfection performance.
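
    The Pseg method itself is not specified in the record beyond being based on the N-CSTR model; as a hedged sketch of that underlying model, here is a tanks-in-series contactor with first-order disinfectant decay and Chick-Watson-type inactivation (all parameter values are toy numbers, not the study's fits):

      import numpy as np

      def log_inactivation(n_tanks, hrt_min, c0_mg_l, k_decay, k_inact):
          """Survival through n equal CSTRs in series; returns log10 inactivation."""
          tau = hrt_min / n_tanks
          survival, c = 1.0, c0_mg_l
          for _ in range(n_tanks):
              c /= 1 + k_decay * tau              # disinfectant decay in this tank
              survival /= 1 + k_inact * c * tau   # microbial survival in this tank
          return -np.log10(survival)

      print(f"{log_inactivation(8, 20, 1.5, 0.05, 0.6):.2f} log inactivation")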

  15. Understanding valence-shell electron-pair repulsion (VSEPR) theory using origami molecular models

    NASA Astrophysics Data System (ADS)

    Endah Saraswati, Teguh; Saputro, Sulistyo; Ramli, Murni; Praseptiangga, Danar; Khasanah, Nurul; Marwati, Sri

    2017-01-01

    Valence-shell electron-pair repulsion (VSEPR) theory is conventionally used to predict molecular geometry. However, it is difficult to explore the full implications of this theory by simply drawing chemical structures. Here, we introduce origami modelling as a more accessible approach for exploration of the VSEPR theory. Our technique is simple, readily accessible and inexpensive compared with other sophisticated methods such as computer simulation or commercial three-dimensional modelling kits. This method can be implemented in chemistry education at both the high school and university levels. We discuss the example of a simple molecular structure prediction for ammonia (NH3). Using the origami model, both molecular shape and the scientific justification can be visualized easily. This ‘hands-on’ approach to building molecules will help promote understanding of VSEPR theory.
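
    The prediction step the origami models teach reduces to a lookup from electron-domain counts to geometry; a minimal table for the common cases (standard VSEPR assignments):

      # (bonding domains, lone pairs) on the central atom -> molecular shape
      VSEPR_SHAPES = {
          (2, 0): "linear",
          (3, 0): "trigonal planar",
          (2, 1): "bent",
          (4, 0): "tetrahedral",
          (3, 1): "trigonal pyramidal",
          (2, 2): "bent",
      }

      print(VSEPR_SHAPES[(3, 1)])  # NH3: three N-H bonds plus one lone pair -> trigonal pyramidal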

  16. Prediction of toxicity and comparison of alternatives using WebTEST (Web-services Toxicity Estimation Software Tool)

    EPA Science Inventory

    A Java-based web service is being developed within the US EPA’s Chemistry Dashboard to provide real time estimates of toxicity values and physical properties. WebTEST can generate toxicity predictions directly from a simple URL which includes the endpoint, QSAR method, and ...

  17. Prediction of toxicity and comparison of alternatives using WebTEST (Web-services Toxicity Estimation Software Tool)(Bled Slovenia)

    EPA Science Inventory

    A Java-based web service is being developed within the US EPA’s Chemistry Dashboard to provide real time estimates of toxicity values and physical properties. WebTEST can generate toxicity predictions directly from a simple URL which includes the endpoint, QSAR method, and ...

  18. Naïve Bayes classification in R.

    PubMed

    Zhang, Zhongheng

    2016-06-01

    Naïve Bayes classification is a simple probabilistic classification method based on Bayes' theorem with an assumption of independence between features. The model is trained on a training dataset and makes predictions with the predict() function. This article introduces two functions, naiveBayes() and train(), for performing Naïve Bayes classification.
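
    The record describes R's naiveBayes() and predict(); an analogous sketch with scikit-learn's Gaussian Naïve Bayes in Python (a stand-in for illustration, not the article's R code):

      from sklearn.datasets import load_iris
      from sklearn.model_selection import train_test_split
      from sklearn.naive_bayes import GaussianNB

      X, y = load_iris(return_X_y=True)
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
      model = GaussianNB().fit(X_tr, y_tr)   # assumes conditional independence of features
      print(f"held-out accuracy: {model.score(X_te, y_te):.2f}")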

  19. A novel method for calculating the energy barriers for carbon diffusion in ferrite under heterogeneous stress

    NASA Astrophysics Data System (ADS)

    Tchitchekova, Deyana S.; Morthomas, Julien; Ribeiro, Fabienne; Ducher, Roland; Perez, Michel

    2014-07-01

    A novel method for accurate and efficient evaluation of the change in energy barriers for carbon diffusion in ferrite under heterogeneous stress is introduced. This method, called Linear Combination of Stress States, is based on the knowledge of the effects of simple stresses (uniaxial or shear) on these diffusion barriers. Then, it is assumed that the change in energy barriers under a complex stress can be expressed as a linear combination of these already known simple stress effects. The modifications of energy barriers by either uniaxial traction/compression or shear stress are determined by means of atomistic simulations with the Climbing Image-Nudged Elastic Band method and are stored as a set of functions. The results of this method are compared to the predictions of anisotropic elasticity theory. It is shown that linear anisotropic elasticity fails to predict the correct energy barrier variation with stress (especially with shear stress) whereas the proposed method provides correct energy barrier variation for stresses up to ~3 GPa. This study provides a basis for the development of multiscale models of diffusion under non-uniform stress.
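
    A hedged sketch of the Linear Combination of Stress States idea: barrier changes under simple stresses are precomputed and stored, then summed for a complex stress state (the response functions below are hypothetical placeholders, not the fitted atomistic results):

      def d_barrier_uniaxial(sigma_gpa):      # stored response from NEB calculations
          return -0.010 * sigma_gpa           # eV per GPa, illustrative slope

      def d_barrier_shear(tau_gpa):
          return 0.004 * tau_gpa              # eV per GPa, illustrative slope

      def barrier_change(stress):
          """stress: dict of simple-stress components in GPa."""
          return (sum(d_barrier_uniaxial(s) for s in stress.get("uniaxial", []))
                  + sum(d_barrier_shear(s) for s in stress.get("shear", [])))

      print(f"{barrier_change({'uniaxial': [1.2, -0.5], 'shear': [0.8]}):+.4f} eV")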

  20. A novel method for calculating the energy barriers for carbon diffusion in ferrite under heterogeneous stress.

    PubMed

    Tchitchekova, Deyana S; Morthomas, Julien; Ribeiro, Fabienne; Ducher, Roland; Perez, Michel

    2014-07-21

    A novel method for accurate and efficient evaluation of the change in energy barriers for carbon diffusion in ferrite under heterogeneous stress is introduced. This method, called Linear Combination of Stress States, is based on the knowledge of the effects of simple stresses (uniaxial or shear) on these diffusion barriers. Then, it is assumed that the change in energy barriers under a complex stress can be expressed as a linear combination of these already known simple stress effects. The modifications of energy barriers by either uniaxial traction/compression or shear stress are determined by means of atomistic simulations with the Climbing Image-Nudged Elastic Band method and are stored as a set of functions. The results of this method are compared to the predictions of anisotropic elasticity theory. It is shown that linear anisotropic elasticity fails to predict the correct energy barrier variation with stress (especially with shear stress) whereas the proposed method provides correct energy barrier variation for stresses up to ~3 GPa. This study provides a basis for the development of multiscale models of diffusion under non-uniform stress.

  1. The SAMPL4 host-guest blind prediction challenge: an overview.

    PubMed

    Muddana, Hari S; Fenley, Andrew T; Mobley, David L; Gilson, Michael K

    2014-04-01

    Prospective validation of methods for computing binding affinities can help assess their predictive power and thus set reasonable expectations for their performance in drug design applications. Supramolecular host-guest systems are excellent model systems for testing such affinity prediction methods, because their small size and limited conformational flexibility, relative to proteins, allows higher throughput and better numerical convergence. The SAMPL4 prediction challenge therefore included a series of host-guest systems, based on two hosts, cucurbit[7]uril and octa-acid. Binding affinities in aqueous solution were measured experimentally for a total of 23 guest molecules. Participants submitted 35 sets of computational predictions for these host-guest systems, based on methods ranging from simple docking, to extensive free energy simulations, to quantum mechanical calculations. Over half of the predictions provided better correlations with experiment than two simple null models, but most methods underperformed the null models in terms of root mean squared error and linear regression slope. Interestingly, the overall performance across all SAMPL4 submissions was similar to that for the prior SAMPL3 host-guest challenge, although the experimentalists took steps to simplify the current challenge. While some methods performed fairly consistently across both hosts, no single approach emerged as a consistent top performer, and the nonsystematic nature of the various submissions made it impossible to draw definitive conclusions regarding the best choices of energy models or sampling algorithms. Salt effects emerged as an issue in the calculation of absolute binding affinities of cucurbit[7]uril-guest systems, but were not expected to affect the relative affinities significantly. Useful directions for future rounds of the challenge might involve encouraging participants to carry out some calculations that replicate each other's studies, and to systematically explore parameter options.
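
    A minimal sketch of the scoring used to compare submissions against a null model — RMSE and Pearson correlation versus experiment, with "predict the mean" as the null (all affinities below are synthetic):

      import numpy as np

      rng = np.random.default_rng(10)
      exp = rng.normal(-8, 2, 23)             # "measured" affinities, kcal/mol
      pred = exp + rng.normal(0, 1.5, 23)     # one hypothetical submission
      null = np.full_like(exp, exp.mean())    # null model: constant prediction

      rmse = lambda p: np.sqrt(np.mean((p - exp) ** 2))
      r = np.corrcoef(pred, exp)[0, 1]
      print(f"method: RMSE {rmse(pred):.2f}, r {r:.2f}; null: RMSE {rmse(null):.2f}")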

  2. CALCULATION OF NONLINEAR CONFIDENCE AND PREDICTION INTERVALS FOR GROUND-WATER FLOW MODELS.

    USGS Publications Warehouse

    Cooley, Richard L.; Vecchia, Aldo V.

    1987-01-01

    A method is derived to efficiently compute nonlinear confidence and prediction intervals on any function of parameters derived as output from a mathematical model of a physical system. The method is applied to the problem of obtaining confidence and prediction intervals for manually-calibrated ground-water flow models. To obtain confidence and prediction intervals resulting from uncertainties in parameters, the calibrated model and information on extreme ranges and ordering of the model parameters within one or more independent groups are required. If random errors in the dependent variable are present in addition to uncertainties in parameters, then calculation of prediction intervals also requires information on the extreme range of error expected. A simple Monte Carlo method is used to compute the quantiles necessary to establish probability levels for the confidence and prediction intervals. Application of the method to a hypothetical example showed that inclusion of random errors in the dependent variable in addition to uncertainties in parameters can considerably widen the prediction intervals.
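
    A hedged sketch of the Monte Carlo step — draw parameters from their plausible ranges, add random error in the dependent variable, and read interval endpoints off the output quantiles (the model function is a stand-in, not a ground-water flow model):

      import numpy as np

      rng = np.random.default_rng(4)

      def model_output(k, s):                  # stand-in for a model prediction
          return 12.0 * k / (k + s)

      k = rng.uniform(0.5, 2.0, 10_000)        # parameter ranges from calibration
      s = rng.uniform(0.1, 0.4, 10_000)
      pred = model_output(k, s) + rng.normal(0, 0.3, 10_000)  # add random error
      lo, hi = np.quantile(pred, [0.025, 0.975])
      print(f"95% prediction interval: ({lo:.2f}, {hi:.2f})")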

  3. Quantitative computed tomography for the prediction of pulmonary function after lung cancer surgery: a simple method using simulation software.

    PubMed

    Ueda, Kazuhiro; Tanaka, Toshiki; Li, Tao-Sheng; Tanaka, Nobuyuki; Hamano, Kimikazu

    2009-03-01

    The prediction of pulmonary functional reserve is mandatory in therapeutic decision-making for patients with resectable lung cancer, especially those with underlying lung disease. Volumetric analysis in combination with densitometric analysis of the affected lung lobe or segment with quantitative computed tomography (CT) helps to identify residual pulmonary function, although the utility of this modality needs investigation. The subjects of this prospective study were 30 patients with resectable lung cancer. A three-dimensional CT lung model was created with voxels representing normal lung attenuation (-600 to -910 Hounsfield units). Residual pulmonary function was predicted by drawing a boundary line between the lung to be preserved and that to be resected, directly on the lung model. The predicted values were correlated with the postoperative measured values. The predicted and measured values corresponded well (r=0.89, p<0.001). Although the predicted values corresponded with values predicted by simple calculation using a segment-counting method (r=0.98), there were two outliers whose pulmonary functional reserves were predicted more accurately by CT than by segment counting. The measured pulmonary functional reserves were significantly higher than the predicted values in patients with extensive emphysematous areas (<-910 Hounsfield units), but not in patients with chronic obstructive pulmonary disease. Quantitative CT yielded accurate prediction of functional reserve after lung cancer surgery and helped to identify patients whose functional reserves are likely to be underestimated. Hence, this modality should be utilized for patients with marginal pulmonary function.
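
    A hedged sketch of the counting idea: predicted postoperative function scales preoperative function by the preserved fraction of normally attenuated voxels (-910 to -600 HU), computed here on a toy volume (the boundary-drawing step is reduced to a boolean mask):

      import numpy as np

      def predicted_reserve(preop_fev1_l, hu, resect_mask):
          normal = (hu >= -910) & (hu <= -600)               # normally attenuated lung
          frac_preserved = normal[~resect_mask].sum() / normal.sum()
          return preop_fev1_l * frac_preserved

      rng = np.random.default_rng(5)
      hu = rng.integers(-1000, -400, size=(32, 32, 32))      # toy CT volume
      resect = np.zeros(hu.shape, dtype=bool)
      resect[:, :, :8] = True                                # "lobe" to be resected
      print(f"predicted postoperative FEV1: {predicted_reserve(2.4, hu, resect):.2f} L")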

  4. Measurement of Antenna Bore-Sight Gain

    NASA Technical Reports Server (NTRS)

    Fortinberry, Jarrod; Shumpert, Thomas

    2016-01-01

    The absolute or free-field gain of a simple antenna can be approximated using standard antenna theory formulae or, for a more accurate prediction, numerical methods may be employed to solve for antenna parameters including gain. Both of these methods result in relatively reasonable estimates, but in practice antenna gain is usually verified and documented via measurements and calibration. In this paper, a relatively simple and low-cost, yet effective, means of determining the bore-sight free-field gain of a VHF/UHF antenna is proposed by using the Brewster angle relationship.

  5. Prediction of soil attributes through interpolators in a deglaciated environment with complex landforms

    NASA Astrophysics Data System (ADS)

    Schünemann, Adriano Luis; Inácio Fernandes Filho, Elpídio; Rocha Francelino, Marcio; Rodrigues Santos, Gérson; Thomazini, Andre; Batista Pereira, Antônio; Gonçalves Reynaud Schaefer, Carlos Ernesto

    2017-04-01

    Values of environmental variables at non-sampled sites can be estimated from a minimum data set through interpolation techniques; kriging and the Random Forest algorithm are examples of predictors used for this purpose. The objective of this work was to compare methods of spatializing soil attributes in a recently deglaciated environment with complex landforms. Prediction of the selected soil attributes (potassium, calcium and magnesium) in ice-free areas was tested using morphometric covariates, and using geostatistical models without these covariates. For this, 106 soil samples were collected at 0-10 cm depth in Keller Peninsula, King George Island, Maritime Antarctica. Soil chemical analysis was performed by the gravimetric method, determining values of potassium, calcium and magnesium for each sampled point. Digital terrain models (DTMs) were obtained using a terrestrial laser scanner and generated from a point cloud at spatial resolutions of 1, 5, 10, 20 and 30 m. From these, 40 morphometric covariates were generated. Simple kriging was performed using the R software. The same data set, coupled with the morphometric covariates, was used to predict values of the studied attributes at non-sampled sites with the Random Forest interpolator. Little difference was observed between the predictions generated by simple kriging and the Random Forest interpolator, and DTMs with finer spatial resolution did not improve the quality of soil attribute prediction. The results revealed that simple kriging can be used as the interpolator when morphometric covariates are not available, with little impact on quality. Further work on techniques for predicting soil chemical attributes is needed, especially in periglacial areas with complex landforms.
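
    A minimal sketch of the Random Forest arm of the comparison — a soil attribute regressed on morphometric covariates, scored by cross-validation (all data synthetic; the kriging arm would be fit with a geostatistics package instead):

      import numpy as np
      from sklearn.ensemble import RandomForestRegressor
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(6)
      X = rng.normal(size=(106, 8))          # slope, aspect, curvature, ... (synthetic)
      y = 2.0 + X[:, 0] - 0.5 * X[:, 3] + rng.normal(0, 0.3, 106)  # e.g. potassium

      rf = RandomForestRegressor(n_estimators=300, random_state=0)
      print(f"cross-validated R2: {cross_val_score(rf, X, y, cv=5, scoring='r2').mean():.2f}")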

  6. Integrated spectral and image analysis of hyperspectral scattering data for prediction of apple fruit firmness and soluble solids content

    USDA-ARS?s Scientific Manuscript database

    Spectral scattering is useful for assessing the firmness and soluble solids content (SSC) of apples. In previous research, mean reflectance extracted from the hyperspectral scattering profiles was used for this purpose since the method is simple and fast and also gives relatively good predictions. T...

  7. Predicting community structure in snakes on Eastern Nearctic islands using ecological neutral theory and phylogenetic methods

    PubMed Central

    Burbrink, Frank T.; McKelvy, Alexander D.; Pyron, R. Alexander; Myers, Edward A.

    2015-01-01

    Predicting species presence and richness on islands is important for understanding the origins of communities and how likely it is that species will disperse and resist extinction. The equilibrium theory of island biogeography (ETIB) and, as a simple model of sampling abundances, the unified neutral theory of biodiversity (UNTB), predict that in situations where mainland to island migration is high, species-abundance relationships explain the presence of taxa on islands. Thus, more abundant mainland species should have a higher probability of occurring on adjacent islands. In contrast to UNTB, if certain groups have traits that permit them to disperse to islands better than other taxa, then phylogeny may be more predictive of which taxa will occur on islands. Taking surveys of 54 island snake communities in the Eastern Nearctic along with mainland communities that have abundance data for each species, we use phylogenetic assembly methods and UNTB estimates to predict island communities. Species richness is predicted by island area, whereas turnover from the mainland to island communities is random with respect to phylogeny. Community structure appears to be ecologically neutral and abundance on the mainland is the best predictor of presence on islands. With regard to young and proximate islands, where allopatric or cladogenetic speciation is not a factor, we find that simple neutral models following UNTB and ETIB predict the structure of island communities. PMID:26609083

  8. Predicting community structure in snakes on Eastern Nearctic islands using ecological neutral theory and phylogenetic methods.

    PubMed

    Burbrink, Frank T; McKelvy, Alexander D; Pyron, R Alexander; Myers, Edward A

    2015-11-22

    Predicting species presence and richness on islands is important for understanding the origins of communities and how likely it is that species will disperse and resist extinction. The equilibrium theory of island biogeography (ETIB) and, as a simple model of sampling abundances, the unified neutral theory of biodiversity (UNTB), predict that in situations where mainland to island migration is high, species-abundance relationships explain the presence of taxa on islands. Thus, more abundant mainland species should have a higher probability of occurring on adjacent islands. In contrast to UNTB, if certain groups have traits that permit them to disperse to islands better than other taxa, then phylogeny may be more predictive of which taxa will occur on islands. Taking surveys of 54 island snake communities in the Eastern Nearctic along with mainland communities that have abundance data for each species, we use phylogenetic assembly methods and UNTB estimates to predict island communities. Species richness is predicted by island area, whereas turnover from the mainland to island communities is random with respect to phylogeny. Community structure appears to be ecologically neutral and abundance on the mainland is the best predictor of presence on islands. With regard to young and proximate islands, where allopatric or cladogenetic speciation is not a factor, we find that simple neutral models following UNTB and ETIB predict the structure of island communities.

  9. Simple, empirical approach to predict neutron capture cross sections from nuclear masses

    NASA Astrophysics Data System (ADS)

    Couture, A.; Casten, R. F.; Cakirli, R. B.

    2017-12-01

    Background: Neutron capture cross sections are essential to understanding the astrophysical s and r processes, the modeling of nuclear reactor design and performance, and a wide variety of nuclear forensics applications. Often, cross sections are needed for nuclei where experimental measurements are difficult. Enormous effort, over many decades, has gone into attempting to develop sophisticated statistical reaction models to predict these cross sections. Such work has met with some success but is often unable to reproduce measured cross sections to better than 40%, and has limited predictive power, with predictions from different models rapidly differing by an order of magnitude a few nucleons from the last measurement. Purpose: To develop a new approach to predicting neutron capture cross sections over broad ranges of nuclei that accounts for their values where known and which has reliable predictive power with small uncertainties for many nuclei where they are unknown. Methods: Experimental neutron capture cross sections were compared to empirical mass observables in regions of similar structure. Results: We present an extremely simple method, based solely on empirical mass observables, that correlates neutron capture cross sections in the critical energy range from a few keV to a couple hundred keV. We show that regional cross sections in medium and heavy mass nuclei are compactly correlated with the two-neutron separation energy. These correlations are easily amenable to predicting unknown cross sections, often converting the usual extrapolations to more reliable interpolations. The method almost always reproduces existing data to within 25%, and estimated uncertainties are below about 40% up to 10 nucleons beyond known data. Conclusions: Neutron capture cross sections display a surprisingly strong connection to the two-neutron separation energy, a nuclear structure property. The simple, empirical correlations uncovered provide model-independent predictions of neutron capture cross sections, extending far from stability, including for nuclei of the highest sensitivity to r-process nucleosynthesis.
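
    A minimal sketch of the correlation as described — regress the log of regional capture cross sections on the two-neutron separation energy and interpolate to an unmeasured nucleus (the numbers below are synthetic, chosen only to show the mechanics):

      import numpy as np

      s2n = np.array([12.1, 12.9, 13.6, 14.4, 15.2])    # MeV, nuclei with measured sigma
      log_sigma = np.array([1.9, 2.2, 2.4, 2.7, 3.0])   # log10 of capture cross section (mb)

      slope, intercept = np.polyfit(s2n, log_sigma, 1)  # regional linear correlation
      s2n_new = 13.0                                    # unmeasured nucleus (interpolation)
      print(f"predicted sigma: {10 ** (slope * s2n_new + intercept):.0f} mb")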

  10. Life extending control: An interdisciplinary engineering thrust

    NASA Technical Reports Server (NTRS)

    Lorenzo, Carl F.; Merrill, Walter C.

    1991-01-01

    The concept of Life Extending Control (LEC) is introduced. Possible extensions to the cyclic damage prediction approach, based on the identification of a model from elementary forms, are presented, along with several candidate elementary forms. These extensions result in a continuous, or differential, form of the damage prediction model. Two possible approaches to LEC based on the existing cyclic damage prediction method are defined: the measured-variables LEC and the estimated-variables LEC. Here, damage measurements or estimates are used directly in the LEC. A simple hydraulic-actuator-driven position control system is used to illustrate the main ideas behind LEC, and results from this example demonstrate that overall system performance (dynamic plus life) can be maximized by accounting for component damage in the control design.

  11. Weighted Feature Significance: A Simple, Interpretable Model of Compound Toxicity Based on the Statistical Enrichment of Structural Features

    PubMed Central

    Huang, Ruili; Southall, Noel; Xia, Menghang; Cho, Ming-Hsuang; Jadhav, Ajit; Nguyen, Dac-Trung; Inglese, James; Tice, Raymond R.; Austin, Christopher P.

    2009-01-01

    In support of the U.S. Tox21 program, we have developed a simple and chemically intuitive model we call weighted feature significance (WFS) to predict the toxicological activity of compounds, based on the statistical enrichment of structural features in toxic compounds. We trained and tested the model on the following: (1) data from quantitative high-throughput screening cytotoxicity and caspase activation assays conducted at the National Institutes of Health Chemical Genomics Center, (2) data from Salmonella typhimurium reverse mutagenicity assays conducted by the U.S. National Toxicology Program, and (3) hepatotoxicity data published in the Registry of Toxic Effects of Chemical Substances. Enrichments of structural features in toxic compounds are evaluated for their statistical significance and compiled into a simple additive model of toxicity and then used to score new compounds for potential toxicity. The predictive power of the model for cytotoxicity was validated using an independent set of compounds from the U.S. Environmental Protection Agency tested also at the National Institutes of Health Chemical Genomics Center. We compared the performance of our WFS approach with classical classification methods such as Naive Bayesian clustering and support vector machines. In most test cases, WFS showed similar or slightly better predictive power, especially in the prediction of hepatotoxic compounds, where WFS appeared to have the best performance among the three methods. The new algorithm has the important advantages of simplicity, power, interpretability, and ease of implementation. PMID:19805409
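
    A hedged sketch of the additive scoring idea: weight each structural feature by its enrichment among toxic compounds, then score a new compound by summing the weights of its features (toy feature sets, not the published model or its significance test):

      from collections import Counter

      def feature_weights(toxic_sets, nontoxic_sets):
          tox, non = Counter(), Counter()
          for s in toxic_sets:
              tox.update(s)
          for s in nontoxic_sets:
              non.update(s)
          # enrichment: share of a feature's occurrences that are in toxic compounds
          return {f: tox[f] / (tox[f] + non[f]) for f in set(tox) | set(non)}

      def wfs_score(features, weights):
          return sum(weights.get(f, 0.0) for f in features)

      w = feature_weights([{"nitro", "aromatic"}, {"nitro"}], [{"aromatic"}, {"hydroxyl"}])
      print(f"score: {wfs_score({'nitro', 'hydroxyl'}, w):.2f}")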

  12. A new mathematical solution for predicting char activation reactions

    USGS Publications Warehouse

    Rafsanjani, H.H.; Jamshidi, E.; Rostam-Abadi, M.

    2002-01-01

    The differential conservation equations that describe typical gas-solid reactions, such as activation of coal chars, yield a set of coupled second-order partial differential equations. The solution of these coupled equations by exact analytical methods is impossible. In addition, an approximate or exact solution only provides predictions for either reaction- or diffusion-controlling cases. A new mathematical solution, the quantize method (QM), was applied to predict the gasification rates of coal char when both chemical reaction and diffusion through the porous char are present. Carbon conversion rates predicted by the QM were in closer agreement with the experimental data than those predicted by the random pore model and the simple particle model.

  13. Multi-Model Combination techniques for Hydrological Forecasting: Application to Distributed Model Intercomparison Project Results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ajami, N K; Duan, Q; Gao, X

    2005-04-11

    This paper examines several multi-model combination techniques: the Simple Multi-model Average (SMA), the Multi-Model Super Ensemble (MMSE), Modified Multi-Model Super Ensemble (M3SE) and the Weighted Average Method (WAM). These model combination techniques were evaluated using the results from the Distributed Model Intercomparison Project (DMIP), an international project sponsored by the National Weather Service (NWS) Office of Hydrologic Development (OHD). All of the multi-model combination results were obtained using uncalibrated DMIP model outputs and were compared against the best uncalibrated as well as the best calibrated individual model results. The purpose of this study is to understand how different combination techniques affect the skill levels of the multi-model predictions. This study revealed that the multi-model predictions obtained from uncalibrated single model predictions are generally better than any single member model predictions, even the best calibrated single model predictions. Furthermore, more sophisticated multi-model combination techniques that incorporated bias correction steps work better than simple multi-model average predictions or multi-model predictions without bias correction.
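
    A minimal sketch of the two simplest combiners named above — the Simple Multi-model Average (SMA) and a weighted average with weights fit to a training period by least squares (synthetic flows standing in for the DMIP model outputs):

      import numpy as np

      rng = np.random.default_rng(7)
      obs = rng.gamma(2.0, 5.0, 200)                           # "observed" flows
      models = obs + rng.normal(0, [[3], [5], [8]], (3, 200))  # three model simulations

      sma = models.mean(axis=0)                                # simple multi-model average
      w, *_ = np.linalg.lstsq(models[:, :100].T, obs[:100], rcond=None)
      wam = w @ models                                         # weighted-average combination
      rmse = lambda p: np.sqrt(np.mean((p[100:] - obs[100:]) ** 2))
      print(f"SMA RMSE {rmse(sma):.2f}, WAM RMSE {rmse(wam):.2f}")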

  14. A comparison of simple shear characterization methods for composite laminates

    NASA Technical Reports Server (NTRS)

    Yeow, Y. T.; Brinson, H. F.

    1978-01-01

    Various methods for the shear stress/strain characterization of composite laminates are examined and their advantages and limitations are briefly discussed. Experimental results and the necessary accompanying analysis are then presented and compared for three simple shear characterization procedures. These are the off-axis tensile test method, the (+/- 45 deg)s tensile test method and the (0/90 deg)s symmetric rail shear test method. It is shown that the first technique indicates the shear properties of the graphite/epoxy laminates investigated are fundamentally brittle in nature while the latter two methods tend to indicate that these laminates are fundamentally ductile in nature. Finally, predictions of incrementally determined tensile stress/strain curves utilizing the various different shear behaviour methods as input information are presented and discussed.

  15. A comparison of simple shear characterization methods for composite laminates

    NASA Technical Reports Server (NTRS)

    Yeow, Y. T.; Brinson, H. F.

    1977-01-01

    Various methods for the shear stress-strain characterization of composite laminates are examined, and their advantages and limitations are briefly discussed. Experimental results and the necessary accompanying analysis are then presented and compared for three simple shear characterization procedures. These are the off-axis tensile test method, the (+/- 45 deg)s tensile test method and the (0/90 deg)s symmetric rail shear test method. It is shown that the first technique indicates that the shear properties of the G/E laminates investigated are fundamentally brittle in nature while the latter two methods tend to indicate that the G/E laminates are fundamentally ductile in nature. Finally, predictions of incrementally determined tensile stress-strain curves utilizing the various different shear behavior methods as input information are presented and discussed.

  16. A Simple and Efficient Computational Approach to Chafed Cable Time-Domain Reflectometry Signature Prediction

    NASA Technical Reports Server (NTRS)

    Kowalski, Marc Edward

    2009-01-01

    A method for the prediction of time-domain signatures of chafed coaxial cables is presented. The method is quasi-static in nature, and is thus efficient enough to be included in inference and inversion routines. Unlike previous models proposed, no restriction on the geometry or size of the chafe is required in the present approach. The model is validated and its speed is illustrated via comparison to simulations from a commercial, three-dimensional electromagnetic simulator.

  17. Sampling Errors of SSM/I and TRMM Rainfall Averages: Comparison with Error Estimates from Surface Data and a Simple Model

    NASA Technical Reports Server (NTRS)

    Bell, Thomas L.; Kundu, Prasun K.; Kummerow, Christian D.; Einaudi, Franco (Technical Monitor)

    2000-01-01

    Quantitative use of satellite-derived maps of monthly rainfall requires some measure of the accuracy of the satellite estimates. The rainfall estimate for a given map grid box is subject to both remote-sensing error and, in the case of low-orbiting satellites, sampling error due to the limited number of observations of the grid box provided by the satellite. A simple model of rain behavior predicts that root-mean-square (RMS) random error in grid-box averages should depend in a simple way on the local average rain rate, and the predicted behavior has been seen in simulations using surface rain-gauge and radar data. This relationship was examined using satellite SSM/I data obtained over the western equatorial Pacific during TOGA COARE. RMS error inferred directly from SSM/I rainfall estimates was found to be larger than predicted from surface data, and to depend less on local rain rate than was predicted. Preliminary examination of TRMM microwave estimates shows better agreement with surface data. A simple method of estimating RMS error in satellite rainfall estimates is suggested, based on quantities that can be directly computed from the satellite data.

  18. Mixed Beam Murine Harderian Gland Tumorigenesis: Predicted Dose-Effect Relationships if neither Synergism nor Antagonism Occurs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Siranart, Nopphon; Blakely, Eleanor A.; Cheng, Alden

    Complex mixed radiation fields exist in interplanetary space, and not much is known about their latent effects on space travelers. In silico synergy analysis default predictions are useful when planning relevant mixed-ion-beam experiments and interpreting their results. These predictions are based on individual dose-effect relationships (IDER) for each component of the mixed-ion beam, assuming no synergy or antagonism. For example, a default hypothesis of simple effect additivity has often been used throughout the study of biology. However, for more than a century pharmacologists interested in mixtures of therapeutic drugs have analyzed conceptual, mathematical and practical questions similar to those that arise when analyzing mixed radiation fields, and have shown that simple effect additivity often gives unreasonable predictions when the IDER are curvilinear. Various alternatives to simple effect additivity proposed in radiobiology, pharmacometrics, toxicology and other fields are also known to have important limitations. In this work, we analyze upcoming murine Harderian gland (HG) tumor prevalence mixed-beam experiments, using customized open-source software and published IDER from past single-ion experiments. The upcoming experiments will use acute irradiation and the mixed beam will include components of high atomic number and energy (HZE). We introduce a new alternative to simple effect additivity, "incremental effect additivity", which is more suitable for the HG analysis and perhaps for other end points. We use incremental effect additivity to calculate default predictions for mixture dose-effect relationships, including 95% confidence intervals. We have drawn three main conclusions from this work. 1. It is important to supplement mixed-beam experiments with single-ion experiments, with matching end point(s), shielding and dose timing. 2. For HG tumorigenesis due to a mixed beam, simple effect additivity and incremental effect additivity sometimes give default predictions that are numerically close. However, if nontargeted effects are important and the mixed beam includes a number of different HZE components, simple effect additivity becomes unusable and another method is needed such as incremental effect additivity. 3. Eventually, synergy analysis default predictions of the effects of mixed radiation fields will be replaced by more mechanistic, biophysically-based predictions. However, optimizing synergy analyses is an important first step. If mixed-beam experiments indicate little synergy or antagonism, plans by NASA for further experiments and possible missions beyond low earth orbit will be substantially simplified.

  19. Differentiation of Aurantii Fructus Immaturus and Fructus Poniciri Trifoliatae Immaturus by Flow-Injection with Ultraviolet Spectroscopic Detection and Proton Nuclear Magnetic Resonance Using Partial Least-Squares Discriminant Analysis.

    PubMed

    Zhang, Mengliang; Zhao, Yang; Harrington, Peter de B; Chen, Pei

    2016-03-01

    Two simple fingerprinting methods, flow-injection coupled to ultraviolet spectroscopy and proton nuclear magnetic resonance, were used for discriminating between Aurantii fructus immaturus and Fructus poniciri trifoliatae immaturus. Both methods were combined with partial least-squares discriminant analysis. In the flow-injection method, four data representations were evaluated: total ultraviolet absorbance chromatograms, averaged ultraviolet spectra, absorbance at 193, 205, 225, and 283 nm, and absorbance at 225 and 283 nm. Prediction rates of 100% were achieved for all data representations by partial least-squares discriminant analysis using leave-one-sample-out cross-validation. The prediction rate for the proton nuclear magnetic resonance data by partial least-squares discriminant analysis with leave-one-sample-out cross-validation was also 100%. A new validation set of data was collected by flow-injection with ultraviolet spectroscopic detection two weeks later and predicted by partial least-squares discriminant analysis models constructed from the initial data representations with no parameter changes. The classification rates were 95% with the total ultraviolet absorbance chromatogram datasets and 100% with the other three datasets. Flow-injection with ultraviolet detection and proton nuclear magnetic resonance are simple, high-throughput, and low-cost methods for discrimination studies.
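
    A minimal sketch of PLS-DA with leave-one-sample-out cross-validation — PLS regression against 0/1 class labels, thresholded at 0.5 — on synthetic fingerprints (30 variables standing in for the ultraviolet or NMR data):

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.model_selection import LeaveOneOut

      rng = np.random.default_rng(8)
      X = np.vstack([rng.normal(0.0, 1, (20, 30)), rng.normal(0.8, 1, (20, 30))])
      y = np.repeat([0, 1], 20)                  # two botanical classes

      correct = 0
      for train, test in LeaveOneOut().split(X):
          pls = PLSRegression(n_components=2).fit(X[train], y[train])
          correct += int((pls.predict(X[test])[0, 0] > 0.5) == y[test][0])
      print(f"leave-one-out prediction rate: {correct / len(y):.0%}")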

  20. Prediction of forces and moments for hypersonic flight vehicle control effectors

    NASA Technical Reports Server (NTRS)

    Maughmer, Mark D.; Long, Lyle N.; Pagano, Peter J.

    1991-01-01

    The development of methods for predicting flight control forces and moments for hypersonic vehicles included a preliminary assessment of subsonic/supersonic panel methods and hypersonic local flow inclination methods for such predictions. While these findings clearly indicate the usefulness of such methods for conceptual design activities, deficiencies exist in some areas. Thus, a second phase of research was proposed in which a better understanding is sought for the reasons of the successes and failures of the methods considered, particularly for the cases at hypersonic Mach numbers. To obtain this additional understanding, a more careful study of the results obtained relative to the methods used was undertaken. In addition, where appropriate and necessary, a more complete modeling of the flow was performed using well-proven methods of computational fluid dynamics. As a result, assessments will be made which are more quantitative than those of phase 1 regarding the uncertainty involved in the prediction of the aerodynamic derivatives. In addition, with improved understanding, it is anticipated that improvements resulting in better accuracy will be made to the simple force and moment prediction methods.

  1. X-DRAIN and XDS: a simplified road erosion prediction method

    Treesearch

    William J. Elliot; David E. Hall; S. R. Graves

    1998-01-01

    To develop a simple road sediment delivery tool, the WEPP program modeled sedimentation from forest roads for more than 50,000 combinations of distance between cross drains, road gradient, soil texture, distance from stream, steepness of the buffer between the road and the stream, and climate. The sediment yield prediction from each of these runs was stored in a data...

  2. Machine learning predictions of molecular properties: Accurate many-body potentials and nonlocality in chemical space

    DOE PAGES

    Hansen, Katja; Biegler, Franziska; Ramakrishnan, Raghunathan; ...

    2015-06-04

    Simultaneously accurate and efficient prediction of molecular properties throughout chemical compound space is a critical ingredient toward rational compound design in chemical and pharmaceutical industries. Aiming toward this goal, we develop and apply a systematic hierarchy of efficient empirical methods to estimate atomization and total energies of molecules. These methods range from a simple sum over atoms, to addition of bond energies, to pairwise interatomic force fields, reaching to the more sophisticated machine learning approaches that are capable of describing collective interactions between many atoms or bonds. In the case of equilibrium molecular geometries, even simple pairwise force fields demonstrate prediction accuracy comparable to benchmark energies calculated using density functional theory with hybrid exchange-correlation functionals; however, accounting for the collective many-body interactions proves to be essential for approaching the “holy grail” of chemical accuracy of 1 kcal/mol for both equilibrium and out-of-equilibrium geometries. This remarkable accuracy is achieved by a vectorized representation of molecules (so-called Bag of Bonds model) that exhibits strong nonlocality in chemical space. The same representation allows us to predict accurate electronic properties of molecules, such as their polarizability and molecular frontier orbital energies.
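
    In the spirit of the hierarchy described — from bond-energy sums up to machine-learned models — a hedged sketch comparing the two ends on synthetic bond-count features (the counts and bond energies are illustrative, not the Bag of Bonds descriptor):

      import numpy as np
      from sklearn.kernel_ridge import KernelRidge

      rng = np.random.default_rng(9)
      X = rng.integers(0, 6, (200, 4)).astype(float)  # counts of C-H, C-C, C-O, O-H bonds
      bond_ev = np.array([4.3, 3.6, 3.7, 4.8])        # approximate bond energies, eV
      y = X @ bond_ev + rng.normal(0, 0.1, 200)       # synthetic "atomization energy"

      krr = KernelRidge(kernel="laplacian", alpha=1e-3, gamma=0.1).fit(X[:150], y[:150])
      mae_ml = np.abs(krr.predict(X[150:]) - y[150:]).mean()
      mae_sum = np.abs(X[150:] @ bond_ev - y[150:]).mean()  # plain bond-energy sum
      print(f"bond-sum MAE {mae_sum:.3f} eV, kernel-ridge MAE {mae_ml:.3f} eV")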

  3. Machine Learning Predictions of Molecular Properties: Accurate Many-Body Potentials and Nonlocality in Chemical Space

    PubMed Central

    2015-01-01

    Simultaneously accurate and efficient prediction of molecular properties throughout chemical compound space is a critical ingredient toward rational compound design in chemical and pharmaceutical industries. Aiming toward this goal, we develop and apply a systematic hierarchy of efficient empirical methods to estimate atomization and total energies of molecules. These methods range from a simple sum over atoms, to addition of bond energies, to pairwise interatomic force fields, reaching to the more sophisticated machine learning approaches that are capable of describing collective interactions between many atoms or bonds. In the case of equilibrium molecular geometries, even simple pairwise force fields demonstrate prediction accuracy comparable to benchmark energies calculated using density functional theory with hybrid exchange-correlation functionals; however, accounting for the collective many-body interactions proves to be essential for approaching the “holy grail” of chemical accuracy of 1 kcal/mol for both equilibrium and out-of-equilibrium geometries. This remarkable accuracy is achieved by a vectorized representation of molecules (so-called Bag of Bonds model) that exhibits strong nonlocality in chemical space. In addition, the same representation allows us to predict accurate electronic properties of molecules, such as their polarizability and molecular frontier orbital energies. PMID:26113956

  4. A vortex-filament and core model for wings with edge vortex separation

    NASA Technical Reports Server (NTRS)

    Pao, J. L.; Lan, C. E.

    1981-01-01

    A method for predicting aerodynamic characteristics of slender wings with edge vortex separation was developed. Semiempirical but simple methods were used to determine the initial positions of the free sheet and vortex core. Comparison with available data indicates that: the present method is generally accurate in predicting the lift and induced drag coefficients but the predicted pitching moment is too positive; the spanwise lifting pressure distributions estimated by the one vortex core solution of the present method are significantly better than the results of Mehrotra's method relative to the pressure peak values for the flat delta; the two vortex core system applied to the double delta and strake wings produces overall aerodynamic characteristics that agree well with data except for the pitching moment; and the computer time for the present method is about two thirds of that of Mehrotra's method.

  5. A vortex-filament and core model for wings with edge vortex separation

    NASA Technical Reports Server (NTRS)

    Pao, J. L.; Lan, C. E.

    1982-01-01

    A vortex filament-vortex core method for predicting aerodynamic characteristics of slender wings with edge vortex separation was developed. Semi-empirical but simple methods were used to determine the initial positions of the free sheet and vortex core. Comparison with available data indicates that: (1) the present method is generally accurate in predicting the lift and induced drag coefficients but the predicted pitching moment is too positive; (2) the spanwise lifting pressure distributions estimated by the one vortex core solution of the present method are significantly better than the results of Mehrotra's method relative to the pressure peak values for the flat delta; (3) the two vortex core system applied to the double delta and strake wings produce overall aerodynamic characteristics which have good agreement with data except for the pitching moment; and (4) the computer time for the present method is about two thirds of that of Mehrotra's method.

  6. A simple method of predicting S-wave velocity

    USGS Publications Warehouse

    Lee, M.W.

    2006-01-01

    Prediction of shear-wave velocity plays an important role in seismic modeling, amplitude variation with offset analysis, and other exploration applications. This paper presents a method for predicting S-wave velocity from the P-wave velocity on the basis of the moduli of dry rock. Elastic velocities of water-saturated sediments at low frequencies can be predicted from the moduli of dry rock by using Gassmann's equation; hence, if the moduli of dry rock can be estimated from P-wave velocities, then S-wave velocities easily can be predicted from the moduli. Dry rock bulk modulus can be related to the shear modulus through a compaction constant. The numerical results indicate that the predicted S-wave velocities for consolidated and unconsolidated sediments agree well with measured velocities if differential pressure is greater than approximately 5 MPa. An advantage of this method is that there are no adjustable parameters to be chosen, such as the pore-aspect ratios required in some other methods. The predicted S-wave velocity depends only on the measured P-wave velocity and porosity.
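
    Lee's specific dry-rock relations are not reproduced in the record; as a hedged sketch of the general workflow, assume a fixed dry-rock K/mu ratio, recover the shear modulus from the measured P-velocity through Gassmann's equation, then take Vs = sqrt(mu/rho) (mineral and fluid moduli below are for a quartz/water case):

      import numpy as np
      from scipy.optimize import brentq

      K_MIN, K_FL, RHO = 37.0, 2.25, 2.2   # GPa, GPa, g/cm3 (quartz, water, bulk density)

      def gassmann_ksat(k_dry, phi):
          b = 1 - k_dry / K_MIN
          return k_dry + b ** 2 / (phi / K_FL + (1 - phi) / K_MIN - k_dry / K_MIN ** 2)

      def predict_vs(vp_km_s, phi, kdry_over_mu=0.9):  # assumed dry-rock ratio
          def resid(mu):
              k_sat = gassmann_ksat(kdry_over_mu * mu, phi)
              return np.sqrt((k_sat + 4 * mu / 3) / RHO) - vp_km_s
          mu = brentq(resid, 1e-3, 60.0)               # solve for mu matching measured Vp
          return np.sqrt(mu / RHO)

      print(f"Vs ~ {predict_vs(3.5, 0.25):.2f} km/s")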

  7. Location Prediction Based on Transition Probability Matrices Constructing from Sequential Rules for Spatial-Temporal K-Anonymity Dataset

    PubMed Central

    Liu, Zhao; Zhu, Yunhong; Wu, Chenxue

    2016-01-01

    Spatial-temporal k-anonymity has become a mainstream approach among techniques for protection of users’ privacy in location-based services (LBS) applications, and has been applied to several variants such as LBS snapshot queries and continuous queries. Analyzing large-scale spatial-temporal anonymity sets may benefit several LBS applications. In this paper, we propose two location prediction methods based on transition probability matrices constructed from sequential rules for spatial-temporal k-anonymity datasets. First, we define single-step sequential rules mined from sequential spatial-temporal k-anonymity datasets generated from continuous LBS queries for multiple users. We then construct transition probability matrices from the mined single-step sequential rules and normalize the transition probabilities in the transition matrices. Next, we regard a mobility model for an LBS requester as a stationary stochastic process and compute the n-step transition probability matrices by raising the normalized transition probability matrices to the power n. Furthermore, we propose two location prediction methods: rough prediction and accurate prediction. The former obtains the probabilities of arriving at target locations along simple paths that include only current locations, target locations and transition steps. By iteratively combining the probabilities for simple paths with n steps and the probabilities for detailed paths with n-1 steps, the latter method calculates transition probabilities for detailed paths with n steps from current locations to target locations. Finally, we conduct extensive experiments that verify the correctness and flexibility of our proposed algorithms. PMID:27508502
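
    The core n-step computation the abstract describes is a standard Markov-chain operation; here is a minimal sketch with made-up locations and rule counts.

    ```python
    # Minimal sketch of the n-step computation described above: a single-step
    # matrix built from rule counts is row-normalized and raised to the power
    # n under the stationary-process assumption. Locations/counts are made up.
    import numpy as np

    counts = np.array([[0.0, 8.0, 2.0],
                       [5.0, 0.0, 5.0],
                       [1.0, 9.0, 0.0]])              # single-step rule counts
    P = counts / counts.sum(axis=1, keepdims=True)    # normalized 1-step matrix

    n = 3
    P_n = np.linalg.matrix_power(P, n)                # n-step transition probabilities

    current, target = 0, 2
    print(f"P(location {target} from {current} in {n} steps) = {P_n[current, target]:.3f}")
    ```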

  8. A method for studying the hunting oscillations of an airplane with a simple type of automatic control

    NASA Technical Reports Server (NTRS)

    Jones, R. T.

    1976-01-01

    A method is presented for predicting the amplitude and frequency, under certain simplifying conditions, of the hunting oscillations of an automatically controlled aircraft with lag in the control system or in the response of the aircraft to the controls. If the steering device is actuated by a simple right-left type of signal, the series of alternating fixed-amplitude signals occurring during the hunting may ordinarily be represented by a square wave. Formulas are given expressing the response to such a variation of signal in terms of the response to a unit signal.

  9. Using a Spreadsheet Scroll Bar to Solve Equilibrium Concentrations

    ERIC Educational Resources Information Center

    Raviolo, Andres

    2012-01-01

    A simple, conceptual method is described for using the spreadsheet scroll bar to find the composition of a system at chemical equilibrium. Simulation of any kind of chemical equilibrium can be carried out using this method, and the effects of different disturbances can be predicted. This simulation, which can be used in general chemistry…

  10. Analysis of hardening behavior of sheet metals by a new simple shear test method taking into account the Bauschinger effect

    NASA Astrophysics Data System (ADS)

    Bang, Sungsik; Rickhey, Felix; Kim, Minsoo; Lee, Hyungyil; Kim, Naksoo

    2013-12-01

    In this study we establish a process to predict hardening behavior, considering the Bauschinger effect, for Zircaloy-4 sheets. When a metal is compressed after tension during forming, its yield strength decreases; for this reason, the Bauschinger effect should be considered in FE simulations of spring-back. We suggest a suitable specimen size and a method for determining the optimum tightening torque for simple shear tests. Shear stress-strain curves are obtained for five materials. We develop a method to convert the shear load-displacement curve to the effective stress-strain curve with FEA, and we simulate the simple shear forward/reverse test using a combined isotropic/kinematic hardening model. We also investigate how the load-displacement curve changes as the hardening coefficients vary, and we determine the hardening coefficients so that they follow the hardening behavior of Zircaloy-4 observed in experiments.

  11. Prediction of the effects of propeller operation on the static longitudinal stability of single-engine tractor monoplanes with flaps retracted

    NASA Technical Reports Server (NTRS)

    Weil, Joseph; Sleeman, William C., Jr.

    1949-01-01

    The effects of propeller operation on the static longitudinal stability of single-engine tractor monoplanes are analyzed, and a simple method is presented for computing power-on pitching-moment curves for flap-retracted flight conditions. The methods evolved are based on the results of powered-model wind-tunnel investigations of 28 model configurations. Correlation curves are presented from which the effects of power on the downwash over the tail and the stabilizer effectiveness can be rapidly predicted. The procedures developed enable prediction of power-on longitudinal stability characteristics that are generally in very good agreement with experiment.

  12. Simple and Multivariate Relationships Between Spiritual Intelligence with General Health and Happiness.

    PubMed

    Amirian, Mohammad-Elyas; Fazilat-Pour, Masoud

    2016-08-01

    The present study examined simple and multivariate relationships of spiritual intelligence with general health and happiness. The method was descriptive and correlational. King's Spiritual Quotient scale, the GHQ-28, and the Oxford Happiness Inventory were completed by a sample of 384 students, selected by stratified random sampling from the students of Shahid Bahonar University of Kerman. Data were subjected to descriptive and inferential statistics, including correlations and multivariate regressions. Bivariate correlations support a positive and significant predictive value of spiritual intelligence for general health and happiness. Further analysis showed that, among the spiritual intelligence subscales, existential critical thinking predicted general health and happiness inversely. In addition, happiness was positively predicted by generation of personal meaning and transcendental awareness. The findings are discussed in line with previous studies and the relevant theoretical background.

  13. Prediction of Human Activity by Discovering Temporal Sequence Patterns.

    PubMed

    Li, Kang; Fu, Yun

    2014-08-01

    Early prediction of ongoing human activity has become more valuable in a large variety of time-critical applications. To build an effective representation for prediction, human activities can be characterized by a complex temporal composition of constituent simple actions and interacting objects. Different from early detection on short-duration simple actions, we propose a novel framework for long-duration complex activity prediction by discovering three key aspects of activity: Causality, Context-cue, and Predictability. The major contributions of our work include: (1) a general framework is proposed to systematically address the problem of complex activity prediction by mining temporal sequence patterns; (2) probabilistic suffix tree (PST) is introduced to model causal relationships between constituent actions, where both large and small order Markov dependencies between action units are captured; (3) the context-cue, especially interactive objects information, is modeled through sequential pattern mining (SPM), where a series of action and object co-occurrence are encoded as a complex symbolic sequence; (4) we also present a predictive accumulative function (PAF) to depict the predictability of each kind of activity. The effectiveness of our approach is evaluated on two experimental scenarios with two data sets for each: action-only prediction and context-aware prediction. Our method achieves superior performance for predicting global activity classes and local action units.

  14. The vibration discomfort of standing people: evaluation of multi-axis vibration.

    PubMed

    Thuong, Olivier; Griffin, Michael J

    2015-01-01

    Few studies have investigated discomfort caused by multi-axis vibration and none has explored methods of predicting the discomfort of standing people from simultaneous fore-and-aft, lateral and vertical vibration of a floor. Using the method of magnitude estimation, 16 subjects estimated their discomfort caused by dual-axis and tri-axial motions (octave-bands centred on either 1 or 4 Hz with various magnitudes in the fore-and-aft, lateral and vertical directions) and the discomfort caused by single-axis motions. The method of predicting discomfort assumed in current standards (square-root of the sums of squares of the three components weighted according to their individual contributions to discomfort) provided reasonable predictions of the discomfort caused by multi-axis vibration. Improved predictions can be obtained for specific stimuli, but no single simple method will provide accurate predictions for all stimuli because the rate of growth of discomfort with increasing magnitude of vibration depends on the frequency and direction of vibration.
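
    The standards-based combination rule mentioned in the abstract is simple enough to state in a few lines; the axis weights and magnitudes below are illustrative only, not values from the study.

    ```python
    # Sketch of the combination rule referenced above: overall discomfort
    # predicted as the root-sum-of-squares of the weighted r.m.s. components
    # in the three axes.
    import math

    def rss_discomfort(ax, ay, az, wx=1.0, wy=1.0, wz=1.0):
        """Root-sum-of-squares of axis-weighted r.m.s. accelerations."""
        return math.sqrt((wx * ax) ** 2 + (wy * ay) ** 2 + (wz * az) ** 2)

    print(f"{rss_discomfort(0.4, 0.3, 0.5):.3f} m/s^2")   # tri-axial example
    ```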

  15. A root-mean-square approach for predicting fatigue crack growth under random loading

    NASA Technical Reports Server (NTRS)

    Hudson, C. M.

    1981-01-01

    A method for predicting fatigue crack growth under random loading which employs the concept of Barsom (1976) is presented. In accordance with this method, the loading history for each specimen is analyzed to determine the root-mean-square maximum and minimum stresses, and the predictions are made by assuming the tests have been conducted under constant-amplitude loading at the root-mean-square maximum and minimum levels. The procedure requires a simple computer program and a desk-top computer. For the eleven predictions made, the ratios of the predicted lives to the test lives ranged from 2.13 to 0.82, which is a good result, considering that the normal scatter in the fatigue-crack-growth rates may range from a factor of two to four under identical loading conditions.
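
    A hedged sketch of the root-mean-square idea: reduce the random load history to r.m.s. maximum/minimum stresses, then integrate a constant-amplitude growth law at those levels. A textbook Paris-law relation stands in for the actual growth data; the constants C and m, the geometry factor, and the load history are all made up for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    s_max = 100 + 20 * rng.random(1000)     # per-cycle max stresses (MPa)
    s_min = 20 + 10 * rng.random(1000)      # per-cycle min stresses (MPa)

    rms = lambda x: float(np.sqrt(np.mean(np.square(x))))
    ds_rms = rms(s_max) - rms(s_min)        # r.m.s. stress range

    C, m = 1e-11, 3.0                       # illustrative Paris constants (MPa*sqrt(m) units)
    a, a_final, cycles = 0.002, 0.02, 0     # crack length (m)
    while a < a_final:
        dK = 1.12 * ds_rms * np.sqrt(np.pi * a)   # edge-crack geometry factor ~1.12
        a += C * dK**m                            # growth per cycle
        cycles += 1
    print(f"predicted life: {cycles} cycles")
    ```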

  16. Prediction of quantum interference in molecular junctions using a parabolic diagram: Understanding the origin of Fano and anti-resonances

    NASA Astrophysics Data System (ADS)

    Nozaki, Daijiro; Avdoshenko, Stanislav M.; Sevinçli, Hâldun; Gutierrez, Rafael; Cuniberti, Gianaurelio

    2013-03-01

    Recently the interest in quantum interference (QI) phenomena in molecular devices (molecular junctions) has been growing due to the unique features observed in the transmission spectra. In order to design single-molecule devices exploiting QI effects as desired, it is necessary to provide simple rules for predicting the appearance of QI effects such as anti-resonances or Fano line shapes and for controlling them. In this study, we derive the transmission function of a generic molecular junction with a side group (a T-shaped molecular junction) using a minimal toy model. We develop a simple method to predict the appearance of quantum interference, Fano resonances or anti-resonances, and their positions in the conductance spectrum by introducing a simple graphical representation (the parabolic model). Using it, we can easily visualize the relation between the key electronic parameters and the positions of normal resonant peaks and anti-resonant peaks induced by quantum interference in the conductance spectrum. We also demonstrate Fano resonances and anti-resonances in T-shaped molecular junctions using a simple tight-binding model. This parabolic model enables one to infer the on-site energies of T-shaped molecules and the coupling between the side group and the main conduction channel from transmission spectra.
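
    A generic toy calculation in the same spirit (not the authors' parabolic diagram): NEGF transmission through one site of a 1D tight-binding chain with a side-coupled site, which produces an antiresonance at the side-site energy. All parameters are illustrative assumptions.

    ```python
    import numpy as np

    t = 1.0                         # lead hopping
    e0, es, ts = 0.0, 0.5, 0.4      # central onsite, side onsite, side coupling
    eta = 1e-9                      # small imaginary broadening

    def transmission(E):
        # Retarded surface Green's function of a semi-infinite chain (|E| < 2t)
        g_surf = (E - 1j * np.sqrt(4 * t**2 - E**2 + 0j)) / (2 * t**2)
        sigma_lead = t**2 * g_surf                      # left = right self-energy
        sigma_side = ts**2 / (E - es + 1j * eta)        # side-group self-energy
        G = 1.0 / (E + 1j * eta - e0 - 2 * sigma_lead - sigma_side)
        gamma = -2.0 * sigma_lead.imag
        return float(gamma**2 * abs(G) ** 2)

    for E in (-1.0, 0.0, 0.499, 0.5, 1.0):
        print(f"T({E:+.3f}) = {transmission(E):.4f}")   # dip (antiresonance) at E = es
    ```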

  17. Development of clinical decision rules to predict recurrent shock in dengue

    PubMed Central

    2013-01-01

    Introduction Mortality from dengue infection is mostly due to shock. Among dengue patients with shock, approximately 30% have recurrent shock that requires a treatment change. Here, we report the development of a clinical rule for use during a patient’s first shock episode to predict a recurrent shock episode. Methods The study was conducted at the Center for Preventive Medicine in Vinh Long province and the Children’s Hospital No. 2 in Ho Chi Minh City, Vietnam. We included 444 dengue patients with shock, 126 of whom had recurrent shock (28%). Univariate and multivariate analyses and a preprocessing method were used to evaluate and select among 14 clinical and laboratory signs recorded at shock onset. Five variables (admission day, purpura/ecchymosis, ascites/pleural effusion, blood platelet count and pulse pressure) were finally trained and validated by 10-fold cross-validation repeated 10 times, using a logistic regression model. Results The results showed that a shorter admission day (fewer days prior to admission), purpura/ecchymosis, ascites/pleural effusion, low platelet count and narrow pulse pressure were independently associated with recurrent shock. Our logistic prediction model was capable of predicting recurrent shock when compared to the null method (P < 0.05) and was not outperformed by other prediction models. Our final scoring rule provided relatively good accuracy (AUC, 0.73; sensitivity and specificity, 68%). Score points derived from the logistic prediction model showed identical accuracy, with an AUC of 0.73. Using a cutoff value greater than −154.5, our simple scoring rule showed a sensitivity of 68.3% and a specificity of 68.2%. Conclusions Our simple clinical rule is not intended to replace clinical judgment, but to help clinicians predict recurrent shock during a patient’s first dengue shock episode. PMID:24295509

  18. Impact of statistical learning methods on the predictive power of multivariate normal tissue complication probability models.

    PubMed

    Xu, Cheng-Jian; van der Schaaf, Arjen; Schilstra, Cornelis; Langendijk, Johannes A; van't Veld, Aart A

    2012-03-15

    To study the impact of different statistical learning methods on the prediction performance of multivariate normal tissue complication probability (NTCP) models. In this study, three learning methods, stepwise selection, least absolute shrinkage and selection operator (LASSO), and Bayesian model averaging (BMA), were used to build NTCP models of xerostomia following radiotherapy treatment for head and neck cancer. Performance of each learning method was evaluated by a repeated cross-validation scheme in order to obtain a fair comparison among methods. It was found that the LASSO and BMA methods produced models with significantly better predictive power than that of the stepwise selection method. Furthermore, the LASSO method yields an easily interpretable model as the stepwise method does, in contrast to the less intuitive BMA method. The commonly used stepwise selection method, which is simple to execute, may be insufficient for NTCP modeling. The LASSO method is recommended. Copyright © 2012 Elsevier Inc. All rights reserved.
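
    A hypothetical illustration of the kind of penalized model building compared above: L1 (LASSO-type) logistic regression with cross-validation on synthetic predictors. This is not the authors' NTCP code; the data and "truly predictive" factors are invented.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegressionCV
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 20))               # candidate dosimetric/clinical factors
    logit = 1.2 * X[:, 0] - 0.8 * X[:, 1]        # only two truly predictive
    y = (rng.random(300) < 1 / (1 + np.exp(-logit))).astype(int)

    model = make_pipeline(
        StandardScaler(),
        LogisticRegressionCV(penalty="l1", solver="liblinear", Cs=10, cv=5),
    )
    model.fit(X, y)
    coefs = model.named_steps["logisticregressioncv"].coef_.ravel()
    print("predictors kept by the L1 penalty:", np.flatnonzero(coefs != 0))
    ```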

  19. Computation of rare transitions in the barotropic quasi-geostrophic equations

    NASA Astrophysics Data System (ADS)

    Laurie, Jason; Bouchet, Freddy

    2015-01-01

    We investigate the theoretical and numerical computation of rare transitions in simple geophysical turbulent models. We consider the barotropic quasi-geostrophic and two-dimensional Navier-Stokes equations in regimes where bistability between two coexisting large-scale attractors exists. By means of large deviations and instanton theory, with the use of an Onsager-Machlup path integral formalism for the transition probability, we show how one can directly compute the most probable transition path between two coexisting attractors, analytically in an equilibrium (Langevin) framework and numerically otherwise. We adapt a class of numerical optimization algorithms known as minimum action methods to simple geophysical turbulent models. We show that by numerically minimizing an appropriate action functional in a large deviation limit, one can predict the most likely transition path for a rare transition between two states. By considering examples where theoretical predictions can be made, we show that the minimum action method successfully predicts the most likely transition path. Finally, we discuss the application and extension of such numerical optimization schemes to the computation of rare transitions observed in direct numerical simulations and experiments and to other, more complex, turbulent systems.

  20. Prediction of forces and moments for hypersonic flight vehicle control effectors

    NASA Technical Reports Server (NTRS)

    Maughmer, Mark D.; Long, Lyle N.; Guilmette, Neal; Pagano, Peter

    1993-01-01

    This research project includes three distinct phases. For completeness, all three phases of the work are briefly described in this report. The goal was to develop methods of predicting flight control forces and moments for hypersonic vehicles which could be used in a preliminary design environment. The first phase included a preliminary assessment of subsonic/supersonic panel methods and hypersonic local flow inclination methods for such predictions. While these findings clearly indicated the usefulness of such methods for conceptual design activities, deficiencies exist in some areas. Thus, a second phase of research was conducted in which a better understanding was sought for the reasons behind the successes and failures of the methods considered, particularly for the cases at hypersonic Mach numbers. This second phase involved using computational fluid dynamics methods to examine the flow fields in detail. Through these detailed predictions, the deficiencies in the simple surface inclination methods were determined. In the third phase of this work, an improvement to the surface inclination methods was developed, using a novel method for including viscous effects by modifying the geometry to account for the viscous/shock layer.

  1. Predicting the onset of hazardous alcohol drinking in primary care: development and validation of a simple risk algorithm

    PubMed Central

    Bellón, Juan Ángel; de Dios Luna, Juan; King, Michael; Nazareth, Irwin; Motrico, Emma; GildeGómez-Barragán, María Josefa; Torres-González, Francisco; Montón-Franco, Carmen; Sánchez-Celaya, Marta; Díaz-Barreiros, Miguel Ángel; Vicens, Catalina; Moreno-Peral, Patricia

    2017-01-01

    Background Little is known about the risk of progressing to hazardous alcohol use in abstinent or low-risk drinkers. Aim To develop and validate a simple brief risk algorithm for the onset of hazardous alcohol drinking (HAD) over 12 months for use in primary care. Design and setting Prospective cohort study in 32 health centres from six Spanish provinces, with evaluations at baseline, 6 months, and 12 months. Method Forty-one risk factors were measured and multilevel logistic regression and inverse probability weighting were used to build the risk algorithm. The outcome was new occurrence of HAD during the study, as measured by the AUDIT. Results From the lists of 174 GPs, 3954 adult abstinent or low-risk drinkers were recruited. The ‘predictAL-10’ risk algorithm included just nine variables (10 questions): province, sex, age, cigarette consumption, perception of financial strain, having ever received treatment for an alcohol problem, childhood sexual abuse, AUDIT-C, and interaction AUDIT-C*Age. The c-index was 0.886 (95% CI = 0.854 to 0.918). The optimal cutoff had a sensitivity of 0.83 and specificity of 0.80. Excluding childhood sexual abuse from the model (the ‘predictAL-9’), the c-index was 0.880 (95% CI = 0.847 to 0.913), sensitivity 0.79, and specificity 0.81. There was no statistically significant difference between the c-indexes of predictAL-10 and predictAL-9. Conclusion The predictAL-10/9 is a simple and internally valid risk algorithm to predict the onset of hazardous alcohol drinking over 12 months in primary care attendees; it is a brief tool that is potentially useful for primary prevention of hazardous alcohol drinking. PMID:28360074

  2. SPOCS: software for predicting and visualizing orthology/paralogy relationships among genomes.

    PubMed

    Curtis, Darren S; Phillips, Aaron R; Callister, Stephen J; Conlan, Sean; McCue, Lee Ann

    2013-10-15

    At the rate that prokaryotic genomes can now be generated, comparative genomics studies require a flexible method for quickly and accurately predicting orthologs among the rapidly changing set of genomes available. SPOCS implements a graph-based ortholog prediction method to generate a simple tab-delimited table of orthologs and, in addition, HTML files that provide a visualization of the predicted ortholog/paralog relationships on which gene/protein expression metadata may be overlaid. A SPOCS web application is freely available at http://cbb.pnnl.gov/portal/tools/spocs.html. Source code for Linux systems is also freely available under an open source license at http://cbb.pnnl.gov/portal/software/spocs.html; the Boost C++ libraries and BLAST are required.

  3. Prediction and analysis of protein solubility using a novel scoring card method with dipeptide composition

    PubMed Central

    2012-01-01

    Background Existing methods for predicting protein solubility on overexpression in Escherichia coli advance performance by using ensemble classifiers such as two-stage support vector machine (SVM) based classifiers and a number of feature types such as physicochemical properties, amino acid and dipeptide composition, accompanied by feature selection. It is desirable to develop a simple and easily interpretable method for predicting protein solubility, compared to existing complex SVM-based methods. Results This study proposes a novel scoring card method (SCM) that uses dipeptide composition only to estimate solubility scores of sequences for predicting protein solubility. SCM calculates the propensities of 400 individual dipeptides to be soluble using statistical discrimination between soluble and insoluble proteins of a training data set. Consequently, the propensity scores of all dipeptides are further optimized using an intelligent genetic algorithm. The solubility score of a sequence is determined by the weighted sum of all propensity scores and dipeptide composition. To evaluate SCM by performance comparisons, four data sets with different sizes and degrees of variation in experimental conditions were used. The results show that the simple method SCM, with interpretable propensities of dipeptides, has promising performance compared with existing SVM-based ensemble methods that use a number of feature types. Furthermore, the propensities of dipeptides and the solubility scores of sequences can provide insights into protein solubility. For example, the analysis of dipeptide scores shows a high propensity of α-helix structures and thermophilic proteins to be soluble. Conclusions The propensities of individual dipeptides to be soluble vary for proteins under altered experimental conditions. For accurate prediction of protein solubility using SCM, it is better to customize the score card of dipeptide propensities using a training data set under the same specified experimental conditions. The proposed method SCM, with its solubility scores and dipeptide propensities, can be easily applied to protein function prediction problems in which dipeptide composition features play an important role. Availability The data sets used, the source code of SCM, and supplementary files are available at http://iclab.life.nctu.edu.tw/SCM/. PMID:23282103
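
    The scoring-card idea reduces to a weighted sum that is easy to sketch. In the sketch below the per-dipeptide propensities are random placeholders; SCM estimates them from training data and then refines them with a genetic algorithm, neither of which is reproduced here.

    ```python
    import itertools
    import random

    AMINO = "ACDEFGHIKLMNPQRSTVWY"
    random.seed(0)
    propensity = {a + b: random.uniform(0.0, 1.0)
                  for a, b in itertools.product(AMINO, repeat=2)}

    def solubility_score(seq):
        # Weighted sum of propensity scores over the dipeptide composition
        dipeps = [seq[i:i + 2] for i in range(len(seq) - 1)]
        comp = {d: dipeps.count(d) / len(dipeps) for d in set(dipeps)}
        return sum(frac * propensity[d] for d, frac in comp.items())

    print(f"{solubility_score('MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ'):.3f}")
    ```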

  4. Methods for estimating 2D cloud size distributions from 1D observations

    DOE PAGES

    Romps, David M.; Vogelmann, Andrew M.

    2017-08-04

    The two-dimensional (2D) size distribution of clouds in the horizontal plane plays a central role in the calculation of cloud cover, cloud radiative forcing, convective entrainment rates, and the likelihood of precipitation. Here, a simple method is proposed for calculating the area-weighted mean cloud size and for approximating the 2D size distribution from the 1D cloud chord lengths measured by aircraft and vertically pointing lidar and radar. This simple method (which is exact for square clouds) compares favorably against the inverse Abel transform (which is exact for circular clouds) in the context of theoretical size distributions. Both methods also perform well when used to predict the size distribution of real clouds from a Landsat scene. When applied to a large number of Landsat scenes, the simple method is able to accurately estimate the mean cloud size. Finally, as a demonstration, the methods are applied to aircraft measurements of shallow cumuli during the RACORO campaign, which then allow for an estimate of the true area-weighted mean cloud size.

  5. Methods for estimating 2D cloud size distributions from 1D observations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romps, David M.; Vogelmann, Andrew M.

    The two-dimensional (2D) size distribution of clouds in the horizontal plane plays a central role in the calculation of cloud cover, cloud radiative forcing, convective entrainment rates, and the likelihood of precipitation. Here, a simple method is proposed for calculating the area-weighted mean cloud size and for approximating the 2D size distribution from the 1D cloud chord lengths measured by aircraft and vertically pointing lidar and radar. This simple method (which is exact for square clouds) compares favorably against the inverse Abel transform (which is exact for circular clouds) in the context of theoretical size distributions. Both methods also perform well when used to predict the size distribution of real clouds from a Landsat scene. When applied to a large number of Landsat scenes, the simple method is able to accurately estimate the mean cloud size. Finally, as a demonstration, the methods are applied to aircraft measurements of shallow cumuli during the RACORO campaign, which then allow for an estimate of the true area-weighted mean cloud size.

  6. Prediction of hole expansion ratio for various steel sheets based on uniaxial tensile properties

    NASA Astrophysics Data System (ADS)

    Kim, Jae Hyung; Kwon, Young Jin; Lee, Taekyung; Lee, Kee-Ahn; Kim, Hyoung Seop; Lee, Chong Soo

    2018-01-01

    Stretch-flangeability is one of the important formability parameters of thin steel sheets used in the automotive industry. There have been many attempts to predict the hole expansion ratio (HER), the typical measure of stretch-flangeability, from uniaxial tensile properties for convenience. This paper suggests a new approach that uses total elongation and average normal anisotropy to predict the HER of thin steel sheets. The method provides a good linear relationship between the HER of the machined hole and the predictive variables in a variety of materials with different microstructures obtained using different processing methods. The HER of the punched hole was also well predicted using a similar approach that reflects only the post-uniform portion of elongation. The physical meaning drawn from our approach successfully explains the poor HER of austenitic steels despite their considerable elongation. The proposed method for predicting HER is simple and cost-effective, so it will be useful in industry. In addition, the model provides a physical explanation of HER, so it will be useful in academia.

  7. Rapid measurement and prediction of bacterial contamination in milk using an oxygen electrode.

    PubMed

    Numthuam, Sonthaya; Suzuki, Hiroaki; Fukuda, Junji; Phunsiri, Suthiluk; Rungchang, Saowaluk; Satake, Takaaki

    2009-03-01

    An oxygen electrode was used to measure oxygen consumption to determine bacterial contamination in milk. Dissolved oxygen (DO) measured at 10-35 °C for 2 hours provided a reasonable prediction efficiency (r ≥ 0.90) for bacterial counts between 1.9 and 7.3 log (CFU/mL). A temperature-dependent predictive model was developed that has the same prediction accuracy as the normal predictive model. The analysis performed with and without stirring provided the same prediction efficiency, with a correlation coefficient of 0.90. The measurement of DO is a simple and rapid method for the determination of bacteria in milk.

  8. How to test validity in orthodontic research: a mixed dentition analysis example.

    PubMed

    Donatelli, Richard E; Lee, Shin-Jae

    2015-02-01

    The data used to test the validity of a prediction method should be different from the data used to generate the prediction model. In this study, we explored whether an independent data set is mandatory for testing the validity of a new prediction method and how validity can be tested without independent new data. Several validation methods were compared in an example using the data from a mixed dentition analysis with a regression model. The validation errors of real mixed dentition analysis data and simulation data were analyzed for increasingly large data sets. The validation results of both the real and the simulation studies demonstrated that the leave-1-out cross-validation method had the smallest errors. The largest errors occurred in the traditional simple validation method. The differences between the validation methods diminished as the sample size increased. The leave-1-out cross-validation method seems to be an optimal validation method for improving the prediction accuracy in a data set with limited sample sizes. Copyright © 2015 American Association of Orthodontists. Published by Elsevier Inc. All rights reserved.
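
    The comparison the abstract draws can be sketched directly: leave-one-out cross-validation versus a single hold-out split for a small linear model. The data below are a synthetic stand-in for mixed dentition measurements.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import LeaveOneOut, cross_val_score, train_test_split

    rng = np.random.default_rng(0)
    X = rng.normal(size=(40, 2))                    # deliberately small sample
    y = 3 * X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=40)

    # Leave-one-out: every observation is validated exactly once
    loo_mse = -cross_val_score(LinearRegression(), X, y, cv=LeaveOneOut(),
                               scoring="neg_mean_squared_error").mean()

    # Traditional simple validation: one arbitrary hold-out split
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)
    holdout_mse = np.mean((LinearRegression().fit(X_tr, y_tr).predict(X_te) - y_te) ** 2)

    print(f"LOOCV MSE: {loo_mse:.3f}   simple hold-out MSE: {holdout_mse:.3f}")
    ```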

  9. Charge-to-mass dispersion methods for abrasion-ablation fragmentation models

    NASA Technical Reports Server (NTRS)

    Townsend, L. W.; Norbury, J. W.

    1985-01-01

    Methods to describe the charge-to-mass dispersion distributions of projectile prefragments are presented and used to determine individual isotope cross-sections for various elements produced in the fragmentation of relativistic argon nuclei by carbon targets. Although slight improvements in predicted cross-sections are obtained for the quantum mechanical giant dipole resonance (GDR) distribution when compared with the predictions of the geometric GDR model, the closest agreement between theory and experiment continues to be obtained with the simple hypergeometric distribution, which treats the nucleons in the nucleus as completely uncorrelated.

  10. Nonlinear fracture mechanics-based analysis of thin wall cylinders

    NASA Technical Reports Server (NTRS)

    Brust, Frederick W.; Leis, Brian N.; Forte, Thomas P.

    1994-01-01

    This paper presents a simple analysis technique to predict the crack initiation, growth, and rupture of thin-wall cylinders with a large radius-to-thickness ratio, R/t. The method is formulated to deal with both stable tearing and fatigue mechanisms in applications to both surface and through-wall axial cracks, including interacting surface cracks. The method can also account for time-dependent effects. Validation of the model is provided by comparisons of predictions to more than forty full-scale experiments of thin-wall cylinders pressurized to failure.

  11. Prediction of global and local model quality in CASP8 using the ModFOLD server.

    PubMed

    McGuffin, Liam J

    2009-01-01

    The development of effective methods for predicting the quality of three-dimensional (3D) models is fundamentally important for the success of tertiary structure (TS) prediction strategies. Since CASP7, the Quality Assessment (QA) category has existed to gauge the ability of various model quality assessment programs (MQAPs) at predicting the relative quality of individual 3D models. For the CASP8 experiment, automated predictions were submitted in the QA category using two methods from the ModFOLD server: ModFOLD version 1.1 and ModFOLDclust. ModFOLD version 1.1 is a single-model machine-learning-based method, which was used for automated predictions of global model quality (QMODE1). ModFOLDclust is a simple clustering-based method, which was used for automated predictions of both global and local quality (QMODE2). In addition, manual predictions of model quality were made using ModFOLD version 2.0, an experimental method that combines the scores from ModFOLDclust and ModFOLD v1.1. Predictions from the ModFOLDclust method were the most successful of the three in terms of global model quality, whilst the ModFOLD v1.1 method was comparable in performance to other single-model based methods. In addition, the ModFOLDclust method performed well at predicting the per-residue, or local, model quality scores. Predictions of the per-residue errors in our own 3D models, selected using the ModFOLD v2.0 method, were also the most accurate compared with those from other methods. All of the MQAPs described are publicly accessible via the ModFOLD server at: http://www.reading.ac.uk/bioinf/ModFOLD/. The methods are also freely available to download from: http://www.reading.ac.uk/bioinf/downloads/. Copyright 2009 Wiley-Liss, Inc.

  12. Building blocks for automated elucidation of metabolites: machine learning methods for NMR prediction.

    PubMed

    Kuhn, Stefan; Egert, Björn; Neumann, Steffen; Steinbeck, Christoph

    2008-09-25

    Current efforts in Metabolomics, such as the Human Metabolome Project, collect structures of biological metabolites as well as data for their characterisation, such as spectra for identification of substances and measurements of their concentration. Still, only a fraction of existing metabolites and their spectral fingerprints are known. Computer-Assisted Structure Elucidation (CASE) of biological metabolites will be an important tool to leverage this lack of knowledge. Indispensable for CASE are modules to predict spectra for hypothetical structures. This paper evaluates different statistical and machine learning methods to perform predictions of proton NMR spectra based on data from our open database NMRShiftDB. A mean absolute error of 0.18 ppm was achieved for the prediction of proton NMR shifts ranging from 0 to 11 ppm. Random forest, J48 decision tree and support vector machines achieved similar overall errors. HOSE codes, a notably simple method, achieved a comparatively good result of 0.17 ppm mean absolute error. NMR prediction methods applied in the course of this work delivered precise predictions which can serve as a building block for Computer-Assisted Structure Elucidation for biological metabolites.

  13. A Prediction Method of Binding Free Energy of Protein and Ligand

    NASA Astrophysics Data System (ADS)

    Yang, Kun; Wang, Xicheng

    2010-05-01

    Predicting the binding free energy is an important problem in biomolecular simulation. Such predictions would be of great benefit in understanding protein functions and may be useful for computational prediction of ligand binding strengths, e.g., in discovering pharmaceutical drugs. Free energy perturbation (FEP)/thermodynamic integration (TI) is a classical method to predict free energy explicitly. However, this method needs substantial time to collect data and is practical only for simple systems and small changes in molecular structure. Another method for estimating ligand binding affinities is the linear interaction energy (LIE) method, which employs averages of interaction potential energy terms from molecular dynamics simulations or other thermal conformational sampling techniques. Incorporating systematic deviations from electrostatic linear response, derived from free energy perturbation studies, into the absolute binding free energy expression significantly enhances the accuracy of the approach; however, it is also time-consuming. In this paper, a new prediction method based on steered molecular dynamics (SMD) with direction optimization is developed to compute binding free energy. Jarzynski's equality is used to derive the PMF or free energy. The results for two numerical examples are presented, showing that the method has good accuracy and efficiency. The method can also simulate the whole binding process and give important structural information for the development of new drugs.
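
    The Jarzynski step named above is a one-line estimator; here is a sketch with synthetic work values. A Gaussian work distribution with variance 2·kT·W_diss satisfies the equality exactly, so the estimate should recover the assumed free-energy gap; in practice the work values come from SMD pulls.

    ```python
    import numpy as np

    kT = 0.593                      # kcal/mol near 298 K
    true_dF, w_diss = 3.0, 1.0      # assumed free-energy gap and mean dissipation
    rng = np.random.default_rng(0)
    work = true_dF + w_diss + rng.normal(scale=np.sqrt(2 * kT * w_diss), size=5000)

    # dF = -kT * ln<exp(-W/kT)>, computed in log-sum-exp form for stability
    w = -work / kT
    dF = -kT * (np.logaddexp.reduce(w) - np.log(len(w)))
    print(f"Jarzynski estimate: {dF:.2f} kcal/mol (true value {true_dF})")
    ```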

  14. Deflections of Uniformly Loaded Floors. A Beam-Spring Analog.

    DTIC Science & Technology

    1984-09-01

    ...joist floor systems have long been analyzed and designed by assuming that the joists act as simple beams in carrying the design load. This simple method neglects many... Recently, the FEAFLO program was used to predict the behavior of floors constructed with joists whose properties were determined in... (uniform joist properties.) Designated N-3 for the floor with nailed sheathing and G-3 for the floor with the sheathing attached by means of a rigid...

  15. Evidence for maximal acceleration and singularity resolution in covariant loop quantum gravity.

    PubMed

    Rovelli, Carlo; Vidotto, Francesca

    2013-08-30

    A simple argument indicates that covariant loop gravity (spin foam theory) predicts a maximal acceleration and hence forbids the development of curvature singularities. This supports the results obtained for cosmology and black holes using canonical methods.

  16. Methods of generating synthetic acoustic logs from resistivity logs for gas-hydrate-bearing sediments

    USGS Publications Warehouse

    Lee, Myung W.

    1999-01-01

    Methods of predicting acoustic logs from resistivity logs for hydrate-bearing sediments are presented. Modified time-average equations derived from the weighted equation provide a means of relating the velocity of the sediment to the resistivity of the sediment. These methods can be used to transform resistivity logs into acoustic logs with or without using the gas hydrate concentration in the pore space. All the parameters necessary for predicting an acoustic log from a resistivity log, except the unconsolidation constants, can be estimated from a cross plot of resistivity versus porosity values. The unconsolidation constants in the equations may be assumed without introducing significant errors into the prediction. These methods were applied to the acoustic and resistivity logs acquired at the Mallik 2L-38 gas hydrate research well drilled in the Mackenzie Delta, northern Canada. The results indicate that the proposed method is simple and accurate.
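
    As a hedged stand-in for a resistivity-to-velocity transform of this general kind: porosity from Archie's equation, then velocity from the classic Wyllie time average. Lee's modified time-average/weighted equations and hydrate terms are not reproduced; every constant below is illustrative.

    ```python
    import numpy as np

    def porosity_from_resistivity(Rt, Rw, a=1.0, m=2.0):
        """Archie's equation for water-saturated rock: Rt = a*Rw*phi**(-m)."""
        return (a * Rw / Rt) ** (1.0 / m)

    def velocity_time_average(phi, v_fluid=1500.0, v_matrix=5500.0):
        """Wyllie time average: 1/V = phi/Vf + (1 - phi)/Vm (velocities in m/s)."""
        return 1.0 / (phi / v_fluid + (1.0 - phi) / v_matrix)

    Rt = np.array([2.0, 5.0, 20.0])               # synthetic resistivity samples (ohm-m)
    phi = porosity_from_resistivity(Rt, Rw=0.3)
    print(np.round(velocity_time_average(phi)))   # synthetic acoustic log (m/s)
    ```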

  17. A simple model to predict the biodiesel blend density as simultaneous function of blend percent and temperature.

    PubMed

    Gaonkar, Narayan; Vaidya, R G

    2016-05-01

    A simple method to estimate the density of a biodiesel blend as a simultaneous function of temperature and volume percent of biodiesel is proposed. Employing Kay's mixing rule, we developed a model and investigated theoretically the density of different vegetable oil biodiesel blends as a simultaneous function of temperature and volume percent of biodiesel. A key advantage of the proposed model is that it requires only a single set of density values of the blend components at any two different temperatures. We observe that the density of a blend decreases linearly with increasing temperature and increases with increasing volume percent of biodiesel. The low values of the standard error of estimate (SEE = 0.0003-0.0022) and absolute average deviation (AAD = 0.03-0.15 %) obtained using the proposed model indicate its predictive capability. The predicted values are in good agreement with recently available experimental data.
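
    The model's structure is simple to sketch: each component density varies linearly with temperature (anchored by two measured points), and the blend density follows Kay's volume-fraction mixing rule. The anchor densities below are illustrative values, not data from the paper.

    ```python
    def linear_density(T, T1, rho1, T2, rho2):
        """Density at T from two measured (T, rho) pairs, assuming linearity."""
        return rho1 + (rho2 - rho1) * (T - T1) / (T2 - T1)

    def blend_density(T, v_bio):
        """Kay's rule: volume-fraction-weighted sum of component densities."""
        rho_b = linear_density(T, 15.0, 885.0, 40.0, 867.0)   # biodiesel (kg/m^3)
        rho_d = linear_density(T, 15.0, 840.0, 40.0, 822.0)   # petrodiesel (kg/m^3)
        return v_bio * rho_b + (1.0 - v_bio) * rho_d

    print(f"B20 at 25 C: {blend_density(25.0, 0.20):.1f} kg/m^3")
    ```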

  18. Prediction of beta-turns in proteins using the first-order Markov models.

    PubMed

    Lin, Thy-Hou; Wang, Ging-Ming; Wang, Yen-Tseng

    2002-01-01

    We present a method based on first-order Markov models for predicting simple beta-turns and loops containing multiple turns in proteins. Sequences of 338 proteins in a database are divided, using the published turn criteria, into three regions: the turn, the boundary, and the nonturn regions. A transition probability matrix is constructed for either the turn or the nonturn region using the weighted transition probabilities computed for dipeptides identified from each region. Two such matrices are constructed for the boundary region, since the transition probabilities for dipeptides immediately preceding or following a turn are different. The window used for scanning a protein sequence from the amino (N-) to the carboxyl (C-) terminus is a hexapeptide, since the transition probability computed for a turn tetrapeptide is capped at both the N- and C-termini with a boundary transition probability indexed, respectively, from the two boundary transition matrices. A sum of the averaged product of the transition probabilities of all the hexapeptides involving each residue is computed. This is then weighted with a probability computed by assuming that all the hexapeptides are from the nonturn region, to give the final prediction quantity. Both simple beta-turns and loops containing multiple turns in a protein are then identified by the rise of the computed prediction quantity. The performance of the prediction scheme, or the percentage (%) of correct prediction, is evaluated through computation of Matthews correlation coefficients for each protein predicted. It is found that the prediction method is capable of giving prediction results with better correlation between the percent of correct prediction and the Matthews correlation coefficients for a group of test proteins, as compared with those predicted using some secondary structure prediction methods. The prediction accuracy for about 40% of proteins in the database, or 50% of proteins in the test set, is better than 70%. Such a percentage for the test set is reduced to 30% if the structures of all the proteins in the set are treated as unknown.

  19. Cox-nnet: An artificial neural network method for prognosis prediction of high-throughput omics data.

    PubMed

    Ching, Travers; Zhu, Xun; Garmire, Lana X

    2018-04-01

    Artificial neural networks (ANN) are computing architectures with many interconnections of simple neural-inspired computing elements, and have been applied to biomedical fields such as imaging analysis and diagnosis. We have developed a new ANN framework called Cox-nnet to predict patient prognosis from high-throughput transcriptomics data. In 10 TCGA RNA-Seq data sets, Cox-nnet achieves the same or better predictive accuracy compared to other methods, including Cox proportional hazards regression (with LASSO, ridge, and minimax concave penalties), Random Forests Survival and CoxBoost. Cox-nnet also reveals richer biological information, at both the pathway and gene levels. The outputs from the hidden layer nodes provide an alternative approach for survival-sensitive dimension reduction. In summary, we have developed a new method for accurate and efficient prognosis prediction on high-throughput data, with functional biological insights. The source code is freely available at https://github.com/lanagarmire/cox-nnet.

  20. Examination of the collision force method for analyzing the responses of simple containment/deflection structures to impact by one engine rotor blade fragment

    NASA Technical Reports Server (NTRS)

    Zirin, R. M.; Witmer, E. A.

    1972-01-01

    An approximate collision analysis, termed the collision-force method, was developed for studying impact-interaction of an engine rotor blade fragment with an initially circular containment ring. This collision analysis utilizes basic mass, material property, geometry, and pre-impact velocity information for the fragment, together with any one of three postulated patterns of blade deformation behavior: (1) the elastic straight blade model, (2) the elastic-plastic straight shortening blade model, and (3) the elastic-plastic curling blade model. The collision-induced forces are used to predict the resulting motions of both the blade fragment and the containment ring. Containment ring transient responses are predicted by a finite element computer code which accommodates the large deformation, elastic-plastic planar deformation behavior of simple structures such as beams and/or rings. The effects of varying the values of certain parameters in each blade-behavior model were studied. Comparisons of predictions with experimental data indicate that of the three postulated blade-behavior models, the elastic-plastic curling blade model appears to be the most plausible and satisfactory for predicting the impact-induced motions of a ductile engine rotor blade and a containment ring against which the blade impacts.

  1. Limited predictive ability of surrogate indices of insulin sensitivity/resistance in Asian-Indian men.

    PubMed

    Muniyappa, Ranganath; Irving, Brian A; Unni, Uma S; Briggs, William M; Nair, K Sreekumaran; Quon, Michael J; Kurpad, Anura V

    2010-12-01

    Insulin resistance is highly prevalent in Asian Indians and contributes to worldwide public health problems, including diabetes and related disorders. Surrogate measurements of insulin sensitivity/resistance are used frequently to study Asian Indians, but these are not formally validated in this population. In this study, we compared the ability of simple surrogate indices to accurately predict insulin sensitivity as determined by the reference glucose clamp method. In this cross-sectional study of Asian-Indian men (n = 70), we used a calibration model to assess the ability of simple surrogate indices for insulin sensitivity [quantitative insulin sensitivity check index (QUICKI), homeostasis model assessment (HOMA2-IR), fasting insulin-to-glucose ratio (FIGR), and fasting insulin (FI)] to predict an insulin sensitivity index derived from the reference glucose clamp method (SI(Clamp)). Predictive accuracy was assessed by both root mean squared error (RMSE) of prediction as well as leave-one-out cross-validation-type RMSE of prediction (CVPE). QUICKI, FIGR, and FI, but not HOMA2-IR, had modest linear correlations with SI(Clamp) (QUICKI: r = 0.36; FIGR: r = -0.36; FI: r = -0.27; P < 0.05). No significant differences were noted among CVPE or RMSE from any of the surrogate indices when compared with QUICKI. Surrogate measurements of insulin sensitivity/resistance such as QUICKI, FIGR, and FI are easily obtainable in large clinical studies, but these may only be useful as secondary outcome measurements in assessing insulin sensitivity/resistance in clinical studies of Asian Indians.

  2. Use of variational methods in the determination of wind-driven ocean circulation

    NASA Technical Reports Server (NTRS)

    Gelos, R.; Laura, P. A. A.

    1976-01-01

    Simple polynomial approximations and a variational approach were used to predict wind-induced circulation in rectangular ocean basins. Stommel's and Munk's models were solved in a unified fashion by means of the proposed method. Very good agreement with exact solutions available in the literature was shown to exist. The method was then applied to more complex situations where an exact solution seems out of the question.

  3. The Use of Partial Least Square Regression and Spectral Data in UV-Visible Region for Quantification of Adulteration in Indonesian Palm Civet Coffee

    PubMed Central

    Yulia, Meinilwita

    2017-01-01

    Asian palm civet coffee or kopi luwak (Indonesian words for coffee and palm civet) is well known as the world's priciest and rarest coffee. To protect the authenticity of luwak coffee and protect consumers from luwak coffee adulteration, it is very important to develop a robust and simple method for determining the adulteration of luwak coffee. In this research, the use of UV-Visible spectra combined with PLSR was evaluated to establish a rapid and simple method for quantification of adulteration in luwak-arabica coffee blends. Several preprocessing methods were tested, and the results show that most of the preprocessed spectra were effective in improving the quality of the calibration models, with the best PLS calibration model obtained for Savitzky-Golay smoothed spectra, which had the lowest RMSECV (0.039) and the highest RPDcal value (4.64). Using this PLS model, a prediction for quantification of luwak content was calculated and resulted in satisfactory prediction performance with both high RPDp and RER values. PMID:28913348
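
    A sketch of the winning pipeline described above: Savitzky-Golay smoothing of UV-Vis spectra followed by PLS regression on the luwak fraction, evaluated by cross-validation. The spectra and pure-component shapes below are synthetic stand-ins.

    ```python
    import numpy as np
    from scipy.signal import savgol_filter
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_predict

    rng = np.random.default_rng(0)
    wl = np.linspace(250, 700, 226)                       # wavelengths (nm)
    frac = rng.random(60)                                 # luwak volume fraction
    pure_a = np.exp(-((wl - 420) / 60) ** 2)              # stand-in pure-component spectra
    pure_b = np.exp(-((wl - 520) / 80) ** 2)
    spectra = (np.outer(frac, pure_a) + np.outer(1 - frac, pure_b)
               + rng.normal(scale=0.02, size=(60, wl.size)))

    X = savgol_filter(spectra, window_length=11, polyorder=2, axis=1)
    pred = cross_val_predict(PLSRegression(n_components=4), X, frac, cv=10).ravel()
    rmsecv = float(np.sqrt(np.mean((pred - frac) ** 2)))
    print(f"RMSECV: {rmsecv:.3f}")
    ```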

  4. Structural dynamics and vibrations of damped, aircraft-type structures

    NASA Technical Reports Server (NTRS)

    Young, Maurice I.

    1992-01-01

    Engineering preliminary design methods for approximating and predicting the effects of viscous or equivalent viscous-type damping treatments on the free and forced vibration of lightly damped aircraft-type structures are developed. Similar developments are presented for dynamic hysteresis viscoelastic-type damping treatments. It is shown by both engineering analysis and numerical illustrations that the intermodal coupling of the undamped modes arising from the introduction of damping may be neglected in applying these preliminary design methods, except when dissimilar modes of these lightly damped, complex aircraft-type structures have identical or nearly identical natural frequencies. In such cases, it is shown that a relatively simple, additional interaction calculation between pairs of modes exhibiting this 'modal response' phenomenon suffices in the prediction of interacting modal damping fractions. The accuracy of the methods is shown to be very good to excellent, depending on the normal natural frequency separation of the system modes, thereby permitting a relatively simple preliminary design approach. This approach is shown to be a natural precursor to elaborate finite element, digital computer design computations in evaluating the type, quantity, and location of damping treatment.

  5. Fault diagnosis and fault-tolerant finite control set-model predictive control of a multiphase voltage-source inverter supplying BLDC motor.

    PubMed

    Salehifar, Mehdi; Moreno-Equilaz, Manuel

    2016-01-01

    Due to its fault tolerance, a multiphase brushless direct current (BLDC) motor can meet the high reliability demands of electric vehicle applications. The voltage-source inverter (VSI) supplying the motor is subject to open-circuit faults. Therefore, it is necessary to design a fault-tolerant (FT) control algorithm with an embedded fault diagnosis (FD) block. In this paper, finite control set-model predictive control (FCS-MPC) is developed to implement the fault-tolerant control algorithm of a five-phase BLDC motor. The developed control method is fast, simple, and flexible. An FD method based on information already available from the control block is proposed; this method is simple, robust to common transients in the motor, and able to localize multiple open-circuit faults. The proposed FD and FT control algorithms are embedded in a five-phase BLDC motor drive. In order to validate the theory presented, simulation and experimental results are conducted on a five-phase two-level VSI supplying a five-phase BLDC motor. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.

  6. Anthropometry-corrected exposure modeling as a method to improve trunk posture assessment with a single inclinometer.

    PubMed

    Van Driel, Robin; Trask, Catherine; Johnson, Peter W; Callaghan, Jack P; Koehoorn, Mieke; Teschke, Kay

    2013-01-01

    Measuring trunk posture in the workplace commonly involves subjective observation or self-report methods or the use of costly and time-consuming motion analysis systems (current gold standard). This work compared trunk inclination measurements using a simple data-logging inclinometer with trunk flexion measurements using a motion analysis system, and evaluated adding measures of subject anthropometry to exposure prediction models to improve the agreement between the two methods. Simulated lifting tasks (n=36) were performed by eight participants, and trunk postures were simultaneously measured with each method. There were significant differences between the two methods, with the inclinometer initially explaining 47% of the variance in the motion analysis measurements. However, adding one key anthropometric parameter (lower arm length) to the inclinometer-based trunk flexion prediction model reduced the differences between the two systems and accounted for 79% of the motion analysis method's variance. Although caution must be applied when generalizing lower-arm length as a correction factor, the overall strategy of anthropometric modeling is a novel contribution. In this lifting-based study, by accounting for subject anthropometry, a single, simple data-logging inclinometer shows promise for trunk posture measurement and may have utility in larger-scale field studies where similar types of tasks are performed.

  7. A simple randomisation procedure for validating discriminant analysis: a methodological note.

    PubMed

    Wastell, D G

    1987-04-01

    Because the goal of discriminant analysis (DA) is to optimise classification, it designedly exaggerates between-group differences. This bias complicates validation of DA. Jack-knifing has been used for validation but is inappropriate when stepwise selection (SWDA) is employed. A simple randomisation test is presented which is shown to give correct decisions for SWDA. The general superiority of randomisation tests over orthodox significance tests is discussed. Current work on non-parametric methods of estimating the error rates of prediction rules is briefly reviewed.
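
    A sketch of the randomisation idea: refit the discriminant analysis on label-shuffled data many times to build a null distribution for the apparent (resubstitution) accuracy. Plain LDA stands in for stepwise DA, and the data are synthetic with no real group difference.

    ```python
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    rng = np.random.default_rng(0)
    X = rng.normal(size=(60, 8))
    y = rng.integers(0, 2, size=60)          # labels unrelated to X (null is true)

    def apparent_accuracy(X, labels):
        # Resubstitution accuracy, which DA designedly inflates
        return LinearDiscriminantAnalysis().fit(X, labels).score(X, labels)

    observed = apparent_accuracy(X, y)
    null = [apparent_accuracy(X, rng.permutation(y)) for _ in range(999)]
    p = (1 + sum(a >= observed for a in null)) / (1 + len(null))
    print(f"apparent accuracy {observed:.2f}, permutation p = {p:.3f}")
    ```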

  8. Selecting the minimum prediction base of historical data to perform 5-year predictions of the cancer burden: The GoF-optimal method.

    PubMed

    Valls, Joan; Castellà, Gerard; Dyba, Tadeusz; Clèries, Ramon

    2015-06-01

    Predicting the future burden of cancer is a key issue for health services planning, where selecting the predictive model and the prediction base is a challenge. A method, named here Goodness-of-Fit optimal (GoF-optimal), is presented to determine the minimum prediction base of historical data needed to perform 5-year predictions of the number of new cancer cases or deaths. An empirical ex-post evaluation exercise for cancer mortality data in Spain and cancer incidence in Finland, using simple linear and log-linear Poisson models, was performed. Prediction bases were considered within the time periods 1951-2006 in Spain and 1975-2007 in Finland, and predictions were then made for 37 and 33 single years in these periods, respectively. The performance of three fixed prediction bases (last 5, 10, and 20 years of historical data) was compared to that of the prediction base determined by the GoF-optimal method. The coverage (COV) of the 95% prediction interval and the discrepancy ratio (DR) were calculated to assess the success of the prediction. The results showed that (i) models using the prediction base selected by the GoF-optimal method reached the highest COV and the lowest DR, and (ii) the best alternative to the GoF-optimal strategy was the one using a prediction base of the last 5 years. The GoF-optimal approach can be used as a selection criterion to find an adequate base of prediction. Copyright © 2015 Elsevier Ltd. All rights reserved.

  9. Space-Time Earthquake Prediction: The Error Diagrams

    NASA Astrophysics Data System (ADS)

    Molchan, G.

    2010-08-01

    The quality of earthquake prediction is usually characterized by a two-dimensional diagram n versus τ, where n is the rate of failures-to-predict and τ is a characteristic of the space-time alarm. Unlike the time prediction case, the quantity τ is not defined uniquely. We start from the case in which τ is a vector with components related to the local alarm times and find a simple structure of the space-time diagram in terms of local time diagrams. This key result is used to analyze the usual 2-d error sets {n, τ_w} in which τ_w is a weighted mean of the τ components and w is the weight vector. We suggest a simple algorithm to find the (n, τ_w) representation of all random guess strategies, the set D, and prove that there exists a unique choice of w for which D degenerates to the diagonal n + τ_w = 1. We also find a confidence zone of D on the (n, τ_w) plane when the local target rates are known only roughly. These facts are important for the correct interpretation of (n, τ_w) diagrams when discussing the prediction capability of the data or of prediction methods.

  10. Warped Linear Prediction of Physical Model Excitations with Applications in Audio Compression and Instrument Synthesis

    NASA Astrophysics Data System (ADS)

    Glass, Alexis; Fukudome, Kimitoshi

    2004-12-01

    A sound recording of a plucked string instrument is encoded and resynthesized using two stages of prediction. In the first stage of prediction, a simple physical model of a plucked string is estimated and the instrument excitation is obtained. The second stage of prediction compensates for the simplicity of the model in the first stage by encoding either the instrument excitation or the model error using warped linear prediction. These two methods of compensation are compared with each other and with single-stage warped linear prediction; adjustments are introduced, and their applications to instrument synthesis and MPEG-4 audio compression within the structured audio format are discussed.

  11. On some theoretical and practical aspects of multigrid methods. [to solve finite element systems from elliptic equations

    NASA Technical Reports Server (NTRS)

    Nicolaides, R. A.

    1979-01-01

    A description and explanation of a simple multigrid algorithm for solving finite element systems is given. Numerical results for an implementation are reported for a number of elliptic equations, including cases with singular coefficients and indefinite equations. The method shows the high efficiency, essentially independent of the grid spacing, predicted by the theory.
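
    As a concrete illustration of the multigrid idea (smooth the high-frequency error on the fine grid, then correct the remaining smooth error on a coarser grid), here is a minimal two-grid cycle for the 1-D Poisson problem -u'' = f. It is a generic textbook sketch, not the algorithm or finite element setting of the report; the grid size, smoother, and transfer operators are illustrative choices.

      import numpy as np

      def jacobi(u, f, h, sweeps=3, omega=2.0 / 3.0):
          """Weighted Jacobi smoothing for -u'' = f with zero boundary values."""
          for _ in range(sweeps):
              u[1:-1] += omega * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1] - 2.0 * u[1:-1])
          return u

      def two_grid(u, f, h):
          u = jacobi(u, f, h)                              # pre-smooth
          r = np.zeros_like(u)                             # residual r = f + u''
          r[1:-1] = f[1:-1] + (u[:-2] - 2.0 * u[1:-1] + u[2:]) / (h * h)
          rc = r[::2].copy()                               # restrict by injection
          ec = np.zeros_like(rc)
          for _ in range(50):                              # approximate coarse solve
              ec = jacobi(ec, rc, 2.0 * h, sweeps=1)
          fine = np.arange(len(u), dtype=float)            # prolong by linear interp
          e = np.interp(fine, fine[::2], ec)
          return jacobi(u + e, f, h)                       # post-smooth

      n = 65
      h = 1.0 / (n - 1)
      x = np.linspace(0.0, 1.0, n)
      f = np.pi ** 2 * np.sin(np.pi * x)                   # exact solution sin(pi x)
      u = np.zeros(n)
      for _ in range(20):
          u = two_grid(u, f, h)
      print(np.max(np.abs(u - np.sin(np.pi * x))))         # only discretization error left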

  12. Mathematics Curriculum Based Measurement to Predict State Test Performance: A Comparison of Measures and Methods

    ERIC Educational Resources Information Center

    Stevens, Olinger; Leigh, Erika

    2012-01-01

    Scope and Method of Study: The purpose of the study is to use an empirical approach to identify a simple, economical, efficient, and technically adequate performance measure that teachers can use to assess student growth in mathematics. The current study has been designed to expand the body of research for math CBM to further examine technical…

  13. Review of Thawing Time Prediction Models Depending on Process Conditions and Product Characteristics

    PubMed Central

    Kluza, Franciszek; Spiess, Walter E. L.; Kozłowicz, Katarzyna

    2016-01-01

    Summary Determining thawing times of frozen foods is a challenging problem as the thermophysical properties of the product change during thawing. A number of calculation models and solutions have been developed. The proposed solutions range from relatively simple analytical equations based on a number of assumptions to a group of empirical approaches that sometimes require complex calculations. In this paper analytical, empirical and graphical models are presented and critically reviewed. The conditions of solution, limitations and possible applications of the models are discussed. The graphical and semi-graphical models are derived from numerical methods. Using numerical methods is not always practical, as running the calculations takes time and the specialized software and equipment are not always cheap. For these reasons, the application of analytical-empirical models is more useful for engineering. It is demonstrated that there is no simple, accurate and feasible analytical method for thawing time prediction. Consequently, simplified methods are needed for thawing time estimation of agricultural and food products. The review reveals the need for further improvement of the existing solutions, or development of new ones, that will enable accurate determination of thawing time within a wide range of practical heat transfer conditions during processing. PMID:27904387

  14. Modeling method of time sequence model based grey system theory and application proceedings

    NASA Astrophysics Data System (ADS)

    Wei, Xuexia; Luo, Yaling; Zhang, Shiqiang

    2015-12-01

    This article presents a modeling method for the grey system GM(1,1) model based on information reuse and grey system theory. The method not only greatly enhances the fitting and predicting accuracy of the GM(1,1) model, but also retains the conventional approach's merit of simple computation. On this basis, we give a syphilis trend forecast method based on information reuse and the grey system GM(1,1) model.
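
    For readers unfamiliar with GM(1,1), the following Python sketch implements the classical model that the article refines (the information-reuse refinement itself is not reproduced here, and the toy series is invented): the raw series is accumulated, background values are formed from adjacent accumulated terms, the two grey parameters are fitted by least squares, and forecasts are recovered by differencing the fitted accumulated series.

      import numpy as np

      def gm11_forecast(x0, steps=3):
          """Fit the classical GM(1,1) model to a positive series and forecast."""
          x0 = np.asarray(x0, dtype=float)
          x1 = np.cumsum(x0)                                  # accumulated series
          z1 = 0.5 * (x1[1:] + x1[:-1])                       # background values
          B = np.column_stack([-z1, np.ones(len(z1))])
          a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]    # grey parameters
          k = np.arange(1, len(x0) + steps)
          x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a   # time-response function
          x1_hat = np.concatenate([[x0[0]], x1_hat])
          return np.diff(x1_hat)[-steps:]                     # restore by differencing

      print(gm11_forecast([52, 55, 59, 64, 70], steps=3))     # e.g. yearly case counts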

  15. Creep behavior of bone cement: a method for time extrapolation using time-temperature equivalence.

    PubMed

    Morgan, R L; Farrar, D F; Rose, J; Forster, H; Morgan, I

    2003-04-01

    The clinical lifetime of poly(methyl methacrylate) (PMMA) bone cement is considerably longer than the time over which it is convenient to perform creep testing. Consequently, it is desirable to be able to predict the long-term creep behavior of bone cement from the results of short-term testing. A simple method is described for the prediction of long-term creep using the principle of time-temperature equivalence in polymers. The use of the method is illustrated using a commercial acrylic bone cement. A creep strain of approximately 0.6% is predicted after 400 days under a constant flexural stress of 2 MPa. The temperature range and stress levels over which it is appropriate to perform testing are described. Finally, the effects of physical aging on the accuracy of the method are discussed and creep data from aged cement are reported.
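
    The principle can be made concrete with a short sketch. Assuming an Arrhenius shift factor (the abstract does not specify the shift-factor form, and the activation energy below is a made-up placeholder), short-term creep curves measured at elevated temperatures are shifted along the log-time axis onto a long-term master curve at the reference temperature:

      import numpy as np

      R = 8.314            # gas constant, J/(mol K)
      E_A = 120e3          # hypothetical activation energy, J/mol
      T_REF = 310.0        # reference temperature (body temperature), K

      def log_shift_factor(T):
          """log10 a_T for a curve measured at temperature T (Arrhenius form)."""
          return (E_A / (2.303 * R)) * (1.0 / T - 1.0 / T_REF)

      def to_master_curve(times, strains, T):
          """Map (time, strain) data measured at T onto equivalent times at T_REF."""
          return times * 10.0 ** (-log_shift_factor(T)), strains

      t = np.logspace(0, 6, 7)                   # test times, s
      eps = 0.2 + 0.05 * np.log10(1.0 + t)       # toy creep strain data, %
      t_equiv, _ = to_master_curve(t, eps, T=323.0)
      print(t_equiv[0] / t[0])                   # acceleration factor of the 50 C test

    With these placeholder numbers, a test at 50 C (323 K) maps to equivalent times several times longer at 37 C, which is how a 400-day prediction can be anchored in short-term data.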

  16. An unexpected way forward: towards a more accurate and rigorous protein-protein binding affinity scoring function by eliminating terms from an already simple scoring function.

    PubMed

    Swanson, Jon; Audie, Joseph

    2018-01-01

    A fundamental and unsolved problem in biophysical chemistry is the development of a computationally simple, physically intuitive, and generally applicable method for accurately predicting and physically explaining protein-protein binding affinities from protein-protein interaction (PPI) complex coordinates. Here, we propose that the simplification of a previously described six-term PPI scoring function to a four-term function results in a simple expression of all physically and statistically meaningful terms that can be used to accurately predict and explain binding affinities for a well-defined subset of PPIs that are characterized by (1) crystallographic coordinates, (2) rigid-body association, (3) normal interface size, hydrophobicity, and hydrophilicity, and (4) high-quality experimental binding affinity measurements. We further propose that the four-term scoring function could be regarded as a core expression for future development into a more general PPI scoring function. Our work has clear implications for PPI modeling and structure-based drug design.

  17. Contrast Analysis: A Tutorial

    ERIC Educational Resources Information Center

    Haans, Antal

    2018-01-01

    Contrast analysis is a relatively simple but effective statistical method for testing theoretical predictions about differences between group means against the empirical data. Despite its advantages, contrast analysis is hardly used to date, perhaps because it is not implemented in a convenient manner in many statistical software packages. This…

  18. A Bayesian truth serum for subjective data.

    PubMed

    Prelec, Drazen

    2004-10-15

    Subjective judgments, an essential information source for science and policy, are problematic because there are no public criteria for assessing judgmental truthfulness. I present a scoring method for eliciting truthful subjective data in situations where objective truth is unknowable. The method assigns high scores not to the most common answers but to the answers that are more common than collectively predicted, with predictions drawn from the same population. This simple adjustment in the scoring criterion removes all bias in favor of consensus: Truthful answers maximize expected score even for respondents who believe that their answer represents a minority view.
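
    The scoring rule can be sketched in a few lines. The Python below is a hedged reconstruction of the scheme as commonly described, not verified against the paper's equations: an information score rewards answers that turn out to be more common than their (geometric mean) predicted frequency, and a prediction score rewards accurate forecasts of the empirical answer distribution. The toy data are invented.

      import numpy as np

      def bts_scores(answers, predictions, alpha=1.0, eps=1e-9):
          """answers: (n,) chosen answer index per respondent.
          predictions: (n, m) each respondent's predicted answer frequencies."""
          n, m = predictions.shape
          x_bar = np.bincount(answers, minlength=m) / n + eps        # empirical freq.
          y_bar = np.exp(np.log(predictions + eps).mean(axis=0))     # geometric mean
          info = np.log(x_bar[answers] / y_bar[answers])             # "surprisingly common"
          pred = alpha * (x_bar * np.log((predictions + eps) / x_bar)).sum(axis=1)
          return info + pred                                         # truthfulness payoff

      answers = np.array([0, 0, 1, 0, 1])                            # two answer options
      predictions = np.array([[.7, .3], [.6, .4], [.5, .5], [.8, .2], [.4, .6]])
      print(bts_scores(answers, predictions))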

  19. Visual conspicuity: a new simple standard, its reliability, validity and applicability.

    PubMed

    Wertheim, A H

    2010-03-01

    A general standard for quantifying conspicuity is described. It derives from a simple and easy method to quantitatively measure the visual conspicuity of an object. The method stems from the theoretical view that the conspicuity of an object is not a property of that object, but describes the degree to which the object is perceptually embedded in, i.e. laterally masked by, its visual environment. First, three variations of a simple method to measure the strength of such lateral masking are described and empirical evidence for its reliability and its validity is presented, as are several tests of predictions concerning the effects of viewing distance and ambient light. It is then shown how this method yields a conspicuity standard, expressed as a number, which can be made part of a rule of law, and which can be used to test whether or not, and to what extent, the conspicuity of a particular object, e.g. a traffic sign, meets a predetermined criterion. An additional feature is that, when used under different ambient light conditions, the method may also yield an index of the amount of visual clutter in the environment. Taken together the evidence illustrates the method's applicability in both the laboratory and in real-life situations. STATEMENT OF RELEVANCE: This paper concerns a proposal for a new method to measure visual conspicuity, yielding a numerical index that can be used in a rule of law. It is of importance to ergonomists and human factors specialists who are asked to measure the conspicuity of an object, such as a traffic or railroad sign, or any other object. The new method is simple and circumvents the need to perform elaborate (search) experiments and thus has great relevance as a simple tool for applied research.

  20. An empirical approach to improving tidal predictions using recent real-time tide gauge data

    NASA Astrophysics Data System (ADS)

    Hibbert, Angela; Royston, Samantha; Horsburgh, Kevin J.; Leach, Harry

    2014-05-01

    Classical harmonic methods of tidal prediction are often problematic in estuarine environments due to the distortion of tidal fluctuations in shallow water, which results in a disparity between predicted and observed sea levels. This is of particular concern in the Bristol Channel, where the error associated with tidal predictions is potentially greater due to an unusually large tidal range of around 12 m. As such predictions are fundamental to the short-term forecasting of High Water (HW) extremes, it is vital that alternative solutions are found. In a pilot study, using a year-long observational sea level record from the Port of Avonmouth in the Bristol Channel, the UK National Tidal and Sea Level Facility (NTSLF) tested the potential for reducing tidal prediction errors using three alternatives to the Harmonic Method of tidal prediction. The three methods evaluated were (1) the use of Artificial Neural Network (ANN) models, (2) the Species Concordance technique and (3) a simple empirical procedure for correcting Harmonic Method High Water predictions based upon a few recent observations (referred to as the Empirical Correction Method). This latter method was then successfully applied to sea level records from an additional 42 of the 45 tide gauges that comprise the UK Tide Gauge Network. Consequently, it is to be incorporated into the operational systems of the UK Coastal Monitoring and Forecasting Partnership in order to improve short-term sea level predictions for the UK and, in particular, the accurate estimation of HW extremes.
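
    The Empirical Correction Method lends itself to a very small sketch. The Python below captures the idea as stated in the abstract, correcting the next harmonic High Water prediction by the average error of a few recent predictions; the window length and the use of a simple mean are assumptions, not NTSLF's operational implementation.

      import numpy as np

      def corrected_hw_prediction(next_harmonic_hw, recent_obs_hw,
                                  recent_pred_hw, window=5):
          """Shift the next harmonic HW prediction by the mean of recent errors."""
          obs = np.asarray(recent_obs_hw[-window:], dtype=float)
          pred = np.asarray(recent_pred_hw[-window:], dtype=float)
          return next_harmonic_hw + (obs - pred).mean()

      obs = [12.10, 11.95, 12.30, 12.05, 12.20]          # observed HW levels, m
      pred = [11.90, 11.80, 12.10, 11.95, 12.00]         # harmonic HW predictions, m
      print(corrected_hw_prediction(12.15, obs, pred))   # 12.15 + 0.17 = 12.32 m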

  1. Microarray-based cancer prediction using soft computing approach.

    PubMed

    Wang, Xiaosheng; Gotoh, Osamu

    2009-05-26

    One of the difficulties in using gene expression profiles to predict cancer is how to effectively select a few informative genes to construct accurate prediction models from thousands or tens of thousands of genes. We screen highly discriminative genes and gene pairs to create simple prediction models involving single genes or gene pairs on the basis of a soft computing approach and rough set theory. Accurate cancer prediction is obtained when we apply the simple prediction models to four cancer gene expression datasets: CNS tumor, colon tumor, lung cancer and DLBCL. Some genes closely correlated with the pathogenesis of specific or general cancers are identified. In contrast with other models, our models are simple, effective and robust. Meanwhile, our models are interpretable because they are based on decision rules. Our results demonstrate that very simple models may perform well on molecular cancer prediction, and that important gene markers of cancer can be detected if the gene selection approach is chosen reasonably.

  2. A Comparison of Simple Methods to Incorporate Material Temperature Dependency in the Green's Function Method for Estimating Transient Thermal Stresses in Thick-Walled Power Plant Components.

    PubMed

    Rouse, James; Hyde, Christopher

    2016-01-06

    The threat of thermal fatigue is an increasing concern for thermal power plant operators due to the increasing tendency to adopt "two-shifting" operating procedures. Thermal plants are likely to remain part of the energy portfolio for the foreseeable future and are under societal pressure to generate in a highly flexible and efficient manner. The Green's function method offers a flexible approach to determine reference elastic solutions for transient thermal stress problems. In order to simplify integration, it is often assumed that Green's functions (derived from finite element unit temperature step solutions) are temperature independent (this is not the case due to the temperature dependency of material parameters). The present work offers a simple method to approximate a material's temperature dependency using multiple reference unit solutions and an interpolation procedure. Thermal stress histories are predicted and compared for realistic temperature cycles using distinct techniques. The proposed interpolation method generally performs as well as (if not better than) the optimum single Green's function or the previously suggested weighting function technique (particularly for large temperature increments). Coefficients of determination are typically above 0.96, and peak stress differences between true and predicted datasets are always less than 10 MPa.
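
    The interpolation idea can be sketched briefly: unit temperature-step stress responses are precomputed at a few reference temperatures, the response at the current metal temperature is obtained by linear interpolation between them, and the stress history follows by superposing the responses to each temperature increment. The reference curves below are synthetic placeholders, not finite element results from the paper.

      import numpy as np

      t = np.linspace(0.0, 600.0, 601)                 # time grid, s
      T_REFS = np.array([300.0, 450.0, 600.0])         # reference temperatures, C
      # synthetic unit-step responses G(t; T_ref), MPa per degree C
      G_REFS = np.array([1.2 * np.exp(-t / tau) for tau in (80.0, 100.0, 130.0)])

      def greens_function(T):
          """Linearly interpolate the unit response between reference curves."""
          T = float(np.clip(T, T_REFS[0], T_REFS[-1]))
          i = int(np.clip(np.searchsorted(T_REFS, T) - 1, 0, len(T_REFS) - 2))
          w = (T - T_REFS[i]) / (T_REFS[i + 1] - T_REFS[i])
          return (1.0 - w) * G_REFS[i] + w * G_REFS[i + 1]

      def stress_history(temps):
          """Superpose the (temperature-dependent) responses to each increment."""
          sigma = np.zeros_like(t)
          for k in range(1, len(temps)):
              dT = temps[k] - temps[k - 1]
              G = greens_function(temps[k])
              sigma[k:] += dT * G[:len(t) - k]         # shifted unit response
          return sigma

      temps = 300.0 + 250.0 * np.minimum(t / 300.0, 1.0)   # ramp, then hold
      print(stress_history(temps).max())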

  3. Efficient and accurate two-scale FE-FFT-based prediction of the effective material behavior of elasto-viscoplastic polycrystals

    NASA Astrophysics Data System (ADS)

    Kochmann, Julian; Wulfinghoff, Stephan; Ehle, Lisa; Mayer, Joachim; Svendsen, Bob; Reese, Stefanie

    2018-06-01

    Recently, two-scale FE-FFT-based methods (e.g., Spahn et al. in Comput Methods Appl Mech Eng 268:871-883, 2014; Kochmann et al. in Comput Methods Appl Mech Eng 305:89-110, 2016) have been proposed to predict the microscopic and overall mechanical behavior of heterogeneous materials. The purpose of this work is the extension to elasto-viscoplastic polycrystals, efficient and robust Fourier solvers and the prediction of micromechanical fields during macroscopic deformation processes. Assuming scale separation, the macroscopic problem is solved using the finite element method. The solution of the microscopic problem, which is embedded as a periodic unit cell (UC) in each macroscopic integration point, is found by employing fast Fourier transforms, fixed-point and Newton-Krylov methods. The overall material behavior is defined by the mean UC response. In order to ensure spatially converged micromechanical fields as well as feasible overall CPU times, an efficient but simple solution strategy for two-scale simulations is proposed. As an example, the constitutive behavior of 42CrMo4 steel is predicted during macroscopic three-point bending tests.

  4. Efficient and accurate two-scale FE-FFT-based prediction of the effective material behavior of elasto-viscoplastic polycrystals

    NASA Astrophysics Data System (ADS)

    Kochmann, Julian; Wulfinghoff, Stephan; Ehle, Lisa; Mayer, Joachim; Svendsen, Bob; Reese, Stefanie

    2017-09-01

    Recently, two-scale FE-FFT-based methods (e.g., Spahn et al. in Comput Methods Appl Mech Eng 268:871-883, 2014; Kochmann et al. in Comput Methods Appl Mech Eng 305:89-110, 2016) have been proposed to predict the microscopic and overall mechanical behavior of heterogeneous materials. The purpose of this work is the extension to elasto-viscoplastic polycrystals, efficient and robust Fourier solvers and the prediction of micromechanical fields during macroscopic deformation processes. Assuming scale separation, the macroscopic problem is solved using the finite element method. The solution of the microscopic problem, which is embedded as a periodic unit cell (UC) in each macroscopic integration point, is found by employing fast Fourier transforms, fixed-point and Newton-Krylov methods. The overall material behavior is defined by the mean UC response. In order to ensure spatially converged micromechanical fields as well as feasible overall CPU times, an efficient but simple solution strategy for two-scale simulations is proposed. As an example, the constitutive behavior of 42CrMo4 steel is predicted during macroscopic three-point bending tests.

  5. Determination of the state-of-charge in lead-acid batteries by means of a reference cell

    NASA Astrophysics Data System (ADS)

    Armenta, C.

    A knowledge of the state-of-charge of any battery is an essential requirement for system energy management and for battery life extension. In photovoltaic power plants and stand-alone photovoltaic installations, a knowledge of the state-of-charge helps one to predict remaining energy, to determine time remaining before battery turndown, and to avoid failures during operation. A reliable method of predicting the state-of-charge will allow reduced installation costs because less reserve capacity is needed to guarantee a reliable energy supply. We propose an on-line method based on simple electrical measurements combined with a new electrolyte agitation technique which avoids systematic control of the battery state-of-charge. The method is very accurate and reduces the standard error in the state-of-charge prediction.

  6. A study of methods of prediction and measurement of the transmission sound through the walls of light aircraft

    NASA Technical Reports Server (NTRS)

    Forssen, B.; Wang, Y. S.; Crocker, M. J.

    1981-01-01

    Several aspects were studied. The SEA theory was used to develop a theoretical model to predict the transmission loss through an aircraft window. This work mainly consisted of the writing of two computer programs. One program predicts the sound transmission through a plexiglass window (the case of a single partition). The other program applies to the case of a plexiglass window with a window shade added (the case of a double partition with an air gap). The sound transmission through a structure was measured in experimental studies using several different methods so that the accuracy and complexity of all the methods could be compared. The measurements were conducted on a simple model of a fuselage (a cylindrical shell), on a real aircraft fuselage, and on stiffened panels.

  7. A study of methods of prediction and measurement of the transmission sound through the walls of light aircraft

    NASA Astrophysics Data System (ADS)

    Forssen, B.; Wang, Y. S.; Crocker, M. J.

    1981-12-01

    Several aspects were studied. The SEA theory was used to develop a theoretical model to predict the transmission loss through an aircraft window. This work mainly consisted of the writing of two computer programs. One program predicts the sound transmission through a plexiglass window (the case of a single partition). The other program applies to the case of a plexiglass window with a window shade added (the case of a double partition with an air gap). The sound transmission through a structure was measured in experimental studies using several different methods so that the accuracy and complexity of all the methods could be compared. The measurements were conducted on a simple model of a fuselage (a cylindrical shell), on a real aircraft fuselage, and on stiffened panels.

  8. Research on Fault Rate Prediction Method of T/R Component

    NASA Astrophysics Data System (ADS)

    Hou, Xiaodong; Yang, Jiangping; Bi, Zengjun; Zhang, Yu

    2017-07-01

    The T/R component is an important part of large phased-array radar antennas; because such components are numerous and have a high fault rate, fault prediction for them is of practical significance. To address the problems of the traditional grey model GM(1,1) in practical operation, this paper establishes a discrete grey model based on the original model, introduces an optimization factor to optimize the background value, and adds a linear term to the prediction model, yielding an improved discrete grey model with linear regression. Finally, an example is simulated and compared with other models. The results show that the proposed method has higher accuracy, is simple to solve, and has a wider scope of application.

  9. Lock-in amplifier error prediction and correction in frequency sweep measurements.

    PubMed

    Sonnaillon, Maximiliano Osvaldo; Bonetto, Fabian Jose

    2007-01-01

    This article proposes an analytical algorithm for predicting errors in lock-in amplifiers (LIAs) working with time-varying reference frequency. Furthermore, a simple method for correcting such errors is presented. The reference frequency can be swept in order to measure the frequency response of a system within a given spectrum. The continuous variation of the reference frequency produces a measurement error that depends on three factors: the sweep speed, the LIA low-pass filters, and the frequency response of the measured system. The proposed error prediction algorithm is based on the final value theorem of the Laplace transform. The correction method uses a double-sweep measurement. A mathematical analysis is presented and validated with computational simulations and experimental measurements.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gavignet, A.A.; Wick, C.J.

    In current practice, pressure drops in the mud circulating system and the settling velocity of cuttings are calculated with simple rheological models and simple equations. Wellsite computers now allow more sophistication in drilling computations. In this paper, experimental results on the settling velocity of spheres in drilling fluids are reported, along with rheograms done over a wide range of shear rates. The flow curves are fitted to polynomials and general methods are developed to predict friction losses and settling velocities as functions of the polynomial coefficients. These methods were incorporated in a software package that can handle any rig configuration system, including riser booster. Graphic displays show the effect of each parameter on the performance of the circulating system.
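
    The fitting step translates directly into a few lines of Python. The sketch below fits a polynomial to an illustrative rheogram and evaluates apparent viscosity from the coefficients; the data points and the cubic degree are invented for the example, and the paper's friction-loss and settling-velocity correlations are not reproduced.

      import numpy as np

      shear_rate = np.array([5.1, 10.2, 170.0, 340.0, 511.0, 1022.0])   # 1/s
      shear_stress = np.array([4.8, 6.1, 19.5, 28.7, 36.2, 55.9])       # Pa

      coeffs = np.polyfit(shear_rate, shear_stress, deg=3)   # fitted flow curve

      def apparent_viscosity(gamma_dot):
          """Apparent viscosity (Pa.s) from the polynomial flow curve."""
          return np.polyval(coeffs, gamma_dot) / gamma_dot

      print(apparent_viscosity(200.0))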

  11. Southern Forestry Smoke Management Guidebook

    Treesearch

    Hugh E. Mobley, senior compiler

    1976-01-01

    A system for predicting and modifying smoke concentrations from prescription fires is introduced. While limited to particulate matter and the more typical southern fuels, the system is for both simple and complex applications. Forestry smoke constituents, variables affecting smoke production and dispersion, and new methods for estimating available fuel are presented....

  12. Literature review : simple test method for possible use in predicting the fatigue of asphaltic concrete.

    DOT National Transportation Integrated Search

    1975-01-01

    It has been recognized for many years that fatigue is one of many mechanisms by which asphaltic concrete pavements fail. Experience and empirical design procedures such as those developed by Marshall and Hveem have enabled engineers to design-mixture...

  13. A field method for soil erosion measurements in agricultural and natural lands

    Treesearch

    Y.P. Hsieh; K.T. Grant; G.C. Bugna

    2009-01-01

    Soil erosion is one of the most important watershed processes in nature, yet quantifying it under field conditions remains a challenge. The lack of soil erosion field data is a major factor hindering our ability to predict soil erosion in a watershed. We present here the development of a simple and sensitive field method that quantifies soil erosion and the resulting...

  14. Earthquake Prediction in a Big Data World

    NASA Astrophysics Data System (ADS)

    Kossobokov, V. G.

    2016-12-01

    The digital revolution that started just about 15 years ago has already surpassed a global information storage capacity of more than 5000 exabytes (in optimally compressed bytes) per year. Open data in a Big Data World provides unprecedented opportunities for enhancing studies of the Earth System. However, it also opens wide avenues for deceptive associations in inter- and transdisciplinary data and for misleading predictions based on so-called "precursors". Earthquake prediction is not an easy task: it implies a delicate application of statistics. So far, none of the proposed short-term precursory signals has shown sufficient evidence to be used as a reliable precursor of catastrophic earthquakes. Regrettably, in many cases of seismic hazard assessment (SHA), from term-less to time-dependent (probabilistic PSHA or deterministic DSHA), and in short-term earthquake forecasting (StEF), claims of a high potential of the method are based on a flawed application of statistics and, therefore, are hardly suitable for communication to decision makers. Self-testing must be done in advance of claiming prediction of hazardous areas and/or times. The necessity and possibility of applying simple tools of earthquake prediction strategies, in particular the error diagram introduced by G.M. Molchan in the early 1990s and the Seismic Roulette null hypothesis as a metric of the alerted space, are evident. The set of errors, i.e. the rates of failure and of the alerted space-time volume, can easily be compared to random guessing, and this comparison permits evaluating the effectiveness of the SHA method and determining the optimal choice of parameters with regard to a given cost-benefit function. This and other information obtained in such simple testing may supply us with realistic estimates of the confidence and accuracy of SHA predictions and, if reliable but not necessarily perfect, with related recommendations on the level of risks for decision making in regard to engineering design, insurance, and emergency management. Examples of independent expertise of "seismic hazard maps", "precursors", and "forecast/prediction methods" are provided.

  15. Prediction: The Modern-Day Sport-Science and Sports-Medicine "Quest for the Holy Grail".

    PubMed

    McCall, Alan; Fanchini, Maurizio; Coutts, Aaron J

    2017-05-01

    In high-performance sport, science and medicine practitioners employ a variety of physical and psychological tests, training and match monitoring, and injury-screening tools for a variety of reasons, mainly to predict performance, identify talented individuals, and flag when an injury will occur. The ability to "predict" outcomes such as performance, talent, or injury is arguably sport science and medicine's modern-day equivalent of the "Quest for the Holy Grail." The purpose of this invited commentary is to highlight how often studies investigating association are misinterpreted as studies analyzing prediction, and to provide practitioners with simple recommendations to quickly distinguish between methods pertaining to association and those of prediction.

  16. Sub-Model Partial Least Squares for Improved Accuracy in Quantitative Laser Induced Breakdown Spectroscopy

    NASA Astrophysics Data System (ADS)

    Anderson, R. B.; Clegg, S. M.; Frydenvang, J.

    2015-12-01

    One of the primary challenges faced by the ChemCam instrument on the Curiosity Mars rover is developing a regression model that can accurately predict the composition of the wide range of target types encountered (basalts, calcium sulfate, feldspar, oxides, etc.). The original calibration used 69 rock standards to train a partial least squares (PLS) model for each major element. By expanding the suite of calibration samples to >400 targets spanning a wider range of compositions, the accuracy of the model was improved, but some targets with "extreme" compositions (e.g. pure minerals) were still poorly predicted. We have therefore developed a simple method, referred to as "submodel PLS", to improve the performance of PLS across a wide range of target compositions. In addition to generating a "full" (0-100 wt.%) PLS model for the element of interest, we also generate several overlapping submodels (e.g. for SiO2, we generate "low" (0-50 wt.%), "mid" (30-70 wt.%), and "high" (60-100 wt.%) models). The submodels are generally more accurate than the "full" model for samples within their range because they are able to adjust for matrix effects that are specific to that range. To predict the composition of an unknown target, we first predict the composition with the submodels and the "full" model. Then, based on the predicted composition from the "full" model, the appropriate submodel prediction can be used (e.g. if the full model predicts a low composition, use the "low" model result, which is likely to be more accurate). For samples with "full" predictions that occur in a region of overlap between submodels, the submodel predictions are "blended" using a simple linear weighted sum. The submodel PLS method shows improvements in most of the major elements predicted by ChemCam and reduces the occurrence of negative predictions for low wt.% targets. Submodel PLS is currently being used in conjunction with ICA regression for the major element compositions of ChemCam data.
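
    The routing and blending logic described above is simple enough to sketch directly. In the snippet below the trained PLS models are stand-ins (any object with a .predict method), and the composition ranges follow the SiO2 example from the abstract; only the selection and blending step is shown, not the PLS training itself.

      import numpy as np

      def submodel_predict(spectrum, full, low, mid, high):
          ref = float(np.clip(full.predict(spectrum), 0.0, 100.0))  # route on full model
          candidates = [((0.0, 50.0), low), ((30.0, 70.0), mid), ((60.0, 100.0), high)]
          preds = [(lo, hi, m.predict(spectrum))
                   for (lo, hi), m in candidates if lo <= ref <= hi]
          if len(preds) == 1:                     # reference falls in one submodel only
              return preds[0][2]
          (lo1, hi1, p1), (lo2, hi2, p2) = preds  # overlap: linear weighted blend
          w = (ref - lo2) / (hi1 - lo2)           # 0 at overlap start, 1 at its end
          return (1.0 - w) * p1 + w * p2

      class Stub:                                 # placeholder for a trained PLS model
          def __init__(self, value): self.value = value
          def predict(self, spectrum): return self.value

      # full model says ~45 wt.%: blend the "low" and "mid" submodel predictions
      print(submodel_predict(None, Stub(45.0), Stub(43.0), Stub(46.0), Stub(90.0)))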

  17. Connecting clinical and actuarial prediction with rule-based methods.

    PubMed

    Fokkema, Marjolein; Smits, Niels; Kelderman, Henk; Penninx, Brenda W J H

    2015-06-01

    Meta-analyses comparing the accuracy of clinical versus actuarial prediction have shown actuarial methods to outperform clinical methods, on average. However, actuarial methods are still not widely used in clinical practice, and there has been a call for the development of actuarial prediction methods for clinical practice. We argue that rule-based methods may be more useful than the linear main effect models usually employed in prediction studies, from a data and decision analytic as well as a practical perspective. In addition, decision rules derived with rule-based methods can be represented as fast and frugal trees, which, unlike main effects models, can be used in a sequential fashion, reducing the number of cues that have to be evaluated before making a prediction. We illustrate the usability of rule-based methods by applying RuleFit, an algorithm for deriving decision rules for classification and regression problems, to a dataset on prediction of the course of depressive and anxiety disorders from Penninx et al. (2011). The RuleFit algorithm provided a model consisting of 2 simple decision rules, requiring evaluation of only 2 to 4 cues. Predictive accuracy of the 2-rule model was very similar to that of a logistic regression model incorporating 20 predictor variables, originally applied to the dataset. In addition, the 2-rule model required, on average, evaluation of only 3 cues. Therefore, the RuleFit algorithm appears to be a promising method for creating decision tools that are less time consuming and easier to apply in psychological practice, and with accuracy comparable to traditional actuarial methods.
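
    To make the "fast and frugal" flavor concrete, here is a hypothetical two-rule model of the kind RuleFit produces, applied with only a few cues. The cue names, thresholds, and weights are invented for illustration and are not the rules derived from the Penninx et al. (2011) data.

      import math

      def two_rule_risk(severity, duration_months, age):
          """Additive score over two decision rules, mapped to a probability."""
          score = -1.0                                   # baseline (intercept) logit
          if severity > 20 and duration_months > 6:      # rule 1: severe and chronic
              score += 1.4
          if age < 30:                                   # rule 2: early onset
              score += 0.8
          return 1.0 / (1.0 + math.exp(-score))          # logistic link

      print(round(two_rule_risk(severity=25, duration_months=12, age=26), 3))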

  18. Improved NASA-ANOPP Noise Prediction Computer Code for Advanced Subsonic Propulsion Systems. Volume 2; Fan Suppression Model Development

    NASA Technical Reports Server (NTRS)

    Kontos, Karen B.; Kraft, Robert E.; Gliebe, Philip R.

    1996-01-01

    The Aircraft Noise Prediction Program (ANOPP) is an industry-wide tool used to predict turbofan engine flyover noise in system noise optimization studies. Its goal is to provide the best currently available methods for source noise prediction. As part of a program to improve the Heidmann fan noise model, models for fan inlet and fan exhaust noise suppression estimation that are based on simple engine and acoustic geometry inputs have been developed. The models can be used to predict sound power level suppression and sound pressure level suppression at a position specified relative to the engine inlet.

  19. Advantage of multiple spot urine collections for estimating daily sodium excretion: comparison with two 24-h urine collections as reference.

    PubMed

    Uechi, Ken; Asakura, Keiko; Ri, Yui; Masayasu, Shizuko; Sasaki, Satoshi

    2016-02-01

    Several estimation methods for 24-h sodium excretion using spot urine samples have been reported, but accurate estimation at the individual level remains difficult. We aimed to clarify the most accurate method of estimating 24-h sodium excretion with different numbers of available spot urine samples. A total of 370 participants from throughout Japan collected multiple 24-h urine and spot urine samples independently. Participants were allocated randomly into a development and a validation dataset. Two estimation methods were established in the development dataset using the two 24-h sodium excretion samples as reference: the 'simple mean method', estimated by multiplying the sodium-creatinine ratio by predicted 24-h creatinine excretion, and the 'regression method', which employed linear regression analysis. The accuracy of the two methods was examined by comparing the estimated means and concordance correlation coefficients (CCC) in the validation dataset. Mean sodium excretion by the simple mean method with three spot urine samples was closest to that by 24-h collection (difference: -1.62 mmol/day). CCC with the simple mean method increased with the number of spot urine samples: 0.20, 0.31, and 0.42 using one, two, and three samples, respectively. This method with three spot urine samples yielded a higher CCC than the regression method (0.40). When only one spot urine sample was available for each study participant, CCC was higher with the regression method (0.36). The simple mean method with three spot urine samples yielded the most accurate estimates of sodium excretion. When only one spot urine sample was available, the regression method was preferable.
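
    The simple mean method itself is a one-line calculation per sample. The sketch below averages the sodium-creatinine ratios of the available spot samples and scales by predicted 24-h creatinine excretion; the numbers and the creatinine prediction value are illustrative, since the study's prediction equation is not given in the abstract.

      import numpy as np

      def estimate_24h_sodium(spot_na_mmol_l, spot_cr_mmol_l, pred_cr_mmol_day):
          """Mean over spot samples of (Na/Cr ratio) x predicted 24-h creatinine."""
          ratios = np.asarray(spot_na_mmol_l) / np.asarray(spot_cr_mmol_l)
          return float(np.mean(ratios * pred_cr_mmol_day))

      na = [120.0, 95.0, 140.0]     # spot urinary sodium, mmol/L
      cr = [10.0, 8.0, 12.0]        # spot urinary creatinine, mmol/L
      print(estimate_24h_sodium(na, cr, pred_cr_mmol_day=11.0))   # mmol/day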

  20. Prediction of fat-free body mass from bioelectrical impedance and anthropometry among 3-year-old children using DXA

    PubMed Central

    Ejlerskov, Katrine T.; Jensen, Signe M.; Christensen, Line B.; Ritz, Christian; Michaelsen, Kim F.; Mølgaard, Christian

    2014-01-01

    For 3-year-old children, suitable methods to estimate body composition are sparse. We aimed to develop predictive equations for estimating fat-free mass (FFM) from bioelectrical impedance (BIA) and anthropometry, using dual-energy X-ray absorptiometry (DXA) as the reference method, with data from 99 healthy 3-year-old Danish children. Predictive equations were derived from two multiple linear regression models, a comprehensive model (height²/resistance (RI) plus six anthropometric measurements) and a simple model (RI, height, weight). Their uncertainty was quantified by means of a 10-fold cross-validation approach. The prediction error of FFM was 3.0% for both equations (root mean square error: 360 and 356 g, respectively). The derived equations produced BIA-based predictions of FFM and FM near the DXA scan results. We suggest that the predictive equations can be applied in similar population samples aged 2-4 years. The derived equations may prove useful for studies linking body composition to early risk factors and early onset of obesity. PMID:24463487

  1. Prediction of fat-free body mass from bioelectrical impedance and anthropometry among 3-year-old children using DXA.

    PubMed

    Ejlerskov, Katrine T; Jensen, Signe M; Christensen, Line B; Ritz, Christian; Michaelsen, Kim F; Mølgaard, Christian

    2014-01-27

    For 3-year-old children, suitable methods to estimate body composition are sparse. We aimed to develop predictive equations for estimating fat-free mass (FFM) from bioelectrical impedance (BIA) and anthropometry, using dual-energy X-ray absorptiometry (DXA) as the reference method, with data from 99 healthy 3-year-old Danish children. Predictive equations were derived from two multiple linear regression models, a comprehensive model (height²/resistance (RI) plus six anthropometric measurements) and a simple model (RI, height, weight). Their uncertainty was quantified by means of a 10-fold cross-validation approach. The prediction error of FFM was 3.0% for both equations (root mean square error: 360 and 356 g, respectively). The derived equations produced BIA-based predictions of FFM and FM near the DXA scan results. We suggest that the predictive equations can be applied in similar population samples aged 2-4 years. The derived equations may prove useful for studies linking body composition to early risk factors and early onset of obesity.

  2. Simple approach for high-contrast optical imaging and characterization of graphene-based sheets.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jung, I.; Pelton, M.; Piner, R.

    2007-12-01

    A simple optical method is presented for identifying and measuring the effective optical properties of nanometer-thick, graphene-based materials, based on the use of substrates consisting of a thin dielectric layer on silicon. High contrast between the graphene-based materials and the substrate is obtained by choosing appropriate optical properties and thickness of the dielectric layer. The effective refractive index and optical absorption coefficient of graphene oxide, thermally reduced graphene oxide, and graphene are obtained by comparing the predicted and measured contrasts.

  3. Correlating the Subjects of Books Taken Out of and Books Used Within an Open-Stack Library

    ERIC Educational Resources Information Center

    McGrath, William E.

    1971-01-01

    Out-of-library circulation totals were found to be reliable indicators of in-library use. For predicting in-library use (and thus total use) two methods are cited: simple ratio of out to in, and the regression equation. (4 references) (Author/NH)

  4. Treatment of Phobic Disorders Using Cognitive and Exposure Methods: A Self-Efficacy Analysis.

    ERIC Educational Resources Information Center

    Biran, Mia; Wilson, G. Terence

    1981-01-01

    Examined predictions derived from self-efficacy theory in comparing the effects of exposure and cognitive interventions with simple phobics. Guided exposure (GE) was significantly superior to cognitive restructuring (CR) in enhancing approach behavior, increasing level and strength of self-efficacy, reducing subjective fear, and decreasing…

  5. Model Diagnostics for Bayesian Networks

    ERIC Educational Resources Information Center

    Sinharay, Sandip

    2006-01-01

    Bayesian networks are frequently used in educational assessments primarily for learning about students' knowledge and skills. There is a lack of work on assessing the fit of Bayesian networks. This article employs the posterior predictive model checking method, a popular Bayesian model checking tool, to assess the fit of simple Bayesian networks. A…

  6. Seismic energy data analysis of Merapi volcano to test the eruption time prediction using materials failure forecast method (FFM)

    NASA Astrophysics Data System (ADS)

    Anggraeni, Novia Antika

    2015-04-01

    The test of eruption time prediction is an effort to prepare volcanic disaster mitigation, especially in a volcano's inhabited slope area, such as at Merapi Volcano. The test can be conducted by observing the increase of volcanic activity, such as seismicity degree, deformation and SO2 gas emission. One method that can be used to predict the time of eruption is the Materials Failure Forecast Method (FFM), a predictive method to determine the time of volcanic eruption introduced by Voight (1988). This method requires an increase in the rate of change, or acceleration, of the observed volcanic activity parameters. The parameter used in this study is the seismic energy value of Merapi Volcano from 1990-2012. The data were plotted as graphs of the inverse seismic energy rate versus time, following the FFM graphical technique, using simple linear regression. To increase the time precision, data quality control employed the correlation coefficient of the inverse seismic energy rate versus time. From the results of the graph analysis, the predicted eruption times deviate from the actual eruption times by between -2.86 and 5.49 days.
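
    The graphical technique reduces to a linear extrapolation, sketched below: fit a straight line to the inverse rate versus time and take its zero crossing as the predicted eruption time. The toy numbers are invented; they are not the Merapi seismic energy data.

      import numpy as np

      def ffm_predict(times, rates):
          """Predicted failure time: where the fitted 1/rate line reaches zero."""
          inv = 1.0 / np.asarray(rates, dtype=float)
          slope, intercept = np.polyfit(times, inv, 1)
          return -intercept / slope

      days = np.array([0.0, 2.0, 4.0, 6.0, 8.0])
      rates = np.array([1.0, 1.4, 2.1, 3.5, 8.0])    # accelerating activity (toy)
      print(ffm_predict(days, rates))                # predicted eruption day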

  7. Seismic energy data analysis of Merapi volcano to test the eruption time prediction using materials failure forecast method (FFM)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anggraeni, Novia Antika, E-mail: novia.antika.a@gmail.com

    The test of eruption time prediction is an effort to prepare volcanic disaster mitigation, especially in a volcano's inhabited slope area, such as at Merapi Volcano. The test can be conducted by observing the increase of volcanic activity, such as seismicity degree, deformation and SO2 gas emission. One method that can be used to predict the time of eruption is the Materials Failure Forecast Method (FFM), a predictive method to determine the time of volcanic eruption introduced by Voight (1988). This method requires an increase in the rate of change, or acceleration, of the observed volcanic activity parameters. The parameter used in this study is the seismic energy value of Merapi Volcano from 1990-2012. The data were plotted as graphs of the inverse seismic energy rate versus time, following the FFM graphical technique, using simple linear regression. To increase the time precision, data quality control employed the correlation coefficient of the inverse seismic energy rate versus time. From the results of the graph analysis, the predicted eruption times deviate from the actual eruption times by between -2.86 and 5.49 days.

  8. Increased Fidelity in Prediction Methods For Landing Gear Noise

    NASA Technical Reports Server (NTRS)

    Lopes, Leonard V.; Brentner, Kenneth S.; Morris, Philip J.; Lockard, David P.

    2006-01-01

    An aeroacoustic prediction scheme has been developed for landing gear noise. The method is designed to handle the complex landing gear geometry of current and future aircraft. The gear is represented by a collection of subassemblies and simple components that are modeled using acoustic elements. These acoustic elements are generic, but generate noise representative of the physical components on a landing gear. The method sums the noise radiation from each component of the undercarriage in isolation, accounting for interference with adjacent components through an estimate of the local upstream and downstream flows and turbulence intensities. The acoustic calculations are made in the code LGMAP, which computes the sound pressure levels at various observer locations. The method can calculate the noise from the undercarriage in isolation or installed on an aircraft, for both main and nose landing gear. Comparisons with wind tunnel and flight data are used to initially calibrate the method; it may then be used to predict the noise of any landing gear. In this paper, noise predictions are compared with wind tunnel data for model landing gears of various scales and levels of fidelity, as well as with flight data on full-scale undercarriages. The present agreement between the calculations and measurements suggests the method has promise for future application in the prediction of airframe noise.

  9. An automated benchmarking platform for MHC class II binding prediction methods.

    PubMed

    Andreatta, Massimo; Trolle, Thomas; Yan, Zhen; Greenbaum, Jason A; Peters, Bjoern; Nielsen, Morten

    2018-05-01

    Computational methods for the prediction of peptide-MHC binding have become an integral and essential component of candidate selection in experimental T cell epitope discovery studies. The sheer number of published prediction methods, and the often discordant reports on their performance, poses a considerable quandary to the experimentalist who needs to choose the best tool for their research. With the goal of providing an unbiased, transparent evaluation of the state of the art in the field, we created an automated platform to benchmark peptide-MHC class II binding prediction tools. The platform evaluates the absolute and relative predictive performance of all participating tools on data newly entered into the Immune Epitope Database (IEDB) before they are made public, thereby providing a frequent, unbiased assessment of available prediction tools. The benchmark runs on a weekly basis, is fully automated, and displays up-to-date results on a publicly accessible website. The initial benchmark described here included six commonly used prediction servers, but other tools are encouraged to join with a simple sign-up procedure. Performance evaluation on 59 data sets composed of over 10 000 binding affinity measurements suggested that NetMHCIIpan is currently the most accurate tool, followed by NN-align and the IEDB consensus method. Weekly reports on the participating methods can be found online at: http://tools.iedb.org/auto_bench/mhcii/weekly/. mniel@bioinformatics.dtu.dk. Supplementary data are available at Bioinformatics online.

  10. External validation of a simple clinical tool used to predict falls in people with Parkinson disease

    PubMed Central

    Duncan, Ryan P.; Cavanaugh, James T.; Earhart, Gammon M.; Ellis, Terry D.; Ford, Matthew P.; Foreman, K. Bo; Leddy, Abigail L.; Paul, Serene S.; Canning, Colleen G.; Thackeray, Anne; Dibble, Leland E.

    2015-01-01

    Background Assessment of fall risk in an individual with Parkinson disease (PD) is a critical yet often time-consuming component of patient care. Recently a simple clinical prediction tool based only on fall history in the previous year, freezing of gait in the past month, and gait velocity <1.1 m/s was developed and accurately predicted future falls in a sample of individuals with PD. Methods We sought to externally validate the utility of the tool by administering it to a different cohort of 171 individuals with PD. Falls were monitored prospectively for 6 months following predictor assessment. Results The tool accurately discriminated future fallers from non-fallers (area under the curve [AUC] = 0.83; 95% CI 0.76-0.89), comparable to the developmental study. Conclusion The results validated the utility of the tool for allowing clinicians to quickly and accurately identify an individual's risk of an impending fall. PMID:26003412
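
    Because the tool uses only three dichotomized predictors, it can be written down directly. The sketch below simply counts positive predictors; the mapping of that count to a risk category is illustrative, since the abstract reports the predictors and the tool's discrimination but not the scoring weights.

      def fall_risk_score(fell_last_year: bool,
                          freezing_last_month: bool,
                          gait_velocity_m_s: float) -> int:
          """Count of positive predictors (0-3); higher means greater fall risk."""
          return (int(fell_last_year)
                  + int(freezing_last_month)
                  + int(gait_velocity_m_s < 1.1))

      print(fall_risk_score(True, False, 1.0))   # 2 of 3 predictors positive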

  11. Dynamic properties and damping predictions for laminated plates: High order theories - Timoshenko beam

    NASA Astrophysics Data System (ADS)

    Diveyev, Bohdan; Konyk, Solomija; Crocker, Malcolm J.

    2018-01-01

    The main aim of this study is to predict the elastic and damping properties of composite laminated plates. This problem has an exact elasticity solution for simple uniform bending and transverse loading conditions. This paper presents a new stress analysis method for the accurate determination of the detailed stress distributions in laminated plates subjected to cylindrical bending. Some approximate methods for the stress state predictions for laminated plates are presented here. The present method is adaptive and does not rely on strong assumptions about the model of the plate. The theoretical model described here incorporates deformations of each sheet of the lamina, which account for the effects of transverse shear deformation, transverse normal strain-stress and nonlinear variation of displacements with respect to the thickness coordinate. Predictions of the dynamic and damping values of laminated plates for various geometrical, mechanical and fastening properties are presented. Comparison with the Timoshenko beam theory is systematically made for analytical and approximation variants.

  12. Cox-nnet: An artificial neural network method for prognosis prediction of high-throughput omics data

    PubMed Central

    Ching, Travers; Zhu, Xun

    2018-01-01

    Artificial neural networks (ANN) are computing architectures with many interconnections of simple neural-inspired computing elements, and have been applied to biomedical fields such as imaging analysis and diagnosis. We have developed a new ANN framework called Cox-nnet to predict patient prognosis from high throughput transcriptomics data. In 10 TCGA RNA-Seq data sets, Cox-nnet achieves the same or better predictive accuracy compared to other methods, including Cox-proportional hazards regression (with LASSO, ridge, and minimax concave penalty), Random Forests Survival and CoxBoost. Cox-nnet also reveals richer biological information, at both the pathway and gene levels. The outputs from the hidden layer node provide an alternative approach for survival-sensitive dimension reduction. In summary, we have developed a new method for accurate and efficient prognosis prediction on high throughput data, with functional biological insights. The source code is freely available at https://github.com/lanagarmire/cox-nnet. PMID:29634719

  13. A grammar inference approach for predicting kinase specific phosphorylation sites.

    PubMed

    Datta, Sutapa; Mukhopadhyay, Subhasis

    2015-01-01

    Kinase-mediated phosphorylation site detection is the key mechanism of post-translational modification that plays an important role in regulating various cellular processes and phenotypes. Many diseases, like cancer, are related to signaling defects associated with protein phosphorylation. Characterizing the protein kinases and their substrates enhances our ability to understand the mechanism of protein phosphorylation and extends our knowledge of signaling networks, thereby helping us to treat such diseases. Experimental methods for predicting phosphorylation sites are labour intensive and expensive. Also, the manifold increase of protein sequences in the databanks over the years necessitates fast and accurate computational methods for predicting phosphorylation sites in protein sequences. To date, a number of computational methods have been proposed by various researchers for predicting phosphorylation sites, but there remains much scope for improvement. In this communication, we present a simple and novel method based on a Grammatical Inference (GI) approach to automate the prediction of kinase-specific phosphorylation sites. In this regard, we have used a popular GI algorithm, Alergia, to infer Deterministic Stochastic Finite State Automata (DSFA) which represent the regular grammar corresponding to the phosphorylation sites. Extensive experiments on several datasets generated by us reveal that our inferred grammar successfully predicts phosphorylation sites in a kinase-specific manner. It performs significantly better when compared with the other existing phosphorylation site prediction methods. We have also compared our inferred DSFA with two other GI inference algorithms. The DSFA generated by our method performs best, which indicates that our method is robust and has potential for predicting phosphorylation sites in a kinase-specific manner.

  14. Protein Structure Prediction with Evolutionary Algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hart, W.E.; Krasnogor, N.; Pelta, D.A.

    1999-02-08

    Evolutionary algorithms have been successfully applied to a variety of molecular structure prediction problems. In this paper we reconsider the design of genetic algorithms that have been applied to a simple protein structure prediction problem. Our analysis considers the impact of several algorithmic factors for this problem: the conformational representation, the energy formulation and the way in which infeasible conformations are penalized. Further, we empirically evaluate the impact of these factors on a small set of polymer sequences. Our analysis leads to specific recommendations for both GAs and other heuristic methods for solving PSP on the HP model.

  15. Electrode effects in dielectric spectroscopy of colloidal suspensions

    NASA Astrophysics Data System (ADS)

    Cirkel, P. A.; van der Ploeg, J. P. M.; Koper, G. J. M.

    1997-02-01

    We present a simple model to account for electrode polarization in colloidal suspensions. Apart from correctly predicting the ω^{-3/2} dependence of the dielectric permittivity at low frequencies ω, the model provides an explicit dependence of the effect on electrode spacing. The predictions are tested for the sodium bis(2-ethylhexyl) sulfosuccinate (AOT) water-in-oil microemulsion with iso-octane as continuous phase. In particular, the dependence of electrode polarization effects on electrode spacing has been measured and is found to be in accordance with the model prediction. Methods to reduce or account for electrode polarization are briefly discussed.

  16. Prediction of Sound Waves Propagating Through a Nozzle Without/With a Shock Wave Using the Space-Time CE/SE Method

    NASA Technical Reports Server (NTRS)

    Wang, Xiao-Yen; Chang, Sin-Chung; Jorgenson, Philip C. E.

    2000-01-01

    The benchmark problems in Category 1 (Internal Propagation) of the third Computational Aeroacoustics (CAA) Workshop sponsored by NASA Glenn Research Center are solved using the space-time conservation element and solution element (CE/SE) method. The first problem addresses the propagation of sound waves through a nearly choked transonic nozzle. The second one concerns shock-sound interaction in a supersonic nozzle. A quasi-one-dimensional CE/SE Euler solver for a nonuniform mesh is developed and employed to solve both problems. Numerical solutions are compared with the analytical solution for both problems. It is demonstrated that the CE/SE method is capable of solving aeroacoustic problems with and without shock waves in a simple way. Furthermore, the simple nonreflecting boundary condition used in the CE/SE method, which is not based on characteristic theory, works very well.

  17. Acoustic method of damage sensing in composite materials

    NASA Technical Reports Server (NTRS)

    Workman, Gary L.; Walker, James; Lansing, Matthew

    1994-01-01

    The use of acoustic emission and acousto-ultrasonics to characterize impact damage in composite structures is being performed on both graphite-epoxy and Kevlar bottles. Further development of the acoustic emission methodology to include neural net analysis and/or other multivariate techniques will enhance the capability of the technique to identify failure mechanisms during fracture. The acousto-ultrasonics technique will be investigated to determine its ability to predict regions prone to failure prior to the burst tests. The combination of the two methods will allow simple nondestructive tests to be capable of predicting the performance of a composite structure prior to being placed in service and during service.

  18. Construction of crystal structure prototype database: methods and applications.

    PubMed

    Su, Chuanxun; Lv, Jian; Li, Quan; Wang, Hui; Zhang, Lijun; Wang, Yanchao; Ma, Yanming

    2017-04-26

    Crystal structure prototype data have become a useful source of information for materials discovery in the fields of crystallography, chemistry, physics, and materials science. This work reports the development of a robust and efficient method for assessing the similarity of structures on the basis of their interatomic distances. Using this method, we proposed a simple and unambiguous definition of crystal structure prototype based on hierarchical clustering theory, and constructed the crystal structure prototype database (CSPD) by filtering the known crystallographic structures in a database. Using a similar method, a structure prototype analysis package (SPAP) program was developed to remove similar structures from CALYPSO prediction results and to extract predicted low-energy structures for a separate theoretical structure database. A series of statistics describing the distribution of crystal structure prototypes in the CSPD was compiled to provide important insight for structure prediction and high-throughput calculations. Illustrative examples of the application of the proposed database are given, including the generation of initial structures for structure prediction and the determination of prototype structures in databases. These examples demonstrate the CSPD to be a generally applicable and useful tool for materials discovery.

  19. Review of the AGARD S&M panel evaluation program of the NASA-Lewis 'SRP' approach to high-temperature LCF life prediction. [Strainrange Partitioning for Low Cycle Fatigue]

    NASA Technical Reports Server (NTRS)

    Hirschberg, M. H.

    1978-01-01

    Twenty laboratories in six countries participated in this program, each testing its own materials of interest under its own laboratory conditions. In this way the results obtained provided validation of the Strainrange Partitioning (SRP) method for a wide range of materials and ensured maximum usefulness to each of the participating laboratories. The first, very necessary step in the evaluation of any life prediction approach, assessing the ability of the method to predict the life of simple laboratory specimens subjected to complex loading, was thereby taken. The culmination of this program was the Specialists Meeting held in Aalborg, Denmark, in April 1978. At that meeting the various investigators shared their findings, providing the basis for an in-depth evaluation of the SRP method. While results varied from laboratory to laboratory, most investigators agreed that the SRP method was a significant step toward life prediction in the presence of high temperatures and cyclic stresses.

  20. Construction of crystal structure prototype database: methods and applications

    NASA Astrophysics Data System (ADS)

    Su, Chuanxun; Lv, Jian; Li, Quan; Wang, Hui; Zhang, Lijun; Wang, Yanchao; Ma, Yanming

    2017-04-01

    Crystal structure prototype data have become a useful source of information for materials discovery in the fields of crystallography, chemistry, physics, and materials science. This work reports the development of a robust and efficient method for assessing the similarity of structures on the basis of their interatomic distances. Using this method, we proposed a simple and unambiguous definition of crystal structure prototype based on hierarchical clustering theory, and constructed the crystal structure prototype database (CSPD) by filtering the known crystallographic structures in a database. Using a similar method, a program, the structure prototype analysis package (SPAP), was developed to remove similar structures from CALYPSO prediction results and to extract predicted low-energy structures for a separate theoretical structure database. A series of statistics describing the distribution of crystal structure prototypes in the CSPD was compiled to provide important insight for structure prediction and high-throughput calculations. Illustrative examples of the application of the proposed database are given, including the generation of initial structures for structure prediction and the determination of prototype structures in databases. These examples demonstrate the CSPD to be a generally applicable and useful tool for materials discovery.

  1. Improved High/Low Junction Silicon Solar Cell

    NASA Technical Reports Server (NTRS)

    Neugroschel, A.; Pao, S. C.; Lindholm, F. A.; Fossum, J. G.

    1986-01-01

    A method was developed to raise the open-circuit voltage of silicon solar cells by incorporating a high/low junction in the cell emitter. The power-conversion efficiency of a low-resistivity silicon solar cell is considerably less than the maximum theoretical value, mainly because the open-circuit voltage is smaller than simple p/n junction theory predicts. With this method, the air-mass-zero open-circuit voltage increased from the 600-mV level to approximately 650 mV.

  2. The prediction of blood-tissue partitions, water-skin partitions and skin permeation for agrochemicals.

    PubMed

    Abraham, Michael H; Gola, Joelle M R; Ibrahim, Adam; Acree, William E; Liu, Xiangli

    2014-07-01

    There is considerable interest in the blood-tissue distribution of agrochemicals, and a number of researchers have developed experimental methods for in vitro distribution. These methods involve the determination of saline-blood and saline-tissue partitions; not only are they indirect, but they do not yield the required in vivo distribution. The authors set out equations for gas-tissue and blood-tissue distribution, for partition from water into skin and for permeation from water through human skin. Together with Abraham descriptors for the agrochemicals, these equations can be used to predict values for all of these processes. The present predictions compare favourably with experimental in vivo blood-tissue distribution where available. The predictions require no more than simple arithmetic. The present method represents a much easier and much more economic way of estimating blood-tissue partitions than the method that uses saline-blood and saline-tissue partitions. It has the added advantages of yielding the required in vivo partitions and being easily extended to the prediction of partition of agrochemicals from water into skin and permeation from water through skin. © 2013 Society of Chemical Industry.

  3. Reducing usage of the computational resources by event driven approach to model predictive control

    NASA Astrophysics Data System (ADS)

    Misik, Stefan; Bradac, Zdenek; Cela, Arben

    2017-08-01

    This paper deals with the real-time and optimal control of dynamic systems while also considering the constraints to which these systems might be subject. The main objective of this work is to propose a simple modification of the existing Model Predictive Control approach to better suit the needs of computational resource-constrained real-time systems. An example using a model of a mechanical system is presented and the performance of the proposed method is evaluated in a simulated environment.

  4. A novel simple QSAR model for the prediction of anti-HIV activity using multiple linear regression analysis.

    PubMed

    Afantitis, Antreas; Melagraki, Georgia; Sarimveis, Haralambos; Koutentis, Panayiotis A; Markopoulos, John; Igglessi-Markopoulou, Olga

    2006-08-01

    A quantitative structure-activity relationship was obtained by applying multiple linear regression analysis to a series of 80 1-[2-hydroxyethoxy-methyl]-6-(phenylthio)thymine (HEPT) derivatives with significant anti-HIV activity. For the selection of the best among 37 different descriptors, the Elimination Selection Stepwise Regression Method (ES-SWR) was utilized. The resulting QSAR model (R²(CV) = 0.8160; S(PRESS) = 0.5680) proved to be very accurate in both the training and predictive stages.

  5. Prediction of trivalent actinide amino(poly)carboxylate complex stability constants using linear free energy relationships with the lanthanide series

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Uhnak, Nic E.

    There is a gap in the literature regarding the complexation of amino(poly)carboxylate (APC) ligands with trivalent actinides (An(III)). The chemistry of the An(III) is nearly identical to that of the trivalent lanthanides (Ln(III)), but the An(III) show a slight enhancement in binding APC ligands. Presented in this report is a simple method of predicting the stability constants of the An(III) Pu, Am, Cm, Bk and Cf by using linear free energy relationships (LFERs) between the An and Ln series for 91 APCs. This method produced An stability constants within uncertainty of available literature values for most ligands.
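
    The LFER step reduces to fitting a straight line between lanthanide and actinide log K values and reading unmeasured constants off the line. A minimal sketch, with invented numbers rather than data from the report:

    ```python
    import numpy as np

    # Hypothetical stability constants (log K) for a few APC ligands where
    # both the Ln(III) analogue and the An(III) value are known (illustrative).
    logK_Ln = np.array([8.1, 11.4, 15.2, 17.9])
    logK_An = np.array([8.4, 11.9, 15.8, 18.6])

    # Linear free energy relationship: logK_An = a * logK_Ln + b
    a, b = np.polyfit(logK_Ln, logK_An, 1)

    # Predict an unmeasured An(III) constant from its Ln(III) analogue.
    print(a * 13.0 + b)
    ```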

  6. An Economical Semi-Analytical Orbit Theory for Retarded Satellite Motion About an Oblate Planet

    NASA Technical Reports Server (NTRS)

    Gordon, R. A.

    1980-01-01

    Brouwer's and Brouwer-Lyddane's use of the von Zeipel-Delaunay method is employed to develop an efficient analytical orbit theory suitable for microcomputers. A succinctly simple, pseudo-phenomenologically conceptualized algorithm is introduced which accurately and economically synthesizes the modeling of drag effects. The method lends itself to simple and efficient computer implementation. Simulated trajectory data are employed to illustrate the theory's ability to accurately accommodate oblateness and drag effects for microcomputer ground-based or onboard predicted orbital representation. Real tracking data are used to demonstrate that the theory's orbit determination and orbit prediction capabilities compare favorably with results obtained using complex definitive Cowell method solutions on satellites experiencing significant drag effects.

  7. Predictability in community dynamics.

    PubMed

    Blonder, Benjamin; Moulton, Derek E; Blois, Jessica; Enquist, Brian J; Graae, Bente J; Macias-Fauria, Marc; McGill, Brian; Nogué, Sandra; Ordonez, Alejandro; Sandel, Brody; Svenning, Jens-Christian

    2017-03-01

    The coupling between community composition and climate change spans a gradient from no lags to strong lags. The no-lag hypothesis is the foundation of many ecophysiological models, correlative species distribution modelling and climate reconstruction approaches. Simple lag hypotheses have become prominent in disequilibrium ecology, proposing that communities track climate change following a fixed function or with a time delay. However, more complex dynamics are possible and may lead to memory effects and alternate unstable states. We develop graphical and analytic methods for assessing these scenarios and show that these dynamics can appear in even simple models. The overall implications are that (1) complex community dynamics may be common and (2) detailed knowledge of past climate change and community states will often be necessary yet sometimes insufficient to make predictions of a community's future state. © 2017 John Wiley & Sons Ltd/CNRS.

  8. Flight-Test Evaluation of Flutter-Prediction Methods

    NASA Technical Reports Server (NTRS)

    Lind, Rick; Brenner, Marty

    2003-01-01

    The flight-test community routinely spends considerable time and money to determine a range of flight conditions, called a flight envelope, within which an aircraft is safe to fly. The cost of determining a flight envelope could be greatly reduced if there were a method of safely and accurately predicting the speed associated with the onset of an instability called flutter. Several methods have been developed with the goal of predicting flutter speeds to improve the efficiency of flight testing. These methods include (1) data-based methods, in which one relies entirely on information obtained from the flight tests and (2) model-based approaches, in which one relies on a combination of flight data and theoretical models. The data-driven methods include one based on extrapolation of damping trends, one that involves an envelope function, one that involves the Zimmerman-Weissenburger flutter margin, and one that involves a discrete-time auto-regressive model. An example of a model-based approach is that of the flutterometer. These methods have all been shown to be theoretically valid and have been demonstrated on simple test cases; however, until now, they have not been thoroughly evaluated in flight tests. An experimental apparatus called the Aerostructures Test Wing (ATW) was developed to test these prediction methods.

  9. A method for studying the hunting oscillations of an airplane with a simple type of automatic control

    NASA Technical Reports Server (NTRS)

    Jones, Robert T

    1944-01-01

    A method is presented for predicting the amplitude and frequency, under certain simplifying conditions, of the hunting oscillations of an automatically controlled aircraft with lag in the control system or in the response of the aircraft to the controls. If the steering device is actuated by a simple right-left type of signal, the series of alternating fixed-amplitude signals occurring during the hunting may ordinarily be represented by a "square wave." Formulas are given expressing the response to such a variation of signal in terms of the response to a unit signal. A more complex type of hunting, which may involve cyclic repetition of signals of varying duration, has not been treated and requires further analysis. Several examples of application of the method are included and the results discussed.

  10. Trajectory optimization for an asymmetric launch vehicle. M.S. Thesis - MIT

    NASA Technical Reports Server (NTRS)

    Sullivan, Jeanne Marie

    1990-01-01

    A numerical optimization technique is used to fully automate the trajectory design process for an asymmetric configuration of the proposed Advanced Launch System (ALS). The objective of the ALS trajectory design process is the maximization of the vehicle mass when it reaches the desired orbit. The trajectories used were based on a simple shape that could be described by a small set of parameters. The use of a simple trajectory model can significantly reduce the computation time required for trajectory optimization. A predictive simulation was developed to determine the on-orbit mass given an initial vehicle state, wind information, and a set of trajectory parameters. This simulation utilizes an idealized control system to speed computation by increasing the integration time step. The conjugate gradient method is used for the numerical optimization of on-orbit mass. The method requires only the evaluation of the on-orbit mass function using the predictive simulation, and the gradient of the on-orbit mass function with respect to the trajectory parameters. The gradient is approximated with finite differencing. Prelaunch trajectory designs were carried out using the optimization procedure. The predictive simulation is used in flight to redesign the trajectory to account for trajectory deviations produced by off-nominal conditions, e.g., stronger than expected head winds.
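
    A minimal sketch of the optimization loop described, with a stand-in quadratic function in place of the predictive simulation; scipy's conjugate gradient minimizer approximates the gradient by finite differencing when none is supplied, mirroring the approach above:

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def on_orbit_mass(params):
        """Stand-in for the predictive simulation: on-orbit mass (kg) for a
        given set of trajectory-shape parameters (all values hypothetical)."""
        best = np.array([0.3, 1.2, -0.5])
        return 1000.0 - np.sum((params - best) ** 2)

    # Maximize mass = minimize its negative; gradient via finite differences.
    res = minimize(lambda p: -on_orbit_mass(p), x0=np.zeros(3), method="CG")
    print(res.x, -res.fun)
    ```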

  11. SMSIM--Fortran programs for simulating ground motions from earthquakes: Version 2.0.--a revision of OFR 96-80-A

    USGS Publications Warehouse

    Boore, David M.

    2000-01-01

    A simple and powerful method for simulating ground motions is based on the assumption that the amplitude of ground motion at a site can be specified in a deterministic way, with a random phase spectrum modified such that the motion is distributed over a duration related to the earthquake magnitude and to distance from the source. This method of simulating ground motions often goes by the name "the stochastic method." It is particularly useful for simulating the higher-frequency ground motions of most interest to engineers, and it is widely used to predict ground motions for regions of the world in which recordings of motion from damaging earthquakes are not available. This simple method has been successful in matching a variety of ground-motion measures for earthquakes with seismic moments spanning more than 12 orders of magnitude. One of the essential characteristics of the method is that it distills what is known about the various factors affecting ground motions (source, path, and site) into simple functional forms that can be used to predict ground motions. SMSIM is a set of programs for simulating ground motions based on the stochastic method. This Open-File Report is a revision of an earlier report (Boore, 1996) describing a set of programs for simulating ground motions from earthquakes. The programs are based on modifications I have made to the stochastic method first introduced by Hanks and McGuire (1981). The report contains source codes, written in Fortran, and executables that can be used on a PC. Programs are included both for time-domain and for random vibration simulations. In addition, programs are included to produce Fourier amplitude spectra for the models used in the simulations and to convert shear velocity vs. depth into frequency-dependent amplification. The revision to the previous report is needed because the input and output files have changed significantly, and a number of new programs have been included in the set.
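
    The core loop of the stochastic method can be sketched in a few lines: shape windowed Gaussian noise to a target Fourier amplitude spectrum and transform back to the time domain. This toy version is not SMSIM; the target spectrum is a placeholder where a real model would insert its source, path, and site terms:

    ```python
    import numpy as np

    def stochastic_motion(n=2048, dt=0.01, seed=0):
        rng = np.random.default_rng(seed)
        noise = rng.standard_normal(n) * np.hanning(n)   # windowed noise
        spec = np.fft.rfft(noise)
        freqs = np.fft.rfftfreq(n, dt)
        # Placeholder spectral shape (a real model uses source/path/site terms).
        target = freqs / (1.0 + (freqs / 5.0) ** 2)
        spec *= target / (np.abs(spec) + 1e-12)          # impose target amplitudes
        return np.fft.irfft(spec, n)                     # acceleration time series

    acc = stochastic_motion()
    ```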

  12. Harnessing atomistic simulations to predict the rate at which dislocations overcome obstacles

    NASA Astrophysics Data System (ADS)

    Saroukhani, S.; Nguyen, L. D.; Leung, K. W. K.; Singh, C. V.; Warner, D. H.

    2016-05-01

    Predicting the rate at which dislocations overcome obstacles is key to understanding the microscopic features that govern the plastic flow of modern alloys. In this spirit, the current manuscript examines the rate at which an edge dislocation overcomes an obstacle in aluminum. Predictions were made using different popular variants of Harmonic Transition State Theory (HTST) and compared to those of direct Molecular Dynamics (MD) simulations. The HTST predictions were found to be grossly inaccurate due to the large entropy barrier associated with the dislocation-obstacle interaction. Considering the importance of finite temperature effects, the utility of the Finite Temperature String (FTS) method was then explored. While this approach was found capable of identifying a prominent reaction tube, it was not capable of computing the free energy profile along the tube. Lastly, the utility of the Transition Interface Sampling (TIS) approach was explored, which does not need a free energy profile and is known to be less reliant on the choice of reaction coordinate. The TIS approach was found capable of accurately predicting the rate, relative to direct MD simulations. This finding was utilized to examine the temperature and load dependence of the dislocation-obstacle interaction in a simple periodic cell configuration. An attractive rate prediction approach combining TST and simple continuum models is identified, and the strain rate sensitivity of individual dislocation obstacle interactions is predicted.

  13. A review of statistical updating methods for clinical prediction models.

    PubMed

    Su, Ting-Li; Jaki, Thomas; Hickey, Graeme L; Buchan, Iain; Sperrin, Matthew

    2018-01-01

    A clinical prediction model is a tool for predicting healthcare outcomes, usually within a specific population and context. A common approach is to develop a new clinical prediction model for each population and context; however, this wastes potentially useful historical information. A better approach is to update or incorporate existing clinical prediction models already developed for use in similar contexts or populations. In addition, clinical prediction models commonly become miscalibrated over time and need replacing or updating. In this article, we review a range of approaches for re-using and updating clinical prediction models; these fall into three main categories: simple coefficient updating, combining multiple previous clinical prediction models in a meta-model, and dynamic updating of models. We evaluated the performance (discrimination and calibration) of the different strategies using data on mortality following cardiac surgery in the United Kingdom. We found that no single strategy performed sufficiently well to be used to the exclusion of the others. In conclusion, useful tools exist for updating existing clinical prediction models to a new population or context, and these should be implemented, using a breadth of complementary statistical methods, rather than developing a new clinical prediction model from scratch.
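
    The simplest strategy reviewed, coefficient updating by logistic recalibration, refits only an intercept and slope on the old model's linear predictor. A minimal sketch on synthetic data (the drift and coefficients are invented):

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    X = rng.standard_normal((500, 3))
    old_coef, old_intercept = np.array([0.8, -0.5, 0.3]), -1.0
    lp = X @ old_coef + old_intercept                         # old linear predictor
    y = rng.binomial(1, 1 / (1 + np.exp(-(1.2 * lp + 0.4))))  # drifted outcomes

    # Logistic recalibration: new intercept and slope on the old predictor.
    recal = LogisticRegression().fit(lp.reshape(-1, 1), y)
    print(recal.intercept_, recal.coef_)
    ```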

  14. The generalized scattering coefficient method for plane wave scattering in layered structures

    NASA Astrophysics Data System (ADS)

    Liu, Yu; Li, Chao; Wang, Huai-Yu; Zhou, Yun-Song

    2017-02-01

    The generalized scattering coefficient (GSC) method is pedagogically derived and employed to study the scattering of plane waves in homogeneous and inhomogeneous layered structures. The numerical stabilities and accuracies of this method and other commonly used numerical methods are discussed and compared. For homogeneous layered structures, concise scattering formulas with clear physical interpretations and strong numerical stability are obtained by introducing the GSCs. For inhomogeneous layered structures, three numerical methods are employed: the staircase approximation method, the power series expansion method, and the differential equation based on the GSCs. We investigate the accuracies and convergence behaviors of these methods by comparing their predictions to the exact results. The conclusions are as follows. The staircase approximation method has a slow convergence in spite of its simple and intuitive implementation, and a fine stratification within the inhomogeneous layer is required for obtaining accurate results. The expansion method results are sensitive to the expansion order, and the treatment becomes very complicated for relatively complex configurations, which restricts its applicability. By contrast, the GSC-based differential equation possesses a simple implementation while providing fast and accurate results.

  15. Multi-Label Learning via Random Label Selection for Protein Subcellular Multi-Locations Prediction.

    PubMed

    Wang, Xiao; Li, Guo-Zheng

    2013-03-12

    Prediction of protein subcellular localization is an important but challenging problem, particularly when proteins may simultaneously exist at, or move between, two or more different subcellular location sites. Most existing protein subcellular localization methods deal only with single-location proteins. In the past few years, only a few methods have been proposed to tackle proteins with multiple locations. However, they adopt only a simple strategy, transforming the multi-location proteins into multiple proteins with single locations, which does not take the correlations among different subcellular locations into account. In this paper, a novel method named RALS (multi-label learning via RAndom Label Selection) is proposed to learn from multi-location proteins in an effective and efficient way. Through a five-fold cross-validation test on a benchmark dataset, we demonstrate that our proposed method, which takes label correlations into consideration, clearly outperforms the baseline BR method, which does not, indicating that correlations among different subcellular locations do exist and contribute to improved prediction performance. Experimental results on two benchmark datasets also show that our proposed method achieves significantly higher performance than some other state-of-the-art methods in predicting the subcellular multi-locations of proteins. The prediction web server is available at http://levis.tongji.edu.cn:8080/bioinfo/MLPred-Euk/ for public use.

  16. Evaluation of Strain-Life Fatigue Curve Estimation Methods and Their Application to a Direct-Quenched High-Strength Steel

    NASA Astrophysics Data System (ADS)

    Dabiri, M.; Ghafouri, M.; Rohani Raftar, H. R.; Björk, T.

    2018-03-01

    Methods to estimate the strain-life curve, divided here into three categories (simple approximations, artificial neural network-based approaches, and continuum damage mechanics models), were examined, and their accuracy was assessed in the strain-life evaluation of a direct-quenched high-strength steel. All the prediction methods claim to be able to perform low-cycle fatigue analysis using available or easily obtainable material properties, thus eliminating the need for costly and time-consuming fatigue tests. The simple approximations were able to estimate the strain-life curve with satisfactory accuracy using only monotonic properties. The tested neural network-based model, although yielding acceptable results for the material in question, was found to be overly sensitive to the data sets used for training and showed inconsistency in estimating fatigue life and fatigue properties. The studied continuum damage-based model was able to produce a curve detecting the early stages of crack initiation. This model requires more experimental data for calibration than the simple approximations. Because of the different theories underlying the analyzed methods, the approaches have different strengths and weaknesses. Nevertheless, the group of parametric equations categorized as simple approximations was found to be the easiest for practical use, with their applicability having already been verified for a broad range of materials.

  17. Selecting long-term care facilities with high use of acute hospitalisations: issues and options

    PubMed Central

    2014-01-01

    Background This paper considers approaches to the question "Which long-term care facilities have residents with high use of acute hospitalisations?" It demonstrates and compares four methods of identifying long-term care facilities with high use of acute hospitalisations, identifies key factors to be resolved when deciding which methods to employ, and discusses their appropriateness for different research questions. Methods OPAL was a census-type survey of aged care facilities and residents in Auckland, New Zealand, in 2008. It collected information about facility management and resident demographics, needs and care. Survey records (149 aged care facilities, 6271 residents) were linked to hospital and mortality records routinely assembled by health authorities. The main ranking endpoint was acute hospitalisations for diagnoses that were classified as potentially avoidable. Facilities were ranked using 1) simple event counts per person, 2) event rates per year of resident follow-up, 3) a statistical model of rates using four predictors, and 4) the change in ranks between methods 2) and 3). A generalized mixed model was used for Method 3 to handle the clustered nature of the data. Results 3048 potentially avoidable hospitalisations were observed during 22 months' follow-up. The same "top ten" facilities were selected by Methods 1 and 2. The statistical model (Method 3), predicting rates from resident and facility characteristics, ranked facilities differently from these two simple methods. The change-in-ranks method identified a very different set of "top ten" facilities. All methods showed a continuum of use, with no clear distinction between facilities with higher use. Conclusion The choice of selection method should depend upon the purpose of selection. To monitor performance during a period of change, a recent simple rate, count per resident, or even count per bed, may suffice. To find high-use facilities regardless of resident needs, a recent history of admissions is highly predictive. To target a few high-use facilities that have high rates after considering facility and resident characteristics, model residuals or a large increase in rank may be preferable. PMID:25052433

  18. Analysis of simple 2-D and 3-D metal structures subjected to fragment impact

    NASA Technical Reports Server (NTRS)

    Witmer, E. A.; Stagliano, T. R.; Spilker, R. L.; Rodal, J. J. A.

    1977-01-01

    Theoretical methods were developed for predicting the large-deflection elastic-plastic transient structural responses of metal containment or deflector (C/D) structures designed to cope with rotor-burst fragment impact attack. For two-dimensional C/D structures, both finite-element and finite-difference analysis methods were employed to analyze structural response produced by either prescribed transient loads or fragment impact. For the latter category, two time-wise step-by-step analysis procedures were devised to predict the structural responses resulting from a succession of fragment impacts: the collision force method (CFM), which utilizes an approximate prediction of the force applied to the attacked structure during fragment impact, and the collision imparted velocity method (CIVM), in which the impact-induced velocity increment acquired by a region of the impacted structure near the impact point is computed. The merits and limitations of these approaches are discussed. For the analysis of 3-D responses of C/D structures, only the CIVM approach was investigated.

  19. Benchmarking protein-protein interface predictions: why you should care about protein size.

    PubMed

    Martin, Juliette

    2014-07-01

    A number of predictive methods have been developed to predict protein-protein binding sites. Each new method is traditionally benchmarked using sets of protein structures of various sizes, and global statistics are used to assess the quality of the prediction. Little attention has been paid to the potential bias due to protein size on these statistics. Indeed, small proteins involve proportionally more residues at interfaces than large ones. If a predictive method is biased toward small proteins, this can lead to an over-estimation of its performance. Here, we investigate the bias due to the size effect when benchmarking protein-protein interface prediction on the widely used docking benchmark 4.0. First, we simulate random scores that favor small proteins over large ones. Instead of the 0.5 AUC (Area Under the Curve) value expected by chance, these biased scores result in an AUC equal to 0.6 using hypergeometric distributions, and up to 0.65 using constant scores. We then use real prediction results to illustrate how to detect the size bias by shuffling, and subsequently correct it using a simple conversion of the scores into normalized ranks. In addition, we investigate the scores produced by eight published methods and show that they are all affected by the size effect, which can change their relative ranking. The size effect also has an impact on linear combination scores by modifying the relative contributions of each method. In the future, systematic corrections should be applied when benchmarking predictive methods using data sets with mixed protein sizes. © 2014 Wiley Periodicals, Inc.
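
    The rank-based correction described is direct to implement: within each protein, residue scores are replaced by their ranks divided by the chain length, making score distributions comparable across proteins of different sizes. A minimal sketch:

    ```python
    import numpy as np
    from scipy.stats import rankdata

    def normalized_ranks(scores_by_protein):
        """Convert per-residue scores to within-protein normalized ranks in (0, 1]."""
        return [rankdata(s) / len(s) for s in scores_by_protein]

    rng = np.random.default_rng(2)
    proteins = [rng.random(60), rng.random(300)]   # a small and a large protein
    for r in normalized_ranks(proteins):
        print(r.min(), r.max())                    # same scale regardless of size
    ```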

  20. Portal dosimetry for VMAT using integrated images obtained during treatment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bedford, James L., E-mail: James.Bedford@icr.ac.uk; Hanson, Ian M.; Hansen, Vibeke Nordmark

    2014-02-15

    Purpose: Portal dosimetry provides an accurate and convenient means of verifying the dose delivered to the patient. A simple method for carrying out portal dosimetry for volumetric modulated arc therapy (VMAT) is described, together with phantom measurements demonstrating the validity of the approach. Methods: Portal images were predicted by projecting dose in the isocentric plane through to the portal image plane, with exponential attenuation and convolution with a double-Gaussian scatter function. Appropriate parameters for the projection were selected by fitting the calculation model to portal images measured on an iViewGT portal imager (Elekta AB, Stockholm, Sweden) for a variety of phantom thicknesses and field sizes. This model was then used to predict the portal image resulting from each control point of a VMAT arc. Finally, all these control point images were summed to predict the overall integrated portal image for the whole arc. The calculated and measured integrated portal images were compared for three lung and three esophagus plans delivered to a thorax phantom, and three prostate plans delivered to a homogeneous phantom, using a gamma index with 3% and 3 mm criteria. A 0.6 cm³ ionization chamber was used to verify the planned isocentric dose. The sensitivity of this method to errors in monitor units, field shaping, gantry angle, and phantom position was also evaluated by means of computer simulations. Results: The calculation model for portal dose prediction was able to accurately compute the portal images due to simple square fields delivered to solid water phantoms. The integrated images of VMAT treatments delivered to phantoms were also correctly predicted by the method. The proportion of the images with a gamma index of less than unity was 93.7% ± 3.0% (1 SD), and the difference between the isocenter dose calculated by the planning system and that measured by the ionization chamber was 0.8% ± 1.0%. The method was highly sensitive to errors in monitor units and field shape, but less sensitive to errors in gantry angle or phantom position. Conclusions: This method of predicting integrated portal images provides a convenient means of verifying dose delivered using VMAT, with minimal image acquisition and data processing requirements.
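
    A toy sketch of the forward model described: project the isocentric dose map, attenuate it exponentially with radiological thickness, and convolve with a double-Gaussian scatter kernel. The attenuation coefficient, weights, and kernel widths below are placeholders, not the fitted parameters:

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def predict_portal_image(dose_iso, thickness_cm, mu=0.05,
                             w=0.8, sigma1=2.0, sigma2=10.0):
        """Toy portal-dose model: exponential attenuation through the phantom
        plus a double-Gaussian scatter convolution (parameters illustrative)."""
        primary = dose_iso * np.exp(-mu * thickness_cm)
        return (w * gaussian_filter(primary, sigma1)
                + (1 - w) * gaussian_filter(primary, sigma2))

    dose = np.zeros((64, 64))
    dose[24:40, 24:40] = 1.0                     # a simple square field
    image = predict_portal_image(dose, thickness_cm=20.0)
    ```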

  1. Predicting the chromatographic retention of polymers: poly(methyl methacrylate)s and polyacrylate blends.

    PubMed

    Bashir, Mubasher A; Radke, Wolfgang

    2007-09-07

    The suitability of a retention model especially designed for polymers is investigated to describe and predict the chromatographic retention behavior of poly(methyl methacrylate)s as a function of mobile phase composition and gradient steepness. It is found that three simple yet rationally chosen chromatographic experiments suffice to extract the analyte-specific model parameters necessary to calculate the retention volumes. This allows accurate retention volumes to be predicted based on a minimum number of initial experiments. Therefore, methods for polymer separations can be developed in a relatively short time. The suitability of the virtual chromatography approach to predict the separation of polymer blends is demonstrated for the first time using a blend of different polyacrylates.

  2. A novel geometry-dosimetry label fusion method in multi-atlas segmentation for radiotherapy: a proof-of-concept study

    NASA Astrophysics Data System (ADS)

    Chang, Jina; Tian, Zhen; Lu, Weiguo; Gu, Xuejun; Chen, Mingli; Jiang, Steve B.

    2017-05-01

    Multi-atlas segmentation (MAS) has been widely used to automate the delineation of organs at risk (OARs) for radiotherapy. Label fusion is a crucial step in MAS to cope with the segmentation variabilities among multiple atlases. However, most existing label fusion methods do not consider the potential dosimetric impact of the segmentation result. In this proof-of-concept study, we propose a novel geometry-dosimetry label fusion method for MAS-based OAR auto-contouring, which evaluates the segmentation performance in terms of both geometric accuracy and the dosimetric impact of the segmentation accuracy on the resulting treatment plan. Unlike the original selective and iterative method for performance level estimation (SIMPLE), we evaluated and rejected the atlases based on both the Dice similarity coefficient and the predicted error of the dosimetric endpoints. The dosimetric error was predicted using our previously developed geometry-dosimetry model. We tested our method in MAS-based rectum auto-contouring on 20 prostate cancer patients. The accuracy in the rectum sub-volume close to the planning tumor volume (PTV), which was found to be a dosimetrically sensitive region of the rectum, was greatly improved. The mean absolute distance between the obtained contour and the physician-drawn contour in the rectum sub-volume 2 mm away from the PTV was reduced from 3.96 mm to 3.36 mm on average for the 20 patients, with the maximum decrease found to be from 9.22 mm to 3.75 mm. We also compared the dosimetric endpoints predicted for the obtained contours with those predicted for the physician-drawn contours. Our method led to smaller dosimetric endpoint errors than the SIMPLE method in 15 patients, comparable errors in 2 patients, and slightly larger errors in 3 patients. These results indicate the efficacy of our method in considering both geometric accuracy and dosimetric impact during label fusion. Our algorithm can be applied to different tumor sites and radiation treatments, given a specifically trained geometry-dosimetry model.

  3. A novel geometry-dosimetry label fusion method in multi-atlas segmentation for radiotherapy: a proof-of-concept study.

    PubMed

    Chang, Jina; Tian, Zhen; Lu, Weiguo; Gu, Xuejun; Chen, Mingli; Jiang, Steve B

    2017-05-07

    Multi-atlas segmentation (MAS) has been widely used to automate the delineation of organs at risk (OARs) for radiotherapy. Label fusion is a crucial step in MAS to cope with the segmentation variabilities among multiple atlases. However, most existing label fusion methods do not consider the potential dosimetric impact of the segmentation result. In this proof-of-concept study, we propose a novel geometry-dosimetry label fusion method for MAS-based OAR auto-contouring, which evaluates the segmentation performance in terms of both geometric accuracy and the dosimetric impact of the segmentation accuracy on the resulting treatment plan. Unlike the original selective and iterative method for performance level estimation (SIMPLE), we evaluated and rejected the atlases based on both the Dice similarity coefficient and the predicted error of the dosimetric endpoints. The dosimetric error was predicted using our previously developed geometry-dosimetry model. We tested our method in MAS-based rectum auto-contouring on 20 prostate cancer patients. The accuracy in the rectum sub-volume close to the planning tumor volume (PTV), which was found to be a dosimetrically sensitive region of the rectum, was greatly improved. The mean absolute distance between the obtained contour and the physician-drawn contour in the rectum sub-volume 2 mm away from the PTV was reduced from 3.96 mm to 3.36 mm on average for the 20 patients, with the maximum decrease found to be from 9.22 mm to 3.75 mm. We also compared the dosimetric endpoints predicted for the obtained contours with those predicted for the physician-drawn contours. Our method led to smaller dosimetric endpoint errors than the SIMPLE method in 15 patients, comparable errors in 2 patients, and slightly larger errors in 3 patients. These results indicate the efficacy of our method in considering both geometric accuracy and dosimetric impact during label fusion. Our algorithm can be applied to different tumor sites and radiation treatments, given a specifically trained geometry-dosimetry model.

  4. A Fast Method of Deriving the Kirchhoff Formula for Moving Surfaces

    NASA Technical Reports Server (NTRS)

    Farassat, F.; Posey, Joe W.

    2007-01-01

    The Kirchhoff formula for a moving surface is very useful in many wave propagation problems, particularly in the prediction of noise from rotating machinery. Several publications in the last two decades have presented derivations of the Kirchhoff formula for moving surfaces in both the time and frequency domains. Here we present a time-domain method, originally developed by Farassat and Myers, that is both simple and direct. It is based on generalized function theory and the useful concept of embedding the problem in unbounded three-dimensional space. We derive an inhomogeneous wave equation with source terms that involve Dirac delta functions with their supports on the moving data surface. This wave equation is then solved using the simple free-space Green's function of the wave equation, resulting in the Kirchhoff formula. The algebraic manipulations are minimal and simple. We do not need Green's theorem in four dimensions, and there is no ambiguity in the interpretation of any terms in the final formulas. Furthermore, this method also gives the simplest derivation of the classical Kirchhoff formula, which has a fairly lengthy derivation in physics and applied mathematics books. The Farassat-Myers method can be used easily in the frequency domain.

  5. Improving Alcohol Screening for College Students: Screening for Alcohol Misuse amongst College Students with a Simple Modification to the CAGE Questionnaire

    ERIC Educational Resources Information Center

    Taylor, Purcell; El-Sabawi, Taleed; Cangin, Causenge

    2016-01-01

    Objective: To improve the CAGE (Cut down, Annoyed, Guilty, Eye opener) questionnaire's predictive accuracy in screening college students. Participants: The sample consisted of 219 midwestern university students who self-administered a confidential survey. Methods: Exploratory factor analysis, confirmatory factor analysis, receiver operating…

  6. Overload retardation due to plasticity-induced crack closure

    NASA Technical Reports Server (NTRS)

    Fleck, N. A.; Shercliff, H. R.

    1989-01-01

    Experiments are reported which show that plasticity-induced crack closure can account for crack growth retardation following an overload. The finite element method is used to provide evidence which supports the experimental observations of crack closure. Finally, a simple model is presented which predicts with limited success the retardation transient following an overload.

  7. Geometry and the Physics of Seasons

    ERIC Educational Resources Information Center

    Khavrus, Vyacheslav; Shelevytsky, Ihor

    2012-01-01

    By means of a simple mathematical model recently developed by the authors (2010 "Phys. Educ." 45 641), the passage of the seasons on the Earth is simulated for arbitrary latitudes, taking into account sunlight attenuation in the atmosphere. The method developed can be used to predict a realistic value of the solar energy input (insolation) that…

  8. Acorn Production Characteristics of Southern Appalachian Oaks: A Simple Method to Predict Within-Year Crop Size

    Treesearch

    Cathryn H. Greenberg; Bernard R. Parresol

    2000-01-01

    We examined acorn production from 1993-97 by black oak (Quercus velutina Lam.), northern red oak (Q. rubra L.), scarlet oak (Q. coccinea Muenchh.), chestnut oak (Q. prinus L.), and white oak (Q. alba L.) in the Southern Appalachians to determine how frequency of acorn...

  9. The influence of physiological status on age prediction of Anopheles arabiensis using near infra-red spectroscopy

    USDA-ARS?s Scientific Manuscript database

    Determining the age of malaria vectors is essential for evaluating the impact of interventions that reduce the survival of wild mosquito populations and for estimating changes in vectorial capacity. Near infra-red spectroscopy (NIRS) is a simple and non-destructive method that has been used to deter...

  10. The PRONE score: an algorithm for predicting doctors’ risks of formal patient complaints using routinely collected administrative data

    PubMed Central

    Spittal, Matthew J; Bismark, Marie M; Studdert, David M

    2015-01-01

    Background Medicolegal agencies—such as malpractice insurers, medical boards and complaints bodies—are mostly passive regulators; they react to episodes of substandard care, rather than intervening to prevent them. At least part of the explanation for this reactive role lies in the widely recognised difficulty of making robust predictions about medicolegal risk at the individual clinician level. We aimed to develop a simple, reliable scoring system for predicting Australian doctors’ risks of becoming the subject of repeated patient complaints. Methods Using routinely collected administrative data, we constructed a national sample of 13 849 formal complaints against 8424 doctors. The complaints were lodged by patients with state health service commissions in Australia over a 12-year period. We used multivariate logistic regression analysis to identify predictors of subsequent complaints, defined as another complaint occurring within 2 years of an index complaint. Model estimates were then used to derive a simple predictive algorithm, designed for application at the doctor level. Results The PRONE (Predicted Risk Of New Event) score is a 22-point scoring system that indicates a doctor's future complaint risk based on four variables: a doctor's specialty and sex, the number of previous complaints and the time since the last complaint. The PRONE score performed well in predicting subsequent complaints, exhibiting strong validity and reliability and reasonable goodness of fit (c-statistic=0.70). Conclusions The PRONE score appears to be a valid method for assessing individual doctors’ risks of attracting recurrent complaints. Regulators could harness such information to target quality improvement interventions, and prevent substandard care and patient dissatisfaction. The approach we describe should be replicable in other agencies that handle large numbers of patient complaints or malpractice claims. PMID:25855664
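
    The generic construction behind scores of this kind converts logistic-regression coefficients into integer points by rounding each against a reference effect size. A minimal sketch with invented weights, not the published PRONE coefficients:

    ```python
    # Hypothetical log-odds coefficients for the four predictor types;
    # NOT the published PRONE weights.
    coefs = {"high_risk_specialty": 0.9, "male_sex": 0.5,
             "each_prior_complaint": 0.7, "recent_complaint": 1.1}

    def points(coef, base=0.5):
        """Round a coefficient to integer multiples of the reference effect."""
        return round(coef / base)

    score_card = {name: points(c) for name, c in coefs.items()}
    print(score_card)   # a doctor's score is the sum of applicable points
    ```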

  11. Extension of the Helmholtz-Smoluchowski velocity to the hydrophobic microchannels with velocity slip.

    PubMed

    Park, H M; Kim, T W

    2009-01-21

    Electrokinetic flows through hydrophobic microchannels experience velocity slip at the microchannel wall, which affects volumetric flow rate and solute retention time. The usual method of predicting the volumetric flow rate and velocity profile for hydrophobic microchannels is to solve the Navier-Stokes equation and the Poisson-Boltzmann equation for the electric potential with the boundary condition of velocity slip expressed by the Navier slip coefficient, which is computationally demanding and defies analytic solutions. In the present investigation, we have devised a simple method of predicting the velocity profiles and volumetric flow rates of electrokinetic flows by extending the concept of the Helmholtz-Smoluchowski velocity to microchannels with Navier slip. The extended Helmholtz-Smoluchowski velocity is simple to use and yields accurate results as compared to the exact solutions. Employing the extended Helmholtz-Smoluchowski velocity, the analytical expressions for volumetric flow rate and velocity profile for electrokinetic flows through rectangular microchannels with Navier slip have been obtained at high values of zeta potential. The range of validity of the extended Helmholtz-Smoluchowski velocity is also investigated.
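
    For reference, the standard thin-double-layer result that such an extension builds on can be stated compactly: with Navier slip length b and inverse Debye length κ, the classical Helmholtz-Smoluchowski velocity acquires a factor (1 + κb). This is the textbook form; the paper's expressions for rectangular channels at high zeta potential differ in detail:

    ```latex
    % Classical Helmholtz-Smoluchowski velocity at a no-slip wall:
    u_{\mathrm{HS}} = -\frac{\varepsilon \zeta E}{\mu}
    % Extension for a hydrophobic wall with Navier slip length b
    % (thin double layer; \kappa is the inverse Debye length):
    u_{\mathrm{HS}}^{\mathrm{slip}} = -\frac{\varepsilon \zeta E}{\mu}\,(1 + \kappa b)
    ```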

  12. Performance of Trajectory Models with Wind Uncertainty

    NASA Technical Reports Server (NTRS)

    Lee, Alan G.; Weygandt, Stephen S.; Schwartz, Barry; Murphy, James R.

    2009-01-01

    Typical aircraft trajectory predictors use wind forecasts but do not account for the forecast uncertainty. A method for generating estimates of wind prediction uncertainty is described and its effect on aircraft trajectory prediction uncertainty is investigated. The procedure for estimating the wind prediction uncertainty relies on a time-lagged ensemble of weather model forecasts from the hourly updated Rapid Update Cycle (RUC) weather prediction system. Forecast uncertainty is estimated using measures of the spread amongst various RUC time-lagged ensemble forecasts. This proof-of-concept study illustrates the estimated uncertainty and the actual wind errors, and documents the validity of the assumed ensemble-forecast accuracy relationship. Aircraft trajectory predictions are made using RUC winds with provision for the estimated uncertainty. Results for a set of simulated flights indicate that this simple approach effectively translates the wind uncertainty estimate into an aircraft trajectory uncertainty. A key strength of the method is the ability to relate uncertainty to specific weather phenomena (contained in the various ensemble members), allowing identification of regional variations in uncertainty.
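
    A minimal sketch of the spread calculation, assuming the forecasts valid at a common time and point from successive hourly cycles have already been collected (values invented):

    ```python
    import numpy as np

    # Hypothetical wind forecasts (m/s) valid at the same time and location,
    # taken from four successive hourly cycles: a time-lagged ensemble.
    lagged = np.array([[12.1, 11.4, 13.0, 12.6],    # u-component
                       [ 3.2,  2.8,  3.9,  3.5]])   # v-component

    mean = lagged.mean(axis=1)                 # best-estimate wind
    spread = lagged.std(axis=1, ddof=1)        # uncertainty proxy per component
    print(mean, spread)
    ```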

  13. Modeling the time-varying subjective quality of HTTP video streams with rate adaptations.

    PubMed

    Chen, Chao; Choi, Lark Kwon; de Veciana, Gustavo; Caramanis, Constantine; Heath, Robert W; Bovik, Alan C

    2014-05-01

    Newly developed hypertext transfer protocol (HTTP)-based video streaming technologies enable flexible rate-adaptation under varying channel conditions. Accurately predicting the users' quality of experience (QoE) for rate-adaptive HTTP video streams is thus critical to achieve efficiency. An important aspect of understanding and modeling QoE is predicting the up-to-the-moment subjective quality of a video as it is played, which is difficult due to hysteresis effects and nonlinearities in human behavioral responses. This paper presents a Hammerstein-Wiener model for predicting the time-varying subjective quality (TVSQ) of rate-adaptive videos. To collect data for model parameterization and validation, a database of longer duration videos with time-varying distortions was built and the TVSQs of the videos were measured in a large-scale subjective study. The proposed method is able to reliably predict the TVSQ of rate adaptive videos. Since the Hammerstein-Wiener model has a very simple structure, the proposed method is suitable for online TVSQ prediction in HTTP-based streaming.

  14. Two-layer convective heating prediction procedures and sensitivities for blunt body reentry vehicles

    NASA Technical Reports Server (NTRS)

    Bouslog, Stanley A.; An, Michael Y.; Wang, K. C.; Tam, Luen T.; Caram, Jose M.

    1993-01-01

    This paper describes procedures typically used to predict convective heating rates to hypersonic reentry vehicles using the two-layer method. These procedures were used to compute the pitch-plane heating distributions on the Apollo geometry for a wind tunnel test case and for three flight cases. Both simple engineering methods and coupled inviscid/boundary layer solutions were used to predict the heating rates. The sensitivity of the heating results to the choice of metrics, pressure distributions, boundary layer edge conditions, and wall catalycity used in the heating analysis was evaluated. Streamline metrics, pressure distributions, and boundary layer edge properties were defined from perfect-gas (wind tunnel case) and chemical equilibrium and nonequilibrium (flight cases) inviscid flow-field solutions. The results of this study indicated that the use of CFD-derived metrics and pressures provided better heating predictions when compared to wind tunnel test data. The study also showed that modeling entropy layer swallowing and ionization had little effect on the heating predictions.

  15. Accurate Simulation of MPPT Methods Performance When Applied to Commercial Photovoltaic Panels

    PubMed Central

    Cubas, Javier; Pindado, Santiago; Sanz-Andrés, Ángel

    2015-01-01

    A new, simple, and quick-calculation methodology to obtain a solar panel model, based on the manufacturers' datasheet, to perform MPPT simulations, is described. The method takes into account variations on the ambient conditions (sun irradiation and solar cells temperature) and allows fast MPPT methods comparison or their performance prediction when applied to a particular solar panel. The feasibility of the described methodology is checked with four different MPPT methods applied to a commercial solar panel, within a day, and under realistic ambient conditions. PMID:25874262

  16. Accurate simulation of MPPT methods performance when applied to commercial photovoltaic panels.

    PubMed

    Cubas, Javier; Pindado, Santiago; Sanz-Andrés, Ángel

    2015-01-01

    A new, simple, and quick-calculation methodology to obtain a solar panel model, based on the manufacturers' datasheet, to perform MPPT simulations, is described. The method takes into account variations on the ambient conditions (sun irradiation and solar cells temperature) and allows fast MPPT methods comparison or their performance prediction when applied to a particular solar panel. The feasibility of the described methodology is checked with four different MPPT methods applied to a commercial solar panel, within a day, and under realistic ambient conditions.
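
    A common way to build such a panel model from datasheet-style values is the single-diode equation, solved here for current by fixed-point iteration; the parameters below are illustrative, not those of a specific commercial panel, and the iteration is one simple choice among several solvers:

    ```python
    import numpy as np

    def pv_current(v, iph=8.2, i0=1e-9, rs=0.3, rsh=200.0, n=1.2,
                   cells=60, t=298.15):
        """Single-diode model I(V); all parameter values are placeholders."""
        vt = cells * n * 1.380649e-23 * t / 1.602176634e-19  # thermal voltage
        i = iph
        for _ in range(100):   # fixed-point iteration on the implicit equation
            i = iph - i0 * (np.exp((v + i * rs) / vt) - 1) - (v + i * rs) / rsh
        return i

    volts = np.linspace(0.0, 37.0, 200)
    power = volts * np.array([pv_current(v) for v in volts])
    print(volts[power.argmax()], power.max())   # maximum power point
    ```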

  17. Early Prediction of Reading Comprehension within the Simple View Framework

    ERIC Educational Resources Information Center

    Catts, Hugh W.; Herrera, Sarah; Nielsen, Diane Corcoran; Bridges, Mindy Sittner

    2015-01-01

    The simple view of reading proposes that reading comprehension is the product of word reading and language comprehension. In this study, we used the simple view framework to examine the early prediction of reading comprehension abilities. Using multiple measures for all constructs, we assessed word reading precursors (i.e., letter knowledge,…

  18. Syndrome diagnosis: human intuition or machine intelligence?

    PubMed

    Braaten, Oivind; Friestad, Johannes

    2008-01-01

    The aim of this study was to investigate whether artificial intelligence methods can provide the objective methods that are essential in syndrome diagnosis. Most syndromes have no external criterion standard of diagnosis. The predictive value of a clinical sign used in diagnosis depends on the prior probability of the syndrome diagnosis, and clinicians often misjudge the probabilities involved. Syndromology needs objective methods to ensure diagnostic consistency and to take prior probabilities into account. We applied two basic artificial intelligence methods to a database of machine-generated patients: a 'vector method' and a set method. As reference methods we ran an ID3 algorithm, a cluster analysis and a naive Bayes' calculation on the same patient series. The overall diagnostic error rate for the vector algorithm was 0.93%, and for the ID3 0.97%. For the clinical signs found by the set method, the predictive values varied between 0.71 and 1.0. The artificial intelligence methods that we used proved simple, robust and powerful, and represent objective diagnostic methods.

  19. Effects of Moisture and Particle Size on Quantitative Determination of Total Organic Carbon (TOC) in Soils Using Near-Infrared Spectroscopy.

    PubMed

    Tamburini, Elena; Vincenzi, Fabio; Costa, Stefania; Mantovi, Paolo; Pedrini, Paola; Castaldelli, Giuseppe

    2017-10-17

    Near-infrared spectroscopy is a cost-effective and environmentally friendly technique that could represent an alternative to conventional soil analysis methods, including those for total organic carbon (TOC). Soil fertility and quality are usually measured by traditional methods that involve the use of hazardous and strong chemicals. The effects of physical soil characteristics, such as moisture content and particle size, on spectral signals are of great interest for understanding and optimizing prediction capability and for setting up a robust and reliable calibration model, with the future perspective of application in the field. Spectra of 46 soil samples were collected. Soil samples were divided into three data sets: unprocessed; dried only; and dried, ground, and sieved, in order to evaluate the effects of moisture and particle size on spectral signals. Both separate and combined normalization methods, including standard normal variate (SNV), multiplicative scatter correction (MSC) and normalization by closure (NCL), as well as smoothing using first and second derivatives (DV1 and DV2), were applied, for a total of seven cases. Pretreatments for model optimization were designed and compared for each data set. The best combination of pretreatments was achieved by applying SNV and DV2 in partial least squares (PLS) modelling. There were no significant differences between the predictions using the three different data sets (p < 0.05). Finally, a unique database including all three data sets was built to cover all the tested sources of sample variability and used for the final prediction. External validation of TOC was carried out on 16 unknown soil samples to evaluate the predictive ability of the final combined calibration model. Hence, we demonstrate that sample preprocessing has a minor influence on the quality of near-infrared (NIR) predictions, laying the ground for direct and fast in situ application of the method. Data can be acquired outside the laboratory, since the method is simple and needs no more than a simple band ratio of the spectra.
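
    A minimal sketch of the best-performing preprocessing chain (SNV followed by a second derivative) feeding a PLS regression; the Savitzky-Golay settings and component count are assumed typical values, and the spectra here are random placeholders:

    ```python
    import numpy as np
    from scipy.signal import savgol_filter
    from sklearn.cross_decomposition import PLSRegression

    def snv(spectra):
        """Standard normal variate: center and scale each spectrum individually."""
        return ((spectra - spectra.mean(axis=1, keepdims=True))
                / spectra.std(axis=1, keepdims=True))

    rng = np.random.default_rng(3)
    X = rng.random((46, 700))   # placeholder NIR spectra (46 samples)
    y = rng.random(46)          # placeholder TOC reference values

    # SNV followed by a Savitzky-Golay second derivative (DV2), then PLS.
    X_pre = savgol_filter(snv(X), window_length=11, polyorder=2, deriv=2, axis=1)
    pls = PLSRegression(n_components=5).fit(X_pre, y)
    print(pls.score(X_pre, y))
    ```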

  20. An Approximation to the Adaptive Exponential Integrate-and-Fire Neuron Model Allows Fast and Predictive Fitting to Physiological Data.

    PubMed

    Hertäg, Loreen; Hass, Joachim; Golovko, Tatiana; Durstewitz, Daniel

    2012-01-01

    For large-scale network simulations, it is often desirable to have computationally tractable, yet in a defined sense still physiologically valid neuron models. In particular, these models should be able to reproduce physiological measurements, ideally in a predictive sense, and under different input regimes in which neurons may operate in vivo. Here we present an approach to parameter estimation for a simple spiking neuron model mainly based on standard f-I curves obtained from in vitro recordings. Such recordings are routinely obtained in standard protocols and assess a neuron's response under a wide range of mean-input currents. Our fitting procedure makes use of closed-form expressions for the firing rate derived from an approximation to the adaptive exponential integrate-and-fire (AdEx) model. The resulting fitting process is simple and about two orders of magnitude faster compared to methods based on numerical integration of the differential equations. We probe this method on different cell types recorded from rodent prefrontal cortex. After fitting to the f-I current-clamp data, the model cells are tested on completely different sets of recordings obtained by fluctuating ("in vivo-like") input currents. For a wide range of different input regimes, cell types, and cortical layers, the model could predict spike times on these test traces quite accurately within the bounds of physiological reliability, although no information from these distinct test sets was used for model fitting. Further analyses delineated some of the empirical factors constraining model fitting and the model's generalization performance. An even simpler adaptive LIF neuron was also examined in this context. Hence, we have developed a "high-throughput" model fitting procedure which is simple and fast, with good prediction performance, and which relies only on firing rate information and standard physiological data widely and easily available.

  1. A simple rain attenuation model for earth-space radio links operating at 10-35 GHz

    NASA Technical Reports Server (NTRS)

    Stutzman, W. L.; Yon, K. M.

    1986-01-01

    The simple attenuation model has been improved from an earlier version and now includes the effect of wave polarization. The model is for the prediction of rain attenuation statistics on earth-space communication links operating in the 10-35 GHz band. Simple calculations produce attenuation values as a function of average rain rate. These, together with rain rate statistics (either measured or predicted), can be used to predict annual rain attenuation statistics. In this paper model predictions are compared to measured data from a data base of 62 experiments performed in the U.S., Europe, and Japan. Comparisons are also made to predictions from other models.
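    The core of such a model is a power-law relation between rain rate and specific attenuation, scaled by an effective path length. A minimal sketch under assumed coefficients (placeholder values, not the paper's model constants):

    import numpy as np

    def rain_attenuation_db(rain_rate_mm_h, k, alpha, path_km, reduction=1.0):
        """Power-law rain attenuation: specific attenuation gamma = k * R^alpha
        (dB/km) times an effective path length (km)."""
        gamma = k * rain_rate_mm_h ** alpha
        return gamma * path_km * reduction

    # Illustrative coefficients for a ~20 GHz link and a 5 km effective slant path.
    for R in (5, 20, 50, 100):   # average rain rates in mm/h
        A = rain_attenuation_db(R, k=0.075, alpha=1.10, path_km=5.0)
        print("R = %3d mm/h -> attenuation ~ %.1f dB" % (R, A))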

  2. Development and validation of a predictive equation for lean body mass in children and adolescents.

    PubMed

    Foster, Bethany J; Platt, Robert W; Zemel, Babette S

    2012-05-01

    Lean body mass (LBM) is not easy to measure directly in the field or clinical setting. Equations to predict LBM from simple anthropometric measures, which account for the differing contributions of fat and lean to body weight at different ages and levels of adiposity, would be useful to both human biologists and clinicians. To develop and validate equations to predict LBM in children and adolescents across the entire range of the adiposity spectrum. Dual energy X-ray absorptiometry was used to measure LBM in 836 healthy children (437 females) and linear regression was used to develop sex-specific equations to estimate LBM from height, weight, age, body mass index (BMI) for age z-score and population ancestry. Equations were validated using bootstrapping methods and in a local independent sample of 332 children and in national data collected by NHANES. The mean difference between measured and predicted LBM was −0.12% (95% limits of agreement −11.3% to 8.5%) for males and −0.14% (−11.9% to 10.9%) for females. Equations performed equally well across the entire adiposity spectrum, as estimated by BMI z-score. Validation indicated no over-fitting. LBM was predicted within 5% of measured LBM in the validation sample. The equations estimate LBM accurately from simple anthropometric measures.

  3. QSAR classification models for the prediction of endocrine disrupting activity of brominated flame retardants.

    PubMed

    Kovarich, Simona; Papa, Ester; Gramatica, Paola

    2011-06-15

    The identification of potential endocrine disrupting (ED) chemicals is an important task for the scientific community due to their diffusion in the environment; the production and use of such compounds will be strictly regulated through the authorization process of the REACH regulation. To overcome the problem of insufficient experimental data, the quantitative structure-activity relationship (QSAR) approach is applied to predict the ED activity of new chemicals. In the present study QSAR classification models are developed, according to the OECD principles, to predict the ED potency for a class of emerging ubiquitous pollutants, viz. brominated flame retardants (BFRs). Different endpoints related to ED activity (i.e. aryl hydrocarbon receptor agonism and antagonism, estrogen receptor agonism and antagonism, androgen and progesterone receptor antagonism, T4-TTR competition, E2SULT inhibition) are modeled using the k-NN classification method. The best models are selected by maximizing the sensitivity and external predictive ability. We propose simple QSARs (based on few descriptors) characterized by internal stability, good predictive power and a verified applicability domain. These models are simple tools that can be used to screen BFRs for ED activity, and also to design safer alternatives, in agreement with the requirements of the REACH regulation at the authorization step.
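    A k-NN classification workflow of this kind is simple to reproduce. The sketch below uses placeholder descriptors and activity labels rather than the study's BFR data; note the descriptor scaling, which distance-based methods require:

    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.random((60, 4))                    # placeholder: 60 compounds x 4 descriptors
    y = (X[:, 0] + X[:, 2] > 1).astype(int)    # placeholder ED-activity labels

    # Scale descriptors before the distance computation, then classify by k-NN.
    model = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=3))
    acc = cross_val_score(model, X, y, cv=5)
    print("cross-validated accuracy: %.2f" % acc.mean())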

  4. Control surface hinge moment prediction using computational fluid dynamics

    NASA Astrophysics Data System (ADS)

    Simpson, Christopher David

    The following research determines the feasibility of predicting control surface hinge moments using various computational methods. A detailed analysis is conducted using a 2D GA(W)-1 airfoil with a 20% plain flap. Simple hinge moment prediction methods are tested, including empirical Datcom relations and XFOIL. Steady-state and time-accurate turbulent, viscous, Navier-Stokes solutions are computed using Fun3D. Hinge moment coefficients are computed. Mesh construction techniques are discussed. An adjoint-based mesh adaptation case is also evaluated. An NACA 0012 45-degree swept horizontal stabilizer with a 25% elevator is also evaluated using Fun3D. Results are compared with experimental wind-tunnel data obtained from references. Finally, the costs of various solution methods are estimated. Results indicate that while a steady-state Navier-Stokes solution can accurately predict control surface hinge moments for small angles of attack and deflection angles, a time-accurate solution is necessary to accurately predict hinge moments in the presence of flow separation. The ability to capture the unsteady vortex shedding behavior present in moderate to large control surface deflections is found to be critical to hinge moment prediction accuracy. Adjoint-based mesh adaptation is shown to give hinge moment predictions similar to a globally-refined mesh for a steady-state 2D simulation.

  5. Understanding the bond-energy, hardness, and adhesive force from the phase diagram via the electron work function

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Hao; Huang, Xiaochen; Li, Dongyang, E-mail: dongyang.li@ualberta.ca

    2014-11-07

    Properties of metallic materials are intrinsically determined by their electron behavior. However, the relevant theoretical treatment involving quantum mechanics is complicated and difficult to apply in materials design. The electron work function (EWF) has been demonstrated to be a simple but fundamental parameter that correlates the properties of materials with their electron behavior, and it can thus be used to predict material properties from the aspect of electron activities in a relatively easy manner. In this article, we propose a method to extract the electron work functions of binary solid solutions or alloys from their phase diagrams and use this simple approach to predict their mechanical strength and surface properties, such as adhesion. Two alloys, Fe-Ni and Cu-Zn, are used as samples for the study. EWFs extracted from phase diagrams show the same trends as experimentally observed ones, based on which the hardness and surface adhesive force of the alloys are predicted. This new methodology provides an alternative approach to predicting material properties based on the work function, which is extractable from the phase diagram. This work may also help maximize the power of the phase diagram for materials design and development.

  6. Learning Activity Predictors from Sensor Data: Algorithms, Evaluation, and Applications.

    PubMed

    Minor, Bryan; Doppa, Janardhan Rao; Cook, Diane J

    2017-12-01

    Recent progress in Internet of Things (IoT) platforms has allowed us to collect large amounts of sensing data. However, there are significant challenges in converting this large-scale sensing data into decisions for real-world applications. Motivated by applications like health monitoring and intervention and home automation, we consider a novel problem called Activity Prediction, where the goal is to predict future activity occurrence times from sensor data. In this paper, we make three main contributions. First, we formulate and solve the activity prediction problem in the framework of imitation learning and reduce it to a simple regression learning problem. This approach allows us to leverage powerful regression learners that can reason about the relational structure of the problem with negligible computational overhead. Second, we present several metrics to evaluate activity predictors in the context of real-world applications. Third, we evaluate our approach using real sensor data collected from 24 smart home testbeds. We also embed the learned predictor into a mobile-device-based activity prompter and evaluate the app for 9 participants living in smart homes. Our results indicate that our activity predictor performs better than the baseline methods, and offers a simple approach for predicting activities from sensor data.
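    The reduction of activity prediction to regression can be prototyped with any off-the-shelf learner. Below is a sketch with fabricated sensor features and a fabricated "seconds until next activity occurrence" target; the paper's imitation-learning formulation is richer than this plain regressor:

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import mean_absolute_error

    rng = np.random.default_rng(0)
    # Placeholder per-time-step features (e.g., sensor counts, time of day)
    # and target: seconds until the next occurrence of an activity.
    X = rng.random((2000, 8))
    y = 3600 * X[:, 0] + 600 * rng.normal(size=2000) + 7200

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
    reg = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    print("MAE (s): %.0f" % mean_absolute_error(y_te, reg.predict(X_te)))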

  7. Designing optimal cell factories: integer programming couples elementary mode analysis with regulation

    PubMed Central

    2012-01-01

    Background Elementary mode (EM) analysis is ideally suited for metabolic engineering as it allows for an unbiased decomposition of metabolic networks into biologically meaningful pathways. Recently, constrained minimal cut sets (cMCS) have been introduced to derive optimal design strategies for strain improvement by using the full potential of EM analysis. However, this approach does not allow for the inclusion of regulatory information. Results Here we present an alternative, novel and simple method for the prediction of cMCS which accounts for Boolean transcriptional regulation. We use binary linear programming and show that the design of a regulated, optimal metabolic network of minimal functionality can be formulated as a standard optimization problem, where EMs and regulation show up as constraints. We validated our tool by optimizing ethanol production in E. coli. Our study showed that up to 70% of the predicted cMCS contained non-enzymatic, non-annotated reactions, which are difficult to engineer. These cMCS are automatically excluded by our approach using simple weight functions. Finally, due to efficient preprocessing, the binary program remains computationally feasible. Conclusions We used integer programming to predict efficient deletion strategies for metabolically engineering a production organism. Our formulation utilizes the full potential of cMCS but adds flexibility to the design process. In particular, our method allows regulatory information to be integrated into the metabolic design process and explicitly favors experimentally feasible deletions. Our method remains manageable even if millions or potentially billions of EMs enter the analysis. We demonstrated that our approach is able to correctly predict the most efficient designs for ethanol production in E. coli. PMID:22898474
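    Realistic networks need an ILP solver, but the underlying cut-set logic can be shown on a toy example: find the smallest reaction-deletion sets that hit every undesired elementary mode while leaving every desired mode intact. A brute-force stand-in for the binary linear program, on an invented network:

    from itertools import combinations

    # Toy elementary modes as the sets of reactions they use (invented network).
    target_modes = [{"r1", "r2"}, {"r2", "r3"}, {"r1", "r4"}]   # modes to disable
    desired_modes = [{"r5", "r6"}, {"r3", "r5"}]                # modes to keep

    reactions = sorted(set().union(*target_modes, *desired_modes))

    def minimal_cut_sets(max_size=3):
        """Enumerate reaction-deletion sets that hit every target mode but no
        desired mode (a brute-force stand-in for the binary linear program)."""
        found = []
        for k in range(1, max_size + 1):
            for cut in combinations(reactions, k):
                cut = set(cut)
                if all(cut & m for m in target_modes) and \
                   not any(cut & m for m in desired_modes):
                    # keep only cuts that contain no smaller cut (minimality)
                    if not any(c <= cut for c in found):
                        found.append(cut)
        return found

    print(minimal_cut_sets())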

  8. A Finite Element Analysis for Predicting the Residual Compressive Strength of Impact-Damaged Sandwich Panels

    NASA Technical Reports Server (NTRS)

    Ratcliffe, James G.; Jackson, Wade C.

    2008-01-01

    A simple analysis method has been developed for predicting the residual compressive strength of impact-damaged sandwich panels. The method is tailored for honeycomb core-based sandwich specimens that exhibit an indentation growth failure mode under axial compressive loading, which is driven largely by the crushing behavior of the core material. The analysis method is in the form of a finite element model, where the impact-damaged facesheet is represented using shell elements and the core material is represented using spring elements, aligned in the thickness direction of the core. The nonlinear crush response of the core material used in the analysis is based on data from flatwise compression tests. A comparison with a previous analysis method and some experimental data shows good agreement with results from this new approach.

  9. A Finite Element Analysis for Predicting the Residual Compression Strength of Impact-Damaged Sandwich Panels

    NASA Technical Reports Server (NTRS)

    Ratcliffe, James G.; Jackson, Wade C.

    2008-01-01

    A simple analysis method has been developed for predicting the residual compression strength of impact-damaged sandwich panels. The method is tailored for honeycomb core-based sandwich specimens that exhibit an indentation growth failure mode under axial compression loading, which is driven largely by the crushing behavior of the core material. The analysis method is in the form of a finite element model, where the impact-damaged facesheet is represented using shell elements and the core material is represented using spring elements, aligned in the thickness direction of the core. The nonlinear crush response of the core material used in the analysis is based on data from flatwise compression tests. A comparison with a previous analysis method and some experimental data shows good agreement with results from this new approach.

  10. A new experimental method for the determination of the effective orifice area based on the acoustical source term

    NASA Astrophysics Data System (ADS)

    Kadem, L.; Knapp, Y.; Pibarot, P.; Bertrand, E.; Garcia, D.; Durand, L. G.; Rieu, R.

    2005-12-01

    The effective orifice area (EOA) is the most commonly used parameter to assess the severity of aortic valve stenosis as well as the performance of valve substitutes. Particle image velocimetry (PIV) may be used for in vitro estimation of valve EOA. In the present study, we propose a new and simple method based on Howe's developments of Lighthill's aero-acoustic theory. This method is based on an acoustical source term (AST) to estimate the EOA from the transvalvular flow velocity measurements obtained by PIV. The EOAs measured by the AST method downstream of three sharp-edged orifices were in excellent agreement with the EOAs predicted from the potential flow theory used as the reference method in this study. Moreover, the AST method was more accurate than other conventional PIV methods based on streamlines, inflexion point or vorticity to predict the theoretical EOAs. The superiority of the AST method is likely due to the nonlinear form of the AST. There was also excellent agreement between the EOAs measured by the AST method downstream of the three sharp-edged orifices as well as downstream of a bioprosthetic valve with those obtained by the conventional clinical method based on Doppler-echocardiographic measurements of transvalvular velocity. The results of this study suggest that this new simple PIV method provides an accurate estimation of the aortic valve flow EOA. This new method may thus be used as a reference method to estimate the EOA in experimental investigation of the performance of valve substitutes and to validate Doppler-echocardiographic measurements under various physiologic and pathologic flow conditions.

  11. Improving consensus contact prediction via server correlation reduction.

    PubMed

    Gao, Xin; Bu, Dongbo; Xu, Jinbo; Li, Ming

    2009-05-06

    Protein inter-residue contacts play a crucial role in the determination and prediction of protein structures. Previous studies on contact prediction indicate that although template-based consensus methods outperform sequence-based methods on targets with typical templates, such consensus methods perform poorly on new fold targets. However, we find that even for new fold targets, the models generated by threading programs can contain many true contacts. The challenge is how to identify them. In this paper, we develop an integer linear programming model for consensus contact prediction. In contrast to the simple majority voting method, which assumes that all the individual servers are equally important and independent, the newly developed method evaluates their correlation by using maximum likelihood estimation and extracts independent latent servers from them by using principal component analysis. An integer linear programming method is then applied to assign a weight to each latent server to maximize the difference between true contacts and false ones. The proposed method is tested on the CASP7 data set. If the top L/5 predicted contacts are evaluated, where L is the protein size, the average accuracy is 73%, which is much higher than that of any previously reported study. Moreover, if only the 15 new fold CASP7 targets are considered, our method achieves an average accuracy of 37%, which is much better than that of the majority voting method, SVM-LOMETS, SVM-SEQ, and SAM-T06. These methods demonstrate average accuracies of 13.0%, 10.8%, 25.8% and 21.2%, respectively. Reducing server correlation and optimally combining independent latent servers show a significant improvement over traditional consensus methods. This approach can hopefully provide a powerful tool for protein structure refinement and prediction.

  12. Syndrome Diagnosis: Human Intuition or Machine Intelligence?

    PubMed Central

    Braaten, Øivind; Friestad, Johannes

    2008-01-01

    The aim of this study was to investigate whether artificial intelligence methods can represent objective methods that are essential in syndrome diagnosis. Most syndromes have no external criterion standard of diagnosis. The predictive value of a clinical sign used in diagnosis is dependent on the prior probability of the syndrome diagnosis. Clinicians often misjudge the probabilities involved. Syndromology needs objective methods to ensure diagnostic consistency and take prior probabilities into account. We applied two basic artificial intelligence methods to a database of machine-generated patients: a 'vector method' and a set method. As reference methods we ran an ID3 algorithm, a cluster analysis and a naive Bayes calculation on the same patient series. The overall diagnostic error rate was 0.93% for the vector algorithm and 0.97% for ID3. For the clinical signs found by the set method, the predictive values varied between 0.71 and 1.0. The artificial intelligence methods that we used proved simple, robust and powerful, and represent objective diagnostic methods. PMID:19415142

  13. QSPR using MOLGEN-QSPR: the challenge of fluoroalkane boiling points.

    PubMed

    Rücker, Christoph; Meringer, Markus; Kerber, Adalbert

    2005-01-01

    By means of the new software MOLGEN-QSPR, a multilinear regression model for the boiling points of lower fluoroalkanes is established. The model is based exclusively on simple descriptors derived directly from molecular structure and nevertheless describes a broader set of data more precisely than previous attempts that used either more demanding (quantum chemical) descriptors or more demanding (nonlinear) statistical methods such as neural networks. The model's internal consistency was confirmed by leave-one-out cross-validation. The model was used to predict all unknown boiling points of fluorobutanes, and the quality of predictions was estimated by means of comparison with boiling point predictions for fluoropentanes.
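    A multilinear QSPR model with leave-one-out validation is easy to set up; the sketch below uses invented descriptor and boiling-point data in place of the MOLGEN-QSPR descriptors:

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import LeaveOneOut, cross_val_predict

    rng = np.random.default_rng(0)
    X = rng.random((40, 5))                                   # placeholder descriptors
    y = 50 * X[:, 0] - 20 * X[:, 1] + rng.normal(0, 2, 40)    # placeholder bp (degC)

    # Leave-one-out cross-validated predictions, then the q^2 statistic.
    model = LinearRegression()
    y_loo = cross_val_predict(model, X, y, cv=LeaveOneOut())
    q2 = 1 - ((y - y_loo) ** 2).sum() / ((y - y.mean()) ** 2).sum()
    print("leave-one-out q^2: %.2f" % q2)

    # Fit on all data, then predict unknowns (here: new descriptor vectors).
    model.fit(X, y)
    print("predicted bp:", model.predict(rng.random((2, 5))))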

  14. Mathematical methods for protein science

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hart, W.; Istrail, S.; Atkins, J.

    1997-12-31

    Understanding the structure and function of proteins is a fundamental endeavor in molecular biology. Currently, over 100,000 protein sequences have been determined by experimental methods. The three-dimensional structure of a protein determines its function, but there are currently fewer than 4,000 structures known to atomic resolution. Accordingly, techniques to predict protein structure from sequence have an important role in aiding the understanding of the genome and the effects of mutations in genetic disease. The authors describe current efforts at Sandia to better understand the structure of proteins through rigorous mathematical analyses of simple lattice models. The efforts have focused on two aspects of protein science: mathematical structure prediction, and inverse protein folding.

  15. Transient excitation and mechanical admittance test techniques for prediction of payload vibration environments

    NASA Technical Reports Server (NTRS)

    Kana, D. D.; Vargas, L. M.

    1977-01-01

    Transient excitation forces were applied separately to simple beam-and-mass launch vehicle and payload models to develop complex admittance functions for the interface and other appropriate points on the structures. These measured admittances were then analytically combined by a matrix representation to obtain a description of the coupled system dynamic characteristics. Response of the payload model to excitation of the launch vehicle model was predicted and compared with results measured on the combined models. These results are also compared with results of earlier work in which a similar procedure was employed except that steady-state sinusoidal excitation techniques were included. It is found that the method employing transient tests produces results that are better overall than the steady-state methods. Furthermore, the transient method requires far less time to implement, and provides far better resolution in the data. However, the data acquisition and handling problem is more complex for this method. It is concluded that the transient test and admittance matrix prediction method can be a valuable tool for development of payload vibration tests.

  16. Pre-operative prediction of surgical morbidity in children: comparison of five statistical models.

    PubMed

    Cooper, Jennifer N; Wei, Lai; Fernandez, Soledad A; Minneci, Peter C; Deans, Katherine J

    2015-02-01

    The accurate prediction of surgical risk is important to patients and physicians. Logistic regression (LR) models are typically used to estimate these risks. However, in the fields of data mining and machine learning, many alternative classification and prediction algorithms have been developed. This study aimed to compare the performance of LR to several data mining algorithms for predicting 30-day surgical morbidity in children. We used the 2012 National Surgical Quality Improvement Program-Pediatric dataset to compare the performance of (1) an LR model that assumed linearity and additivity (simple LR model), (2) an LR model incorporating restricted cubic splines and interactions (flexible LR model), (3) a support vector machine, (4) a random forest, and (5) boosted classification trees for predicting surgical morbidity. The ensemble-based methods showed significantly higher accuracy, sensitivity, specificity, PPV, and NPV than the simple LR model. However, none of the models performed better than the flexible LR model in terms of the aforementioned measures or in model calibration or discrimination. Support vector machines, random forests, and boosted classification trees do not show better performance than LR for predicting pediatric surgical morbidity. After further validation, the flexible LR model derived in this study could be used to assist with clinical decision-making based on patient-specific surgical risks.
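    A comparison of this kind can be sketched with standard tooling; the data below are synthetic stand-ins for the pre-operative predictors, and the hyperparameters are defaults rather than tuned values:

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
    from sklearn.model_selection import cross_val_score

    # Synthetic imbalanced data standing in for predictors and 30-day morbidity.
    X, y = make_classification(n_samples=1500, n_features=20, weights=[0.9],
                               random_state=0)

    models = {
        "logistic regression": LogisticRegression(max_iter=1000),
        "random forest": RandomForestClassifier(n_estimators=300, random_state=0),
        "boosted trees": GradientBoostingClassifier(random_state=0),
    }
    for name, m in models.items():
        auc = cross_val_score(m, X, y, cv=5, scoring="roc_auc")
        print("%-20s AUC = %.3f" % (name, auc.mean()))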

  17. Prediction Equation for Calculating Fat Mass in Young Indian Adults

    PubMed Central

    Sandhu, Jaspal Singh; Gupta, Giniya; Shenoy, Shweta

    2010-01-01

    Purpose Accurate measurement or prediction of fat mass is useful in physiology, nutrition and clinical medicine. Most predictive equations currently used to assess percentage of body fat or fat mass from simple anthropometric measurements were derived from people in western societies and may not be appropriate for individuals with other genotypic and phenotypic characteristics. We developed equations to predict fat mass from anthropometric measurements in young Indian adults. Methods Fat mass was measured in 60 females and 58 males, aged 20 to 29 years, using hydrostatic weighing with simultaneous measurement of residual lung volume. Anthropometric measures included weight (kg), height (m) and 4 skinfold thicknesses [STs (mm)]. Sex-specific linear regression models were developed with fat mass as the dependent variable and all anthropometric measures as independent variables. Results The prediction equation obtained for fat mass (kg) was 8.46 + 0.32(weight) − 15.16(height) + 9.54(log of sum of 4 STs) (R² = 0.53, SEE = 3.42 kg) for males and −20.22 + 0.33(weight) + 3.44(height) + 7.66(log of sum of 4 STs) (R² = 0.72, SEE = 3.01 kg) for females. Conclusion A new prediction equation for the measurement of fat mass was derived and internally validated in young Indian adults using simple anthropometric measurements. PMID:22375197
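    The published equations can be applied directly. In the sketch below the logarithm is assumed to be base 10, which should be verified against the paper before use:

    import math

    def fat_mass_kg(weight_kg, height_m, sum4_skinfolds_mm, male):
        """Fat mass from the published sex-specific equations; the logarithm is
        assumed here to be base 10 (check against the paper before use)."""
        log_st = math.log10(sum4_skinfolds_mm)
        if male:
            return 8.46 + 0.32 * weight_kg - 15.16 * height_m + 9.54 * log_st
        return -20.22 + 0.33 * weight_kg + 3.44 * height_m + 7.66 * log_st

    print("male:   %.1f kg" % fat_mass_kg(70, 1.75, 60, male=True))
    print("female: %.1f kg" % fat_mass_kg(58, 1.60, 70, male=False))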

  18. Prediction of Frequency for Simulation of Asphalt Mix Fatigue Tests Using MARS and ANN

    PubMed Central

    Fakhri, Mansour

    2014-01-01

    Fatigue life of asphalt mixes in laboratory tests is commonly determined by applying a sinusoidal or haversine waveform with a specific frequency. The pavement structure and loading conditions affect the shape and the frequency of tensile response pulses at the bottom of the asphalt layer. This paper introduces two methods for predicting the loading frequency in laboratory asphalt fatigue tests for better simulation of field conditions. Five thousand (5000) four-layered pavement sections were analyzed, and stress and strain response pulses in both longitudinal and transverse directions were determined. After fitting the haversine function to the response pulses by the concept of the equal-energy pulse, the effective lengths of the response pulses were determined. Two methods, Multivariate Adaptive Regression Splines (MARS) and Artificial Neural Network (ANN), were then employed to predict the effective length (i.e., frequency) of tensile stress and strain pulses in longitudinal and transverse directions based on the haversine waveform. It is indicated that, under controlled stress and strain modes, both methods (MARS and ANN) are capable of predicting the frequency of loading in HMA fatigue tests with very good accuracy. The accuracy of the ANN method is, however, higher than that of the MARS method. It is furthermore shown that the results of the present study can be generalized to a sinusoidal waveform by a simple equation. PMID:24688400

  19. Prediction of frequency for simulation of asphalt mix fatigue tests using MARS and ANN.

    PubMed

    Ghanizadeh, Ali Reza; Fakhri, Mansour

    2014-01-01

    Fatigue life of asphalt mixes in laboratory tests is commonly determined by applying a sinusoidal or haversine waveform with a specific frequency. The pavement structure and loading conditions affect the shape and the frequency of tensile response pulses at the bottom of the asphalt layer. This paper introduces two methods for predicting the loading frequency in laboratory asphalt fatigue tests for better simulation of field conditions. Five thousand (5000) four-layered pavement sections were analyzed, and stress and strain response pulses in both longitudinal and transverse directions were determined. After fitting the haversine function to the response pulses by the concept of the equal-energy pulse, the effective lengths of the response pulses were determined. Two methods, Multivariate Adaptive Regression Splines (MARS) and Artificial Neural Network (ANN), were then employed to predict the effective length (i.e., frequency) of tensile stress and strain pulses in longitudinal and transverse directions based on the haversine waveform. It is indicated that, under controlled stress and strain modes, both methods (MARS and ANN) are capable of predicting the frequency of loading in HMA fatigue tests with very good accuracy. The accuracy of the ANN method is, however, higher than that of the MARS method. It is furthermore shown that the results of the present study can be generalized to a sinusoidal waveform by a simple equation.

  20. A Partially-Stirred Batch Reactor Model for Under-Ventilated Fire Dynamics

    NASA Astrophysics Data System (ADS)

    McDermott, Randall; Weinschenk, Craig

    2013-11-01

    A simple discrete quadrature method is developed for closure of the mean chemical source term in large-eddy simulations (LES) and implemented in the publicly available fire model, Fire Dynamics Simulator (FDS). The method is cast as a partially-stirred batch reactor model for each computational cell. The model has three distinct components: (1) a subgrid mixing environment, (2) a mixing model, and (3) a set of chemical rate laws. The subgrid probability density function (PDF) is described by a linear combination of Dirac delta functions with quadrature weights set to satisfy simple integral constraints for the computational cell. It is shown that under certain limiting assumptions, the present method reduces to the eddy dissipation concept (EDC). The model is used to predict carbon monoxide concentrations in direct numerical simulation (DNS) of a methane slot burner and in LES of an under-ventilated compartment fire.

  1. Simple and universal model for electron-impact ionization of complex biomolecules

    NASA Astrophysics Data System (ADS)

    Tan, Hong Qi; Mi, Zhaohong; Bettiol, Andrew A.

    2018-03-01

    We present a simple and universal approach to calculate the total ionization cross section (TICS) for electron-impact ionization in DNA bases and other biomaterials in the condensed phase. Evaluating the electron-impact TICS plays a vital role in ion-beam radiobiology simulation at the cellular level, as secondary electrons are the main cause of DNA damage in particle cancer therapy. Our method is based on extending the dielectric formalism. The calculated results agree well with experimental data and compare well with other theoretical calculations. This method only requires information on the chemical composition and density, and an estimate of the mean binding energy, to produce reasonably accurate TICS for complex biomolecules. Because of its simplicity and great predictive effectiveness, this method could be helpful in situations where experimental TICS data are absent or scarce, such as in particle cancer therapy.

  2. Speeding up GW Calculations to Meet the Challenge of Large Scale Quasiparticle Predictions.

    PubMed

    Gao, Weiwei; Xia, Weiyi; Gao, Xiang; Zhang, Peihong

    2016-11-11

    Although the GW approximation is recognized as one of the most accurate theories for predicting the excited-state properties of materials, scaling up conventional GW calculations for large systems remains a major challenge. We present a powerful and simple-to-implement method that can drastically accelerate fully converged GW calculations for large systems, enabling fast and accurate quasiparticle calculations for complex materials systems. We demonstrate the performance of this new method by presenting the results for ZnO and MgO supercells. A speed-up factor of nearly two orders of magnitude is achieved for a system containing 256 atoms (1024 valence electrons) with a negligibly small numerical error of ±0.03 eV. Finally, we discuss the application of our method to GW calculations for 2D materials.

  3. Predictability of monthly temperature and precipitation using automatic time series forecasting methods

    NASA Astrophysics Data System (ADS)

    Papacharalampous, Georgia; Tyralis, Hristos; Koutsoyiannis, Demetris

    2018-02-01

    We investigate the predictability of monthly temperature and precipitation by applying automatic univariate time series forecasting methods to a sample of 985 40-year-long monthly temperature and 1552 40-year-long monthly precipitation time series. The methods include a naïve one based on the monthly values of the last year, as well as the random walk (with drift), AutoRegressive Fractionally Integrated Moving Average (ARFIMA), exponential smoothing state-space model with Box-Cox transformation, ARMA errors, Trend and Seasonal components (BATS), simple exponential smoothing, Theta and Prophet methods. Prophet is a recently introduced model inspired by the nature of time series forecasted at Facebook and has not been applied to hydrometeorological time series before, while the use of random walk, BATS, simple exponential smoothing and Theta is rare in hydrology. The methods are tested in performing multi-step ahead forecasts for the last 48 months of the data. We further investigate how different choices of handling the seasonality and non-normality affect the performance of the models. The results indicate that: (a) all the examined methods apart from the naïve and random walk ones are accurate enough to be used in long-term applications; (b) monthly temperature and precipitation can be forecasted to a level of accuracy which can barely be improved using other methods; (c) the externally applied classical seasonal decomposition results mostly in better forecasts compared to the automatic seasonal decomposition used by the BATS and Prophet methods; and (d) Prophet is competitive, especially when it is combined with externally applied classical seasonal decomposition.
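    The best-performing configuration reported (classical seasonal decomposition applied externally, followed by a simple forecasting method) can be sketched as follows; the series is synthetic, and the 48-month holdout mirrors the paper's test design:

    import numpy as np
    import pandas as pd
    from statsmodels.tsa.seasonal import seasonal_decompose
    from statsmodels.tsa.holtwinters import SimpleExpSmoothing

    # Placeholder 40-year monthly series with seasonality, standing in for
    # a temperature record.
    rng = np.random.default_rng(0)
    t = np.arange(480)
    y = pd.Series(15 + 10 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 1, 480),
                  index=pd.date_range("1978-01", periods=480, freq="MS"))

    train, test = y[:-48], y[-48:]          # hold out the last 48 months

    # Classical additive seasonal decomposition applied externally, then simple
    # exponential smoothing on the deseasonalized series.
    dec = seasonal_decompose(train, model="additive", period=12)
    seasonal = dec.seasonal.iloc[-12:].values    # one seasonal cycle (Jan-Dec)
    deseason = train - dec.seasonal
    fc = SimpleExpSmoothing(deseason).fit().forecast(48).values
    forecast = fc + np.tile(seasonal, 4)         # re-add the seasonal pattern

    rmse = np.sqrt(((test.values - forecast) ** 2).mean())
    print("RMSE over the 48-month horizon: %.2f" % rmse)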

  4. NetMHCcons: a consensus method for the major histocompatibility complex class I predictions.

    PubMed

    Karosiene, Edita; Lundegaard, Claus; Lund, Ole; Nielsen, Morten

    2012-03-01

    A key role in cell-mediated immunity is dedicated to the major histocompatibility complex (MHC) molecules that bind peptides for presentation on the cell surface. Several in silico methods capable of predicting peptide binding to MHC class I have been developed. The accuracy of these methods depends on the data available characterizing the binding specificity of the MHC molecules. It has, moreover, been demonstrated that consensus methods, defined as combinations of two or more different methods, lead to improved prediction accuracy. This plethora of methods makes it very difficult for the non-expert user to choose the most suitable method for predicting binding to a given MHC molecule. In this study, we have therefore made an in-depth analysis of combinations of three state-of-the-art MHC-peptide binding prediction methods (NetMHC, NetMHCpan and PickPocket). We demonstrate that a simple combination of NetMHC and NetMHCpan gives the highest performance when the allele in question is included in the training and is characterized by at least 50 data points with at least ten binders. Otherwise, NetMHCpan is the best predictor. When an allele has not been characterized, the performance depends on the distance to the training data. NetMHCpan has the highest performance when close neighbours are present in the training set, while the combination of NetMHCpan and PickPocket outperforms either of the two methods for alleles with more remote neighbours. The final method, NetMHCcons, is publicly available at www.cbs.dtu.dk/services/NetMHCcons, and allows the user to obtain, in an automated manner, the most accurate predictions for any given MHC molecule.

  5. Study on the medical meteorological forecast of the number of hypertension inpatient based on SVR

    NASA Astrophysics Data System (ADS)

    Zhai, Guangyu; Chai, Guorong; Zhang, Haifeng

    2017-06-01

    The purpose of this study is to build a hypertension prediction model by examining the meteorological factors related to hypertension incidence. Standardized data on relative humidity, air temperature, visibility, wind speed and air pressure in Lanzhou from 2010 to 2012 (calculating the maximum, minimum and average values with 5 days as a unit) were selected as the input variables of support vector regression (SVR), and standardized data on hypertension incidence over the same period as the output variables. The optimal prediction parameters were obtained by a cross-validation algorithm, and an SVR forecast model for hypertension incidence was then built by learning and training. The results show that the hypertension prediction model is composed of 15 input variables, the training accuracy is 0.005, and the final error is 0.0026389. The forecast accuracy of the SVR model is 97.1429%, higher than that of a statistical forecast equation and a neural network prediction method. It is concluded that the SVR model provides a new method for hypertension prediction, with simple calculation, small error, good fitting of historical samples and good forecast capability on independent samples.
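    An SVR pipeline of this shape is straightforward to assemble; the sketch below uses fabricated meteorological inputs and admission counts, with a small illustrative hyperparameter grid:

    import numpy as np
    from sklearn.svm import SVR
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline
    from sklearn.model_selection import GridSearchCV

    rng = np.random.default_rng(0)
    # Placeholder: 15 meteorological inputs (max/min/mean of humidity, temperature,
    # visibility, wind speed, pressure per 5-day window) vs. admission counts.
    X = rng.random((200, 15))
    y = 30 + 40 * X[:, 1] - 25 * X[:, 4] + rng.normal(0, 3, 200)

    # Scale inputs, then tune C and epsilon by cross-validation.
    pipe = make_pipeline(StandardScaler(), SVR(kernel="rbf"))
    grid = GridSearchCV(pipe,
                        {"svr__C": [1, 10, 100], "svr__epsilon": [0.01, 0.1, 1.0]},
                        cv=5, scoring="neg_mean_absolute_error").fit(X, y)
    print("best params:", grid.best_params_)
    print("CV MAE: %.2f" % -grid.best_score_)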

  6. Pharmacokinetics of low-dose nedaplatin and validation of AUC prediction in patients with non-small-cell lung carcinoma.

    PubMed

    Niioka, Takenori; Uno, Tsukasa; Yasui-Furukori, Norio; Takahata, Takenori; Shimizu, Mikiko; Sugawara, Kazunobu; Tateishi, Tomonori

    2007-04-01

    The aim of this study was to determine the pharmacokinetics of low-dose nedaplatin combined with paclitaxel and radiation therapy in patients with non-small-cell lung carcinoma and to establish the optimal dosage regimen for low-dose nedaplatin. We also evaluated the predictive accuracy of reported formulas for estimating the area under the plasma concentration-time curve (AUC) of low-dose nedaplatin. A total of 19 patients were administered a constant intravenous infusion of 20 mg/m(2) body surface area (BSA) nedaplatin over an hour, and blood samples were collected at 1, 2, 3, 4, 6, 8, and 19 h after the administration. Plasma concentrations of unbound platinum were measured, and the actual value of the platinum AUC (actual AUC) was calculated from these data. The predicted value of the platinum AUC (predicted AUC) was determined by three predictive methods reported in previous studies: a Bayesian method, limited sampling strategies with plasma concentration at a single time point, and a simple formula method (SFM) without measured plasma concentration. Three error indices, mean prediction error (ME, a measure of bias), mean absolute error (MAE, a measure of accuracy), and root mean squared prediction error (RMSE, a measure of precision), were obtained from the difference between the actual and the predicted AUC to compare the accuracy among the three predictive methods. The AUC showed more than threefold inter-patient variation, and there was a favorable correlation between nedaplatin clearance and creatinine clearance (Ccr) (r = 0.832, P < 0.01). Of the three error indices, MAE and RMSE showed significant differences among the three AUC predictive methods, and the SFM had the most favorable results, with %ME, %MAE, and %RMSE of 5.5, 10.7, and 15.4, respectively. The dosage regimen of low-dose nedaplatin should be established based on Ccr rather than on BSA. Since the prediction accuracy of the SFM, which does not require a measured plasma concentration, was the most favorable among the three methods evaluated in this study, the SFM could be the most practical method to predict the AUC of low-dose nedaplatin in a clinical situation.
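    The three error indices are simple to compute once actual and predicted AUCs are available. One common percent formulation is sketched below; the paper's exact normalization should be checked before reuse:

    import numpy as np

    def prediction_error_indices(actual, predicted):
        """Percent mean prediction error (bias), mean absolute error (accuracy)
        and root mean squared error (precision), relative to the actual values."""
        actual = np.asarray(actual, float)
        predicted = np.asarray(predicted, float)
        pe = (predicted - actual) / actual * 100.0
        return {"%ME": pe.mean(),
                "%MAE": np.abs(pe).mean(),
                "%RMSE": np.sqrt((pe ** 2).mean())}

    # Toy values standing in for actual vs. predicted platinum AUC (mg*h/L).
    actual = [2.1, 3.4, 1.8, 2.9, 4.2]
    predicted = [2.3, 3.1, 1.9, 3.2, 4.0]
    print(prediction_error_indices(actual, predicted))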

  7. A Multiobjective Approach Applied to the Protein Structure Prediction Problem

    DTIC Science & Technology

    2002-03-07

    like a low-energy search landscape. 2.1.1 Symbolic/Formalized Problem Domain Description. Every computer-representable problem can also be embodied...method [60]. 3.4 Energy Minimization Methods. The energy landscape algorithms are based on the idea that a protein's final resting conformation is...in our GA used to search the PSP problem energy landscape). 3.5.1 Simple GA. The main routine in a sGA, after encoding the problem, builds a

  8. Conformational equilibria of alkanes in aqueous solution: relationship to water structure near hydrophobic solutes.

    PubMed Central

    Ashbaugh, H S; Garde, S; Hummer, G; Kaler, E W; Paulaitis, M E

    1999-01-01

    Conformational free energies of butane, pentane, and hexane in water are calculated from molecular simulations with explicit waters and from a simple molecular theory in which the local hydration structure is estimated based on a proximity approximation. This proximity approximation uses only the two nearest carbon atoms on the alkane to predict the local water density at a given point in space. Conformational free energies of hydration are subsequently calculated using a free energy perturbation method. Quantitative agreement is found between the free energies obtained from simulations and theory. Moreover, free energy calculations using this proximity approximation are approximately four orders of magnitude faster than those based on explicit water simulations. Our results demonstrate the accuracy and utility of the proximity approximation for predicting water structure as the basis for a quantitative description of n-alkane conformational equilibria in water. In addition, the proximity approximation provides a molecular foundation for extending predictions of water structure and hydration thermodynamic properties of simple hydrophobic solutes to larger clusters or assemblies of hydrophobic solutes. PMID:10423414

  9. High precision in protein contact prediction using fully convolutional neural networks and minimal sequence features.

    PubMed

    Jones, David T; Kandathil, Shaun M

    2018-04-26

    In addition to substitution frequency data from protein sequence alignments, many state-of-the-art methods for contact prediction rely on additional sources of information, or features, of protein sequences in order to predict residue-residue contacts, such as solvent accessibility, predicted secondary structure, and scores from other contact prediction methods. It is unclear how much of this information is needed to achieve state-of-the-art results. Here, we show that using deep neural network models, simple alignment statistics contain sufficient information to achieve state-of-the-art precision. Our prediction method, DeepCov, uses fully convolutional neural networks operating on amino-acid pair frequency or covariance data derived directly from sequence alignments, without using global statistical methods such as sparse inverse covariance or pseudolikelihood estimation. Comparisons against CCMpred and MetaPSICOV2 show that using pairwise covariance data calculated from raw alignments as input allows us to match or exceed the performance of both of these methods. Almost all of the achieved precision is obtained when considering relatively local windows (around 15 residues) around any member of a given residue pairing; larger window sizes have comparable performance. Assessment on a set of shallow sequence alignments (fewer than 160 effective sequences) indicates that the new method is substantially more precise than CCMpred and MetaPSICOV2 in this regime, suggesting that improved precision is attainable on smaller sequence families. Overall, the performance of DeepCov is competitive with the state of the art, and our results demonstrate that global models, which employ features from all parts of the input alignment when predicting individual contacts, are not strictly needed in order to attain precise contact predictions. DeepCov is freely available at https://github.com/psipred/DeepCov.

  10. Electro-oculography-based detection of sleep-wake in sleep apnea patients.

    PubMed

    Virkkala, Jussi; Toppila, Jussi; Maasilta, Paula; Bachour, Adel

    2015-09-01

    Recently, we have developed a simple method that uses two electro-oculography (EOG) electrodes for the automatic scoring of sleep-wake in normal subjects. In this study, we investigated the usefulness of this method on 284 consecutive patients referred for a suspicion of sleep apnea who underwent a polysomnography (PSG). We applied the AASM 2007 scoring rules. A simple automatic sleep-wake classification algorithm based on 18-45 Hz beta power was applied to the calculated bipolar EOG channel and was compared to standard polysomnography. Epoch-by-epoch agreement was evaluated. Eighteen patients were excluded due to poor EOG quality. One hundred fifty-eight males and 108 females were studied; their mean age was 48 (range 17-89) years, apnea-hypopnea index 13 (range 0-96)/h, BMI 29 (range 17-52) kg/m(2), and sleep efficiency 78 (range 0-98)%. The mean agreement in sleep-wake states between EOG and PSG was 85% and the Cohen's kappa was 0.56. Overall epoch-by-epoch agreement was 85%, and the Cohen's kappa was 0.57 with a positive predictive value of 91% and a negative predictive value of 65%. The EOG method can be applied to patients referred for suspicion of sleep apnea to indicate the sleep-wake state.
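    The underlying classifier can be sketched as epoch-wise band power followed by a threshold. The sampling rate, Welch settings and threshold below are assumptions; in practice the threshold would be calibrated against PSG:

    import numpy as np
    from scipy.signal import welch

    FS = 200                     # EOG sampling rate in Hz (assumption)
    EPOCH = 30 * FS              # 30-s epochs, as in standard sleep scoring

    def beta_power(epoch, fs=FS, lo=18.0, hi=45.0):
        # Welch power spectral density, then integrate the 18-45 Hz band.
        f, pxx = welch(epoch, fs=fs, nperseg=fs * 2)
        band = (f >= lo) & (f <= hi)
        return pxx[band].sum() * (f[1] - f[0])

    def score_sleep_wake(eog, threshold):
        # Label each epoch wake (1) when log band power exceeds the threshold;
        # the threshold value is an assumption and would be calibrated on PSG.
        n = len(eog) // EPOCH
        p = np.array([beta_power(eog[i * EPOCH:(i + 1) * EPOCH]) for i in range(n)])
        return (np.log(p) > threshold).astype(int)

    eog = np.random.default_rng(0).normal(size=10 * EPOCH)   # placeholder signal
    print(score_sleep_wake(eog, threshold=-2.0))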

  11. Prediction of nocturnal hypoglycemia by an aggregation of previously known prediction approaches: proof of concept for clinical application.

    PubMed

    Tkachenko, Pavlo; Kriukova, Galyna; Aleksandrova, Marharyta; Chertov, Oleg; Renard, Eric; Pereverzyev, Sergei V

    2016-10-01

    Nocturnal hypoglycemia (NH) is common in patients with insulin-treated diabetes. Despite the risk associated with NH, there are only a few methods aiming at the prediction of such events based on intermittent blood glucose monitoring data, and none has been validated for clinical use. Here we propose a method of combining several predictors into a new one that performs at the level of the best involved predictor, or even outperforms all individual candidates. The idea of the method is to use a recently developed strategy for aggregating ranking algorithms. The method has been calibrated and tested on data extracted from clinical trials performed in the European FP7-funded project DIAdvisor. We then tested the proposed approach on other datasets to show the portability of the method. This feature allows its simple implementation in the form of a diabetic smartphone app. On the considered datasets the proposed approach exhibits good performance in terms of sensitivity, specificity and predictive values. Moreover, the resulting predictor automatically performs at the level of the best involved method or even outperforms it. We propose a strategy for combining NH predictors that leads to a method exhibiting reliable performance and the potential for everyday use by any patient who performs self-monitoring of blood glucose.

  12. Boys with a simple delayed puberty reach their target height.

    PubMed

    Cools, B L M; Rooman, R; Op De Beeck, L; Du Caju, M V L

    2008-01-01

    Final height in boys with delayed puberty is thought to be below target height. This conclusion, however, is based on studies that included patients with genetic short stature. We therefore studied final height in a group of 33 untreated boys with delayed puberty and a target height >−1.5 SDS. Standing height, sitting height, weight and arm span were measured in each patient. Final height was predicted by the method of Greulich and Pyle, using the Bailey and Pinneau tables for retarded boys at their bone age (PAH1) and the Bailey and Pinneau tables for average boys at their bone age plus six months (PAH2). Mean final height (175.8 ± 6.5 cm) was appropriate for the mean target height (174.7 ± 4.5 cm). The prediction method for retarded boys overestimated final height by 1.4 cm, and the modified prediction method slightly underestimated it (−0.15 cm). Boys with untreated delayed puberty reach a final height appropriate for their target height. Final height was best predicted by the method of Bailey and Pinneau using the tables for average boys at their bone age plus six months.

  13. Predicting a future lifetime through Box-Cox transformation.

    PubMed

    Yang, Z

    1999-09-01

    In predicting a future lifetime based on a sample of past lifetimes, the Box-Cox transformation method provides a simple and unified procedure that is shown in this article to meet or often outperform the corresponding frequentist solution in terms of coverage probability and average length of prediction intervals. Kullback-Leibler information and second-order asymptotic expansion are used to justify the Box-Cox procedure. Extensive Monte Carlo simulations are also performed to evaluate the small-sample behavior of the procedure. Certain popular lifetime distributions, such as the Weibull, inverse Gaussian and Birnbaum-Saunders, serve as illustrative examples. One important advantage of the Box-Cox procedure lies in its easy extension to linear model predictions, where the exact frequentist solutions are often not available.
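    The basic procedure (transform, build a normal-theory prediction interval, back-transform) can be sketched as follows; this omits the article's second-order refinements, and the data are fabricated:

    import numpy as np
    from scipy import stats
    from scipy.special import inv_boxcox

    rng = np.random.default_rng(0)
    lifetimes = rng.weibull(1.5, size=30) * 1000.0   # placeholder past lifetimes (h)

    # Estimate the Box-Cox transformation, then build a normal-theory
    # prediction interval on the transformed scale and back-transform it.
    z, lmbda = stats.boxcox(lifetimes)
    n, m, s = len(z), z.mean(), z.std(ddof=1)
    half = stats.t.ppf(0.975, n - 1) * s * np.sqrt(1 + 1 / n)
    lo, hi = inv_boxcox(m - half, lmbda), inv_boxcox(m + half, lmbda)
    print("95%% prediction interval for a future lifetime: (%.0f, %.0f) h" % (lo, hi))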

  14. Analysis Methods and Models for Small Unit Operations

    DTIC Science & Technology

    2006-07-01

    is used in other studies to indicate in what way operational effectiveness can be qualified and quantified...the node 'Prediction' is called a child of the node 'Success' and the node 'Success' is called a parent of the node 'Prediction'. Figure C.2: A simple...event A is a child of event B and event B is a child of event C (C -- B -- A). The belief network or influence diagram has to be a directed network

  15. TMSEG: Novel prediction of transmembrane helices.

    PubMed

    Bernhofer, Michael; Kloppmann, Edda; Reeb, Jonas; Rost, Burkhard

    2016-11-01

    Transmembrane proteins (TMPs) are important drug targets because they are essential for signaling, regulation, and transport. Despite important breakthroughs, experimental structure determination remains challenging for TMPs. Various methods have bridged the gap by predicting transmembrane helices (TMHs), but room for improvement remains. Here, we present TMSEG, a novel method identifying TMPs and accurately predicting their TMHs and their topology. The method combines machine learning with empirical filters. Testing it on a non-redundant dataset of 41 TMPs and 285 soluble proteins, and applying strict performance measures, TMSEG outperformed the state-of-the-art in our hands. TMSEG correctly distinguished helical TMPs from other proteins with a sensitivity of 98 ± 2% and a false positive rate as low as 3 ± 1%. Individual TMHs were predicted with a precision of 87 ± 3% and recall of 84 ± 3%. Furthermore, in 63 ± 6% of helical TMPs the placement of all TMHs and their inside/outside topology was correctly predicted. There are two main features that distinguish TMSEG from other methods. First, the errors in finding all helical TMPs in an organism are significantly reduced. For example, in human this leads to 200 and 1600 fewer misclassifications compared to the second and third best method available, and 4400 fewer mistakes than by a simple hydrophobicity-based method. Second, TMSEG provides an add-on improvement for any existing method to benefit from.

  16. Non-invasive prediction of forthcoming cirrhosis-related complications

    PubMed Central

    Kang, Wonseok; Kim, Seung Up; Ahn, Sang Hoon

    2014-01-01

    In patients with chronic liver diseases, identification of significant liver fibrosis and cirrhosis is essential for determining treatment strategies, assessing therapeutic response, and stratifying long-term prognosis. Although liver biopsy remains the reference standard for evaluating the extent of liver fibrosis in patients with chronic liver diseases, several non-invasive methods have been developed as alternatives to liver biopsies. Some of these non-invasive methods have demonstrated clinical accuracy for diagnosing significant fibrosis or cirrhosis in many cross-sectional studies with the histological fibrosis stage as a reference standard. However, non-invasive methods cannot be fully validated through cross-sectional studies since liver biopsy is not a perfect surrogate endpoint marker. Accordingly, recent studies have focused on assessing the performance of non-invasive methods through long-term, longitudinal, follow-up studies with solid clinical endpoints related to advanced stages of liver fibrosis and cirrhosis. As a result, the current view is that these alternative methods can independently predict future cirrhosis-related complications, such as hepatic decompensation, liver failure, hepatocellular carcinoma, or liver-related death. The clinical role of non-invasive models seems to be shifting from a simple tool for predicting the extent of fibrosis to a surveillance tool for predicting future liver-related events. In this article, we will summarize recent longitudinal studies of non-invasive methods for predicting forthcoming complications related to liver cirrhosis and discuss the clinical value of currently available non-invasive methods based on evidence from the literature. PMID:24627597

  17. A simple prediction tool for inhaled corticosteroid response in asthmatic children.

    PubMed

    Wu, Yi-Fan; Su, Ming-Wei; Chiang, Bor-Luen; Yang, Yao-Hsu; Tsai, Ching-Hui; Lee, Yungling L

    2017-12-07

    Inhaled corticosteroids are recommended as the first-line controller medication for childhood asthma owing to their multiple clinical benefits. However, heterogeneity in the response to these drugs remains a significant clinical problem. Children aged 5 to 18 years with mild to moderate persistent asthma were recruited into the Taiwanese Consortium of Childhood Asthma Study. Their responses to inhaled corticosteroids were assessed based on their improvements in the asthma control test and peak expiratory flow. The predictors of responsiveness were demographic and clinical features available in primary care settings. We developed a prediction model using logistic regression and simplified it into a practical tool. We assessed its predictive performance using the area under the receiver operating characteristic curve. Of the 73 asthmatic children with baseline and follow-up outcome measurements for inhaled corticosteroid treatment, 24 (33%) were defined as non-responders. The tool we developed consists of three predictors yielding a total score between 0 and 5: the age at physician diagnosis of asthma, sex, and exhaled nitric oxide. The sensitivity and specificity of the tool for predicting inhaled corticosteroid non-responsiveness, at a score of 3, were 0.75 and 0.69, respectively. The area under the receiver operating characteristic curve for the prediction tool was 0.763. Our prediction tool represents a simple and low-cost method for predicting the response to inhaled corticosteroid treatment in asthmatic children.

  18. The use of sonographic subjective tumor assessment, IOTA logistic regression model 1, IOTA Simple Rules and GI-RADS system in the preoperative prediction of malignancy in women with adnexal masses.

    PubMed

    Koneczny, Jarosław; Czekierdowski, Artur; Florczak, Marek; Poziemski, Paweł; Stachowicz, Norbert; Borowski, Dariusz

    2017-01-01

    Sonography-based methods combined with various tumor markers are currently used to discriminate between types of adnexal masses. The aim was to compare the predictive value of selected sonography-based models, along with subjective assessment, in ovarian cancer prediction. We analyzed data from 271 women operated on because of adnexal masses. All masses were verified by histological examination. Preoperative sonography was performed in all patients, and several predictive models were used, including the IOTA group logistic regression model 1 (LR1), the IOTA simple ultrasound-based rules (SR), GI-RADS and the risk of malignancy index (RMI3). ROC curves were constructed and the respective AUCs with 95% CIs were compared. Of the 271 masses, 78 proved to be malignant, including 6 borderline tumors. LR1 had a sensitivity of 91.0%, a specificity of 91.2%, and AUC = 0.95 (95% CI: 0.92-0.98). GI-RADS had, in the 271 patients, a sensitivity of 88.5% with a specificity of 85% and AUC = 0.91 (95% CI: 0.88-0.95). Subjective assessment yielded a sensitivity and specificity of 85.9% and 96.9%, respectively, with AUC = 0.97 (95% CI: 0.94-0.99). SR were applicable in 236 masses and had a sensitivity of 90.6% with a specificity of 95.3% and AUC = 0.93 (95% CI: 0.89-0.97). RMI3 was calculated only in the 104 women who had CA125 available and had a sensitivity of 55.3%, a specificity of 94% and AUC = 0.85 (95% CI: 0.77-0.93). Although subjective assessment by an ultrasound expert remains the best current method for preoperative discrimination of adnexal tumors, simplicity and high predictive value favor primary use of the IOTA SR method and, when it is not applicable, the IOTA LR1 or GI-RADS models.

  19. A nonparametric method to generate synthetic populations to adjust for complex sampling design features.

    PubMed

    Dong, Qi; Elliott, Michael R; Raghunathan, Trivellore E

    2014-06-01

    Outside of the survey sampling literature, samples are often assumed to be generated by a simple random sampling process that produces independent and identically distributed (IID) samples. Many statistical methods are developed largely in this IID world. Application of these methods to data from complex sample surveys without making allowance for the survey design features can lead to erroneous inferences. Hence, much time and effort have been devoted to developing statistical methods that analyze complex survey data and account for the sample design. This issue is particularly important when generating synthetic populations using finite population Bayesian inference, as is often done in missing data or disclosure risk settings, or when combining data from multiple surveys. By extending previous work in the finite population Bayesian bootstrap literature, we propose a method to generate synthetic populations from a posterior predictive distribution in a fashion that inverts the complex sampling design features and generates simple random samples from a superpopulation point of view, adjusting the complex data so that they can be analyzed as simple random samples. We consider a simulation study with a stratified, clustered, unequal-probability-of-selection sample design, and use the proposed nonparametric method to generate synthetic populations for the 2006 National Health Interview Survey (NHIS) and the Medical Expenditure Panel Survey (MEPS), which are stratified, clustered, unequal-probability-of-selection sample designs.

  20. A nonparametric method to generate synthetic populations to adjust for complex sampling design features

    PubMed Central

    Dong, Qi; Elliott, Michael R.; Raghunathan, Trivellore E.

    2017-01-01

    Outside of the survey sampling literature, samples are often assumed to be generated by a simple random sampling process that produces independent and identically distributed (IID) samples. Many statistical methods are developed largely in this IID world. Application of these methods to data from complex sample surveys without making allowance for the survey design features can lead to erroneous inferences. Hence, much time and effort have been devoted to developing statistical methods that analyze complex survey data and account for the sample design. This issue is particularly important when generating synthetic populations using finite population Bayesian inference, as is often done in missing data or disclosure risk settings, or when combining data from multiple surveys. By extending previous work in the finite population Bayesian bootstrap literature, we propose a method to generate synthetic populations from a posterior predictive distribution in a fashion that inverts the complex sampling design features and generates simple random samples from a superpopulation point of view, adjusting the complex data so that they can be analyzed as simple random samples. We consider a simulation study with a stratified, clustered unequal-probability of selection sample design, and use the proposed nonparametric method to generate synthetic populations for the 2006 National Health Interview Survey (NHIS) and the Medical Expenditure Panel Survey (MEPS), both of which are stratified, clustered unequal-probability of selection sample designs. PMID:29200608

  1. Prediction of free turbulent mixing using a turbulent kinetic energy method

    NASA Technical Reports Server (NTRS)

    Harsha, P. T.

    1973-01-01

    Free turbulent mixing of two-dimensional and axisymmetric one- and two-stream flows is analyzed by a relatively simple turbulent kinetic energy method. This method incorporates a linear relationship between the turbulent shear and the turbulent kinetic energy and an algebraic relationship for the length scale appearing in the turbulent kinetic energy equation. Good results are obtained for a wide variety of flows. The technique is shown to be especially applicable to flows with heat and mass transfer, for which nonunity Prandtl and Schmidt numbers may be assumed.
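
    The core closure the abstract describes is a linear relation between turbulent shear stress and turbulent kinetic energy. A minimal Python sketch of that relation follows; the Bradshaw-type constant and the sample TKE profile are illustrative assumptions, not values from the report:

        # Minimal sketch of the linear shear/TKE closure described above:
        # turbulent shear stress taken proportional to turbulent kinetic
        # energy, tau = a1 * rho * k. The constant a1 and the profile
        # values are illustrative assumptions, not from the report.
        import numpy as np

        A1 = 0.3                       # assumed Bradshaw-type structure constant
        rho = 1.2                      # air density, kg/m^3
        k = np.array([0.5, 2.0, 4.5])  # sample TKE profile, m^2/s^2

        tau = A1 * rho * k             # turbulent shear stress, Pa
        print(tau)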

  2. Decision curve analysis: a novel method for evaluating prediction models.

    PubMed

    Vickers, Andrew J; Elkin, Elena B

    2006-01-01

    Diagnostic and prognostic models are typically evaluated with measures of accuracy that do not address clinical consequences. Decision-analytic techniques allow assessment of clinical outcomes but often require collection of additional information and may be cumbersome to apply to models that yield a continuous result. The authors sought a method for evaluating and comparing prediction models that incorporates clinical consequences, requires only the data set on which the models are tested, and can be applied to models that have either continuous or dichotomous results. The authors describe decision curve analysis, a simple, novel method of evaluating predictive models. They start by assuming that the threshold probability of a disease or event at which a patient would opt for treatment is informative of how the patient weighs the relative harms of a false-positive and a false-negative prediction. This theoretical relationship is then used to derive the net benefit of the model across different threshold probabilities. Plotting net benefit against threshold probability yields the "decision curve." The authors apply the method to models for the prediction of seminal vesicle invasion in prostate cancer patients. Decision curve analysis identified the range of threshold probabilities in which a model was of value, the magnitude of benefit, and which of several models was optimal. Decision curve analysis is a suitable method for evaluating alternative diagnostic and prognostic strategies that has advantages over other commonly used measures and techniques.
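
    The net-benefit calculation at the heart of decision curve analysis is simple enough to sketch directly. The following Python fragment (variable names and data layout are illustrative, not taken from the authors' study) computes net benefit, TP/n - (FP/n)(pt/(1-pt)), across a grid of threshold probabilities pt:

        # Sketch of the net-benefit calculation behind decision curve analysis.
        import numpy as np

        def net_benefit(y_true, y_prob, thresholds):
            """Net benefit of a prediction model at each threshold probability."""
            n = len(y_true)
            nb = []
            for pt in thresholds:
                pred_pos = y_prob >= pt
                tp = np.sum(pred_pos & (y_true == 1))   # true positives
                fp = np.sum(pred_pos & (y_true == 0))   # false positives
                nb.append(tp / n - fp / n * pt / (1 - pt))
            return np.array(nb)

        # Plotting net_benefit(...) against thresholds yields the "decision curve".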

  3. RAPA: a novel in vitro method to evaluate anti-bacterial skin cleansing products.

    PubMed

    Ansari, S A; Gafur, R B; Jones, K; Espada, L A; Polefka, T G

    2010-04-01

    Development of efficacious anti-bacterial skin cleansing products has been limited by the availability of a pre-clinical (in vitro) method to predict clinical efficacy adequately. We report a simple and rapid method, designated the rapid agar plate assay (RAPA), that uses the bacteriological agar surface as a surrogate substrate for skin and combines elements of two widely used in vivo (clinical) methods (Agar Patch and Cup Scrub). To simulate the washing of human hand or forearm skin with the test product, trypticase soy agar plates were directly washed with the test product and rinsed under running tap water. After air-drying the washed plates, test bacteria (Staphylococcus aureus or Escherichia coli) were applied and the plates were incubated at 37°C for 18-24 h. Using S. aureus as the test organism, anti-bacterial bar soap containing triclocarbanilide showed a strong linear relationship (R² = 0.97) between bacterial dose and percent reduction. A similar dose-response relationship (R² = 0.96) was observed for anti-bacterial liquid hand soap against E. coli. RAPA was able to distinguish between anti-bacterial products based on the nature and level of actives in them. In limited comparative tests, results obtained by RAPA were comparable with the results obtained by the clinical agar patch and clinical cup scrub methods. In conclusion, RAPA provides a simple, rugged and reproducible in vitro method for testing the relative efficacy of anti-bacterial skin cleansing products with a likelihood of comparable clinical efficacy. Further testing is warranted to improve the clinical predictability of this method.

  4. Tenax extraction as a simple approach to improve environmental risk assessments.

    PubMed

    Harwood, Amanda D; Nutile, Samuel A; Landrum, Peter F; Lydy, Michael J

    2015-07-01

    It is well documented that using exhaustive chemical extractions is not an effective means of assessing exposure of hydrophobic organic compounds in sediments and that bioavailability-based techniques are an improvement over traditional methods. One technique that has shown special promise as a method for assessing the bioavailability of hydrophobic organic compounds in sediment is the use of Tenax-extractable concentrations. A 6-h or 24-h single-point Tenax-extractable concentration correlates to both bioaccumulation and toxicity. This method has demonstrated effectiveness for several hydrophobic organic compounds in various organisms under both field and laboratory conditions. In addition, a Tenax bioaccumulation model was developed for multiple compounds relating 24-h Tenax-extractable concentrations to oligochaete tissue concentrations exposed in both the laboratory and field. This model has demonstrated predictive capacity for additional compounds and species. Use of Tenax-extractable concentrations to estimate exposure is rapid, simple, straightforward, and relatively inexpensive, as well as accurate. Therefore, this method would be an invaluable tool if implemented in risk assessments. © 2015 SETAC.

  5. A Predictive Model for Medical Events Based on Contextual Embedding of Temporal Sequences

    PubMed Central

    Wang, Zhimu; Huang, Yingxiang; Wang, Shuang; Wang, Fei; Jiang, Xiaoqian

    2016-01-01

    Background Medical concepts are inherently ambiguous and error-prone due to human fallibility, which makes it hard for them to be fully used by classical machine learning methods (eg, for tasks like early stage disease prediction). Objective Our objective was to create a new machine-friendly representation that resembles the semantics of medical concepts. We then developed a sequential predictive model for medical events based on this new representation. Methods We developed novel contextual embedding techniques to combine different medical events (eg, diagnoses, prescriptions, and lab tests). Each medical event is converted into a numerical vector that resembles its “semantics,” via which the similarity between medical events can be easily measured. We developed simple and effective predictive models based on these vectors to predict novel diagnoses. Results We evaluated our sequential prediction model (and standard learning methods) in estimating the risk of potential diseases based on our contextual embedding representation. Our model achieved an area under the receiver operating characteristic (ROC) curve (AUC) of 0.79 on chronic systolic heart failure and an average AUC of 0.67 (over the 80 most common diagnoses) using the Medical Information Mart for Intensive Care III (MIMIC-III) dataset. Conclusions We propose a general early prognosis predictor for 80 different diagnoses. Our method computes a numeric representation for each medical event to uncover the potential meaning of those events. Our results demonstrate the efficiency of the proposed method, which will benefit patients and physicians by offering more accurate diagnoses. PMID:27888170
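
    Once each medical event is a numeric vector, similarity between events reduces to a vector comparison. A minimal Python sketch follows; the toy embeddings and event names are purely illustrative, not vectors from the MIMIC-III study:

        # Sketch of comparing embedded medical events via cosine similarity.
        import numpy as np

        def cosine_similarity(u, v):
            return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

        dx_heart_failure = np.array([0.12, -0.40, 0.88])  # toy event embedding
        rx_furosemide = np.array([0.10, -0.35, 0.80])     # toy event embedding
        print(cosine_similarity(dx_heart_failure, rx_furosemide))  # near 1.0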

  6. Dynamic Deployment Simulations of Inflatable Space Structures

    NASA Technical Reports Server (NTRS)

    Wang, John T.

    2005-01-01

    The feasibility of using the Control Volume (CV) method and the Arbitrary Lagrangian Eulerian (ALE) method in LS-DYNA to simulate the dynamic deployment of inflatable space structures is investigated. The CV and ALE methods were used to predict the inflation deployments of three folded tube configurations. The CV method was found to be a simple and computationally efficient method that may be adequate for modeling slow inflation deployment, since the inertia of the inflation gas can be neglected. The ALE method was found to be very computationally intensive, since it involves solving the three conservation equations of the fluid as well as dealing with complex fluid-structure interactions.

  7. Evaluation of approximate methods for the prediction of noise shielding by airframe components

    NASA Technical Reports Server (NTRS)

    Ahtye, W. F.; Mcculley, G.

    1980-01-01

    An evaluation of some approximate methods for the prediction of shielding of monochromatic sound and broadband noise by aircraft components is reported. Anechoic-chamber measurements of the shielding of a point source by various simple geometric shapes were made and the measured values compared with those calculated by the superposition of asymptotic closed-form solutions for the shielding by a semi-infinite plane barrier. The shields used in the measurements consisted of rectangular plates, a circular cylinder, and a rectangular plate attached to the cylinder to simulate a wing-body combination. The normalized frequency, defined as a product of the acoustic wave number and either the plate width or cylinder diameter, ranged from 4.6 to 114. Microphone traverses in front of the rectangular plates and cylinders generally showed a series of diffraction bands that matched those predicted by the approximate methods, except for differences in the magnitudes of the attenuation minima which can be attributed to experimental inaccuracies. The shielding of wing-body combinations was predicted by modifications of the approximations used for rectangular and cylindrical shielding. Although the approximations failed to predict diffraction patterns in certain regions, they did predict the average level of wing-body shielding with an average deviation of less than 3 dB.

  8. Development of a program for toric intraocular lens calculation considering posterior corneal astigmatism, incision-induced posterior corneal astigmatism, and effective lens position.

    PubMed

    Eom, Youngsub; Ryu, Dongok; Kim, Dae Wook; Yang, Seul Ki; Song, Jong Suk; Kim, Sug-Whan; Kim, Hyo Myung

    2016-10-01

    To evaluate a toric intraocular lens (IOL) calculation that considers posterior corneal astigmatism, incision-induced posterior corneal astigmatism, and effective lens position (ELP). Two thousand samples of corneal parameters with keratometric astigmatism ≥ 1.0 D were obtained using bootstrap methods. The probability distributions for incision-induced keratometric and posterior corneal astigmatisms, as well as ELP, were estimated from a literature review. The predicted residual astigmatism error using method D with an IOL add power calculator (IAPC) was compared with those derived using methods A, B, and C through Monte-Carlo simulation. Method A considered keratometric astigmatism and incision-induced keratometric astigmatism; method B considered posterior corneal astigmatism in addition to method A; method C considered incision-induced posterior corneal astigmatism in addition to method B; and method D considered ELP in addition to method C. To verify the IAPC used in this study, the predicted toric IOL cylinder power and its axis using the IAPC were compared with ray-tracing simulation results. The median magnitude of the predicted residual astigmatism error using method D (0.25 diopters [D]) was smaller than that derived using methods A (0.42 D), B (0.38 D), and C (0.28 D). Linear regression analysis indicated that the predicted toric IOL cylinder power and its axis had excellent goodness-of-fit between the IAPC and the ray-tracing simulation. The IAPC is a simple but accurate method for predicting the toric IOL cylinder power and its axis while accounting for posterior corneal astigmatism, incision-induced posterior corneal astigmatism, and ELP.

  9. Complex versus simple models: ion-channel cardiac toxicity prediction.

    PubMed

    Mistry, Hitesh B

    2018-01-01

    There is growing interest in applying detailed mathematical models of the heart for ion-channel related cardiac toxicity prediction. However, there is debate as to whether such complex models are required. Here, an assessment of the predictive performance of two established large-scale biophysical cardiac models and a simple linear model, Bnet, was conducted. Three ion-channel data-sets were extracted from the literature. Each compound was assigned a cardiac risk category using two different classification schemes based on information within CredibleMeds. The predictive performance of each model within each data-set for each classification scheme was assessed via leave-one-out cross-validation. Overall, the Bnet model performed as well as the leading cardiac models in two of the data-sets and outperformed both cardiac models on the third. These results highlight the importance of benchmarking complex versus simple models and encourage the development of simple models.

  10. Brainstorming: weighted voting prediction of inhibitors for protein targets.

    PubMed

    Plewczynski, Dariusz

    2011-09-01

    The "Brainstorming" approach presented in this paper is a weighted voting method that can improve the quality of predictions generated by several machine learning (ML) methods. First, an ensemble of heterogeneous ML algorithms is trained on available experimental data, then all solutions are gathered and a consensus is built between them. The final prediction is performed using a voting procedure, whereby the vote of each method is weighted according to a quality coefficient calculated using multivariable linear regression (MLR). The MLR optimization procedure is very fast, therefore no additional computational cost is introduced by using this jury approach. Here, brainstorming is applied to selecting actives from large collections of compounds relating to five diverse biological targets of medicinal interest, namely HIV-reverse transcriptase, cyclooxygenase-2, dihydrofolate reductase, estrogen receptor, and thrombin. The MDL Drug Data Report (MDDR) database was used for selecting known inhibitors for these protein targets, and experimental data was then used to train a set of machine learning methods. The benchmark dataset (available at http://bio.icm.edu.pl/∼darman/chemoinfo/benchmark.tar.gz ) can be used for further testing of various clustering and machine learning methods when predicting the biological activity of compounds. Depending on the protein target, the overall recall value is raised by at least 20% in comparison to any single machine learning method (including ensemble methods like random forest) and unweighted simple majority voting procedures.

  11. Database and new models based on a group contribution method to predict the refractive index of ionic liquids.

    PubMed

    Wang, Xinxin; Lu, Xingmei; Zhou, Qing; Zhao, Yongsheng; Li, Xiaoqian; Zhang, Suojiang

    2017-08-02

    Refractive index is an important physical property that is widely used in separation and purification. In this study, refractive index data for ILs were collected to establish a comprehensive database comprising 2138 data points from 1996 to 2014. A Group Contribution-Artificial Neural Network (GC-ANN) model and a Group Contribution (GC) method were employed to predict the refractive index of ILs at temperatures from 283.15 K to 368.15 K. The average absolute relative deviations (AARD) of the GC-ANN model and the GC method were 0.179% and 0.628%, respectively. The results showed that the GC-ANN model provided an effective way to estimate the refractive index of ILs, whereas the GC method was simpler and more broadly applicable. In summary, both models are accurate and efficient approaches for estimating the refractive indices of ILs.
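
    The error metric quoted above is straightforward to compute. The following sketch implements AARD as the mean of |predicted - experimental|/experimental, expressed in percent; the sample values are illustrative only:

        # Sketch of the average absolute relative deviation (AARD) metric.
        import numpy as np

        def aard(predicted, experimental):
            predicted = np.asarray(predicted)
            experimental = np.asarray(experimental)
            return np.mean(np.abs(predicted - experimental) / experimental) * 100.0

        print(aard([1.432, 1.501], [1.430, 1.498]))  # AARD in percent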

  12. Estimating the R-curve from residual strength data

    NASA Technical Reports Server (NTRS)

    Orange, T. W.

    1985-01-01

    A method is presented for estimating the crack-extension resistance curve (R-curve) from residual-strength (maximum load against original crack length) data for precracked fracture specimens. The method allows additional information to be inferred from simple test results, and that information can be used to estimate the failure loads of more complicated structures of the same material and thickness. The fundamentals of the R-curve concept are reviewed first. Then the analytical basis for the estimation method is presented. The estimation method has been verified in two ways. Data from the literature (involving several materials and different types of specimens) are used to show that the estimated R-curve is in good agreement with the measured R-curve. A recent predictive blind round-robin program offers a more crucial test. When the actual failure loads are disclosed, the predictions are found to be in good agreement.

  13. Consideration of Moving Tooth Load in Gear Crack Propagation Predictions

    NASA Technical Reports Server (NTRS)

    Lewicki, David G.; Handschuh, Robert F.; Spievak, Lisa E.; Wawrzynek, Paul A.; Ingraffea, Anthony R.

    2001-01-01

    Robust gear designs consider not only crack initiation, but crack propagation trajectories for a fail-safe design. In actual gear operation, the magnitude as well as the position of the force changes as the gear rotates through the mesh. A study to determine the effect of moving gear tooth load on crack propagation predictions was performed. Two-dimensional analysis of an involute spur gear and three-dimensional analysis of a spiral-bevel pinion gear using the finite element method and boundary element method were studied and compared to experiments. A modified theory for predicting gear crack propagation paths based on the criteria of Erdogan and Sih was investigated. Crack simulation based on calculated stress intensity factors and mixed mode crack angle prediction techniques using a simple static analysis in which the tooth load was located at the highest point of single tooth contact was validated. For three-dimensional analysis, however, the analysis was valid only as long as the crack did not approach the contact region on the tooth.

  14. Methodology to predict delayed failure due to slow crack growth in ceramic tubular components using data from simple specimens

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jadaan, O.M.; Tressler, R.E.

    1993-04-01

    The methodology to predict the lifetime of sintered α-silicon carbide (SASC) tubes subjected to slow crack growth (SCG) conditions involved the experimental determination of the SCG parameters of that material and a scaling analysis to project the stress rupture data from small specimens to large components. Dynamic fatigue testing of O-ring and compressed C-ring specimens, taking into account the effect of the threshold stress intensity factor, was used to obtain the SCG parameters. These SCG parameters were in excellent agreement with those published in the literature and extracted from stress rupture tests of tensile and bend specimens. Two methods were used to predict the lifetimes of internally heated and pressurized SASC tubes. The first is a fracture mechanics approach that is well known in the literature. The second method used a scaling analysis in which the stress rupture distribution (lifetime) of any specimen configuration can be predicted from the stress rupture data of another.

  15. Development of novel in silico model for developmental toxicity assessment by using naïve Bayes classifier method.

    PubMed

    Zhang, Hui; Ren, Ji-Xia; Kang, Yan-Li; Bo, Peng; Liang, Jun-Yu; Ding, Lan; Kong, Wei-Bao; Zhang, Ji

    2017-08-01

    Toxicological testing associated with developmental toxicity endpoints is very expensive, time consuming and labor intensive. Thus, developing alternative approaches for developmental toxicity testing is an important and urgent task in the drug development field. In this investigation, the naïve Bayes classifier was applied to develop a novel prediction model for developmental toxicity. The established prediction model was evaluated by internal 5-fold cross-validation and an external test set. The overall prediction accuracies for the internal 5-fold cross-validation of the training set and the external test set were 96.6% and 82.8%, respectively. In addition, four simple descriptors and some representative substructures of developmental toxicants were identified. We hope the established in silico prediction model can be used as an alternative method for toxicological assessment. The molecular information obtained could afford a deeper understanding of developmental toxicants and provide guidance for medicinal chemists working in drug discovery and lead optimization. Copyright © 2017 Elsevier Inc. All rights reserved.
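
    The modeling setup named above, a naïve Bayes classifier evaluated by 5-fold cross-validation, is easy to sketch. The descriptor matrix and labels below are random placeholders, not the paper's descriptors or data:

        # Sketch of a naive Bayes classifier with 5-fold cross-validation.
        import numpy as np
        from sklearn.naive_bayes import GaussianNB
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        X = rng.normal(size=(100, 4))      # four molecular descriptors (toy)
        y = rng.integers(0, 2, size=100)   # toxicant / non-toxicant labels

        scores = cross_val_score(GaussianNB(), X, y, cv=5)
        print(scores.mean())               # internal 5-fold CV accuracy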

  16. Monostatic Radar Cross Section Estimation of Missile Shaped Object Using Physical Optics Method

    NASA Astrophysics Data System (ADS)

    Sasi Bhushana Rao, G.; Nambari, Swathi; Kota, Srikanth; Ranga Rao, K. S.

    2017-08-01

    Stealth technology manages many signatures of a target, and most radar systems use radar cross section (RCS) for discriminating targets and classifying them with regard to stealth. In wartime, a target's RCS must be very small to make the target invisible to enemy radar. In this study, the radar cross section of perfectly conducting objects such as a cylinder, a truncated cone (frustum) and a circular flat plate is estimated with respect to parameters such as size, frequency and aspect angle. Because exactly predicting the RCS is difficult, approximate methods become the alternative. The majority of approximate methods are valid in the optical region, where each has its own strengths and weaknesses. Therefore, the analysis given in this study is purely based on far-field monostatic RCS measurements in the optical region. Computation is done using the Physical Optics (PO) method for determining the RCS of simple models. In this study, not only the RCS of simple models but also that of missile-shaped and rocket-shaped models obtained by cascading the simple objects has been computed using Matlab simulation. Rectangular plots of RCS in dBsm versus aspect angle are obtained for simple and missile-shaped objects using Matlab simulation. The treatment of RCS in this study is based on narrowband analysis.
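
    As a hedged illustration of the kind of physical-optics estimate the abstract describes, the following Python sketch evaluates the standard PO closed form for the monostatic RCS of a perfectly conducting rectangular flat plate versus aspect angle; the plate dimensions and frequency are assumptions, not the paper's models:

        # Physical-optics RCS of a rectangular flat plate (principal plane).
        import numpy as np

        a, b = 0.3, 0.2                  # plate dimensions, m
        freq = 10e9                      # 10 GHz (illustrative)
        lam = 3e8 / freq
        k = 2 * np.pi / lam

        theta = np.radians(np.linspace(-60, 60, 241))
        x = k * a * np.sin(theta)
        sinc = np.where(np.abs(x) < 1e-9, 1.0, np.sin(x) / x)
        rcs = 4 * np.pi * (a * b / lam) ** 2 * (np.cos(theta) * sinc) ** 2

        rcs_dbsm = 10 * np.log10(rcs)    # plot against theta for the pattern
        print(rcs_dbsm.max())            # broadside peak, dBsm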

  17. Efficient differentially private learning improves drug sensitivity prediction.

    PubMed

    Honkela, Antti; Das, Mrinal; Nieminen, Arttu; Dikmen, Onur; Kaski, Samuel

    2018-02-06

    Users of a personalised recommendation system face a dilemma: recommendations can be improved by learning from data, but only if other users are willing to share their private information. Good personalised predictions are vitally important in precision medicine, but the genomic information on which the predictions are based is also particularly sensitive, as it directly identifies the patients and hence cannot easily be anonymised. Differential privacy has emerged as a potentially promising solution: privacy is considered sufficient if the presence of individual patients cannot be distinguished. However, differentially private learning with current methods does not improve predictions with feasible data sizes and dimensionalities. We show that useful predictors can be learned under powerful differential privacy guarantees, and even from moderately sized data sets, by demonstrating significant improvements in the accuracy of private drug sensitivity prediction with a new robust private regression method. Our method matches the predictive accuracy of state-of-the-art non-private lasso regression using only 4x more samples under relatively strong differential privacy guarantees. Good performance with limited data is achieved by limiting the sharing of private information: decreasing the dimensionality and projecting outliers to fit tighter bounds means less noise needs to be added for equal privacy. The proposed differentially private regression method combines theoretical appeal and asymptotic efficiency with good prediction accuracy even on moderate-sized data. As even this simple-to-implement method shows promise on challenging genomic data, we anticipate rapid progress towards practical applications in many fields. This article was reviewed by Zoltan Gaspari and David Kreil.
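
    For readers unfamiliar with the privacy primitive involved, the following sketch shows basic Laplace output perturbation on a bounded-data mean; this is the textbook mechanism underlying much differentially private learning, not the authors' robust private regression, and all values and bounds are illustrative assumptions:

        # Laplace mechanism for a differentially private mean of bounded data.
        import numpy as np

        def dp_mean(x, lo, hi, epsilon, rng=np.random.default_rng()):
            x = np.clip(x, lo, hi)               # enforce the assumed bounds
            sensitivity = (hi - lo) / len(x)     # max effect of one record
            noise = rng.laplace(scale=sensitivity / epsilon)
            return x.mean() + noise

        drug_response = np.array([0.2, 0.5, 0.9, 0.4, 0.7])  # toy data
        print(dp_mean(drug_response, lo=0.0, hi=1.0, epsilon=1.0))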

  18. Uncertainty Estimation using Bootstrapped Kriging Predictions for Precipitation Isoscapes

    NASA Astrophysics Data System (ADS)

    Ma, C.; Bowen, G. J.; Vander Zanden, H.; Wunder, M.

    2017-12-01

    Isoscapes are spatial models representing the distribution of stable isotope values across landscapes. Isoscapes of hydrogen and oxygen in precipitation are now widely used in a diversity of fields, including geology, biology, hydrology, and atmospheric science. To generate isoscapes, geostatistical methods are typically applied to extend predictions from limited data measurements. Kriging is a popular method in isoscape modeling, but quantifying the uncertainty associated with the resulting isoscapes is challenging. Applications that use precipitation isoscapes to determine sample origin require estimation of uncertainty. Here we present a simple bootstrap method (SBM) to estimate the mean and uncertainty of the kriged isoscape and compare these results with a generalized bootstrap method (GBM) applied in previous studies. We used hydrogen isotopic data from IsoMAP to explore these two approaches for estimating uncertainty. We conducted 10 simulations for each bootstrap method and found that SBM yielded successful kriging predictions in more simulations (9/10) than GBM (4/10). The prediction from SBM was closer to the original prediction generated without bootstrapping and had less variance than GBM. SBM was tested on different datasets from IsoMAP with different numbers of observation sites. We determined that predictions from datasets with fewer than 40 observation sites using SBM were more variable than the original prediction. The approaches we used for estimating uncertainty will be compiled in an R package that is under development. We expect that these robust estimates of precipitation isoscape uncertainty can be applied in diagnosing the origin of samples ranging from various types of water to migratory animals, food products, and humans.
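
    The simple bootstrap idea described here can be sketched generically: resample observation sites with replacement, refit the spatial model, and take the spread of the refitted predictions as the uncertainty. In the Python sketch below, fit_and_predict is a placeholder for a kriging fit-and-predict routine, not the authors' code:

        # Generic site-resampling bootstrap for spatial prediction uncertainty.
        import numpy as np

        def bootstrap_predictions(sites, values, fit_and_predict, n_boot=100,
                                  rng=np.random.default_rng(0)):
            preds = []
            n = len(values)
            for _ in range(n_boot):
                idx = rng.integers(0, n, size=n)   # resample with replacement
                preds.append(fit_and_predict(sites[idx], values[idx]))
            preds = np.array(preds)
            return preds.mean(axis=0), preds.std(axis=0)  # mean, uncertainty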

  19. A Physics-Based Engineering Approach to Predict the Cross Section for Advanced SRAMs

    NASA Astrophysics Data System (ADS)

    Li, Lei; Zhou, Wanting; Liu, Huihua

    2012-12-01

    This paper presents a physics-based engineering approach to estimate the heavy ion induced upset cross section for 6T SRAM cells from layout and technology parameters. The new approach calculates the effects of radiation with a junction photocurrent, which is derived based on device physics, and handles the problem using simple SPICE simulations. First, the approach uses a standard SPICE program on a typical PC to predict the SPICE-simulated curve of the collected charge vs. its affected distance from the drain-body junction with the derived junction photocurrent. Then, the SPICE-simulated curve is used to calculate the heavy ion induced upset cross section with a simple model, which considers that the SEU cross section of a SRAM cell is more related to a "radius of influence" around a heavy ion strike than to the physical size of a diffusion node in the layout for advanced SRAMs in nano-scale process technologies. The calculated upset cross section based on this method is in good agreement with test results for 6T SRAM cells processed using 90 nm process technology.

  20. GASP: Gapped Ancestral Sequence Prediction for proteins

    PubMed Central

    Edwards, Richard J; Shields, Denis C

    2004-01-01

    Background The prediction of ancestral protein sequences from multiple sequence alignments is useful for many bioinformatics analyses. Predicting ancestral sequences is not a simple procedure and relies on accurate alignments and phylogenies. Several algorithms exist based on Maximum Parsimony or Maximum Likelihood methods but many current implementations are unable to process residues with gaps, which may represent insertion/deletion (indel) events or sequence fragments. Results Here we present a new algorithm, GASP (Gapped Ancestral Sequence Prediction), for predicting ancestral sequences from phylogenetic trees and the corresponding multiple sequence alignments. Alignments may be of any size and contain gaps. GASP first assigns the positions of gaps in the phylogeny before using a likelihood-based approach centred on amino acid substitution matrices to assign ancestral amino acids. Important outgroup information is used by first working down from the tips of the tree to the root, using descendant data only to assign probabilities, and then working back up from the root to the tips using descendant and outgroup data to make predictions. GASP was tested on a number of simulated datasets based on real phylogenies. Prediction accuracy for ungapped data was similar to three alternative algorithms tested, with GASP performing better in some cases and worse in others. Adding simple insertions and deletions to the simulated data did not have a detrimental effect on GASP accuracy. Conclusions GASP (Gapped Ancestral Sequence Prediction) will predict ancestral sequences from multiple protein alignments of any size. Although not as accurate in all cases as some of the more sophisticated maximum likelihood approaches, it can process a wide range of input phylogenies and will predict ancestral sequences for gapped and ungapped residues alike. PMID:15350199

  1. Why significant variables aren't automatically good predictors.

    PubMed

    Lo, Adeline; Chernoff, Herman; Zheng, Tian; Lo, Shaw-Hwa

    2015-11-10

    Thus far, genome-wide association studies (GWAS) have been disappointing in the inability of investigators to use the results of identified, statistically significant variants in complex diseases to make predictions useful for personalized medicine. Why are significant variables not leading to good prediction of outcomes? We point out that this problem is prevalent in simple as well as complex data, in the sciences as well as the social sciences. We offer a brief explanation and some statistical insights on why higher significance cannot automatically imply stronger predictivity and illustrate through simulations and a real breast cancer example. We also demonstrate that highly predictive variables do not necessarily appear as highly significant, thus evading the researcher using significance-based methods. We point out that what makes variables good for prediction versus significance depends on different properties of the underlying distributions. If prediction is the goal, we must lay aside significance as the only selection standard. We suggest that progress in prediction requires efforts toward a new research agenda of searching for a novel criterion to retrieve highly predictive variables rather than highly significant variables. We offer an alternative approach that was not designed for significance, the partition retention method, which was very effective predicting on a long-studied breast cancer data set, by reducing the classification error rate from 30% to 8%.

  2. A single step reversed-phase high performance liquid chromatography separation of polar and non-polar lipids.

    PubMed

    Olsson, Petter; Holmbäck, Jan; Herslöf, Bengt

    2014-11-21

    This paper reports a simple chromatographic system to separate lipid classes as well as their molecular species. By using phenyl-coated silica as the stationary phase in combination with a simple mobile phase consisting of methanol and water, all tested lipid classes elute within 30 min. Furthermore, a method to accurately predict retention times of specific lipid components for this type of chromatography is presented. Common detection systems were used, namely evaporative light scattering detection (ELSD), charged aerosol detection (CAD), electrospray mass spectrometry (ESI-MS), and UV detection. Copyright © 2014 Elsevier B.V. All rights reserved.

  3. Collapse limit states of reinforced earth retaining walls

    NASA Astrophysics Data System (ADS)

    Bolton, M. D.; Pang, P. L. R.

    The use of systems of earth reinforcement or anchorage is gaining in popularity. It therefore becomes important to assess whether the design methods adopted for such constructions represent valid predictions of realistic limit states. Confidence in the effectiveness of limit state criteria can only be gained if a wide variety of representative limit states is observed. Over 80 centrifugal model tests of simple reinforced earth retaining walls were carried out, with the main purpose of clarifying the nature of appropriate collapse criteria. Collapses due to an insufficiency of friction were shown to be repeatable and therefore subject to fairly simple limit state calculations.

  4. Remote sensing techniques for prediction of watershed runoff

    NASA Technical Reports Server (NTRS)

    Blanchard, B. J.

    1975-01-01

    Hydrologic parameters of watersheds for use in mathematical models and as design criteria for flood detention structures are sometimes difficult to quantify using conventional measuring systems. The advent of remote sensing devices developed in the past decade offers the possibility that watershed characteristics such as vegetative cover, soils, soil moisture, etc., may be quantified rapidly and economically. Experiments with visible and near-infrared data from the LANDSAT-1 multispectral scanner indicate that a simple technique for calibrating runoff equation coefficients is feasible. The technique was tested on 10 watersheds in the Chickasha area, and test results show that more accurate runoff coefficients were obtained than with conventional methods. The technique worked equally well using a dry fall scene. The runoff equation coefficients were then predicted for 22 subwatersheds with flood detention structures. Predicted values were again more accurate than coefficients produced by conventional methods.

  5. Synthesis, spectroscopic and electrochemical characterization of secnidazole esters

    NASA Astrophysics Data System (ADS)

    Shahid, Hafiz Abdullah; Jahangir, Sajid; Hanif, Muddasir; Xiong, Tianrou; Muhammad, Haji; Wahid, Sana; Yousuf, Sammer; Qureshi, Naseem

    2017-12-01

    We report a low-cost, environmentally friendlier and simple method for the esterification of secnidazole. This is the first comprehensive structural characterization of novel secnidazole esters by spectroscopic and electrochemical methods. The EIMS fragmentation analysis showed the distinctive contributions of heteroatom bonds, explained by the fragmentation patterns: the observed peaks originate from the loss of a single electron, loss of HCN, M-O, M-NO, M-NO2, M-C7H10N3O3, and M-C8H10N3O4. A comparison of predicted 13C NMR values with experimental values showed that ChemBioDraw Ultra 14.0 has an advantage in predicting aromatic (sp2) carbons, while MestReNova 6.1 predicts sp3 carbons more accurately. The electrochemical properties indicated an irreversible oxidation process and a reversible reduction process in these ester molecules, similar to the parent secnidazole.

  6. Predicting catalyst-support interactions between metal nanoparticles and amorphous silica supports

    NASA Astrophysics Data System (ADS)

    Ewing, Christopher S.; Veser, Götz; McCarthy, Joseph J.; Lambrecht, Daniel S.; Johnson, J. Karl

    2016-10-01

    Metal-support interactions significantly affect the stability and activity of supported catalytic nanoparticles (NPs), yet there is no simple and reliable method for estimating NP-support interactions, especially for amorphous supports. We present an approach for rapid prediction of catalyst-support interactions between Pt NPs and amorphous silica supports for NPs of various sizes and shapes. We use density functional theory calculations of 13 atom Pt clusters on model amorphous silica supports to determine linear correlations relating catalyst properties to NP-support interactions. We show that these correlations can be combined with fast discrete element method simulations to predict adhesion energy and NP net charge for NPs of larger sizes and different shapes. Furthermore, we demonstrate that this approach can be successfully transferred to Pd, Au, Ni, and Fe NPs. This approach can be used to quickly screen stability and net charge transfer and leads to a better fundamental understanding of catalyst-support interactions.

  7. United3D: a protein model quality assessment program that uses two consensus based methods.

    PubMed

    Terashi, Genki; Oosawa, Makoto; Nakamura, Yuuki; Kanou, Kazuhiko; Takeda-Shitaka, Mayuko

    2012-01-01

    In protein structure prediction, such as template-based modeling and free modeling (ab initio modeling), the step that assesses the quality of protein models is very important. We have developed a model quality assessment (QA) program, United3D, that uses an optimized clustering method and a simple Cα atom contact-based potential. United3D automatically estimates quality scores (Qscore) of predicted protein models that are highly correlated with the actual quality (GDT_TS). The performance of United3D was tested in the ninth Critical Assessment of protein Structure Prediction (CASP9) experiment. In CASP9, United3D showed the lowest average loss of GDT_TS (5.3) among the QA methods that participated. This result indicates that United3D's ability to identify high-quality models among the models predicted by CASP9 servers on 116 targets was the best among the QA methods tested in CASP9. United3D also produced high average Pearson correlation coefficients (0.93) and acceptable Kendall rank correlation coefficients (0.68) between Qscore and GDT_TS. This performance was competitive with the other top-ranked QA methods tested in CASP9. These results indicate that United3D is a useful tool for selecting high-quality models from many candidate model structures provided by various modeling methods. United3D will improve the accuracy of protein structure prediction.

  8. A simple test of choice stepping reaction time for assessing fall risk in people with multiple sclerosis.

    PubMed

    Tijsma, Mylou; Vister, Eva; Hoang, Phu; Lord, Stephen R

    2017-03-01

    Purpose To determine (a) the discriminant validity for established fall risk factors and (b) the predictive validity for falls of a simple test of choice stepping reaction time (CSRT) in people with multiple sclerosis (MS). Method People with MS (n = 210, 21-74 y) performed the CSRT, sensorimotor, balance and neuropsychological tests in a single session. They were then followed up for falls using monthly fall diaries for 6 months. Results The CSRT test had excellent discriminant validity with respect to established fall risk factors. Frequent fallers (≥3 falls) performed significantly worse in the CSRT test than non-frequent fallers (0-2 falls), with the odds of suffering frequent falls increasing 69% with each SD increase in CSRT (OR = 1.69, 95% CI: 1.27-2.26, p < 0.001). In regression analysis, CSRT was best explained by sway, time to complete the 9-Hole Peg test, knee extension strength of the weaker leg, proprioception and time to complete the Trails B test (multiple R² = 0.449, p < 0.001). Conclusions A simple low-tech CSRT test has excellent discriminative and predictive validity in relation to falls in people with MS. This test may prove useful in documenting longitudinal changes in fall risk in relation to MS disease progression and the effects of interventions. Implications for rehabilitation Good choice stepping reaction time (CSRT) is required for maintaining balance. A simple low-tech CSRT test has excellent discriminative and predictive validity in relation to falls in people with MS. This test may prove useful in documenting longitudinal changes in fall risk in relation to MS disease progression and the effects of interventions.

  9. Use of paired simple and complex models to reduce predictive bias and quantify uncertainty

    NASA Astrophysics Data System (ADS)

    Doherty, John; Christensen, Steen

    2011-12-01

    Modern environmental management and decision-making is based on the use of increasingly complex numerical models. Such models have the advantage of allowing representation of complex processes and heterogeneous system property distributions inasmuch as these are understood at any particular study site. The latter are often represented stochastically, this reflecting knowledge of the character of system heterogeneity at the same time as it reflects a lack of knowledge of its spatial details. Unfortunately, however, complex models are often difficult to calibrate because of their long run times and sometimes questionable numerical stability. Analysis of predictive uncertainty is also a difficult undertaking when using models such as these. Such analysis must reflect a lack of knowledge of spatial hydraulic property details. At the same time, it must be subject to constraints on the spatial variability of these details born of the necessity for model outputs to replicate observations of historical system behavior. In contrast, the rapid run times and general numerical reliability of simple models often promote good calibration and ready implementation of sophisticated methods of calibration-constrained uncertainty analysis. Unfortunately, however, many system and process details on which uncertainty may depend are, by design, omitted from simple models. This can lead to underestimation of the uncertainty associated with many predictions of management interest. The present paper proposes a methodology that attempts to overcome the problems associated with complex models on the one hand and simple models on the other, while allowing access to the benefits each of them offers. It provides a theoretical analysis of the simplification process from a subspace point of view, yielding insights into the costs of model simplification and into how some of these costs may be reduced. It then describes a methodology for paired model usage through which the predictive bias of a simplified model can be detected and corrected, and post-calibration predictive uncertainty can be quantified. The methodology is demonstrated using a synthetic example based on groundwater modeling environments commonly encountered in northern Europe and North America.

  10. Finite-difference computations of rotor loads

    NASA Technical Reports Server (NTRS)

    Caradonna, F. X.; Tung, C.

    1985-01-01

    This paper demonstrates the current and future potential of finite-difference methods for solving real rotor problems which now rely largely on empiricism. The demonstration consists of a simple means of combining existing finite-difference, integral, and comprehensive loads codes to predict real transonic rotor flows. These computations are performed for hover and high-advance-ratio flight. Comparisons are made with experimental pressure data.

  11. Finite-difference computations of rotor loads

    NASA Technical Reports Server (NTRS)

    Caradonna, F. X.; Tung, C.

    1985-01-01

    The current and future potential of finite-difference methods for solving real rotor problems, which now rely largely on empiricism, is demonstrated. The demonstration consists of a simple means of combining existing finite-difference, integral, and comprehensive loads codes to predict real transonic rotor flows. These computations are performed for hover and high-advance-ratio flight. Comparisons are made with experimental pressure data.

  12. The Reliability and Validity of Using Regression Residuals to Measure Institutional Effectiveness in Promoting Degree Completion

    ERIC Educational Resources Information Center

    Horn, Aaron S.; Lee, Giljae

    2016-01-01

    A relatively simple way of measuring institutional effectiveness in relation to degree completion is to estimate the difference between an actual and predicted graduation rate, but the reliability and validity of this method have not been thoroughly examined. Longitudinal data were obtained from IPEDS for both public and private not-for-profit…

  13. From progressive to finite deformation, and back: the universal deformation matrix

    NASA Astrophysics Data System (ADS)

    Provost, A.; Buisson, C.; Merle, O.

    2003-04-01

    It is widely accepted that any finite strain recorded in the field may be interpreted in terms of the simultaneous combination of a pure shear component with one or several simple shear components. To predict strain in geological structures, approximate solutions may be obtained by multiplying successive small increments of each elementary strain component. A more rigorous method consists in achieving the simultaneous combination in the velocity gradient tensor, but the solutions already proposed in the literature are valid for special cases only and cannot be used, e.g., for the general combination of a pure shear component and six elementary simple shear components. In this paper, we show that the combination of any strain components is as simple as a mouse click, both analytically and numerically. The finite deformation matrix is given by D = exp(L·Δt), where L·Δt is the time-integrated velocity gradient tensor. This method makes it possible to predict finite strain for any combination of strain components. Reciprocally, L·Δt = ln(D), which allows one to unravel the simplest deformation history that might account for a given finite deformation. Given the strain ellipsoid only, it is still possible to constrain the range of compatible deformation matrices and thus the range of strain component combinations. Interestingly, certain deformation matrices, though geologically sensible, have no real logarithm and so cannot be explained by a deformation history implying strain rate components in constant proportions, which implies significant changes of the stress field during the deformation history. The study as a whole opens the possibility for further investigations on deformation analysis in general; the method can be used whatever the configuration is.
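
    The matrix-exponential relation above is directly computable with standard linear algebra routines. The following Python sketch combines a pure-shear and a simple-shear component in one velocity gradient tensor and exponentiates; the strain rates and duration are illustrative values, not from the study:

        # Finite deformation from a combined velocity gradient: D = exp(L*dt).
        import numpy as np
        from scipy.linalg import expm, logm

        dt = 1.0e6                                  # duration (arbitrary units)
        pure_shear = np.diag([1e-6, -1e-6, 0.0])    # pure-shear strain rates
        simple_shear = np.zeros((3, 3))
        simple_shear[0, 1] = 2e-6                   # one simple-shear component

        L = pure_shear + simple_shear               # combined velocity gradient
        D = expm(L * dt)                            # finite deformation matrix

        # Reciprocally, recover the time-integrated velocity gradient:
        print(np.allclose(logm(D), L * dt))         # True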

  14. Comparative assessment of several post-processing methods for correcting evapotranspiration forecasts derived from TIGGE datasets.

    NASA Astrophysics Data System (ADS)

    Tian, D.; Medina, H.

    2017-12-01

    Post-processing of medium-range reference evapotranspiration (ETo) forecasts based on numerical weather prediction (NWP) models has the potential to improve the quality and utility of these forecasts. This work compares the performance of several post-processing methods for correcting ETo forecasts over the continental U.S. generated from The Observing System Research and Predictability Experiment (THORPEX) Interactive Grand Global Ensemble (TIGGE) database, using data from Europe (EC), the United Kingdom (MO), and the United States (NCEP). The post-processing techniques considered are: simple bias correction, the use of multimodels, Ensemble Model Output Statistics (EMOS, Gneiting et al., 2005) and Bayesian Model Averaging (BMA, Raftery et al., 2005). ETo estimates based on quality-controlled U.S. Regional Climate Reference Network measurements, computed with the FAO 56 Penman-Monteith equation, are adopted as the baseline. EMOS and BMA are generally the most efficient post-processing techniques for the ETo forecasts. Nevertheless, simple bias correction of the best model is commonly much more rewarding than using raw multimodel forecasts. Our results demonstrate the potential of different forecasting and post-processing frameworks in operational evapotranspiration and irrigation advisory systems at the national scale.
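
    The simplest of the techniques compared, additive bias correction, subtracts the mean forecast error over a training window from new forecasts. A minimal Python sketch follows; the ETo values are illustrative, not from the TIGGE data:

        # Additive bias correction of forecasts against a baseline.
        import numpy as np

        train_forecast = np.array([5.1, 4.8, 6.0, 5.5])  # ETo forecasts, mm/day
        train_observed = np.array([4.6, 4.5, 5.4, 5.1])  # ETo baseline, mm/day

        bias = np.mean(train_forecast - train_observed)  # systematic error
        new_forecast = np.array([5.8, 6.2])
        corrected = new_forecast - bias
        print(corrected)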

  15. Postprocessing for Air Quality Predictions

    NASA Astrophysics Data System (ADS)

    Delle Monache, L.

    2017-12-01

    In recent years, air quality (AQ) forecasting has made significant progress towards better predictions, with the goal of protecting the public from harmful pollutants. This progress is the result of improvements in weather and chemical transport models, their coupling, and more accurate emission inventories (e.g., with the development of new algorithms to account for fires in near real time). Nevertheless, AQ predictions are still affected at times by significant biases, which stem from limitations in both weather and chemistry transport models. Those are the result of numerical approximations and the poor representation (and understanding) of important physical and chemical processes. Moreover, although the quality of emission inventories has improved significantly, they are still one of the main sources of uncertainty in AQ predictions. For operational real-time AQ forecasting, a significant portion of these biases can be reduced with the implementation of postprocessing methods. We will review some of the techniques that have been proposed to reduce both systematic and random errors of AQ predictions and improve the correlation between predictions and observations of ground-level ozone and surface particulate matter less than 2.5 µm in diameter (PM2.5). These methods, which can be applied to both deterministic and probabilistic predictions, include simple bias-correction techniques, corrections inspired by the Kalman filter, regression methods, and the more recently developed analog-based algorithms. These approaches will be compared and contrasted, and the strengths and weaknesses of each will be discussed.
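
    One of the correction families mentioned, the Kalman-filter-inspired running bias estimate, can be sketched as a recursive update from each new forecast-observation pair. The gain value and pollutant numbers below are assumptions, not parameters of an operational system:

        # Kalman-filter-inspired recursive bias estimation for AQ forecasts.
        def kf_bias_update(bias, forecast, observation, gain=0.2):
            innovation = (forecast - observation) - bias
            return bias + gain * innovation       # updated bias estimate

        bias = 0.0
        for f, o in [(42.0, 35.0), (50.0, 44.0), (38.0, 33.0)]:  # ozone, ppb
            bias = kf_bias_update(bias, f, o)
        print(bias)                               # subtract from next raw forecast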

  16. In-depth analysis and characterization of a dual damascene process with respect to different CD

    NASA Astrophysics Data System (ADS)

    Krause, Gerd; Hofmann, Detlef; Habets, Boris; Buhl, Stefan; Gutsch, Manuela; Lopez-Gomez, Alberto; Kim, Wan-Soo; Thrun, Xaver

    2018-03-01

    In a 200 mm high-volume environment, we studied data from a dual damascene process. Dual damascene is a combination of lithography, etch and CMP that is used to create copper lines and contacts in a single step. During these process steps, different metal CDs are measured with different measurement methods. In this study, we analyze the key numbers of the different measurements after the different process steps and develop simple models to predict the electrical behavior. In addition, radial profiles of both inline measurement parameters and electrical parameters have been analyzed. A matching method was developed based on inline and electrical data. Finally, a correlation analysis for radial signatures is presented that can be used to predict excursions in electrical signatures.

  17. Predictive Thermal Control Applied to HabEx

    NASA Technical Reports Server (NTRS)

    Brooks, Thomas E.

    2017-01-01

    Exoplanet science can be accomplished with a telescope that has an internal coronagraph or with an external starshade. An internal coronagraph architecture requires extreme wavefront stability (10 pm change per 10 minutes for 10^-10 contrast), so every source of wavefront error (WFE) must be controlled. Analysis has been done to estimate the thermal stability required to meet the wavefront stability requirement. This paper illustrates the potential of a new thermal control method called predictive thermal control (PTC) to achieve the required thermal stability. A simple development test using PTC indicates that PTC may meet the thermal stability requirements. Further testing of the PTC method in flight-like environments will be conducted in the X-ray and Cryogenic Facility (XRCF) at Marshall Space Flight Center (MSFC).

  18. Predictive thermal control applied to HabEx

    NASA Astrophysics Data System (ADS)

    Brooks, Thomas E.

    2017-09-01

    Exoplanet science can be accomplished with a telescope that has an internal coronagraph or with an external starshade. An internal coronagraph architecture requires extreme wavefront stability (10 pm change per 10 minutes for 10^-10 contrast), so every source of wavefront error (WFE) must be controlled. Analysis has been done to estimate the thermal stability required to meet the wavefront stability requirement. This paper illustrates the potential of a new thermal control method called predictive thermal control (PTC) to achieve the required thermal stability. A simple development test using PTC indicates that PTC may meet the thermal stability requirements. Further testing of the PTC method in flight-like environments will be conducted in the X-ray and Cryogenic Facility (XRCF) at Marshall Space Flight Center (MSFC).

  19. Simplified sonic-boom prediction. [using aerodynamic configuration charts and calculators or slide rules

    NASA Technical Reports Server (NTRS)

    Carlson, H. W.

    1978-01-01

    Sonic boom overpressures and signature duration may be predicted for the entire affected ground area for a wide variety of supersonic airplane configurations and spacecraft operating at altitudes up to 76 km in level flight or in moderate climbing or descending flight paths. The outlined procedure relies to a great extent on the use of charts to provide generation and propagation factors for use in relatively simple expressions for signature calculation. Computational requirements can be met by hand-held scientific calculators, or even by slide rules. A variety of correlations of predicted and measured sonic-boom data for airplanes and spacecraft serve to demonstrate the applicability of the simplified method.

  20. Prediction on carbon dioxide emissions based on fuzzy rules

    NASA Astrophysics Data System (ADS)

    Pauzi, Herrini; Abdullah, Lazim

    2014-06-01

    There are several ways to predict air quality, varying from simple regression to models based on artificial intelligence. Most conventional methods are not sufficiently able to provide good forecasting performance due to problems with non-linearity, uncertainty and complexity of the data. Artificial intelligence techniques are successfully used in modeling air quality in order to cope with these problems. This paper describes a fuzzy inference system (FIS) to predict CO2 emissions in Malaysia. Furthermore, an adaptive neuro-fuzzy inference system (ANFIS) is used to compare the prediction performance. Data on five variables: energy use, gross domestic product per capita, population density, combustible renewables and waste, and CO2 intensity are employed in this comparative study. The results from the two proposed models are compared, and it is clearly shown that ANFIS outperforms FIS in CO2 prediction.

  1. Predicting "Hot" and "Warm" Spots for Fragment Binding.

    PubMed

    Rathi, Prakash Chandra; Ludlow, R Frederick; Hall, Richard J; Murray, Christopher W; Mortenson, Paul N; Verdonk, Marcel L

    2017-05-11

    Computational fragment mapping methods aim to predict hotspots on protein surfaces where small fragments will bind. Such methods are popular for druggability assessment as well as structure-based design. However, to date, researchers developing or using such tools have had no clear way of assessing their performance. Here, we introduce the first diverse, high-quality validation set for computational fragment mapping. The set contains 52 diverse examples of fragment binding "hot" and "warm" spots from the Protein Data Bank (PDB). Additionally, we describe PLImap, a novel protocol for fragment mapping based on the Protein-Ligand Informatics force field (PLIff). We evaluate PLImap against the new fragment mapping test set, and compare its performance to that of simple shape-based algorithms and fragment docking using GOLD. PLImap is made publicly available from https://bitbucket.org/AstexUK/pli.

  2. Application of Discrete Huygens Method for Diffraction of Transient Ultrasonic Field

    NASA Astrophysics Data System (ADS)

    Alia, A.

    2018-01-01

    Several time-domain methods have been widely used to predict impulse responses in acoustics. Despite its great potential, the Discrete Huygens Method (DHM) has not been used as widely in the domain of ultrasonic diffraction as in other fields. In fact, little can be found in the literature about the application of the DHM to diffraction phenomena that can be described in terms of direct and edge waves, a concept suggested by Young as early as 1802. In this paper, a simple axisymmetric DHM model is used to simulate the transient ultrasonic field radiated by a baffled transducer and its diffraction by a target located on the axis. The results are validated against impulse-response-based calculations. They indicate the capability of the DHM to simulate diffraction occurring at transducer and target edges and to predict the complicated transient field in pulse mode.

  3. Speeding up GW Calculations to Meet the Challenge of Large Scale Quasiparticle Predictions

    PubMed Central

    Gao, Weiwei; Xia, Weiyi; Gao, Xiang; Zhang, Peihong

    2016-01-01

    Although the GW approximation is recognized as one of the most accurate theories for predicting the excited-state properties of materials, scaling up conventional GW calculations for large systems remains a major challenge. We present a powerful and simple-to-implement method that can drastically accelerate fully converged GW calculations for large systems, enabling fast and accurate quasiparticle calculations for complex materials systems. We demonstrate the performance of this new method by presenting results for ZnO and MgO supercells. A speed-up factor of nearly two orders of magnitude is achieved for a system containing 256 atoms (1024 valence electrons), with a negligibly small numerical error of ±0.03 eV. Finally, we discuss the application of our method to GW calculations for 2D materials. PMID:27833140

  4. Protein asparagine deamidation prediction based on structures with machine learning methods.

    PubMed

    Jia, Lei; Sun, Yaxiong

    2017-01-01

    Chemical stability is a major concern in the development of protein therapeutics due to its impact on both efficacy and safety. Protein "hotspots" are amino acid residues that are subject to various chemical modifications, including deamidation, isomerization, glycosylation, oxidation, etc. A more accurate prediction method for potential hotspot residues would allow their elimination or reduction as early as possible in the drug discovery process. In this work, we focus on prediction models for asparagine (Asn) deamidation. The sequence-based prediction method simply flags the NG motif (an asparagine followed by a glycine) as liable to deamidation; it still dominates the deamidation evaluation process in most pharmaceutical settings due to its convenience. However, the simple sequence-based method is less accurate and often leads to over-engineering of a protein. We introduce structure-based prediction models by mining available experimental and structural data on deamidated proteins. Our training set contains 194 Asn residues from 25 proteins that all have available high-resolution crystal structures. Experimentally measured deamidation half-lives of Asn in penta-peptides, as well as 3D structure-based properties such as solvent exposure, crystallographic B-factors, local secondary structure, and dihedral angles, were used to train prediction models with several machine learning algorithms. The prediction tools were cross-validated as well as tested with an external test data set. The random forest model showed high enrichment in ranking deamidated residues higher than non-deamidated residues, while effectively eliminating false-positive predictions. It is possible that such quantitative protein structure-function relationship tools can also be applied to other protein hotspot predictions. In addition, we extensively discuss the metrics used to evaluate the performance of predictions on unbalanced data sets such as the deamidation case.
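
    The pipeline the abstract outlines (structural features in, random forest out) can be sketched with scikit-learn as below. The feature encoding and the toy training rows are invented placeholders, not the paper's 194-residue dataset.

      # Sketch of a structure-based deamidation classifier. Assumed features:
      # pentapeptide half-life (days), solvent exposure (fraction), B-factor,
      # and psi dihedral (degrees); labels: 1 = deamidated, 0 = stable.
      import numpy as np
      from sklearn.ensemble import RandomForestClassifier

      X = np.array([
          [1.1, 0.75, 45.0, 150.0],    # fast-deamidating, exposed Asn
          [250.0, 0.05, 12.0, -60.0],  # slow, buried Asn
          [5.0, 0.60, 38.0, 120.0],
          [400.0, 0.10, 15.0, -45.0],
      ])
      y = np.array([1, 0, 1, 0])

      clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
      # Rank candidate Asn residues by predicted deamidation probability.
      print(clf.predict_proba([[2.0, 0.70, 40.0, 140.0]])[:, 1])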

  5. Predict or classify: The deceptive role of time-locking in brain signal classification

    NASA Astrophysics Data System (ADS)

    Rusconi, Marco; Valleriani, Angelo

    2016-06-01

    Several experimental studies claim to be able to predict the outcome of simple decisions from brain signals measured before subjects are aware of their decision. Often, these studies use multivariate pattern recognition methods with the underlying assumption that the ability to classify the brain signal is equivalent to predicting the decision itself. Here we show instead that it is possible to correctly classify a signal even if it does not contain any predictive information about the decision. We first define a simple stochastic model that mimics the random decision process between two equivalent alternatives, and generate a large number of independent trials that contain no choice-predictive information. The trials are first time-locked to the time point of the final event and then classified using standard machine-learning techniques. The resulting classification accuracy is above chance level long before the time point of time-locking. We then analyze the same trials using information theory. We demonstrate that the high classification accuracy is a consequence of time-locking and that its time behavior is simply related to the large relaxation time of the process. We conclude that when time-locking is a crucial step in the analysis of neural activity patterns, both the emergence and the timing of the classification accuracy are affected by structural properties of the network that generates the signal.

  6. High accuracy operon prediction method based on STRING database scores.

    PubMed

    Taboada, Blanca; Verde, Cristina; Merino, Enrique

    2010-07-01

    We present a simple and highly accurate computational method for operon prediction, based on intergenic distances and functional relationships between the protein products of contiguous genes, as defined by the STRING database (Jensen,L.J., Kuhn,M., Stark,M., Chaffron,S., Creevey,C., Muller,J., Doerks,T., Julien,P., Roth,A., Simonovic,M. et al. (2009) STRING 8-a global view on proteins and their functional interactions in 630 organisms. Nucleic Acids Res., 37, D412-D416). These two parameters were used to train a neural network on a subset of experimentally characterized Escherichia coli and Bacillus subtilis operons. Our predictive model was successfully tested on the set of experimentally defined operons in E. coli and B. subtilis, with accuracies of 94.6 and 93.3%, respectively. As far as we know, these are the highest accuracies ever obtained for predicting bacterial operons. Furthermore, in order to evaluate the predictive accuracy of our model when using one organism's data set for the training procedure and a different organism's data set for testing, we repeated the E. coli operon prediction analysis using a neural network trained with B. subtilis data, and a B. subtilis analysis using a neural network trained with E. coli data. Even in these cases, the accuracies reached with our method were outstandingly high, 91.5 and 93%, respectively. These results show the potential use of our method for accurately predicting the operons of any other organism. Our operon predictions for fully sequenced genomes are available at http://operons.ibt.unam.mx/OperonPredictor/.
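
    A minimal sketch of the two-feature classifier described above, assuming scikit-learn's MLPClassifier as the neural network; the toy gene-pair values are illustrative, not the curated E. coli or B. subtilis training sets.

      # Operon prediction from two features: intergenic distance and a STRING
      # functional-association score. A real pipeline would standardize the
      # features and train on experimentally characterized operon pairs.
      import numpy as np
      from sklearn.neural_network import MLPClassifier

      # [intergenic distance (bp), STRING score in [0, 1]]
      X = np.array([[12, 0.95], [30, 0.80], [250, 0.10], [400, 0.05],
                    [5, 0.90], [600, 0.20], [40, 0.70], [300, 0.15]])
      y = np.array([1, 1, 0, 0, 1, 0, 1, 0])  # 1 = same operon, 0 = boundary

      model = MLPClassifier(hidden_layer_sizes=(4,), max_iter=5000,
                            random_state=0).fit(X, y)
      print(model.predict([[20, 0.85]]))  # predicted label for a new gene pair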

  7. Personalized Risk Prediction in Clinical Oncology Research: Applications and Practical Issues Using Survival Trees and Random Forests.

    PubMed

    Hu, Chen; Steingrimsson, Jon Arni

    2018-01-01

    A crucial component of making individualized treatment decisions is to accurately predict each patient's disease risk. In clinical oncology, disease risks are often measured through time-to-event data, such as overall survival and progression/recurrence-free survival, and are often subject to censoring. Risk prediction models based on recursive partitioning methods are becoming increasingly popular largely due to their ability to handle nonlinear relationships, higher-order interactions, and/or high-dimensional covariates. The most popular recursive partitioning methods are versions of the Classification and Regression Tree (CART) algorithm, which builds a simple, interpretable tree-structured model. With the aim of increasing prediction accuracy, the random forest algorithm averages multiple CART trees, creating a flexible risk prediction model. Risk prediction models used in clinical oncology commonly use both traditional demographic and tumor pathological factors as well as high-dimensional genetic markers and treatment parameters from multimodality treatments. In this article, we describe the most commonly used extensions of the CART and random forest algorithms to right-censored outcomes. We focus on how they differ from the methods for noncensored outcomes, and how the different splitting rules and methods for cost-complexity pruning impact these algorithms. We demonstrate these algorithms by analyzing a randomized Phase III clinical trial of breast cancer. We also conduct Monte Carlo simulations to compare the prediction accuracy of survival forests with more commonly used regression models under various scenarios. These simulation studies aim to evaluate how sensitive the prediction accuracy is to the underlying model specifications, the choice of tuning parameters, and the degree of missing covariates.

  8. Estimating phosphorus loss in runoff from manure and fertilizer for a phosphorus loss quantification tool.

    PubMed

    Vadas, P A; Good, L W; Moore, P A; Widman, N

    2009-01-01

    Nonpoint-source pollution of fresh waters by P is a concern because it contributes to accelerated eutrophication. Given the state of the science concerning agricultural P transport, a simple tool to quantify annual, field-scale P loss is a realistic goal. We developed new methods to predict annual dissolved P loss in runoff from surface-applied manures and fertilizers and validated the methods with data from 21 published field studies. We incorporated these manure and fertilizer P runoff loss methods into an annual, field-scale P loss quantification tool that estimates dissolved and particulate P loss in runoff from soil, manure, fertilizer, and eroded sediment. We validated the P loss tool using independent data from 28 studies that monitored P loss in runoff from a variety of agricultural land uses for at least 1 yr. Results demonstrated (i) that our new methods to estimate P loss from surface manure and fertilizer are an improvement over methods used in existing Indexes, and (ii) that it was possible to reliably quantify annual dissolved, sediment, and total P loss in runoff using relatively simple methods and readily available inputs. Thus, a P loss quantification tool that does not require greater degrees of complexity or input data than existing P Indexes could accurately predict P loss across a variety of management and fertilization practices, soil types, climates, and geographic locations. However, estimates of runoff and erosion are still needed that are accurate to a level appropriate for the intended use of the quantification tool.

  9. Prediction of the interaction between a simple moving vehicle and an infinite periodically supported rail - Green's functions approach

    NASA Astrophysics Data System (ADS)

    Mazilu, Traian

    2010-09-01

    This paper describes the interaction between a simple moving vehicle and an infinite periodically supported rail, in order to highlight the basic features of vehicle/track vibration in general, and of wheel/rail vibration in particular. The rail is modelled as an infinite Timoshenko beam resting on semi-sleepers via three-directional rail pads and ballast. The time-domain analysis was performed by applying the Green's matrix method for the track. This method allows the nonlinearities of the wheel/rail contact and the Doppler effect to be taken into account. The numerical analysis is dedicated to the wheel/rail response due to two types of excitation: steady-state interaction and rail irregularities. The study points out certain aspects regarding parametric resonance, the amplitude-modulated vibration due to corrugation, and the Doppler effect.

  10. Adaptive exponential integrate-and-fire model as an effective description of neuronal activity.

    PubMed

    Brette, Romain; Gerstner, Wulfram

    2005-11-01

    We introduce a two-dimensional integrate-and-fire model that combines an exponential spike mechanism with an adaptation equation, based on recent theoretical findings. We describe a systematic method to estimate its parameters with simple electrophysiological protocols (current-clamp injection of pulses and ramps) and apply it to a detailed conductance-based model of a regular spiking neuron. Our simple model correctly predicts the timing of 96% of the spikes (±2 ms) of the detailed model in response to injection of noisy synaptic conductances. The model is especially reliable in high-conductance states, typical of cortical activity in vivo, in which intrinsic conductances were found to have a reduced role in shaping spike trains. These results are promising because this simple model has enough expressive power to reproduce qualitatively several electrophysiological classes described in vitro.
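
    For reference, the model described here, the adaptive exponential integrate-and-fire (AdEx) model, couples a membrane equation containing an exponential spike term to an adaptation current w, with a reset on each spike. Below is a minimal forward-Euler integration using typical published parameter values; it sketches the model itself, not the authors' parameter-estimation protocol.

      # AdEx neuron: C dV/dt = -gL(V-EL) + gL*DeltaT*exp((V-VT)/DeltaT) - w + I
      #              tau_w dw/dt = a(V-EL) - w;  on spike: V -> Vr, w -> w + b
      import numpy as np

      C, gL, EL = 281.0, 30.0, -70.6             # pF, nS, mV
      VT, DeltaT = -50.4, 2.0                    # mV, mV
      a, tau_w, b, Vr = 4.0, 144.0, 80.5, -70.6  # nS, ms, pA, mV
      dt, T, I = 0.1, 500.0, 800.0               # ms, ms, pA (constant drive)

      V, w, spikes = EL, 0.0, []
      for step in range(int(T / dt)):
          dV = (-gL * (V - EL) + gL * DeltaT * np.exp((V - VT) / DeltaT) - w + I) / C
          dw = (a * (V - EL) - w) / tau_w
          V += dt * dV
          w += dt * dw
          if V >= 0.0:                           # spike detected: reset and adapt
              spikes.append(step * dt)
              V, w = Vr, w + b
      print(len(spikes), "spikes in", T, "ms")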

  11. The Phyre2 web portal for protein modelling, prediction and analysis

    PubMed Central

    Kelley, Lawrence A; Mezulis, Stefans; Yates, Christopher M; Wass, Mark N; Sternberg, Michael JE

    2017-01-01

    Summary Phyre2 is a suite of tools available on the web to predict and analyse protein structure, function and mutations. The focus of Phyre2 is to provide biologists with a simple and intuitive interface to state-of-the-art protein bioinformatics tools. Phyre2 replaces Phyre, the original version of the server for which we previously published a protocol. In this updated protocol, we describe Phyre2, which uses advanced remote homology detection methods to build 3D models, predict ligand binding sites, and analyse the effect of amino-acid variants (e.g. nsSNPs) for a user’s protein sequence. Users are guided through the results by a simple interface at a level of detail determined by them. This protocol will guide a user from submitting a protein sequence to interpreting the secondary and tertiary structure of their models, their domain composition and model quality. A range of additional tools is described to find a protein structure in a genome, to submit a large number of sequences at once, and to automatically run weekly searches for proteins that are difficult to model. The server is available at http://www.sbg.bio.ic.ac.uk/phyre2. A typical structure prediction will be returned between 30 minutes and 2 hours after submission. PMID:25950237

  12. Fuzzy cluster analysis of simple physicochemical properties of amino acids for recognizing secondary structure in proteins.

    PubMed Central

    Mocz, G.

    1995-01-01

    Fuzzy cluster analysis has been applied to the 20 amino acids by using 65 physicochemical properties as a basis for classification. The clustering products, the fuzzy sets (i.e., classical sets with associated membership functions), have provided a new measure of amino acid similarities for use in protein folding studies. This work demonstrates that fuzzy sets of simple molecular attributes, when assigned to amino acid residues in a protein's sequence, can predict the secondary structure of the sequence with reasonable accuracy. An approach is presented for discriminating standard folding states, using near-optimum information splitting in half-overlapping segments of the sequence of assigned membership functions. The method is applied to a nonredundant set of 252 proteins and yields approximately 73% matching for correctly predicted and correctly rejected residues, with an approximately 60% overall success rate for the correctly recognized ones in three folding states: alpha-helix, beta-strand, and coil. The most useful attributes for discriminating these states appear to be related to size, polarity, and thermodynamic factors. Van der Waals volume, apparent average thickness of the surrounding molecular free volume, and a measure of dimensionless surface electron density can explain approximately 95% of the prediction results. Hydrogen bonding and hydrophobicity indices do not yet enable clear clustering and prediction. PMID:7549882

  13. Performance of the Tariff Method: validation of a simple additive algorithm for analysis of verbal autopsies

    PubMed Central

    2011-01-01

    Background Verbal autopsies provide valuable information for studying mortality patterns in populations that lack reliable vital registration data. Methods for transforming verbal autopsy results into meaningful information for health workers and policymakers, however, are often costly or complicated to use. We present a simple additive algorithm, the Tariff Method (termed Tariff), which can be used for assigning individual cause of death and for determining cause-specific mortality fractions (CSMFs) from verbal autopsy data. Methods Tariff calculates a score, or "tariff," for each cause, for each sign/symptom, across a pool of validated verbal autopsy data. The tariffs are summed for a given response pattern in a verbal autopsy, and this sum (score) provides the basis for predicting the cause of death in a dataset. We implemented this algorithm and evaluated the method's predictive ability, both in terms of chance-corrected concordance at the individual cause assignment level and in terms of CSMF accuracy at the population level. The analysis was conducted separately for adult, child, and neonatal verbal autopsies across 500 pairs of train-test validation verbal autopsy data. Results Tariff is capable of outperforming physician-certified verbal autopsy in most cases. In terms of chance-corrected concordance, the method achieves 44.5% in adults, 39% in children, and 23.9% in neonates. CSMF accuracy was 0.745 in adults, 0.709 in children, and 0.679 in neonates. Conclusions Verbal autopsies can be an efficient means of obtaining cause of death data, and Tariff provides an intuitive, reliable method for generating individual cause assignment and CSMFs. The method is transparent and flexible and can be readily implemented by users without training in statistics or computer science. PMID:21816107
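
    The additive scoring reduces to a few lines of code: sum each cause's tariffs over the reported signs/symptoms and assign the top-scoring cause. The causes, symptoms, and tariff values below are made up for illustration; real tariffs are learned from validated verbal autopsy data.

      # Tariff-style cause assignment: score = sum of per-symptom tariffs.
      TARIFFS = {
          "road traffic": {"injury": 9.0, "fever": -1.0, "cough": -0.5},
          "pneumonia":    {"injury": -2.0, "fever": 4.0, "cough": 6.5},
          "malaria":      {"injury": -2.0, "fever": 7.0, "cough": 1.0},
      }

      def assign_cause(reported_symptoms):
          scores = {
              cause: sum(t for s, t in tariffs.items() if s in reported_symptoms)
              for cause, tariffs in TARIFFS.items()
          }
          return max(scores, key=scores.get), scores

      cause, scores = assign_cause({"fever", "cough"})
      print(cause, scores)  # pneumonia wins: 4.0 + 6.5 = 10.5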

  14. Discrete stochastic simulation methods for chemically reacting systems.

    PubMed

    Cao, Yang; Samuels, David C

    2009-01-01

    Discrete stochastic chemical kinetics describe the time evolution of a chemically reacting system by taking into account the fact that, in reality, chemical species are present with integer populations and exhibit some degree of randomness in their dynamical behavior. In recent years, with the development of new techniques to study biochemistry dynamics in a single cell, there are increasing studies using this approach to chemical kinetics in cellular systems, where the small copy number of some reactant species in the cell may lead to deviations from the predictions of the deterministic differential equations of classical chemical kinetics. This chapter reviews the fundamental theory related to stochastic chemical kinetics and several simulation methods based on that theory. We focus on nonstiff biochemical systems and the two most important discrete stochastic simulation methods: Gillespie's stochastic simulation algorithm (SSA) and the tau-leaping method. Different implementation strategies of these two methods are discussed. Then we recommend a relatively simple and efficient strategy that combines the strengths of the two methods: the hybrid SSA/tau-leaping method. The implementation details of the hybrid strategy are given here and a related software package is introduced. Finally, the hybrid method is applied to simple biochemical systems as a demonstration of its application.
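
    As a minimal illustration of the SSA discussed above, the sketch below simulates the single-channel reaction A -> B with exponentially distributed waiting times; the rate constant and initial population are arbitrary.

      # Gillespie SSA for A -> B with propensity a1(x) = k * A.
      import math, random

      def ssa(a0=100, k=0.1, t_end=50.0):
          t, A, trajectory = 0.0, a0, [(0.0, a0)]
          while t < t_end and A > 0:
              propensity = k * A
              # Exponential waiting time to the next reaction event.
              t += -math.log(1.0 - random.random()) / propensity
              A -= 1                      # fire the single reaction channel
              trajectory.append((t, A))
          return trajectory

      print(ssa()[-1])  # final (time, A) pair for one stochastic realization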

  15. Evaluating surrogate endpoints, prognostic markers, and predictive markers — some simple themes

    PubMed Central

    Baker, Stuart G.; Kramer, Barnett S.

    2014-01-01

    Background A surrogate endpoint is an endpoint observed earlier than the true endpoint (a health outcome) that is used to draw conclusions about the effect of treatment on the unobserved true endpoint. A prognostic marker is a marker for predicting the risk of an event given a control treatment; it informs treatment decisions when there is information on anticipated benefits and harms of a new treatment applied to persons at high risk. A predictive marker is a marker for predicting the effect of treatment on outcome in a subgroup of patients or study participants; it provides more rigorous information for treatment selection than a prognostic marker when it is based on estimated treatment effects in a randomized trial. Methods We organized our discussion around a different theme for each topic. Results “Fundamentally an extrapolation” refers to the non-statistical considerations and assumptions needed when using surrogate endpoints to evaluate a new treatment. “Decision analysis to the rescue” refers to the use of decision analysis to evaluate an additional prognostic marker, because it is not possible to choose between purely statistical measures of marker performance. “The appeal of simplicity” refers to a straightforward and efficient use of a single randomized trial to evaluate the overall treatment effect and treatment effects within subgroups using predictive markers. Conclusion The simple themes provide a general guideline for the evaluation of surrogate endpoints, prognostic markers, and predictive markers. PMID:25385934

  16. Benchmarking the Fundamental Electronic Properties of small TiO2 Nanoclusters by GW and Coupled Cluster Theory Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berardo, Enrico; Kaplan, Ferdinand; Bhaskaran-Nair, Kiran

    2017-06-19

    We study the vertical ionisation potential, electron affinity, fundamental gap and exciton binding energy values of small bare and hydroxylated TiO2 nanoclusters to understand how the excited-state properties change as a function of size and hydroxylation. In addition, we have employed a range of many-body methods, including G0W0, qsGW, EA/IP-EOM-CCSD and DFT (B3LYP, PBE), to compare the performance and predictions of the different classes of methods. We demonstrate that for bare (i.e. non-hydroxylated) clusters all many-body methods predict the same trend with cluster size. The highest occupied and lowest unoccupied DFT orbitals follow the same trends as the electron affinity and ionisation potentials predicted by the many-body methods but are generally far too shallow and deep, respectively, in absolute terms. In contrast, the ΔDFT method is found to yield values in the correct energy window. However, its predictions depend on the functional used and do not necessarily follow trends based on the many-body methods. The effect of hydroxylation of the clusters is to open up both the optical and fundamental gap. In conclusion, a simple microscopic explanation for the observed trends with cluster size and upon hydroxylation is proposed in terms of the Madelung onsite potential.

  18. Comparison of techniques for correction of magnification of pelvic X-rays for hip surgery planning.

    PubMed

    The, Bertram; Kootstra, Johan W J; Hosman, Anton H; Verdonschot, Nico; Gerritsma, Carina L E; Diercks, Ron L

    2007-12-01

    The aim of this study was to develop an accurate method for correction of magnification of pelvic x-rays to enhance the accuracy of hip surgery planning. All investigated methods aim at estimating the anteroposterior location of the hip joint in the supine position, so that a reference object for correction of magnification can be positioned correctly. An existing method, currently used in clinical practice in our clinics, is based on estimating the position of the hip joint by palpation of the greater trochanter; it is only moderately accurate and difficult to execute reliably in clinical practice. To develop a new method, 99 patients who already had a hip implant in situ were included; this enabled the true location of the hip joint to be deduced from the magnification of the prosthesis. Physical examination was used to obtain predictor variables possibly associated with the height of the hip joint, including a simple dynamic hip joint examination to estimate the position of the center of rotation. Prediction equations were then constructed using regression analysis, and their performance was compared with that of the existing protocol. The mean absolute error in predicting the height of the hip joint center using the old method was 20 mm (range -79 mm to +46 mm); it was 11 mm for the new method (-32 mm to +39 mm). The prediction equation is: height (mm) = 34 + 1/2 × abdominal circumference (cm). The newly developed prediction equation is a superior method for predicting the height of the hip joint center for correction of magnification of pelvic x-rays. We recommend its implementation in departments of radiology and orthopedic surgery.
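
    The final prediction equation transcribes directly into code. The helper name is ours, and reading "height" as the height of the hip joint center above the table in the supine position is our interpretation of the abstract.

      # Published prediction equation: height (mm) = 34 + 1/2 x abdominal
      # circumference (cm), used to position the reference object.
      def hip_center_height_mm(abdominal_circumference_cm):
          return 34.0 + 0.5 * abdominal_circumference_cm

      # Example: a 100 cm abdominal circumference gives 34 + 50 = 84 mm.
      print(hip_center_height_mm(100.0))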

  19. Simple extrapolation method to predict the electronic structure of conjugated polymers from calculations on oligomers

    DOE PAGES

    Larsen, Ross E.

    2016-04-12

    In this study, we introduce two simple tight-binding models, which we call fragment frontier orbital extrapolations (FFOE), to extrapolate important electronic properties to the polymer limit using electronic structure calculations on only a few small oligomers. In particular, we demonstrate by comparison to explicit density functional theory calculations that for long oligomers the energies of the highest occupied molecular orbital (HOMO), the lowest unoccupied molecular orbital (LUMO), and of the first electronic excited state are accurately described as a function of number of repeat units by a simple effective Hamiltonian parameterized from electronic structure calculations on monomers, dimers and, optionally, tetramers. For the alternating copolymer materials that currently comprise some of the most efficient polymer organic photovoltaic devices one can use these simple but rigorous models to extrapolate computed properties to the polymer limit based on calculations on a small number of low-molecular-weight oligomers.
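
    A minimal sketch of the tight-binding idea behind such an extrapolation: each repeat unit contributes one frontier-orbital level, neighboring fragments couple with strength t read off the dimer splitting, and the oligomer HOMO is the top eigenvalue of a tridiagonal Hamiltonian. The site energy and splitting below are invented inputs, and this simple chain is our illustration of the approach rather than the paper's exact FFOE parameterization.

      # Fragment-orbital chain model: H[i,i] = eps, H[i,i+1] = H[i+1,i] = t.
      import numpy as np

      eps = -5.2               # monomer HOMO energy (eV), assumed input
      t = 0.50 / 2.0           # coupling from an assumed 0.50 eV dimer splitting

      def homo_energy(n_units):
          """HOMO of an n-unit oligomer: top eigenvalue of the tridiagonal H."""
          H = (np.diag([eps] * n_units)
               + np.diag([t] * (n_units - 1), 1)
               + np.diag([t] * (n_units - 1), -1))
          return np.linalg.eigvalsh(H)[-1]

      for n in (1, 2, 4, 8, 32):
          print(n, round(homo_energy(n), 3))
      # For large n the HOMO approaches eps + 2t, the polymer band-edge limit.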

  20. From Sour Grapes to Low-Hanging Fruit: A Case Study Demonstrating a Practical Strategy for Natural Language Processing Portability.

    PubMed

    Johnson, Stephen B; Adekkanattu, Prakash; Campion, Thomas R; Flory, James; Pathak, Jyotishman; Patterson, Olga V; DuVall, Scott L; Major, Vincent; Aphinyanaphongs, Yindalon

    2018-01-01

    Natural Language Processing (NLP) holds potential for patient care and clinical research, but a gap exists between promise and reality. While some studies have demonstrated portability of NLP systems across multiple sites, challenges remain. Strategies to mitigate these challenges can strive for complex NLP problems using advanced methods (hard-to-reach fruit), or focus on simple NLP problems using practical methods (low-hanging fruit). This paper investigates a practical strategy for NLP portability using extraction of left ventricular ejection fraction (LVEF) as a use case. We used a tool developed at the Department of Veterans Affairs (VA) to extract LVEF values from free-text echocardiograms in the MIMIC-III database. The approach showed an accuracy of 98.4%, a sensitivity of 99.4%, a positive predictive value of 98.7%, and an F-score of 99.0%. This experience, in which a simple NLP solution proved highly portable with excellent performance, illustrates the point that simple NLP applications may be easier to disseminate and adapt, and in the short term may prove more useful, than complex applications.
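
    The "low-hanging fruit" character of LVEF extraction shows up clearly in code: a single regular expression captures most textual EF mentions. The pattern below is our own minimal illustration, not the VA tool's implementation.

      # Pull LVEF mentions (single values or ranges) out of echo report text.
      import re

      LVEF_RE = re.compile(
          r"(?:LVEF|ejection fraction|EF)\s*(?:is|of|:)?\s*"
          r"(\d{1,2})\s*(?:-|to)?\s*(\d{1,2})?\s*%",
          re.IGNORECASE)

      note = "Mildly dilated LV. Estimated ejection fraction is 50-55%."
      m = LVEF_RE.search(note)
      if m:
          values = [int(g) for g in m.groups() if g]
          print(sum(values) / len(values))  # midpoint of the range: 52.5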

  1. Total ion chromatographic fingerprints combined with chemometrics and mass defect filter to predict antitumor components of Picrasma quassioids.

    PubMed

    Shi, Yuanyuan; Zhan, Hao; Zhong, Liuyi; Yan, Fangrong; Feng, Feng; Liu, Wenyuan; Xie, Ning

    2016-07-01

    A method combining total ion chromatogram fingerprints with chemometrics and mass defect filtering was established for the prediction of active ingredients in Picrasma quassioides samples. The total ion chromatogram data of 28 batches were pretreated with wavelet transformation and correlation optimized warping to correct baseline drifts and retention time shifts. Partial least squares regression was then applied to construct a regression model bridging the total ion chromatogram fingerprints and the antitumor activity of P. quassioides. Finally, the regression coefficients were used to predict the active peaks in the total ion chromatogram fingerprints. In this strategy, mass defect filtering was employed to classify and characterize the active peaks from a chemical point of view. A total of 17 constituents were predicted to be potential active compounds, 16 of which were identified as alkaloids by this approach. The results showed that the established method is not only simple and easy to operate, but also suitable for predicting ultraviolet-undetectable compounds and for providing chemical information for the prediction of active compounds in herbs. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
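
    The core chemometric step regresses aligned fingerprint peaks on activity with partial least squares and reads candidate active peaks off the regression coefficients. It can be sketched with scikit-learn as below; the data are simulated and the peak count arbitrary (the study used 28 batches).

      # PLS regression linking TIC peak areas to antitumor activity; the peaks
      # with the largest positive coefficients are flagged as candidate actives.
      import numpy as np
      from sklearn.cross_decomposition import PLSRegression

      rng = np.random.default_rng(0)
      X = rng.random((28, 40))                  # 28 batches x 40 aligned peaks
      y = 2.0 * X[:, 3] + X[:, 17] + rng.normal(0, 0.05, 28)  # toy activity

      pls = PLSRegression(n_components=3).fit(X, y)
      coef = pls.coef_.ravel()
      print(np.argsort(coef)[-3:][::-1])        # top candidate active peaks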

  2. A framework for qualitative reasoning about solid objects

    NASA Technical Reports Server (NTRS)

    Davis, E.

    1987-01-01

    Predicting the behavior of a qualitatively described system of solid objects requires a combination of geometrical, temporal, and physical reasoning. Methods based upon formulating and solving differential equations are not adequate for robust prediction, since the behavior of a system over extended time may be much simpler than its behavior over local time. A first-order logic, in which one can state simple physical problems and derive their solution deductively, without recourse to solving the differential equations, is discussed. This logic is substantially more expressive and powerful than any previous AI representational system in this domain.

  3. Design Guidelines for Quiet Fans and Pumps for Space Vehicles

    NASA Technical Reports Server (NTRS)

    Lovell, John S.; Magliozzi, Bernard

    2008-01-01

    This document presents guidelines for the design of quiet fans and pumps of the class used on space vehicles. A simple procedure is presented for the prediction of fan noise over the meaningful frequency spectrum. A section also presents general design criteria for axial flow fans, squirrel cage fans, centrifugal fans, and centrifugal pumps. The basis for this report is an experimental program conducted by Hamilton Standard under NASA Contract NAS 9-12457. The derivations of the noise predicting methods used in this document are explained in Hamilton Standard Report SVHSER 6183, "Fan and Pump Noise Control," dated May 1973 (6).

  4. Stochastic Optimal Prediction with Application to Averaged Euler Equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bell, John; Chorin, Alexandre J.; Crutchfield, William

    Optimal prediction (OP) methods compensate for a lack of resolution in the numerical solution of complex problems through the use of an invariant measure as a prior measure in the Bayesian sense. In first-order OP, unresolved information is approximated by its conditional expectation with respect to the invariant measure. In higher-order OP, unresolved information is approximated by a stochastic estimator, leading to a system of random or stochastic differential equations. We explain the ideas through a simple example, and then apply them to the solution of Averaged Euler equations in two space dimensions.

  5. Poly-Omic Prediction of Complex Traits: OmicKriging

    PubMed Central

    Wheeler, Heather E.; Aquino-Michaels, Keston; Gamazon, Eric R.; Trubetskoy, Vassily V.; Dolan, M. Eileen; Huang, R. Stephanie; Cox, Nancy J.; Im, Hae Kyung

    2014-01-01

    High-confidence prediction of complex traits such as disease risk or drug response is an ultimate goal of personalized medicine. Although genome-wide association studies have discovered thousands of well-replicated polymorphisms associated with a broad spectrum of complex traits, the combined predictive power of these associations for any given trait is generally too low to be of clinical relevance. We propose a novel systems approach to complex trait prediction, which leverages and integrates similarity in genetic, transcriptomic, or other omics-level data. We translate the omic similarity into phenotypic similarity using a method called Kriging, commonly used in geostatistics and machine learning. Our method called OmicKriging emphasizes the use of a wide variety of systems-level data, such as those increasingly made available by comprehensive surveys of the genome, transcriptome, and epigenome, for complex trait prediction. Furthermore, our OmicKriging framework allows easy integration of prior information on the function of subsets of omics-level data from heterogeneous sources without the sometimes heavy computational burden of Bayesian approaches. Using seven disease datasets from the Wellcome Trust Case Control Consortium (WTCCC), we show that OmicKriging allows simple integration of sparse and highly polygenic components yielding comparable performance at a fraction of the computing time of a recently published Bayesian sparse linear mixed model method. Using a cellular growth phenotype, we show that integrating mRNA and microRNA expression data substantially increases performance over either dataset alone. Using clinical statin response, we show improved prediction over existing methods. PMID:24799323
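
    A kriging-style predictor of this kind can be sketched by building a similarity matrix (here a genetic relationship matrix) and treating it as a precomputed kernel. The sketch substitutes kernel ridge regression for the published OmicKriging estimator and runs on simulated genotypes, so it illustrates the shape of the computation rather than the actual software.

      # Trait prediction from an omic similarity matrix used as a kernel.
      import numpy as np
      from sklearn.kernel_ridge import KernelRidge

      rng = np.random.default_rng(1)
      G = rng.integers(0, 3, size=(120, 500)).astype(float)   # genotypes 0/1/2
      G = (G - G.mean(0)) / (G.std(0) + 1e-9)                 # standardize SNPs
      K = G @ G.T / G.shape[1]                                # relationship matrix
      y = 0.3 * G[:, :10].sum(1) + rng.normal(0, 1.0, 120)    # toy polygenic trait

      train, test = np.arange(100), np.arange(100, 120)
      model = KernelRidge(alpha=1.0, kernel="precomputed")
      model.fit(K[np.ix_(train, train)], y[train])
      print(model.predict(K[np.ix_(test, train)])[:3])        # predicted traits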

  6. A stochastic vortex structure method for interacting particles in turbulent shear flows

    NASA Astrophysics Data System (ADS)

    Dizaji, Farzad F.; Marshall, Jeffrey S.; Grant, John R.

    2018-01-01

    In a recent study, we have proposed a new synthetic turbulence method based on stochastic vortex structures (SVSs), and we have demonstrated that this method can accurately predict particle transport, collision, and agglomeration in homogeneous, isotropic turbulence in comparison to direct numerical simulation results. The current paper extends the SVS method to non-homogeneous, anisotropic turbulence. The key element of this extension is a new inversion procedure, by which the vortex initial orientation can be set so as to generate a prescribed Reynolds stress field. After validating this inversion procedure for simple problems, we apply the SVS method to the problem of interacting particle transport by a turbulent planar jet. Measures of the turbulent flow and of particle dispersion, clustering, and collision obtained by the new SVS simulations are shown to compare well with direct numerical simulation results. The influence of different numerical parameters, such as number of vortices and vortex lifetime, on the accuracy of the SVS predictions is also examined.

  7. Calibration and combination of monthly near-surface temperature and precipitation predictions over Europe

    NASA Astrophysics Data System (ADS)

    Rodrigues, Luis R. L.; Doblas-Reyes, Francisco J.; Coelho, Caio A. S.

    2018-02-01

    A Bayesian method known as Forecast Assimilation (FA) was used to calibrate and combine monthly near-surface temperature and precipitation outputs from seasonal dynamical forecast systems. The simple multimodel (SMM), a method that combines predictions with equal weights, was used as a benchmark. This research focuses on Europe and adjacent regions for predictions initialized in May and November, covering the boreal summer and winter months. The forecast quality of the FA and SMM, as well as of the single seasonal dynamical forecast systems, was assessed using deterministic and probabilistic measures. A non-parametric bootstrap method was used to account for the sampling uncertainty of the forecast quality measures. We show that the FA performs as well as or better than the SMM in regions where the dynamical forecast systems were able to represent the main modes of climate covariability. An illustration is offered using the near-surface temperature over the North Atlantic, the Mediterranean Sea and the Middle East in the summer months, which is associated with the well-predicted first mode of climate covariability. However, the main modes of climate covariability are not well represented in most situations discussed in this study, as the seasonal dynamical forecast systems have limited skill when predicting the European climate.

  8. EMBEDDED LENSING TIME DELAYS, THE FERMAT POTENTIAL, AND THE INTEGRATED SACHS–WOLFE EFFECT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Bin; Kantowski, Ronald; Dai, Xinyu, E-mail: bchen3@fsu.edu

    2015-05-01

    We derive the Fermat potential for a spherically symmetric lens embedded in a Friedman–Lemaître–Robertson–Walker cosmology and use it to investigate the late-time integrated Sachs–Wolfe (ISW) effect, i.e., secondary temperature fluctuations in the cosmic microwave background (CMB) caused by individual large-scale clusters and voids. We present a simple analytical expression for the temperature fluctuation in the CMB across such a lens as a derivative of the lens’ Fermat potential. This formalism is applicable to both linear and nonlinear density evolution scenarios, to arbitrarily large density contrasts, and to all open and closed background cosmologies. It is much simpler to use and makes the same predictions as conventional approaches. In this approach the total temperature fluctuation can be split into a time-delay part and an evolutionary part. Both parts must be included for cosmic structures that evolve and both can be equally important. We present very simple ISW models for cosmic voids and galaxy clusters to illustrate the ease of use of our formalism. We use the Fermat potentials of simple cosmic void models to compare predicted ISW effects with those recently extracted from WMAP and Planck data by stacking large cosmic voids using the aperture photometry method. If voids in the local universe with large density contrasts are no longer evolving we find that the time delay contribution alone predicts values consistent with the measurements. However, we find that for voids still evolving linearly, the evolutionary contribution cancels a significant part of the time delay contribution and results in predicted signals that are much smaller than recently observed.

  9. ClusPro: an automated docking and discrimination method for the prediction of protein complexes.

    PubMed

    Comeau, Stephen R; Gatchell, David W; Vajda, Sandor; Camacho, Carlos J

    2004-01-01

    Predicting protein interactions is one of the most challenging problems in functional genomics. Given two proteins known to interact, current docking methods evaluate billions of docked conformations by simple scoring functions, and in addition to near-native structures yield many false positives, i.e. structures with good surface complementarity but far from the native. We have developed a fast algorithm for filtering docked conformations with good surface complementarity, and ranking them based on their clustering properties. The free energy filters select complexes with the lowest desolvation and electrostatic energies. Clustering is then used to smooth the local minima and to select the ones with the broadest energy wells, a property associated with the free energy at the binding site. The robustness of the method was tested on sets of 2000 docked conformations generated for 48 pairs of interacting proteins. In 31 of these cases, the top 10 predictions include at least one near-native complex, with an average RMSD of 5 Å from the native structure. The docking and discrimination method also provides good results for a number of complexes that were used as targets in the Critical Assessment of PRedictions of Interactions experiment. The fully automated docking and discrimination server ClusPro can be found at http://structure.bu.edu
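
    The cluster-and-rank step can be sketched as greedy clustering of energy-filtered poses by pairwise RMSD, ranking clusters by size as a proxy for broad energy wells. The clustering radius and the synthetic "poses" below are illustrative assumptions, not ClusPro's exact settings.

      # Greedy RMSD clustering: repeatedly pick the pose with the most
      # neighbors, form a cluster, remove it, and rank clusters by size.
      import numpy as np

      def rmsd(a, b):
          return np.sqrt(((a - b) ** 2).sum(axis=1).mean())

      def cluster_poses(poses, radius=9.0):
          remaining, clusters = list(range(len(poses))), []
          while remaining:
              counts = [sum(rmsd(poses[i], poses[j]) < radius for j in remaining)
                        for i in remaining]
              center = remaining[int(np.argmax(counts))]
              members = [j for j in remaining
                         if rmsd(poses[center], poses[j]) < radius]
              clusters.append((center, members))
              remaining = [j for j in remaining if j not in members]
          return sorted(clusters, key=lambda c: -len(c[1]))

      rng = np.random.default_rng(0)
      centers = [rng.normal(scale=20.0, size=(50, 3)) for _ in range(4)]
      poses = [centers[rng.integers(4)] + rng.normal(scale=2.0, size=(50, 3))
               for _ in range(200)]
      print([len(m) for _, m in cluster_poses(poses)])  # four large clusters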

  10. Uncertainty Estimation Improves Energy Measurement and Verification Procedures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walter, Travis; Price, Phillip N.; Sohn, Michael D.

    2014-05-14

    Implementing energy conservation measures in buildings can reduce energy costs and environmental impacts, but such measures cost money to implement, so intelligent investment strategies require the ability to quantify the energy savings by comparing actual energy used to how much energy would have been used in the absence of the conservation measures (known as the baseline energy use). Methods exist for predicting baseline energy use, but a limitation of most statistical methods reported in the literature is inadequate quantification of the uncertainty in baseline energy use predictions. However, estimation of uncertainty is essential for weighing the risks of investing in retrofits. Most commercial buildings have, or soon will have, electricity meters capable of providing data at short time intervals. These data provide new opportunities to quantify uncertainty in baseline predictions, and to do so after shorter measurement durations than are traditionally used. In this paper, we show that uncertainty estimation provides greater measurement and verification (M&V) information and helps to overcome some of the difficulties with deciding how much data is needed to develop baseline models and to confirm energy savings. We also show that cross-validation is an effective method for computing uncertainty. In so doing, we extend a simple regression-based method of predicting energy use using short-interval meter data. We demonstrate the methods by predicting energy use in 17 real commercial buildings. We discuss the benefits of uncertainty estimates, which can provide actionable decision-making information for investing in energy conservation measures.
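
    The abstract highlights two ingredients: a simple regression baseline fit to short-interval meter data, and cross-validation to quantify prediction uncertainty. Both can be sketched as below; the model form (outdoor temperature plus hour-of-week effects) is a common baseline choice assumed for illustration, not necessarily the paper's.

      # Baseline energy model with cross-validated uncertainty (RMSE spread).
      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(2)
      n = 24 * 90                                  # 90 days of hourly data
      temp = 15 + 10 * rng.random(n)               # outdoor temperature (C)
      hour = np.arange(n) % 168                    # hour of week
      X = np.column_stack([temp, np.eye(168)[hour]])
      y = 50 + 2.0 * temp + 5 * (hour < 120) + rng.normal(0, 3, n)  # toy load

      scores = cross_val_score(LinearRegression(), X, y, cv=10,
                               scoring="neg_root_mean_squared_error")
      print("baseline RMSE: %.2f +/- %.2f kW" % (-scores.mean(), scores.std()))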

  11. Methods and optical fibers that decrease pulse degradation resulting from random chromatic dispersion

    DOEpatents

    Chertkov, Michael; Gabitov, Ildar

    2004-03-02

    The present invention provides methods and optical fibers for periodically pinning an actual (random) accumulated chromatic dispersion of an optical fiber to a predicted accumulated dispersion of the fiber through relatively simple modifications of fiber-optic manufacturing methods or retrofitting of existing fibers. If the pinning occurs with sufficient frequency (at a distance less than or equal to a correlation scale), pulse degradation resulting from random chromatic dispersion is minimized. Alternatively, pinning may occur quasi-periodically, i.e., the pinning distance is distributed between approximately zero and approximately two to three times the correlation scale.

  12. Dislocation-induced stress in polycrystalline materials: mesoscopic simulations in the dislocation density formalism

    NASA Astrophysics Data System (ADS)

    Berkov, D. V.; Gorn, N. L.

    2018-06-01

    In this paper we present a simple and effective numerical method which allows a fast Fourier transformation-based evaluation of stress generated by dislocations with arbitrary directions and Burgers vectors if the (site-dependent) dislocation density is known. Our method allows the evaluation of the dislocation stress using a rectangular grid with shape-anisotropic discretization cells without employing higher multipole moments of the dislocation interaction coefficients. Using the proposed method, we first simulate the stress created by relatively simple non-homogeneous distributions of vertical edge and so-called ‘mixed’ dislocations in a disk-shaped sample, which is necessary to understand the dislocation behavior in more complicated systems. The main part of our research is devoted to the stress distribution in polycrystalline layers with the dislocation density rapidly varying with the distance to the layer bottom. Considering GaN as a typical example of such systems, we investigate dislocation-induced stress for edge and mixed dislocations, having random orientations of Burgers vectors among crystal grains. We show that the rapid decay of the dislocation density leads to many highly non-trivial features of the stress distributions in such layers and study in detail the dependence of these features on the average grain size. Finally we develop an analytical approach which allows us to predict the evolution of the stress variance with the grain size and compare analytical predictions with numerical results.

  13. Millimeter wave satellite communication studies. Results of the 1981 propagation modeling effort

    NASA Technical Reports Server (NTRS)

    Stutzman, W. L.; Tsolakis, A.; Dishman, W. K.

    1982-01-01

    Theoretical modeling associated with rain effects on millimeter wave propagation is detailed. Three areas of work are discussed. A simple model for prediction of rain attenuation is developed and evaluated. A method for computing scattering from single rain drops is presented. A complete multiple scattering model is described which permits accurate calculation of the effects on dual polarized signals passing through rain.

  14. COUSCOus: improved protein contact prediction using an empirical Bayes covariance estimator.

    PubMed

    Rawi, Reda; Mall, Raghvendra; Kunji, Khalid; El Anbari, Mohammed; Aupetit, Michael; Ullah, Ehsan; Bensmail, Halima

    2016-12-15

    The post-genomic era with its wealth of sequences gave rise to a broad range of protein residue-residue contact detecting methods. Although various coevolution methods such as PSICOV, DCA and plmDCA provide correct contact predictions, they do not completely overlap. Hence, new approaches and improvements of existing methods are needed to motivate further development and progress in the field. We present a new contact detecting method, COUSCOus, by combining the best shrinkage approach, the empirical Bayes covariance estimator, with GLasso. Using the original PSICOV benchmark dataset, COUSCOus achieves mean accuracies of 0.74, 0.62 and 0.55 for the top L/10 predicted long, medium and short range contacts, respectively. In addition, COUSCOus attains mean areas under the precision-recall curves of 0.25, 0.29 and 0.30 for long, medium and short contacts and outperforms PSICOV. We also observed that COUSCOus outperforms PSICOV with respect to the Matthews correlation coefficient criterion on the full list of residue contacts. Furthermore, COUSCOus achieves on average 10% more gain in prediction accuracy compared to PSICOV on an independent test set composed of CASP11 protein targets. Finally, we showed that when using a simple random forest meta-classifier, by combining contact detecting techniques and sequence derived features, PSICOV predictions should be replaced by the more accurate COUSCOus predictions. We conclude that the consideration of superior covariance shrinkage approaches will boost several research fields that apply the GLasso procedure, amongst them the presented one of residue-residue contact prediction as well as fields such as gene network reconstruction.
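
    The pipeline has a simple shape: shrink the covariance of encoded alignment columns, sparse-invert it with the graphical lasso, and treat large partial correlations as predicted contacts. The sketch below does this with scikit-learn; note the substitutions, with Ledoit-Wolf shrinkage standing in for the empirical Bayes estimator and random data standing in for a real multiple sequence alignment.

      # Shrunk covariance -> graphical lasso -> contacts from the precision matrix.
      import numpy as np
      from sklearn.covariance import LedoitWolf, graphical_lasso

      rng = np.random.default_rng(3)
      X = rng.normal(size=(500, 30))    # 500 sequences x 30 encoded columns (toy)

      shrunk = LedoitWolf().fit(X).covariance_
      _, precision = graphical_lasso(shrunk, alpha=0.05)
      # Score column pairs by off-diagonal precision magnitude.
      scores = np.abs(precision) * (1 - np.eye(precision.shape[0]))
      i, j = np.unravel_index(np.argmax(scores), scores.shape)
      print("top predicted contact between columns", i, "and", j)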

  15. Studies of aerothermal loads generated in regions of shock/shock interaction in hypersonic flow

    NASA Technical Reports Server (NTRS)

    Holden, Michael S.; Moselle, John R.; Lee, Jinho

    1991-01-01

    Experimental studies were conducted to examine the aerothermal characteristics of shock/shock/boundary layer interaction regions generated by single and multiple incident shocks. These studies were conducted over a Mach number range from 6 to 19 for a range of Reynolds numbers, to obtain both laminar and turbulent interaction regions. Detailed heat transfer and pressure measurements were made for a range of interaction types and incident shock strengths over a transverse cylinder, with emphasis on the type 3 and type 4 interaction regions. The measurements were compared with the simple Edney, Keyes, and Hains models for a range of interaction configurations and freestream conditions. The complex flowfields and aerothermal loads generated by multiple-shock impingement, while not generating as large peak loads, provide important test cases for code prediction. The detailed heat transfer and pressure measurements provided a good basis for evaluating the accuracy of simple prediction methods and detailed numerical solutions for laminar and transitional regions of shock/shock interaction.

  16. Prediction during statistical learning, and implications for the implicit/explicit divide

    PubMed Central

    Dale, Rick; Duran, Nicholas D.; Morehead, J. Ryan

    2012-01-01

    Accounts of statistical learning, both implicit and explicit, often invoke predictive processes as central to learning, yet practically all experiments employ non-predictive measures during training. We argue that the common theoretical assumption of anticipation and prediction needs clearer, more direct evidence for it during learning. We offer a novel experimental context to explore prediction, and report results from a simple sequential learning task designed to promote predictive behaviors in participants as they responded to a short sequence of simple stimulus events. Predictive tendencies in participants were measured using their computer mouse, the trajectories of which served as a means of tapping into predictive behavior while participants were exposed to very short and simple sequences of events. A total of 143 participants were randomly assigned to stimulus sequences along a continuum of regularity. Analysis of computer-mouse trajectories revealed that (a) participants almost always anticipate events in some manner, (b) participants exhibit two stable patterns of behavior, either reacting to vs. predicting future events, (c) the extent to which participants predict relates to performance on a recall test, and (d) explicit reports of perceiving patterns in the brief sequence correlates with extent of prediction. We end with a discussion of implicit and explicit statistical learning and of the role prediction may play in both kinds of learning. PMID:22723817

  17. A univariate model of river water nitrate time series

    NASA Astrophysics Data System (ADS)

    Worrall, F.; Burt, T. P.

    1999-01-01

    Four time series were taken from three catchments in the North and South of England. The sites chosen included two in predominantly agricultural catchments, one at the tidal limit and one downstream of a sewage treatment works. A time series model was constructed for each of these series as a means of decomposing the elements controlling river water nitrate concentrations and to assess whether this approach could provide a simple management tool for protecting water abstractions. Autoregressive (AR) modelling of the detrended and deseasoned time series showed a "memory effect". This memory effect expressed itself as an increase in the winter-summer difference in nitrate levels that was dependent upon the nitrate concentration 12 or 6 months previously. Autoregressive moving average (ARMA) modelling showed that one of the series contained seasonal, non-stationary elements that appeared as an increasing trend in the winter-summer difference. The ARMA model was used to predict nitrate levels, and predictions were tested against data held back from the model construction process; predictions gave average percentage errors of less than 10%. Empirical modelling can therefore provide a simple, efficient method for constructing management models for downstream water abstraction.
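
    A minimal version of the univariate workflow, fitting a seasonal ARMA model to monthly nitrate concentrations and testing held-back predictions, can be sketched with statsmodels; the synthetic series below stands in for the catchment data.

      # Seasonal ARMA fit (ARIMA with d = 0) and a 12-month hold-back test.
      import numpy as np
      from statsmodels.tsa.arima.model import ARIMA

      rng = np.random.default_rng(4)
      months = np.arange(120)
      nitrate = 6 + 2.5 * np.cos(2 * np.pi * months / 12) + rng.normal(0, 0.5, 120)

      train, test = nitrate[:108], nitrate[108:]   # hold back the final year
      fit = ARIMA(train, order=(1, 0, 1), seasonal_order=(1, 0, 0, 12)).fit()
      forecast = fit.forecast(steps=12)
      mape = 100 * np.mean(np.abs((forecast - test) / test))
      print("average percentage error: %.1f%%" % mape)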

  18. Molecular simulation of simple fluids and polymers in nanoconfinement

    NASA Astrophysics Data System (ADS)

    Rasmussen, Christopher John

    Prediction of phase behavior and transport properties of simple fluids and polymers confined to nanoscale pores is important to a wide range of chemical and biochemical engineering processes. A practical approach to investigating nanoscale systems is molecular simulation, specifically Monte Carlo (MC) methods. One of the most challenging problems is the need to calculate chemical potentials in simulated phases. Through the seminal work of Widom, practitioners have a powerful method for calculating chemical potentials. Yet this method fails for dense and inhomogeneous systems, as well as for complex molecules such as polymers. In this dissertation, the gauge cell MC method, which had previously been successfully applied to confined simple fluids, was employed and extended to investigate nanoscale fluids in several key areas. Firstly, the process of cavitation (the formation and growth of bubbles) during desorption of fluids from nanopores was investigated. The dependence of cavitation pressure on pore size was determined using gauge cell MC calculations of the nucleation barriers, correlated with experimental data. Additional computational studies elucidated the role of surface defects and pore connectivity in the formation of cavitation bubbles. Secondly, the gauge cell method was extended to polymers. The method was verified against literature results and found to be significantly more efficient. It was used to examine adsorption of polymers in nanopores. These results were applied to model the dynamics of translocation, the act of a polymer threading through a small opening, which is implicated in drug packaging and delivery, and in DNA sequencing. Translocation dynamics was studied as diffusion along the free energy landscape. Thirdly, we show how computer simulation of polymer adsorption can shed light on the specifics of polymer chromatography, which is a key tool for the analysis and purification of polymers. The quality of separation depends on the physico-chemical mechanisms of polymer/pore interaction. We considered liquid chromatography at critical conditions, and calculated the dependence of the partition coefficient on chain length. Finally, solvent-gradient chromatography was modeled using a statistical model of polymer adsorption. A model for predicting the separation of complex polymers (with functional groups or copolymers) was developed for practical use in chromatographic separations.

  19. Parameterization, sensitivity analysis, and inversion: an investigation using groundwater modeling of the surface-mined Tivoli-Guidonia basin (Metropolitan City of Rome, Italy)

    NASA Astrophysics Data System (ADS)

    La Vigna, Francesco; Hill, Mary C.; Rossetto, Rudy; Mazza, Roberto

    2016-09-01

    With respect to model parameterization and sensitivity analysis, this work uses a practical example to suggest that methods that start with simple models and use computationally frugal model analysis methods remain valuable in any model development toolbox. In this work, groundwater model calibration starts with a simple parameterization that evolves into a moderately complex model. The model is developed for a water management study of the Tivoli-Guidonia basin (Rome, Italy), where surface mining has been conducted in conjunction with substantial dewatering. The approach to model development used in this work employs repeated analysis using sensitivity and inverse methods, including use of a new observation-stacked parameter importance graph. The methods are highly parallelizable and require few model runs, which makes the repeated analyses and attendant insights possible. The success of a model development design can be measured by the insights attained and by demonstrated model accuracy relevant to predictions. Two example insights were obtained: (1) a long-held belief that, except for a few distinct fractures, the travertine is homogeneous was found to be inadequate, and (2) the dewatering pumping rate is more critical to model accuracy than expected. The latter insight motivated additional data collection and improved pumpage estimates. Validation tests using three other recharge and pumpage conditions suggest good accuracy for the predictions considered. The model was used to evaluate management scenarios and showed that similar dewatering results could be achieved using 20% less pumped water, but this would require installing newly positioned wells and cooperation between mine owners.

  20. Shock loading predictions from application of indicial theory to shock-turbulence interactions

    NASA Technical Reports Server (NTRS)

    Keefe, Laurence R.; Nixon, David

    1991-01-01

    A sequence of steps that permits prediction of some of the characteristics of the pressure field beneath a fluctuating shock wave from knowledge of the oncoming turbulent boundary layer is presented. The theory first predicts the power spectrum and pdf of the position and velocity of the shock wave, which are then used to obtain the shock frequency distribution, and the pdf of the pressure field, as a function of position within the interaction region. To test the validity of the crucial assumption of linearity, the indicial response of a normal shock is calculated from numerical simulation. This indicial response, after being fit by a simple relaxation model, is used to predict the shock position and velocity spectra, along with the shock passage frequency distribution. The low frequency portion of the shock spectra, where most of the energy is concentrated, is satisfactorily predicted by this method.

  1. Technique to Obtain a Predictable Aesthetic Result through Appropriate Placement of the Prosthesis/Soft Tissue Junction in the Edentulous Patient with a Gingival Smile.

    PubMed

    Demurashvili, Georgy; Davarpanah, Keyvan; Szmukler-Moncler, Serge; Davarpanah, Mithridade; Raux, Didier; Capelle-Ouadah, Nedjoua; Rajzbaum, Philippe

    2015-10-01

    Treating the edentulous patient with a gingival smile requires securing the prosthesis/soft tissue junction (PSTJ) under the upper lip. To present a simple method that helps achieve a predictable aesthetic result when alveoplasty of the anterior maxilla is needed to place implants apical to the presurgical position of the alveolar ridge. The maximum smile line of the patient is recorded and carved on a thin silicone bite impression as a soft tissue landmark. During the three-dimensional radiographic examination, the patient wears the silicone guide loaded with radiopaque markers. The NobelClinician® software is then used to bring the hard and soft tissue landmarks together in a single reading. Using the software, a line is drawn 5 mm apical to the smile line; it dictates the position of the crestal ridge to be reached following the alveoplasty. Subsequently, the simulated implant position and the simulated residual bone height following alveoplasty can be simultaneously evaluated on each transverse section. An alveoplasty of the anterior maxilla was performed as simulated on the software, and implants were placed accordingly. The PSTJ was always under the upper lip, even during maximum smile events. The aesthetic result was, therefore, fully satisfactory. This simple method permits the placement of the PSTJ under the upper lip with a predictable outcome; it ensures a reliable aesthetic result for the edentulous patient with a gingival smile. © 2013 Wiley Periodicals, Inc.

  2. The potential impact of integrated malaria transmission control on entomologic inoculation rate in highly endemic areas.

    PubMed

    Killeen, G F; McKenzie, F E; Foy, B D; Schieffelin, C; Billingsley, P F; Beier, J C

    2000-05-01

    We have used a relatively simple but accurate model for predicting the impact of integrated transmission control on the malaria entomologic inoculation rate (EIR) at four endemic sites from across sub-Saharan Africa and the southwest Pacific. The simulated campaign incorporated modestly effective vaccine coverage, bed net use, and larval control. The results indicate that such campaigns would reduce EIRs at all four sites by 30- to 50-fold. Even without the vaccine, 15- to 25-fold reductions of EIR were predicted, implying that integrated control with a few modestly effective tools can meaningfully reduce malaria transmission in a range of endemic settings. The model accurately predicts the effects of bed nets and indoor spraying and demonstrates that they are the most effective tools available for reducing EIR. However, the impact of domestic adult vector control is amplified by measures for reducing the rate of emergence of vectors or the level of infectiousness of the human reservoir. We conclude that available tools, including currently neglected methods for larval control, can reduce malaria transmission intensity enough to alleviate mortality. Integrated control programs should be implemented to the fullest extent possible, even in areas of intense transmission, using simple models as decision-making tools. However, we also conclude that to eliminate malaria in many areas of intense transmission is beyond the scope of methods which developing nations can currently afford. New, cost-effective, practical tools are needed if malaria is ever to be eliminated from highly endemic areas.

  3. A Simple Model Predicting Individual Weight Change in Humans

    PubMed Central

    Thomas, Diana M.; Martin, Corby K.; Heymsfield, Steven; Redman, Leanne M.; Schoeller, Dale A.; Levine, James A.

    2010-01-01

    Excessive weight in adults is a national concern with over 2/3 of the US population deemed overweight. Because being overweight has been correlated to numerous diseases such as heart disease and type 2 diabetes, there is a need to understand mechanisms and predict outcomes of weight change and weight maintenance. A simple mathematical model that accurately predicts individual weight change offers opportunities to understand how individuals lose and gain weight and can be used to foster patient adherence to diets in clinical settings. For this purpose, we developed a one-dimensional differential equation model of weight change based on the energy balance equation, paired with an algebraic relationship between fat free mass and fat mass derived from a large nationally representative sample of recently released data collected by the Centers for Disease Control. We validate the model's ability to predict individual participants' weight change by comparing model estimates of final weight with data from two recent underfeeding studies and one overfeeding study. Mean absolute error and standard deviation between model predictions and observed measurements of final weights are less than 1.8 ± 1.3 kg for the underfeeding studies and 2.5 ± 1.6 kg for the overfeeding study. Comparison of the model predictions to other one-dimensional models of weight change shows improvement in mean absolute error, standard deviation of mean absolute error, and group mean predictions. The maximum absolute individual error decreased by approximately 60%, substantiating reliability in individual weight change predictions. The model provides a viable method for estimating individual weight change as a result of changes in intake and determining individual dietary adherence during weight change studies. PMID:24707319
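
    The general shape of such a one-dimensional model is a single energy-balance ODE, dW/dt = (intake - expenditure) / lambda. The sketch below uses textbook-style assumed parameters (lambda of roughly 7700 kcal per kg of tissue, expenditure proportional to weight) rather than the authors' fitted fat-free-mass relationship.

        import numpy as np

        # Minimal energy-balance weight model (assumed parameters, not the paper's):
        # dW/dt = (intake - expenditure) / lam, with expenditure ~ k * W
        lam = 7700.0      # kcal per kg of tissue (common approximation)
        k = 31.0          # maintenance expenditure, kcal per kg per day (assumption)

        def simulate(weight0, intake, days, dt=1.0):
            w = weight0
            traj = [w]
            for _ in range(int(days / dt)):
                dwdt = (intake - k * w) / lam
                w += dwdt * dt                      # forward Euler step
                traj.append(w)
            return np.array(traj)

        # Example: 90 kg person reduces intake to 2200 kcal/day for one year
        traj = simulate(90.0, 2200.0, 365)
        print(f"weight after 1 year: {traj[-1]:.1f} kg "
              f"(steady state {2200.0 / k:.1f} kg)")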

  4. Simulating boundary layer transition with low-Reynolds-number k-epsilon turbulence models. I - An evaluation of prediction characteristics. II - An approach to improving the predictions

    NASA Technical Reports Server (NTRS)

    Schmidt, R. C.; Patankar, S. V.

    1991-01-01

    The capability of two k-epsilon low-Reynolds number (LRN) turbulence models, those of Jones and Launder (1972) and Lam and Bremhorst (1981), to predict transition in external boundary-layer flows subject to free-stream turbulence is analyzed. Both models correctly predict the basic qualitative aspects of boundary-layer transition with free stream turbulence, but for calculations started at low values of certain defined Reynolds numbers, the transition is generally predicted at unrealistically early locations. Also, the methods predict transition lengths significantly shorter than those found experimentally. An approach to overcoming these deficiencies without abandoning the basic LRN k-epsilon framework is developed. This approach limits the production term in the turbulent kinetic energy equation and is based on a simple stability criterion. It is correlated to the free-stream turbulence value. The modification is shown to improve the qualitative and quantitative characteristics of the transition predictions.

  5. Using a combined computational-experimental approach to predict antibody-specific B cell epitopes.

    PubMed

    Sela-Culang, Inbal; Benhnia, Mohammed Rafii-El-Idrissi; Matho, Michael H; Kaever, Thomas; Maybeno, Matt; Schlossman, Andrew; Nimrod, Guy; Li, Sheng; Xiang, Yan; Zajonc, Dirk; Crotty, Shane; Ofran, Yanay; Peters, Bjoern

    2014-04-08

    Antibody epitope mapping is crucial for understanding B cell-mediated immunity and required for characterizing therapeutic antibodies. In contrast to T cell epitope mapping, no computational tools are in widespread use for prediction of B cell epitopes. Here, we show that, utilizing the sequence of an antibody, it is possible to identify discontinuous epitopes on its cognate antigen. The predictions are based on residue-pairing preferences and other interface characteristics. We combined these antibody-specific predictions with results of cross-blocking experiments that identify groups of antibodies with overlapping epitopes to improve the predictions. We validate the high performance of this approach by mapping the epitopes of a set of antibodies against the previously uncharacterized D8 antigen, using complementary techniques to reduce method-specific biases (X-ray crystallography, peptide ELISA, deuterium exchange, and site-directed mutagenesis). These results suggest that antibody-specific computational predictions and simple cross-blocking experiments allow for accurate prediction of residues in conformational B cell epitopes. Copyright © 2014 Elsevier Ltd. All rights reserved.

  6. Fiber optic distributed temperature sensing for fire source localization

    NASA Astrophysics Data System (ADS)

    Sun, Miao; Tang, Yuquan; Yang, Shuang; Sigrist, Markus W.; Li, Jun; Dong, Fengzhong

    2017-08-01

    A method for localizing a fire source based on a distributed temperature sensor system is proposed. Two sections of optical fiber were placed orthogonally to each other as the sensing elements. A tray of alcohol was lit to act as a fire outbreak in a cabinet with an uneven ceiling, simulating a real fire scene. Experiments were carried out to demonstrate the feasibility of the method. Rather large fluctuations and systematic errors in predicting the exact room coordinates of the fire source, caused by the uneven ceiling, were observed. Two mathematical methods (smoothing the recorded temperature curves and finding the temperature peak positions) to improve the prediction accuracy are presented, and the experimental results indicate that the fluctuation ranges and systematic errors are significantly reduced. The proposed scheme is simple and appears reliable enough to locate a fire source in large spaces.
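
    The two corrections described, smoothing the recorded temperature traces and locating the temperature peak along each fiber, can be sketched with standard signal-processing calls; the synthetic profile, smoothing window, and peak threshold below are assumptions.

        import numpy as np
        from scipy.signal import savgol_filter, find_peaks

        # Synthetic temperature profile along one fiber (position in m, temp in C)
        rng = np.random.default_rng(2)
        x = np.linspace(0, 50, 500)
        profile = 25 + 40 * np.exp(-((x - 18.3) ** 2) / 4.0) + rng.normal(0, 1.5, x.size)

        # Smooth the trace, then take the peak position as the fire coordinate
        smooth = savgol_filter(profile, window_length=31, polyorder=3)
        peaks, _ = find_peaks(smooth, height=40)
        print("estimated fire position along fiber (m):", x[peaks])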

  7. Numerical noise prediction in fluid machinery

    NASA Astrophysics Data System (ADS)

    Pantle, Iris; Magagnato, Franco; Gabi, Martin

    2005-09-01

    Numerical methods have become increasingly important in the design and optimization of fluid machinery. Where noise emission is concerned, however, standardized prediction methods combining flow and acoustic optimization are hardly to be found. Several numerical field methods for sound calculations have been developed. Due to the complexity of the flows considered, approaches must be chosen that avoid exhaustive computation. In this contribution the noise of a simple propeller is investigated. The configurations of the calculations comply with an existing experimental setup chosen for evaluation. The in-house CFD solver SPARC used here contains an acoustic module based on the Ffowcs Williams-Hawkings acoustic analogy. From the flow results of the time-dependent Large Eddy Simulation, the time-dependent acoustic sources are extracted and passed to the acoustic module, where the relevant sound pressure levels are calculated. The difficulties that arise in proceeding from open to closed rotors and from gas to liquid are discussed.

  8. Monitoring methods and predictive models for water status in Jonathan apples.

    PubMed

    Trincă, Lucia Carmen; Căpraru, Adina-Mirela; Arotăriţei, Dragoş; Volf, Irina; Chiruţă, Ciprian

    2014-02-01

    Evaluation of water status in Jonathan apples was performed for 20 days. Loss of moisture content (LMC) was determined through slow drying of whole apples, and moisture content (MC) was determined through oven drying and lyophilisation of apple samples (chunks, crushed and juice). We developed a non-destructive method to evaluate the LMC and MC of apples using image processing and a multilayer neural network (NN) predictor. We propose a new simple algorithm that selects the texture descriptors from an initial set chosen heuristically. Both the structure and the weights of the NN are optimised by a genetic algorithm with a variable-length genotype, which led to a high precision of the predictive model (R(2)=0.9534). In our opinion, the development of this non-destructive method for the assessment of LMC and MC (and of other chemical parameters) seems very promising for online inspection of food quality. Copyright © 2013 Elsevier Ltd. All rights reserved.

  9. Characterization and prediction of residues determining protein functional specificity.

    PubMed

    Capra, John A; Singh, Mona

    2008-07-01

    Within a homologous protein family, proteins may be grouped into subtypes that share specific functions that are not common to the entire family. Often, the amino acids present in a small number of sequence positions determine each protein's particular functional specificity. Knowledge of these specificity determining positions (SDPs) aids in protein function prediction, drug design and experimental analysis. A number of sequence-based computational methods have been introduced for identifying SDPs; however, their further development and evaluation have been hindered by the limited number of known experimentally determined SDPs. We combine several bioinformatics resources to automate a process, typically undertaken manually, to build a dataset of SDPs. The resulting large dataset, which consists of SDPs in enzymes, enables us to characterize SDPs in terms of their physicochemical and evolutionary properties. It also facilitates the large-scale evaluation of sequence-based SDP prediction methods. We present a simple sequence-based SDP prediction method, GroupSim, and show that, surprisingly, it is competitive with a representative set of current methods. We also describe ConsWin, a heuristic that considers sequence conservation of neighboring amino acids, and demonstrate that it improves the performance of all methods tested on our large dataset of enzyme SDPs. Datasets and GroupSim code are available online at http://compbio.cs.princeton.edu/specificity/. Supplementary data are available at Bioinformatics online.

  10. A multivariate prediction model for Rho-dependent termination of transcription.

    PubMed

    Nadiras, Cédric; Eveno, Eric; Schwartz, Annie; Figueroa-Bossi, Nara; Boudvillain, Marc

    2018-06-21

    Bacterial transcription termination proceeds via two main mechanisms triggered either by simple, well-conserved (intrinsic) nucleic acid motifs or by the motor protein Rho. Although bacterial genomes can harbor hundreds of termination signals of either type, only intrinsic terminators are reliably predicted. Computational tools to detect the more complex and diversiform Rho-dependent terminators are lacking. To tackle this issue, we devised a prediction method based on Orthogonal Projections to Latent Structures Discriminant Analysis [OPLS-DA] of a large set of in vitro termination data. Using previously uncharacterized genomic sequences for biochemical evaluation and OPLS-DA, we identified new Rho-dependent signals and quantitative sequence descriptors with significant predictive value. Most relevant descriptors specify features of transcript C>G skewness, secondary structure, and richness in regularly-spaced 5'CC/UC dinucleotides that are consistent with known principles for Rho-RNA interaction. Descriptors collectively warrant OPLS-DA predictions of Rho-dependent termination with a ∼85% success rate. Scanning of the Escherichia coli genome with the OPLS-DA model identifies significantly more termination-competent regions than anticipated from transcriptomics and predicts that regions intrinsically refractory to Rho are primarily located in open reading frames. Altogether, this work delineates features important for Rho activity and describes the first method able to predict Rho-dependent terminators in bacterial genomes.

  11. Use of group theory in the interpretation of infrared and Raman spectra. [Tables, vibrational spectroscopy]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Silberman, E.; Morgan, H.W.

    1977-01-01

    Application of the mathematical theory of groups to the symmetry of molecules is a powerful method which permits the prediction, classification, and qualitative description of many molecular properties. In the particular case of vibrational molecular spectroscopy, applications of group theory lead to simple methods for the prediction of the number of bands to be found in the infrared and Raman spectra, their shape and polarization, and the qualitative description of the normal modes with which they are associated. The tables necessary for the application of group theory to vibrational spectroscopy and instructions on how to use them for molecular gases, liquids, and solutions are presented. A brief introduction to the concepts, definitions, nomenclature, and formulae is also included.
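
    The band-count predictions described rest on the reduction formula n_i = (1/h) * sum_R g(R) * chi(R) * chi_i(R). A short sketch for H2O in C2v follows; the character-table values are standard, with the molecule taken in the yz plane (the B1/B2 labels are convention-dependent).

        import numpy as np

        # C2v character table: operations E, C2, sigma_v(xz), sigma_v'(yz); order h = 4
        ops = ["E", "C2", "s_xz", "s_yz"]
        g = np.array([1, 1, 1, 1])                   # operations per class
        irreps = {
            "A1": np.array([1,  1,  1,  1]),
            "A2": np.array([1,  1, -1, -1]),
            "B1": np.array([1, -1,  1, -1]),
            "B2": np.array([1, -1, -1,  1]),
        }

        # Reducible representation of all 3N Cartesian displacements of H2O (yz plane)
        gamma_3N = np.array([9, -1, 1, 3])

        # Reduction formula: n_i = (1/h) * sum_R g(R) * chi(R) * chi_i(R)
        h = g.sum()
        counts = {name: int(round((g * gamma_3N * chi).sum() / h))
                  for name, chi in irreps.items()}
        print("Gamma_3N =", counts)                  # {'A1': 3, 'A2': 1, 'B1': 2, 'B2': 3}

        # Remove translations (A1 + B1 + B2) and rotations (A2 + B1 + B2)
        for label in ["A1", "B1", "B2", "A2", "B1", "B2"]:
            counts[label] -= 1
        print("vibrations:", {k: v for k, v in counts.items() if v > 0})  # 2 A1 + 1 B2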

  12. Lattice modeling and calibration with turn-by-turn orbit data

    NASA Astrophysics Data System (ADS)

    Huang, Xiaobiao; Sebek, Jim; Martin, Don

    2010-11-01

    A new method that exploits turn-by-turn beam position monitor (BPM) data to calibrate lattice models of accelerators is proposed. The turn-by-turn phase space coordinates at one location of the ring are first established using data from two BPMs separated by a simple section with a known transfer matrix, such as a drift space. The phase space coordinates are then tracked with the model to predict positions at other BPMs, which can be compared to measurements. The model is adjusted to minimize the difference between the measured and predicted orbit data. BPM gains and rolls are included as fitting variables. This technique can be applied to either the entire ring or a section of it. We have tested the method experimentally on a part of the SPEAR3 ring.
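
    The first step described, recovering (x, x') at one BPM from two readings across a known drift, is simply x' = (x2 - x1)/L; the sketch below does this and then tracks the coordinates through an assumed transfer matrix to predict a downstream BPM reading. All lattice numbers are made-up placeholders, not SPEAR3 values.

        import numpy as np

        L_drift = 2.0                     # drift length between BPM1 and BPM2 (m, assumed)

        def phase_space_from_bpms(x1, x2):
            """Reconstruct (x, x') at BPM1 from readings across a known drift."""
            return np.array([x1, (x2 - x1) / L_drift])

        # Assumed transfer matrix from BPM1 to a third BPM: drift + thin quad + drift
        f = 5.0                           # quadrupole focal length, m (assumption)
        drift = lambda s: np.array([[1.0, s], [0.0, 1.0]])
        quad = np.array([[1.0, 0.0], [-1.0 / f, 1.0]])
        M = drift(3.0) @ quad @ drift(L_drift)

        # Synthetic turn-by-turn readings at BPM1 and BPM2 (betatron-like motion)
        turns = np.arange(200)
        x1 = 1e-3 * np.cos(2 * np.pi * 0.22 * turns)
        x2 = 1e-3 * np.cos(2 * np.pi * 0.22 * turns - 0.6)

        state = phase_space_from_bpms(x1, x2)        # shape (2, 200)
        predicted_bpm3 = (M @ state)[0]              # model prediction at BPM3
        print("first five predicted BPM3 readings (m):", predicted_bpm3[:5])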

  13. Experimental Evaluation of Balance Prediction Models for Sit-to-Stand Movement in the Sagittal Plane

    PubMed Central

    Pena Cabra, Oscar David; Watanabe, Takashi

    2013-01-01

    Evaluation of balance control ability is becoming important in rehabilitation training. In this paper, in order to clarify the usefulness and limitations of a traditional simple inverted pendulum model for balance prediction in sit-to-stand movements, the traditional simple model was compared to an inverted pendulum model with variable inertia (rotational radius) that includes multiple-joint influence. The predictions were tested upon experimentation with six healthy subjects. The evaluation showed that the multiple-joint influence model is more accurate in predicting balance under demanding sit-to-stand conditions. On the other hand, the evaluation also showed that the traditionally used simple inverted pendulum model is still reliable in predicting balance during sit-to-stand movement under non-demanding (normal) conditions. In particular, the simple model was shown to be effective for sit-to-stand movements with low center of mass velocity at seat-off. Moreover, almost all trajectories under the normal condition seemed to follow the same control strategy, in which the subjects used more energy than the minimum necessary for standing up. This suggests that safety considerations take precedence over energy-efficiency considerations during a sit-to-stand, since the most energy-efficient trajectory is close to the backward-fall boundary. PMID:24187580

  14. Predicting tidal currents in San Francisco Bay using a spectral model

    USGS Publications Warehouse

    Burau, Jon R.; Cheng, Ralph T.

    1988-01-01

    This paper describes the formulation of a spectral (or frequency based) model which solves the linearized shallow water equations. To account for highly variable basin bathymetry, spectral solutions are obtained using the finite element method which allows the strategic placement of the computation points in the specific areas of interest or in areas where the gradients of the dependent variables are expected to be large. Model results are compared with data using simple statistics to judge overall model performance in the San Francisco Bay estuary. Once the model is calibrated and verified, prediction of the tides and tidal currents in San Francisco Bay is accomplished by applying astronomical tides (harmonic constants deduced from field data) at the prediction time along the model boundaries.
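
    The final prediction step described, synthesizing the tide from harmonic constants, amounts to eta(t) = sum_k A_k cos(omega_k t - phi_k). A minimal sketch follows; the constituent speeds are the standard astronomical values, but the amplitudes and phases are made-up placeholders, not San Francisco Bay constants.

        import numpy as np

        # Illustrative harmonic constants: (name, speed deg/hour, amplitude m, phase deg)
        constituents = [
            ("M2", 28.9841042, 0.58, 220.0),
            ("S2", 30.0000000, 0.14, 240.0),
            ("K1", 15.0410686, 0.37, 105.0),
            ("O1", 13.9430356, 0.23,  90.0),
        ]

        def tide(t_hours):
            """Predicted tidal elevation eta(t) = sum_k A_k cos(omega_k t - phi_k)."""
            eta = np.zeros_like(t_hours, dtype=float)
            for _, speed, amp, phase in constituents:
                omega = np.deg2rad(speed)          # rad per hour
                eta += amp * np.cos(omega * t_hours - np.deg2rad(phase))
            return eta

        t = np.linspace(0, 48, 97)                 # two days at 30-minute steps
        print(np.round(tide(t)[:8], 3))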

  15. Hurricane track forecast cones from fluctuations

    PubMed Central

    Meuel, T.; Prado, G.; Seychelles, F.; Bessafi, M.; Kellay, H.

    2012-01-01

    Trajectories of tropical cyclones may show large deviations from predicted tracks leading to uncertainty as to their landfall location for example. Prediction schemes usually render this uncertainty by showing track forecast cones representing the most probable region for the location of a cyclone during a period of time. By using the statistical properties of these deviations, we propose a simple method to predict possible corridors for the future trajectory of a cyclone. Examples of this scheme are implemented for hurricane Ike and hurricane Jimena. The corridors include the future trajectory up to at least 50 h before landfall. The cones proposed here shed new light on known track forecast cones as they link them directly to the statistics of these deviations. PMID:22701776

  16. Technique for Predicting the RF Field Strength Inside an Enclosure

    NASA Technical Reports Server (NTRS)

    Hallett, M.; Reddell, J.

    1998-01-01

    This Memorandum presents a simple analytical technique for predicting the RF electric field strength inside an enclosed volume in which radio frequency radiation occurs. The technique was developed to predict the radio frequency (RF) field strength within a launch vehicle's fairing from payloads launched with their telemetry transmitters radiating, and to assess the impact of that radiation on the vehicle and payload. The RF field strength is shown to be a function of the surface materials and surface areas. The method accounts for RF energy losses within exposed surfaces, through RF windows, and within multiple layers of dielectric materials which may cover the surfaces. This Memorandum includes the rigorous derivation of all equations and presents examples and data to support the validity of the technique.

  17. Accuracy of binding mode prediction with a cascadic stochastic tunneling method.

    PubMed

    Fischer, Bernhard; Basili, Serena; Merlitz, Holger; Wenzel, Wolfgang

    2007-07-01

    We investigate the accuracy of the binding modes predicted for 83 complexes of the high-resolution subset of the ASTEX/CCDC receptor-ligand database using the atomistic FlexScreen approach with a simple forcefield-based scoring function. The median RMS deviation between experimental and predicted binding mode was just 0.83 A. Over 80% of the ligands dock within 2 A of the experimental binding mode, for 60 complexes the docking protocol locates the correct binding mode in all of ten independent simulations. Most docking failures arise because (a) the experimental structure clashed in our forcefield and is thus unattainable in the docking process or (b) because the ligand is stabilized by crystal water. 2007 Wiley-Liss, Inc.

  18. Ultimate Longitudinal Strength of Composite Ship Hulls

    NASA Astrophysics Data System (ADS)

    Zhang, Xiangming; Huang, Lingkai; Zhu, Libao; Tang, Yuhang; Wang, Anwen

    2017-01-01

    A simple analytical model to estimate the longitudinal strength of ship hulls in composite materials under buckling, material failure and ultimate collapse is presented in this paper. Ship hulls are regarded as assemblies of stiffened panels, which are idealized as groups of plate-stiffener combinations. The ultimate strain of a plate-stiffener combination under buckling or material failure is predicted with composite beam-column theory. The effects of initial imperfection of the ship hull and eccentricity of load are included. The corresponding longitudinal strengths of the ship hull are derived in a straightforward manner. A longitudinally framed ship hull made of symmetrically stacked unidirectional plies under sagging is analyzed. The results indicate that the present analytical results agree well with FEM results. Initial deflection of the ship hull and load eccentricity can dramatically reduce its bending capacity. The proposed formulations provide a simple but useful tool for longitudinal strength estimation in practical design.

  19. High-throughput purification of recombinant proteins using self-cleaving intein tags.

    PubMed

    Coolbaugh, M J; Shakalli Tang, M J; Wood, D W

    2017-01-01

    High throughput methods for recombinant protein production using E. coli typically involve the use of affinity tags for simple purification of the protein of interest. One drawback of these techniques is the occasional need for tag removal before study, which can be hard to predict. In this work, we demonstrate two high throughput purification methods for untagged protein targets based on simple and cost-effective self-cleaving intein tags. Two model proteins, E. coli beta-galactosidase (βGal) and superfolder green fluorescent protein (sfGFP), were purified using self-cleaving versions of the conventional chitin-binding domain (CBD) affinity tag and the nonchromatographic elastin-like-polypeptide (ELP) precipitation tag in a 96-well filter plate format. Initial tests with shake flask cultures confirmed that the intein purification scheme could be scaled down, with >90% pure product generated in a single step using both methods. The scheme was then validated in a high throughput expression platform using 24-well plate cultures followed by purification in 96-well plates. For both tags and with both target proteins, the purified product was consistently obtained in a single-step, with low well-to-well and plate-to-plate variability. This simple method thus allows the reproducible production of highly pure untagged recombinant proteins in a convenient microtiter plate format. Copyright © 2016 Elsevier Inc. All rights reserved.

  20. Rapid investigation of α-glucosidase inhibitory activity of Phaleria macrocarpa extracts using FTIR-ATR based fingerprinting.

    PubMed

    Easmin, Sabina; Sarker, Md Zaidul Islam; Ghafoor, Kashif; Ferdosh, Sahena; Jaffri, Juliana; Ali, Md Eaqub; Mirhosseini, Hamed; Al-Juhaimi, Fahad Y; Perumal, Vikneswari; Khatib, Alfi

    2017-04-01

    Phaleria macrocarpa, known as "Mahkota Dewa", is a widely used medicinal plant in Malaysia. This study focused on the characterization of the α-glucosidase inhibitory activity of P. macrocarpa extracts using Fourier transform infrared spectroscopy (FTIR)-based metabolomics. P. macrocarpa and its extracts contain thousands of compounds having synergistic effects. Their composition is variable, and many active components are present in meager amounts. Thus, conventional single-component measurement methods for quality control are time consuming, laborious, expensive, and unreliable. It is therefore of great interest to develop a rapid prediction method for herbal quality control that investigates the α-glucosidase inhibitory activity of P. macrocarpa by multicomponent analysis. In this study, a rapid and simple analytical method was developed using FTIR spectroscopy-based fingerprinting. A total of 36 extracts of different ethanol concentrations were prepared, tested for inhibitory potential, and fingerprinted using FTIR spectroscopy coupled with chemometrics of orthogonal partial least squares (OPLS) in the 4000-400 cm-1 frequency region at a resolution of 4 cm-1. The OPLS model generated the highest regression coefficient, with R2Y = 0.98 and Q2Y = 0.70, the lowest root mean square error of estimation (17.17), and a root mean square error of cross-validation of 57.29. A five-component (1+4+0) predictive model was built to correlate the FTIR spectra with activity, and the functional groups responsible for the bioactivity, such as -CH, -NH, -COOH, and -OH, were identified. A successful multivariate model was constructed using FTIR-attenuated total reflection as a simple and rapid technique to predict the inhibitory activity. Copyright © 2016. Published by Elsevier B.V.

  1. Diagnostic value of ultrasound indicators of neoplastic risk in preoperative differentiation of adnexal masses

    PubMed Central

    Bachanek, Michał; Trojanowski, Seweryn; Cendrowski, Krzysztof; Sawicki, Włodzimierz

    2013-01-01

    Aim: To assess the diagnostic value of the risk of malignancy indices and simple ultrasound-based rules in preoperative differentiation of adnexal masses. Material and methods: Retrospective examination of 87 patients admitted to hospital due to adnexal tumors. The lesions were evaluated on the basis of the international ultrasound classification of ovarian tumors, and four risk of malignancy indices were calculated based on ultrasound examination, CA 125 concentration and menopausal status. Results: The patients were aged between 17 and 79; the mean age was 44.5 (standard deviation SD=16.6). Most of the patients (60.91%) were premenopausal. The sensitivity of the simple ultrasound-based rules in the diagnosis of malignancies was 64.71% and the specificity was 90.00%. A statistically significant difference in the presence of the malignant process was demonstrated in relation to age, menopausal status, CA 125 concentration and the analyzed ultrasound score. All indices were characterized by similar sensitivity and specificity. The highest specificity and predictive value for malignant lesions was demonstrated by the risk of malignancy index proposed by Yamamoto. The risk of malignancy index according to Jacobs, however, showed the highest predictive value in the case of non-malignant lesions. Conclusions: The multiparametric ultrasound examination may facilitate the selection of patients with adnexal tumors to provide them with an appropriate treatment – observation, laparotomy or laparoscopy. These parameters constitute a simple ambulatory method of determining the character of adnexal masses before recommending appropriate treatment. PMID:26674849
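
    For concreteness, a Jacobs-style risk of malignancy index multiplies an ultrasound score, a menopausal score, and the serum CA 125 level. The sketch below uses the commonly quoted scoring convention and the 200 cutoff, both treated here as assumptions rather than the exact indices compared in the study.

        def rmi_jacobs(ultrasound_features: int, postmenopausal: bool, ca125: float) -> float:
            """RMI = U * M * CA-125; U scored 0/1/3 by number of suspicious features."""
            u = 0 if ultrasound_features == 0 else (1 if ultrasound_features == 1 else 3)
            m = 3 if postmenopausal else 1
            return u * m * ca125

        # Example: premenopausal patient, 2 suspicious features, CA 125 = 45 U/ml
        score = rmi_jacobs(ultrasound_features=2, postmenopausal=False, ca125=45.0)
        print(score, "-> raised risk" if score > 200 else "-> low risk")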

  2. A simple formula for estimating Stark widths of neutral lines. [of stellar atmospheres]

    NASA Technical Reports Server (NTRS)

    Freudenstein, S. A.; Cooper, J.

    1978-01-01

    A simple formula for the prediction of Stark widths of neutral lines similar to the semiempirical method of Griem (1968) for ion lines is presented. This formula is a simplification of the quantum-mechanical classical path impact theory and can be used for complicated atoms for which detailed calculations are not readily available, provided that the effective position of the closest interacting level is known. The expression does not require the use of a computer. The formula has been applied to a limited number of neutral lines of interest, and the width obtained is compared with the much more complete calculations of Bennett and Griem (1971). The agreement generally is well within 50% of the published value for the lines investigated. Comparisons with other formulas are also made. In addition, a simple estimate for the ion-broadening parameter is given.

  3. Debris-flow runout predictions based on the average channel slope (ACS)

    USGS Publications Warehouse

    Prochaska, A.B.; Santi, P.M.; Higgins, J.D.; Cannon, S.H.

    2008-01-01

    Prediction of the runout distance of a debris flow is an important element in the delineation of potentially hazardous areas on alluvial fans and for the siting of mitigation structures. Existing runout estimation methods rely on input parameters that are often difficult to estimate, including volume, velocity, and frictional factors. In order to provide a simple method for preliminary estimates of debris-flow runout distances, we developed a model that provides runout predictions based on the average channel slope (ACS model) for non-volcanic debris flows that emanate from confined channels and deposit on well-defined alluvial fans. This model was developed from 20 debris-flow events in the western United States and British Columbia. Based on a runout estimation method developed for snow avalanches, this model predicts debris-flow runout as an angle of reach from a fixed point in the drainage channel to the end of the runout zone. The best fixed point was found to be the mid-point elevation of the drainage channel, measured from the apex of the alluvial fan to the top of the drainage basin. Predicted runout lengths were more consistent than those obtained from existing angle-of-reach estimation methods. Results of the model compared well with those of laboratory flume tests performed using the same range of channel slopes. The robustness of this model was tested by applying it to three debris-flow events not used in its development: predicted runout ranged from 82 to 131% of the actual runout for these three events. Prediction interval multipliers were also developed so that the user may calculate predicted runout within specified confidence limits. © 2008 Elsevier B.V. All rights reserved.
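
    The geometry behind the angle-of-reach prediction can be sketched directly: a line inclined at the reach angle alpha from the fixed point attains horizontal distance H/tan(alpha) for a given elevation drop H, and intersecting that line with the fan profile gives the predicted end of the runout zone. The sketch is illustrative only; the ACS model's fitted regression for alpha is not reproduced, and all inputs are made-up.

        import math

        def runout_reach_distance(drop_m: float, reach_angle_deg: float) -> float:
            """Horizontal distance reached for elevation drop H at reach angle alpha:
            tan(alpha) = H / L  =>  L = H / tan(alpha)."""
            return drop_m / math.tan(math.radians(reach_angle_deg))

        # Example: fixed point (channel mid-point elevation) 240 m above the fan toe,
        # with an assumed angle of reach of 18 degrees
        print(f"predicted runout distance: {runout_reach_distance(240.0, 18.0):.0f} m")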

  4. Simple numerical method for predicting steady compressible flows

    NASA Technical Reports Server (NTRS)

    Vonlavante, Ernst; Nelson, N. Duane

    1986-01-01

    A numerical method for solving the isenthalpic form of the governing equations for compressible viscous and inviscid flows was developed. The method was based on the concept of flux vector splitting in its implicit form. The method was tested on several demanding inviscid and viscous configurations. Two different forms of the implicit operator were investigated. The time marching to steady state was accelerated by the implementation of the multigrid procedure. Its various forms very effectively increased the rate of convergence of the present scheme. High quality steady state results were obtained in most of the test cases; these required only short computational times due to the relative efficiency of the basic method.

  5. Numerical simulation of axisymmetric turbulent flow in combustors and diffusors. Ph.D. Thesis. Final Report

    NASA Technical Reports Server (NTRS)

    Yung, Chain Nan

    1988-01-01

    A method for predicting turbulent flow in combustors and diffusers is developed. The Navier-Stokes equations, incorporating a kappa-epsilon turbulence model, were solved in a nonorthogonal curvilinear coordinate system. The solution applied the finite volume method to discretize the differential equations and utilized the SIMPLE algorithm iteratively to solve the differenced equations. A zonal grid method, wherein the flow field was divided into several subsections, was developed. This approach permitted different computational schemes to be used in the various zones. In addition, grid generation was made a simpler task. However, treatment of the zonal boundaries required special handling. Boundary overlap and interpolating techniques were used, and an adjustment of the flow variables was required to assure conservation of mass, momentum and energy fluxes. The numerical accuracy was assessed using different finite differencing methods, i.e., hybrid, quadratic upwind and skew upwind, to represent the convection terms. Flows in different geometries of combustors and diffusers were simulated, results were compared with experimental data, and good agreement was obtained.

  6. PREDICTION OF INTERFACIAL AREAS DURING IMBIBITION IN SIMPLE POROUS MEDIA. (R827116)

    EPA Science Inventory

    The interfacial area between wetting (W-) and non-wetting (NW-) phases is one of the crucial parameters in several flow and transport processes in porous media. This paper gives predictions of such areas during imbibition (displacement of NW-phase by W) in simple porous media....

  7. Linear prediction data extrapolation superresolution radar imaging

    NASA Astrophysics Data System (ADS)

    Zhu, Zhaoda; Ye, Zhenru; Wu, Xiaoqing

    1993-05-01

    Range resolution and cross-range resolution of range-doppler imaging radars are related to the effective bandwidth of the transmitted signal and the angle through which the object rotates relative to the radar line of sight (RLOS) during the coherent processing time, respectively. In this paper, the linear prediction data extrapolation discrete Fourier transform (LPDEDFT) superresolution imaging method is investigated for the purpose of surpassing the limitation imposed by conventional FFT range-doppler processing and improving the resolution capability of range-doppler imaging radars. The LPDEDFT superresolution imaging method, which is conceptually simple, consists of extrapolating the observed data beyond the observation windows by means of linear prediction, and then performing the conventional IDFT of the extrapolated data. Live data from a metalized scale model B-52 aircraft mounted on a rotating platform in a microwave anechoic chamber and from a flying Boeing-727 aircraft were processed. It is concluded that, compared to the conventional Fourier method, LPDEDFT yields either higher resolution for the same effective bandwidth of the transmitted signal and total rotation angle of the object, or equal-quality images from a smaller bandwidth and total angle.
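
    A hedged sketch of the LPDEDFT idea follows: fit linear-prediction (AR) coefficients to the observed window by least squares, extrapolate beyond it, then take the DFT. The two-tone test signal, model order, and extrapolation length are assumptions.

        import numpy as np

        def ar_coefficients(x, order):
            """Least-squares fit of forward linear-prediction coefficients."""
            rows = [x[i:i + order][::-1] for i in range(len(x) - order)]
            coeffs, *_ = np.linalg.lstsq(np.array(rows), x[order:], rcond=None)
            return coeffs

        def extrapolate(x, order, extra):
            a = ar_coefficients(x, order)
            y = list(x)
            for _ in range(extra):
                y.append(np.dot(a, y[-1:-order - 1:-1]))   # most recent sample first
            return np.array(y)

        # Two closely spaced tones observed over a short window
        n = np.arange(64)
        x = np.cos(2 * np.pi * 0.200 * n) + np.cos(2 * np.pi * 0.215 * n)

        spec_short = np.abs(np.fft.fft(x, 512))
        spec_lp = np.abs(np.fft.fft(extrapolate(x, order=20, extra=192), 512))

        f = np.fft.fftfreq(512)
        for name, s in [("64-sample DFT", spec_short), ("LP-extrapolated DFT", spec_lp)]:
            peaks = f[np.argsort(s[:256])[-2:]]
            print(name, "-> two largest peaks near", np.round(np.sort(np.abs(peaks)), 3))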

  8. Development of procedures for calculating stiffness and damping properties of elastomers. Part 3: The effects of temperature, dissipation level and geometry

    NASA Technical Reports Server (NTRS)

    Smalley, A. J.; Tessarzik, J. M.

    1975-01-01

    Effects of temperature, dissipation level and geometry on the dynamic behavior of elastomer elements were investigated. Force displacement relationships in elastomer elements and the effects of frequency, geometry and temperature upon these relationships are reviewed. Based on this review, methods of reducing stiffness and damping data for shear and compression test elements to material properties (storage and loss moduli) and empirical geometric factors are developed and tested using previously generated experimental data. A prediction method which accounts for large amplitudes of deformation is developed on the assumption that their effect is to increase temperature through the elastomers, thereby modifying the local material properties. Various simple methods of predicting the radial stiffness of ring cartridge elements are developed and compared. Material properties were determined from the shear specimen tests as a function of frequency and temperature. Using these material properties, numerical predictions of stiffness and damping for cartridge and compression specimens were made and compared with corresponding measurements at different temperatures, with encouraging results.

  9. Binding Affinity prediction with Property Encoded Shape Distribution signatures

    PubMed Central

    Das, Sourav; Krein, Michael P.

    2010-01-01

    We report the use of the molecular signatures known as “Property-Encoded Shape Distributions” (PESD) together with standard Support Vector Machine (SVM) techniques to produce validated models that can predict the binding affinity of a large number of protein ligand complexes. This “PESD-SVM” method uses PESD signatures that encode molecular shapes and property distributions on protein and ligand surfaces as features to build SVM models that require no subjective feature selection. A simple protocol was employed for tuning the SVM models during their development, and the results were compared to SFCscore – a regression-based method that was previously shown to perform better than 14 other scoring functions. Although the PESD-SVM method is based on only two surface property maps, the overall results were comparable. For most complexes with a dominant enthalpic contribution to binding (ΔH/-TΔS > 3), a good correlation between true and predicted affinities was observed. Entropy and solvent were not considered in the present approach and further improvement in accuracy would require accounting for these components rigorously. PMID:20095526

  10. An Empirical Non-TNT Approach to Launch Vehicle Explosion Modeling

    NASA Technical Reports Server (NTRS)

    Blackwood, James M.; Skinner, Troy; Richardson, Erin H.; Bangham, Michal E.

    2015-01-01

    In an effort to increase crew survivability from catastrophic explosions of Launch Vehicles (LV), a study was conducted to determine the best method for predicting LV explosion environments in the near field. After reviewing such methods as TNT equivalence, Vapor Cloud Explosion (VCE) theory, and Computational Fluid Dynamics (CFD), it was determined that the best approach for this study was to assemble all available empirical data from full scale launch vehicle explosion tests and accidents. Approximately 25 accidents or full-scale tests were found that had some amount of measured blast wave, thermal, or fragment explosion environment characteristics. Blast wave overpressure was found to be much lower in the near field than predicted by most TNT equivalence methods. Additionally, fragments tended to be larger, fewer, and slower than expected if the driving force was from a high explosive type event. In light of these discoveries, a simple model for cryogenic rocket explosions is presented. Predictions from this model encompass all known applicable full scale launch vehicle explosion data. Finally, a brief description of on-going analysis and testing to further refine the launch vehicle explosion environment is discussed.

  11. AAA gunner model based on observer theory. [predicting a gunner's tracking response]

    NASA Technical Reports Server (NTRS)

    Kou, R. S.; Glass, B. C.; Day, C. N.; Vikmanis, M. M.

    1978-01-01

    The Luenberger observer theory is used to develop a predictive model of a gunner's tracking response in antiaircraft artillery systems. This model is composed of an observer, a feedback controller and a remnant element. An important feature of the model is that the structure is simple, hence a computer simulation requires only a short execution time. A parameter identification program based on the least squares curve fitting method and the Gauss Newton gradient algorithm is developed to determine the parameter values of the gunner model. Thus, a systematic procedure exists for identifying model parameters for a given antiaircraft tracking task. Model predictions of tracking errors are compared with human tracking data obtained from manned simulation experiments. Model predictions are in excellent agreement with the empirical data for several flyby and maneuvering target trajectories.
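
    The observer at the core of such a model reconstructs the gunner's state from the measured output via x_hat' = A x_hat + B u + L (y - C x_hat). The discrete-time double-integrator plant and gain below are illustrative assumptions, not the identified gunner-model parameters.

        import numpy as np

        # Double-integrator tracking plant (assumed), discretized with time step dt
        dt = 0.01
        A = np.array([[1.0, dt], [0.0, 1.0]])
        B = np.array([[0.5 * dt ** 2], [dt]])
        C = np.array([[1.0, 0.0]])
        Lg = np.array([[0.4], [2.0]])          # observer gain (assumption, stable)

        rng = np.random.default_rng(3)
        x = np.array([[0.0], [1.0]])           # true state: position, velocity
        xh = np.zeros((2, 1))                  # observer estimate

        for k in range(500):
            u = np.array([[0.1 * np.sin(0.02 * k)]])     # arbitrary input
            y = C @ x + rng.normal(0, 0.01)              # noisy position measurement
            # Observer update: x_hat <- A x_hat + B u + L (y - C x_hat)
            xh = A @ xh + B @ u + Lg @ (y - C @ xh)
            x = A @ x + B @ u

        print("true state:    ", x.ravel())
        print("observer state:", xh.ravel())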

  12. Finite element analysis of large transient elastic-plastic deformations of simple structures, with application to the engine rotor fragment containment/deflection problem

    NASA Technical Reports Server (NTRS)

    Wu, R. W.; Witmer, E. A.

    1972-01-01

    Assumed-displacement versions of the finite-element method are developed to predict large-deformation elastic-plastic transient deformations of structures. Both the conventional and a new improved finite-element variational formulation are derived. These formulations are then developed in detail for straight-beam and curved-beam elements undergoing (1) Bernoulli-Euler-Kirchhoff or (2) Timoshenko deformation behavior, in one plane. For each of these categories, several types of assumed-displacement finite elements are developed, and transient response predictions are compared with available exact solutions for small-deflection, linear-elastic transient responses. The present finite-element predictions for large-deflection elastic-plastic transient responses are evaluated via several beam and ring examples for which experimental measurements of transient strains and large transient deformations and independent finite-difference predictions are available.

  13. State-space prediction model for chaotic time series

    NASA Astrophysics Data System (ADS)

    Alparslan, A. K.; Sayar, M.; Atilgan, A. R.

    1998-08-01

    A simple method for predicting the continuation of scalar chaotic time series ahead in time is proposed. The false nearest neighbors technique in connection with the time-delayed embedding is employed so as to reconstruct the state space. A local forecasting model based upon the time evolution of the topological neighboring in the reconstructed phase space is suggested. A moving root-mean-square error is utilized in order to monitor the error along the prediction horizon. The model is tested for the convection amplitude of the Lorenz model. The results indicate that for approximately 100 cycles of the training data, the prediction follows the actual continuation very closely for about six cycles. The proposed model, like other state-space forecasting models, captures the long-term behavior of the system due to the use of spatial neighbors in the state space.
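
    A stripped-down version of this scheme, delay-embed the series, find the nearest historical neighbor of the current state, and take its successor as the forecast, can be written directly; the logistic-map series and the embedding parameters below are assumptions standing in for the Lorenz convection amplitude.

        import numpy as np

        def delay_embed(x, dim, tau):
            n = len(x) - (dim - 1) * tau
            return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

        def predict_next(x, dim=3, tau=1):
            """Nearest-neighbor forecast in the reconstructed state space."""
            E = delay_embed(x, dim, tau)
            query, library = E[-1], E[:-1]
            j = np.argmin(np.linalg.norm(library - query, axis=1))
            return x[j + (dim - 1) * tau + 1]    # successor of the nearest neighbor

        # Chaotic test series: logistic map x_{n+1} = 4 x_n (1 - x_n)
        x = np.empty(1000)
        x[0] = 0.3
        for i in range(999):
            x[i + 1] = 4.0 * x[i] * (1.0 - x[i])

        # Iterated one-step predictions along a short horizon
        history = list(x[:900])
        for step in range(5):
            nxt = predict_next(np.array(history))
            print(f"step {step}: predicted {nxt:.4f}  actual {x[900 + step]:.4f}")
            history.append(x[900 + step])        # roll forward with the true value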

  14. A simple method for predicting solar fractions of IPH and space heating systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chauhan, R.; Goodling, J.S.

    1982-01-01

    In this paper, a method has been developed to evaluate the solar fractions of liquid based industrial process heat (IPH) and space heating systems, without the use of computer simulations. The new method is the result of joining two theories, Lunde's equation to determine monthly performance of solar heating systems and the utilizability correlations of Collares-Pereira and Rabl, by making appropriate assumptions. The new method requires the input of the monthly averages of the utilizable radiation and the collector operating time. These quantities are determined conveniently by the method of Collares-Pereira and Rabl. A comparison of the results of the new method with the most acceptable design methods shows excellent agreement.

  15. Optimization of a two stage light gas gun. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Rynearson, R. J.; Rand, J. L.

    1972-01-01

    Performance characteristics of the Texas A&M University light gas gun are presented along with a review of basic gun theory and popular prediction methods. A computer routine based on the simple isentropic compression method is discussed. Results from over 60 test shots are given which demonstrate an increase in gun muzzle velocity from 9,100 ft/sec to 19,000 ft/sec. The data gathered indicate that the Texas A&M light gas gun more closely resembles an isentropic compression gun than a shock compression gun.

  16. Effect of Boundary Conditions on the Axial Compression Buckling of Homogeneous Orthotropic Composite Cylinders in the Long Column Range

    NASA Technical Reports Server (NTRS)

    Mikulas, Martin M., Jr.; Nemeth, Michael P.; Oremont, Leonard; Jegley, Dawn C.

    2011-01-01

    Buckling loads for long isotropic and laminated cylinders are calculated based on Euler, Fluegge and Donnell's equations. Results from these methods are presented using simple parameters useful for fundamental design work. Buckling loads for two types of simply supported boundary conditions are calculated using finite element methods for comparison to select cases of the closed form solution. Results indicate that relying on Donnell theory can result in an over-prediction of buckling loads by as much as 40% in isotropic materials.
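
    For the long-column range discussed here, the Euler load P_cr = pi^2 E I / (K L)^2 is the baseline result; a small sketch for a thin-walled isotropic cylinder treated as a column follows (all numerical values are illustrative assumptions).

        import math

        # Thin-walled cylinder treated as an Euler column (illustrative values)
        E = 70e9            # Young's modulus, Pa (aluminium-like)
        R, t = 0.25, 0.002  # mean radius and wall thickness, m
        L = 10.0            # column length, m
        K = 1.0             # effective-length factor: 1.0 for pinned-pinned ends

        I = math.pi * R ** 3 * t            # second moment of a thin-walled tube
        P_cr = math.pi ** 2 * E * I / (K * L) ** 2
        print(f"Euler buckling load: {P_cr / 1e3:.1f} kN")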

  17. Calculation of density of states for modeling photoemission using method of moments

    NASA Astrophysics Data System (ADS)

    Finkenstadt, Daniel; Lambrakos, Samuel G.; Jensen, Kevin L.; Shabaev, Andrew; Moody, Nathan A.

    2017-09-01

    Modeling photoemission using the Moments Approach (akin to Spicer's "Three Step Model") is often presumed to follow simple models for the prediction of two critical properties of photocathodes: the yield or "Quantum Efficiency" (QE), and the intrinsic spreading of the beam or "emittance" ε_n,rms. The simple models, however, tend to obscure properties of electrons in materials, the understanding of which is necessary for a proper prediction of a semiconductor or metal's QE and ε_n,rms. This electronic structure is characterized by localized resonance features as well as a universal trend at high energy. Presented in this study is a prototype analysis concerning the density of states (DOS) factor D(E) for Copper in bulk, to replace the simple three-dimensional free-electron form D(E) = (m/(π²ħ³))√(2mE) currently used in the Moments Approach. This analysis demonstrates that excited-state spectra of atoms, molecules and solids based on density-functional theory can be adapted as useful information for practical applications, as well as providing theoretical interpretation of density-of-states structure, e.g., qualitatively good descriptions of optical transitions in matter, in addition to DFT's utility in providing the optical constants and material parameters also required in the Moments Approach.
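
    For reference, the simple free-electron DOS quoted above is straightforward to evaluate; a short sketch in SI units follows (the energy grid is arbitrary).

        import numpy as np

        # Free-electron density of states D(E) = m * sqrt(2 m E) / (pi^2 hbar^3)
        m = 9.1093837015e-31        # electron mass, kg
        hbar = 1.054571817e-34      # reduced Planck constant, J s
        eV = 1.602176634e-19        # J per eV

        E = np.linspace(0.1, 10.0, 5) * eV                        # energies in joules
        D = m * np.sqrt(2.0 * m * E) / (np.pi ** 2 * hbar ** 3)   # states / (J m^3)

        for e, d in zip(E / eV, D * eV):                          # states / (eV m^3)
            print(f"E = {e:5.2f} eV   D = {d:.3e} states / (eV m^3)")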

  18. The Scaled SLW model of gas radiation in non-uniform media based on Planck-weighted moments of gas absorption cross-section

    NASA Astrophysics Data System (ADS)

    Solovjov, Vladimir P.; Andre, Frederic; Lemonnier, Denis; Webb, Brent W.

    2018-02-01

    The Scaled SLW model for prediction of radiation transfer in non-uniform gaseous media is presented. The paper considers a new approach for construction of a Scaled SLW model. In order to maintain the SLW method as a simple and computationally efficient engineering method, special attention is paid to explicit non-iterative methods of calculation of the scaling coefficient. The moments of gas absorption cross-section weighted by the Planck blackbody emissive power (in particular, the first moment - the Planck mean - and the first inverse moment - the Rosseland mean) are used as the total characteristics of the absorption spectrum to be preserved by scaling. Generalized SLW modelling using these moments, including both the discrete gray gas and the continuous formulations, is presented. Application of a line-by-line look-up table for the corresponding ALBDF and inverse ALBDF distribution functions (such that no solution of implicit equations is needed) ensures that the method is flexible and efficient. Predictions for radiative transfer using the Scaled SLW model are compared to line-by-line benchmark solutions, and to predictions using the Rank Correlated SLW model and the SLW Reference Approach. Conclusions and recommendations regarding application of the Scaled SLW model are made.

  19. Predicting the accuracy of ligand overlay methods with Random Forest models.

    PubMed

    Nandigam, Ravi K; Evans, David A; Erickson, Jon A; Kim, Sangtae; Sutherland, Jeffrey J

    2008-12-01

    The accuracy of binding mode prediction using standard molecular overlay methods (ROCS, FlexS, Phase, and FieldCompare) is studied. Previous work has shown that simple decision tree modeling can be used to improve accuracy by selection of the best overlay template. This concept is extended to the use of Random Forest (RF) modeling for template and algorithm selection. An extensive data set of 815 ligand-bound X-ray structures representing 5 gene families was used for generating ca. 70,000 overlays using four programs. RF models, trained using standard measures of ligand and protein similarity and Lipinski-related descriptors, are used for automatically selecting the reference ligand and overlay method maximizing the probability of reproducing the overlay deduced from X-ray structures (i.e., using rmsd < or = 2 A as the criterion for success). RF model scores are highly predictive of overlay accuracy, and their use in template and method selection produces correct overlays in 57% of cases for 349 overlay ligands not used for training RF models. The inclusion in the models of protein sequence similarity enables the use of templates bound to related protein structures, yielding useful results even for proteins having no available X-ray structures.
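
    The template-selection step lends itself to a scikit-learn sketch: train a classifier on similarity descriptors labelled by overlay success (rmsd <= 2 A), then rank candidate templates by predicted success probability. The feature set and synthetic training table below are assumptions, not the paper's descriptors.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(4)

        # Synthetic training table: one row per (query ligand, template, method) overlay.
        # Features: ligand 2D similarity, protein sequence similarity, size ratio.
        n = 2000
        X = np.column_stack([
            rng.uniform(0, 1, n),      # ligand similarity to template
            rng.uniform(0, 1, n),      # protein sequence similarity
            rng.uniform(0.3, 3, n),    # heavy-atom count ratio
        ])
        # Assumed ground truth: overlays succeed more often for similar ligands/proteins
        p_success = 0.15 + 0.6 * X[:, 0] * X[:, 1]
        y = rng.random(n) < p_success          # True if rmsd <= 2 A in this toy setup

        clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

        # Rank three candidate templates for one query ligand by predicted success
        candidates = np.array([[0.85, 0.95, 1.1],
                               [0.40, 0.99, 0.9],
                               [0.90, 0.30, 1.4]])
        probs = clf.predict_proba(candidates)[:, 1]
        for i, p in enumerate(probs):
            print(f"template {i}: predicted overlay success probability {p:.2f}")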

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pavlickova, Katarina; Vyskupova, Monika, E-mail: vyskupova@fns.uniba.sk

    Cumulative environmental impact assessment is only occasionally used in practical environmental impact assessment (EIA) processes. The main reasons are the difficulty of cumulative impact identification caused by lack of data, the inability to measure the intensity and spatial effect of all types of impacts, and the uncertainty of their future evolution. This work presents a method proposal to predict cumulative impacts on the basis of landscape vulnerability evaluation. For this purpose, qualitative assessment of landscape ecological stability is conducted and major vulnerability indicators of environmental and socio-economic receptors are specified and valuated. Potential cumulative impacts and the overall impact significance are predicted quantitatively in modified Argonne multiple matrixes while considering the vulnerability of affected landscape receptors and the significance of impacts identified individually. The method was employed in a concrete environmental impact assessment process conducted in Slovakia. The results obtained in this case study show that this methodology is simple to apply, valid for all types of impacts and projects, inexpensive and not time-consuming. The objectivity of the partial methods used in this procedure is improved by quantitative landscape ecological stability evaluation, assignment of weights to vulnerability indicators based on the detailed characteristics of affected factors, and grading of impact significance. - Highlights: • This paper suggests a method proposal for cumulative impact prediction. • The method includes landscape vulnerability evaluation. • The vulnerability of affected receptors is determined by their sensitivity. • This method can increase the objectivity of impact prediction in the EIA process.

  1. A Method to Constrain Genome-Scale Models with 13C Labeling Data

    PubMed Central

    García Martín, Héctor; Kumar, Vinay Satish; Weaver, Daniel; Ghosh, Amit; Chubukov, Victor; Mukhopadhyay, Aindrila; Arkin, Adam; Keasling, Jay D.

    2015-01-01

    Current limitations in quantitatively predicting biological behavior hinder our efforts to engineer biological systems to produce biofuels and other desired chemicals. Here, we present a new method for calculating metabolic fluxes, key targets in metabolic engineering, that incorporates data from 13C labeling experiments and genome-scale models. The data from 13C labeling experiments provide strong flux constraints that eliminate the need to assume an evolutionary optimization principle such as the growth rate optimization assumption used in Flux Balance Analysis (FBA). This effective constraining is achieved by making the simple but biologically relevant assumption that flux flows from core to peripheral metabolism and does not flow back. The new method is significantly more robust than FBA with respect to errors in genome-scale model reconstruction. Furthermore, it can provide a comprehensive picture of metabolite balancing and predictions for unmeasured extracellular fluxes as constrained by 13C labeling data. A comparison shows that the results of this new method are similar to those found through 13C Metabolic Flux Analysis (13C MFA) for central carbon metabolism but, additionally, it provides flux estimates for peripheral metabolism. The extra validation gained by matching 48 relative labeling measurements is used to identify where and why several existing COnstraint Based Reconstruction and Analysis (COBRA) flux prediction algorithms fail. We demonstrate how to use this knowledge to refine these methods and improve their predictive capabilities. This method provides a reliable base upon which to improve the design of biological systems. PMID:26379153
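
    The central idea, pinning core fluxes to 13C-derived values instead of invoking a growth-optimality objective, can be sketched with a toy network and linear programming; the network, flux values, and bounds below are illustrative assumptions, not the authors' model:

      import numpy as np
      from scipy.optimize import linprog

      # Toy stoichiometry (rows: metabolites A, B; columns: reactions
      # v1: ->A, v2: A->B, v3: B->, v4: A-> peripheral). Steady state: S v = 0.
      S = np.array([[1.0, -1.0,  0.0, -1.0],
                    [0.0,  1.0, -1.0,  0.0]])

      bounds = [(0, 10)] * 4
      bounds[0] = (5.0, 5.0)   # core flux v1 fixed by (hypothetical) 13C data
      bounds[1] = (3.0, 3.0)   # core flux v2 fixed by (hypothetical) 13C data

      # With core fluxes pinned, bracket the feasible range of the
      # peripheral flux v4 rather than optimizing growth.
      lo = linprog(c=[0, 0, 0,  1], A_eq=S, b_eq=np.zeros(2), bounds=bounds)
      hi = linprog(c=[0, 0, 0, -1], A_eq=S, b_eq=np.zeros(2), bounds=bounds)
      print(lo.x[3], hi.x[3])   # both 2.0 here, since v4 = v1 - v2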

  2. Automatic anatomy recognition using neural network learning of object relationships via virtual landmarks

    NASA Astrophysics Data System (ADS)

    Yan, Fengxia; Udupa, Jayaram K.; Tong, Yubing; Xu, Guoping; Odhner, Dewey; Torigian, Drew A.

    2018-03-01

    The recently developed body-wide Automatic Anatomy Recognition (AAR) methodology depends on fuzzy modeling of individual objects, hierarchically arranging objects, constructing an anatomy ensemble of these models, and a dichotomous object recognition-delineation process. The parent-to-offspring spatial relationship in the object hierarchy is crucial in the AAR method. We have found this relationship to be quite complex, and as such any improvement in capturing this relationship information in the anatomy model will improve the process of recognition itself. Currently, the method encodes this relationship based on the layout of the geometric centers of the objects. Motivated by the concept of virtual landmarks (VLs), this paper presents a new one-shot AAR recognition method that utilizes the VLs to learn object relationships by training a neural network to predict the pose and the VLs of an offspring object given the VLs of the parent object in the hierarchy. We set up two neural networks for each parent-offspring object pair in a body region, one for predicting the VLs and another for predicting the pose parameters. The VL-based learning/prediction method is evaluated on two object hierarchies involving 14 objects. We utilize 54 computed tomography (CT) image data sets of head and neck cancer patients and the associated object contours drawn by dosimetrists for routine radiation therapy treatment planning. The VL neural network method is found to yield more accurate object localization than the currently used simple AAR method.
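
    A minimal sketch of one parent-to-offspring regressor, with hypothetical landmark counts and placeholder arrays (the paper trains one such network per object pair for the VLs and a second one for the pose parameters):

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      n_samples, n_vl = 54, 15                 # assumed: 54 scans, 15 VLs per object
      rng = np.random.default_rng(0)
      X = rng.random((n_samples, n_vl * 3))    # parent VLs, flattened (x, y, z)
      y = rng.random((n_samples, n_vl * 3))    # offspring VLs, flattened

      net = MLPRegressor(hidden_layer_sizes=(64,), max_iter=5000, random_state=0)
      net.fit(X, y)
      offspring_vls = net.predict(X[:1]).reshape(n_vl, 3)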

  3. Diffusional transport and predicting oxidative failure during cyclic oxidation of beta-NiAl alloys

    NASA Technical Reports Server (NTRS)

    Nesbitt, J. A.; Vinarcik, E. J.; Barrett, C. A.; Doychak, J.

    1992-01-01

    Nickel aluminides (NiAl) containing 40-50 at. percent Al and up to 0.1 at. percent Zr have been studied following cyclic oxidation at 1200, 1300, 1350 and 1400 C. The selective oxidation of aluminum resulted in the formation of protective Al2O3 scales on each alloy composition at each temperature. However, repeated cycling eventually resulted in the gradual formation of less protective NiAl2O4. The appearance of the NiAl2O4, signaling the end of the protective scale-forming capability of the alloy, was related to the presence of gamma-prime (Ni3Al), which formed as a result of the loss of aluminum from the sample. A simple methodology is presented to predict the protective life of beta-NiAl alloys. This method predicts the oxidative lifetime as the time at which aluminum depletion reduces the aluminum concentration to a critical value. The time interval preceding NiAl2O4 formation (i.e., the lifetime based on protective Al2O3 formation) and predicted lifetimes are compared and discussed. Use of the method to predict the maximum use temperature for NiAl-Zr alloys is also discussed.

  4. A method for obtaining a statistically stationary turbulent free shear flow

    NASA Technical Reports Server (NTRS)

    Timson, Stephen F.; Lele, S. K.; Moser, R. D.

    1994-01-01

    The long-term goal of the current research is the study of Large-Eddy Simulation (LES) as a tool for aeroacoustics. New algorithms and developments in computer hardware are making possible a new generation of tools for aeroacoustic predictions, which rely on the physics of the flow rather than empirical knowledge. LES, in conjunction with an acoustic analogy, holds the promise of predicting the statistics of noise radiated to the far-field of a turbulent flow. LES's predictive ability will be tested through extensive comparison of acoustic predictions based on a Direct Numerical Simulation (DNS) and LES of the same flow, as well as a priori testing of DNS results. The method presented here is aimed at allowing simulation of a turbulent flow field that is both simple and amenable to acoustic predictions. A free shear flow which is homogeneous in both the streamwise and spanwise directions and which is statistically stationary will be simulated using equations based on the Navier-Stokes equations with a small number of added terms. Studying a free shear flow eliminates the need to consider flow-surface interactions as an acoustic source. The homogeneous directions and the flow's statistically stationary nature greatly simplify the application of an acoustic analogy.

  5. Prediction of the Creep-Fatigue Lifetime of Alloy 617: An Application of Non-destructive Evaluation and Information Integration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vivek Agarwal; Richard Wright; Timothy Roney

    A relatively simple method using the nominal constant average stress information and the creep rupture model is developed to predict the creep-fatigue lifetime of Alloy 617, in terms of time to rupture. The nominal constant average stress is computed using the stress relaxation curve. The predicted time to rupture can be converted to number of cycles to failure using the strain range, the strain rate during each cycle, and the hold time information. The predicted creep-fatigue lifetime is validated against experimental measurements of the creep-fatigue lifetime collected using conventional laboratory creep-fatigue tests. High temperature creep-fatigue tests of Alloy 617 were conducted in air at 950°C with a tensile hold period of up to 1800 s in a cycle at total strain ranges of 0.3% and 0.6%. It was observed that the proposed method is conservative in that the predicted lifetime is less than the experimentally determined values. The approach would be relevant for calculating the remaining useful life of a component, such as a steam generator, that might fail by the creep-fatigue mechanism.
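
    The rupture-time-to-cycles conversion mentioned in the abstract might look like the sketch below, under the loudly stated assumption that creep damage accrues over the whole cycle period (tensile hold plus strain-controlled ramps); the function and its inputs are illustrative, not the authors' exact procedure:

      def cycles_to_failure(t_rupture_h, hold_time_s, strain_range, strain_rate):
          """Convert a predicted creep-rupture time into cycles to failure,
          assuming damage accrues over the full cycle period."""
          ramp_time_s = 2.0 * strain_range / strain_rate   # loading + unloading
          cycle_time_s = hold_time_s + ramp_time_s
          return t_rupture_h * 3600.0 / cycle_time_s

      # Example: 1800 s hold, 0.6% total strain range, 1e-3 1/s strain rate
      print(cycles_to_failure(t_rupture_h=500.0, hold_time_s=1800.0,
                              strain_range=0.006, strain_rate=1e-3))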

  6. A numerical method for computing unsteady 2-D boundary layer flows

    NASA Technical Reports Server (NTRS)

    Krainer, Andreas

    1988-01-01

    A numerical method for computing unsteady two-dimensional boundary layers in incompressible laminar and turbulent flows is described and applied to a single airfoil changing its incidence angle in time. The solution procedure adopts a first order panel method with a simple wake model to solve for the inviscid part of the flow, and an implicit finite difference method for the viscous part of the flow. Both procedures integrate in time in a step-by-step fashion, in the course of which each step involves the solution of the elliptic Laplace equation and the solution of the parabolic boundary layer equations. The Reynolds shear stress term of the boundary layer equations is modeled by an algebraic eddy viscosity closure. The location of transition is predicted by an empirical data correlation originating from Michel. Since transition and turbulence modeling are key factors in the prediction of viscous flows, their accuracy has a dominant influence on the overall results.

  7. Arrhenius time-scaled least squares: a simple, robust approach to accelerated stability data analysis for bioproducts.

    PubMed

    Rauk, Adam P; Guo, Kevin; Hu, Yanling; Cahya, Suntara; Weiss, William F

    2014-08-01

    Defining a suitable product presentation with an acceptable stability profile over its intended shelf-life is one of the principal challenges in bioproduct development. Accelerated stability studies are routinely used as a tool to better understand long-term stability. Data analysis often employs an overall mass action kinetics description for the degradation and the Arrhenius relationship to capture the temperature dependence of the observed rate constant. To improve predictive accuracy and precision, the current work proposes a least-squares estimation approach with a single nonlinear covariate and uses a polynomial to describe the change in a product attribute with respect to time. The approach, which will be referred to as Arrhenius time-scaled (ATS) least squares, enables accurate, precise predictions to be achieved for degradation profiles commonly encountered during bioproduct development. A Monte Carlo study is conducted to compare the proposed approach with the common method of least-squares estimation on the logarithmic form of the Arrhenius equation and nonlinear estimation of a first-order model. The ATS least squares method accommodates a range of degradation profiles, provides a simple and intuitive approach for data presentation, and can be implemented with ease. © 2014 Wiley Periodicals, Inc. and the American Pharmacists Association.
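
    A minimal sketch of the ATS idea with synthetic two-temperature data: time is rescaled by an Arrhenius factor so all temperatures collapse onto one reference time axis, and the activation energy is estimated jointly with the polynomial coefficients (the quadratic form, reference temperature, and data are assumptions):

      import numpy as np
      from scipy.optimize import curve_fit

      R = 8.314  # J/(mol K)

      def ats_model(X, Ea, c0, c1, c2):
          """Quadratic in Arrhenius-scaled time tau = t * exp(-Ea/R (1/T - 1/Tref))."""
          t, T = X
          tau = t * np.exp(-Ea / R * (1.0 / T - 1.0 / 298.15))
          return c0 + c1 * tau + c2 * tau**2

      t = np.array([0, 30, 60, 90, 0, 30, 60, 90], float)      # days
      T = np.array([298.15] * 4 + [313.15] * 4)                # storage temps, K
      y = np.array([100, 99, 98, 97, 100, 95, 90, 86], float)  # attribute, %

      popt, _ = curve_fit(ats_model, (t, T), y, p0=[8e4, 100, -0.05, 0])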

  8. Process-conditioned bias correction for seasonal forecasting: a case-study with ENSO in Peru

    NASA Astrophysics Data System (ADS)

    Manzanas, R.; Gutiérrez, J. M.

    2018-05-01

    This work assesses the suitability of a first simple attempt for process-conditioned bias correction in the context of seasonal forecasting. To do this, we focus on the northwestern part of Peru and bias correct 1- and 4-month lead seasonal predictions of boreal winter (DJF) precipitation from the ECMWF System4 forecasting system for the period 1981-2010. In order to include information about the underlying large-scale circulation which may help to discriminate between precipitation affected by different processes, we introduce here an empirical quantile-quantile mapping method which runs conditioned on the state of the Southern Oscillation Index (SOI), which is accurately predicted by System4 and is known to affect the local climate. Beyond the reduction of model biases, our results show that the SOI-conditioned method yields better ROC skill scores and reliability than the raw model output over the entire region of study, whereas the standard unconditioned implementation provides no added value for any of these metrics. This suggests that conditioning the bias correction on simple but well-simulated large-scale processes relevant to the local climate may be a suitable approach for seasonal forecasting. Yet, further research on the suitability of the application of similar approaches to the one considered here for other regions, seasons and/or variables is needed.
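
    A minimal sketch of SOI-conditioned empirical quantile-quantile mapping with synthetic stand-in data; the idea is simply to fit one transfer function per SOI phase and apply the map matching each forecast's predicted SOI state:

      import numpy as np

      def fit_qq(model, obs, n_q=99):
          """Empirical model-to-observation quantile map."""
          q = np.linspace(0.5 / n_q, 1 - 0.5 / n_q, n_q)
          return np.quantile(model, q), np.quantile(obs, q)

      rng = np.random.default_rng(0)
      model_p = rng.gamma(2.0, 10.0, 300)      # hindcast precipitation
      obs_p = rng.gamma(2.0, 12.0, 300)        # observed precipitation
      soi_pos = rng.random(300) > 0.5          # predicted SOI phase per season

      maps = {phase: fit_qq(model_p[soi_pos == phase], obs_p[soi_pos == phase])
              for phase in (True, False)}

      # Correct a new forecast of 25 mm issued under a positive-SOI state.
      mq, oq = maps[True]
      corrected = np.interp(25.0, mq, oq)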

  9. Mathematical prediction of core body temperature from environment, activity, and clothing: The heat strain decision aid (HSDA).

    PubMed

    Potter, Adam W; Blanchard, Laurie A; Friedl, Karl E; Cadarette, Bruce S; Hoyt, Reed W

    2017-02-01

    Physiological models provide useful summaries of complex interrelated regulatory functions. These can often be reduced to simple input requirements and simple predictions for pragmatic applications. This paper demonstrates this modeling efficiency by tracing the development of one such simple model, the Heat Strain Decision Aid (HSDA), originally developed to address Army needs. The HSDA, which derives from the Givoni-Goldman equilibrium body core temperature prediction model, uses 16 inputs from four elements: individual characteristics, physical activity, clothing biophysics, and environmental conditions. These inputs are used to mathematically predict core temperature (Tc) rise over time and can estimate water turnover from sweat loss. Based on a history of military applications such as derivation of training and mission planning tools, we conclude that the HSDA model is a robust integration of physiological rules that can guide a variety of useful predictions. The HSDA model is limited to generalized predictions of thermal strain and does not provide individualized predictions that could be obtained from physiological sensor data-driven predictive models. This fully transparent physiological model should be improved and extended with new findings and new challenging scenarios. Published by Elsevier Ltd.

  10. Using a topographic index to distribute variable source area runoff predicted with the SCS curve-number equation

    NASA Astrophysics Data System (ADS)

    Lyon, Steve W.; Walter, M. Todd; Gérard-Marchant, Pierre; Steenhuis, Tammo S.

    2004-10-01

    Because the traditional Soil Conservation Service curve-number (SCS-CN) approach continues to be used ubiquitously in water quality models, new application methods are needed that are consistent with variable source area (VSA) hydrological processes in the landscape. We developed and tested a distributed approach for applying the traditional SCS-CN equation to watersheds where VSA hydrology is a dominant process. Predicting the location of source areas is important for watershed planning because restricting potentially polluting activities from runoff source areas is fundamental to controlling non-point-source pollution. The method presented here used the traditional SCS-CN approach to predict runoff volume and spatial extent of saturated areas and a topographic index, like that used in TOPMODEL, to distribute runoff source areas through watersheds. The resulting distributed CN-VSA method was applied to two subwatersheds of the Delaware basin in the Catskill Mountains region of New York State and one watershed in south-eastern Australia to produce runoff-probability maps. Observed saturated area locations in the watersheds agreed with the distributed CN-VSA method. Results showed good agreement with those obtained from the previously validated soil moisture routing (SMR) model. When compared with the traditional SCS-CN method, the distributed CN-VSA method predicted a similar total volume of runoff, but vastly different locations of runoff generation. Thus, the distributed CN-VSA approach provides a physically based method that is simple enough to be incorporated into water quality models, and other tools that currently use the traditional SCS-CN method, while still adhering to the principles of VSA hydrology.
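
    The two building blocks can be sketched as the classical SCS-CN event runoff equation plus a simplified stand-in for the distribution step, which assigns the saturated fraction to the cells with the highest topographic index (the grid, CN, and fraction are illustrative):

      import numpy as np

      def scs_cn_runoff(P_mm, CN):
          """Classical SCS-CN event runoff depth (mm)."""
          S = 25400.0 / CN - 254.0      # potential retention
          Ia = 0.2 * S                  # initial abstraction
          return (P_mm - Ia) ** 2 / (P_mm - Ia + S) if P_mm > Ia else 0.0

      def runoff_source_areas(topo_index, sat_fraction):
          """Flag the wettest (highest topographic index) cells as sources."""
          thresh = np.quantile(topo_index, 1.0 - sat_fraction)
          return topo_index >= thresh

      ti = np.random.default_rng(0).gamma(3.0, 2.0, size=(100, 100))
      sources = runoff_source_areas(ti, sat_fraction=0.15)
      Q = scs_cn_runoff(P_mm=40.0, CN=75)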

  11. Simultaneous determination of potassium guaiacolsulfonate, guaifenesin, diphenhydramine HCl and carbetapentane citrate in syrups by using HPLC-DAD coupled with partial least squares multivariate calibration.

    PubMed

    Dönmez, Ozlem Aksu; Aşçi, Bürge; Bozdoğan, Abdürrezzak; Sungur, Sidika

    2011-02-15

    A simple and rapid analytical procedure was proposed for the determination of chromatographic peaks by means of partial least squares multivariate calibration (PLS) of high-performance liquid chromatography with diode array detection (HPLC-DAD). The method is exemplified with analysis of quaternary mixtures of potassium guaiacolsulfonate (PG), guaifenesin (GU), diphenhydramine HCl (DP) and carbetapentane citrate (CP) in syrup preparations. In this method, the peak area does not need to be directly measured and predictions are more accurate. Though the chromatographic and spectral peaks of the analytes were heavily overlapped and interferents coeluted with the compounds studied, good recoveries of analytes could be obtained with HPLC-DAD coupled with PLS calibration. The method was tested by analyzing synthetic mixtures of PG, GU, DP and CP, with a classical HPLC method used for comparison. The proposed methods were applied to syrup samples containing the four drugs and the obtained results were statistically compared with each other. Finally, the main advantages of the HPLC-PLS method over the classical HPLC method are the use of a simple mobile phase, shorter analysis time, and no need for an internal standard or gradient elution. Copyright © 2010 Elsevier B.V. All rights reserved.
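
    A minimal sketch of the calibration step with scikit-learn, using placeholder arrays in which each row is an unfolded HPLC-DAD response and each target row holds the four analyte concentrations:

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression

      rng = np.random.default_rng(0)
      X_cal = rng.random((25, 600))    # standards: time x wavelength, unfolded
      Y_cal = rng.random((25, 4))      # PG, GU, DP, CP concentrations

      pls = PLSRegression(n_components=6)   # choose components by cross-validation
      pls.fit(X_cal, Y_cal)
      conc_pred = pls.predict(rng.random((3, 600)))   # unknown syrup samples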

  12. Improved Prediction of Non-methylated Islands in Vertebrates Highlights Different Characteristic Sequence Patterns

    PubMed Central

    Vingron, Martin

    2016-01-01

    Non-methylated islands (NMIs) of DNA are genomic regions that are important for gene regulation and development. A recent study of genome-wide non-methylation data in vertebrates by Long et al. (eLife 2013;2:e00348) has shown that many experimentally identified non-methylated regions do not overlap with classically defined CpG islands which are computationally predicted using simple DNA sequence features. This is especially true in cold-blooded vertebrates such as Danio rerio (zebrafish). In order to investigate how predictive DNA sequence is of a region’s methylation status, we applied a supervised learning approach using a spectrum kernel support vector machine, to see if a more complex model and supervised learning can be used to improve non-methylated island prediction and to understand the sequence properties of these regions. We demonstrate that DNA sequence is highly predictive of methylation status, and that in contrast to existing CpG island prediction methods our method is able to provide more useful predictions of NMIs genome-wide in all vertebrate organisms that were studied. Our results also show that in cold-blooded vertebrates (Anolis carolinensis, Xenopus tropicalis and Danio rerio) where genome-wide classical CpG island predictions consist primarily of false positives, longer primarily AT-rich DNA sequence features are able to identify these regions much more accurately. PMID:27984582
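
    Since the explicit feature map of a spectrum kernel is k-mer counting, a linear SVM on k-mer counts reproduces the kernel model; the toy sequences, labels, and k below are illustrative assumptions:

      from sklearn.feature_extraction.text import CountVectorizer
      from sklearn.svm import LinearSVC

      seqs = ["CGCGATCGCGGC", "ATATTAATTAAT", "CGGCGCGCATCG", "TTAATTTATAAA"]
      labels = [1, 0, 1, 0]   # 1 = non-methylated island, 0 = background

      k = 4
      vec = CountVectorizer(analyzer="char", ngram_range=(k, k), lowercase=False)
      X = vec.fit_transform(seqs)
      clf = LinearSVC(C=1.0).fit(X, labels)
      pred = clf.predict(vec.transform(["CGCGCGGCATCG"]))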

  13. Evaluating a variety of text-mined features for automatic protein function prediction with GOstruct.

    PubMed

    Funk, Christopher S; Kahanda, Indika; Ben-Hur, Asa; Verspoor, Karin M

    2015-01-01

    Most computational methods that predict protein function do not take advantage of the large amount of information contained in the biomedical literature. In this work we evaluate both ontology term co-mention and bag-of-words features mined from the biomedical literature and analyze their impact in the context of a structured output support vector machine model, GOstruct. We find that even simple literature based features are useful for predicting human protein function (F-max: Molecular Function = 0.408, Biological Process = 0.461, Cellular Component = 0.608). One advantage of using literature features is their ability to offer easy verification of automated predictions. We find through manual inspection of misclassifications that some false positive predictions could be biologically valid predictions based upon support extracted from the literature. Additionally, we present a "medium-throughput" pipeline that was used to annotate a large subset of co-mentions; we suggest that this strategy could help to speed up the rate at which proteins are curated.

  14. Linear regression models for solvent accessibility prediction in proteins.

    PubMed

    Wagner, Michael; Adamczak, Rafał; Porollo, Aleksey; Meller, Jarosław

    2005-04-01

    The relative solvent accessibility (RSA) of an amino acid residue in a protein structure is a real number that represents the solvent exposed surface area of this residue in relative terms. The problem of predicting the RSA from the primary amino acid sequence can therefore be cast as a regression problem. Nevertheless, RSA prediction has so far typically been cast as a classification problem. Consequently, various machine learning techniques have been used within the classification framework to predict whether a given amino acid exceeds some (arbitrary) RSA threshold and would thus be predicted to be "exposed," as opposed to "buried." We have recently developed novel methods for RSA prediction using nonlinear regression techniques which provide accurate estimates of the real-valued RSA and outperform classification-based approaches with respect to commonly used two-class projections. However, while their performance seems to provide a significant improvement over previously published approaches, these Neural Network (NN) based methods are computationally expensive to train and involve several thousand parameters. In this work, we develop alternative regression models for RSA prediction which are computationally much less expensive, involve orders-of-magnitude fewer parameters, and are still competitive in terms of prediction quality. In particular, we investigate several regression models for RSA prediction using linear L1-support vector regression (SVR) approaches as well as standard linear least squares (LS) regression. Using rigorously derived validation sets of protein structures and extensive cross-validation analysis, we compare the performance of the SVR with that of LS regression and NN-based methods. In particular, we show that the flexibility of the SVR (as encoded by metaparameters such as the error insensitivity and the error penalization terms) can be very beneficial to optimize the prediction accuracy for buried residues. We conclude that the simple and computationally much more efficient linear SVR performs comparably to nonlinear models and thus can be used in order to facilitate further attempts to design more accurate RSA prediction methods, with applications to fold recognition and de novo protein structure prediction methods.
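
    A minimal sketch of the linear SVR variant with scikit-learn and placeholder features; epsilon and C correspond to the error-insensitivity and error-penalization metaparameters the abstract highlights:

      import numpy as np
      from sklearn.svm import LinearSVR

      rng = np.random.default_rng(0)
      X = rng.random((5000, 21 * 11))   # e.g., an 11-residue window of profile scores
      y = rng.random(5000)              # real-valued relative solvent accessibility

      svr = LinearSVR(epsilon=0.05, C=1.0, max_iter=10000)
      svr.fit(X, y)
      rsa_pred = np.clip(svr.predict(X[:5]), 0.0, 1.0)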

  15. Classification of speech dysfluencies using LPC based parameterization techniques.

    PubMed

    Hariharan, M; Chee, Lim Sin; Ai, Ooi Chia; Yaacob, Sazali

    2012-06-01

    The goal of this paper is to discuss and compare three feature extraction methods: Linear Predictive Coefficients (LPC), Linear Prediction Cepstral Coefficients (LPCC) and Weighted Linear Prediction Cepstral Coefficients (WLPCC) for recognizing stuttered events. Speech samples from the University College London Archive of Stuttered Speech (UCLASS) were used for our analysis. The stuttered events were identified through manual segmentation and were used for feature extraction. Two simple classifiers, namely k-nearest neighbour (kNN) and Linear Discriminant Analysis (LDA), were employed for speech dysfluency classification. A conventional validation method was used for testing the reliability of the classifier results. The effects of different frame lengths, percentages of overlap, the value of the coefficient α in a first-order pre-emphasizer, and different prediction orders p were discussed. The speech dysfluency classification accuracy was found to be improved by applying statistical normalization before feature extraction. The experimental investigation elucidated that LPC, LPCC and WLPCC features can be used for identifying stuttered events, with WLPCC features slightly outperforming LPCC and LPC features.
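
    For concreteness, a first-order pre-emphasizer and autocorrelation-method LPC can be sketched in a few lines (frame length and order are illustrative; the LPCC and WLPCC features would be derived from these coefficients):

      import numpy as np
      from scipy.linalg import solve_toeplitz

      def preemphasize(x, alpha=0.97):
          """First-order pre-emphasis: y[n] = x[n] - alpha * x[n-1]."""
          return np.append(x[0], x[1:] - alpha * x[:-1])

      def lpc(frame, order):
          """LPC coefficients via the autocorrelation (Levinson-Durbin) method."""
          r = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
          return solve_toeplitz(r[:order], r[1:order + 1])

      frame = preemphasize(np.random.default_rng(0).standard_normal(480))
      a = lpc(frame, order=12)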

  16. Subcellular location prediction of proteins using support vector machines with alignment of block sequences utilizing amino acid composition.

    PubMed

    Tamura, Takeyuki; Akutsu, Tatsuya

    2007-11-30

    Subcellular location prediction of proteins is an important and well-studied problem in bioinformatics. This is a problem of predicting which part in a cell a given protein is transported to, where an amino acid sequence of the protein is given as an input. This problem is becoming more important since information on subcellular location is helpful for annotation of proteins and genes and the number of complete genomes is rapidly increasing. Since existing predictors are based on various heuristics, it is important to develop a simple method with high prediction accuracies. In this paper, we propose a novel and general predicting method by combining techniques for sequence alignment and feature vectors based on amino acid composition. We implemented this method with support vector machines on plant data sets extracted from the TargetP database. Through fivefold cross validation tests, the obtained overall accuracies and average MCC were 0.9096 and 0.8655 respectively. We also applied our method to other datasets including that of WoLF PSORT. Although there is a predictor which uses the information of gene ontology and yields higher accuracy than ours, our accuracies are higher than existing predictors which use only sequence information. Since such information as gene ontology can be obtained only for known proteins, our predictor is considered to be useful for subcellular location prediction of newly-discovered proteins. Furthermore, the idea of combination of alignment and amino acid frequency is novel and general so that it may be applied to other problems in bioinformatics. Our method for plant is also implemented as a web-system and available on http://sunflower.kuicr.kyoto-u.ac.jp/~tamura/slpfa.html.

  17. Broadband moth-eye antireflection coatings on silicon

    NASA Astrophysics Data System (ADS)

    Sun, Chih-Hung; Jiang, Peng; Jiang, Bin

    2008-02-01

    We report a bioinspired templating technique for fabricating broadband antireflection coatings that mimic antireflective moth eyes. Wafer-scale, subwavelength-structured nipple arrays are directly patterned on silicon using spin-coated silica colloidal monolayers as etching masks. The templated gratings exhibit excellent broadband antireflection properties and the normal-incidence specular reflection matches with the theoretical prediction using a rigorous coupled-wave analysis (RCWA) model. We further demonstrate that two common simulation methods, RCWA and thin-film multilayer models, generate almost identical prediction for the templated nipple arrays. This simple bottom-up technique is compatible with standard microfabrication, promising for reducing the manufacturing cost of crystalline silicon solar cells.

  18. Predictive momentum management for the Space Station

    NASA Technical Reports Server (NTRS)

    Hatis, P. D.

    1986-01-01

    Space station control moment gyro momentum management is addressed by posing a deterministic optimization problem with a performance index that includes station external torque loading, gyro control torque demand, and excursions from desired reference attitudes. It is shown that a simple analytic desired attitude solution exists for all axes with pitch prescription decoupled, but roll and yaw coupled. Continuous gyro desaturation is shown to fit neatly into the scheme. Example results for pitch axis control of the NASA power tower Space Station are shown based on predictive attitude prescription. Control effector loading is shown to be reduced by this method when compared to more conventional momentum management techniques.

  19. On the predictions of the 11B solid state NMR parameters

    NASA Astrophysics Data System (ADS)

    Czernek, Jiří; Brus, Jiří

    2016-07-01

    A set of boron-containing compounds has been subjected to prediction of the 11B solid state NMR spectral parameters using DFT-GIPAW methods that properly treat the solid-phase effects. A quantification of the differences between measured and theoretical values is presented, which is directly applicable in structural studies involving 11B nuclei. In particular, a simple scheme has been proposed which is expected to provide an estimate of the 11B chemical shift within ±2.0 ppm of the experimental value. The computer program INFOR, enabling the visualization of concomitant Euler rotations related to the tensorial transformations, has been presented.

  20. Brownian systems with spatially inhomogeneous activity

    NASA Astrophysics Data System (ADS)

    Sharma, A.; Brader, J. M.

    2017-09-01

    We generalize the Green-Kubo approach, previously applied to bulk systems of spherically symmetric active particles [J. Chem. Phys. 145, 161101 (2016), 10.1063/1.4966153], to include spatially inhomogeneous activity. The method is applied to predict the spatial dependence of the average orientation per particle and the density. The average orientation is given by an integral over the self part of the Van Hove function and a simple Gaussian approximation to this quantity yields an accurate analytical expression. Taking this analytical result as input to a dynamic density functional theory approximates the spatial dependence of the density in good agreement with simulation data. All theoretical predictions are validated using Brownian dynamics simulations.

  1. Optimal interpolation analysis of leaf area index using MODIS data

    USGS Publications Warehouse

    Gu, Yingxin; Belair, Stephane; Mahfouf, Jean-Francois; Deblonde, Godelieve

    2006-01-01

    A simple data analysis technique for vegetation leaf area index (LAI) using Moderate Resolution Imaging Spectroradiometer (MODIS) data is presented. The objective is to generate LAI data that is appropriate for numerical weather prediction. A series of techniques and procedures which includes data quality control, time-series data smoothing, and simple data analysis is applied. The LAI analysis is an optimal combination of the MODIS observations and derived climatology, depending on their associated errors σo and σc. The “best estimate” LAI is derived from a simple three-point smoothing technique combined with a selection of maximum LAI (after data quality control) values to ensure a higher quality. The LAI climatology is a time smoothed mean value of the “best estimate” LAI during the years of 2002–2004. The observation error is obtained by comparing the MODIS observed LAI with the “best estimate” of the LAI, and the climatological error is obtained by comparing the “best estimate” of LAI with the climatological LAI value. The LAI analysis is the result of a weighting between these two errors. Demonstration of the method described in this paper is presented for the 15-km grid of Meteorological Service of Canada (MSC)'s regional version of the numerical weather prediction model. The final LAI analyses have a relatively smooth temporal evolution, which makes them more appropriate for environmental prediction than the original MODIS LAI observation data. They are also more realistic than the LAI data currently used operationally at the MSC which is based on land-cover databases.
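
    One common inverse-variance form of such an observation/climatology blend is sketched below; whether this is the paper's exact weighting is an assumption, but it captures the stated dependence on the errors σo and σc:

      def oi_blend(lai_obs, sigma_o, lai_clim, sigma_c):
          """Inverse-variance weighting of observation and climatology."""
          w_obs = sigma_c**2 / (sigma_o**2 + sigma_c**2)
          return w_obs * lai_obs + (1.0 - w_obs) * lai_clim

      # A larger observation error pulls the analysis toward climatology.
      print(oi_blend(lai_obs=3.2, sigma_o=0.8, lai_clim=2.6, sigma_c=0.4))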

  2. Continuously Variable Rating: a new, simple and logical procedure to evaluate original scientific publications

    PubMed Central

    Silva, Mauricio Rocha e

    2011-01-01

    OBJECTIVE: Impact Factors (IF) are widely used surrogates to evaluate single articles, in spite of known shortcomings imposed by cite distribution skewness. We quantify this asymmetry and propose a simple computer-based procedure for evaluating individual articles. METHOD: (a) Analysis of symmetry. Journals clustered around nine Impact Factor points were selected from the medical “Subject Categories” in Journal Citation Reports 2010. Citable items published in 2008 were retrieved and ranked by granted citations over the Jan/2008 - Jun/2011 period. Frequency distribution of cites, normalized cumulative cites and absolute cites/decile were determined for each journal cluster. (b) Positive Predictive Value. Three arbitrarily established evaluation classes were generated: LOW (1.3≤IF<2.6); MID: (2.6≤IF<3.9); HIGH: (IF≥3.9). Positive Predictive Value for journal clusters within each class range was estimated. (c) Continuously Variable Rating. An alternative evaluation procedure is proposed to allow the rating of individually published articles in comparison to all articles published in the same journal within the same year of publication. The general guiding lines for the construction of a totally dedicated software program are delineated. RESULTS AND CONCLUSIONS: Skewness followed the Pareto Distribution for (1

  3. Genotypic analysis of human immunodeficiency virus type 1 env V3 loop sequences: bioinformatics prediction of coreceptor usage among 28 infected mother-infant pairs in a drug-naive population.

    PubMed

    Duri, Kerina; Soko, White; Gumbo, Felicity; Kristiansen, Knut; Mapingure, Munyaradzi; Stray-Pedersen, Babill; Muller, Fredrik

    2011-04-01

    We sought to predict virus coreceptor utilization using a simple bioinformatics method based on genotypic analysis of human immunodeficiency virus type 1 (HIV-1) env V3 loop sequences of 28 infected but drug-naive women during pregnancy and their infected infants and to better understand coreceptor usage in vertical transmission dynamics. The HIV-1 env V3 loop was sequenced from plasma samples and analyzed for viral coreceptor usage and subtype in a cohort of HIV-1-infected pregnant women. Predicted maternal frequencies of the X4, R5X4, and R5 genotypes were 7%, 11%, and 82%, respectively. Antenatal plasma viral load was higher, with a mean log(10) (SD) of 4.8 (1.6) and 3.6 (1.2) for women with the X4 and R5 genotypes, respectively, p = 0.078. Amino acid substitution from the conserved V3 loop crown motif GPGQ to GPGR and lymphadenopathy were associated with the X4 genotype, p = 0.031 and 0.043, respectively. The maternal viral coreceptor genotype was generally preserved in vertical transmission and was predictive of the newborn's viral genotype. Infants born to mothers with X4 genotypes were more likely to have lower birth weights relative to those born to mothers with the R5 genotype, with a mean weight (SD) of 2870 (±332) and 3069 (±300) g, respectively. These data show that at least in HIV-1 subtype C, R5 coreceptor usage is the most predominant genotype, which is generally preserved following vertical transmission and is associated with the V3 GPGQ crown motif. Therefore, antiretroviral-naive pregnant women and their infants can benefit from ARV combination therapies that include R5 entry inhibitors following prediction of their coreceptor genotype using simple bioinformatics methods.

  4. Correlation between cystatin C-based formulas, Schwartz formula and urinary creatinine clearance for glomerular filtration rate estimation in children with kidney disease.

    PubMed

    Safaei-Asl, Afshin; Enshaei, Mercede; Heydarzadeh, Abtin; Maleknejad, Shohreh

    2016-01-01

    Assessment of glomerular filtration rate (GFR) is an important tool for monitoring renal function. Given the limitations of available methods, we aimed to calculate GFR with cystatin C (Cys C)-based formulas and determine their correlation with current methods. We studied 72 children (38 boys and 34 girls) with renal disorders. The 24 hour urinary creatinine (Cr) clearance was the gold standard method. GFR was estimated with the Schwartz formula and Cys C-based formulas (Grubb, Hoek, Larsson and Simple), and the correlations of these formulas with the standard were determined. Using the Pearson correlation coefficient, a significant positive correlation between all formulas and the standard method was seen (R(2) for the Schwartz, Hoek, Larsson, Grubb and Simple formulas was 0.639, 0.722, 0.705, 0.712 and 0.722, respectively) (P<0.001). Cys C-based formulas could predict the variance of the standard method results with high power. These formulas correlated with the Schwartz formula with R(2) of 0.62-0.65 (intermediate correlation). Using linear regression and the constant (y-intercept), it was revealed that the Larsson, Hoek and Grubb formulas can estimate GFR with no statistical difference compared with the standard method, but the Schwartz and Simple formulas overestimate GFR. This study shows that Cys C-based formulas have a strong relationship with 24 hour urinary Cr clearance. Hence, they can determine GFR in children with kidney injury more easily and with sufficient accuracy, helping the physician to diagnose renal disease in its early stages and improve the prognosis.

  5. Minimizing effects of methodological decisions on interpretation and prediction in species distribution studies: An example with background selection

    USGS Publications Warehouse

    Jarnevich, Catherine S.; Talbert, Marian; Morisette, Jeffrey T.; Aldridge, Cameron L.; Brown, Cynthia; Kumar, Sunil; Manier, Daniel; Talbert, Colin; Holcombe, Tracy R.

    2017-01-01

    Evaluating the conditions where a species can persist is an important question in ecology both to understand tolerances of organisms and to predict distributions across landscapes. Presence data combined with background or pseudo-absence locations are commonly used with species distribution modeling to develop these relationships. However, there is not a standard method to generate background or pseudo-absence locations, and method choice affects model outcomes. We evaluated combinations of both model algorithms (simple and complex generalized linear models, multivariate adaptive regression splines, Maxent, boosted regression trees, and random forest) and background methods (random, minimum convex polygon, and continuous and binary kernel density estimator (KDE)) to assess the sensitivity of model outcomes to choices made. We evaluated six questions related to model results, including five beyond the common comparison of model accuracy assessment metrics (biological interpretability of response curves, cross-validation robustness, independent data accuracy and robustness, and prediction consistency). For our case study with cheatgrass in the western US, random forest was least sensitive to background choice and the binary KDE method was least sensitive to model algorithm choice. While this outcome may not hold for other locations or species, the methods we used can be implemented to help determine appropriate methodologies for particular research questions.

  6. Prediction of allosteric sites on protein surfaces with an elastic-network-model-based thermodynamic method.

    PubMed

    Su, Ji Guo; Qi, Li Sheng; Li, Chun Hua; Zhu, Yan Ying; Du, Hui Jing; Hou, Yan Xue; Hao, Rui; Wang, Ji Hua

    2014-08-01

    Allostery is a rapid and efficient way in many biological processes to regulate protein functions, where binding of an effector at the allosteric site alters the activity and function at a distant active site. Allosteric regulation of protein biological functions provides a promising strategy for novel drug design. However, how to effectively identify the allosteric sites remains one of the major challenges for allosteric drug design. In the present work, a thermodynamic method based on the elastic network model was proposed to predict the allosteric sites on the protein surface. In our method, the thermodynamic coupling between the allosteric and active sites was considered, and then the allosteric sites were identified as those where the binding of an effector molecule induces a large change in the binding free energy of the protein with its ligand. Using the proposed method, two proteins, i.e., the 70 kD heat shock protein (Hsp70) and GluA2 alpha-amino-3-hydroxy-5-methyl-4-isoxazole propionic acid (AMPA) receptor, were studied and the allosteric sites on the protein surface were successfully identified. The predicted results are consistent with the available experimental data, which indicates that our method is a simple yet effective approach for the identification of allosteric sites on proteins.

  8. Ensemble Deep Learning for Biomedical Time Series Classification

    PubMed Central

    2016-01-01

    Ensemble learning has been shown, in both theory and practice, to improve generalization ability effectively. In this paper, we first briefly outline the current status of research on ensemble learning. Then, a new deep neural network-based ensemble method that integrates filtering views, local views, distorted views, explicit training, implicit training, subview prediction, and Simple Average is proposed for biomedical time series classification. Finally, we validate its effectiveness on the Chinese Cardiovascular Disease Database, which contains a large number of electrocardiogram recordings. The experimental results show that the proposed method has certain advantages compared to some well-known ensemble methods, such as Bagging and AdaBoost. PMID:27725828
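
    The Simple Average combiner is a mean over the members' class-probability outputs; the sketch below uses small scikit-learn models as stand-ins for the paper's deep-network views:

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.linear_model import LogisticRegression
      from sklearn.neural_network import MLPClassifier

      X, y = make_classification(n_samples=400, n_features=20, random_state=0)
      members = [MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=s)
                 for s in (0, 1)] + [LogisticRegression(max_iter=1000)]

      probas = [m.fit(X, y).predict_proba(X) for m in members]
      ensemble_pred = np.mean(probas, axis=0).argmax(axis=1)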

  9. Shear velocity criterion for incipient motion of sediment

    USGS Publications Warehouse

    Simoes, Francisco J.

    2014-01-01

    The prediction of incipient motion has had great importance to the theory of sediment transport. The most commonly used methods are based on the concept of critical shear stress and employ an approach similar, or identical, to the Shields diagram. An alternative method that uses the movability number, defined as the ratio of the shear velocity to the particle’s settling velocity, was employed in this study. A large amount of experimental data were used to develop an empirical incipient motion criterion based on the movability number. It is shown that this approach can provide a simple and accurate method of computing the threshold condition for sediment motion.
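
    A minimal sketch of a movability-number check, assuming a wide-channel shear velocity, a Stokes settling velocity (valid only for fine grains), and a hypothetical critical value standing in for the paper's empirical curve:

      import numpy as np

      G, RHO_W = 9.81, 1000.0   # gravity (m/s^2), water density (kg/m^3)

      def shear_velocity(depth_m, slope):
          """u* = sqrt(g h S) for wide open-channel flow."""
          return np.sqrt(G * depth_m * slope)

      def settling_velocity_stokes(d_m, rho_s=2650.0, mu=1.0e-3):
          """Stokes law; valid only for fine sand and finer grains."""
          return (rho_s - RHO_W) * G * d_m**2 / (18.0 * mu)

      movability = shear_velocity(0.5, 1e-3) / settling_velocity_stokes(1e-4)
      incipient_motion = movability > 1.0   # hypothetical critical value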

  10. Quantum ring-polymer contraction method: Including nuclear quantum effects at no additional computational cost in comparison to ab initio molecular dynamics

    NASA Astrophysics Data System (ADS)

    John, Christopher; Spura, Thomas; Habershon, Scott; Kühne, Thomas D.

    2016-04-01

    We present a simple and accurate computational method which facilitates ab initio path-integral molecular dynamics simulations, where the quantum-mechanical nature of the nuclei is explicitly taken into account, at essentially no additional computational cost in comparison to the corresponding calculation using classical nuclei. The predictive power of the proposed quantum ring-polymer contraction method is demonstrated by computing various static and dynamic properties of liquid water at ambient conditions using density functional theory. This development will enable routine inclusion of nuclear quantum effects in ab initio molecular dynamics simulations of condensed-phase systems.

  11. Three-dimensional application of the Johnson-King turbulence model for a boundary-layer direct method

    NASA Technical Reports Server (NTRS)

    Kavsaoglu, Mehmet S.; Kaynak, Unver; Van Dalsem, William R.

    1989-01-01

    The Johnson-King turbulence model as extended to three-dimensional flows was evaluated using a finite-difference boundary-layer direct method. Calculations were compared against the experimental data of the well-known Berg-Elsenaar incompressible flow over an infinite swept wing. The Johnson-King model, which includes nonequilibrium effects in a developing turbulent boundary layer, was found to significantly improve the predictive quality of a direct boundary-layer method. The improvement was especially visible in computations with increased three-dimensionality of the mean flow, larger integral parameters, and decreasing eddy-viscosity and shear stress magnitudes in the streamwise direction, all in better agreement with the experiment than simple mixing-length methods.

  12. Exploration of attenuated total reflectance mid-infrared spectroscopy and multivariate calibration to measure immunoglobulin G in human sera.

    PubMed

    Hou, Siyuan; Riley, Christopher B; Mitchell, Cynthia A; Shaw, R Anthony; Bryanton, Janet; Bigsby, Kathryn; McClure, J Trenton

    2015-09-01

    Immunoglobulin G (IgG) is crucial for the protection of the host from invasive pathogens. Due to its importance for human health, tools that enable the monitoring of IgG levels are highly desired. Consequently there is a need for methods to determine the IgG concentration that are simple, rapid, and inexpensive. This work explored the potential of attenuated total reflectance (ATR) infrared spectroscopy as a method to determine IgG concentrations in human serum samples. Venous blood samples were collected from adults and children, and from the umbilical cord of newborns. The serum was harvested and tested using ATR infrared spectroscopy. Partial least squares (PLS) regression provided the basis to develop the new analytical methods. Three PLS calibrations were determined: one for the combined set of the venous and umbilical cord serum samples, the second for only the umbilical cord samples, and the third for only the venous samples. The number of PLS factors was chosen by critical evaluation of Monte Carlo-based cross validation results. The predictive performance for each PLS calibration was evaluated using the Pearson correlation coefficient, scatter plot and Bland-Altman plot, and percent deviations for independent prediction sets. The repeatability was evaluated by standard deviation and relative standard deviation. The results showed that ATR infrared spectroscopy is potentially a simple, quick, and inexpensive method to measure IgG concentrations in human serum samples. The results also showed that it is possible to build a united calibration curve for the umbilical cord and the venous samples. Copyright © 2015 Elsevier B.V. All rights reserved.

  13. Ultra-High Bypass Ratio Jet Noise

    NASA Technical Reports Server (NTRS)

    Low, John K. C.

    1994-01-01

    The jet noise from a 1/15 scale model of a Pratt and Whitney Advanced Ducted Propulsor (ADP) was measured in the United Technology Research Center anechoic research tunnel (ART) under a range of operating conditions. Conditions were chosen to match engine operating conditions. Data were obtained at static conditions and at wind tunnel Mach numbers of 0.2, 0.27, and 0.35 to simulate inflight effects on jet noise. Due to a temperature dependence of the secondary nozzle area, the model nozzle secondary to primary area ratio varied from 7.12 at 100 percent thrust to 7.39 at 30 percent thrust. The bypass ratio varied from 10.2 to 11.8 respectively. Comparison of the data with predictions using the current Society of Automotive Engineers (SAE) Jet Noise Prediction Method showed that the current prediction method overpredicted the ADP jet noise by 6 decibels. The data suggest that a simple method of subtracting 6 decibels from the SAE Coaxial Jet Noise Prediction for the merged and secondary flow source components would result in good agreement between predicted and measured levels. The simulated jet noise flight effects with wind tunnel Mach numbers up to 0.35 produced jet noise inflight noise reductions up to 12 decibels. The reductions in jet noise levels were across the entire jet noise spectra, suggesting that the inflight effects affected all source noise components.

  14. Estimation of critical behavior from the density of states in classical statistical models

    NASA Astrophysics Data System (ADS)

    Malakis, A.; Peratzakis, A.; Fytas, N. G.

    2004-12-01

    We present a simple and efficient approximation scheme which greatly facilitates the extension of Wang-Landau sampling (or similar techniques) in large systems for the estimation of critical behavior. The method, presented in an algorithmic approach, is based on a very simple idea, familiar in statistical mechanics from the notion of thermodynamic equivalence of ensembles and the central limit theorem. It is illustrated that we can predict with high accuracy the critical part of the energy space and by using this restricted part we can extend our simulations to larger systems and improve the accuracy of critical parameters. It is proposed that the extensions of the finite-size critical part of the energy space, determining the specific heat, satisfy a scaling law involving the thermal critical exponent. The method is applied successfully for the estimation of the scaling behavior of specific heat of both square and simple cubic Ising lattices. The proposed scaling law is verified by estimating the thermal critical exponent from the finite-size behavior of the critical part of the energy space. The density of states of the zero-field Ising model on these lattices is obtained via a multirange Wang-Landau sampling.

  15. The lucky image-motion prediction for simple scene observation based soft-sensor technology

    NASA Astrophysics Data System (ADS)

    Li, Yan; Su, Yun; Hu, Bin

    2015-08-01

    High resolution is important for Earth remote sensors, but vibration of the sensor platform is a major factor restricting high-resolution imaging. Image-motion prediction and real-time compensation are key technologies for solving this problem. Because the traditional autocorrelation image algorithm cannot meet the demands of simple-scene image stabilization, this paper proposes using soft-sensor technology for image-motion prediction and focuses on algorithm optimization for imaging image-motion prediction. Simulation results indicate that an improved lucky image-motion stabilization algorithm combining a Back Propagation neural network (BP NN) and a support vector machine (SVM) is the most suitable for simple-scene image stabilization. The relative error of the image-motion prediction based on the soft-sensor technology is below 5%, and the training speed of the mathematical prediction model is fast enough for real-time image stabilization in aerial photography.

  16. Risk Factors Analysis and Death Prediction in Some Life-Threatening Ailments Using Chi-Square Case-Based Reasoning (χ2 CBR) Model.

    PubMed

    Adeniyi, D A; Wei, Z; Yang, Y

    2018-01-30

    A wealth of data are available within the health care system; however, effective analysis tools for exploring the hidden patterns in these datasets are lacking. To alleviate this limitation, this paper proposes a simple but promising hybrid predictive model by suitably combining the Chi-square distance measurement with the case-based reasoning technique. The study presents the realization of an automated risk calculator and death prediction in some life-threatening ailments using the Chi-square case-based reasoning (χ2 CBR) model. The proposed predictive engine is capable of reducing runtime and speeds up the execution process through the use of a critical χ2 distribution value. This work also showcases the development of a novel feature selection method referred to as the frequent item based rule (FIBR) method. This FIBR method is used for selecting the best features for the proposed χ2 CBR model at the preprocessing stage of the predictive procedures. The implementation of the proposed risk calculator is achieved through the use of an in-house developed PHP program experimented with a XAMPP/Apache HTTP server as the hosting server. The process of data acquisition and case-base development is implemented using the MySQL application. Performance comparison between our system, the NBY, the ED-KNN, the ANN, the SVM, the Random Forest and the traditional CBR techniques shows that the quality of predictions produced by our system outperformed the baseline methods studied. The result of our experiment shows that the precision rate and predictive quality of our system in most cases are equal to or greater than 70%. Our result also shows that the proposed system executes faster than the baseline methods studied. Therefore, the proposed risk calculator is capable of providing useful, consistent, faster, accurate and efficient risk level prediction to both patients and physicians at any time, online and on a real-time basis.
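
    The retrieval core of such a χ2 CBR engine reduces to a chi-square distance plus a nearest-case lookup; the feature arrays, labels, and k below are illustrative assumptions (the PHP/MySQL system and the FIBR feature selection step are not reproduced here):

      import numpy as np

      def chi_square_distance(x, y, eps=1e-12):
          """Chi-square distance between two non-negative feature vectors."""
          return 0.5 * np.sum((x - y) ** 2 / (x + y + eps))

      def retrieve(case_base, outcomes, query, k=3):
          """Return the outcomes of the k cases closest to the query."""
          d = np.array([chi_square_distance(c, query) for c in case_base])
          return outcomes[np.argsort(d)[:k]]

      rng = np.random.default_rng(0)
      cases = rng.random((200, 10))        # hypothetical patient features
      labels = rng.integers(0, 2, 200)     # hypothetical risk outcomes
      nearest = retrieve(cases, labels, rng.random(10))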

  17. Life prediction modeling based on cyclic damage accumulation

    NASA Technical Reports Server (NTRS)

    Nelson, Richard S.

    1988-01-01

    A high temperature, low cycle fatigue life prediction method was developed. This method, Cyclic Damage Accumulation (CDA), was developed for use in predicting the crack initiation lifetime of gas turbine engine materials, where initiation was defined as a 0.030 inch surface length crack. A principal engineering feature of the CDA method is the minimum data base required for implementation. Model constants can be evaluated through a few simple specimen tests such as monotonic loading and rapid cycle fatigue. The method was expanded to account for the effects on creep-fatigue life of complex loadings such as thermomechanical fatigue, hold periods, waveshapes, mean stresses, multiaxiality, cumulative damage, coatings, and environmental attack. A significant data base was generated on the behavior of the cast nickel-base superalloy B1900+Hf, including hundreds of specimen tests under such loading conditions. This information is being used to refine and extend the CDA life prediction model, which is now nearing completion. The model is also being verified using additional specimen tests on wrought INCO 718, and the final version of the model is expected to be adaptable to most any high-temperature alloy. The model is currently available in the form of equations and related constants. A proposed contract addition will make the model available in the near future in the form of a computer code to potential users.

  18. Derivation and validation of simple anthropometric equations to predict adipose tissue mass and total fat mass with MRI as the reference method

    PubMed Central

    Al-Gindan, Yasmin Y.; Hankey, Catherine R.; Govan, Lindsay; Gallagher, Dympna; Heymsfield, Steven B.; Lean, Michael E. J.

    2017-01-01

    The reference organ-level body composition measurement method is MRI. Practical estimations of total adipose tissue mass (TATM), total adipose tissue fat mass (TATFM) and total body fat are valuable for epidemiology, but validated prediction equations based on MRI are not currently available. We aimed to derive and validate new anthropometric equations to estimate MRI-measured TATM/TATFM/total body fat and compare them with existing prediction equations using older methods. The derivation sample included 416 participants (222 women), aged between 18 and 88 years with BMI between 15·9 and 40·8 (kg/m2). The validation sample included 204 participants (110 women), aged between 18 and 86 years with BMI between 15·7 and 36·4 (kg/m2). Both samples included mixed ethnic/racial groups. All the participants underwent whole-body MRI to quantify TATM (dependent variable) and anthropometry (independent variables). Prediction equations developed using stepwise multiple regression were further investigated for agreement and bias before validation in separate data sets. Simplest equations with optimal R2 and Bland–Altman plots demonstrated good agreement without bias in the validation analyses: men: TATM (kg) = 0·198 weight (kg) + 0·478 waist (cm) − 0·147 height (cm) − 12·8 (validation: R2 0·79, CV = 20 %, standard error of the estimate (SEE)=3·8 kg) and women: TATM (kg)=0·789 weight (kg) + 0·0786 age (years) − 0·342 height (cm) + 24·5 (validation: R2 0·84, CV = 13 %, SEE = 3·0 kg). Published anthropometric prediction equations, based on MRI and computed tomographic scans, correlated strongly with MRI-measured TATM: (R2 0·70 – 0·82). Estimated TATFM correlated well with published prediction equations for total body fat based on underwater weighing (R2 0·70–0·80), with mean bias of 2·5–4·9 kg, correctable with log-transformation in most equations. In conclusion, new equations, using simple anthropometric measurements, estimated MRI-measured TATM with correlations and agreements suitable for use in groups and populations across a wide range of fatness. PMID:26435103
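
    The two final equations translate directly into code (the example inputs are illustrative):

      def tatm_men(weight_kg, waist_cm, height_cm):
          """Total adipose tissue mass (kg), men, per the published equation."""
          return 0.198 * weight_kg + 0.478 * waist_cm - 0.147 * height_cm - 12.8

      def tatm_women(weight_kg, age_years, height_cm):
          """Total adipose tissue mass (kg), women, per the published equation."""
          return 0.789 * weight_kg + 0.0786 * age_years - 0.342 * height_cm + 24.5

      print(tatm_men(weight_kg=82.0, waist_cm=94.0, height_cm=178.0))
      print(tatm_women(weight_kg=68.0, age_years=40.0, height_cm=165.0))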

  19. Forensic use of the Greulich and Pyle atlas: prediction intervals and relevance.

    PubMed

    Chaumoitre, K; Saliba-Serre, B; Adalian, P; Signoli, M; Leonetti, G; Panuel, M

    2017-03-01

    The Greulich and Pyle (GP) atlas is one of the most frequently used methods of bone age (BA) estimation. Our aim is to assess its accuracy and to calculate the prediction intervals at 95% for forensic use. The study was conducted on a multi-ethnic sample of 2614 individuals (1423 boys and 1191 girls) referred to the university hospital of Marseille (France) for simple injuries. Hand radiographs were analysed using the GP atlas. Reliability of the GP atlas and agreement between BA and chronological age (CA) were assessed, and prediction intervals at 95% were calculated. The repeatability was excellent and the reproducibility was good. Pearson's linear correlation coefficient between CA and BA was 0.983. The mean difference between BA and CA was -0.18 years (boys) and 0.06 years (girls). The prediction interval at 95% for CA was given for each GP category and ranged between 1.2 and more than 4.5 years. The GP atlas is a reproducible and repeatable method that is still accurate for the present population, with a high correlation between BA and CA. The prediction intervals at 95% are wide, reflecting individual variability, and should be known when the method is used in forensic cases. • The GP atlas is still accurate at the present time. • There is a high correlation between bone age and chronological age. • Individual variability must be known when GP is used in forensic cases. • Prediction intervals (95%) are large, around 4 years beyond age 10.
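
    For orientation, a minimal sketch of how a 95% prediction interval for a new individual can be obtained within one bone-age category, assuming approximately normal errors; the ages below are illustrative, not the study data.

    ```python
    import numpy as np
    from scipy import stats

    # Chronological ages of reference subjects assigned to one GP category
    # (illustrative values only).
    ca_years = np.array([10.1, 10.8, 11.5, 12.3, 9.8, 11.0, 12.9, 10.5])
    n, mean, sd = len(ca_years), ca_years.mean(), ca_years.std(ddof=1)

    # A prediction interval for a *new* individual is wider than a confidence
    # interval for the mean -- the individual variability the authors stress.
    t = stats.t.ppf(0.975, df=n - 1)
    half = t * sd * np.sqrt(1 + 1 / n)
    print(f"95% PI: {mean - half:.1f} to {mean + half:.1f} years")
    ```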

  20. Used-habitat calibration plots: A new procedure for validating species distribution, resource selection, and step-selection models

    USGS Publications Warehouse

    Fieberg, John R.; Forester, James D.; Street, Garrett M.; Johnson, Douglas H.; ArchMiller, Althea A.; Matthiopoulos, Jason

    2018-01-01

    “Species distribution modeling” was recently ranked as one of the top five “research fronts” in ecology and the environmental sciences by ISI's Essential Science Indicators (Renner and Warton 2013), reflecting the importance of predicting how species distributions will respond to anthropogenic change. Unfortunately, species distribution models (SDMs) often perform poorly when applied to novel environments. Compounding this problem is the shortage of methods for evaluating SDMs (hence, we may be getting our predictions wrong and not even know it). Traditional methods for validating SDMs quantify a model's ability to classify locations as used or unused. Instead, we propose to focus on how well SDMs can predict the characteristics of used locations. This subtle shift in viewpoint leads to a more natural and informative evaluation and validation of models across the entire spectrum of SDMs. Through a series of examples, we show how simple graphical methods can help with three fundamental challenges of habitat modeling: identifying missing covariates, non-linearity, and multicollinearity. Identifying habitat characteristics that are not well-predicted by the model can provide insights into variables affecting the distribution of species, suggest appropriate model modifications, and ultimately improve the reliability and generality of conservation and management recommendations.
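
    A minimal sketch of the used-habitat calibration idea, with simulated data standing in for a fitted SDM: resample available locations in proportion to predicted use, then check whether the covariate distribution at observed used points falls inside the simulated envelope.

    ```python
    import numpy as np

    # All data and the fitted weights below are illustrative.
    rng = np.random.default_rng(0)
    z_avail = rng.normal(0, 1, 5000)        # covariate at available locations
    w = np.exp(0.8 * z_avail)               # fitted relative-use weights
    z_used_obs = rng.normal(0.9, 1, 200)    # covariate at observed used points

    # Predicted used-habitat distribution: resample available locations with
    # probability proportional to predicted use, many times.
    z_used_sim = rng.choice(z_avail, size=(1000, 200), p=w / w.sum())

    # Systematic departure of observed quantiles from the simulated envelope
    # suggests missing covariates or mis-specified effects.
    for q in (5, 50, 95):
        lo, hi = np.percentile(np.percentile(z_used_sim, q, axis=1), [2.5, 97.5])
        print(f"q{q}: observed {np.percentile(z_used_obs, q):.2f}, "
              f"predicted envelope [{lo:.2f}, {hi:.2f}]")
    ```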

  1. Universal fragment descriptors for predicting properties of inorganic crystals

    NASA Astrophysics Data System (ADS)

    Isayev, Olexandr; Oses, Corey; Toher, Cormac; Gossett, Eric; Curtarolo, Stefano; Tropsha, Alexander

    2017-06-01

    Although historically materials discovery has been driven by a laborious trial-and-error process, knowledge-driven materials design can now be enabled by the rational combination of Machine Learning methods and materials databases. Here, data from the AFLOW repository for ab initio calculations is combined with Quantitative Materials Structure-Property Relationship models to predict important properties: metal/insulator classification, band gap energy, bulk/shear moduli, Debye temperature and heat capacities. The prediction's accuracy compares well with the quality of the training data for virtually any stoichiometric inorganic crystalline material, recapitulating the available thermomechanical experimental data. The universality of the approach is attributed to the construction of the descriptors: Property-Labelled Materials Fragments. The representations require only minimal structural input allowing straightforward implementations of simple heuristic design rules.

  2. Bayesian geostatistics in health cartography: the perspective of malaria.

    PubMed

    Patil, Anand P; Gething, Peter W; Piel, Frédéric B; Hay, Simon I

    2011-06-01

    Maps of parasite prevalences and other aspects of infectious diseases that vary in space are widely used in parasitology. However, spatial parasitological datasets rarely, if ever, have sufficient coverage to allow exact determination of such maps. Bayesian geostatistics (BG) is a method for finding a large sample of maps that can explain a dataset, in which maps that do a better job of explaining the data are more likely to be represented. This sample represents the knowledge that the analyst has gained from the data about the unknown true map. BG provides a conceptually simple way to convert these samples to predictions of features of the unknown map, for example regional averages. These predictions account for each map in the sample, yielding an appropriate level of predictive precision.
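
    A minimal sketch of the final step described here, converting a posterior sample of maps into a regional-average prediction with credible bounds; the map sample and region mask are illustrative stand-ins for BG output.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    maps = rng.beta(2, 5, size=(500, 40, 40))   # 500 sampled prevalence maps
    region = np.zeros((40, 40), dtype=bool)
    region[10:25, 5:20] = True                  # region of interest

    regional_avg = maps[:, region].mean(axis=1) # one average per sampled map
    print("posterior mean:", regional_avg.mean())
    print("95% credible interval:",
          np.percentile(regional_avg, [2.5, 97.5]))
    ```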

  3. Bayesian geostatistics in health cartography: the perspective of malaria

    PubMed Central

    Patil, Anand P.; Gething, Peter W.; Piel, Frédéric B.; Hay, Simon I.

    2011-01-01

    Maps of parasite prevalences and other aspects of infectious diseases that vary in space are widely used in parasitology. However, spatial parasitological datasets rarely, if ever, have sufficient coverage to allow exact determination of such maps. Bayesian geostatistics (BG) is a method for finding a large sample of maps that can explain a dataset, in which maps that do a better job of explaining the data are more likely to be represented. This sample represents the knowledge that the analyst has gained from the data about the unknown true map. BG provides a conceptually simple way to convert these samples to predictions of features of the unknown map, for example regional averages. These predictions account for each map in the sample, yielding an appropriate level of predictive precision. PMID:21420361

  4. Universal fragment descriptors for predicting properties of inorganic crystals.

    PubMed

    Isayev, Olexandr; Oses, Corey; Toher, Cormac; Gossett, Eric; Curtarolo, Stefano; Tropsha, Alexander

    2017-06-05

    Although historically materials discovery has been driven by a laborious trial-and-error process, knowledge-driven materials design can now be enabled by the rational combination of Machine Learning methods and materials databases. Here, data from the AFLOW repository for ab initio calculations is combined with Quantitative Materials Structure-Property Relationship models to predict important properties: metal/insulator classification, band gap energy, bulk/shear moduli, Debye temperature and heat capacities. The prediction's accuracy compares well with the quality of the training data for virtually any stoichiometric inorganic crystalline material, recapitulating the available thermomechanical experimental data. The universality of the approach is attributed to the construction of the descriptors: Property-Labelled Materials Fragments. The representations require only minimal structural input allowing straightforward implementations of simple heuristic design rules.

  5. Rotor/Wing Interactions in Hover

    NASA Technical Reports Server (NTRS)

    Young, Larry A.; Derby, Michael R.

    2002-01-01

    Hover predictions of tiltrotor aircraft are hampered by the lack of accurate and computationally efficient models for rotor/wing interactional aerodynamics. This paper summarizes the development of an approximate, potential flow solution for the rotor-on-rotor and wing-on-rotor interactions. This analysis is based on actuator disk and vortex theory and the method of images. The analysis is applicable for out-of-ground-effect predictions. The analysis is particularly suited for aircraft preliminary design studies. Flow field predictions from this simple analytical model are validated against experimental data from previous studies. The paper concludes with an analytical assessment of the influence of rotor-on-rotor and wing-on-rotor interactions. This assessment examines the effect of rotor-to-wing offset distance, wing sweep, wing span, and flaperon incidence angle on tiltrotor inflow and performance.

  6. Lipoabdominoplasty: An exponential advantage for a consistently safe and aesthetic outcome.

    PubMed

    Kanjoor, J R; Singh, A K

    2012-01-01

    Extensive liposuction along with limited dissection of abdominal flaps is slowly emerging as a well-proven method with advantages over standard abdominoplasty. A retrospective study analyzed 146 patients managed for abdominal contour deformities from March 2004 to February 2010. A simple method of projecting the postoperative outcome, by rotating a supine lateral photograph to the upright posture, was applied prospectively in 46 patients and yielded predictable results. All patients were encouraged to practice chest physiotherapy in the 'tummy tuck' position during preoperative counseling. Aggressive liposuction of the entire upper abdomen, limited dissection in the midline, plication of the rectus diastasis whenever indicated, panniculectomy and neoumbilicoplasty were done in all patients. The patients had a mean age of 43 years, the youngest being 29 and the oldest 72 years. The majority were of normal weight (94%). Twelve were morbidly obese; 57 patients had undergone previous abdominal surgeries; 49 patients had associated hernias. Lipoabdominoplasty yielded a satisfactory result in 110 (94%) patients. Postoperatively, patients had a distinctly lighter, more harmonious abdomen with an improved waistline. Complications were more frequent with higher BMI, fat thickness of more than 7 cm, and prolonged operating time when other procedures were combined. Extensive liposuction combined with the limited dissection method, applied to all abdominoplasty patients, yielded consistently safe, reliable and predictable aesthetic results with fewer complications and faster recovery. The simple photographic manipulation helped project the postoperative outcome reliably. Preoperative chest physiotherapy in the tummy tuck position helped prevent chest complications.

  7. Linear stability theory and three-dimensional boundary layer transition

    NASA Technical Reports Server (NTRS)

    Spall, Robert E.; Malik, Mujeeb R.

    1992-01-01

    Viewgraphs and a discussion of linear stability theory and three-dimensional boundary layer transition are provided. The ability to predict, using analytical tools, the location of boundary layer transition over aircraft-type configurations is of great importance to designers interested in laminar flow control (LFC). The e(sup N) method has proven to be fairly effective in predicting, in a consistent manner, the location of the onset of transition for simple geometries in low disturbance environments. This method provides a correlation between the most amplified single normal mode and the experimental location of the onset of transition. Studies indicate that values of N between 8 and 10 correlate well with the onset of transition. For most previous calculations, the mean flows were restricted to two-dimensional or axisymmetric cases, or have employed simple three-dimensional mean flows (e.g., rotating disk, infinite swept wing, or tapered swept wing with straight isobars). Unfortunately, for flows over general wing configurations, and for nearly all flows over fuselage-type bodies at incidence, the analysis of fully three-dimensional flow fields is required. Results obtained for the linear stability of fully three-dimensional boundary layers formed over both wing and fuselage-type geometries, and for both high and low speed flows, are discussed. When possible, transition estimates from the e(sup N) method are compared to experimentally determined locations. The stability calculations are made using a modified version of the linear stability code COSAL. Mean flows were computed using both Navier-Stokes and boundary-layer codes.

  8. The Fatigue Approach to Vibration and Health: is it a Practical and Viable way of Predicting the Effects on People?

    NASA Astrophysics Data System (ADS)

    Sandover, J.

    1998-08-01

    The fatigue approach assumes that the vertebral end-plates are the weak link in the spine subjected to shock and vibration, and fail as a result of material fatigue. The theory assumes that end-plate damage leads to degeneration and pain in the lumbar spine. There is evidence for both the damage predicted and the fatigue mode of failure so that the approach may provide a basis for predictive methods for use in epidemiology and standards. An available data set from a variety of heavy vehicles in practical situations was used for predictions of spinal stress and fatigue life. Although there was some disparity between the predictive methods used, the more developed methods indicated fatigue lives that appeared reasonable, taking into account the vehicles tested and our knowledge of spinal degeneration. It is argued that the modelling and fatigue approaches combined offer a basis for estimating the effects of vibration and shock on health. Although the human variables are such that the approach, as yet, only offers rough estimates, it offers a good basis for understanding. The approach indicates that peak values are important and large peaks dominate risk. The method indicates that long term r.m.s. methods probably underestimate the risk of injury. The BS 6841 Wb and ISO 2631 Wk weightings have shortcomings when used where peak values are important. A simple model may be more appropriate. The principle can be applied to continuous vibration as well as high acceleration events so that one method can be applied universally to continuous vibrations, high acceleration events and mixtures of these. An endurance limit can be hypothesised and, if this limit is sufficiently high, then the need for many measurements can be reduced.
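
    To illustrate why r.m.s. methods under-weight peaks, the sketch below compares r.m.s. with the fourth-power vibration dose value (VDV) used in BS 6841; the signals are illustrative and the standard's frequency weighting is omitted.

    ```python
    import numpy as np

    fs = 1000.0
    t = np.arange(0, 60, 1 / fs)
    a = 0.5 * np.sin(2 * np.pi * 4 * t)      # steady vibration, m/s^2
    a_shock = a.copy()
    a_shock[5000:5050] += 8.0                # add one large shock

    for name, sig in [("steady", a), ("with shock", a_shock)]:
        rms = np.sqrt(np.mean(sig ** 2))
        vdv = (np.sum(sig ** 4) / fs) ** 0.25   # VDV = (integral of a^4 dt)^(1/4)
        print(f"{name}: rms = {rms:.3f} m/s^2, VDV = {vdv:.2f} m/s^1.75")
    ```

    The single shock barely moves the r.m.s. over a 60 s record but dominates the fourth-power dose, which is the behaviour the abstract attributes to peak-dominated exposures.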

  9. Prediction of Sublimation Pressures of Low Volatility Solids

    NASA Astrophysics Data System (ADS)

    Drake, Bruce Douglas

    Sublimation pressures are required for solid-vapor phase equilibrium models in design of processes such as supercritical fluid extraction, sublimation purification and vapor epitaxy. The objective of this work is to identify and compare alternative methods for predicting sublimation pressures. A bibliography of recent sublimation data is included. Corresponding states methods based on the triple point (rather than critical point) are examined. A modified Trouton's rule is the preferred method for estimating triple point pressure in the absence of any sublimation data. Only boiling and melting temperatures are required. Typical error in log10 P_triple is 0.3. For lower temperature estimates, the slope of the sublimation curve is predicted by a correlation based on molar volume. Typical error is 10% of slope. Molecular dynamics methods for surface modeling are tested as estimators of vapor pressure. The time constants of the vapor and solid phases are too different to allow the vapor to come to thermal equilibrium with the solid. The method shows no advantages in prediction of sublimation pressure but provides insight into appropriate models and experimental methods for sublimation. Density-dependent augmented van der Waals equations of state based on hard-sphere distribution functions are examined. The perturbation term is almost linear and is well fit by a simple quadratic. Use of the equation provides reasonable fitting of sublimation pressures from one data point. Order-of-magnitude estimation is possible from melting temperature and solid molar volume. The inverse-12 fluid is used to develop an additional equation of state. Sublimation pressure results, including quality of pressure predictions, are similar to the hard-sphere results. Three-body (Axilrod-Teller) interactions are used to improve results.
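
    The author's modified Trouton's rule is not reproduced here, but a generic sketch of the same idea, classic Trouton's rule plus the integrated Clausius-Clapeyron equation, shows how a triple-point pressure estimate can follow from only the boiling and melting temperatures.

    ```python
    import numpy as np

    R = 8.314  # J/(mol K)

    def log10_p_triple_atm(t_boil_K, t_melt_K, trouton_S=87.0):
        # Classic Trouton's rule: dS_vap ~ 87 J/(mol K) at the normal
        # boiling point, so dH_vap ~ 87 * Tb.
        dH_vap = trouton_S * t_boil_K
        # Clausius-Clapeyron integrated from Tb (P = 1 atm) down to Tm,
        # taking the triple-point temperature as roughly Tm.
        ln_ratio = -(dH_vap / R) * (1 / t_melt_K - 1 / t_boil_K)
        return ln_ratio / np.log(10)   # log10(P_triple / 1 atm)

    # Example: naphthalene, Tb ~ 491 K, Tm ~ 353 K -> about -1.8,
    # within the 0.3 log-unit error band quoted above.
    print(log10_p_triple_atm(491.0, 353.0))
    ```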

  10. A simple method of measuring tibial tubercle to trochlear groove distance on MRI: description of a novel and reliable technique.

    PubMed

    Camp, Christopher L; Heidenreich, Mark J; Dahm, Diane L; Bond, Jeffrey R; Collins, Mark S; Krych, Aaron J

    2016-03-01

    Tibial tubercle-trochlear groove (TT-TG) distance is a variable that helps guide surgical decision-making in patients with patellar instability. The purpose of this study was to compare the accuracy and reliability of an MRI TT-TG measuring technique using a simple external alignment method to a previously validated gold standard technique that requires advanced software read by radiologists. TT-TG was calculated by MRI on 59 knees with a clinical diagnosis of patellar instability in a blinded and randomized fashion by two musculoskeletal radiologists using advanced software and by two orthopaedists using the study technique, which utilizes measurements taken on a simple electronic imaging platform. Interrater reliability between the two radiologists and the two orthopaedists and intermethod reliability between the two techniques were calculated using intraclass correlation coefficients (ICC) and concordance correlation coefficients (CCC). ICC and CCC values greater than 0.75 were considered to represent excellent agreement. The mean TT-TG distance was 14.7 mm (standard deviation (SD) 4.87 mm) and 15.4 mm (SD 5.41) as measured by the radiologists and orthopaedists, respectively. Excellent interobserver agreement was noted between the radiologists (ICC 0.941; CCC 0.941), the orthopaedists (ICC 0.978; CCC 0.976), and the two techniques (ICC 0.941; CCC 0.933). The simple TT-TG distance measurement technique analysed in this study resulted in excellent agreement and reliability as compared to the gold standard technique. This method can predictably be performed by orthopaedic surgeons without advanced radiologic software. Level of evidence: II.
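
    Lin's concordance correlation coefficient, the agreement statistic used here, is simple to compute; a minimal sketch with illustrative measurements:

    ```python
    import numpy as np

    # Two raters' TT-TG measurements in mm (illustrative values).
    x = np.array([12.1, 15.3, 18.7, 9.8, 14.2, 20.1, 16.5])
    y = np.array([12.5, 15.0, 19.2, 10.1, 13.8, 20.6, 16.0])

    mx, my = x.mean(), y.mean()
    sx2, sy2 = x.var(), y.var()          # population variances
    sxy = np.mean((x - mx) * (y - my))   # population covariance

    # Lin's CCC penalises both poor correlation and location/scale shifts.
    ccc = 2 * sxy / (sx2 + sy2 + (mx - my) ** 2)
    print(f"CCC = {ccc:.3f}")            # > 0.75 taken as excellent agreement
    ```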

  11. Investigation into the propagation of Omega very low frequency signals and techniques for improvement of navigation accuracy including differential and composite omega

    NASA Technical Reports Server (NTRS)

    1973-01-01

    An analysis of Very Low Frequency propagation in the atmosphere in the 10-14 kHz range leads to a discussion of some of the more significant causes of phase perturbation. The method of generating sky-wave corrections to predict the Omega phase is discussed. Composite Omega is considered as a means of lane identification and of reducing Omega navigation error. A simple technique for generating trapezoidal model (T-model) phase prediction is presented and compared with the Navy predictions and actual phase measurements. The T-model prediction analysis illustrates the ability to account for the major phase shift created by the diurnal effects on the lower ionosphere. An analysis of the Navy sky-wave correction table is used to provide information about spatial and temporal correlation of phase correction relative to the differential mode of operation.

  12. Interpretable Deep Models for ICU Outcome Prediction

    PubMed Central

    Che, Zhengping; Purushotham, Sanjay; Khemani, Robinder; Liu, Yan

    2016-01-01

    Exponential surge in health care data, such as longitudinal data from electronic health records (EHR), sensor data from intensive care units (ICU), etc., is providing new opportunities to discover meaningful data-driven characteristics and patterns of diseases. Recently, deep learning models have been employed for many computational phenotyping and healthcare prediction tasks to achieve state-of-the-art performance. However, deep models lack the interpretability that is crucial for wide adoption in medical research and clinical decision-making. In this paper, we introduce a simple yet powerful knowledge-distillation approach called interpretable mimic learning, which uses gradient boosting trees to learn interpretable models while achieving prediction performance as strong as deep learning models. Experiment results on a pediatric ICU dataset for acute lung injury (ALI) show that our proposed method not only outperforms state-of-the-art approaches for mortality and ventilator-free days prediction tasks but can also provide interpretable models to clinicians. PMID:28269832
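
    A minimal sketch of the mimic-learning (distillation) step, assuming scikit-learn and illustrative data in place of a trained deep model's outputs:

    ```python
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    # `deep_model_scores` stands in for the deep network's predicted
    # probabilities; features and scores below are illustrative.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 10))                      # patient features
    deep_model_scores = 1 / (1 + np.exp(-(X[:, 0] + 0.5 * X[:, 1])))

    # Distillation: fit gradient boosting trees to the *soft* predictions,
    # not the hard labels, so the student mimics the deep model.
    student = GradientBoostingRegressor(n_estimators=200, max_depth=3)
    student.fit(X, deep_model_scores)

    # The tree-based student exposes interpretable structure.
    print(student.feature_importances_.round(2))
    ```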

  13. VLP Simulation: An Interactive Simple Virtual Model to Encourage Geoscience Skill about Volcano

    NASA Astrophysics Data System (ADS)

    Hariyono, E.; Liliasari; Tjasyono, B.; Rosdiana, D.

    2017-09-01

    The purpose of this study was to describe physics students' predicting skills after geoscience learning using the VLP (Volcano Learning Project) simulation. The research was conducted with 24 physics students at a state university in East Java, Indonesia. The method used was descriptive analysis based on the students' answers related to predicting skills about volcanic activity. The results showed that learning with the VLP simulation has strong potential to develop physics students' predicting skills. Students were able to explain volcanic activity logically and to predict the potential eruption that will occur based on real data visualization. It can be concluded that the VLP simulation is well suited to physics students' requirements in developing geoscience skills and is recommended as an alternative medium for educating society in the understanding of volcanic phenomena.

  14. Quantifying characteristic growth dynamics in a semiarid grassland ecosystem by predicting short-term NDVI phenology from daily rainfall: a simple 4 parameter coupled-reservoir model

    USDA-ARS?s Scientific Manuscript database

    Predicting impacts of the magnitude and seasonal timing of rainfall pulses in water-limited grassland ecosystems concerns ecologists, climate scientists, hydrologists, and a variety of stakeholders. This report describes a simple, effective procedure to emulate the seasonal response of grassland bio...

  15. Novel surgical performance evaluation approximates Standardized Incidence Ratio with high accuracy at simple means.

    PubMed

    Gabbay, Itay E; Gabbay, Uri

    2013-01-01

    Excess adverse events may be attributable to poor surgical performance but also to case-mix, which is controlled for through the Standardized Incidence Ratio (SIR). SIR calculations can be complicated, resource consuming, and unfeasible in some settings. This article suggests a novel method for SIR approximation. In order to evaluate a potential SIR surrogate measure we predefined acceptance criteria. We developed a new measure, the Approximate Risk Index (ARI). "Number Needed for Event" (NNE) is the theoretical number of patients needed "to produce" one adverse event. ARI is defined as the quotient of Ge, the number of patients needed for no observed events, by Ga, the total number of patients treated. Our evaluation compared 2500 surgical units and over 3 million heterogeneous-risk surgical patients generated through a computerized simulation. Surgical units' data were computed for SIR and ARI to evaluate compliance with the predefined criteria. Approximation was evaluated by correlation analysis and performance prediction capability by Receiver Operating Characteristic (ROC) analysis. ARI strongly correlates with SIR (r2 = 0.87, p < 0.05). ARI prediction of excessive risk revealed excellent ROC performance (Area Under the Curve > 0.9), 87% sensitivity and 91% specificity. ARI provides a good approximation of SIR and excellent prediction capability. ARI is simple and cost-effective as it requires thorough risk evaluation of only the adverse-event patients. ARI can provide a crucial screening and performance-evaluation quality-control tool. The ARI method may suit other clinical and epidemiological settings where a relatively small fraction of the entire population is affected. Copyright © 2013 Surgical Associates Ltd. Published by Elsevier Ltd. All rights reserved.

  16. Can Google Searches Predict the Popularity and Harm of Psychoactive Agents?

    PubMed

    Jankowski, Wojciech; Hoffmann, Marcin

    2016-02-25

    Predicting the popularity of and harm caused by psychoactive agents is a serious problem that would be difficult to do by a single simple method. However, because of the growing number of drugs it is very important to provide a simple and fast tool for predicting some characteristics of these substances. We were inspired by the Google Flu Trends study on the activity of the influenza virus, which showed that influenza virus activity worldwide can be monitored based on queries entered into the Google search engine. Our aim was to propose a fast method for ranking the most popular and most harmful drugs based on easily available data gathered from the Internet. We used the Google search engine to acquire data for the ranking lists. Subsequently, using the resulting list and the frequency of hits for the respective psychoactive drugs combined with the word "harm" or "harmful", we estimated quickly how much harm is associated with each drug. We ranked the most popular and harmful psychoactive drugs. As we conducted the research over a period of several months, we noted that the relative popularity indexes tended to change depending on when we obtained them. This suggests that the data may be useful in monitoring changes over time in the use of each of these psychoactive agents. Our data correlate well with the results from a multicriteria decision analysis of drug harms in the United Kingdom. We showed that Google search data can be a valuable source of information to assess the popularity of and harm caused by psychoactive agents and may help in monitoring drug use trends.

  17. THE POTENTIAL IMPACT OF INTEGRATED MALARIA TRANSMISSION CONTROL ON ENTOMOLOGIC INOCULATION RATE IN HIGHLY ENDEMIC AREAS

    PubMed Central

    KILLEEN, GERRY F.; McKENZIE, F. ELLIS; FOY, BRIAN D.; SCHIEFFELIN, CATHERINE; BILLINGSLEY, PETER F.; BEIER, JOHN C.

    2008-01-01

    We have used a relatively simple but accurate model for predicting the impact of integrated transmission control on the malaria entomologic inoculation rate (EIR) at four endemic sites from across sub-Saharan Africa and the southwest Pacific. The simulated campaign incorporated modestly effective vaccine coverage, bed net use, and larval control. The results indicate that such campaigns would reduce EIRs at all four sites by 30- to 50-fold. Even without the vaccine, 15- to 25-fold reductions of EIR were predicted, implying that integrated control with a few modestly effective tools can meaningfully reduce malaria transmission in a range of endemic settings. The model accurately predicts the effects of bed nets and indoor spraying and demonstrates that they are the most effective tools available for reducing EIR. However, the impact of domestic adult vector control is amplified by measures for reducing the rate of emergence of vectors or the level of infectiousness of the human reservoir. We conclude that available tools, including currently neglected methods for larval control, can reduce malaria transmission intensity enough to alleviate mortality. Integrated control programs should be implemented to the fullest extent possible, even in areas of intense transmission, using simple models as decision-making tools. However, we also conclude that to eliminate malaria in many areas of intense transmission is beyond the scope of methods which developing nations can currently afford. New, cost-effective, practical tools are needed if malaria is ever to be eliminated from highly endemic areas. PMID:11289662

  18. Predicting New Indications for Approved Drugs Using a Proteo-Chemometric Method

    PubMed Central

    Dakshanamurthy, Sivanesan; Issa, Naiem T; Assefnia, Shahin; Seshasayee, Ashwini; Peters, Oakland J; Madhavan, Subha; Uren, Aykut; Brown, Milton L; Byers, Stephen W

    2012-01-01

    The most effective way to move from target identification to the clinic is to identify already approved drugs with the potential for activating or inhibiting unintended targets (repurposing or repositioning). This is usually achieved by high throughput chemical screening, transcriptome matching or simple in silico ligand docking. We now describe a novel rapid computational proteo-chemometric method called “Train, Match, Fit, Streamline” (TMFS) to map new drug-target interaction space and predict new uses. The TMFS method combines shape, topology and chemical signatures, including docking score and functional contact points of the ligand, to predict potential drug-target interactions with remarkable accuracy. Using the TMFS method, we performed extensive molecular fit computations on 3,671 FDA approved drugs across 2,335 human protein crystal structures. The TMFS method predicts drug-target associations with 91% accuracy for the majority of drugs. Over 58% of the known best ligands for each target were correctly predicted as top ranked, followed by 66%, 76%, 84% and 91% for agents ranked in the top 10, 20, 30 and 40, respectively, out of all 3,671 drugs. Drugs ranked in the top 1–40, that have not been experimentally validated for a particular target now become candidates for repositioning. Furthermore, we used the TMFS method to discover that mebendazole, an anti-parasitic with recently discovered and unexpected anti-cancer properties, has the structural potential to inhibit VEGFR2. We confirmed experimentally that mebendazole inhibits VEGFR2 kinase activity as well as angiogenesis at doses comparable with its known effects on hookworm. TMFS also predicted, and was confirmed with surface plasmon resonance, that dimethyl celecoxib and the anti-inflammatory agent celecoxib can bind cadherin-11, an adhesion molecule important in rheumatoid arthritis and poor prognosis malignancies for which no targeted therapies exist. We anticipate that expanding our TMFS method to the >27,000 clinically active agents available worldwide across all targets will be most useful in the repositioning of existing drugs for new therapeutic targets. PMID:22780961

  19. A Simple Two Aircraft Conflict Resolution Algorithm

    NASA Technical Reports Server (NTRS)

    Chatterji, Gano B.

    1999-01-01

    Conflict detection and resolution methods are crucial for distributed air-ground traffic management in which the crew in the cockpit, dispatchers in operation control centers and air traffic controllers in the ground-based air traffic management facilities share information and participate in the traffic flow and traffic control functions. This paper describes a conflict detection and a conflict resolution method. The conflict detection method predicts the minimum separation and the time-to-go to the closest point of approach by assuming that both aircraft will continue to fly at their current speeds along their current headings. The conflict resolution method described here is motivated by the proportional navigation algorithm. It generates speed and heading commands to rotate the line-of-sight either clockwise or counter-clockwise for conflict resolution. Once the aircraft achieve a positive range-rate and no further conflict is predicted, the algorithm generates heading commands to turn the aircraft back to their nominal trajectories. The speed commands are set to the optimal pre-resolution speeds. Six numerical examples are presented to demonstrate the conflict detection and resolution method.
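
    The constant-velocity detection step reduces to a closed-form closest-point-of-approach (CPA) calculation; a minimal sketch, with units and values illustrative:

    ```python
    import numpy as np

    def cpa(p1, v1, p2, v2):
        """Time-to-go and minimum separation for two constant-velocity
        aircraft. Positions in nm, velocities in nm/s."""
        dp = np.asarray(p2, float) - np.asarray(p1, float)
        dv = np.asarray(v2, float) - np.asarray(v1, float)
        dv2 = np.dot(dv, dv)
        # If relative velocity is zero, separation never changes.
        t_cpa = 0.0 if dv2 == 0 else max(0.0, -np.dot(dp, dv) / dv2)
        d_min = np.linalg.norm(dp + dv * t_cpa)
        return t_cpa, d_min

    t, d = cpa(p1=[0, 0], v1=[0.12, 0.0], p2=[30, 6], v2=[-0.10, 0.0])
    print(f"time to CPA: {t:.0f} s, min separation: {d:.1f} nm")
    ```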

  20. Numerical and Experimental Studies on Impact Loaded Concrete Structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saarenheimo, Arja; Hakola, Ilkka; Karna, Tuomo

    2006-07-01

    An experimental set-up has been constructed for medium scale impact tests. The main objective of this effort is to provide data for the calibration and verification of numerical models of a loading scenario in which an aircraft impacts a nuclear power plant. One goal is to develop and adopt numerical methods for predicting the response of reinforced concrete structures to impacts of deformable projectiles that may contain combustible liquid ('fuel'). Loading and structural behaviour, such as the collapse mechanism and the damage grade, are predicted by simple analytical methods and by the non-linear FE method. In the so-called Riera method the behavior of the missile material is assumed to be rigid plastic or rigid visco-plastic. Using elastic plastic and elastic visco-plastic material models, calculations are carried out with the ABAQUS/Explicit finite element code, assuming an axisymmetric deformation mode for the missile. With both methods, typically, the impact force time history, the velocity of the missile rear end and the missile shortening during the impact were recorded for comparison. (authors)

  1. Vortical Flow Prediction Using an Adaptive Unstructured Grid Method

    NASA Technical Reports Server (NTRS)

    Pirzadeh, Shahyar Z.

    2001-01-01

    A computational fluid dynamics (CFD) method has been employed to compute vortical flows around slender wing/body configurations. The emphasis of the paper is on the effectiveness of an adaptive grid procedure in "capturing" concentrated vortices generated at sharp edges or flow separation lines of lifting surfaces flying at high angles of attack. The method is based on a tetrahedral unstructured grid technology developed at the NASA Langley Research Center. Two steady-state, subsonic, inviscid and Navier-Stokes flow test cases are presented to demonstrate the applicability of the method for solving practical vortical flow problems. The first test case concerns vortex flow over a simple 65 deg delta wing with different values of leading-edge bluntness, and the second case is that of a more complex fighter configuration. The superiority of the adapted solutions in capturing the vortex flow structure over the conventional unadapted results is demonstrated by comparisons with the wind-tunnel experimental data. The study shows that numerical prediction of vortical flows is highly sensitive to the local grid resolution and that the implementation of grid adaptation is essential when applying CFD methods to such complicated flow problems.

  2. A new model predictive control algorithm by reducing the computing time of cost function minimization for NPC inverter in three-phase power grids.

    PubMed

    Taheri, Asghar; Zhalebaghi, Mohammad Hadi

    2017-11-01

    This paper presents a new control strategy based on finite-control-set model-predictive control (FCS-MPC) for neutral-point-clamped (NPC) three-level converters. Advantages such as fast dynamic response, easy inclusion of constraints, and a simple control loop make the FCS-MPC method attractive as a switching strategy for converters. However, the large amount of required calculation is an obstacle to the widespread use of this method. To resolve this problem, this paper presents a modified method that effectively reduces the computational load compared with the conventional FCS-MPC method while leaving control performance unaffected. The proposed method can be used for exchanging power between the electrical grid and DC resources by providing active and reactive power compensation. Experiments on a three-level converter in three modes, power factor correction (PFC), inductive compensation, and capacitive compensation, verify good and comparable performance. The results have been simulated using MATLAB/SIMULINK software. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
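
    A minimal sketch of the finite-control-set idea, enumerating a small illustrative set of voltage vectors against a one-step Euler prediction model; parameters, the vector set and the cost function are placeholders, not the paper's modified algorithm.

    ```python
    import numpy as np

    L, R, Ts = 10e-3, 0.5, 100e-6   # filter inductance, resistance, sample time
    # Candidate voltage vectors in the alpha-beta plane (illustrative set;
    # a real NPC converter has a specific finite set of switching states).
    V = [np.array([vx, vy]) for vx in (-1, 0, 1) for vy in (-1, 0, 1)]

    def best_vector(i_now, i_ref, v_grid):
        """Pick the switching vector minimising the predicted tracking cost."""
        cost, best = np.inf, None
        for v in V:
            # One-step Euler prediction of di/dt = (v - v_grid - R*i)/L.
            i_next = i_now + Ts / L * (v - v_grid - R * i_now)
            c = np.sum((i_ref - i_next) ** 2)   # current-tracking cost only
            if c < cost:
                cost, best = c, v
        return best

    print(best_vector(i_now=np.array([0.0, 0.0]),
                      i_ref=np.array([1.0, 0.5]),
                      v_grid=np.array([0.2, 0.1])))
    ```

    Reducing how many of these candidate evaluations must run each sampling period is exactly the computational load the paper targets.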

  3. Applying Knowledge Discovery in Databases in Public Health Data Set: Challenges and Concerns

    PubMed Central

    Volrathongchia, Kanittha

    2003-01-01

    In attempting to apply Knowledge Discovery in Databases (KDD) to generate a predictive model from a health care dataset that is currently available to the public, the first step is to pre-process the data to overcome the challenges of missing data, redundant observations, and records containing inaccurate data. This study will demonstrate how to use simple pre-processing methods to improve the quality of input data. PMID:14728545

  4. Sensitivity of monthly streamflow forecasts to the quality of rainfall forcing: When do dynamical climate forecasts outperform the Ensemble Streamflow Prediction (ESP) method?

    NASA Astrophysics Data System (ADS)

    Tanguy, M.; Prudhomme, C.; Harrigan, S.; Smith, K. A.; Parry, S.

    2017-12-01

    Forecasting hydrological extremes is challenging, especially at lead times over 1 month for catchments with limited hydrological memory and variable climates. One simple way to derive monthly or seasonal hydrological forecasts is to use historical climate data to drive hydrological models using the Ensemble Streamflow Prediction (ESP) method. This gives a range of possible future streamflow given known initial hydrologic conditions alone. The degree of skill of ESP depends highly on the forecast initialisation month and catchment type. Using dynamic rainfall forecasts as driving data instead of historical data could potentially improve streamflow predictions. A lot of effort is being invested within the meteorological community to improve these forecasts. However, while recent progress shows promise (e.g. NAO in winter), the skill of these forecasts at monthly to seasonal timescales is generally still limited, and the extent to which they might lead to improved hydrological forecasts is an area of active research. Additionally, these meteorological forecasts are currently being produced at 1 month or seasonal time-steps in the UK, whereas hydrological models require forcings at daily or sub-daily time-steps. Keeping in mind these limitations of available rainfall forecasts, the objectives of this study are to find out (i) how accurate monthly dynamical rainfall forecasts need to be to outperform ESP, and (ii) how the method used to disaggregate monthly rainfall forecasts into daily rainfall time series affects results. For the first objective, synthetic rainfall time series were created by increasingly degrading observed data (a proxy for a 'perfect forecast') from 0% to ±50% error. For the second objective, three different methods were used to disaggregate monthly rainfall data into daily time series. These were used to force a simple lumped hydrological model (GR4J) to generate streamflow predictions at a one-month lead time for over 300 catchments representative of the range of the UK's hydro-climatic conditions. These forecasts were then benchmarked against the traditional ESP method. It is hoped that the results of this work will help the meteorological community to identify where to focus their efforts in order to increase the usefulness of their forecasts within hydrological forecasting systems.
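
    A minimal sketch of the degradation experiment for the first objective, perturbing an observed monthly series (the 'perfect forecast' proxy) with uniform errors of growing magnitude; the rainfall series itself is illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    p_obs = rng.gamma(shape=2.0, scale=40.0, size=120)   # 10 years of monthly mm

    for err in (0.0, 0.1, 0.3, 0.5):                     # 0% to +/-50% error
        noise = rng.uniform(-err, err, size=p_obs.size)
        p_forecast = np.clip(p_obs * (1 + noise), 0, None)  # rainfall >= 0
        rmse = np.sqrt(np.mean((p_forecast - p_obs) ** 2))
        print(f"error +/-{err:.0%}: rainfall RMSE = {rmse:.1f} mm")
    ```

    In the study, each degraded series would then be disaggregated to daily values and routed through GR4J before the streamflow skill is benchmarked against ESP.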

  5. Creep and stress relaxation modeling of polycrystalline ceramic fibers

    NASA Technical Reports Server (NTRS)

    Dicarlo, James A.; Morscher, Gregory N.

    1994-01-01

    A variety of high performance polycrystalline ceramic fibers are currently being considered as reinforcement for high temperature ceramic matrix composites. However, under mechanical loading above 800 C, these fibers display creep related instabilities which can result in detrimental changes in composite dimensions, strength, and internal stress distributions. As a first step toward understanding these effects, this study examines the validity of a mechanism-based empirical model which describes primary stage tensile creep and stress relaxation of polycrystalline ceramic fibers as independent functions of time, temperature, and applied stress or strain. To verify these functional dependencies, a simple bend test is used to measure stress relaxation for four types of commercial ceramic fibers for which direct tensile creep data are available. These fibers include both nonoxide (SCS-6, Nicalon) and oxide (PRD-166, FP) compositions. The results of the Bend Stress Relaxation (BSR) test not only confirm the stress, time, and temperature dependencies predicted by the model, but also allow measurement of model empirical parameters for the four fiber types. In addition, comparison of model tensile creep predictions based on the BSR test results with the literature data show good agreement, supporting both the predictive capability of the model and the use of the BSR test as a simple method for parameter determination for other fibers.

  6. Creep and stress relaxation modeling of polycrystalline ceramic fibers

    NASA Technical Reports Server (NTRS)

    Dicarlo, James A.; Morscher, Gregory N.

    1991-01-01

    A variety of high performance polycrystalline ceramic fibers are currently being considered as reinforcement for high temperature ceramic matrix composites. However, under mechanical loading above 800 C, these fibers display creep-related instabilities which can result in detrimental changes in composite dimensions, strength, and internal stress distributions. As a first step toward understanding these effects, this study examines the validity of a mechanistic-based empirical model which describes primary stage tensile creep and stress relaxation of polycrystalline ceramic fibers as independent functions of time, temperature, and applied stress or strain. To verify these functional dependencies, a simple bend test is used to measure stress relaxation for four types of commercial ceramic fibers for which direct tensile creep data are available. These fibers include both nonoxide (SCS-6, Nicalon) and oxide (PRD-166, FP) compositions. The results of the bend stress relaxation (BSR) test not only confirm the stress, time, and temperature dependencies predicted by the model but also allow measurement of model empirical parameters for the four fiber types. In addition, comparison of model predictions and BSR test results with the literature tensile creep data show good agreement, supporting both the predictive capability of the model and the use of the BSR test as a simple method for parameter determination for other fibers.

  7. Evaluation of IOTA Simple Ultrasound Rules to Distinguish Benign and Malignant Ovarian Tumours

    PubMed Central

    Kaur, Amarjit; Mohi, Jaswinder Kaur; Sibia, Preet Kanwal; Kaur, Navkiran

    2017-01-01

    Introduction IOTA stands for the International Ovarian Tumour Analysis group. Ovarian cancer is one of the common cancers in women and is diagnosed at a late stage in the majority of cases. The limiting factor for early diagnosis is the lack of standardized terms and procedures in gynaecological sonography. Introduction of the IOTA rules has provided some consistency in defining morphological features of ovarian masses through a standardized examination technique. Aim To evaluate the efficacy of the IOTA simple ultrasound rules in distinguishing benign and malignant ovarian tumours and establishing their use as a tool in early diagnosis of ovarian malignancy. Materials and Methods A hospital-based, prospective case-control study was conducted. Patients with suspected ovarian pathology were evaluated using the IOTA ultrasound rules and designated as benign or malignant. Findings were correlated with histopathological findings. Collected data was statistically analysed using the chi-square test and the kappa statistical method. Results Out of the initial 55 patients, 50 patients who underwent surgery were included in the final analysis. IOTA simple rules were applicable in 45 out of these 50 patients (90%). The sensitivity for the detection of malignancy in cases where IOTA simple rules were applicable was 91.66% and the specificity was 84.84%. Accuracy was 86.66%. Classifying inconclusive cases as malignant, the sensitivity and specificity were 93% and 80%, respectively. A high level of agreement was found between USG and histopathological diagnosis, with a kappa value of 0.323. Conclusion IOTA simple ultrasound rules were highly sensitive and specific in predicting ovarian malignancy preoperatively, while being reproducible and easy to train and use. PMID:28969237

  8. Assessing skin sensitization hazard in mice and men using non-animal test methods.

    PubMed

    Urbisch, Daniel; Mehling, Annette; Guth, Katharina; Ramirez, Tzutzuy; Honarvar, Naveed; Kolle, Susanne; Landsiedel, Robert; Jaworska, Joanna; Kern, Petra S; Gerberick, Frank; Natsch, Andreas; Emter, Roger; Ashikaga, Takao; Miyazawa, Masaaki; Sakaguchi, Hitoshi

    2015-03-01

    Sensitization, the prerequisite event in the development of allergic contact dermatitis, is a key parameter in both hazard and risk assessments. The pathways involved have recently been formally described in the OECD adverse outcome pathway (AOP) for skin sensitization. One single non-animal test method will not be sufficient to fully address this AOP and in many cases the use of a battery of tests will be necessary. A number of methods are now fully developed and validated. In order to facilitate acceptance of these methods by both the regulatory and scientific communities, results of the single test methods (DPRA, KeratinoSens, LuSens, h-CLAT, (m)MUSST) as well as for the simple '2 out of 3' ITS for 213 substances have been compiled and qualitatively compared to both animal and human data. The dataset was also used to define different mechanistic domains by probable protein-binding mechanisms. In general, the non-animal test methods exhibited good predictivities when compared to local lymph node assay (LLNA) data and even better predictivities when compared to human data. The '2 out of 3' prediction model achieved accuracies of 90% or 79% when compared to human or LLNA data, respectively, and thereby even slightly exceeded that of the LLNA. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.

  9. A Simple Method for High-Lift Propeller Conceptual Design

    NASA Technical Reports Server (NTRS)

    Patterson, Michael; Borer, Nick; German, Brian

    2016-01-01

    In this paper, we present a simple method for designing propellers that are placed upstream of the leading edge of a wing in order to augment lift. Because the primary purpose of these "high-lift propellers" is to increase lift rather than produce thrust, these props are best viewed as a form of high-lift device; consequently, they should be designed differently than traditional propellers. We present a theory that describes how these props can be designed to provide a relatively uniform axial velocity increase, which is hypothesized to be advantageous for lift augmentation based on a literature survey. Computational modeling indicates that such propellers can generate the same average induced axial velocity while consuming less power and producing less thrust than conventional propeller designs. For an example problem based on specifications for NASA's Scalable Convergent Electric Propulsion Technology and Operations Research (SCEPTOR) flight demonstrator, a propeller designed with the new method requires approximately 15% less power and produces approximately 11% less thrust than one designed for minimum induced loss. Higher-order modeling and/or wind tunnel testing are needed to verify the predicted performance.
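
    For orientation, the average induced axial velocity at the disk follows from classical actuator-disk momentum theory; a minimal sketch with illustrative inputs (this is the textbook relation, not the paper's design method):

    ```python
    import numpy as np

    def induced_velocity(thrust_N, v_inf, radius_m, rho=1.225):
        """Average induced axial velocity at an actuator disk (m/s)."""
        A = np.pi * radius_m ** 2
        # Momentum theory: T = 2*rho*A*v_i*(V + v_i), solved for v_i:
        # v_i = -V/2 + sqrt((V/2)^2 + T/(2*rho*A))
        return -v_inf / 2 + np.sqrt((v_inf / 2) ** 2 + thrust_N / (2 * rho * A))

    print(induced_velocity(thrust_N=400.0, v_inf=25.0, radius_m=0.3))
    ```

    The paper's point is that the radial *distribution* of this induced velocity, not just its average, can be shaped to favour lift augmentation over thrust.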

  10. Shape-Controlled Synthesis of Hybrid Nanomaterials via Three-Dimensional Hydrodynamic Focusing

    PubMed Central

    2015-01-01

    Shape-controlled synthesis of nanomaterials through a simple, continuous, and low-cost method is essential to nanomaterials research toward practical applications. Hydrodynamic focusing, with its advantages of simplicity, low-cost, and precise control over reaction conditions, has been used for nanomaterial synthesis. While most studies have focused on improving the uniformity and size control, few have addressed the potential of tuning the shape of the synthesized nanomaterials. Here we demonstrate a facile method to synthesize hybrid materials by three-dimensional hydrodynamic focusing (3D-HF). While keeping the flow rates of the reagents constant and changing only the flow rate of the buffer solution, the molar ratio of two reactants (i.e., tetrathiafulvalene (TTF) and HAuCl4) within the reaction zone varies. The synthesized TTF–Au hybrid materials possess very different and predictable morphologies. The reaction conditions at different buffer flow rates are studied through computational simulation, and the formation mechanisms of different structures are discussed. This simple one-step method to achieve continuous shape-tunable synthesis highlights the potential of 3D-HF in nanomaterials research. PMID:25268035

  11. Shape-controlled synthesis of hybrid nanomaterials via three-dimensional hydrodynamic focusing.

    PubMed

    Lu, Mengqian; Yang, Shikuan; Ho, Yi-Ping; Grigsby, Christopher L; Leong, Kam W; Huang, Tony Jun

    2014-10-28

    Shape-controlled synthesis of nanomaterials through a simple, continuous, and low-cost method is essential to nanomaterials research toward practical applications. Hydrodynamic focusing, with its advantages of simplicity, low-cost, and precise control over reaction conditions, has been used for nanomaterial synthesis. While most studies have focused on improving the uniformity and size control, few have addressed the potential of tuning the shape of the synthesized nanomaterials. Here we demonstrate a facile method to synthesize hybrid materials by three-dimensional hydrodynamic focusing (3D-HF). While keeping the flow rates of the reagents constant and changing only the flow rate of the buffer solution, the molar ratio of two reactants (i.e., tetrathiafulvalene (TTF) and HAuCl4) within the reaction zone varies. The synthesized TTF-Au hybrid materials possess very different and predictable morphologies. The reaction conditions at different buffer flow rates are studied through computational simulation, and the formation mechanisms of different structures are discussed. This simple one-step method to achieve continuous shape-tunable synthesis highlights the potential of 3D-HF in nanomaterials research.

  12. Disability: a model and measurement technique.

    PubMed Central

    Williams, R G; Johnston, M; Willis, L A; Bennett, A E

    1976-01-01

    Current methods of ranking or scoring disability tend to be arbitrary. A new method is put forward on the hypothesis that disability progresses in regular, cumulative patterns. A model of disability is defined and tested with the use of Guttman scale analysis. Its validity is indicated on data from a survey in the community and from postsurgical patients, and some factors involved in scale variation are identified. The model provides a simple measurement technique and has implications for the assessment of individual disadvantage, for the prediction of progress in recovery or deterioration, and for evaluation of the outcome of treatment regimes. PMID:953379
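
    A minimal sketch of the cumulative-pattern check underlying Guttman scaling, using an illustrative response matrix with items assumed ordered from easiest to hardest:

    ```python
    import numpy as np

    # Respondents x items, 1 = disability present; columns ordered from the
    # easiest item to the hardest (illustrative data).
    X = np.array([[1, 1, 1, 0],
                  [1, 1, 0, 0],
                  [1, 0, 0, 0],
                  [1, 1, 1, 1],
                  [1, 0, 1, 0]])   # last row violates the cumulative pattern

    scores = X.sum(axis=1)
    # Ideal Guttman pattern: a respondent with score s endorses the s easiest items.
    ideal = (np.arange(X.shape[1]) < scores[:, None]).astype(int)
    errors = np.sum(X != ideal)
    rep = 1 - errors / X.size       # coefficient of reproducibility
    print(f"coefficient of reproducibility = {rep:.2f}")   # ~0.9+ supports a scale
    ```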

  13. An application of the Braunbeck method to the Maggi-Rubinowicz field representation

    NASA Technical Reports Server (NTRS)

    Meneghini, R.

    1982-01-01

    The Braunbek method is applied to the generalized vector potential associated with the Maggi-Rubinowicz representation. Under certain approximations, an asymptotic evaluation of the vector potential is obtained. For observation points away from caustics or shadow boundaries, the field derived from this quantity is the same as that determined from the geometrical theory of diffraction on a singly diffracted edge ray. An evaluation of the field for the simple case of a plane wave normally incident on a circular aperture is presented, showing that the field predicted by the Maggi-Rubinowicz theory is continuous across the shadow boundary.

  14. An application of the Braunbeck method to the Maggi-Rubinowicz field representation

    NASA Astrophysics Data System (ADS)

    Meneghini, R.

    1982-06-01

    The Braunbek method is applied to the generalized vector potential associated with the Maggi-Rubinowicz representation. Under certain approximations, an asymptotic evaluation of the vector potential is obtained. For observation points away from caustics or shadow boundaries, the field derived from this quantity is the same as that determined from the geometrical theory of diffraction on a singly diffracted edge ray. An evaluation of the field for the simple case of a plane wave normally incident on a circular aperture is presented, showing that the field predicted by the Maggi-Rubinowicz theory is continuous across the shadow boundary.

  15. Flame Shapes of Luminous NonBuoyant Laminar Coflowing Jet Diffusion Flames

    NASA Technical Reports Server (NTRS)

    Lin, K.-C.; Faeth, G. M.

    1999-01-01

    Laminar diffusion flames are of interest as model flame systems that are more tractable for analysis and experiments than practical turbulent diffusion flames. Certainly understanding laminar flames must precede understanding more complex turbulent flames, while many laminar diffusion flame properties are directly relevant to turbulent diffusion flames using laminar flamelet concepts. Laminar diffusion flame shapes have been of interest since the classical study of Burke and Schumann because they involve a simple nonintrusive measurement that is convenient for evaluating flame structure predictions. Motivated by these observations, the shapes of laminar flames were considered during the present investigation. The present study was limited to nonbuoyant flames because most practical flames are not buoyant. Effects of buoyancy were minimized by observing flames having large flow velocities at small pressures. Present methods were based on the study of the shapes of nonbuoyant round laminar jet diffusion flames of Lin et al., where it was found that a simple analysis due to Spalding yielded good predictions of the flame shapes reported by Urban et al. and Sunderland et al.

  16. A predictive score to identify hospitalized patients' risk of discharge to a post-acute care facility

    PubMed Central

    Louis Simonet, Martine; Kossovsky, Michel P; Chopard, Pierre; Sigaud, Philippe; Perneger, Thomas V; Gaspoz, Jean-Michel

    2008-01-01

    Background Early identification of patients who need post-acute care (PAC) may improve discharge planning. The purposes of the study were to develop and validate a score predicting discharge to a post-acute care (PAC) facility and to determine its best assessment time. Methods We conducted a prospective study including 349 (derivation cohort) and 161 (validation cohort) consecutive patients in a general internal medicine service of a teaching hospital. We developed logistic regression models predicting discharge to a PAC facility, based on patient variables measured on admission (day 1) and on day 3. The value of each model was assessed by its area under the receiver operating characteristics curve (AUC). A simple numerical score was derived from the best model, and was validated in a separate cohort. Results Prediction of discharge to a PAC facility was as accurate on day 1 (AUC: 0.81) as on day 3 (AUC: 0.82). The day-3 model was more parsimonious, with 5 variables: patient's partner inability to provide home help (4 pts); inability to self-manage drug regimen (4 pts); number of active medical problems on admission (1 pt per problem); dependency in bathing (4 pts) and in transfers from bed to chair (4 pts) on day 3. A score ≥ 8 points predicted discharge to a PAC facility with a sensitivity of 87% and a specificity of 63%, and was significantly associated with inappropriate hospital days due to discharge delays. Internal and external validations confirmed these results. Conclusion A simple score computed on the 3rd hospital day predicted discharge to a PAC facility with good accuracy. A score ≥ 8 points should prompt early discharge planning. PMID:18647410
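
    The day-3 score is simple enough to state as code; the sketch below uses the point values reported above, with illustrative variable names:

    ```python
    def pac_score(partner_cannot_help, cannot_manage_drugs,
                  n_active_problems, dependent_bathing, dependent_transfers):
        """Day-3 score from the abstract: 4 pts per risk factor,
        plus 1 pt per active medical problem on admission."""
        score = 0
        score += 4 if partner_cannot_help else 0
        score += 4 if cannot_manage_drugs else 0
        score += n_active_problems
        score += 4 if dependent_bathing else 0
        score += 4 if dependent_transfers else 0
        return score

    s = pac_score(True, False, 3, True, False)
    print(s, "-> flag for early discharge planning" if s >= 8 else "-> low risk")
    ```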

  17. Predicting dietary intakes with simple food recall information: a case study from rural Mozambique.

    PubMed

    Rose, D; Tschirley, D

    2003-10-01

    Improving dietary status is an important development objective, but monitoring of progress in this area can be too costly for many low-income countries. This paper demonstrates a simple, inexpensive technique for monitoring household diets in Mozambique. Secondary analysis of data from an intensive field survey on household food consumption and agricultural practices, known as the Nampula/Cabo Delgado Study (NCD). In total, 388 households in 16 villages from a stratified random sample of rural areas in Nampula and Cabo Delgado provinces in northern Mozambique. The NCD employed a quantitative 24-h food recall on two nonconsecutive days in each of the three different seasons. A dietary intake prediction model was developed with linear regression techniques based on NCD nutrient intake data and easy-to-collect variables, such as food group consumption and household size. The model was used to predict the prevalence of low intakes among subsamples from the field study using only easy-to-collect variables. Using empirical data for the harvest season from the original NCD study, 40% of the observations on households had low-energy intakes, whereas rates of low intake for protein, vitamin A, and iron were 14, 94, and 39%, respectively. The model developed here predicted that 42% would have low-energy intakes and that 12, 93, and 35% would have low-protein, vitamin A, and iron intakes, respectively. Similarly, close predictions were found using an aggregate index of overall diet quality. This work demonstrates the potential for using low-cost methods for monitoring dietary intake in Mozambique.

  18. In situ Observations of Heliospheric Current Sheets Evolution

    NASA Astrophysics Data System (ADS)

    Liu, Yong; Peng, Jun; Huang, Jia; Klecker, Berndt

    2017-04-01

    We investigate the difference in heliospheric current sheet observation times between spacecraft using STEREO, ACE, and WIND data. The observations are first compared to a simple theory in which the time difference is determined only by the radial and longitudinal separation between the spacecraft. The predictions fit the observations well except for a few events. The time delay caused by the latitudinal separation is then taken into consideration. The latitude of each spacecraft is calculated based on the PFSS model, assuming that heliospheric current sheets propagate at the solar wind speed without changing their shapes from the origin to spacecraft near 1 AU. However, including the latitudinal effects does not improve the prediction, possibly because the PFSS model does not locate the current sheets accurately enough. A new latitudinal delay is then derived from the observed time delays in the ACE data. The new method improves the prediction of the time lag between spacecraft; however, further study is needed to predict the location of the heliospheric current sheet more accurately.
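
    The simple theory referred to attributes the arrival-time difference to radial convection at the solar wind speed plus corotation across the longitudinal separation. A minimal sketch of that estimate (the separations in the example are illustrative):

    ```python
    import math

    OMEGA_SUN = 2.7e-6  # solar sidereal rotation rate, rad/s (~25.4-day period)

    def hcs_time_lag(dr_km, dphi_deg, v_sw_kms):
        """Arrival-time difference (s) between two spacecraft for a corotating
        structure: radial convection delay plus corotation delay.

        dr_km    : radial separation (positive if the second craft is farther out)
        dphi_deg : longitudinal separation the structure must rotate through
        v_sw_kms : solar wind speed, km/s
        """
        return dr_km / v_sw_kms + math.radians(dphi_deg) / OMEGA_SUN

    # Example: 0.05 AU radial and 10 degrees longitudinal separation at 400 km/s.
    print(hcs_time_lag(0.05 * 1.496e8, 10.0, 400.0) / 3600.0, "hours")
    ```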

  19. Predicting the risk of malignancy in adnexal masses based on the Simple Rules from the International Ovarian Tumor Analysis group.

    PubMed

    Timmerman, Dirk; Van Calster, Ben; Testa, Antonia; Savelli, Luca; Fischerova, Daniela; Froyman, Wouter; Wynants, Laure; Van Holsbeke, Caroline; Epstein, Elisabeth; Franchi, Dorella; Kaijser, Jeroen; Czekierdowski, Artur; Guerriero, Stefano; Fruscio, Robert; Leone, Francesco P G; Rossi, Alberto; Landolfo, Chiara; Vergote, Ignace; Bourne, Tom; Valentin, Lil

    2016-04-01

    Accurate methods to preoperatively characterize adnexal tumors are pivotal for optimal patient management. A recent meta-analysis concluded that the International Ovarian Tumor Analysis algorithms such as the Simple Rules are the best approaches to preoperatively classify adnexal masses as benign or malignant. We sought to develop and validate a model to predict the risk of malignancy in adnexal masses using the ultrasound features in the Simple Rules. This was an international cross-sectional cohort study involving 22 oncology centers, referral centers for ultrasonography, and general hospitals. We included consecutive patients with an adnexal tumor who underwent a standardized transvaginal ultrasound examination and were selected for surgery. Data on 5020 patients were recorded in 3 phases from 2002 through 2012. The 5 Simple Rules features indicative of a benign tumor (B-features) and the 5 features indicative of malignancy (M-features) are based on the presence of ascites, tumor morphology, and degree of vascularity at ultrasonography. The gold standard was the histopathologic diagnosis of the adnexal mass (pathologist blinded to ultrasound findings). Logistic regression analysis was used to estimate the risk of malignancy based on the 10 ultrasound features and type of center. The diagnostic performance was evaluated by area under the receiver operating characteristic curve, sensitivity, specificity, positive likelihood ratio (LR+), negative likelihood ratio (LR-), positive predictive value (PPV), negative predictive value (NPV), and calibration curves. Data on 4848 patients were analyzed. The malignancy rate was 43% (1402/3263) in oncology centers and 17% (263/1585) in other centers. The area under the receiver operating characteristic curve on validation data was very similar in oncology centers (0.917; 95% confidence interval, 0.901-0.931) and other centers (0.916; 95% confidence interval, 0.873-0.945). Risk estimates showed good calibration. In all, 23% of patients in the validation data set had a very low estimated risk (<1%) and 48% had a high estimated risk (≥30%). For the 1% risk cutoff, sensitivity was 99.7%, specificity 33.7%, LR+ 1.5, LR- 0.010, PPV 44.8%, and NPV 98.9%. For the 30% risk cutoff, sensitivity was 89.0%, specificity 84.7%, LR+ 5.8, LR- 0.13, PPV 75.4%, and NPV 93.9%. Quantification of the risk of malignancy based on the Simple Rules has good diagnostic performance both in oncology centers and other centers. A simple classification based on these risk estimates may form the basis of a clinical management system. Patients with a high risk may benefit from surgery by a gynecological oncologist, while patients with a lower risk may be managed locally. Copyright © 2016 Elsevier Inc. All rights reserved.
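
    Given model-estimated risks and the histopathology labels, the cutoff-based performance measures reported above follow directly from the 2x2 table. A minimal sketch with synthetic risks (the data and cutoffs below are illustrative; only the formulas mirror the paper's metrics):

    ```python
    import numpy as np

    def cutoff_performance(risk, malignant, cutoff):
        """Performance of classifying a mass as malignant when risk >= cutoff.

        risk      : model-estimated risks in [0, 1]
        malignant : 0/1 histopathology outcome (the gold standard)
        """
        risk, malignant = np.asarray(risk), np.asarray(malignant)
        pos = risk >= cutoff
        tp = np.sum(pos & (malignant == 1)); fp = np.sum(pos & (malignant == 0))
        fn = np.sum(~pos & (malignant == 1)); tn = np.sum(~pos & (malignant == 0))
        sens, spec = tp / (tp + fn), tn / (tn + fp)
        return {"sensitivity": sens, "specificity": spec,
                "LR+": sens / (1 - spec), "LR-": (1 - sens) / spec,
                "PPV": tp / (tp + fp), "NPV": tn / (tn + fn)}

    # Illustrative data; the paper reports these measures at 1% and 30% cutoffs.
    rng = np.random.default_rng(1)
    outcome = rng.integers(0, 2, 1000)
    risk = np.clip(0.3 * outcome + rng.beta(2, 5, 1000), 0, 1)
    for c in (0.10, 0.30):
        stats = cutoff_performance(risk, outcome, c)
        print(c, {k: round(float(v), 2) for k, v in stats.items()})
    ```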

  20. A Unified and Comprehensible View of Parametric and Kernel Methods for Genomic Prediction with Application to Rice.

    PubMed

    Jacquin, Laval; Cao, Tuong-Vi; Ahmadi, Nourollah

    2016-01-01

    One objective of this study was to provide readers with a clear and unified understanding of parametric statistical and kernel methods, used for genomic prediction, and to compare some of these in the context of rice breeding for quantitative traits. Furthermore, another objective was to provide a simple and user-friendly R package, named KRMM, which allows users to perform RKHS regression with several kernels. After introducing the concept of regularized empirical risk minimization, the connections between well-known parametric and kernel methods such as Ridge regression [i.e., genomic best linear unbiased predictor (GBLUP)] and reproducing kernel Hilbert space (RKHS) regression were reviewed. Ridge regression was then reformulated so as to show and emphasize the advantage of the kernel "trick" concept, exploited by kernel methods in the context of epistatic genetic architectures, over parametric frameworks used by conventional methods. Several parametric and kernel methods, namely the least absolute shrinkage and selection operator (LASSO), GBLUP, support vector machine regression (SVR), and RKHS regression, were thereupon compared for their genomic predictive ability in the context of rice breeding using three real data sets. Among the compared methods, RKHS regression and SVR were often the most accurate methods for prediction, followed by GBLUP and LASSO. An R function which allows users to perform RR-BLUP of marker effects, GBLUP and RKHS regression, with a Gaussian, Laplacian, polynomial or ANOVA kernel, in a reasonable computation time has been developed. Moreover, a modified version of this function, which allows users to tune kernels for RKHS regression, has also been developed and parallelized for HPC Linux clusters. The corresponding KRMM package and all scripts have been made publicly available.
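
    The equivalence the authors exploit, that GBLUP is ridge regression with a linear kernel and that RKHS regression simply swaps in another kernel, can be sketched in a few lines of linear algebra. A toy numpy illustration (the kernels, bandwidth, and regularization values are arbitrary choices, and KRMM itself is an R package):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Toy marker matrix (n individuals x p markers coded 0/1/2) and a phenotype
    # with additive plus one pairwise (epistatic-like) effect.
    n, p = 150, 300
    Z = rng.integers(0, 3, (n, p)).astype(float)
    y = Z[:, :15] @ rng.normal(0, 0.3, 15) + 0.2 * Z[:, 0] * Z[:, 1] + rng.normal(0, 1, n)

    # Linear kernel: kernel ridge with K = ZZ' is equivalent to ridge/GBLUP.
    K_lin = Z @ Z.T
    # Gaussian kernel: the RKHS alternative, able to capture non-additive signal.
    d2 = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    K_rbf = np.exp(-d2 / np.median(d2[d2 > 0]))

    def fit_predict(K, y, train, test, lam):
        """Kernel ridge: solve (K_tt + lam*I) alpha = y_t, predict with K_st."""
        alpha = np.linalg.solve(K[np.ix_(train, train)] + lam * np.eye(len(train)),
                                y[train])
        return K[np.ix_(test, train)] @ alpha

    train, test = np.arange(100), np.arange(100, n)
    for name, K, lam in [("linear (GBLUP-like)", K_lin, 100.0),
                         ("Gaussian (RKHS)", K_rbf, 1.0)]:
        pred = fit_predict(K, y, train, test, lam)
        print(name, "predictive correlation:",
              round(float(np.corrcoef(pred, y[test])[0, 1]), 2))
    ```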

  1. Multivariate methods on the excitation emission matrix fluorescence spectroscopic data of diesel-kerosene mixtures: a comparative study.

    PubMed

    Divya, O; Mishra, Ashok K

    2007-05-29

    Quantitative determination of the kerosene fraction present in diesel has been carried out based on excitation emission matrix fluorescence (EEMF) along with parallel factor analysis (PARAFAC) and N-way partial least squares regression (N-PLS). EEMF is a simple, sensitive and nondestructive method suitable for the analysis of multifluorophoric mixtures. Calibration models consisting of varying compositions of diesel and kerosene were constructed and validated using the leave-one-out cross-validation method. The accuracy of each model was evaluated through the root mean square error of prediction (RMSEP) for the PARAFAC, N-PLS and unfold-PLS methods. N-PLS was found to be a better method than PARAFAC and unfold-PLS because of its low RMSEP values.

  2. Predicting Drug-Target Interactions for New Drug Compounds Using a Weighted Nearest Neighbor Profile.

    PubMed

    van Laarhoven, Twan; Marchiori, Elena

    2013-01-01

    In silico discovery of interactions between drug compounds and target proteins is of core importance for improving the efficiency of the laborious and costly experimental determination of drug-target interaction. Drug-target interaction data are available for many classes of pharmaceutically useful target proteins including enzymes, ion channels, GPCRs and nuclear receptors. However, current drug-target interaction databases contain a small number of drug-target pairs which are experimentally validated interactions. In particular, for some drug compounds (or targets) there is no available interaction. This motivates the need for developing methods that predict interacting pairs with high accuracy also for these 'new' drug compounds (or targets). We show that a simple weighted nearest neighbor procedure is highly effective for this task. We integrate this procedure into a recent machine learning method for drug-target interaction we developed in previous work. Results of experiments indicate that the resulting method predicts true interactions with high accuracy also for new drug compounds and achieves results comparable or better than those of recent state-of-the-art algorithms. Software is publicly available at http://cs.ru.nl/~tvanlaarhoven/drugtarget2013/.
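
    One plausible reading of the weighted nearest neighbor procedure is to combine the interaction profiles of chemically similar known drugs with weights that decay over the similarity ranking. A minimal sketch of that idea (the decay rule and values are illustrative, not necessarily the authors' exact formulation):

    ```python
    import numpy as np

    def wnn_profile(sim_to_new, Y_known, decay=0.7):
        """Weighted nearest-neighbour interaction profile for a new drug.

        sim_to_new : similarity of the new drug to each known drug
        Y_known    : (n_drugs x n_targets) 0/1 known interaction matrix
        decay      : geometric down-weighting over the similarity ranking
        """
        order = np.argsort(sim_to_new)[::-1]               # most similar first
        weights = decay ** np.arange(len(order)) * np.asarray(sim_to_new)[order]
        profile = weights @ Y_known[order]
        return profile / profile.max()                     # scores scaled to [0, 1]

    # Tiny illustration: 4 known drugs, 5 targets.
    Y = np.array([[1, 0, 0, 1, 0],
                  [1, 1, 0, 0, 0],
                  [0, 0, 1, 0, 1],
                  [0, 0, 0, 0, 1]], dtype=float)
    sim = np.array([0.9, 0.7, 0.2, 0.1])                   # similarity to new drug
    print(wnn_profile(sim, Y).round(2))                    # ranked target scores
    ```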

  3. Life Prediction/Reliability Data of Glass-Ceramic Material Determined for Radome Applications

    NASA Technical Reports Server (NTRS)

    Choi, Sung R.; Gyekenyesi, John P.

    2002-01-01

    Brittle ceramic materials are candidates for a variety of structural applications over a wide range of temperatures. However, the process of slow crack growth, occurring in any loading configuration, limits the service life of structural components. Therefore, it is important to accurately determine the slow crack growth parameters required for component life prediction using an appropriate test methodology. This test methodology should also be useful in determining the influence of component processing and composition variables on the slow crack growth behavior of newly developed or existing materials, thereby allowing the component processing and composition to be tailored and optimized to specific needs. Through the American Society for Testing and Materials (ASTM), the authors recently developed two test methods to determine the life prediction parameters of ceramics. The two test standards, ASTM C 1368 for room temperature and ASTM C 1465 for elevated temperatures, were published in the 2001 Annual Book of ASTM Standards, Vol. 15.01. Briefly, the test method employs constant stress-rate (or dynamic fatigue) testing to determine flexural strengths as a function of the applied stress rate. The merit of this test method lies in its simplicity: strengths are measured in a routine manner in flexure at four or more applied stress rates with an appropriate number of test specimens at each applied stress rate. The slow crack growth parameters necessary for life prediction are then determined from a simple relationship between the strength and the applied stress rate. Extensive life prediction testing was conducted at the NASA Glenn Research Center using the developed ASTM C 1368 test method to determine the life prediction parameters of a glass-ceramic material that the Navy will use for radome applications.
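
    The strength/stress-rate relationship used in these standards is linear on log-log axes with slope 1/(n+1), where n is the slow crack growth exponent. A minimal fitting sketch (the strength values are invented for illustration):

    ```python
    import numpy as np

    # Illustrative constant stress-rate (dynamic fatigue) data: applied stress
    # rates (MPa/s) and mean flexural strengths (MPa). The numbers are invented.
    stress_rate = np.array([0.03, 0.3, 3.0, 30.0])
    strength = np.array([95.0, 105.0, 117.0, 130.0])

    # log10(strength) = (1/(n+1)) * log10(stress_rate) + log10(D)
    slope, intercept = np.polyfit(np.log10(stress_rate), np.log10(strength), 1)
    n = 1.0 / slope - 1.0          # slow crack growth exponent
    D = 10.0 ** intercept          # strength at unit stress rate

    print(f"slow crack growth exponent n = {n:.1f}, D = {D:.1f} MPa")
    ```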

  4. High-order Newton-penalty algorithms

    NASA Astrophysics Data System (ADS)

    Dussault, Jean-Pierre

    2005-10-01

    Recent efforts in differentiable non-linear programming have been focused on interior point methods, akin to penalty and barrier algorithms. In this paper, we address the classical equality constrained program solved using the simple quadratic loss penalty function/algorithm. The suggestion to use extrapolations to track the differentiable trajectory associated with penalized subproblems goes back to the classic monograph of Fiacco & McCormick. This idea was further developed by Gould, who obtained a two-step quadratically convergent algorithm using prediction steps and Newton correction. Dussault interpreted the prediction step as a combined extrapolation with respect to the penalty parameter and the residual of the first order optimality conditions. Extrapolation with respect to the residual coincides with a Newton step. We explore here higher-order extrapolations, and thus higher-order Newton-like methods. We first consider high-order variants of the Newton-Raphson method applied to non-linear systems of equations. Next, we obtain improved asymptotic convergence results for the quadratic loss penalty algorithm by using high-order extrapolation steps.

  5. Charge transfer kinetics at the solid-solid interface in porous electrodes

    NASA Astrophysics Data System (ADS)

    Bai, Peng; Bazant, Martin Z.

    2014-04-01

    Interfacial charge transfer is widely assumed to obey the Butler-Volmer kinetics. For certain liquid-solid interfaces, the Marcus-Hush-Chidsey theory is more accurate and predictive, but it has not been applied to porous electrodes. Here we report a simple method to extract the charge transfer rates in carbon-coated LiFePO4 porous electrodes from chronoamperometry experiments, obtaining curved Tafel plots that contradict the Butler-Volmer equation but fit the Marcus-Hush-Chidsey prediction over a range of temperatures. The fitted reorganization energy matches the Born solvation energy for electron transfer from carbon to the iron redox site. The kinetics are thus limited by electron transfer at the solid-solid (carbon-LixFePO4) interface rather than by ion transfer at the liquid-solid interface, as previously assumed. The proposed experimental method generalizes Chidsey’s method for phase-transforming particles and porous electrodes, and the results show the need to incorporate Marcus kinetics in modelling batteries and other electrochemical systems.

  6. High fidelity studies of exploding foil initiator bridges, Part 1: Experimental method

    NASA Astrophysics Data System (ADS)

    Bowden, Mike; Neal, William

    2017-01-01

    Simulations of high voltage detonators, such as Exploding Bridgewire (EBW) and Exploding Foil Initiators (EFI), have historically been simple, often empirical, one-dimensional models capable of predicting parameters such as current, voltage and in the case of EFIs, flyer velocity. Correspondingly, experimental methods have in general been limited to the same parameters. With the advent of complex, first principles magnetohydrodynamic codes such as ALEGRA and ALE-MHD, it is now possible to simulate these components in three dimensions, predicting a much greater range of parameters than before. A significant improvement in experimental capability was therefore required to ensure these simulations could be adequately validated. In this first paper of a three part study, the experimental method for determining the current, voltage, flyer velocity and multi-dimensional profile of detonator components is presented. This improved capability, along with high fidelity simulations, offer an opportunity to gain a greater understanding of the processes behind the functioning of EBW and EFI detonators.

  7. Improved Accuracy of the Inherent Shrinkage Method for Fast and More Reliable Welding Distortion Calculations

    NASA Astrophysics Data System (ADS)

    Mendizabal, A.; González-Díaz, J. B.; San Sebastián, M.; Echeverría, A.

    2016-07-01

    This paper describes the implementation of a simple strategy adopted for the inherent shrinkage method (ISM) to predict welding-induced distortion. This strategy not only makes it possible for the ISM to reach accuracy levels similar to the detailed transient analysis method (considered the most reliable technique for calculating welding distortion) but also significantly reduces the time required for these types of calculations. This strategy is based on the sequential activation of welding blocks to account for welding direction and transient movement of the heat source. As a result, a significant improvement in distortion prediction is achieved. This is demonstrated by experimentally measuring and numerically analyzing distortions in two case studies: a vane segment subassembly of an aero-engine, represented with 3D-solid elements, and a car body component, represented with 3D-shell elements. The proposed strategy proves to be a good alternative for quickly estimating the correct behaviors of large welded components and may have important practical applications in the manufacturing industry.

  8. Creep-rupture reliability analysis

    NASA Technical Reports Server (NTRS)

    Peralta-Duran, A.; Wirsching, P. H.

    1984-01-01

    A probabilistic approach to the correlation and extrapolation of creep-rupture data is presented. Time temperature parameters (TTP) are used to correlate the data, and an analytical expression for the master curve is developed. The expression provides a simple model for the statistical distribution of strength and fits neatly into a probabilistic design format. The analysis focuses on the Larson-Miller and on the Manson-Haferd parameters, but it can be applied to any of the TTP's. A method is developed for evaluating material dependent constants for TTP's. It is shown that optimized constants can provide a significant improvement in the correlation of the data, thereby reducing modelling error. Attempts were made to quantify the performance of the proposed method in predicting long term behavior. Uncertainty in predicting long term behavior from short term tests was derived for several sets of data. Examples are presented which illustrate the theory and demonstrate the application of state of the art reliability methods to the design of components under creep.
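
    Of the TTPs mentioned, the Larson-Miller parameter is the easiest to illustrate: equal parameter values imply equal creep strength, so a short test at high temperature can be mapped to a long life at service temperature. A minimal sketch (C = 20 is the conventional default, not a fitted material constant):

    ```python
    import numpy as np

    def larson_miller(T_kelvin, t_rupture_hours, C=20.0):
        """Larson-Miller parameter LMP = T * (C + log10 t_r); C ~ 20 is the
        conventional default, though in practice it is fitted per material."""
        return T_kelvin * (C + np.log10(t_rupture_hours))

    # Equal LMP implies equal stress: map a 100 h rupture test at 1100 K to a
    # predicted life at a 950 K service temperature.
    C = 20.0
    lmp = larson_miller(1100.0, 100.0, C)
    t_service = 10.0 ** (lmp / 950.0 - C)
    print(f"LMP = {lmp:.0f}; predicted life at 950 K: {t_service:.2e} h")
    ```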

  9. Bias and Stability of Single Variable Classifiers for Feature Ranking and Selection

    PubMed Central

    Fakhraei, Shobeir; Soltanian-Zadeh, Hamid; Fotouhi, Farshad

    2014-01-01

    Feature rankings are often used for supervised dimension reduction, especially when the discriminating power of each feature is of interest, the dimensionality of the dataset is extremely high, or computational power is too limited to perform more complicated methods. In practice, it is recommended to start dimension reduction with simple methods such as feature rankings before applying more complex approaches. Single Variable Classifier (SVC) ranking is a feature ranking based on the predictive performance of a classifier built using only a single feature. While benefiting from the capabilities of classifiers, this ranking method is not as computationally intensive as wrappers. In this paper, we report the results of an extensive study on the bias and stability of such feature ranking methods. We study whether the classifiers influence the SVC rankings or whether the discriminative power of the features themselves has a dominant impact on the final rankings. We show that the common intuition of using the same classifier for feature ranking and final classification does not always result in the best prediction performance. We then study whether heterogeneous classifier ensemble approaches provide more unbiased rankings and whether they improve final classification performance. Furthermore, we calculate an empirical prediction performance loss for using the same classifier in SVC feature ranking and final classification, relative to the optimal choices. PMID:25177107
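
    An SVC ranking can be reproduced in a few lines: score each feature by the cross-validated performance of a classifier trained on that feature alone, then sort. A minimal sketch (the classifier, fold count, and AUC metric are illustrative choices):

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    X, y = make_classification(n_samples=300, n_features=20, n_informative=5,
                               random_state=0)

    def svc_rank(X, y):
        """Single Variable Classifier ranking: score each feature by the
        cross-validated AUC of a classifier trained on that feature alone."""
        scores = np.array([cross_val_score(LogisticRegression(), X[:, [j]], y,
                                           cv=5, scoring="roc_auc").mean()
                           for j in range(X.shape[1])])
        return np.argsort(scores)[::-1], scores

    order, scores = svc_rank(X, y)
    print("top 5 features:", order[:5], "AUCs:", scores[order[:5]].round(3))
    ```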

  10. Bias and Stability of Single Variable Classifiers for Feature Ranking and Selection.

    PubMed

    Fakhraei, Shobeir; Soltanian-Zadeh, Hamid; Fotouhi, Farshad

    2014-11-01

    Feature rankings are often used for supervised dimension reduction, especially when the discriminating power of each feature is of interest, the dimensionality of the dataset is extremely high, or computational power is too limited to perform more complicated methods. In practice, it is recommended to start dimension reduction with simple methods such as feature rankings before applying more complex approaches. Single Variable Classifier (SVC) ranking is a feature ranking based on the predictive performance of a classifier built using only a single feature. While benefiting from the capabilities of classifiers, this ranking method is not as computationally intensive as wrappers. In this paper, we report the results of an extensive study on the bias and stability of such feature ranking methods. We study whether the classifiers influence the SVC rankings or whether the discriminative power of the features themselves has a dominant impact on the final rankings. We show that the common intuition of using the same classifier for feature ranking and final classification does not always result in the best prediction performance. We then study whether heterogeneous classifier ensemble approaches provide more unbiased rankings and whether they improve final classification performance. Furthermore, we calculate an empirical prediction performance loss for using the same classifier in SVC feature ranking and final classification, relative to the optimal choices.

  11. A moment projection method for population balance dynamics with a shrinkage term

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Shaohua; Yapp, Edward K.Y.; Akroyd, Jethro

    A new method of moments for solving the population balance equation is developed and presented. The moment projection method (MPM) is numerically simple and easy to implement and attempts to address the challenge of particle shrinkage due to processes such as oxidation, evaporation or dissolution. It directly solves the moment transport equation for the moments and tracks the number of the smallest particles using the algorithm by Blumstein and Wheeler (1973). The performance of the new method is measured against the method of moments (MOM) and the hybrid method of moments (HMOM). The results suggest that MPM performs much better than MOM and HMOM where shrinkage is dominant. The new method predicts mean quantities which are almost as accurate as a high-precision stochastic method calculated using the established direct simulation algorithm (DSA).

  12. Prediction of mitochondrial proteins of malaria parasite using split amino acid composition and PSSM profile.

    PubMed

    Verma, Ruchi; Varshney, Grish C; Raghava, G P S

    2010-06-01

    Human mortality due to malaria continues to rise, so the malaria-causing parasite Plasmodium falciparum (PF) remains a cause of concern. With the wealth of data now available, it is imperative to understand protein localization in order to gain deeper insight into functional roles. In this manuscript, an attempt has been made to develop a prediction method for the localization of mitochondrial proteins. In this study, we describe a method for predicting mitochondrial proteins of the malaria parasite using machine-learning techniques. All models were trained and tested on 175 proteins (40 mitochondrial and 135 non-mitochondrial proteins) and evaluated using five-fold cross validation. We developed a Support Vector Machine (SVM) model for predicting mitochondrial proteins of P. falciparum using amino acid and dipeptide composition, and achieved maximum MCCs of 0.38 and 0.51, respectively. In this study, split amino acid composition (SAAC) is used, where the composition of the N-terminus, the C-terminus, and the rest of the protein is computed separately. The performance of the SVM model improved significantly, from MCC 0.38 to 0.73, when SAAC was used as input instead of simple amino acid composition. In addition, an SVM model has been developed using the composition of the PSSM profile, with MCC 0.75 and accuracy 91.38%. We achieved a maximum MCC of 0.81 with accuracy 92% using a hybrid model, which combines the PSSM profile and SAAC. When evaluated on an independent dataset, our method performs better than existing methods. A web server, PFMpred, has been developed for predicting mitochondrial proteins of malaria parasites ( http://www.imtech.res.in/raghava/pfmpred/).
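
    Split amino acid composition is straightforward to compute: the 20-dimensional composition vector is evaluated separately for the N-terminus, the C-terminus, and the remainder, then concatenated. A minimal sketch (the 25-residue segment lengths are an assumed choice, not necessarily the paper's):

    ```python
    from collections import Counter

    AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

    def composition(seq):
        """Fractional amino acid composition (20-dimensional vector)."""
        counts = Counter(seq)
        total = max(len(seq), 1)
        return [counts.get(aa, 0) / total for aa in AMINO_ACIDS]

    def saac(seq, n_term=25, c_term=25):
        """Split amino acid composition: composition of the N-terminus, the
        middle, and the C-terminus computed separately and concatenated."""
        middle = seq[n_term:-c_term] if len(seq) > n_term + c_term else ""
        return (composition(seq[:n_term]) + composition(middle)
                + composition(seq[-c_term:]))

    features = saac("MKTFFVLLLACVIA" * 10)    # toy sequence, 140 residues
    print(len(features))                      # 60-dimensional SVM input
    ```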

  13. SIMPL: A Simplified Model-Based Program for the Analysis and Visualization of Groundwater Rebound in Abandoned Mines to Prevent Contamination of Water and Soils by Acid Mine Drainage

    PubMed Central

    Kim, Sung-Min

    2018-01-01

    Cessation of dewatering following underground mine closure typically results in groundwater rebound, because mine voids and surrounding strata undergo flooding up to the levels of the decant points, such as shafts and drifts. SIMPL (Simplified groundwater program In Mine workings using the Pipe equation and Lumped parameter model), a simplified lumped-parameter-model-based program for predicting groundwater levels in abandoned mines, is presented herein. The program comprises a simulation engine module, a 3D visualization module, and a graphical user interface, which aids data processing, analysis, and visualization of results. The 3D viewer facilitates effective visualization of the predicted groundwater rebound together with a topographic map, mine drifts, goaf, and geological properties from borehole data. SIMPL is applied to data from the Dongwon coal mine and the Dalsung copper mine in Korea, with strong similarities between simulated and observed results. By considering mine workings and inter-pond connections, SIMPL can thus be used to effectively analyze and visualize groundwater rebound. In addition, the predictions by SIMPL can be utilized to prevent the surrounding environment (water and soil) from being polluted by acid mine drainage. PMID:29747480

  14. Recent developments on SMA actuators: predicting the actuation fatigue life for variable loading schemes

    NASA Astrophysics Data System (ADS)

    Wheeler, Robert W.; Lagoudas, Dimitris C.

    2017-04-01

    Shape memory alloys (SMAs), due to their ability to repeatably recover substantial deformations under applied mechanical loading, have the potential to impact the aerospace, automotive, biomedical, and energy industries as weight- and volume-saving replacements for conventional actuators. While numerous applications of SMA actuators have been flight tested and can be found in industrial applications, these actuators are generally limited to non-critical components, are not widely implemented, and are frequently one-off designs; they are also generally overdesigned, owing to a poor understanding of the effect of the loading path on fatigue life and the lack of an accurate method for predicting actuator lifetimes. In recent years, multiple research efforts have increased our understanding of the actuation fatigue process of SMAs. These advances can be utilized to predict the fatigue lives and failure loads of SMA actuators. Additionally, these prediction methods can be implemented in order to intelligently design actuators in accordance with their fatigue and failure limits. In the following paper, both simple and complex thermomechanical loading paths have been considered. Experimental data were utilized from two material systems: equiatomic Nickel-Titanium and Nickel-rich Nickel-Titanium.

  15. e-Bitter: Bitterant Prediction by the Consensus Voting From the Machine-Learning Methods

    PubMed Central

    Zheng, Suqing; Jiang, Mengying; Zhao, Chengwei; Zhu, Rui; Hu, Zhicheng; Xu, Yong; Lin, Fu

    2018-01-01

    In silico bitterant prediction has received considerable attention because experimental screening of bitterants is expensive and laborious. In this work, we collect a fully experimental dataset containing 707 bitterants and 592 non-bitterants, which is distinct from the fully or partially hypothetical non-bitterant datasets used in previous works. Based on this experimental dataset, we harness consensus votes from multiple machine-learning methods (e.g., deep learning) combined with molecular fingerprints to build bitter/bitterless classification models with five-fold cross-validation, which are further inspected by a Y-randomization test and applicability domain analysis. One of the best consensus models affords an accuracy, precision, specificity, sensitivity, F1-score, and Matthews correlation coefficient (MCC) of 0.929, 0.918, 0.898, 0.954, 0.936, and 0.856, respectively, on our test set. For automatic prediction of bitterants, a graphic program "e-Bitter" was developed for the convenience of users via simple mouse clicks. To the best of our knowledge, this is the first time a consensus model has been adopted for bitterant prediction, and the first free stand-alone software of this kind for experimental food scientists. PMID:29651416
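
    The consensus step itself is a majority vote over the predictions of independently trained classifiers. A minimal sketch with stand-in fingerprint features (the three model types are illustrative; the paper's ensemble and descriptors differ):

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    # Stand-in for molecular fingerprints of bitter (1) / bitterless (0) compounds.
    X, y = make_classification(n_samples=600, n_features=64, random_state=0)
    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.2, random_state=0)

    models = [LogisticRegression(max_iter=1000),
              RandomForestClassifier(n_estimators=200, random_state=0),
              MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)]
    for m in models:
        m.fit(Xtr, ytr)

    # Consensus: majority vote over the individual model predictions.
    votes = np.stack([m.predict(Xte) for m in models])
    consensus = (votes.mean(axis=0) >= 0.5).astype(int)
    print("consensus accuracy:", round(float((consensus == yte).mean()), 3))
    ```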

  16. The Detection and Quantification of Adulteration in Ground Roasted Asian Palm Civet Coffee Using Near-Infrared Spectroscopy in Tandem with Chemometrics

    NASA Astrophysics Data System (ADS)

    Suhandy, D.; Yulia, M.; Ogawa, Y.; Kondo, N.

    2018-05-01

    In the present research, an evaluation of near infrared (NIR) spectroscopy in tandem with full spectrum partial least squares (FS-PLS) regression for quantifying the degree of adulteration in civet coffee was conducted. A total of 126 ground roasted coffee samples with degrees of adulteration of 0-51% were prepared. Spectral data were acquired using a NIR spectrometer equipped with an integrating sphere for diffuse reflectance measurement in the range of 1300-2500 nm. The samples were divided into two groups: a calibration sample set (84 samples) and a prediction sample set (42 samples). The calibration model was developed on the original spectra using FS-PLS regression with the full cross-validation method. The calibration model exhibited a determination coefficient of R2 = 0.96 for calibration and R2 = 0.92 for validation. The prediction resulted in a low root mean square error of prediction (RMSEP) of 4.67% and a high ratio of prediction to deviation (RPD) of 3.75. In conclusion, the degree of adulteration in civet coffee was quantified successfully using NIR spectroscopy and FS-PLS regression in a non-destructive, economical, precise, and highly sensitive method with very simple sample preparation.
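
    The evaluation pipeline, PLS calibration followed by RMSEP and RPD on a held-out prediction set, is easy to reproduce. A minimal sketch on synthetic spectra (the data and the number of latent variables are illustrative):

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(3)

    # Synthetic stand-in for NIR spectra (126 samples x 400 wavelengths) whose
    # signal varies linearly with the adulteration level (0-51%).
    n, wl = 126, 400
    adulteration = rng.uniform(0, 51, n)
    spectra = (np.outer(adulteration, rng.normal(0, 0.01, wl))
               + rng.normal(0, 0.05, (n, wl)))

    cal, pred = np.arange(84), np.arange(84, n)
    pls = PLSRegression(n_components=8).fit(spectra[cal], adulteration[cal])
    y_hat = pls.predict(spectra[pred]).ravel()

    rmsep = np.sqrt(np.mean((y_hat - adulteration[pred]) ** 2))
    rpd = np.std(adulteration[pred]) / rmsep     # ratio of prediction to deviation
    print(f"RMSEP = {rmsep:.2f}%, RPD = {rpd:.2f}")
    ```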

  17. Amplitude modulation detection by human listeners in sound fields.

    PubMed

    Zahorik, Pavel; Kim, Duck O; Kuwada, Shigeyuki; Anderson, Paul W; Brandewie, Eugene; Srinivasan, Nirmal

    2011-10-01

    The temporal modulation transfer function (TMTF) approach allows techniques from linear systems analysis to be used to predict how the auditory system will respond to arbitrary patterns of amplitude modulation (AM). Although this approach forms the basis for a standard method of predicting speech intelligibility based on estimates of the acoustical modulation transfer function (MTF) between source and receiver, human sensitivity to AM as characterized by the TMTF has not been extensively studied under realistic listening conditions, such as in reverberant sound fields. Here, TMTFs (octave bands from 2 - 512 Hz) were obtained in 3 listening conditions simulated using virtual auditory space techniques: diotic, anechoic sound field, reverberant room sound field. TMTFs were then related to acoustical MTFs estimated using two different methods in each of the listening conditions. Both diotic and anechoic data were found to be in good agreement with classic results, but AM thresholds in the reverberant room were lower than predictions based on acoustical MTFs. This result suggests that simple linear systems techniques may not be appropriate for predicting TMTFs from acoustical MTFs in reverberant sound fields, and may be suggestive of mechanisms that functionally enhance modulation during reverberant listening.

  18. e-Bitter: Bitterant Prediction by the Consensus Voting From the Machine-learning Methods

    NASA Astrophysics Data System (ADS)

    Zheng, Suqing; Jiang, Mengying; Zhao, Chengwei; Zhu, Rui; Hu, Zhicheng; Xu, Yong; Lin, Fu

    2018-03-01

    In silico bitterant prediction has received considerable attention because experimental screening of bitterants is expensive and laborious. In this work, we collect a fully experimental dataset containing 707 bitterants and 592 non-bitterants, which is distinct from the fully or partially hypothetical non-bitterant datasets used in previous works. Based on this experimental dataset, we harness consensus votes from multiple machine-learning methods (e.g., deep learning) combined with molecular fingerprints to build bitter/bitterless classification models with five-fold cross-validation, which are further inspected by a Y-randomization test and applicability domain analysis. One of the best consensus models affords an accuracy, precision, specificity, sensitivity, F1-score, and Matthews correlation coefficient (MCC) of 0.929, 0.918, 0.898, 0.954, 0.936, and 0.856, respectively, on our test set. For automatic prediction of bitterants, a graphic program "e-Bitter" was developed for the convenience of users via simple mouse clicks. To the best of our knowledge, this is the first time a consensus model has been adopted for bitterant prediction, and the first free stand-alone software of this kind for experimental food scientists.

  19. Novel patch modelling method for efficient simulation and prediction uncertainty analysis of multi-scale groundwater flow and transport processes

    NASA Astrophysics Data System (ADS)

    Sreekanth, J.; Moore, Catherine

    2018-04-01

    The application of global sensitivity and uncertainty analysis techniques to groundwater models of deep sedimentary basins is typically challenged by large computational burdens combined with associated numerical stability issues. The highly parameterized approaches required for exploring the predictive uncertainty associated with the heterogeneous hydraulic characteristics of multiple aquifers and aquitards in these sedimentary basins exacerbate these issues. A novel Patch Modelling Methodology is proposed for improving the computational feasibility of stochastic modelling analysis of large-scale and complex groundwater models. The method incorporates a nested groundwater modelling framework that enables efficient simulation of groundwater flow and transport across multiple spatial and temporal scales. The method also allows different processes to be simulated within different model scales. Existing nested model methodologies are extended by employing 'joining predictions' for extrapolating prediction-salient information from one model scale to the next. This establishes a feedback mechanism supporting the transfer of information from child models to parent models, as well as from parent models to child models, in a computationally efficient manner. This feedback mechanism is simple and flexible, and it ensures that the salient small-scale features influencing larger-scale predictions are transferred back to the larger scale without requiring live coupling of the models. The method allows multiple groundwater flow and transport processes to be modelled with separate groundwater models, each built for the appropriate spatial and temporal scale, within a stochastic framework, while removing the computational burden associated with live model coupling. The utility of the method is demonstrated by application to an actual large-scale aquifer injection scheme in Australia.

  20. A simple formula for predicting claw volume of cattle.

    PubMed

    Scott, T D; Naylor, J M; Greenough, P R

    1999-11-01

    The object of this study was to develop a simple method for accurately calculating the volume of bovine claws under field conditions. The digits of 30 slaughterhouse beef cattle were examined and the following four linear measurements taken from each pair of claws: (1) the length of the dorsal surface of the claw (Toe); (2) the length of the coronary band (CorBand); (3) the length of the bearing surface (Base); and (4) the height of the claw at the abaxial groove (AbaxGr). Measurements of claw volume using a simple hydrometer were highly repeatable (r² = 0.999), and volume could be calculated from the linear measurements using the formula:

        Claw Volume (cm³) = (17.192 x Base) + (7.467 x AbaxGr) + (45.270 x CorBand) - 798.5

    This formula was found to be accurate (r² = 0.88) when compared to volume data derived from a hydrometer displacement procedure. The front claws occupied 54% of the total volume compared to 46% for the hind claws. Copyright 1999 Harcourt Publishers Ltd.
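
    The formula is a drop-in calculation. A minimal sketch (the example measurements are invented):

    ```python
    def claw_volume(base_cm, abax_gr_cm, cor_band_cm):
        """Claw volume (cm^3) from the regression formula in the abstract."""
        return 17.192 * base_cm + 7.467 * abax_gr_cm + 45.270 * cor_band_cm - 798.5

    # Invented example measurements (cm): bearing surface length, abaxial groove
    # height, coronary band length.
    print(f"{claw_volume(14.0, 6.5, 16.0):.0f} cm^3")
    ```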

  1. The structure of tropical forests and sphere packings

    PubMed Central

    Jahn, Markus Wilhelm; Dobner, Hans-Jürgen; Wiegand, Thorsten; Huth, Andreas

    2015-01-01

    The search for simple principles underlying the complex architecture of ecological communities such as forests still challenges ecological theorists. We use tree diameter distributions—fundamental for deriving other forest attributes—to describe the structure of tropical forests. Here we argue that the tree diameter distributions of natural tropical forests can be explained by the stochastic packing of tree crowns, treating the forest as a crown packing system, an approach more commonly used in physics or chemistry. We demonstrate that tree diameter distributions emerge accurately from a surprisingly simple set of principles that include site-specific tree allometries, random placement of trees, competition for space, and mortality. The simple static model also successfully predicted the canopy structure, revealing that most trees in our two studied forests grow up to 30–50 m in height and that the highest packing density of about 60% is reached between the 25- and 40-m height layers. Our approach is an important step toward identifying a minimal set of processes responsible for generating the spatial structure of tropical forests. PMID:26598678

  2. A prospective study of the Bedside Index for Severity in Acute Pancreatitis (BISAP) score in acute pancreatitis: an Indian perspective.

    PubMed

    Senapati, Debadutta; Debata, Prasanna Kumar; Jenasamant, Saumya Sekhar; Nayak, Anil Kumar; Gowda S, Manoj; Swain, Narendra Nath

    2014-01-01

    A simple and easily applicable system for stratifying patients with acute pancreatitis is lacking. The aim of our study was to evaluate the ability of the BISAP score to predict mortality in acute pancreatitis patients from our institution, and to predict which patients are at risk of developing organ failure, persistent organ failure, and pancreatic necrosis. All patients with acute pancreatitis were included in the study. The BISAP score was calculated within 24 h of admission. Contrast CT was used to differentiate interstitial from necrotizing pancreatitis within seven days of hospitalization, whereas the Marshall Scoring System was used to characterize organ failure. Among the 246 patients (M:F = 153:93), the most common aetiology was alcoholism among men and gallstone disease among women. 207 patients had no organ failure and the remaining 39 developed organ failure. 17 patients had persistent organ failure, 16 of them with a BISAP score ≥3. 13 patients in our study died, of whom 12 had a BISAP score ≥3. We also found that a BISAP score of ≥3 had a sensitivity of 92%, a specificity of 76%, a positive predictive value of 17%, and a negative predictive value of 99% for mortality. The BISAP score is a simple and accurate method for the early identification of patients at increased risk of in-hospital mortality and morbidity. Copyright © 2014 IAP and EPC. Published by Elsevier B.V. All rights reserved.
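
    The reported performance figures can be cross-checked by reconstructing the 2x2 table they imply. A short sketch (the false-positive count is inferred from the reported 76% specificity, so small rounding differences from the published values are expected):

    ```python
    # Reconstruct the 2x2 table implied by the abstract: BISAP >= 3 as the
    # positive test, in-hospital death as the outcome.
    tp, fn = 12, 1                          # 12 of the 13 deaths had BISAP >= 3
    survivors = 246 - 13
    fp = round(survivors * (1 - 0.76))      # inferred from the 76% specificity
    tn = survivors - fp

    sens, spec = tp / (tp + fn), tn / (tn + fp)
    ppv, npv = tp / (tp + fp), tn / (tn + fn)
    print(f"sensitivity {sens:.1%}, specificity {spec:.1%}, "
          f"PPV {ppv:.1%}, NPV {npv:.1%}")
    ```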

  3. An enhanced deterministic K-Means clustering algorithm for cancer subtype prediction from gene expression data.

    PubMed

    Nidheesh, N; Abdul Nazeer, K A; Ameer, P M

    2017-12-01

    Clustering algorithms with steps involving randomness usually give different results on different executions for the same dataset. This non-deterministic nature of algorithms such as the K-Means clustering algorithm limits their applicability in areas such as cancer subtype prediction using gene expression data, since it is hard to sensibly compare the results of such algorithms with those of other algorithms. The non-deterministic nature of K-Means is due to its random selection of data points as initial centroids. We propose an improved, density-based version of K-Means, which involves a novel and systematic method for selecting initial centroids. The key idea of the algorithm is to select as initial centroids data points that belong to dense regions and are adequately separated in feature space. We compared the proposed algorithm to eleven widely used single clustering algorithms and a prominent ensemble clustering algorithm used for cancer data classification, based on performance on ten cancer gene expression datasets. The proposed algorithm showed better overall performance than the others. There is a pressing need in the biomedical domain for simple, easy-to-use, and more accurate machine learning tools for cancer subtype prediction. The proposed algorithm is simple, easy to use, and gives stable results. Moreover, it provides comparatively better predictions of cancer subtypes from gene expression data. Copyright © 2017 Elsevier Ltd. All rights reserved.
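
    The key idea, seeding K-Means with points that are both locally dense and mutually well separated, can be sketched in a few lines. The selection rule below is a plausible simplification for illustration, not the paper's exact algorithm:

    ```python
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.datasets import make_blobs

    def density_based_init(X, k):
        """Deterministic seeding: choose points from dense regions that are
        adequately separated in feature space (a simplified selection rule)."""
        D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
        radius = np.quantile(D[D > 0], 0.05)       # neighbourhood radius
        density = (D < radius).sum(axis=1)         # local density of each point
        min_sep = np.quantile(D[D > 0], 0.25)      # required centroid separation
        centroids = []
        for i in np.argsort(density)[::-1]:        # densest points first
            if all(D[i, j] > min_sep for j in centroids):
                centroids.append(i)
            if len(centroids) == k:
                break
        return X[centroids]

    X, _ = make_blobs(n_samples=500, centers=4, random_state=0)
    init = density_based_init(X, 4)
    # Identical initial centroids on every run => identical clustering results.
    km = KMeans(n_clusters=4, init=init, n_init=1).fit(X)
    print(np.sort(km.cluster_centers_[:, 0]).round(2))
    ```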

  4. ATLS Hypovolemic Shock Classification by Prediction of Blood Loss in Rats Using Regression Models.

    PubMed

    Choi, Soo Beom; Choi, Joon Yul; Park, Jee Soo; Kim, Deok Won

    2016-07-01

    In our previous study, our input data set consisted of 78 rats, blood loss in percent as the dependent variable, and 11 independent variables (heart rate, systolic blood pressure, diastolic blood pressure, mean arterial pressure, pulse pressure, respiration rate, temperature, perfusion index, lactate concentration, shock index, and a new index (lactate concentration/perfusion)). In that study, machine learning methods for multicategory classification were applied to a rat model of acute hemorrhage to predict the four Advanced Trauma Life Support (ATLS) hypovolemic shock classes for triage. However, multicategory classification is much more difficult and complicated than binary classification. Here we introduce a simple approach for classifying ATLS hypovolemic shock class by predicting blood loss in percent using support vector regression and multivariate linear regression (MLR). We also compared the performance of the classification models using absolute and relative vital signs. The accuracies of the support vector regression and MLR models with relative values, obtained by predicting blood loss in percent, were 88.5% and 84.6%, respectively. These were better than the best accuracy of 80.8% from direct multicategory classification using the support vector machine one-versus-one model in our previous study on the same validation data set. Moreover, the simple MLR models with both absolute and relative values suggest the possibility of a future clinical decision support system for ATLS classification. The perfusion index and the new index were more appropriate as relative changes than as absolute values.
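
    The regression-then-classify approach amounts to predicting blood loss in percent and binning the prediction at the standard ATLS class boundaries (roughly <15%, 15-30%, 30-40%, >40% of blood volume). A minimal sketch on synthetic data (the predictors and coefficients are stand-ins for the rat vital signs):

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression

    def atls_class(blood_loss_pct):
        """Standard ATLS boundaries on blood loss (% of blood volume):
        class I <15, II 15-30, III 30-40, IV >40."""
        return np.digitize(blood_loss_pct, [15, 30, 40]) + 1

    rng = np.random.default_rng(4)
    # Stand-ins for the 11 (relative) vital-sign predictors of 78 rats.
    X = rng.normal(0, 1, (78, 11))
    blood_loss = np.clip(30 + 10 * X[:, 0] - 5 * X[:, 1]
                         + rng.normal(0, 5, 78), 0, 55)

    # Regress blood loss on the vital signs, then bin the prediction into classes.
    mlr = LinearRegression().fit(X, blood_loss)
    predicted_class = atls_class(mlr.predict(X))
    accuracy = (predicted_class == atls_class(blood_loss)).mean()
    print("class accuracy:", round(float(accuracy), 2))
    ```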

  5. Artificial intelligence exploration of unstable protocells leads to predictable properties and discovery of collective behavior.

    PubMed

    Points, Laurie J; Taylor, James Ward; Grizou, Jonathan; Donkers, Kevin; Cronin, Leroy

    2018-01-30

    Protocell models are used to investigate how cells might have first assembled on Earth. Some, like oil-in-water droplets, can be seemingly simple models, while able to exhibit complex and unpredictable behaviors. How such simple oil-in-water systems can come together to yield complex and life-like behaviors remains a key question. Herein, we illustrate how the combination of automated experimentation and image processing, physicochemical analysis, and machine learning allows significant advances to be made in understanding the driving forces behind oil-in-water droplet behaviors. Utilizing >7,000 experiments collected using an autonomous robotic platform, we illustrate how smart automation can not only help with exploration, optimization, and discovery of new behaviors, but can also be core to developing fundamental understanding of such systems. Using this process, we were able to relate droplet formulation to behavior via predicted physical properties, and to identify and predict more occurrences of a rare collective droplet behavior, droplet swarming. Proton NMR spectroscopic and qualitative pH methods enabled us to better understand oil dissolution, chemical change, phase transitions, and droplet and aqueous phase flows, illustrating the utility of the combination of smart-automation and traditional analytical chemistry techniques. We further extended our study for the simultaneous exploration of both the oil and aqueous phases using a robotic platform. Overall, this work shows that the combination of chemistry, robotics, and artificial intelligence enables discovery, prediction, and mechanistic understanding in ways that no one approach could achieve alone.

  6. Estimation of the curvature of the solid liquid interface during Bridgman crystal growth

    NASA Astrophysics Data System (ADS)

    Barat, Catherine; Duffar, Thierry; Garandet, Jean-Paul

    1998-11-01

    An approximate solution for the solid/liquid interface curvature due to the crucible effect in crystal growth is derived from simple heat flux considerations. Numerical modelling of the problem, carried out with the finite element code FIDAP, supports the predictions of our analytical expression and allows us to identify its range of validity. Experimental interface curvatures, measured in gallium antimonide samples grown by the vertical Bridgman method, compare satisfactorily with the analytical and numerical results. Other literature data are also in fair agreement with the predictions of our models in the case where the amount of heat carried by the crucible is small compared to the overall heat flux.

  7. An Algebraic Method for Exploring Quantum Monodromy and Quantum Phase Transitions in Non-Rigid Molecules

    NASA Astrophysics Data System (ADS)

    Larese, D.; Iachello, F.

    2011-06-01

    A simple algebraic Hamiltonian has been used to explore the vibrational and rotational spectra of the skeletal bending modes of HCNO, BrCNO, NCNCS, and other "floppy" (quasi-linear or quasi-bent) molecules. These molecules have large-amplitude, low-energy bending modes and champagne-bottle potential surfaces, making them good candidates for observing quantum phase transitions (QPT). We describe the geometric phase transitions from bent to linear in these and other non-rigid molecules, quantitatively analysing the spectroscopic signatures of ground state QPT, excited state QPT, and quantum monodromy. The algebraic framework is ideal for this work because of its small calculational effort yet robust results. Although these methods have historically found success with tri- and four-atomic molecules, we now address five-atomic and simple branched molecules such as CH_3NCO and GeH_3NCO. Extraction of potential functions has been completed for several molecules, resulting in predictions of barriers to linearity and equilibrium bond angles.

  8. High density lipoproteins: Measurement techniques and potential biomarkers of cardiovascular risk

    PubMed Central

    Hafiane, Anouar; Genest, Jacques

    2015-01-01

    Plasma high density lipoprotein cholesterol (HDL) comprises a heterogeneous family of lipoprotein species, differing in surface charge, size, and lipid and protein compositions. While HDL cholesterol (HDL-C) mass is a strong, graded, and coherent biomarker of cardiovascular risk, genetic and clinical trial data suggest that the simple measurement of HDL-C may neither be causal in preventing atherosclerosis nor reflect HDL functionality; rather, HDL-C may simply be a biomarker of cardiovascular health. To assess HDL function as a potential therapeutic target, robust and simple analytical methods are required. The complex pleiotropic effects of HDL make the development of a single measurement challenging. Laboratory assays that accurately measure HDL function must be developed, validated, and brought to high throughput for clinical purposes. This review discusses the limitations of current laboratory technologies for methods that separate and quantify HDL and their potential application to predicting CVD, with an emphasis on emergent approaches as potential biomarkers in clinical practice. PMID:26674734

  9. Helmholtz-Smoluchowski velocity for viscoelastic electroosmotic flows.

    PubMed

    Park, H M; Lee, W M

    2008-01-15

    Many biofluids, such as blood and DNA solutions, are viscoelastic and exhibit extraordinary flow behaviors not found in Newtonian fluids. By adopting appropriate constitutive equations, these exotic flow behaviors can be modeled and predicted reasonably well using various numerical methods. However, the governing equations for viscoelastic flows are not easily solvable, especially for electroosmotic flows, where the streamwise velocity varies rapidly from zero at the wall to a nearly uniform velocity outside the very thin electric double layer. In the present investigation, we have devised a simple method to find the volumetric flow rate of viscoelastic electroosmotic flows through microchannels. It is based on the concept of the Helmholtz-Smoluchowski velocity, which is widely adopted for the electroosmotic flows of Newtonian fluids. It is shown that the Helmholtz-Smoluchowski velocity for viscoelastic fluids can be found by solving a simple cubic algebraic equation. The volumetric flow rate obtained using this Helmholtz-Smoluchowski velocity is found to be almost the same as that obtained by solving the governing partial differential equations for various viscoelastic fluids.

  10. An Integrated Fuselage-Sting Balance for a Sonic-Boom Wind-Tunnel Model

    NASA Technical Reports Server (NTRS)

    Mack, Robert J.

    2004-01-01

    Measured and predicted pressure signatures from a lifting wind-tunnel model can be compared when the lift on the model is accurately known. The model's lift can be set by bending the support sting to a desired angle of attack. This method is simple in practice, but difficult to accurately apply. A second method is to build a normal force/pitching moment balance into the aft end of the sting, and use an angle-of-attack mechanism to set model attitude. In this report, a method for designing a sting/balance into the aft fuselage/sting of a sonic-boom model is described. A computer code is given, and a sample sting design is outlined to demonstrate the method.

  11. The phantom robot - Predictive displays for teleoperation with time delay

    NASA Technical Reports Server (NTRS)

    Bejczy, Antal K.; Kim, Won S.; Venema, Steven C.

    1990-01-01

    An enhanced teleoperation technique for time-delayed bilateral teleoperator control is discussed. The control technique selected for time delay is based on the use of a high-fidelity graphics phantom robot that is being controlled in real time (without time delay) against the static task image. Thus, the motion of the phantom robot image on the monitor predicts the motion of the real robot. The real robot's motion will follow the phantom robot's motion on the monitor with the communication time delay implied in the task. Real-time high-fidelity graphics simulation of a PUMA arm is generated and overlaid on the actual camera view of the arm. A simple camera calibration technique is used for calibrated graphics overlay. A preliminary experiment is performed with the predictive display by using a very simple tapping task. The results with this simple task indicate that predictive display enhances the human operator's telemanipulation task performance significantly during free motion when there is a long time delay. It appears, however, that either two-view or stereoscopic predictive displays are necessary for general three-dimensional tasks.

  12. Linking species abundance distributions in numerical abundance and biomass through simple assumptions about community structure.

    PubMed

    Henderson, Peter A; Magurran, Anne E

    2010-05-22

    Species abundance distributions (SADs) are widely used as a tool for summarizing ecological communities but may have different shapes, depending on the currency used to measure species importance. We develop a simple plotting method that links SADs in the alternative currencies of numerical abundance and biomass and is underpinned by testable predictions about how organisms occupy physical space. When log numerical abundance is plotted against log biomass, the species lie within an approximately triangular region. Simple energetic and sampling constraints explain the triangular form. The dispersion of species within this triangle is the key to understanding why SADs of numerical abundance and biomass can differ. Given regular or random species dispersion, we can predict the shape of the SAD for both currencies under a variety of sampling regimes. We argue that this dispersion pattern will lie between regular and random for the following reasons. First, regular dispersion patterns will result if communities are composed of groups of organisms that use different components of the physical space (e.g. open water, the sea bed surface or rock crevices in a marine fish assemblage), and if the abundance of species in each of these spatial guilds is linked to the way individuals of varying size use the habitat. Second, temporal variation in abundance and sampling error will tend to randomize this regular pattern. Data from two intensively studied marine ecosystems offer empirical support for these predictions. Our approach also has application in environmental monitoring and the recognition of anthropogenic disturbance, which may change the shape of the triangular region by, for example, the loss of large body size top predators that occur at low abundance.

  13. Performance of HADDOCK and a simple contact-based protein-ligand binding affinity predictor in the D3R Grand Challenge 2

    NASA Astrophysics Data System (ADS)

    Kurkcuoglu, Zeynep; Koukos, Panagiotis I.; Citro, Nevia; Trellet, Mikael E.; Rodrigues, J. P. G. L. M.; Moreira, Irina S.; Roel-Touris, Jorge; Melquiond, Adrien S. J.; Geng, Cunliang; Schaarschmidt, Jörg; Xue, Li C.; Vangone, Anna; Bonvin, A. M. J. J.

    2018-01-01

    We present the performance of HADDOCK, our information-driven docking software, in the second edition of the D3R Grand Challenge. In this blind experiment, participants were requested to predict the structures and binding affinities of complexes between the Farnesoid X nuclear receptor and 102 different ligands. The models obtained in Stage1 with HADDOCK and a ligand-specific protocol show an average ligand RMSD of 5.1 Å from the crystal structure. Only 6/35 targets were within 2.5 Å RMSD of the reference, which prompted us to investigate the limiting factors and revise our protocol for Stage2. The choice of the receptor conformation appeared to have the strongest influence on the results. Our Stage2 models were of higher quality (13 out of 35 were within 2.5 Å), with an average RMSD of 4.1 Å. The docking protocol was applied to all 102 ligands to generate poses for binding affinity prediction. We developed a modified version of our contact-based binding affinity predictor PRODIGY, using the number of interatomic contacts classified by their type and the intermolecular electrostatic energy. This simple structure-based binding affinity predictor shows a Kendall's Tau correlation of 0.37 in ranking the ligands (7th best out of 77 methods, 5th out of 25 groups). Those results were obtained from the average prediction over the top 10 poses, irrespective of their similarity/correctness, underscoring the robustness of our simple predictor. This results in an enrichment factor of 2.5 compared to a random predictor when ranking ligands within the top 25%, making it a promising approach to identify lead compounds in virtual screening.
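
    A contact-based predictor of this kind is essentially a linear model over contact counts classified by atom-type pair, plus an electrostatics term, averaged over the top poses. A minimal sketch (the weights, intercept, and contact classes are hypothetical, not the fitted PRODIGY coefficients):

    ```python
    import numpy as np

    # Hypothetical weights for contact counts classified by atom-type pair and
    # for the intermolecular electrostatic energy; the fitted PRODIGY
    # coefficients differ from these illustrative values.
    WEIGHTS = {"CC": -0.10, "CN": -0.05, "CO": -0.07, "NO": -0.02, "OO": 0.01}
    W_ELEC, INTERCEPT = 0.03, -2.0

    def affinity_score(contacts, e_elec):
        """Linear contact-based score; lower means predicted tighter binding."""
        return (INTERCEPT + sum(WEIGHTS[t] * n for t, n in contacts.items())
                + W_ELEC * e_elec)

    # Average the score over the top docked poses of a ligand, as in the paper,
    # to make the ranking robust to individual pose errors.
    poses = [{"CC": 40, "CN": 12, "CO": 18, "NO": 3, "OO": 2},
             {"CC": 35, "CN": 10, "CO": 20, "NO": 4, "OO": 1}]
    e_elec = [-15.0, -12.0]
    print(np.mean([affinity_score(c, e) for c, e in zip(poses, e_elec)]))
    ```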

  14. Linking species abundance distributions in numerical abundance and biomass through simple assumptions about community structure

    PubMed Central

    Henderson, Peter A.; Magurran, Anne E.

    2010-01-01

    Species abundance distributions (SADs) are widely used as a tool for summarizing ecological communities but may have different shapes, depending on the currency used to measure species importance. We develop a simple plotting method that links SADs in the alternative currencies of numerical abundance and biomass and is underpinned by testable predictions about how organisms occupy physical space. When log numerical abundance is plotted against log biomass, the species lie within an approximately triangular region. Simple energetic and sampling constraints explain the triangular form. The dispersion of species within this triangle is the key to understanding why SADs of numerical abundance and biomass can differ. Given regular or random species dispersion, we can predict the shape of the SAD for both currencies under a variety of sampling regimes. We argue that this dispersion pattern will lie between regular and random for the following reasons. First, regular dispersion patterns will result if communities are composed of groups of organisms that use different components of the physical space (e.g. open water, the sea bed surface or rock crevices in a marine fish assemblage), and if the abundance of species in each of these spatial guilds is linked to the way individuals of varying size use the habitat. Second, temporal variation in abundance and sampling error will tend to randomize this regular pattern. Data from two intensively studied marine ecosystems offer empirical support for these predictions. Our approach also has application in environmental monitoring and the recognition of anthropogenic disturbance, which may change the shape of the triangular region by, for example, the loss of large body size top predators that occur at low abundance. PMID:20071388

  15. Social Anthropological Considerations on the Predictability and Unpredictability of Community Outcomes

    NASA Astrophysics Data System (ADS)

    Smith, Gregory O.

    This chapter surveys community process in a circumscribed area of central Italy in a comparative effort to show how simple quantitative methods can provide insights into the nature of community constitution. It is evident that individual and psychological processes are rooted in community experience, and in order to have a fuller understanding of the various system levels discussed in this volume, it is valuable also to have some insights into the organizational dynamics of localized communities.

  16. Simple Statistical Model to Quantify Maximum Expected EMC in Spacecraft and Avionics Boxes

    NASA Technical Reports Server (NTRS)

    Trout, Dawn H.; Bremner, Paul

    2014-01-01

    This study shows cumulative distribution function (CDF) comparisons of composite fairing electromagnetic field data obtained by computational electromagnetic 3D full-wave modeling and laboratory testing. Correlation between test and model data is shown. In addition, this presentation shows the application of the power balance method and its extension to predict the variance and maximum expected mean of the E-field data. This is valuable for large-scale evaluations of transmission inside cavities.
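
    As an illustration of the CDF comparison described above, the Python sketch below overlays empirical CDFs of two sets of field-magnitude samples. The Rayleigh-distributed samples are synthetic stand-ins for the test and full-wave model data, chosen because a single rectangular field component in a well-stirred cavity is approximately Rayleigh distributed.

    # Overlay empirical CDFs of measured vs. modeled E-field magnitudes.
    # Both sample sets here are synthetic placeholders.
    import numpy as np
    import matplotlib.pyplot as plt

    def empirical_cdf(samples):
        """Return sorted samples and their empirical cumulative probabilities."""
        x = np.sort(samples)
        p = np.arange(1, x.size + 1) / x.size
        return x, p

    rng = np.random.default_rng(1)
    data = {"test": rng.rayleigh(scale=2.0, size=500),
            "model": rng.rayleigh(scale=2.1, size=500)}

    for label, samples in data.items():
        x, p = empirical_cdf(samples)
        plt.step(x, p, where="post", label=label)

    plt.xlabel("E-field magnitude (arbitrary units)")
    plt.ylabel("cumulative probability")
    plt.legend()
    plt.show()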

  17. Proceedings of the Conference on Low Reynolds Number Airfoil Aerodynamics

    DTIC Science & Technology

    1985-06-01

    Excerpts from the proceedings include: a simple bubble prediction method, with discussion of tripping devices used to decrease the adverse effect of the bubble on drag; a special form of steady-state bifurcation in interacting flows, namely symmetry breaking of an otherwise regular flow about a symmetric body; and contributions such as "... Ratio Effects on the Aerodynamics of a Wortmann Airfoil at Low Reynolds Number" by J.F. Marchman, III, A.A. Abtahi and V. Sumantran.

  18. Three demonstrations of degeneracy lifting

    NASA Astrophysics Data System (ADS)

    Morrison, Andrew

    2005-09-01

    Two normal modes of vibration of a single object having exactly the same frequency are said to be degenerate modes. Certain simple systems, such as a circular membrane, have predictable degenerate modes. A lack of isotropy in the material or a geometric asymmetry can separate the frequencies and lift the degeneracy. Demonstration of this effect is easily accomplished in the classroom. Three methods of showing the effect are presented using a handbell, a short metal rod, and a coffee mug.
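
    As a worked illustration of predictable degeneracy, consider the standard textbook result for an ideal circular membrane of radius a, tension T and areal density \sigma (a general result, not taken from this abstract):

    f_{mn} = \frac{x_{mn}}{2\pi a} \sqrt{\frac{T}{\sigma}}, \qquad m = 0, 1, 2, \ldots, \quad n = 1, 2, \ldots

    where x_{mn} is the n-th zero of the Bessel function J_m. For every m >= 1 the \cos(m\theta) and \sin(m\theta) mode shapes share the same f_{mn}, a twofold degeneracy; any anisotropy or geometric asymmetry splits the pair into two nearby frequencies, which is heard as beating.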

  19. Measured effects of surface cloth impressions on polar backscatter and comparison with a reflection grating model

    NASA Technical Reports Server (NTRS)

    Madaras, Eric I.; Brush, Edwin F., III; Bridal, S. L.; Holland, Mark R.; Miller, James G.

    1992-01-01

    This paper focuses on the nature of a typical composite surface and its effects on scattering. Utilizing epoxy typical of that in composites and standard composite fabrication methods, a sample with release cloth impressions on its surface is produced. A simple model for the scattering from the surface impressions of this sample is constructed and then polar backscatter measurements are made on the sample and compared with the model predictions.
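
    For context, a surface with periodic impressions acts as a reflection grating; the standard grating relation (a textbook relation, not necessarily the exact model constructed in the paper) predicts enhanced backscatter at polar angles \theta satisfying

    2 d \sin\theta = m \lambda, \qquad m = 1, 2, \ldots

    where d is the spatial period of the cloth impressions, \lambda is the ultrasonic wavelength in the coupling medium, and \theta is measured from the surface normal.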

  20. Analysis of precision and accuracy in a simple model of machine learning

    NASA Astrophysics Data System (ADS)

    Lee, Julian

    2017-12-01

    Machine learning is a procedure whereby a model of the world is constructed from a training set of examples. It is important that the model capture relevant features of the training set and, at the same time, make correct predictions for examples not included in the training set. I consider polynomial regression, the simplest method of learning, and analyze its accuracy and precision for different levels of model complexity.
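
    The Python sketch below reproduces the flavor of this analysis on fabricated data: polynomials of increasing degree are fit to noisy samples of a smooth function, and training error is compared with error on held-out points. Low degrees underfit (poor accuracy) while high degrees overfit (poor precision on unseen data).

    # Fit polynomials of increasing degree to noisy samples of sin(2*pi*x)
    # and compare training RMSE with RMSE on a dense noise-free test grid.
    import numpy as np

    rng = np.random.default_rng(2)

    def true_f(x):
        return np.sin(2 * np.pi * x)  # "true" underlying function

    x_train = rng.uniform(0.0, 1.0, 15)
    y_train = true_f(x_train) + rng.normal(0.0, 0.2, x_train.size)
    x_test = np.linspace(0.0, 1.0, 200)
    y_test = true_f(x_test)  # noise-free reference

    for degree in (1, 3, 9, 12):
        coeffs = np.polyfit(x_train, y_train, degree)
        train_rmse = np.sqrt(np.mean((np.polyval(coeffs, x_train) - y_train) ** 2))
        test_rmse = np.sqrt(np.mean((np.polyval(coeffs, x_test) - y_test) ** 2))
        print(f"degree {degree:2d}: train RMSE {train_rmse:.3f}, "
              f"test RMSE {test_rmse:.3f}")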
