Sample records for usage prediction algorithm

  1. Smartphone dependence classification using tensor factorization.

    PubMed

    Choi, Jingyun; Rho, Mi Jung; Kim, Yejin; Yook, In Hye; Yu, Hwanjo; Kim, Dai-Jin; Choi, In Young

    2017-01-01

    Excessive smartphone use causes personal and social problems. To address this issue, we sought to derive usage patterns that were directly correlated with smartphone dependence based on usage data. This study attempted to classify smartphone dependence using a data-driven prediction algorithm. We developed a mobile application to collect smartphone usage data. A total of 41,683 logs of 48 smartphone users were collected from March 8, 2015, to January 8, 2016. The participants were classified into the control group (SUC) or the addiction group (SUD) using the Korean Smartphone Addiction Proneness Scale for Adults (S-Scale) and a face-to-face offline interview by a psychiatrist and a clinical psychologist (SUC = 23 and SUD = 25). We derived usage patterns using tensor factorization and found the following six optimal usage patterns: 1) social networking services (SNS) during daytime, 2) web surfing, 3) SNS at night, 4) mobile shopping, 5) entertainment, and 6) gaming at night. The membership vectors of the six patterns obtained a significantly better prediction performance than the raw data. For all patterns, the usage times of the SUD were much longer than those of the SUC. From our findings, we concluded that usage patterns and membership vectors were effective tools to assess and predict smartphone dependence and could provide an intervention guideline to predict and treat smartphone dependence based on usage data.
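
    To make the pattern-derivation step concrete, below is a minimal numpy sketch of rank-R CP (PARAFAC) factorization fitted by alternating least squares, the family of tensor factorization the abstract refers to. The tensor shape, the data, and the surrounding pipeline are illustrative assumptions, not the study's actual code.

      import numpy as np

      def unfold(X, mode):
          # Mode-n unfolding: move the chosen axis first, flatten the rest (C order).
          return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

      def khatri_rao(U, V):
          # Column-wise Kronecker product; row order matches the unfolding above.
          return (U[:, None, :] * V[None, :, :]).reshape(-1, U.shape[1])

      def cp_als(X, rank=6, n_iter=200, seed=0):
          # Rank-R CP: X[i,j,k] ~ sum_r A[i,r]*B[j,r]*C[k,r], fitted by
          # alternating least squares over the three factor matrices.
          rng = np.random.default_rng(seed)
          A, B, C = [rng.standard_normal((dim, rank)) for dim in X.shape]
          for _ in range(n_iter):
              A = unfold(X, 0) @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
              B = unfold(X, 1) @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
              C = unfold(X, 2) @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
          return A, B, C

      # Hypothetical usage tensor: 48 users x 10 app categories x 24 hours of day.
      X = np.random.default_rng(1).random((48, 10, 24))
      A, B, C = cp_als(X, rank=6)
      # Each row of A is one user's membership vector over the six usage patterns;
      # those vectors, rather than the raw logs, would then feed a classifier.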

  2. Parallel matrix multiplication on the Connection Machine

    NASA Technical Reports Server (NTRS)

    Tichy, Walter F.

    1988-01-01

    Matrix multiplication is a computation and communication intensive problem. Six parallel algorithms for matrix multiplication on the Connection Machine are presented and compared with respect to their performance and processor usage. For n by n matrices, the algorithms have theoretical running times of O(n² log n), O(n log n), O(n), and O(log n), and require n, n², n², and n³ processors, respectively. With careful attention to communication patterns, the theoretically predicted runtimes can indeed be achieved in practice. The parallel algorithms illustrate the tradeoffs between performance, communication cost, and processor usage.
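
    The Connection Machine algorithms themselves are not reproduced here; as a loose illustration of the performance-versus-processor tradeoff, the sketch below splits C = A x B into p independent row blocks, so each worker performs roughly n³/p of the work (the thread-based setup is an assumption for the example, not the paper's method).

      import numpy as np
      from concurrent.futures import ThreadPoolExecutor

      def parallel_matmul(A, B, p=4):
          # Row-block parallelism: worker i computes an (n/p) x n slice of C.
          # Doubling p halves the per-worker work until overhead dominates,
          # which is the kind of tradeoff the paper quantifies on real hardware.
          n = A.shape[0]
          C = np.empty((n, B.shape[1]), dtype=np.result_type(A, B))
          row_blocks = np.array_split(np.arange(n), p)

          def work(rows):
              C[rows] = A[rows] @ B

          with ThreadPoolExecutor(max_workers=p) as pool:
              list(pool.map(work, row_blocks))
          return C

      A = np.random.rand(512, 512); B = np.random.rand(512, 512)
      assert np.allclose(parallel_matmul(A, B, p=8), A @ B)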

  3. Smartphone dependence classification using tensor factorization

    PubMed Central

    Kim, Yejin; Yook, In Hye; Yu, Hwanjo; Kim, Dai-Jin

    2017-01-01

    Excessive smartphone use causes personal and social problems. To address this issue, we sought to derive usage patterns that were directly correlated with smartphone dependence based on usage data. This study attempted to classify smartphone dependence using a data-driven prediction algorithm. We developed a mobile application to collect smartphone usage data. A total of 41,683 logs of 48 smartphone users were collected from March 8, 2015, to January 8, 2016. The participants were classified into the control group (SUC) or the addiction group (SUD) using the Korean Smartphone Addiction Proneness Scale for Adults (S-Scale) and a face-to-face offline interview by a psychiatrist and a clinical psychologist (SUC = 23 and SUD = 25). We derived usage patterns using tensor factorization and found the following six optimal usage patterns: 1) social networking services (SNS) during daytime, 2) web surfing, 3) SNS at night, 4) mobile shopping, 5) entertainment, and 6) gaming at night. The membership vectors of the six patterns obtained a significantly better prediction performance than the raw data. For all patterns, the usage times of the SUD were much longer than those of the SUC. From our findings, we concluded that usage patterns and membership vectors were effective tools to assess and predict smartphone dependence and could provide an intervention guideline to predict and treat smartphone dependence based on usage data. PMID:28636614

  4. Identification of chronic rhinosinusitis phenotypes using cluster analysis.

    PubMed

    Soler, Zachary M; Hyer, J Madison; Ramakrishnan, Viswanathan; Smith, Timothy L; Mace, Jess; Rudmik, Luke; Schlosser, Rodney J

    2015-05-01

    Current clinical classifications of chronic rhinosinusitis (CRS) have been largely defined based upon preconceived notions of factors thought to be important, such as polyp or eosinophil status. Unfortunately, these classification systems have little correlation with symptom severity or treatment outcomes. Unsupervised clustering can be used to identify phenotypic subgroups of CRS patients, describe clinical differences in these clusters and define simple algorithms for classification. A multi-institutional, prospective study of 382 patients with CRS who had failed initial medical therapy completed the Sino-Nasal Outcome Test (SNOT-22), Rhinosinusitis Disability Index (RSDI), Medical Outcomes Study Short Form-12 (SF-12), Pittsburgh Sleep Quality Index (PSQI), and Patient Health Questionnaire (PHQ-2). Objective measures of CRS severity included Brief Smell Identification Test (B-SIT), CT, and endoscopy scoring. All variables were reduced and unsupervised hierarchical clustering was performed. After clusters were defined, variations in medication usage were analyzed. Discriminant analysis was performed to develop a simplified, clinically useful algorithm for clustering. Clustering was largely determined by age, severity of patient reported outcome measures, depression, and fibromyalgia. CT and endoscopy varied somewhat among clusters. Traditional clinical measures, including polyp/atopic status, prior surgery, B-SIT and asthma, did not vary among clusters. A simplified algorithm based upon productivity loss, SNOT-22 score, and age predicted clustering with 89% accuracy. Medication usage among clusters did vary significantly. A simplified algorithm based upon hierarchical clustering is able to classify CRS patients and predict medication usage. Further studies are warranted to determine if such clustering predicts treatment outcomes. © 2015 ARS-AAOA, LLC.

  5. Effect of lysine to arginine mutagenesis in the V3 loop of HIV-1 gp120 on viral entry efficiency and neutralization.

    PubMed

    Schwalbe, Birco; Schreiber, Michael

    2015-01-01

    HIV-1 infection is characterized by an ongoing replication leading to T-lymphocyte decline which is paralleled by the switch from CCR5 to CXCR4 coreceptor usage. To predict coreceptor usage, several computer algorithms using gp120 V3 loop sequence data have been developed. In these algorithms an occupation of the V3 positions 11 and 25, by one of the amino acids lysine (K) or arginine (R), is an indicator for CXCR4 usage. Amino acids R and K dominate at these two positions, but can also be identified at positions 9 and 10. Generally, CXCR4-viruses possess V3 sequences, with an overall positive charge higher than the V3 sequences of R5-viruses. The net charge is calculated by subtracting the number of negatively charged amino acids (D, aspartic acid and E, glutamic acid) from the number of positively charged ones (K and R). In contrast to D and E, which are very similar in their polar and acidic properties, the characteristics of the R guanidinium group differ significantly from the K ammonium group. However, in coreceptor predictive computer algorithms R and K are both equally rated. The study was conducted to analyze differences in infectivity and coreceptor usage because of R-to-K mutations at the V3 positions 9, 10 and 11. V3 loop mutants with all possible RRR-to-KKK triplets were constructed and analyzed for coreceptor usage, infectivity and neutralization by SDF-1α and RANTES. Virus mutants R9R10R11 showed the highest infectivity rates, and were inhibited more efficiently in contrast to the K9K10K11 viruses. They also showed higher efficiency in a virus-gp120 paired infection assay. Especially V3 loop position 9 was relevant for a switch to higher infectivity when occupied by R. Thus, K-to-R exchanges play a role for enhanced viral entry efficiency and should therefore be considered when the viral phenotype is predicted based on V3 sequence data.
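
    The net-charge and position-based heuristics described above are simple to state in code. A minimal sketch follows; the consensus-B V3 sequence is only an illustrative input, and the rule shown is the simplified 11/25 variant, not a full prediction algorithm.

      POSITIVE = set("KR")   # lysine, arginine
      NEGATIVE = set("DE")   # aspartic acid, glutamic acid

      def v3_net_charge(v3):
          # Net charge = #(K or R) - #(D or E), as defined in the abstract above.
          v3 = v3.upper()
          return sum(aa in POSITIVE for aa in v3) - sum(aa in NEGATIVE for aa in v3)

      def rule_11_25(v3):
          # K or R at position 11 or 25 (1-based) is taken as a CXCR4 indicator.
          return "X4" if v3[10] in POSITIVE or v3[24] in POSITIVE else "R5"

      v3_consensus_b = "CTRPNNNTRKSIHIGPGRAFYTTGEIIGDIRQAHC"
      print(v3_net_charge(v3_consensus_b), rule_11_25(v3_consensus_b))   # 3 R5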

  6. Effect of Lysine to Arginine Mutagenesis in the V3 Loop of HIV-1 gp120 on Viral Entry Efficiency and Neutralization

    PubMed Central

    Schwalbe, Birco; Schreiber, Michael

    2015-01-01

    HIV-1 infection is characterized by an ongoing replication leading to T-lymphocyte decline which is paralleled by the switch from CCR5 to CXCR4 coreceptor usage. To predict coreceptor usage, several computer algorithms using gp120 V3 loop sequence data have been developed. In these algorithms an occupation of the V3 positions 11 and 25, by one of the amino acids lysine (K) or arginine (R), is an indicator for CXCR4 usage. Amino acids R and K dominate at these two positions, but can also be identified at positions 9 and 10. Generally, CXCR4-viruses possess V3 sequences, with an overall positive charge higher than the V3 sequences of R5-viruses. The net charge is calculated by subtracting the number of negatively charged amino acids (D, aspartic acid and E, glutamic acid) from the number of positively charged ones (K and R). In contrast to D and E, which are very similar in their polar and acidic properties, the characteristics of the R guanidinium group differ significantly from the K ammonium group. However, in coreceptor predictive computer algorithms R and K are both equally rated. The study was conducted to analyze differences in infectivity and coreceptor usage because of R-to-K mutations at the V3 positions 9, 10 and 11. V3 loop mutants with all possible RRR-to-KKK triplets were constructed and analyzed for coreceptor usage, infectivity and neutralization by SDF-1α and RANTES. Virus mutants R9R10R11 showed the highest infectivity rates, and were inhibited more efficiently in contrast to the K9K10K11 viruses. They also showed higher efficiency in a virus-gp120 paired infection assay. Especially V3 loop position 9 was relevant for a switch to higher infectivity when occupied by R. Thus, K-to-R exchanges play a role for enhanced viral entry efficiency and should therefore be considered when the viral phenotype is predicted based on V3 sequence data. PMID:25785610

  7. Current V3 genotyping algorithms are inadequate for predicting X4 co-receptor usage in clinical isolates.

    PubMed

    Low, Andrew J; Dong, Winnie; Chan, Dennison; Sing, Tobias; Swanstrom, Ronald; Jensen, Mark; Pillai, Satish; Good, Benjamin; Harrigan, P Richard

    2007-09-12

    Integrating CCR5 antagonists into clinical practice would benefit from accurate assays of co-receptor usage (CCR5 versus CXCR4) with fast turnaround and low cost. Published HIV V3-loop based predictors of co-receptor usage were compared with actual phenotypic tropism results in a large cohort of antiretroviral naive individuals to determine accuracy on clinical samples and identify areas for improvement. Aligned HIV envelope V3 loop sequences (n = 977), derived by bulk sequencing were analyzed by six methods: the 11/25 rule; a neural network (NN), two support vector machines, and two subtype-B position specific scoring matrices (PSSM). Co-receptor phenotype results (Trofile Co-receptor Phenotype Assay; Monogram Biosciences) were stratified by CXCR4 relative light unit (RLU) readout and CD4 cell count. Co-receptor phenotype was available for 920 clinical samples with V3 genotypes having fewer than seven amino acid mixtures (n = 769 R5; n = 151 X4-capable). Sensitivity and specificity for predicting X4 capacity were evaluated for the 11/25 rule (30% sensitivity/93% specificity), NN (44%/88%), PSSM(sinsi) (34%/96%), PSSM(x4r5) (24%/97%), SVMgenomiac (22%/90%) and SVMgeno2pheno (50%/89%). Quantitative increases in sensitivity could be obtained by optimizing the cut-off for methods with continuous output (PSSM methods), and/or integrating clinical data (CD4%). Sensitivity was directly proportional to strength of X4 signal in the phenotype assay (P < 0.05). Current default implementations of co-receptor prediction algorithms are inadequate for predicting HIV X4 co-receptor usage in clinical samples, particularly those X4 phenotypes with low CXCR4 RLU signals. Significant improvements can be made to genotypic predictors, including training on clinical samples, using additional data to improve predictions and optimizing cutoffs and increasing genotype sensitivity.

  8. RANdom SAmple Consensus (RANSAC) algorithm for material-informatics: application to photovoltaic solar cells.

    PubMed

    Kaspi, Omer; Yosipof, Abraham; Senderowitz, Hanoch

    2017-06-06

    An important aspect of chemoinformatics and material-informatics is the usage of machine learning algorithms to build Quantitative Structure Activity Relationship (QSAR) models. The RANdom SAmple Consensus (RANSAC) algorithm is a predictive modeling tool widely used in the image processing field for cleaning datasets from noise. RANSAC could be used as a "one stop shop" algorithm for developing and validating QSAR models, performing outlier removal, descriptors selection, model development and predictions for test set samples using applicability domain. For "future" predictions (i.e., for samples not included in the original test set) RANSAC provides a statistical estimate for the probability of obtaining reliable predictions, i.e., predictions within a pre-defined number of standard deviations from the true values. In this work we describe the first application of RANSAC in material informatics, focusing on the analysis of solar cells. We demonstrate that for three datasets representing different metal oxide (MO) based solar cell libraries, RANSAC-derived models select descriptors previously shown to correlate with key photovoltaic properties and lead to good predictive statistics for these properties. These models were subsequently used to predict the properties of virtual solar cell libraries, highlighting interesting dependencies of PV properties on MO compositions.
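
    The QSAR pipeline itself is not shown in the abstract; the snippet below only illustrates the core RANSAC idea (consensus fitting that ignores outliers) on synthetic regression data, using scikit-learn's RANSACRegressor rather than the authors' implementation.

      import numpy as np
      from sklearn.linear_model import RANSACRegressor

      rng = np.random.default_rng(0)
      X = rng.uniform(0, 10, size=(200, 1))
      y = 2.5 * X.ravel() + 1.0 + rng.normal(0, 0.3, size=200)
      y[:20] += rng.uniform(5, 15, size=20)   # contaminate 10% of the samples

      # RANSAC repeatedly fits on random subsets and keeps the largest consensus
      # set, so the injected outliers are excluded from the final linear model.
      model = RANSACRegressor(residual_threshold=1.0, random_state=0).fit(X, y)
      print(model.estimator_.coef_, model.inlier_mask_.sum())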

  9. Disk storage management for LHCb based on Data Popularity estimator

    NASA Astrophysics Data System (ADS)

    Hushchyn, Mikhail; Charpentier, Philippe; Ustyuzhanin, Andrey

    2015-12-01

    This paper presents an algorithm providing recommendations for optimizing the LHCb data storage. The LHCb data storage system is a hybrid system. All datasets are kept as archives on magnetic tapes. The most popular datasets are kept on disks. The algorithm takes the dataset usage history and metadata (size, type, configuration etc.) to generate a recommendation report. This article presents how we use machine learning algorithms to predict future data popularity. Using these predictions it is possible to estimate which datasets should be removed from disk. We use regression algorithms and time series analysis to find the optimal number of replicas for datasets that are kept on disk. Based on the data popularity and the number of replicas optimization, the algorithm minimizes a loss function to find the optimal data distribution. The loss function represents all requirements for data distribution in the data storage system. We demonstrate how our algorithm helps to save disk space and to reduce waiting times for jobs using this data.
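
    The LHCb implementation is not public in this record; the toy sketch below captures the two-step idea, regress future accesses on usage-history features, then turn the prediction into a replica count. All feature names, thresholds, and data are invented.

      import numpy as np
      from sklearn.ensemble import GradientBoostingRegressor

      rng = np.random.default_rng(0)
      # Hypothetical per-dataset features: [size_TB, age_weeks, accesses_last_4w, accesses_last_26w]
      X = rng.random((1000, 4)) * [50, 200, 400, 2000]
      y = 0.8 * X[:, 2] + 0.1 * X[:, 3] + rng.normal(0, 10, 1000)   # future 4-week accesses

      popularity = GradientBoostingRegressor(random_state=0).fit(X, y)

      def n_replicas(features, accesses_per_replica=100.0, max_replicas=4):
          # Predicted future accesses -> disk replica count; 0 means tape archive only.
          pred = popularity.predict(np.asarray(features, dtype=float).reshape(1, -1))[0]
          return int(np.clip(np.round(pred / accesses_per_replica), 0, max_replicas))

      print(n_replicas([10, 4, 350, 900]))   # a recently popular dataset -> several replicas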

  10. Towards Prognostics of Electrolytic Capacitors

    NASA Technical Reports Server (NTRS)

    Celaya, Jose R.; Kulkarni, Chetan; Biswas, Gautam; Goebel, Kai

    2011-01-01

    A remaining useful life prediction algorithm and degradation model for electrolytic capacitors are presented. Electrolytic capacitors are used in several applications ranging from power supplies on critical avionics equipment to power drivers for electro-mechanical actuators. These devices are known for their low reliability and given their criticality in electronics subsystems they are a good candidate for component level prognostics and health management research. Prognostics provides a way to assess remaining useful life of a capacitor based on its current state of health and its anticipated future usage and operational conditions. In particular, experimental results of an accelerated aging test under electrical stresses are presented. The capacitors used in this test form the basis for a remaining life prediction algorithm where a model of the degradation process is suggested. This preliminary remaining life prediction algorithm serves as a demonstration of how prognostics methodologies could be used for electrolytic capacitors.
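
    The paper's degradation model is not reproduced here; the sketch below shows the generic prognostic recipe with an assumed exponential capacitance-loss model fitted to made-up measurements and extrapolated to an end-of-life threshold.

      import numpy as np

      # Hypothetical aging data: hours under electrical stress vs. capacitance
      # as a fraction of its nominal value.
      t = np.array([0, 50, 100, 150, 200, 250], dtype=float)
      c = np.array([1.00, 0.97, 0.93, 0.91, 0.88, 0.85])

      # Assume exponential degradation C(t) = C0 * exp(-lam * t); fit log-linearly.
      slope, log_c0 = np.polyfit(t, np.log(c), 1)
      lam = -slope

      EOL = 0.80   # a common end-of-life rule of thumb: ~20% capacitance loss
      t_eol = (log_c0 - np.log(EOL)) / lam
      print(f"predicted end of life ~{t_eol:.0f} h; RUL from 250 h: {t_eol - 250:.0f} h")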

  11. Seamless Merging of Hypertext and Algorithm Animation

    ERIC Educational Resources Information Center

    Karavirta, Ville

    2009-01-01

    Online learning material that students use by themselves is one of the typical usages of algorithm animation (AA). Thus, the integration of algorithm animations into hypertext is seen as an important topic today to promote the usage of algorithm animation in teaching. This article presents an algorithm animation viewer implemented purely using…

  12. Accelerated probabilistic inference of RNA structure evolution

    PubMed Central

    Holmes, Ian

    2005-01-01

    Background Pairwise stochastic context-free grammars (Pair SCFGs) are powerful tools for evolutionary analysis of RNA, including simultaneous RNA sequence alignment and secondary structure prediction, but the associated algorithms are intensive in both CPU and memory usage. The same problem is faced by other RNA alignment-and-folding algorithms based on Sankoff's 1985 algorithm. It is therefore desirable to constrain such algorithms, by pre-processing the sequences and using this first pass to limit the range of structures and/or alignments that can be considered. Results We demonstrate how flexible classes of constraint can be imposed, greatly reducing the computational costs while maintaining a high quality of structural homology prediction. Any score-attributed context-free grammar (e.g. energy-based scoring schemes, or conditionally normalized Pair SCFGs) is amenable to this treatment. It is now possible to combine independent structural and alignment constraints of unprecedented general flexibility in Pair SCFG alignment algorithms. We outline several applications to the bioinformatics of RNA sequence and structure, including Waterman-Eggert N-best alignments and progressive multiple alignment. We evaluate the performance of the algorithm on test examples from the RFAM database. Conclusion A program, Stemloc, that implements these algorithms for efficient RNA sequence alignment and structure prediction is available under the GNU General Public License. PMID:15790387

  13. Discrete Event-based Performance Prediction for Temperature Accelerated Dynamics

    NASA Astrophysics Data System (ADS)

    Junghans, Christoph; Mniszewski, Susan; Voter, Arthur; Perez, Danny; Eidenbenz, Stephan

    2014-03-01

    We present an example of a new class of tools that we call application simulators, parameterized fast-running proxies of large-scale scientific applications using parallel discrete event simulation (PDES). We demonstrate our approach with a TADSim application simulator that models the Temperature Accelerated Dynamics (TAD) method, which is an algorithmically complex member of the Accelerated Molecular Dynamics (AMD) family. The essence of the TAD application is captured without the computational expense and resource usage of the full code. We use TADSim to quickly characterize the runtime performance and algorithmic behavior for the otherwise long-running simulation code. We further extend TADSim to model algorithm extensions to standard TAD, such as speculative spawning of the compute-bound stages of the algorithm, and predict performance improvements without having to implement such a method. Focused parameter scans have allowed us to study algorithm parameter choices over far more scenarios than would be possible with the actual simulation. This has led to interesting performance-related insights into the TAD algorithm behavior and suggested extensions to the TAD method.
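
    TADSim itself is not listed here; the sketch below shows only the discrete-event core such an application simulator builds on, a time-ordered event queue processed in a loop. Event types and exponential stage durations are invented for illustration.

      import heapq, random

      def simulate(n_stages=5, seed=0):
          # Pop the earliest event, advance the virtual clock, schedule successors.
          random.seed(seed)
          clock, events = 0.0, [(0.0, 0, "start_stage")]
          while events:
              clock, stage, kind = heapq.heappop(events)
              if kind == "start_stage":
                  duration = random.expovariate(1.0)   # stand-in for a compute stage
                  heapq.heappush(events, (clock + duration, stage, "end_stage"))
              elif kind == "end_stage" and stage + 1 < n_stages:
                  heapq.heappush(events, (clock, stage + 1, "start_stage"))
          return clock   # predicted total runtime

      print(f"simulated runtime: {simulate():.2f} time units")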

  14. UAV Control on the Basis of 3D Landmark Bearing-Only Observations.

    PubMed

    Karpenko, Simon; Konovalenko, Ivan; Miller, Alexander; Miller, Boris; Nikolaev, Dmitry

    2015-11-27

    The article presents an approach to the control of a UAV on the basis of 3D landmark observations. The novelty of the work is the usage of the 3D RANSAC algorithm developed on the basis of the landmarks' position prediction with the aid of a modified Kalman-type filter. Modification of the filter based on the pseudo-measurements approach permits obtaining unbiased UAV position estimation with quadratic error characteristics. Modeling of UAV flight on the basis of the suggested algorithm shows good performance, even under significant external perturbations.
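
    The modified pseudo-measurement filter is not given in the abstract; below is a plain linear Kalman predict/update cycle for a constant-velocity state, i.e., the standard building block the abstract modifies. All matrices and measurements are illustrative.

      import numpy as np

      dt = 0.1
      F = np.array([[1, dt], [0, 1]])   # constant-velocity state transition
      H = np.array([[1, 0]])            # only position is observed
      Q = 0.01 * np.eye(2)              # process noise covariance
      R = np.array([[0.5]])             # measurement noise covariance

      x, P = np.zeros(2), np.eye(2)
      for z in [1.0, 1.2, 1.35, 1.6]:   # made-up position measurements
          # predict
          x, P = F @ x, F @ P @ F.T + Q
          # update
          S = H @ P @ H.T + R
          K = P @ H.T @ np.linalg.inv(S)
          x = x + K @ (np.array([z]) - H @ x)
          P = (np.eye(2) - K @ H) @ P
      print(x)   # filtered [position, velocity]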

  15. UAV Control on the Basis of 3D Landmark Bearing-Only Observations

    PubMed Central

    Karpenko, Simon; Konovalenko, Ivan; Miller, Alexander; Miller, Boris; Nikolaev, Dmitry

    2015-01-01

    The article presents an approach to the control of a UAV on the basis of 3D landmark observations. The novelty of the work is the usage of the 3D RANSAC algorithm developed on the basis of the landmarks’ position prediction with the aid of a modified Kalman-type filter. Modification of the filter based on the pseudo-measurements approach permits obtaining unbiased UAV position estimation with quadratic error characteristics. Modeling of UAV flight on the basis of the suggested algorithm shows good performance, even under significant external perturbations. PMID:26633394

  16. A Program Complexity Metric Based on Variable Usage for Algorithmic Thinking Education of Novice Learners

    ERIC Educational Resources Information Center

    Fuwa, Minori; Kayama, Mizue; Kunimune, Hisayoshi; Hashimoto, Masami; Asano, David K.

    2015-01-01

    We have explored educational methods for algorithmic thinking for novices and implemented a block programming editor and a simple learning management system. In this paper, we propose a program/algorithm complexity metric specified for novice learners. This metric is based on the variable usage in arithmetic and relational formulas in learner's…

  17. EEG seizure detection and prediction algorithms: a survey

    NASA Astrophysics Data System (ADS)

    Alotaiby, Turkey N.; Alshebeili, Saleh A.; Alshawi, Tariq; Ahmad, Ishtiaq; Abd El-Samie, Fathi E.

    2014-12-01

    Epilepsy patients experience challenges in daily life due to precautions they have to take in order to cope with this condition. When a seizure occurs, it might cause injuries or endanger the life of the patients or others, especially when they are using heavy machinery, e.g., driving cars. Studies of epilepsy often rely on electroencephalogram (EEG) signals in order to analyze the behavior of the brain during seizures. Locating the seizure period in EEG recordings manually is difficult and time consuming; one often needs to skim through tens or even hundreds of hours of EEG recordings. Therefore, automatic detection of such an activity is of great importance. Another potential usage of EEG signal analysis is in the prediction of epileptic activities before they occur, as this will enable the patients (and caregivers) to take appropriate precautions. In this paper, we first present an overview of the seizure detection and prediction problem and provide insights on the challenges in this area. Second, we cover some of the state-of-the-art seizure detection and prediction algorithms and provide a comparison between these algorithms. Finally, we conclude with future research directions and open problems in this topic.
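
    Most of the surveyed detectors and predictors start from spectral features; as a generic illustration, here is band-power extraction from one EEG window with Welch's method. The sampling rate, band edges, and data are typical assumptions, not taken from the survey.

      import numpy as np
      from scipy.signal import welch

      FS = 256   # Hz; a common EEG sampling rate (assumption)
      BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

      def bandpowers(window):
          # Welch PSD of one EEG window, summed per frequency band; vectors of
          # such features are what detection/prediction classifiers typically use.
          freqs, psd = welch(window, fs=FS, nperseg=FS * 2)
          return {name: float(psd[(freqs >= lo) & (freqs < hi)].sum())
                  for name, (lo, hi) in BANDS.items()}

      window = np.random.default_rng(0).standard_normal(FS * 10)   # 10 s stand-in signal
      print(bandpowers(window))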

  18. Evaluation of the genotypic prediction of HIV-1 coreceptor use versus a phenotypic assay and correlation with the virological response to maraviroc: the ANRS GenoTropism study.

    PubMed

    Recordon-Pinson, Patricia; Soulié, Cathia; Flandre, Philippe; Descamps, Diane; Lazrek, Mouna; Charpentier, Charlotte; Montes, Brigitte; Trabaud, Mary-Anne; Cottalorda, Jacqueline; Schneider, Véronique; Morand-Joubert, Laurence; Tamalet, Catherine; Desbois, Delphine; Macé, Muriel; Ferré, Virginie; Vabret, Astrid; Ruffault, Annick; Pallier, Coralie; Raymond, Stéphanie; Izopet, Jacques; Reynes, Jacques; Marcelin, Anne-Geneviève; Masquelier, Bernard

    2010-08-01

    Genotypic algorithms for prediction of HIV-1 coreceptor usage need to be evaluated in a clinical setting. We aimed at studying (i) the correlation of genotypic prediction of coreceptor use in comparison with a phenotypic assay and (ii) the relationship between genotypic prediction of coreceptor use at baseline and the virological response (VR) to a therapy including maraviroc (MVC). Antiretroviral-experienced patients were included in the MVC Expanded Access Program if they had an R5 screening result with Trofile (Monogram Biosciences). V3 loop sequences were determined at screening, and coreceptor use was predicted using 13 genotypic algorithms or combinations of algorithms. Genotypic predictions were compared to Trofile; dual or mixed (D/M) variants were considered as X4 variants. Both genotypic and phenotypic results were obtained for 189 patients at screening, with 54 isolates scored as X4 or D/M and 135 scored as R5 with Trofile. The highest sensitivity (59.3%) for detection of X4 was obtained with the Geno2pheno algorithm, with a false-positive rate set up at 10% (Geno2pheno10). In the 112 patients receiving MVC, a plasma viral RNA load of <50 copies/ml was obtained in 68% of cases at month 6. In multivariate analysis, the prediction of the X4 genotype at baseline with the Geno2pheno10 algorithm including baseline viral load and CD4 nadir was independently associated with a worse VR at months 1 and 3. The baseline weighted genotypic sensitivity score was associated with VR at month 6. There were strong arguments in favor of using genotypic coreceptor use assays for determining which patients would respond to CCR5 antagonist.

  19. Complexity and dynamics of HIV-1 chemokine receptor usage in a multidrug-resistant adolescent.

    PubMed

    Cavarelli, Mariangela; Mainetti, Lara; Pignataro, Angela Rosa; Bigoloni, Alba; Tolazzi, Monica; Galli, Andrea; Nozza, Silvia; Castagna, Antonella; Sampaolo, Michela; Boeri, Enzo; Scarlatti, Gabriella

    2014-12-01

    Maraviroc (MVC) is licensed in clinical practice for patients with R5 virus and virological failure; however, in anecdotal reports, dual/mixed viruses were also inhibited. We retrospectively evaluated the evolution of HIV-1 coreceptor tropism in plasma and peripheral blood mononuclear cells (PBMCs) of an infected adolescent with a CCR5/CXCR4 Trofile profile who experienced an important but temporary immunological and virological response during a 16-month period of MVC-based therapy. Coreceptor usage of biological viral clones isolated from PBMCs was investigated in U87.CD4 cells expressing wild-type or chimeric CCR5 and CXCR4. Plasma and PBMC-derived viral clones were sequenced to predict coreceptor tropism using the geno2pheno algorithm from the V3 envelope sequence and pol gene-resistant mutations. From start to 8.5 months of MVC treatment only R5X4 viral clones were observed, whereas at 16 months the phenotype enlarged to also include R5 and X4 clones. Chimeric receptor usage suggested the preferential usage of the CXCR4 coreceptor by the R5X4 biological clones. According to phenotypic data, R5 viruses were susceptible, whereas R5X4 and X4 viruses were resistant to RANTES and MVC in vitro. Clones at 16 months, but not at baseline, showed an amino acidic resistance pattern in protease and reverse transcription genes, which, however, did not drive their tropisms. The geno2pheno algorithm predicted at baseline R5 viruses in plasma, and from 5.5 months throughout follow-up only CXCR4-using viruses. An extended methodological approach is needed to unravel the complexity of the phenotype and variation of viruses resident in the different compartments of an infected individual. The accurate evaluation of the proportion of residual R5 viruses may guide therapeutic intervention in highly experienced patients with limited therapeutic options.

  20. Complexity and Dynamics of HIV-1 Chemokine Receptor Usage in a Multidrug-Resistant Adolescent

    PubMed Central

    Mainetti, Lara; Pignataro, Angela Rosa; Bigoloni, Alba; Tolazzi, Monica; Galli, Andrea; Nozza, Silvia; Castagna, Antonella; Sampaolo, Michela; Boeri, Enzo; Scarlatti, Gabriella

    2014-01-01

    Maraviroc (MVC) is licensed in clinical practice for patients with R5 virus and virological failure; however, in anecdotal reports, dual/mixed viruses were also inhibited. We retrospectively evaluated the evolution of HIV-1 coreceptor tropism in plasma and peripheral blood mononuclear cells (PBMCs) of an infected adolescent with a CCR5/CXCR4 Trofile profile who experienced an important but temporary immunological and virological response during a 16-month period of MVC-based therapy. Coreceptor usage of biological viral clones isolated from PBMCs was investigated in U87.CD4 cells expressing wild-type or chimeric CCR5 and CXCR4. Plasma and PBMC-derived viral clones were sequenced to predict coreceptor tropism using the geno2pheno algorithm from the V3 envelope sequence and pol gene-resistant mutations. From start to 8.5 months of MVC treatment only R5X4 viral clones were observed, whereas at 16 months the phenotype enlarged to also include R5 and X4 clones. Chimeric receptor usage suggested the preferential usage of the CXCR4 coreceptor by the R5X4 biological clones. According to phenotypic data, R5 viruses were susceptible, whereas R5X4 and X4 viruses were resistant to RANTES and MVC in vitro. Clones at 16 months, but not at baseline, showed an amino acidic resistance pattern in protease and reverse transcription genes, which, however, did not drive their tropisms. The geno2pheno algorithm predicted at baseline R5 viruses in plasma, and from 5.5 months throughout follow-up only CXCR4-using viruses. An extended methodological approach is needed to unravel the complexity of the phenotype and variation of viruses resident in the different compartments of an infected individual. The accurate evaluation of the proportion of residual R5 viruses may guide therapeutic intervention in highly experienced patients with limited therapeutic options. PMID:25275490

  21. Predictive Lateral Logic for Numerical Entry Guidance Algorithms

    NASA Technical Reports Server (NTRS)

    Smith, Kelly M.

    2016-01-01

    Recent entry guidance algorithm development has tended to focus on numerical integration of trajectories onboard in order to evaluate candidate bank profiles. Such methods enjoy benefits such as flexibility to varying mission profiles and improved robustness to large dispersions. A common element across many of these modern entry guidance algorithms is a reliance upon the concept of Apollo heritage lateral error (or azimuth error) deadbands in which the number of bank reversals to be performed is non-deterministic. This paper presents a closed-loop bank reversal method that operates with a fixed number of bank reversals defined prior to flight. However, this number of bank reversals can be modified at any point, including in flight, based on contingencies such as fuel leaks where propellant usage must be minimized.

  22. Electricity Usage Scheduling in Smart Building Environments Using Smart Devices

    PubMed Central

    Lee, Eunji; Bahn, Hyokyung

    2013-01-01

    With the recent advances in smart grid technologies as well as the increasing dissemination of smart meters, the electricity usage of every moment can be detected in modern smart building environments. Thus, the utility company adopts a different price of electricity at each time slot, considering the peak time. This paper presents a new electricity usage scheduling algorithm for smart buildings that adopts real-time pricing of electricity. The proposed algorithm detects the change of electricity prices by making use of a smart device and changes the power mode of each electric device dynamically. Specifically, we formulate the electricity usage scheduling problem as a real-time task scheduling problem and show that it is a complex search problem that has an exponential time complexity. An efficient heuristic based on genetic algorithms is performed on a smart device to cut down the huge searching space and find a reasonable schedule within a feasible time budget. Experimental results with various building conditions show that the proposed algorithm reduces the electricity charge of a smart building by 25.6% on average and up to 33.4%. PMID:24453860

  23. Electricity usage scheduling in smart building environments using smart devices.

    PubMed

    Lee, Eunji; Bahn, Hyokyung

    2013-01-01

    With the recent advances in smart grid technologies as well as the increasing dissemination of smart meters, the electricity usage of every moment can be detected in modern smart building environments. Thus, the utility company adopts a different price of electricity at each time slot, considering the peak time. This paper presents a new electricity usage scheduling algorithm for smart buildings that adopts real-time pricing of electricity. The proposed algorithm detects the change of electricity prices by making use of a smart device and changes the power mode of each electric device dynamically. Specifically, we formulate the electricity usage scheduling problem as a real-time task scheduling problem and show that it is a complex search problem that has an exponential time complexity. An efficient heuristic based on genetic algorithms is performed on a smart device to cut down the huge searching space and find a reasonable schedule within a feasible time budget. Experimental results with various building conditions show that the proposed algorithm reduces the electricity charge of a smart building by 25.6% on average and up to 33.4%.
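
    The authors' genetic algorithm is not reproduced in the records above; the compact, hypothetical GA below makes the encoding and fitness idea concrete: a chromosome is a list of appliance start slots, and fitness is the electricity cost under a time-of-use tariff. All prices, loads, and GA parameters are invented.

      import random
      random.seed(0)

      PRICE = [0.10]*7 + [0.25]*10 + [0.40]*4 + [0.10]*3   # $/kWh for 24 hourly slots
      APPLIANCES = [(3, 1.2), (2, 2.0), (1, 0.8)]          # (duration h, load kW)

      def cost(starts):
          return sum(PRICE[(s + h) % 24] * load
                     for s, (dur, load) in zip(starts, APPLIANCES) for h in range(dur))

      def ga(pop_size=40, gens=200):
          pop = [[random.randrange(24) for _ in APPLIANCES] for _ in range(pop_size)]
          for _ in range(gens):
              pop.sort(key=cost)
              survivors = pop[:pop_size // 2]                    # elitist selection
              children = []
              while len(children) < pop_size - len(survivors):
                  a, b = random.sample(survivors, 2)
                  child = [random.choice(g) for g in zip(a, b)]  # uniform crossover
                  if random.random() < 0.2:                      # mutation
                      child[random.randrange(len(child))] = random.randrange(24)
                  children.append(child)
              pop = survivors + children
          return min(pop, key=cost)

      best = ga()
      print(best, round(cost(best), 2))   # e.g. all load pushed into off-peak slots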

  24. Root System Water Consumption Pattern Identification on Time Series Data

    PubMed Central

    Figueroa, Manuel; Pope, Christopher

    2017-01-01

    In agriculture, soil and meteorological sensors are used along low power networks to capture data, which allows for optimal resource usage and minimizing environmental impact. This study uses time series analysis methods for outlier detection and pattern recognition on soil moisture sensor data to identify irrigation and consumption patterns and to improve a soil moisture prediction and irrigation system. This study compares three new algorithms with the current detection technique in the project; the results greatly decrease the number of false positives detected. The best result is obtained by the Series Strings Comparison (SSC) algorithm, averaging a precision of 0.872 on the testing sets, vastly improving the current system’s 0.348 precision. PMID:28621739

  25. Root System Water Consumption Pattern Identification on Time Series Data.

    PubMed

    Figueroa, Manuel; Pope, Christopher

    2017-06-16

    In agriculture, soil and meteorological sensors are used along low power networks to capture data, which allows for optimal resource usage and minimizing environmental impact. This study uses time series analysis methods for outlier detection and pattern recognition on soil moisture sensor data to identify irrigation and consumption patterns and to improve a soil moisture prediction and irrigation system. This study compares three new algorithms with the current detection technique in the project; the results greatly decrease the number of false positives detected. The best result is obtained by the Series Strings Comparison (SSC) algorithm, averaging a precision of 0.872 on the testing sets, vastly improving the current system's 0.348 precision.
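
    The SSC algorithm is not specified in these records; as a baseline illustration of outlier detection on a soil-moisture series, here is a rolling z-score detector. The window length, threshold, and synthetic data are arbitrary choices.

      import numpy as np

      def rolling_zscore_outliers(x, window=24, z_thresh=3.0):
          # Flag points more than z_thresh std devs from the mean of the
          # preceding window of samples.
          x = np.asarray(x, dtype=float)
          flags = np.zeros(len(x), dtype=bool)
          for i in range(window, len(x)):
              mu, sigma = x[i - window:i].mean(), x[i - window:i].std()
              if sigma > 0 and abs(x[i] - mu) > z_thresh * sigma:
                  flags[i] = True
          return flags

      moisture = np.sin(np.linspace(0, 12, 300)) + np.random.default_rng(0).normal(0, 0.05, 300)
      moisture[150] += 2.0   # inject a spike, e.g. a sensor glitch
      print(np.flatnonzero(rolling_zscore_outliers(moisture)))   # includes index 150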

  26. TADSim: Discrete Event-based Performance Prediction for Temperature Accelerated Dynamics

    DOE PAGES

    Mniszewski, Susan M.; Junghans, Christoph; Voter, Arthur F.; ...

    2015-04-16

    Next-generation high-performance computing will require more scalable and flexible performance prediction tools to evaluate software-hardware co-design choices relevant to scientific applications and hardware architectures. Here, we present a new class of tools called application simulators, parameterized fast-running proxies of large-scale scientific applications using parallel discrete event simulation. Parameterized choices for the algorithmic method and hardware options provide a rich space for design exploration and allow us to quickly find well-performing software-hardware combinations. We demonstrate our approach with a TADSim simulator that models the temperature-accelerated dynamics (TAD) method, an algorithmically complex and parameter-rich member of the accelerated molecular dynamics (AMD) family of molecular dynamics methods. The essence of the TAD application is captured without the computational expense and resource usage of the full code. We accomplish this by identifying the time-intensive elements, quantifying algorithm steps in terms of those elements, abstracting them out, and replacing them by the passage of time. We use TADSim to quickly characterize the runtime performance and algorithmic behavior for the otherwise long-running simulation code. We extend TADSim to model algorithm extensions, such as speculative spawning of the compute-bound stages, and predict performance improvements without having to implement such a method. Validation against the actual TAD code shows close agreement for the evolution of an example physical system, a silver surface. Finally, focused parameter scans have allowed us to study algorithm parameter choices over far more scenarios than would be possible with the actual simulation. This has led to interesting performance-related insights and suggested extensions.

  27. Optimization of memory use of fragment extension-based protein-ligand docking with an original fast minimum cost flow algorithm.

    PubMed

    Yanagisawa, Keisuke; Komine, Shunta; Kubota, Rikuto; Ohue, Masahito; Akiyama, Yutaka

    2018-06-01

    The need to accelerate large-scale protein-ligand docking in virtual screening against a huge compound database led researchers to propose a strategy that entails memorizing the evaluation result of the partial structure of a compound and reusing it to evaluate other compounds. However, the previous method required frequent disk accesses, resulting in insufficient acceleration. Thus, more efficient memory usage can be expected to lead to further acceleration, and optimal memory usage could be achieved by solving the minimum cost flow problem. In this research, we propose a fast algorithm for the minimum cost flow problem utilizing the characteristics of the graph generated for this problem as constraints. The proposed algorithm, which optimized memory usage, was approximately seven times faster compared to existing minimum cost flow algorithms. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.
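
    The authors' specialized solver exploits the structure of their docking graph; for orientation only, here is the generic minimum cost flow primitive on an invented toy graph, solved with networkx rather than their algorithm.

      import networkx as nx

      G = nx.DiGraph()
      # Toy allocation flow: the source supplies 4 units, the sink demands 4.
      G.add_node("s", demand=-4)
      G.add_node("t", demand=4)
      G.add_edge("s", "a", capacity=3, weight=1)   # cheap route (e.g. keep in memory)
      G.add_edge("s", "b", capacity=2, weight=2)   # costly route (e.g. recompute)
      G.add_edge("a", "t", capacity=3, weight=1)
      G.add_edge("b", "t", capacity=2, weight=3)

      flow = nx.min_cost_flow(G)
      print(flow, nx.cost_of_flow(G, flow))   # 3 units via a, 1 via b; total cost 11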

  28. Development of intelligent model for personalized guidance on wheelchair tilt and recline usage for people with spinal cord injury: methodology and preliminary report.

    PubMed

    Fu, Jicheng; Jones, Maria; Jan, Yih-Kuen

    2014-01-01

    Wheelchair tilt and recline functions are two of the most desirable features for relieving seating pressure to decrease the risk of pressure ulcers. The effective guidance on wheelchair tilt and recline usage is therefore critical to pressure ulcer prevention. The aim of this study was to demonstrate the feasibility of using machine learning techniques to construct an intelligent model to provide personalized guidance to individuals with spinal cord injury (SCI). The motivation stems from the clinical evidence that the requirements of individuals vary greatly and that no universal guidance on tilt and recline usage could possibly satisfy all individuals with SCI. We explored all aspects involved in constructing the intelligent model and proposed approaches tailored to suit the characteristics of this preliminary study, such as the way of modeling research participants, using machine learning techniques to construct the intelligent model, and evaluating the performance of the intelligent model. We further improved the intelligent model's prediction accuracy by developing a two-phase feature selection algorithm to identify important attributes. Experimental results demonstrated that our approaches held the promise: they could effectively construct the intelligent model, evaluate its performance, and refine the participant model so that the intelligent model's prediction accuracy was significantly improved.
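
    The two-phase feature selection algorithm is not detailed in the abstract; the generic filter-then-wrapper sketch below conveys one plausible two-phase structure, built from scikit-learn components on synthetic data, and is not the authors' method.

      from sklearn.datasets import make_classification
      from sklearn.feature_selection import SelectKBest, f_classif, RFE
      from sklearn.linear_model import LogisticRegression

      X, y = make_classification(n_samples=200, n_features=40, n_informative=6, random_state=0)

      # Phase 1 (filter): keep the 15 attributes with the strongest univariate signal.
      X_filtered = SelectKBest(f_classif, k=15).fit_transform(X, y)

      # Phase 2 (wrapper): recursively eliminate features using the classifier itself.
      rfe = RFE(LogisticRegression(max_iter=1000), n_features_to_select=6).fit(X_filtered, y)
      print(rfe.support_)   # mask of the attributes kept after both phases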

  29. Probabilistic Usage of the Multi-Factor Interaction Model

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.

    2008-01-01

    A Multi-Factor Interaction Model (MFIM) is used to predict the insulating foam mass expulsion during the ascent of a space vehicle. The exponents in the MFIM are evaluated by an available approach which consists of least squares and an optimization algorithm. These results were subsequently used to probabilistically evaluate the effects of the uncertainties in each participating factor in the mass expulsion. The probabilistic results show that the surface temperature dominates at high probabilities and the pressure which causes the mass expulsion at low probabilities.

  30. Clustering Educational Digital Library Usage Data: A Comparison of Latent Class Analysis and K-Means Algorithms

    ERIC Educational Resources Information Center

    Xu, Beijie; Recker, Mimi; Qi, Xiaojun; Flann, Nicholas; Ye, Lei

    2013-01-01

    This article examines clustering as an educational data mining method. In particular, two clustering algorithms, the widely used K-means and the model-based Latent Class Analysis, are compared, using usage data from an educational digital library service, the Instructional Architect (IA.usu.edu). Using a multi-faceted approach and multiple data…
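
    The record above is truncated, but the K-means side of such a comparison is easy to sketch generically; all feature names and data below are placeholders, not the Instructional Architect dataset.

      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(0)
      # Hypothetical per-user usage features: [logins, projects_created, resources_linked]
      usage = np.vstack([rng.normal([5, 1, 3], 1.0, (100, 3)),
                         rng.normal([40, 12, 60], 5.0, (100, 3))])

      features = StandardScaler().fit_transform(usage)   # scale before clustering
      labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
      print(np.bincount(labels))   # cluster sizes (~100 each for this synthetic data)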

  31. Faster Trees: Strategies for Accelerated Training and Prediction of Random Forests for Classification of PolSAR Images

    NASA Astrophysics Data System (ADS)

    Hänsch, Ronny; Hellwich, Olaf

    2018-04-01

    Random Forests have consistently proven to be one of the most accurate, robust, as well as efficient methods for the supervised classification of images in general and polarimetric synthetic aperture radar data in particular. While the majority of previous work focuses on improving classification accuracy, we aim at accelerating the training of the classifier as well as its usage during prediction while maintaining its accuracy. Unlike other approaches, we mainly consider algorithmic changes to stay as much as possible independent of platform and programming language. The final model achieves an approximately 60 times faster training and a 500 times faster prediction, while the accuracy is only marginally decreased by roughly 1%.

  32. Pathological tremor prediction using surface EMG and acceleration: potential use in “ON-OFF” demand driven deep brain stimulator design

    PubMed Central

    Basu, Ishita; Graupe, Daniel; Tuninetti, Daniela; Shukla, Pitamber; Slavin, Konstantin V.; Metman, Leo Verhagen; Corcos, Daniel M.

    2013-01-01

    Objective We present a proof of concept for a novel method of predicting the onset of pathological tremor using non-invasively measured surface electromyogram (sEMG) and acceleration from tremor-affected extremities of patients with Parkinson’s disease (PD) and Essential tremor (ET). Approach The tremor prediction algorithm uses a set of spectral (fourier and wavelet) and non-linear time series (entropy and recurrence rate) parameters extracted from the non-invasively recorded sEMG and acceleration signals. Main results The resulting algorithm is shown to successfully predict tremor onset for all 91 trials recorded in 4 PD patients and for all 91 trials recorded in 4 ET patients. The predictor achieves a 100% sensitivity for all trials considered, along with an overall accuracy of 85.7% for all ET trials and 80.2% for all PD trials. By using a Pearson’s chi-square test, the prediction results are shown to significantly differ from a random prediction outcome. Significance The tremor prediction algorithm can be potentially used for designing the next generation of non-invasive closed-loop predictive ON-OFF controllers for deep brain stimulation (DBS), used for suppressing pathological tremor in such patients. Such a system is based on alternating ON and OFF DBS periods, an incoming tremor being predicted during the time intervals when DBS is OFF, so as to turn DBS back ON. The prediction should be a few seconds before tremor re-appears so that the patient is tremor-free for the entire DBS ON-OFF cycle as well as the tremor-free DBS OFF interval should be maximized in order to minimize the current injected in the brain and battery usage. PMID:23658233

  33. Pathological tremor prediction using surface electromyogram and acceleration: potential use in ‘ON-OFF’ demand driven deep brain stimulator design

    NASA Astrophysics Data System (ADS)

    Basu, Ishita; Graupe, Daniel; Tuninetti, Daniela; Shukla, Pitamber; Slavin, Konstantin V.; Verhagen Metman, Leo; Corcos, Daniel M.

    2013-06-01

    Objective. We present a proof of concept for a novel method of predicting the onset of pathological tremor using non-invasively measured surface electromyogram (sEMG) and acceleration from tremor-affected extremities of patients with Parkinson’s disease (PD) and essential tremor (ET). Approach. The tremor prediction algorithm uses a set of spectral (Fourier and wavelet) and nonlinear time series (entropy and recurrence rate) parameters extracted from the non-invasively recorded sEMG and acceleration signals. Main results. The resulting algorithm is shown to successfully predict tremor onset for all 91 trials recorded in 4 PD patients and for all 91 trials recorded in 4 ET patients. The predictor achieves a 100% sensitivity for all trials considered, along with an overall accuracy of 85.7% for all ET trials and 80.2% for all PD trials. By using a Pearson’s chi-square test, the prediction results are shown to significantly differ from a random prediction outcome. Significance. The tremor prediction algorithm can be potentially used for designing the next generation of non-invasive closed-loop predictive ON-OFF controllers for deep brain stimulation (DBS), used for suppressing pathological tremor in such patients. Such a system is based on alternating ON and OFF DBS periods, an incoming tremor being predicted during the time intervals when DBS is OFF, so as to turn DBS back ON. The prediction should be a few seconds before tremor re-appears so that the patient is tremor-free for the entire DBS ON-OFF cycle and the tremor-free DBS OFF interval should be maximized in order to minimize the current injected in the brain and battery usage.

  34. Accelerated Aging with Electrical Overstress and Prognostics for Power MOSFETs

    NASA Technical Reports Server (NTRS)

    Saha, Sankalita; Celaya, Jose Ramon; Vashchenko, Vladislav; Mahiuddin, Shompa; Goebel, Kai F.

    2011-01-01

    Power electronics play an increasingly important role in energy applications as part of their power converter circuits. Understanding the behavior of these devices, especially their failure modes as they age under nominal usage or sudden fault development, is critical in ensuring efficiency. In this paper, a prognostics based health management of power MOSFETs undergoing accelerated aging through electrical overstress at the gate area is presented. Details of the accelerated aging methodology, modeling of the degradation process of the device and prognostics algorithm for prediction of the future state of health of the device are presented. Experiments with multiple devices demonstrate the performance of the model and the prognostics algorithm as well as the scope of application.

  35. Joint learning of labels and distance metric.

    PubMed

    Liu, Bo; Wang, Meng; Hong, Richang; Zha, Zhengjun; Hua, Xian-Sheng

    2010-06-01

    Machine learning algorithms frequently suffer from the insufficiency of training data and the usage of inappropriate distance metric. In this paper, we propose a joint learning of labels and distance metric (JLLDM) approach, which is able to simultaneously address the two difficulties. In comparison with the existing semi-supervised learning and distance metric learning methods that focus only on label prediction or distance metric construction, the JLLDM algorithm optimizes the labels of unlabeled samples and a Mahalanobis distance metric in a unified scheme. The advantage of JLLDM is multifold: 1) the problem of training data insufficiency can be tackled; 2) a good distance metric can be constructed with only very few training samples; and 3) no radius parameter is needed since the algorithm automatically determines the scale of the metric. Extensive experiments are conducted to compare the JLLDM approach with different semi-supervised learning and distance metric learning methods, and empirical results demonstrate its effectiveness.

  36. Inspection Robot Based Mobile Sensing and Power Line Tracking for Smart Grid

    PubMed Central

    Byambasuren, Bat-erdene; Kim, Donghan; Oyun-Erdene, Mandakh; Bold, Chinguun; Yura, Jargalbaatar

    2016-01-01

    Smart sensing and power line tracking is very important in a smart grid system. Illegal electricity usage can be detected by remote current measurement on overhead power lines using an inspection robot. There is a need for accurate detection methods of illegal electricity usage. Stable and correct power line tracking is a very prominent issue. In order to correctly track and make accurate measurements, the swing path of a power line should be previously fitted and predicted by a mathematical function using an inspection robot. After this, the remote inspection robot can follow the power line and measure the current. This paper presents a new power line tracking method using parabolic and circle fitting algorithms for illegal electricity detection. We demonstrate the effectiveness of the proposed tracking method by simulation and experimental results. PMID:26907274

  37. Inspection Robot Based Mobile Sensing and Power Line Tracking for Smart Grid.

    PubMed

    Byambasuren, Bat-Erdene; Kim, Donghan; Oyun-Erdene, Mandakh; Bold, Chinguun; Yura, Jargalbaatar

    2016-02-19

    Smart sensing and power line tracking is very important in a smart grid system. Illegal electricity usage can be detected by remote current measurement on overhead power lines using an inspection robot. There is a need for accurate detection methods of illegal electricity usage. Stable and correct power line tracking is a very prominent issue. In order to correctly track and make accurate measurements, the swing path of a power line should be previously fitted and predicted by a mathematical function using an inspection robot. After this, the remote inspection robot can follow the power line and measure the current. This paper presents a new power line tracking method using parabolic and circle fitting algorithms for illegal electricity detection. We demonstrate the effectiveness of the proposed tracking method by simulation and experimental results.
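
    Fitting the swing path with a parabola, as described above, is a one-liner with np.polyfit; the points below are synthetic and the circle-fitting half of the method is omitted.

      import numpy as np

      # Synthetic observations of a swinging power line in the image plane (pixels).
      x = np.linspace(-100, 100, 21)
      y_true = 0.002 * x**2 + 0.1 * x + 5.0   # sagging/swinging line
      y_obs = y_true + np.random.default_rng(0).normal(0, 0.5, x.size)

      a, b, c = np.polyfit(x, y_obs, 2)   # parabola coefficients, highest degree first
      predict = np.poly1d([a, b, c])
      print(predict(120.0))               # extrapolated track point for the robot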

  38. A Model-based Prognostics Methodology for Electrolytic Capacitors Based on Electrical Overstress Accelerated Aging

    NASA Technical Reports Server (NTRS)

    Celaya, Jose; Kulkarni, Chetan; Biswas, Gautam; Saha, Sankalita; Goebel, Kai

    2011-01-01

    A remaining useful life prediction methodology for electrolytic capacitors is presented. This methodology is based on the Kalman filter framework and an empirical degradation model. Electrolytic capacitors are used in several applications ranging from power supplies on critical avionics equipment to power drivers for electro-mechanical actuators. These devices are known for their comparatively low reliability and given their criticality in electronics subsystems they are a good candidate for component level prognostics and health management. Prognostics provides a way to assess remaining useful life of a capacitor based on its current state of health and its anticipated future usage and operational conditions. We also present experimental results of an accelerated aging test under electrical stresses. The data obtained in this test form the basis for a remaining life prediction algorithm where a model of the degradation process is suggested. This preliminary remaining life prediction algorithm serves as a demonstration of how prognostics methodologies could be used for electrolytic capacitors. In addition, the use of degradation progression data from accelerated aging provides an avenue for validating the Kalman filter based prognostics methods typically used for remaining useful life predictions in other applications.

  39. Towards A Model-Based Prognostics Methodology for Electrolytic Capacitors: A Case Study Based on Electrical Overstress Accelerated Aging

    NASA Technical Reports Server (NTRS)

    Celaya, Jose R.; Kulkarni, Chetan S.; Biswas, Gautam; Goebel, Kai

    2012-01-01

    A remaining useful life prediction methodology for electrolytic capacitors is presented. This methodology is based on the Kalman filter framework and an empirical degradation model. Electrolytic capacitors are used in several applications ranging from power supplies on critical avionics equipment to power drivers for electro-mechanical actuators. These devices are known for their comparatively low reliability and given their criticality in electronics subsystems they are a good candidate for component level prognostics and health management. Prognostics provides a way to assess remaining useful life of a capacitor based on its current state of health and its anticipated future usage and operational conditions. We also present experimental results of an accelerated aging test under electrical stresses. The data obtained in this test form the basis for a remaining life prediction algorithm where a model of the degradation process is suggested. This preliminary remaining life prediction algorithm serves as a demonstration of how prognostics methodologies could be used for electrolytic capacitors. In addition, the use of degradation progression data from accelerated aging provides an avenue for validating the Kalman filter based prognostics methods typically used for remaining useful life predictions in other applications.

  40. Can the usage of human growth hormones affect facial appearance and the accuracy of face recognition systems?

    NASA Astrophysics Data System (ADS)

    Rose, Jake; Martin, Michael; Bourlai, Thirimachos

    2014-06-01

    In law enforcement and security applications, the acquisition of face images is critical in producing key trace evidence for the successful identification of potential threats. The goal of the study is to demonstrate that steroid usage significantly affects human facial appearance and hence the performance of commercial and academic face recognition (FR) algorithms. In this work, we evaluate the performance of state-of-the-art FR algorithms on two unique face image datasets of subjects before (gallery set) and after (probe set) steroid (or human growth hormone) usage. For the purpose of this study, datasets of 73 subjects were created from multiple sources found on the Internet, containing images of men and women before and after steroid usage. Next, we geometrically pre-processed all images of both face datasets. Then, we applied image restoration techniques on the same face datasets, and finally, we applied FR algorithms in order to match the pre-processed face images of our probe datasets against the face images of the gallery set. Experimental results demonstrate that only a specific set of FR algorithms obtain the most accurate results (in terms of the rank-1 identification rate). This is because several factors influence the efficiency of face matchers, including (i) the time lapse between the before and after face photos (even after pre-processing and restoration), (ii) the usage of different drugs (e.g. Dianabol, Winstrol, and Decabolan), (iii) the usage of different cameras to capture face images, and finally, (iv) the variability of standoff distance, illumination and other noise factors (e.g. motion noise). All of the previously mentioned complicated scenarios make clear that cross-scenario matching is a very challenging problem and, thus, further investigation is required.

  1. Dual Accelerometer Usage Strategy for Onboard Space Navigation

    NASA Technical Reports Server (NTRS)

    Zanetti, Renato; D'Souza, Chris

    2012-01-01

    This work introduces a dual accelerometer usage strategy for onboard space navigation. In the proposed algorithm the accelerometer is used to propagate the state when its value exceeds a threshold and it is used to estimate its errors otherwise. Numerical examples and comparison to other accelerometer usage schemes are presented to validate the proposed approach.
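
    A toy illustration of the switching logic just described (a sketch, not the authors' navigation code): above a threshold the accelerometer reading propagates the state; below it, readings are treated as quiescent and used to refine a bias estimate. The threshold and running-average estimator are assumptions for illustration.

        # Dual-usage logic: propagate when dynamic, estimate bias when quiescent.
        def process_accel_samples(samples, dt=0.1, threshold=0.05):
            velocity, bias, n_bias = 0.0, 0.0, 0
            for a_meas in samples:
                if abs(a_meas - bias) > threshold:
                    # Dynamic phase: propagate with the bias-corrected reading.
                    velocity += (a_meas - bias) * dt
                else:
                    # Quiescent phase: refine the bias with a running average.
                    n_bias += 1
                    bias += (a_meas - bias) / n_bias
            return velocity, bias

        print(process_accel_samples([0.0, 0.001, 0.2, 0.3, 0.0]))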

  2. Usage of Neural Network to Predict Aluminium Oxide Layer Thickness

    PubMed Central

    Michal, Peter; Vagaská, Alena; Gombár, Miroslav; Kmec, Ján; Spišák, Emil; Kučerka, Daniel

    2015-01-01

This paper examines the influence of the chemical composition of the electrolyte (the amounts of sulphuric acid, oxalic acid, and aluminium cations) and of the operating parameters of the anodic oxidation process (electrolyte temperature, anodizing time, and applied voltage) on the resulting thickness of the aluminium oxide layer. The impact of these variables is shown using a central composite design of experiment for the six factors and a cubic neural unit trained with the Levenberg-Marquardt algorithm during the evaluation of results. The paper also considers current densities of 1 A·dm−2 and 3 A·dm−2 for creating the aluminium oxide layer. PMID:25922850

  3. Usage of neural network to predict aluminium oxide layer thickness.

    PubMed

    Michal, Peter; Vagaská, Alena; Gombár, Miroslav; Kmec, Ján; Spišák, Emil; Kučerka, Daniel

    2015-01-01

This paper examines the influence of the chemical composition of the electrolyte (the amounts of sulphuric acid, oxalic acid, and aluminium cations) and of the operating parameters of the anodic oxidation process (electrolyte temperature, anodizing time, and applied voltage) on the resulting thickness of the aluminium oxide layer. The impact of these variables is shown using a central composite design of experiment for the six factors and a cubic neural unit trained with the Levenberg-Marquardt algorithm during the evaluation of results. The paper also considers current densities of 1 A·dm−2 and 3 A·dm−2 for creating the aluminium oxide layer.
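
    To make the modelling step concrete, here is a minimal sketch of Levenberg-Marquardt fitting of a single cubic unit (a polynomial neuron in the spirit of the record above; the exact unit used in the paper is not reproduced). The single-factor data below are synthetic placeholders.

        import numpy as np
        from scipy.optimize import least_squares

        def cubic_unit(w, x):
            # y = w0 + w1*x + w2*x^2 + w3*x^3
            return w[0] + w[1] * x + w[2] * x**2 + w[3] * x**3

        def residuals(w, x, y):
            return cubic_unit(w, x) - y

        # Synthetic data: layer thickness vs. anodizing time (illustrative only).
        x = np.linspace(5, 60, 30)
        y = 0.5 + 0.3 * x - 0.002 * x**2 + np.random.default_rng(1).normal(0, 0.5, 30)

        # method="lm" selects the Levenberg-Marquardt algorithm (MINPACK).
        fit = least_squares(residuals, x0=np.zeros(4), args=(x, y), method="lm")
        print("fitted weights:", fit.x)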

  4. Transcriptomics-based strain optimization tool for designing secondary metabolite overproducing strains of Streptomyces coelicolor.

    PubMed

    Kim, Minsuk; Yi, Jeong Sang; Lakshmanan, Meiyappan; Lee, Dong-Yup; Kim, Byung-Gee

    2016-03-01

In silico model-driven analysis using a genome-scale model of metabolism (GEM) has been recognized as a promising method for microbial strain improvement. However, most current GEM-based strain design algorithms based on flux balance analysis (FBA) rely heavily on steady-state and optimality assumptions without considering any regulatory information. Thus, their practical usage is quite limited, especially in application to secondary metabolite overproduction. In this study, we developed a transcriptomics-based strain optimization tool (tSOT) to overcome such limitations by integrating transcriptomic data into the GEM. Initially, we evaluated existing algorithms for integrating transcriptomic data into GEMs using a Streptomyces coelicolor dataset, and identified the iMAT algorithm as the only suitable, and thus the best, algorithm for characterizing the secondary metabolism of S. coelicolor. Subsequently, we developed the tSOT platform, in which iMAT is adopted to predict the reaction states, and successfully demonstrated its applicability to secondary metabolite overproduction by designing an actinorhodin (ACT, a polyketide antibiotic) overproducing strain of S. coelicolor. Mutants overexpressing tSOT targets such as ribulose 5-phosphate 3-epimerase and NADP-dependent malic enzyme showed 2- and 1.8-fold increases in ACT production, thereby validating the tSOT prediction. It is expected that tSOT can be used for solving other metabolic engineering problems which could not be addressed by current strain design algorithms, especially secondary metabolite overproduction. © 2015 Wiley Periodicals, Inc.

  5. Levels of HIV1 gp120 3D B-cell epitopes mutability and variability: searching for possible vaccine epitopes.

    PubMed

    Khrustalev, Vladislav Victorovich

    2010-01-01

We used the DiscoTope 1.2 (http://www.cbs.dtu.dk/services/DiscoTope/), Epitopia (http://epitopia.tau.ac.il/) and EPCES (http://www.t38.physik.tu-muenchen.de/programs.htm) algorithms to map discontinuous B-cell epitopes in HIV1 gp120. The most mutable nucleotides in HIV genes are guanine (because of G to A hypermutagenesis) and cytosine (because of C to U and C to A mutations). The higher the level of guanine and cytosine usage in third (neutral) codon positions, and the lower their level in first and second codon positions of the coding region, the more stable the epitope encoded by that region should be. We compared guanine and cytosine usage in regions coding for five predicted 3D B-cell epitopes of gp120. For this comparison we used the GenBank resource: 385 sequences of the env gene obtained from ten HIV1-infected individuals were studied (http://www.barkovsky.hotmail.ru/Data/Seqgp120.htm). The 3D B-cell epitope best protected from nonsynonymous mutations of guanine and cytosine is situated in the first conserved region of gp120 (it is mapped from the 66th to the 86th amino acid residue). We applied a test of variability to confirm this finding: indeed, the least mutable predicted B-cell epitope is also the least variable one. MEGA4 (standard PAM matrix) was used for the alignments and the "VVK Consensus" algorithm (http://www.barkovsky.hotmail.ru) was used for the calculations.
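
    The codon-position analysis this record relies on is easy to state in code. The sketch below computes G+C usage separately per codon position of a coding sequence; by the authors' argument, high third-position and low first/second-position G+C marks a more mutation-stable epitope. The example fragment is made up.

        # G+C usage by codon position (1, 2, 3) over a coding sequence.
        def gc_usage_by_codon_position(cds):
            cds = cds.upper()
            counts = {1: 0, 2: 0, 3: 0}
            n_codons = len(cds) // 3
            for i in range(n_codons):
                codon = cds[3 * i:3 * i + 3]
                for pos, base in enumerate(codon, start=1):
                    if base in "GC":
                        counts[pos] += 1
            return {pos: counts[pos] / n_codons for pos in counts}

        print(gc_usage_by_codon_position("ATGGCGTTTCCAGGA"))  # toy fragment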

  6. PyRosetta: a script-based interface for implementing molecular modeling algorithms using Rosetta.

    PubMed

    Chaudhury, Sidhartha; Lyskov, Sergey; Gray, Jeffrey J

    2010-03-01

    PyRosetta is a stand-alone Python-based implementation of the Rosetta molecular modeling package that allows users to write custom structure prediction and design algorithms using the major Rosetta sampling and scoring functions. PyRosetta contains Python bindings to libraries that define Rosetta functions including those for accessing and manipulating protein structure, calculating energies and running Monte Carlo-based simulations. PyRosetta can be used in two ways: (i) interactively, using iPython and (ii) script-based, using Python scripting. Interactive mode contains a number of help features and is ideal for beginners while script-mode is best suited for algorithm development. PyRosetta has similar computational performance to Rosetta, can be easily scaled up for cluster applications and has been implemented for algorithms demonstrating protein docking, protein folding, loop modeling and design. PyRosetta is a stand-alone package available at http://www.pyrosetta.org under the Rosetta license which is free for academic and non-profit users. A tutorial, user's manual and sample scripts demonstrating usage are also available on the web site.

  7. PyRosetta: a script-based interface for implementing molecular modeling algorithms using Rosetta

    PubMed Central

    Chaudhury, Sidhartha; Lyskov, Sergey; Gray, Jeffrey J.

    2010-01-01

    Summary: PyRosetta is a stand-alone Python-based implementation of the Rosetta molecular modeling package that allows users to write custom structure prediction and design algorithms using the major Rosetta sampling and scoring functions. PyRosetta contains Python bindings to libraries that define Rosetta functions including those for accessing and manipulating protein structure, calculating energies and running Monte Carlo-based simulations. PyRosetta can be used in two ways: (i) interactively, using iPython and (ii) script-based, using Python scripting. Interactive mode contains a number of help features and is ideal for beginners while script-mode is best suited for algorithm development. PyRosetta has similar computational performance to Rosetta, can be easily scaled up for cluster applications and has been implemented for algorithms demonstrating protein docking, protein folding, loop modeling and design. Availability: PyRosetta is a stand-alone package available at http://www.pyrosetta.org under the Rosetta license which is free for academic and non-profit users. A tutorial, user's manual and sample scripts demonstrating usage are also available on the web site. Contact: pyrosetta@graylab.jhu.edu PMID:20061306
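
    As a quick orientation to the API the two records above describe, here is a minimal interactive-style sketch. It assumes a modern PyRosetta 4 installation (the module layout differed in the 2010 release), and the "ref2015" score-function name is an assumption tied to recent releases.

        import pyrosetta

        pyrosetta.init()                                    # start Rosetta (license required)
        pose = pyrosetta.pose_from_sequence("ACDEFGHIKLM")  # toy 11-residue chain
        scorefxn = pyrosetta.create_score_function("ref2015")  # assumes a recent release
        print("initial score:", scorefxn(pose))

        # Perturb backbone torsions of residue 5 and rescore.
        pose.set_phi(5, -60.0)
        pose.set_psi(5, -45.0)
        print("perturbed score:", scorefxn(pose))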

  8. Use of four next-generation sequencing platforms to determine HIV-1 coreceptor tropism.

    PubMed

    Archer, John; Weber, Jan; Henry, Kenneth; Winner, Dane; Gibson, Richard; Lee, Lawrence; Paxinos, Ellen; Arts, Eric J; Robertson, David L; Mimms, Larry; Quiñones-Mateu, Miguel E

    2012-01-01

HIV-1 coreceptor tropism assays are required to rule out the presence of CXCR4-tropic (non-R5) viruses prior to treatment with CCR5 antagonists. Phenotypic (e.g., Trofile™, Monogram Biosciences) and genotypic (e.g., population sequencing linked to bioinformatic algorithms) assays are the most widely used. Although several next-generation sequencing (NGS) platforms are available, to date all published deep sequencing HIV-1 tropism studies have used the 454™ Life Sciences/Roche platform. In this study, HIV-1 co-receptor usage was predicted for twelve patients scheduled to start a maraviroc-based antiretroviral regimen. The V3 region of the HIV-1 env gene was sequenced using four NGS platforms: 454™, PacBio® RS (Pacific Biosciences), Illumina®, and Ion Torrent™ (Life Technologies). Cross-platform variation was evaluated, including number of reads, read length and error rates. HIV-1 tropism was inferred using Geno2Pheno, Web PSSM, and the 11/24/25 rule and compared with Trofile™ and virologic response to antiretroviral therapy. Error rates related to insertions/deletions (indels) and nucleotide substitutions introduced by the four NGS platforms were low compared to the actual HIV-1 sequence variation. Each platform detected all major virus variants within the HIV-1 population with similar frequencies. Identification of non-R5 viruses was comparable among the four platforms, with minor differences attributable to the algorithms used to infer HIV-1 tropism. All NGS platforms showed similar concordance with virologic response to the maraviroc-based regimen (75% to 80% range depending on the algorithm used), compared to Trofile (80%) and population sequencing (70%). In conclusion, all four NGS platforms were able to detect minority non-R5 variants at comparable levels, suggesting that any NGS-based method can be used to predict HIV-1 coreceptor usage.
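
    Of the three tropism rules named above, the "11/24/25 rule" is simple enough to sketch directly: a positively charged residue (R or K) at V3-loop position 11, 24, or 25 flags the variant as X4-capable. The implementation below is a plain illustration of that rule (1-based numbering over the 35-residue V3 loop), not the Geno2Pheno or Web PSSM methods.

        # Genotypic 11/24/25 rule for HIV-1 coreceptor tropism.
        def rule_11_24_25(v3_sequence):
            v3 = v3_sequence.upper()
            return "non-R5" if any(v3[i - 1] in "RK" for i in (11, 24, 25)) else "R5"

        # Consensus-like 35-residue V3 example (illustrative sequence):
        print(rule_11_24_25("CTRPNNNTRKSIHIGPGRAFYTTGEIIGDIRQAHC"))  # -> R5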

  9. Thermal sensation prediction by soft computing methodology.

    PubMed

    Jović, Srđan; Arsić, Nebojša; Vilimonović, Jovana; Petković, Dalibor

    2016-12-01

Thermal comfort in open urban areas depends strongly on environmental factors, so suitable thermal comfort needs to be ensured during urban planning and design. Thermal comfort can be modeled based on climatic parameters and other factors, but these factors vary throughout the year and the day. There is therefore a need for an algorithm that predicts thermal comfort from the input variables; the prediction results could be used to plan when urban areas are used. Since this is a highly nonlinear task, a soft computing methodology was applied in this investigation to predict thermal comfort. The main goal was to apply an extreme learning machine (ELM) to forecast physiological equivalent temperature (PET) values. Temperature, pressure, wind speed and irradiance were used as inputs. The prediction results are compared with some benchmark models. Based on the results, ELM can be used effectively in forecasting PET. Copyright © 2016 Elsevier Ltd. All rights reserved.
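
    An extreme learning machine is compact enough to sketch end to end: a randomly weighted hidden layer that is never trained, with output weights solved in closed form by least squares. The data below are synthetic stand-ins for the study's weather inputs and PET targets.

        import numpy as np

        rng = np.random.default_rng(42)
        X = rng.normal(size=(200, 4))        # temperature, pressure, wind, irradiance
        y = X @ np.array([1.5, -0.3, -0.8, 0.6]) + rng.normal(0, 0.1, 200)  # fake PET

        n_hidden = 50
        W = rng.normal(size=(4, n_hidden))   # random input weights (never trained)
        b = rng.normal(size=n_hidden)        # random biases
        H = np.tanh(X @ W + b)               # hidden-layer activations
        beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # closed-form output weights

        y_hat = H @ beta
        print("training RMSE:", np.sqrt(np.mean((y - y_hat) ** 2)))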

  10. Application of Monte Carlo techniques to optimization of high-energy beam transport in a stochastic environment

    NASA Technical Reports Server (NTRS)

    Parrish, R. V.; Dieudonne, J. E.; Filippas, T. A.

    1971-01-01

An algorithm employing a modified sequential random perturbation, or creeping random search, was applied to the problem of optimizing the parameters of a high-energy beam transport system. The stochastic solution of the mathematical model for first-order magnetic-field expansion allows the inclusion of state-variable constraints, and the inclusion of parameter constraints allowed by the method of algorithm application eliminates the possibility of infeasible solutions. The mathematical model and the algorithm were programmed for a real-time simulation facility; thus, two important features are provided to the beam designer: (1) a strong degree of man-machine communication (even to the extent of bypassing the algorithm and applying analog-matching techniques), and (2) extensive graphics for displaying information concerning both algorithm operation and transport-system behavior. Chromatic aberration was also included in the mathematical model and in the optimization process. Results presented show this method yielding better solutions (in terms of resolution) to the particular problem than those of a standard analog program, as well as demonstrating the flexibility, in terms of elements, constraints, and chromatic aberration, allowed by user interaction with both the algorithm and the stochastic model. Examples of slit usage and a limited comparison of predicted results with actual results obtained with a 600 MeV cyclotron are given.
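
    A creeping random search is a few lines of code, so a sketch may help: perturb the current parameter vector with small random steps and keep only improving, feasible candidates. Parameter constraints are enforced here by clipping to bounds; the objective is a toy stand-in for beam resolution as a function of magnet settings.

        import numpy as np

        def creeping_random_search(f, x0, bounds, step=0.1, iters=2000, seed=0):
            rng = np.random.default_rng(seed)
            x, fx = np.array(x0, float), f(x0)
            lo, hi = np.array(bounds).T
            for _ in range(iters):
                cand = x + rng.normal(0, step, size=x.size)
                cand = np.clip(cand, lo, hi)   # parameter constraints: always feasible
                fc = f(cand)
                if fc < fx:                    # "creep": accept only improvements
                    x, fx = cand, fc
            return x, fx

        # Toy objective standing in for resolution vs. two magnet parameters.
        f = lambda p: (p[0] - 1.0) ** 2 + 10 * (p[1] + 0.5) ** 2
        print(creeping_random_search(f, [0.0, 0.0], [(-2, 2), (-2, 2)]))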

  11. Avoiding the Enumeration of Infeasible Elementary Flux Modes by Including Transcriptional Regulatory Rules in the Enumeration Process Saves Computational Costs

    PubMed Central

    Jungreuthmayer, Christian; Ruckerbauer, David E.; Gerstl, Matthias P.; Hanscho, Michael; Zanghellini, Jürgen

    2015-01-01

    Despite the significant progress made in recent years, the computation of the complete set of elementary flux modes of large or even genome-scale metabolic networks is still impossible. We introduce a novel approach to speed up the calculation of elementary flux modes by including transcriptional regulatory information into the analysis of metabolic networks. Taking into account gene regulation dramatically reduces the solution space and allows the presented algorithm to constantly eliminate biologically infeasible modes at an early stage of the computation procedure. Thereby, computational costs, such as runtime, memory usage, and disk space, are extremely reduced. Moreover, we show that the application of transcriptional rules identifies non-trivial system-wide effects on metabolism. Using the presented algorithm pushes the size of metabolic networks that can be studied by elementary flux modes to new and much higher limits without the loss of predictive quality. This makes unbiased, system-wide predictions in large scale metabolic networks possible without resorting to any optimization principle. PMID:26091045

  12. Bayesian Estimation of Multidimensional Item Response Models. A Comparison of Analytic and Simulation Algorithms

    ERIC Educational Resources Information Center

    Martin-Fernandez, Manuel; Revuelta, Javier

    2017-01-01

This study compares the performance of two newly adopted estimation algorithms, the Metropolis-Hastings Robbins-Monro (MHRM) and the Hamiltonian MCMC (HMC), with two algorithms consolidated in the psychometric literature, marginal maximum likelihood via the EM algorithm (MML-EM) and Markov chain Monte Carlo (MCMC), in the estimation of multidimensional…

  13. A study on low-cost, high-accuracy, and real-time stereo vision algorithms for UAV power line inspection

    NASA Astrophysics Data System (ADS)

    Wang, Hongyu; Zhang, Baomin; Zhao, Xun; Li, Cong; Lu, Cunyue

    2018-04-01

Conventional stereo vision algorithms suffer from high levels of hardware resource utilization due to algorithm complexity, or poor levels of accuracy caused by inadequacies in the matching algorithm. To address these issues, we have proposed a stereo range-finding technique that strikes an excellent balance between cost, matching accuracy and real-time performance for power line inspection using UAVs. This was achieved through the introduction of a special image preprocessing algorithm and a weighted local stereo matching algorithm, as well as the design of a corresponding hardware architecture. Stereo vision systems based on this technique have a lower level of resource usage and also a higher level of matching accuracy following hardware acceleration. To validate the effectiveness of our technique, a stereo vision system based on our improved algorithms was implemented on a Spartan-6 FPGA. In comparative experiments, it was shown that the system using the improved algorithms outperformed the system based on the unimproved algorithms, in terms of resource utilization and matching accuracy. In particular, Block RAM usage was reduced by 19%, and the improved system was also able to output range-finding data in real time.

  14. File Usage Analysis and Resource Usage Prediction: a Measurement-Based Study. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Devarakonda, Murthy V.-S.

    1987-01-01

A probabilistic scheme was developed to predict process resource usage in UNIX. Given the identity of the program being run, the scheme predicts CPU time, file I/O, and memory requirements of a process at the beginning of its life. The scheme uses a state-transition model of the program's resource usage in its past executions for prediction. The states of the model are the resource regions obtained from an off-line cluster analysis of processes run on the system. The proposed method is shown to work on data collected from a VAX 11/780 running 4.3 BSD UNIX. The results show that the predicted values correlate well with the actual values. The coefficient of correlation between the predicted and actual values of CPU time is 0.84. Errors in prediction are mostly small. Some 82% of errors in CPU time prediction are less than 0.5 standard deviations of process CPU time.
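
    The state-transition idea in this record (and its journal versions later in this list) can be sketched directly. Below, the states are resource-region labels assumed to come from a prior cluster analysis; transition counts over a program's past runs yield the most likely region of its next run.

        from collections import defaultdict

        def build_transitions(history):
            # Count region-to-region transitions over past executions.
            counts = defaultdict(lambda: defaultdict(int))
            for prev, nxt in zip(history, history[1:]):
                counts[prev][nxt] += 1
            return counts

        def predict_next(history):
            counts = build_transitions(history)
            successors = counts.get(history[-1])
            if not successors:
                return history[-1]          # no data: assume the same region
            return max(successors, key=successors.get)

        # Example: past runs of one program, labeled by resource region.
        runs = ["small", "small", "large", "small", "small", "large", "small"]
        print(predict_next(runs))           # most likely region of the next run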

  15. Electrochemistry-based Battery Modeling for Prognostics

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew J.; Kulkarni, Chetan Shrikant

    2013-01-01

Batteries are used in a wide variety of applications. In recent years, they have become popular as a source of power for electric vehicles such as cars, unmanned aerial vehicles, and commercial passenger aircraft. In such application domains, it becomes crucial both to monitor battery health and performance and to predict end of discharge (EOD) and end of useful life (EOL) events. To implement such technologies, it is crucial to understand how batteries work and to capture that knowledge in the form of models that can be used by monitoring, diagnosis, and prognosis algorithms. In this work, we develop electrochemistry-based models of lithium-ion batteries that capture the significant electrochemical processes, are computationally efficient, capture the effects of aging, and are of suitable accuracy for reliable EOD prediction in a variety of usage profiles. This paper reports on the progress of such a model, with results demonstrating the model validity and accurate EOD predictions.

  16. Network clustering and community detection using modulus of families of loops.

    PubMed

    Shakeri, Heman; Poggi-Corradini, Pietro; Albin, Nathan; Scoglio, Caterina

    2017-01-01

    We study the structure of loops in networks using the notion of modulus of loop families. We introduce an alternate measure of network clustering by quantifying the richness of families of (simple) loops. Modulus tries to minimize the expected overlap among loops by spreading the expected link usage optimally. We propose weighting networks using these expected link usages to improve classical community detection algorithms. We show that the proposed method enhances the performance of certain algorithms, such as spectral partitioning and modularity maximization heuristics, on standard benchmarks.

  17. Finite-Fault and Other New Capabilities of CISN ShakeAlert

    NASA Astrophysics Data System (ADS)

    Boese, M.; Felizardo, C.; Heaton, T. H.; Hudnut, K. W.; Hauksson, E.

    2013-12-01

Over the past 6 years, scientists at Caltech, UC Berkeley, the Univ. of Southern California, the Univ. of Washington, the US Geological Survey, and ETH Zurich (Switzerland) have developed the 'ShakeAlert' earthquake early warning demonstration system for California and the Pacific Northwest. We have now started to transform this system into a stable end-to-end production system that will be integrated into the daily routine operations of the CISN and PNSN networks. To quickly determine the earthquake magnitude and location, ShakeAlert currently processes and interprets real-time data-streams from several hundred seismic stations within the California Integrated Seismic Network (CISN) and the Pacific Northwest Seismic Network (PNSN). Based on these parameters, the 'UserDisplay' software predicts and displays the arrival and intensity of shaking at a given user site. Real-time ShakeAlert feeds are currently being shared with around 160 individuals, companies, and emergency response organizations to gather feedback about the system performance, to educate potential users about EEW, and to identify needs and applications of EEW in a future operational warning system. To improve the performance during large earthquakes (M>6.5), we have started to develop, implement, and test a number of new algorithms for the ShakeAlert system: the 'FinDer' (Finite Fault Rupture Detector) algorithm provides real-time estimates of locations and extents of finite-fault ruptures from high-frequency seismic data. The 'GPSlip' algorithm estimates the fault slip along these ruptures using high-rate real-time GPS data. And, third, a new type of ground-motion prediction model, derived from over 415,000 rupture simulations along active faults in southern California, improves MMI intensity predictions for large earthquakes with consideration of finite-fault, rupture directivity, and basin response effects. FinDer and GPSlip are currently being tested, both in real time and offline, in a separate internal ShakeAlert installation at Caltech. Real-time position and displacement time series from around 100 GPS sensors are obtained in JSON format from RTK/PPP(AR) solutions using the RTNet software at USGS Pasadena. However, we have also started to investigate the usage of onsite (in-receiver) processing using NetR9 with RTX and tracebuf2 output format. A number of changes to the ShakeAlert processing, xml message format, and the usage of this information in the UserDisplay software were necessary to handle the new finite-fault and slip information from the FinDer and GPSlip algorithms. In addition, we have developed a framework for end-to-end off-line testing with archived and simulated waveform data using the Earthworm tankplayer. Detailed background information about the algorithms, processing, and results from these test runs will be presented.

  18. Validation of objective records and misreporting of personal radio use in a cohort of British Police forces (the Airwave Health Monitoring Study).

    PubMed

    Vergnaud, Anne-Claire; Aresu, Maria; McRobie, Dennis; Singh, Deepa; Spear, Jeanette; Heard, Andy; Elliott, Paul

    2016-07-01

    Terrestrial Trunked Radio (TETRA) is a digital communication system progressively adopted by Police Forces in Great Britain since 2001. In 2000, the UK Independent Expert Group on Mobile Phones suggested that exposure to TETRA-like signal modulation might have adverse effects on health. The Airwave Health Monitoring Study was established to investigate possible long-term effects of TETRA use on health. This requires estimation of TETRA use among Police Force employees participating in the study. We investigated TETRA usage among 42,112 Police officers and staff. An algorithm was created to link each personal radio user to his/her objective radio usage records for the 26,035 participants with available data. We linked 16,577 personal radio users to their objective radio usage records and compared self-reported usage with data from the TETRA operator for those individuals. For weekly usage, the correlation between self-reported and operator-derived personal radio usage was r=0.69 for number and r=0.59 for the duration of calls. Compared with objective data, participants under-reported the number of calls and over-reported the duration of calls by a factor of around 4 and 1.6 respectively. Correlations were lower and bias higher when looking at daily usage. Where both objective and self-reported information were available, our study showed substantial misreporting in self-reported TETRA usage. Successful linkage of large numbers of TETRA users to objective data on their personal radios will allow objective assessment of TETRA radio usage for these participants and development of algorithms to correct bias in self-reported data for the remainder. Copyright © 2016 Elsevier Inc. All rights reserved.

  19. An imperialist competitive algorithm for virtual machine placement in cloud computing

    NASA Astrophysics Data System (ADS)

    Jamali, Shahram; Malektaji, Sepideh; Analoui, Morteza

    2017-05-01

Cloud computing, the recently emerged revolution in the IT industry, is empowered by virtualisation technology. In this paradigm, users' applications run on virtual machines (VMs). The process of selecting proper physical machines to host these virtual machines is called virtual machine placement. It plays an important role in the resource utilisation and power efficiency of the cloud computing environment. In this paper, we propose an imperialist competitive-based algorithm for the virtual machine placement problem called ICA-VMPLC. The base optimisation algorithm is chosen to be ICA because of its ease in neighbourhood movement, good convergence rate and suitable terminology. The proposed algorithm investigates the search space in a unique manner to efficiently obtain an optimal placement solution that simultaneously minimises power consumption and total resource wastage. Its final solution performance is compared with several existing methods such as grouping genetic and ant colony-based algorithms as well as a bin packing heuristic. The simulation results show that the proposed method is superior to the other tested algorithms in terms of power consumption, resource wastage, CPU usage efficiency and memory usage efficiency.

  20. In patients with minimally symptomatic OSA can baseline characteristics and early patterns of CPAP usage predict those who are likely to be longer-term users of CPAP.

    PubMed

    Turnbull, Christopher D; Bratton, Daniel J; Craig, Sonya E; Kohler, Malcolm; Stradling, John R

    2016-02-01

    Long-term continuous positive airway pressure (CPAP) usage varies between individuals. It would be of value to be able to identify those who are likely to benefit from CPAP (and use it long term), versus those who would not, and might therefore benefit from additional help early on. First, we explored whether baseline characteristics predicted CPAP usage in minimally symptomatic obstructive sleep apnoea (OSA) patients, a group who would be expected to have low usage. Second, we explored if early CPAP usage was predictive of longer-term usage, as has been shown in more symptomatic OSA patients. The MOSAIC trial was a multi-centre randomised controlled trial where minimally symptomatic OSA patients were randomised to CPAP, or standard care, for 6 months. Here we have studied only those patients randomised to CPAP treatment. Baseline characteristics including symptoms, questionnaires [including the Epworth sleepiness score (ESS)] and sleep study parameters were recorded. CPAP usage was recorded at 2-4 weeks after initiation and after 6 months. The correlation and association between baseline characteristics and 6 months CPAP usage was assessed, as was the correlation between 2 and 4 weeks CPAP usage and 6 months CPAP usage. One hundred and ninety-five patients randomised to CPAP therapy had median [interquartile range (IQR)] CPAP usage of 2:49 (0:44, 5:13) h:min/night (h/n) at the 2-4 weeks visit, and 2:17 (0:08, 4:54) h/n at the 6 months follow-up visit. Only male gender was associated with increased long-term CPAP use (male usage 2:56 h/n, female 1:57 h/n; P=0.02). There was a moderate correlation between the usage of CPAP at 2-4 weeks and 6 months, with about 50% of the variability in long-term use being predicted by the short-term use. In patients with minimally symptomatic OSA, our study has shown that male gender (and not OSA severity or symptom burden) is associated with increased long-term use of CPAP at 6 months. Although, in general, early patterns of CPAP usage predicted longer term use, there are patients in whom this is not the case, and patients with low initial usage may need to extend their CPAP trial before a decision about longer-term use is made.

  1. A framework for porting the NeuroBayes machine learning algorithm to FPGAs

    NASA Astrophysics Data System (ADS)

    Baehr, S.; Sander, O.; Heck, M.; Feindt, M.; Becker, J.

    2016-01-01

The NeuroBayes machine learning algorithm is deployed for online data reduction at the pixel detector of Belle II. A framework was developed in order to test, characterize and easily adapt its implementation on FPGAs. Within the framework, an HDL model written in Python using MyHDL is used for fast exploration of possible configurations. Using input data from physics simulations, figures of merit such as throughput, accuracy and resource demand of the implementation are evaluated in a fast and flexible way. Functional validation is supported by unit tests and HDL simulation of chosen configurations.

  2. Particle Communication and Domain Neighbor Coupling: Scalable Domain Decomposed Algorithms for Monte Carlo Particle Transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O'Brien, M. J.; Brantley, P. S.

    2015-01-20

In order to run Monte Carlo particle transport calculations on new supercomputers with hundreds of thousands or millions of processors, care must be taken to implement scalable algorithms. This means that the algorithms must continue to perform well as the processor count increases. In this paper, we examine the scalability of: (1) globally resolving the particle locations on the correct processor, (2) deciding that particle streaming communication has finished, and (3) efficiently coupling neighbor domains together with different replication levels. We have run domain decomposed Monte Carlo particle transport on up to 2^21 = 2,097,152 MPI processes on the IBM BG/Q Sequoia supercomputer and observed scalable results that agree with our theoretical predictions. These calculations were carefully constructed to have the same amount of work on every processor, i.e. the calculation is already load balanced. We also examine load imbalanced calculations where each domain's replication level is proportional to its particle workload. In this case we show how to efficiently couple together adjacent domains to maintain within-workgroup load balance and minimize memory usage.

  3. Using MaxCompiler for the high level synthesis of trigger algorithms

    NASA Astrophysics Data System (ADS)

    Summers, S.; Rose, A.; Sanders, P.

    2017-02-01

    Firmware for FPGA trigger applications at the CMS experiment is conventionally written using hardware description languages such as Verilog and VHDL. MaxCompiler is an alternative, Java based, tool for developing FPGA applications which uses a higher level of abstraction from the hardware than a hardware description language. An implementation of the jet and energy sum algorithms for the CMS Level-1 calorimeter trigger has been written using MaxCompiler to benchmark against the VHDL implementation in terms of accuracy, latency, resource usage, and code size. A Kalman Filter track fitting algorithm has been developed using MaxCompiler for a proposed CMS Level-1 track trigger for the High-Luminosity LHC upgrade. The design achieves a low resource usage, and has a latency of 187.5 ns per iteration.

  4. A fuzzy logic approach to control anaerobic digestion.

    PubMed

    Domnanovich, A M; Strik, D P; Zani, L; Pfeiffer, B; Karlovits, M; Braun, R; Holubar, P

    2003-01-01

One of the goals of the EU-Project AMONCO (Advanced Prediction, Monitoring and Controlling of Anaerobic Digestion Process Behaviour towards Biogas Usage in Fuel Cells) is to create a control tool for the anaerobic digestion process that predicts the volumetric organic loading rate (Bv) for the next day, in order to obtain high biogas quality and production. The biogas should contain a high methane concentration (over 50%) and low concentrations of components toxic to fuel cells, e.g. hydrogen sulphide, siloxanes, ammonia and mercaptans. To produce data for testing the control tool, four 20 l anaerobic continuously stirred tank reactors (CSTRs) are operated. For control, two systems were investigated: a pure fuzzy logic system and a hybrid system which combines a fuzzy-based reactor condition calculation with a hierarchical neural net in a cascade of optimisation algorithms.

  5. Comparison of four machine learning algorithms for their applicability in satellite-based optical rainfall retrievals

    NASA Astrophysics Data System (ADS)

    Meyer, Hanna; Kühnlein, Meike; Appelhans, Tim; Nauss, Thomas

    2016-03-01

    Machine learning (ML) algorithms have successfully been demonstrated to be valuable tools in satellite-based rainfall retrievals which show the practicability of using ML algorithms when faced with high dimensional and complex data. Moreover, recent developments in parallel computing with ML present new possibilities for training and prediction speed and therefore make their usage in real-time systems feasible. This study compares four ML algorithms - random forests (RF), neural networks (NNET), averaged neural networks (AVNNET) and support vector machines (SVM) - for rainfall area detection and rainfall rate assignment using MSG SEVIRI data over Germany. Satellite-based proxies for cloud top height, cloud top temperature, cloud phase and cloud water path serve as predictor variables. The results indicate an overestimation of rainfall area delineation regardless of the ML algorithm (averaged bias = 1.8) but a high probability of detection ranging from 81% (SVM) to 85% (NNET). On a 24-hour basis, the performance of the rainfall rate assignment yielded R2 values between 0.39 (SVM) and 0.44 (AVNNET). Though the differences in the algorithms' performance were rather small, NNET and AVNNET were identified as the most suitable algorithms. On average, they demonstrated the best performance in rainfall area delineation as well as in rainfall rate assignment. NNET's computational speed is an additional advantage in work with large datasets such as in remote sensing based rainfall retrievals. However, since no single algorithm performed considerably better than the others we conclude that further research in providing suitable predictors for rainfall is of greater necessity than an optimization through the choice of the ML algorithm.
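
    The comparison workflow this record describes maps naturally onto a few lines of scikit-learn. The sketch below covers three of the four algorithms (RF, NNET, SVM; the averaged-network variant is omitted) with synthetic stand-ins for the satellite predictors and rainfall rates, so the numbers it prints are meaningless beyond illustration.

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.neural_network import MLPRegressor
        from sklearn.svm import SVR
        from sklearn.model_selection import cross_val_score

        # Synthetic stand-ins for cloud-top height/temperature, phase, water path.
        rng = np.random.default_rng(7)
        X = rng.normal(size=(500, 4))
        y = 2 * X[:, 0] - X[:, 1] + 0.5 * X[:, 2] * X[:, 3] + rng.normal(0, 0.5, 500)

        models = {
            "RF": RandomForestRegressor(n_estimators=200, random_state=0),
            "NNET": MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0),
            "SVM": SVR(kernel="rbf", C=10.0),
        }
        for name, model in models.items():
            r2 = cross_val_score(model, X, y, cv=5, scoring="r2")
            print(f"{name}: mean R^2 = {r2.mean():.2f}")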

  6. In patients with minimally symptomatic OSA can baseline characteristics and early patterns of CPAP usage predict those who are likely to be longer-term users of CPAP

    PubMed Central

    Bratton, Daniel J.; Craig, Sonya E.; Kohler, Malcolm; Stradling, John R.

    2016-01-01

    Background Long-term continuous positive airway pressure (CPAP) usage varies between individuals. It would be of value to be able to identify those who are likely to benefit from CPAP (and use it long term), versus those who would not, and might therefore benefit from additional help early on. First, we explored whether baseline characteristics predicted CPAP usage in minimally symptomatic obstructive sleep apnoea (OSA) patients, a group who would be expected to have low usage. Second, we explored if early CPAP usage was predictive of longer-term usage, as has been shown in more symptomatic OSA patients. Methods The MOSAIC trial was a multi-centre randomised controlled trial where minimally symptomatic OSA patients were randomised to CPAP, or standard care, for 6 months. Here we have studied only those patients randomised to CPAP treatment. Baseline characteristics including symptoms, questionnaires [including the Epworth sleepiness score (ESS)] and sleep study parameters were recorded. CPAP usage was recorded at 2–4 weeks after initiation and after 6 months. The correlation and association between baseline characteristics and 6 months CPAP usage was assessed, as was the correlation between 2 and 4 weeks CPAP usage and 6 months CPAP usage. Results One hundred and ninety-five patients randomised to CPAP therapy had median [interquartile range (IQR)] CPAP usage of 2:49 (0:44, 5:13) h:min/night (h/n) at the 2–4 weeks visit, and 2:17 (0:08, 4:54) h/n at the 6 months follow-up visit. Only male gender was associated with increased long-term CPAP use (male usage 2:56 h/n, female 1:57 h/n; P=0.02). There was a moderate correlation between the usage of CPAP at 2–4 weeks and 6 months, with about 50% of the variability in long-term use being predicted by the short-term use. Conclusions In patients with minimally symptomatic OSA, our study has shown that male gender (and not OSA severity or symptom burden) is associated with increased long-term use of CPAP at 6 months. Although, in general, early patterns of CPAP usage predicted longer term use, there are patients in whom this is not the case, and patients with low initial usage may need to extend their CPAP trial before a decision about longer-term use is made. PMID:26904268

  7. Think globally and solve locally: secondary memory-based network learning for automated multi-species function prediction

    PubMed Central

    2014-01-01

    Background Network-based learning algorithms for automated function prediction (AFP) are negatively affected by the limited coverage of experimental data and limited a priori known functional annotations. As a consequence their application to model organisms is often restricted to well characterized biological processes and pathways, and their effectiveness with poorly annotated species is relatively limited. A possible solution to this problem might consist in the construction of big networks including multiple species, but this in turn poses challenging computational problems, due to the scalability limitations of existing algorithms and the main memory requirements induced by the construction of big networks. Distributed computation or the usage of big computers could in principle respond to these issues, but raises further algorithmic problems and require resources not satisfiable with simple off-the-shelf computers. Results We propose a novel framework for scalable network-based learning of multi-species protein functions based on both a local implementation of existing algorithms and the adoption of innovative technologies: we solve “locally” the AFP problem, by designing “vertex-centric” implementations of network-based algorithms, but we do not give up thinking “globally” by exploiting the overall topology of the network. This is made possible by the adoption of secondary memory-based technologies that allow the efficient use of the large memory available on disks, thus overcoming the main memory limitations of modern off-the-shelf computers. This approach has been applied to the analysis of a large multi-species network including more than 300 species of bacteria and to a network with more than 200,000 proteins belonging to 13 Eukaryotic species. To our knowledge this is the first work where secondary-memory based network analysis has been applied to multi-species function prediction using biological networks with hundreds of thousands of proteins. Conclusions The combination of these algorithmic and technological approaches makes feasible the analysis of large multi-species networks using ordinary computers with limited speed and primary memory, and in perspective could enable the analysis of huge networks (e.g. the whole proteomes available in SwissProt), using well-equipped stand-alone machines. PMID:24843788

  8. Event Processing and Variable Part of Sample Period Determining in Combined Systems Using GA

    NASA Astrophysics Data System (ADS)

    Strémy, Maximilián; Závacký, Pavol; Jedlička, Martin

    2011-01-01

This article deals with combined dynamic systems and the use of modern techniques for handling them, focusing particularly on sampling period design, cyclic processing tasks, and the related processing algorithms in combined event management systems using genetic algorithms.

  9. Security in Wireless Sensor Networks Employing MACGSP6

    ERIC Educational Resources Information Center

    Nitipaichit, Yuttasart

    2010-01-01

    Wireless Sensor Networks (WSNs) have unique characteristics which constrain them; including small energy stores, limited computation, and short range communication capability. Most traditional security algorithms use cryptographic primitives such as Public-key cryptography and are not optimized for energy usage. Employing these algorithms for the…

  10. A genotypic method for determining HIV-2 coreceptor usage enables epidemiological studies and clinical decision support.

    PubMed

    Döring, Matthias; Borrego, Pedro; Büch, Joachim; Martins, Andreia; Friedrich, Georg; Camacho, Ricardo Jorge; Eberle, Josef; Kaiser, Rolf; Lengauer, Thomas; Taveira, Nuno; Pfeifer, Nico

    2016-12-20

    CCR5-coreceptor antagonists can be used for treating HIV-2 infected individuals. Before initiating treatment with coreceptor antagonists, viral coreceptor usage should be determined to ensure that the virus can use only the CCR5 coreceptor (R5) and cannot evade the drug by using the CXCR4 coreceptor (X4-capable). However, until now, no online tool for the genotypic identification of HIV-2 coreceptor usage had been available. Furthermore, there is a lack of knowledge on the determinants of HIV-2 coreceptor usage. Therefore, we developed a data-driven web service for the prediction of HIV-2 coreceptor usage from the V3 loop of the HIV-2 glycoprotein and used the tool to identify novel discriminatory features of X4-capable variants. Using 10 runs of tenfold cross validation, we selected a linear support vector machine (SVM) as the model for geno2pheno[coreceptor-hiv2], because it outperformed the other SVMs with an area under the ROC curve (AUC) of 0.95. We found that SVMs were highly accurate in identifying HIV-2 coreceptor usage, attaining sensitivities of 73.5% and specificities of 96% during tenfold nested cross validation. The predictive performance of SVMs was not significantly different (p value 0.37) from an existing rules-based approach. Moreover, geno2pheno[coreceptor-hiv2] achieved a predictive accuracy of 100% and outperformed the existing approach on an independent data set containing nine new isolates with corresponding phenotypic measurements of coreceptor usage. geno2pheno[coreceptor-hiv2] could not only reproduce the established markers of CXCR4-usage, but also revealed novel markers: the substitutions 27K, 15G, and 8S were significantly predictive of CXCR4 usage. Furthermore, SVMs trained on the amino-acid sequences of the V1 and V2 loops were also quite accurate in predicting coreceptor usage (AUCs of 0.84 and 0.65, respectively). In this study, we developed geno2pheno[coreceptor-hiv2], the first online tool for the prediction of HIV-2 coreceptor usage from the V3 loop. Using our method, we identified novel amino-acid markers of X4-capable variants in the V3 loop and found that HIV-2 coreceptor usage is also influenced by the V1/V2 region. The tool can aid clinicians in deciding whether coreceptor antagonists such as maraviroc are a treatment option and enables epidemiological studies investigating HIV-2 coreceptor usage. geno2pheno[coreceptor-hiv2] is freely available at http://coreceptor-hiv2.geno2pheno.org .

  11. Optimizing the Learning Order of Chinese Characters Using a Novel Topological Sort Algorithm

    PubMed Central

    Wang, Jinzhao

    2016-01-01

    We present a novel algorithm for optimizing the order in which Chinese characters are learned, one that incorporates the benefits of learning them in order of usage frequency and in order of their hierarchal structural relationships. We show that our work outperforms previously published orders and algorithms. Our algorithm is applicable to any scheduling task where nodes have intrinsic differences in importance and must be visited in topological order. PMID:27706234
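
    The scheduling task this record describes, visiting nodes in topological order while preferring higher-importance nodes, is illustrated below with a priority-queue topological sort. The component graph and frequency values are toy examples, not the paper's data or exact algorithm.

        import heapq

        def learning_order(components, freq):
            # components: character -> list of component characters it requires.
            indegree = {c: len(parts) for c, parts in components.items()}
            dependents = {c: [] for c in components}
            for c, parts in components.items():
                for p in parts:
                    dependents[p].append(c)
            # Max-heap on usage frequency via negated keys.
            ready = [(-freq[c], c) for c, d in indegree.items() if d == 0]
            heapq.heapify(ready)
            order = []
            while ready:
                _, c = heapq.heappop(ready)
                order.append(c)
                for nxt in dependents[c]:
                    indegree[nxt] -= 1
                    if indegree[nxt] == 0:
                        heapq.heappush(ready, (-freq[nxt], nxt))
            return order

        components = {"人": [], "木": [], "休": ["人", "木"], "林": ["木"]}
        freq = {"人": 0.9, "木": 0.6, "休": 0.3, "林": 0.2}
        print(learning_order(components, freq))   # ['人', '木', '休', '林']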

  12. User-Preference-Driven Model Predictive Control of Residential Building Loads and Battery Storage for Demand Response: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin, Xin; Baker, Kyri A.; Christensen, Dane T.

This paper presents a user-preference-driven home energy management system (HEMS) for demand response (DR) with residential building loads and battery storage. The HEMS is based on a multi-objective model predictive control algorithm, where the objectives include energy cost, thermal comfort, and carbon emission. A multi-criterion decision making method originating from social science is used to quickly determine user preferences based on a brief survey and derive the weights of different objectives used in the optimization process. Besides the residential appliances used in the traditional DR programs, a home battery system is integrated into the HEMS to improve the flexibility and reliability of the DR resources. Simulation studies have been performed on field data from a residential building stock data set. Appliance models and usage patterns were learned from the data to predict the DR resource availability. Results indicate the HEMS was able to provide a significant amount of load reduction with less than 20% prediction error in both heating and cooling cases.

  13. User-Preference-Driven Model Predictive Control of Residential Building Loads and Battery Storage for Demand Response

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin, Xin; Baker, Kyri A; Isley, Steven C

This paper presents a user-preference-driven home energy management system (HEMS) for demand response (DR) with residential building loads and battery storage. The HEMS is based on a multi-objective model predictive control algorithm, where the objectives include energy cost, thermal comfort, and carbon emission. A multi-criterion decision making method originating from social science is used to quickly determine user preferences based on a brief survey and derive the weights of different objectives used in the optimization process. Besides the residential appliances used in the traditional DR programs, a home battery system is integrated into the HEMS to improve the flexibility and reliability of the DR resources. Simulation studies have been performed on field data from a residential building stock data set. Appliance models and usage patterns were learned from the data to predict the DR resource availability. Results indicate the HEMS was able to provide a significant amount of load reduction with less than 20% prediction error in both heating and cooling cases.

  14. Predictability of process resource usage - A measurement-based study on UNIX

    NASA Technical Reports Server (NTRS)

    Devarakonda, Murthy V.; Iyer, Ravishankar K.

    1989-01-01

A probabilistic scheme is developed to predict process resource usage in UNIX. Given the identity of the program being run, the scheme predicts CPU time, file I/O, and memory requirements of a process at the beginning of its life. The scheme uses a state-transition model of the program's resource usage in its past executions for prediction. The states of the model are the resource regions obtained from an off-line cluster analysis of processes run on the system. The proposed method is shown to work on data collected from a VAX 11/780 running 4.3 BSD UNIX. The results show that the predicted values correlate well with the actual values. The correlation coefficient between the predicted and actual values of CPU time is 0.84. Errors in prediction are mostly small. Some 82 percent of errors in CPU time prediction are less than 0.5 standard deviations of process CPU time.

  15. Predictability of process resource usage: A measurement-based study of UNIX

    NASA Technical Reports Server (NTRS)

    Devarakonda, Murthy V.; Iyer, Ravishankar K.

    1987-01-01

A probabilistic scheme is developed to predict process resource usage in UNIX. Given the identity of the program being run, the scheme predicts CPU time, file I/O, and memory requirements of a process at the beginning of its life. The scheme uses a state-transition model of the program's resource usage in its past executions for prediction. The states of the model are the resource regions obtained from an off-line cluster analysis of processes run on the system. The proposed method is shown to work on data collected from a VAX 11/780 running 4.3 BSD UNIX. The results show that the predicted values correlate well with the actual values. The correlation coefficient between the predicted and actual values of CPU time is 0.84. Errors in prediction are mostly small. Some 82% of errors in CPU time prediction are less than 0.5 standard deviations of process CPU time.

  16. Flexible Residential Smart Grid Simulation Framework

    NASA Astrophysics Data System (ADS)

    Xiang, Wang

    Different scheduling and coordination algorithms controlling household appliances' operations can potentially lead to energy consumption reduction and/or load balancing in conjunction with different electricity pricing methods used in smart grid programs. In order to easily implement different algorithms and evaluate their efficiency against other ideas, a flexible simulation framework is desirable in both research and business fields. However, such a platform is currently lacking or underdeveloped. In this thesis, we provide a simulation framework to focus on demand side residential energy consumption coordination in response to different pricing methods. This simulation framework, equipped with an appliance consumption library using realistic values, aims to closely represent the average usage of different types of appliances. The simulation results of traditional usage yield close matching values compared to surveyed real life consumption records. Several sample coordination algorithms, pricing schemes, and communication scenarios are also implemented to illustrate the use of the simulation framework.

  17. Refining Prediction in Treatment-Resistant Depression: Results of Machine Learning Analyses in the TRD III Sample.

    PubMed

    Kautzky, Alexander; Dold, Markus; Bartova, Lucie; Spies, Marie; Vanicek, Thomas; Souery, Daniel; Montgomery, Stuart; Mendlewicz, Julien; Zohar, Joseph; Fabbri, Chiara; Serretti, Alessandro; Lanzenberger, Rupert; Kasper, Siegfried

The study objective was to generate a prediction model for treatment-resistant depression (TRD) using machine learning featuring a large set of 47 clinical and sociodemographic predictors of treatment outcome. A total of 552 patients diagnosed with major depressive disorder (MDD) according to DSM-IV criteria were enrolled between 2011 and 2016. TRD was defined as failure to reach response to antidepressant treatment, characterized by a Montgomery-Asberg Depression Rating Scale (MADRS) score below 22, after at least 2 antidepressant trials of adequate length and dosage were administered. RandomForest (RF) was used for predicting treatment outcome phenotypes in a 10-fold cross-validation. The full model with 47 predictors yielded an accuracy of 75.0%. When the number of predictors was reduced to 15, accuracies between 67.6% and 71.0% were attained for different test sets. The most informative predictors of treatment outcome were baseline MADRS score for the current episode; impairment of family, social, and work life; the timespan between first and last depressive episode; severity; suicidal risk; age; body mass index; and the number of lifetime depressive episodes as well as lifetime duration of hospitalization. With the application of the machine learning algorithm RF, an efficient prediction model with an accuracy of 75.0% for forecasting treatment outcome could be generated, thus surpassing the predictive capabilities of clinical evaluation. We also supply a simplified algorithm of 15 easily collected clinical and sociodemographic predictors that can be obtained within approximately 10 minutes, which reached an accuracy of 70.6%. Thus, we are confident that our model will be validated within other samples to advance an accurate prediction model fit for clinical usage in TRD. © Copyright 2017 Physicians Postgraduate Press, Inc.
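
    The RandomForest workflow named above is a few lines in scikit-learn. The sketch below uses a synthetic stand-in for the 47 predictors and the binary outcome (the patient data are obviously not reproduced), so the reported accuracy is purely illustrative.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(3)
        X = rng.normal(size=(552, 47))          # 552 "patients", 47 fake predictors
        logits = X[:, 0] + 0.5 * X[:, 1] - 0.7 * X[:, 2]
        y = (logits + rng.normal(0, 1, 552) > 0).astype(int)  # fake outcome labels

        rf = RandomForestClassifier(n_estimators=500, random_state=0)
        acc = cross_val_score(rf, X, y, cv=10, scoring="accuracy")  # 10-fold CV
        print(f"mean accuracy: {acc.mean():.3f}")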

  18. Electricity forecasting on the individual household level enhanced based on activity patterns

    PubMed Central

    Gajowniczek, Krzysztof; Ząbkowski, Tomasz

    2017-01-01

    Leveraging smart metering solutions to support energy efficiency on the individual household level poses novel research challenges in monitoring usage and providing accurate load forecasting. Forecasting electricity usage is an especially important component that can provide intelligence to smart meters. In this paper, we propose an enhanced approach for load forecasting at the household level. The impacts of residents’ daily activities and appliance usages on the power consumption of the entire household are incorporated to improve the accuracy of the forecasting model. The contributions of this paper are threefold: (1) we addressed short-term electricity load forecasting for 24 hours ahead, not on the aggregate but on the individual household level, which fits into the Residential Power Load Forecasting (RPLF) methods; (2) for the forecasting, we utilized a household specific dataset of behaviors that influence power consumption, which was derived using segmentation and sequence mining algorithms; and (3) an extensive load forecasting study using different forecasting algorithms enhanced by the household activity patterns was undertaken. PMID:28423039

  19. Intelligent Life-Extending Controls for Aircraft Engines

    NASA Technical Reports Server (NTRS)

    Guo, Ten-Huei; Chen, Philip; Jaw, Link

    2005-01-01

Aircraft engine controllers are designed and operated to provide desired performance and stability margins. The purpose of life-extending control (LEC) is to study the relationship between control action and engine component life usage, and to design an intelligent control algorithm to provide proper trade-offs between performance and engine life usage. The benefit of this approach is that it is expected to maintain safety while minimizing the overall operating costs. With the advances in computer technology, engine operation models, and damage physics, it is necessary to reevaluate the control strategy for overall operating cost considerations. This paper uses the thermo-mechanical fatigue (TMF) of a critical component to demonstrate how an intelligent engine control algorithm can drastically reduce the engine life usage with minimum sacrifice in performance. A Monte Carlo simulation is also performed to evaluate the likely engine damage accumulation under various operating conditions. The simulation results show that an optimized acceleration schedule can provide significant life savings in selected engine components.

  20. Electricity forecasting on the individual household level enhanced based on activity patterns.

    PubMed

    Gajowniczek, Krzysztof; Ząbkowski, Tomasz

    2017-01-01

    Leveraging smart metering solutions to support energy efficiency on the individual household level poses novel research challenges in monitoring usage and providing accurate load forecasting. Forecasting electricity usage is an especially important component that can provide intelligence to smart meters. In this paper, we propose an enhanced approach for load forecasting at the household level. The impacts of residents' daily activities and appliance usages on the power consumption of the entire household are incorporated to improve the accuracy of the forecasting model. The contributions of this paper are threefold: (1) we addressed short-term electricity load forecasting for 24 hours ahead, not on the aggregate but on the individual household level, which fits into the Residential Power Load Forecasting (RPLF) methods; (2) for the forecasting, we utilized a household specific dataset of behaviors that influence power consumption, which was derived using segmentation and sequence mining algorithms; and (3) an extensive load forecasting study using different forecasting algorithms enhanced by the household activity patterns was undertaken.
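
    To make the activity-enhanced forecasting idea from the two records above concrete, here is a hedged sketch: lagged-load features are augmented with a binary activity-pattern flag (standing in for a mined behavior such as a nightly cooking hour) before fitting a regressor for the 24-hours-ahead horizon. The data and the activity flag are synthetic placeholders, not the study's method in detail.

        import numpy as np
        from sklearn.ensemble import GradientBoostingRegressor

        rng = np.random.default_rng(11)
        hours = np.arange(24 * 60)                   # 60 days of hourly data
        load = 0.4 + 0.3 * (hours % 24 == 19) + 0.1 * rng.random(hours.size)
        activity = (hours % 24 == 19).astype(float)  # mined "cooking hour" flag

        # Features: load 24 h ago plus the activity flag for the target hour.
        X = np.column_stack([load[:-24], activity[24:]])
        y = load[24:]

        model = GradientBoostingRegressor().fit(X[:-100], y[:-100])
        pred = model.predict(X[-100:])
        print("test MAE:", np.mean(np.abs(pred - y[-100:])))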

  1. Compact Encoding of Robot-Generated 3D Maps for Efficient Wireless Transmission

    DTIC Science & Technology

    2003-01-01

    Lempel - Ziv -Welch (LZW) and Ziv - Lempel (LZ77) respectively. Image based compression can also be based on dic- tionaries... compression of the data , without actually displaying a 3D model, printing statistical results for comparison of the different algorithms . 1http... compression algorithms , and wavelet algorithms tuned to the specific nature of the raw laser data . For most such applications, the usage of lossless

  2. Defence Technology Strategy for the Demands of the 21st Century

    DTIC Science & Technology

    2006-10-01

    understanding of human capability in the CBM role. Ownership of the intellectual property behind algorithms may be sovereign10, but implementation will...synchronisation schemes. · coding schemes. · modulation techniques. · access schemes. · smart spectrum usage . · low probability of intercept. · implementation...modulation techniques; access schemes; smart spectrum usage ; low probability of intercept Spectrum and bandwidth management · cross layer technologies to

  3. Predicting Social Anxiety Treatment Outcome Based on Therapeutic Email Conversations.

    PubMed

    Hoogendoorn, Mark; Berger, Thomas; Schulz, Ava; Stolz, Timo; Szolovits, Peter

    2017-09-01

    Predicting therapeutic outcome in the mental health domain is of utmost importance to enable therapists to provide the most effective treatment to a patient. Using information from the writings of a patient can potentially be a valuable source of information, especially now that more and more treatments involve computer-based exercises or electronic conversations between patient and therapist. In this paper, we study predictive modeling using writings of patients under treatment for a social anxiety disorder. We extract a wealth of information from the text written by patients including their usage of words, the topics they talk about, the sentiment of the messages, and the style of writing. In addition, we study trends over time with respect to those measures. We then apply machine learning algorithms to generate the predictive models. Based on a dataset of 69 patients, we are able to show that we can predict therapy outcome with an area under the curve of 0.83 halfway through the therapy and with a precision of 0.78 when using the full data (i.e., the entire treatment period). Due to the limited number of participants, it is hard to generalize the results, but they do show great potential in this type of information.

  4. Network traffic anomaly prediction using Artificial Neural Network

    NASA Astrophysics Data System (ADS)

    Ciptaningtyas, Hening Titi; Fatichah, Chastine; Sabila, Altea

    2017-03-01

    With the excessive increase in internet usage, malicious software (malware) has also increased significantly. Malware is software developed by hackers for illegal purposes, such as stealing data and identities, causing computer damage, or denying service to other users[1]. Malware that attacks computers or servers often triggers network traffic anomaly phenomena. Based on Sophos's report[2], Indonesia is the country most at risk of malware attack, and it also shows high network traffic anomaly. This research uses an Artificial Neural Network (ANN) to predict network traffic anomalies based on malware attacks in Indonesia recorded by Id-SIRTII/CC (Indonesia Security Incident Response Team on Internet Infrastructure/Coordination Center). The case study is the most frequent malware attack (SQL injection), which occurred in three consecutive years: 2012, 2013, and 2014[4]. The data series is preprocessed first; then the network traffic anomaly is predicted using an Artificial Neural Network with two weight update algorithms: Gradient Descent and Momentum. The prediction error is calculated using the Mean Squared Error (MSE)[7]. The experimental results show that the MSE for SQL injection is 0.03856, so this approach can be used to predict network traffic anomalies.
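
    A minimal sketch of the momentum idea follows; the one-hidden-layer tanh network, the toy series, and all hyperparameters are assumptions rather than the paper's configuration. The point is the velocity term that reuses a fraction of the previous weight update.

      import numpy as np

      rng = np.random.default_rng(2)
      # Toy anomaly-count series standing in for the Id-SIRTII/CC traffic data.
      y = np.sin(0.2 * np.arange(202)) + 0.05 * rng.standard_normal(202)
      X, target = np.column_stack([y[:-2], y[1:-1]]), y[2:]  # two lagged inputs
      n = len(target)

      W1, b1 = 0.5 * rng.standard_normal((2, 8)), np.zeros(8)
      W2, b2 = 0.5 * rng.standard_normal(8), 0.0
      vW1, vW2 = np.zeros_like(W1), np.zeros_like(W2)
      lr, mu = 0.05, 0.9  # learning rate and momentum coefficient

      for _ in range(2000):
          h = np.tanh(X @ W1 + b1)         # hidden layer
          err = (h @ W2 + b2) - target      # prediction error
          gW2 = h.T @ err / n
          dh = np.outer(err, W2) * (1 - h ** 2) / n
          gW1 = X.T @ dh
          # Momentum update: each step reuses a fraction mu of the previous step.
          vW2 = mu * vW2 - lr * gW2; W2 += vW2
          vW1 = mu * vW1 - lr * gW1; W1 += vW1
          b2 -= lr * err.mean(); b1 -= lr * dh.sum(axis=0)

      print(np.mean(err ** 2))  # MSE after training

    Setting mu to 0 recovers plain gradient descent, which is the other weight-update variant the paper compares against.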

  5. A Bayesian additive model for understanding public transport usage in special events.

    PubMed

    Rodrigues, Filipe; Borysov, Stanislav; Ribeiro, Bernardete; Pereira, Francisco

    2016-12-02

    Public special events, like sports games, concerts and festivals are well known to create disruptions in transportation systems, often catching the operators by surprise. Although these are usually planned well in advance, their impact is difficult to predict, even when organisers and transportation operators coordinate. The problem highly increases when several events happen concurrently. To solve these problems, costly processes, heavily reliant on manual search and personal experience, are usual practice in large cities like Singapore, London or Tokyo. This paper presents a Bayesian additive model with Gaussian process components that combines smart card records from public transport with context information about events that is continuously mined from the Web. We develop an efficient approximate inference algorithm using expectation propagation, which allows us to predict the total number of public transportation trips to the special event areas, thereby contributing to a more adaptive transportation system. Furthermore, for multiple concurrent event scenarios, the proposed algorithm is able to disaggregate gross trip counts into their most likely components related to specific events and routine behavior. Using real data from Singapore, we show that the presented model outperforms the best baseline model by up to 26% in R2 and also has explanatory power for its individual components.
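
    A hedged sketch of the additive idea, using exact Gaussian process inference from scikit-learn in place of the paper's expectation-propagation scheme; the toy trip counts and the kernel choices are assumptions.

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF, ExpSineSquared, WhiteKernel

      rng = np.random.default_rng(3)
      hours = np.arange(24 * 14, dtype=float)[:, None]  # two weeks of hourly counts
      routine = 100 + 40 * np.sin(2 * np.pi * hours[:, 0] / 24)
      event = np.where((hours[:, 0] >= 250) & (hours[:, 0] < 255), 80.0, 0.0)
      trips = routine + event + 5 * rng.standard_normal(len(hours))

      # Additive kernel: a daily-periodic term for routine demand plus a
      # short-length-scale term that can absorb event spikes.
      kernel = ExpSineSquared(periodicity=24.0) + RBF(length_scale=3.0) + WhiteKernel(1.0)
      gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(hours, trips)
      mean, std = gp.predict(hours[-24:], return_std=True)
      print(mean[:5], std[:5])  # next-day predictions with uncertainty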

  6. Katome: de novo DNA assembler implemented in rust

    NASA Astrophysics Data System (ADS)

    Neumann, Łukasz; Nowak, Robert M.; Kuśmirek, Wiktor

    2017-08-01

    Katome is a new de novo sequence assembler written in the Rust programming language, designed with future parallelization of its algorithms and optimization of run time and memory usage in mind. The application uses new algorithms for the correct assembly of repetitive sequences. Performance and quality tests were performed on various data, comparing the new application to the `dnaasm', `ABySS' and `Velvet' genome assemblers. Quality tests indicate that the new assembler creates more contigs than well-established solutions, but the contigs have better quality with regard to mismatches per 100 kbp and indels per 100 kbp. Additionally, benchmarks indicate that the Rust-based implementation outperforms the `dnaasm', `ABySS' and `Velvet' assemblers, written in C++, in terms of assembly time. Lower memory usage in comparison to `dnaasm' is also observed.

  7. Antecedents of Continued Usage Intentions of Web-Based Learning Management System in Tanzania

    ERIC Educational Resources Information Center

    Lwoga, Edda Tandi; Komba, Mercy

    2015-01-01

    Purpose: The purpose of this paper is to examine factors that predict students' continued usage intention of web-based learning management systems (LMS) in Tanzania, with a specific focus on the School of Business of Mzumbe University. Specifically, the study investigated major predictors of actual usage and continued usage intentions of…

  8. Potential application of machine vision technology to saffron (Crocus sativus L.) quality characterization.

    PubMed

    Kiani, Sajad; Minaei, Saeid

    2016-12-01

    Saffron quality characterization is an important issue in the food industry and of interest to consumers. This paper proposes an expert system based on the application of machine vision technology for the characterization of saffron and shows how it can be employed in practice. There is a correlation between saffron color, its geographic location of production, and some chemical attributes, which could be properly used for characterization of saffron quality and freshness. This may be accomplished by employing image processing techniques coupled with multivariate data analysis for quantification of saffron properties. Expert algorithms can be made available for the prediction of saffron characteristics such as color, as well as for product classification. Copyright © 2016. Published by Elsevier Ltd.

  9. Evaluating segmentation error without ground truth.

    PubMed

    Kohlberger, Timo; Singh, Vivek; Alvino, Chris; Bahlmann, Claus; Grady, Leo

    2012-01-01

    The automatic delineation of the boundaries of organs and other anatomical structures is a key component of many medical image processing systems. In this paper we present a generic learning approach based on a novel space of segmentation features, which can be trained to predict the overlap error and Dice coefficient of an arbitrary organ segmentation without knowing the ground truth delineation. We show the regressor to be a much stronger predictor of these error metrics than the responses of probabilistic boosting classifiers trained on the segmentation boundary. The presented approach not only allows us to build reliable confidence measures and fidelity checks, but also to rank several segmentation hypotheses against each other during online usage of the segmentation algorithm in clinical practice.

  10. Drug-target interaction prediction: A Bayesian ranking approach.

    PubMed

    Peska, Ladislav; Buza, Krisztian; Koller, Júlia

    2017-12-01

    In silico prediction of drug-target interactions (DTI) could provide valuable information and speed-up the process of drug repositioning - finding novel usage for existing drugs. In our work, we focus on machine learning algorithms supporting drug-centric repositioning approach, which aims to find novel usage for existing or abandoned drugs. We aim at proposing a per-drug ranking-based method, which reflects the needs of drug-centric repositioning research better than conventional drug-target prediction approaches. We propose Bayesian Ranking Prediction of Drug-Target Interactions (BRDTI). The method is based on Bayesian Personalized Ranking matrix factorization (BPR) which has been shown to be an excellent approach for various preference learning tasks, however, it has not been used for DTI prediction previously. In order to successfully deal with DTI challenges, we extended BPR by proposing: (i) the incorporation of target bias, (ii) a technique to handle new drugs and (iii) content alignment to take structural similarities of drugs and targets into account. Evaluation on five benchmark datasets shows that BRDTI outperforms several state-of-the-art approaches in terms of per-drug nDCG and AUC. BRDTI results w.r.t. nDCG are 0.929, 0.953, 0.948, 0.897 and 0.690 for G-Protein Coupled Receptors (GPCR), Ion Channels (IC), Nuclear Receptors (NR), Enzymes (E) and Kinase (K) datasets respectively. Additionally, BRDTI significantly outperformed other methods (BLM-NII, WNN-GIP, NetLapRLS and CMF) w.r.t. nDCG in 17 out of 20 cases. Furthermore, BRDTI was also shown to be able to predict novel drug-target interactions not contained in the original datasets. The average recall at top-10 predicted targets for each drug was 0.762, 0.560, 1.000 and 0.404 for GPCR, IC, NR, and E datasets respectively. Based on the evaluation, we can conclude that BRDTI is an appropriate choice for researchers looking for an in silico DTI prediction technique to be used in drug-centric repositioning scenarios. BRDTI Software and supplementary materials are available online at www.ksi.mff.cuni.cz/∼peska/BRDTI. Copyright © 2017 Elsevier B.V. All rights reserved.
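
    The core BPR update is easy to sketch. Below is a minimal numpy version with the per-target bias extension; the interaction matrix, dimensions, and learning rates are toy assumptions, and the content-alignment and new-drug extensions are omitted.

      import numpy as np

      rng = np.random.default_rng(4)
      n_drugs, n_targets, k = 50, 30, 8
      # Binary interaction matrix standing in for a DTI benchmark.
      R = (rng.random((n_drugs, n_targets)) < 0.1).astype(int)

      U = 0.1 * rng.standard_normal((n_drugs, k))
      V = 0.1 * rng.standard_normal((n_targets, k))
      b = np.zeros(n_targets)  # per-target bias, as in the BRDTI extension
      lr, reg = 0.05, 0.01

      def score(d, t):
          return b[t] + U[d] @ V[t]

      for _ in range(20000):
          d = rng.integers(n_drugs)
          pos = np.flatnonzero(R[d]); neg = np.flatnonzero(R[d] == 0)
          if len(pos) == 0 or len(neg) == 0:
              continue
          i, j = rng.choice(pos), rng.choice(neg)
          # BPR: push score(d, i) above score(d, j) via the sigmoid of the gap.
          g = 1.0 / (1.0 + np.exp(score(d, i) - score(d, j)))
          ud = U[d].copy()
          U[d] += lr * (g * (V[i] - V[j]) - reg * U[d])
          V[i] += lr * (g * ud - reg * V[i])
          V[j] += lr * (-g * ud - reg * V[j])
          b[i] += lr * (g - reg * b[i]); b[j] += lr * (-g - reg * b[j])

    After training, ranking all targets for a drug by score(d, t) gives the per-drug ordering that the nDCG evaluation measures.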

  11. Cluster analysis of word frequency dynamics

    NASA Astrophysics Data System (ADS)

    Maslennikova, Yu S.; Bochkarev, V. V.; Belashova, I. A.

    2015-01-01

    This paper describes the analysis and modelling of word usage frequency time series. During one of previous studies, an assumption was put forward that all word usage frequencies have uniform dynamics approaching the shape of a Gaussian function. This assumption can be checked using the frequency dictionaries of the Google Books Ngram database. This database includes 5.2 million books published between 1500 and 2008. The corpus contains over 500 billion words in American English, British English, French, German, Spanish, Russian, Hebrew, and Chinese. We clustered time series of word usage frequencies using a Kohonen neural network. The similarity between input vectors was estimated using several algorithms. As a result of the neural network training procedure, more than ten different forms of time series were found. They describe the dynamics of word usage frequencies from birth to death of individual words. Different groups of word forms were found to have different dynamics of word usage frequency variations.
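
    A minimal 1D Kohonen map sketch on toy trajectories illustrates the clustering step; the three trajectory families and all map parameters are assumptions, not the study's setup.

      import numpy as np

      rng = np.random.default_rng(5)
      t = np.linspace(0, 1, 50)
      # Toy word-frequency trajectories: rising, falling, and peaked families.
      rising = t + 0.05 * rng.standard_normal((30, 50))
      falling = (1 - t) + 0.05 * rng.standard_normal((30, 50))
      peaked = np.exp(-(t - 0.5) ** 2 / 0.02) + 0.05 * rng.standard_normal((30, 50))
      data = np.vstack([rising, falling, peaked])

      n_nodes = 10                       # 1D Kohonen map
      W = rng.standard_normal((n_nodes, data.shape[1]))
      for epoch in range(100):
          lr = 0.5 * (1 - epoch / 100)                # decaying learning rate
          sigma = max(1.0, 3.0 * (1 - epoch / 100))   # shrinking neighbourhood
          for x in rng.permutation(data):
              bmu = np.argmin(((W - x) ** 2).sum(axis=1))  # best-matching unit
              dist = np.abs(np.arange(n_nodes) - bmu)
              W += lr * np.exp(-dist ** 2 / (2 * sigma ** 2))[:, None] * (x - W)

      print(np.bincount([np.argmin(((W - x) ** 2).sum(axis=1)) for x in data],
                        minlength=n_nodes))  # cluster occupancy per node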

  12. Knowledge-based prediction of protein backbone conformation using a structural alphabet.

    PubMed

    Vetrivel, Iyanar; Mahajan, Swapnil; Tyagi, Manoj; Hoffmann, Lionel; Sanejouand, Yves-Henri; Srinivasan, Narayanaswamy; de Brevern, Alexandre G; Cadet, Frédéric; Offmann, Bernard

    2017-01-01

    Libraries of structural prototypes that abstract protein local structures are known as structural alphabets and have proven to be very useful in various aspects of protein structure analyses and predictions. One such library, Protein Blocks, is composed of 16 standard 5-residue-long structural prototypes. This form of analyzing proteins involves drafting a structure as a string of Protein Blocks. Predicting the local structure of a protein in terms of protein blocks is the general objective of this work. A new approach, PB-kPRED, is proposed towards this aim. It involves (i) organizing the structural knowledge in the form of a database of pentapeptide fragments extracted from all protein structures in the PDB and (ii) applying a knowledge-based algorithm that does not rely on any secondary structure predictions and/or sequence alignment profiles to scan this database and predict the most probable backbone conformations for the protein local structures; PB-kPRED preferentially uses structural information from homologues, when available. The predictions were evaluated rigorously on 15,544 query proteins representing a non-redundant subset of the PDB filtered at a 30% sequence identity cut-off. We have shown that the kPRED method was able to achieve mean accuracies ranging from 40.8% to 66.3% depending on the availability of homologues. The impact of the different strategies for scanning the database on the prediction was evaluated and is discussed. Our results highlight the usefulness of the method in the context of proteins without any known structural homologues. A scoring function that gives a good estimate of the accuracy of prediction was further developed. This score estimates the accuracy of the algorithm very well (R2 of 0.82). An online version of the tool is provided freely for non-commercial usage at http://www.bo-protscience.fr/kpred/.

  13. Information needs of generalists and specialists using online best-practice algorithms to answer clinical questions.

    PubMed

    Cook, David A; Sorensen, Kristi J; Linderbaum, Jane A; Pencille, Laurie J; Rhodes, Deborah J

    2017-07-01

    To better understand clinician information needs and learning opportunities by exploring the use of best-practice algorithms across different training levels and specialties. We developed interactive online algorithms (care process models [CPMs]) that integrate current guidelines, recent evidence, and local expertise to represent cross-disciplinary best practices for managing clinical problems. We reviewed CPM usage logs from January 2014 to June 2015 and compared usage across specialty and provider type. During the study period, 4009 clinicians (2014 physicians in practice, 1117 resident physicians, and 878 nurse practitioners/physician assistants [NP/PAs]) viewed 140 CPMs a total of 81,764 times. Usage varied from 1 to 809 views per person, and from 9 to 4615 views per CPM. Residents and NP/PAs viewed CPMs more often than practicing physicians. Among 2742 users with known specialties, generalists (N = 1397) used CPMs more often (mean 31.8, median 7 views) than specialists (N = 1345; mean 6.8, median 2; P < .0001). The topics used by specialists largely aligned with topics within their specialties. The top 20% of available CPMs (28/140) collectively accounted for 61% of uses. In all, 2106 clinicians (52%) returned to the same CPM more than once (average 7.8 views per topic; median 4, maximum 195). Generalists revisited topics more often than specialists (mean 8.8 vs 5.1 views per topic; P < .0001). CPM usage varied widely across topics, specialties, and individual clinicians. Frequently viewed and recurrently viewed topics might warrant special attention. Specialists usually view topics within their specialty and may have unique information needs. © The Author 2017. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com

  14. Overview of fast algorithm in 3D dynamic holographic display

    NASA Astrophysics Data System (ADS)

    Liu, Juan; Jia, Jia; Pan, Yijie; Wang, Yongtian

    2013-08-01

    3D dynamic holographic display is one of the most attractive techniques for achieving real 3D vision with full depth cues without any extra devices. However, a huge amount of 3D information and data must be processed and computed in real time to generate the hologram in 3D dynamic holographic display, which is a challenge even for the most advanced computers. Many fast algorithms have been proposed to speed up the calculation and reduce memory usage, such as the look-up table (LUT), compressed look-up table (C-LUT), split look-up table (S-LUT), and novel look-up table (N-LUT) methods based on the point-based approach, and the full analytical and one-step methods based on the polygon-based approach. In this presentation, we overview various fast algorithms based on the point-based and polygon-based methods, focusing on the fast algorithms with low memory usage: the C-LUT, and the one-step polygon-based method derived from the 2D Fourier analysis of the 3D affine transformation. Numerical simulations and optical experiments are presented, and several other algorithms are compared. The results show that the C-LUT algorithm and the one-step polygon-based method are efficient methods for saving calculation time. It is believed that these methods could be used in real-time 3D holographic display in the future.

  15. Partial Storage Optimization and Load Control Strategy of Cloud Data Centers

    PubMed Central

    2015-01-01

    We present a novel approach to solve cloud storage issues and provide a fast load balancing algorithm. Our approach is based on partitioning and concurrent dual-direction download of the files from multiple cloud nodes. Partitions of the files are saved on the cloud rather than the full files, which provides a good optimization of cloud storage usage. Only partial replication is used in this algorithm to ensure the reliability and availability of the data. Our focus is to improve the performance and optimize the storage usage by providing the DaaS on the cloud. This algorithm solves the problem of having to fully replicate large data sets, which uses up a lot of precious space on the cloud nodes. Reducing the space needed will help in reducing the cost of providing such space. Moreover, performance is also increased, since multiple cloud servers will collaborate to provide the data to the cloud clients in a faster manner. PMID:25973444

  16. Partial storage optimization and load control strategy of cloud data centers.

    PubMed

    Al Nuaimi, Klaithem; Mohamed, Nader; Al Nuaimi, Mariam; Al-Jaroodi, Jameela

    2015-01-01

    We present a novel approach to solve cloud storage issues and provide a fast load balancing algorithm. Our approach is based on partitioning and concurrent dual-direction download of the files from multiple cloud nodes. Partitions of the files are saved on the cloud rather than the full files, which provides a good optimization of cloud storage usage. Only partial replication is used in this algorithm to ensure the reliability and availability of the data. Our focus is to improve the performance and optimize the storage usage by providing the DaaS on the cloud. This algorithm solves the problem of having to fully replicate large data sets, which uses up a lot of precious space on the cloud nodes. Reducing the space needed will help in reducing the cost of providing such space. Moreover, performance is also increased, since multiple cloud servers will collaborate to provide the data to the cloud clients in a faster manner.

  17. Genotypic tropism testing by massively parallel sequencing: qualitative and quantitative analysis.

    PubMed

    Däumer, Martin; Kaiser, Rolf; Klein, Rolf; Lengauer, Thomas; Thiele, Bernhard; Thielen, Alexander

    2011-05-13

    Inferring viral tropism from genotype is a fast and inexpensive alternative to phenotypic testing. While highly predictive when performed on clonal samples, the sensitivity of predicting CXCR4-using (X4) variants drops substantially in clinical isolates. This is mainly attributed to minor variants not detected by standard bulk sequencing. Massively parallel sequencing (MPS) detects single clones, thereby being much more sensitive. Using this technology, we wanted to improve genotypic prediction of coreceptor usage. Plasma samples from 55 antiretroviral-treated patients tested for coreceptor usage with the Monogram Trofile Assay were sequenced with standard population-based approaches. Fourteen of these samples were selected for further analysis with MPS. Tropism was predicted from each sequence with geno2pheno[coreceptor]. Prediction based on bulk sequencing yielded 59.1% sensitivity and 90.9% specificity compared to the Trofile assay. With MPS, 7600 reads were generated on average per isolate. Minorities of sequences with high confidence in CXCR4 usage were found in all samples, irrespective of phenotype. When using the default false-positive rate of geno2pheno[coreceptor] (10%) and defining a minority cutoff of 5%, the results were concordant in all but one isolate. The combination of MPS and coreceptor usage prediction results in a fast and accurate alternative to phenotypic assays. The detection of X4 viruses in all isolates suggests that coreceptor usage as well as fitness of minorities is important for therapy outcome. The high sensitivity of this technology, in combination with a quantitative description of the viral population, may allow implementing meaningful cutoffs for predicting response to CCR5 antagonists in the presence of X4 minorities.
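
    The minority-calling logic reduces to a few lines. In this hedged sketch, call_tropism and the per-read false-positive rates are hypothetical; the cutoffs mirror the 10% FPR and 5% minority values quoted above.

      # Call a sample X4-capable when the fraction of reads predicted X4
      # (per-read geno2pheno false-positive rate below 10%) exceeds a 5%
      # minority cutoff. The read FPR values below are made up.
      def call_tropism(read_fprs, fpr_cutoff=0.10, minority_cutoff=0.05):
          x4_fraction = sum(f < fpr_cutoff for f in read_fprs) / len(read_fprs)
          return "X4" if x4_fraction >= minority_cutoff else "R5"

      print(call_tropism([0.02, 0.45, 0.60, 0.55, 0.80, 0.03]))  # -> "X4" (2/6 reads)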

  18. Rank Dynamics of Word Usage at Multiple Scales

    NASA Astrophysics Data System (ADS)

    Morales, José A.; Colman, Ewan; Sánchez, Sergio; Sánchez-Puig, Fernanda; Pineda, Carlos; Iñiguez, Gerardo; Cocho, Germinal; Flores, Jorge; Gershenson, Carlos

    2018-05-01

    The recent dramatic increase in online data availability has allowed researchers to explore human culture with unprecedented detail, such as the growth and diversification of language. In particular, it provides statistical tools to explore whether word use is similar across languages, and if so, whether these generic features appear at different scales of language structure. Here we use the Google Books N-grams dataset to analyze the temporal evolution of word usage in several languages. We apply measures proposed recently to study rank dynamics, such as the diversity of N-grams in a given rank, the probability that an N-gram changes rank between successive time intervals, the rank entropy, and the rank complexity. Using different methods, results show that there are generic properties for different languages at different scales, such as a core of words necessary to minimally understand a language. We also propose a null model to explore the relevance of linguistic structure across multiple scales, concluding that N-gram statistics cannot be reduced to word statistics. We expect our results to be useful in improving text prediction algorithms, as well as in shedding light on the large-scale features of language use, beyond linguistic and cultural differences across human populations.
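
    One of the quoted measures, rank entropy, can be sketched directly; the frequency matrix here is random toy data and rank_entropy is a hypothetical helper following the usual normalised-Shannon-entropy definition.

      import numpy as np

      rng = np.random.default_rng(6)
      # freq[t, w]: toy usage frequency of word w in year t.
      freq = rng.random((100, 500))
      ranks = np.argsort(np.argsort(-freq, axis=1), axis=1)  # rank 0 = most frequent

      def rank_entropy(ranks, k):
          # Shannon entropy of which word occupies rank k across the years,
          # normalised by the maximum attainable value (log of #years).
          occupants = (ranks == k).argmax(axis=1)
          _, counts = np.unique(occupants, return_counts=True)
          p = counts / counts.sum()
          return float(-(p * np.log(p)).sum() / np.log(len(occupants)))

      print([round(rank_entropy(ranks, k), 2) for k in (0, 50, 400)])

    On real corpora the entropy is low at the top ranks (a stable core of words) and rises towards the tail, which is the scale-dependent signature the paper studies.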

  19. Online Artifact Removal for Brain-Computer Interfaces Using Support Vector Machines and Blind Source Separation

    PubMed Central

    Halder, Sebastian; Bensch, Michael; Mellinger, Jürgen; Bogdan, Martin; Kübler, Andrea; Birbaumer, Niels; Rosenstiel, Wolfgang

    2007-01-01

    We propose a combination of blind source separation (BSS) and independent component analysis (ICA) (signal decomposition into artifacts and nonartifacts) with support vector machines (SVMs) (automatic classification) that are designed for online usage. In order to select a suitable BSS/ICA method, three ICA algorithms (JADE, Infomax, and FastICA) and one BSS algorithm (AMUSE) are evaluated to determine their ability to isolate electromyographic (EMG) and electrooculographic (EOG) artifacts into individual components. An implementation of the selected BSS/ICA method with SVMs trained to classify EMG and EOG artifacts, which enables the usage of the method as a filter in measurements with online feedback, is described. This filter is evaluated on three BCI datasets as a proof-of-concept of the method. PMID:18288259

  20. Online artifact removal for brain-computer interfaces using support vector machines and blind source separation.

    PubMed

    Halder, Sebastian; Bensch, Michael; Mellinger, Jürgen; Bogdan, Martin; Kübler, Andrea; Birbaumer, Niels; Rosenstiel, Wolfgang

    2007-01-01

    We propose a combination of blind source separation (BSS) and independent component analysis (ICA) (signal decomposition into artifacts and nonartifacts) with support vector machines (SVMs) (automatic classification) that are designed for online usage. In order to select a suitable BSS/ICA method, three ICA algorithms (JADE, Infomax, and FastICA) and one BSS algorithm (AMUSE) are evaluated to determine their ability to isolate electromyographic (EMG) and electrooculographic (EOG) artifacts into individual components. An implementation of the selected BSS/ICA method with SVMs trained to classify EMG and EOG artifacts, which enables the usage of the method as a filter in measurements with online feedback, is described. This filter is evaluated on three BCI datasets as a proof-of-concept of the method.
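
    A compact sketch of the BSS/ICA-plus-SVM pipeline using scikit-learn's FastICA in place of the evaluated algorithms; the synthetic EEG/EOG mixture, the component features, and the labels (stand-ins for a hand-labelled training set) are all assumptions.

      import numpy as np
      from sklearn.decomposition import FastICA
      from sklearn.svm import SVC

      rng = np.random.default_rng(7)
      n_samples, n_channels = 2000, 8
      t = np.arange(n_samples) / 250.0                      # hypothetical 250 Hz EEG
      brain = np.sin(2 * np.pi * 10 * t)                    # alpha-band activity
      eog = (rng.random(n_samples) < 0.002).astype(float)
      eog = np.convolve(eog, np.hanning(125), mode="same")  # slow blink shapes
      mixing = rng.standard_normal((2, n_channels))
      X = (np.column_stack([brain, eog]) @ mixing
           + 0.1 * rng.standard_normal((n_samples, n_channels)))

      ica = FastICA(n_components=n_channels, random_state=0)
      S = ica.fit_transform(X)  # estimated sources, one column per component

      # Simple per-component features; the labels are hypothetical stand-ins
      # for a hand-labelled training set of artifact vs. non-artifact.
      feats = np.column_stack([S.std(axis=0), np.abs(S).max(axis=0) / S.std(axis=0)])
      labels = (feats[:, 1] > np.median(feats[:, 1])).astype(int)  # 1 = artifact
      clf = SVC().fit(feats, labels)

      S_clean = S.copy()
      S_clean[:, clf.predict(feats) == 1] = 0.0   # zero out artifact components
      X_clean = ica.inverse_transform(S_clean)    # back to channel space

    The zero-and-reconstruct step is what makes the method usable as an online filter: the classifier decides per component, and only the mixing step needs to be re-applied.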

  1. Switching neuronal state: optimal stimuli revealed using a stochastically-seeded gradient algorithm.

    PubMed

    Chang, Joshua; Paydarfar, David

    2014-12-01

    Inducing a switch in neuronal state using energy optimal stimuli is relevant to a variety of problems in neuroscience. Analytical techniques from optimal control theory can identify such stimuli; however, solutions to the optimization problem using indirect variational approaches can be elusive in models that describe neuronal behavior. Here we develop and apply a direct gradient-based optimization algorithm to find stimulus waveforms that elicit a change in neuronal state while minimizing energy usage. We analyze standard models of neuronal behavior, the Hodgkin-Huxley and FitzHugh-Nagumo models, to show that the gradient-based algorithm: (1) enables automated exploration of a wide solution space, using stochastically generated initial waveforms that converge to multiple locally optimal solutions; and (2) finds optimal stimulus waveforms that achieve a physiological outcome condition, without a priori knowledge of the optimal terminal condition of all state variables. Analysis of biological systems using stochastically-seeded gradient methods can reveal salient dynamical mechanisms underlying the optimal control of system behavior. The gradient algorithm may also have practical applications in future work, for example, finding energy optimal waveforms for therapeutic neural stimulation that minimizes power usage and diminishes off-target effects and damage to neighboring tissue.

  2. Algorithm 782: codes for rank-revealing QR factorizations of dense matrices.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bischof, C. H.; Quintana-Orti, G.; Mathematics and Computer Science

    1998-06-01

    This article describes a suite of codes, as well as associated testing and timing drivers, for computing rank-revealing QR (RRQR) factorizations of dense matrices. The main contribution is an efficient block algorithm for approximating an RRQR factorization, employing a windowed version of the commonly used Golub pivoting strategy and improved versions of the RRQR algorithms for triangular matrices originally suggested by Chandrasekaran and Ipsen and by Pan and Tang, respectively. We highlight usage and features of these codes.
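
    The rank-revealing step can be illustrated with SciPy's column-pivoted QR, which implements plain Golub pivoting; the windowed and block refinements of the published codes are not reproduced here.

      import numpy as np
      from scipy.linalg import qr

      rng = np.random.default_rng(8)
      # Build a 100x80 matrix of numerical rank 15.
      A = rng.standard_normal((100, 15)) @ rng.standard_normal((15, 80))
      A += 1e-10 * rng.standard_normal((100, 80))

      # Column-pivoted QR: the permutation pushes the dominant columns to
      # the front, so the diagonal of R decays and reveals the rank.
      Q, R, piv = qr(A, pivoting=True)
      diag = np.abs(np.diag(R))
      rank = int(np.sum(diag > diag[0] * 1e-8))  # threshold relative to R[0,0]
      print(rank)  # -> 15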

  3. Premarital Contraceptives Usage among Male and Female Adolescents.

    ERIC Educational Resources Information Center

    Hornick, Joesph P.; And Others

    1979-01-01

    Variables important in predicting female contraception usage were found to be those which involved dyadic commitment, conditions of love, self-esteem, and father's occupation (social class). The best predictors of male contraception usage involved experience in dating and internalization of role models via mother's and father's permissiveness.…

  4. Modeling Preservice Teachers' TPACK Competencies Based on ICT Usage

    ERIC Educational Resources Information Center

    Yurdakul, I. Kabakci; Coklar, A. N.

    2014-01-01

    The purpose of this study was to build a model that predicts the relationships between the Technological Pedagogical Content Knowledge (TPACK) competencies and information and communication technology (ICT) usages. Research data were collected from 3105 Turkish preservice teachers. The TPACK-Deep Scale, ICT usage phase survey and the ICT usage…

  5. Does the Usage of an Online EFL Workbook Conform to Benford's Law?

    ERIC Educational Resources Information Center

    Olszewski, Mikolaj; Lodzikowski, Kacper; Zwolinski, Jan; Warnakulasooriya, Rasil; Black, Adam

    2016-01-01

    The aim of this paper is to explore if English as a Foreign Language (EFL) learners' usage of an online workbook follows Benford's law, which predicts the frequency of leading digits in numbers describing natural phenomena. According to Benford (1938), one can predict the frequency distribution of leading digits in numbers describing natural…

  6. A generalized Condat's algorithm of 1D total variation regularization

    NASA Astrophysics Data System (ADS)

    Makovetskii, Artyom; Voronin, Sergei; Kober, Vitaly

    2017-09-01

    A common way of solving the denoising problem is to utilize total variation (TV) regularization. Many efficient numerical algorithms have been developed for solving the TV regularization problem. Condat described a fast direct algorithm that computes the processed 1D signal. There also exists a direct linear-time algorithm for 1D TV denoising referred to as the taut string algorithm. Condat's algorithm is based on a dual formulation of the 1D TV regularization problem. In this paper, we propose a variant of Condat's algorithm based on the direct 1D TV regularization problem. Using Condat's algorithm together with the taut string approach leads to a clear geometric description of the extremal function. Computer simulation results are provided to illustrate the performance of the proposed algorithm for the restoration of degraded signals.
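
    For orientation, the model being minimized can be exercised with an off-the-shelf solver; the sketch below uses Chambolle's projection algorithm from scikit-image as a stand-in, whereas Condat's direct method and the taut string algorithm return the exact 1D minimizer in linear time.

      import numpy as np
      from skimage.restoration import denoise_tv_chambolle

      rng = np.random.default_rng(9)
      x = np.repeat([0.0, 1.0, 0.2, 0.8], 50)        # piecewise-constant signal
      y = x + 0.1 * rng.standard_normal(x.size)       # noisy observation

      # TV denoising favours piecewise-constant reconstructions, which is
      # why direct 1D solvers have such a clean geometric description.
      x_hat = denoise_tv_chambolle(y, weight=0.3)
      print(float(np.abs(x_hat - x).mean()))          # mean absolute error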

  7. An integrative machine learning strategy for improved prediction of essential genes in Escherichia coli metabolism using flux-coupled features.

    PubMed

    Nandi, Sutanu; Subramanian, Abhishek; Sarkar, Ram Rup

    2017-07-25

    Prediction of essential genes helps to identify a minimal set of genes that are absolutely required for the appropriate functioning and survival of a cell. The available machine learning techniques for essential gene prediction have inherent problems, like imbalanced provision of training datasets, biased choice of the best model for a given balanced dataset, choice of a complex machine learning algorithm, and data-based automated selection of biologically relevant features for classification. Here, we propose a simple support vector machine-based learning strategy for the prediction of essential genes in Escherichia coli K-12 MG1655 metabolism that integrates a non-conventional combination of an appropriate sample balanced training set, a unique organism-specific genotype, phenotype attributes that characterize essential genes, and optimal parameters of the learning algorithm to generate the best machine learning model (the model with the highest accuracy among all the models trained for different sample training sets). For the first time, we also introduce flux-coupled metabolic subnetwork-based features for enhancing the classification performance. Our strategy proves to be superior as compared to previous SVM-based strategies in obtaining a biologically relevant classification of genes with high sensitivity and specificity. This methodology was also trained with datasets of other recent supervised classification techniques for essential gene classification and tested using reported test datasets. The testing accuracy was always high as compared to the known techniques, proving that our method outperforms known methods. Observations from our study indicate that essential genes are conserved among homologous bacterial species, demonstrate high codon usage bias, GC content and gene expression, and predominantly possess a tendency to form physiological flux modules in metabolism.
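
    A minimal sketch of the balanced-SVM step with scikit-learn; the four features are hypothetical stand-ins for codon-usage-bias, GC-content, expression, and flux-coupling attributes, and class weighting substitutes for the paper's explicit sample balancing.

      import numpy as np
      from sklearn.model_selection import GridSearchCV, train_test_split
      from sklearn.svm import SVC

      rng = np.random.default_rng(10)
      n_genes = 1000
      # Hypothetical per-gene features (codon usage bias, GC content,
      # expression level, flux-coupling indicator).
      X = rng.standard_normal((n_genes, 4))
      y = (X[:, 0] + 0.5 * X[:, 3] + 0.5 * rng.standard_normal(n_genes) > 1.2).astype(int)

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
      # class_weight="balanced" compensates for essential genes being the
      # minority class; the grid search picks the best (C, gamma) model.
      grid = GridSearchCV(SVC(class_weight="balanced"),
                          {"C": [0.1, 1, 10], "gamma": ["scale", 0.1]},
                          scoring="balanced_accuracy", cv=5).fit(X_tr, y_tr)
      print(grid.best_params_, grid.score(X_te, y_te))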

  8. Software Tools to Support Research on Airport Departure Planning

    NASA Technical Reports Server (NTRS)

    Carr, Francis; Evans, Antony; Feron, Eric; Clarke, John-Paul

    2003-01-01

    A simple, portable and useful collection of software tools has been developed for the analysis of airport surface traffic. The tools are based on a flexible and robust traffic-flow model, and include calibration, validation and simulation functionality for this model. Several different interfaces have been developed to help promote usage of these tools, including a portable Matlab(TM) implementation of the basic algorithms; a web-based interface which provides online access to automated analyses of airport traffic based on a database of real-world operations data which covers over 250 U.S. airports over a 5-year period; and an interactive simulation-based tool currently in use as part of a college-level educational module. More advanced applications for airport departure traffic include taxi-time prediction and evaluation of "windowing" congestion control.

  9. Predicting Forearm Physical Exposures During Computer Work Using Self-Reports, Software-Recorded Computer Usage Patterns, and Anthropometric and Workstation Measurements.

    PubMed

    Huysmans, Maaike A; Eijckelhof, Belinda H W; Garza, Jennifer L Bruno; Coenen, Pieter; Blatter, Birgitte M; Johnson, Peter W; van Dieën, Jaap H; van der Beek, Allard J; Dennerlein, Jack T

    2017-12-15

    Alternative techniques to assess physical exposures, such as prediction models, could facilitate more efficient epidemiological assessments in future large cohort studies examining physical exposures in relation to work-related musculoskeletal symptoms. The aim of this study was to evaluate two types of models that predict arm-wrist-hand physical exposures (i.e. muscle activity, wrist postures and kinematics, and keyboard and mouse forces) during computer use, which only differed with respect to the candidate predicting variables: (i) a full set of predicting variables, including self-reported factors, software-recorded computer usage patterns, and worksite measurements of anthropometrics and workstation set-up (full models); and (ii) a practical set of predicting variables, only including the self-reported factors and software-recorded computer usage patterns, that are relatively easy to assess (practical models). Prediction models were built using data from a field study among 117 office workers who were symptom-free at the time of measurement. Arm-wrist-hand physical exposures were measured for approximately two hours while workers performed their own computer work. Each worker's anthropometry and workstation set-up were measured by an experimenter, computer usage patterns were recorded using software, and self-reported factors (including individual factors, job characteristics, computer work behaviours, psychosocial factors, workstation set-up characteristics, and leisure-time activities) were collected by an online questionnaire. We determined the predictive quality of the models in terms of R2 and root mean squared (RMS) values and exposure classification agreement to low-, medium-, and high-exposure categories (for the practical models only). The full models had R2 values that ranged from 0.16 to 0.80, whereas for the practical models values ranged from 0.05 to 0.43. Interquartile ranges were not that different for the two models, indicating that only for some physical exposures the full models performed better. Relative RMS errors ranged between 5% and 19% for the full models, and between 10% and 19% for the practical models. When the predicted physical exposures were classified into low, medium, and high, classification agreement ranged from 26% to 71%. The full prediction models, based on self-reported factors, software-recorded computer usage patterns, and additional measurements of anthropometrics and workstation set-up, show better predictive quality than the practical models based on self-reported factors and recorded computer usage patterns only. However, predictive quality varied considerably across different arm-wrist-hand exposure parameters. Future exploration of the relation between predicted physical exposure and symptoms is therefore only recommended for physical exposures that can be reasonably well predicted. © The Author 2017. Published by Oxford University Press on behalf of the British Occupational Hygiene Society.

  10. Predicting Baseline for Analysis of Electricity Pricing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, T.; Lee, D.; Choi, J.

    2016-05-03

    To understand the impact of a new pricing structure on residential electricity demand, we need a baseline model that captures every factor other than the new price. The standard baseline is a randomized control group; however, a good control group is hard to design. This motivates us to develop data-driven approaches. We explored many techniques and designed a strategy, named LTAP, that could predict the hourly usage years ahead. The key challenge in this process is that the daily cycle of electricity demand peaks a few hours after the temperature reaches its peak. Existing methods rely on lagged variables of recent past usage to enforce this daily cycle. These methods have trouble making predictions years ahead. LTAP avoids this trouble by assuming that the daily usage profile is determined by temperature and other factors. In a comparison against a well-designed control group, LTAP is found to produce accurate predictions.
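
    A hedged sketch of the LTAP assumption: regress hourly usage on temperature and hour-of-day terms only, with no lagged usage, so the model can predict arbitrarily far ahead given a temperature forecast. The data generator and feature choices are illustrative, not the report's model.

      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.preprocessing import OneHotEncoder

      rng = np.random.default_rng(11)
      n = 24 * 365
      hour = np.arange(n) % 24
      temp = 20 + 10 * np.sin(2 * np.pi * (hour - 15) / 24) + rng.standard_normal(n)
      usage = (1.0 + 0.05 * np.maximum(temp - 22, 0) + 0.3 * (hour >= 18)
               + 0.1 * rng.standard_normal(n))

      # Hour-of-day dummies plus a cooling term; no lagged usage anywhere.
      H = OneHotEncoder().fit_transform(hour[:, None]).toarray()
      X = np.column_stack([temp, np.maximum(temp - 22, 0), H])
      model = LinearRegression().fit(X[:-24], usage[:-24])
      print(model.predict(X[-24:]))  # next-day hourly baseline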

  11. Interface Generation and Compositional Verification in JavaPathfinder

    NASA Technical Reports Server (NTRS)

    Giannakopoulou, Dimitra; Pasareanu, Corina

    2009-01-01

    We present a novel algorithm for interface generation of software components. Given a component, our algorithm uses learning techniques to compute a permissive interface representing legal usage of the component. Unlike our previous work, this algorithm does not require knowledge about the component's environment. Furthermore, in contrast to other related approaches, our algorithm computes permissive interfaces even in the presence of non-determinism in the component. Our algorithm is implemented in the JavaPathfinder model checking framework for UML statechart components. We have also added support for automated assume-guarantee style compositional verification in JavaPathfinder, using component interfaces. We report on the application of the presented approach to the generation of interfaces for flight software components.

  12. Serum peptide expression and treatment responses in patients with advanced non-small-cell lung cancer

    PubMed Central

    An, Juan; Tang, Chuan-Hao; Wang, Na; Liu, Yi; Lv, Jin; Xu, Bin; Li, Xiao-Yan; Guo, Wan-Feng; Gao, Hong-Jun; He, Kun; Liu, Xiao-Qing

    2018-01-01

    Epidermal growth factor receptor (EGFR) mutation is an important predictor of the response to personalized treatments in patients with advanced non-small-cell lung cancer (NSCLC). However, its usage is limited due to the difficulty of obtaining tissue specimens. A novel prediction system using matrix-assisted laser desorption ionization-time of flight mass spectrometry (MALDI-TOF MS) has been reported to be a promising tool in European countries for identifying patients who are likely to benefit from EGFR tyrosine kinase inhibitor (TKI) treatment. In the present study, MALDI-TOF MS was used on pretreatment serum samples of patients with advanced non-small-cell lung cancer to discriminate the spectra between disease control and disease progression groups in one cohort of Chinese patients. The candidate features for classification were subsequently validated in a blinded fashion in another set of patients. The correlation between plasma EGFR mutation status and the intensities of representative spectra for classification was evaluated. A total of 103 patients treated with EGFR-TKIs were included. It was determined that 8 polypeptide peaks were significantly different between the disease control and disease progression groups. A total of 6 polypeptides were established in the classification algorithm. The sensitivity of the algorithm in predicting treatment responses was 76.2% (16/21) and the specificity was 81.8% (18/22). The accuracy rate of the algorithm was 79.1% (34/43). A total of 3 polypeptides were significantly correlated with EGFR mutations (P=0.04, P=0.03 and P=0.04, respectively). The present study confirmed that MALDI-TOF MS analysis can be used to predict responses to EGFR-TKI treatment in an Asian population, where the EGFR mutation status differs from that of the European population. Furthermore, the expression intensities of the three polypeptides in the classification model were associated with EGFR mutation. PMID:29844828

  13. A Method of Predicting Queuing at Library Online PCs

    ERIC Educational Resources Information Center

    Beranek, Lea G.

    2006-01-01

    On-campus networked personal computer (PC) usage at La Trobe University Library was surveyed during September 2005. The survey's objectives were to confirm peak usage times, to measure some of the relevant parameters of online PC usage, and to determine the effect that 24 new networked PCs had on service quality. The survey found that clients…

  14. Are Students Really Connected? Predicting College Adjustment from Social Network Usage

    ERIC Educational Resources Information Center

    Raacke, John; Bonds-Raacke, Jennifer

    2015-01-01

    The rapid growth in popularity of social networking sites has spurred research exploring the impact of usage in a variety of areas. The current study furthered this line of research by examining the relationships between social network usage and adjustment to college in the academic, social, personal-emotional and university affiliation domains.…

  15. Predicting Negative Emotions Based on Mobile Phone Usage Patterns: An Exploratory Study

    PubMed Central

    Yang, Pei-Ching; Chang, Chia-Chi; Chiang, Jung-Hsien; Chen, Ying-Yeh

    2016-01-01

    Background: Prompt recognition and intervention of negative emotions is crucial for patients with depression. Mobile phones and mobile apps are suitable technologies that can be used to recognize negative emotions and intervene if necessary. Objective: Mobile phone usage patterns can be associated with concurrent emotional states. The objective of this study is to adapt machine-learning methods to analyze such patterns for the prediction of negative emotion. Methods: We developed an Android-based app to capture emotional states and mobile phone usage patterns, which included call logs and use of apps. Visual analog scales (VASs) were used to report negative emotions in dimensions of depression, anxiety, and stress. In the system-training phase, participants were requested to tag their emotions for 14 consecutive days. Five feature-selection methods were used to determine individual usage patterns and four machine-learning methods were tested. Finally, rank product scoring was used to select the best combination to construct the prediction model. In the system evaluation phase, participants were then requested to verify the predicted negative emotions for at least 5 days. Results: Out of 40 enrolled healthy participants, we analyzed data from 28 participants, including 30% (9/28) women, with a mean (SD) age of 29.2 (5.1) years, who provided sufficient emotion tags. The combination of 2-hour time slots, greedy forward selection, and the Naïve Bayes method was chosen for the prediction model. We further validated the personalized models in 18 participants who performed at least 5 days of model evaluation. Overall, the predictive accuracy for negative emotions was 86.17%. Conclusion: We developed a system capable of predicting negative emotions based on mobile phone usage patterns. This system has potential for ecological momentary intervention (EMI) for depressive disorders by automatically recognizing negative emotions and providing people with preventive treatments before they escalate to clinical depression. PMID:27511748
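
    The winning combination, greedy forward selection wrapped around Naive Bayes, maps to a short scikit-learn pipeline; the usage features and labels below are synthetic stand-ins for the 2-hour-slot features in the study.

      import numpy as np
      from sklearn.feature_selection import SequentialFeatureSelector
      from sklearn.model_selection import cross_val_score
      from sklearn.naive_bayes import GaussianNB
      from sklearn.pipeline import make_pipeline

      rng = np.random.default_rng(12)
      n = 300
      # Hypothetical 2-hour-slot usage features (calls, app launches, ...).
      X = rng.standard_normal((n, 12))
      y = (X[:, 3] - X[:, 7] + 0.8 * rng.standard_normal(n) > 0).astype(int)  # 1 = negative emotion

      # Greedy forward selection with Naive Bayes as the wrapped estimator.
      selector = SequentialFeatureSelector(GaussianNB(), n_features_to_select=4,
                                           direction="forward")
      pipe = make_pipeline(selector, GaussianNB())
      print(cross_val_score(pipe, X, y, cv=5).mean())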

  16. Adaptive Resource Utilization Prediction System for Infrastructure as a Service Cloud.

    PubMed

    Zia Ullah, Qazi; Hassan, Shahzad; Khan, Gul Muhammad

    2017-01-01

    Infrastructure as a Service (IaaS) cloud provides resources as a service from a pool of compute, network, and storage resources. Cloud providers can manage their resource usage by knowing future usage demand from current and past usage patterns of resources. Resource usage prediction is of great importance for dynamic scaling of cloud resources to achieve efficiency in terms of cost and energy consumption while keeping quality of service. The purpose of this paper is to present a real-time resource usage prediction system. The system takes real-time utilization of resources and feeds the utilization values into several buffers based on the type of resources and the time span size. The buffers are read by an R-language-based statistical system, and their data are checked to determine whether they follow a Gaussian distribution. If they do, an Autoregressive Integrated Moving Average (ARIMA) model is applied; otherwise an Autoregressive Neural Network (AR-NN) is applied. In the ARIMA process, a model is selected based on the minimum Akaike Information Criterion (AIC) value. Similarly, in the AR-NN process, the network with the lowest Network Information Criterion (NIC) value is selected. We have evaluated our system with real traces of CPU utilization of an IaaS cloud of one hundred and twenty servers.

  17. Adaptive Resource Utilization Prediction System for Infrastructure as a Service Cloud

    PubMed Central

    Hassan, Shahzad; Khan, Gul Muhammad

    2017-01-01

    Infrastructure as a Service (IaaS) cloud provides resources as a service from a pool of compute, network, and storage resources. Cloud providers can manage their resource usage by knowing future usage demand from current and past usage patterns of resources. Resource usage prediction is of great importance for dynamic scaling of cloud resources to achieve efficiency in terms of cost and energy consumption while keeping quality of service. The purpose of this paper is to present a real-time resource usage prediction system. The system takes real-time utilization of resources and feeds the utilization values into several buffers based on the type of resources and the time span size. The buffers are read by an R-language-based statistical system, and their data are checked to determine whether they follow a Gaussian distribution. If they do, an Autoregressive Integrated Moving Average (ARIMA) model is applied; otherwise an Autoregressive Neural Network (AR-NN) is applied. In the ARIMA process, a model is selected based on the minimum Akaike Information Criterion (AIC) value. Similarly, in the AR-NN process, the network with the lowest Network Information Criterion (NIC) value is selected. We have evaluated our system with real traces of CPU utilization of an IaaS cloud of one hundred and twenty servers. PMID:28811819
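
    The distribution-based routing can be sketched with SciPy and statsmodels; the Shapiro-Wilk test stands in for the paper's (unspecified) Gaussianity check, the CPU trace is synthetic, and the AR-NN branch is omitted.

      import numpy as np
      from scipy.stats import shapiro
      from statsmodels.tsa.arima.model import ARIMA

      rng = np.random.default_rng(13)
      cpu = 50 + 0.5 * np.cumsum(rng.standard_normal(300))  # toy CPU-utilisation trace

      # Route by distribution: ARIMA when the data look Gaussian, otherwise
      # an autoregressive neural network (not shown) would be trained.
      if shapiro(np.diff(cpu)).pvalue > 0.05:
          best_order, best_fit = min(
              (((p, 1, q), ARIMA(cpu, order=(p, 1, q)).fit())
               for p in range(3) for q in range(3)),
              key=lambda t: t[1].aic)           # minimum-AIC model selection
          print(best_order, best_fit.forecast(steps=5))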

  18. Implementation of software-based sensor linearization algorithms on low-cost microcontrollers.

    PubMed

    Erdem, Hamit

    2010-10-01

    Nonlinear sensors and microcontrollers are used in many embedded system designs. As the input-output characteristic of most sensors is nonlinear in nature, obtaining data from a nonlinear sensor by using an integer microcontroller has always been a design challenge. This paper discusses the implementation of six software-based sensor linearization algorithms for low-cost microcontrollers. The comparative study of the linearization algorithms is performed by using a nonlinear optical distance-measuring sensor. The performance of the algorithms is examined with respect to memory space usage, linearization accuracy and algorithm execution time. The implementation and comparison results can be used for selection of a linearization algorithm based on the sensor transfer function, expected linearization accuracy and microcontroller capacity. Copyright © 2010 ISA. Published by Elsevier Ltd. All rights reserved.
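
    A sketch of one classic method, a breakpoint look-up table with linear interpolation, written in Python for readability (a real MCU port would use fixed-point arithmetic); the calibration table for an optical distance sensor is hypothetical.

      # Hypothetical calibration table: raw ADC counts vs. distance in mm.
      ADC = [80, 120, 200, 320, 480, 700]   # raw readings (monotonic)
      MM = [800, 600, 400, 250, 150, 100]   # calibrated distances

      def linearize(adc):
          # Clamp to the table range, find the bracketing segment, then
          # interpolate linearly inside it.
          adc = max(ADC[0], min(adc, ADC[-1]))
          for i in range(len(ADC) - 1):
              if adc <= ADC[i + 1]:
                  frac = (adc - ADC[i]) / (ADC[i + 1] - ADC[i])
                  return MM[i] + frac * (MM[i + 1] - MM[i])

      print(linearize(260))  # -> 325.0 mm, halfway along the 200-320 segment

    The trade-off the paper measures is visible even here: a denser table costs memory but improves accuracy, while the interpolation itself costs execution time.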

  19. Cost-effectiveness of novel algorithms for rapid diagnosis of tuberculosis in HIV-infected individuals in Uganda.

    PubMed

    Shah, Maunank; Dowdy, David; Joloba, Moses; Ssengooba, Willy; Manabe, Yukari C; Ellner, Jerrold; Dorman, Susan E

    2013-11-28

    Xpert MTB/RIF ('Xpert') and urinary lateral-flow lipoarabinomannan (LF-LAM) assays offer rapid tuberculosis (TB) diagnosis. This study evaluated the cost-effectiveness of novel diagnostic algorithms utilizing combinations of Xpert and LF-LAM for the detection of active TB among people living with HIV. Cost-effectiveness analysis using data from a comparative study of LF-LAM and Xpert, with a target population of HIV-infected individuals with signs/symptoms of TB in Uganda. A decision-analysis model compared multiple strategies for rapid TB diagnosis:sputum smear-microscopy; sputum Xpert; smear-microscopy combined with LF-LAM; and Xpert combined with LF-LAM. Primary outcomes were the costs and DALY's averted for each algorithm. Cost-effectiveness was represented using incremental cost-effectiveness ratios (ICER). Compared with an algorithm of Xpert testing alone, the combination of Xpert with LF-LAM was considered highly cost-effective (ICER $57/DALY-averted) at a willingness to pay threshold of Ugandan GDP per capita. Addition of urine LF-LAM testing to smear-microscopy was a less effective strategy than Xpert replacement of smear-microscopy, but was less costly and also considered highly cost-effective (ICER $33 per DALY-averted) compared with continued usage of smear-microscopy alone. Cost-effectiveness of the Xpert plus LF-LAM algorithm was most influenced by HIV/ART costs and life-expectancy of patients after TB treatment. The addition of urinary LF-LAM to TB diagnostic algorithms for HIV-infected individuals is highly cost-effective compared with usage of either sputum smear-microscopy or Xpert alone.

  20. Development and Evaluation of a Mobile Personalized Blood Glucose Prediction System for Patients With Gestational Diabetes Mellitus.

    PubMed

    Pustozerov, Evgenii; Popova, Polina; Tkachuk, Aleksandra; Bolotko, Yana; Yuldashev, Zafar; Grineva, Elena

    2018-01-09

    Personalized blood glucose (BG) prediction for diabetes patients is an important goal that is pursued by many researchers worldwide. Despite many proposals, only a few projects are dedicated to the development of complete recommender system infrastructures that incorporate BG prediction algorithms for diabetes patients. The development and implementation of such a system aided by mobile technology is of particular interest to patients with gestational diabetes mellitus (GDM), especially considering the significant importance of quickly achieving adequate BG control for these patients in a short period (ie, during pregnancy) and a typically higher acceptance rate for mobile health (mHealth) solutions for short- to midterm usage. This study was conducted with the objective of developing infrastructure comprising data processing algorithms, BG prediction models, and an appropriate mobile app for patients' electronic record management to guide BG prediction-based personalized recommendations for patients with GDM. A mobile app for electronic diary management was developed along with data exchange and continuous BG signal processing software. Both components were coupled to obtain the necessary data for use in the personalized BG prediction system. Necessary data on meals, BG measurements, and other events were collected via the implemented mobile app and continuous glucose monitoring (CGM) system processing software. These data were used to tune and evaluate the BG prediction model, which included an algorithm for dynamic coefficients tuning. In the clinical study, 62 participants (GDM: n=49; control: n=13) took part in a 1-week monitoring trial during which they used the mobile app to track their meals and self-measurements of BG and CGM system for continuous BG monitoring. The data on 909 food intakes and corresponding postprandial BG curves as well as the set of patients' characteristics (eg, glycated hemoglobin, body mass index [BMI], age, and lifestyle parameters) were selected as inputs for the BG prediction models. The prediction results by the models for BG levels 1 hour after food intake were root mean square error=0.87 mmol/L, mean absolute error=0.69 mmol/L, and mean absolute percentage error=12.8%, which correspond to an adequate prediction accuracy for BG control decisions. The mobile app for the collection and processing of relevant data, appropriate software for CGM system signals processing, and BG prediction models were developed for a recommender system. The developed system may help improve BG control in patients with GDM; this will be the subject of evaluation in a subsequent study. ©Evgenii Pustozerov, Polina Popova, Aleksandra Tkachuk, Yana Bolotko, Zafar Yuldashev, Elena Grineva. Originally published in JMIR Mhealth and Uhealth (http://mhealth.jmir.org), 09.01.2018.

  1. A parameter estimation subroutine package

    NASA Technical Reports Server (NTRS)

    Bierman, G. J.; Nead, M. W.

    1978-01-01

    Linear least squares estimation and regression analyses continue to play a major role in orbit determination and related areas. In this report we document a library of FORTRAN subroutines that have been developed to facilitate analyses of a variety of estimation problems. Our purpose is to present an easy-to-use, multi-purpose set of algorithms that are reasonably efficient and use a minimal amount of computer storage. Subroutine inputs, outputs, usage and listings are given, along with examples of how these routines can be used. The following outline indicates the scope of this report: Section (1), introduction with reference to background material; Section (2), examples and applications; Section (3), subroutine directory summary; Section (4), subroutine directory user description with input, output, and usage explained; and Section (5), subroutine FORTRAN listings. The routines are compact and efficient and are far superior to the normal equation and Kalman filter data processing algorithms that are often used for least squares analyses.
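
    The report's preference for orthogonal-transformation methods over normal equations can be illustrated in a few lines; the matrix below is a contrived ill-conditioned example, not one of the report's test problems.

      import numpy as np

      rng = np.random.default_rng(14)
      A = rng.standard_normal((100, 4))
      A[:, 3] = A[:, 0] + 1e-6 * rng.standard_normal(100)  # nearly dependent column
      x_true = np.array([1.0, -2.0, 0.5, 3.0])
      b = A @ x_true + 0.01 * rng.standard_normal(100)

      # Orthogonal (QR/SVD-based) solution: numerically stable.
      x_qr, *_ = np.linalg.lstsq(A, b, rcond=None)

      # Normal equations square the condition number of A and can lose
      # roughly half the available digits on problems like this one.
      x_ne = np.linalg.solve(A.T @ A, A.T @ b)
      print(np.abs(x_qr - x_ne).max())  # discrepancy grows with ill-conditioning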

  2. Protecting core networks with dual-homing: A study on enhanced network availability, resource efficiency, and energy-savings

    NASA Astrophysics Data System (ADS)

    Abeywickrama, Sandu; Furdek, Marija; Monti, Paolo; Wosinska, Lena; Wong, Elaine

    2016-12-01

    Core network survivability affects the reliability performance of telecommunication networks and remains one of the most important network design considerations. This paper critically examines the benefits arising from utilizing dual-homing in the optical access networks to provide resource-efficient protection against link and node failures in the optical core segment. Four novel, heuristic-based RWA algorithms that provide dedicated path protection in networks with dual-homing are proposed and studied. These algorithms protect against different failure scenarios (i.e. single link or node failures) and are implemented with different optimization objectives (i.e., minimization of wavelength usage and path length). Results obtained through simulations and comparison with baseline architectures indicate that exploiting dual-homed architecture in the access segment can bring significant improvements in terms of core network resource usage, connection availability, and power consumption.

  3. Churn prediction of mobile and online casual games using play log data.

    PubMed

    Kim, Seungwook; Choi, Daeyoung; Lee, Eunjung; Rhee, Wonjong

    2017-01-01

    Internet-connected devices, especially mobile devices such as smartphones, have become widely accessible in the past decade. Interaction with such devices has evolved into frequent and short-duration usage, and this phenomenon has resulted in a pervasive popularity of casual games in the game sector. On the other hand, development of casual games has become easier than ever as a result of the advancement of development tools. With the resulting fierce competition, now both acquisition and retention of users are the prime concerns in the field. In this study, we focus on churn prediction of mobile and online casual games. While churn prediction and analysis can provide important insights and action cues on retention, its application using play log data has been primitive or very limited in the casual game area. Most of the existing methods cannot be applied to casual games because casual game players tend to churn very quickly and they do not pay periodic subscription fees. Therefore, we focus on the new players and formally define churn using observation period (OP) and churn prediction period (CP). Using the definition, we develop a standard churn analysis process for casual games. We cover essential topics such as pre-processing of raw data, feature engineering including feature analysis, churn prediction modeling using traditional machine learning algorithms (logistic regression, gradient boosting, and random forests) and two deep learning algorithms (CNN and LSTM), and sensitivity analysis for OP and CP. Play log data of three different casual games are considered by analyzing a total of 193,443 unique player records and 10,874,958 play log records. While the analysis results provide useful insights, the overall results indicate that a small number of well-chosen features used as performance metrics might be sufficient for making important action decisions and that OP and CP should be properly chosen depending on the analysis goal.
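
    The OP/CP churn definition above is easy to operationalize; the following sketch labels a new player as churned if they were active during the observation period but produced no play logs in the subsequent churn prediction period (column names and period lengths are illustrative assumptions, not the paper's settings):

        # A sketch of the OP/CP churn definition: a player churns if they
        # have play logs in the observation period (OP) but none in the
        # following churn prediction period (CP).
        import pandas as pd

        def label_churn(logs: pd.DataFrame, op_days: int = 5, cp_days: int = 10) -> pd.Series:
            first = logs.groupby("player_id")["timestamp"].min()
            def churned(pid: str) -> bool:
                t0 = first[pid]                          # player's first log
                ts = logs.loc[logs["player_id"] == pid, "timestamp"]
                op_end = t0 + pd.Timedelta(days=op_days)
                cp_end = op_end + pd.Timedelta(days=cp_days)
                return not ts.between(op_end, cp_end).any()  # no activity in CP
            return pd.Series({pid: churned(pid) for pid in first.index}, name="churn")

        logs = pd.DataFrame({
            "player_id": ["a", "a", "b", "b"],
            "timestamp": pd.to_datetime(
                ["2017-01-01", "2017-01-02", "2017-01-01", "2017-01-09"]),
        })
        print(label_churn(logs))  # a -> True (churned), b -> False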

  4. Churn prediction of mobile and online casual games using play log data

    PubMed Central

    Kim, Seungwook; Choi, Daeyoung; Lee, Eunjung

    2017-01-01

    Internet-connected devices, especially mobile devices such as smartphones, have become widely accessible in the past decade. Interaction with such devices has evolved into frequent and short-duration usage, and this phenomenon has resulted in a pervasive popularity of casual games in the game sector. On the other hand, development of casual games has become easier than ever as a result of the advancement of development tools. With the resulting fierce competition, now both acquisition and retention of users are the prime concerns in the field. In this study, we focus on churn prediction of mobile and online casual games. While churn prediction and analysis can provide important insights and action cues on retention, its application using play log data has been primitive or very limited in the casual game area. Most of the existing methods cannot be applied to casual games because casual game players tend to churn very quickly and they do not pay periodic subscription fees. Therefore, we focus on the new players and formally define churn using observation period (OP) and churn prediction period (CP). Using the definition, we develop a standard churn analysis process for casual games. We cover essential topics such as pre-processing of raw data, feature engineering including feature analysis, churn prediction modeling using traditional machine learning algorithms (logistic regression, gradient boosting, and random forests) and two deep learning algorithms (CNN and LSTM), and sensitivity analysis for OP and CP. Play log data of three different casual games are considered by analyzing a total of 193,443 unique player records and 10,874,958 play log records. While the analysis results provide useful insights, the overall results indicate that a small number of well-chosen features used as performance metrics might be sufficient for making important action decisions and that OP and CP should be properly chosen depending on the analysis goal. PMID:28678880

  5. GeneBee-net: Internet-based server for analyzing biopolymers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brodsky, L.I.; Ivanov, V.V.; Nikolaev, V.K.

    This work describes a network server for searching databanks of biopolymer structures and performing other biocomputing procedures; it is available via direct Internet connection. Basic server procedures are dedicated to homology (similarity) search of sequence and 3D structure of proteins. The homologies found could be used to build multiple alignments, predict protein and RNA secondary structure, and construct phylogenetic trees. In addition to traditional methods of sequence similarity search, the authors propose "non-matrix" (correlational) search. An analogous approach is used to identify regions of similar tertiary structure of proteins. Algorithm concepts and usage examples are presented for new methods. Service logic is based upon interaction of a client program and server procedures. The client program allows the compilation of queries and the processing of results of an analysis.

  6. Minimal spanning trees at the percolation threshold: A numerical calculation

    NASA Astrophysics Data System (ADS)

    Sweeney, Sean M.; Middleton, A. Alan

    2013-09-01

    The fractal dimension of minimal spanning trees on percolation clusters is estimated for dimensions d up to d=5. A robust analysis technique is developed for correlated data, as seen in such trees. This should be a robust method suitable for analyzing a wide array of randomly generated fractal structures. The trees analyzed using these techniques are built using a combination of Prim's and Kruskal's algorithms for finding minimal spanning trees. This combination reduces memory usage and allows for simulation of larger systems than would otherwise be possible. The path length fractal dimension ds of MSTs on critical percolation clusters is found to be compatible with the predictions of the perturbation expansion developed by T. S. Jackson and N. Read [Phys. Rev. E 81, 021131 (2010)].
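
    The paper's memory-saving hybrid of Prim's and Kruskal's algorithms is not reproduced here, but as a reference point, a minimal sketch of Kruskal's algorithm with a union-find structure, one of the two ingredients named above:

        # Kruskal's MST algorithm with union-find; edges are (weight, u, v).
        def kruskal_mst(n_nodes, edges):
            parent = list(range(n_nodes))
            def find(x):
                while parent[x] != x:
                    parent[x] = parent[parent[x]]  # path compression
                    x = parent[x]
                return x
            mst = []
            for w, u, v in sorted(edges):   # consider edges by increasing weight
                ru, rv = find(u), find(v)
                if ru != rv:                # edge joins two components: keep it
                    parent[ru] = rv
                    mst.append((w, u, v))
            return mst

        edges = [(0.3, 0, 1), (0.9, 1, 2), (0.2, 0, 2), (0.7, 2, 3)]
        print(kruskal_mst(4, edges))  # [(0.2, 0, 2), (0.3, 0, 1), (0.7, 2, 3)]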

  7. Bit selection using field drilling data and mathematical investigation

    NASA Astrophysics Data System (ADS)

    Momeni, M. S.; Ridha, S.; Hosseini, S. J.; Meyghani, B.; Emamian, S. S.

    2018-03-01

    A drilling process cannot be completed without a drill bit, so bit selection is considered an important task in the drilling optimization process and in planning and designing a well, largely because the bit accounts for a considerable share of the total drilling cost. To perform this task, a back-propagation ANN model was developed, trained on drilling bit records from several offset wells. In this project, two ANN models are developed: one to predict the IADC bit code and one to predict the rate of penetration (ROP). Stage 1 finds the IADC bit code using all the given field data; the output is the targeted IADC bit code. Stage 2 finds the predicted ROP values using the IADC bit code obtained in Stage 1. In Stage 3, the predicted ROP values are fed back into the dataset to obtain the predicted IADC bit code. At the end, there are two models that give the predicted ROP values and the predicted IADC bit code values.

  8. Projected effects of intermittent changes in withdrawal of water from the Arikaree Aquifer near Wheatland, southeastern Wyoming

    USGS Publications Warehouse

    Hoxie, Dwight T.

    1979-01-01

    Effects on streamflows and ground-water levels attributable to a proposed intermittent change in use and sites of withdrawal of 3,146 acre-feet of water from the Arikaree aquifer in central Platte County, WY, are assessed with a previously developed ground-water flow model. This water has been permitted for agricultural use by the State of Wyoming, and under the proposal would supplement, when needed, existing industrial surface- and ground-water supplies for the Laramie River Station of the Missouri Basin Power Project. Under a scenario wherein the supplemental industrial usage occurs in every 10th year commencing in 1980, the model predicts a cumulative streamflow-depletion rate in the Laramie and North Laramie Rivers of 7.7 cubic feet per second in the year 2020 compared to a rate of 6.9 cubic feet per second that is predicted if the intermittent industrial usage does not occur. Areas in which drawdowns relative to the simulated 1973 head configuration exceed 5, 10, 25, and 50 feet are predicted to be 107, 78, 38, and 2 square miles, respectively, in 2020 under the intermittent-usage scenario compared to corresponding areas of 104, 76, 36, and 2 square miles that are predicted if the intermittent industrial usage does not occur. (USGS).

  9. Mutations in gp41 are correlated with coreceptor tropism but do not improve prediction methods substantially.

    PubMed

    Thielen, Alexander; Lengauer, Thomas; Swenson, Luke C; Dong, Winnie W Y; McGovern, Rachel A; Lewis, Marilyn; James, Ian; Heera, Jayvant; Valdez, Hernan; Harrigan, P Richard

    2011-01-01

    The main determinants of HIV-1 coreceptor usage are located in the V3-loop of gp120, although mutations in V2 and gp41 are also known. Incorporation of V2 is known to improve prediction algorithms; however, this has not been confirmed for gp41 mutations. Samples with V3 and gp41 genotypes and Trofile assay (Monogram Biosciences, South San Francisco, CA, USA) results were taken from the HOMER cohort (n=444) and from patients screened for the MOTIVATE studies (n=1,916; 859 with maraviroc outcome data). Correlations of mutations with tropism were assessed using Fisher's exact test and prediction models trained using support vector machines. Models were validated by cross-validation, by testing models from one dataset on the other, and by analysing virological outcome. Several mutations within gp41 were highly significant for CXCR4 usage; most strikingly an insertion occurring in 7.7% of HOMER-R5 and 46.3% of HOMER-X4 samples (MOTIVATE 5.7% and 25.2%, respectively). Models trained on gp41 sequence alone achieved relatively high areas under the receiver-operating characteristic curve (AUCs; HOMER 0.713 and MOTIVATE 0.736) that were almost as good as V3 models (0.773 and 0.884, respectively). However, combining the two regions improved predictions only marginally (0.813 and 0.902, respectively). Similar results were found when models were trained on HOMER and validated on MOTIVATE or vice versa. The difference in median log viral load decrease at week 24 between patients with R5 and X4 virus was 1.65 (HOMER 2.45 and MOTIVATE 0.79) for V3 models, 1.59 for gp41-models (2.42 and 0.83, respectively) and 1.58 for the combined predictor (2.44 and 0.86, respectively). Several mutations within gp41 showed strong correlation with tropism in two independent datasets. However, incorporating gp41 mutations into prediction models is not mandatory because they do not improve substantially on models trained on V3 sequences alone.
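
    As an illustration of the modeling setup described above (a support vector machine trained on sequence-derived features, evaluated by AUC), a sketch with purely synthetic stand-in data; the feature encoding and labels are assumptions, not the study's data:

        # Sketch: SVM on sequence-derived features, evaluated by AUC.
        import numpy as np
        from sklearn.svm import SVC
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        X = rng.integers(0, 2, size=(200, 40))  # e.g., one-hot encoded V3/gp41 positions
        y = rng.integers(0, 2, size=200)        # 1 = X4-capable, 0 = R5

        clf = SVC(kernel="linear")
        auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
        print("mean AUC:", auc.mean())          # random data, so AUC ~ 0.5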

  10. A Feasibility Study of Life-Extending Controls for Aircraft Turbine Engines Using a Generic Air Force Model (Preprint)

    DTIC Science & Technology

    2006-12-01

    intelligent control algorithm embedded in the FADEC. This paper evaluates the LEC, based on critical components research, to demonstrate how an ... control action, engine component life usage, and designing an intelligent control algorithm embedded in the FADEC ... simulation code for each simulator. One is typically configured to operate as a Full-Authority Digital Electronic Controller (FADEC)

  11. A Method for Correcting Broken Hyphenations in Noisy English Text

    DTIC Science & Technology

    2012-04-01

    words, such as a frequency list. An algorithm that would make use of word validation, taking into account the various usages of hyphens in English, is ... commas, and question marks from the surrounding words. The British National Corpus (BNC) (2) frequency list was used to perform the validation ... rather than a separate spell-checking program. This was primarily because implementation of the algorithm using a frequency list was quite trivial

  12. Participatory visualization with Wordle.

    PubMed

    Viégas, Fernanda B; Wattenberg, Martin; Feinberg, Jonathan

    2009-01-01

    We discuss the design and usage of "Wordle," a web-based tool for visualizing text. Wordle creates tag-cloud-like displays that give careful attention to typography, color, and composition. We describe the algorithms used to balance various aesthetic criteria and create the distinctive Wordle layouts. We then present the results of a study of Wordle usage, based both on spontaneous behaviour observed in the wild, and on a large-scale survey of Wordle users. The results suggest that Wordles have become a kind of medium of expression, and that a "participatory culture" has arisen around them.

  13. MISR Data Product Specifications

    Atmospheric Science Data Center

    2016-11-25

    ... and usage of metadata. Improvements to MISR algorithmic software occasionally result in changes to file formats. While these changes ... (DPS). DPS Revision: Rev. S. Software Version: 5.0.9. Date: September 20, 2010, updated April ...

  14. Why don’t you use Evolutionary Algorithms in Big Data?

    NASA Astrophysics Data System (ADS)

    Stanovov, Vladimir; Brester, Christina; Kolehmainen, Mikko; Semenkina, Olga

    2017-02-01

    In this paper we raise the question of using evolutionary algorithms in the area of Big Data processing. We show that evolutionary algorithms provide evident advantages due to their high scalability and flexibility, and their ability to solve global optimization problems and to optimize several criteria simultaneously in feature selection, instance selection, and other data reduction problems. In particular, we consider the usage of evolutionary algorithms with all kinds of machine learning tools, such as neural networks and fuzzy systems. All our examples show that Evolutionary Machine Learning is becoming more and more important in data analysis, and we expect further development of this field, especially with respect to Big Data.
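
    As a concrete illustration of the feature selection use case mentioned above, a minimal genetic algorithm sketch with bitmask chromosomes; the fitness function is a stand-in for a real classifier's cross-validated score:

        # Minimal GA for feature selection: chromosomes are bitmasks over
        # features; fitness rewards covering a (hypothetical) useful set
        # while penalizing mask size.
        import random

        N_FEATURES, POP, GENS = 20, 30, 50
        TARGET = {1, 4, 7, 13}  # hypothetical "useful" features

        def fitness(mask):
            chosen = {i for i, b in enumerate(mask) if b}
            return len(chosen & TARGET) - 0.05 * len(chosen)

        def evolve():
            pop = [[random.randint(0, 1) for _ in range(N_FEATURES)] for _ in range(POP)]
            for _ in range(GENS):
                pop.sort(key=fitness, reverse=True)
                parents = pop[: POP // 2]           # truncation selection
                children = []
                while len(children) < POP - len(parents):
                    a, b = random.sample(parents, 2)
                    cut = random.randrange(1, N_FEATURES)  # one-point crossover
                    child = a[:cut] + b[cut:]
                    i = random.randrange(N_FEATURES)       # point mutation
                    child[i] ^= 1
                    children.append(child)
                pop = parents + children
            return max(pop, key=fitness)

        best = evolve()
        print([i for i, b in enumerate(best) if b])  # likely close to {1, 4, 7, 13}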

  15. Prime Numbers Comparison using Sieve of Eratosthenes and Sieve of Sundaram Algorithm

    NASA Astrophysics Data System (ADS)

    Abdullah, D.; Rahim, R.; Apdilah, D.; Efendi, S.; Tulus, T.; Suwilo, S.

    2018-03-01

    Prime numbers appeal to researchers because of their complexity, and many algorithms, ranging from simple to computationally complex, can be used to generate them. The Sieve of Eratosthenes and the Sieve of Sundaram are two algorithms that can generate prime numbers from sequential or randomly generated numbers. The testing in this study aims to find out which algorithm is better suited to large primes in terms of time complexity. The tests were assisted by an application written in Java with code optimization and maximum memory usage, so that the testing could run simultaneously and the results obtained would be objective.
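
    The two sieves are simple to state; a sketch of both with a crude timing comparison follows (the study used a Java application, so this Python version only illustrates the algorithms, not the reported measurements):

        # Sketches of the two sieves compared in the study.
        import time

        def eratosthenes(n):
            is_prime = [True] * (n + 1)
            is_prime[0:2] = [False, False]
            for p in range(2, int(n ** 0.5) + 1):
                if is_prime[p]:
                    for m in range(p * p, n + 1, p):
                        is_prime[m] = False
            return [i for i, b in enumerate(is_prime) if b]

        def sundaram(n):
            # Marks i + j + 2*i*j; unmarked k yield the odd primes 2*k + 1.
            k = (n - 1) // 2
            marked = [False] * (k + 1)
            for i in range(1, k + 1):
                j = i
                while i + j + 2 * i * j <= k:
                    marked[i + j + 2 * i * j] = True
                    j += 1
            return [2] + [2 * m + 1 for m in range(1, k + 1) if not marked[m]]

        for sieve in (eratosthenes, sundaram):
            t0 = time.perf_counter()
            primes = sieve(10 ** 6)
            print(sieve.__name__, len(primes), f"{time.perf_counter() - t0:.3f}s")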

  16. Document localization algorithms based on feature points and straight lines

    NASA Astrophysics Data System (ADS)

    Skoryukina, Natalya; Shemiakina, Julia; Arlazarov, Vladimir L.; Faradjev, Igor

    2018-04-01

    An important part of a planar rectangular object analysis system is localization: the estimation of the projective transform from the template image of an object to its photograph. The system also includes such subsystems as the selection and recognition of text fields, the usage of contexts, etc. In this paper, three localization algorithms are described. All algorithms use feature points, and two of them also analyze near-horizontal and near-vertical lines on the photograph. The algorithms and their combinations are tested on a dataset of real document photographs. A method of localization quality estimation is also proposed that allows the localization subsystem to be configured independently of the quality of the other subsystems.
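
    A generic version of the localization step described above (estimating the projective transform from a template to a photograph via feature points) can be sketched with OpenCV; this is a standard ORB-plus-RANSAC pipeline under assumed file names, not the authors' algorithms:

        # Estimate the template-to-photo homography from matched feature
        # points. File names are placeholders.
        import cv2
        import numpy as np

        template = cv2.imread("template.png", cv2.IMREAD_GRAYSCALE)
        photo = cv2.imread("photo.jpg", cv2.IMREAD_GRAYSCALE)

        orb = cv2.ORB_create(1000)
        kp1, des1 = orb.detectAndCompute(template, None)
        kp2, des2 = orb.detectAndCompute(photo, None)

        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

        src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)  # robust fit
        print("projective transform:\n", H)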

  17. A novel hybrid genetic algorithm for optimal design of IPM machines for electric vehicle

    NASA Astrophysics Data System (ADS)

    Wang, Aimeng; Guo, Jiayu

    2017-12-01

    A novel hybrid genetic algorithm (HGA) is proposed to optimize the rotor structure of an interior permanent magnet (IPM) machine used in electric vehicle (EV) applications. The finite element (FE) simulation results of the HGA design are compared with those of the genetic algorithm (GA) design and of the design before optimization. It is shown that the performance of the IPM synchronous machine (IPMSM) is effectively improved by employing the GA and, especially, the HGA. Moreover, higher flux-weakening capability and lower magnet usage are also obtained. The validity of the HGA method in IPMSM design optimization is thereby verified.

  18. Adolescents’ Attitudes toward Anti-marijuana Ads, Usage Intentions, and Actual Marijuana Usage

    PubMed Central

    Alvaro, Eusebio M.; Crano, William D.; Siegel, Jason T.; Hohman, Zachary; Johnson, Ian; Nakawaki, Brandon

    2015-01-01

    The association of adolescents’ appraisals of the anti-marijuana television ads used in the National Youth Anti-drug Media Campaign with future marijuana use was investigated. The 12- to 18-year-old respondents (N = 2993) were first classified as users, resolute nonusers, or vulnerable nonusers (Crano, Siegel, Alvaro, Lac, & Hemovich, 2008). Usage status and the covariates of gender, age, and attitudes toward marijuana were used to predict attitudes toward the ads (Aad) in the first phase of a multi-level linear analysis. All covariates were significantly associated with Aad, as was usage status: resolute nonusers evaluated the ads significantly more positively than vulnerable nonusers and users (all p < .001), who did not differ. In the second phase, the covariates along with Aad and respondents’ usage status predicted intentions and actual usage one year after initial measurement. The lagged analysis disclosed negative associations between Aad and usage intentions, and between Aad and actual marijuana use (both p < .05); however, this association held only for users (p < .01), not vulnerable or resolute nonusers. Users reporting more positive attitudes towards the ads were less likely to report intention to use marijuana and to continue marijuana use at 1-year follow-up. These findings may inform designers of persuasion-based prevention campaigns, guiding pre-implementation efforts in the design of ads that targeted groups find appealing and thus, influential. PMID:23528197

  19. Adolescents' attitudes toward antimarijuana ads, usage intentions, and actual marijuana usage.

    PubMed

    Alvaro, Eusebio M; Crano, William D; Siegel, Jason T; Hohman, Zachary; Johnson, Ian; Nakawaki, Brandon

    2013-12-01

    The association of adolescents' appraisals of the antimarijuana TV ads used in the National Youth Antidrug Media Campaign with future marijuana use was investigated. The 12- to 18-year-old respondents (N = 2,993) were first classified as users, resolute nonusers, or vulnerable nonusers (Crano, Siegel, Alvaro, Lac, & Hemovich, 2008). Usage status and the covariates of gender, age, and attitudes toward marijuana were used to predict attitudes toward the ads (Aad) in the first phase of a multilevel linear analysis. All covariates were significantly associated with Aad, as was usage status: Resolute nonusers evaluated the ads significantly more positively than vulnerable nonusers and users (all ps < .001), who did not differ. In the second phase, the covariates along with Aad and respondents' usage status predicted intentions and actual usage 1 year after initial measurement. The lagged analysis disclosed negative associations between Aad and usage intentions and between Aad and actual marijuana use (both ps < .05); however, this association held only for users (p < .01), not vulnerable or resolute nonusers. Users who reported more positive attitudes toward the ads were less likely to report intention to use marijuana and to continue marijuana use at 1-year follow-up. These findings may inform designers of persuasion-based prevention campaigns, guiding preimplementation efforts in the design of ads that targeted groups find appealing and thus, influential. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Katti, Amogh; Di Fatta, Giuseppe; Naughton III, Thomas J

    Future extreme-scale high-performance computing systems will be required to work under frequent component failures. The MPI Forum's User Level Failure Mitigation proposal has introduced an operation, MPI_Comm_shrink, to synchronize the alive processes on the list of failed processes, so that applications can continue to execute even in the presence of failures by adopting algorithm-based fault tolerance techniques. This MPI_Comm_shrink operation requires a fault tolerant failure detection and consensus algorithm. This paper presents and compares two novel failure detection and consensus algorithms. The proposed algorithms are based on Gossip protocols and are inherently fault-tolerant and scalable. The proposed algorithms were implemented and tested using the Extreme-scale Simulator. The results show that in both algorithms the number of Gossip cycles to achieve global consensus scales logarithmically with system size. The second algorithm also shows better scalability in terms of memory and network bandwidth usage and a perfect synchronization in achieving global consensus.

  1. User perspectives on the usability of a regional health information exchange

    PubMed Central

    Ho, Yun-Xian; Cala, Cather Marie; Blakemore, Dana; Chen, Qingxia; Frisse, Mark E; Johnson, Kevin B

    2011-01-01

    Objective We assessed the usability of a health information exchange (HIE) in a densely populated metropolitan region. This grant-funded HIE had been deployed rapidly to address the imminent needs of the patient population and the need to draw wider participation from regional entities. Design We conducted a cross-sectional survey of individuals given access to the HIE at participating organizations and examined some of the usability and usage factors related to the technology acceptance model. Measurements We probed user perceptions using the Questionnaire for User Interaction Satisfaction, an author-generated Trust scale, and user characteristic questions (eg, age, weekly system usage time). Results Overall, users viewed the system favorably (ratings for all usability items were greater than neutral; one-sample Wilcoxon test, p<0.0014, Bonferroni-corrected for 35 tests). System usage was regressed on usability, trust, and demographic and user characteristic factors. Three usability factors were positively predictive of system usage: overall reactions (p<0.01), learning (p<0.05), and system functionality (p<0.01). Although trust is an important component in collaborative relationships, we did not find that user trust of other participating healthcare entities was significantly predictive of usage. An analysis of respondents' comments revealed ways to improve the HIE. Conclusion We used a rapid deployment model to develop an HIE and found that perceptions of system usability were positive. We also found that system usage was predicted well by some aspects of usability. Results from this study suggest that a rapid development approach may serve as a viable model for developing usable HIEs serving communities with limited resources. PMID:21622933

  2. On randomized algorithms for numerical solution of applied Fredholm integral equations of the second kind

    NASA Astrophysics Data System (ADS)

    Voytishek, Anton V.; Shipilov, Nikolay M.

    2017-11-01

    In this paper, the systematization of numerical (computer-implemented) randomized functional algorithms for approximating the solution of a Fredholm integral equation of the second kind is carried out. Three types of such algorithms are distinguished: the projection, the mesh, and the projection-mesh methods. The possibilities of using these algorithms to solve practically important problems are investigated in detail. The disadvantages of the mesh algorithms, related to the necessity of calculating the values of the kernels of integral equations at fixed points, are identified. In practice, these kernels have integrable singularities, and calculation of their values is impossible. Thus, for applied problems involving the Fredholm integral equation of the second kind, it is expedient to use not the mesh but the projection and projection-mesh randomized algorithms.
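
    For reference, the equation class in question in standard notation (a sketch, not taken from the abstract): the unknown function appears both on its own and under the integral sign, which is why mesh methods need kernel values at fixed nodes and fail for singular kernels.

        % Fredholm integral equation of the second kind (standard notation):
        % the unknown \varphi appears outside and inside the integral.
        \varphi(x) = f(x) + \lambda \int_a^b K(x, y)\, \varphi(y)\, \mathrm{d}y,
        \qquad x \in [a, b].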

  3. Objective Quality Assessment for Color-to-Gray Image Conversion.

    PubMed

    Ma, Kede; Zhao, Tiesong; Zeng, Kai; Wang, Zhou

    2015-12-01

    Color-to-gray (C2G) image conversion is the process of transforming a color image into a grayscale one. Despite its wide usage in real-world applications, little work has been dedicated to comparing the performance of C2G conversion algorithms. Subjective evaluation is reliable but is also inconvenient and time consuming. Here, we make one of the first attempts to develop an objective quality model that automatically predicts the perceived quality of C2G converted images. Inspired by the philosophy of the structural similarity index, we propose a C2G structural similarity (C2G-SSIM) index, which evaluates the luminance, contrast, and structure similarities between the reference color image and the C2G converted image. The three components are then combined depending on image type to yield an overall quality measure. Experimental results show that the proposed C2G-SSIM index has close agreement with subjective rankings and significantly outperforms existing objective quality metrics for C2G conversion. To explore the potentials of C2G-SSIM, we further demonstrate its use in two applications: 1) automatic parameter tuning for C2G conversion algorithms and 2) adaptive fusion of C2G converted images.

  4. A Particle Swarm Optimization-Based Approach with Local Search for Predicting Protein Folding.

    PubMed

    Yang, Cheng-Hong; Lin, Yu-Shiun; Chuang, Li-Yeh; Chang, Hsueh-Wei

    2017-10-01

    The hydrophobic-polar (HP) model is commonly used for predicting protein folding structures and hydrophobic interactions. This study developed a particle swarm optimization (PSO)-based algorithm combined with local search algorithms; specifically, the high exploration PSO (HEPSO) algorithm (which can execute global search processes) was combined with three local search algorithms (hill-climbing algorithm, greedy algorithm, and Tabu table), yielding the proposed HE-L-PSO algorithm. By using 20 known protein structures, we evaluated the performance of the HE-L-PSO algorithm in predicting protein folding in the HP model. The proposed HE-L-PSO algorithm exhibited favorable performance in predicting both short and long amino acid sequences with high reproducibility and stability, compared with seven reported algorithms. The HE-L-PSO algorithm yielded optimal solutions for all predicted protein folding structures. All HE-L-PSO-predicted protein folding structures possessed a hydrophobic core that is similar to normal protein folding.

  5. Generic, Type-Safe and Object Oriented Computer Algebra Software

    NASA Astrophysics Data System (ADS)

    Kredel, Heinz; Jolly, Raphael

    Advances in computer science, in particular object oriented programming, and software engineering have had little practical impact on computer algebra systems in the last 30 years. The software design of existing systems is still dominated by ad-hoc memory management, weakly typed algorithm libraries and proprietary domain specific interactive expression interpreters. We discuss a modular approach to computer algebra software: usage of state-of-the-art memory management and run-time systems (e.g., the JVM); usage of strongly typed, generic, object-oriented programming languages (e.g., Java); and usage of general-purpose, dynamic interactive expression interpreters (e.g., Python). To illustrate the workability of this approach, we have implemented and studied computer algebra systems in Java and Scala. In this paper we report on the current state of this work by presenting new examples.

  6. The evolution of social and semantic networks in epistemic communities

    NASA Astrophysics Data System (ADS)

    Margolin, Drew Berkley

    This study describes and tests a model of scientific inquiry as an evolving, organizational phenomenon. Arguments are derived from organizational ecology and evolutionary theory. The empirical subject of study is an epistemic community of scientists publishing on a research topic in physics: the string theoretic concept of "D-branes." The study uses evolutionary theory as a means of predicting change in the way members of the community choose concepts to communicate acceptable knowledge claims. It is argued that the pursuit of new knowledge is risky, because the reliability of a novel knowledge claim cannot be verified until after substantial resources have been invested. Using arguments from both philosophy of science and organizational ecology, it is suggested that scientists can mitigate and sensibly share the risks of knowledge discovery within the community by articulating their claims in legitimate forms, i.e., forms that are testable within and relevant to the community. Evidence from empirical studies of semantic usage suggests that the legitimacy of a knowledge claim is influenced by the characteristics of the concepts in which it is articulated. A model of conceptual retention, variation, and selection is then proposed for predicting the usage of concepts and conceptual co-occurrences in the future publications of the community, based on its past. Results substantially supported hypothesized retention and selection mechanisms. Future concept usage was predictable from previous concept usage, but was limited by conceptual carrying capacity as predicted by density dependence theory. Also as predicted, retention was stronger when the community showed a more cohesive social structure. Similarly, concepts that showed structural signatures of high testability and relevance were more likely to be selected after previous usage frequency was controlled for. By contrast, hypotheses for variation mechanisms were not supported. Surprisingly, concepts whose structural position suggested they would be easiest to discover through search processes were used less frequently, once previous usage frequency was controlled for. The study also makes a theoretical contribution by suggesting ways that evolutionary theory can be used to integrate findings from the study of science with insights from organizational communication. A variety of concrete directions for future studies of social and semantic network evolution are also proposed.

  7. A Hybrid Data Mining Approach for Credit Card Usage Behavior Analysis

    NASA Astrophysics Data System (ADS)

    Tsai, Chieh-Yuan

    The credit card is one of the most popular e-payment approaches in current online e-commerce. To retain valuable customers, card issuers invest heavily in maintaining good relationships with them. Although several efforts have studied card usage motivation, few studies emphasize credit card usage behavior analysis when time periods change from t to t+1. To address this issue, an integrated data mining approach is proposed in this paper. First, customer profiles and transaction data at time period t are retrieved from databases. Second, a LabelSOM neural network groups customers into segments and identifies critical characteristics for each group. Third, a fuzzy decision tree algorithm is used to construct usage behavior rules for customer groups of interest. Finally, these rules are used to analyze the behavior changes between time periods t and t+1. An implementation case using a practical credit card database provided by a commercial bank in Taiwan illustrates the benefits of the proposed framework.

  8. Generate Optimized Genetic Rhythm for Enzyme Expression in Non-native systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2016-11-03

    Most amino acids are represented by more than one codon, resulting in redundancy in the genetic code. Silent codon substitutions that do not alter the amino acid sequence still have an effect on protein expression. We have developed an algorithm, GoGREEN, to enhance the expression of foreign proteins in a host organism. GoGREEN selects codons according to frequency patterns seen in the gene of interest using the codon usage table from the host organism. GoGREEN is also designed to accommodate gaps in the sequence. This software takes as input (1) the aligned protein sequences for genes the user wishes to express, (2) the codon usage table for the host organism, and (3) the DNA sequence for the target protein found in the host organism. The program will select codons based on codon usage patterns for the target DNA sequence. The program will also select codons for "gaps" found in the aligned protein sequences using the codon usage table from the host organism.
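
    The core selection step can be sketched as sampling, for each amino acid, a codon in proportion to its frequency in the host usage table; the table excerpt below is illustrative, and GoGREEN's matching of the reference gene's frequency pattern is not reproduced:

        # Back-translate a protein using a host codon-usage table,
        # tolerating alignment gaps. The usage excerpt is illustrative.
        import random

        # amino acid -> {codon: relative usage frequency in the host}
        usage = {
            "M": {"ATG": 1.00},
            "K": {"AAA": 0.74, "AAG": 0.26},
            "F": {"TTT": 0.57, "TTC": 0.43},
        }

        def back_translate(protein: str, seed: int = 0) -> str:
            rng = random.Random(seed)
            codons = []
            for aa in protein:
                if aa == "-":        # skip alignment gaps
                    continue
                table = usage[aa]
                codons.append(rng.choices(list(table), weights=table.values())[0])
            return "".join(codons)

        print(back_translate("MK-F"))   # e.g. 'ATGAAATTT'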

  9. MQAPRank: improved global protein model quality assessment by learning-to-rank.

    PubMed

    Jing, Xiaoyang; Dong, Qiwen

    2017-05-25

    Protein structure prediction has made substantial progress during the last few decades, and a greater number of models can now be predicted for a given sequence. Consequently, assessing the quality of predicted protein models is one of the key components of successful protein structure prediction. Over the past years, a number of methods have been developed to address this issue; they can be roughly divided into three categories: single methods, quasi-single methods, and clustering (or consensus) methods. Although these methods achieve success at different levels, accurate protein model quality assessment is still an open problem. Here, we present MQAPRank, a global protein model quality assessment program based on learning-to-rank. MQAPRank first sorts the decoy models with a single-model method based on a learning-to-rank algorithm to indicate their relative qualities for the target protein. It then takes the first five models as references and predicts the qualities of the other models from the average GDT_TS scores between the reference models and the other models. Benchmarked on the CASP11 and 3DRobot datasets, MQAPRank achieved better performance than other leading protein model quality assessment methods. Recently, MQAPRank participated in CASP12 under the group name FDUBio and achieved state-of-the-art performance. MQAPRank provides a convenient and powerful tool for protein model quality assessment; it is useful for protein structure prediction and model quality assessment.

  10. Developing Control System of Electrical Devices with Operational Expense Prediction

    NASA Astrophysics Data System (ADS)

    Sendari, Siti; Wahyu Herwanto, Heru; Rahmawati, Yuni; Mukti Putranto, Dendi; Fitri, Shofiana

    2017-04-01

    The purpose of this research is to develop a system that can monitor and record the electricity usage of home electrical devices. The system is able to control electrical devices remotely and to predict the operational expense. It was developed using micro-controllers and WiFi modules connected to a PC server, with communication between modules arranged by the server via WiFi. Besides reading the electricity usage of home electrical devices, the unique point of the proposed system is the ability of the micro-controllers to send electricity data to the server, which records the usage of each device. Testing was done with the black-box method to check the functionality of the system; the system ran well, with 0% error.

  11. Drug Resistance and Viral Tropism in HIV-1 Subtype C-Infected Patients in KwaZulu-Natal, South Africa: Implications for Future Treatment Options

    PubMed Central

    Singh, Ashika; Sunpath, Henry; Green, Taryn N.; Padayachi, Nagavelli; Hiramen, Keshni; Lie, Yolanda; Anton, Elizabeth D.; Murphy, Richard; Reeves, Jacqueline D.; Kuritzkes, Daniel R.; Ndung’u, Thumbi

    2011-01-01

    Background Drug resistance poses a significant challenge for the successful application of highly active antiretroviral therapy (HAART) globally. Furthermore, emergence of HIV-1 isolates that preferentially utilize CXCR4 as a coreceptor for cell entry, either as a consequence of natural viral evolution or HAART use may compromise the efficacy of CCR5 antagonists as alternative antiviral therapy. Methods We sequenced the pol gene of viruses from 45 individuals failing at least six months of HAART in Durban, South Africa to determine the prevalence and patterns of drug resistance mutations. Coreceptor usage profiles of these viruses and those from 45 HAART-naive individuals were analyzed using phenotypic and genotypic approaches. Results Ninety-five percent of HAART-failing patients had at least one drug resistance mutation. Thymidine analog mutations (TAMs) were present in 55% of patients with 9% of individuals possessing mutations indicative of the TAM1 pathway, 44% had TAM2 while 7% had mutations common to both pathways. Sixty percent of HAART-failing subjects had X4/dual//mixed-tropic viruses compared to 30% of HAART-naïve subjects (p<0.02). Genetic coreceptor usage prediction algorithms correlated with phenotypic results with 60% of samples from HAART-failing subjects predicted to possess CXCR4-using (X4/dual/mixed viruses) versus 15% of HAART-naïve patients. Conclusions The high proportion of TAMs and X4/dual/mixed HIV-1 viruses among patients failing therapy highlight the need for intensified monitoring of patients taking HAART and the problem of diminished drug options (including CCR5 antagonists) for patients failing therapy in resource-poor settings. PMID:21709569

  12. Minimalist ensemble algorithms for genome-wide protein localization prediction.

    PubMed

    Lin, Jhih-Rong; Mondal, Ananda Mohan; Liu, Rong; Hu, Jianjun

    2012-07-03

    Computational prediction of protein subcellular localization can greatly help to elucidate its functions. Despite the existence of dozens of protein localization prediction algorithms, the prediction accuracy and coverage are still low. Several ensemble algorithms have been proposed to improve the prediction performance, which usually include as many as 10 or more individual localization algorithms. However, their performance is still limited by the running complexity and redundancy among individual prediction algorithms. This paper proposed a novel method for rational design of minimalist ensemble algorithms for practical genome-wide protein subcellular localization prediction. The algorithm is based on combining a feature selection based filter and a logistic regression classifier. Using a novel concept of contribution scores, we analyzed issues of algorithm redundancy, consensus mistakes, and algorithm complementarity in designing ensemble algorithms. We applied the proposed minimalist logistic regression (LR) ensemble algorithm to two genome-wide datasets of Yeast and Human and compared its performance with current ensemble algorithms. Experimental results showed that the minimalist ensemble algorithm can achieve high prediction accuracy with only 1/3 to 1/2 of individual predictors of current ensemble algorithms, which greatly reduces computational complexity and running time. It was found that the high performance ensemble algorithms are usually composed of the predictors that together cover most of available features. Compared to the best individual predictor, our ensemble algorithm improved the prediction accuracy from AUC score of 0.558 to 0.707 for the Yeast dataset and from 0.628 to 0.646 for the Human dataset. Compared with popular weighted voting based ensemble algorithms, our classifier-based ensemble algorithms achieved much better performance without suffering from inclusion of too many individual predictors. We proposed a method for rational design of minimalist ensemble algorithms using feature selection and classifiers. The proposed minimalist ensemble algorithm based on logistic regression can achieve equal or better prediction performance while using only half or one-third of individual predictors compared to other ensemble algorithms. The results also suggested that meta-predictors that take advantage of a variety of features by combining individual predictors tend to achieve the best performance. The LR ensemble server and related benchmark datasets are available at http://mleg.cse.sc.edu/LRensemble/cgi-bin/predict.cgi.
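
    The classifier-based combination described above can be sketched by treating each individual predictor's output as a feature, filtering down to a small subset, and letting a logistic regression learn the combination; the predictor outputs below are simulated, not real localization tools:

        # Minimalist ensemble sketch: feature-selection filter + logistic
        # regression over (simulated) individual predictor outputs.
        import numpy as np
        from sklearn.feature_selection import SelectKBest, mutual_info_classif
        from sklearn.linear_model import LogisticRegression
        from sklearn.pipeline import make_pipeline
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(1)
        y = rng.integers(0, 2, 500)                 # true localization (binary here)
        # 9 simulated predictors: 3 informative (noisy copies of y), 6 noise
        informative = (y[:, None] ^ (rng.random((500, 3)) < 0.2)).astype(float)
        noise = rng.integers(0, 2, (500, 6)).astype(float)
        X = np.hstack([informative, noise])

        # keep 3 of 9 predictors, then combine with logistic regression
        ensemble = make_pipeline(SelectKBest(mutual_info_classif, k=3),
                                 LogisticRegression())
        print("AUC:", cross_val_score(ensemble, X, y, cv=5, scoring="roc_auc").mean())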

  13. Minimalist ensemble algorithms for genome-wide protein localization prediction

    PubMed Central

    2012-01-01

    Background Computational prediction of protein subcellular localization can greatly help to elucidate its functions. Despite the existence of dozens of protein localization prediction algorithms, the prediction accuracy and coverage are still low. Several ensemble algorithms have been proposed to improve the prediction performance, which usually include as many as 10 or more individual localization algorithms. However, their performance is still limited by the running complexity and redundancy among individual prediction algorithms. Results This paper proposed a novel method for rational design of minimalist ensemble algorithms for practical genome-wide protein subcellular localization prediction. The algorithm is based on combining a feature selection based filter and a logistic regression classifier. Using a novel concept of contribution scores, we analyzed issues of algorithm redundancy, consensus mistakes, and algorithm complementarity in designing ensemble algorithms. We applied the proposed minimalist logistic regression (LR) ensemble algorithm to two genome-wide datasets of Yeast and Human and compared its performance with current ensemble algorithms. Experimental results showed that the minimalist ensemble algorithm can achieve high prediction accuracy with only 1/3 to 1/2 of individual predictors of current ensemble algorithms, which greatly reduces computational complexity and running time. It was found that the high performance ensemble algorithms are usually composed of the predictors that together cover most of available features. Compared to the best individual predictor, our ensemble algorithm improved the prediction accuracy from AUC score of 0.558 to 0.707 for the Yeast dataset and from 0.628 to 0.646 for the Human dataset. Compared with popular weighted voting based ensemble algorithms, our classifier-based ensemble algorithms achieved much better performance without suffering from inclusion of too many individual predictors. Conclusions We proposed a method for rational design of minimalist ensemble algorithms using feature selection and classifiers. The proposed minimalist ensemble algorithm based on logistic regression can achieve equal or better prediction performance while using only half or one-third of individual predictors compared to other ensemble algorithms. The results also suggested that meta-predictors that take advantage of a variety of features by combining individual predictors tend to achieve the best performance. The LR ensemble server and related benchmark datasets are available at http://mleg.cse.sc.edu/LRensemble/cgi-bin/predict.cgi. PMID:22759391

  14. The routing, modulation level, and spectrum allocation algorithm in the virtual optical network mapping

    NASA Astrophysics Data System (ADS)

    Wang, Yunyun; Li, Hui; Liu, Yuze; Ji, Yuefeng; Li, Hongfa

    2017-10-01

    With the development of large video services and cloud computing, the network increasingly takes the form of services. In SDON, the SDN controller holds the underlying physical resource information and can thus allocate the appropriate resources and bandwidth to each VON service. However, for services that require extremely strict QoT (quality of transmission), the shortest-distance path algorithm is often unable to meet the requirements because it does not take link spectrum resources into account, while always choosing the least-occupied links may leave more spectrum fragments. We therefore propose a new RMLSA (routing, modulation level, and spectrum allocation) algorithm to reduce the blocking probability. The results show about 40% lower blocking probability than the shortest-distance algorithm and the minimum-spectrum-usage priority algorithm. The algorithm is intended to satisfy demands with strict QoT requirements.

  15. Quantum machine learning for quantum anomaly detection

    NASA Astrophysics Data System (ADS)

    Liu, Nana; Rebentrost, Patrick

    2018-04-01

    Anomaly detection is used for identifying data that deviate from "normal" data patterns. Its usage on classical data finds diverse applications in many important areas such as finance, fraud detection, medical diagnoses, data cleaning, and surveillance. With the advent of quantum technologies, anomaly detection of quantum data, in the form of quantum states, may become an important component of quantum applications. Machine-learning algorithms are playing pivotal roles in anomaly detection using classical data. Two widely used algorithms are the kernel principal component analysis and the one-class support vector machine. We find corresponding quantum algorithms to detect anomalies in quantum states. We show that these two quantum algorithms can be performed using resources that are logarithmic in the dimensionality of quantum states. For pure quantum states, these resources can also be logarithmic in the number of quantum states used for training the machine-learning algorithm. This makes these algorithms potentially applicable to big quantum data applications.
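
    Of the two classical counterparts named above, the one-class support vector machine is the simpler to sketch; the quantum versions operate on quantum states and are not reproduced here:

        # Classical one-class SVM: fit to "normal" samples, flag anomalies.
        import numpy as np
        from sklearn.svm import OneClassSVM

        rng = np.random.default_rng(2)
        normal = rng.normal(0.0, 1.0, size=(200, 2))     # training data: normal pattern
        test = np.vstack([rng.normal(0.0, 1.0, (5, 2)),  # more normal points...
                          [[6.0, 6.0], [-7.0, 5.0]]])    # ...plus two clear outliers

        detector = OneClassSVM(kernel="rbf", nu=0.05).fit(normal)
        print(detector.predict(test))  # +1 = normal, -1 = anomaly (last two)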

  16. A High Fuel Consumption Efficiency Management Scheme for PHEVs Using an Adaptive Genetic Algorithm

    PubMed Central

    Lee, Wah Ching; Tsang, Kim Fung; Chi, Hao Ran; Hung, Faan Hei; Wu, Chung Kit; Chui, Kwok Tai; Lau, Wing Hong; Leung, Yat Wah

    2015-01-01

    A high fuel efficiency management scheme for plug-in hybrid electric vehicles (PHEVs) has been developed. In order to achieve fuel consumption reduction, an adaptive genetic algorithm scheme has been designed to adaptively manage energy resource usage. The objective function of the genetic algorithm is implemented by designing a fuzzy logic controller which closely monitors and resembles the driving conditions and environment of PHEVs, thus trading off petrol against electricity for optimal driving efficiency. Comparison between calculated results and published data shows that the efficiency achieved by the fuzzified genetic algorithm is better by 10% than that of existing schemes. The developed scheme, if fully adopted, would help reduce over 600 tons of CO2 emissions worldwide every day. PMID:25587974

  17. Nipype: a flexible, lightweight and extensible neuroimaging data processing framework in python.

    PubMed

    Gorgolewski, Krzysztof; Burns, Christopher D; Madison, Cindee; Clark, Dav; Halchenko, Yaroslav O; Waskom, Michael L; Ghosh, Satrajit S

    2011-01-01

    Current neuroimaging software offer users an incredible opportunity to analyze their data in different ways, with different underlying assumptions. Several sophisticated software packages (e.g., AFNI, BrainVoyager, FSL, FreeSurfer, Nipy, R, SPM) are used to process and analyze large and often diverse (highly multi-dimensional) data. However, this heterogeneous collection of specialized applications creates several issues that hinder replicable, efficient, and optimal use of neuroimaging analysis approaches: (1) No uniform access to neuroimaging analysis software and usage information; (2) No framework for comparative algorithm development and dissemination; (3) Personnel turnover in laboratories often limits methodological continuity and training new personnel takes time; (4) Neuroimaging software packages do not address computational efficiency; and (5) Methods sections in journal articles are inadequate for reproducing results. To address these issues, we present Nipype (Neuroimaging in Python: Pipelines and Interfaces; http://nipy.org/nipype), an open-source, community-developed, software package, and scriptable library. Nipype solves the issues by providing Interfaces to existing neuroimaging software with uniform usage semantics and by facilitating interaction between these packages using Workflows. Nipype provides an environment that encourages interactive exploration of algorithms, eases the design of Workflows within and between packages, allows rapid comparative development of algorithms and reduces the learning curve necessary to use different packages. Nipype supports both local and remote execution on multi-core machines and clusters, without additional scripting. Nipype is Berkeley Software Distribution licensed, allowing anyone unrestricted usage. An open, community-driven development philosophy allows the software to quickly adapt and address the varied needs of the evolving neuroimaging community, especially in the context of increasing demand for reproducible research.
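
    A minimal usage sketch (assuming FSL is installed and a file named T1.nii.gz exists; the names are illustrative) of wrapping two Interfaces as Nodes and connecting them in a Workflow:

        # Two-node Nipype workflow: skull-strip a T1 image with FSL BET,
        # then smooth the result. Assumes FSL is installed on the PATH.
        from nipype import Node, Workflow
        from nipype.interfaces import fsl

        skullstrip = Node(fsl.BET(in_file="T1.nii.gz"), name="skullstrip")
        smooth = Node(fsl.IsotropicSmooth(fwhm=4), name="smooth")

        wf = Workflow(name="preproc", base_dir="work")
        wf.connect(skullstrip, "out_file", smooth, "in_file")  # pipe BET output onward
        wf.run()  # local execution; plugins enable multi-core and cluster runs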

  18. Nipype: A Flexible, Lightweight and Extensible Neuroimaging Data Processing Framework in Python

    PubMed Central

    Gorgolewski, Krzysztof; Burns, Christopher D.; Madison, Cindee; Clark, Dav; Halchenko, Yaroslav O.; Waskom, Michael L.; Ghosh, Satrajit S.

    2011-01-01

    Current neuroimaging software offer users an incredible opportunity to analyze their data in different ways, with different underlying assumptions. Several sophisticated software packages (e.g., AFNI, BrainVoyager, FSL, FreeSurfer, Nipy, R, SPM) are used to process and analyze large and often diverse (highly multi-dimensional) data. However, this heterogeneous collection of specialized applications creates several issues that hinder replicable, efficient, and optimal use of neuroimaging analysis approaches: (1) No uniform access to neuroimaging analysis software and usage information; (2) No framework for comparative algorithm development and dissemination; (3) Personnel turnover in laboratories often limits methodological continuity and training new personnel takes time; (4) Neuroimaging software packages do not address computational efficiency; and (5) Methods sections in journal articles are inadequate for reproducing results. To address these issues, we present Nipype (Neuroimaging in Python: Pipelines and Interfaces; http://nipy.org/nipype), an open-source, community-developed, software package, and scriptable library. Nipype solves the issues by providing Interfaces to existing neuroimaging software with uniform usage semantics and by facilitating interaction between these packages using Workflows. Nipype provides an environment that encourages interactive exploration of algorithms, eases the design of Workflows within and between packages, allows rapid comparative development of algorithms and reduces the learning curve necessary to use different packages. Nipype supports both local and remote execution on multi-core machines and clusters, without additional scripting. Nipype is Berkeley Software Distribution licensed, allowing anyone unrestricted usage. An open, community-driven development philosophy allows the software to quickly adapt and address the varied needs of the evolving neuroimaging community, especially in the context of increasing demand for reproducible research. PMID:21897815

  19. An arbitrary-order staggered time integrator for the linear acoustic wave equation

    NASA Astrophysics Data System (ADS)

    Lee, Jaejoon; Park, Hyunseo; Park, Yoonseo; Shin, Changsoo

    2018-02-01

    We suggest a staggered time integrator whose order of accuracy can be arbitrarily extended to solve the linear acoustic wave equation. A strategy to select the appropriate order of accuracy is also proposed, based on an error analysis that quantitatively predicts the truncation error of the numerical solution. This strategy not only reduces the computational cost several times, but also allows us to flexibly set the modelling parameters such as the time step length, grid interval, and P-wave speed. It is demonstrated that the proposed method can almost eliminate temporal dispersive errors during long-term simulations regardless of the heterogeneity of the media and the time step lengths. The method can also be successfully applied to the source problem with an absorbing boundary condition, which is frequently encountered in practical use in imaging algorithms and inverse problems.

  20. Systems Biology-Driven Hypotheses Tested In Vivo: The Need to Advance Molecular Imaging Tools.

    PubMed

    Verma, Garima; Palombo, Alessandro; Grigioni, Mauro; La Monaca, Morena; D'Avenio, Giuseppe

    2018-01-01

    Processing and interpretation of biological images may provide invaluable insights into complex, living systems because images capture the overall dynamics as a "whole." Therefore, "extraction" of key, quantitative morphological parameters could be, at least in principle, helpful in building a reliable systems biology approach to understanding living objects. Molecular imaging tools for systems biology models have attained widespread usage in modern experimental laboratories. Here, we provide an overview of advances in computational technology and the different instrumentation focused on molecular image processing and analysis. Quantitative data analysis through various open source software and algorithmic protocols will provide a novel approach for modeling the experimental research program. Besides this, we also highlight foreseeable future trends regarding methods for automatically analyzing biological data. Such tools will be very useful for understanding detailed biological and mathematical expressions in in-silico systems biology modeling.

  1. Characteristic-based algorithms for flows in thermo-chemical nonequilibrium

    NASA Technical Reports Server (NTRS)

    Walters, Robert W.; Cinnella, Pasquale; Slack, David C.; Halt, David

    1990-01-01

    A generalized finite-rate chemistry algorithm with Steger-Warming, Van Leer, and Roe characteristic-based flux splittings is presented in three-dimensional generalized coordinates for the Navier-Stokes equations. Attention is placed on convergence to steady-state solutions with fully coupled chemistry. Time integration schemes, including explicit m-stage Runge-Kutta, implicit approximate factorization, relaxation, and LU decomposition, are investigated and compared in terms of residual reduction per unit of CPU time. Practical issues such as code vectorization and memory usage on modern supercomputers are discussed.
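
    As a reference sketch in standard notation (not reproduced from the report), the Steger-Warming splitting mentioned above separates the flux by the sign of the eigenvalues of the flux Jacobian:

        % Steger-Warming flux vector splitting. With A = dF/dU = R \Lambda R^{-1}
        % and F homogeneous of degree one (so F = A U), split eigenvalues by sign:
        \Lambda^{\pm} = \tfrac{1}{2}\left(\Lambda \pm |\Lambda|\right), \quad
        A^{\pm} = R\, \Lambda^{\pm} R^{-1}, \quad
        F^{\pm} = A^{\pm} U, \quad F = F^{+} + F^{-}.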

  2. Predicting the survival of diabetes using neural network

    NASA Astrophysics Data System (ADS)

    Mamuda, Mamman; Sathasivam, Saratha

    2017-08-01

    Data mining techniques are now widely used to predict diseases in the health care industry, and the neural network is among the prevailing methods. This paper presents a study on predicting the survival of diabetes patients using different supervised learning algorithms for neural networks. Three learning algorithms are considered: (i) the Levenberg-Marquardt algorithm, (ii) the Bayesian regularization algorithm, and (iii) the scaled conjugate gradient algorithm. The network is trained on the Pima Indian Diabetes Dataset using MATLAB R2014(a). The performance of each algorithm is further discussed through regression analysis, and the prediction accuracy of the best algorithm is computed to validate the prediction.

  3. ABodyBuilder: Automated antibody structure prediction with data-driven accuracy estimation

    PubMed Central

    Leem, Jinwoo; Dunbar, James; Georges, Guy; Shi, Jiye; Deane, Charlotte M.

    2016-01-01

    ABSTRACT Computational modeling of antibody structures plays a critical role in therapeutic antibody design. Several antibody modeling pipelines exist, but no freely available methods currently model nanobodies, provide estimates of expected model accuracy, or highlight potential issues with the antibody's experimental development. Here, we describe our automated antibody modeling pipeline, ABodyBuilder, designed to overcome these issues. The algorithm itself follows the standard 4 steps of template selection, orientation prediction, complementarity-determining region (CDR) loop modeling, and side chain prediction. ABodyBuilder then annotates the ‘confidence’ of the model as a probability that a component of the antibody (e.g., CDRL3 loop) will be modeled within a root-mean-square deviation threshold. It also flags structural motifs on the model that are known to cause issues during in vitro development. ABodyBuilder was tested on 4 separate datasets, including the 11 antibodies from the Antibody Modeling Assessment-II competition. ABodyBuilder builds models that are of similar quality to other methodologies, with sub-Angstrom predictions for the ‘canonical’ CDR loops. Its ability to model nanobodies and rapidly generate models (∼30 seconds per model) widens its potential usage. ABodyBuilder can also help users in decision-making for the development of novel antibodies because it provides model confidence and potential sequence liabilities. ABodyBuilder is freely available at http://opig.stats.ox.ac.uk/webapps/abodybuilder. PMID:27392298

  4. OpenCL-based vicinity computation for 3D multiresolution mesh compression

    NASA Astrophysics Data System (ADS)

    Hachicha, Soumaya; Elkefi, Akram; Ben Amar, Chokri

    2017-03-01

    3D multiresolution mesh compression systems are still widely studied in many domains. These systems increasingly require volumetric data to be processed in real time, so performance is constrained by material resource usage and the need for an overall reduction in computational time. In this paper, our contribution lies entirely in computing, in real time, the triangle neighborhoods of 3D progressive meshes for a robust compression algorithm based on the scan-based wavelet transform (WT) technique. The originality of this latter algorithm is to compute the WT with minimal memory usage by processing data as they are acquired. With large data, however, this technique suffers from high computational complexity. For that reason, this work exploits the GPU to accelerate the computation, using OpenCL as a heterogeneous programming language. Experiments demonstrate that, aside from the portability across various platforms and the flexibility guaranteed by the OpenCL-based implementation, this method achieves a speedup factor of 5 compared to the sequential CPU implementation.

  5. Full-custom design of split-set data weighted averaging with output register for jitter suppression

    NASA Astrophysics Data System (ADS)

    Jubay, M. C.; Gerasta, O. J.

    2015-06-01

    A full-custom design of an element selection algorithm, named Split-set Data Weighted Averaging (SDWA), is implemented in a 90 nm CMOS technology Synopsys library. SDWA is applied to seven unit elements (3-bit) using a thermometer-coded input. Split-set DWA is an improved DWA algorithm that meets the requirement for randomization along with long-term equal element usage. Randomization and equal element usage improve the spectral response of the unit elements, yielding a higher spurious-free dynamic range (SFDR) without significantly degrading the signal-to-noise ratio (SNR). Being full-custom, the design is brought to the transistor level, and the custom chip layout is also provided, with a total area of 0.3 mm2, a power consumption of 0.566 mW, and simulation at a 50 MHz clock frequency. In this implementation, SDWA is successfully derived and improved by introducing a register at the output that suppresses the jitter introduced at the final stage by switching loops and successive delays.
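
    For context, the sketch below implements conventional data weighted averaging (DWA), the baseline element-selection scheme that the split-set variant improves on: a rotating pointer cycles through the unit elements so that long-term usage equalizes. The split-set randomization itself is not reproduced here.

        N = 7          # seven unit elements of the 3-bit DAC
        ptr = 0        # rotation pointer

        def dwa_select(value):
            """Return indices of the unit elements fired for a thermometer-coded input."""
            global ptr
            selected = [(ptr + i) % N for i in range(value)]
            ptr = (ptr + value) % N   # next conversion starts where this one ended
            return selected

        for v in [3, 5, 2, 7, 1]:
            print(v, dwa_select(v))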

  6. Track finding in ATLAS using GPUs

    NASA Astrophysics Data System (ADS)

    Mattmann, J.; Schmitt, C.

    2012-12-01

    The reconstruction and simulation of collision events is a major task in modern HEP experiments, involving several tens of thousands of standard CPUs. On the other hand, graphics processors (GPUs) have become much more powerful and far outperform standard CPUs in terms of floating point operations due to their massively parallel approach. The usage of these GPUs could therefore significantly reduce the overall reconstruction time per event or allow for the usage of more sophisticated algorithms. In this paper, track finding in the ATLAS experiment is used as an example of how GPUs can be used in this context: the implementation on the GPU requires a change in the algorithmic flow to allow the code to work in the rather limited GPU environment in terms of memory, cache, and transfer speed from and to the GPU, and to make use of massively parallel computation. Both the specific implementation of parts of the ATLAS track reconstruction chain and the performance improvements obtained are discussed.

  7. Passive Facebook usage undermines affective well-being: Experimental and longitudinal evidence.

    PubMed

    Verduyn, Philippe; Lee, David Seungjae; Park, Jiyoung; Shablack, Holly; Orvell, Ariana; Bayer, Joseph; Ybarra, Oscar; Jonides, John; Kross, Ethan

    2015-04-01

    Prior research indicates that Facebook usage predicts declines in subjective well-being over time. How does this come about? We examined this issue in 2 studies using experimental and field methods. In Study 1, cueing people in the laboratory to use Facebook passively (rather than actively) led to declines in affective well-being over time. Study 2 replicated these findings in the field using experience-sampling techniques. It also demonstrated how passive Facebook usage leads to declines in affective well-being: by increasing envy. Critically, the relationship between passive Facebook usage and changes in affective well-being remained significant when controlling for active Facebook use, non-Facebook online social network usage, and direct social interactions, highlighting the specificity of this result. These findings demonstrate that passive Facebook usage undermines affective well-being. (c) 2015 APA, all rights reserved.

  8. Comparative study on ATR-FTIR calibration models for monitoring solution concentration in cooling crystallization

    NASA Astrophysics Data System (ADS)

    Zhang, Fangkun; Liu, Tao; Wang, Xue Z.; Liu, Jingxiang; Jiang, Xiaobin

    2017-02-01

    In this paper, calibration model building based on ATR-FTIR spectroscopy is investigated for in-situ measurement of solution concentration during a cooling crystallization process. The cooling crystallization of L-glutamic acid (LGA) is studied as a case. It was found that using metastable zone (MSZ) data for model calibration can guarantee the prediction accuracy for monitoring the operating window of cooling crystallization, compared to the usage of undersaturated zone (USZ) spectra for model building as traditionally practiced. Calibration experiments were performed on LGA solutions at different concentrations. Four candidate calibration models were established using data from different zones for comparison, by applying a multivariate partial least-squares (PLS) regression algorithm to the collected spectra together with the corresponding temperature values. Experiments under different process conditions, including changes of solution concentration and operating temperature, were conducted. The results indicate that using the MSZ spectra for model calibration gives more accurate prediction of the solution concentration during the crystallization process, while maintaining accuracy under changes of the operating temperature. The primary source of prediction error was identified as spectral nonlinearity between the USZ and MSZ for in-situ measurement. In addition, an LGA cooling crystallization experiment was performed to verify the sensitivity of these calibration models for monitoring the crystal growth process.
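
    The PLS calibration step lends itself to a short sketch: spectra augmented with the corresponding temperature are regressed onto known concentrations. The sketch below assumes scikit-learn and randomly generated stand-in data; the array shapes and ranges are illustrative, not the LGA measurements.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        rng = np.random.default_rng(1)
        spectra = rng.normal(size=(60, 300))        # 60 ATR-FTIR spectra, 300 wavenumbers
        temps = rng.uniform(20, 60, size=(60, 1))   # operating temperature (deg C)
        conc = rng.uniform(10, 40, size=60)         # known solution concentration (g/L)

        X = np.hstack([spectra, temps])             # spectra + temperature as predictors
        model = PLSRegression(n_components=5).fit(X, conc)
        pred = model.predict(X[:3])                 # in-situ concentration estimates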

  9. Artificial intelligence based model for optimization of COD removal efficiency of an up-flow anaerobic sludge blanket reactor in the saline wastewater treatment.

    PubMed

    Picos-Benítez, Alain R; López-Hincapié, Juan D; Chávez-Ramírez, Abraham U; Rodríguez-García, Adrián

    2017-03-01

    The complex non-linear behavior presented in the biological treatment of wastewater requires an accurate model to predict the system performance. This study evaluates the effectiveness of an artificial intelligence (AI) model, based on the combination of artificial neural networks (ANNs) and genetic algorithms (GAs), to find the optimum performance of an up-flow anaerobic sludge blanket reactor (UASB) for saline wastewater treatment. Chemical oxygen demand (COD) removal was predicted using conductivity, organic loading rate (OLR) and temperature as input variables. The ANN model was built from experimental data and performance was assessed through the maximum mean absolute percentage error (= 9.226%) computed from the measured and model predicted values of the COD. Accordingly, the ANN model was used as a fitness function in a GA to find the best operational condition. In the worst case scenario (low energy requirements, high OLR usage and high salinity) this model guaranteed COD removal efficiency values above 70%. This result is consistent and was validated experimentally, confirming that this ANN-GA model can be used as a tool to achieve the best performance of a UASB reactor with the minimum requirement of energy for saline wastewater treatment.
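
    A minimal sketch of the ANN-as-fitness-function idea: a trained surrogate maps (conductivity, OLR, temperature) to predicted COD removal, and a simple genetic loop searches for the best operating condition. The surrogate, bounds, and GA settings below are made-up placeholders for the paper's trained ANN and tuned GA.

        import numpy as np

        rng = np.random.default_rng(2)

        def ann_fitness(x):
            # Placeholder for the trained ANN: predicted COD removal efficiency (%).
            cond, olr, temp = x
            return 90 - 0.5 * cond - 2.0 * abs(olr - 5) - 0.3 * abs(temp - 35)

        lo = np.array([1.0, 1.0, 20.0])           # illustrative lower bounds
        hi = np.array([30.0, 10.0, 45.0])         # illustrative upper bounds
        pop = rng.uniform(lo, hi, size=(40, 3))   # initial population

        for _ in range(100):
            fit = np.array([ann_fitness(x) for x in pop])
            parents = pop[np.argsort(fit)[-20:]]               # keep the fittest half
            kids = (parents[rng.integers(0, 20, 20)] +
                    parents[rng.integers(0, 20, 20)]) / 2.0    # arithmetic crossover
            kids += rng.normal(scale=0.1, size=kids.shape)     # Gaussian mutation
            pop = np.clip(np.vstack([parents, kids]), lo, hi)

        best = pop[np.argmax([ann_fitness(x) for x in pop])]
        print("best operating condition:", best.round(2))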

  10. SU-C-207B-02: Maximal Noise Reduction Filter with Anatomical Structures Preservation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maitree, R; Guzman, G; Chundury, A

    Purpose: All medical images contain noise, which can result in an undesirable appearance and can reduce the visibility of anatomical details. A variety of techniques are used to reduce noise, such as increasing the image acquisition time and using post-processing noise reduction algorithms. However, these techniques either increase imaging time and cost or reduce tissue contrast and effective spatial resolution, which carry useful diagnostic information. The three main goals of this study are: 1) to develop a novel approach that can adaptively and maximally reduce noise while preserving valuable details of anatomical structures, 2) to evaluate the effectiveness of available noise reduction algorithms in comparison to the proposed algorithm, and 3) to demonstrate that the proposed noise reduction approach can be used clinically. Methods: To achieve maximal noise reduction without destroying anatomical details, the proposed approach automatically estimates local image noise strength levels and detects anatomical structures, i.e., tissue boundaries. This information is used to adaptively adjust the strength of the noise reduction filter. The proposed algorithm was tested on 34 repeated swine head datasets and MRI and CT images of 54 patients. Performance was quantitatively evaluated by image quality metrics and manually validated for clinical usage by two radiation oncologists and one radiologist. Results: Measurements on repeated swine head images demonstrated that the proposed algorithm efficiently removed noise while preserving structures and tissue boundaries. In comparisons, the proposed algorithm obtained competitive noise reduction performance and outperformed other filters in preserving anatomical structures. Assessments from the manual validation indicate that the proposed noise reduction algorithm is adequate for some clinical usages. Conclusion: According to both the clinical evaluation (human expert ranking) and the qualitative assessment, the proposed approach has superior noise reduction and anatomical structure preservation capabilities compared with existing noise removal methods. Senior author Dr. Deshan Yang received research funding from ViewRay and Varian.

  11. Coupled Ocean/Atmosphere Mesoscale Prediction System (COAMPS), Version 5.0, Revision 2.0 (User’s Guide)

    DTIC Science & Technology

    2012-05-03

    output (I/O) system. The framework provides tools for common modeling functions, as well as regridding, data decomposition, and communication on... Within this script, the user must specify both the site (DSRC or local) and the platform (DAVINCI, EINSTEIN, or local machine) on which COAMPS is... being run. For example: site=navy_dsrc (for DSRC usage), site=nrlssc (for local NRL-SSC usage), platform=davinci or einstein (for DSRC usage

  12. Coupled Ocean/Atmosphere Mesoscale Prediction System (COAMPS) Version 5.0, Rev. 2.0 (User’s Guide)

    DTIC Science & Technology

    2012-05-03

    output (I/O) system. The framework provides tools for common modeling functions, as well as regridding, data decomposition, and communication on... Within this script, the user must specify both the site (DSRC or local) and the platform (DAVINCI, EINSTEIN, or local machine) on which COAMPS is... being run. For example: site=navy_dsrc (for DSRC usage), site=nrlssc (for local NRL-SSC usage), platform=davinci or einstein (for DSRC usage

  13. Personality variables as predictors of Facebook usage.

    PubMed

    Caci, Barbara; Cardaci, Maurizio; Tabacchi, Marco E; Scrima, Fabrizio

    2014-04-01

    This study investigates the role of personality factors as predictors of Facebook usage. Data concerning Facebook usage and personality factors were gathered from 654 Facebook users via a web survey. Path analysis showed that Openness predicted early adoption of Facebook, Conscientiousness sparing use, Extraversion long sessions and abundant friendships, and Neuroticism a high frequency of sessions. The possible role of Agreeableness in predicting low session frequency and friendships needs further validation.

  14. Multi-agent coordination algorithms for control of distributed energy resources in smart grids

    NASA Astrophysics Data System (ADS)

    Cortes, Andres

    Sustainable energy is a top priority for researchers these days, since electricity and transportation are pillars of modern society. Integration of clean energy technologies such as wind, solar, and plug-in electric vehicles (PEVs) is a major engineering challenge in the operation and management of power systems, due to the uncertain nature of renewable energy technologies and the large amount of extra load that PEVs would add to the power grid. Given the networked structure of a power system, multi-agent control and optimization strategies are natural approaches to address the various problems of interest for the safe and reliable operation of the power grid. The distributed computation in multi-agent algorithms addresses three problems at the same time: i) it allows for the handling of problems with millions of variables that a single processor cannot compute, ii) it affords electricity customers a degree of independence and privacy by not requiring any usage information, and iii) it is robust to localized failures in the communication network, being able to solve problems by simply neglecting the failing section of the system. We propose various algorithms to coordinate storage, generation, and demand resources in a power grid using multi-agent computation and decentralized decision making. First, we introduce a hierarchical vehicle-one-grid (V1G) algorithm for coordination of PEVs under usage constraints, where energy only flows from the grid into the batteries of PEVs. We then present a hierarchical vehicle-to-grid (V2G) algorithm for PEV coordination that takes into consideration line capacity constraints in the distribution grid, and where energy flows both ways, from the grid into the batteries and from the batteries to the grid. Next, we develop a greedy-like hierarchical algorithm for management of demand response events with on/off loads. Finally, we introduce distributed algorithms for the optimal control of distributed energy resources, i.e., generation and storage in a microgrid. The algorithms we present are provably correct and tested in simulation. Each algorithm is assumed to work on a particular network topology, and simulation studies are carried out to demonstrate their convergence properties to a desired solution.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Katti, Amogh; Di Fatta, Giuseppe; Naughton, Thomas

    Future extreme-scale high-performance computing systems will be required to work under frequent component failures. The MPI Forum's User Level Failure Mitigation proposal has introduced an operation, MPI_Comm_shrink, to synchronize the alive processes on the list of failed processes, so that applications can continue to execute even in the presence of failures by adopting algorithm-based fault tolerance techniques. This MPI_Comm_shrink operation requires a failure detection and consensus algorithm. This paper presents three novel failure detection and consensus algorithms using gossiping. The proposed algorithms were implemented and tested using the Extreme-scale Simulator. The results show that in all algorithms the number of gossip cycles to achieve global consensus scales logarithmically with system size. The second algorithm also shows better scalability in terms of memory and network bandwidth usage and perfect synchronization in achieving global consensus. The third approach is a three-phase distributed failure detection and consensus algorithm that provides consistency guarantees even in very large and extreme-scale systems while remaining memory and bandwidth efficient.
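
    A toy simulation conveys the gossip mechanism: each alive process keeps a local set of suspected-failed ranks and, once per cycle, merges it with a randomly chosen peer's set, so agreement spreads in roughly logarithmically many cycles. This illustrates the flavor of the approach only; the paper's three algorithms, including the three-phase protocol, are not reproduced.

        import random

        random.seed(0)
        n, failed = 1024, {3, 17, 99}
        alive = [r for r in range(n) if r not in failed]
        # Initially only one process has detected the failures.
        suspects = {r: set(failed) if r == alive[0] else set() for r in alive}

        cycles = 0
        while any(suspects[r] != failed for r in alive):
            for r in alive:
                peer = random.choice(alive)
                merged = suspects[r] | suspects[peer]
                suspects[r] = suspects[peer] = merged   # push-pull gossip exchange
            cycles += 1
        print(cycles, "gossip cycles to consensus among", len(alive), "processes")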

  16. Testing an earthquake prediction algorithm

    USGS Publications Warehouse

    Kossobokov, V.G.; Healy, J.H.; Dewey, J.W.

    1997-01-01

    A test to evaluate earthquake prediction algorithms is being applied to a Russian algorithm known as M8. The M8 algorithm makes intermediate-term predictions for earthquakes to occur in a large circle, based on integral counts of transient seismicity in the circle. In a retroactive prediction for the period January 1, 1985 to July 1, 1991, the algorithm as configured for the forward test would have predicted eight of ten strong earthquakes in the test area. A null hypothesis, based on random assignment of predictions, predicts eight earthquakes in only 2.87% of trials. The forward test began July 1, 1991 and will run through December 31, 1997. As of July 1, 1995, the algorithm had forward predicted five out of nine earthquakes in the test area, a success ratio that would have been achieved in 53% of random trials under the null hypothesis.
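
    The null hypothesis invites a quick Monte Carlo check: if alarms are assigned at random and cover a fraction p of the space-time volume, each earthquake is "predicted" independently with probability p. The alarm fraction below is an assumed value chosen because it lands near the reported 2.87%; the paper's figure reflects the test's actual alarm coverage.

        import numpy as np

        rng = np.random.default_rng(3)
        p_alarm, n_quakes, trials = 0.45, 10, 100_000   # assumed alarm fraction
        hits = rng.binomial(n_quakes, p_alarm, size=trials)
        print("fraction of random trials with >= 8 hits:", (hits >= 8).mean())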

  17. Vital Sign Monitoring and Mobile Phone Usage Detection Using IR-UWB Radar for Intended Use in Car Crash Prevention.

    PubMed

    Leem, Seong Kyu; Khan, Faheem; Cho, Sung Ho

    2017-05-30

    In order to avoid car crashes, active safety systems are becoming more and more important. Many crashes are caused by driver drowsiness or mobile phone usage. Detecting driver drowsiness is very important for car safety, and monitoring vital signs such as respiration rate and heart rate helps determine its occurrence. In this paper, robust vital sign monitoring with impulse radio ultra-wideband (IR-UWB) radar is discussed. We propose a new algorithm that can estimate the vital signs even when there is motion caused by driving activities. We analyzed the whole fast-time vital detection region and found the signals at the fast-time locations that carry useful information related to the vital signals. We segmented those signals into sub-signals and then reconstructed the desired vital signal using the correlation method. In this way, the vital signs of the driver can be monitored noninvasively and used to detect drowsiness, which is reflected in the vital signs, i.e., respiration and heart rate. In addition, texting on a mobile phone while driving may cause visual, manual, or cognitive distraction of the driver. To reduce accidents caused by distracted drivers, we also propose an algorithm that detects a driver's mobile phone usage even in the presence of various driver motions in the car or changes in background objects. These novel techniques, which monitor vital signs associated with drowsiness and detect phone usage before a driver makes a mistake, may be very helpful in developing techniques for preventing car crashes.

  18. Vital Sign Monitoring and Mobile Phone Usage Detection Using IR-UWB Radar for Intended Use in Car Crash Prevention

    PubMed Central

    Leem, Seong Kyu; Khan, Faheem; Cho, Sung Ho

    2017-01-01

    In order to avoid car crashes, active safety systems are becoming more and more important. Many crashes are caused by driver drowsiness or mobile phone usage. Detecting driver drowsiness is very important for car safety, and monitoring vital signs such as respiration rate and heart rate helps determine its occurrence. In this paper, robust vital sign monitoring with impulse radio ultra-wideband (IR-UWB) radar is discussed. We propose a new algorithm that can estimate the vital signs even when there is motion caused by driving activities. We analyzed the whole fast-time vital detection region and found the signals at the fast-time locations that carry useful information related to the vital signals. We segmented those signals into sub-signals and then reconstructed the desired vital signal using the correlation method. In this way, the vital signs of the driver can be monitored noninvasively and used to detect drowsiness, which is reflected in the vital signs, i.e., respiration and heart rate. In addition, texting on a mobile phone while driving may cause visual, manual, or cognitive distraction of the driver. To reduce accidents caused by distracted drivers, we also propose an algorithm that detects a driver's mobile phone usage even in the presence of various driver motions in the car or changes in background objects. These novel techniques, which monitor vital signs associated with drowsiness and detect phone usage before a driver makes a mistake, may be very helpful in developing techniques for preventing car crashes. PMID:28556818

  19. Compacting de Bruijn graphs from sequencing data quickly and in low memory.

    PubMed

    Chikhi, Rayan; Limasset, Antoine; Medvedev, Paul

    2016-06-15

    As the quantity of data per sequencing experiment increases, the challenges of fragment assembly are becoming increasingly computational. The de Bruijn graph is a widely used data structure in fragment assembly algorithms, used to represent the information from a set of reads. Compaction, in which long simple paths are merged into single vertices, is an important data reduction step in most de Bruijn graph based algorithms. Compaction has recently become the bottleneck in assembly pipelines, and improving its running time and memory usage is an important problem. We present an algorithm and a tool, BCALM 2, for the compaction of de Bruijn graphs. BCALM 2 is a parallel algorithm that distributes the input based on a minimizer hashing technique, allowing for good balance of memory usage throughout its execution. For human sequencing data, BCALM 2 reduces the computational burden of compacting the de Bruijn graph to roughly an hour and 3 GB of memory. We also applied BCALM 2 to the 22 Gbp loblolly pine and 20 Gbp white spruce sequencing datasets. Compacted graphs were constructed from raw reads in less than 2 days and 40 GB of memory on a single machine. Hence, BCALM 2 is at least an order of magnitude more efficient than other available methods. Source code of BCALM 2 is freely available at https://github.com/GATB/bcalm. Contact: rayan.chikhi@univ-lille1.fr. © The Author 2016. Published by Oxford University Press.
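
    The minimizer-hashing idea behind the distribution step can be sketched briefly: each k-mer is routed to the bucket of its smallest contained m-mer, so overlapping k-mers usually land together and memory stays balanced. Lexicographic order and CRC32 below stand in for the tool's actual ordering and hash.

        import zlib

        def minimizer(kmer, m=4):
            # Smallest m-mer inside the k-mer (lexicographic stand-in ordering).
            return min(kmer[i:i + m] for i in range(len(kmer) - m + 1))

        def bucket(kmer, n_buckets=64, m=4):
            # Stable hash of the minimizer picks the partition.
            return zlib.crc32(minimizer(kmer, m).encode()) % n_buckets

        read, k = "ACGTTGCATGTCGCATG", 9
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            print(kmer, minimizer(kmer), bucket(kmer))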

  20. Investigating power capping toward energy-efficient scientific applications

    DOE PAGES

    Haidar, Azzam; Jagode, Heike; Vaccaro, Phil; ...

    2018-03-22

    The emergence of power efficiency as a primary constraint in processor and system design poses new challenges concerning power and energy awareness for numerical libraries and scientific applications. Power consumption also plays a major role in the design of data centers, which may house petascale or exascale-level computing systems. At these extreme scales, understanding and improving the energy efficiency of numerical libraries and their related applications becomes a crucial part of the successful implementation and operation of the computing system. In this paper, we study and investigate the practice of controlling a compute system's power usage, and we explore how different power caps affect the performance of numerical algorithms with different computational intensities. Further, we determine the impact, in terms of performance and energy usage, that these caps have on a system running scientific applications. This analysis will enable us to characterize the types of algorithms that benefit most from these power management schemes. Our experiments are performed using a set of representative kernels and several popular scientific benchmarks. Lastly, we quantify a number of power and performance measurements and draw observations and conclusions that can be viewed as a roadmap to achieving energy efficiency in the design and execution of scientific algorithms.

  1. Investigating power capping toward energy-efficient scientific applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haidar, Azzam; Jagode, Heike; Vaccaro, Phil

    The emergence of power efficiency as a primary constraint in processor and system design poses new challenges concerning power and energy awareness for numerical libraries and scientific applications. Power consumption also plays a major role in the design of data centers, which may house petascale or exascale-level computing systems. At these extreme scales, understanding and improving the energy efficiency of numerical libraries and their related applications becomes a crucial part of the successful implementation and operation of the computing system. In this paper, we study and investigate the practice of controlling a compute system's power usage, and we explore how different power caps affect the performance of numerical algorithms with different computational intensities. Further, we determine the impact, in terms of performance and energy usage, that these caps have on a system running scientific applications. This analysis will enable us to characterize the types of algorithms that benefit most from these power management schemes. Our experiments are performed using a set of representative kernels and several popular scientific benchmarks. Lastly, we quantify a number of power and performance measurements and draw observations and conclusions that can be viewed as a roadmap to achieving energy efficiency in the design and execution of scientific algorithms.

  2. The development of an artificial organic networks toolkit for LabVIEW.

    PubMed

    Ponce, Hiram; Ponce, Pedro; Molina, Arturo

    2015-03-15

    Two of the most challenging problems that scientists and researchers face when they want to experiment with new cutting-edge algorithms are the time required for encoding and the difficulty of linking the algorithms with other technologies and devices. In that sense, this article introduces the artificial organic networks toolkit for LabVIEW™ (AON-TL) from the implementation point of view. The toolkit is based on the framework provided by the artificial organic networks technique, giving it the potential to add new algorithms based on this technique in the future. Moreover, the toolkit inherits both the rapid prototyping and the easy-to-use characteristics of the LabVIEW™ software (e.g., graphical programming, transparent usage of other software and devices, built-in event-driven programming for user interfaces) to make it simple for the end user. The article describes the global architecture of the toolkit, with particular emphasis on the software implementation of the so-called artificial hydrocarbon networks algorithm. Lastly, the article includes two case studies, for engineering purposes (i.e., sensor characterization) and chemistry applications (i.e., a blood-brain barrier partitioning data model), to show the usage of the toolkit and the potential scalability of the artificial organic networks technique. © 2015 Wiley Periodicals, Inc.

  3. Comparison of the accuracy of three algorithms in predicting accessory pathways among adult Wolff-Parkinson-White syndrome patients.

    PubMed

    Maden, Orhan; Balci, Kevser Gülcihan; Selcuk, Mehmet Timur; Balci, Mustafa Mücahit; Açar, Burak; Unal, Sefa; Kara, Meryem; Selcuk, Hatice

    2015-12-01

    The aim of this study was to investigate the accuracy of three algorithms in predicting accessory pathway locations in adult patients with Wolff-Parkinson-White syndrome in a Turkish population. A total of 207 adult patients with Wolff-Parkinson-White syndrome were retrospectively analyzed. The most preexcited 12-lead electrocardiogram in sinus rhythm was used for analysis. Two investigators blinded to the patient data used three algorithms to predict accessory pathway location. Among all locations, 48.5% were left-sided, 44% were right-sided, and 7.5% were located in the midseptum or anteroseptum. When only exact locations were accepted as a match, the predictive accuracy was 71.5% for Chiang, 72.4% for d'Avila, and 71.5% for Arruda. The predictive accuracy did not differ between the algorithms (p = 1.000; p = 0.875; p = 0.885, respectively). The best algorithm for predicting right-sided, left-sided, and anteroseptal and midseptal accessory pathways was Arruda (p < 0.001). Arruda was significantly better than d'Avila in predicting adjacent sites (p = 0.035), and the percentage of contralateral-site predictions was higher with d'Avila than with Arruda (p = 0.013). All algorithms were similar in predicting accessory pathway location, and the predictive accuracy was lower than previously reported by their authors. However, according to accessory pathway site, the algorithm designed by Arruda et al. gave better predictions than the other algorithms, and using this algorithm may provide advantages before a planned ablation.

  4. A MULTICORE BASED PARALLEL IMAGE REGISTRATION METHOD

    PubMed Central

    Yang, Lin; Gong, Leiguang; Zhang, Hong; Nosher, John L.; Foran, David J.

    2012-01-01

    Image registration is a crucial step for many image-assisted clinical applications such as surgery planning and treatment evaluation. In this paper we propose a landmark-based nonlinear image registration algorithm for matching 2D image pairs. The algorithm is shown to be effective and robust under conditions of large deformations. In landmark-based registration, the most important step is establishing the correspondence among the selected landmark points. This usually requires an extensive search, which is often computationally expensive. We introduce a nonregular data partition algorithm that uses K-means clustering to group the landmarks based on the number of available processing cores. This step optimizes memory usage and data transfer. We have tested our method on the IBM Cell Broadband Engine (Cell/B.E.) platform. PMID:19964921
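
    The partition step reads directly as code: landmarks are clustered with K-means into as many groups as there are cores, so each core searches correspondences only within a spatially coherent subset. The landmark coordinates and core count below are illustrative assumptions, shown with scikit-learn rather than the paper's Cell/B.E. implementation.

        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(4)
        landmarks = rng.uniform(0, 512, size=(200, 2))   # 2D landmark positions
        n_cores = 8

        labels = KMeans(n_clusters=n_cores, n_init=10,
                        random_state=0).fit_predict(landmarks)
        groups = [landmarks[labels == c] for c in range(n_cores)]  # one work unit per core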

  5. A Framework for Web Usage Mining in Electronic Government

    NASA Astrophysics Data System (ADS)

    Zhou, Ping; Le, Zhongjian

    Web usage mining has been a major component of management strategies to enhance organizational analysis and decision making, and the literature on strategies and technologies for effectively employing it is quite vast. In recent years, E-government has received much attention from researchers and practitioners. Huge amounts of user access data are produced on electronic government Web sites every day. The role of these data in the success of government management cannot be overstated, because they affect government analysis, prediction, strategic, tactical, and operational planning, and control. Web usage mining in E-government has an important role to play in setting government objectives, discovering citizen behavior, and determining future courses of action, yet it has not received adequate attention from researchers or practitioners. Using the current literature, we developed the framework presented herein to promote a better understanding of the importance of Web usage mining in E-government, in hopes that it will stimulate more interest in this important area.

  6. Methodology for the inference of gene function from phenotype data.

    PubMed

    Ascensao, Joao A; Dolan, Mary E; Hill, David P; Blake, Judith A

    2014-12-12

    Biomedical ontologies are increasingly instrumental in the advancement of biological research primarily through their use to efficiently consolidate large amounts of data into structured, accessible sets. However, ontology development and usage can be hampered by the segregation of knowledge by domain that occurs due to independent development and use of the ontologies. The ability to infer data associated with one ontology to data associated with another ontology would prove useful in expanding information content and scope. We here focus on relating two ontologies: the Gene Ontology (GO), which encodes canonical gene function, and the Mammalian Phenotype Ontology (MP), which describes non-canonical phenotypes, using statistical methods to suggest GO functional annotations from existing MP phenotype annotations. This work is in contrast to previous studies that have focused on inferring gene function from phenotype primarily through lexical or semantic similarity measures. We have designed and tested a set of algorithms that represents a novel methodology to define rules for predicting gene function by examining the emergent structure and relationships between the gene functions and phenotypes rather than inspecting the terms semantically. The algorithms inspect relationships among multiple phenotype terms to deduce if there are cases where they all arise from a single gene function. We apply this methodology to data about genes in the laboratory mouse that are formally represented in the Mouse Genome Informatics (MGI) resource. From the data, 7444 rule instances were generated from five generalized rules, resulting in 4818 unique GO functional predictions for 1796 genes. We show that our method is capable of inferring high-quality functional annotations from curated phenotype data. As well as creating inferred annotations, our method has the potential to allow for the elucidation of unforeseen, biologically significant associations between gene function and phenotypes that would be overlooked by a semantics-based approach. Future work will include the implementation of the described algorithms for a variety of other model organism databases, taking full advantage of the abundance of available high quality curated data.

  7. Using the theory of planned behavior to predict infant restraint use in Saudi Arabia.

    PubMed

    Nelson, Anna; Modeste, Naomi N; Marshak, Helen H; Hopp, Joyce W

    2014-09-01

    To determine whether the theory of planned behavior (TPB) predicted intent of child restraint system (CRS) use among pregnant women in the Kingdom of Saudi Arabia (KSA). In this cross-sectional study conducted in Dallah Hospital, Riyadh, KSA during June-July 2013, 196 pregnant women completed surveys assessing their beliefs regarding CRS. Simultaneous observations were conducted among a different sample of 150 women to determine CRS usage at hospital discharge following the maternity stay. The logistic regression model with TPB constructs and covariates as predictors of CRS usage intent was significant (χ2=64.986, p<0.0001) and accounted for 38% of the variance in intent. There was an increase in the odds of intent for attitudes (31.5%, p<0.05), subjective norm (55.3%, p<0.001), and perceived behavioral control (76.9%, p<0.001). The 3 logistic regression models testing the association of the relevant sets of composite belief scores were also significant: attitudes (χ2=16.803, p<0.05), subjective norm (χ2=29.681, p<0.0001), and perceived behavioral control (χ2=20.516, p<0.05). The behavioral observation showed that none of the 150 women observed used a CRS for their newborn at discharge. The TPB constructs were significantly and independently associated with higher intent for CRS usage. While the TPB appears to be a useful tool to identify beliefs related to CRS usage intentions in KSA, the results of the separate behavioral observation indicate that intentions may not be related to actual usage of CRS in the Kingdom. Further studies are recommended to examine this association.

  8. Algorithm of first-aid management of dental trauma for medics and corpsmen.

    PubMed

    Zadik, Yehuda

    2008-12-01

    To bridge the gap between the need to provide prompt and proper treatment to dental trauma patients and the inadequate knowledge among medics and corpsmen, as well as the lack of instructions in first-aid textbooks and manuals, we reviewed the dental literature and present a simple algorithm for non-professional first-aid management of various injuries to the hard (teeth) and soft oral tissues. The recommended management of tooth avulsion, subluxation and luxation, crown fracture, and lip, tongue or gingival laceration is included in the algorithm. Along with a list of after-hours dental clinics, this symptom- and clinical-appearance-based algorithm is suited to tuck easily into a pocket for quick use by medics/corpsmen in an emergency situation. Although the algorithm was developed for the usage of military non-dental health-care providers, this method could be adjusted and employed in the civilian environment as well.

  9. An algorithm for management of deep brain stimulation battery replacements: devising a web-based battery estimator and clinical symptom approach.

    PubMed

    Montuno, Michael A; Kohner, Andrew B; Foote, Kelly D; Okun, Michael S

    2013-01-01

    Deep brain stimulation (DBS) is an effective technique that has been utilized to treat advanced and medication-refractory movement and psychiatric disorders. In order to avoid implanted pulse generator (IPG) failure and consequent adverse symptoms, a better understanding of IPG battery longevity and management is necessary. Existing methods for battery estimation lack the specificity required for clinical incorporation. Technical challenges prevent higher accuracy longevity estimations, and a better approach to managing end of DBS battery life is needed. The literature was reviewed and DBS battery estimators were constructed by the authors and made available on the web at http://mdc.mbi.ufl.edu/surgery/dbs-battery-estimator. A clinical algorithm for management of DBS battery life was constructed. The algorithm takes into account battery estimations and clinical symptoms. Existing methods of DBS battery life estimation utilize an interpolation of averaged current drains to calculate how long a battery will last. Unfortunately, this technique can only provide general approximations. There are inherent errors in this technique, and these errors compound with each iteration of the battery estimation. Some of these errors cannot be accounted for in the estimation process, and some of the errors stem from device variation, battery voltage dependence, battery usage, battery chemistry, impedance fluctuations, interpolation error, usage patterns, and self-discharge. We present web-based battery estimators along with an algorithm for clinical management. We discuss the perils of using a battery estimator without taking into account the clinical picture. Future work will be needed to provide more reliable management of implanted device batteries; however, implementation of a clinical algorithm that accounts for both estimated battery life and for patient symptoms should improve the care of DBS patients. © 2012 International Neuromodulation Society.
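
    A minimal sketch of the interpolation-of-averaged-current-drains estimate that the article describes (and critiques): remaining longevity is usable capacity divided by a current drain interpolated from a device table at the programmed settings. All numbers below are illustrative, not from any IPG datasheet, and each such step compounds the errors the authors enumerate.

        import numpy as np

        # Hypothetical table: average current drain (uA) at a few stimulation amplitudes (V).
        amps = np.array([1.0, 2.0, 3.0, 4.0])
        drain_ua = np.array([40.0, 70.0, 110.0, 160.0])

        setting_v = 2.6
        drain = np.interp(setting_v, amps, drain_ua)   # interpolated average drain
        capacity_uah = 1.2e6                           # usable capacity (uAh), illustrative
        years = capacity_uah / drain / (24 * 365)
        print(f"estimated longevity: {years:.1f} years")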

  10. Improving Early Warning Systems with Categorized Course Resource Usage

    ERIC Educational Resources Information Center

    Waddington, R. Joseph; Nam, SungJin; Lonn, Steven; Teasley, Stephanie D.

    2016-01-01

    Early Warning Systems (EWSs) aggregate multiple sources of data to provide timely information to stakeholders about students in need of academic support. There is an increasing need to incorporate relevant data about student behaviors into the algorithms underlying EWSs to improve predictors of students' success or failure. Many EWSs currently…

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shishlo, Andrei P; Chu, Paul; Pelaia II, Tom

    A data plotting package residing in the XAL tools set is presented. This package is based on Java SWING, and therefore it has the same portability as Java itself. The data types for charts, bar-charts, and color-surface plots are described. The algorithms, performance, interactive capabilities, limitations, and the best usage practices of this plotting package are discussed.

  12. A new method of real-time detection of changes in periodic data stream

    NASA Astrophysics Data System (ADS)

    Lyu, Chen; Lu, Guoliang; Cheng, Bin; Zheng, Xiangwei

    2017-07-01

    Change point detection in periodic time series is desirable in many practical usages. We present a novel algorithm for this task, which includes two phases: 1) anomaly measurement: on the basis of a typical regression model, we propose a new method to measure anomalies in a time series that does not require any reference data from other measurements; 2) change detection: we introduce a new martingale test for detection that can operate in an unsupervised and nonparametric way. We have conducted extensive experiments to systematically test our algorithm. The results make us believe that our algorithm can be directly applied in many real-world change-point-detection applications.
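
    A sketch in the same spirit, assuming the standard randomized power martingale over p-values of an anomaly score: residuals from a trivial one-step predictor serve as the strangeness measure, and the martingale grows once the scores stop looking exchangeable. The paper's own anomaly measure and martingale test are not reproduced.

        import numpy as np

        rng = np.random.default_rng(5)
        signal = np.concatenate([np.sin(np.linspace(0, 40 * np.pi, 400)),
                                 1.5 * np.sin(np.linspace(0, 40 * np.pi, 200))])

        eps, M, scores = 0.92, 1.0, []
        for t, x in enumerate(signal):
            pred = signal[t - 1] if t else 0.0       # one-step predictor stand-in
            s = abs(x - pred)                        # strangeness = residual magnitude
            scores.append(s)
            theta = rng.uniform()
            p = (sum(v > s for v in scores) +
                 theta * sum(v == s for v in scores)) / len(scores)
            M *= eps * max(p, 1e-12) ** (eps - 1.0)  # randomized power martingale
            if M > 20:                               # illustrative alarm threshold
                print("change detected at t =", t)   # true change point is t = 400
                break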

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blom, Philip Stephen; Marcillo, Omar Eduardo; Euler, Garrett Gene

    InfraPy is a Python-based analysis toolkit being developed at LANL. The algorithms are intended for ground-based nuclear detonation detection applications, to detect, locate, and characterize explosive sources using infrasonic observations. The implementation is usable as a stand-alone Python library or as a command-line-driven tool operating directly on a database. With multiple scientists working on the project, we've begun using a LANL git repository for collaborative development and version control. Current and planned work on InfraPy focuses on the development of new algorithms and propagation models. Collaboration with Southern Methodist University (SMU) has helped identify bugs and limitations of the algorithms. Current usage development focuses on library imports and the CLI.

  14. Workflow as a Service in the Cloud: Architecture and Scheduling Algorithms.

    PubMed

    Wang, Jianwu; Korambath, Prakashan; Altintas, Ilkay; Davis, Jim; Crawl, Daniel

    2014-01-01

    With more and more workflow systems adopting the cloud as their execution environment, it becomes increasingly challenging to efficiently manage various workflows, virtual machines (VMs) and workflow execution on VM instances. To make the system scalable and easy to extend, we design a Workflow as a Service (WFaaS) architecture with independent services. A core part of the architecture is how to efficiently respond to continuous workflow requests from users and schedule their executions in the cloud. Based on different targets, we propose four heuristic workflow scheduling algorithms for the WFaaS architecture, and analyze their differences and best usages in terms of performance, cost, and the price/performance ratio via experimental studies.

  15. CrossSim

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Plimpton, Steven J.; Agarwal, Sapan; Schiek, Richard

    2016-09-02

    CrossSim is a simulator for modeling neural-inspired machine learning algorithms on analog hardware, such as resistive memory crossbars. It includes noise models for reading and updating the resistances, which can be based on idealized equations or experimental data. It can also introduce noise and finite precision effects when converting values from digital to analog and vice versa. All of these effects can be turned on or off as an algorithm processes a data set and attempts to learn its salient attributes so that it can be categorized in the machine learning training/classification context. CrossSim thus allows the robustness, accuracy, and energy usage of a machine learning algorithm to be tested on simulated hardware.

  16. A Comparison of Vibration and Oil Debris Gear Damage Detection Methods Applied to Pitting Damage

    NASA Technical Reports Server (NTRS)

    Dempsey, Paula J.

    2000-01-01

    Helicopter Health and Usage Monitoring Systems (HUMS) must provide reliable, real-time monitoring of helicopter operating parameters to prevent damage to flight-critical components. Helicopter transmission diagnostics are an important part of a helicopter HUMS. In order to improve the reliability of transmission diagnostics, many researchers propose combining two technologies, vibration and oil monitoring, using data fusion and intelligent systems. Benefits of combining multiple sensors to make decisions include improved detection capabilities and an increased probability that an event is detected. However, if the sensors are inaccurate, or the features extracted from the sensors are poor predictors of transmission health, integration of these sensors will decrease the accuracy of damage prediction. For this reason, one must verify the individual integrity of vibration and oil analysis methods prior to integrating the two technologies. This research focuses on comparing the capability of two vibration algorithms, FM4 and NA4, and a commercially available on-line oil debris monitor to detect pitting damage on spur gears in the NASA Glenn Research Center Spur Gear Fatigue Test Rig. Results from this research indicate that the rate of change of debris mass measured by the oil debris monitor is comparable to the vibration algorithms in detecting gear pitting damage.

  17. Requirements Flowdown for Prognostics and Health Management

    NASA Technical Reports Server (NTRS)

    Goebel, Kai; Saxena, Abhinav; Roychoudhury, Indranil; Celaya, Jose R.; Saha, Bhaskar; Saha, Sankalita

    2012-01-01

    Prognostics and Health Management (PHM) principles hold considerable promise to change the game of lifecycle cost of engineering systems at high safety levels by providing a reliable estimate of future system states. This estimate is key for planning and decision making in an operational setting. While technology solutions have made considerable advances, the tie-in to the systems engineering process lags behind, which delays fielding of PHM-enabled systems. The derivation of specifications from high-level requirements for algorithm performance, needed to ensure quality predictions, is not well developed. From an engineering perspective, some key parameters driving the requirements for prognostic performance include: (1) the maximum allowable probability of failure (PoF) of the prognostic system, to bound the risk of losing an asset; (2) tolerable limits on proactive maintenance, to minimize missed opportunity of asset usage; (3) lead time, to specify the amount of advanced warning needed for actionable decisions; and (4) required confidence, to specify when a prognosis is sufficiently good to be used. This paper takes a systems engineering view of the requirements specification process and presents a method for the flowdown process. A case study based on an electric Unmanned Aerial Vehicle (e-UAV) scenario demonstrates how top-level requirements for performance, cost, and safety flow down to the health management level and specify quantitative requirements for prognostic algorithm performance.

  18. Analyzing gene expression from relative codon usage bias in Yeast genome: a statistical significance and biological relevance.

    PubMed

    Das, Shibsankar; Roymondal, Uttam; Sahoo, Satyabrata

    2009-08-15

    Based on the hypothesis that highly expressed genes are often characterized by strong compositional bias in terms of codon usage, there are a number of measures currently in use that quantify codon usage bias in genes, and hence provide numerical indices to predict the expression levels of genes. With the recent advent of an expression measure from the score of relative codon usage bias (RCBS), we have explicitly tested the performance of this numerical measure in predicting gene expression level, and we illustrate this with an analysis of Yeast genomes. In contradiction with previous studies, we observe a weak correlation between GC content and RCBS, but a selective pressure on codon preferences in highly expressed genes. The assertion that the expression of a given gene depends on the score of relative codon usage bias (RCBS) is supported by the data. We further observe a strong correlation between RCBS and protein length, indicating natural selection in favour of shorter genes being expressed at higher levels. We also attempt a statistical analysis to assess the strength of relative codon bias in genes as a guide to their likely expression level, suggesting a decrease of informational entropy in highly expressed genes.
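
    The underlying bookkeeping is simple to illustrate: count observed codon frequencies and compare them against the frequencies expected from the sequence's base composition. This conveys the idea behind scores such as RCBS but is not the paper's exact formula; the toy sequence is made up.

        from collections import Counter

        seq = "ATGGCTGCTAAAGCTGGTGCTTAA"   # toy coding sequence
        codons = [seq[i:i + 3] for i in range(0, len(seq) - 2, 3)]

        obs = Counter(codons)
        base_freq = Counter(seq)
        n = len(seq)
        for codon, count in sorted(obs.items()):
            expected = len(codons) * (base_freq[codon[0]] / n) \
                       * (base_freq[codon[1]] / n) * (base_freq[codon[2]] / n)
            print(codon, count, f"obs/exp = {count / expected:.2f}")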

  19. Managing time-substitutable electricity usage using dynamic controls

    DOEpatents

    Ghosh, Soumyadip; Hosking, Jonathan R.; Natarajan, Ramesh; Subramaniam, Shivaram; Zhang, Xiaoxuan

    2017-02-07

    A predictive-control approach allows an electricity provider to monitor and proactively manage peak and off-peak residential intra-day electricity usage in an emerging smart energy grid using time-dependent dynamic pricing incentives. The daily load is modeled as time-shifted, but cost-differentiated and substitutable, copies of the continuously-consumed electricity resource, and a consumer-choice prediction model is constructed to forecast the corresponding intra-day shares of total daily load according to this model. This is embedded within an optimization framework for managing the daily electricity usage. A series of transformations are employed, including the reformulation-linearization technique (RLT) to obtain a Mixed-Integer Programming (MIP) model representation of the resulting nonlinear optimization problem. In addition, various regulatory and pricing constraints are incorporated in conjunction with the specified profit and capacity utilization objectives.

  20. Managing time-substitutable electricity usage using dynamic controls

    DOEpatents

    Ghosh, Soumyadip; Hosking, Jonathan R.; Natarajan, Ramesh; Subramaniam, Shivaram; Zhang, Xiaoxuan

    2017-02-21

    A predictive-control approach allows an electricity provider to monitor and proactively manage peak and off-peak residential intra-day electricity usage in an emerging smart energy grid using time-dependent dynamic pricing incentives. The daily load is modeled as time-shifted, but cost-differentiated and substitutable, copies of the continuously-consumed electricity resource, and a consumer-choice prediction model is constructed to forecast the corresponding intra-day shares of total daily load according to this model. This is embedded within an optimization framework for managing the daily electricity usage. A series of transformations are employed, including the reformulation-linearization technique (RLT) to obtain a Mixed-Integer Programming (MIP) model representation of the resulting nonlinear optimization problem. In addition, various regulatory and pricing constraints are incorporated in conjunction with the specified profit and capacity utilization objectives.

  1. NOVA: A new multi-level logic simulator

    NASA Technical Reports Server (NTRS)

    Miles, L.; Prins, P.; Cameron, K.; Shovic, J.

    1990-01-01

    A new logic simulator developed at the NASA Space Engineering Research Center for VLSI Design is described. The simulator is multi-level, able to simulate from the switch level through the functional model level. NOVA is currently in the Beta test phase and has been used to simulate chips designed for the NASA Space Station and the Explorer missions. A new algorithm was devised to simulate bi-directional pass transistors, and a preliminary version of the algorithm is presented. The usage of functional models in NOVA is also described, and performance figures are presented.

  2. PELEC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2017-05-17

    PeleC is an adaptive-mesh compressible hydrodynamics code for reacting flows. It solves the compressible Navier-Stokes equations with multispecies transport in a block-structured framework. The resulting algorithm is well suited for flows with localized resolution requirements and is robust to discontinuities. User-controllable refinement criteria have the potential to result in extremely small numerical dissipation and dispersion, making this code appropriate for both research and applied usage. The code is built on the AMReX library, which facilitates hierarchical parallelism and manages distributed-memory parallelism; PeleC algorithms are implemented to express shared-memory parallelism.

  3. Frequency Estimator Performance for a Software-Based Beacon Receiver

    NASA Technical Reports Server (NTRS)

    Zemba, Michael J.; Morse, Jacquelynne Rose; Nessel, James A.; Miranda, Felix

    2014-01-01

    As propagation terminals have evolved, their design has trended more toward a software-based approach that facilitates convenient adjustment and customization of the receiver algorithms. One potential improvement is the implementation of a frequency estimation algorithm, through which the primary frequency component of the received signal can be estimated with a much greater resolution than with a simple peak search of the FFT spectrum. To select an estimator for usage in a QV-band beacon receiver, analysis of six frequency estimators was conducted to characterize their effectiveness as they relate to beacon receiver design.
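
    One classic estimator of the kind evaluated here is quadratic interpolation of the FFT magnitude peak, which refines the frequency estimate well below the raw bin spacing. The sketch below assumes a clean windowed tone; the sample rate and tone frequency are illustrative, not beacon-receiver settings.

        import numpy as np

        fs, n, f_true = 1000.0, 1024, 123.4
        t = np.arange(n) / fs
        x = np.cos(2 * np.pi * f_true * t)

        spec = np.abs(np.fft.rfft(x * np.hanning(n)))
        k = int(np.argmax(spec))                     # coarse peak bin
        a, b, c = np.log(spec[k - 1]), np.log(spec[k]), np.log(spec[k + 1])
        delta = 0.5 * (a - c) / (a - 2 * b + c)      # parabolic peak offset in bins
        f_est = (k + delta) * fs / n
        print(f"true {f_true} Hz, estimated {f_est:.3f} Hz")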

  4. Solution of nonlinear multivariable constrained systems using a gradient projection digital algorithm that is insensitive to the initial state

    NASA Technical Reports Server (NTRS)

    Hargrove, A.

    1982-01-01

    Optimal digital control of nonlinear multivariable constrained systems was studied. The optimal controller in the form of an algorithm was improved and refined by reducing running time and storage requirements. A particularly difficult system of nine nonlinear state variable equations was chosen as a test problem for analyzing and improving the controller. Lengthy analysis, modeling, computing and optimization were accomplished. A remote interactive teletype terminal was installed. Analysis requiring computer usage of short duration was accomplished using Tuskegee's VAX 11/750 system.

  5. Lutheran School Teachers' Instructional Usage of the Interactive Whiteboard

    ERIC Educational Resources Information Center

    Powers, Jillian R.

    2014-01-01

    The purpose of this mixed methods study was twofold. First, the study assessed whether Davis' (1989) Technology Acceptance Model (TAM) was useful in predicting instructional usage of the interactive whiteboard (IWB), as reported by K-8 teachers. Second, the study set out to understand what motivated those teachers to use the IWB for classroom…

  6. Predicting Student Success via Online Homework Usage

    ERIC Educational Resources Information Center

    Bowman, Charles R.; Gulacar, Ozcan; King, Daniel B.

    2014-01-01

    With the amount of data available through an online homework system about students' study habits, it stands to reason that such systems can be used to identify likely student outcomes. A study was conducted to see how student usage of an online chemistry homework system (OWL) correlated with student success in a general chemistry course. Online…

  7. Data-driven modeling of hydroclimatic trends and soil moisture: Multi-scale data integration and decision support

    NASA Astrophysics Data System (ADS)

    Coopersmith, Evan Joseph

    The techniques and information employed for decision-making vary with the spatial and temporal scope of the assessment required. In modern agriculture, the farm owner or manager makes decisions on a day-to-day or even hour-to-hour basis for dozens of fields scattered over as much as a fifty-mile radius from some central location. Following precipitation events, land begins to dry, and owners and managers often trace serpentine paths of 150+ miles every morning to inspect the conditions of their various parcels. Their objective is appropriate resource usage: is a given tract of land dry enough to be workable at this moment, or would they be better served by waiting? Longer term, these owners and managers decide which seeds will grow most effectively and which crops will make their operations profitable. At even longer temporal scales, decisions are made regarding which fields must be acquired or sold and what types of equipment will be necessary in future operations. This work develops and validates algorithms for the shorter-term decisions, along with models of national climate patterns and climate changes to enable longer-term operational planning. A test site at the University of Illinois South Farms (Urbana, IL, USA) served as the primary location to validate machine learning algorithms, employing public sources of precipitation and potential evapotranspiration to model the wetting/drying process. In expanding such local decision support tools to locations on a national scale, one must recognize the heterogeneity of hydroclimatic and soil characteristics throughout the United States. Machine learning algorithms modeling the wetting/drying process must address this variability, and yet it is wholly impractical to construct a separate algorithm for every conceivable location. For this reason, a national hydrological classification system is presented, allowing clusters of hydroclimatic similarity to emerge naturally from annual regime curve data and facilitate the development of cluster-specific algorithms. Given the desire to enable intelligent decision-making at any location, this classification system is developed in a manner that allows for classification anywhere in the U.S., even in an ungauged basin. Daily time series data from 428 catchments in the MOPEX database are analyzed to produce an empirical classification tree, partitioning the United States into regions of hydroclimatic similarity. In constructing a classification tree based upon 55 years of data, it is important to recognize the non-stationary nature of climate data. Shifts in climatic regimes will cause certain locations to change position within the classification tree, requiring decision-makers to alter land usage, farming practices, and equipment needs, and requiring algorithms to adjust accordingly. This work adapts the classification model to address the issue of regime shifts over larger temporal scales and suggests how land usage and farming protocols may vary with hydroclimatic shifts in the decades to come. Finally, the generalizability of the hydroclimatic classification system is tested with a physically-based soil moisture model calibrated at several locations throughout the continental United States. The soil moisture model is calibrated at a given site and then applied with the same parameters at other sites within and outside the same hydroclimatic class. The model's performance deteriorates minimally if the calibration and validation locations are within the same hydroclimatic class, but deteriorates significantly if they are located in different hydroclimatic classes. These soil moisture estimates at the field scale are then further refined by the introduction of LiDAR elevation data, distinguishing faster-drying peaks and ridges from slower-drying valleys. The inclusion of LiDAR enables multiple locations within the same field to be predicted accurately despite non-identical topography. This cross-application of parametric calibrations and LiDAR-driven disaggregation facilitates decision support at locations without proximally-located soil moisture sensors.
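
    As a hedged illustration of how clusters of hydroclimatic similarity might emerge from annual regime curve data (a simple k-means sketch on synthetic curves, not the dissertation's actual classification tree procedure):

        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(0)

        # Synthetic stand-in for the MOPEX data: one 12-month runoff regime
        # curve per catchment, drawn from three archetypal seasonal shapes.
        months = np.arange(12)
        archetypes = np.stack([
            np.cos(2 * np.pi * (months - 3) / 12) + 1.5,   # spring-peaked
            np.cos(2 * np.pi * (months - 8) / 12) + 1.5,   # autumn-peaked
            np.full(12, 1.5),                              # weakly seasonal
        ])
        curves = np.vstack([
            archetypes[i] + 0.15 * rng.standard_normal(12)
            for i in rng.integers(0, 3, size=120)
        ])

        # Normalize each curve by its annual mean so clusters reflect regime
        # shape (seasonality) rather than absolute runoff volume.
        curves /= curves.mean(axis=1, keepdims=True)

        labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(curves)
        for c in range(3):
            print(f"cluster {c}: {np.sum(labels == c)} catchments")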

  8. A Collaborative Location Based Travel Recommendation System through Enhanced Rating Prediction for the Group of Users.

    PubMed

    Ravi, Logesh; Vairavasundaram, Subramaniyaswamy

    2016-01-01

    The rapid growth of the web and its applications has made recommender systems enormously important. Applied in various domains, recommender systems are designed to generate suggestions, such as items or services, based on user interests. However, recommender systems suffer from many issues that reduce their effectiveness. Integrating powerful data management techniques into recommender systems can address such issues, and the quality of recommendations can be increased significantly. Recent research on recommender systems reveals the idea of utilizing social network data to enhance traditional recommender systems with better prediction and improved accuracy. This paper surveys recommender systems based on social network data, considering the usage of various recommendation algorithms, system functionalities, types of interfaces, filtering techniques, and artificial intelligence techniques. After examining the objectives, methodologies, and data sources of the existing models, the paper aims to help anyone interested in the development of travel recommendation systems and to facilitate future research directions. We also propose a location recommendation system based on a social pertinent trust walker (SPTW) and compare the results with existing baseline random walk models. We then enhance the SPTW model to produce recommendations for groups of users. The results obtained from the experiments are presented.
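
    As a hedged illustration of the baseline idea that SPTW builds on (a plain random walk with restart over a toy trust graph; the users, items, and parameters below are hypothetical, and this is not the SPTW algorithm itself):

        import random
        from collections import defaultdict

        # Hypothetical toy data: directed trust edges and user-location ratings.
        trust = {"alice": ["bob", "carol"], "bob": ["carol"], "carol": ["alice"]}
        ratings = {"bob": {"museum": 5}, "carol": {"cafe": 4, "museum": 3}}

        def trust_walk_scores(source, steps=10000, restart=0.15, seed=1):
            """Score locations for `source` via random walks on the trust graph.

            The walk hops along trust edges and, with probability `restart`,
            jumps back to the source. Whenever it lands on a user who rated a
            location, that rating is credited to the location's running mean.
            """
            rng = random.Random(seed)
            totals, counts = defaultdict(float), defaultdict(int)
            user = source
            for _ in range(steps):
                if rng.random() < restart or not trust.get(user):
                    user = source
                else:
                    user = rng.choice(trust[user])
                for item, r in ratings.get(user, {}).items():
                    totals[item] += r
                    counts[item] += 1
            return {item: totals[item] / counts[item] for item in totals}

        print(trust_walk_scores("alice"))   # mean trusted rating per location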

  9. Study on data compression algorithm and its implementation in portable electronic device for Internet of Things applications

    NASA Astrophysics Data System (ADS)

    Asilah Khairi, Nor; Bahari Jambek, Asral

    2017-11-01

    An Internet of Things (IoT) device is usually powered by a small battery, which does not last long. As a result, saving energy in IoT devices has become an important issue. Since radio communication is a primary cause of power consumption, some researchers have proposed compression algorithms with the purpose of reducing the amount of data transmitted. Several data compression algorithms from previous reference papers are discussed in this paper; their descriptions were collected and summarized in table form. From the analysis, the MAS compression algorithm was selected as the project prototype due to its high potential for meeting the project requirements. It also produced better performance regarding energy saving, memory usage, and data transmission efficiency, and the method is suitable for implementation in wireless sensor networks (WSNs). The MAS compression algorithm will be prototyped and applied in portable electronic devices for Internet of Things applications.

  10. A Feature and Algorithm Selection Method for Improving the Prediction of Protein Structural Class.

    PubMed

    Ni, Qianwu; Chen, Lei

    2017-01-01

    Correct prediction of protein structural class is beneficial to the investigation of protein functions, regulations and interactions. In recent years, several computational methods have been proposed in this regard. However, it remains a great challenge to select a proper classification algorithm and to extract the essential features to participate in classification. In this study, a feature and algorithm selection method is presented for improving the accuracy of protein structural class prediction. Amino acid compositions and physicochemical features were adopted to represent the features, and thirty-eight machine learning algorithms collected in Weka were employed. All features were first analyzed by a feature selection method, minimum redundancy maximum relevance (mRMR), producing a feature list. Then, several feature sets were constructed by adding features from the list one by one. For each feature set, the thirty-eight algorithms were executed on a dataset in which proteins were represented by the features in the set. The classes predicted by these algorithms, together with the true class of each protein, were collected to construct a new dataset, which was analyzed by the mRMR method to yield an algorithm list. Algorithms were then taken from this list one by one to build ensemble prediction models, and the ensemble model with the best performance was selected as the optimal one. Experimental results indicate that the constructed model is much superior to models using a single algorithm and to models that adopt only the feature selection procedure or only the algorithm selection procedure. Both procedures are genuinely helpful for building an ensemble prediction model with better performance.
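
    A hedged sketch of the incremental selection idea, using scikit-learn's mutual information ranking as a stand-in for the mRMR method the paper actually uses, and a handful of classifiers in place of the thirty-eight Weka algorithms:

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.feature_selection import mutual_info_classif
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score
        from sklearn.naive_bayes import GaussianNB

        X, y = make_classification(n_samples=300, n_features=20, random_state=0)

        # Rank features (mutual information stands in for mRMR here).
        order = np.argsort(mutual_info_classif(X, y, random_state=0))[::-1]

        # Grow feature sets by adding ranked features one by one; keep the
        # best-performing set size for a reference classifier.
        best_k, best_acc = 1, 0.0
        for k in range(1, len(order) + 1):
            acc = cross_val_score(GaussianNB(), X[:, order[:k]], y, cv=5).mean()
            if acc > best_acc:
                best_k, best_acc = k, acc
        print(f"best feature-set size: {best_k} (cv accuracy {best_acc:.3f})")

        # Compare candidate algorithms on the selected feature set; the paper
        # feeds a ranked list like this into an ensemble rather than one model.
        for clf in (GaussianNB(), LogisticRegression(max_iter=1000),
                    RandomForestClassifier(random_state=0)):
            acc = cross_val_score(clf, X[:, order[:best_k]], y, cv=5).mean()
            print(type(clf).__name__, f"{acc:.3f}")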

  11. Model Verification and Validation Concepts for a Probabilistic Fracture Assessment Model to Predict Cracking of Knife Edge Seals in the Space Shuttle Main Engine High Pressure Oxidizer

    NASA Technical Reports Server (NTRS)

    Pai, Shantaram S.; Riha, David S.

    2013-01-01

    Physics-based models are routinely used to predict the performance of engineered systems to make decisions such as when to retire system components, how to extend the life of an aging system, or if a new design will be safe or available. Model verification and validation (V&V) is a process to establish credibility in model predictions. Ideally, carefully controlled validation experiments will be designed and performed to validate models or submodels. In reality, time and cost constraints limit experiments and even model development. This paper describes elements of model V&V during the development and application of a probabilistic fracture assessment model to predict cracking in space shuttle main engine high-pressure oxidizer turbopump knife-edge seals. The objective of this effort was to assess the probability of initiating and growing a crack to a specified failure length in specific flight units for different usage and inspection scenarios. The probabilistic fracture assessment model developed in this investigation combined a series of submodels describing the usage, temperature history, flutter tendencies, tooth stresses and numbers of cycles, fatigue cracking, nondestructive inspection, and finally the probability of failure. The analysis accounted for unit-to-unit variations in temperature, flutter limit state, flutter stress magnitude, and fatigue life properties. The investigation focused on the calculation of relative risk rather than absolute risk between the usage scenarios. Verification predictions were first performed for three units with known usage and cracking histories to establish credibility in the model predictions. Then, numerous predictions were performed for an assortment of operating units that had flown recently or that were projected for future flights. Calculations were performed using two NASA-developed software tools: NESSUS® for the probabilistic analysis, and NASGRO® for the fracture mechanics analysis. The goal of these predictions was to provide additional information to guide decisions on the potential of reusing existing and installed units prior to the new design certification.

  12. Application of XGBoost algorithm in hourly PM2.5 concentration prediction

    NASA Astrophysics Data System (ADS)

    Pan, Bingyue

    2018-02-01

    In view of the need for techniques to predict hourly PM2.5 concentration in China, this paper applies the XGBoost (Extreme Gradient Boosting) algorithm to predict hourly PM2.5 concentration. Air quality monitoring data for the city of Tianjin were analyzed using the XGBoost algorithm. The prediction performance of the XGBoost method is evaluated by comparing observed and predicted PM2.5 concentrations using three measures of forecast accuracy. The XGBoost method is also compared with the random forest algorithm, multiple linear regression, decision tree regression and support vector regression. The results demonstrate that the XGBoost algorithm outperforms these other data mining methods.
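
    A minimal sketch of this modeling setup with the xgboost Python package, using synthetic features standing in for the Tianjin monitoring records:

        import numpy as np
        from sklearn.metrics import mean_absolute_error, mean_squared_error
        from sklearn.model_selection import train_test_split
        from xgboost import XGBRegressor

        rng = np.random.default_rng(0)

        # Synthetic stand-in: predict hourly PM2.5 from a few co-pollutant and
        # meteorological features (the paper uses real Tianjin monitoring data).
        n = 2000
        X = rng.normal(size=(n, 5))           # e.g. SO2, NO2, CO, wind, humidity
        y = 50 + 20 * X[:, 0] - 10 * X[:, 3] + 5 * rng.normal(size=n)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                                  random_state=0)
        model = XGBRegressor(n_estimators=300, max_depth=4, learning_rate=0.05)
        model.fit(X_tr, y_tr)
        pred = model.predict(X_te)

        # Two common forecast-accuracy measures: RMSE and MAE.
        print("RMSE:", mean_squared_error(y_te, pred) ** 0.5)
        print("MAE :", mean_absolute_error(y_te, pred))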

  13. High-performance technology for indexing of high volumes of Earth remote sensing data

    NASA Astrophysics Data System (ADS)

    Strotov, Valery V.; Taganov, Alexander I.; Kolesenkov, Aleksandr N.; Kostrov, Boris V.

    2017-10-01

    This paper proposes a technology for the search, indexing, cataloging and distribution of aerospace images on the basis of a geoinformation approach combined with cluster and spectral analysis, and considers the information and algorithmic support of the system. The functional scheme of the system and the structure of the geographical database have been developed on the basis of geographical online portal technology. Taking into account the heterogeneity of information obtained from various sources, it is reasonable to apply a geoinformation platform that allows analyzing the spatial location of objects and territories and executing complex processing of information. The geoinformation platform is based on a cartographic foundation with a uniform coordinate system, a geographical database, and a set of algorithms and program modules for the execution of various tasks. A mechanism is also proposed whereby individual users and companies can add images, taken with professional or amateur devices and processed by various software tools, to the image archive. Combined usage of visual and instrumental approaches significantly expands the application area of Earth remote sensing data. Development and implementation of new algorithms based on the combined usage of new methods for processing high volumes of structured and unstructured data will increase the periodicity and rate of data updating. The paper shows that applying the proposed algorithms for search, indexing and cataloging of aerospace images provides easy access to information spread across hundreds of suppliers and can increase the access rate to aerospace images by up to a factor of five in comparison with current analogues.

  14. Calibration of a crop model to irrigated water use using a genetic algorithm

    USDA-ARS?s Scientific Manuscript database

    Near-term consumption of groundwater for irrigated agriculture in the High Plains Aquifer supports a dynamic bio-socio-economic system, all parts of which will be impacted by a future transition to sustainable usage that matches natural recharge rates. Plants are the foundation of this system and so...
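
    As a hedged sketch of such a genetic-algorithm calibration (the crop-response function, parameter names, and observed value below are hypothetical stand-ins for the actual crop model and irrigation data):

        import random

        random.seed(42)

        # Hypothetical forward model: seasonal irrigated water use (mm) as a
        # function of two crop parameters to be calibrated.
        def crop_water_use(kc, root_depth):
            return 300.0 * kc + 40.0 * root_depth

        OBSERVED = 520.0    # observed irrigated water use (mm), invented value

        def fitness(params):
            return -abs(crop_water_use(*params) - OBSERVED)   # higher is better

        def evolve(pop_size=40, generations=60):
            """Tiny real-coded GA: tournament selection, blend crossover, mutation."""
            pop = [(random.uniform(0.3, 1.5), random.uniform(0.5, 2.5))
                   for _ in range(pop_size)]
            for _ in range(generations):
                new_pop = []
                for _ in range(pop_size):
                    p1 = max(random.sample(pop, 3), key=fitness)   # tournament
                    p2 = max(random.sample(pop, 3), key=fitness)
                    w = random.random()                            # blend crossover
                    child = tuple(w * a + (1 - w) * b for a, b in zip(p1, p2))
                    if random.random() < 0.2:                      # mutation
                        child = (child[0] + random.gauss(0, 0.05),
                                 child[1] + random.gauss(0, 0.1))
                    new_pop.append(child)
                pop = new_pop
            return max(pop, key=fitness)

        kc, rd = evolve()
        print(f"calibrated kc={kc:.2f}, root depth={rd:.2f} m, "
              f"simulated use={crop_water_use(kc, rd):.0f} mm")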

  15. Design of a Solar Tracking Interactive Kiosk

    ERIC Educational Resources Information Center

    Greene, Nathaniel R.; Brunskill, Jeffrey C.

    2017-01-01

    A two-axis solar tracker and its interactive kiosk were designed by an interdisciplinary team of students and faculty. The objective was to develop a publicly accessible kiosk that would facilitate the study of energy usage and production on campus. Tracking is accomplished by an open-loop algorithm, microcontroller, and ham radio rotator. Solar…

  16. Reasoning abstractly about resources

    NASA Technical Reports Server (NTRS)

    Clement, B.; Barrett, A.

    2001-01-01

    This paper describes a way to schedule high-level activities before distributing them across multiple rovers in order to coordinate the resultant use of shared resources, regardless of how each rover decides to perform its activities. We present an algorithm for summarizing the metric resource requirements of an abstract activity based on the resource usages of its potential refinements.
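
    A hedged sketch of one such summary: for each shared resource, take the envelope of usage over all potential refinements of the abstract activity, so a scheduler can reason about the activity before any rover commits to a refinement (the refinements and resource names below are hypothetical):

        # Hypothetical refinements of one abstract activity, each mapping a
        # shared resource to the amount that refinement would consume.
        REFINEMENTS = [
            {"power_w": 35, "bandwidth_kbps": 10},   # drive-and-image variant
            {"power_w": 20, "bandwidth_kbps": 40},   # stationary-relay variant
            {"power_w": 28, "bandwidth_kbps": 25},   # hybrid variant
        ]

        def summarize(refinements):
            """Summarize an abstract activity's metric resource requirements.

            Returns, per resource, the (min, max) usage over all refinements:
            the max is a safe bound for conflict checking before a rover has
            chosen how to perform the activity; the min is certainly consumed.
            """
            resources = set().union(*refinements)
            return {r: (min(ref.get(r, 0) for ref in refinements),
                        max(ref.get(r, 0) for ref in refinements))
                    for r in resources}

        for resource, (lo, hi) in sorted(summarize(REFINEMENTS).items()):
            print(f"{resource}: guaranteed {lo}, worst case {hi}")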

  17. Using the theory of planned behavior to predict infant restraint use in Saudi Arabia

    PubMed Central

    Nelson, Anna; Modeste, Naomi N.; Marshak, Helen H.; Hopp, Joyce W.

    2014-01-01

    Objectives To determine whether the theory of planned behavior (TPB) predicted intent of child restraint system (CRS) use among pregnant women in the Kingdom of Saudi Arabia (KSA). Methods In this cross-sectional study conducted in Dallah Hospital, Riyadh, KSA during June-July 2013, 196 pregnant women completed surveys assessing their beliefs regarding CRS. Simultaneous observations were conducted among a different sample of 150 women to determine CRS usage at hospital discharge following maternity stay. Results Logistic regression model with TPB constructs and covariates as predictors of CRS usage intent was significant (χ2=64.986, p<0.0001) and predicted 38% of intent. There was an increase in odds of intent for attitudes (31.5%, p<0.05), subjective norm (55.3%, p<0.001), and perceived behavioral control (76.9%, p<0.001). The 3 logistic regression models testing the association of the relevant set of composite belief scores were also significant for attitudes (χ2=16.803, p<0.05), subjective norm (χ2=29.681, p<0.0001), and perceived behavioral control (χ2=20.516, p<0.05). The behavioral observation showed that none of the 150 women observed used CRS for their newborn at discharge. Conclusion: The TPB constructs were significantly and independently associated with higher intent for CRS usage. While TPB appears to be a useful tool to identify beliefs related to CRS usage intentions in KSA, the results of the separate behavioral observation indicate that intentions may not be related to the actual usage of CRS in the Kingdom. Further studies are recommended to examine this association. PMID:25228177

  18. Mutation Bias Favors Protein Folding Stability in the Evolution of Small Populations

    PubMed Central

    Porto, Markus; Bastolla, Ugo

    2010-01-01

    Mutation bias in prokaryotes varies from extreme adenine and thymine (AT) in obligatory endosymbiotic or parasitic bacteria to extreme guanine and cytosine (GC), for instance in actinobacteria. GC mutation bias deeply influences the folding stability of proteins, making proteins on average less hydrophobic and therefore less stable with respect to unfolding but also less susceptible to misfolding and aggregation. We study a model where proteins evolve subject to selection for folding stability under given mutation bias, population size, and neutrality. We find a non-neutral regime where, for any given population size, there is an optimal mutation bias that maximizes fitness. Interestingly, this optimal GC usage is small for small populations, large for intermediate populations, and around 50% for large populations. This result is robust with respect to the definition of the fitness function and to the protein structures studied. Our model suggests that small populations evolving with small GC usage eventually accumulate a significant selective advantage over populations evolving without this bias. This provides a possible explanation for the observation that most species adopting obligatory intracellular lifestyles, with a consequent reduction of effective population size, shifted their mutation spectrum towards AT. The model also predicts that large GC usage is optimal for intermediate population size. To test these predictions we estimated the effective population sizes of bacterial species using the optimal codon usage coefficients computed by dos Reis et al. and the synonymous to non-synonymous substitution ratio computed by Daubin and Moran. We found that the population sizes estimated in these ways are significantly smaller for species with small and large GC usage compared to species with no bias, which supports our prediction. PMID:20463869

  19. E-Book Usage and the "Choice" Outstanding Academic Book List: Is There a Correlation?

    ERIC Educational Resources Information Center

    Carter Williams, Karen; Best, Rickey

    2006-01-01

    In this study, the staff of the library at Auburn University at Montgomery analyzed circulation patterns for electronic books in the fields of Political Science, Public Administration and Law to see if favorable "Choice" reviews can be used to predict usage of electronic books. A comparison of the circulations between print and…

  20. Bilingual Children's Acquisition of the Past Tense: A Usage-Based Approach

    ERIC Educational Resources Information Center

    Paradis, Johanne; Nicoladis, Elena; Crago, Martha; Genesee, Fred

    2011-01-01

    Bilingual and monolingual children's (mean age = 4;10) elicited production of the past tense in both English and French was examined in order to test predictions from Usage-Based theory regarding the sensitivity of children's acquisition rates to input factors such as variation in exposure time and the type/token frequency of morphosyntactic…

  1. Epidemic failure detection and consensus for extreme parallelism

    DOE PAGES

    Katti, Amogh; Di Fatta, Giuseppe; Naughton, Thomas; ...

    2017-02-01

    Future extreme-scale high-performance computing systems will be required to work under frequent component failures. The MPI Forum's User Level Failure Mitigation proposal has introduced an operation, MPI_Comm_shrink, to synchronize the alive processes on the list of failed processes, so that applications can continue to execute even in the presence of failures by adopting algorithm-based fault tolerance techniques. This MPI_Comm_shrink operation requires a failure detection and consensus algorithm. This paper presents three novel failure detection and consensus algorithms using gossiping. The proposed algorithms were implemented and tested using the Extreme-scale Simulator. The results show that in all algorithms the number of gossip cycles to achieve global consensus scales logarithmically with system size. The second algorithm also shows better scalability in terms of memory and network bandwidth usage and a perfect synchronization in achieving global consensus. The third approach is a three-phase distributed failure detection and consensus algorithm that provides consistency guarantees even in very large and extreme-scale systems while remaining memory and bandwidth efficient.
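
    A hedged sketch of the gossip-based dissemination that such algorithms build on (a toy simulation, not one of the paper's three algorithms): each alive process keeps a set of suspected failures and merges it with a random peer's view every cycle, so a consistent global view emerges in roughly logarithmically many cycles:

        import random

        random.seed(7)

        N = 64
        FAILED = {3, 17, 42}                 # crashed processes
        alive = [p for p in range(N) if p not in FAILED]

        # Initially each failure is suspected by a single observer process.
        suspects = {p: set() for p in alive}
        for observer, failed in zip(alive, FAILED):
            suspects[observer].add(failed)

        cycles = 0
        while any(suspects[p] != FAILED for p in alive):
            cycles += 1
            for p in alive:
                q = random.choice(alive)             # random gossip partner
                merged = suspects[p] | suspects[q]   # exchange and merge views
                suspects[p] = suspects[q] = merged
            if cycles > 10 * N:                      # safety valve for the demo
                break

        print(f"consistent global view of failures after {cycles} gossip cycles")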

  2. Learning Instance-Specific Predictive Models

    PubMed Central

    Visweswaran, Shyam; Cooper, Gregory F.

    2013-01-01

    This paper introduces a Bayesian algorithm for constructing predictive models from data that are optimized to predict a target variable well for a particular instance. This algorithm learns Markov blanket models, carries out Bayesian model averaging over a set of models to predict the target variable of the instance at hand, and employs an instance-specific heuristic to locate a set of suitable models to average over. We call this method the instance-specific Markov blanket (ISMB) algorithm. The ISMB algorithm was evaluated on 21 UCI data sets using five different performance measures, and its performance was compared to that of several commonly used predictive algorithms, including naïve Bayes, C4.5 decision tree, logistic regression, neural networks, k-Nearest Neighbor, Lazy Bayesian Rules, and AdaBoost. Over all the data sets, the ISMB algorithm performed better on average, on all performance measures, than all the comparison algorithms. PMID:25045325

  3. A High Performance Cloud-Based Protein-Ligand Docking Prediction Algorithm

    PubMed Central

    Chen, Jui-Le; Yang, Chu-Sing

    2013-01-01

    The potential of predicting druggability for a particular disease by integrating biological and computer science technologies has witnessed success in recent years. Although computer science technologies can be used to reduce the costs of pharmaceutical research, the computation time of structure-based protein-ligand docking prediction has remained unsatisfactory until now. Hence, in this paper, a novel docking prediction algorithm, named the fast cloud-based protein-ligand docking prediction algorithm (FCPLDPA), is presented to accelerate docking prediction. The proposed algorithm leverages two high-performance operators: (1) a novel migration (information exchange) operator designed specially for cloud-based environments to reduce the computation time; and (2) an efficient operator aimed at filtering out the worst search directions. Our simulation results illustrate that the proposed method outperforms the other docking algorithms compared in this paper in terms of both the computation time and the quality of the end result. PMID:23762864

  4. Sleeping with technology: cognitive, affective, and technology usage predictors of sleep problems among college students.

    PubMed

    Rosen, Larry; Carrier, Louis M; Miller, Aimee; Rokkum, Jeffrey; Ruiz, Abraham

    2016-03-01

    Sleep problems related to technology affect college students through several potential mechanisms including displacement of sleep due to technology use, executive functioning abilities, and the impact of emotional states related to stress and anxiety about technology availability. In the present study, cognitive and affective factors that influence technology usage were examined for their impact upon sleep problems. More than 700 US college students completed an online questionnaire addressing technology usage, anxiety/dependence, executive functioning, nighttime phone usage, bedtime phone location, and sleep problems. A path model controlling for background variables was tested using the data. The results showed that executive dysfunction directly predicted sleep problems as well as affected sleep problems through nighttime awakenings. In addition, anxiety/dependence increased daily smartphone usage and also increased nighttime awakenings, which, in turn, affected sleep problems. Thus, both the affective and cognitive factors that influence technology usage affected sleep problems.

  5. Prediction and characterization of application power use in a high-performance computing environment

    DOE PAGES

    Bugbee, Bruce; Phillips, Caleb; Egan, Hilary; ...

    2017-02-27

    Power use in data centers and high-performance computing (HPC) facilities has grown in tandem with increases in the size and number of these facilities. Substantial innovation is needed to enable meaningful reduction in energy footprints in leadership-class HPC systems. In this paper, we focus on characterizing and investigating application-level power usage. We demonstrate potential methods for predicting power usage based on a priori and in situ characteristics. Lastly, we highlight a potential use case of this method through a simulated power-aware scheduler using historical jobs from a real scientific HPC system.
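
    A hedged sketch of the prediction step on synthetic job logs (the features and power model below are invented stand-ins for the a priori job characteristics the paper uses):

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(3)

        # Synthetic job log: [nodes, requested hours, app id] -> mean power (kW).
        n = 1500
        nodes = rng.integers(1, 256, size=n)
        hours = rng.uniform(0.1, 48.0, size=n)
        app = rng.integers(0, 8, size=n)              # categorical application tag
        power = 0.3 * nodes * (1.0 + 0.1 * app) + rng.normal(0.0, 2.0, size=n)

        X = np.column_stack([nodes, hours, app])
        X_tr, X_te, y_tr, y_te = train_test_split(X, power, test_size=0.25,
                                                  random_state=0)

        model = RandomForestRegressor(n_estimators=200, random_state=0)
        model.fit(X_tr, y_tr)
        print("R^2 on held-out jobs:", round(model.score(X_te, y_te), 3))

        # A power-aware scheduler could query the model before dispatching:
        job = np.array([[128, 12.0, 5]])              # nodes, hours, app id
        print("predicted mean power (kW):", round(float(model.predict(job)[0]), 1))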

  6. A test to evaluate the earthquake prediction algorithm, M8

    USGS Publications Warehouse

    Healy, John H.; Kossobokov, Vladimir G.; Dewey, James W.

    1992-01-01

    A test of the algorithm M8 is described. The test is constructed to meet four rules, which we propose to be applicable to the test of any method for earthquake prediction: 1. An earthquake prediction technique should be presented as a well documented, logical algorithm that can be used by investigators without restrictions. 2. The algorithm should be coded in a common programming language and implementable on widely available computer systems. 3. A test of the earthquake prediction technique should involve future predictions with a black box version of the algorithm in which potentially adjustable parameters are fixed in advance. The source of the input data must be defined and ambiguities in these data must be resolved automatically by the algorithm. 4. At least one reasonable null hypothesis should be stated in advance of testing the earthquake prediction method, and it should be stated how this null hypothesis will be used to estimate the statistical significance of the earthquake predictions. The M8 algorithm has successfully predicted several destructive earthquakes, in the sense that the earthquakes occurred inside regions with linear dimensions from 384 to 854 km that the algorithm had identified as being in times of increased probability for strong earthquakes. In addition, M8 has successfully "post-predicted" high percentages of strong earthquakes in regions to which it has been applied in retroactive studies. The statistical significance of previous predictions has not been established, however, and post-prediction studies in general are notoriously subject to success-enhancement through hindsight. Nor has it been determined how much more precise an M8 prediction might be than forecasts and probability-of-occurrence estimates made by other techniques. We view our test of M8 both as a means to better determine the effectiveness of M8 and as an experimental structure within which to make observations that might lead to improvements in the algorithm or conceivably lead to a radically different approach to earthquake prediction.

  7. A traveling salesman approach for predicting protein functions.

    PubMed

    Johnson, Olin; Liu, Jing

    2006-10-12

    Protein-protein interaction information can be used to predict unknown protein functions and to help study biological pathways. Here we present a new approach utilizing the classic Traveling Salesman Problem to study protein-protein interactions and to predict protein functions in the budding yeast Saccharomyces cerevisiae. We apply the global optimization tool from combinatorial optimization algorithms to cluster the yeast proteins based on the global protein interaction information. We then use this clustering information to help us predict protein functions. We use our algorithm together with the direct neighbor algorithm [1] on characterized proteins and compare the prediction accuracy of the two methods. We show our algorithm can produce better predictions than the direct neighbor algorithm, which only considers the immediate neighbors of the query protein. Our method is a promising one to be used as a general tool to predict functions of uncharacterized proteins and a successful sample of using computer science knowledge and algorithms to study biological problems.

  8. A traveling salesman approach for predicting protein functions

    PubMed Central

    Johnson, Olin; Liu, Jing

    2006-01-01

    Background Protein-protein interaction information can be used to predict unknown protein functions and to help study biological pathways. Results Here we present a new approach utilizing the classic Traveling Salesman Problem to study the protein-protein interactions and to predict protein functions in budding yeast Saccharomyces cerevisiae. We apply the global optimization tool from combinatorial optimization algorithms to cluster the yeast proteins based on the global protein interaction information. We then use this clustering information to help us predict protein functions. We use our algorithm together with the direct neighbor algorithm [1] on characterized proteins and compare the prediction accuracy of the two methods. We show our algorithm can produce better predictions than the direct neighbor algorithm, which only considers the immediate neighbors of the query protein. Conclusion Our method is a promising one to be used as a general tool to predict functions of uncharacterized proteins and a successful sample of using computer science knowledge and algorithms to study biological problems. PMID:17147783

  9. Personality and mental health treatment: Traits as predictors of presentation, usage, and outcome.

    PubMed

    Thalmayer, Amber Gayle

    2018-03-08

    Self-report scores on personality inventories predict important life outcomes, including health and longevity, marital outcomes, career success, and mental health problems, but the ways they predict mental health treatment have not been widely explored. Psychotherapy is sought for diverse problems, but about half of those who begin therapy drop out, and only about half who complete therapy experience lasting improvements. Several authors have argued that understanding how personality traits relate to treatment could lead to better targeted, more successful services. Here, self-report scores on Big Five and Big Six personality dimensions are explored as predictors of therapy presentation, usage, and outcomes in a sample of community clinic clients (N = 306). Participants received evidence-based treatments in the context of individual-, couples-, or family-therapy sessions. One measure of initial functioning and three indicators of outcome were used. All personality trait scores except Openness were associated with initial psychological functioning. Higher Conscientiousness scores predicted more sessions attended for family-therapy but fewer for couples-therapy clients. Higher Honesty-Propriety and Extraversion scores predicted fewer sessions attended for family-therapy clients. Better termination outcome was predicted by higher Conscientiousness scores for family- and higher Extraversion scores for individual-therapy clients. Higher Honesty-Propriety and Neuroticism scores predicted more improvement in psychological functioning in terms of successive Outcome Questionnaire-45 administrations. Taken together, the results provide some support for the role of personality traits in predicting treatment usage and outcome, and for the utility of a 6-factor model in this context.

  10. Overview of Digital Forensics Algorithms in DSLR Cameras

    NASA Astrophysics Data System (ADS)

    Aminova, E.; Trapeznikov, I.; Priorov, A.

    2017-05-01

    The widespread usage of mobile technologies and the improvement of digital photo devices have led to more frequent cases of image falsification, including in judicial practice. Consequently, an important task for up-to-date digital image processing tools is the development of algorithms for determining the source and model of a DSLR (Digital Single Lens Reflex) camera and for improving image formation algorithms. Most research in this area is based on the observation that a unique sensor trace of a DSLR camera can be extracted at a certain stage of the in-camera imaging process. This study focuses on the problem of determining unique features of DSLR cameras based on optical subsystem artifacts and sensor noise.
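
    A hedged sketch of the sensor-noise idea (a crude PRNU-style residual; real forensic pipelines use wavelet denoising and statistical tests over many images, and all data below are synthetic):

        import numpy as np
        from scipy.ndimage import gaussian_filter

        rng = np.random.default_rng(0)

        def noise_residual(image, sigma=1.5):
            """Crude sensor-noise residual: the image minus a denoised copy.

            A simplified stand-in for PRNU extraction; real forensic pipelines
            use wavelet denoising and aggregate residuals over many frames.
            """
            return image - gaussian_filter(image, sigma)

        def correlation(a, b):
            a, b = a - a.mean(), b - b.mean()
            return float(np.sum(a * b) / np.sqrt(np.sum(a * a) * np.sum(b * b)))

        # Hypothetical camera: a fixed multiplicative sensor pattern plus a
        # little read noise on every (smooth, synthetic) scene.
        pattern = 0.02 * rng.standard_normal((64, 64))

        def shoot(with_pattern=True):
            scene = gaussian_filter(rng.uniform(0.2, 0.8, (64, 64)), sigma=4)
            gain = (1.0 + pattern) if with_pattern else 1.0
            return scene * gain + 0.005 * rng.standard_normal((64, 64))

        # Fingerprint = average residual over several shots from one camera.
        fingerprint = np.mean([noise_residual(shoot()) for _ in range(8)], axis=0)

        print("same camera :", correlation(noise_residual(shoot(True)), fingerprint))
        print("other camera:", correlation(noise_residual(shoot(False)), fingerprint))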

  11. Workflow as a Service in the Cloud: Architecture and Scheduling Algorithms

    PubMed Central

    Wang, Jianwu; Korambath, Prakashan; Altintas, Ilkay; Davis, Jim; Crawl, Daniel

    2017-01-01

    With more and more workflow systems adopting the cloud as their execution environment, it becomes increasingly challenging to efficiently manage various workflows, virtual machines (VMs) and workflow executions on VM instances. To make the system scalable and easy to extend, we design a Workflow as a Service (WFaaS) architecture with independent services. A core part of the architecture is how to efficiently respond to continuous workflow requests from users and schedule their executions in the cloud. Based on different targets, we propose four heuristic workflow scheduling algorithms for the WFaaS architecture, and analyze the differences and best usages of the algorithms in terms of performance, cost and the price/performance ratio via experimental studies. PMID:29399237

  12. Requirements and Techniques for Developing and Measuring Simulant Materials

    NASA Technical Reports Server (NTRS)

    Rickman, Doug; Owens, Charles; Howard, Rick

    2006-01-01

    The 1989 workshop report Workshop on Production and Uses of Simulated Lunar Materials and the NASA Technical Publication Lunar Regolith Simulant Materials: Recommendations for Standardization, Production, and Usage identified and reinforced the need for a set of standards and requirements for the production and usage of lunar simulant materials. As NASA prepares to return to the Moon, a set of requirements has been developed for simulant materials, and methods to produce and measure those simulants have been defined. Addressed in the requirements document are: 1) a method for evaluating the quality of any simulant of a regolith, 2) the minimum characteristics for simulants of lunar regolith, and 3) a method to produce the lunar regolith simulants needed for NASA's exploration mission. A method to evaluate new and current simulants has also been rigorously defined through the mathematics of Figures of Merit (FoM), a concept new to simulant development. A single FoM is conceptually an algorithm defining a single characteristic of a simulant, and it provides a clear comparison of that characteristic between the simulant and a reference material. Included as an intrinsic part of the algorithm is a minimum acceptable performance level for the characteristic of interest. The algorithms for the FoM for Standard Lunar Regolith Simulants are also explicitly keyed to a recommended method for producing lunar simulants.

  13. Optimization of Energy Efficiency and Conservation in Green Building Design Using Duelist, Killer-Whale and Rain-Water Algorithms

    NASA Astrophysics Data System (ADS)

    Biyanto, T. R.; Matradji; Syamsi, M. N.; Fibrianto, H. Y.; Afdanny, N.; Rahman, A. H.; Gunawan, K. S.; Pratama, J. A. D.; Malwindasari, A.; Abdillah, A. I.; Bethiana, T. N.; Putra, Y. A.

    2017-11-01

    The development of green buildings has been growing in both design and quality, but it has been limited by the issue of expensive investment. In fact, a green building can reduce the energy usage inside the building, especially the energy used by the cooling system. External load plays a major role in the required cooling capacity and is affected by the type of wall sheathing, glass and roof; the proper selection of wall, glass and roof materials is therefore very important for reducing the external load. Hence, optimization of energy efficiency and conservation in green building design is required. Since this optimization consists of integer variables and nonlinear equations, the problem falls into Mixed-Integer Nonlinear Programming (MINLP), which requires global optimization techniques such as stochastic optimization algorithms. In this paper the optimized variables, i.e. the types of glass and roof, were chosen using the Duelist, Killer-Whale and Rain-Water Algorithms to obtain the optimum energy while considering minimal investment. The optimization results showed that single Planibel-G glass with a thickness of 3.2 mm and glass wool insulation provided a maximum ROI of 36.8486%, an EUI reduction of 54 kWh/m2·year, a CO2 emission reduction of 486.8971 tons/year and a reduced investment of 4,078,905,465 IDR.

  14. External validation of the international risk prediction algorithm for major depressive episode in the US general population: the PredictD-US study.

    PubMed

    Nigatu, Yeshambel T; Liu, Yan; Wang, JianLi

    2016-07-22

    Multivariable risk prediction algorithms are useful for making clinical decisions and for health planning. While prediction algorithms for new onset of major depression in primary care attendees in Europe and elsewhere have been developed, the performance of these algorithms in different populations is not known. The objective of this study was to validate the PredictD algorithm for new onset of major depressive episode (MDE) in the US general population. A longitudinal study design was used, with approximately 3 years of follow-up data from a nationally representative sample of the US general population. A total of 29,621 individuals who participated in Waves 1 and 2 of the US National Epidemiologic Survey on Alcohol and Related Conditions (NESARC) and who did not have an MDE in the past year at Wave 1 were included. The PredictD algorithm was directly applied to the selected participants. MDE was assessed by the Alcohol Use Disorder and Associated Disabilities Interview Schedule, based on the DSM-IV criteria. Among the participants, 8% developed an MDE over three years. The PredictD algorithm had acceptable discriminative power (C-statistic = 0.708, 95% CI: 0.696, 0.720) but poor calibration (p < 0.001) with the NESARC data. In the European primary care attendees, the algorithm had a C-statistic of 0.790 (95% CI: 0.767, 0.813) with perfect calibration. The PredictD algorithm thus has acceptable discrimination, but its calibration in the US general population was poor despite re-calibration. Therefore, at the current stage, the use of PredictD in the US general population for predicting individual risk of MDE is not encouraged. More independent validation research is needed.
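
    A hedged sketch of the two validation measures at issue here, computed with scikit-learn on synthetic risk scores (the C-statistic is the ROC AUC of the predicted risks; calibration is judged by comparing predicted and observed rates within risk deciles):

        import numpy as np
        from sklearn.calibration import calibration_curve
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(0)

        # Synthetic external-validation set: predicted 3-year onset risks from
        # a hypothetical algorithm, with onsets at roughly a 9% base rate but
        # deliberately miscalibrated predictions.
        n = 5000
        risk = rng.beta(1.5, 25.0, size=n)               # predicted probabilities
        onset = rng.random(n) < np.clip(1.6 * risk, 0.0, 1.0)

        # Discrimination: the C-statistic is the ROC AUC of the risk scores.
        print("C-statistic:", round(roc_auc_score(onset, risk), 3))

        # Calibration: compare mean predicted risk with the observed onset
        # rate within risk deciles; large gaps indicate poor calibration.
        obs, pred = calibration_curve(onset, risk, n_bins=10, strategy="quantile")
        for p, o in zip(pred, obs):
            print(f"predicted {p:.3f}  observed {o:.3f}")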

  15. Effective Dose Calculation Program (EDCP) for the usage of NORM-added consumer product.

    PubMed

    Yoo, Do Hyeon; Lee, Jaekook; Min, Chul Hee

    2018-04-09

    The aim of this study is to develop the Effective Dose Calculation Program (EDCP) for the usage of Naturally Occurring Radioactive Material (NORM) added consumer products. The EDCP was developed based on a database of effective dose conversion coefficients and the MATLAB (Matrix Laboratory) environment, incorporating a Graphic User Interface (GUI) for ease of use. To validate EDCP, the effective dose calculated with EDCP, by manually determining the source region using the GUI, was compared with that calculated using the reference mathematical algorithm for a pillow, a waist supporter, an eye-patch and a sleeping mattress. The results show that the annual effective dose calculated with EDCP was almost identical to that calculated using the reference mathematical algorithm in most of the assessment cases. With the assumption of a gamma energy of 1 MeV and an activity of 1 MBq, the annual effective doses of the pillow, waist supporter, sleeping mattress, and eye-patch determined using the reference algorithm were 3.444, 2.770, 4.629, and 3.567 mSv/year, respectively, while those calculated using EDCP were 3.561, 2.630, 4.740, and 3.780 mSv/year, respectively. The differences in the annual effective doses were less than 5%, despite the different calculation methods employed. The EDCP can therefore be effectively used for radiation protection management in the context of the usage of NORM-added consumer products. Additionally, EDCP can be used by members of the public through the GUI for various studies in the field of radiation protection, thus facilitating easy access to the program.

  16. Enroute NASA/FAA low-frequency propfan test in Alabama (October 1987): A versatile atmospheric aircraft long-range noise prediction system

    NASA Astrophysics Data System (ADS)

    Tsouka, Despina G.

    In order to obtain a flight-to-static noise prediction for an advanced turboprop (propfan) aircraft, the FAA processed the data measured during a full-scale measurement program conducted by NASA and FAA/DOT/TSC in October 1987 in Alabama. The processing was based on simulating the aircraft as a point source, on a two-dimensional atmospheric noise model, on the American National Standard algorithm for the calculation of atmospheric absorption, and on the DOT/TSC convention for ground reflection effects. Using the data of the Alabama measurements, the present paper examines the development of a generalized, flexible and more accurate process for the evaluation of static and in-flight low-frequency long-range noise data. The paper also examines the applicability of the assumptions made by the Integrated Noise Model about linear propagation, of the three-dimensional Hamiltonian ray-tracing model, and of the Weyl-Van der Pol model. Some assumptions are proposed in order to increase the flexibility of the calculations without significant loss of accuracy. In addition, the paper proposes the usage of the three-dimensional Hamiltonian ray-tracing model and the Weyl-Van der Pol model in order to increase the accuracy and to ensure the generality of noise propagation prediction over grounds with variable impedance.

  17. Research on wind field algorithm of wind lidar based on BP neural network and grey prediction

    NASA Astrophysics Data System (ADS)

    Chen, Yong; Chen, Chun-Li; Luo, Xiong; Zhang, Yan; Yang, Ze-hou; Zhou, Jie; Shi, Xiao-ding; Wang, Lei

    2018-01-01

    This paper uses a BP neural network and a grey algorithm to forecast and study the wind field measured by a wind lidar. To reduce the residual error of wind field prediction based on the BP neural network and the grey algorithm, the minimum of the residual error function is sought: a BP neural network is trained on the residuals of the grey algorithm, the trained network model is used to forecast the residual sequence, and the predicted residual sequence is then used to correct the forecast sequence of the grey algorithm. The test data show that the grey algorithm corrected by the BP neural network can effectively reduce the residual value and improve the prediction precision.
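
    A hedged sketch of the grey half of this scheme: a standard GM(1,1) grey model fitted by least squares, with the residual series exposed so that a second model (the BP network in the paper, omitted here) could predict and subtract it; the wind series below is invented:

        import numpy as np

        def gm11_fit_predict(x0, horizon=3):
            """Fit a GM(1,1) grey model to a positive series and extrapolate.

            Standard construction: accumulate the series, fit the whitened
            equation dx1/dt + a*x1 = b by least squares over consecutive
            means, then difference the fitted accumulated series back.
            """
            x0 = np.asarray(x0, dtype=float)
            x1 = np.cumsum(x0)                                 # accumulated series
            z1 = 0.5 * (x1[1:] + x1[:-1])                      # consecutive means
            B = np.column_stack([-z1, np.ones_like(z1)])
            a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]   # grey parameters
            k = np.arange(len(x0) + horizon)
            x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
            x0_hat = np.concatenate([[x0[0]], np.diff(x1_hat)])
            return x0_hat[:len(x0)], x0_hat[len(x0):]

        # Invented wind-speed series (m/s); the paper's data come from lidar.
        wind = [5.2, 5.8, 6.1, 6.9, 7.4, 8.0, 8.8]
        fitted, forecast = gm11_fit_predict(wind)

        residuals = np.array(wind) - fitted    # a BP network would model these
        print("residuals:", np.round(residuals, 3))
        print("grey forecast:", np.round(forecast, 2))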

  18. NASA C-17 Usage Overview

    NASA Technical Reports Server (NTRS)

    Miller, Christopher R.

    2008-01-01

    An overview of the usage and integrated vehicle health management of the NASA C-17 is presented. Propulsion health management flight objectives for the aircraft include mapping of the high-pressure compressor in order to calibrate a Pratt and Whitney engine model, and the fusion of data collected from existing sensors and signals to develop models, analysis methods and information fusion algorithms. An additional health management flight objective is to demonstrate that the Commercial Modular Aero-Propulsion Systems Simulation engine model can successfully execute in real time onboard the C-17 T-1 aircraft using engine and aircraft flight data as inputs. Future work will address aircraft durability and aging, airframe health management, and propulsion health management research in the areas of gas path and engine vibration.

  19. Fast Demand Forecast of Electric Vehicle Charging Stations for Cell Phone Application

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Majidpour, Mostafa; Qiu, Charlie; Chung, Ching-Yen

    This paper describes the core cellphone application algorithm which has been implemented for the prediction of energy consumption at Electric Vehicle (EV) Charging Stations at UCLA. For this interactive user application, the total time of accessing the database, processing the data and making the prediction needs to be within a few seconds. We analyze four relatively fast machine-learning-based time series prediction algorithms for our prediction engine: Historical Average, k-Nearest Neighbor, Weighted k-Nearest Neighbor, and Lazy Learning. The Nearest Neighbor algorithm (k-Nearest Neighbor with k=1) shows better performance and is selected as the prediction algorithm implemented for the cellphone application. Two applications have been designed on top of the prediction algorithm: one predicts the expected available energy at the station and the other predicts the expected charging finishing time. The total time, including accessing the database, data processing, and prediction, is about one second for both applications.
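
    A hedged sketch of the selected predictor, 1-nearest-neighbor over historical daily profiles (the hourly energy data and feature layout below are synthetic stand-ins; the UCLA application's actual features are not described in this abstract):

        import numpy as np

        def one_nn_forecast(history, today_so_far):
            """Predict the rest of today's energy profile with 1-NN.

            Finds the past day whose first hours most resemble today's partial
            profile (Euclidean distance) and returns that day's remaining
            hours as the forecast (k-Nearest Neighbor with k=1).
            """
            h = len(today_so_far)
            dists = np.linalg.norm(history[:, :h] - today_so_far, axis=1)
            return history[int(np.argmin(dists)), h:]

        rng = np.random.default_rng(0)

        # Synthetic stand-in: 60 days x 24 hourly kWh values with a midday peak.
        hours = np.arange(24)
        base = np.exp(-0.5 * ((hours - 13) / 3.0) ** 2)   # daily demand shape
        history = 5 * base + 0.3 * rng.standard_normal((60, 24))

        today = 5 * base[:10] + 0.3 * rng.standard_normal(10)   # up to 09:00
        print("predicted remaining hourly kWh:",
              np.round(one_nn_forecast(history, today), 2))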

  20. TIP: protein backtranslation aided by genetic algorithms.

    PubMed

    Moreira, Andrés; Maass, Alejandro

    2004-09-01

    Several applications require the backtranslation of a protein sequence into a nucleic acid sequence. The degeneracy of the genetic code makes this process ambiguous; moreover, not every translation is equally viable. The usual answer is to mimic the codon usage of the target species; however, this does not capture all the relevant features of the 'genomic styles' of different taxa. The program TIP ('Traducción Inversa de Proteínas', i.e., inverse translation of proteins) applies genetic algorithms to improve the backtranslation by minimizing the difference of some coding statistics with respect to their average values in the target. http://www.cmm.uchile.cl/genoma/tip/
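
    A hedged sketch of the baseline that TIP improves upon: backtranslation that samples codons in proportion to a target species' codon usage table (the usage fractions below are toy values; a genetic algorithm like TIP's would then evolve such candidates to match further coding statistics):

        import random

        random.seed(0)

        # Hypothetical codon-usage fractions for a target species (toy values
        # covering three amino acids; a real table covers all twenty).
        CODON_USAGE = {
            "M": {"ATG": 1.00},
            "K": {"AAA": 0.74, "AAG": 0.26},
            "L": {"CTG": 0.50, "TTG": 0.13, "TTA": 0.13,
                  "CTT": 0.10, "CTC": 0.10, "CTA": 0.04},
        }

        def backtranslate(protein):
            """Sample one nucleotide sequence for a protein, codon by codon,
            weighted by the target species' codon-usage frequencies."""
            dna = []
            for aa in protein:
                codons = list(CODON_USAGE[aa])
                weights = [CODON_USAGE[aa][c] for c in codons]
                dna.append(random.choices(codons, weights=weights)[0])
            return "".join(dna)

        # Several stochastic backtranslations of one peptide; a GA would score
        # each candidate against coding statistics and evolve the population.
        for _ in range(3):
            print(backtranslate("MKLLK"))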

  1. Digital image processing using parallel computing based on CUDA technology

    NASA Astrophysics Data System (ADS)

    Skirnevskiy, I. P.; Pustovit, A. V.; Abdrashitova, M. O.

    2017-01-01

    This article describes the expediency of using a graphics processing unit (GPU) for big data processing in the context of digital image processing. It provides a short description of parallel computing technology and its usage in different areas, a definition of image noise, and a brief overview of some noise removal algorithms. It also describes some basic requirements that a noise removal algorithm should meet for application to computed tomography projections. It provides a comparison of performance with and without the GPU, as well as with different percentages of the work shared between the CPU and the GPU.

  2. A parameter estimation subroutine package

    NASA Technical Reports Server (NTRS)

    Bierman, G. J.; Nead, M. W.

    1978-01-01

    Linear least squares estimation and regression analyses continue to play a major role in orbit determination and related areas. A library of FORTRAN subroutines was developed to facilitate analyses of a variety of estimation problems. An easy-to-use, multi-purpose set of algorithms that is reasonably efficient and uses a minimal amount of computer storage is presented. Subroutine inputs, outputs, usage and listings are given, along with examples of how these routines can be used. The routines are compact and efficient and are far superior to the normal equation and Kalman filter data processing algorithms that are often used for least squares analyses.

  3. Frequency Estimator Performance for a Software-Based Beacon Receiver

    NASA Technical Reports Server (NTRS)

    Zemba, Michael J.; Morse, Jacquelynne R.; Nessel, James A.

    2014-01-01

    As propagation terminals have evolved, their design has trended more toward a software-based approach that facilitates convenient adjustment and customization of the receiver algorithms. One potential improvement is the implementation of a frequency estimation algorithm, through which the primary frequency component of the received signal can be estimated with a much greater resolution than with a simple peak search of the FFT spectrum. To select an estimator for usage in a Q/V-band beacon receiver, analysis of six frequency estimators was conducted to characterize their effectiveness as they relate to beacon receiver design.

  4. A review of predictive coding algorithms.

    PubMed

    Spratling, M W

    2017-03-01

    Predictive coding is a leading theory of how the brain performs probabilistic inference. However, there are a number of distinct algorithms which are described by the term "predictive coding". This article provides a concise review of these different predictive coding algorithms, highlighting their similarities and differences. Five algorithms are covered: linear predictive coding which has a long and influential history in the signal processing literature; the first neuroscience-related application of predictive coding to explaining the function of the retina; and three versions of predictive coding that have been proposed to model cortical function. While all these algorithms aim to fit a generative model to sensory data, they differ in the type of generative model they employ, in the process used to optimise the fit between the model and sensory data, and in the way that they are related to neurobiology.
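
    A hedged sketch of the core shared by these algorithms, in the spirit of the cortical models: fit a generative model by iteratively reducing prediction error. Here a single linear layer infers causes r so that W r reconstructs the input (a generic gradient-descent formulation, not any one published algorithm):

        import numpy as np

        rng = np.random.default_rng(0)

        # Generative model: inputs are produced as W @ r plus sensory noise.
        n_input, n_causes = 16, 4
        W = rng.standard_normal((n_input, n_causes))   # basis, kept fixed here
        r_true = rng.standard_normal(n_causes)
        x = W @ r_true + 0.05 * rng.standard_normal(n_input)

        # Inference: iteratively update the cause estimate r to reduce the
        # prediction error e = x - W r (gradient descent on ||e||^2 / 2).
        r = np.zeros(n_causes)
        lr = 0.01
        for _ in range(1000):
            e = x - W @ r          # prediction error signal
            r += lr * (W.T @ e)    # the error drives the representation update
        print("recovered causes:", np.round(r, 2))
        print("true causes     :", np.round(r_true, 2))
        print("final error norm:", round(float(np.linalg.norm(x - W @ r)), 4))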

  5. Evaluation of various thrust calculation techniques on an F404 engine

    NASA Technical Reports Server (NTRS)

    Ray, Ronald J.

    1990-01-01

    In support of performance testing of the X-29A aircraft at the NASA-Ames, various thrust calculation techniques were developed and evaluated for use on the F404-GE-400 engine. The engine was thrust calibrated at NASA-Lewis. Results from these tests were used to correct the manufacturer's in-flight thrust program to more accurately calculate thrust for the specific test engine. Data from these tests were also used to develop an independent, simplified thrust calculation technique for real-time thrust calculation. Comparisons were also made to thrust values predicted by the engine specification model. Results indicate uninstalled gross thrust accuracies on the order of 1 to 4 percent for the various in-flight thrust methods. The various thrust calculations are described and their usage, uncertainty, and measured accuracies are explained. In addition, the advantages of a real-time thrust algorithm for flight test use and the importance of an accurate thrust calculation to the aircraft performance analysis are described. Finally, actual data obtained from flight test are presented.

  6. Real-Time Stability and Control Derivative Extraction From F-15 Flight Data

    NASA Technical Reports Server (NTRS)

    Smith, Mark S.; Moes, Timothy R.; Morelli, Eugene A.

    2003-01-01

    A real-time, frequency-domain, equation-error parameter identification (PID) technique was used to estimate stability and control derivatives from flight data. This technique is being studied to support adaptive control system concepts currently being developed by NASA (National Aeronautics and Space Administration), academia, and industry. This report describes the basic real-time algorithm used for this study and implementation issues for onboard usage as part of an indirect-adaptive control system. A confidence-measures system for automated evaluation of PID results is discussed. Results calculated using flight data from a modified F-15 aircraft are presented. Test maneuvers included pilot input doublets and automated inputs at several flight conditions. Estimated derivatives are compared to aerodynamic model predictions. Data indicate that the real-time PID technique used for this study performs well enough to be used for onboard parameter estimation. For suitable test inputs, the parameter estimates converged rapidly to sufficient levels of accuracy. The confidence measures devised were moderately successful.

  7. Families in a High-Tech Age: Technology Usage Patterns, Work and Family Correlates, and Gender

    ERIC Educational Resources Information Center

    Chesley, Noelle

    2006-01-01

    This study analyzes a couple-level ("N" = 581), longitudinal data set of employees to provide evidence about technology use over time, the factors that predict use, and the potential for a spouse to influence an individual's use. Although longitudinal usage patterns suggest a trend toward adoption and use of e-mail, the Internet, cell phones, and…

  8. Landscape predictors of pathogen prevalence and range contractions in US bumblebees.

    PubMed

    McArt, Scott H; Urbanowicz, Christine; McCoshum, Shaun; Irwin, Rebecca E; Adler, Lynn S

    2017-11-29

    Several species of bumblebees have recently experienced range contractions and possible extinctions. While threats to bees are numerous, few analyses have attempted to understand the relative importance of multiple stressors. Such analyses are critical for prioritizing conservation strategies. Here, we describe a landscape analysis of factors predicted to cause bumblebee declines in the USA. We quantified 24 habitat, land-use and pesticide usage variables across 284 sampling locations, assessing which variables predicted pathogen prevalence and range contractions via machine learning model selection techniques. We found that greater usage of the fungicide chlorothalonil was the best predictor of pathogen ( Nosema bombi ) prevalence in four declining species of bumblebees. Nosema bombi has previously been found in greater prevalence in some declining US bumblebee species compared to stable species. Greater usage of total fungicides was the strongest predictor of range contractions in declining species, with bumblebees in the northern USA experiencing greater likelihood of loss from previously occupied areas. These results extend several recent laboratory and semi-field studies that have found surprising links between fungicide exposure and bee health. Specifically, our data suggest landscape-scale connections between fungicide usage, pathogen prevalence and declines of threatened and endangered bumblebees.

  9. Exhaustive identification of steady state cycles in large stoichiometric networks

    PubMed Central

    Wright, Jeremiah; Wagner, Andreas

    2008-01-01

    Background: Identifying cyclic pathways in chemical reaction networks is important, because such cycles may indicate in silico violation of energy conservation, or the existence of feedback in vivo. Unfortunately, our ability to identify cycles in stoichiometric networks, such as signal transduction and genome-scale metabolic networks, has been hampered by the computational complexity of the methods currently used. Results: We describe a new algorithm for the identification of cycles in stoichiometric networks, and we compare its performance to two others by exhaustively identifying the cycles contained in the genome-scale metabolic networks of H. pylori, M. barkeri, E. coli, and S. cerevisiae. Our algorithm can substantially decrease both the execution time and maximum memory usage in comparison to the two previous algorithms. Conclusion: The algorithm we describe improves our ability to study large, real-world, biochemical reaction networks, although additional methodological improvements are desirable. PMID:18616835
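
    As a minimal illustration of what exhaustive cycle enumeration involves (a generic sketch, not the paper's algorithm), Johnson's algorithm as implemented in networkx can enumerate all simple cycles of a toy directed reaction graph; the metabolite names here are hypothetical:

        # Exhaustive enumeration of simple cycles on a toy reaction graph.
        # Genome-scale networks need the memory-aware strategies the paper
        # describes; plain enumeration blows up combinatorially.
        import networkx as nx

        G = nx.DiGraph([
            ("ATP", "ADP"), ("ADP", "AMP"), ("AMP", "ATP"),  # a 3-cycle
            ("G6P", "F6P"), ("F6P", "G6P"),                  # a 2-cycle
            ("G6P", "ATP"),                                  # acyclic edge
        ])

        for cycle in nx.simple_cycles(G):
            print(cycle)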

  10. Uni10: an open-source library for tensor network algorithms

    NASA Astrophysics Data System (ADS)

    Kao, Ying-Jer; Hsieh, Yun-Da; Chen, Pochung

    2015-09-01

    We present an object-oriented open-source library for developing tensor network algorithms written in C++ called Uni10. With Uni10, users can build a symmetric tensor from a collection of bonds, while the bonds are constructed from a list of quantum numbers associated with different quantum states. It is easy to label and permute the indices of the tensors and access a block associated with a particular quantum number. Furthermore, a network class is used to describe arbitrary tensor network structures and to perform network contractions efficiently. We give an overview of the basic structure of the library and the hierarchy of the classes. We present examples of the construction of a spin-1 Heisenberg Hamiltonian and the implementation of the tensor renormalization group algorithm to illustrate the basic usage of the library. The library described here is particularly well suited for exploring and rapidly prototyping novel tensor network algorithms and for implementing highly efficient codes for existing algorithms.
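
    Uni10 itself is written in C++; purely as a conceptual sketch in Python, the snippet below shows the kind of labelled-bond contraction such a library automates, with einsum subscripts playing the role of Uni10's bond labels and network descriptions (tensor names and the bond dimension are illustrative):

        # Contract the shared bond k of two rank-3 tensors; the subscript
        # string plays the role of bond labels in a tensor network library.
        import numpy as np

        D = 4                        # bond dimension
        A = np.random.rand(D, D, D)  # bonds (i, j, k)
        B = np.random.rand(D, D, D)  # bonds (k, l, m)
        C = np.einsum('ijk,klm->ijlm', A, B)
        print(C.shape)               # (4, 4, 4, 4)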

  11. A Node Linkage Approach for Sequential Pattern Mining

    PubMed Central

    Navarro, Osvaldo; Cumplido, René; Villaseñor-Pineda, Luis; Feregrino-Uribe, Claudia; Carrasco-Ochoa, Jesús Ariel

    2014-01-01

    Sequential Pattern Mining is a widely addressed problem in data mining, with applications such as analyzing Web usage, examining purchase behavior, and text mining, among others. Nevertheless, with the dramatic increase in data volume, the current approaches prove inefficient when dealing with large input datasets, a large number of different symbols and low minimum supports. In this paper, we propose a new sequential pattern mining algorithm, which follows a pattern-growth scheme to discover sequential patterns. Unlike most pattern-growth algorithms, our approach does not build a data structure to represent the input dataset, but instead accesses the required sequences through pseudo-projection databases, achieving better runtime and reducing memory requirements. Our algorithm traverses the search space in a depth-first fashion and only preserves in memory a pattern node linkage and the pseudo-projections required for the branch being explored at the time. Experimental results show that our new approach, the Node Linkage Depth-First Traversal algorithm (NLDFT), has better performance and scalability in comparison with state-of-the-art algorithms. PMID:24933123
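
    The pseudo-projection idea can be sketched in a few lines (a simplified pattern-growth miner, not the NLDFT implementation): each projected database is a list of (sequence, offset) pairs rather than copied data, so memory stays proportional to the branch being explored:

        # Depth-first pattern growth over pseudo-projections: a projection
        # is a list of (sequence_id, offset) pairs into the original data.
        def mine(db, min_support):
            results = []

            def grow(prefix, projection):
                counts = {}
                for sid, pos in projection:          # support counting
                    for item in set(db[sid][pos:]):
                        counts[item] = counts.get(item, 0) + 1
                for item, support in counts.items():
                    if support < min_support:
                        continue
                    pattern = prefix + [item]
                    results.append((pattern, support))
                    new_proj = []                    # new pseudo-projection
                    for sid, pos in projection:
                        try:
                            idx = db[sid].index(item, pos)
                            new_proj.append((sid, idx + 1))
                        except ValueError:
                            pass
                    grow(pattern, new_proj)

            grow([], [(sid, 0) for sid in range(len(db))])
            return results

        db = [list("abcab"), list("abca"), list("bca")]
        print(mine(db, min_support=2))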

  12. A Robustly Stabilizing Model Predictive Control Algorithm

    NASA Technical Reports Server (NTRS)

    Ackmece, A. Behcet; Carson, John M., III

    2007-01-01

    A model predictive control (MPC) algorithm that differs from prior MPC algorithms has been developed for controlling an uncertain nonlinear system. This algorithm guarantees the resolvability of an associated finite-horizon optimal-control problem in a receding-horizon implementation.

  13. The incorrect usage of singular spectral analysis and discrete wavelet transform in hybrid models to predict hydrological time series

    NASA Astrophysics Data System (ADS)

    Du, Kongchang; Zhao, Ying; Lei, Jiaqiang

    2017-09-01

    In hydrological time series prediction, singular spectrum analysis (SSA) and discrete wavelet transform (DWT) are widely used as preprocessing techniques for artificial neural network (ANN) and support vector machine (SVM) predictors. These hybrid or ensemble models seem to largely reduce the prediction error. In the current literature, researchers apply these techniques to the whole observed time series and then obtain a set of reconstructed or decomposed time series as inputs to the ANN or SVM. However, through two comparative experiments and mathematical deduction, we found that this usage of SSA and DWT in building hybrid models is incorrect. Because SSA and DWT use 'future' values to perform the calculation, the series generated by SSA reconstruction or DWT decomposition contain information from 'future' values. These hybrid models therefore report spuriously 'high' prediction performance and may cause large errors in practice.
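
    The leakage the authors identify is easy to demonstrate with a stand-in preprocessor; below, a centered moving average plays the role of an SSA/DWT reconstruction (an illustration of the argument, not the paper's experiments):

        # The centered filter peeks two steps into the future, so a
        # predictor built on it looks spuriously accurate in-sample.
        import numpy as np

        rng = np.random.default_rng(0)
        x = rng.standard_normal(2000).cumsum()   # toy hydrological series

        centered = np.convolve(x, np.ones(5) / 5, mode='same')         # uses future samples
        causal = np.convolve(x, np.ones(5) / 5, mode='full')[:len(x)]  # past-only

        # Use the smoothed value at t as a naive predictor of x[t+1].
        print(np.mean((x[1:] - centered[:-1]) ** 2))  # unfairly low error
        print(np.mean((x[1:] - causal[:-1]) ** 2))    # honest error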

  14. Monthly prediction of air temperature in Australia and New Zealand with machine learning algorithms

    NASA Astrophysics Data System (ADS)

    Salcedo-Sanz, S.; Deo, R. C.; Carro-Calvo, L.; Saavedra-Moreno, B.

    2016-07-01

    Long-term air temperature prediction is of major importance in a large number of applications, including climate studies, energy, agriculture, and medicine. This paper examines the performance of two machine learning algorithms (Support Vector Regression (SVR) and Multi-layer Perceptron (MLP)) on a problem of monthly mean air temperature prediction, from previously measured values at observational stations in Australia and New Zealand, and from climate indices of importance in the region. The performance of the two considered algorithms is discussed in the paper and compared to alternative approaches. The results indicate that the SVR algorithm obtains the best prediction performance among all the algorithms compared in the paper. Moreover, the results show that the mean absolute error made by the two algorithms is significantly larger for the last 20 years than in the previous decades, which can be interpreted as a change in the relationship among the prediction variables involved in the training of the algorithms.
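
    This kind of comparison can be sketched with scikit-learn; a synthetic seasonal series stands in for the station measurements and climate indices, and all hyperparameters are illustrative:

        import numpy as np
        from sklearn.svm import SVR
        from sklearn.neural_network import MLPRegressor
        from sklearn.metrics import mean_absolute_error

        rng = np.random.default_rng(1)
        t = np.arange(600)                                   # 50 years of months
        temp = 15 + 8 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 1, t.size)

        # Predict month t from the previous 12 measured values.
        X = np.array([temp[i - 12:i] for i in range(12, temp.size)])
        y = temp[12:]
        split = 480
        for name, m in [("SVR", SVR(C=10.0)),
                        ("MLP", MLPRegressor((32,), max_iter=2000, random_state=0))]:
            m.fit(X[:split], y[:split])
            print(name, mean_absolute_error(y[split:], m.predict(X[split:])))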

  15. Facebook addiction among Turkish college students: the role of psychological health, demographic, and usage characteristics.

    PubMed

    Koc, Mustafa; Gulyagci, Seval

    2013-04-01

    This study explored Facebook addiction among Turkish college students and its behavioral, demographic, and psychological health predictors. The Facebook Addiction Scale (FAS) was developed and its construct validity was assessed through factor analyses. A total of 447 students reported their personal information and Facebook usage and completed the FAS and General Health Questionnaire (GHQ-28). The results revealed that weekly time commitment, social motives, severe depression, and anxiety and insomnia positively predicted Facebook addiction. Neither demographic variables nor the interactions of gender by usage characteristics were found to be significant predictors.

  16. Deadbeat Predictive Controllers

    NASA Technical Reports Server (NTRS)

    Juang, Jer-Nan; Phan, Minh

    1997-01-01

    Several new computational algorithms are presented to compute the deadbeat predictive control law. The first algorithm makes use of a multi-step-ahead output prediction to compute the control law without explicitly calculating the controllability matrix. The system identification must be performed first, and then the predictive control law is designed. The second algorithm uses the input and output data directly to compute the feedback law. It combines the system identification and the predictive control law into one formulation. The third algorithm uses an observable-canonical form realization to design the predictive controller. The relationship between all three algorithms is established through the use of the state-space representation. All algorithms are applicable to multi-input, multi-output systems with disturbance inputs. In addition to the feedback terms, feedforward terms may also be added for disturbance inputs if they are measurable. Although the feedforward terms do not influence the stability of the closed-loop feedback law, they enhance the performance of the controlled system.

  17. Genotypic analysis of human immunodeficiency virus type 1 env V3 loop sequences: bioinformatics prediction of coreceptor usage among 28 infected mother-infant pairs in a drug-naive population.

    PubMed

    Duri, Kerina; Soko, White; Gumbo, Felicity; Kristiansen, Knut; Mapingure, Munyaradzi; Stray-Pedersen, Babill; Muller, Fredrik

    2011-04-01

    We sought to predict virus coreceptor utilization using a simple bioinformatics method based on genotypic analysis of human immunodeficiency virus type 1 (HIV-1) env V3 loop sequences of 28 infected but drug-naive women during pregnancy and their infected infants, and to better understand coreceptor usage in vertical transmission dynamics. The HIV-1 env V3 loop was sequenced from plasma samples and analyzed for viral coreceptor usage and subtype in a cohort of HIV-1-infected pregnant women. Predicted maternal frequencies of the X4, R5X4, and R5 genotypes were 7%, 11%, and 82%, respectively. Antenatal plasma viral load was higher for women with the X4 genotype, with a mean log10 (SD) of 4.8 (1.6) versus 3.6 (1.2) for women with the R5 genotype, p = 0.078. Amino acid substitution from the conserved V3 loop crown motif GPGQ to GPGR and lymphadenopathy were associated with the X4 genotype, p = 0.031 and 0.043, respectively. The maternal viral coreceptor genotype was generally preserved in vertical transmission and was predictive of the newborn's viral genotype. Infants born to mothers with X4 genotypes were more likely to have lower birth weights relative to those born to mothers with the R5 genotype, with mean weights (SD) of 2870 (±332) and 3069 (±300) g, respectively. These data show that, at least in HIV-1 subtype C, R5 coreceptor usage is the most predominant genotype, which is generally preserved following vertical transmission and is associated with the V3 GPGQ crown motif. Therefore, antiretroviral-naive pregnant women and their infants can benefit from ARV combination therapies that include R5 entry inhibitors following prediction of their coreceptor genotype using simple bioinformatics methods.

  18. The Media and Technology Usage and Attitudes Scale: An empirical investigation

    PubMed Central

    Rosen, L.D.; Whaling, K.; Carrier, L.M.; Cheever, N.A.; Rokkum, J.

    2015-01-01

    Current approaches to measuring people’s everyday usage of technology-based media and other computer-related activities have proved to be problematic as they use varied outcome measures, fail to measure behavior in a broad range of technology-related domains and do not take into account recently developed types of technology including smartphones. In the present study, a wide variety of items covering a range of up-to-date technology and media usage behaviors was developed. Sixty-six items concerning technology and media usage, along with 18 additional items assessing attitudes toward technology, were administered to two independent samples of individuals, comprising 942 participants. Factor analyses were used to create 11 usage subscales representing smartphone usage, general social media usage, Internet searching, e-mailing, media sharing, text messaging, video gaming, online friendships, Facebook friendships, phone calling, and watching television in addition to four attitude-based subscales: positive attitudes, negative attitudes, technological anxiety/dependence, and attitudes toward task-switching. All subscales showed strong reliabilities, and relationships between the subscales and pre-existing measures of daily media usage and Internet addiction were as predicted. Given the reliability and validity results, the new Media and Technology Usage and Attitudes Scale was suggested as a method of measuring media and technology involvement across a variety of types of research studies either as a single 60-item scale or any subset of the 15 subscales. PMID:25722534

  19. The Media and Technology Usage and Attitudes Scale: An empirical investigation.

    PubMed

    Rosen, L D; Whaling, K; Carrier, L M; Cheever, N A; Rokkum, J

    2013-11-01

    Current approaches to measuring people's everyday usage of technology-based media and other computer-related activities have proved to be problematic as they use varied outcome measures, fail to measure behavior in a broad range of technology-related domains and do not take into account recently developed types of technology including smartphones. In the present study, a wide variety of items covering a range of up-to-date technology and media usage behaviors was developed. Sixty-six items concerning technology and media usage, along with 18 additional items assessing attitudes toward technology, were administered to two independent samples of individuals, comprising 942 participants. Factor analyses were used to create 11 usage subscales representing smartphone usage, general social media usage, Internet searching, e-mailing, media sharing, text messaging, video gaming, online friendships, Facebook friendships, phone calling, and watching television in addition to four attitude-based subscales: positive attitudes, negative attitudes, technological anxiety/dependence, and attitudes toward task-switching. All subscales showed strong reliabilities, and relationships between the subscales and pre-existing measures of daily media usage and Internet addiction were as predicted. Given the reliability and validity results, the new Media and Technology Usage and Attitudes Scale was suggested as a method of measuring media and technology involvement across a variety of types of research studies either as a single 60-item scale or any subset of the 15 subscales.

  20. Application of model predictive control for optimal operation of wind turbines

    NASA Astrophysics Data System (ADS)

    Yuan, Yuan; Cao, Pei; Tang, J.

    2017-04-01

    For large-scale wind turbines, reducing maintenance cost is a major challenge. Model predictive control (MPC) is a promising approach to deal with multiple conflicting objectives using the weighted-sum approach. In this research, model predictive control is applied to a wind turbine to find an optimal balance between multiple objectives, such as energy capture, loads on turbine components, and pitch actuator usage. The actuator constraints are integrated into the objective function at the control design stage. The analysis is carried out in both the partial load region and the full load region, and the performances are compared with those of a baseline gain-scheduled PID controller. The application of this strategy achieves an enhanced balance between component loads, average power, and actuator usage in the partial load region.
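
    A weighted-sum MPC of this flavor can be sketched with cvxpy over an invented one-state model: the cost trades a tracking term (a proxy for energy capture) against an actuator-usage term, subject to a hard actuator limit (dynamics, weights, and horizon are all illustrative):

        import numpy as np
        import cvxpy as cp

        A, B = np.array([[0.95]]), np.array([[0.1]])   # toy rotor-speed model
        N, w_track, w_act = 10, 1.0, 0.1               # horizon and weights
        x0, x_ref = np.array([0.0]), 1.0

        x = cp.Variable((1, N + 1))
        u = cp.Variable((1, N))
        cost, constraints = 0, [x[:, 0] == x0]
        for k in range(N):
            cost += w_track * cp.square(x[0, k + 1] - x_ref)  # tracking
            cost += w_act * cp.square(u[0, k])                # actuator usage
            constraints += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
                            cp.abs(u[0, k]) <= 5.0]           # actuator limit
        cp.Problem(cp.Minimize(cost), constraints).solve()
        print(u.value[0, 0])  # first move of the receding-horizon plan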

  1. Can human experts predict solubility better than computers?

    PubMed

    Boobier, Samuel; Osbourn, Anne; Mitchell, John B O

    2017-12-13

    In this study, we design and carry out a survey, asking human experts to predict the aqueous solubility of druglike organic compounds. We investigate whether these experts, drawn largely from the pharmaceutical industry and academia, can match or exceed the predictive power of algorithms. Alongside this, we implement 10 typical machine learning algorithms on the same dataset. The best algorithm, a variety of neural network known as a multi-layer perceptron, gave an RMSE of 0.985 log S units and an R² of 0.706. We would not have predicted the relative success of this particular algorithm in advance. We found that the best individual human predictor generated an almost identical prediction quality with an RMSE of 0.942 log S units and an R² of 0.723. The collection of algorithms contained a higher proportion of reasonably good predictors, nine out of ten compared with around half of the humans. We found that, for either humans or algorithms, combining individual predictions into a consensus predictor by taking their median generated excellent predictivity. While our consensus human predictor achieved very slightly better headline figures on various statistical measures, the difference between it and the consensus machine learning predictor was both small and statistically insignificant. We conclude that human experts can predict the aqueous solubility of druglike molecules essentially as well as machine learning algorithms. We find that, for either humans or algorithms, combining individual predictions into a consensus predictor by taking their median is a powerful way of benefitting from the wisdom of crowds.
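
    The consensus construction is simple enough to state in a few lines; the numbers below are invented placeholders, not the study's data:

        # Median-of-predictors consensus, scored by RMSE in log S units.
        import numpy as np

        true_logS = np.array([-2.1, -3.5, -1.0, -4.2])
        predictions = np.array([            # rows: individual predictors
            [-2.5, -3.0, -1.2, -4.8],
            [-1.8, -3.9, -0.6, -3.9],
            [-2.2, -3.4, -1.5, -4.0],
        ])
        consensus = np.median(predictions, axis=0)
        print(consensus, np.sqrt(np.mean((consensus - true_logS) ** 2)))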

  2. A Parallel Point Matching Algorithm for Landmark Based Image Registration Using Multicore Platform

    PubMed Central

    Yang, Lin; Gong, Leiguang; Zhang, Hong; Nosher, John L.; Foran, David J.

    2013-01-01

    Point matching is crucial for many computer vision applications. Establishing the correspondence between a large number of data points is a computationally intensive process. Some point matching related applications, such as medical image registration, require real-time or near real-time performance when applied to critical clinical applications like image-assisted surgery. In this paper, we report a new multicore platform based parallel algorithm for fast point matching in the context of landmark-based medical image registration. We introduce a non-regular data partition algorithm which utilizes the K-means clustering algorithm to group the landmarks based on the number of available processing cores, which optimizes memory usage and data transfer. We have tested our method using the IBM Cell Broadband Engine (Cell/B.E.) platform. The results demonstrated a significant speed-up over its sequential implementation. The proposed data partition and parallelization algorithm, though tested only on one multicore platform, is generic by its design. Therefore, the parallel algorithm can be extended to other computing platforms, as well as other point matching related applications. PMID:24308014

  3. Discrete sequence prediction and its applications

    NASA Technical Reports Server (NTRS)

    Laird, Philip

    1992-01-01

    Learning from experience to predict sequences of discrete symbols is a fundamental problem in machine learning with many applications. We apply sequence prediction using TDAG, a simple and practical sequence-prediction algorithm. The TDAG algorithm is first tested by comparing its performance with some common data compression algorithms. Then it is adapted to the detailed requirements of dynamic program optimization, with excellent results.
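
    TDAG itself grows a tree of contexts with pruning heuristics; the back-off Markov predictor below is a much-simplified sketch that conveys the flavor of predicting the next discrete symbol from recent history:

        from collections import Counter, defaultdict

        def train(seq, max_order=3):
            model = defaultdict(Counter)
            for k in range(1, max_order + 1):
                for i in range(len(seq) - k):
                    model[tuple(seq[i:i + k])][seq[i + k]] += 1
            return model

        def predict(model, history, max_order=3):
            # Back off from the longest matching context to shorter ones.
            for k in range(min(max_order, len(history)), 0, -1):
                ctx = tuple(history[-k:])
                if ctx in model:
                    return model[ctx].most_common(1)[0][0]
            return None

        m = train(list("abcabcabx"))
        print(predict(m, list("ab")))  # 'c': the most frequent continuation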

  4. The effect of machine learning regression algorithms and sample size on individualized behavioral prediction with functional connectivity features.

    PubMed

    Cui, Zaixu; Gong, Gaolang

    2018-06-02

    Individualized behavioral/cognitive prediction using machine learning (ML) regression approaches is becoming increasingly applied. The specific ML regression algorithm and the sample size are two key factors that non-trivially influence prediction accuracies. However, the effects of the ML regression algorithm and sample size on individualized behavioral/cognitive prediction performance have not been comprehensively assessed. To address this issue, the present study included six commonly used ML regression algorithms: ordinary least squares (OLS) regression, least absolute shrinkage and selection operator (LASSO) regression, ridge regression, elastic-net regression, linear support vector regression (LSVR), and relevance vector regression (RVR), to perform specific behavioral/cognitive predictions based on different sample sizes. Specifically, the publicly available resting-state functional MRI (rs-fMRI) dataset from the Human Connectome Project (HCP) was used, and whole-brain resting-state functional connectivity (rsFC) or rsFC strength (rsFCS) was extracted as prediction features. Twenty-five sample sizes (ranging from 20 to 700) were studied by sub-sampling from the entire HCP cohort. The analyses showed that rsFC-based LASSO regression performed remarkably worse than the other algorithms, and rsFCS-based OLS regression performed markedly worse than the other algorithms. Regardless of the algorithm and feature type, both the prediction accuracy and its stability exponentially increased with increasing sample size. The specific patterns of the observed algorithm and sample size effects were well replicated in predictions using re-test fMRI data, data processed by different imaging preprocessing schemes, and different behavioral/cognitive scores, indicating excellent robustness/generalization of the effects. The current findings provide critical insight into how the selected ML regression algorithm and sample size influence individualized predictions of behavior/cognition and offer important guidance for choosing the ML regression algorithm or sample size in relevant investigations. Copyright © 2018 Elsevier Inc. All rights reserved.
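
    The shape of the sub-sampling experiment (not its results) can be sketched with scikit-learn; synthetic features stand in for the rsFC/rsFCS features:

        import numpy as np
        from sklearn.linear_model import Ridge, Lasso
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        X_all = rng.standard_normal((700, 300))          # stand-in features
        beta = rng.standard_normal(300) * (rng.random(300) < 0.05)
        y_all = X_all @ beta + rng.normal(0, 1.0, 700)   # stand-in score

        for n in (20, 100, 400, 700):                    # sub-sampled sizes
            idx = rng.choice(700, size=n, replace=False)
            for name, model in (("ridge", Ridge()), ("lasso", Lasso(alpha=0.1))):
                r2 = cross_val_score(model, X_all[idx], y_all[idx], cv=5).mean()
                print(n, name, round(r2, 3))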

  5. A prediction algorithm for first onset of major depression in the general population: development and validation.

    PubMed

    Wang, JianLi; Sareen, Jitender; Patten, Scott; Bolton, James; Schmitz, Norbert; Birney, Arden

    2014-05-01

    Prediction algorithms are useful for making clinical decisions and for population health planning. However, such prediction algorithms for first onset of major depression do not exist. The objective of this study was to develop and validate a prediction algorithm for first onset of major depression in the general population. The study used a longitudinal design with an approximately 3-year follow-up and was based on data from a nationally representative sample of the US general population. A total of 28,059 individuals who participated in Waves 1 and 2 of the US National Epidemiologic Survey on Alcohol and Related Conditions and who had not had major depression at Wave 1 were included. The prediction algorithm was developed using logistic regression modelling in 21,813 participants from three census regions. The algorithm was validated in participants from the 4th census region (n=6246). The outcome was major depression occurring since Wave 1 of the National Epidemiologic Survey on Alcohol and Related Conditions, assessed by the Alcohol Use Disorder and Associated Disabilities Interview Schedule (DSM-IV). A prediction algorithm containing 17 unique risk factors was developed. The algorithm had good discriminative power (C statistic=0.7538, 95% CI 0.7378 to 0.7699) and excellent calibration (F-adjusted test=1.00, p=0.448) with the weighted data. In the validation sample, the algorithm had a C statistic of 0.7259 and excellent calibration (Hosmer-Lemeshow χ(2)=3.41, p=0.906). The developed prediction algorithm has good discrimination and calibration capacity. It can be used by clinicians, mental health policy-makers and service planners and the general public to predict future risk of having major depression. The application of the algorithm may lead to increased personalisation of treatment, better clinical decisions and more optimal mental health service planning.
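
    In outline, the development/validation split looks like the sketch below; the simulated data are placeholders for the 17 risk factors and the survey cohort:

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(0)
        X = rng.standard_normal((28059, 17))             # 17 risk factors
        p = 1 / (1 + np.exp(-(X @ rng.normal(0, 0.3, 17) - 3)))
        y = rng.random(28059) < p                        # first-onset indicator

        dev, val = slice(0, 21813), slice(21813, None)   # 3 regions vs. the 4th
        clf = LogisticRegression(max_iter=1000).fit(X[dev], y[dev])
        print("C statistic:",
              roc_auc_score(y[val], clf.predict_proba(X[val])[:, 1]))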

  6. Energy profiles of four American states

    NASA Astrophysics Data System (ADS)

    Song, Jiamei

    2018-06-01

    Energy production and usage are a major portion of any economy. With the continued consumption of polluting energy sources and a deteriorating environment, people are paying more and more attention to clean, renewable energy. Based on an autoregressive model and TOPSIS, and through analysis of past data, this paper establishes the energy profiles of four American states from 1960 to 2009, predicts their energy profiles for 2025 and 2050, and derives ideal criteria for future clean, renewable energy usage. This study finds that by analyzing and predicting the energy profile, human beings can better understand and grasp the trend of energy development and take appropriate measures to deal with future energy trends.

  7. Prediction of pork quality parameters by applying fractals and data mining on MRI.

    PubMed

    Caballero, Daniel; Pérez-Palacios, Trinidad; Caro, Andrés; Amigo, José Manuel; Dahl, Anders B; Ersbøll, Bjarne K; Antequera, Teresa

    2017-09-01

    This work investigates the use of MRI, fractal algorithms and data mining techniques to determine pork quality parameters non-destructively. The main objective was to evaluate the capability of fractal algorithms (Classical Fractal algorithm, CFA; Fractal Texture Algorithm, FTA; and One Point Fractal Texture Algorithm, OPFTA) to analyse MRI in order to predict quality parameters of loin. In addition, the effects of the MRI acquisition sequence (Gradient echo, GE; Spin echo, SE; and Turbo 3D, T3D) and the predictive data mining technique (Isotonic regression, IR and Multiple linear regression, MLR) were analysed. Both fractal algorithms FTA and OPFTA are appropriate for analysing MRI of loins. The acquisition sequence, the fractal algorithm and the data mining technique all seem to influence the prediction results. For most physico-chemical parameters, prediction equations with moderate to excellent correlation coefficients were achieved by using the following combinations of MRI acquisition sequences, fractal algorithms and data mining techniques: SE-FTA-MLR, SE-OPFTA-IR, GE-OPFTA-MLR, SE-OPFTA-MLR, with the last one offering the best prediction results. Thus, SE-OPFTA-MLR could be proposed as an alternative technique to determine physico-chemical traits of fresh and dry-cured loins in a non-destructive way with high accuracy. Copyright © 2017. Published by Elsevier Ltd.
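
    As a hedged illustration of the 'classical fractal algorithm' family (a generic sketch, not the specific CFA/FTA/OPFTA implementations), a box-counting dimension of a thresholded image can be estimated as follows:

        import numpy as np

        def box_count_dimension(img, sizes=(2, 4, 8, 16, 32)):
            counts = []
            for s in sizes:
                h, w = img.shape[0] // s * s, img.shape[1] // s * s
                blocks = img[:h, :w].reshape(h // s, s, w // s, s)
                counts.append(blocks.any(axis=(1, 3)).sum())  # occupied boxes
            # N(s) ~ s**(-d), so the slope of log N vs. log s estimates -d.
            return -np.polyfit(np.log(sizes), np.log(counts), 1)[0]

        img = np.random.default_rng(0).random((128, 128)) > 0.7  # toy texture
        print(box_count_dimension(img))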

  8. Analysis of Physicochemical and Structural Properties Determining HIV-1 Coreceptor Usage

    PubMed Central

    Bozek, Katarzyna; Lengauer, Thomas; Sierra, Saleta; Kaiser, Rolf; Domingues, Francisco S.

    2013-01-01

    The relationship of HIV tropism with disease progression and the recent development of CCR5-blocking drugs underscore the importance of monitoring virus coreceptor usage. As an alternative to costly phenotypic assays, computational methods aim at predicting virus tropism based on the sequence and structure of the V3 loop of the virus gp120 protein. Here we present a numerical descriptor of the V3 loop encoding its physicochemical and structural properties. The descriptor allows for structure-based prediction of HIV tropism and identification of properties of the V3 loop that are crucial for coreceptor usage. Use of the proposed descriptor for prediction results in a statistically significant improvement over prediction based solely on the V3 sequence, with a 3 percentage point improvement in AUC and a 7 percentage point improvement in sensitivity at the specificity of the 11/25 rule (95%). We additionally assessed the predictive power of the new method on clinically derived ‘bulk’ sequence data and obtained a statistically significant improvement in AUC of 3 percentage points over sequence-based prediction. Furthermore, we demonstrated the capacity of our method to predict therapy outcome by applying it to 53 samples from patients undergoing Maraviroc therapy. The analysis of structural features of the loop informative of tropism indicates the importance of two loop regions and their physicochemical properties. The regions are located on opposite strands of the loop stem, and the respective features are predominantly charge-, hydrophobicity- and structure-related. These regions are in close proximity in the bound conformation of the loop, potentially forming a site determinant for coreceptor binding. The method is available via a server at http://structure.bioinf.mpi-inf.mpg.de/. PMID:23555214
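
    The sequence-based baseline referenced here, the 11/25 rule, is simple enough to state directly: a basic residue (arginine or lysine) at V3 position 11 or 25 predicts X4 usage, otherwise R5 is assumed. A minimal sketch:

        def rule_11_25(v3_sequence):
            # Positions are 1-based along the aligned 35-residue V3 loop.
            return ("X4" if v3_sequence[10] in "RK" or v3_sequence[24] in "RK"
                    else "R5")

        v3 = "CTRPNNNTRKSIHIGPGRAFYTTGEIIGDIRQAHC"  # consensus-like V3 loop
        print(rule_11_25(v3))  # 'R5'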

  9. RevEcoR: an R package for the reverse ecology analysis of microbiomes.

    PubMed

    Cao, Yang; Wang, Yuanyuan; Zheng, Xiaofei; Li, Fei; Bo, Xiaochen

    2016-07-29

    All species live in complex ecosystems. The structure and complexity of a microbial community reflects not only diversity and function, but also the environment in which it occurs. However, traditional ecological methods can only be applied on a small scale and for relatively well-understood biological systems. Recently, a graph-theory-based algorithm called the reverse ecology approach has been developed that can analyze the metabolic networks of all the species in a microbial community, and predict the metabolic interface between species and their environment. Here, we present RevEcoR, an R package and a Shiny Web application that implements the reverse ecology algorithm for determining microbe-microbe interactions in microbial communities. This software allows users to obtain large-scale ecological insights into species' ecology directly from high-throughput metagenomic data. The software has great potential for facilitating the study of microbiomes. RevEcoR is open source software for the study of microbial community ecology. The RevEcoR R package is freely available under the GNU General Public License v. 2.0 at http://cran.r-project.org/web/packages/RevEcoR/ with the vignette and typical usage examples, and the interactive Shiny web application is available at http://yiluheihei.shinyapps.io/shiny-RevEcoR , or can be installed locally with the source code accessed from https://github.com/yiluheihei/shiny-RevEcoR .

  10. Algorithm 971: An Implementation of a Randomized Algorithm for Principal Component Analysis

    PubMed Central

    LI, HUAMIN; LINDERMAN, GEORGE C.; SZLAM, ARTHUR; STANTON, KELLY P.; KLUGER, YUVAL; TYGERT, MARK

    2017-01-01

    Recent years have witnessed intense development of randomized methods for low-rank approximation. These methods target principal component analysis and the calculation of truncated singular value decompositions. The present article presents an essentially black-box, foolproof implementation for Mathworks’ MATLAB, a popular software platform for numerical computation. As illustrated via several tests, the randomized algorithms for low-rank approximation outperform or at least match the classical deterministic techniques (such as Lanczos iterations run to convergence) in basically all respects: accuracy, computational efficiency (both speed and memory usage), ease-of-use, parallelizability, and reliability. However, the classical procedures remain the methods of choice for estimating spectral norms and are far superior for calculating the least singular values and corresponding singular vectors (or singular subspaces). PMID:28983138
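
    The core randomized scheme such implementations build on (in the Halko-Martinsson-Tropp style) fits in a few lines of numpy; this is a generic sketch, not the Algorithm 971 code:

        import numpy as np

        def randomized_svd(A, k, oversample=10, n_iter=2):
            rng = np.random.default_rng(0)
            Omega = rng.standard_normal((A.shape[1], k + oversample))
            Y = A @ Omega                       # sample the range of A
            for _ in range(n_iter):             # power iterations sharpen it
                Y = A @ (A.T @ Y)
            Q, _ = np.linalg.qr(Y)              # orthonormal range basis
            U_small, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
            return (Q @ U_small)[:, :k], s[:k], Vt[:k]

        A = np.random.default_rng(1).standard_normal((500, 200))
        U, s, Vt = randomized_svd(A, k=10)
        print(s[:3])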

  11. Evaluation of model-based versus non-parametric monaural noise-reduction approaches for hearing aids.

    PubMed

    Harlander, Niklas; Rosenkranz, Tobias; Hohmann, Volker

    2012-08-01

    Single channel noise reduction has been well investigated and seems to have reached its limits in terms of speech intelligibility improvement; however, the quality of such schemes can still be advanced. This study tests to what extent novel model-based processing schemes might improve performance, in particular for non-stationary noise conditions. Two prototype model-based algorithms, a speech-model-based and an auditory-model-based algorithm, were compared to a state-of-the-art non-parametric minimum statistics algorithm. A speech intelligibility test, preference rating, and listening effort scaling were performed. Additionally, three objective quality measures for the signal, background, and overall distortions were applied. For a better comparison of all algorithms, particular attention was given to the usage of a similar Wiener-based gain rule. The perceptual investigation was performed with fourteen hearing-impaired subjects. The results revealed that the non-parametric algorithm and the auditory-model-based algorithm did not affect speech intelligibility, whereas the speech-model-based algorithm slightly decreased intelligibility. In terms of subjective quality, both model-based algorithms performed better than the unprocessed condition and the reference, in particular for highly non-stationary noise environments. The data support the hypothesis that model-based algorithms are promising for improving performance in non-stationary noise conditions.
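
    The shared Wiener-based gain rule has a compact per-bin form, gain = SNR / (1 + SNR); the sketch below assumes the noise power spectrum is already known, whereas the reference algorithm estimates it by minimum statistics:

        import numpy as np

        noisy_psd = np.array([2.0, 1.2, 0.9, 3.5])  # per-bin noisy power
        noise_psd = np.array([1.0, 1.0, 0.8, 0.5])  # per-bin noise estimate

        snr = np.maximum(noisy_psd - noise_psd, 1e-10) / noise_psd
        gain = snr / (1.0 + snr)                    # Wiener gain per bin
        print(gain)  # multiply the noisy STFT bins by this, then resynthesize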

  12. Predicting the random drift of MEMS gyroscope based on K-means clustering and OLS RBF Neural Network

    NASA Astrophysics Data System (ADS)

    Wang, Zhen-yu; Zhang, Li-jie

    2017-10-01

    Measurement error of a sensor can be effectively compensated with prediction. Aiming at the large random drift error of MEMS (Micro Electro Mechanical System) gyroscopes, an improved learning algorithm for Radial Basis Function (RBF) Neural Networks (NN) based on K-means clustering and Orthogonal Least-Squares (OLS) is proposed in this paper. The algorithm first selects typical samples as the initial cluster centers of the RBF NN, then refines the candidate centers with the K-means algorithm, and finally optimizes the candidate centers with the OLS algorithm, which makes the network structure simpler and the prediction performance better. Experimental results show that the proposed K-means clustering OLS learning algorithm can predict the random drift of a MEMS gyroscope effectively, with a prediction error of 9.8019e-007°/s and a prediction time of 2.4169e-006 s.
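
    A simplified sketch of this hybrid training pipeline (K-means for the centers, then a plain linear solve for the output weights; the paper's OLS-based center optimization is omitted), on an invented drift signal:

        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(0)
        t = np.linspace(0, 10, 500).reshape(-1, 1)
        drift = np.sin(0.8 * t[:, 0]) + 0.1 * rng.standard_normal(len(t))

        centers = KMeans(n_clusters=12, n_init=10, random_state=0).fit(t).cluster_centers_
        width = 1.0
        Phi = np.exp(-((t - centers.T) ** 2) / (2 * width ** 2))  # RBF design matrix
        w, *_ = np.linalg.lstsq(Phi, drift, rcond=None)           # output weights
        print(np.sqrt(np.mean((Phi @ w - drift) ** 2)))           # training RMSE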

  13. A Formal Basis for Safety Case Patterns

    NASA Technical Reports Server (NTRS)

    Denney, Ewen; Pai, Ganesh

    2013-01-01

    By capturing common structures of successful arguments, safety case patterns provide an approach for reusing strategies for reasoning about safety. In the current state of the practice, patterns exist as descriptive specifications with informal semantics, which not only offer little opportunity for more sophisticated usage such as automated instantiation, composition and manipulation, but also impede standardization efforts and tool interoperability. To address these concerns, this paper gives (i) a formal definition for safety case patterns, clarifying both restrictions on the usage of multiplicity and well-founded recursion in structural abstraction, (ii) formal semantics to patterns, and (iii) a generic data model and algorithm for pattern instantiation. We illustrate our contributions by application to a new pattern, the requirements breakdown pattern, which builds upon our previous work.

  14. Mechanobiological simulations of peri-acetabular bone ingrowth: a comparative analysis of cell-phenotype specific and phenomenological algorithms.

    PubMed

    Mukherjee, Kaushik; Gupta, Sanjay

    2017-03-01

    Several mechanobiology algorithms have been employed to simulate bone ingrowth around porous coated implants. However, there is a scarcity of quantitative comparison between the efficacies of commonly used mechanoregulatory algorithms. The objectives of this study are: (1) to predict peri-acetabular bone ingrowth using cell-phenotype specific algorithm and to compare these predictions with those obtained using phenomenological algorithm and (2) to investigate the influences of cellular parameters on bone ingrowth. The variation in host bone material property and interfacial micromotion of the implanted pelvis were mapped onto the microscale model of implant-bone interface. An overall variation of 17-88% in peri-acetabular bone ingrowth was observed. Despite differences in predicted tissue differentiation patterns during the initial period, both the algorithms predicted similar spatial distribution of neo-tissue layer, after attainment of equilibrium. Results indicated that phenomenological algorithm, being computationally faster than the cell-phenotype specific algorithm, might be used to predict peri-prosthetic bone ingrowth. The cell-phenotype specific algorithm, however, was found to be useful in numerically investigating the influence of alterations in cellular activities on bone ingrowth, owing to biologically related factors. Amongst the host of cellular activities, matrix production rate of bone tissue was found to have predominant influence on peri-acetabular bone ingrowth.

  15. Can you hear me now?

    PubMed Central

    Shacham, Enbal; Stamm, Kate E.; Overton, Edgar T.

    2013-01-01

    Recent studies support technology-based behavioral interventions for individuals with HIV. This study focused on the use of cell phone and internet technologies among a cohort of 515 HIV-infected individuals. Socio-demographic and clinic data were collected among individuals presenting at an urban Midwestern university HIV clinic in 2007. Regular internet usage occurred more often with males, Caucasians, those who were employed, had higher salaries, and were more educated. Higher levels of education and salary >$10,000 predicted regular usage when controlling for race, employment, and gender. Cell phone ownership was associated with being Caucasian, employed, more educated, and salary > $10,000. Employment was the only predictor of owning a cell phone when controlling for income, race, and education. Individuals who were <40 years of age, employed, and more educated were more likely to know how to text message. Employment and post-high school education predicted knowledge of text messaging, when controlling for age. Disparities among internet, cell phone, and text messaging usage exist among HIV-infected individuals. PMID:20024756

  16. A Collaborative Location Based Travel Recommendation System through Enhanced Rating Prediction for the Group of Users

    PubMed Central

    Ravi, Logesh; Vairavasundaram, Subramaniyaswamy

    2016-01-01

    Rapid growth of the web and its applications has created colossal importance for recommender systems. Applied in various domains, recommender systems were designed to generate suggestions such as items or services based on user interests. However, recommender systems face many issues that reduce their effectiveness. Integrating powerful data management techniques into recommender systems can address such issues, and the quality of recommendations can be increased significantly. Recent research on recommender systems reveals the idea of utilizing social network data to enhance traditional recommender systems with better prediction and improved accuracy. This paper surveys social network data based recommender systems by considering the usage of various recommendation algorithms, functionalities of systems, different types of interfaces, filtering techniques, and artificial intelligence techniques. After examining the objectives, methodologies, and data sources of the existing models, the paper helps anyone interested in the development of travel recommendation systems and facilitates future research directions. We also propose a location recommendation system based on the social pertinent trust walker (SPTW) and compare the results with existing baseline random walk models. We then enhance the SPTW model to generate recommendations for groups of users, and we present the results obtained from the experiments. PMID:27069468

  17. Improving optimal control of grid-connected lithium-ion batteries through more accurate battery and degradation modelling

    NASA Astrophysics Data System (ADS)

    Reniers, Jorn M.; Mulder, Grietus; Ober-Blöbaum, Sina; Howey, David A.

    2018-03-01

    The increased deployment of intermittent renewable energy generators opens up opportunities for grid-connected energy storage. Batteries offer significant flexibility but are relatively expensive at present. Battery lifetime is a key factor in the business case, and it depends on usage, but most techno-economic analyses do not account for this. For the first time, this paper quantifies the annual benefits of grid-connected batteries including realistic physical dynamics and nonlinear electrochemical degradation. Three lithium-ion battery models of increasing realism are formulated, and the predicted degradation of each is compared with a large-scale experimental degradation data set (Mat4Bat). A respective improvement in RMS capacity prediction error from 11% to 5% is found by increasing the model accuracy. The three models are then used within an optimal control algorithm to perform price arbitrage over one year, including degradation. Results show that the revenue can be increased substantially while degradation can be reduced by using more realistic models. The estimated best case profit using a sophisticated model is a 175% improvement compared with the simplest model. This illustrates that using a simplistic battery model in a techno-economic assessment of grid-connected batteries might substantially underestimate the business case and lead to erroneous conclusions.

  18. Improving personalized link prediction by hybrid diffusion

    NASA Astrophysics Data System (ADS)

    Liu, Jin-Hu; Zhu, Yu-Xiao; Zhou, Tao

    2016-04-01

    Inspired by traditional link prediction and to solve the problem of recommending friends in social networks, we introduce personalized link prediction in this paper, in which each individual receives an equal number of diversiform predictions. The performances of many classical algorithms are not satisfactory under this framework, so new algorithms are urgently needed. Motivated by previous research in other fields, we generalize the heat conduction process to the framework of personalized link prediction and find that this method outperforms many classical similarity-based algorithms, especially in terms of diversity. In addition, we demonstrate that adding one ground node that is supposed to connect all the nodes in the system greatly benefits the performance of heat conduction. Finally, better hybrid algorithms composed of local random walk and heat conduction are proposed. Numerical results show that the hybrid algorithms can outperform other algorithms simultaneously in all four adopted metrics: AUC, precision, recall and hamming distance. In a word, this work may shed some light on the in-depth understanding of the effect of physical processes in personalized link prediction.
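
    The hybrid can be written compactly: interpolate between mass diffusion and heat conduction on a bipartite adjacency matrix, with a parameter lam balancing accuracy against diversity (the data and parameter value below are illustrative):

        import numpy as np

        A = np.array([[1, 1, 0, 0],      # users x items adjacency (toy data)
                      [1, 0, 1, 0],
                      [0, 1, 1, 1]], dtype=float)
        k_user, k_item = A.sum(axis=1), A.sum(axis=0)

        lam = 0.5                        # 0 = pure heat conduction, 1 = pure mass diffusion
        S = (A / k_user[:, None]).T @ A  # sum over users of a_ui * a_uj / k_u
        W = S / (k_item[:, None] ** (1 - lam) * k_item[None, :] ** lam)
        scores = A @ W.T                 # resources received by each item
        scores[A > 0] = -np.inf          # mask items already linked
        print(scores.argmax(axis=1))     # top suggestion per user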

  19. Evolutionary Dynamic Multiobjective Optimization Via Kalman Filter Prediction.

    PubMed

    Muruganantham, Arrchana; Tan, Kay Chen; Vadakkepat, Prahlad

    2016-12-01

    Evolutionary algorithms are effective in solving static multiobjective optimization problems resulting in the emergence of a number of state-of-the-art multiobjective evolutionary algorithms (MOEAs). Nevertheless, the interest in applying them to solve dynamic multiobjective optimization problems has only been tepid. Benchmark problems, appropriate performance metrics, as well as efficient algorithms are required to further the research in this field. One or more objectives may change with time in dynamic optimization problems. The optimization algorithm must be able to track the moving optima efficiently. A prediction model can learn the patterns from past experience and predict future changes. In this paper, a new dynamic MOEA using Kalman filter (KF) predictions in decision space is proposed to solve the aforementioned problems. The predictions help to guide the search toward the changed optima, thereby accelerating convergence. A scoring scheme is devised to hybridize the KF prediction with a random reinitialization method. Experimental results and performance comparisons with other state-of-the-art algorithms demonstrate that the proposed algorithm is capable of significantly improving the dynamic optimization performance.
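
    The filter's role, predicting where the optima move so the search can be re-seeded, can be sketched with a standard position-velocity Kalman filter on a single decision variable (the dynamics and noise levels are invented):

        import numpy as np

        F = np.array([[1.0, 1.0], [0.0, 1.0]])   # position + velocity model
        H = np.array([[1.0, 0.0]])
        Q, R = np.eye(2) * 0.01, np.array([[0.1]])
        x, P = np.zeros((2, 1)), np.eye(2)

        for t in range(1, 20):
            optimum = np.sin(0.3 * t)            # the moving optimum
            x, P = F @ x, F @ P @ F.T + Q        # predict step
            K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
            x = x + K @ (np.array([[optimum]]) - H @ x)   # update step
            P = (np.eye(2) - K @ H) @ P
        print(float((F @ x)[0, 0]))  # predicted next location of the optimum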

  20. Predicting Vasovagal Syncope from Heart Rate and Blood Pressure: A Prospective Study in 140 Subjects.

    PubMed

    Virag, Nathalie; Erickson, Mark; Taraborrelli, Patricia; Vetter, Rolf; Lim, Phang Boon; Sutton, Richard

    2018-04-28

    We developed a vasovagal syncope (VVS) prediction algorithm for use during head-up tilt with simultaneous analysis of heart rate (HR) and systolic blood pressure (SBP). We previously tested this algorithm retrospectively in 1155 subjects, showing sensitivity 95%, specificity 93% and a median prediction time of 59 s. This prospective, single-center study of 140 subjects evaluated the VVS prediction algorithm and assessed whether the retrospective results were reproducible and clinically relevant. The primary endpoint was VVS prediction with sensitivity and specificity >80%. In subjects referred for 60° head-up tilt (Italian protocol), non-invasive HR and SBP were supplied to the VVS prediction algorithm: simultaneous analysis of RR intervals, SBP trends and their variability, represented by low-frequency power, generated a cumulative risk which was compared with a predetermined VVS risk threshold. When the cumulative risk exceeded the threshold, an alert was generated. Prediction time was the duration between the first alert and syncope. Of 140 subjects enrolled, data were usable for 134. Of 83 tilt-positive subjects (61.9%), 81 VVS events were correctly predicted, and of 51 tilt-negative subjects (38.1%), 45 were correctly identified as negative by the algorithm. The resulting algorithm performance was sensitivity 97.6% and specificity 88.2%, meeting the primary endpoint. Mean VVS prediction time was 2 min 26 s ± 3 min 16 s, with a median of 1 min 25 s. Using only HR and HR variability (without SBP), the mean prediction time fell to 1 min 34 s ± 1 min 45 s, with a median of 1 min 13 s. The VVS prediction algorithm is a clinically relevant tool and could offer applications including providing a patient alarm, shortening tilt-test time, or triggering pacing intervention in implantable devices. Copyright © 2018. Published by Elsevier Inc.

  1. Prediction of cardiovascular risk in rheumatoid arthritis: performance of original and adapted SCORE algorithms.

    PubMed

    Arts, E E A; Popa, C D; Den Broeder, A A; Donders, R; Sandoo, A; Toms, T; Rollefstad, S; Ikdahl, E; Semb, A G; Kitas, G D; Van Riel, P L C M; Fransen, J

    2016-04-01

    Predictive performance of cardiovascular disease (CVD) risk calculators appears suboptimal in rheumatoid arthritis (RA). A disease-specific CVD risk algorithm may improve CVD risk prediction in RA. The objectives of this study are to adapt the Systematic COronary Risk Evaluation (SCORE) algorithm with determinants of CVD risk in RA and to assess the accuracy of CVD risk prediction calculated with the adapted SCORE algorithm. Data from the Nijmegen early RA inception cohort were used. The primary outcome was first CVD events. The SCORE algorithm was recalibrated by reweighting the included traditional CVD risk factors and adapted by adding other potential predictors of CVD. Predictive performance of the recalibrated and adapted SCORE algorithms was assessed, and the adapted SCORE was externally validated. Of the 1016 included patients with RA, 103 patients experienced a CVD event. Discriminatory ability was comparable across the original, recalibrated and adapted SCORE algorithms. The Hosmer-Lemeshow test results indicated that all three algorithms provided poor model fit (p<0.05) for the Nijmegen and external validation cohorts. The adapted SCORE algorithm mainly improves CVD risk estimation in non-event cases and does not show a clear advantage in reclassifying patients with RA who develop CVD (event cases) into more appropriate risk groups. This study demonstrates for the first time that adaptations of the SCORE algorithm do not provide sufficient improvement in risk prediction of future CVD in RA to serve as an appropriate alternative to the original SCORE. Risk assessment using the original SCORE algorithm may underestimate CVD risk in patients with RA. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  2. Parallel approach in RDF query processing

    NASA Astrophysics Data System (ADS)

    Vajgl, Marek; Parenica, Jan

    2017-07-01

    A parallel approach is nowadays a very cheap way to increase computational power, owing to the availability of multithreaded computational units. Such hardware has become a typical part of today's personal computers and notebooks and is widely available. This contribution deals with experiments on how the evaluation of a computationally complex inference algorithm over RDF data can be parallelized on graphics cards to decrease computation time.

  3. Architecting Integrated System Health Management for Airworthiness

    DTIC Science & Technology

    2013-09-01

    aircraft safety and reliability through condition-based maintenance [Miller et al., 1991]. With the same motivation, Integrated System Health Management...diagnostics and prognostics algorithms. 2.2.2 Health and Usage Monitoring System (HUMS) in Helicopters Increased demand for improved operational safety ...offshore shuttle helicopters traversing the petrol installations in the North Sea, and increased demand for improved operational safety and reduced

  4. Distributed topology control algorithm for multihop wireless networks

    NASA Technical Reports Server (NTRS)

    Borbash, S. A.; Jennings, E. H.

    2002-01-01

    We present a network initialization algorithm for wireless networks with distributed intelligence. Each node (agent) has only local, incomplete knowledge, and it must make local decisions to meet a predefined global objective. Our objective is to use power control to establish a topology based on the relative neighborhood graph, which has good overall performance in terms of power usage, low interference, and reliability.
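
    The relative neighborhood graph named here has a direct definition: an edge (u, v) is kept exactly when no third node w is closer to both u and v than they are to each other. A brute-force sketch:

        import numpy as np

        def rng_edges(points):
            d = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
            n, edges = len(points), []
            for u in range(n):
                for v in range(u + 1, n):
                    if not any(max(d[u, w], d[v, w]) < d[u, v]
                               for w in range(n) if w not in (u, v)):
                        edges.append((u, v))
            return edges

        pts = np.random.default_rng(0).random((8, 2))  # toy node positions
        print(rng_edges(pts))  # the links each node keeps at reduced power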

  5. Automatic Online Educational Game Content Creation by Identifying Similar Chinese Characters with Radical Extraction and Graph Matching Algorithms

    ERIC Educational Resources Information Center

    Lai, Jason Kwong-Hung; Leung, Howard; Hu, Zhi-Hui; Tang, Jeff K. T.; Xu, Yun

    2010-01-01

    One of the difficulties in learning Chinese characters is distinguishing similar characters. This can cause misunderstanding and miscommunication in daily life. Thus, it is important for students learning the Chinese language to be able to distinguish similar characters and understand their proper usage. In this paper, the authors propose a game…

  6. Using ant-behavior-based simulation model AntWeb to improve website organization

    NASA Astrophysics Data System (ADS)

    Li, Weigang; Pinheiro Dib, Marcos V.; Teles, Wesley M.; Morais de Andrade, Vlaudemir; Alves de Melo, Alba C. M.; Cariolano, Judas T.

    2002-03-01

    Some web usage mining algorithms have shown potential for finding differences between a website's actual organization and the organization expected by its visitors. However, there is still no efficient method or criterion for a web administrator to measure the performance of a modification. In this paper, we developed AntWeb, a model inspired by ants' behavior, to simulate the sequences in which visitors traverse a website and thereby measure the efficiency of the web structure. We implemented a web usage mining algorithm using backtracking on the intranet website of Politec Informatic Ltd., Brazil. We defined throughput (the ratio of the number of visitors reaching their target pages per time unit to the total number of visitors) as an index to measure the website's performance. We also used the links in a web page to represent the effect of visitors' pheromone trails. For every modification in the website organization, for example, putting a link from the expected location to the target object, the simulation reported the value of throughput as a quick answer about this modification. The experiment showed the stability of our simulation model, and a positive modification to the intranet website of Politec.

  7. Influenza detection and prediction algorithms: comparative accuracy trial in Östergötland county, Sweden, 2008-2012.

    PubMed

    Spreco, A; Eriksson, O; Dahlström, Ö; Timpka, T

    2017-07-01

    Methods for the detection of influenza epidemics and prediction of their progress have seldom been comparatively evaluated using prospective designs. This study aimed to perform a prospective comparative trial of algorithms for the detection and prediction of increased local influenza activity. Data on clinical influenza diagnoses recorded by physicians and syndromic data from a telenursing service were used. Five detection and three prediction algorithms previously evaluated in public health settings were calibrated and then evaluated over 3 years. When applied to diagnostic data, only detection using the Serfling regression method and prediction using the non-adaptive log-linear regression method showed acceptable performance during winter influenza seasons. For the syndromic data, none of the detection algorithms displayed a satisfactory performance, while non-adaptive log-linear regression was the best performing prediction method. We conclude that there is evidence that the available algorithms for influenza detection and prediction display satisfactory performance when applied to local diagnostic data during winter influenza seasons. When applied to local syndromic data, the evaluated algorithms did not display consistent performance. Further evaluations and research on combining methods of these types in public health information infrastructures for 'nowcasting' (integrated detection and prediction) of influenza activity are warranted.
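
    A Serfling-style detector can be outlined as follows: fit a harmonic baseline plus trend and flag weeks exceeding it by a margin. The counts below are simulated, and for brevity the fit is not restricted to non-epidemic weeks, as a careful implementation would be:

        import numpy as np

        rng = np.random.default_rng(0)
        weeks = np.arange(5 * 52)
        counts = rng.poisson(50 + 20 * np.sin(2 * np.pi * weeks / 52 - 1.0))
        counts[200:215] += 80                          # injected epidemic

        # Harmonic regression: intercept, trend, annual sine/cosine terms.
        Xd = np.column_stack([np.ones_like(weeks), weeks,
                              np.sin(2 * np.pi * weeks / 52),
                              np.cos(2 * np.pi * weeks / 52)])
        coef, *_ = np.linalg.lstsq(Xd, counts, rcond=None)
        threshold = Xd @ coef + 2 * (counts - Xd @ coef).std()
        print(np.where(counts > threshold)[0])         # flagged weeks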

  8. Two-dimensional joint inversion of Magnetotelluric and local earthquake data: Discussion on the contribution to the solution of deep subsurface structures

    NASA Astrophysics Data System (ADS)

    Demirci, İsmail; Dikmen, Ünal; Candansayar, M. Emin

    2018-02-01

    Joint inversion of data sets collected using several geophysical exploration methods has gained importance, and associated algorithms have been developed. To explore deep subsurface structures, Magnetotelluric (MT) and local earthquake tomography algorithms are generally used individually. Because both methods rely on natural sources, it is not possible to increase data quality and the resolution of model parameters at will, so deep structures cannot be fully resolved by either method alone. In this paper, we first focus on the effects of both Magnetotelluric and local earthquake data sets on the solution of deep structures and discuss the results on the basis of the resolving power of the methods. The presence of deep-focus seismic sources increases the resolution of deep structures, while the conductivity distribution of relatively shallow structures can be resolved with high resolution by the MT algorithm. We therefore developed a new joint inversion algorithm based on the cross-gradient function in order to jointly invert Magnetotelluric and local earthquake data sets. In this study, we added a new regularization parameter to the second term of the parameter correction vector of Gallardo and Meju (2003). The new regularization parameter enhances the stability of the algorithm and controls the contribution of the cross-gradient term to the solution. The results show that even in cases where resistivity and velocity boundaries differ, both methods influence each other positively. In addition, the regions of common structural boundaries of the models are clearly mapped compared with the original models. Furthermore, deep structures are identified satisfactorily even with the minimum number of seismic sources. In this paper, in order to inform future studies, we discuss joint inversion of Magnetotelluric and local earthquake data sets only in two-dimensional space. In light of these results, and with the acceleration of three-dimensional modelling and inversion algorithms, it should become easier to identify underground structures with high resolution.
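
    The cross-gradient function at the heart of this coupling is t(x, z) = dm1/dx * dm2/dz - dm1/dz * dm2/dx, which vanishes wherever the two models' structural boundaries are aligned; a small numerical sketch:

        import numpy as np

        def cross_gradient(m1, m2, dx=1.0, dz=1.0):
            dm1_dz, dm1_dx = np.gradient(m1, dz, dx)
            dm2_dz, dm2_dx = np.gradient(m2, dz, dx)
            return dm1_dx * dm2_dz - dm1_dz * dm2_dx

        z, x = np.mgrid[0:50, 0:80]
        resistivity = np.tanh((x - 40) / 5.0)       # toy 2-D models that share
        velocity = 3 + 2 * np.tanh((x - 40) / 5.0)  # the same boundary
        print(np.abs(cross_gradient(resistivity, velocity)).max())  # ~0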

  9. NWRA AVOSS Wake Vortex Prediction Algorithm. 3.1.1

    NASA Technical Reports Server (NTRS)

    Robins, R. E.; Delisi, D. P.; Hinton, David (Technical Monitor)

    2002-01-01

    This report provides a detailed description of the wake vortex prediction algorithm used in the Demonstration Version of NASA's Aircraft Vortex Spacing System (AVOSS). The report includes all equations used in the algorithm, an explanation of how to run the algorithm, and a discussion of how the source code for the algorithm is organized. Several appendices contain important supplementary information, including suggestions for enhancing the algorithm and results from test cases.

  10. The Scope of Usage-Based Theory

    PubMed Central

    Ibbotson, Paul

    2013-01-01

    Usage-based approaches typically draw on a relatively small set of cognitive processes, such as categorization, analogy, and chunking to explain language structure and function. The goal of this paper is to first review the extent to which the “cognitive commitment” of usage-based theory has had success in explaining empirical findings across domains, including language acquisition, processing, and typology. We then look at the overall strengths and weaknesses of usage-based theory and highlight where there are significant debates. Finally, we draw special attention to a set of culturally generated structural patterns that seem to lie beyond the explanation of core usage-based cognitive processes. In this context we draw a distinction between cognition permitting language structure vs. cognition entailing language structure. As well as addressing the need for greater clarity on the mechanisms of generalizations and the fundamental units of grammar, we suggest that integrating culturally generated structures within existing cognitive models of use will generate tighter predictions about how language works. PMID:23658552

  11. An approximate dynamic programming approach to resource management in multi-cloud scenarios

    NASA Astrophysics Data System (ADS)

    Pietrabissa, Antonio; Priscoli, Francesco Delli; Di Giorgio, Alessandro; Giuseppi, Alessandro; Panfili, Martina; Suraci, Vincenzo

    2017-03-01

    The programmability and virtualisation of network resources are crucial to deploy scalable Information and Communications Technology (ICT) services. The increasing demand for cloud services, mainly devoted to storage and computing, requires a new functional element, the Cloud Management Broker (CMB), aimed at managing multiple cloud resources to meet the customers' requirements and, simultaneously, to optimise their usage. This paper proposes a multi-cloud resource allocation algorithm that manages resource requests with the aim of maximising the CMB revenue over time. The algorithm is based on Markov decision process modelling and relies on reinforcement learning techniques to find an approximate solution online.
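
    As a hedged illustration of the approach described (a Markov decision process solved approximately online by reinforcement learning), the sketch below applies tabular Q-learning to a toy two-cloud allocation problem. The state, actions, revenue and departure dynamics are invented for the example and are not the paper's model.

      import random
      from collections import defaultdict

      # Toy MDP: state = free capacity on each of two clouds; action = reject (0)
      # or allocate the incoming unit request to cloud 1 or 2. Reward = revenue
      # minus a hypothetical hosting cost that differs per cloud.
      CAPACITY, PRICE, COST = 5, 1.0, (0.2, 0.5)
      Q = defaultdict(float)
      alpha, gamma, eps = 0.1, 0.95, 0.1

      def step(state, action):
          free = list(state)
          if action == 0 or free[action - 1] == 0:
              reward = 0.0                      # request rejected or no capacity
          else:
              free[action - 1] -= 1
              reward = PRICE - COST[action - 1]
          if random.random() < 0.3:             # a previously allocated unit departs
              busy = [i for i in (0, 1) if free[i] < CAPACITY]
              if busy:
                  free[random.choice(busy)] += 1
          return tuple(free), reward

      state = (CAPACITY, CAPACITY)
      for _ in range(50000):
          action = (random.randrange(3) if random.random() < eps
                    else max(range(3), key=lambda a: Q[state, a]))
          nxt, r = step(state, action)
          Q[state, action] += alpha * (r + gamma * max(Q[nxt, a] for a in range(3))
                                       - Q[state, action])
          state = nxt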

  12. Essays on Information Assurance: Examination of Detrimental Consequences of Information Security, Privacy, and Extreme Event Concerns on Individual and Organizational Use of Systems

    ERIC Educational Resources Information Center

    Park, Insu

    2010-01-01

    The purpose of this study is to explore system users' behavior on information systems (IS) under various circumstances (e.g., email usage and malware threats, online communication at the individual level, and IS usage in organizations). Specifically, the first essay develops a method for analyzing and predicting the impact category of malicious code, particularly…

  13. Predicting Loss-of-Control Boundaries Toward a Piloting Aid

    NASA Technical Reports Server (NTRS)

    Barlow, Jonathan; Stepanyan, Vahram; Krishnakumar, Kalmanje

    2012-01-01

    This work presents an approach to predicting loss-of-control with the goal of providing the pilot a decision aid focused on maintaining the pilot's control action within predicted loss-of-control boundaries. The predictive architecture combines quantitative loss-of-control boundaries, a data-based predictive control boundary estimation algorithm and an adaptive prediction method to estimate Markov model parameters in real-time. The data-based loss-of-control boundary estimation algorithm estimates the boundary of a safe set of control inputs that will keep the aircraft within the loss-of-control boundaries for a specified time horizon. The adaptive prediction model generates estimates of the system Markov Parameters, which are used by the data-based loss-of-control boundary estimation algorithm. The combined algorithm is applied to a nonlinear generic transport aircraft to illustrate the features of the architecture.

  14. Diagnosis and prediction of periodontally compromised teeth using a deep learning-based convolutional neural network algorithm.

    PubMed

    Lee, Jae-Hong; Kim, Do-Hyung; Jeong, Seong-Nyum; Choi, Seong-Ho

    2018-04-01

    The aim of the current study was to develop a computer-assisted detection system based on a deep convolutional neural network (CNN) algorithm and to evaluate the potential usefulness and accuracy of this system for the diagnosis and prediction of periodontally compromised teeth (PCT). Combining pretrained deep CNN architecture and a self-trained network, periapical radiographic images were used to determine the optimal CNN algorithm and weights. The diagnostic and predictive accuracy, sensitivity, specificity, positive predictive value, negative predictive value, receiver operating characteristic (ROC) curve, area under the ROC curve, confusion matrix, and 95% confidence intervals (CIs) were calculated using our deep CNN algorithm, based on a Keras framework in Python. The periapical radiographic dataset was split into training (n=1,044), validation (n=348), and test (n=348) datasets. With the deep learning algorithm, the diagnostic accuracy for PCT was 81.0% for premolars and 76.7% for molars. Using 64 premolars and 64 molars that were clinically diagnosed as severe PCT, the accuracy of predicting extraction was 82.8% (95% CI, 70.1%-91.2%) for premolars and 73.4% (95% CI, 59.9%-84.0%) for molars. We demonstrated that the deep CNN algorithm was useful for assessing the diagnosis and predictability of PCT. Therefore, with further optimization of the PCT dataset and improvements in the algorithm, a computer-aided detection system can be expected to become an effective and efficient method of diagnosing and predicting PCT.
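
    A minimal sketch of the general pattern described (a pretrained deep CNN backbone combined with a self-trained classification head in Keras) is shown below. The choice of VGG16, the input size, and the layer widths are illustrative assumptions, not the study's exact architecture.

      import tensorflow as tf
      from tensorflow.keras import layers, models

      # Pretrained backbone + self-trained head; VGG16 and the sizes below are
      # illustrative assumptions, not the study's exact setup.
      base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                         input_shape=(224, 224, 3))
      base.trainable = False  # keep pretrained convolutional features fixed

      model = models.Sequential([
          base,
          layers.Flatten(),
          layers.Dense(256, activation="relu"),
          layers.Dropout(0.5),
          layers.Dense(1, activation="sigmoid"),  # PCT vs. non-PCT
      ])
      model.compile(optimizer="adam", loss="binary_crossentropy",
                    metrics=["accuracy", tf.keras.metrics.AUC(name="auc")])
      # model.fit(train_ds, validation_data=val_ds, epochs=20) with the
      # 1,044/348/348 train/validation/test split described above.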

  15. A systematic investigation of computation models for predicting Adverse Drug Reactions (ADRs).

    PubMed

    Kuang, Qifan; Wang, MinQi; Li, Rong; Dong, YongCheng; Li, Yizhou; Li, Menglong

    2014-01-01

    Early and accurate identification of adverse drug reactions (ADRs) is critically important for drug development and clinical safety. Computer-aided prediction of ADRs has attracted increasing attention in recent years, and many computational models have been proposed. However, because of the lack of systematic analysis and comparison of the different computational models, there remain limitations in designing more effective algorithms and selecting more useful features. There is therefore an urgent need to review and analyze previous computation models to obtain general conclusions that can provide useful guidance to construct more effective computational models to predict ADRs. In the current study, the main work is to compare and analyze the performance of existing computational methods to predict ADRs, by implementing and evaluating additional algorithms that have been earlier used for predicting drug targets. Our results indicated that topological and intrinsic features were complementary to an extent and that the Jaccard coefficient had an important and general effect on the prediction of drug-ADR associations. By comparing the structure of each algorithm, we found that the final formulas of these algorithms could all be converted to linear models in form; based on this finding, we propose a new algorithm, the general weighted profile method, which yielded the best overall performance among the algorithms investigated in this paper. Several meaningful conclusions and useful findings regarding the prediction of ADRs are provided for selecting optimal features and algorithms.
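
    To make the role of the Jaccard coefficient concrete, the sketch below scores a candidate drug against an ADR by averaging its Jaccard similarity to drugs already associated with that ADR. This is a simple weighted-profile-style scheme written for illustration; the paper's general weighted profile method has its own linear form, and the feature sets here are hypothetical.

      def jaccard(a, b):
          """Jaccard coefficient between two feature sets."""
          a, b = set(a), set(b)
          return len(a & b) / len(a | b) if a | b else 0.0

      def adr_score(drug_features, known_drugs, candidate_features):
          """Score a candidate drug against an ADR by averaging its Jaccard
          similarity to drugs already associated with that ADR (an illustrative
          scheme; the paper's exact linear form differs)."""
          sims = [jaccard(candidate_features, drug_features[d]) for d in known_drugs]
          return sum(sims) / len(sims) if sims else 0.0

      drug_features = {
          "drugA": {"targetP1", "targetP2", "pathwayX"},
          "drugB": {"targetP2", "pathwayX", "pathwayY"},
          "drugC": {"targetP9"},
      }
      print(adr_score(drug_features, ["drugA", "drugB"], {"targetP2", "pathwayX"}))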

  16. Machine learning: a useful radiological adjunct in determination of a newly diagnosed glioma's grade and IDH status.

    PubMed

    De Looze, Céline; Beausang, Alan; Cryan, Jane; Loftus, Teresa; Buckley, Patrick G; Farrell, Michael; Looby, Seamus; Reilly, Richard; Brett, Francesca; Kearney, Hugh

    2018-05-16

    Machine learning methods have been introduced as a computer-aided diagnostic tool, with applications to glioma characterisation on MRI. Such an algorithmic approach may provide a useful adjunct for a rapid and accurate diagnosis of a glioma. The aim of this study is to devise a machine learning algorithm that may be used by radiologists in routine practice to aid diagnosis of both WHO grade and IDH mutation status in de novo gliomas. To evaluate the status quo, we interrogated the accuracy of neuroradiology reports in relation to WHO grade: grade II 96.49% (95% confidence intervals [CI] 0.88, 0.99); III 36.51% (95% CI 0.24, 0.50); IV 72.9% (95% CI 0.67, 0.78). We derived five MRI parameters from the same diagnostic brain scans, in under two minutes per case, and then supplied these data to a random forest algorithm. Machine learning resulted in a high level of accuracy in prediction of tumour grade: grade II/III, area under the receiver operating characteristic curve (AUC) = 98%, sensitivity = 0.82, specificity = 0.94; grade II/IV, AUC = 100%, sensitivity = 1.0, specificity = 1.0; grade III/IV, AUC = 97%, sensitivity = 0.83, specificity = 0.97. Furthermore, machine learning also facilitated the discrimination of IDH status: AUC of 88%, sensitivity = 0.81, specificity = 0.77. These data demonstrate the ability of machine learning to accurately classify diffuse gliomas by both WHO grade and IDH status from routine MRI alone, without significant image processing, which may facilitate its usage as a diagnostic adjunct in clinical practice.

  17. An improved reversible data hiding algorithm based on modification of prediction errors

    NASA Astrophysics Data System (ADS)

    Jafar, Iyad F.; Hiary, Sawsan A.; Darabkh, Khalid A.

    2014-04-01

    Reversible data hiding algorithms are concerned with the ability to hide data and to recover the original digital image upon extraction. This issue is of interest in medical and military imaging applications. One particular class of such algorithms relies on the idea of histogram shifting of prediction errors. In this paper, we propose an improvement over one popular algorithm in this class. The improvement is achieved by employing a different predictor, using more bins in the prediction-error histogram, and adding multilevel embedding. The proposed extension shows significant improvement over the original algorithm and its variations.
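
    A minimal sketch of the underlying technique, histogram shifting of prediction errors, is given below using a simple left-neighbor predictor and a single peak bin; the paper's improved predictor, extra bins, and multilevel embedding are not reproduced, and overflow handling (a location map) is omitted for brevity.

      import numpy as np

      def embed(img, bits, peak=0):
          """Reversible embedding by histogram shifting of prediction errors.
          Predictor: the (already marked) left neighbor; errors equal to `peak`
          absorb one payload bit, larger errors shift up by one."""
          out = img.astype(np.int64)
          k = 0
          for r in range(out.shape[0]):
              for c in range(1, out.shape[1]):
                  e = int(img[r, c]) - int(out[r, c - 1])
                  if e > peak:
                      out[r, c] += 1                   # shift to make room
                  elif e == peak and k < len(bits):
                      out[r, c] += bits[k]; k += 1     # expand: carry one bit
          return out

      def extract(marked, n_bits, peak=0):
          """Recover the payload and the original image exactly."""
          orig = marked.copy(); bits = []
          for r in range(marked.shape[0]):
              for c in range(1, marked.shape[1]):
                  e = int(marked[r, c]) - int(marked[r, c - 1])
                  if e == peak and len(bits) < n_bits:
                      bits.append(0)
                  elif e == peak + 1 and len(bits) < n_bits:
                      bits.append(1); orig[r, c] -= 1
                  elif e > peak + 1:
                      orig[r, c] -= 1                  # undo the shift
          return bits, orig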

  18. Unmitigated numerical solution to the diffraction term in the parabolic nonlinear ultrasound wave equation.

    PubMed

    Hasani, Mojtaba H; Gharibzadeh, Shahriar; Farjami, Yaghoub; Tavakkoli, Jahan

    2013-09-01

    Various numerical algorithms have been developed to solve the Khokhlov-Kuznetsov-Zabolotskaya (KZK) parabolic nonlinear wave equation. In this work, a generalized time-domain numerical algorithm is proposed to solve the diffraction term of the KZK equation. This algorithm solves the transverse Laplacian operator of the KZK equation in three-dimensional (3D) Cartesian coordinates using a finite-difference method based on the five-point implicit backward finite difference and the five-point Crank-Nicolson finite difference discretization techniques. This leads to a more uniform discretization of the Laplacian operator which in turn results in fewer calculation gridding nodes without compromising accuracy in the diffraction term. In addition, a new empirical algorithm based on the LU decomposition technique is proposed to solve the system of linear equations obtained from this discretization. The proposed empirical algorithm improves the calculation speed and memory usage, while the order of computational complexity remains linear in calculation of the diffraction term in the KZK equation. For evaluating the accuracy of the proposed algorithm, two previously published algorithms are used as comparison references: the conventional 2D Texas code and its generalization for 3D geometries. The results show that the accuracy/efficiency performance of the proposed algorithm is comparable with the established time-domain methods.
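
    Implicit finite-difference discretizations of the kind described reduce, line by line, to banded linear systems. The sketch below shows the classic tridiagonal LU (Thomas) solver, which runs in linear time and memory and illustrates why an LU-based approach keeps the computational complexity of the diffraction term linear; it is not the authors' exact empirical algorithm.

      import numpy as np

      def thomas(a, b, c, d):
          """Solve a tridiagonal system via LU decomposition in O(n) time/memory.
          a: sub-diagonal (len n-1), b: diagonal (len n),
          c: super-diagonal (len n-1), d: right-hand side (len n)."""
          n = len(b)
          cp, dp = np.empty(n - 1), np.empty(n)
          cp[0] = c[0] / b[0]
          dp[0] = d[0] / b[0]
          for i in range(1, n):                 # forward elimination (the LU sweep)
              denom = b[i] - a[i - 1] * cp[i - 1]
              if i < n - 1:
                  cp[i] = c[i] / denom
              dp[i] = (d[i] - a[i - 1] * dp[i - 1]) / denom
          x = np.empty(n)
          x[-1] = dp[-1]
          for i in range(n - 2, -1, -1):        # back substitution
              x[i] = dp[i] - cp[i] * x[i + 1]
          return x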

  19. An accelerated non-Gaussianity based multichannel predictive deconvolution method with the limited supporting region of filters

    NASA Astrophysics Data System (ADS)

    Li, Zhong-xiao; Li, Zhen-chun

    2016-09-01

    The multichannel predictive deconvolution can be conducted in overlapping temporal and spatial data windows to solve for the 2D predictive filter for multiple removal. Generally, the 2D predictive filter can better remove multiples, at the cost of more computation time, compared with the 1D predictive filter. In this paper we first use a cross-correlation strategy to determine the limited supporting region of filters, where the coefficients play the major role for multiple removal in the filter coefficient space. To solve for the 2D predictive filter, the traditional multichannel predictive deconvolution uses the least squares (LS) algorithm, which requires that primaries and multiples be orthogonal. To relax the orthogonality assumption, the iterative reweighted least squares (IRLS) algorithm and the fast iterative shrinkage thresholding (FIST) algorithm have been used to solve for the 2D predictive filter in the multichannel predictive deconvolution with the non-Gaussian maximization (L1-norm minimization) constraint on primaries. The FIST algorithm has been demonstrated to be a faster alternative to the IRLS algorithm. In this paper we introduce the FIST algorithm to solve for the filter coefficients in the limited supporting region of filters. Compared with FIST-based multichannel predictive deconvolution without the limited supporting region of filters, the proposed method reduces the computational burden effectively while achieving similar accuracy. Additionally, the proposed method better balances multiple removal and primary preservation than the traditional LS-based multichannel predictive deconvolution and FIST-based single-channel predictive deconvolution. Synthetic and field data sets demonstrate the effectiveness of the proposed method.
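
    For reference, a generic fast iterative shrinkage-thresholding iteration for the L1-regularized least-squares problem min ||Ax - b||^2 + lam*||x||_1 is sketched below; the seismic-specific windowing and the limited supporting region of filters are not reproduced.

      import numpy as np

      def fista(A, b, lam, n_iter=200):
          """FISTA for min ||Ax - b||^2 + lam*||x||_1. The L1 penalty promotes
          sparse, spiky primaries, relaxing the orthogonality assumption
          implicit in the least-squares solution."""
          L = 2 * np.linalg.norm(A, 2) ** 2     # Lipschitz constant of the gradient
          x = np.zeros(A.shape[1]); y = x.copy(); t = 1.0
          soft = lambda v, s: np.sign(v) * np.maximum(np.abs(v) - s, 0.0)
          for _ in range(n_iter):
              grad = 2 * A.T @ (A @ y - b)
              x_new = soft(y - grad / L, lam / L)           # proximal gradient step
              t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
              y = x_new + ((t - 1) / t_new) * (x_new - x)   # momentum step
              x, t = x_new, t_new
          return x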

  20. Mortality risk score prediction in an elderly population using machine learning.

    PubMed

    Rose, Sherri

    2013-03-01

    Standard practice for prediction often relies on parametric regression methods. Interesting new methods from the machine learning literature have been introduced in epidemiologic studies, such as random forest and neural networks. However, a priori, an investigator will not know which algorithm to select and may wish to try several. Here I apply the super learner, an ensembling machine learning approach that combines multiple algorithms into a single algorithm and returns a prediction function with the best cross-validated mean squared error. Super learning is a generalization of stacking methods. I used super learning in the Study of Physical Performance and Age-Related Changes in Sonomans (SPPARCS) to predict death among 2,066 residents of Sonoma, California, aged 54 years or more during the period 1993-1999. The super learner for predicting death (risk score) improved upon all single algorithms in the collection of algorithms, although its performance was similar to that of several algorithms. Super learner outperformed the worst algorithm (neural networks) by 44% with respect to estimated cross-validated mean squared error and had an R2 value of 0.201. The improvement of super learner over random forest with respect to R2 was approximately 2-fold. Alternatives for risk score prediction include the super learner, which can provide improved performance.
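
    A minimal sketch of the super learner idea, cross-validated predictions from several candidate algorithms combined by non-negative weights chosen to minimise mean squared error, is given below using scikit-learn and SciPy; the candidate library and the weighting scheme are simplified relative to the full super learner framework used in this study.

      import numpy as np
      from scipy.optimize import nnls
      from sklearn.model_selection import KFold
      from sklearn.linear_model import LinearRegression
      from sklearn.ensemble import RandomForestRegressor
      from sklearn.neural_network import MLPRegressor

      def super_learner(X, y, learners, n_splits=10):
          """Stack candidate algorithms: build cross-validated predictions,
          then find non-negative weights minimising MSE (normalised to sum 1)."""
          Z = np.zeros((len(y), len(learners)))
          for train, test in KFold(n_splits, shuffle=True, random_state=0).split(X):
              for j, make in enumerate(learners):
                  Z[test, j] = make().fit(X[train], y[train]).predict(X[test])
          w, _ = nnls(Z, y)
          w /= w.sum()
          fitted = [make().fit(X, y) for make in learners]  # refit on all data
          return lambda Xnew: np.column_stack(
              [m.predict(Xnew) for m in fitted]) @ w

      learners = [LinearRegression,
                  lambda: RandomForestRegressor(n_estimators=200, random_state=0),
                  lambda: MLPRegressor(max_iter=1000, random_state=0)]
      # predict_risk = super_learner(X, y, learners); predict_risk(X_new)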

  1. Improved hybrid optimization algorithm for 3D protein structure prediction.

    PubMed

    Zhou, Changjun; Hou, Caixia; Wei, Xiaopeng; Zhang, Qiang

    2014-07-01

    A new improved hybrid optimization algorithm, the PGATS algorithm, based on the toy off-lattice model, is presented for three-dimensional protein structure prediction problems. The algorithm combines particle swarm optimization (PSO), the genetic algorithm (GA), and tabu search (TS). In addition, we adopt several improved strategies: a stochastic disturbance factor is introduced into the particle swarm optimization to improve its search ability; the crossover and mutation operations of the genetic algorithm are changed to a kind of random linear method; and, finally, the tabu search algorithm is improved by appending a mutation operator. Through this combination of strategies and algorithms, protein structure prediction (PSP) in a 3D off-lattice model is achieved. The PSP problem is NP-hard, but it can be treated as a global optimization problem with multiple extrema and multiple parameters; this is the theoretical principle behind the hybrid optimization algorithm proposed in this paper. The algorithm combines local search and global search, which overcomes the shortcomings of any single algorithm and gives full play to the advantages of each. The method is validated on the universal standard benchmarks: Fibonacci sequences and real protein sequences. Experiments show that the proposed method outperforms the single algorithms in the accuracy of the calculated protein sequence energy values, proving it to be an effective way to predict the structure of proteins.

  2. A Novel Method to Predict Highly Expressed Genes Based on Radius Clustering and Relative Synonymous Codon Usage.

    PubMed

    Tran, Tuan-Anh; Vo, Nam Tri; Nguyen, Hoang Duc; Pham, Bao The

    2015-12-01

    Recombinant proteins play an important role in many aspects of life and have generated huge income, notably in the industrial enzyme business. A gene is introduced into a vector and expressed in a host organism (for example, E. coli) to obtain a high productivity of the target protein. However, genes transferred from particular organisms are not usually compatible with the host's expression system, for various reasons, for example, codon usage bias, GC content, repetitive sequences, and secondary structure. The solution is to develop programs that design a nucleotide sequence from a peptide sequence using the properties of highly expressed genes (HEGs) of the host organism. Existing HEG data determined by experimental and computer-based methods are insufficient in both quality and quantity. Therefore, the demand for a new HEG prediction method is critical. We propose a new method for predicting HEGs, together with criteria to evaluate gene optimization. Codon usage bias is weighted by amplifying the difference between HEGs and non-highly expressed genes (non-HEGs). The number of predicted HEGs is 5% of the genome. In comparison with Puigbò's method, our result is twice as good in kernel ratio and kernel sensitivity. Concerning transcription/translation factor proteins (TF), the proposed method gives low TF sensitivity, while Puigbò's method gives a moderate one. In summary, the results indicate that the proposed method is a good optional method for predicting optimized genes for particular organisms, and we generated an HEG database for further research in gene design.
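
    Relative synonymous codon usage (RSCU), the quantity this method builds on, is the observed count of a codon divided by the count expected if all synonymous codons for that amino acid were used equally. A minimal sketch follows, with the synonym table abbreviated to three amino acids for brevity; a full implementation would cover the complete genetic code.

      from collections import Counter

      # Synonymous codon families (standard genetic code), abbreviated here.
      SYNONYMS = {
          "F": ["TTT", "TTC"],
          "L": ["TTA", "TTG", "CTT", "CTC", "CTA", "CTG"],
          "K": ["AAA", "AAG"],
      }

      def rscu(cds):
          """RSCU of codon j for amino acid i: observed count divided by the
          count expected under equal usage of all synonymous codons."""
          codons = Counter(cds[i:i + 3] for i in range(0, len(cds) - 2, 3))
          values = {}
          for aa, fam in SYNONYMS.items():
              total = sum(codons[c] for c in fam)
              for c in fam:
                  values[c] = len(fam) * codons[c] / total if total else 0.0
          return values

      print(rscu("TTTTTCTTTAAAAAGAAA"))  # TTT used 2 of 3 times -> RSCU 4/3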

  3. Utilizing Machine Learning and Automated Performance Metrics to Evaluate Robot-Assisted Radical Prostatectomy Performance and Predict Outcomes.

    PubMed

    Hung, Andrew J; Chen, Jian; Che, Zhengping; Nilanon, Tanachat; Jarc, Anthony; Titus, Micha; Oh, Paul J; Gill, Inderbir S; Liu, Yan

    2018-05-01

    Surgical performance is critical for clinical outcomes. We present a novel machine learning (ML) method of processing automated performance metrics (APMs) to evaluate surgical performance and predict clinical outcomes after robot-assisted radical prostatectomy (RARP). We trained three ML algorithms utilizing APMs directly from robot system data (training material) and hospital length of stay (LOS; training label) (≤2 days and >2 days) from 78 RARP cases, and selected the algorithm with the best performance. The selected algorithm categorized the cases as "Predicted as expected LOS (pExp-LOS)" and "Predicted as extended LOS (pExt-LOS)." We compared postoperative outcomes of the two groups (Kruskal-Wallis/Fisher's exact tests). The algorithm then predicted individual clinical outcomes, which we compared with actual outcomes (Spearman's correlation/Fisher's exact tests). Finally, we identified five most relevant APMs adopted by the algorithm during predicting. The "Random Forest-50" (RF-50) algorithm had the best performance, reaching 87.2% accuracy in predicting LOS (73 cases as "pExp-LOS" and 5 cases as "pExt-LOS"). The "pExp-LOS" cases outperformed the "pExt-LOS" cases in surgery time (3.7 hours vs 4.6 hours, p = 0.007), LOS (2 days vs 4 days, p = 0.02), and Foley duration (9 days vs 14 days, p = 0.02). Patient outcomes predicted by the algorithm had significant association with the "ground truth" in surgery time (p < 0.001, r = 0.73), LOS (p = 0.05, r = 0.52), and Foley duration (p < 0.001, r = 0.45). The five most relevant APMs, adopted by the RF-50 algorithm in predicting, were largely related to camera manipulation. To our knowledge, ours is the first study to show that APMs and ML algorithms may help assess surgical RARP performance and predict clinical outcomes. With further accrual of clinical data (oncologic and functional data), this process will become increasingly relevant and valuable in surgical assessment and training.

  4. Can machine-learning improve cardiovascular risk prediction using routine clinical data?

    PubMed Central

    Kai, Joe; Garibaldi, Jonathan M.; Qureshi, Nadeem

    2017-01-01

    Background Current approaches to predict cardiovascular risk fail to identify many people who would benefit from preventive treatment, while others receive unnecessary intervention. Machine-learning offers opportunity to improve accuracy by exploiting complex interactions between risk factors. We assessed whether machine-learning can improve cardiovascular risk prediction. Methods Prospective cohort study using routine clinical data of 378,256 patients from UK family practices, free from cardiovascular disease at outset. Four machine-learning algorithms (random forest, logistic regression, gradient boosting machines, neural networks) were compared to an established algorithm (American College of Cardiology guidelines) to predict first cardiovascular event over 10-years. Predictive accuracy was assessed by area under the ‘receiver operating curve’ (AUC); and sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV) to predict 7.5% cardiovascular risk (threshold for initiating statins). Findings 24,970 incident cardiovascular events (6.6%) occurred. Compared to the established risk prediction algorithm (AUC 0.728, 95% CI 0.723–0.735), machine-learning algorithms improved prediction: random forest +1.7% (AUC 0.745, 95% CI 0.739–0.750), logistic regression +3.2% (AUC 0.760, 95% CI 0.755–0.766), gradient boosting +3.3% (AUC 0.761, 95% CI 0.755–0.766), neural networks +3.6% (AUC 0.764, 95% CI 0.759–0.769). The highest achieving (neural networks) algorithm predicted 4,998/7,404 cases (sensitivity 67.5%, PPV 18.4%) and 53,458/75,585 non-cases (specificity 70.7%, NPV 95.7%), correctly predicting 355 (+7.6%) more patients who developed cardiovascular disease compared to the established algorithm. Conclusions Machine-learning significantly improves accuracy of cardiovascular risk prediction, increasing the number of patients identified who could benefit from preventive treatment, while avoiding unnecessary treatment of others. PMID:28376093

  5. Can machine-learning improve cardiovascular risk prediction using routine clinical data?

    PubMed

    Weng, Stephen F; Reps, Jenna; Kai, Joe; Garibaldi, Jonathan M; Qureshi, Nadeem

    2017-01-01

    Current approaches to predict cardiovascular risk fail to identify many people who would benefit from preventive treatment, while others receive unnecessary intervention. Machine-learning offers opportunity to improve accuracy by exploiting complex interactions between risk factors. We assessed whether machine-learning can improve cardiovascular risk prediction. Prospective cohort study using routine clinical data of 378,256 patients from UK family practices, free from cardiovascular disease at outset. Four machine-learning algorithms (random forest, logistic regression, gradient boosting machines, neural networks) were compared to an established algorithm (American College of Cardiology guidelines) to predict first cardiovascular event over 10-years. Predictive accuracy was assessed by area under the 'receiver operating curve' (AUC); and sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV) to predict 7.5% cardiovascular risk (threshold for initiating statins). 24,970 incident cardiovascular events (6.6%) occurred. Compared to the established risk prediction algorithm (AUC 0.728, 95% CI 0.723-0.735), machine-learning algorithms improved prediction: random forest +1.7% (AUC 0.745, 95% CI 0.739-0.750), logistic regression +3.2% (AUC 0.760, 95% CI 0.755-0.766), gradient boosting +3.3% (AUC 0.761, 95% CI 0.755-0.766), neural networks +3.6% (AUC 0.764, 95% CI 0.759-0.769). The highest achieving (neural networks) algorithm predicted 4,998/7,404 cases (sensitivity 67.5%, PPV 18.4%) and 53,458/75,585 non-cases (specificity 70.7%, NPV 95.7%), correctly predicting 355 (+7.6%) more patients who developed cardiovascular disease compared to the established algorithm. Machine-learning significantly improves accuracy of cardiovascular risk prediction, increasing the number of patients identified who could benefit from preventive treatment, while avoiding unnecessary treatment of others.

  6. A unified algorithm for predicting partition coefficients for PBPK modeling of drugs and environmental chemicals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peyret, Thomas; Poulin, Patrick; Krishnan, Kannan, E-mail: kannan.krishnan@umontreal.ca

    The algorithms in the literature that focus on predicting the tissue:blood PC (P_tb) for environmental chemicals, and the tissue:plasma PC based on total (K_p) or unbound concentration (K_pu) for drugs, differ in their consideration of binding to hemoglobin, plasma proteins and charged phospholipids. The objective of the present study was to develop a unified algorithm such that P_tb, K_p and K_pu for both drugs and environmental chemicals could be predicted. The development of the unified algorithm was accomplished by integrating all mechanistic algorithms previously published to compute the PCs. Furthermore, the algorithm was structured in such a way as to facilitate predictions of the distribution of organic compounds at the macro (i.e. whole tissue) and micro (i.e. cells and fluids) levels. The resulting unified algorithm was applied to compute the rat P_tb, K_p or K_pu of muscle (n = 174), liver (n = 139) and adipose tissue (n = 141) for acidic, neutral, zwitterionic and basic drugs as well as ketones, acetate esters, alcohols, aliphatic hydrocarbons, aromatic hydrocarbons and ethers. The unified algorithm adequately reproduced the values predicted previously by the published algorithms for a total of 142 drugs and chemicals. The sensitivity analysis demonstrated the relative importance of the various compound properties reflective of specific mechanistic determinants relevant to the prediction of PC values of drugs and environmental chemicals. Overall, the present unified algorithm uniquely facilitates the computation of macro- and micro-level PCs for developing organ- and cellular-level PBPK models for both chemicals and drugs.

  7. Development and Validation of an Algorithm to Identify Planned Readmissions From Claims Data.

    PubMed

    Horwitz, Leora I; Grady, Jacqueline N; Cohen, Dorothy B; Lin, Zhenqiu; Volpe, Mark; Ngo, Chi K; Masica, Andrew L; Long, Theodore; Wang, Jessica; Keenan, Megan; Montague, Julia; Suter, Lisa G; Ross, Joseph S; Drye, Elizabeth E; Krumholz, Harlan M; Bernheim, Susannah M

    2015-10-01

    It is desirable not to include planned readmissions in readmission measures because they represent deliberate, scheduled care. To develop an algorithm to identify planned readmissions, describe its performance characteristics, and identify improvements. Consensus-driven algorithm development and chart review validation study at 7 acute-care hospitals in 2 health systems. For development, all discharges qualifying for the publicly reported hospital-wide readmission measure. For validation, all qualifying same-hospital readmissions that were characterized by the algorithm as planned, and a random sampling of same-hospital readmissions that were characterized as unplanned. We calculated weighted sensitivity and specificity, and positive and negative predictive values of the algorithm (version 2.1), compared to gold standard chart review. In consultation with 27 experts, we developed an algorithm that characterizes 7.8% of readmissions as planned. For validation we reviewed 634 readmissions. The weighted sensitivity of the algorithm was 45.1% overall, 50.9% in large teaching centers and 40.2% in smaller community hospitals. The weighted specificity was 95.9%, positive predictive value was 51.6%, and negative predictive value was 94.7%. We identified 4 minor changes to improve algorithm performance. The revised algorithm had a weighted sensitivity 49.8% (57.1% at large hospitals), weighted specificity 96.5%, positive predictive value 58.7%, and negative predictive value 94.5%. Positive predictive value was poor for the 2 most common potentially planned procedures: diagnostic cardiac catheterization (25%) and procedures involving cardiac devices (33%). An administrative claims-based algorithm to identify planned readmissions is feasible and can facilitate public reporting of primarily unplanned readmissions. © 2015 Society of Hospital Medicine.

  8. Matching CCD images to a stellar catalog using locality-sensitive hashing

    NASA Astrophysics Data System (ADS)

    Liu, Bo; Yu, Jia-Zong; Peng, Qing-Yu

    2018-02-01

    The use of a subset of observed stars in a CCD image to find their corresponding matched stars in a stellar catalog is an important issue in astronomical research. Subgraph isomorphism-based algorithms are the most widely used methods in star catalog matching; when more subgraph features are provided, CCD images are recognized more reliably. However, when the navigation feature database is large, these methods require more time to match the observing model. To solve this problem, this study investigates and improves subgraph isomorphic matching algorithms. We present an algorithm based on a locality-sensitive hashing technique, which allocates quadrilateral models in the navigation feature database into different hash buckets and reduces the search range to the bucket in which the observed quadrilateral model is located. Experimental results indicate the effectiveness of our method.
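
    A minimal sketch of the bucketing idea, quantising each quadrilateral model's invariant feature vector into a hash key so that a query scans only one bucket, is shown below. The feature values and cell size are hypothetical; a production system would probe neighbouring buckets or use multiple hash tables to handle models falling near cell borders.

      from collections import defaultdict

      def quad_key(features, cell=0.1):
          """Quantise a quadrilateral's (scale/rotation-invariant) feature
          vector; nearby models collide in the same bucket."""
          return tuple(int(f / cell) for f in features)

      class QuadIndex:
          def __init__(self, cell=0.1):
              self.cell, self.buckets = cell, defaultdict(list)

          def add(self, features, star_ids):
              self.buckets[quad_key(features, self.cell)].append((features, star_ids))

          def query(self, features):
              """Search only the bucket where the observed model lands,
              instead of scanning the whole navigation feature database."""
              return self.buckets.get(quad_key(features, self.cell), [])

      index = QuadIndex()
      index.add((0.31, 0.77, 0.42, 0.58), star_ids=(101, 205, 333, 412))
      print(index.query((0.33, 0.75, 0.44, 0.55)))  # falls in the same bucket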

  9. Parametric Bayesian priors and better choice of negative examples improve protein function prediction.

    PubMed

    Youngs, Noah; Penfold-Brown, Duncan; Drew, Kevin; Shasha, Dennis; Bonneau, Richard

    2013-05-01

    Computational biologists have demonstrated the utility of using machine learning methods to predict protein function from an integration of multiple genome-wide data types. Yet, even the best performing function prediction algorithms rely on heuristics for important components of the algorithm, such as choosing negative examples (proteins without a given function) or determining key parameters. The improper choice of negative examples, in particular, can hamper the accuracy of protein function prediction. We present a novel approach for choosing negative examples, using a parameterizable Bayesian prior computed from all observed annotation data, which also generates priors used during function prediction. We incorporate this new method into the GeneMANIA function prediction algorithm and demonstrate improved accuracy of our algorithm over current top-performing function prediction methods on the yeast and mouse proteomes across all metrics tested. Code and Data are available at: http://bonneaulab.bio.nyu.edu/funcprop.html

  10. Compressed sensing based missing nodes prediction in temporal communication network

    NASA Astrophysics Data System (ADS)

    Cheng, Guangquan; Ma, Yang; Liu, Zhong; Xie, Fuli

    2018-02-01

    The reconstruction of complex network topology is of great theoretical and practical significance. Most research so far focuses on the prediction of missing links. There are many mature algorithms for link prediction which have achieved good results, but research on the prediction of missing nodes has just begun. In this paper, we propose an algorithm for missing node prediction in complex networks. We detect the position of missing nodes based on their neighbor nodes under the theory of compressed sensing, and extend the algorithm to the case of multiple missing nodes using spectral clustering. Experiments on real public network datasets and simulated datasets show that our algorithm can detect the locations of hidden nodes effectively with high precision.

  11. Optimization Algorithm for Kalman Filter Exploiting the Numerical Characteristics of SINS/GPS Integrated Navigation Systems.

    PubMed

    Hu, Shaoxing; Xu, Shike; Wang, Duhu; Zhang, Aiwu

    2015-11-11

    Aiming to address the high computational cost of the traditional Kalman filter in SINS/GPS, a practical optimization algorithm with offline derivation and parallel processing methods, based on the numerical characteristics of the system, is presented in this paper. The algorithm exploits the sparseness and/or symmetry of matrices to simplify the computational procedure. Thus plenty of invalid operations can be avoided by offline derivation using a block matrix technique. For enhanced efficiency, a new parallel computational mechanism is established by subdividing and restructuring calculation processes after analyzing the extracted "useful" data. As a result, the algorithm saves about 90% of the CPU processing time and 66% of the memory usage needed in a classical Kalman filter. Meanwhile, as a purely numerical approach, the method needs no precision-losing transformation or approximation of system modules, and accuracy suffers little in comparison with the filter before computational optimization. Furthermore, since no complicated matrix theories are needed, the algorithm can be easily transplanted into other modified filters as a secondary optimization method to achieve further efficiency.

  12. Increasing Prediction the Original Final Year Project of Student Using Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Saragih, Rijois Iboy Erwin; Turnip, Mardi; Sitanggang, Delima; Aritonang, Mendarissan; Harianja, Eva

    2018-04-01

    The final year project is very important for a student's graduation. Unfortunately, many students do not take their final projects seriously, and many ask someone else to do the work for them. In this paper, an application of genetic algorithms to predict whether a student's final year project is original is proposed. In the simulation, final project data from the last 5 years were collected. The genetic algorithm has several operators, namely population, selection, crossover, and mutation. The results suggest that the genetic algorithm predicts better than other comparable models; experiments showed a prediction accuracy of 70%, more accurate than previous research.
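
    A generic sketch of the genetic algorithm loop with the operators named above (population, selection, crossover, mutation) follows, applied to a toy bit-string objective; the paper's actual encoding of project data is not reproduced.

      import random

      def evolve(fitness, length=20, pop_size=40, gens=100,
                 cx_rate=0.8, mut_rate=0.02):
          """Plain genetic algorithm: tournament selection, one-point crossover,
          bit-flip mutation. `fitness` maps a bit list to a score to maximise."""
          pop = [[random.randint(0, 1) for _ in range(length)]
                 for _ in range(pop_size)]
          for _ in range(gens):
              nxt = []
              while len(nxt) < pop_size:
                  p1 = max(random.sample(pop, 3), key=fitness)   # tournament
                  p2 = max(random.sample(pop, 3), key=fitness)
                  if random.random() < cx_rate:                  # one-point crossover
                      cut = random.randrange(1, length)
                      child = p1[:cut] + p2[cut:]
                  else:
                      child = p1[:]
                  child = [b ^ 1 if random.random() < mut_rate else b
                           for b in child]                       # bit-flip mutation
                  nxt.append(child)
              pop = nxt
          return max(pop, key=fitness)

      best = evolve(fitness=sum)   # toy objective: maximise the number of 1-bits
      print(best, sum(best))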

  13. Appendix F. Developmental enforcement algorithm definition document : predictive braking enforcement algorithm definition document.

    DOT National Transportation Integrated Search

    2012-05-01

    The purpose of this document is to fully define and describe the logic flow and mathematical equations for a predictive braking enforcement algorithm intended for implementation in a Positive Train Control (PTC) system.

  14. CAT-PUMA: CME Arrival Time Prediction Using Machine learning Algorithms

    NASA Astrophysics Data System (ADS)

    Liu, Jiajia; Ye, Yudong; Shen, Chenglong; Wang, Yuming; Erdélyi, Robert

    2018-04-01

    CAT-PUMA (CME Arrival Time Prediction Using Machine learning Algorithms) quickly and accurately predicts the arrival time of Coronal Mass Ejections (CMEs). The software was trained via detailed analysis of CME features and solar wind parameters using 182 previously observed geo-effective partial-/full-halo CMEs, and it uses Support Vector Machine (SVM) algorithms to make its predictions, which can be produced within minutes of providing the necessary input parameters of a CME.
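
    A hedged sketch of this kind of pipeline, support vector regression mapping CME and solar wind features to a transit time, is shown below with invented feature columns and values; CAT-PUMA's actual feature set, kernel choice and training data differ.

      import numpy as np
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVR

      # Hypothetical columns: CME speed (km/s), angular width (deg), mass proxy,
      # solar wind speed (km/s), Bz (nT); target: transit time to Earth (hours).
      X = np.array([[1200., 360., 0.8, 450., -5.],
                    [ 600., 120., 0.3, 380.,  2.],
                    [ 900., 240., 0.5, 400., -1.]])
      y = np.array([38., 72., 55.])

      model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
      model.fit(X, y)
      print(model.predict([[1000., 300., 0.6, 420., -3.]]))  # predicted hours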

  15. A low computation cost method for seizure prediction.

    PubMed

    Zhang, Yanli; Zhou, Weidong; Yuan, Qi; Wu, Qi

    2014-10-01

    The dynamic changes of electroencephalograph (EEG) signals in the period prior to epileptic seizures play a major role in the seizure prediction. This paper proposes a low computation seizure prediction algorithm that combines a fractal dimension with a machine learning algorithm. The presented seizure prediction algorithm extracts the Higuchi fractal dimension (HFD) of EEG signals as features to classify the patient's preictal or interictal state with Bayesian linear discriminant analysis (BLDA) as a classifier. The outputs of BLDA are smoothed by a Kalman filter for reducing possible sporadic and isolated false alarms and then the final prediction results are produced using a thresholding procedure. The algorithm was evaluated on the intracranial EEG recordings of 21 patients in the Freiburg EEG database. For seizure occurrence period of 30 min and 50 min, our algorithm obtained an average sensitivity of 86.95% and 89.33%, an average false prediction rate of 0.20/h, and an average prediction time of 24.47 min and 39.39 min, respectively. The results confirm that the changes of HFD can serve as a precursor of ictal activities and be used for distinguishing between interictal and preictal epochs. Both HFD and BLDA classifier have a low computational complexity. All of these make the proposed algorithm suitable for real-time seizure prediction. Copyright © 2014 Elsevier B.V. All rights reserved.
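
    The feature at the heart of this method, the Higuchi fractal dimension, can be computed directly from a 1-D signal as the slope of the log mean curve length against the log of the inverse scale. A minimal sketch of the standard Higuchi algorithm:

      import numpy as np

      def higuchi_fd(x, k_max=8):
          """Higuchi fractal dimension of a 1-D signal: slope of log(L(k))
          versus log(1/k), where L(k) is the mean curve length at scale k."""
          x = np.asarray(x, dtype=float)
          N = len(x)
          ks = np.arange(1, k_max + 1)
          L = []
          for k in ks:
              lengths = []
              for m in range(k):                     # one sub-series per offset m
                  idx = np.arange(m, N, k)
                  if len(idx) < 2:
                      continue
                  diff = np.abs(np.diff(x[idx])).sum()
                  norm = (N - 1) / (len(idx) - 1) / k  # length normalisation
                  lengths.append(diff * norm / k)
              L.append(np.mean(lengths))
          slope, _ = np.polyfit(np.log(1.0 / ks), np.log(L), 1)
          return slope

      # White noise has HFD near 2; a smooth sine is near 1.
      print(higuchi_fd(np.random.randn(1024)),
            higuchi_fd(np.sin(np.linspace(0, 20, 1024))))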

  16. Popularity Prediction Tool for ATLAS Distributed Data Management

    NASA Astrophysics Data System (ADS)

    Beermann, T.; Maettig, P.; Stewart, G.; Lassnig, M.; Garonne, V.; Barisits, M.; Vigne, R.; Serfon, C.; Goossens, L.; Nairz, A.; Molfetas, A.; Atlas Collaboration

    2014-06-01

    This paper describes a popularity prediction tool for data-intensive data management systems, such as ATLAS distributed data management (DDM). It is fed by the DDM popularity system, which produces historical reports about ATLAS data usage, providing information about files, datasets, users and sites where data was accessed. The tool described in this contribution uses this historical information to make a prediction about the future popularity of data. It finds trends in the usage of data using a set of neural networks and a set of input parameters, and predicts the number of accesses in the near-term future. This information can then be used in a second step to improve the distribution of replicas at sites, taking into account the cost of creating new replicas (bandwidth and load on the storage system) compared to the gain of having new ones (faster access of data for analysis). To evaluate the benefit of the redistribution, a grid simulator is introduced that is able to replay real workloads on different data distributions. This article describes the popularity prediction method and the simulator that is used to evaluate the redistribution.

  17. Mine or Theirs, Where Do Users Go? A Comparison of E-Journal Usage at the OhioLINK Electronic Journal Center Platform versus the Elsevier ScienceDirect Platform

    ERIC Educational Resources Information Center

    Swanson, Juleah

    2015-01-01

    This research provides librarians with a model for assessing and predicting which platforms patrons will use to access the same content, specifically comparing usage at the Ohio Library and Information Network (OhioLINK) Electronic Journal Center (EJC) and at Elsevier's ScienceDirect from 2007 to 2013. Findings show that in the earlier years, the…

  18. Infrastructure, Components and System Level Testing and Analysis of Electric Vehicles: Cooperative Research and Development Final Report, CRADA Number CRD-09-353

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Neubauer, J.

    2013-05-01

    Battery technology is critical for the development of innovative electric vehicle networks, which can enhance transportation sustainability and reduce dependence on petroleum. This cooperative research proposed by Better Place and NREL will focus on predicting the life-cycle economics of batteries, characterizing battery technologies under various operating and usage conditions, and designing optimal usage profiles for battery recharging and use.

  19. Modeling nexus of urban heat island mitigation strategies with electricity/power usage and consumer costs: a case study for Phoenix, Arizona, USA

    NASA Astrophysics Data System (ADS)

    Silva, Humberto; Fillpot, Baron S.

    2018-01-01

    A reduction in both power and electricity usage was determined using a previously validated zero-dimensional energy balance model that implements mitigation strategies used to reduce the urban heat island (UHI) effect. The established model has been applied to show the change in urban characteristic temperature when executing four common mitigation strategies: increasing the overall (1) emissivity, (2) vegetated area, (3) thermal conductivity, and (4) albedo of the urban environment in a series of increases by 5, 10, 15, and 20% from baseline values. Separately, a correlation analysis was performed involving meteorological data and total daily energy (TDE) consumption where the 24-h average temperature was shown to have the greatest correlation to electricity service data in the Phoenix, Arizona, USA, metropolitan region. A methodology was then developed for using the model to predict TDE consumption reduction and corresponding cost-saving analysis when implementing the four mitigation strategies. The four modeled UHI mitigation strategies, taken in combination, would lead to the largest percent reduction in annual energy usage, where increasing the thermal conductivity is the single most effective mitigation strategy. The single least effective mitigation strategy, increasing the emissivity by 5% from the baseline value, resulted in an average calculated reduction of about 1570 GWh in yearly energy usage with a corresponding 157 million dollar cost savings. When the four parameters were increased in unison by 20% from baseline values, an average calculated reduction of about 2050 GWh in yearly energy usage was predicted with a corresponding 205 million dollar cost savings.

  20. Weighted graph cuts without eigenvectors a multilevel approach.

    PubMed

    Dhillon, Inderjit S; Guan, Yuqiang; Kulis, Brian

    2007-11-01

    A variety of clustering algorithms have recently been proposed to handle data that is not linearly separable; spectral clustering and kernel k-means are two of the main methods. In this paper, we discuss an equivalence between the objective functions used in these seemingly different methods--in particular, a general weighted kernel k-means objective is mathematically equivalent to a weighted graph clustering objective. We exploit this equivalence to develop a fast, high-quality multilevel algorithm that directly optimizes various weighted graph clustering objectives, such as the popular ratio cut, normalized cut, and ratio association criteria. This eliminates the need for any eigenvector computation for graph clustering problems, which can be prohibitive for very large graphs. Previous multilevel graph partitioning methods, such as Metis, have suffered from the restriction of equal-sized clusters; our multilevel algorithm removes this restriction by using kernel k-means to optimize weighted graph cuts. Experimental results show that our multilevel algorithm outperforms a state-of-the-art spectral clustering algorithm in terms of speed, memory usage, and quality. We demonstrate that our algorithm is applicable to large-scale clustering tasks such as image segmentation, social network analysis and gene network analysis.
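
    A minimal sketch of weighted kernel k-means operating directly on a kernel (or graph affinity) matrix follows; per the equivalence discussed, minimising this objective optimises a weighted graph-cut criterion without any eigenvector computation. The RBF kernel and the example data are illustrative choices, not the paper's multilevel implementation.

      import numpy as np

      def weighted_kernel_kmeans(K, w, k, n_iter=30, seed=0):
          """Weighted kernel k-means on a kernel (or graph affinity) matrix K,
          with per-point weights w; no eigenvector computation required."""
          rng = np.random.default_rng(seed)
          n = K.shape[0]
          labels = rng.integers(k, size=n)
          for _ in range(n_iter):
              dist = np.empty((n, k))
              for c in range(k):
                  idx = labels == c
                  wc = w[idx]; s = wc.sum()
                  if s == 0:
                      dist[:, c] = np.inf       # empty cluster
                      continue
                  second = K[:, idx] @ wc / s
                  third = wc @ K[np.ix_(idx, idx)] @ wc / s**2
                  dist[:, c] = np.diag(K) - 2 * second + third
              new = dist.argmin(axis=1)
              if np.array_equal(new, labels):
                  break
              labels = new
          return labels

      # Two noisy blobs via an RBF kernel; uniform weights.
      pts = np.vstack([np.random.randn(30, 2), np.random.randn(30, 2) + 5])
      sq = ((pts[:, None] - pts[None]) ** 2).sum(-1)
      K = np.exp(-sq / 2.0)
      print(weighted_kernel_kmeans(K, np.ones(len(pts)), k=2))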

  1. Improving depth maps of plants by using a set of five cameras

    NASA Astrophysics Data System (ADS)

    Kaczmarek, Adam L.

    2015-03-01

    Obtaining high-quality depth maps and disparity maps with the use of a stereo camera is a challenging task for some kinds of objects. The quality of these maps can be improved by taking advantage of a larger number of cameras. The research on the usage of a set of five cameras to obtain disparity maps is presented. The set consists of a central camera and four side cameras. An algorithm for making disparity maps called multiple similar areas (MSA) is introduced. The algorithm was specially designed for the set of five cameras. Experiments were performed with the MSA algorithm and the stereo matching algorithm based on the sum of sum of squared differences (sum of SSD, SSSD) measure. Moreover, the following measures were included in the experiments: sum of absolute differences (SAD), zero-mean SAD (ZSAD), zero-mean SSD (ZSSD), locally scaled SAD (LSAD), locally scaled SSD (LSSD), normalized cross correlation (NCC), and zero-mean NCC (ZNCC). Algorithms presented were applied to images of plants. Making depth maps of plants is difficult because parts of leaves are similar to each other. The potential usability of the described algorithms is especially high in agricultural applications such as robotic fruit harvesting.
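
    For reference, the window-based similarity measures compared in these experiments are all short formulas over a pair of patches; a sketch of a few of them, plus a naive scanline disparity search using SSD, is given below (the window size and disparity range are arbitrary choices for illustration).

      import numpy as np

      def sad(p, q):  return np.abs(p - q).sum()      # sum of absolute differences
      def ssd(p, q):  return ((p - q) ** 2).sum()     # sum of squared differences
      def zsad(p, q): return np.abs((p - p.mean()) - (q - q.mean())).sum()
      def ncc(p, q):  return (p * q).sum() / np.sqrt((p**2).sum() * (q**2).sum())
      def zncc(p, q):
          p0, q0 = p - p.mean(), q - q.mean()
          return (p0 * q0).sum() / np.sqrt((p0**2).sum() * (q0**2).sum())

      # Matching cost along a scanline: slide the right-image window over
      # candidate disparities and keep the best-scoring one (borders ignored).
      def best_disparity(left, right, r, c, win=3, max_d=16):
          half = win // 2
          p = left[r-half:r+half+1, c-half:c+half+1].astype(float)
          costs = [ssd(p, right[r-half:r+half+1, c-d-half:c-d+half+1].astype(float))
                   for d in range(min(max_d, c - half) + 1)]
          return int(np.argmin(costs))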

  2. On the Usage of GPUs for Efficient Motion Estimation in Medical Image Sequences

    PubMed Central

    Thiyagalingam, Jeyarajan; Goodman, Daniel; Schnabel, Julia A.; Trefethen, Anne; Grau, Vicente

    2011-01-01

    Images are ubiquitous in biomedical applications from basic research to clinical practice. With the rapid increase in resolution, the dimensionality of images and the need for real-time performance in many applications, computational requirements demand proper exploitation of multicore architectures. Towards this, GPU-specific implementations of image analysis algorithms are particularly promising. In this paper, we investigate the mapping of an enhanced motion estimation algorithm to novel GPU-specific architectures, and the resulting challenges and benefits therein. Using a database of three-dimensional image sequences, we show that the mapping leads to substantial performance gains, up to a factor of 60, and can provide a near-real-time experience. We also show how architectural peculiarities of these devices can best be exploited to the benefit of the algorithms, most specifically for addressing the challenges related to their access patterns and different memory configurations. Finally, we evaluate the performance of the algorithm on three different GPU architectures and perform a comprehensive analysis of the results. PMID:21869880

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stinnett, Jacob; Sullivan, Clair J.; Xiong, Hao

    Low-resolution isotope identifiers are widely deployed for nuclear security purposes, but these detectors currently demonstrate problems in making correct identifications in many typical usage scenarios. While there are many hardware alternatives and improvements that can be made, it should be possible to improve performance on existing low-resolution isotope identifiers by developing new identification algorithms. We have developed a wavelet-based peak extraction algorithm and an implementation of a Bayesian classifier for automated peak-based identification. The peak extraction algorithm has been extended to compute uncertainties in the peak area calculations. To build empirical joint probability distributions of the peak areas and uncertainties, a large set of spectra were simulated in MCNP6 and processed with the wavelet-based feature extraction algorithm. Kernel density estimation was then used to create a new component of the likelihood function in the Bayesian classifier. Furthermore, identification performance is demonstrated on a variety of real low-resolution spectra, including Category I quantities of special nuclear material.

  4. Model predictive control design for polytopic uncertain systems by synthesising multi-step prediction scenarios

    NASA Astrophysics Data System (ADS)

    Lu, Jianbo; Xi, Yugeng; Li, Dewei; Xu, Yuli; Gan, Zhongxue

    2018-01-01

    A common objective of model predictive control (MPC) design is a large initial feasible region, a low online computational burden and satisfactory control performance of the resulting algorithm. It is well known that interpolation-based MPC can achieve a favourable trade-off among these different aspects. However, the existing results are usually based on fixed prediction scenarios, which inevitably limits the performance of the obtained algorithms. Thus, by replacing the fixed prediction scenarios with time-varying multi-step prediction scenarios, this paper provides a new insight into the improvement of existing MPC designs. The adopted control law is a combination of predetermined multi-step feedback control laws, based on which two MPC algorithms with guaranteed recursive feasibility and asymptotic stability are presented. The efficacy of the proposed algorithms is illustrated by a numerical example.

  5. Mapping the EORTC QLQ-C30 onto the EQ-5D-3L: assessing the external validity of existing mapping algorithms.

    PubMed

    Doble, Brett; Lorgelly, Paula

    2016-04-01

    To determine the external validity of existing mapping algorithms for predicting EQ-5D-3L utility values from EORTC QLQ-C30 responses and to establish their generalizability in different types of cancer. A main analysis (pooled) sample of 3560 observations (1727 patients) and two disease severity patient samples (496 and 93 patients) with repeated observations over time from Cancer 2015 were used to validate the existing algorithms. Errors were calculated between observed and predicted EQ-5D-3L utility values using a single pooled sample and ten pooled tumour type-specific samples. Predictive accuracy was assessed using mean absolute error (MAE) and standardized root-mean-squared error (RMSE). The association between observed and predicted EQ-5D utility values and other covariates across the distribution was tested using quantile regression. Quality-adjusted life years (QALYs) were calculated using observed and predicted values to test responsiveness. Ten 'preferred' mapping algorithms were identified. Two algorithms estimated via response mapping and ordinary least-squares regression using dummy variables performed well on number of validation criteria, including accurate prediction of the best and worst QLQ-C30 health states, predicted values within the EQ-5D tariff range, relatively small MAEs and RMSEs, and minimal differences between estimated QALYs. Comparison of predictive accuracy across ten tumour type-specific samples highlighted that algorithms are relatively insensitive to grouping by tumour type and affected more by differences in disease severity. Two of the 'preferred' mapping algorithms suggest more accurate predictions, but limitations exist. We recommend extensive scenario analyses if mapped utilities are used in cost-utility analyses.

  6. A Systematic Investigation of Computation Models for Predicting Adverse Drug Reactions (ADRs)

    PubMed Central

    Kuang, Qifan; Wang, MinQi; Li, Rong; Dong, YongCheng; Li, Yizhou; Li, Menglong

    2014-01-01

    Background Early and accurate identification of adverse drug reactions (ADRs) is critically important for drug development and clinical safety. Computer-aided prediction of ADRs has attracted increasing attention in recent years, and many computational models have been proposed. However, because of the lack of systematic analysis and comparison of the different computational models, there remain limitations in designing more effective algorithms and selecting more useful features. There is therefore an urgent need to review and analyze previous computation models to obtain general conclusions that can provide useful guidance to construct more effective computational models to predict ADRs. Principal Findings In the current study, the main work is to compare and analyze the performance of existing computational methods to predict ADRs, by implementing and evaluating additional algorithms that have been earlier used for predicting drug targets. Our results indicated that topological and intrinsic features were complementary to an extent and that the Jaccard coefficient had an important and general effect on the prediction of drug-ADR associations. By comparing the structure of each algorithm, we found that the final formulas of these algorithms could all be converted to linear models in form; based on this finding, we propose a new algorithm, the general weighted profile method, which yielded the best overall performance among the algorithms investigated in this paper. Conclusion Several meaningful conclusions and useful findings regarding the prediction of ADRs are provided for selecting optimal features and algorithms. PMID:25180585

  7. A digital prediction algorithm for a single-phase boost PFC

    NASA Astrophysics Data System (ADS)

    Qing, Wang; Ning, Chen; Weifeng, Sun; Shengli, Lu; Longxing, Shi

    2012-12-01

    A novel digital control algorithm for digitally controlled power factor correction is presented, called the prediction algorithm; it features a higher power factor (PF) with lower total harmonic distortion, and a faster dynamic response to changes in the input voltage or load current. For a given system, based on the current system state parameters, the prediction algorithm can estimate the trajectories of the output voltage and the inductor current at the next switching cycle and obtain a set of optimized control sequences that precisely track the trajectory of the input voltage. The proposed prediction algorithm is verified under different conditions, and computer simulation and experimental results under multiple situations confirm its effectiveness. With the input voltage in the range of 90-265 V and the load current in the range of 20%-100%, the PF value is larger than 0.998. The startup and recovery times are about 0.1 s and 0.02 s, respectively, without overshoot. The experimental results also verify the validity of the proposed method.

  8. Protein docking prediction using predicted protein-protein interface.

    PubMed

    Li, Bin; Kihara, Daisuke

    2012-01-10

    Many important cellular processes are carried out by protein complexes. To provide physical pictures of interacting proteins, many computational protein-protein docking prediction methods have been developed in the past. However, it is still difficult to identify the correct docking complex structure within the top ranks among alternative conformations. We present a novel protein docking algorithm that utilizes imperfect protein-protein binding interface prediction to guide protein docking. Since the accuracy of protein binding site prediction varies from case to case, the challenge is to develop a method that does not deteriorate, but rather improves, docking results even when the binding site prediction is not 100% accurate. The algorithm, named PI-LZerD (using Predicted Interface with Local 3D Zernike descriptor-based Docking algorithm), is based on a pairwise protein docking prediction algorithm, LZerD, which we developed earlier. PI-LZerD starts by performing docking prediction using the provided protein-protein binding interface prediction as constraints, followed by a second round of docking with updated docking interface information to further improve the docking conformation. Benchmark results on bound and unbound cases show that PI-LZerD consistently improves docking prediction accuracy as compared with docking without binding site prediction or with the binding site prediction used only as post-filtering. We have developed PI-LZerD, a pairwise docking algorithm that uses imperfect protein-protein binding interface prediction to improve docking accuracy. PI-LZerD consistently showed better prediction accuracy than alternative methods in a series of benchmark experiments, including docking using actual docking interface site predictions as well as unbound docking cases.

  9. Pitch-Learning Algorithm For Speech Encoders

    NASA Technical Reports Server (NTRS)

    Bhaskar, B. R. Udaya

    1988-01-01

    Adaptive algorithm detects and corrects errors in sequence of estimates of pitch period of speech. Algorithm operates in conjunction with techniques used to estimate pitch period. Used in such parametric and hybrid speech coders as linear predictive coders and adaptive predictive coders.

  10. Contextualising Water Use in Residential Settings: A Survey of Non-Intrusive Techniques and Approaches

    PubMed Central

    Carboni, Davide; Gluhak, Alex; McCann, Julie A.; Beach, Thomas H.

    2016-01-01

    Water monitoring in households is important to ensure the sustainability of fresh water reserves on our planet. It provides stakeholders with the statistics required to formulate optimal strategies in residential water management. However, monitoring should not be prohibitively intrusive or costly: appliance-level water monitoring cannot practically be achieved by deploying sensors on every faucet or water-consuming device of interest, owing to the hardware costs and complexity involved, not to mention the risk of accidental leakage that can arise from the extra plumbing needed. Machine learning and data mining are promising approaches for analysing monitored data to obtain non-intrusive water usage disaggregation, because they can discern individual water uses from the aggregated data acquired at a single point of observation. This paper provides an overview of water usage disaggregation systems and the related techniques adopted for water event classification. The state of the art in algorithms and testbeds used for fixture recognition is reviewed, and a discussion of the prominent challenges and future research is also included. PMID:27213397

  11. Adaptive Trajectory Prediction Algorithm for Climbing Flights

    NASA Technical Reports Server (NTRS)

    Schultz, Charles Alexander; Thipphavong, David P.; Erzberger, Heinz

    2012-01-01

    Aircraft climb trajectories are difficult to predict, and large errors in these predictions reduce the potential operational benefits of some advanced features for NextGen. The algorithm described in this paper improves climb trajectory prediction accuracy by adjusting trajectory predictions based on observed track data. It utilizes rate-of-climb and airspeed measurements derived from position data to dynamically adjust the aircraft weight modeled for trajectory predictions. In simulations with weight uncertainty, the algorithm is able to adapt to within 3 percent of the actual gross weight within two minutes of the initial adaptation. The root-mean-square of altitude errors for five-minute predictions was reduced by 73 percent. Conflict detection performance also improved, with a 15 percent reduction in missed alerts and a 10 percent reduction in false alerts. In a simulation with climb speed capture intent and weight uncertainty, the algorithm improved climb trajectory prediction accuracy by up to 30 percent and conflict detection performance, reducing missed and false alerts by up to 10 percent.
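
    The paper adjusts the modeled aircraft weight from observed track data; the rough sketch below shows one way such an adjustment could work, equating observed specific excess power to the modeled thrust minus drag (all numeric values and the 5% blending gain are illustrative assumptions, not the paper's method).

        # Adapt modeled gross weight from observed climb data (illustrative sketch).
        G = 9.81  # m/s^2

        def implied_weight(thrust_n, drag_n, tas_mps, roc_mps):
            """Weight consistent with the observed rate of climb:
            (T - D) * V = W * dh/dt  =>  W = (T - D) * V / roc."""
            return (thrust_n - drag_n) * tas_mps / roc_mps

        def adapt_mass(modeled_kg, thrust_n, drag_n, tas_mps, roc_mps, gain=0.05):
            """Blend the modeled mass toward the mass implied by the latest
            track-derived climb-rate and airspeed measurements."""
            implied_kg = implied_weight(thrust_n, drag_n, tas_mps, roc_mps) / G
            return modeled_kg + gain * (implied_kg - modeled_kg)

        mass = 70000.0  # initial modeled mass (kg), an assumed value
        for roc in (12.0, 11.5, 11.8):          # observed climb rates (m/s)
            mass = adapt_mass(mass, 180e3, 120e3, 140.0, roc)
        print(round(mass))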

  12. Exploring the relationship between parental worry about their children's health and usage of an internet intervention for pediatric encopresis.

    PubMed

    Magee, Joshua C; Ritterband, Lee M; Thorndike, Frances P; Cox, Daniel J; Borowitz, Stephen M

    2009-06-01

    To investigate whether parental worry about their children's health predicts usage of a pediatric Internet intervention for encopresis. Thirty-nine families with a child diagnosed with encopresis completed a national clinical trial of an Internet-based intervention for encopresis (www.ucanpooptoo.com). Parents rated worry about their children's health, encopresis severity, current parent treatment for depression, and parent comfort with the Internet. Usage indicators were collected while participants utilized the intervention. Regression analyses showed that parents who reported higher baseline levels of worry about their children's health showed greater subsequent intervention use (β = .52, p = .002), even after accounting for other plausible predictors. Exploratory analyses indicated that this effect may be stronger for families with younger children. Characteristics of individuals using Internet-based treatment programs, such as parental worry about their children's health, can influence intervention usage, and should be considered by developers of Internet interventions.

  13. An enhanced deterministic K-Means clustering algorithm for cancer subtype prediction from gene expression data.

    PubMed

    Nidheesh, N; Abdul Nazeer, K A; Ameer, P M

    2017-12-01

    Clustering algorithms with steps involving randomness usually give different results on different executions for the same dataset. This non-deterministic nature of algorithms such as the K-Means clustering algorithm limits their applicability in areas such as cancer subtype prediction using gene expression data, because it is hard to sensibly compare the results of such algorithms with those of other algorithms. The non-deterministic nature of K-Means is due to its random selection of data points as initial centroids. We propose an improved, density-based version of K-Means, which involves a novel and systematic method for selecting initial centroids. The key idea of the algorithm is to select as the initial centroids data points which belong to dense regions and which are adequately separated in feature space. We compared the proposed algorithm with a set of eleven widely used single clustering algorithms and a prominent ensemble clustering algorithm used for cancer data classification, based on their performance on ten cancer gene expression datasets. The proposed algorithm showed better overall performance than the others. There is a pressing need in the biomedical domain for simple, easy-to-use and more accurate machine learning tools for cancer subtype prediction. The proposed algorithm is simple, easy to use and gives stable results. Moreover, it provides comparatively better predictions of cancer subtypes from gene expression data. Copyright © 2017 Elsevier Ltd. All rights reserved.
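
    The abstract describes, but does not specify, the density criterion; the sketch below illustrates the general idea of deterministic, density-based seeding for K-Means, preferring the densest points subject to a minimum pairwise separation (the neighbourhood radius and the separation rule are illustrative assumptions, not the published method).

        import numpy as np

        def density_based_seeds(X, k, radius):
            """Pick k initial centroids deterministically: prefer points in dense
            regions, but keep the chosen seeds well separated in feature space."""
            # Density = number of neighbours within `radius` of each point (O(n^2)).
            d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
            density = (d < radius).sum(axis=1)
            order = np.argsort(-density)            # densest points first
            seeds = [X[order[0]]]
            for idx in order[1:]:
                if len(seeds) == k:
                    break
                # Enforce separation: skip points too close to an accepted seed
                # (relax `radius` if fewer than k seeds are found).
                if all(np.linalg.norm(X[idx] - s) >= radius for s in seeds):
                    seeds.append(X[idx])
            return np.array(seeds)

        # These seeds can then be passed to k-means, e.g. scikit-learn's
        # KMeans(n_clusters=k, init=seeds, n_init=1), giving reproducible runs.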

  14. Assessing the external validity of algorithms to estimate EQ-5D-3L from the WOMAC.

    PubMed

    Kiadaliri, Aliasghar A; Englund, Martin

    2016-10-04

    The use of mapping algorithms has been suggested as a solution for predicting health utilities when no preference-based measure is included in a study. However, the validity and predictive performance of these algorithms are highly variable, and hence assessing their accuracy and validity before using them in a new setting is important. The aim of the current study was to assess the predictive accuracy of three mapping algorithms for estimating the EQ-5D-3L from the Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC) among Swedish people with knee disorders. Two of these algorithms were developed using ordinary least squares (OLS) models and one using a mixture model. Data from 1078 subjects (mean (SD) age 69.4 (7.2) years) with frequent knee pain and/or knee osteoarthritis from the Malmö Osteoarthritis study in Sweden were used. The algorithms' performance was assessed using mean error, mean absolute error, and root mean squared error. Two types of prediction were estimated for the mixture model: weighted average (WA) and conditional on estimated component (CEC). The overall mean was overpredicted by one OLS model and underpredicted by the two other algorithms (P < 0.001). All predictions but the CEC predictions of the mixture model had a narrower range than the observed scores (22 to 90 %). All algorithms suffered from overprediction for severe health states and underprediction for mild health states, to a lesser extent for the mixture model. While the mixture model outperformed the OLS models at the extremes of the EQ-5D-3L distribution, it underperformed around the center of the distribution. While the algorithm based on the mixture model reflected the distribution of the EQ-5D-3L data more accurately than the OLS models, all algorithms suffered from systematic bias. This calls for caution in applying these mapping algorithms in a new setting, particularly in samples with milder knee problems than the original sample. Assessing the impact of the choice of these algorithms on cost-effectiveness studies through sensitivity analysis is recommended.

  15. ChloroMitoCU: Codon patterns across organelle genomes for functional genomics and evolutionary applications.

    PubMed

    Sablok, Gaurav; Chen, Ting-Wen; Lee, Chi-Ching; Yang, Chi; Gan, Ruei-Chi; Wegrzyn, Jill L; Porta, Nicola L; Nayak, Kinshuk C; Huang, Po-Jung; Varotto, Claudio; Tang, Petrus

    2017-06-01

    Organelle genomes are widely thought to have arisen from reduction events involving cyanobacterial and archaeal genomes, in the case of chloroplasts, or α-proteobacterial genomes, in the case of mitochondria. Heterogeneity in base composition and codon preference has long been the subject of investigation of topics ranging from phylogenetic distortion to the design of overexpression cassettes for transgenic expression. From the overexpression point of view, it is critical to systematically analyze the codon usage patterns of the organelle genomes. In light of the importance of codon usage patterns in the development of hyper-expression organelle transgenics, we present ChloroMitoCU, the first-ever curated, web-based reference catalog of the codon usage patterns in organelle genomes. ChloroMitoCU contains the pre-compiled codon usage patterns of 328 chloroplast genomes (29,960 CDS) and 3,502 mitochondrial genomes (49,066 CDS), enabling genome-wide exploration and comparative analysis of codon usage patterns across species. ChloroMitoCU allows the phylogenetic comparison of codon usage patterns across organelle genomes, the prediction of codon usage patterns based on user-submitted transcripts or assembled organelle genes, and comparative analysis with the pre-compiled patterns across species of interest. ChloroMitoCU can increase our understanding of the biased patterns of codon usage in organelle genomes across multiple clades. ChloroMitoCU can be accessed at: http://chloromitocu.cgu.edu.tw/. © The Author 2017. Published by Oxford University Press on behalf of Kazusa DNA Research Institute.

  16. A link prediction approach to cancer drug sensitivity prediction.

    PubMed

    Turki, Turki; Wei, Zhi

    2017-10-03

    Predicting the response to a drug for cancer disease patients based on genomic information is an important problem in modern clinical oncology. It arises in part because many available drug sensitivity prediction algorithms neither consider better-quality cancer cell lines nor adopt new feature representations, both of which would lead to more accurate predictions of drug response. By predicting drug responses to cancer accurately, oncologists gain a more complete understanding of the effective treatments for each patient, which is a core goal in precision medicine. In this paper, we model cancer drug sensitivity as a link prediction problem, which is shown to be an effective technique. We evaluate our proposed link prediction algorithms and compare them with an existing drug sensitivity prediction approach based on clinical trial data. The experimental results based on the clinical trial data show the stability of our link prediction algorithms, which yield the highest area under the ROC curve (AUC) and are statistically significant. We propose a link prediction approach to obtain a new feature representation. Compared with the existing approach, the results show that incorporating the new feature representation into the link prediction algorithms significantly improved the performance.

  17. New algorithm for toric intraocular lens power calculation considering the posterior corneal astigmatism.

    PubMed

    Canovas, Carmen; Alarcon, Aixa; Rosén, Robert; Kasthurirangan, Sanjeev; Ma, Joseph J K; Koch, Douglas D; Piers, Patricia

    2018-02-01

    To assess the accuracy of toric intraocular lens (IOL) power calculations of a new algorithm that incorporates the effect of posterior corneal astigmatism (PCA). Abbott Medical Optics, Inc., Groningen, the Netherlands. Retrospective case report. In eyes implanted with toric IOLs, the exact vergence formula of the Tecnis toric calculator was used to predict refractive astigmatism from preoperative biometry, surgeon-estimated surgically induced astigmatism (SIA), and implanted IOL power, with and without including the new PCA algorithm. For each calculation method, the error in predicted refractive astigmatism was calculated as the vector difference between the prediction and the actual refraction. Calculations were also made using postoperative keratometry (K) values to eliminate the potential effect of incorrect SIA estimates. The study comprised 274 eyes. The PCA algorithm significantly reduced the centroid error in predicted refractive astigmatism (P < .001). With the PCA algorithm, the centroid error was reduced from 0.50 @ 1 to 0.19 @ 3 when using preoperative K values and from 0.30 @ 0 to 0.02 @ 84 when using postoperative K values. Patients who had anterior corneal against-the-rule, with-the-rule, and oblique astigmatism all showed improvement with the PCA algorithm. In addition, the PCA algorithm reduced the median absolute error in all groups (P < .001). The use of the new PCA algorithm decreased the error in the prediction of residual refractive astigmatism in eyes implanted with toric IOLs. Therefore, the new PCA algorithm, in combination with an exact vergence IOL power calculation formula, led to an increased predictability of toric IOL power. Copyright © 2018 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.

  18. Predicting missing links in complex networks based on common neighbors and distance

    PubMed Central

    Yang, Jinxuan; Zhang, Xiao-Dong

    2016-01-01

    Algorithms based on the common neighbors metric for predicting missing links in complex networks are very popular, but most of them do not account for missing links between nodes that share no common neighbors. Reconstructing networks with these methods is therefore not accurate enough in some cases, especially when node pairs have few common neighbors. In this paper we propose a new algorithm based on common neighbors and distance to improve the accuracy of link prediction. The proposed algorithm is remarkably effective at predicting missing links between nodes with no common neighbors and performs better than most currently used methods on a variety of real-world networks, without increasing complexity. PMID:27905526
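
    The abstract does not spell out how the two signals are combined; a minimal sketch of the general idea, blending a common-neighbors count with a shortest-path distance term so that zero-common-neighbor pairs still receive a useful score, might look as follows (the exact combination rule here is an illustrative assumption):

        from collections import deque

        def shortest_path_len(adj, u, v):
            """Breadth-first search distance between u and v (None if disconnected)."""
            seen, frontier = {u}, deque([(u, 0)])
            while frontier:
                node, dist = frontier.popleft()
                if node == v:
                    return dist
                for nbr in adj[node]:
                    if nbr not in seen:
                        seen.add(nbr)
                        frontier.append((nbr, dist + 1))
            return None

        def link_score(adj, u, v):
            """Common-neighbors score with a distance-based fallback, so pairs
            with no shared neighbors are still ranked by proximity."""
            cn = len(adj[u] & adj[v])
            d = shortest_path_len(adj, u, v)
            return cn + (1.0 / d if d else 0.0)

        adj = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}   # path graph 1-2-3-4
        print(link_score(adj, 1, 4))  # no common neighbors; distance 3 gives 1/3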

  19. An improved shuffled frog leaping algorithm based evolutionary framework for currency exchange rate prediction

    NASA Astrophysics Data System (ADS)

    Dash, Rajashree

    2017-11-01

    Forecasting the purchasing power of one currency with respect to another is always an interesting topic in the field of financial time series prediction. Despite the existence of several traditional and computational models for currency exchange rate forecasting, there is always a need for simpler and more efficient models with better prediction capability. In this paper, an evolutionary framework is proposed that uses an improved shuffled frog leaping (ISFL) algorithm with a computationally efficient functional link artificial neural network (CEFLANN) for prediction of currency exchange rates. The model is validated by observing the monthly prediction measures obtained for three currency exchange data sets, USD/CAD, USD/CHF, and USD/JPY, accumulated over the same period of time. The model performance is also compared with two other evolutionary learning techniques, the shuffled frog leaping algorithm and particle swarm optimization. Practical analysis of the results suggests that the proposed model, developed using the ISFL algorithm with the CEFLANN network, is a promising predictor for currency exchange rates compared to the other models included in the study.

  20. 3D Protein structure prediction with genetic tabu search algorithm

    PubMed Central

    2010-01-01

    Background Protein structure prediction (PSP) has important applications in different fields, such as drug design, disease prediction, and so on. In protein structure prediction, there are two important issues: the design of the structure model and the design of the optimization technology. Because of the complexity of realistic protein structures, the structure model adopted in this paper is a simplified model called the off-lattice AB model. Once the structure model is assumed, optimization technology is needed to search for the best conformation of a protein sequence based on the assumed structure model. However, PSP is an NP-hard problem even if the simplest model is assumed. Thus, many algorithms have been developed to solve the global optimization problem. In this paper, a hybrid algorithm, which combines a genetic algorithm (GA) and a tabu search (TS) algorithm, is developed to complete this task. Results In order to develop an efficient optimization algorithm, several improved strategies are developed for the proposed genetic tabu search algorithm. The combined use of these strategies can improve the efficiency of the algorithm. Among these strategies, tabu search introduced into the crossover and mutation operators improves the local search capability, the adoption of a variable population size strategy maintains the diversity of the population, and the ranking selection strategy improves the possibility of an individual with a low energy value entering the next generation. Experiments are performed with Fibonacci sequences and real protein sequences. Experimental results show that the lowest energy obtained by the proposed GATS algorithm is lower than that obtained by previous methods. Conclusions The hybrid algorithm has the advantages of both the genetic algorithm and the tabu search algorithm. It makes use of the multiple search points of the genetic algorithm, and overcomes the poor hill-climbing capability of the conventional genetic algorithm by using the flexible memory functions of TS. Compared with some previous algorithms, the GATS algorithm has better performance in global optimization and can predict 3D protein structure more effectively. PMID:20522256

  1. The Behavioral and Neural Mechanisms Underlying the Tracking of Expertise

    PubMed Central

    Boorman, Erie D.; O’Doherty, John P.; Adolphs, Ralph; Rangel, Antonio

    2013-01-01

    Summary Evaluating the abilities of others is fundamental for successful economic and social behavior. We investigated the computational and neurobiological basis of ability tracking by designing an fMRI task that required participants to use and update estimates of both people's and algorithms' expertise through observation of their predictions. Behaviorally, we find that a model-based algorithm characterized subjects' predictions better than several alternative models. Notably, when the agent's prediction was concordant rather than discordant with the subject's own likely prediction, participants credited people more than algorithms for correct predictions and penalized them less for incorrect predictions. Neurally, many components of the mentalizing network—medial prefrontal cortex, anterior cingulate gyrus, temporoparietal junction, and precuneus—represented or updated expertise beliefs about both people and algorithms. Moreover, activity in lateral orbitofrontal and medial prefrontal cortex reflected behavioral differences in learning about people and algorithms. These findings provide basic insights into the neural basis of social learning. PMID:24360551

  2. Can you hear me now? Limited use of technology among an urban HIV-infected cohort.

    PubMed

    Shacham, E; Stamm, K; Overton, E T

    2009-08-01

    Recent studies support technology-based behavioral interventions for individuals with HIV. This study focused on the use of cell phone and internet technologies among a cohort of 515 HIV-infected individuals. Socio-demographic and clinic data were collected among individuals presenting at an urban Midwestern university HIV clinic in 2007. Regular internet usage occurred more often with males, Caucasians, those who were employed, had higher income, and were more educated. Higher levels of education and income >$10,000 predicted regular usage when controlling for race, employment, and gender. Cell phone ownership was associated with being Caucasian, employed, more educated, and salary >$10,000. Employment was the only predictor of owning a cell phone when controlling for income, race, and education. Individuals who were <40 years of age, employed, and more educated were more likely to know how to text message. Employment and post-high school education predicted knowledge of text messaging, when controlling for age. Disparities among internet, cell phone, and text messaging usage exist among HIV-infected individuals.

  3. Neural Generalized Predictive Control: A Newton-Raphson Implementation

    NASA Technical Reports Server (NTRS)

    Soloway, Donald; Haley, Pamela J.

    1997-01-01

    An efficient implementation of Generalized Predictive Control using a multi-layer feedforward neural network as the plant's nonlinear model is presented. By using Newton-Raphson as the optimization algorithm, the number of iterations needed for convergence is significantly reduced compared with other techniques. The main cost of the Newton-Raphson algorithm lies in the calculation of the Hessian, but even with this overhead the low iteration counts make Newton-Raphson faster than other techniques and a viable algorithm for real-time control. This paper presents a detailed derivation of the Neural Generalized Predictive Control algorithm with Newton-Raphson as the minimization algorithm. Simulation results show convergence to a good solution within two iterations, and timing data show that real-time control is possible. Comments about the algorithm's implementation are also included.
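
    As a rough illustration of the idea, not the paper's implementation, the sketch below applies Newton-Raphson steps to a generalized-predictive-control cost over a short control horizon, with numerically estimated gradient and Hessian; the plant model and cost weights are stand-in assumptions, and in the paper's setting a trained neural network would replace plant_step.

        import numpy as np

        def plant_step(y, u):
            """Stand-in nonlinear plant model; a trained neural net goes here."""
            return 0.8 * y + 0.4 * np.tanh(u)

        def gpc_cost(u_seq, y0, ref, lam=0.05):
            """Sum of squared tracking errors plus a control-effort penalty."""
            y, cost = y0, 0.0
            for u in u_seq:
                y = plant_step(y, u)
                cost += (ref - y) ** 2 + lam * u ** 2
            return cost

        def newton_step(u_seq, y0, ref, eps=1e-3):
            """One Newton-Raphson update u <- u - H^-1 g, with finite-difference
            gradient g and Hessian H (the Hessian dominates the cost, as noted)."""
            n = len(u_seq)
            g, H = np.zeros(n), np.zeros((n, n))
            f0 = gpc_cost(u_seq, y0, ref)
            for i in range(n):
                ei = np.eye(n)[i] * eps
                g[i] = (gpc_cost(u_seq + ei, y0, ref) - gpc_cost(u_seq - ei, y0, ref)) / (2 * eps)
                for j in range(n):
                    ej = np.eye(n)[j] * eps
                    H[i, j] = (gpc_cost(u_seq + ei + ej, y0, ref) - gpc_cost(u_seq + ei, y0, ref)
                               - gpc_cost(u_seq + ej, y0, ref) + f0) / eps ** 2
            return u_seq - np.linalg.solve(H, g)

        u = np.zeros(3)
        for _ in range(2):        # a few Newton iterations typically suffice
            u = newton_step(u, y0=0.0, ref=1.0)
        print(u)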

  4. A time series based sequence prediction algorithm to detect activities of daily living in smart home.

    PubMed

    Marufuzzaman, M; Reaz, M B I; Ali, M A M; Rahman, L F

    2015-01-01

    The goal of smart homes is to create an intelligent environment that adapts to the inhabitants' needs and assists people who need special care and safety in their daily life. This can be achieved by collecting ADL (activities of daily living) data and analysing them within existing computing elements. In this research, a recent algorithm named sequence prediction via enhanced episode discovery (SPEED) is modified to include a time component in order to improve accuracy. The modified SPEED, or M-SPEED, is a sequence prediction algorithm that extends the previous SPEED algorithm by using the time duration of appliances' ON-OFF states to decide the next state. M-SPEED discovers periodic episodes of inhabitant behavior, trains on the learned episodes, and makes decisions based on the obtained knowledge. The results showed that M-SPEED achieves 96.8% prediction accuracy, better than other time prediction algorithms such as PUBS, ALZ with temporal rules, and the previous SPEED. Since human behavior shows natural temporal patterns, duration times can be used to predict future events more accurately. This inhabitant activity prediction system will certainly improve smart homes by ensuring safety and better care for elderly and handicapped people.
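
    The abstract gives only the outline of M-SPEED; the toy sketch below illustrates duration-aware sequence prediction in general, choosing the next appliance state from previously observed episodes whose state durations best match the current one (the matching rule and the example data are illustrative assumptions, not the published algorithm):

        # Toy duration-aware sequence prediction over (state, duration) events.
        # Each episode is a list of (appliance_state, duration_minutes) pairs.
        history = [
            [("lamp_on", 30), ("tv_on", 90), ("lamp_off", 1)],
            [("lamp_on", 5), ("coffee_on", 3), ("lamp_off", 1)],
        ]

        def predict_next(current, history):
            """Return the state that followed the closest (state, duration) match
            in past episodes; duration distinguishes otherwise identical states."""
            state, duration = current
            best, best_gap = None, float("inf")
            for episode in history:
                for (s, d), nxt in zip(episode, episode[1:]):
                    if s == state and abs(d - duration) < best_gap:
                        best, best_gap = nxt[0], abs(d - duration)
            return best

        print(predict_next(("lamp_on", 25), history))  # -> "tv_on" (30 min is closer)
        print(predict_next(("lamp_on", 4), history))   # -> "coffee_on"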

  5. Algorithm for Training a Recurrent Multilayer Perceptron

    NASA Technical Reports Server (NTRS)

    Parlos, Alexander G.; Rais, Omar T.; Menon, Sunil K.; Atiya, Amir F.

    2004-01-01

    An improved algorithm has been devised for training a recurrent multilayer perceptron (RMLP) for optimal performance in predicting the behavior of a complex, dynamic, and noisy system multiple time steps into the future. [An RMLP is a computational neural network with self-feedback and cross-talk (both delayed by one time step) among neurons in hidden layers]. Like other neural-network-training algorithms, this algorithm adjusts network biases and synaptic-connection weights according to a gradient-descent rule. The distinguishing feature of this algorithm is a combination of global feedback (the use of predictions as well as the current output value in computing the gradient at each time step) and recursiveness. The recursive aspect of the algorithm lies in the inclusion of the gradient of predictions at each time step with respect to the predictions at the preceding time step; this recursion enables the RMLP to learn the dynamics. It has been conjectured that carrying the recursion to even earlier time steps would enable the RMLP to represent a noisier, more complex system.

  6. A system for learning statistical motion patterns.

    PubMed

    Hu, Weiming; Xiao, Xuejuan; Fu, Zhouyu; Xie, Dan; Tan, Tieniu; Maybank, Steve

    2006-09-01

    Analysis of motion patterns is an effective approach for anomaly detection and behavior prediction. Current approaches for the analysis of motion patterns depend on known scenes, where objects move in predefined ways. It is highly desirable to automatically construct object motion patterns which reflect the knowledge of the scene. In this paper, we present a system for automatically learning motion patterns for anomaly detection and behavior prediction based on a proposed algorithm for robustly tracking multiple objects. In the tracking algorithm, foreground pixels are clustered using a fast accurate fuzzy K-means algorithm. Growing and prediction of the cluster centroids of foreground pixels ensure that each cluster centroid is associated with a moving object in the scene. In the algorithm for learning motion patterns, trajectories are clustered hierarchically using spatial and temporal information and then each motion pattern is represented with a chain of Gaussian distributions. Based on the learned statistical motion patterns, statistical methods are used to detect anomalies and predict behaviors. Our system is tested using image sequences acquired, respectively, from a crowded real traffic scene and a model traffic scene. Experimental results show the robustness of the tracking algorithm, the efficiency of the algorithm for learning motion patterns, and the encouraging performance of algorithms for anomaly detection and behavior prediction.

  7. Analysis of Bioactive Amino Acids from Fish Hydrolysates with a New Bioinformatic Intelligent System Approach.

    PubMed

    Elaziz, Mohamed Abd; Hemdan, Ahmed Monem; Hassanien, AboulElla; Oliva, Diego; Xiong, Shengwu

    2017-09-07

    The current economics of the fish protein industry demand rapid, accurate and expressive prediction algorithms at every step of protein production, especially given the challenge of global climate change. Such algorithms help to predict and analyze functional and nutritional quality and consequently to control food allergies in highly allergic patients, as it is quite expensive and time-consuming to determine these concentrations by laboratory experiments, especially in large-scale projects. This paper therefore introduces a new intelligent algorithm using an adaptive neuro-fuzzy inference system based on the whale optimization algorithm. The algorithm is used to predict the concentration levels of bioactive amino acids in fish protein hydrolysates at different times during the year. The whale optimization algorithm is used to determine the optimal parameters of the adaptive neuro-fuzzy inference system. The results of the proposed algorithm are compared with others, indicating its higher performance.
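
    As a rough, self-contained illustration of the whale optimization component alone (not the paper's ANFIS coupling), the sketch below minimizes a stand-in loss function, which in the paper's setting would be the ANFIS training error; the population size, iteration count and spiral constant are arbitrary assumptions.

        import numpy as np

        rng = np.random.default_rng(0)

        def loss(x):
            """Stand-in objective; the ANFIS parameter-fitting error goes here."""
            return np.sum((x - 1.5) ** 2)

        def whale_optimize(dim=4, n_whales=20, iters=100, b=1.0):
            X = rng.uniform(-5, 5, (n_whales, dim))       # whale positions
            best = min(X, key=loss).copy()
            for t in range(iters):
                a = 2.0 * (1 - t / iters)                 # shrinks from 2 to 0
                for i in range(n_whales):
                    A = 2 * a * rng.random(dim) - a
                    C = 2 * rng.random(dim)
                    if rng.random() < 0.5:
                        if np.all(np.abs(A) < 1):         # encircle the best whale
                            X[i] = best - A * np.abs(C * best - X[i])
                        else:                             # explore a random whale
                            rand = X[rng.integers(n_whales)]
                            X[i] = rand - A * np.abs(C * rand - X[i])
                    else:                                 # spiral bubble-net move
                        l = rng.uniform(-1, 1)
                        X[i] = np.abs(best - X[i]) * np.exp(b * l) * np.cos(2 * np.pi * l) + best
                    if loss(X[i]) < loss(best):
                        best = X[i].copy()
            return best

        print(whale_optimize())   # best solution found; should lie near 1.5 in each dim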

  8. Di-codon Usage for Gene Classification

    NASA Astrophysics Data System (ADS)

    Nguyen, Minh N.; Ma, Jianmin; Fogel, Gary B.; Rajapakse, Jagath C.

    Classification of genes into biologically related groups facilitates inference of their functions. Codon usage bias has been described previously as a potential feature for gene classification. In this paper, we demonstrate that di-codon usage can further improve classification of genes. By using both codon and di-codon features, we achieve near perfect accuracies for the classification of HLA molecules into major classes and sub-classes. The method is illustrated on 1,841 HLA sequences which are classified into two major classes, HLA-I and HLA-II. Major classes are further classified into sub-groups. A binary SVM using di-codon usage patterns achieved 99.95% accuracy in the classification of HLA genes into major HLA classes; and multi-class SVM achieved accuracy rates of 99.82% and 99.03% for sub-class classification of HLA-I and HLA-II genes, respectively. Furthermore, by combining codon and di-codon usages, the prediction accuracies reached 100%, 99.82%, and 99.84% for HLA major class classification, and for sub-class classification of HLA-I and HLA-II genes, respectively.
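
    A minimal sketch of the feature extraction this approach relies on, counting codon and di-codon (adjacent codon pair) frequencies that could then be fed to an SVM; the example sequence and the normalization are illustrative.

        from collections import Counter

        def codon_features(seq):
            """Relative codon and di-codon usage from an in-frame DNA sequence."""
            codons = [seq[i:i + 3] for i in range(0, len(seq) - 2, 3)]
            dicodons = [a + b for a, b in zip(codons, codons[1:])]
            def freqs(items):
                counts = Counter(items)
                total = sum(counts.values())
                return {k: v / total for k, v in counts.items()}
            return freqs(codons), freqs(dicodons)

        cod, dicod = codon_features("ATGGCTGCTAAATAA")
        print(cod)    # {'ATG': 0.2, 'GCT': 0.4, 'AAA': 0.2, 'TAA': 0.2}
        print(dicod)  # {'ATGGCT': 0.25, 'GCTGCT': 0.25, 'GCTAAA': 0.25, 'AAATAA': 0.25}
        # Concatenating both frequency vectors (over a fixed codon ordering)
        # yields the input representation for an SVM classifier.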

  9. Predicting biomedical metadata in CEDAR: A study of Gene Expression Omnibus (GEO).

    PubMed

    Panahiazar, Maryam; Dumontier, Michel; Gevaert, Olivier

    2017-08-01

    A crucial and limiting factor in data reuse is the lack of accurate, structured, and complete descriptions of data, known as metadata. Towards improving the quantity and quality of metadata, we propose a novel metadata prediction framework to learn associations from existing metadata that can be used to predict metadata values. We evaluate our framework in the context of experimental metadata from the Gene Expression Omnibus (GEO). We applied four rule mining algorithms to the most common structured metadata elements (sample type, molecular type, platform, label type and organism) from over 1.3 million GEO records. We examined the quality of well-supported rules from each algorithm and visualized the dependencies among metadata elements. Finally, we evaluated the performance of the algorithms in terms of accuracy, precision, recall, and F-measure. We found that PART is the best algorithm, outperforming Apriori, Predictive Apriori, and Decision Table. All algorithms perform significantly better in predicting class values than the majority vote classifier. We found that the performance of the algorithms is related to the dimensionality of the GEO elements: the average performance of all algorithms increases as the number of unique values of these elements decreases (2697 platforms, 537 organisms, 454 labels, 9 molecules, and 5 types). Our work suggests that experimental metadata such as that in GEO can be accurately predicted using rule mining algorithms. Our work has implications for both prospective and retrospective augmentation of metadata quality, which are geared towards making data easier to find and reuse. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
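
    As a toy illustration of rule-based metadata prediction of the kind evaluated here (not the PART or Apriori implementations themselves), the sketch below applies a mined if-then rule to fill in a missing element and falls back to the majority-vote baseline the study compares against; the records and the rule are invented for the example.

        from collections import Counter

        # Invented example records: (platform, organism, molecule).
        records = [
            ("GPL570", "Homo sapiens", "total RNA"),
            ("GPL570", "Homo sapiens", "total RNA"),
            ("GPL1261", "Mus musculus", "total RNA"),
        ]

        # A mined rule maps a known element to a predicted value of a target field.
        rules = {("organism", "Homo sapiens"): ("platform", "GPL570")}

        def predict(field, known, records, rules):
            """Predict a missing metadata value from rules; fall back to the
            majority value of `field` across all records (majority-vote baseline)."""
            for (cond_field, cond_value), (tgt, value) in rules.items():
                if tgt == field and known.get(cond_field) == cond_value:
                    return value
            idx = {"platform": 0, "organism": 1, "molecule": 2}[field]
            return Counter(r[idx] for r in records).most_common(1)[0][0]

        print(predict("platform", {"organism": "Homo sapiens"}, records, rules))
        # -> "GPL570" (rule hit); with no matching rule the majority value is used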

  10. Enhanced clinical pharmacy service targeting tools: risk-predictive algorithms.

    PubMed

    El Hajji, Feras W D; Scullin, Claire; Scott, Michael G; McElnay, James C

    2015-04-01

    This study aimed to determine the value of using a mix of clinical pharmacy data and routine hospital admission spell data in the development of predictive algorithms. Exploration of risk factors in hospitalized patients, together with the targeting strategies devised, will enable the prioritization of clinical pharmacy services to optimize patient outcomes. Predictive algorithms were developed through a series of detailed steps using a 75% sample of integrated medicines management (IMM) patients, and validated using the remaining 25%. IMM patients receive targeted clinical pharmacy input throughout their hospital stay. The algorithms were applied to the validation sample, and a predicted risk probability was generated for each patient from the coefficients. Risk thresholds for the algorithms were determined by identifying the cut-off points of the risk scores at which the algorithms had the highest discriminative performance. Clinical pharmacy staffing levels were obtained from the pharmacy department staffing database. The number of previous emergency admissions and of admission medicines, together with age-adjusted co-morbidity and diuretic receipt, formed a 12-month post-discharge and/or readmission risk algorithm. Age-adjusted co-morbidity proved to be the best index for predicting mortality. Increased numbers of clinical pharmacy staff at ward level were correlated with a reduction in the risk-adjusted mortality index (RAMI). The algorithms created were valid in predicting the risk of in-hospital and post-discharge mortality and the risk of hospital readmission 3, 6 and 12 months post-discharge. The provision of ward-based clinical pharmacy services is a key component in reducing RAMI and enabling the full benefits of pharmacy input to patient care to be realized. © 2014 John Wiley & Sons, Ltd.
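
    As a generic illustration of fitting a risk-predictive algorithm and choosing its cut-off by discriminative performance (the data are synthetic, and logistic regression with Youden's J is an assumed stand-in for the study's modelling):

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_curve

        rng = np.random.default_rng(1)

        # Synthetic features: previous emergency admissions, number of admission
        # medicines, age-adjusted co-morbidity score, diuretic receipt (0/1).
        X = rng.random((400, 4)) * np.array([5, 15, 10, 1])
        X[:, 3] = (X[:, 3] > 0.5).astype(float)
        y = ((X @ np.array([0.3, 0.1, 0.25, 0.8]) + rng.normal(0, 1, 400)) > 2.5).astype(int)

        train, test = slice(0, 300), slice(300, 400)        # 75% / 25% split
        model = LogisticRegression(max_iter=1000).fit(X[train], y[train])
        risk = model.predict_proba(X[test])[:, 1]           # predicted risk probability

        # Pick the cut-off with the highest discriminative performance
        # (Youden's J = sensitivity + specificity - 1 along the ROC curve).
        fpr, tpr, thresholds = roc_curve(y[test], risk)
        print(f"risk threshold = {thresholds[np.argmax(tpr - fpr)]:.2f}")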

  11. A comprehensive performance evaluation on the prediction results of existing cooperative transcription factors identification algorithms.

    PubMed

    Lai, Fu-Jou; Chang, Hong-Tsun; Huang, Yueh-Min; Wu, Wei-Sheng

    2014-01-01

    Eukaryotic transcriptional regulation is known to be highly connected through networks of cooperative transcription factors (TFs). Measuring the cooperativity of TFs is helpful for understanding the biological relevance of these TFs in regulating genes. Recent advances in computational techniques have led to various predictions of cooperative TF pairs in yeast. As each algorithm integrated different data resources and was developed on a different rationale, each possessed its own merits and claimed to outperform the others. However, these claims were prone to subjectivity because each algorithm was compared with only a few other algorithms, using only a small set of performance indices. This motivated us to propose a series of indices to objectively evaluate the prediction performance of existing algorithms, and, based on the proposed performance indices, we conducted a comprehensive performance evaluation. We collected 14 sets of predicted cooperative TF pairs (PCTFPs) in yeast from 14 existing algorithms in the literature. Using the eight performance indices we adopted/proposed, the cooperativity of each PCTFP was measured, and for each performance index a ranking score according to the mean cooperativity of the set was given to each set of PCTFPs under evaluation. The ranking scores of a set of PCTFPs vary with different performance indices, implying that an algorithm used in predicting cooperative TF pairs has strengths in some respects but may have weaknesses in others. We finally made a comprehensive ranking of these 14 sets. The results showed that Wang J's study obtained the best performance evaluation on the prediction of cooperative TF pairs in yeast. In this study, we adopted/proposed eight performance indices to make a comprehensive performance evaluation of the prediction results of 14 existing cooperative TF identification algorithms. Most importantly, these proposed indices can be easily applied to measure the performance of new algorithms developed in the future, thus expediting progress in this research field.

  12. Characterization and prediction of the backscattered form function of an immersed cylindrical shell using hybrid fuzzy clustering and bio-inspired algorithms.

    PubMed

    Agounad, Said; Aassif, El Houcein; Khandouch, Younes; Maze, Gérard; Décultot, Dominique

    2018-02-01

    The acoustic scattering of a plane wave by an elastic cylindrical shell is studied. A new approach is developed to predict the form function of an immersed cylindrical shell of radius ratio b/a ('b' is the inner radius and 'a' is the outer radius). The prediction of the backscattered form function is investigated by an approach combining fuzzy clustering algorithms and bio-inspired algorithms. Four well-known fuzzy clustering algorithms, the fuzzy c-means (FCM), the Gustafson-Kessel algorithm (GK), the fuzzy c-regression model (FCRM) and the Gath-Geva algorithm (GG), are combined with particle swarm optimization and a genetic algorithm. The symmetric and antisymmetric circumferential waves A, S0, A1, S1 and S2 are investigated in a reduced frequency (k1a) range extending over 0.1

  13. Blind column selection protocol for two-dimensional high performance liquid chromatography.

    PubMed

    Burns, Niki K; Andrighetto, Luke M; Conlan, Xavier A; Purcell, Stuart D; Barnett, Neil W; Denning, Jacquie; Francis, Paul S; Stevenson, Paul G

    2016-07-01

    The selection of two orthogonal columns for two-dimensional high performance liquid chromatography (LC×LC) separation of natural product extracts can be a labour-intensive and time-consuming process and in many cases is an entirely trial-and-error approach. This paper introduces a blind optimisation method for column selection for a black box of constituent components. A data processing pipeline, created in the open source application OpenMS®, was developed to map the components within a mixture of equal mass across a library of HPLC columns; LC×LC separation space utilisation was compared by measuring the fractional surface coverage, fcoverage. It was found that for a test mixture from an opium poppy (Papaver somniferum) extract, the combination of diphenyl and C18 stationary phases provided a predicted fcoverage of 0.48, matched by an actual usage of 0.43. OpenMS®, in conjunction with algorithms designed in-house, allowed for a significantly quicker selection of two orthogonal columns, optimised for an LC×LC separation of crude extracts of plant material. Copyright © 2016 Elsevier B.V. All rights reserved.

  14. Secure and Efficient Transmission of Hyperspectral Images for Geosciences Applications

    NASA Astrophysics Data System (ADS)

    Carpentieri, Bruno; Pizzolante, Raffaele

    2017-12-01

    Hyperspectral images are acquired through air-borne or space-borne special cameras (sensors) that collect information coming from the electromagnetic spectrum of the observed terrains. Hyperspectral remote sensing and hyperspectral images are used for a wide range of purposes: originally, they were developed for mining applications and for geology because of the capability of this kind of image to correctly identify various types of underground minerals by analysing the reflected spectra, but their usage has spread to other application fields, such as ecology, military and surveillance, historical research and even archaeology. The large amount of data produced by hyperspectral sensors, the fact that these images are acquired at high cost by air-borne sensors, and the fact that they are generally transmitted to a base station make it necessary to provide an efficient and secure transmission protocol. In this paper, we propose a novel framework that allows secure and efficient transmission of hyperspectral images by combining a reversible invisible watermarking scheme, used in conjunction with digital signature techniques, and a state-of-the-art predictive-based lossless compression algorithm.

  15. New realisation of Preisach model using adaptive polynomial approximation

    NASA Astrophysics Data System (ADS)

    Liu, Van-Tsai; Lin, Chun-Liang; Wing, Home-Young

    2012-09-01

    Modelling systems with hysteresis has received considerable attention recently due to increasingly demanding accuracy requirements in engineering applications. The classical Preisach model (CPM) is the most popular model of hysteresis and can be represented by an infinite but countable set of first-order reversal curves (FORCs). The usage of look-up tables is one way to implement the CPM in practice; the data in those tables correspond to samples of a finite number of FORCs. This approach, however, faces two major problems: firstly, it requires a large amount of memory space to obtain an accurate prediction of hysteresis; secondly, it is difficult to derive efficient ways to modify the data table to reflect the timing effect of elements with hysteresis. To overcome these problems, this article proposes the idea of using a set of polynomials to emulate the CPM instead of table look-up. The polynomial approximation requires less memory space for data storage. Furthermore, the polynomial coefficients can be obtained accurately by least-squares approximation or by an adaptive identification algorithm, opening the possibility of accurately tracking the hysteresis model parameters.
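
    A minimal sketch of replacing one look-up-table row by a least-squares polynomial fit; the sampled FORC data are synthetic, and numpy.polyfit stands in for the least-squares or adaptive identification the article describes.

        import numpy as np

        # Synthetic samples of one first-order reversal curve: output B versus
        # input H, as would otherwise be stored row-by-row in a look-up table.
        H = np.linspace(-1.0, 1.0, 50)
        B = np.tanh(2.0 * H) + 0.05 * np.random.default_rng(0).normal(size=H.size)

        # Fit a low-order polynomial; a handful of coefficients replaces the
        # full table row, cutting memory usage.
        coeffs = np.polyfit(H, B, deg=5)
        curve = np.poly1d(coeffs)

        print(curve(0.3))                  # evaluate the emulated FORC anywhere
        print(np.abs(curve(H) - B).max())  # worst-case fit error over the samples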

  16. Prediction algorithms for urban traffic control

    DOT National Transportation Integrated Search

    1979-02-01

    The objectives of this study are to 1) review and assess the state-of-the-art of prediction algorithms for urban traffic control in terms of their accuracy and application, and 2) determine the prediction accuracy obtainable by examining the performa...

  17. The comparison of robust partial least squares regression with robust principal component regression on a real

    NASA Astrophysics Data System (ADS)

    Polat, Esra; Gunay, Suleyman

    2013-10-01

    One of the problems encountered in Multiple Linear Regression (MLR) is multicollinearity, which causes overestimation of the regression parameters and increases their variance. Hence, when multicollinearity is present, biased estimation procedures such as classical Principal Component Regression (CPCR) and Partial Least Squares Regression (PLSR) are performed. SIMPLS is the leading PLSR algorithm because of its speed and efficiency and because its results are easier to interpret. However, both CPCR and SIMPLS yield very unreliable results when the data set contains outlying observations. Therefore, Hubert and Vanden Branden (2003) presented a robust PCR (RPCR) method and a robust PLSR (RPLSR) method called RSIMPLS. In RPCR, a robust Principal Component Analysis (PCA) method for high-dimensional data is first applied to the independent variables; the dependent variables are then regressed on the scores using a robust regression method. RSIMPLS is constructed from a robust covariance matrix for high-dimensional data and robust linear regression. The purpose of this study is to demonstrate the usage of the RPCR and RSIMPLS methods on an econometric data set, comparing the two methods on an inflation model of Turkey. The methods are compared in terms of predictive ability and goodness of fit by using a robust Root Mean Squared Error of Cross-validation (R-RMSECV), a robust R2 value and the Robust Component Selection (RCS) statistic.
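
    A simplified sketch of the RPCR pipeline, projecting onto principal component scores and then applying an outlier-resistant regression; scikit-learn's PCA and HuberRegressor are stand-ins for the high-dimensional robust PCA and robust regression of Hubert and Vanden Branden.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.linear_model import HuberRegressor

        rng = np.random.default_rng(0)

        # Collinear predictors with a few outlying observations.
        z = rng.normal(size=(100, 2))
        X = np.column_stack([z[:, 0], z[:, 0] + 0.01 * rng.normal(size=100), z[:, 1]])
        y = 2.0 * z[:, 0] - z[:, 1] + 0.1 * rng.normal(size=100)
        y[:5] += 20.0                        # contaminate with vertical outliers

        # Step 1: project onto principal component scores (robust PCA in RPCR).
        scores = PCA(n_components=2).fit_transform(X)

        # Step 2: regress y on the scores with an outlier-resistant estimator.
        model = HuberRegressor().fit(scores, y)
        print(model.coef_, model.intercept_)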

  18. Variability in the Propagation Phase of CFD-Based Noise Prediction: Summary of Results From Category 8 of the BANC-III Workshop

    NASA Technical Reports Server (NTRS)

    Lopes, Leonard; Redonnet, Stephane; Imamura, Taro; Ikeda, Tomoaki; Zawodny, Nikolas; Cunha, Guilherme

    2015-01-01

    The usage of Computational Fluid Dynamics (CFD) in noise prediction has typically been a two-part process: accurately predicting the flow conditions in the near-field, and then propagating the noise from the near-field to the observer. Due to the increase in computing power and the cost benefit when weighed against wind tunnel testing, the usage of CFD to estimate the local flow field of complex geometrical structures has become more routine. Recently, the Benchmark problems in Airframe Noise Computation (BANC) workshops have provided a community focus on accurately simulating the local flow field near the body with various CFD approaches. However, to date, little effort has gone into assessing the impact of the propagation phase of noise prediction. This paper includes results from the BANC-III workshop which explore variability in the propagation phase of CFD-based noise prediction, covering two test cases: an analytical solution of a quadrupole source near a sphere and a computational solution around a nose landing gear. Agreement between three codes was very good for the analytic test case, but CFD-based noise predictions indicate that the propagation phase can introduce 3 dB or more of variability in noise predictions.

  19. Phase 2 development of Great Lakes algorithms for Nimbus-7 coastal zone color scanner

    NASA Technical Reports Server (NTRS)

    Tanis, Fred J.

    1984-01-01

    A series of experiments have been conducted in the Great Lakes designed to evaluate the application of the NIMBUS-7 Coastal Zone Color Scanner (CZCS). Atmospheric and water optical models were used to relate surface and subsurface measurements to satellite measured radiances. Absorption and scattering measurements were reduced to obtain a preliminary optical model for the Great Lakes. Algorithms were developed for geometric correction, correction for Rayleigh and aerosol path radiance, and prediction of chlorophyll-a pigment and suspended mineral concentrations. The atmospheric algorithm developed compared favorably with existing algorithms and was the only algorithm found to adequately predict the radiance variations in the 670 nm band. The atmospheric correction algorithm developed was designed to extract needed algorithm parameters from the CZCS radiance values. The Gordon/NOAA ocean algorithms could not be demonstrated to work for Great Lakes waters. Predicted values of chlorophyll-a concentration compared favorably with expected and measured data for several areas of the Great Lakes.

  20. Optimal design of low-density SNP arrays for genomic prediction: algorithm and applications

    USDA-ARS?s Scientific Manuscript database

    Low-density (LD) single nucleotide polymorphism (SNP) arrays provide a cost-effective solution for genomic prediction and selection, but algorithms and computational tools are needed for their optimal design. A multiple-objective, local optimization (MOLO) algorithm was developed for design of optim...

  1. [Classification of results of studying blood plasma with laser correlation spectroscopy based on semiotics of preclinical and clinical states].

    PubMed

    Ternovoĭ, K S; Kryzhanovskiĭ, G N; Musiĭchuk, Iu I; Noskin, L A; Klopov, N V; Noskin, V A; Starodub, N F

    1998-01-01

    The usage of laser correlation spectroscopy for the verification of preclinical and clinical states is substantiated. A "semiotic" classifier developed for solving the problems of preclinical and clinical states is presented, together with the substantiation of its biological algorithms and the mathematical support and software for applying the classifier to laser correlation spectroscopy data from blood plasma.

  2. A parameter estimation subroutine package

    NASA Technical Reports Server (NTRS)

    Bierman, G. J.; Nead, W. M.

    1977-01-01

    Linear least squares estimation and regression analyses continue to play a major role in orbit determination and related areas. FORTRAN subroutines have been developed to facilitate analyses of a variety of parameter estimation problems. Easy-to-use, multipurpose sets of algorithms are reported that are reasonably efficient and use a minimal amount of computer storage. Subroutine inputs, outputs, usage and listings are given, along with examples of how these routines can be used.

  3. Dicotyledon Weed Quantification Algorithm for Selective Herbicide Application in Maize Crops.

    PubMed

    Laursen, Morten Stigaard; Jørgensen, Rasmus Nyholm; Midtiby, Henrik Skov; Jensen, Kjeld; Christiansen, Martin Peter; Giselsson, Thomas Mosgaard; Mortensen, Anders Krogh; Jensen, Peter Kryger

    2016-11-04

    The stricter legislation within the European Union for the regulation of herbicides that are prone to leaching causes a greater economic burden on the agricultural industry through taxation. This increased economic burden has prompted research into reducing herbicide usage. High-resolution images from digital cameras support the study of plant characteristics and can also be used to analyze shape and texture characteristics for weed identification. Instead of detecting weed patches, weed density can be estimated at a sub-patch level, through which even the identification of a single plant is possible. The aim of this study is to adapt the monocot and dicot coverage ratio vision (MoDiCoVi) algorithm to estimate dicotyledon leaf cover, perform grid spraying in real time, and present initial results in terms of potential herbicide savings in maize. The authors designed and executed an automated, large-scale field trial supported by the Armadillo autonomous tool carrier robot. The field trial consisted of 299 maize plots. Half of the plots (parcels) were planned with additional seeded weeds; the other half were planned with naturally occurring weeds. The in-situ evaluation showed that, compared to conventional broadcast spraying, the proposed method can reduce herbicide usage by 65% without measurable loss in biological effect.

  4. Dicotyledon Weed Quantification Algorithm for Selective Herbicide Application in Maize Crops

    PubMed Central

    Laursen, Morten Stigaard; Jørgensen, Rasmus Nyholm; Midtiby, Henrik Skov; Jensen, Kjeld; Christiansen, Martin Peter; Giselsson, Thomas Mosgaard; Mortensen, Anders Krogh; Jensen, Peter Kryger

    2016-01-01

    The stricter legislation within the European Union for the regulation of herbicides that are prone to leaching causes a greater economic burden on the agricultural industry through taxation. This increased economic burden has prompted research into reducing herbicide usage. High-resolution images from digital cameras support the study of plant characteristics and can also be used to analyze shape and texture characteristics for weed identification. Instead of detecting weed patches, weed density can be estimated at a sub-patch level, through which even the identification of a single plant is possible. The aim of this study is to adapt the monocot and dicot coverage ratio vision (MoDiCoVi) algorithm to estimate dicotyledon leaf cover, perform grid spraying in real time, and present initial results in terms of potential herbicide savings in maize. The authors designed and executed an automated, large-scale field trial supported by the Armadillo autonomous tool carrier robot. The field trial consisted of 299 maize plots. Half of the plots (parcels) were planned with additional seeded weeds; the other half were planned with naturally occurring weeds. The in-situ evaluation showed that, compared to conventional broadcast spraying, the proposed method can reduce herbicide usage by 65% without measurable loss in biological effect. PMID:27827908

  5. Development of a generally applicable morphokinetic algorithm capable of predicting the implantation potential of embryos transferred on Day 3.

    PubMed

    Petersen, Bjørn Molt; Boel, Mikkel; Montag, Markus; Gardner, David K

    2016-10-01

    Can a generally applicable morphokinetic algorithm suitable for Day 3 transfers of time-lapse monitored embryos originating from different culture conditions and fertilization methods be developed for the purpose of supporting the embryologist's decision on which embryo to transfer back to the patient in assisted reproduction? The algorithm presented here can be used independently of culture conditions and fertilization method and provides predictive power not surpassed by other published algorithms for ranking embryos according to their blastocyst formation potential. Generally applicable algorithms have so far been developed only for predicting blastocyst formation. A number of clinics have reported validated implantation prediction algorithms, which have been developed based on clinic-specific culture conditions and clinical environment. However, a generally applicable embryo evaluation algorithm based on actual implantation outcome has not yet been reported. Retrospective evaluation of data extracted from a database of known implantation data (KID) originating from 3275 embryos transferred on Day 3 conducted in 24 clinics between 2009 and 2014. The data represented different culture conditions (reduced and ambient oxygen with various culture medium strategies) and fertilization methods (IVF, ICSI). The capability to predict blastocyst formation was evaluated on an independent set of morphokinetic data from 11 218 embryos which had been cultured to Day 5. The algorithm was developed by applying automated recursive partitioning to a large number of annotation types and derived equations, progressing to a five-fold cross-validation test of the complete data set and a validation test of different incubation conditions and fertilization methods. The results were expressed as receiver operating characteristics curves using the area under the curve (AUC) to establish the predictive strength of the algorithm. By applying the here developed algorithm (KIDScore), which was based on six annotations (the number of pronuclei equals 2 at the 1-cell stage, time from insemination to pronuclei fading at the 1-cell stage, time from insemination to the 2-cell stage, time from insemination to the 3-cell stage, time from insemination to the 5-cell stage and time from insemination to the 8-cell stage) and ranking the embryos in five groups, the implantation potential of the embryos was predicted with an AUC of 0.650. On Day 3 the KIDScore algorithm was capable of predicting blastocyst development with an AUC of 0.745 and blastocyst quality with an AUC of 0.679. In a comparison of blastocyst prediction including six other published algorithms and KIDScore, only KIDScore and one more algorithm surpassed an algorithm constructed on conventional Alpha/ESHRE consensus timings in terms of predictive power. Some morphological assessments were not available and consequently three of the algorithms in the comparison were not used in full and may therefore have been put at a disadvantage. Algorithms based on implantation data from Day 3 embryo transfers require adjustments to be capable of predicting the implantation potential of Day 5 embryo transfers. The current study is restricted by its retrospective nature and absence of live birth information. Prospective Randomized Controlled Trials should be used in future studies to establish the value of time-lapse technology and morphokinetic evaluation. Algorithms applicable to different culture conditions can be developed if based on large data sets of heterogeneous origin. This study was funded by Vitrolife A/S, Denmark and Vitrolife AB, Sweden. B.M.P.'s company BMP Analytics is performing consultancy for Vitrolife A/S. M.B. is employed at Vitrolife A/S. M.M.'s company ilabcomm GmbH received honorarium for consultancy from Vitrolife AB. D.K.G. received research support from Vitrolife AB. © The Author 2016. Published by Oxford University Press on behalf of the European Society of Human Reproduction and Embryology.
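
    As a generic illustration of deriving such a ranking model from timing annotations (not the KIDScore algorithm itself), a recursive-partitioning model over the six annotations named above might be trained and scored as below; the data, cut-offs and toy implantation label are invented.

        import numpy as np
        from sklearn.tree import DecisionTreeClassifier
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(0)

        # Invented morphokinetic annotations (hours from insemination):
        # pronuclei fading, 2-cell, 3-cell, 5-cell, 8-cell, plus a 2PN flag.
        X = np.column_stack([
            rng.normal(24, 2, 500),   # pronuclei fading
            rng.normal(26, 2, 500),   # 2-cell
            rng.normal(37, 3, 500),   # 3-cell
            rng.normal(50, 4, 500),   # 5-cell
            rng.normal(55, 5, 500),   # 8-cell
            rng.integers(0, 2, 500),  # 2 pronuclei observed
        ])
        y = ((X[:, 4] < 55) & (X[:, 5] == 1)).astype(int)  # toy implantation label

        tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X[:400], y[:400])
        prob = tree.predict_proba(X[400:])[:, 1]
        print("AUC:", roc_auc_score(y[400:], prob))
        # Binning `prob` into five groups gives a KIDScore-style ranking.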

  6. Development of a generally applicable morphokinetic algorithm capable of predicting the implantation potential of embryos transferred on Day 3

    PubMed Central

    Petersen, Bjørn Molt; Boel, Mikkel; Montag, Markus; Gardner, David K.

    2016-01-01

    STUDY QUESTION Can a generally applicable morphokinetic algorithm suitable for Day 3 transfers of time-lapse monitored embryos originating from different culture conditions and fertilization methods be developed for the purpose of supporting the embryologist's decision on which embryo to transfer back to the patient in assisted reproduction? SUMMARY ANSWER The algorithm presented here can be used independently of culture conditions and fertilization method and provides predictive power not surpassed by other published algorithms for ranking embryos according to their blastocyst formation potential. WHAT IS KNOWN ALREADY Generally applicable algorithms have so far been developed only for predicting blastocyst formation. A number of clinics have reported validated implantation prediction algorithms, which have been developed based on clinic-specific culture conditions and clinical environment. However, a generally applicable embryo evaluation algorithm based on actual implantation outcome has not yet been reported. STUDY DESIGN, SIZE, DURATION Retrospective evaluation of data extracted from a database of known implantation data (KID) originating from 3275 embryos transferred on Day 3 conducted in 24 clinics between 2009 and 2014. The data represented different culture conditions (reduced and ambient oxygen with various culture medium strategies) and fertilization methods (IVF, ICSI). The capability to predict blastocyst formation was evaluated on an independent set of morphokinetic data from 11 218 embryos which had been cultured to Day 5. PARTICIPANTS/MATERIALS, SETTING, METHODS The algorithm was developed by applying automated recursive partitioning to a large number of annotation types and derived equations, progressing to a five-fold cross-validation test of the complete data set and a validation test of different incubation conditions and fertilization methods. The results were expressed as receiver operating characteristics curves using the area under the curve (AUC) to establish the predictive strength of the algorithm. MAIN RESULTS AND THE ROLE OF CHANCE By applying the here developed algorithm (KIDScore), which was based on six annotations (the number of pronuclei equals 2 at the 1-cell stage, time from insemination to pronuclei fading at the 1-cell stage, time from insemination to the 2-cell stage, time from insemination to the 3-cell stage, time from insemination to the 5-cell stage and time from insemination to the 8-cell stage) and ranking the embryos in five groups, the implantation potential of the embryos was predicted with an AUC of 0.650. On Day 3 the KIDScore algorithm was capable of predicting blastocyst development with an AUC of 0.745 and blastocyst quality with an AUC of 0.679. In a comparison of blastocyst prediction including six other published algorithms and KIDScore, only KIDScore and one more algorithm surpassed an algorithm constructed on conventional Alpha/ESHRE consensus timings in terms of predictive power. LIMITATIONS, REASONS FOR CAUTION Some morphological assessments were not available and consequently three of the algorithms in the comparison were not used in full and may therefore have been put at a disadvantage. Algorithms based on implantation data from Day 3 embryo transfers require adjustments to be capable of predicting the implantation potential of Day 5 embryo transfers. The current study is restricted by its retrospective nature and absence of live birth information. 
Prospective Randomized Controlled Trials should be used in future studies to establish the value of time-lapse technology and morphokinetic evaluation. WIDER IMPLICATIONS OF THE FINDINGS Algorithms applicable to different culture conditions can be developed if based on large data sets of heterogeneous origin. STUDY FUNDING/COMPETING INTEREST(S) This study was funded by Vitrolife A/S, Denmark and Vitrolife AB, Sweden. B.M.P.’s company BMP Analytics is performing consultancy for Vitrolife A/S. M.B. is employed at Vitrolife A/S. M.M.’s company ilabcomm GmbH received honorarium for consultancy from Vitrolife AB. D.K.G. received research support from Vitrolife AB. PMID:27609980
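
    As a rough illustration of the workflow this record describes, the sketch below fits a recursive-partitioning classifier (the technique behind KIDScore's grouping) to synthetic morphokinetic annotations and scores it with a cross-validated AUC. All data, feature names and timings are fabricated stand-ins, not the study's model or data.

```python
# Sketch: recursive partitioning on synthetic morphokinetic annotations,
# scored by cross-validated AUC. Everything here is fabricated for
# illustration; it is not the KIDScore model or its data.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 1000
X = np.column_stack([
    rng.normal(24.0, 2.0, n),   # hypothetical tPNf (pronuclei fading, h)
    rng.normal(26.0, 2.5, n),   # t2
    rng.normal(37.0, 3.0, n),   # t3
    rng.normal(50.0, 4.0, n),   # t5
    rng.normal(56.0, 5.0, n),   # t8
])
z = (X - X.mean(0)).sum(1)
z /= z.std()
y = rng.random(n) < 1 / (1 + np.exp(0.9 * z + 0.5))  # slower embryos implant less

# max_leaf_nodes=5 mimics ranking the embryos into five groups.
tree = DecisionTreeClassifier(max_leaf_nodes=5, random_state=0)
scores = cross_val_predict(tree, X, y, cv=5, method="predict_proba")[:, 1]
print(f"5-fold cross-validated AUC: {roc_auc_score(y, scores):.3f}")
```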

  7. High Speed Research Noise Prediction Code (HSRNOISE) User's and Theoretical Manual

    NASA Technical Reports Server (NTRS)

    Golub, Robert (Technical Monitor); Rawls, John W., Jr.; Yeager, Jessie C.

    2004-01-01

    This report describes a computer program, HSRNOISE, that predicts noise levels for a supersonic aircraft powered by mixed flow turbofan engines with rectangular mixer-ejector nozzles. It fully documents the noise prediction algorithms, provides instructions for executing the HSRNOISE code, and provides predicted noise levels for the High Speed Research (HSR) program Technology Concept (TC) aircraft. The component source noise prediction algorithms were developed jointly by Boeing, General Electric Aircraft Engines (GEAE), NASA and Pratt & Whitney during the course of the NASA HSR program. Modern Technologies Corporation developed an alternative mixer ejector jet noise prediction method under contract to GEAE that has also been incorporated into the HSRNOISE prediction code. Algorithms for determining propagation effects and calculating noise metrics were taken from the NASA Aircraft Noise Prediction Program.

  8. Real-time prediction and gating of respiratory motion using an extended Kalman filter and Gaussian process regression

    NASA Astrophysics Data System (ADS)

    Bukhari, W.; Hong, S.-M.

    2015-01-01

    Motion-adaptive radiotherapy aims to deliver a conformal dose to the target tumour with minimal normal tissue exposure by compensating for tumour motion in real time. The prediction as well as the gating of respiratory motion have received much attention over the last two decades for reducing the targeting error of the treatment beam due to respiratory motion. In this article, we present a real-time algorithm for predicting and gating respiratory motion that utilizes a model-based and a model-free Bayesian framework by combining them in a cascade structure. The algorithm, named EKF-GPR+, implements a gating function without pre-specifying a particular region of the patient’s breathing cycle. The algorithm first employs an extended Kalman filter (LCM-EKF) to predict the respiratory motion and then uses a model-free Gaussian process regression (GPR) to correct the error of the LCM-EKF prediction. The GPR is a non-parametric Bayesian algorithm that yields predictive variance under Gaussian assumptions. The EKF-GPR+ algorithm utilizes the predictive variance from the GPR component to capture the uncertainty in the LCM-EKF prediction error and systematically identify breathing points with a higher probability of large prediction error in advance. This identification allows us to pause the treatment beam over such instances. EKF-GPR+ implements the gating function by using simple calculations based on the predictive variance with no additional detection mechanism. A sparse approximation of the GPR algorithm is employed to realize EKF-GPR+ in real time. Extensive numerical experiments are performed based on a large database of 304 respiratory motion traces to evaluate EKF-GPR+. The experimental results show that the EKF-GPR+ algorithm effectively reduces the prediction error in a root-mean-square (RMS) sense by employing the gating function, albeit at the cost of a reduced duty cycle. As an example, EKF-GPR+ reduces the patient-wise RMS error to 37%, 39% and 42% in percent ratios relative to no prediction for a duty cycle of 80% at lookahead lengths of 192 ms, 384 ms and 576 ms, respectively. The experiments also confirm that EKF-GPR+ controls the duty cycle with reasonable accuracy.
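
    The cascade idea (a model-based predictor whose error is corrected by GPR, with the GPR predictive variance driving the gate) can be sketched as below. The LCM-EKF is replaced here by a plain linear extrapolator, and the signal, lookahead and gating quantile are illustrative assumptions.

```python
# Sketch of the cascade: a simple baseline predictor, a GPR trained on its
# residuals, and a gate driven by the GPR predictive standard deviation.
# Signal, lookahead and the 80% duty-cycle quantile are assumptions.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

t = np.arange(0, 30, 0.05)                 # ~20 Hz breathing-like trace
pos = np.sin(2 * np.pi * t / 4) + 0.05 * np.random.default_rng(1).normal(size=t.size)

la = 8                                     # lookahead in samples (~400 ms)
base = 2 * pos[la:-la] - pos[:-2 * la]     # stand-in for the EKF: linear extrapolation
target = pos[2 * la:]                      # true position one lookahead ahead
resid = target - base                      # error the GPR learns to correct

H = np.column_stack([pos[la:-la], pos[:-2 * la]])   # recent-history features
gpr = GaussianProcessRegressor(kernel=RBF(1.0) + WhiteKernel(1e-3), normalize_y=True)
n_train = 300
gpr.fit(H[:n_train], resid[:n_train])

corr, std = gpr.predict(H[n_train:], return_std=True)
pred = base[n_train:] + corr
beam_on = std < np.quantile(std, 0.8)      # gate off the most uncertain 20%
rmse = np.sqrt(np.mean((pred[beam_on] - target[n_train:][beam_on]) ** 2))
print(f"duty cycle {beam_on.mean():.0%}, gated RMSE {rmse:.3f}")
```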

  9. An improved stochastic fractal search algorithm for 3D protein structure prediction.

    PubMed

    Zhou, Changjun; Sun, Chuan; Wang, Bin; Wang, Xiaojun

    2018-05-03

    Protein structure prediction (PSP) is a significant area for biological information research, disease treatment, and drug development. In this paper, three-dimensional structures of proteins are predicted from known amino acid sequences, and the structure prediction problem is transformed into a typical NP-hard problem by an AB off-lattice model. This work applies a novel improved Stochastic Fractal Search algorithm (ISFS) to solve the problem. The Stochastic Fractal Search algorithm (SFS) is an effective evolutionary algorithm that performs well in exploring the search space but sometimes falls into local minima. To avoid this weakness, Lévy flight and internal feedback information are introduced in ISFS. In the experiments, simulations are conducted with the ISFS algorithm on Fibonacci sequences and real peptide sequences. Experimental results show that ISFS performs more efficiently and robustly in terms of finding the global minimum and avoiding getting stuck in local minima.
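
    The Lévy-flight ingredient mentioned above can be sketched with Mantegna's algorithm, as below; the stability parameter beta = 1.5 is a common default, not necessarily the paper's choice.

```python
# Mantegna's algorithm for Levy-stable step lengths, the heavy-tailed move
# used to escape local minima; beta = 1.5 is a common default.
import numpy as np
from math import gamma, pi, sin

def levy_step(dim, beta=1.5, rng=None):
    rng = rng or np.random.default_rng()
    sigma_u = (gamma(1 + beta) * sin(pi * beta / 2)
               / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma_u, dim)      # numerator: wide normal
    v = rng.normal(0.0, 1.0, dim)          # denominator: standard normal
    return u / np.abs(v) ** (1 / beta)     # mostly small steps, rare long jumps

print(levy_step(3, rng=np.random.default_rng(0)))
```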

  10. Prediction based active ramp metering control strategy with mobility and safety assessment

    NASA Astrophysics Data System (ADS)

    Fang, Jie; Tu, Lili

    2018-04-01

    Ramp metering is one of the most direct and efficient motorway traffic flow management measures for improving traffic conditions. However, owing to the lack of traffic-condition prediction in earlier studies, the impact of the applied RM control on traffic flow dynamics was not quantitatively evaluated. In this study, an RM control algorithm is presented that adopts the Model Predictive Control (MPC) framework to predict and assess future traffic conditions, taking both the current traffic conditions and the RM-controlled future traffic states into consideration. The designed RM control algorithm targets the optimization of network mobility and safety performance. The designed algorithm is evaluated in a field-data-based simulation. Comparing the scenario controlled by the presented algorithm with the uncontrolled scenario shows that the proposed RM control algorithm can effectively relieve congestion in the traffic network with no significant compromise in safety.
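
    A toy receding-horizon sketch of the MPC idea follows: metering rates are optimized over a short horizon against a deliberately crude one-segment density/queue model, and only the first rate is applied. All dynamics and constants are illustrative assumptions, not the paper's network model.

```python
# Toy receding-horizon ramp metering: optimize metering rates over a short
# horizon against a crude one-segment density/queue model, apply only the
# first rate. Model and constants are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

T, horizon = 10.0, 6              # control step (s), prediction horizon
rho_crit, demand = 33.5, 0.4      # critical density proxy, ramp demand (veh/s)

def rollout(rates, rho0, queue0):
    rho, q, cost = rho0, queue0, 0.0
    for r in rates:
        inflow = min(r, q / T + demand)           # cannot release more than available
        q = max(q + T * (demand - inflow), 0.0)   # ramp queue dynamics
        rho = rho + T * 0.2 * (inflow - 0.3)      # crude mainline density response
        cost += (rho - rho_crit) ** 2 + 0.1 * q   # mobility target + queue penalty
    return cost

def mpc_step(rho0, queue0):
    res = minimize(rollout, np.full(horizon, 0.3), args=(rho0, queue0),
                   bounds=[(0.0, 0.5)] * horizon, method="L-BFGS-B")
    return res.x[0]                               # receding horizon: apply first move

print(f"metering rate this cycle: {mpc_step(36.0, 5.0):.3f} veh/s")
```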

  11. False-nearest-neighbors algorithm and noise-corrupted time series

    NASA Astrophysics Data System (ADS)

    Rhodes, Carl; Morari, Manfred

    1997-05-01

    The false-nearest-neighbors (FNN) algorithm was originally developed to determine the embedding dimension for autonomous time series. For noise-free computer-generated time series, the algorithm does a good job in predicting the embedding dimension. However, the problem of predicting the embedding dimension when the time-series data are corrupted by noise was not fully examined in the original studies of the FNN algorithm. Here it is shown that with large data sets, even small amounts of noise can lead to incorrect prediction of the embedding dimension. Surprisingly, as the length of the time series analyzed by FNN grows larger, the cause of incorrect prediction becomes more pronounced. An analysis of the effect of noise on the FNN algorithm and a solution for dealing with the effects of noise are given here. Some results on the theoretically correct choice of the FNN threshold are also presented.
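
    A compact version of the FNN test itself is sketched below: embed the series in dimension d, find each point's nearest neighbor, and flag the pair as "false" if adding the (d+1)-th delay coordinate stretches it apart by more than a threshold. Rtol = 15 follows common practice; the demonstration signal is synthetic, and adding noise visibly inflates the false-neighbor fraction, as the abstract describes.

```python
# False nearest neighbors: a neighbor in dimension d is "false" if adding
# the (d+1)-th delay coordinate stretches the pair apart by more than rtol.
import numpy as np

def fnn_fraction(x, d, tau=1, rtol=15.0):
    n = len(x) - (d + 1) * tau
    emb = np.column_stack([x[i * tau:i * tau + n] for i in range(d)])
    extra = x[d * tau:d * tau + n]            # the coordinate added in dim d+1
    false = 0
    for i in range(n):
        dist = np.linalg.norm(emb - emb[i], axis=1)
        dist[i] = np.inf
        j = int(np.argmin(dist))              # nearest neighbor in dimension d
        denom = max(dist[j], 1e-12)
        if abs(extra[i] - extra[j]) / denom > rtol:
            false += 1
    return false / n

rng = np.random.default_rng(0)
t = np.arange(1500) * 0.1
clean = np.sin(t) * np.cos(0.31 * t)          # quasi-periodic test signal
noisy = clean + 0.05 * rng.normal(size=t.size)
for d in (1, 2, 3, 4):
    print(d, round(fnn_fraction(clean, d), 3), round(fnn_fraction(noisy, d), 3))
```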

  12. Protein Structure Prediction with Evolutionary Algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hart, W.E.; Krasnogor, N.; Pelta, D.A.

    1999-02-08

    Evolutionary algorithms have been successfully applied to a variety of molecular structure prediction problems. In this paper we reconsider the design of genetic algorithms that have been applied to a simple protein structure prediction problem. Our analysis considers the impact of several algorithmic factors for this problem: the conformational representation, the energy formulation and the way in which infeasible conformations are penalized. Further, we empirically evaluate the impact of these factors on a small set of polymer sequences. Our analysis leads to specific recommendations for both GAs and other heuristic methods for solving PSP on the HP model.
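
    The fitness ingredients such a GA needs on the 2D HP model can be sketched as below: decode a relative-move conformation, penalize self-overlaps (one way of handling infeasible conformations), and count non-consecutive H-H contacts. The penalty weight is an arbitrary assumption.

```python
# HP-model fitness ingredients for such a GA: decode relative moves on the
# 2D lattice, penalize self-overlaps (infeasible conformations), and count
# non-consecutive H-H contacts. The penalty weight is an arbitrary choice.
MOVES = {"F": 0, "L": 1, "R": -1}           # relative encoding of the conformation
DIRS = [(1, 0), (0, 1), (-1, 0), (0, -1)]   # E, N, W, S

def energy(seq, conf, overlap_penalty=4.0):
    heading, pos, path = 0, (0, 0), [(0, 0)]
    for move in conf:                       # decode moves into lattice coordinates
        heading = (heading + MOVES[move]) % 4
        pos = (pos[0] + DIRS[heading][0], pos[1] + DIRS[heading][1])
        path.append(pos)
    overlaps = len(path) - len(set(path))   # collisions make a conformation infeasible
    occupied = {p: i for i, p in enumerate(path)}
    contacts = 0
    for i, p in enumerate(path):
        if seq[i] != "H":
            continue
        for dx, dy in DIRS:                 # lattice neighbors of residue i
            j = occupied.get((p[0] + dx, p[1] + dy))
            if j is not None and j > i + 1 and seq[j] == "H":
                contacts += 1               # each H-H pair counted once (j > i+1)
    return -contacts + overlap_penalty * overlaps

# 10 residues, 9 moves; this fold has one H-H contact and no overlaps -> -1.0
print(energy("HPHPPHHPHH", "FLLRRFLRL"))
```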

  13. A Universal Tare Load Prediction Algorithm for Strain-Gage Balance Calibration Data Analysis

    NASA Technical Reports Server (NTRS)

    Ulbrich, N.

    2011-01-01

    An algorithm is discussed that may be used to estimate tare loads of wind tunnel strain-gage balance calibration data. The algorithm was originally developed by R. Galway of IAR/NRC Canada and has been described in the literature for the iterative analysis technique. Basic ideas of Galway's algorithm, however, are universally applicable and work for both the iterative and the non-iterative analysis technique. A recent modification of Galway's algorithm is presented that improves the convergence behavior of the tare load prediction process if it is used in combination with the non-iterative analysis technique. The modified algorithm allows an analyst to use an alternate method for the calculation of intermediate non-linear tare load estimates whenever Galway's original approach does not lead to a convergence of the tare load iterations. It is also shown in detail how Galway's algorithm may be applied to the non-iterative analysis technique. Hand load data from the calibration of a six-component force balance is used to illustrate the application of the original and modified tare load prediction method. During the analysis of the data both the iterative and the non-iterative analysis technique were applied. Overall, predicted tare loads for combinations of the two tare load prediction methods and the two balance data analysis techniques showed excellent agreement as long as the tare load iterations converged. The modified algorithm, however, appears to have an advantage over the original algorithm when absolute voltage measurements of gage outputs are processed using the non-iterative analysis technique. In these situations only the modified algorithm converged because it uses an exact solution of the intermediate non-linear tare load estimate for the tare load iteration.

  14. Prediction Of The Expected Safety Performance Of Rural Two-Lane Highways

    DOT National Transportation Integrated Search

    2000-12-01

    This report presents an algorithm for predicting the safety performance of a rural two-lane highway. The accident prediction algorithm consists of base models and accident modification factors for both roadway segments and at-grade intersections on r...

  15. Electric Power Engineering Cost Predicting Model Based on the PCA-GA-BP

    NASA Astrophysics Data System (ADS)

    Wen, Lei; Yu, Jiake; Zhao, Xin

    2017-10-01

    In this paper a hybrid prediction algorithm, the PCA-GA-BP model, is proposed. The PCA algorithm is used to reduce the correlation between indicators of the original data and to reduce the difficulty of the BP neural network's calculations in high dimensions. The BP neural network is established to estimate the cost of power transmission projects. The results show that the PCA-GA-BP algorithm can improve the prediction of electric power engineering costs.
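
    The PCA-then-network portion of the pipeline can be sketched as below with synthetic indicators; the GA stage (typically used to seed the network's weights) is omitted, and the indicator count is an assumption.

```python
# PCA to decorrelate cost indicators, then a small feed-forward network for
# the cost regression. The GA weight-initialization stage is omitted; data
# and the indicator count are synthetic assumptions.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))                            # 12 cost indicators
X[:, 6:] = X[:, :6] + 0.1 * rng.normal(size=(200, 6))     # injected collinearity
y = X[:, :3].sum(axis=1) + 0.1 * rng.normal(size=200)     # synthetic project cost

model = make_pipeline(
    StandardScaler(),
    PCA(n_components=0.95),               # keep components explaining 95% of variance
    MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0),
)
model.fit(X[:150], y[:150])
print("held-out R^2:", round(model.score(X[150:], y[150:]), 3))
```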

  16. Strategies for concurrent processing of complex algorithms in data driven architectures

    NASA Technical Reports Server (NTRS)

    Stoughton, John W.; Mielke, Roland R.; Som, Sukhamony

    1990-01-01

    The performance modeling and enhancement for periodic execution of large-grain, decision-free algorithms in data flow architectures is examined. Applications include real-time implementation of control and signal processing algorithms where performance is required to be highly predictable. The mapping of algorithms onto the specified class of data flow architectures is realized by a marked graph model called ATAMM (Algorithm To Architecture Mapping Model). Performance measures and bounds are established. Algorithm transformation techniques are identified for performance enhancement and reduction of resource (computing element) requirements. A systematic design procedure is described for generating operating conditions for predictable performance both with and without resource constraints. An ATAMM simulator is used to test and validate the performance prediction by the design procedure. Experiments on a three resource testbed provide verification of the ATAMM model and the design procedure.

  17. Strategies for concurrent processing of complex algorithms in data driven architectures

    NASA Technical Reports Server (NTRS)

    Som, Sukhamoy; Stoughton, John W.; Mielke, Roland R.

    1990-01-01

    Performance modeling and performance enhancement for periodic execution of large-grain, decision-free algorithms in data flow architectures are discussed. Applications include real-time implementation of control and signal processing algorithms where performance is required to be highly predictable. The mapping of algorithms onto the specified class of data flow architectures is realized by a marked graph model called algorithm to architecture mapping model (ATAMM). Performance measures and bounds are established. Algorithm transformation techniques are identified for performance enhancement and reduction of resource (computing element) requirements. A systematic design procedure is described for generating operating conditions for predictable performance both with and without resource constraints. An ATAMM simulator is used to test and validate the performance prediction by the design procedure. Experiments on a three resource testbed provide verification of the ATAMM model and the design procedure.

  18. Gaming Device Usage Patterns Predict Internet Gaming Disorder: Comparison across Different Gaming Device Usage Patterns.

    PubMed

    Paik, Soo-Hyun; Cho, Hyun; Chun, Ji-Won; Jeong, Jo-Eun; Kim, Dai-Jin

    2017-12-05

    Gaming behaviors have been significantly influenced by smartphones. This study was designed to explore gaming behaviors and clinical characteristics across different gaming device usage patterns and the role of the patterns in Internet gaming disorder (IGD). Respondents to an online survey regarding smartphone and online game usage were classified by different gaming device usage patterns: (1) individuals who played only computer games; (2) individuals who played computer games more than smartphone games; (3) individuals who played computer and smartphone games evenly; (4) individuals who played smartphone games more than computer games; (5) individuals who played only smartphone games. Data on demographics, gaming-related behaviors, and scales for Internet and smartphone addiction, depression, anxiety disorder, and substance use were collected. Combined users, especially those who played computer and smartphone games evenly, had a higher prevalence of IGD, depression, anxiety disorder, and substance use disorder. These subjects were more prone to develop IGD than the reference group (computer-only gamers) (B = 0.457, odds ratio = 1.579). Smartphone-only gamers had the lowest prevalence of IGD, spent the least time and money on gaming, and showed the lowest scores for Internet and smartphone addiction. Our findings suggest that gaming device usage patterns may be associated with the occurrence, course, and prognosis of IGD.

  19. Gaming Device Usage Patterns Predict Internet Gaming Disorder: Comparison across Different Gaming Device Usage Patterns

    PubMed Central

    Cho, Hyun; Chun, Ji-Won; Jeong, Jo-Eun; Kim, Dai-Jin

    2017-01-01

    Gaming behaviors have been significantly influenced by smartphones. This study was designed to explore gaming behaviors and clinical characteristics across different gaming device usage patterns and the role of the patterns in Internet gaming disorder (IGD). Respondents to an online survey regarding smartphone and online game usage were classified by different gaming device usage patterns: (1) individuals who played only computer games; (2) individuals who played computer games more than smartphone games; (3) individuals who played computer and smartphone games evenly; (4) individuals who played smartphone games more than computer games; (5) individuals who played only smartphone games. Data on demographics, gaming-related behaviors, and scales for Internet and smartphone addiction, depression, anxiety disorder, and substance use were collected. Combined users, especially those who played computer and smartphone games evenly, had a higher prevalence of IGD, depression, anxiety disorder, and substance use disorder. These subjects were more prone to develop IGD than the reference group (computer-only gamers) (B = 0.457, odds ratio = 1.579). Smartphone-only gamers had the lowest prevalence of IGD, spent the least time and money on gaming, and showed the lowest scores for Internet and smartphone addiction. Our findings suggest that gaming device usage patterns may be associated with the occurrence, course, and prognosis of IGD. PMID:29206183

  20. The effects of media usage and interpersonal contacts on the stereotyping of lesbians and gay men in China.

    PubMed

    Tu, Jia-Wei; Lee, Tien-Tsung

    2014-01-01

    Relatively little research has investigated the association of information sources and the stereotyping of homosexuals in other cultures. This study is a survey of 226 Chinese college students about their stereotypes of homosexuals and their sources of information on gays and lesbians. The stereotyping of homosexuals is predicted by the size of community, interest in knowing homosexuals, and in-person contacts. A higher level of negative stereotypes is associated with frequent usage of Chinese media.

  1. An NAFP Project: Use of Object Oriented Methodologies and Design Patterns to Refactor Software Design

    NASA Technical Reports Server (NTRS)

    Shaykhian, Gholam Ali; Baggs, Rhoda

    2007-01-01

    In the early problem-solution era of software programming, functional decompositions were mainly used to design and implement software solutions. In functional decompositions, functions and data are introduced as two separate entities during the design phase, and are followed as such in the implementation phase. Functional decompositions make use of refactoring through optimizing the algorithms, grouping similar functionalities into common reusable functions, and using abstract representations of data where possible; all of this is done during the implementation phase. This paper advocates the usage of object-oriented methodologies and design patterns as the centerpieces of refactoring software solutions. Refactoring software is a method of changing software design while explicitly preserving its external functionalities. The combined usage of object-oriented methodologies and design patterns to refactor should also reduce the overall software life-cycle cost through improved software.

  2. The Location GNSS Modules for the Components of Proteus System

    NASA Astrophysics Data System (ADS)

    Brzostowski, K.; Darakchiev, R.; Foks-Ryznar, A.; Sitek, P.

    2012-01-01

    The Proteus system - the Integrated Mobile System for Counterterrorism and Rescue Operations - is a complex innovative project. To assure the best possible localization of the mobile components of the system, many different Global Navigation Satellite System (GNSS) modules were taken into account. In order to choose the best solution, many types of tests were done. Full results and conclusions are presented in this paper. The idea of the measurements was to test modules in the GPS Standard Positioning Service (SPS) with the EGNOS system specification according to certain algorithms. The tests had to answer the question: what type of GNSS module should be used on different components with respect to the specific usage of the Proteus system? The second goal of the tests was to check the solution quality of integrated GNSS/INS (Inertial Navigation System) and its possible usage in some Proteus system components.

  3. Quantitative analysis of antibiotic usage in British sheep flocks.

    PubMed

    Davies, Peers; Remnant, John G; Green, Martin J; Gascoigne, Emily; Gibbon, Nick; Hyde, Robert; Porteous, Jack R; Schubert, Kiera; Lovatt, Fiona; Corbishley, Alexander

    2017-11-11

    The aim of this study was to examine the variation in antibiotic usage between 207 commercial sheep flocks using their veterinary practice prescribing records. Mean and median prescribed mass per population corrected unit (mg/PCU) were 11.38 and 5.95, respectively, and closely correlated with the animal defined daily dose (ADDD) of 1.47 (mean) and 0.74 (median) (R² = 0.84, P < 0.001). This is low in comparison with the suggested target (an average across all the UK livestock sectors) of 50 mg/PCU. In total, 80 per cent of all antibiotic usage occurred in the 39 per cent of flocks where per animal usage was greater than 9.0 mg/PCU. Parenteral antibiotics, principally oxytetracycline, represented 82 per cent of the total prescribed mass, and 65.5 per cent of antibiotics (mg/PCU) were prescribed for the treatment of lameness. Oral antibiotics were prescribed to 49 per cent of flocks, covering 64 per cent of the predicted lamb crop per farm. Lowland flocks were prescribed significantly more antibiotics than hill flocks. Variance partitioning apportioned 79 per cent of variation in total antibiotic usage (mg/PCU) to the farm level and 21 per cent to the veterinary practice, indicating that veterinary practices have a substantial impact on overall antimicrobial usage. Reducing antibiotic usage in the sheep sector should be possible with better understanding of the drivers of high usage in individual flocks and of veterinary prescribing practices. © British Veterinary Association (unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  4. Factors Predicting Online University Students' Use of a Mobile Learning Management System (m-LMS)

    ERIC Educational Resources Information Center

    Joo, Young Ju; Kim, Nari; Kim, Nam Hee

    2016-01-01

    This study analyzed the relationships among factors predicting online university students' actual usage of a mobile learning management system (m-LMS) through a structural model. Data from 222 students in a Korean online university were collected to investigate integrated relationships among their perceived ease of use, perceived usefulness,…

  5. The Influence of Differential "Power" and "Solidarity" upon the Predictability of Behavior: A Peruvian Example

    ERIC Educational Resources Information Center

    Moles, Jerry A.

    1978-01-01

    The usage of Spanish address terms is investigated to explore the predictability and variability in the behavior of non-Indians and Quechua Indians in Peru. The behavior variations are related to differential "power" and "solidarity" between the two ethnic groups and differential "solidarity" within the Quecha group.…

  6. Predicting Knowledge Workers' Participation in Voluntary Learning with Employee Characteristics and Online Learning Tools

    ERIC Educational Resources Information Center

    Hicks, Catherine

    2018-01-01

    Purpose: This paper aims to explore predicting employee learning activity via employee characteristics and usage for two online learning tools. Design/methodology/approach: Statistical analysis focused on observational data collected from user logs. Data are analyzed via regression models. Findings: Findings are presented for over 40,000…

  7. Accuracy of algorithms to predict accessory pathway location in children with Wolff-Parkinson-White syndrome.

    PubMed

    Wren, Christopher; Vogel, Melanie; Lord, Stephen; Abrams, Dominic; Bourke, John; Rees, Philip; Rosenthal, Eric

    2012-02-01

    The aim of this study was to examine the accuracy in predicting pathway location in children with Wolff-Parkinson-White syndrome for each of seven published algorithms. ECGs from 100 consecutive children with Wolff-Parkinson-White syndrome undergoing electrophysiological study were analysed by six investigators using seven published algorithms, six of which had been developed in adult patients. Accuracy and concordance of predictions were adjusted for the number of pathway locations. Accessory pathways were left-sided in 49, septal in 20 and right-sided in 31 children. Overall accuracy of prediction was 30-49% for the exact location and 61-68% including adjacent locations. Concordance between investigators varied between 41% and 86%. No algorithm was better at predicting septal pathways (accuracy 5-35%, improving to 40-78% including adjacent locations), but one was significantly worse. Predictive accuracy was 24-53% for the exact location of right-sided pathways (50-71% including adjacent locations) and 32-55% for the exact location of left-sided pathways (58-73% including adjacent locations). All algorithms were less accurate in our hands than in other authors' own assessment. None performed well in identifying midseptal or right anteroseptal accessory pathway locations.

  8. Performance metrics and variance partitioning reveal sources of uncertainty in species distribution models

    USGS Publications Warehouse

    Watling, James I.; Brandt, Laura A.; Bucklin, David N.; Fujisaki, Ikuko; Mazzotti, Frank J.; Romañach, Stephanie; Speroterra, Carolina

    2015-01-01

    Species distribution models (SDMs) are widely used in basic and applied ecology, making it important to understand sources and magnitudes of uncertainty in SDM performance and predictions. We analyzed SDM performance and partitioned variance among prediction maps for 15 rare vertebrate species in the southeastern USA using all possible combinations of seven potential sources of uncertainty in SDMs: algorithms, climate datasets, model domain, species presences, variable collinearity, CO2 emissions scenarios, and general circulation models. The choice of modeling algorithm was the greatest source of uncertainty in SDM performance and prediction maps, with some additional variation in performance associated with the comprehensiveness of the species presences used for modeling. Other sources of uncertainty that have received attention in the SDM literature such as variable collinearity and model domain contributed little to differences in SDM performance or predictions in this study. Predictions from different algorithms tended to be more variable at northern range margins for species with more northern distributions, which may complicate conservation planning at the leading edge of species' geographic ranges. The clear message emerging from this work is that researchers should use multiple algorithms for modeling rather than relying on predictions from a single algorithm, invest resources in compiling a comprehensive set of species presences, and explicitly evaluate uncertainty in SDM predictions at leading range margins.

  9. Controlling for Frailty in Pharmacoepidemiologic Studies of Older Adults: Validation of an Existing Medicare Claims-based Algorithm.

    PubMed

    Cuthbertson, Carmen C; Kucharska-Newton, Anna; Faurot, Keturah R; Stürmer, Til; Jonsson Funk, Michele; Palta, Priya; Windham, B Gwen; Thai, Sydney; Lund, Jennifer L

    2018-07-01

    Frailty is a geriatric syndrome characterized by weakness and weight loss and is associated with adverse health outcomes. It is often an unmeasured confounder in pharmacoepidemiologic and comparative effectiveness studies using administrative claims data. Among the Atherosclerosis Risk in Communities (ARIC) Study Visit 5 participants (2011-2013; n = 3,146), we conducted a validation study to compare a Medicare claims-based algorithm of dependency in activities of daily living (or dependency) developed as a proxy for frailty with a reference standard measure of phenotypic frailty. We applied the algorithm to the ARIC participants' claims data to generate a predicted probability of dependency. Using the claims-based algorithm, we estimated the C-statistic for predicting phenotypic frailty. We further categorized participants by their predicted probability of dependency (<5%, 5% to <20%, and ≥20%) and estimated associations with difficulties in physical abilities, falls, and mortality. The claims-based algorithm showed good discrimination of phenotypic frailty (C-statistic = 0.71; 95% confidence interval [CI] = 0.67, 0.74). Participants classified with a high predicted probability of dependency (≥20%) had higher prevalence of falls and difficulty in physical ability, and a greater risk of 1-year all-cause mortality (hazard ratio = 5.7 [95% CI = 2.5, 13]) than participants classified with a low predicted probability (<5%). Sensitivity and specificity varied across predicted probability of dependency thresholds. The Medicare claims-based algorithm showed good discrimination of phenotypic frailty and high predictive ability with adverse health outcomes. This algorithm can be used in future Medicare claims analyses to reduce confounding by frailty and improve study validity.
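
    The validation arithmetic reported here (a C-statistic plus sensitivity/specificity at the <5%, 5% to <20%, and ≥20% probability bands) can be sketched on simulated data as below; the claims-based model itself is not reproduced.

```python
# C-statistic plus sensitivity/specificity at the probability bands used
# above (<5%, 5-20%, >=20%). Frailty labels and predicted probabilities are
# simulated; the claims-based model is not reproduced.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
frail = rng.random(3146) < 0.07                                # reference standard
p_dep = np.clip(rng.beta(1.2, 12, 3146) + 0.15 * frail, 0, 1)  # predicted dependency

print("C-statistic:", round(roc_auc_score(frail, p_dep), 2))
for cut in (0.05, 0.20):
    pred = p_dep >= cut
    sens = (pred & frail).sum() / frail.sum()
    spec = (~pred & ~frail).sum() / (~frail).sum()
    print(f"cut {cut:.0%}: sensitivity {sens:.2f}, specificity {spec:.2f}")
```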

  10. Training the Recurrent neural network by the Fuzzy Min-Max algorithm for fault prediction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zemouri, Ryad; Racoceanu, Daniel; Zerhouni, Noureddine

    2009-03-05

    In this paper, we present a training technique for a Recurrent Radial Basis Function (RRBF) neural network for fault prediction. We use the Fuzzy Min-Max technique to initialize the k centers of the RRBF neural network. The k-means algorithm is then applied to calculate the centers that minimize the mean square error of the prediction task. The performance of the k-means algorithm is then boosted by the Fuzzy Min-Max technique.
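
    A sketch of the center-fitting step follows: k-means supplies RBF centers, after which the output weights are solved by least squares. k-means++ seeding stands in for the Fuzzy Min-Max initialization here, and the kernel width and number of centers are guesses.

```python
# k-means refinement of RBF centers, then least-squares output weights.
# k-means++ seeding stands in for the Fuzzy Min-Max initialization; the
# width and number of centers are guesses.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
t = np.linspace(0, 6, 300).reshape(-1, 1)
y = np.sin(2 * t).ravel() + 0.05 * rng.normal(size=300)    # signal to predict

k, width = 12, 0.5
centers = KMeans(n_clusters=k, n_init=10, random_state=0).fit(t).cluster_centers_
Phi = np.exp(-((t - centers.T) ** 2) / (2 * width ** 2))   # 300 x k RBF design matrix
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)                # output-layer weights
print("training RMSE:", round(float(np.sqrt(np.mean((Phi @ w - y) ** 2))), 4))
```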

  11. Methods for predicting properties and tailoring salt solutions for industrial processes

    NASA Technical Reports Server (NTRS)

    Ally, Moonis R.

    1993-01-01

    An algorithm developed at Oak Ridge National Laboratory accurately and quickly predicts thermodynamic properties of concentrated aqueous salt solutions. This algorithm is much simpler and much faster than other modeling schemes and is unique because it can predict solution behavior at very high concentrations and under varying conditions. Typical industrial applications of this algorithm would be in manufacture of inorganic chemicals by crystallization, thermal storage, refrigeration and cooling, extraction of metals, emissions controls, etc.

  12. Application of Avco data analysis and prediction techniques (ADAPT) to prediction of sunspot activity

    NASA Technical Reports Server (NTRS)

    Hunter, H. E.; Amato, R. A.

    1972-01-01

    The results are presented of the application of Avco Data Analysis and Prediction Techniques (ADAPT) to the derivation of new algorithms for the prediction of future sunspot activity. The ADAPT-derived algorithms show a factor of 2 to 3 reduction in the expected 2-sigma errors in the estimates of the 81-day running average of the Zurich sunspot numbers. The report presents: (1) the best estimates for sunspot cycles 20 and 21, (2) a comparison of the ADAPT performance with conventional techniques, and (3) specific approaches to further reduction in the errors of estimated sunspot activity and to recovery of earlier sunspot historical data. The ADAPT programs are used both to derive regression algorithms for prediction of the entire 11-year sunspot cycle from the preceding two cycles and to derive extrapolation algorithms for extrapolating a given sunspot cycle based on any available portion of the cycle.

  13. Prediction of monthly rainfall in Victoria, Australia: Clusterwise linear regression approach

    NASA Astrophysics Data System (ADS)

    Bagirov, Adil M.; Mahmood, Arshad; Barton, Andrew

    2017-05-01

    This paper develops the Clusterwise Linear Regression (CLR) technique for prediction of monthly rainfall. The CLR is a combination of clustering and regression techniques. It is formulated as an optimization problem and an incremental algorithm is designed to solve it. The algorithm is applied to predict monthly rainfall in Victoria, Australia using rainfall data with five input meteorological variables over the period of 1889-2014 from eight geographically diverse weather stations. The prediction performance of the CLR method is evaluated by comparing observed and predicted rainfall values using four measures of forecast accuracy. The proposed method is also compared with the CLR using the maximum likelihood framework by the expectation-maximization algorithm, multiple linear regression, artificial neural networks and the support vector machines for regression models using computational results. The results demonstrate that the proposed algorithm outperforms other methods in most locations.
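
    A minimal alternating version of clusterwise linear regression is sketched below on synthetic two-regime data: assign each observation to the line that currently fits it best, refit each line on its members, and repeat. The paper's incremental algorithm and meteorological inputs are not reproduced.

```python
# Alternating clusterwise linear regression: assign each point to the line
# that fits it best, refit each line on its members, repeat. Data are
# synthetic two-regime observations, not the paper's rainfall series.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, (400, 1))
truth = np.where(x[:, 0] < 0, 2 * x[:, 0] + 1, -x[:, 0] + 4)   # two hidden regimes
y = truth + 0.2 * rng.normal(size=400)
X = np.column_stack([np.ones(400), x])                         # intercept + slope

K = 2
coefs = rng.normal(size=(K, 2))
for _ in range(50):
    sq_err = np.stack([(y - X @ c) ** 2 for c in coefs])       # (K, n) residuals
    assign = sq_err.argmin(axis=0)                             # best line per point
    for k in range(K):
        members = assign == k
        if members.sum() >= 2:
            coefs[k], *_ = np.linalg.lstsq(X[members], y[members], rcond=None)
print(np.round(coefs, 2))   # should approach [1, 2] and [4, -1]
```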

  14. A novel toolpath force prediction algorithm using CAM volumetric data for optimizing robotic arthroplasty.

    PubMed

    Kianmajd, Babak; Carter, David; Soshi, Masakazu

    2016-10-01

    Robotic total hip arthroplasty is a procedure in which milling operations are performed on the femur to remove material for the insertion of a prosthetic implant. The robot performs the milling operation by following a sequential list of tool motions, also known as a toolpath, generated by a computer-aided manufacturing (CAM) software. The purpose of this paper is to explain a new toolpath force prediction algorithm that predicts cutting forces, which results in improving the quality and safety of surgical systems. With a custom macro developed in the CAM system's native application programming interface, cutting contact patch volume was extracted from CAM simulations. A time domain cutting force model was then developed through the use of a cutting force prediction algorithm. The second portion validated the algorithm by machining a hip canal in simulated bone using a CNC machine. Average cutting forces were measured during machining using a dynamometer and compared to the values predicted from CAM simulation data using the proposed method. The results showed the predicted forces matched the measured forces in both magnitude and overall pattern shape. However, due to inconsistent motion control, the time duration of the forces was slightly distorted. Nevertheless, the algorithm effectively predicted the forces throughout an entire hip canal procedure. This method provides a fast and easy technique for predicting cutting forces during orthopedic milling by utilizing data within a CAM software.

  15. Real-time image annotation by manifold-based biased Fisher discriminant analysis

    NASA Astrophysics Data System (ADS)

    Ji, Rongrong; Yao, Hongxun; Wang, Jicheng; Sun, Xiaoshuai; Liu, Xianming

    2008-01-01

    Automatic Linguistic Annotation is a promising solution to bridge the semantic gap in content-based image retrieval. However, two crucial issues are not well addressed in state-of-the-art annotation algorithms: 1. the Small Sample Size (3S) problem in keyword classifier/model learning; 2. most annotation algorithms cannot be extended to real-time online usage due to their low computational efficiency. This paper presents a novel Manifold-based Biased Fisher Discriminant Analysis (MBFDA) algorithm to address these two issues by transductive semantic learning and keyword filtering. To address the 3S problem, Co-Training based Manifold learning is adopted for keyword model construction. To achieve real-time annotation, a Biased Fisher Discriminant Analysis (BFDA) based semantic feature reduction algorithm is presented for keyword confidence discrimination and semantic feature reduction. Different from all existing annotation methods, MBFDA views image annotation from a novel Eigen semantic feature (which corresponds to keywords) selection aspect. As demonstrated in experiments, our manifold-based biased Fisher discriminant analysis annotation algorithm outperforms classical and state-of-the-art annotation methods (1. K-NN Expansion; 2. One-to-All SVM; 3. PWC-SVM) in both computational time and annotation accuracy by a large margin.

  16. Accuracy and speed in computing the Chebyshev collocation derivative

    NASA Technical Reports Server (NTRS)

    Don, Wai-Sun; Solomonoff, Alex

    1991-01-01

    We studied several algorithms for computing the Chebyshev spectral derivative and compared their roundoff error. For a large number of collocation points, the elements of the Chebyshev differentiation matrix, if constructed in the usual way, are not computed accurately. A subtle cause is found to account for the poor accuracy when computing the derivative by the matrix-vector multiplication method. Methods for accurately computing the elements of the matrix are presented, and we find that if the entries of the matrix are computed accurately, the roundoff error of the matrix-vector multiplication is as small as that of the transform-recursion algorithm. Results of CPU time usage are shown for several different algorithms for computing the derivative by the Chebyshev collocation method for a wide variety of two-dimensional grid sizes on both an IBM and a Cray 2 computer. We found that which algorithm is fastest on a particular machine depends not only on the grid size, but also on small details of the computer hardware. For most practical grid sizes used in computation, the even-odd decomposition algorithm is found to be faster than the transform-recursion method.
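
    One standard remedy of the kind discussed here is sketched below: build the Gauss-Lobatto differentiation matrix and set the diagonal by the "negative sum trick" (each row must sum to zero because the derivative of a constant vanishes), which keeps roundoff from polluting the diagonal entries.

```python
# Chebyshev differentiation matrix on Gauss-Lobatto points with the
# diagonal set by the negative sum trick: rows must sum to zero because
# the derivative of a constant is zero, and enforcing this suppresses
# roundoff growth in the diagonal entries.
import numpy as np

def cheb_D(N):
    j = np.arange(N + 1)
    x = np.cos(np.pi * j / N)                       # Gauss-Lobatto points
    c = np.ones(N + 1); c[0] = c[-1] = 2.0
    c *= (-1.0) ** j
    dX = x[:, None] - x[None, :]
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1)) # off-diagonal entries
    np.fill_diagonal(D, 0.0)
    np.fill_diagonal(D, -D.sum(axis=1))             # negative sum trick
    return D, x

D, x = cheb_D(64)
err = np.max(np.abs(D @ np.exp(x) - np.exp(x)))     # d/dx e^x = e^x
print(f"max pointwise derivative error: {err:.2e}")
```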

  17. A comparison of common programming languages used in bioinformatics.

    PubMed

    Fourment, Mathieu; Gillings, Michael R

    2008-02-05

    The performance of different programming languages has previously been benchmarked using abstract mathematical algorithms, but not using standard bioinformatics algorithms. We compared the memory usage and speed of execution for three standard bioinformatics methods, implemented in programs using one of six different programming languages. Programs for the Sellers algorithm, the Neighbor-Joining tree construction algorithm and an algorithm for parsing BLAST file outputs were implemented in C, C++, C#, Java, Perl and Python. Implementations in C and C++ were fastest and used the least memory. Programs in these languages generally contained more lines of code. Java and C# appeared to be a compromise between the flexibility of Perl and Python and the fast performance of C and C++. The relative performance of the tested languages did not change from Windows to Linux and no clear evidence of a faster operating system was found. Source code and additional information are available from http://www.bioinformatics.org/benchmark/. This benchmark provides a comparison of six commonly used programming languages under two different operating systems. The overall comparison shows that a developer should choose an appropriate language carefully, taking into account the performance expected and the library availability for each language.

  18. 3D Data Acquisition Based on OpenCV for Close-Range Photogrammetry Applications

    NASA Astrophysics Data System (ADS)

    Jurjević, L.; Gašparović, M.

    2017-05-01

    Development of technology in the area of cameras, computers and algorithms for 3D reconstruction of objects from images has resulted in the increased popularity of photogrammetry. Algorithms for 3D model reconstruction are so advanced that almost anyone can make a 3D model of a photographed object. The main goal of this paper is to examine the possibility of obtaining 3D data for the purposes of close-range photogrammetry applications, based on open-source technologies. All steps of obtaining a 3D point cloud are covered in this paper. Special attention is given to camera calibration, for which a two-step calibration process is used. Both the presented algorithm and the accuracy of the point cloud are tested by calculating the spatial difference between reference and produced point clouds. During algorithm testing, the robustness and swiftness of obtaining 3D data were noted, and usage of this and similar algorithms certainly has a lot of potential in real-time applications. For this reason, this research can find application in architecture, spatial planning, protection of cultural heritage, forensics, mechanical engineering, traffic management, medicine and other sciences.
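
    The calibration step underpinning such open-source pipelines can be sketched with OpenCV's standard chessboard routine, as below; the image folder and board geometry are placeholders, and the paper's own two-step refinement is not reproduced.

```python
# Standard OpenCV chessboard calibration: detect inner corners in several
# views, then solve for intrinsics and distortion. Folder name and board
# geometry are placeholders.
import glob
import numpy as np
import cv2

board = (9, 6)                                        # inner corners per row/column
objp = np.zeros((board[0] * board[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2)  # board plane, z = 0

obj_pts, img_pts, size = [], [], None
for path in glob.glob("calib_images/*.jpg"):          # hypothetical image folder
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, board)
    if found:
        criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3)
        corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)
        obj_pts.append(objp); img_pts.append(corners); size = gray.shape[::-1]

rms, K, dist, _, _ = cv2.calibrateCamera(obj_pts, img_pts, size, None, None)
print("reprojection RMS (px):", rms); print("intrinsic matrix:\n", K)
```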

  19. Chaotic Traversal (CHAT): Very Large Graphs Traversal Using Chaotic Dynamics

    NASA Astrophysics Data System (ADS)

    Changaival, Boonyarit; Rosalie, Martin; Danoy, Grégoire; Lavangnananda, Kittichai; Bouvry, Pascal

    2017-12-01

    Graph traversal algorithms find applications in various fields such as routing problems, natural language processing or even database querying. Exploration can be considered a first stepping stone to knowledge extraction from a graph, which is now a popular topic. Classical solutions such as Breadth First Search (BFS) and Depth First Search (DFS) require huge amounts of memory for exploring very large graphs. In this research, we present a novel memoryless graph traversal algorithm, Chaotic Traversal (CHAT), which integrates chaotic dynamics to traverse large unknown graphs via the Lozi map and the Rössler system. To compare the effects of various dynamics on our algorithm, we present an original way to explore a parameter space using a bifurcation diagram with respect to the topological structure of attractors. The resulting algorithm is efficient and undemanding of resources, and is therefore very suitable for partial traversal of very large and/or unknown environment graphs. CHAT performance using the Lozi map is shown to be superior to that of the commonly known Random Walk, in terms of number of nodes visited (coverage percentage) and computation time, where the environment is unknown and memory usage is restricted.

  20. A comparison of common programming languages used in bioinformatics

    PubMed Central

    Fourment, Mathieu; Gillings, Michael R

    2008-01-01

    Background The performance of different programming languages has previously been benchmarked using abstract mathematical algorithms, but not using standard bioinformatics algorithms. We compared the memory usage and speed of execution for three standard bioinformatics methods, implemented in programs using one of six different programming languages. Programs for the Sellers algorithm, the Neighbor-Joining tree construction algorithm and an algorithm for parsing BLAST file outputs were implemented in C, C++, C#, Java, Perl and Python. Results Implementations in C and C++ were fastest and used the least memory. Programs in these languages generally contained more lines of code. Java and C# appeared to be a compromise between the flexibility of Perl and Python and the fast performance of C and C++. The relative performance of the tested languages did not change from Windows to Linux and no clear evidence of a faster operating system was found. Source code and additional information are available from Conclusion This benchmark provides a comparison of six commonly used programming languages under two different operating systems. The overall comparison shows that a developer should choose an appropriate language carefully, taking into account the performance expected and the library availability for each language. PMID:18251993

  1. Ant Lion Optimization algorithm for kidney exchanges.

    PubMed

    Hamouda, Eslam; El-Metwally, Sara; Tarek, Mayada

    2018-01-01

    Kidney exchange programs bring new insights to the field of organ transplantation. They make it easier to perform, on a large scale, the previously disallowed transplants between incompatible patient-donor pairs. Mathematically, kidney exchange is an optimization problem over the number of possible exchanges among the incompatible pairs in a given pool. The optimization modeling should also consider the expected quality-adjusted life of transplant candidates and the shortage of computational and operational hospital resources. In this article, we introduce a bio-inspired stochastic Ant Lion Optimization (ALO) algorithm to the kidney exchange space to maximize the number of feasible cycles and chains among the pool pairs. The Ant Lion Optimizer-based program achieves kidney exchange results comparable to those of deterministic approaches like integer programming. ALO also outperforms other stochastic methods such as the Genetic Algorithm in terms of efficient usage of computational resources and the quantity of resulting exchanges. The Ant Lion Optimization algorithm can be adopted easily for online exchanges and the integration of weights for hard-to-match patients, which will improve future decisions of kidney exchange programs. A reference implementation of the ALO algorithm for kidney exchanges is written in MATLAB and is GPL licensed. It is available as free open-source software from: https://github.com/SaraEl-Metwally/ALO_algorithm_for_Kidney_Exchanges.
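
    The underlying combinatorial objective can be sketched without the metaheuristic: enumerate short cycles in a directed compatibility graph and greedily select vertex-disjoint ones. The graph below is random, the cycle cap of three reflects common surgical practice, and quality-adjusted weights are omitted.

```python
# The combinatorial core: enumerate 2- and 3-cycles in a directed
# compatibility matrix and greedily pick vertex-disjoint ones. The graph is
# random and quality-adjusted weights are omitted.
import numpy as np

rng = np.random.default_rng(1)
n = 30
A = rng.random((n, n)) < 0.12            # A[i, j]: donor of pair i fits patient j
np.fill_diagonal(A, False)

cycles = [[i, j] for i in range(n) for j in range(i + 1, n) if A[i, j] and A[j, i]]
cycles += [[i, j, k] for i in range(n) for j in range(n) for k in range(n)
           if i < j and i < k and j != k and A[i, j] and A[j, k] and A[k, i]]

cycles.sort(key=len, reverse=True)       # prefer cycles that transplant more pairs
used, chosen = set(), []
for cyc in cycles:                       # greedy vertex-disjoint selection
    if used.isdisjoint(cyc):
        chosen.append(cyc)
        used.update(cyc)
print(f"{len(chosen)} disjoint cycles covering {len(used)} of {n} pairs")
```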

  2. Comparing Binaural Pre-processing Strategies I: Instrumental Evaluation.

    PubMed

    Baumgärtel, Regina M; Krawczyk-Becker, Martin; Marquardt, Daniel; Völker, Christoph; Hu, Hongmei; Herzke, Tobias; Coleman, Graham; Adiloğlu, Kamil; Ernst, Stephan M A; Gerkmann, Timo; Doclo, Simon; Kollmeier, Birger; Hohmann, Volker; Dietz, Mathias

    2015-12-30

    In a collaborative research project, several monaural and binaural noise reduction algorithms have been comprehensively evaluated. In this article, eight selected noise reduction algorithms were assessed using instrumental measures, with a focus on the instrumental evaluation of speech intelligibility. Four distinct, reverberant scenarios were created to reflect everyday listening situations: a stationary speech-shaped noise, a multitalker babble noise, a single interfering talker, and a realistic cafeteria noise. Three instrumental measures were employed to assess predicted speech intelligibility and predicted sound quality: the intelligibility-weighted signal-to-noise ratio, the short-time objective intelligibility measure, and the perceptual evaluation of speech quality. The results show substantial improvements in predicted speech intelligibility as well as sound quality for the proposed algorithms. The evaluated coherence-based noise reduction algorithm was able to provide improvements in predicted audio signal quality. For the tested single-channel noise reduction algorithm, improvements in intelligibility-weighted signal-to-noise ratio were observed in all but the nonstationary cafeteria ambient noise scenario. Binaural minimum variance distortionless response beamforming algorithms performed particularly well in all noise scenarios. © The Author(s) 2015.

  3. Comparing Binaural Pre-processing Strategies I

    PubMed Central

    Krawczyk-Becker, Martin; Marquardt, Daniel; Völker, Christoph; Hu, Hongmei; Herzke, Tobias; Coleman, Graham; Adiloğlu, Kamil; Ernst, Stephan M. A.; Gerkmann, Timo; Doclo, Simon; Kollmeier, Birger; Hohmann, Volker; Dietz, Mathias

    2015-01-01

    In a collaborative research project, several monaural and binaural noise reduction algorithms have been comprehensively evaluated. In this article, eight selected noise reduction algorithms were assessed using instrumental measures, with a focus on the instrumental evaluation of speech intelligibility. Four distinct, reverberant scenarios were created to reflect everyday listening situations: a stationary speech-shaped noise, a multitalker babble noise, a single interfering talker, and a realistic cafeteria noise. Three instrumental measures were employed to assess predicted speech intelligibility and predicted sound quality: the intelligibility-weighted signal-to-noise ratio, the short-time objective intelligibility measure, and the perceptual evaluation of speech quality. The results show substantial improvements in predicted speech intelligibility as well as sound quality for the proposed algorithms. The evaluated coherence-based noise reduction algorithm was able to provide improvements in predicted audio signal quality. For the tested single-channel noise reduction algorithm, improvements in intelligibility-weighted signal-to-noise ratio were observed in all but the nonstationary cafeteria ambient noise scenario. Binaural minimum variance distortionless response beamforming algorithms performed particularly well in all noise scenarios. PMID:26721920

  4. Prediction of dynamical systems by symbolic regression

    NASA Astrophysics Data System (ADS)

    Quade, Markus; Abel, Markus; Shafi, Kamran; Niven, Robert K.; Noack, Bernd R.

    2016-07-01

    We study the modeling and prediction of dynamical systems based on conventional models derived from measurements. Such algorithms are highly desirable in situations where the underlying dynamics are hard to model from physical principles or simplified models need to be found. We focus on symbolic regression methods as a part of machine learning. These algorithms are capable of learning an analytically tractable model from data, a highly valuable property. Symbolic regression methods can be considered as generalized regression methods. We investigate two particular algorithms, the so-called fast function extraction which is a generalized linear regression algorithm, and genetic programming which is a very general method. Both are able to combine functions in a certain way such that a good model for the prediction of the temporal evolution of a dynamical system can be identified. We illustrate the algorithms by finding a prediction for the evolution of a harmonic oscillator based on measurements, by detecting an arriving front in an excitable system, and as a real-world application, the prediction of solar power production based on energy production observations at a given site together with the weather forecast.
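
    The "fast function extraction" flavor, a generalized linear regression over a library of candidate basis functions, can be sketched as below for the harmonic-oscillator example; the basis list is an assumption, and the signal is kept noise-free because double numerical differentiation amplifies noise badly.

```python
# FFX-style sketch: a library of candidate basis functions, then sparse
# linear regression picks a readable model. The signal is a noise-free
# harmonic oscillator because double numerical differentiation amplifies
# noise badly; the basis list is an assumption.
import numpy as np
from sklearn.linear_model import Lasso

t = np.linspace(0, 10, 400)
x = np.cos(2 * t)                        # oscillator trace, d2x/dt2 = -4x
dx = np.gradient(x, t)
d2x = np.gradient(dx, t)                 # target: the acceleration

names = ["x", "dx", "x*dx", "dx^2", "1"]
lib = np.column_stack([x, dx, x * dx, dx**2, np.ones_like(x)])

fit = Lasso(alpha=0.05, max_iter=50000).fit(lib, d2x)
model = [f"{c:+.2f}*{nm}" for c, nm in zip(fit.coef_, names) if abs(c) > 1e-2]
print("d2x/dt2 ~", " ".join(model))      # dominant term should be about -4*x
```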

  5. Code-based Diagnostic Algorithms for Idiopathic Pulmonary Fibrosis. Case Validation and Improvement.

    PubMed

    Ley, Brett; Urbania, Thomas; Husson, Gail; Vittinghoff, Eric; Brush, David R; Eisner, Mark D; Iribarren, Carlos; Collard, Harold R

    2017-06-01

    Population-based studies of idiopathic pulmonary fibrosis (IPF) in the United States have been limited by reliance on diagnostic code-based algorithms that lack clinical validation. To validate a well-accepted International Classification of Diseases, Ninth Revision, code-based algorithm for IPF using patient-level information and to develop a modified algorithm for IPF with enhanced predictive value. The traditional IPF algorithm was used to identify potential cases of IPF in the Kaiser Permanente Northern California adult population from 2000 to 2014. Incidence and prevalence were determined overall and by age, sex, and race/ethnicity. A validation subset of cases (n = 150) underwent expert medical record and chest computed tomography review. A modified IPF algorithm was then derived and validated to optimize positive predictive value. From 2000 to 2014, the traditional IPF algorithm identified 2,608 cases among 5,389,627 at-risk adults in the Kaiser Permanente Northern California population. Annual incidence was 6.8/100,000 person-years (95% confidence interval [CI], 6.1-7.7) and was higher in patients with older age, male sex, and white race. The positive predictive value of the IPF algorithm was only 42.2% (95% CI, 30.6 to 54.6%); sensitivity was 55.6% (95% CI, 21.2 to 86.3%). The corrected incidence was estimated at 5.6/100,000 person-years (95% CI, 2.6-10.3). A modified IPF algorithm had improved positive predictive value but reduced sensitivity compared with the traditional algorithm. A well-accepted International Classification of Diseases, Ninth Revision, code-based IPF algorithm performs poorly, falsely classifying many non-IPF cases as IPF and missing a substantial proportion of IPF cases. A modification of the IPF algorithm may be useful for future population-based studies of IPF.

  6. Machine learning techniques for energy optimization in mobile embedded systems

    NASA Astrophysics Data System (ADS)

    Donohoo, Brad Kyoshi

    Mobile smartphones and other portable battery operated embedded systems (PDAs, tablets) are pervasive computing devices that have emerged in recent years as essential instruments for communication, business, and social interactions. While performance, capabilities, and design are all important considerations when purchasing a mobile device, a long battery lifetime is one of the most desirable attributes. Battery technology and capacity has improved over the years, but it still cannot keep pace with the power consumption demands of today's mobile devices. This key limiter has led to a strong research emphasis on extending battery lifetime by minimizing energy consumption, primarily using software optimizations. This thesis presents two strategies that attempt to optimize mobile device energy consumption with negligible impact on user perception and quality of service (QoS). The first strategy proposes an application and user interaction aware middleware framework that takes advantage of user idle time between interaction events of the foreground application to optimize CPU and screen backlight energy consumption. The framework dynamically classifies mobile device applications based on their received interaction patterns, then invokes a number of different power management algorithms to adjust processor frequency and screen backlight levels accordingly. The second strategy proposes the usage of machine learning techniques to learn a user's mobile device usage pattern pertaining to spatiotemporal and device contexts, and then predict energy-optimal data and location interface configurations. By learning where and when a mobile device user uses certain power-hungry interfaces (3G, WiFi, and GPS), the techniques, which include variants of linear discriminant analysis, linear logistic regression, non-linear logistic regression, and k-nearest neighbor, are able to dynamically turn off unnecessary interfaces at runtime in order to save energy.
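
    The context-to-configuration idea can be sketched with one of the named techniques, k-nearest neighbor, as below; the coordinates, hours and interface labels are all fabricated for illustration.

```python
# k-nearest-neighbor mapping from (hour, location) context to an
# energy-optimal interface configuration. All context data and labels are
# fabricated for illustration.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
n = 500
hour = rng.integers(0, 24, n)
lat = np.where(hour < 9, 40.01, 40.05) + 0.002 * rng.normal(size=n)   # home vs. office
lon = np.where(hour < 9, -105.27, -105.24) + 0.002 * rng.normal(size=n)
config = np.select([hour < 9, hour < 18], [0, 1], default=2)
# 0: WiFi only (home), 1: WiFi+GPS (work/commute), 2: all radios off (night)

X = np.column_stack([hour, lat, lon])
knn = KNeighborsClassifier(n_neighbors=5).fit(X[:400], config[:400])
print("held-out accuracy:", knn.score(X[400:], config[400:]))
```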

  7. A range-based predictive localization algorithm for WSID networks

    NASA Astrophysics Data System (ADS)

    Liu, Yuan; Chen, Junjie; Li, Gang

    2017-11-01

    Most studies on localization algorithms are conducted on the sensor networks with densely distributed nodes. However, the non-localizable problems are prone to occur in the network with sparsely distributed sensor nodes. To solve this problem, a range-based predictive localization algorithm (RPLA) is proposed in this paper for the wireless sensor networks syncretizing the RFID (WSID) networks. The Gaussian mixture model is established to predict the trajectory of a mobile target. Then, the received signal strength indication is used to reduce the residence area of the target location based on the approximate point-in-triangulation test algorithm. In addition, collaborative localization schemes are introduced to locate the target in the non-localizable situations. Simulation results verify that the RPLA achieves accurate localization for the network with sparsely distributed sensor nodes. The localization accuracy of the RPLA is 48.7% higher than that of the APIT algorithm, 16.8% higher than that of the single Gaussian model-based algorithm and 10.5% higher than that of the Kalman filtering-based algorithm.
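
    The trajectory-prediction ingredient can be loosely sketched as below: fit a Gaussian mixture to observed displacement vectors and take the mixture mean as the expected next move. The APIT-based residence-area reduction and the collaborative schemes from the paper are not modeled, and the displacements are synthetic.

```python
# Fit a Gaussian mixture to observed displacement vectors; the weighted
# mixture mean serves as the expected next move. Displacements are
# synthetic; APIT and collaborative localization are not modeled.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
steps = np.concatenate([rng.normal([0.8, 0.1], 0.1, (120, 2)),   # corridor mode
                        rng.normal([0.0, 0.6], 0.1, (60, 2))])   # turning mode
gmm = GaussianMixture(n_components=2, random_state=0).fit(steps)

pos = np.array([3.0, 4.0])
expected_step = gmm.weights_ @ gmm.means_      # probability-weighted mean move
print("predicted next position:", pos + expected_step)
```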

  8. Design, implementation and evaluation of a practical pseudoknot folding algorithm based on thermodynamics

    PubMed Central

    Reeder, Jens; Giegerich, Robert

    2004-01-01

    Background The general problem of RNA secondary structure prediction under the widely used thermodynamic model is known to be NP-complete when the structures considered include arbitrary pseudoknots. For restricted classes of pseudoknots, several polynomial time algorithms have been designed, where the O(n^6) time and O(n^4) space algorithm by Rivas and Eddy is currently the best available program. Results We introduce the class of canonical simple recursive pseudoknots and present an algorithm that requires O(n^4) time and O(n^2) space to predict the energetically optimal structure of an RNA sequence, possibly containing such pseudoknots. Evaluation against a large collection of known pseudoknotted structures shows the adequacy of the canonization approach and our algorithm. Conclusions RNA pseudoknots of medium size can now be predicted reliably as well as efficiently by the new algorithm. PMID:15294028

  9. Feed-Forward Neural Network Soft-Sensor Modeling of Flotation Process Based on Particle Swarm Optimization and Gravitational Search Algorithm

    PubMed Central

    Wang, Jie-Sheng; Han, Shuang

    2015-01-01

    For predicting the key technology indicators (concentrate grade and tailings recovery rate) of the flotation process, a feed-forward neural network (FNN) based soft-sensor model optimized by a hybrid algorithm combining particle swarm optimization (PSO) and the gravitational search algorithm (GSA) is proposed. Although GSA has strong optimization capability, it converges slowly and easily falls into local optima. In this paper, the velocity and position vectors of GSA are therefore adjusted by the PSO algorithm in order to improve its convergence speed and prediction accuracy. Finally, the proposed hybrid algorithm is adopted to optimize the parameters of the FNN soft-sensor model. Simulation results show that the model has better generalization and prediction accuracy for the concentrate grade and tailings recovery rate, meeting the online soft-sensor requirements of real-time control in the flotation process. PMID:26583034
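    The hybrid update can be sketched in a few lines. In the snippet below, a GSA-computed gravitational acceleration is combined with PSO's personal-best and global-best terms, as the abstract describes; the inertia and acceleration constants (w, c1, c2) are conventional guesses, not values from the paper.

```python
# Sketch: one PSO-adjusted GSA velocity/position update for all agents.
import numpy as np

def hybrid_step(x, v, accel, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """x, v, accel, pbest: arrays of shape (n_agents, dim); gbest: (dim,).
    accel is the gravitational acceleration from the GSA step; the pbest
    and gbest terms are the PSO adjustment that speeds convergence."""
    r1, r2 = np.random.rand(*x.shape), np.random.rand(*x.shape)
    v = w * v + accel + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v, v
```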

  10. Empirical Flutter Prediction Method.

    DTIC Science & Technology

    1988-03-05

    been used in this way to discover species or subspecies of animals, and to discover different types of voter or consumer requiring different persuasions...respect to behavior or performance or response variables. Once this were done, corresponding clusters might be sought among descriptive or predictive or...jump in a response. The first sort of usage does not apply to the flutter prediction problem. Here the types of behavior are the different kinds of

  11. Foresee: A user-centric home energy management system for energy efficiency and demand response

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin, Xin; Baker, Kyri A.; Christensen, Dane T.

    This paper presents foresee, a user-centric home energy management system that can help optimize how a home operates to concurrently meet users' needs, achieve energy efficiency and commensurate utility cost savings, and reliably deliver grid services based on utility signals. Foresee is built on a multiobjective model predictive control framework, wherein the objectives consist of energy cost, thermal comfort, user convenience, and carbon emission. Foresee learns user preferences on different objectives and acts on their behalf to operate building equipment, such as home appliances, photovoltaic systems, and battery storage. In this work, machine-learning algorithms were used to derive data-driven appliance models and usage patterns to predict the home's future energy consumption. This approach enables highly accurate predictions of comfort needs, energy costs, environmental impacts, and grid service availability. Simulation studies were performed on field data from a residential building stock data set collected in the Pacific Northwest. Results indicated that foresee generated up to 7.6% whole-home energy savings without requiring substantial behavioral changes. When responding to demand response events, foresee was able to provide load forecasts upon receipt of event notifications and delivered the committed demand response services with 10% or fewer errors. Foresee fully utilized the potential of the battery storage and controllable building loads and delivered up to 7.0-kW load reduction and 13.5-kW load increase. As a result, these benefits are provided while maintaining the occupants' thermal comfort or convenience in using their appliances.

  12. Foresee: A user-centric home energy management system for energy efficiency and demand response

    DOE PAGES

    Jin, Xin; Baker, Kyri A.; Christensen, Dane T.; ...

    2017-08-23

    This paper presents foresee, a user-centric home energy management system that can help optimize how a home operates to concurrently meet users' needs, achieve energy efficiency and commensurate utility cost savings, and reliably deliver grid services based on utility signals. Foresee is built on a multiobjective model predictive control framework, wherein the objectives consist of energy cost, thermal comfort, user convenience, and carbon emission. Foresee learns user preferences on different objectives and acts on their behalf to operate building equipment, such as home appliances, photovoltaic systems, and battery storage. In this work, machine-learning algorithms were used to derive data-driven appliance models and usage patterns to predict the home's future energy consumption. This approach enables highly accurate predictions of comfort needs, energy costs, environmental impacts, and grid service availability. Simulation studies were performed on field data from a residential building stock data set collected in the Pacific Northwest. Results indicated that foresee generated up to 7.6% whole-home energy savings without requiring substantial behavioral changes. When responding to demand response events, foresee was able to provide load forecasts upon receipt of event notifications and delivered the committed demand response services with 10% or fewer errors. Foresee fully utilized the potential of the battery storage and controllable building loads and delivered up to 7.0-kW load reduction and 13.5-kW load increase. As a result, these benefits are provided while maintaining the occupants' thermal comfort or convenience in using their appliances.

  13. Model Predictive Control Based Motion Drive Algorithm for a Driving Simulator

    NASA Astrophysics Data System (ADS)

    Rehmatullah, Faizan

    In this research, we develop a model predictive control based motion drive algorithm for the driving simulator at the Toronto Rehabilitation Institute. Motion drive algorithms exploit the limitations of the human vestibular system to create a perception of motion within the constrained workspace of a simulator. In the absence of visual cues, the human perception system is unable to distinguish between acceleration and the force of gravity. The motion drive algorithm determines control inputs to displace the simulator platform and, using the resulting inertial forces and angular rates, creates the perception of motion. By using model predictive control, we can optimize the use of the simulator workspace for every maneuver while reproducing the perceived vehicle motion. With its ability to handle nonlinear constraints, model predictive control allows us to incorporate workspace limitations.
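    A toy receding-horizon version of this idea is sketched below: a short sequence of platform accelerations is optimized so that the perceived acceleration tracks a reference while the platform stays inside a displacement limit. The dynamics, horizon, and weights are illustrative stand-ins for the simulator's actual model.

```python
# Sketch: receding-horizon motion cueing with a workspace constraint.
import numpy as np
from scipy.optimize import minimize

H, dt = 10, 0.05            # horizon length and time step
x_max = 0.5                 # workspace limit on displacement (m)
a_ref = 0.8 * np.ones(H)    # vehicle acceleration to be perceived (m/s^2)

def cost(u, x0=0.0, v0=0.0):
    x, v, err = x0, v0, 0.0
    for k in range(H):
        err += (u[k] - a_ref[k]) ** 2               # perception tracking error
        v += u[k] * dt
        x += v * dt
        err += 1e3 * max(0.0, abs(x) - x_max) ** 2  # workspace penalty
    return err

sol = minimize(cost, np.zeros(H), method="Nelder-Mead")
apply_now = sol.x[0]  # receding horizon: apply only the first input
```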

  14. The utility and limitations of current web-available algorithms to predict peptides recognized by CD4 T cells in response to pathogen infection #

    PubMed Central

    Chaves, Francisco A.; Lee, Alvin H.; Nayak, Jennifer; Richards, Katherine A.; Sant, Andrea J.

    2012-01-01

    The ability to track CD4 T cells elicited in response to pathogen infection or vaccination is critical because of the role these cells play in protective immunity. Coupled with advances in genome sequencing of pathogenic organisms, there is considerable appeal for implementation of computer-based algorithms to predict peptides that bind to the class II molecules, forming the complex recognized by CD4 T cells. Despite recent progress in this area, there is a paucity of data regarding their success in identifying actual pathogen-derived epitopes. In this study, we sought to rigorously evaluate the performance of multiple web-available algorithms by comparing their predictions and our results using purely empirical methods for epitope discovery in influenza that utilized overlapping peptides and cytokine Elispots, for three independent class II molecules. We analyzed the data in different ways, trying to anticipate how an investigator might use these computational tools for epitope discovery. We come to the conclusion that currently available algorithms can indeed facilitate epitope discovery, but all shared a high degree of false positive and false negative predictions. Therefore, efficiencies were low. We also found dramatic disparities among algorithms and between predicted IC50 values and true dissociation rates of peptide:MHC class II complexes. We suggest that improved success of predictive algorithms will depend less on changes in computational methods or increased data sets and more on changes in parameters used to “train” the algorithms that factor in elements of T cell repertoire and peptide acquisition by class II molecules. PMID:22467652

  15. Modelling and Predicting eHealth Usage in Europe: A Multidimensional Approach From an Online Survey of 13,000 European Union Internet Users

    PubMed Central

    Soler-Ramos, Ivan

    2016-01-01

    Background More advanced methods and models are needed to evaluate the participation of patients and citizens in the shared health care model that eHealth proposes. Objective The goal of our study was to design and evaluate a predictive multidimensional model of eHealth usage. Methods We used 2011 survey data from a sample of 13,000 European citizens aged 16–74 years who had used the Internet in the previous 3 months. We proposed and tested an eHealth usage composite indicator through 2-stage structural equation modelling with latent variables and measurement errors. Logistic regression (odds ratios, ORs) to model the predictors of eHealth usage was calculated using health status and sociodemographic independent variables. Results The dimensions with more explanatory power of eHealth usage were health Internet attitudes, information health Internet usage, empowerment of health Internet users, and the usefulness of health Internet usage. Some 52.39% (6811/13,000) of European Internet users’ eHealth usage was more intensive (greater than the mean). Users with long-term health problems or illnesses (OR 1.20, 95% CI 1.12–1.29) or receiving long-term treatment (OR 1.11, 95% CI 1.03–1.20), having family members with long-term health problems or illnesses (OR 1.44, 95% CI 1.34–1.55), or undertaking care activities for other people (OR 1.58, 95% CI 1.40–1.77) had a high propensity toward intensive eHealth usage. Sociodemographic predictors showed that Internet users who were female (OR 1.23, 95% CI 1.14–1.31), aged 25–54 years (OR 1.12, 95% CI 1.05–1.21), living in larger households (3 members: OR 1.25, 95% CI 1.15–1.36; 5 members: OR 1.13, 95% CI 0.97–1.28; ≥6 members: OR 1.31, 95% CI 1.10–1.57), had more children <16 years of age (1 child: OR 1.29, 95% CI 1.18–1.14; 2 children: OR 1.05, 95% CI 0.94–1.17; 4 children: OR 1.35, 95% CI 0.88–2.08), and had more family members >65 years of age (1 member: OR 1.33, 95% CI 1.18–1.50; ≥4 members: OR 1.82, 95% CI 0.54–6.03) had a greater propensity toward intensive eHealth usage. Likewise, users residing in densely populated areas, such as cities and large towns (OR 1.17, 95% CI 1.09–1.25), also had a greater propensity toward intensive eHealth usage. Educational levels presented an inverted U shape in relation to intensive eHealth usage, with greater propensities among those with a secondary education (OR 1.08, 95% CI 1.01–1.16). Finally, occupational categories and net monthly income data suggest a higher propensity among the employed or self-employed (OR 1.07, 95% CI 0.99–1.15) and among the minimum wage stratum, earning ≤€1000 per month (OR 1.66, 95% CI 1.48–1.87). Conclusions We provide new evidence of inequalities that explain intensive eHealth usage. The results highlight the need to develop more specific eHealth practices to address different realities. PMID:27450189

  16. Modelling and Predicting eHealth Usage in Europe: A Multidimensional Approach From an Online Survey of 13,000 European Union Internet Users.

    PubMed

    Torrent-Sellens, Joan; Díaz-Chao, Ángel; Soler-Ramos, Ivan; Saigí-Rubió, Francesc

    2016-07-22

    More advanced methods and models are needed to evaluate the participation of patients and citizens in the shared health care model that eHealth proposes. The goal of our study was to design and evaluate a predictive multidimensional model of eHealth usage. We used 2011 survey data from a sample of 13,000 European citizens aged 16-74 years who had used the Internet in the previous 3 months. We proposed and tested an eHealth usage composite indicator through 2-stage structural equation modelling with latent variables and measurement errors. Logistic regression (odds ratios, ORs) to model the predictors of eHealth usage was calculated using health status and sociodemographic independent variables. The dimensions with more explanatory power of eHealth usage were health Internet attitudes, information health Internet usage, empowerment of health Internet users, and the usefulness of health Internet usage. Some 52.39% (6811/13,000) of European Internet users' eHealth usage was more intensive (greater than the mean). Users with long-term health problems or illnesses (OR 1.20, 95% CI 1.12-1.29) or receiving long-term treatment (OR 1.11, 95% CI 1.03-1.20), having family members with long-term health problems or illnesses (OR 1.44, 95% CI 1.34-1.55), or undertaking care activities for other people (OR 1.58, 95% CI 1.40-1.77) had a high propensity toward intensive eHealth usage. Sociodemographic predictors showed that Internet users who were female (OR 1.23, 95% CI 1.14-1.31), aged 25-54 years (OR 1.12, 95% CI 1.05-1.21), living in larger households (3 members: OR 1.25, 95% CI 1.15-1.36; 5 members: OR 1.13, 95% CI 0.97-1.28; ≥6 members: OR 1.31, 95% CI 1.10-1.57), had more children <16 years of age (1 child: OR 1.29, 95% CI 1.18-1.14; 2 children: OR 1.05, 95% CI 0.94-1.17; 4 children: OR 1.35, 95% CI 0.88-2.08), and had more family members >65 years of age (1 member: OR 1.33, 95% CI 1.18-1.50; ≥4 members: OR 1.82, 95% CI 0.54-6.03) had a greater propensity toward intensive eHealth usage. Likewise, users residing in densely populated areas, such as cities and large towns (OR 1.17, 95% CI 1.09-1.25), also had a greater propensity toward intensive eHealth usage. Educational levels presented an inverted U shape in relation to intensive eHealth usage, with greater propensities among those with a secondary education (OR 1.08, 95% CI 1.01-1.16). Finally, occupational categories and net monthly income data suggest a higher propensity among the employed or self-employed (OR 1.07, 95% CI 0.99-1.15) and among the minimum wage stratum, earning ≤€1000 per month (OR 1.66, 95% CI 1.48-1.87). We provide new evidence of inequalities that explain intensive eHealth usage. The results highlight the need to develop more specific eHealth practices to address different realities.
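    For readers unfamiliar with the reporting convention, the sketch below shows how odds ratios of the kind listed above arise from a logistic regression: fit the model and exponentiate the coefficients. The two predictors and the simulated data are invented, and statsmodels is used only as one convenient implementation.

```python
# Sketch: odds ratios as exponentiated logistic-regression coefficients.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500
female = rng.integers(0, 2, n)
long_term_illness = rng.integers(0, 2, n)
logit = -0.3 + 0.21 * female + 0.18 * long_term_illness
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)  # intensive usage

X = sm.add_constant(np.column_stack([female, long_term_illness]))
fit = sm.Logit(y, X).fit(disp=0)
odds_ratios = np.exp(fit.params)   # e.g., OR for 'female' near exp(0.21)
ci = np.exp(fit.conf_int())        # 95% confidence intervals on the ORs
```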

  17. Dynamic I/O Power Management for Hard Real-Time Systems

    DTIC Science & Technology

    2005-01-01

    recently emerged as an attractive alternative to inflexible hardware solutions. DPM for hard real-time systems has received relatively little attention...In particular, energy-driven I/O device scheduling for real-time systems has not been considered before. We present the first online DPM algorithm...which we call Low Energy Device Scheduler (LEDES), for hard real-time systems. LEDES takes as inputs a predetermined task schedule and a device-usage

  18. Time Delay Measurements of Key Generation Process on Smart Cards

    DTIC Science & Technology

    2015-03-01

    random number generator is available (Chatterjee & Gupta, 2009). The ECC algorithm will grow in usage as information becomes more and more secure. Figure...Worldwide Mobile Enterprise Security Software 2012–2016 Forecast and Analysis), mobile identity and access management is expected to grow by 27.6 percent...iPad, tablets) as well as 80000 BlackBerry phones. The mobility plan itself will be deployed in three phases over 2014, with the first phase

  19. Tachyon search speeds up retrieval of similar sequences by several orders of magnitude.

    PubMed

    Tan, Joshua; Kuchibhatla, Durga; Sirota, Fernanda L; Sherman, Westley A; Gattermayer, Tobias; Kwoh, Chia Yee; Eisenhaber, Frank; Schneider, Georg; Maurer-Stroh, Sebastian

    2012-06-15

    The usage of current sequence search tools becomes increasingly slower as databases of protein sequences continue to grow exponentially. Tachyon, a new algorithm that identifies closely related protein sequences ~200 times faster than standard BLAST, circumvents this limitation with a reduced database and oligopeptide matching heuristic. The tool is publicly accessible as a webserver at http://tachyon.bii.a-star.edu.sg and can also be accessed programmatically through SOAP.

  20. Dimeric spectra analysis in Microsoft Excel: a comparative study.

    PubMed

    Gilani, A Ghanadzadeh; Moghadam, M; Zakerhamidi, M S

    2011-11-01

    The purpose of this work is to introduce the reader to an Add-in implementation, Decom. This implementation provides the whole processing requirements for analysis of dimeric spectra. General linear and nonlinear decomposition algorithms were integrated as an Excel Add-in for easy installation and usage. In this work, the results of several samples investigations were compared to those obtained by Datan. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  1. Predicting the onset of hazardous alcohol drinking in primary care: development and validation of a simple risk algorithm.

    PubMed

    Bellón, Juan Ángel; de Dios Luna, Juan; King, Michael; Nazareth, Irwin; Motrico, Emma; GildeGómez-Barragán, María Josefa; Torres-González, Francisco; Montón-Franco, Carmen; Sánchez-Celaya, Marta; Díaz-Barreiros, Miguel Ángel; Vicens, Catalina; Moreno-Peral, Patricia

    2017-04-01

    Little is known about the risk of progressing to hazardous alcohol use in abstinent or low-risk drinkers. To develop and validate a simple brief risk algorithm for the onset of hazardous alcohol drinking (HAD) over 12 months for use in primary care. Prospective cohort study in 32 health centres from six Spanish provinces, with evaluations at baseline, 6 months, and 12 months. Forty-one risk factors were measured and multilevel logistic regression and inverse probability weighting were used to build the risk algorithm. The outcome was new occurrence of HAD during the study, as measured by the AUDIT. From the lists of 174 GPs, 3954 adult abstinent or low-risk drinkers were recruited. The 'predictAL-10' risk algorithm included just nine variables (10 questions): province, sex, age, cigarette consumption, perception of financial strain, having ever received treatment for an alcohol problem, childhood sexual abuse, AUDIT-C, and interaction AUDIT-C*Age. The c-index was 0.886 (95% CI = 0.854 to 0.918). The optimal cutoff had a sensitivity of 0.83 and specificity of 0.80. Excluding childhood sexual abuse from the model (the 'predictAL-9'), the c-index was 0.880 (95% CI = 0.847 to 0.913), sensitivity 0.79, and specificity 0.81. There was no statistically significant difference between the c-indexes of predictAL-10 and predictAL-9. The predictAL-10/9 is a simple and internally valid risk algorithm to predict the onset of hazardous alcohol drinking over 12 months in primary care attendees; it is a brief tool that is potentially useful for primary prevention of hazardous alcohol drinking. © British Journal of General Practice 2017.
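    A brief risk algorithm of this kind reduces to a linear predictor over a handful of questions, converted to a probability and summarized by the c-index. The sketch below illustrates the mechanics with placeholder coefficients, including an AUDIT-C*age interaction term as in predictAL-10; none of the numbers are the published model.

```python
# Sketch: applying a brief risk score and evaluating it with the c-index.
import numpy as np
from sklearn.metrics import roc_auc_score

def risk_score(age, audit_c, smokes, financial_strain):
    lp = (-2.0 + 0.5 * audit_c + 0.4 * smokes + 0.3 * financial_strain
          - 0.02 * age + 0.01 * audit_c * age)   # AUDIT-C * age interaction
    return 1.0 / (1.0 + np.exp(-lp))

rng = np.random.default_rng(2)
n = 200
cohort = rng.random((n, 4)) * np.array([60, 4, 1, 1]) + np.array([18, 0, 0, 0])
scores = risk_score(*cohort.T)
onset = (rng.random(n) < scores).astype(int)   # simulated 12-month HAD onset
print("c-index:", roc_auc_score(onset, scores))
```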

  2. Predicting missing links and identifying spurious links via likelihood analysis

    NASA Astrophysics Data System (ADS)

    Pan, Liming; Zhou, Tao; Lü, Linyuan; Hu, Chin-Kun

    2016-03-01

    Real network data is often incomplete and noisy, where link prediction algorithms and spurious link identification algorithms can be applied. Thus far, a general method for transforming network organizing mechanisms into link prediction algorithms has been lacking. Here we use an algorithmic framework where a network's probability is calculated according to a predefined structural Hamiltonian that takes into account the network organizing principles, and a non-observed link is scored by the conditional probability of adding the link to the observed network. Extensive numerical simulations show that the proposed algorithm has remarkably higher accuracy than the state-of-the-art methods in uncovering missing links and identifying spurious links in many complex biological and social networks. Such a method also finds applications in exploring the underlying network evolutionary mechanisms.

  3. Predicting missing links and identifying spurious links via likelihood analysis

    PubMed Central

    Pan, Liming; Zhou, Tao; Lü, Linyuan; Hu, Chin-Kun

    2016-01-01

    Real network data is often incomplete and noisy, where link prediction algorithms and spurious link identification algorithms can be applied. Thus far, a general method for transforming network organizing mechanisms into link prediction algorithms has been lacking. Here we use an algorithmic framework where a network's probability is calculated according to a predefined structural Hamiltonian that takes into account the network organizing principles, and a non-observed link is scored by the conditional probability of adding the link to the observed network. Extensive numerical simulations show that the proposed algorithm has remarkably higher accuracy than the state-of-the-art methods in uncovering missing links and identifying spurious links in many complex biological and social networks. Such a method also finds applications in exploring the underlying network evolutionary mechanisms. PMID:26961965
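    A toy instance of the framework makes the scoring rule concrete: define a structural Hamiltonian H(G), assign each graph probability proportional to exp(-H), and score a candidate link by the probability change on adding it. The triangle-closing Hamiltonian below is a simple stand-in for the organizing principles used in the paper.

```python
# Sketch: likelihood-based link scoring with a toy structural Hamiltonian.
import itertools
import networkx as nx
import numpy as np

def hamiltonian(G):
    # Lower energy when links close triangles (a common organizing principle).
    return -sum(len(list(nx.common_neighbors(G, u, v))) for u, v in G.edges())

def link_score(G, u, v):
    H0 = hamiltonian(G)
    G.add_edge(u, v)
    H1 = hamiltonian(G)
    G.remove_edge(u, v)
    return np.exp(-(H1 - H0))  # conditional probability up to normalization

G = nx.karate_club_graph()
missing = [(u, v) for u, v in itertools.combinations(G, 2) if not G.has_edge(u, v)]
best = max(missing, key=lambda e: link_score(G, *e))  # top-ranked missing link
```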

  4. Exploring the Relationship between Parental Worry about their Children's Health and Usage of an Internet Intervention for Pediatric Encopresis

    PubMed Central

    Magee, Joshua C.; Thorndike, Frances P.; Cox, Daniel J.; Borowitz, Stephen M.

    2009-01-01

    Objective To investigate whether parental worry about their children's health predicts usage of a pediatric Internet intervention for encopresis. Methods Thirty-nine families with a child diagnosed with encopresis completed a national clinical trial of an Internet-based intervention for encopresis (www.ucanpooptoo.com). Parents rated worry about their children's health, encopresis severity, current parent treatment for depression, and parent comfort with the Internet. Usage indicators were collected while participants utilized the intervention. Results Regression analyses showed that parents who reported higher baseline levels of worry about their children's health showed greater subsequent intervention use (β =.52, p =.002), even after accounting for other plausible predictors. Exploratory analyses indicated that this effect may be stronger for families with younger children. Conclusions Characteristics of individuals using Internet-based treatment programs, such as parental worry about their children's health, can influence intervention usage, and should be considered by developers of Internet interventions. PMID:18772228

  5. Enabling active and healthy ageing decision support systems with the smart collection of TV usage patterns

    PubMed Central

    Billis, Antonis S.; Batziakas, Asterios; Bratsas, Charalampos; Tsatali, Marianna S.; Karagianni, Maria

    2016-01-01

    Smart monitoring of seniors' behavioural patterns, and more specifically of their activities of daily living, has attracted immense research interest in recent years. The development of smart decision support systems to promote health smart homes has also emerged, taking advantage of the plethora of smart, inexpensive and unobtrusive monitoring sensors, devices and software tools. To this end, a smart monitoring system has been used to extract meaningful information about television (TV) usage patterns and subsequently associate them with the clinical findings of experts. The smart TV operating-state remote monitoring system was installed in the homes of four elderly women and gathered data for more than 11 months. Results suggest that daily TV usage (the time the TV is turned on) can predict mental health change. In conclusion, the authors suggest that the collection of smart device usage patterns could strengthen the inference capabilities of existing health DSSs applied in uncontrolled settings such as real senior homes. PMID:27284457

  6. Enabling active and healthy ageing decision support systems with the smart collection of TV usage patterns.

    PubMed

    Billis, Antonis S; Batziakas, Asterios; Bratsas, Charalampos; Tsatali, Marianna S; Karagianni, Maria; Bamidis, Panagiotis D

    2016-03-01

    Smart monitoring of seniors' behavioural patterns, and more specifically of their activities of daily living, has attracted immense research interest in recent years. The development of smart decision support systems to promote health smart homes has also emerged, taking advantage of the plethora of smart, inexpensive and unobtrusive monitoring sensors, devices and software tools. To this end, a smart monitoring system has been used to extract meaningful information about television (TV) usage patterns and subsequently associate them with the clinical findings of experts. The smart TV operating-state remote monitoring system was installed in the homes of four elderly women and gathered data for more than 11 months. Results suggest that daily TV usage (the time the TV is turned on) can predict mental health change. In conclusion, the authors suggest that the collection of smart device usage patterns could strengthen the inference capabilities of existing health DSSs applied in uncontrolled settings such as real senior homes.

  7. Comparison of codon usage bias across Leishmania and Trypanosomatids to understand mRNA secondary structure, relative protein abundance and pathway functions.

    PubMed

    Subramanian, Abhishek; Sarkar, Ram Rup

    2015-10-01

    To understand the variations in gene organization and their effect on the phenotype across different Leishmania species, and to study the differential clinical manifestations of the parasite within the host, we performed a large-scale analysis of codon usage patterns between Leishmania and other known Trypanosomatid species. We present the causes and consequences of codon usage bias in Leishmania genomes with respect to mutational pressure, translational selection and amino acid composition bias. We establish that GC bias at the wobble position, rather than amino acid composition bias, governs codon usage bias across Leishmania species. We found that, within Leishmania, homogeneous codon contexts coding for less frequent amino acid pairs, and codons avoiding the formation of folding structures in mRNA, are preferentially chosen. We predicted putative differences in global expression between genes belonging to specific pathways across Leishmania. This demonstrates the role of evolution in shaping an otherwise conserved genome to produce species-specific, function-level differences for efficient survival. Copyright © 2015 Elsevier Inc. All rights reserved.

  8. Traffic Noise Ground Attenuation Algorithm Evaluation

    NASA Astrophysics Data System (ADS)

    Herman, Lloyd Allen

    The Federal Highway Administration traffic noise prediction program, STAMINA 2.0, was evaluated for its accuracy. In addition, the ground attenuation algorithm used in the Ontario ORNAMENT method was evaluated to determine its potential to improve these predictions. Field measurements of sound levels were made at 41 sites on I-440 in Nashville, Tennessee, both to study noise barrier effectiveness and to evaluate STAMINA 2.0 and the performance of the ORNAMENT ground attenuation algorithm. The measurement sites, which contain large variations in terrain, included several cross sections. Further, all sites contain some type of barrier, natural or constructed, which could more fully expose the strengths and weaknesses of the ground attenuation algorithms. The noise barrier evaluation was accomplished in accordance with the American National Standard Methods for Determination of Insertion Loss of Outdoor Noise Barriers, which resulted in an evaluation of this standard. The entire 7.2-mile length of I-440 was modeled using STAMINA 2.0. A multiple-run procedure was developed to emulate the results that would be obtained if the ORNAMENT algorithm were incorporated into STAMINA 2.0. Finally, the predicted noise levels based on STAMINA 2.0 and STAMINA with the ORNAMENT ground attenuation algorithm were compared with each other and with the field measurements. It was found that STAMINA 2.0 overpredicted noise levels by an average of over 2 dB for the receivers on I-440, whereas STAMINA with the ORNAMENT ground attenuation algorithm overpredicted noise levels by an average of less than 0.5 dB. The mean errors for the two predictions were found to be statistically different from each other, and the mean error for the prediction with the ORNAMENT ground attenuation algorithm was not found to be statistically different from zero. The STAMINA 2.0 program predicts little, if any, ground attenuation for receivers at typical first-row distances from highways where noise barriers are used. The ORNAMENT ground attenuation algorithm, which recognizes and better compensates for the presence of obstacles in the propagation path of a sound wave, predicted significant amounts of ground attenuation for most sites.

  9. The gradient boosting algorithm and random boosting for genome-assisted evaluation in large data sets.

    PubMed

    González-Recio, O; Jiménez-Montero, J A; Alenda, R

    2013-01-01

    In the next few years, with the advent of high-density single nucleotide polymorphism (SNP) arrays and genome sequencing, genomic evaluation methods will need to deal with a large number of genetic variants and an increasing sample size. The boosting algorithm is a machine-learning technique that may alleviate the drawbacks of dealing with such large data sets. This algorithm combines different predictors in a sequential manner with some shrinkage on them; each predictor is applied consecutively to the residuals from the committee formed by the previous ones to form a final prediction based on a subset of covariates. Here, a detailed description is provided and examples using a toy data set are included. A modification of the algorithm called "random boosting" was proposed to increase predictive ability and decrease computation time of genome-assisted evaluation in large data sets. Random boosting uses a random selection of markers to add a subsequent weak learner to the predictive model. These modifications were applied to a real data set composed of 1,797 bulls genotyped for 39,714 SNP. Deregressed proofs of 4 yield traits and 1 type trait from January 2009 routine evaluations were used as dependent variables. A 2-fold cross-validation scenario was implemented. Sires born before 2005 were used as a training sample (1,576 and 1,562 for production and type traits, respectively), whereas younger sires were used as a testing sample to evaluate the predictive ability of the algorithm on yet-to-be-observed phenotypes. Comparison with the original algorithm was provided. The predictive ability of the algorithm was measured as Pearson correlations between observed and predicted responses. Further, estimated bias was computed as the average difference between observed and predicted phenotypes. The results showed that the modification of the original boosting algorithm could be run in 1% of the time used with the original algorithm and with negligible differences in accuracy and bias. This modification may be used to speed up the computation of genome-assisted evaluation in large data sets such as those obtained from consortia. Copyright © 2013 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
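    The two ideas in the abstract, sequential fitting to residuals with shrinkage and the random marker subsets of "random boosting", fit in a short sketch. The data shapes, the weak learner, and all constants below are illustrative, not those of the study.

```python
# Sketch: boosting on residuals with shrinkage, plus random marker subsets.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(3)
X = rng.integers(0, 3, size=(300, 1000)).astype(float)  # SNP genotypes 0/1/2
y = X[:, :5].sum(axis=1) + rng.normal(0, 1.0, 300)      # toy phenotype

shrinkage, n_rounds, m_sub = 0.1, 50, 100
residual, members = y.copy(), []
for _ in range(n_rounds):
    cols = rng.choice(X.shape[1], size=m_sub, replace=False)  # random markers
    tree = DecisionTreeRegressor(max_depth=2).fit(X[:, cols], residual)
    residual -= shrinkage * tree.predict(X[:, cols])          # update residuals
    members.append((cols, tree))

def predict(Xnew):
    return sum(shrinkage * t.predict(Xnew[:, c]) for c, t in members)
```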

  10. A numerical solution of Duffing's equations including the prediction of jump phenomena

    NASA Technical Reports Server (NTRS)

    Moyer, E. T., Jr.; Ghasghai-Abdi, E.

    1987-01-01

    Numerical methodology for the solution of Duffing's differential equation is presented. Algorithms for the prediction of multiple equilibrium solutions and jump phenomena are developed. In addition, a filtering algorithm for producing steady state solutions is presented. The problem of a rigidly clamped circular plate subjected to cosinusoidal pressure loading is solved using the developed algorithms (the plate is assumed to be in the geometrically nonlinear range). The results accurately predict regions of solution multiplicity and jump phenomena.
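    The jump phenomenon is easy to reproduce numerically. The sketch below integrates Duffing's equation x'' + d*x' + a*x + b*x^3 = F*cos(w*t) over an upward frequency sweep, carrying the final state between sweep points; near the jump frequency, the recorded steady-state amplitude changes abruptly. The parameter values are illustrative, not those of the clamped-plate problem.

```python
# Sketch: frequency sweep of the forced Duffing oscillator to reveal a jump.
import numpy as np
from scipy.integrate import solve_ivp

d, a, b, F = 0.2, 1.0, 1.0, 0.5

def duffing(t, s, w):
    x, v = s
    return [v, F * np.cos(w * t) - d * v - a * x - b * x ** 3]

amps, state = [], [0.0, 0.0]
for w in np.linspace(0.8, 2.0, 40):            # upward frequency sweep
    sol = solve_ivp(duffing, (0, 200), state, args=(w,), max_step=0.05)
    state = sol.y[:, -1]                       # continue from the last state
    amps.append(np.abs(sol.y[0, sol.t > 150]).max())  # steady-state amplitude
```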

  11. Statistical Relational Learning to Predict Primary Myocardial Infarction from Electronic Health Records

    PubMed Central

    Weiss, Jeremy C; Page, David; Peissig, Peggy L; Natarajan, Sriraam; McCarty, Catherine

    2013-01-01

    Electronic health records (EHRs) are an emerging relational domain with large potential to improve clinical outcomes. We apply two statistical relational learning (SRL) algorithms to the task of predicting primary myocardial infarction. We show that one SRL algorithm, relational functional gradient boosting, outperforms propositional learners particularly in the medically-relevant high recall region. We observe that both SRL algorithms predict outcomes better than their propositional analogs and suggest how our methods can augment current epidemiological practices. PMID:25360347

  12. A novel power efficient location-based cooperative routing with transmission power-upper-limit for wireless sensor networks.

    PubMed

    Shi, Juanfei; Calveras, Anna; Cheng, Ye; Liu, Kai

    2013-05-15

    The extensive usage of wireless sensor networks (WSNs) has led to the development of many power- and energy-efficient routing protocols. Cooperative routing in WSNs can improve performance in these types of networks. In this paper we discuss existing proposals and propose a routing algorithm for wireless sensor networks called Power Efficient Location-based Cooperative Routing with Transmission Power-upper-limit (PELCR-TP). The algorithm is based on the principle of minimum link power and aims to take advantage of node cooperation to make the link work well in WSNs with a low transmission power. In the proposed scheme, with a determined transmission power upper limit, nodes find the most appropriate next nodes and single-relay nodes with the proposed algorithm. Moreover, this proposal subtly avoids non-working nodes, because we add a Bad nodes Avoidance Strategy (BAS). Simulation results show that the proposed algorithm with BAS can significantly improve the performance in reducing the overall link power, enhancing the transmission success rate and decreasing the retransmission rate.

  13. Optimal pattern distributions in Rete-based production systems

    NASA Technical Reports Server (NTRS)

    Scott, Stephen L.

    1994-01-01

    Since its introduction into the AI community in the early 1980's, the Rete algorithm has been widely used. This algorithm has formed the basis for many AI tools, including NASA's CLIPS. One drawback of Rete-based implementations, however, is that the network structures used internally by the Rete algorithm make it sensitive to the arrangement of individual patterns within rules. Thus while rules may be more or less arbitrarily placed within source files, the distribution of individual patterns within these rules can significantly affect the overall system performance. Some heuristics have been proposed to optimize pattern placement; however, these suggestions can be conflicting. This paper describes a systematic effort to measure the effect of pattern distribution on production system performance. An overview of the Rete algorithm is presented to provide context. A description of the methods used to explore the pattern ordering problem is presented, using internal production system metrics such as the number of partial matches, and coarse-grained operating system data such as memory usage and time. The results of this study should be of interest to those developing and optimizing software for Rete-based production systems.

  14. Uncertainty analysis of wavelet-based feature extraction for isotope identification on NaI gamma-ray spectra

    DOE PAGES

    Stinnett, Jacob; Sullivan, Clair J.; Xiong, Hao

    2017-03-02

    Low-resolution isotope identifiers are widely deployed for nuclear security purposes, but these detectors currently demonstrate problems in making correct identifications in many typical usage scenarios. While there are many hardware alternatives and improvements that can be made, performance on existing low resolution isotope identifiers should be able to be improved by developing new identification algorithms. We have developed a wavelet-based peak extraction algorithm and an implementation of a Bayesian classifier for automated peak-based identification. The peak extraction algorithm has been extended to compute uncertainties in the peak area calculations. To build empirical joint probability distributions of the peak areas and uncertainties, a large set of spectra were simulated in MCNP6 and processed with the wavelet-based feature extraction algorithm. Kernel density estimation was then used to create a new component of the likelihood function in the Bayesian classifier. Furthermore, identification performance is demonstrated on a variety of real low-resolution spectra, including Category I quantities of special nuclear material.
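    The peak-extraction step can be illustrated with an off-the-shelf continuous wavelet transform. In the sketch below, a synthetic spectrum stands in for real NaI data, and scipy's find_peaks_cwt locates peak channels over a range of widths; the authors' own algorithm, area/uncertainty estimates, and Bayesian classifier are not reproduced here.

```python
# Sketch: wavelet-based peak extraction on a synthetic gamma-ray spectrum.
import numpy as np
from scipy.signal import find_peaks_cwt

channels = np.arange(1024)
spectrum = (200 * np.exp(-0.5 * ((channels - 221) ** 2) / 6 ** 2)  # photopeak
            + np.random.default_rng(4).poisson(20, 1024))          # noise floor

peak_channels = find_peaks_cwt(spectrum, widths=np.arange(3, 12))
# Each detected peak would then feed area/uncertainty estimates and the
# Bayesian classifier's likelihood, as described in the abstract.
```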

  15. A Novel Power Efficient Location-Based Cooperative Routing with Transmission Power-Upper-Limit for Wireless Sensor Networks

    PubMed Central

    Shi, Juanfei; Calveras, Anna; Cheng, Ye; Liu, Kai

    2013-01-01

    The extensive usage of wireless sensor networks (WSNs) has led to the development of many power- and energy-efficient routing protocols. Cooperative routing in WSNs can improve performance in these types of networks. In this paper we discuss existing proposals and propose a routing algorithm for wireless sensor networks called Power Efficient Location-based Cooperative Routing with Transmission Power-upper-limit (PELCR-TP). The algorithm is based on the principle of minimum link power and aims to take advantage of node cooperation to make the link work well in WSNs with a low transmission power. In the proposed scheme, with a determined transmission power upper limit, nodes find the most appropriate next nodes and single-relay nodes with the proposed algorithm. Moreover, this proposal subtly avoids non-working nodes, because we add a Bad nodes Avoidance Strategy (BAS). Simulation results show that the proposed algorithm with BAS can significantly improve the performance in reducing the overall link power, enhancing the transmission success rate and decreasing the retransmission rate. PMID:23676625
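    The selection principle is easy to sketch: among candidate next hops reachable within the transmission power upper limit, skip nodes flagged as bad (the BAS step) and choose the link requiring the least power. The distance-based power model and all values below are placeholders, not the protocol's actual metric.

```python
# Sketch: minimum-link-power next-hop selection with a power upper limit.
def next_hop(current, candidates, p_max, bad_nodes, alpha=2.0):
    def link_power(a, b):  # distance-based path-loss model (illustrative)
        d = ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
        return d ** alpha
    ok = [(n, xy) for n, xy in candidates
          if n not in bad_nodes and link_power(current, xy) <= p_max]
    return min(ok, key=lambda c: link_power(current, c[1]), default=None)

hop = next_hop((0, 0), [("n1", (1, 1)), ("n2", (0.5, 0.2)), ("n3", (3, 3))],
               p_max=4.0, bad_nodes={"n1"})
print(hop)   # -> ('n2', (0.5, 0.2))
```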

  16. New robust algorithm for tracking cells in videos of Drosophila morphogenesis based on finding an ideal path in segmented spatio-temporal cellular structures.

    PubMed

    Bellaïche, Yohanns; Bosveld, Floris; Graner, François; Mikula, Karol; Remesíková, Mariana; Smísek, Michal

    2011-01-01

    In this paper, we present a novel algorithm for tracking cells in a time-lapse confocal microscopy movie of a Drosophila epithelial tissue during pupal morphogenesis. We consider a 2D + time video as a 3D static image, where frames are stacked atop each other, and using a spatio-temporal segmentation algorithm we obtain information about spatio-temporal 3D tubes representing the evolutions of cells. The main idea for tracking is the usage of two distance functions--the first one from the cells in the initial frame and the second one from the segmented boundaries. We track the cells backwards in time. The first distance function attracts the subsequently constructed cell trajectories to the cells in the initial frame and the second one forces them to be close to the centerlines of the segmented tubular structures. This makes our tracking algorithm robust against noise and missing spatio-temporal boundaries. This approach can be generalized to a 3D + time video analysis, where spatio-temporal tubes are 4D objects.

  17. Pipelining in structural health monitoring wireless sensor network

    NASA Astrophysics Data System (ADS)

    Li, Xu; Dorvash, Siavash; Cheng, Liang; Pakzad, Shamim

    2010-04-01

    Application of wireless sensor networks (WSNs) for structural health monitoring (SHM) is becoming widespread due to their implementation ease and economic advantage over traditional sensor networks. Besides the advantages that have made wireless networks preferable, there are some concerns regarding their performance in certain applications. In long-span bridge monitoring, the need to transfer data over long distances causes some challenges in the design of WSN platforms. Due to the geometry of bridge structures, using multi-hop data transfer between remote nodes and the base station is essential. This paper focuses on the performance of pipelining algorithms. We summarize several prevalent pipelining approaches, discuss their performance, and propose a new pipelining algorithm, which considers both boosting channel usage and simplicity of deployment.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zawisza, I; Yan, H; Yin, F

    Purpose: To assure that tumor motion remains within the radiation field during high-dose and high-precision radiosurgery, real-time imaging and surrogate monitoring are employed. These methods provide real-time tumor/surrogate motion, but no future information is available. In order to anticipate future tumor/surrogate motion and track the target location precisely, an algorithm is developed and investigated for estimating surrogate motion multiple steps ahead. Methods: The study utilized a one-dimensional surrogate motion signal divided into three components: (a) a training component containing the primary data from the first frame to the beginning of the input subsequence; (b) an input subsequence component of the surrogate signal used as input to the prediction algorithm; (c) an output subsequence component, the remaining signal, used as the known output of the prediction algorithm for validation. The prediction algorithm consists of three major steps: (1) extracting subsequences from the training component which best match the input subsequence according to a given criterion; (2) calculating weighting factors from these best-matched subsequences; (3) collecting the parts that follow the matched subsequences and combining them with the assigned weighting factors to form the output. The prediction algorithm was examined for several patients, and its performance is assessed based on the correlation between the prediction and the known output. Results: Respiratory motion data were collected for 20 patients using the RPM system. The output subsequence was the last 50 samples (∼2 seconds) of a surrogate signal, and the input subsequence was the 100 frames (∼3 seconds) prior to the output subsequence. Based on the analysis of the correlation coefficient between the predicted and known output subsequences, the average correlation is 0.9644±0.0394 and 0.9789±0.0239 for the equal-weighting and relative-weighting strategies, respectively. Conclusion: Preliminary results indicate that the prediction algorithm is effective in estimating surrogate motion multiple steps in advance. The relative-weighting method shows better prediction accuracy than the equal-weighting method. More parameters of this algorithm are under investigation.
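    The three-step predictor described above amounts to nearest-subsequence matching with weighted continuations, as in the following sketch; the window lengths mirror the abstract (about 3 s of input, 2 s of output), while the signal, k, and the inverse-distance weighting are illustrative choices.

```python
# Sketch: multiple-step-ahead prediction by best-matched subsequences.
import numpy as np

def predict_ahead(signal, n_in=100, n_out=50, k=5):
    query = signal[-n_in:]
    train = signal[:-n_in]
    # Step 1: distance of the query to every training subsequence.
    d = np.array([np.linalg.norm(train[i:i + n_in] - query)
                  for i in range(len(train) - n_in - n_out)])
    best = np.argsort(d)[:k]
    # Step 2: weighting factors (inverse distance; equal weights also work).
    w = 1.0 / (d[best] + 1e-9)
    w /= w.sum()
    # Step 3: combine the continuations of the best matches.
    futures = np.stack([train[i + n_in:i + n_in + n_out] for i in best])
    return w @ futures

t = np.arange(2000) * 0.04
breathing = np.sin(2 * np.pi * 0.25 * t)   # idealized RPM-like trace
forecast = predict_ahead(breathing)        # next ~2 s of surrogate motion
```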

  19. HIV1 V3 loop hypermutability is enhanced by the guanine usage bias in the part of env gene coding for it.

    PubMed

    Khrustalev, Vladislav Victorovich

    2009-01-01

    Guanine is the most mutable nucleotide in HIV genes because of frequently occurring G to A transitions, which are caused by cytosine deamination in viral DNA minus strands catalyzed by APOBEC enzymes. The distribution of guanine among the three codon positions should influence the probability that a G to A mutation is nonsynonymous (i.e., occurs in the first or second codon position). We discovered that nucleotide sequences of env genes coding for third variable regions (V3 loops) of gp120 from HIV1 and HIV2 have different kinds of guanine usage biases. In the HIV1 reference strain and 100 additionally analyzed HIV1 strains, the guanine usage bias in V3 loop coding regions (2G>1G>3G) should lead to elevated rates of nonsynonymous G to A transitions. In the HIV2 reference strain and 100 other HIV2 strains, the guanine usage bias in V3 loop coding regions (3G>2G>1G) should protect V3 loops from hypermutability. According to the HIV1 and HIV2 V3 alignment, insertion of a sequence enriched with 2G (21 codons in length) occurred during the evolution of the HIV1 predecessor, while insertion of a different sequence enriched with 3G (19 codons in length) occurred during the evolution of the HIV2 predecessor. The higher the level of 3G in the V3 coding region, the lower the expected occurrence rate of immune-escape mutations. This hypothesis was tested in this study by comparing the guanine usage in V3 loop coding regions from HIV1 fast and slow progressors. All calculations have been performed by our algorithms "VVK In length", "VVK Dinucleotides" and "VVK Consensus" (www.barkovsky.hotmail.ru).
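    The quantity being compared, guanine usage by codon position (1G, 2G, 3G), is straightforward to compute, as the sketch below shows on an invented stand-in sequence; it does not reproduce the authors' VVK algorithms.

```python
# Sketch: guanine fraction at each codon position of a coding sequence.
from collections import Counter

def guanine_usage(cds):
    counts = Counter()
    codons = [cds[i:i + 3] for i in range(0, len(cds) - len(cds) % 3, 3)]
    for codon in codons:
        for pos, base in enumerate(codon, start=1):
            if base == "G":
                counts[pos] += 1
    total = len(codons)
    return {f"{p}G": counts[p] / total for p in (1, 2, 3)}

print(guanine_usage("TGTACAAGACCCAACAACAATACAAGAAAAAGT"))
# A 2G > 1G > 3G profile suggests G-to-A transitions are more often
# nonsynonymous; 3G-rich regions are comparatively protected.
```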

  20. Research prioritization through prediction of future impact on biomedical science: a position paper on inference-analytics.

    PubMed

    Ganapathiraju, Madhavi K; Orii, Naoki

    2013-08-30

    Advances in biotechnology have created "big-data" situations in molecular and cellular biology. Several sophisticated algorithms have been developed that process big data to generate hundreds of biomedical hypotheses (or predictions). The bottleneck to translating this large number of biological hypotheses is that each of them needs to be studied by experimentation for interpreting its functional significance. Even when the predictions are estimated to be very accurate, from a biologist's perspective, the choice of which of these predictions is to be studied further is made based on factors like availability of reagents and resources and the possibility of formulating some reasonable hypothesis about its biological relevance. When viewed from a global perspective, say from that of a federal funding agency, ideally the choice of which prediction should be studied would be made based on which of them can make the most translational impact. We propose that algorithms be developed to identify which of the computationally generated hypotheses have potential for high translational impact; this way, funding agencies and scientific community can invest resources and drive the research based on a global view of biomedical impact without being deterred by local view of feasibility. In short, data-analytic algorithms analyze big-data and generate hypotheses; in contrast, the proposed inference-analytic algorithms analyze these hypotheses and rank them by predicted biological impact. We demonstrate this through the development of an algorithm to predict biomedical impact of protein-protein interactions (PPIs) which is estimated by the number of future publications that cite the paper which originally reported the PPI. This position paper describes a new computational problem that is relevant in the era of big-data and discusses the challenges that exist in studying this problem, highlighting the need for the scientific community to engage in this line of research. The proposed class of algorithms, namely inference-analytic algorithms, is necessary to ensure that resources are invested in translating those computational outcomes that promise maximum biological impact. Application of this concept to predict biomedical impact of PPIs illustrates not only the concept, but also the challenges in designing these algorithms.

  1. Medical chart validation of an algorithm for identifying multiple sclerosis relapse in healthcare claims.

    PubMed

    Chastek, Benjamin J; Oleen-Burkey, Merrikay; Lopez-Bresnahan, Maria V

    2010-01-01

    Relapse is a common measure of disease activity in relapsing-remitting multiple sclerosis (MS). The objective of this study was to test the content validity of an operational algorithm for detecting relapse in claims data. A claims-based relapse detection algorithm was tested by comparing its detection rate over a 1-year period with relapses identified based on medical chart review. According to the algorithm, MS patients in a US healthcare claims database who had either (1) a primary claim for MS during hospitalization or (2) a corticosteroid claim following an MS-related outpatient visit were designated as having a relapse. Patient charts were examined for explicit indication of relapse or care suggestive of relapse. Positive and negative predictive values were calculated. Medical charts were reviewed for 300 MS patients, half of whom had a relapse according to the algorithm. The claims-based criteria correctly classified 67.3% of patients with relapses (positive predictive value) and 70.0% of patients without relapses (negative predictive value); kappa = 0.373, p < 0.001. Alternative algorithms did not improve on the predictive value of the operational algorithm. Limitations of the algorithm include lack of differentiation between relapsing-remitting MS and other types, and that it does not incorporate measures of function and disability. The claims-based algorithm appeared to successfully detect moderate-to-severe MS relapse. This validated definition can be applied to future claims-based MS studies.
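    The operational algorithm itself is a two-rule check over claims, which the sketch below transcribes against a hypothetical claims schema; the field names and the 7-day window tying a corticosteroid claim to an MS-related outpatient visit are assumptions, since the abstract does not state the exact window.

```python
# Sketch: the two-rule claims-based relapse criterion over hypothetical records.
def is_relapse(claims):
    """claims: iterable of dicts with 'setting', 'primary_dx', 'drug', and
    'days_after_ms_visit' keys (hypothetical schema, not the study's)."""
    for c in claims:
        hospital_ms = (c["setting"] == "inpatient" and c["primary_dx"] == "MS")
        steroid_after_visit = (c["drug"] == "corticosteroid"
                               and c.get("days_after_ms_visit", 99) <= 7)
        if hospital_ms or steroid_after_visit:
            return True
    return False

patient = [{"setting": "outpatient", "primary_dx": "MS",
            "drug": "corticosteroid", "days_after_ms_visit": 2}]
assert is_relapse(patient)
```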

  2. Comparative evaluation of atom mapping algorithms for balanced metabolic reactions: application to Recon 3D

    DOE PAGES

    Preciat Gonzalez, German A.; El Assal, Lemmer R. P.; Noronha, Alberto; ...

    2017-06-14

    The mechanism of each chemical reaction in a metabolic network can be represented as a set of atom mappings, each of which relates an atom in a substrate metabolite to an atom of the same element in a product metabolite. Genome-scale metabolic network reconstructions typically represent biochemistry at the level of reaction stoichiometry. However, a more detailed representation at the underlying level of atom mappings opens the possibility for a broader range of biological, biomedical and biotechnological applications than with stoichiometry alone. Complete manual acquisition of atom mapping data for a genome-scale metabolic network is a laborious process. However, many algorithms exist to predict atom mappings. How do their predictions compare to each other and to manually curated atom mappings? For more than four thousand metabolic reactions in the latest human metabolic reconstruction, Recon 3D, we compared the atom mappings predicted by six atom mapping algorithms. We also compared these predictions to those obtained by manual curation of atom mappings for over five hundred reactions distributed among all top level Enzyme Commission number classes. Five of the evaluated algorithms had similarly high prediction accuracy of over 91% when compared to manually curated atom mapped reactions. On average, the accuracy of the prediction was highest for reactions catalysed by oxidoreductases and lowest for reactions catalysed by ligases. In addition to prediction accuracy, the algorithms were evaluated on their accessibility, their advanced features, such as the ability to identify equivalent atoms, and their ability to map hydrogen atoms. In addition to prediction accuracy, we found that software accessibility and advanced features were fundamental to the selection of an atom mapping algorithm in practice.

  3. Comparative evaluation of atom mapping algorithms for balanced metabolic reactions: application to Recon 3D

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Preciat Gonzalez, German A.; El Assal, Lemmer R. P.; Noronha, Alberto

    The mechanism of each chemical reaction in a metabolic network can be represented as a set of atom mappings, each of which relates an atom in a substrate metabolite to an atom of the same element in a product metabolite. Genome-scale metabolic network reconstructions typically represent biochemistry at the level of reaction stoichiometry. However, a more detailed representation at the underlying level of atom mappings opens the possibility for a broader range of biological, biomedical and biotechnological applications than with stoichiometry alone. Complete manual acquisition of atom mapping data for a genome-scale metabolic network is a laborious process. However, many algorithms exist to predict atom mappings. How do their predictions compare to each other and to manually curated atom mappings? For more than four thousand metabolic reactions in the latest human metabolic reconstruction, Recon 3D, we compared the atom mappings predicted by six atom mapping algorithms. We also compared these predictions to those obtained by manual curation of atom mappings for over five hundred reactions distributed among all top level Enzyme Commission number classes. Five of the evaluated algorithms had similarly high prediction accuracy of over 91% when compared to manually curated atom mapped reactions. On average, the accuracy of the prediction was highest for reactions catalysed by oxidoreductases and lowest for reactions catalysed by ligases. In addition to prediction accuracy, the algorithms were evaluated on their accessibility, their advanced features, such as the ability to identify equivalent atoms, and their ability to map hydrogen atoms. In addition to prediction accuracy, we found that software accessibility and advanced features were fundamental to the selection of an atom mapping algorithm in practice.

  4. Comparative evaluation of atom mapping algorithms for balanced metabolic reactions: application to Recon 3D.

    PubMed

    Preciat Gonzalez, German A; El Assal, Lemmer R P; Noronha, Alberto; Thiele, Ines; Haraldsdóttir, Hulda S; Fleming, Ronan M T

    2017-06-14

    The mechanism of each chemical reaction in a metabolic network can be represented as a set of atom mappings, each of which relates an atom in a substrate metabolite to an atom of the same element in a product metabolite. Genome-scale metabolic network reconstructions typically represent biochemistry at the level of reaction stoichiometry. However, a more detailed representation at the underlying level of atom mappings opens the possibility for a broader range of biological, biomedical and biotechnological applications than with stoichiometry alone. Complete manual acquisition of atom mapping data for a genome-scale metabolic network is a laborious process. However, many algorithms exist to predict atom mappings. How do their predictions compare to each other and to manually curated atom mappings? For more than four thousand metabolic reactions in the latest human metabolic reconstruction, Recon 3D, we compared the atom mappings predicted by six atom mapping algorithms. We also compared these predictions to those obtained by manual curation of atom mappings for over five hundred reactions distributed among all top level Enzyme Commission number classes. Five of the evaluated algorithms had similarly high prediction accuracy of over 91% when compared to manually curated atom mapped reactions. On average, the accuracy of the prediction was highest for reactions catalysed by oxidoreductases and lowest for reactions catalysed by ligases. In addition to prediction accuracy, the algorithms were evaluated on their accessibility, their advanced features, such as the ability to identify equivalent atoms, and their ability to map hydrogen atoms. In addition to prediction accuracy, we found that software accessibility and advanced features were fundamental to the selection of an atom mapping algorithm in practice.

  5. The Current Status of Unsteady CFD Approaches for Aerodynamic Flow Control

    NASA Technical Reports Server (NTRS)

    Carpenter, Mark H.; Singer, Bart A.; Yamaleev, Nail; Vatsa, Veer N.; Viken, Sally A.; Atkins, Harold L.

    2002-01-01

    An overview of the current status of time dependent algorithms is presented. Special attention is given to algorithms used to predict fluid actuator flows, as well as other active and passive flow control devices. Capabilities for the next decade are predicted, and principal impediments to the progress of time-dependent algorithms are identified.

  6. A Sensor Dynamic Measurement Error Prediction Model Based on NAPSO-SVM.

    PubMed

    Jiang, Minlan; Jiang, Lan; Jiang, Dingde; Li, Fei; Song, Houbing

    2018-01-15

    Dynamic measurement error correction is an effective way to improve sensor precision. Dynamic measurement error prediction is an important part of error correction, and the support vector machine (SVM) is often used for predicting the dynamic measurement errors of sensors. Traditionally, the SVM parameters were always set manually, which cannot ensure the model's performance. In this paper, an SVM method based on an improved particle swarm optimization (NAPSO) is proposed to predict the dynamic measurement errors of sensors. Natural selection and simulated annealing are added to the PSO to raise its ability to avoid local optima. To verify the performance of NAPSO-SVM, three types of algorithms are selected to optimize the SVM's parameters: the particle swarm optimization algorithm (PSO), the improved PSO optimization algorithm (NAPSO), and the glowworm swarm optimization (GSO). The dynamic measurement error data of two sensors are applied as the test data. The root mean squared error and mean absolute percentage error are employed to evaluate the prediction models' performances. The experimental results show that among the three tested algorithms the NAPSO-SVM method has better prediction precision and fewer prediction errors, and is an effective method for predicting the dynamic measurement errors of sensors.
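    A condensed sketch of the approach: particle-swarm search over the SVM's (C, gamma), with a natural-selection step that copies the best particles over the worst each generation. The simulated-annealing acceptance of the full NAPSO is omitted for brevity, and the data and constants are illustrative.

```python
# Sketch: PSO (with a natural-selection step) tuning SVM hyperparameters.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

rng = np.random.default_rng(5)
X = rng.random((80, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(0, 0.05, 80)

def fitness(p):  # negative CV error for p = (log10 C, log10 gamma)
    model = SVR(C=10 ** p[0], gamma=10 ** p[1])
    return cross_val_score(model, X, y, cv=3,
                           scoring="neg_mean_squared_error").mean()

n = 10
pos = rng.uniform(-2, 2, (n, 2))
vel = np.zeros_like(pos)
pbest, pfit = pos.copy(), np.array([fitness(p) for p in pos])
for _ in range(15):
    g = pbest[pfit.argmax()]
    r1, r2 = rng.random((n, 2)), rng.random((n, 2))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (g - pos)
    pos = np.clip(pos + vel, -2, 2)
    fit = np.array([fitness(p) for p in pos])
    better = fit > pfit
    pbest[better], pfit[better] = pos[better], fit[better]
    order = np.argsort(fit)          # natural selection: worst copy the best
    pos[order[:2]] = pos[order[-2:]]
best_logC, best_loggamma = pbest[pfit.argmax()]
```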

  7. BIG DATA ANALYTICS AND PRECISION ANIMAL AGRICULTURE SYMPOSIUM: Data to decisions.

    PubMed

    White, B J; Amrine, D E; Larson, R L

    2018-04-14

    Big data are frequently used in many facets of business and agronomy to enhance knowledge needed to improve operational decisions. Livestock operations collect data of sufficient quantity to perform predictive analytics. Predictive analytics can be defined as a methodology and suite of data evaluation techniques to generate a prediction for specific target outcomes. The objective of this manuscript is to describe the process of using big data and the predictive analytic framework to create tools to drive decisions in livestock production, health, and welfare. The predictive analytic process involves selecting a target variable, managing the data, partitioning the data, then creating algorithms, refining algorithms, and finally comparing accuracy of the created classifiers. The partitioning of the datasets allows model building and refining to occur prior to testing the predictive accuracy of the model with naive data to evaluate overall accuracy. Many different classification algorithms are available for predictive use and testing multiple algorithms can lead to optimal results. Application of a systematic process for predictive analytics using data that is currently collected or that could be collected on livestock operations will facilitate precision animal management through enhanced livestock operational decisions.

  8. Predictive Caching Using the TDAG Algorithm

    NASA Technical Reports Server (NTRS)

    Laird, Philip; Saul, Ronald

    1992-01-01

    We describe how the TDAG algorithm for learning to predict symbol sequences can be used to design a predictive cache store. A model of a two-level mass storage system is developed and used to calculate the performance of the cache under various conditions. Experimental simulations provide good confirmation of the model.
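
    TDAG itself grows a tree of variable-length contexts; purely as an illustration of the idea of learning a symbol sequence for prefetch decisions, the sketch below uses a much simpler fixed-order context table (an assumption for illustration, not Laird and Saul's actual data structure) to predict the next requested block:

    ```python
    from collections import defaultdict, Counter

    class ContextPredictor:
        """Order-k frequency table over symbol sequences: a simplified
        stand-in for TDAG, which grows a tree of variable-length contexts."""

        def __init__(self, k=2):
            self.k = k
            self.counts = defaultdict(Counter)
            self.history = []

        def observe(self, symbol):
            ctx = tuple(self.history[-self.k:])
            self.counts[ctx][symbol] += 1
            self.history.append(symbol)

        def predict(self):
            """Most likely next symbol given the current context, or None."""
            ctx = tuple(self.history[-self.k:])
            if not self.counts[ctx]:
                return None
            return self.counts[ctx].most_common(1)[0][0]

    # Hypothetical trace of block accesses; a predictive cache would
    # prefetch whatever block the predictor returns.
    p = ContextPredictor(k=2)
    for block in [1, 2, 3, 1, 2, 3, 1, 2]:
        p.observe(block)
    print(p.predict())  # -> 3
    ```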

  9. Predictive Model of Linear Antimicrobial Peptides Active against Gram-Negative Bacteria.

    PubMed

    Vishnepolsky, Boris; Gabrielian, Andrei; Rosenthal, Alex; Hurt, Darrell E; Tartakovsky, Michael; Managadze, Grigol; Grigolava, Maya; Makhatadze, George I; Pirtskhalava, Malak

    2018-05-29

    Antimicrobial peptides (AMPs) have been identified as a potential new class of anti-infectives for drug development. Many computational methods attempt to predict AMPs. Most of them can only predict whether a peptide will show any antimicrobial potency, but to the best of our knowledge, there are no tools which can predict antimicrobial potency against particular strains. Here we present a predictive model of linear AMPs being active against particular Gram-negative strains relying on a semi-supervised machine-learning approach with a density-based clustering algorithm. The algorithm reliably distinguishes peptides active against particular strains from others which may also be active but not against the considered strain. The available AMP prediction tools cannot carry out this task. The prediction tool based on the algorithm suggested herein is available at https://dbaasp.org.

  10. Unified method of knowledge representation in the evolutionary artificial intelligence systems

    NASA Astrophysics Data System (ADS)

    Bykov, Nickolay M.; Bykova, Katherina N.

    2003-03-01

    The evolution of artificial intelligence systems, driven by the growing complexity of their application domains and by scientific progress, has led to a diversification of the methods and algorithms for representing and using knowledge in such systems. For this reason it is often difficult to design effective methods of knowledge discovery and manipulation for them. In this work, the authors propose a method for the unified representation of a system's knowledge about objects in the external world through a rank transformation of their descriptions, expressed in different feature spaces: deterministic, probabilistic, fuzzy and others. A proof is presented that information about the rank configuration of the object states in the feature space is sufficient for decision making. It is shown that the geometric and combinatorial models of the set of rank configurations can be introduced by a group of an incidence system, which allows the information about them to be stored in a compact form. A method for describing a rank configuration by a DRP code (distance rank preserving code) is offered, and its completeness, information capacity, noise immunity and privacy are reviewed. It is shown that the capacity of a transmission channel for such a representation of the information is greater than unity, as the code words contain information both about the object states and about the distance ranks between them. An effective data clustering algorithm for identifying object states, based on this code, is described. Representing knowledge with rank configurations makes it possible to unify and simplify decision-making algorithms by performing logical operations on DRP code words. Examples of the proposed clustering technique operating on a given sample set, the rank configurations of the resulting clusters and their DRP codes are presented.

  11. Sequence and System in the Acquisition of Tense and Agreement

    ERIC Educational Resources Information Center

    Rispoli, Matthew; Hadley, Pamela A.; Holt, Janet K.

    2012-01-01

    Purpose: The relatedness of tense morphemes in the language of children younger than 3 years of age is a matter of controversy. Generativist accounts predict that the morphemes will be related, whereas usage-based accounts predict the absence of relationships. This study focused on the increasing productivity of the 5 morphemes in the tense…

  12. Are Career Centers Worthwhile?: Predicting Unique Variance in Career Outcomes through Career Center Usage

    ERIC Educational Resources Information Center

    Brotheridge, Celeste M.; Power, Jacqueline L.

    2008-01-01

    Purpose: This study seeks to examine the extent to which the use of career center services results in the significant incremental prediction of career outcomes beyond its established predictors. Design/methodology/approach: The authors survey the clients of a public agency's career center and use hierarchical multiple regressions in order to…

  13. Prognostics Uncertainty Management with Application to Government and Industry

    NASA Technical Reports Server (NTRS)

    Celaya, Jose; Sankararaman, Shankar; Daigle, Matthew; Saxena, Abhinav; Goebel, Kai

    2014-01-01

    Predictions about the future are contingent on future usage, but also on the quality of the models employed and the assessment of the current health state. These factors, amongst others, need to be considered to arrive at a prediction that is conducted through a rigorous method but where the confidence bounds are not prohibitively large.

  14. Community detection in complex networks using link prediction

    NASA Astrophysics Data System (ADS)

    Cheng, Hui-Min; Ning, Yi-Zi; Yin, Zhao; Yan, Chao; Liu, Xin; Zhang, Zhong-Yuan

    2018-01-01

    Community detection and link prediction are both of great significance in network analysis, and both provide valuable insights into the topological structure of a network from different perspectives. In this paper, we propose a novel community detection algorithm that incorporates link prediction, motivated by the question of whether link prediction can improve the accuracy of community partitioning. For link prediction, we propose two novel indices to compute the similarity between each pair of nodes, one of which aims to add missing links, while the other tries to remove spurious edges. Extensive experiments are conducted on benchmark data sets, and the results of our proposed algorithm are compared with two classes of baselines. In conclusion, our proposed algorithm is competitive, revealing that link prediction does improve the precision of community detection.
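
    The paper's two similarity indices are novel and not reproduced here; as a generic illustration of the link-prediction step, the classical common-neighbors index scores candidate missing links like this:

    ```python
    import networkx as nx

    def common_neighbors_score(G, u, v):
        """Standard common-neighbors similarity index: the more shared
        neighbors two unlinked nodes have, the more likely a missing link."""
        return len(list(nx.common_neighbors(G, u, v)))

    G = nx.karate_club_graph()
    # Rank all non-edges by similarity; top-scoring pairs are candidate
    # missing links that could be added before running community detection.
    candidates = sorted(
        ((u, v, common_neighbors_score(G, u, v)) for u, v in nx.non_edges(G)),
        key=lambda t: -t[2],
    )
    print(candidates[:3])
    ```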

  15. The sensitivity and negative predictive value of a pediatric cervical spine clearance algorithm that minimizes computerized tomography.

    PubMed

    Arbuthnot, Mary; Mooney, David P

    2017-01-01

    It is crucial to identify cervical spine injuries while minimizing ionizing radiation. This study analyzes the sensitivity and negative predictive value of a pediatric cervical spine clearance algorithm. We performed a retrospective review of all children <21 years old who were admitted following blunt trauma and underwent cervical spine clearance utilizing our institution's cervical spine clearance algorithm over a 10-year period. Age, gender, International Classification of Diseases 9th Edition diagnosis codes, presence or absence of cervical collar on arrival, Injury Severity Score, and type of cervical spine imaging obtained were extracted from the trauma registry and electronic medical record. Descriptive statistics were used and the sensitivity and negative predictive value of the algorithm were calculated. Approximately 125,000 children were evaluated in the Emergency Department and 11,331 were admitted. Of the admitted children, 1023 patients arrived in a cervical collar without advanced cervical spine imaging and were evaluated using the cervical spine clearance algorithm. Algorithm sensitivity was 94.4% and the negative predictive value was 99.9%. There was one missed injury, a spinous process tip fracture in a teenager maintained in a collar. Our algorithm was associated with a low missed injury rate and low CT utilization rate, even in children <3 years old. Level of evidence: IV.
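
    For reference, sensitivity and negative predictive value follow directly from the confusion counts. The numbers below are illustrative values chosen to be consistent with the reported one missed injury and 94.4% sensitivity, not the study's actual registry tallies:

    ```python
    def sensitivity_npv(tp, fn, tn, fp):
        """Sensitivity = TP/(TP+FN); negative predictive value = TN/(TN+FN)."""
        sensitivity = tp / (tp + fn)
        npv = tn / (tn + fn)
        return sensitivity, npv

    # Hypothetical counts: one missed injury (fn=1) and 17 detected ones
    # reproduce the reported 94.4% sensitivity.
    sens, npv = sensitivity_npv(tp=17, fn=1, tn=1005, fp=0)
    print(f"sensitivity={sens:.3f}, NPV={npv:.4f}")
    ```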

  16. A material political economy: Automated Trading Desk and price prediction in high-frequency trading.

    PubMed

    MacKenzie, Donald

    2017-04-01

    This article contains the first detailed historical study of one of the new high-frequency trading (HFT) firms that have transformed many of the world's financial markets. The study, of Automated Trading Desk (ATD), one of the earliest and most important such firms, focuses on how ATD's algorithms predicted share price changes. The article argues that political-economic struggles are integral to the existence of some of the 'pockets' of predictable structure in the otherwise random movements of prices, to the availability of the data that allow algorithms to identify these pockets, and to the capacity of algorithms to use these predictions to trade profitably. The article also examines the role of HFT algorithms such as ATD's in the epochal, fiercely contested shift in US share trading from 'fixed-role' markets towards 'all-to-all' markets.

  17. Local-search based prediction of medical image registration error

    NASA Astrophysics Data System (ADS)

    Saygili, Görkem

    2018-03-01

    Medical image registration is a crucial task in many different medical imaging applications. Hence, a considerable amount of work has been published recently that aims to predict the error of a registration without any human effort. If provided, these error predictions can be used as feedback to the registration algorithm to further improve its performance. Recent methods generally start with extracting image-based and deformation-based features, then apply feature pooling and finally train a Random Forest (RF) regressor to predict the real registration error. Image-based features can be calculated after applying a single registration but provide limited accuracy, whereas deformation-based features such as the variation of the deformation vector field may require up to 20 registrations, which is a considerably time-consuming task. This paper proposes to use features extracted from a local search algorithm as image-based features to estimate the error of a registration. The proposed method comprises a local search algorithm to find corresponding voxels between registered image pairs and, based on the amount of shift and stereo confidence measures, it densely predicts the amount of registration error in millimetres using an RF regressor. Compared to other algorithms in the literature, the proposed algorithm does not require multiple registrations, can be efficiently implemented on a Graphical Processing Unit (GPU) and can still provide highly accurate error predictions in the presence of large registration errors. Experimental results with real registrations on a public dataset indicate a substantially high accuracy achieved by using features from the local search algorithm.

  18. The Role of Habits in Massive Multiplayer Online Role-Playing Game Usage: Predicting Excessive and Problematic Gaming Through Players' Sensitivity to Situational Cues.

    PubMed

    Lukavská, Kateřina; Hrabec, Ondřej; Chrz, Vladimír

    2016-04-01

    We examined the effect of habitual regulation of massive multiplayer online role-playing game (MMORPG) playing on the problematic (addictive) usage and excessiveness of gaming (time that user spent playing weekly, per session, and in relation to his other leisure activities). We developed the approach to assess the strength of habitual regulation that was based on sensitivity to situational cues. We defined cues as real-life or in-game conditions (e.g., work to be done, activities with friends or family, need to relax, new game expansion) that usually promote gaming (proplay cues) or prevent it (contraplay cues). Using a sample of 377 MMORPG players, we analyzed relationships between variables through partial least squares path modeling. We found that proplay cues sensitivity significantly positively affected the excessiveness of gaming (playing time) as well as the occurrence of problematic usage symptoms. Conversely, contraplay cues sensitivity functioned as a protective factor from these conditions; significant negative effects were found for playing time and problematic usage. Playing time was confirmed to be a mediating variable, affected by cues sensitivity and at the same time affecting problematic usage symptoms. We obtained moderately strong coefficients of determination for both endogenous variables (R(2) = 0.28 for playing time; R(2) = 0.31 for problematic usage) suggesting that the proposed variables possess good explanatory power. Based on our results, we argue that the strength of habitual regulation within MMORPG usage has both positive and negative effects on excessive and problematic usage, which is a new and important finding within the area of Internet gaming addiction.

  19. Choosing the appropriate forecasting model for predictive parameter control.

    PubMed

    Aleti, Aldeida; Moser, Irene; Meedeniya, Indika; Grunske, Lars

    2014-01-01

    All commonly used stochastic optimisation algorithms have to be parameterised to perform effectively. Adaptive parameter control (APC) is an effective method used for this purpose. APC repeatedly adjusts parameter values during the optimisation process for optimal algorithm performance. The assignment of parameter values for a given iteration is based on previously measured performance. In recent research, time series prediction has been proposed as a method of projecting the probabilities to use for parameter value selection. In this work, we examine the suitability of a variety of prediction methods for the projection of future parameter performance based on previous data. All considered prediction methods make assumptions to which the time series data must conform if the method is to provide accurate projections. Looking specifically at parameters of evolutionary algorithms (EAs), we find that all standard EA parameters with the exception of population size conform largely to the assumptions made by the considered prediction methods. Evaluating the performance of these prediction methods, we find that linear regression provides the best results by a very small and statistically insignificant margin. Regardless of the prediction method, predictive parameter control outperforms state-of-the-art parameter control methods when the performance data adheres to the assumptions made by the prediction method. When a parameter's performance data does not adhere to the assumptions made by the forecasting method, the use of prediction does not have a notable adverse impact on the algorithm's performance.
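
    As a minimal sketch of the prediction step only (hypothetical success rates, and plain least-squares in place of the authors' full comparison of forecasting models): fit a linear trend to each parameter value's past performance, project one iteration ahead, and normalize the projections into selection probabilities:

    ```python
    import numpy as np

    def forecast_next(perf_history):
        """Fit a linear trend to past performance, project one step ahead."""
        t = np.arange(len(perf_history))
        slope, intercept = np.polyfit(t, perf_history, 1)
        return slope * len(perf_history) + intercept

    # Hypothetical success rates of two mutation-rate settings over 5 iterations.
    histories = {0.01: [0.40, 0.42, 0.45, 0.47, 0.50],
                 0.10: [0.55, 0.52, 0.50, 0.47, 0.45]}
    projected = {p: forecast_next(h) for p, h in histories.items()}
    total = sum(max(v, 0.0) for v in projected.values())
    probs = {p: max(v, 0.0) / total for p, v in projected.items()}
    print(probs)   # selection probabilities for the next iteration
    ```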

  20. Electrostatic Discharge (ESD) Susceptibility of Electronic Devices

    DTIC Science & Technology

    1983-01-01

    determined by knowing the thermal environment in which the component is to operate. However, ESD is not that predictable. It is considered a random... predicted where a part may experience an ESD pulse during usage? (3) What effect does stressing a device with different models have on its propensity for...relatively low ESD threshold voltages (i.e., < 7000 volts) tend to be consistently high compared with their predicted level using EMP data. But at

  1. Modeling pilot interaction with automated digital avionics systems: Guidance and control algorithms for contour and nap-of-the-Earth flight

    NASA Technical Reports Server (NTRS)

    Hess, Ronald A.

    1990-01-01

    A collection of technical papers are presented that cover modeling pilot interaction with automated digital avionics systems and guidance and control algorithms for contour and nap-of-the-earth flight. The titles of the papers presented are as follows: (1) Automation effects in a multiloop manual control system; (2) A qualitative model of human interaction with complex dynamic systems; (3) Generalized predictive control of dynamic systems; (4) An application of generalized predictive control to rotorcraft terrain-following flight; (5) Self-tuning generalized predictive control applied to terrain-following flight; and (6) Precise flight path control using a predictive algorithm.

  2. Space shuttle propulsion parameter estimation using optimal estimation techniques

    NASA Technical Reports Server (NTRS)

    1983-01-01

    The first twelve system state variables are presented with the necessary mathematical developments for incorporating them into the filter/smoother algorithm. Other state variables, i.e., aerodynamic coefficients can be easily incorporated into the estimation algorithm, representing uncertain parameters, but for initial checkout purposes are treated as known quantities. An approach for incorporating the NASA propulsion predictive model results into the optimal estimation algorithm was identified. This approach utilizes numerical derivatives and nominal predictions within the algorithm with global iterations of the algorithm. The iterative process is terminated when the quality of the estimates provided no longer significantly improves.

  3. Development of an Evolutionary Algorithm for the ab Initio Discovery of Two-Dimensional Materials

    NASA Astrophysics Data System (ADS)

    Revard, Benjamin Charles

    Crystal structure prediction is an important first step on the path toward computational materials design. Increasingly robust methods have become available in recent years for computing many materials properties, but because properties are largely a function of crystal structure, the structure must be known before these methods can be brought to bear. In addition, structure prediction is particularly useful for identifying low-energy structures of subperiodic materials, such as two-dimensional (2D) materials, which may adopt unexpected structures that differ from those of the corresponding bulk phases. Evolutionary algorithms, which are heuristics for global optimization inspired by biological evolution, have proven to be a fruitful approach for tackling the problem of crystal structure prediction. This thesis describes the development of an improved evolutionary algorithm for structure prediction and several applications of the algorithm to predict the structures of novel low-energy 2D materials. The first part of this thesis contains an overview of evolutionary algorithms for crystal structure prediction and presents our implementation, including details of extending the algorithm to search for clusters, wires, and 2D materials, improvements to efficiency when running in parallel, improved composition space sampling, and the ability to search for partial phase diagrams. We then present several applications of the evolutionary algorithm to 2D systems, including InP, the C-Si and Sn-S phase diagrams, and several group-IV dioxides. This thesis makes use of the Cornell graduate school's "papers" option. Chapters 1 and 3 correspond to the first-author publications of Refs. [131] and [132], respectively, and chapter 2 will soon be submitted as a first-author publication. The material in chapter 4 is taken from Ref. [144], in which I share joint first-authorship. In this case I have included only my own contributions.

  4. A parallel algorithm for the initial screening of space debris collisions prediction using the SGP4/SDP4 models and GPU acceleration

    NASA Astrophysics Data System (ADS)

    Lin, Mingpei; Xu, Ming; Fu, Xiaoyu

    2017-05-01

    Currently, a tremendous amount of space debris in Earth's orbit imperils operational spacecraft. It is essential to undertake risk assessments of collisions and predict dangerous encounters in space. However, collision predictions for an enormous amount of space debris give rise to large-scale computations. In this paper, a parallel algorithm is established on the Compute Unified Device Architecture (CUDA) platform of NVIDIA Corporation for collision prediction. According to the parallel structure of NVIDIA graphics processors, a block decomposition strategy is adopted in the algorithm. Space debris is divided into batches, and the computation and data transfer operations of adjacent batches overlap. As a consequence, the latency to access shared memory during the entire computing process is significantly reduced, and a higher computing speed is reached. Theoretically, a simulation of collision prediction for space debris of any amount and for any time span can be executed. To verify this algorithm, a simulation example including 1382 pieces of debris, whose operational time scales vary from 1 min to 3 days, is conducted on Tesla C2075 of NVIDIA. The simulation results demonstrate that with the same computational accuracy as that of a CPU, the computing speed of the parallel algorithm on a GPU is 30 times that on a CPU. Based on this algorithm, collision prediction of over 150 Chinese spacecraft for a time span of 3 days can be completed in less than 3 h on a single computer, which meets the timeliness requirement of the initial screening task. Furthermore, the algorithm can be adapted for multiple tasks, including particle filtration, constellation design, and Monte-Carlo simulation of an orbital computation.

  5. Embedded System for Prosthetic Control Using Implanted Neuromuscular Interfaces Accessed Via an Osseointegrated Implant.

    PubMed

    Mastinu, Enzo; Doguet, Pascal; Botquin, Yohan; Hakansson, Bo; Ortiz-Catalan, Max

    2017-08-01

    Despite the technological progress in robotics achieved in the last decades, prosthetic limbs still lack functionality, reliability, and comfort. Recently, an implanted neuromusculoskeletal interface built upon osseointegration was developed and tested in humans, namely the Osseointegrated Human-Machine Gateway. Here, we present an embedded system to exploit the advantages of this technology. Our artificial limb controller allows for bioelectric signals acquisition, processing, decoding of motor intent, prosthetic control, and sensory feedback. It includes a neurostimulator to provide direct neural feedback based on sensory information. The system was validated using real-time tasks characterization, power consumption evaluation, and myoelectric pattern recognition performance. Functionality was proven in a first pilot patient from whom results of daily usage were obtained. The system was designed to be reliably used in activities of daily living, as well as a research platform to monitor prosthesis usage and training, machine-learning-based control algorithms, and neural stimulation paradigms.

  6. A complex network-based importance measure for mechatronics systems

    NASA Astrophysics Data System (ADS)

    Wang, Yanhui; Bi, Lifeng; Lin, Shuai; Li, Man; Shi, Hao

    2017-01-01

    In view of the negative impact of functional dependency, this paper attempts to provide an alternative importance measure called Improved-PageRank (IPR) for measuring the importance of components in mechatronics systems. IPR is a meaningful extension of centrality measures in complex networks, which takes the usage reliability of components and the functional dependency between components into account to make the importance measure more useful. Our work makes two important contributions. First, this paper integrates the literature on mechatronic architecture and complex networks theory to define the component network. Second, based on the notion of the component network, IPR is applied to the identification of important components. In addition, the IPR component importance measures, and an algorithm to perform stochastic ordering of components that reflects the time-varying nature of component usage reliability and inter-component functional dependency, are illustrated with the component network of a bogie system consisting of 27 components.

  7. Deep learning improves prediction of CRISPR-Cpf1 guide RNA activity.

    PubMed

    Kim, Hui Kwon; Min, Seonwoo; Song, Myungjae; Jung, Soobin; Choi, Jae Woo; Kim, Younggwang; Lee, Sangeun; Yoon, Sungroh; Kim, Hyongbum Henry

    2018-03-01

    We present two algorithms to predict the activity of AsCpf1 guide RNAs. Indel frequencies for 15,000 target sequences were used in a deep-learning framework based on a convolutional neural network to train Seq-deepCpf1. We then incorporated chromatin accessibility information to create the better-performing DeepCpf1 algorithm for cell lines for which such information is available and show that both algorithms outperform previous machine learning algorithms on our own and published data sets.
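
    As a rough structural sketch only (hypothetical layer sizes and input length; not the published Seq-deepCpf1 architecture), a convolutional regression network over one-hot-encoded target sequences can be set up along these lines:

    ```python
    import numpy as np
    from tensorflow import keras

    def one_hot(seq, alphabet="ACGT"):
        """Encode a DNA target site as a (length, 4) one-hot matrix."""
        idx = {c: i for i, c in enumerate(alphabet)}
        x = np.zeros((len(seq), 4), dtype=np.float32)
        for i, c in enumerate(seq):
            x[i, idx[c]] = 1.0
        return x

    # Hypothetical 34-nt target context; layer sizes are illustrative.
    model = keras.Sequential([
        keras.Input(shape=(34, 4)),
        keras.layers.Conv1D(32, kernel_size=5, activation="relu"),
        keras.layers.GlobalAveragePooling1D(),
        keras.layers.Dense(64, activation="relu"),
        keras.layers.Dense(1),  # regression target: measured indel frequency
    ])
    model.compile(optimizer="adam", loss="mse")

    x = one_hot("ACGT" * 8 + "AC")[None, ...]   # one hypothetical 34-nt site
    print(model.predict(x).shape)                # -> (1, 1)
    ```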

  8. A statistical analysis of RNA folding algorithms through thermodynamic parameter perturbation.

    PubMed

    Layton, D M; Bundschuh, R

    2005-01-01

    Computational RNA secondary structure prediction is rather well established. However, such prediction algorithms always depend on a large number of experimentally measured parameters. Here, we study how sensitive structure prediction algorithms are to changes in these parameters. We find that, already for changes corresponding to the actual experimental error with which these parameters have been determined, 30% of the structure is falsely predicted, whereas the ground-state structure is preserved under parameter perturbation in only 5% of all cases. We establish that base-pairing probabilities calculated in a thermal ensemble are a viable, although not perfect, measure for the reliability of the prediction of individual structure elements. Finally, a new measure of stability using parameter perturbation is proposed, and its limitations are discussed.

  9. Seeing the "Big" Picture: Big Data Methods for Exploring Relationships Between Usage, Language, and Outcome in Internet Intervention Data.

    PubMed

    Carpenter, Jordan; Crutchley, Patrick; Zilca, Ran D; Schwartz, H Andrew; Smith, Laura K; Cobb, Angela M; Parks, Acacia C

    2016-08-31

    Assessing the efficacy of Internet interventions that are already in the market introduces both challenges and opportunities. While vast, often unprecedented amounts of data may be available (hundreds of thousands, and sometimes millions of participants with high dimensions of assessed variables), the data are observational in nature, are partly unstructured (eg, free text, images, sensor data), do not include a natural control group to be used for comparison, and typically exhibit high attrition rates. New approaches are therefore needed to use these existing data and derive new insights that can augment traditional smaller-group randomized controlled trials. Our objective was to demonstrate how emerging big data approaches can help explore questions about the effectiveness and process of an Internet well-being intervention. We drew data from the user base of a well-being website and app called Happify. To explore effectiveness, multilevel models focusing on within-person variation explored whether greater usage predicted higher well-being in a sample of 152,747 users. In addition, to explore the underlying processes that accompany improvement, we analyzed language for 10,818 users who had a sufficient volume of free-text response and timespan of platform usage. A topic model constructed from this free text provided language-based correlates of individual user improvement in outcome measures, providing insights into the beneficial underlying processes experienced by users. On a measure of positive emotion, the average user improved 1.38 points per week (SE 0.01, t(122,455)=113.60, P<.001, 95% CI 1.36-1.41), about an 11% increase over 8 weeks. Within a given individual user, more usage predicted more positive emotion and less usage predicted less positive emotion (estimate 0.09, SE 0.01, t(6047)=9.15, P=.001, 95% CI .07-.12). This estimate predicted that a given user would report positive emotion 1.26 points (or 1.26%) higher after a 2-week period when they used Happify daily than during a week when they didn't use it at all. Among highly engaged users, 200 automatically clustered topics showed a significant (corrected P<.001) effect on change in well-being over time, illustrating which topics may be more beneficial than others when engaging with the interventions. In particular, topics that are related to addressing negative thoughts and feelings were correlated with improvement over time. Using observational analyses on naturalistic big data, we can explore the relationship between usage and well-being among people using an Internet well-being intervention and provide new insights into the underlying mechanisms that accompany it. By leveraging big data to power these new types of analyses, we can explore the workings of an intervention from new angles, and harness the insights that surface to feed back into the intervention and improve it further in the future.
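
    A minimal sketch of the kind of multilevel model described, using statsmodels and toy data (the variable names and the random-intercept-only structure are assumptions for illustration, not the authors' exact specification):

    ```python
    import pandas as pd
    import statsmodels.formula.api as smf

    # Toy long-format data: one row per user per week.
    df = pd.DataFrame({
        "user_id":     [1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3],
        "week":        [0, 1, 2, 3, 0, 1, 2, 3, 0, 1, 2, 3],
        "usage_days":  [0, 3, 7, 2, 1, 1, 2, 4, 5, 6, 7, 3],
        "pos_emotion": [50, 53, 57, 52, 48, 49, 50, 52, 60, 62, 65, 61],
    })
    # A per-user random intercept separates within-person usage effects
    # from stable between-person differences in baseline well-being.
    model = smf.mixedlm("pos_emotion ~ usage_days + week", df,
                        groups=df["user_id"])
    print(model.fit().summary())
    ```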

  10. Seeing the “Big” Picture: Big Data Methods for Exploring Relationships Between Usage, Language, and Outcome in Internet Intervention Data

    PubMed Central

    Carpenter, Jordan; Crutchley, Patrick; Zilca, Ran D; Schwartz, H Andrew; Smith, Laura K; Cobb, Angela M

    2016-01-01

    Background: Assessing the efficacy of Internet interventions that are already in the market introduces both challenges and opportunities. While vast, often unprecedented amounts of data may be available (hundreds of thousands, and sometimes millions of participants with high dimensions of assessed variables), the data are observational in nature, are partly unstructured (eg, free text, images, sensor data), do not include a natural control group to be used for comparison, and typically exhibit high attrition rates. New approaches are therefore needed to use these existing data and derive new insights that can augment traditional smaller-group randomized controlled trials. Objective: Our objective was to demonstrate how emerging big data approaches can help explore questions about the effectiveness and process of an Internet well-being intervention. Methods: We drew data from the user base of a well-being website and app called Happify. To explore effectiveness, multilevel models focusing on within-person variation explored whether greater usage predicted higher well-being in a sample of 152,747 users. In addition, to explore the underlying processes that accompany improvement, we analyzed language for 10,818 users who had a sufficient volume of free-text response and timespan of platform usage. A topic model constructed from this free text provided language-based correlates of individual user improvement in outcome measures, providing insights into the beneficial underlying processes experienced by users. Results: On a measure of positive emotion, the average user improved 1.38 points per week (SE 0.01, t(122,455)=113.60, P<.001, 95% CI 1.36–1.41), about a 27% increase over 8 weeks. Within a given individual user, more usage predicted more positive emotion and less usage predicted less positive emotion (estimate 0.09, SE 0.01, t(6047)=9.15, P=.001, 95% CI .07–.12). This estimate predicted that a given user would report positive emotion 1.26 points higher after a 2-week period when they used Happify daily than during a week when they didn’t use it at all. Among highly engaged users, 200 automatically clustered topics showed a significant (corrected P<.001) effect on change in well-being over time, illustrating which topics may be more beneficial than others when engaging with the interventions. In particular, topics that are related to addressing negative thoughts and feelings were correlated with improvement over time. Conclusions: Using observational analyses on naturalistic big data, we can explore the relationship between usage and well-being among people using an Internet well-being intervention and provide new insights into the underlying mechanisms that accompany it. By leveraging big data to power these new types of analyses, we can explore the workings of an intervention from new angles, and harness the insights that surface to feed back into the intervention and improve it further in the future. PMID:27580524

  11. Testing earthquake prediction algorithms: Statistically significant advance prediction of the largest earthquakes in the Circum-Pacific, 1992-1997

    USGS Publications Warehouse

    Kossobokov, V.G.; Romashkova, L.L.; Keilis-Borok, V. I.; Healy, J.H.

    1999-01-01

    Algorithms M8 and MSc (i.e., the Mendocino Scenario) were used in a real-time intermediate-term research prediction of the strongest earthquakes in the Circum-Pacific seismic belt. Predictions are made by M8 first. Then, the areas of alarm are reduced by MSc at the cost that some earthquakes are missed in the second approximation of prediction. In 1992-1997, five earthquakes of magnitude 8 and above occurred in the test area: all of them were predicted by M8 and MSc identified correctly the locations of four of them. The space-time volume of the alarms is 36% and 18%, correspondingly, when estimated with a normalized product measure of empirical distribution of epicenters and uniform time. The statistical significance of the achieved results is beyond 99% both for M8 and MSc. For magnitude 7.5+, 10 out of 19 earthquakes were predicted by M8 in 40% and five were predicted by M8-MSc in 13% of the total volume considered. This implies a significance level of 81% for M8 and 92% for M8-MSc. The lower significance levels might result from a global change in seismic regime in 1993-1996, when the rate of the largest events doubled and all of them became exclusively normal or reverse faults. The predictions are fully reproducible; the algorithms M8 and MSc in complete formal definitions were published before we started our experiment [Keilis-Borok, V.I., Kossobokov, V.G., 1990. Premonitory activation of seismic flow: Algorithm M8, Phys. Earth and Planet. Inter. 61, 73-83; Kossobokov, V.G., Keilis-Borok, V.I., Smith, S.W., 1990. Localization of intermediate-term earthquake prediction, J. Geophys. Res., 95, 19763-19772; Healy, J.H., Kossobokov, V.G., Dewey, J.W., 1992. A test to evaluate the earthquake prediction algorithm, M8. U.S. Geol. Surv. OFR 92-401]. M8 is available from the IASPEI Software Library [Healy, J.H., Keilis-Borok, V.I., Lee, W.H.K. (Eds.), 1997. Algorithms for Earthquake Statistics and Prediction, Vol. 6. IASPEI Software Library]. © 1999 Elsevier Science B.V. All rights reserved.

  12. Analysis of algorithms for predicting canopy fuel

    Treesearch

    Katharine L. Gray; Elizabeth Reinhardt

    2003-01-01

    We compared observed canopy fuel characteristics with those predicted by existing biomass algorithms. We specifically examined the accuracy of the biomass equations developed by Brown (1978). We used destructively sampled data obtained at 5 different study areas. We compared predicted and observed quantities of foliage and crown biomass for individual trees in our study...

  13. A Cross-Cultural Examination of SNS Usage Intensity and Managing Interpersonal Relationships Online: The Role of Culture and the Autonomous-Related Self-Construal.

    PubMed

    Lee, Soon Li; Kim, Jung-Ae; Golden, Karen Jennifer; Kim, Jae-Hwi; Park, Miriam Sang-Ah

    2016-01-01

    Perception of the autonomy and relatedness of the self may be influenced by one's experiences and social expectations within a particular cultural setting. The present research examined the role of culture and the Autonomous-Related self-construal in predicting different aspects of Social Networking Sites (SNS) usage in three Asian countries, especially focusing on those aspects serving interpersonal goals. Participants in this cross-cultural study included 305 university students from Malaysia (n = 105), South Korea (n = 113), and China (n = 87). The study explored specific social and interpersonal behaviors on SNS, such as browsing the contacts' profiles, checking for updates, and improving contact with SNS contacts, as well as the intensity of SNS use, hypothesizing that those with high intensity of use in the Asian context may be doing so to achieve the social goal of maintaining contact and keeping updated with friends. Two scales measuring activities on other users' profiles and contact with friends' profiles were developed and validated. As predicted, some cross-cultural differences were found. Koreans were more likely to use SNS to increase contact but tended to spend less time browsing contacts' profiles than the Malaysians and Chinese. The intensity of SNS use differed between the countries as well, where Malaysians reported higher intensity than Koreans and Chinese. Consistent with study predictions, Koreans were found to have the highest Autonomous-Related self-construal scores. The Autonomous-Related self-construal predicted SNS intensity. The findings suggest that cultural contexts, along with the way the self is construed in different cultures, may encourage different types of SNS usage. The authors discuss study implications and suggest future research directions.

  14. A Cross-Cultural Examination of SNS Usage Intensity and Managing Interpersonal Relationships Online: The Role of Culture and the Autonomous-Related Self-Construal

    PubMed Central

    Lee, Soon Li; Kim, Jung-Ae; Golden, Karen Jennifer; Kim, Jae-Hwi; Park, Miriam Sang-Ah

    2016-01-01

    Perception of the autonomy and relatedness of the self may be influenced by one's experiences and social expectations within a particular cultural setting. The present research examined the role of culture and the Autonomous-Related self-construal in predicting different aspects of Social Networking Sites (SNS) usage in three Asian countries, especially focusing on those aspects serving interpersonal goals. Participants in this cross-cultural study included 305 university students from Malaysia (n = 105), South Korea (n = 113), and China (n = 87). The study explored specific social and interpersonal behaviors on SNS, such as browsing the contacts' profiles, checking for updates, and improving contact with SNS contacts, as well as the intensity of SNS use, hypothesizing that those with high intensity of use in the Asian context may be doing so to achieve the social goal of maintaining contact and keeping updated with friends. Two scales measuring activities on other users' profiles and contact with friends' profiles were developed and validated. As predicted, some cross-cultural differences were found. Koreans were more likely to use SNS to increase contact but tended to spend less time browsing contacts' profiles than the Malaysians and Chinese. The intensity of SNS use differed between the countries as well, where Malaysians reported higher intensity than Koreans and Chinese. Consistent with study predictions, Koreans were found to have the highest Autonomous-Related self-construal scores. The Autonomous-Related self-construal predicted SNS intensity. The findings suggest that cultural contexts, along with the way the self is construed in different cultures, may encourage different types of SNS usage. The authors discuss study implications and suggest future research directions. PMID:27148100

  15. Algorithm for Lossless Compression of Calibrated Hyperspectral Imagery

    NASA Technical Reports Server (NTRS)

    Kiely, Aaron B.; Klimesh, Matthew A.

    2010-01-01

    A two-stage predictive method was developed for lossless compression of calibrated hyperspectral imagery. The first prediction stage uses a conventional linear predictor intended to exploit spatial and/or spectral dependencies in the data. The compressor tabulates counts of the past values of the difference between this initial prediction and the actual sample value. To form the ultimate predicted value, in the second stage, these counts are combined with an adaptively updated weight function intended to capture information about data regularities introduced by the calibration process. Finally, prediction residuals are losslessly encoded using adaptive arithmetic coding. Algorithms of this type are commonly tested on a readily available collection of images from the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) hyperspectral imager. On the standard calibrated AVIRIS hyperspectral images that are most widely used for compression benchmarking, the new compressor provides more than 0.5 bits/sample improvement over the previous best compression results. The algorithm has been implemented in Mathematica. The compression algorithm was demonstrated as beneficial on 12-bit calibrated AVIRIS images.
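
    A greatly simplified sketch of the first prediction stage (previous-band prediction on a toy cube; the flight algorithm's actual predictor, count tables, and arithmetic coder are considerably richer):

    ```python
    import numpy as np

    def first_stage_residuals(cube):
        """Stage 1 (simplified): predict each sample from the same pixel in
        the previous spectral band and return the prediction residuals."""
        pred = np.empty_like(cube)
        pred[0] = 0            # first band has no predecessor: predict zero
        pred[1:] = cube[:-1]   # previous-band prediction
        return cube - pred

    rng = np.random.default_rng(0)
    base = rng.integers(0, 4096, size=(4, 4))
    # Hypothetical 3-band toy cube with strong spectral correlation.
    cube = np.stack([base + rng.integers(-8, 8, size=(4, 4)) for _ in range(3)])
    res = first_stage_residuals(cube)
    # Stage 2 would adapt to calibration-induced regularities in `res`
    # before entropy coding; a small residual spread means good compression.
    print(cube.std(), res[1:].std())
    ```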

  16. Algorithm aversion: people erroneously avoid algorithms after seeing them err.

    PubMed

    Dietvorst, Berkeley J; Simmons, Joseph P; Massey, Cade

    2015-02-01

    Research shows that evidence-based algorithms more accurately predict the future than do human forecasters. Yet when forecasters are deciding whether to use a human forecaster or a statistical algorithm, they often choose the human forecaster. This phenomenon, which we call algorithm aversion, is costly, and it is important to understand its causes. We show that people are especially averse to algorithmic forecasters after seeing them perform, even when they see them outperform a human forecaster. This is because people more quickly lose confidence in algorithmic than human forecasters after seeing them make the same mistake. In 5 studies, participants either saw an algorithm make forecasts, a human make forecasts, both, or neither. They then decided whether to tie their incentives to the future predictions of the algorithm or the human. Participants who saw the algorithm perform were less confident in it, and less likely to choose it over an inferior human forecaster. This was true even among those who saw the algorithm outperform the human.

  17. Training radial basis function networks for wind speed prediction using PSO enhanced differential search optimizer

    PubMed Central

    2018-01-01

    This paper presents an integrated hybrid optimization algorithm for training the radial basis function neural network (RBF NN). Training of neural networks is still a challenging exercise in the machine learning domain. Traditional training algorithms tend to become trapped in local optima and lead to premature convergence, which makes them ineffective when applied to datasets with diverse features. Training algorithms based on evolutionary computations are becoming popular due to their robust nature in overcoming the drawbacks of the traditional algorithms. Accordingly, this paper proposes a hybrid training procedure in which a differential search (DS) algorithm is functionally integrated with particle swarm optimization (PSO). To surmount local trapping of the search procedure, a new population initialization scheme is proposed using a logistic chaotic sequence, which enhances population diversity and aids the search capability. To demonstrate the effectiveness of the proposed RBF hybrid training algorithm, experimental analyses on 7 publicly available benchmark datasets are performed. Subsequently, experiments were conducted on a practical application case of wind speed prediction to demonstrate the superiority of the proposed RBF training algorithm in terms of prediction accuracy. PMID:29768463
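
    The logistic-map initialization is easy to state concretely. A minimal sketch (parameter ranges and shapes are illustrative assumptions):

    ```python
    import numpy as np

    def logistic_chaotic_population(pop_size, dim, lo, hi, x0=0.7, r=4.0):
        """Initialize a population from the logistic map x <- r*x*(1-x),
        which for r=4 fills (0,1) chaotically, giving diverse starting
        points; x0 must avoid the map's fixed points (e.g. 0, 0.5, 0.75)."""
        pop = np.empty((pop_size, dim))
        x = x0
        for i in range(pop_size):
            for j in range(dim):
                x = r * x * (1.0 - x)
                pop[i, j] = lo + (hi - lo) * x
        return pop

    # e.g., 20 candidate RBF-parameter vectors of dimension 5 in [-1, 1]
    print(logistic_chaotic_population(20, 5, -1.0, 1.0)[:2])
    ```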

  18. Training radial basis function networks for wind speed prediction using PSO enhanced differential search optimizer.

    PubMed

    Rani R, Hannah Jessie; Victoire T, Aruldoss Albert

    2018-01-01

    This paper presents an integrated hybrid optimization algorithm for training the radial basis function neural network (RBF NN). Training of neural networks is still a challenging exercise in the machine learning domain. Traditional training algorithms tend to become trapped in local optima and lead to premature convergence, which makes them ineffective when applied to datasets with diverse features. Training algorithms based on evolutionary computations are becoming popular due to their robust nature in overcoming the drawbacks of the traditional algorithms. Accordingly, this paper proposes a hybrid training procedure in which a differential search (DS) algorithm is functionally integrated with particle swarm optimization (PSO). To surmount local trapping of the search procedure, a new population initialization scheme is proposed using a logistic chaotic sequence, which enhances population diversity and aids the search capability. To demonstrate the effectiveness of the proposed RBF hybrid training algorithm, experimental analyses on 7 publicly available benchmark datasets are performed. Subsequently, experiments were conducted on a practical application case of wind speed prediction to demonstrate the superiority of the proposed RBF training algorithm in terms of prediction accuracy.

  19. Predicting the onset of hazardous alcohol drinking in primary care: development and validation of a simple risk algorithm

    PubMed Central

    Bellón, Juan Ángel; de Dios Luna, Juan; King, Michael; Nazareth, Irwin; Motrico, Emma; GildeGómez-Barragán, María Josefa; Torres-González, Francisco; Montón-Franco, Carmen; Sánchez-Celaya, Marta; Díaz-Barreiros, Miguel Ángel; Vicens, Catalina; Moreno-Peral, Patricia

    2017-01-01

    Background Little is known about the risk of progressing to hazardous alcohol use in abstinent or low-risk drinkers. Aim To develop and validate a simple brief risk algorithm for the onset of hazardous alcohol drinking (HAD) over 12 months for use in primary care. Design and setting Prospective cohort study in 32 health centres from six Spanish provinces, with evaluations at baseline, 6 months, and 12 months. Method Forty-one risk factors were measured and multilevel logistic regression and inverse probability weighting were used to build the risk algorithm. The outcome was new occurrence of HAD during the study, as measured by the AUDIT. Results From the lists of 174 GPs, 3954 adult abstinent or low-risk drinkers were recruited. The ‘predictAL-10’ risk algorithm included just nine variables (10 questions): province, sex, age, cigarette consumption, perception of financial strain, having ever received treatment for an alcohol problem, childhood sexual abuse, AUDIT-C, and interaction AUDIT-C*Age. The c-index was 0.886 (95% CI = 0.854 to 0.918). The optimal cutoff had a sensitivity of 0.83 and specificity of 0.80. Excluding childhood sexual abuse from the model (the ‘predictAL-9’), the c-index was 0.880 (95% CI = 0.847 to 0.913), sensitivity 0.79, and specificity 0.81. There was no statistically significant difference between the c-indexes of predictAL-10 and predictAL-9. Conclusion The predictAL-10/9 is a simple and internally valid risk algorithm to predict the onset of hazardous alcohol drinking over 12 months in primary care attendees; it is a brief tool that is potentially useful for primary prevention of hazardous alcohol drinking. PMID:28360074

  20. The Ship Movement Trajectory Prediction Algorithm Using Navigational Data Fusion.

    PubMed

    Borkowski, Piotr

    2017-06-20

    It is essential for the marine navigator conducting maneuvers of his ship at sea to know the future positions of his own ship and of target ships over a specific time span in order to effectively resolve collision situations. This article presents an algorithm for ship movement trajectory prediction which, through data fusion, takes into account measurements of the ship's current position from a number of doubled autonomous devices. This increases the reliability and accuracy of the prediction. The algorithm has been implemented in NAVDEC, a navigation decision support system, and is used in practice on board ships.
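
    As a generic illustration of fusing redundant position fixes (a textbook inverse-covariance rule, not necessarily the fusion scheme used in NAVDEC):

    ```python
    import numpy as np

    def fuse_positions(fixes, covariances):
        """Inverse-covariance fusion of redundant position fixes: each fix
        is weighted by the inverse of its measurement covariance, so more
        precise devices pull the fused estimate harder."""
        info = np.zeros((2, 2))
        info_state = np.zeros(2)
        for z, R in zip(fixes, covariances):
            R_inv = np.linalg.inv(R)
            info += R_inv
            info_state += R_inv @ z
        P = np.linalg.inv(info)        # fused covariance
        return P @ info_state, P       # fused position, covariance

    # Two hypothetical fixes of the same ship from doubled receivers (metres).
    z1, R1 = np.array([100.0, 200.0]), np.diag([4.0, 4.0])
    z2, R2 = np.array([101.0, 198.5]), np.diag([1.0, 1.0])
    fused, P = fuse_positions([z1, z2], [R1, R2])
    print(fused)   # lies closer to the more precise second fix
    ```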

  1. XtalOpt version r9: An open-source evolutionary algorithm for crystal structure prediction

    DOE PAGES

    Falls, Zackary; Lonie, David C.; Avery, Patrick; ...

    2015-10-23

    This is a new version of XtalOpt, an evolutionary algorithm for crystal structure prediction, available for download from the CPC library or the XtalOpt website, http://xtalopt.github.io. XtalOpt is published under the GNU Public License (GPL), which is an open source license that is recognized by the Open Source Initiative. The new version, detailed here, incorporates many bug fixes and new features, and predicts the crystal structure of a system from its stoichiometry alone, using evolutionary algorithms.

  2. The Ship Movement Trajectory Prediction Algorithm Using Navigational Data Fusion

    PubMed Central

    Borkowski, Piotr

    2017-01-01

    It is essential for the marine navigator conducting maneuvers of his ship at sea to know the future positions of his own ship and of target ships over a specific time span in order to effectively resolve collision situations. This article presents an algorithm for ship movement trajectory prediction which, through data fusion, takes into account measurements of the ship’s current position from a number of doubled autonomous devices. This increases the reliability and accuracy of the prediction. The algorithm has been implemented in NAVDEC, a navigation decision support system, and is used in practice on board ships. PMID:28632176

  3. A recurrence-weighted prediction algorithm for musical analysis

    NASA Astrophysics Data System (ADS)

    Colucci, Renato; Leguizamon Cucunuba, Juan Sebastián; Lloyd, Simon

    2018-03-01

    Forecasting the future behaviour of a system using past data is an important topic. In this article we apply nonlinear time series analysis in the context of music, and present new algorithms for extending a sample of music, while maintaining characteristics similar to the original piece. By using ideas from ergodic theory, we adapt the classical prediction method of Lorenz analogues so as to take into account recurrence times, and demonstrate with examples how the new algorithm can produce predictions with a high degree of similarity to the original sample.
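
    A bare-bones version of the underlying method of analogues (without the paper's recurrence-time weighting, which is its actual contribution) looks like this:

    ```python
    import numpy as np

    def lorenz_analogue_forecast(series, query_len=3, k=2):
        """Classical method of analogues: find the k past windows most
        similar to the most recent one and average their successors."""
        s = np.asarray(series, dtype=float)
        query = s[-query_len:]
        # Candidate windows must leave room for a successor value.
        dists = [
            (np.linalg.norm(s[i:i + query_len] - query), s[i + query_len])
            for i in range(len(s) - query_len)
        ]
        nearest = sorted(dists, key=lambda t: t[0])[:k]
        return np.mean([succ for _, succ in nearest])

    # Hypothetical pitch sequence; forecast the next value.
    melody = [60, 62, 64, 60, 62, 64, 60, 62, 64, 60, 62]
    print(lorenz_analogue_forecast(melody))  # -> 64.0
    ```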

  4. Neural network-based run-to-run controller using exposure and resist thickness adjustment

    NASA Astrophysics Data System (ADS)

    Geary, Shane; Barry, Ronan

    2003-06-01

    This paper describes the development of a run-to-run control algorithm using a feedforward neural network, trained using the backpropagation training method. The algorithm is used to predict the critical dimension of the next lot using previous lot information. It is compared to a common prediction algorithm - the exponentially weighted moving average (EWMA) and is shown to give superior prediction performance in simulations. The manufacturing implementation of the final neural network showed significantly improved process capability when compared to the case where no run-to-run control was utilised.
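
    For context, the EWMA baseline against which the neural controller is compared is a one-line recurrence. A minimal sketch with made-up critical-dimension data:

    ```python
    def ewma_predict(history, weight=0.3):
        """Exponentially weighted moving average, a common run-to-run
        predictor: each new lot's measurement pulls the estimate toward
        itself by `weight`."""
        estimate = history[0]
        for measurement in history[1:]:
            estimate = weight * measurement + (1.0 - weight) * estimate
        return estimate

    # Hypothetical critical-dimension measurements (nm) for successive lots.
    cds = [180.0, 182.5, 181.0, 183.2, 184.0]
    print(ewma_predict(cds))  # predicted CD for the next lot
    ```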

  5. An O(n⁵) algorithm for MFE prediction of kissing hairpins and 4-chains in nucleic acids.

    PubMed

    Chen, Ho-Lin; Condon, Anne; Jabbari, Hosna

    2009-06-01

    Efficient methods for the prediction of minimum free energy (MFE) nucleic acid secondary structures are widely used, both to better understand the structure and function of biological RNAs and to design novel nano-structures. Here, we present a new algorithm for MFE secondary structure prediction, which significantly expands the class of structures that can be handled in O(n⁵) time. Our algorithm can handle H-type pseudoknotted structures, kissing hairpins, and chains of four overlapping stems, as well as nested substructures of these types.

  6. DVD-COOP: Innovative Conjunction Prediction Using Voronoi-filter based on the Dynamic Voronoi Diagram of 3D Spheres

    NASA Astrophysics Data System (ADS)

    Cha, J.; Ryu, J.; Lee, M.; Song, C.; Cho, Y.; Schumacher, P.; Mah, M.; Kim, D.

    Conjunction prediction is one of the critical operations in space situational awareness (SSA). For geospace objects, common algorithms for conjunction prediction are usually based on all-pairwise checks, spatial hashing, or kd-trees. Computational load is usually reduced through filters. However, there is then a good chance of missing potential collisions between space objects. We present a novel algorithm which both guarantees that no conjunction is missed and can efficiently answer a variety of spatial queries, including pairwise conjunction prediction. The algorithm takes only O(k log N) time in the worst case to answer conjunction queries for N objects, where k is a factor proportional to the length of the prediction window. The proposed algorithm, named DVD-COOP (Dynamic Voronoi Diagram-based Conjunctive Orbital Object Predictor), is based on the dynamic Voronoi diagram of moving spherical balls in 3D space. The algorithm has a preprocessing phase consisting of two steps: the construction of an initial Voronoi diagram (taking O(N) time on average) and the construction of a priority queue for the events of topology changes in the Voronoi diagram (taking O(N log N) time in the worst case). The scalability of the proposed algorithm is also discussed. We hope that the proposed Voronoi approach will change the computational paradigm in spatial reasoning among space objects.

  7. SNBRFinder: A Sequence-Based Hybrid Algorithm for Enhanced Prediction of Nucleic Acid-Binding Residues.

    PubMed

    Yang, Xiaoxia; Wang, Jia; Sun, Jun; Liu, Rong

    2015-01-01

    Protein-nucleic acid interactions are central to various fundamental biological processes. Automated methods capable of reliably identifying DNA- and RNA-binding residues in protein sequence are assuming ever-increasing importance. The majority of current algorithms rely on feature-based prediction, but their accuracy remains to be further improved. Here we propose a sequence-based hybrid algorithm SNBRFinder (Sequence-based Nucleic acid-Binding Residue Finder) by merging a feature predictor SNBRFinderF and a template predictor SNBRFinderT. SNBRFinderF was established using the support vector machine whose inputs include sequence profile and other complementary sequence descriptors, while SNBRFinderT was implemented with the sequence alignment algorithm based on profile hidden Markov models to capture the weakly homologous template of query sequence. Experimental results show that SNBRFinderF was clearly superior to the commonly used sequence profile-based predictor and SNBRFinderT can achieve comparable performance to the structure-based template methods. Leveraging the complementary relationship between these two predictors, SNBRFinder reasonably improved the performance of both DNA- and RNA-binding residue predictions. More importantly, the sequence-based hybrid prediction reached competitive performance relative to our previous structure-based counterpart. Our extensive and stringent comparisons show that SNBRFinder has obvious advantages over the existing sequence-based prediction algorithms. The value of our algorithm is highlighted by establishing an easy-to-use web server that is freely accessible at http://ibi.hzau.edu.cn/SNBRFinder.

  8. RNA design using simulated SHAPE data.

    PubMed

    Lotfi, Mohadeseh; Zare-Mirakabad, Fatemeh; Montaseri, Soheila

    2018-05-03

    It has long been established that in addition to being involved in protein translation, RNA plays essential roles in numerous other cellular processes, including gene regulation and DNA replication. Such roles are known to be dictated by higher-order structures of RNA molecules. It is therefore of prime importance to find an RNA sequence that can fold to acquire a particular function that is desirable for use in pharmaceuticals and basic research. The challenge of finding an RNA sequence for a given structure is known as the RNA design problem. Although there are several algorithms to solve this problem, they mainly consider hard constraints, such as minimum free energy, to evaluate the predicted sequences. Recently, SHAPE data has emerged as a new soft constraint for RNA secondary structure prediction. To take advantage of this new experimental constraint, we report here a new method for accurate design of RNA sequences based on their secondary structures using SHAPE data as pseudo-free energy. We then compare our algorithm with four others: INFO-RNA, ERD, MODENA and RNAifold 2.0. Our algorithm precisely predicts 26 out of 29 new sequences for the structures extracted from the Rfam dataset, while the other four algorithms predict no more than 22 out of 29. The proposed algorithm is comparable to the above algorithms on RNA-SSD datasets, where they can predict up to 33 appropriate sequences for RNA secondary structures out of 34.

  9. A Novel RSSI Prediction Using Imperialist Competition Algorithm (ICA), Radial Basis Function (RBF) and Firefly Algorithm (FFA) in Wireless Networks

    PubMed Central

    Goudarzi, Shidrokh; Haslina Hassan, Wan; Abdalla Hashim, Aisha-Hassan; Soleymani, Seyed Ahmad; Anisi, Mohammad Hossein; Zakaria, Omar M.

    2016-01-01

    This study aims to design a vertical handover prediction method that minimizes unnecessary handovers for a mobile node (MN) during the vertical handover process. It relies on a novel method for predicting the received signal strength indicator (RSSI), referred to as IRBF-FFA, which uses the imperialist competition algorithm (ICA) to train the radial basis function (RBF) network and is hybridized with the firefly algorithm (FFA) to search for the optimal solution. The prediction accuracy of the proposed IRBF-FFA model was validated by comparison with support vector machine (SVM) and multilayer perceptron (MLP) models. To assess model performance, we measured the coefficient of determination (R2), correlation coefficient (r), root mean square error (RMSE) and mean absolute percentage error (MAPE). The results indicate that the IRBF-FFA model provides more precise predictions than the SVM and MLP benchmarks. The performance of the proposed model is analyzed through simulated and real-time RSSI measurements. The results also suggest that the IRBF-FFA model can serve as an efficient technique for the accurate prediction of vertical handover. PMID:27438600

  10. A Novel RSSI Prediction Using Imperialist Competition Algorithm (ICA), Radial Basis Function (RBF) and Firefly Algorithm (FFA) in Wireless Networks.

    PubMed

    Goudarzi, Shidrokh; Haslina Hassan, Wan; Abdalla Hashim, Aisha-Hassan; Soleymani, Seyed Ahmad; Anisi, Mohammad Hossein; Zakaria, Omar M

    2016-01-01

    This study aims to design a vertical handover prediction method that minimizes unnecessary handovers for a mobile node (MN) during the vertical handover process. It relies on a novel method for predicting the received signal strength indicator (RSSI), referred to as IRBF-FFA, which uses the imperialist competition algorithm (ICA) to train the radial basis function (RBF) network and is hybridized with the firefly algorithm (FFA) to search for the optimal solution. The prediction accuracy of the proposed IRBF-FFA model was validated by comparison with support vector machine (SVM) and multilayer perceptron (MLP) models. To assess model performance, we measured the coefficient of determination (R2), correlation coefficient (r), root mean square error (RMSE) and mean absolute percentage error (MAPE). The results indicate that the IRBF-FFA model provides more precise predictions than the SVM and MLP benchmarks. The performance of the proposed model is analyzed through simulated and real-time RSSI measurements. The results also suggest that the IRBF-FFA model can serve as an efficient technique for the accurate prediction of vertical handover.
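
    The model form being optimized in both records above is a standard RBF network mapping recent signal samples to a predicted RSSI value. The sketch below shows only that forward pass with invented parameters; the ICA training of the centers, widths and weights and the FFA hybridization are omitted:

    import numpy as np

    def rbf_predict(x, centers, widths, weights, bias=0.0):
        # x: (d,) input (e.g. the last d RSSI samples); centers: (k, d);
        # widths: (k,); weights: (k,). Returns a scalar RSSI prediction.
        d2 = np.sum((centers - x) ** 2, axis=1)
        phi = np.exp(-d2 / (2.0 * widths ** 2))
        return float(phi @ weights + bias)

    # Toy usage: predict the next RSSI value from the last three samples.
    centers = np.array([[-70.0, -71.0, -72.0], [-80.0, -82.0, -84.0]])
    print(rbf_predict(np.array([-79.0, -81.0, -83.0]), centers,
                      np.array([5.0, 5.0]), np.array([-72.0, -83.0])))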

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Versino, Daniele; Bronkhorst, Curt Allan

    The computational formulation of a micro-mechanical material model for the dynamic failure of ductile metals is presented in this paper. The statistical nature of porosity initiation is accounted for by introducing an arbitrary probability density function describing the pore nucleation pressures. Each micropore within the representative volume element is modeled as a thick spherical shell made of plastically incompressible material. The treatment of porosity by a distribution of thick-walled spheres also allows for the inclusion of micro-inertia effects under conditions of shock and dynamic loading. The second-order ordinary differential equation governing the microscopic porosity evolution is solved with a robust implicit procedure. A new Chebyshev collocation method is employed to approximate the porosity distribution, and remapping is used to optimize memory usage. The adaptive approximation of the porosity distribution reduces computational time and memory usage by up to two orders of magnitude. Moreover, the proposed model affords consistent performance: changing the nucleation pressure probability density function and/or the applied strain rate does not reduce the accuracy or computational efficiency of the material model. The numerical performance of the model and algorithms is tested against three problems for high-density tantalum: single void, one-dimensional uniaxial strain, and two-dimensional plate impact. The results using the integration and algorithmic advances suggest a significant improvement in computational efficiency and accuracy over previous treatments for dynamic loading conditions.
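
    The Chebyshev collocation idea can be sketched with standard library routines: sample the distribution at collocation nodes, fit coefficients once, and evaluate anywhere from the compact representation instead of a dense table. The Gaussian density below is an invented stand-in for the actual nucleation-pressure distribution:

    import numpy as np
    from numpy.polynomial import chebyshev as C

    def chebyshev_approx(f, deg, lo, hi):
        # Fit f on [lo, hi] at deg+1 Chebyshev points of the first
        # kind; return a callable evaluator built from the coefficients.
        pts = C.chebpts1(deg + 1)               # nodes in [-1, 1]
        x = 0.5 * (hi - lo) * (pts + 1.0) + lo  # map nodes to [lo, hi]
        coeffs = C.chebfit(pts, f(x), deg)
        return lambda xq: C.chebval(2.0 * (xq - lo) / (hi - lo) - 1.0,
                                    coeffs)

    # Toy porosity distribution over nucleation pressure (GPa).
    density = lambda p: np.exp(-0.5 * ((p - 2.0) / 0.3) ** 2)
    approx = chebyshev_approx(density, deg=16, lo=1.0, hi=3.0)
    print(approx(2.1), density(2.1))  # the two values nearly agree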

  12. Performance of five surface energy balance models for estimating daily evapotranspiration in high biomass sorghum

    NASA Astrophysics Data System (ADS)

    Wagle, Pradeep; Bhattarai, Nishan; Gowda, Prasanna H.; Kakani, Vijaya G.

    2017-06-01

    Robust evapotranspiration (ET) models are required to predict water usage in a variety of terrestrial ecosystems under different geographical and agrometeorological conditions. As a result, several remote sensing-based surface energy balance (SEB) models have been developed to estimate ET over large regions. However, comparisons of several SEB models at the same site are limited, and none of the SEB models has been evaluated for its ability to predict ET in rain-fed high biomass sorghum grown for biofuel production. In this paper, we evaluated the performance of five widely used single-source SEB models, namely the Surface Energy Balance Algorithm for Land (SEBAL), Mapping ET with Internalized Calibration (METRIC), the Surface Energy Balance System (SEBS), the Simplified Surface Energy Balance Index (S-SEBI), and the operational Simplified Surface Energy Balance (SSEBop), for estimating ET over a high biomass sorghum field during the 2012 and 2013 growing seasons. The predicted ET values were compared against eddy covariance (EC) measured ET (ETEC) for 19 cloud-free Landsat images. In general, S-SEBI, SEBAL, and SEBS performed reasonably well for the study period, while METRIC and SSEBop performed poorly. All SEB models substantially overestimated ET under extremely dry conditions because, when partitioning the available energy, they underestimated sensible heat (H) and overestimated latent heat (LE) fluxes. METRIC, SEBAL, and SEBS overestimated LE regardless of wet or dry periods; consequently, their predicted seasonal cumulative ET was higher than the seasonal cumulative ETEC in both seasons. In contrast, S-SEBI and SSEBop substantially underestimated ET under very wet conditions, and their predicted seasonal cumulative ET was lower than the seasonal cumulative ETEC in the relatively wetter 2013 growing season. Our results indicate that a soil moisture or plant water stress component needs to be included in SEB models to improve their performance, especially in very dry or very wet environments.
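
    All five models share the same single-source bookkeeping: latent heat is obtained as the residual of the surface energy balance, LE = Rn - G - H, and then converted to an ET depth. A minimal sketch with daily totals; the numbers and the simple clamping to zero are illustrative:

    def daily_et_mm(rn, g, h, lambda_v=2.45):
        # rn, g, h: net radiation, soil heat flux and sensible heat
        # flux in MJ m^-2 d^-1; lambda_v = 2.45 MJ kg^-1 is the usual
        # latent heat of vaporisation (1 kg water m^-2 = 1 mm depth).
        le = rn - g - h  # latent heat flux as the balance residual
        return max(le, 0.0) / lambda_v

    print(daily_et_mm(rn=18.0, g=1.5, h=6.0))  # about 4.3 mm/day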

  13. Using a Guided Machine Learning Ensemble Model to Predict Discharge Disposition following Meningioma Resection.

    PubMed

    Muhlestein, Whitney E; Akagi, Dallin S; Kallos, Justiss A; Morone, Peter J; Weaver, Kyle D; Thompson, Reid C; Chambless, Lola B

    2018-04-01

    Objective  Machine learning (ML) algorithms are powerful tools for predicting patient outcomes. This study pilots a novel approach to algorithm selection and model creation, using prediction of discharge disposition following meningioma resection as a proof of concept. Materials and Methods  A diverse set of ML algorithms was trained on a single-institution database of meningioma patients to predict discharge disposition. Algorithms were ranked by predictive power, and the top performers were combined to create an ensemble model. The final ensemble was internally validated on never-before-seen data to demonstrate generalizability, and its predictive power was compared with that of a logistic regression. Further analyses identified how important variables impact the ensemble. Results  Our ensemble model predicted disposition significantly better than a logistic regression (area under the curve of 0.78 vs. 0.71, p = 0.01). Tumor size, presentation at the emergency department, body mass index, convexity location, and preoperative motor deficit most strongly influence the model, though the independent impact of individual variables is nuanced. Conclusion  Using a novel ML technique, we built a guided ML ensemble model that predicts discharge destination following meningioma resection with greater predictive power than a logistic regression and with greater clinical insight than a univariate analysis. These techniques can be extended to predict many other patient outcomes of interest.
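
    The guided-ensemble recipe (score a pool of learners, keep the top performers, combine them by soft voting, and benchmark against a logistic regression) can be sketched with scikit-learn. The synthetic data, learner pool, and cut-off of three models are illustrative assumptions, not the study's configuration:

    from sklearn.datasets import make_classification
    from sklearn.ensemble import (GradientBoostingClassifier,
                                  RandomForestClassifier, VotingClassifier)
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier

    X, y = make_classification(n_samples=600, n_features=20, random_state=0)
    pool = {
        "gbt": GradientBoostingClassifier(random_state=0),
        "rf": RandomForestClassifier(random_state=0),
        "knn": KNeighborsClassifier(),
        "lr": LogisticRegression(max_iter=1000),
    }
    # Rank candidates by cross-validated AUC, keep the top three.
    auc = {name: cross_val_score(m, X, y, cv=5, scoring="roc_auc").mean()
           for name, m in pool.items()}
    top = sorted(auc, key=auc.get, reverse=True)[:3]
    ensemble = VotingClassifier([(n, pool[n]) for n in top], voting="soft")
    print("ensemble AUC:",
          cross_val_score(ensemble, X, y, cv=5, scoring="roc_auc").mean())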

  14. Feature combination networks for the interpretation of statistical machine learning models: application to Ames mutagenicity.

    PubMed

    Webb, Samuel J; Hanser, Thierry; Howlin, Brendan; Krause, Paul; Vessey, Jonathan D

    2014-03-25

    A new algorithm has been developed to enable the interpretation of black box models. The algorithm is agnostic to the learning algorithm and open to all structure-based descriptors, such as fragments, keys and hashed fingerprints, and it has provided meaningful interpretations of Ames mutagenicity predictions from both random forest and support vector machine models built on a variety of structural fingerprints. A fragmentation algorithm is utilised to investigate the model's behaviour on specific substructures present in the query, and an output is formulated summarising causes of activation and deactivation. The algorithm can identify multiple causes of activation or deactivation, as well as localised deactivations even where the prediction for the query is active overall. There is no loss in performance, since the prediction itself is unchanged; the interpretation is produced directly from the model's behaviour for the specific query. Models were built using multiple learning algorithms, including support vector machines and random forests, on public Ames mutagenicity data with a variety of fingerprint descriptors. These models performed well in both internal and external validation, with accuracies around 82%, and were used to evaluate the interpretation algorithm. The interpretations produced link closely with understood mechanisms for Ames mutagenicity. This methodology allows for greater utilisation of the predictions made by black box models and can expedite further study based on the output of a (quantitative) structure-activity model. Additionally, the algorithm could be utilised for chemical dataset investigation and knowledge extraction/human SAR development.
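
    One simple way to probe a black box model's response to a substructure, in the spirit of the fragmentation approach above, is to mask the fingerprint bits the fragment contributes and re-score the query. The toy model, fingerprints and bit assignments below are stand-ins, not the paper's pipeline:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    X = rng.integers(0, 2, size=(200, 64))  # toy hashed fingerprints
    y = X[:, 3]                             # toy "mutagenic" label: bit 3
    model = RandomForestClassifier(random_state=0).fit(X, y)

    def fragment_effect(model, fp, fragment_bits):
        # Prediction change when one fragment's bits are masked out;
        # a large positive value marks the fragment as activating.
        masked = fp.copy()
        masked[list(fragment_bits)] = 0
        return (model.predict_proba([fp])[0, 1]
                - model.predict_proba([masked])[0, 1])

    query = np.zeros(64, dtype=int)
    query[3] = 1
    print(fragment_effect(model, query, {3}))   # large: activating bits
    print(fragment_effect(model, query, {40}))  # near zero: inert bits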

  15. An unsupervised classification scheme for improving predictions of prokaryotic TIS.

    PubMed

    Tech, Maike; Meinicke, Peter

    2006-03-09

    Although it is not difficult for state-of-the-art gene finders to identify coding regions in prokaryotic genomes, exact prediction of the corresponding translation initiation sites (TIS) is still a challenging problem. Recently, a number of post-processing tools have been proposed for improving the annotation of prokaryotic TIS. However, these approaches face inherent difficulties arising from the considerable variation of TIS characteristics across species: prior assumptions about the properties of prokaryotic gene starts may cause suboptimal predictions for newly sequenced genomes whose TIS signals differ from those of well-investigated genomes. We introduce a clustering algorithm for completely unsupervised scoring of potential TIS, based on positionally smoothed probability matrices. The algorithm requires only an initial gene prediction and the genomic sequence of the organism to perform the reannotation. Compared with other methods for improving predictions of gene starts in bacterial genomes, our approach is not based on any specific assumptions about prokaryotic TIS. Despite this generality, the prediction rate of our method is competitive on experimentally verified test data from E. coli and B. subtilis. In contrast to some previously proposed methods, our algorithm also performs well on genomes with high G+C content, namely P. aeruginosa, B. pseudomallei and R. solanacearum. On reliable test data, we showed that our method provides good results when post-processing the predictions of the widely used program GLIMMER. The underlying clustering algorithm is robust to variations in the initial TIS annotation and does not require specific assumptions about prokaryotic gene starts, features that are particularly useful on genomes with high G+C content. The algorithm has been implemented in the tool "TICO" (TIs COrrector), which is publicly available from our web site.
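
    The scoring core of such an approach is a positional probability matrix estimated from candidate TIS windows. A stripped-down sketch, omitting the positional smoothing and the iterative reassignment of candidates between clusters that the method relies on:

    import numpy as np

    BASES = {"A": 0, "C": 1, "G": 2, "T": 3}

    def build_pwm(windows, pseudo=1.0):
        # Column-wise base probabilities from equal-length sequences,
        # with a pseudocount to avoid zero probabilities.
        L = len(windows[0])
        counts = np.full((4, L), pseudo)
        for w in windows:
            for j, b in enumerate(w):
                counts[BASES[b], j] += 1
        return counts / counts.sum(axis=0)

    def score(pwm, window):
        # Log-odds of the window against a uniform background.
        return sum(np.log(pwm[BASES[b], j] / 0.25)
                   for j, b in enumerate(window))

    pwm = build_pwm(["AGGAGG", "AGGAGA", "AGGCGG"])     # toy SD-like motif
    print(score(pwm, "AGGAGG") > score(pwm, "TTTTTT"))  # True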

  16. Vocal communication in a complex multi-level society: constrained acoustic structure and flexible call usage in Guinea baboons.

    PubMed

    Maciej, Peter; Ndao, Ibrahima; Hammerschmidt, Kurt; Fischer, Julia

    2013-09-23

    To understand the evolution of acoustic communication in animals, it is important to distinguish between the structure and the usage of vocal signals, since both aspects are subject to different constraints. In terrestrial mammals, the structure of calls is largely innate, while individuals have a greater ability to actively initiate or withhold calls. In closely related taxa, one would therefore predict higher flexibility in call usage than in call structure. In the present study, we investigated the vocal repertoire of free-living Guinea baboons (Papio papio) and examined the structure and usage of the animals' vocal signals. Guinea baboons live in a complex multi-level social organization and exhibit a largely tolerant and affiliative social style, contrary to most other baboon taxa. To classify the vocal repertoire of male and female Guinea baboons, cluster analyses were used, and focal observations were conducted to assess the usage of vocal signals in particular contexts. In general, the vocal repertoire of Guinea baboons largely corresponded to that of other baboon taxa. The usage of calls, however, differed considerably from other baboon taxa and corresponded with the specific characteristics of the Guinea baboons' social behaviour. While Guinea baboons showed a diminished usage of contest and display vocalizations (a common pattern observed in chacma baboons), they frequently used vocal signals during affiliative and greeting interactions. Our study shows that the call structure of primates is largely unaffected by the species' social system (including grouping patterns and social interactions), while the usage of calls can be adjusted more flexibly, reflecting the quality of the individuals' social interactions. Our results support the view that the primary function of social signals is to regulate social interactions; the degree of competition and cooperation may therefore be more important than grouping patterns or group size in explaining variation in call usage.

  17. Vocal communication in a complex multi-level society: constrained acoustic structure and flexible call usage in Guinea baboons

    PubMed Central

    2013-01-01

    Background To understand the evolution of acoustic communication in animals, it is important to distinguish between the structure and the usage of vocal signals, since both aspects are subject to different constraints. In terrestrial mammals, the structure of calls is largely innate, while individuals have a greater ability to actively initiate or withhold calls. In closely related taxa, one would therefore predict higher flexibility in call usage than in call structure. In the present study, we investigated the vocal repertoire of free-living Guinea baboons (Papio papio) and examined the structure and usage of the animals' vocal signals. Guinea baboons live in a complex multi-level social organization and exhibit a largely tolerant and affiliative social style, contrary to most other baboon taxa. To classify the vocal repertoire of male and female Guinea baboons, cluster analyses were used, and focal observations were conducted to assess the usage of vocal signals in particular contexts. Results In general, the vocal repertoire of Guinea baboons largely corresponded to that of other baboon taxa. The usage of calls, however, differed considerably from other baboon taxa and corresponded with the specific characteristics of the Guinea baboons' social behaviour. While Guinea baboons showed a diminished usage of contest and display vocalizations (a common pattern observed in chacma baboons), they frequently used vocal signals during affiliative and greeting interactions. Conclusions Our study shows that the call structure of primates is largely unaffected by the species' social system (including grouping patterns and social interactions), while the usage of calls can be adjusted more flexibly, reflecting the quality of the individuals' social interactions. Our results support the view that the primary function of social signals is to regulate social interactions; the degree of competition and cooperation may therefore be more important than grouping patterns or group size in explaining variation in call usage. PMID:24059742

  18. Teaching a Machine to Feel Postoperative Pain: Combining High-Dimensional Clinical Data with Machine Learning Algorithms to Forecast Acute Postoperative Pain

    PubMed Central

    Tighe, Patrick J.; Harle, Christopher A.; Hurley, Robert W.; Aytug, Haldun; Boezaart, Andre P.; Fillingim, Roger B.

    2015-01-01

    Background Given their ability to process high-dimensional datasets with hundreds of variables, machine learning algorithms may offer one solution to the vexing challenge of predicting postoperative pain. Methods Here, we report on the application of machine learning algorithms to predict postoperative pain outcomes in a retrospective cohort of 8071 surgical patients using 796 clinical variables. Five algorithms were compared in terms of their ability to forecast moderate to severe postoperative pain: Least Absolute Shrinkage and Selection Operator (LASSO), gradient-boosted decision tree, support vector machine, neural network, and k-nearest neighbor, with logistic regression included for baseline comparison. Results In forecasting moderate to severe postoperative pain for postoperative day (POD) 1, the LASSO algorithm, using all 796 variables, had the highest accuracy, with an area under the receiver-operating characteristic curve (AUC) of 0.704. Next, the gradient-boosted decision tree had an AUC of 0.665 and the k-nearest neighbor algorithm an AUC of 0.643. For POD 3, the LASSO algorithm, using all variables, again had the highest accuracy, with an AUC of 0.727. Logistic regression had a lower AUC of 0.5 for predicting pain outcomes on PODs 1 and 3. Conclusions Machine learning algorithms, when combined with complex and heterogeneous data from electronic medical record systems, can forecast acute postoperative pain outcomes with accuracies similar to those of methods that rely only on variables specifically collected for pain outcome prediction. PMID:26031220
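
    The best performer above, an L1-penalised (LASSO) logistic model over hundreds of variables, is straightforward to reproduce in outline with scikit-learn. The synthetic data and the regularisation strength C are assumptions standing in for the clinical dataset:

    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in for the 796-variable clinical record.
    X, y = make_classification(n_samples=2000, n_features=796,
                               n_informative=30, random_state=0)
    Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
    # L1 penalty drives most coefficients to exactly zero (LASSO).
    lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
    lasso.fit(Xtr, ytr)
    print("AUC:", roc_auc_score(yte, lasso.predict_proba(Xte)[:, 1]))
    print("variables kept:", (lasso.coef_ != 0).sum())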

  19. PCTFPeval: a web tool for benchmarking newly developed algorithms for predicting cooperative transcription factor pairs in yeast.

    PubMed

    Lai, Fu-Jou; Chang, Hong-Tsun; Wu, Wei-Sheng

    2015-01-01

    Computational identification of cooperative transcription factor (TF) pairs helps understand the combinatorial regulation of gene expression in eukaryotic cells. Many advanced algorithms have been proposed to predict cooperative TF pairs in yeast. However, it is still difficult to conduct a comprehensive and objective performance comparison of different algorithms because sufficient performance indices and adequate overall performance scores are lacking. To solve this problem, in our previous study (published in BMC Systems Biology 2014), we adopted/proposed eight performance indices and designed two overall performance scores to compare the performance of 14 existing algorithms for predicting cooperative TF pairs in yeast. Most importantly, our performance comparison framework can be applied to comprehensively and objectively evaluate the performance of a newly developed algorithm. To use our framework, however, researchers first have to invest considerable effort in constructing it. To save researchers' time and effort, here we develop a web tool implementing our performance comparison framework, featuring fast data processing, a comprehensive performance comparison and an easy-to-use web interface. The tool is called PCTFPeval (Predicted Cooperative TF Pair evaluator) and is written in the PHP and Python programming languages. The friendly web interface allows users to input a list of predicted cooperative TF pairs from their algorithm and to select (i) the compared algorithms from among the 15 existing algorithms, (ii) the performance indices from among the eight existing indices, and (iii) the overall performance scores from two possible choices. The comprehensive performance comparison results are then generated in tens of seconds and shown as both bar charts and tables. The original comparison results for each compared algorithm and each selected performance index can be downloaded as text files for further analyses. By allowing users to select eight existing performance indices and 15 existing algorithms for comparison, our web tool benefits researchers who are eager to comprehensively and objectively evaluate the performance of their newly developed algorithm. Thus, our tool greatly expedites progress in the computational identification of cooperative TF pairs.

  20. PCTFPeval: a web tool for benchmarking newly developed algorithms for predicting cooperative transcription factor pairs in yeast

    PubMed Central

    2015-01-01

    Background Computational identification of cooperative transcription factor (TF) pairs helps understand the combinatorial regulation of gene expression in eukaryotic cells. Many advanced algorithms have been proposed to predict cooperative TF pairs in yeast. However, it is still difficult to conduct a comprehensive and objective performance comparison of different algorithms because sufficient performance indices and adequate overall performance scores are lacking. To solve this problem, in our previous study (published in BMC Systems Biology 2014), we adopted/proposed eight performance indices and designed two overall performance scores to compare the performance of 14 existing algorithms for predicting cooperative TF pairs in yeast. Most importantly, our performance comparison framework can be applied to comprehensively and objectively evaluate the performance of a newly developed algorithm. To use our framework, however, researchers first have to invest considerable effort in constructing it. To save researchers' time and effort, here we develop a web tool implementing our performance comparison framework, featuring fast data processing, a comprehensive performance comparison and an easy-to-use web interface. Results The developed tool is called PCTFPeval (Predicted Cooperative TF Pair evaluator) and is written in the PHP and Python programming languages. The friendly web interface allows users to input a list of predicted cooperative TF pairs from their algorithm and to select (i) the compared algorithms from among the 15 existing algorithms, (ii) the performance indices from among the eight existing indices, and (iii) the overall performance scores from two possible choices. The comprehensive performance comparison results are then generated in tens of seconds and shown as both bar charts and tables. The original comparison results for each compared algorithm and each selected performance index can be downloaded as text files for further analyses. Conclusions By allowing users to select eight existing performance indices and 15 existing algorithms for comparison, our web tool benefits researchers who are eager to comprehensively and objectively evaluate the performance of their newly developed algorithm. Thus, our tool greatly expedites progress in the computational identification of cooperative TF pairs. PMID:26677932
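
    As a rough illustration of how an overall performance score can aggregate several per-index results, the sketch below averages each algorithm's rank across indices (lower is better). PCTFPeval's actual scoring formulas are not reproduced here, and the demo numbers are invented:

    def overall_rank_score(results):
        # results: {algorithm: {index_name: value}}, higher value = better.
        # Returns each algorithm's mean rank across all indices.
        indices = next(iter(results.values())).keys()
        score = {alg: 0.0 for alg in results}
        for idx in indices:
            ordered = sorted(results, key=lambda a: results[a][idx],
                             reverse=True)
            for rank, alg in enumerate(ordered, start=1):
                score[alg] += rank / len(indices)
        return score

    demo = {"algA": {"ppv": 0.6, "f1": 0.7},
            "algB": {"ppv": 0.4, "f1": 0.5}}
    print(overall_rank_score(demo))  # {'algA': 1.0, 'algB': 2.0}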
