Science.gov

Sample records for abstract machine model

  1. Abstract quantum computing machines and quantum computational logics

    NASA Astrophysics Data System (ADS)

    Dalla Chiara, Maria Luisa; Giuntini, Roberto; Sergioli, Giuseppe; Leporini, Roberto

    2016-06-01

    Classical and quantum parallelism are deeply different, although it is sometimes claimed that quantum Turing machines are nothing but special examples of classical probabilistic machines. We introduce the concepts of deterministic state machine, classical probabilistic state machine and quantum state machine. On this basis, we discuss the question: To what extent can quantum state machines be simulated by classical probabilistic state machines? Each state machine is devoted to a single task determined by its program. Real computers, however, behave differently, being able to solve different kinds of problems. This capacity can be modeled, in the quantum case, by the mathematical notion of abstract quantum computing machine, whose different programs determine different quantum state machines. The computations of abstract quantum computing machines can be linguistically described by the formulas of a particular form of quantum logic, termed quantum computational logic.
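
    To make the distinction concrete, here is a minimal numerical sketch (my toy example, not taken from the paper) contrasting a classical probabilistic step with a quantum step on a two-state machine: a stochastic matrix acts on probability vectors, while a unitary acts on amplitude vectors, and only the latter exhibits interference.

```python
# One step of each machine type on two states (NumPy only).
import numpy as np

# Classical probabilistic: a stochastic matrix maps probabilities to probabilities.
S = np.array([[0.5, 0.5],
              [0.5, 0.5]])            # a balanced "coin flip" step
p = np.array([1.0, 0.0])              # start surely in state 0
print(S @ (S @ p))                    # two steps -> still [0.5, 0.5]

# Quantum: a unitary maps amplitude vectors to amplitude vectors.
H = np.array([[1.0, 1.0],
              [1.0, -1.0]]) / np.sqrt(2)   # Hadamard, the "balanced" unitary
a = np.array([1.0, 0.0])
a = H @ (H @ a)                       # amplitudes interfere
print(np.abs(a) ** 2)                 # measurement probabilities -> [1.0, 0.0]
```

    Two applications of the balanced classical step leave the distribution uniform, whereas two Hadamard steps return the machine to a definite state; this interference is the behavior that a step-by-step probabilistic simulation fails to capture.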

  2. Programming the Navier-Stokes computer: An abstract machine model and a visual editor

    NASA Technical Reports Server (NTRS)

    Middleton, David; Crockett, Tom; Tomboulian, Sherry

    1988-01-01

    The Navier-Stokes computer is a parallel computer designed to solve Computational Fluid Dynamics problems. Each processor contains several floating point units which can be configured under program control to implement a vector pipeline with several inputs and outputs. Since the development of an effective compiler for this computer appears to be very difficult, machine level programming seems necessary and support tools for this process have been studied. These support tools are organized into a graphical program editor. A programming process is described by which appropriate computations may be efficiently implemented on the Navier-Stokes computer. The graphical editor would support this programming process, verifying various programmer choices for correctness and deducing values such as pipeline delays and network configurations. Step by step details are provided and demonstrated with two example programs.

  3. Abstraction Augmented Markov Models.

    PubMed

    Caragea, Cornelia; Silvescu, Adrian; Caragea, Doina; Honavar, Vasant

    2010-12-13

    High accuracy sequence classification often requires the use of higher order Markov models (MMs). However, the number of MM parameters increases exponentially with the range of direct dependencies between sequence elements, thereby increasing the risk of overfitting when the data set is limited in size. We present abstraction augmented Markov models (AAMMs) that effectively reduce the number of numeric parameters of k(th) order MMs by successively grouping strings of length k (i.e., k-grams) into abstraction hierarchies. We evaluate AAMMs on three protein subcellular localization prediction tasks. The results of our experiments show that abstraction makes it possible to construct predictive models that use significantly smaller number of features (by one to three orders of magnitude) as compared to MMs. AAMMs are competitive with and, in some cases, significantly outperform MMs. Moreover, the results show that AAMMs often perform significantly better than variable order Markov models, such as decomposed context tree weighting, prediction by partial match, and probabilistic suffix trees.
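
    The following is an illustrative sketch of the parameter-reduction idea, under assumptions of my own (greedy merging by L1 distance; the authors' actual construction of abstraction hierarchies differs): k-grams whose smoothed next-symbol distributions are similar are merged into shared abstract states, so they pool counts and parameters.

```python
# Merge k-grams with similar next-symbol statistics into shared abstract states.
from collections import defaultdict
import numpy as np

def ngram_counts(seqs, k, alphabet):
    idx = {a: i for i, a in enumerate(alphabet)}
    counts = defaultdict(lambda: np.zeros(len(alphabet)))
    for s in seqs:
        for i in range(len(s) - k):
            counts[s[i:i + k]][idx[s[i + k]]] += 1    # k-gram -> next-symbol counts
    return counts

def abstract(counts, max_dist=0.3):
    """Greedy merge: a k-gram joins the first cluster whose smoothed
    next-symbol distribution is within L1 distance max_dist of its own."""
    clusters = []                     # list of (summed counts, member k-grams)
    for g, c in counts.items():
        p = (c + 1) / (c + 1).sum()   # Laplace-smoothed distribution
        for j, (vec, members) in enumerate(clusters):
            q = (vec + 1) / (vec + 1).sum()
            if np.abs(p - q).sum() < max_dist:
                clusters[j] = (vec + c, members + [g])
                break
        else:
            clusters.append((c.copy(), [g]))
    return clusters

seqs = ["ACGTACGTAC", "ACGAACGAAC", "TTGCATGCAT"]
counts = ngram_counts(seqs, 2, "ACGT")
clusters = abstract(counts)
print(len(counts), "distinct 2-grams reduced to", len(clusters), "abstract states")
```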

  4. Automatic Review of Abstract State Machines by Meta Property Verification

    NASA Technical Reports Server (NTRS)

    Arcaini, Paolo; Gargantini, Angelo; Riccobene, Elvinia

    2010-01-01

    A model review is a validation technique aimed at determining whether a model is of sufficient quality; it allows defects to be identified early in system development, reducing the cost of fixing them. In this paper we propose a technique to perform automatic review of Abstract State Machine (ASM) formal specifications. We first identify a family of typical vulnerabilities and defects a developer can introduce during modeling with ASMs, and we express such faults as violations of meta-properties that guarantee certain quality attributes of the specification. These meta-properties are then mapped to temporal logic formulas and model checked for their violation. As a proof of concept, we also report the result of applying this ASM review process to several specifications.

  5. Multimodeling and Model Abstraction

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The multiplicity of models of the same process or phenomenon is commonplace in environmental modeling. The last 10 years have brought marked interest in making use of the variety of conceptual approaches instead of attempting to find the best model or using a single preferred model. Two systematic approa...

  6. Abstract models of molecular walkers

    NASA Astrophysics Data System (ADS)

    Semenov, Oleg

    Recent advances in single-molecule chemistry have led to designs for artificial multi-pedal walkers that follow tracks of chemicals. The walkers, called molecular spiders, consist of a rigid chemically inert body and several flexible enzymatic legs. The legs can reversibly bind to chemical substrates on a surface, and through their enzymatic action convert them to products. We study abstract models of molecular spiders to evaluate how efficiently they can perform two tasks: molecular transport of cargo over tracks and search for targets on finite surfaces. For the single-spider model our simulations show a transient behavior wherein certain spiders move superdiffusively over significant distances and times. This gives the spiders potential as a faster-than-diffusion transport mechanism. However, analysis shows that single-spider motion eventually decays into an ordinary diffusive motion, owing to the ever increasing size of the region of products. Inspired by cooperative behavior of natural molecular walkers, we propose a symmetric exclusion process (SEP) model for multiple walkers interacting as they move over a one-dimensional lattice. We show that when walkers are sequentially released from the origin, the collective effect is to prevent the leading walkers from moving too far backwards. Hence, there is an effective outward pressure on the leading walkers that keeps them moving superdiffusively for longer times. Despite this improvement the leading spider eventually slows down and moves diffusively, similarly to a single spider. The slowdown happens because all spiders behind the leading spiders never encounter substrates, and thus they are never biased. They cannot keep up with leading spiders, and cannot put enough pressure on them. Next, we investigate search properties of a single and multiple spiders moving over one- and two-dimensional surfaces with various absorbing and reflecting boundaries. For the single-spider model we evaluate by how much the

  7. Machine characterization based on an abstract high-level language machine

    NASA Technical Reports Server (NTRS)

    Saavedra-Barrera, Rafael H.; Smith, Alan Jay; Miya, Eugene

    1989-01-01

    Measurements are presented for a large number of machines ranging from small workstations to supercomputers. The authors combine these measurements into groups of parameters which relate to specific aspects of the machine implementation, and use these groups to provide overall machine characterizations. The authors also define the concept of pershapes, which represent the level of performance of a machine for different types of computation. A metric based on pershapes is introduced that provides a quantitative way of measuring how similar two machines are in terms of their performance distributions. The metric is related to the extent to which pairs of machines have varying relative performance levels depending on which benchmark is used.
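
    A hedged toy formulation of the pershape idea (not the authors' exact definition): normalize each machine's per-benchmark results by that machine's own geometric mean so absolute speed cancels and only the shape of its performance distribution remains, then compare shapes.

```python
# Toy "pershape" comparison (NumPy): rows are machines, columns are benchmarks.
import numpy as np

runtimes = np.array([
    [1.0, 2.0, 8.0],   # machine A
    [2.0, 4.0, 16.0],  # machine B: A scaled by 2 -> same shape, different speed
    [8.0, 2.0, 1.0],   # machine C: opposite strengths
])

# Divide each row by its geometric mean so absolute speed cancels out.
shapes = runtimes / np.exp(np.log(runtimes).mean(axis=1, keepdims=True))

def shape_distance(x, y):
    """Mean absolute log-ratio between two performance shapes."""
    return np.abs(np.log(x) - np.log(y)).mean()

print(shape_distance(shapes[0], shapes[1]))  # ~0.0: relative performance is constant
print(shape_distance(shapes[0], shapes[2]))  # large: ranking depends on the benchmark
```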

  8. Abstracts

    NASA Astrophysics Data System (ADS)

    2012-09-01

    Measuring cosmological parameters with GRBs: status and perspectives; New interpretation of the Amati relation; The SED Machine - a dedicated transient spectrograph; PTF10iue - evidence for an internal engine in a unique Type Ic SN; Direct evidence for the collapsar model of long gamma-ray bursts; On pair instability supernovae and gamma-ray bursts; Pan-STARRS1 observations of ultraluminous SNe; The influence of rotation on the critical neutrino luminosity in core-collapse supernovae; General relativistic magnetospheres of slowly rotating and oscillating neutron stars; Host galaxies of short GRBs; GRB 100418A: a bridge between GRB-associated hypernovae and SNe; Two super-luminous SNe at z ~ 1.5 from the SNLS; Prospects for very-high-energy gamma-ray bursts with the Cherenkov Telescope Array; The dynamics and radiation of relativistic flows from massive stars; The search for light echoes from the supernova explosion of 1181 AD; The proto-magnetar model for gamma-ray bursts; Stellar black holes at the dawn of the universe; MAXI J0158-744: the discovery of a supersoft X-ray transient; Wide-band spectra of magnetar burst emission; Dust formation and evolution in envelope-stripped core-collapse supernovae; The host galaxies of dark gamma-ray bursts; Keck observations of 150 GRB host galaxies; Search for properties of GRBs at large redshift; The early emission from SNe; Spectral properties of SN shock breakout; MAXI observation of GRBs and short X-ray transients; A three-dimensional view of SN 1987A using light echo spectroscopy; X-ray study of the southern extension of the SNR Puppis A; All-sky survey of short X-ray transients by MAXI GSC; Development of the CALET gamma-ray burst monitor (CGBM)

  9. Abstracts

    ERIC Educational Resources Information Center

    American Biology Teacher, 1977

    1977-01-01

    Included are over 50 abstracts of papers being presented at the 1977 National Association of Biology Teachers Convention. Included in each abstract are the title, author, and summary of the paper. Topics include photographic techniques environmental studies, and biological instruction. (MA)

  10. Model-based machine learning.

    PubMed

    Bishop, Christopher M

    2013-02-13

    Several decades of research in the field of machine learning have resulted in a multitude of different algorithms for solving a broad range of problems. To tackle a new application, a researcher typically tries to map their problem onto one of these existing methods, often influenced by their familiarity with specific algorithms and by the availability of corresponding software implementations. In this study, we describe an alternative methodology for applying machine learning, in which a bespoke solution is formulated for each new application. The solution is expressed through a compact modelling language, and the corresponding custom machine learning code is then generated automatically. This model-based approach offers several major advantages, including the opportunity to create highly tailored models for specific scenarios, as well as rapid prototyping and comparison of a range of alternative models. Furthermore, newcomers to the field of machine learning do not have to learn about the huge range of traditional methods, but instead can focus their attention on understanding a single modelling environment. In this study, we show how probabilistic graphical models, coupled with efficient inference algorithms, provide a very flexible foundation for model-based machine learning, and we outline a large-scale commercial application of this framework involving tens of millions of users. We also describe the concept of probabilistic programming as a powerful software environment for model-based machine learning, and we discuss a specific probabilistic programming language called Infer.NET, which has been widely used in practical applications.
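
    The gist can be sketched in a few lines (illustrative only; Infer.NET is a .NET library with its own modelling API, so the names below are mine): the user writes down only the generative model, and a generic inference routine, which knows nothing about the particular model, produces the posterior.

```python
# Model-based sketch: the model is the code; inference is generic (NumPy only).
import numpy as np

def model_loglik(theta, data):
    """Generative model: each observation is 1 with probability theta."""
    data = np.asarray(data, dtype=float)
    return np.sum(data * np.log(theta) + (1 - data) * np.log1p(-theta))

def grid_posterior(data, grid=None):
    """Generic inference engine: works for any one-parameter model_loglik."""
    grid = np.linspace(0.001, 0.999, 999) if grid is None else grid
    log_post = np.array([model_loglik(t, data) for t in grid])  # flat prior assumed
    post = np.exp(log_post - log_post.max())
    return grid, post / post.sum()

grid, post = grid_posterior([1, 1, 0, 1, 1, 1, 0, 1])
print("posterior mean of theta:", (grid * post).sum())
```

    Swapping in a different model_loglik changes the application without touching the inference code, which is the division of labor the abstract describes.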

  11. Model-based machine learning

    PubMed Central

    Bishop, Christopher M.

    2013-01-01

    Several decades of research in the field of machine learning have resulted in a multitude of different algorithms for solving a broad range of problems. To tackle a new application, a researcher typically tries to map their problem onto one of these existing methods, often influenced by their familiarity with specific algorithms and by the availability of corresponding software implementations. In this study, we describe an alternative methodology for applying machine learning, in which a bespoke solution is formulated for each new application. The solution is expressed through a compact modelling language, and the corresponding custom machine learning code is then generated automatically. This model-based approach offers several major advantages, including the opportunity to create highly tailored models for specific scenarios, as well as rapid prototyping and comparison of a range of alternative models. Furthermore, newcomers to the field of machine learning do not have to learn about the huge range of traditional methods, but instead can focus their attention on understanding a single modelling environment. In this study, we show how probabilistic graphical models, coupled with efficient inference algorithms, provide a very flexible foundation for model-based machine learning, and we outline a large-scale commercial application of this framework involving tens of millions of users. We also describe the concept of probabilistic programming as a powerful software environment for model-based machine learning, and we discuss a specific probabilistic programming language called Infer.NET, which has been widely used in practical applications. PMID:23277612

  12. Directory of Energy Information Administration Model Abstracts

    SciTech Connect

    Not Available

    1986-07-16

    This directory partially fulfills the requirements of Section 8c of the documentation order, which states in part that: The Office of Statistical Standards will annually publish an EIA document based on the collected abstracts and the appendices. This report contains brief statements about each model's title, acronym, purpose, and status, followed by more detailed information on characteristics, uses, and requirements. Sources for additional information are identified. All models active through March 1985 are included. The main body of this directory is an alphabetical list of all active EIA models. Appendix A identifies major EIA modeling systems and the models within these systems, and Appendix B identifies active EIA models by type (basic, auxiliary, and developing). EIA also leases models developed by proprietary software vendors. Documentation for these proprietary models is the responsibility of the companies from which they are leased. EIA has recently leased models from Chase Econometrics, Inc., Data Resources, Inc. (DRI), the Oak Ridge National Laboratory (ORNL), and Wharton Econometric Forecasting Associates (WEFA). Leased models are not abstracted here. The directory is intended for the use of energy and energy-policy analysts in the public and private sectors.

  13. Model Checking Abstract PLEXIL Programs with SMART

    NASA Technical Reports Server (NTRS)

    Siminiceanu, Radu I.

    2007-01-01

    We describe a method to automatically generate discrete-state models of abstract Plan Execution Interchange Language (PLEXIL) programs that can be analyzed using model checking tools. Starting from a high-level description of a PLEXIL program or a family of programs with common characteristics, the generator lays the framework that models the principles of program execution. The concrete parts of the program are not automatically generated, but require the modeler to introduce them by hand. As a case study, we generate models to verify properties of the PLEXIL macro constructs that are introduced as shorthand notation. After an exhaustive analysis, we conclude that the macro definitions obey the intended semantics and behave as expected, contingent on a few specific requirements on the timing semantics of micro-steps in the concrete executive implementation.

  14. Multivariate cross-classification: applying machine learning techniques to characterize abstraction in neural representations

    PubMed Central

    Kaplan, Jonas T.; Man, Kingson; Greening, Steven G.

    2015-01-01

    Here we highlight an emerging trend in the use of machine learning classifiers to test for abstraction across patterns of neural activity. When a classifier algorithm is trained on data from one cognitive context, and tested on data from another, conclusions can be drawn about the role of a given brain region in representing information that abstracts across those cognitive contexts. We call this kind of analysis Multivariate Cross-Classification (MVCC), and review several domains where it has recently made an impact. MVCC has been important in establishing correspondences among neural patterns across cognitive domains, including motor-perception matching and cross-sensory matching. It has been used to test for similarity between neural patterns evoked by perception and those generated from memory. Other work has used MVCC to investigate the similarity of representations for semantic categories across different kinds of stimulus presentation, and in the presence of different cognitive demands. We use these examples to demonstrate the power of MVCC as a tool for investigating neural abstraction and discuss some important methodological issues related to its application. PMID:25859202
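
    A minimal sketch of the MVCC logic on synthetic data (assumes scikit-learn and NumPy; real studies use neural recordings rather than the toy patterns below): a classifier trained in context A is tested in context B, and above-chance cross-context accuracy is the evidence for an abstract, context-invariant representation.

```python
# Toy MVCC: train on context A, test on context B (assumes scikit-learn).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, d = 200, 50
labels = rng.integers(0, 2, size=n)
signal = np.outer(labels - 0.5, rng.normal(size=d))   # label signal shared by both contexts
ctx_dir = rng.normal(size=d)                          # context-specific direction (the confound)
ctx_a = signal + rng.normal(size=(n, d)) + 0.3 * ctx_dir
ctx_b = signal + rng.normal(size=(n, d)) - 0.3 * ctx_dir

clf = LogisticRegression(max_iter=1000).fit(ctx_a, labels)
print("within-context accuracy:", clf.score(ctx_a, labels))
print("cross-context accuracy :", clf.score(ctx_b, labels))  # above 0.5 suggests abstraction
```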

  15. Hierarchical abstract semantic model for image classification

    NASA Astrophysics Data System (ADS)

    Ye, Zhipeng; Liu, Peng; Zhao, Wei; Tang, Xianglong

    2015-09-01

    The semantic gap limits the performance of bag-of-visual-words models. To deal with this problem, a hierarchical abstract semantics method is proposed that builds abstract semantic layers, generates semantic visual vocabularies, measures the semantic gap, and constructs classifiers using the Adaboost strategy. First, abstract semantic layers are introduced to narrow the semantic gap between visual features and their interpretation. Then semantic visual words are extracted as features to train semantic classifiers. One popular form of measurement is used to quantify the semantic gap. The Adaboost training strategy is used to combine weak classifiers into strong ones to further improve performance. For a testing image, the category is estimated layer-by-layer. Corresponding abstract hierarchical structures for popular datasets, including Caltech-101 and MSRC, are proposed for evaluation. The experimental results show that the proposed method is capable of narrowing semantic gaps effectively and performs better than other categorization methods.

  16. Abstract models for the synthesis of optimization algorithms.

    NASA Technical Reports Server (NTRS)

    Meyer, G. G. L.; Polak, E.

    1971-01-01

    A systematic approach to the problem of synthesis of optimization algorithms. Abstract models for algorithms are developed which guide the inventive process toward 'conceptual' algorithms which may consist of operations that are inadmissible in a practical method. Once the abstract models are established, a set of methods is presented for converting 'conceptual' algorithms falling into the class defined by the abstract models into 'implementable' iterative procedures.

  17. Rough set models of Physarum machines

    NASA Astrophysics Data System (ADS)

    Pancerz, Krzysztof; Schumann, Andrew

    2015-04-01

    In this paper, we consider transition system models of behaviour of Physarum machines in terms of rough set theory. A Physarum machine, a biological computing device implemented in the plasmodium of Physarum polycephalum (true slime mould), is a natural transition system. In the behaviour of Physarum machines, one can notice some ambiguity in Physarum motions that influences exact anticipation of states of machines in time. To model this ambiguity, we propose to use rough set models created over transition systems. Rough sets are an appropriate tool to deal with rough (ambiguous, imprecise) concepts in the universe of discourse.
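
    The core construction can be shown in a few lines (my toy states and partition, not the paper's data): given a partition of the state space into indiscernibility classes, a target set of states receives a lower approximation (classes certainly inside it) and an upper approximation (classes that possibly intersect it).

```python
# Lower/upper approximations of a state set under an indiscernibility partition.
def approximations(classes, target):
    lower = set().union(*[c for c in classes if c <= target])  # classes surely inside
    upper = set().union(*[c for c in classes if c & target])   # classes possibly inside
    return lower, upper

classes = [{"s0"}, {"s1", "s2"}, {"s3", "s4"}]  # indistinguishable Physarum motions
target  = {"s1", "s2", "s3"}                    # states the plasmodium may reach

lower, upper = approximations(classes, target)
print("lower:", lower)   # {'s1', 's2'}: certainly in the target
print("upper:", upper)   # {'s1', 's2', 's3', 's4'}: possibly in the target
```

    The gap between the two approximations is exactly the ambiguity in Physarum motions that the abstract describes.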

  18. Metagenomic Classification Using an Abstraction Augmented Markov Model

    PubMed Central

    Zhu, Xiujun (Sylvia)

    2016-01-01

    Abstract The abstraction augmented Markov model (AAMM) is an extension of a Markov model that can be used for the analysis of genetic sequences. It is developed using the frequencies of all possible consecutive words with same length (p-mers). This article will review the theory behind AAMM and apply the theory behind AAMM in metagenomic classification. PMID:26618474

  19. Modelling abstraction licensing strategies ahead of the UK's water abstraction licensing reform

    NASA Astrophysics Data System (ADS)

    Klaar, M. J.

    2012-12-01

    Within England and Wales, river water abstractions are licensed and regulated by the Environment Agency (EA), who uses compliance with the Environmental Flow Indicator (EFI) to ascertain where abstraction may cause undesirable effects on river habitats and species. The EFI is a percentage deviation from natural flow represented using a flow duration curve. The allowable percentage deviation changes with different flows, and also changes depending on an assessment of the sensitivity of the river to changes in flow (Table 1). Within UK abstraction licensing, resource availability is expressed as a surplus or deficit of water resources in relation to the EFI, and utilises the concept of 'hands-off-flows' (HOFs) at the specified flow statistics detailed in Table 1. Use of a HOF system enables abstraction to cease at set flows, but also enables abstraction to occur at periods of time when more water is available. Compliance at low flows (Q95) is used by the EA to determine the hydrological classification and compliance with the Water Framework Directive (WFD) for identifying waterbodies where flow may be causing or contributing to a failure in good ecological status (GES; Table 2). This compliance assessment shows where the scenario flows are below the EFI and by how much, to help target measures for further investigation and assessment. Currently, the EA is reviewing the EFI methodology in order to assess whether or not it can be used within the reformed water abstraction licensing system which is being planned by the Department for Environment, Food and Rural Affairs (DEFRA) to ensure the licensing system is resilient to the challenges of climate change and population growth, while allowing abstractors to meet their water needs efficiently, and better protect the environment. In order to assess the robustness of the EFI, a simple model has been created which allows a number of abstraction, flow and licensing scenarios to be run to determine WFD compliance using the

  20. Coupling Radar Rainfall to Hydrological Models for Water Abstraction Management

    NASA Astrophysics Data System (ADS)

    Asfaw, Alemayehu; Shucksmith, James; Smith, Andrea; MacDonald, Ken

    2015-04-01

    The impacts of climate change and growing water use are likely to put considerable pressure on water resources and the environment. In the UK, a reform to surface water abstraction policy has recently been proposed which aims to increase the efficiency of using available water resources whilst minimising impacts on the aquatic environment. Key aspects of this reform include the consideration of dynamic rather than static abstraction licensing as well as introducing water trading concepts. Dynamic licensing will permit varying levels of abstraction dependent on environmental conditions (i.e. river flow and quality). The practical implementation of an effective dynamic abstraction strategy requires suitable flow forecasting techniques to inform abstraction asset management. Potentially the predicted availability of water resources within a catchment can be coupled to predicted demand and current storage to inform a cost-effective water resource management strategy which minimises environmental impacts. The aim of this work is to use a historical analysis of a UK case study catchment to compare potential water resource availability under a modelled dynamic abstraction scenario informed by a flow forecasting model against observed abstraction under a conventional abstraction regime. The work also demonstrates the impacts of modelling uncertainties on the accuracy of predicted water availability over a range of forecast lead times. The study utilised a conceptual rainfall-runoff model, PDM (the Probability-Distributed Model developed by the Centre for Ecology & Hydrology), set up in the Dove River catchment (UK) using 1 km2 resolution radar rainfall as inputs and 15 min resolution gauged flow data for calibration and validation. Data assimilation procedures are implemented to improve flow predictions using observed flow data. Uncertainties in the radar rainfall data used in the model are quantified using an artificial statistical error model described by a Gaussian distribution and

  1. How Pupils Use a Model for Abstract Concepts in Genetics

    ERIC Educational Resources Information Center

    Venville, Grady; Donovan, Jenny

    2008-01-01

    The purpose of this research was to explore the way pupils of different age groups use a model to understand abstract concepts in genetics. Pupils from early childhood to late adolescence were taught about genes and DNA using an analogical model (the wool model) during their regular biology classes. Changing conceptual understandings of the…

  2. Vibration absorber modeling for handheld machine tool

    NASA Astrophysics Data System (ADS)

    Abdullah, Mohd Azman; Mustafa, Mohd Muhyiddin; Jamil, Jazli Firdaus; Salim, Mohd Azli; Ramli, Faiz Redza

    2015-05-01

    Handheld machine tools transmit continuous vibration to their users during operation. This vibration causes harmful health effects when operation is repeated over a long period of time. In this paper, a dynamic vibration absorber (DVA) is designed and modeled to reduce the vibration generated by a handheld machine tool. Several designs and models of vibration absorbers with various stiffness properties are simulated, tested and optimized in order to diminish the vibration. Ordinary differential equations are used to derive and formulate the vibration phenomena in the machine tool with and without the DVA. The final transfer function of the DVA is later analyzed using commercially available mathematical software. The DVA with optimum mass and stiffness properties is developed and applied on the actual handheld machine tool. The performance of the DVA is experimentally tested and validated by the final vibration-reduction result.
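
    For reference, a short sketch of the classical undamped absorber analysis the abstract alludes to (generic textbook two-degree-of-freedom model with parameter values assumed by me, not the paper's tool data): the absorber (m2, k2) is tuned so that the main mass amplitude drops at the target operating frequency.

```python
# Undamped 2-DOF absorber: main mass amplitude vs. forcing frequency (NumPy).
import numpy as np

m1, k1 = 1.0, 1.0e4              # tool body mass (kg) and stiffness (N/m) -- assumed
m2 = 0.1                         # absorber mass (kg)
w0 = np.sqrt(k1 / m1)            # operating frequency to suppress (rad/s)
k2 = m2 * w0**2                  # tuning rule: sqrt(k2/m2) == w0

def main_mass_amplitude(w, F=1.0):
    """Steady-state |X1| from the 2x2 impedance matrix of the 2-DOF system."""
    A = np.array([[k1 + k2 - m1 * w**2, -k2],
                  [-k2,                 k2 - m2 * w**2]])
    X1, _ = np.linalg.solve(A, np.array([F, 0.0]))
    return abs(X1)

for w in (0.80 * w0, 0.95 * w0, 1.00 * w0, 1.05 * w0):
    print(f"w/w0 = {w / w0:4.2f}   |X1| = {main_mass_amplitude(w):.2e} m")
```

    At w = w0 the main-mass amplitude vanishes in this undamped idealization; that anti-resonance is the design point the tuning rule encodes.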

  3. Quantum mechanical hamiltonian models of turing machines

    NASA Astrophysics Data System (ADS)

    Benioff, Paul

    1982-11-01

    Quantum mechanical Hamiltonian models, which represent an arbitrary but finite number of steps of any Turing machine computation, are constructed here on a finite lattice of spin-1/2 systems. Different regions of the lattice correspond to different components of the Turing machine (plus recording system). Successive states of any machine computation are represented in the model by spin configuration states. Both time-independent and time-dependent Hamiltonian models are constructed here. The time-independent models do not dissipate energy or degrade the system state as they evolve. They operate close to the quantum limit in that the total system energy uncertainty/computation speed is close to the limit given by the time-energy uncertainty relation. However, the model evolution is time global and the Hamiltonian is more complex. The time-dependent models do not degrade the system state. Also they are time local and the Hamiltonian is less complex.

  4. Concrete Model Checking with Abstract Matching and Refinement

    NASA Technical Reports Server (NTRS)

    Pasareanu, Corina S.; Pelanek, Radek; Visser, Willem

    2005-01-01

    We propose an abstraction-based model checking method which relies on refinement of an under-approximation of the feasible behaviors of the system under analysis. The method preserves errors to safety properties, since all analyzed behaviors are feasible by definition. The method does not require an abstract transition relation to be generated, but instead executes the concrete transitions while storing abstract versions of the concrete states, as specified by a set of abstraction predicates. For each explored transition, the method checks, with the help of a theorem prover, whether there is any loss of precision introduced by abstraction. The results of these checks are used to decide termination or to refine the abstraction, by generating new abstraction predicates. If the (possibly infinite) concrete system under analysis has a finite bisimulation quotient, then the method is guaranteed to eventually explore an equivalent finite bisimilar structure. We illustrate the application of the approach for checking concurrent programs. We also show how a lightweight variant can be used for efficient software testing.
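
    The search scheme can be sketched compactly (toy transition system and predicates of my own; the real method also consults a theorem prover to detect precision loss): concrete successors are executed, but visited-state matching is done on the abstract signature induced by the predicates.

```python
# Concrete search with abstract matching (toy system; predicates chosen by me).
def successors(state):                      # concrete transition relation on (x, y)
    x, y = state
    return [(x + 1, y), (x, y + 1)] if x < 50 and y < 50 else []

predicates = [lambda s: s[0] > s[1],        # abstraction predicates
              lambda s: (s[0] + s[1]) % 2 == 0]

def alpha(state):
    return tuple(p(state) for p in predicates)   # abstract signature of a state

def search(init, error):
    frontier, seen = [init], {alpha(init)}
    while frontier:
        s = frontier.pop()
        if error(s):
            return s                        # a concrete, feasible counterexample
        for t in successors(s):
            if alpha(t) not in seen:        # match on abstract signatures only
                seen.add(alpha(t))
                frontier.append(t)
    return None                             # no error found *under this abstraction*

print(search((0, 0), error=lambda s: s == (2, 2)))   # None: too coarse, refine!
```

    With these two coarse predicates the search saturates after a handful of abstract signatures and misses the error state, which is precisely the under-approximation situation in which the method would generate new predicates to refine the abstraction.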

  5. An Investigation of System Identification Techniques for Simulation Model Abstraction

    DTIC Science & Technology

    2000-02-01

    This report summarizes research into the application of system identification techniques to simulation model abstraction. System identification produces ... "Mission Simulation," a simulation of a squadron of aircraft performing battlefield air interdiction. The system identification techniques were ... simplified mathematical models that approximate the dynamic behaviors of the underlying stochastic simulations. Four state-space system ...

  6. An abstract specification language for Markov reliability models

    NASA Technical Reports Server (NTRS)

    Butler, R. W.

    1985-01-01

    Markov models can be used to compute the reliability of virtually any fault tolerant system. However, the process of delineating all of the states and transitions in a model of complex system can be devastatingly tedious and error-prone. An approach to this problem is presented utilizing an abstract model definition language. This high level language is described in a nonformal manner and illustrated by example.
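
    A compact sketch of what such a language buys (my toy notation, assuming NumPy and SciPy; Butler's actual language is richer): the Markov chain's states and transitions are generated from a one-line high-level description of the system rather than enumerated by hand, and reliability falls out of the matrix exponential of the generator.

```python
# Reliability of "3 identical components, system needs 2" via a generated CTMC.
import numpy as np
from scipy.linalg import expm

lam, n, need = 1e-3, 3, 2            # per-component failure rate (/h), spares, quorum
states = list(range(n, -1, -1))      # number of working components: 3, 2, 1, 0

Q = np.zeros((len(states), len(states)))          # CTMC generator, built from the
for i, working in enumerate(states):              # high-level description above
    if working >= need:                           # operational: next failure at working*lam
        Q[i, i + 1] = working * lam
        Q[i, i] = -working * lam                  # states below quorum are absorbing

p0 = np.eye(len(states))[0]          # start with all components working
t = 100.0                            # mission time (hours)
p = p0 @ expm(Q * t)                 # transient state probabilities at time t
failed = [i for i, w in enumerate(states) if w < need]
print("unreliability at t=100h:", p[failed].sum())
```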

  7. An abstract language for specifying Markov reliability models

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.

    1986-01-01

    Markov models can be used to compute the reliability of virtually any fault tolerant system. However, the process of delineating all of the states and transitions in a model of complex system can be devastatingly tedious and error-prone. An approach to this problem is presented utilizing an abstract model definition language. This high level language is described in a nonformal manner and illustrated by example.

  8. Modeling electronic quantum transport with machine learning

    NASA Astrophysics Data System (ADS)

    Lopez-Bezanilla, Alejandro; von Lilienfeld, O. Anatole

    2014-06-01

    We present a machine learning approach to solve electronic quantum transport equations of one-dimensional nanostructures. The transmission coefficients of disordered systems were computed to provide training and test data sets to the machine. The system's representation encodes energetic as well as geometrical information to characterize similarities between disordered configurations, while the Euclidean norm is used as a measure of similarity. Errors for out-of-sample predictions systematically decrease with training set size, enabling the accurate and fast prediction of new transmission coefficients. The remarkable performance of our model to capture the complexity of interference phenomena lends further support to its viability in dealing with transport problems of undulatory nature.
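
    In the same spirit, a minimal kernel-regression sketch (synthetic stand-in data; the actual work trained on computed transmission coefficients of disordered nanostructures): a Gaussian kernel over Euclidean distances between configuration vectors, fit by ridge-regularized least squares.

```python
# Kernel ridge regression with a Gaussian kernel on Euclidean distances (NumPy).
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(200, 10))     # stand-in disorder configurations
y = np.exp(-np.sum(X**2, axis=1))          # stand-in "transmission coefficient"

def gaussian_kernel(A, B, sigma=2.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    return np.exp(-d2 / (2 * sigma**2))

Xtr, ytr, Xte, yte = X[:150], y[:150], X[150:], y[150:]
K = gaussian_kernel(Xtr, Xtr)
alpha = np.linalg.solve(K + 1e-6 * np.eye(len(Xtr)), ytr)  # ridge-regularized fit
pred = gaussian_kernel(Xte, Xtr) @ alpha
print("mean absolute error on held-out configurations:", np.abs(pred - yte).mean())
```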

  9. Particle Tracking Model and Abstraction of Transport Processes

    SciTech Connect

    B. Robinson

    2004-10-21

    The purpose of this report is to document the abstraction model being used in total system performance assessment (TSPA) model calculations for radionuclide transport in the unsaturated zone (UZ). The UZ transport abstraction model uses the particle-tracking method that is incorporated into the finite element heat and mass model (FEHM) computer code (Zyvoloski et al. 1997 [DIRS 100615]) to simulate radionuclide transport in the UZ. This report outlines the assumptions, design, and testing of a model for calculating radionuclide transport in the UZ at Yucca Mountain. In addition, methods for determining and inputting transport parameters are outlined for use in the TSPA for license application (LA) analyses. Process-level transport model calculations are documented in another report for the UZ (BSC 2004 [DIRS 164500]). Three-dimensional, dual-permeability flow fields generated to characterize UZ flow (documented by BSC 2004 [DIRS 169861]; DTN: LB03023DSSCP9I.001 [DIRS 163044]) are converted to make them compatible with the FEHM code for use in this abstraction model. This report establishes the numerical method and demonstrates the use of the model that is intended to represent UZ transport in the TSPA-LA. Capability of the UZ barrier for retarding the transport is demonstrated in this report, and by the underlying process model (BSC 2004 [DIRS 164500]). The technical scope, content, and management of this report are described in the planning document "Technical Work Plan for: Unsaturated Zone Transport Model Report Integration" (BSC 2004 [DIRS 171282]). Deviations from the technical work plan (TWP) are noted within the text of this report, as appropriate. The latest version of this document is being prepared principally to correct parameter values found to be in error due to transcription errors, changes in source data that were not captured in the report, calculation errors, and errors in interpretation of source data.

  10. Modelling the influence of irrigation abstractions on Scotland's water resources.

    PubMed

    Dunn, S M; Chalmers, N; Stalham, M; Lilly, A; Crabtree, B; Johnston, L

    2003-01-01

    Legislation to control abstraction of water in Scotland is limited and for purposes such as irrigation there are no restrictions in place over most of the country. This situation is set to change with implementation of the European Water Framework Directive. As a first step towards the development of appropriate policy for irrigation control there is a need to assess the current scale of irrigation practices in Scotland. This paper presents a modelling approach that has been used to quantify spatially the volume of water abstractions across the country for irrigation of potato crops under typical climatic conditions. A water balance model was developed to calculate soil moisture deficits and identify the potential need for irrigation. The results were then combined with spatial data on potato cropping and integrated to the sub-catchment scale to identify the river systems most at risk from over-abstraction. The results highlight that the areas that have greatest need for irrigation of potatoes are all concentrated in the central east-coast area of Scotland. The difference between irrigation demand in wet and dry years is very significant, although spatial patterns of the distribution are similar.
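
    The water-balance logic reduces to a short loop (my simplified version, with assumed trigger and dose values): the soil moisture deficit grows with evapotranspiration, shrinks with rain, and irrigation is triggered when the deficit crosses a threshold.

```python
# Daily soil-moisture-deficit bookkeeping with a simple irrigation trigger.
def irrigation_demand(rain, et, trigger=40.0, dose=25.0):
    """rain, et: daily series in mm. Returns total irrigation applied (mm)."""
    smd, applied = 0.0, 0.0
    for r, e in zip(rain, et):
        smd = max(0.0, smd + e - r)   # deficit grows with ET, shrinks with rain
        if smd > trigger:             # soil dry enough that the crop needs water
            smd -= dose
            applied += dose
    return applied

dry = irrigation_demand(rain=[0.5] * 90, et=[3.0] * 90)   # dry growing season
wet = irrigation_demand(rain=[3.5] * 90, et=[3.0] * 90)   # wet growing season
print(dry, "mm vs", wet, "mm")    # dry years demand far more abstraction
```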

  11. Of Models and Machines: Implementing Bounded Rationality.

    PubMed

    Dick, Stephanie

    2015-09-01

    This essay explores the early history of Herbert Simon's principle of bounded rationality in the context of his Artificial Intelligence research in the mid 1950s. It focuses in particular on how Simon and his colleagues at the RAND Corporation translated a model of human reasoning into a computer program, the Logic Theory Machine. They were motivated by a belief that computers and minds were the same kind of thing--namely, information-processing systems. The Logic Theory Machine program was a model of how people solved problems in elementary mathematical logic. However, in making this model actually run on their 1950s computer, the JOHNNIAC, Simon and his colleagues had to navigate many obstacles and material constraints quite foreign to the human experience of logic. They crafted new tools and engaged in new practices that accommodated the affordances of their machine, rather than reflecting the character of human cognition and its bounds. The essay argues that tracking this implementation effort shows that "internal" cognitive practices and "external" tools and materials are not so easily separated as they are in Simon's principle of bounded rationality--the latter often shaping the dynamics of the former.

  12. Exploiting mid-range DNA patterns for sequence classification: binary abstraction Markov models

    PubMed Central

    Shepard, Samuel S.; McSweeny, Andrew; Serpen, Gursel; Fedorov, Alexei

    2012-01-01

    Messenger RNA sequences possess specific nucleotide patterns distinguishing them from non-coding genomic sequences. In this study, we explore the utilization of modified Markov models to analyze sequences up to 44 bp, far beyond the 8-bp limit of conventional Markov models, for exon/intron discrimination. In order to analyze nucleotide sequences of this length, their information content is first reduced by conversion into shorter binary patterns via the application of numerous abstraction schemes. After the conversion of genomic sequences to binary strings, homogenous Markov models trained on the binary sequences are used to discriminate between exons and introns. We term this approach the Binary Abstraction Markov Model (BAMM). High-quality abstraction schemes for exon/intron discrimination are selected using optimization algorithms on supercomputers. The best MM classifiers are then combined using support vector machines into a single classifier. With this approach, over 95% classification accuracy is achieved without taking reading frame into account. With further development, the BAMM approach can be applied to sequences lacking the genetic code such as ncRNAs and 5′-untranslated regions. PMID:22344692
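
    A toy rendition of the pipeline (abstraction scheme, smoothing and data are my choices; the paper optimizes schemes on supercomputers and combines classifiers with SVMs): map DNA onto a binary alphabet, train one Markov chain per class on the binary strings, and classify by log-likelihood.

```python
# Binary abstraction + per-class Markov chains + log-likelihood classification.
from collections import Counter
from math import log

SCHEME = {"A": "0", "G": "0", "C": "1", "T": "1"}   # purine/pyrimidine abstraction

def to_binary(seq):
    return "".join(SCHEME[b] for b in seq)

def train(seqs, k=4):
    trans, ctx = Counter(), Counter()
    for s in map(to_binary, seqs):
        for i in range(len(s) - k):
            trans[s[i:i + k + 1]] += 1          # context + next bit
            ctx[s[i:i + k]] += 1                # context alone
    return trans, ctx

def loglik(seq, model, k=4):
    trans, ctx = model
    s = to_binary(seq)
    return sum(log((trans[s[i:i + k + 1]] + 1) / (ctx[s[i:i + k]] + 2))  # Laplace smoothing
               for i in range(len(s) - k))

exon_model   = train(["ACGGCTAGCTAGGCTA", "GCGGCCGCTAGCGGCC"])
intron_model = train(["ATATATTTAAATATAT", "TTTTAATTAAAATTTA"])
query = "GCGGCTAGCGGCTAGC"
print("exon" if loglik(query, exon_model) > loglik(query, intron_model) else "intron")
```

    Because the binary alphabet has only two symbols, a context of length k costs 2^k parameters instead of 4^k, which is what lets the approach reach far longer ranges than conventional Markov models.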

  13. Entity-Centric Abstraction and Modeling Framework for Transportation Architectures

    NASA Technical Reports Server (NTRS)

    Lewe, Jung-Ho; DeLaurentis, Daniel A.; Mavris, Dimitri N.; Schrage, Daniel P.

    2007-01-01

    A comprehensive framework for representing transportation architectures is presented. After discussing a series of preceding perspectives and formulations, the intellectual underpinning of the novel framework, which uses an entity-centric abstraction of transportation, is described. The entities include endogenous and exogenous factors, and functional expressions are offered that relate these and their evolution. The end result is a Transportation Architecture Field which permits analysis of future concepts from a holistic perspective. A simulation model which stems from the framework is presented and exercised, producing results which quantify improvements in air transportation due to advanced aircraft technologies. Finally, a modeling hypothesis and its accompanying criteria are proposed to test further use of the framework for evaluating new transportation solutions.

  14. Engagement Angle Modeling for Multiple-circle Continuous Machining and Its Application in the Pocket Machining

    NASA Astrophysics Data System (ADS)

    WU, Shixiong; MA, Wei; BAI, Haiping; WANG, Chengyong; SONG, Yuexian

    2017-03-01

    Progressive cutting based on auxiliary paths is an effective machining method for the material-accumulating region inside a mould pocket. The method, however, commonly uses the radial depth of cut as its control parameter, and no more appropriate means of adjustment and control has been available. End-users often fail to set the parameter correctly, which leads to excessive tool load during actual machining. In order to control the machining load and tool-path more reasonably, an engagement angle modeling method for multiple-circle continuous machining is presented. The distribution of the multiple circles, the dynamic evolution of the engagement angle, and the extreme and average values of the engagement angle are carefully considered. Based on the engagement angle model, numerous application techniques for mould pocket machining are presented, involving the calculation of the milling force in multiple-circle continuous machining, rough and finish machining path planning, load control for the material-accumulating region inside the pocket, and other aspects. Simulation and actual machining experiments show that the engagement angle modeling method for multiple-circle continuous machining is correct and reliable, and that the related application techniques for pocket machining are feasible and effective. The proposed research contributes to effective analysis and control of tool load and to reasonable tool-path planning for the material-accumulating region inside the mould pocket.
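
    For the basic geometric relation underlying such models (textbook straight-pass case; the paper's contribution is tracking how this angle evolves along chains of overlapping circular paths), a worked example:

```python
# Engagement angle for radial depth of cut ae with a tool of radius r:
# the engaged arc of the cutter spans arccos((r - ae) / r).
import math

def engagement_angle(r, ae):
    """Engagement angle in degrees; ae must lie in [0, 2r] (2r = full slot)."""
    if not 0 <= ae <= 2 * r:
        raise ValueError("radial depth must lie in [0, 2r]")
    return math.degrees(math.acos((r - ae) / r))

for ae in (0.5, 2.0, 5.0, 10.0):          # mm, with a 10 mm tool radius
    print(f"ae = {ae:5.1f} mm  ->  {engagement_angle(10.0, ae):6.1f} deg")
```

    The angle, and with it the tool load, grows steeply with radial depth, which is why controlling the engagement angle directly is a better handle on load than fixing the radial depth of cut.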

  15. Modeling quantum physics with machine learning

    NASA Astrophysics Data System (ADS)

    Lopez-Bezanilla, Alejandro; Arsenault, Louis-Francois; Millis, Andrew; Littlewood, Peter; von Lilienfeld, Anatole

    2014-03-01

    Machine Learning (ML) is a systematic way of inferring new results from sparse information. It directly allows for the resolution of computationally expensive sets of equations by making sense of accumulated knowledge, and it is therefore an attractive method for providing computationally inexpensive 'solvers' for some of the important systems of condensed matter physics. In this talk a non-linear regression statistical model is introduced to demonstrate the utility of ML methods in solving quantum-physics-related problems, and is applied to the calculation of electronic transport in 1D channels. DOE contract number DE-AC02-06CH11357.

  16. Finite State Machines and Modal Models in Ptolemy II

    DTIC Science & Technology

    2009-11-01

    Finite State Machines and Modal Models in Ptolemy II. Edward A. Lee, Electrical Engineering and Computer Sciences, University of California at Berkeley. This report describes the usage and semantics of finite-state machines (FSMs) and modal models in Ptolemy II. FSMs are actors whose behavior is described using a

  17. A rule-based approach to model checking of UML state machines

    NASA Astrophysics Data System (ADS)

    Grobelna, Iwona; Grobelny, Michał; Stefanowicz, Łukasz

    2016-12-01

    In this paper a new approach to formal verification of control process specifications expressed by means of UML state machines in version 2.x is proposed. In contrast to other approaches from the literature, we use an abstract and universal rule-based logical model suitable both for model checking (using the nuXmv model checker) and for logical synthesis in the form of rapid prototyping. Hence, a prototype implementation in the hardware description language VHDL can be obtained that fully reflects the primary, already formally verified specification in the form of UML state machines. The presented approach increases assurance that the implemented system meets the user-defined requirements.

  18. A Machine-Learning-Driven Sky Model.

    PubMed

    Satylmys, Pynar; Bashford-Rogers, Thomas; Chalmers, Alan; Debattista, Kurt

    2017-01-01

    Sky illumination is responsible for much of the lighting in a virtual environment. A machine-learning-based approach can compactly represent sky illumination from both existing analytic sky models and from captured environment maps. The proposed approach can approximate the captured lighting at a significantly reduced memory cost and enable smooth transitions of sky lighting to be created from a small set of environment maps captured at discrete times of day. The authors' results demonstrate accuracy close to the ground truth for both analytical and capture-based methods. The approach has a low runtime overhead, so it can be used as a generic approach for both offline and real-time applications.

  19. Selected translated abstracts of Russian-language climate-change publications. 4: General circulation models

    SciTech Connect

    Burtis, M.D.; Razuvaev, V.N.; Sivachok, S.G.

    1996-10-01

    This report presents English-translated abstracts of important Russian-language literature concerning general circulation models as they relate to climate change. In addition to the bibliographic citations and abstracts translated into English, this report presents the original citations and abstracts in Russian. Author and title indexes are included to assist the reader in locating abstracts of particular interest.

  20. Modeling of cumulative tool wear in machining metal matrix composites

    SciTech Connect

    Hung, N.P.; Tan, V.K.; Oon, B.E.

    1995-12-31

    Metal matrix composites (MMCs) are notorious for their low machinability because of their abrasive and brittle reinforcement. Although a near-net-shape product can be produced, finish machining is still required for the final shape and dimensions. The classical Taylor's tool life equation, which relates tool life to cutting conditions, has traditionally been used to study machinability. The turning operation is commonly used to investigate the machinability of a material; tedious and costly milling experiments have to be performed separately, while a facing test is not applicable to Taylor's model since the facing speed varies as the tool moves radially. Collecting extensive machining data for MMCs is often difficult because of constraints on specimen size, the cost of the material, and the availability of sophisticated machine tools. A more flexible model and machinability testing technique are, therefore, sought. This study presents and verifies new models for turning, facing, and milling operations. Different cutting conditions were utilized to assess the machinability of MMCs reinforced with silicon carbide or alumina particles. Experimental data show that tool wear does not depend on the order of different cutting speeds, since abrasion is the main wear mechanism. Correlation between data for turning, milling, and facing is presented. It is more economical to rank machinability using data for facing and then to convert the data for turning and milling, if required. Subsurface damage such as work-hardened and cracked matrix alloy, and fractured and delaminated particles, is discussed.

  1. Generative Modeling for Machine Learning on the D-Wave

    SciTech Connect

    Thulasidasan, Sunil

    2016-11-15

    These are slides on Generative Modeling for Machine Learning on the D-Wave. The following topics are detailed: generative models; Boltzmann machines: a generative model; restricted Boltzmann machines; learning parameters: RBM training; practical ways to train RBM; D-Wave as a Boltzmann sampler; mapping RBM onto the D-Wave; Chimera restricted RBM; mapping binary RBM to Ising model; experiments; data; D-Wave effective temperature, parameters noise, etc.; experiments: contrastive divergence (CD) 1 step; after 50 steps of CD; after 100 steps of CD; D-Wave (experiments 1, 2, 3); D-Wave observations.
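
    For context, a minimal contrastive-divergence (CD-1) update for a tiny binary RBM (generic algorithm sketch with my parameter choices; in the D-Wave setting the Gibbs step below is replaced by samples drawn from the hardware acting as a Boltzmann sampler):

```python
# One CD-1 update for a tiny binary RBM (NumPy; biases omitted for brevity).
import numpy as np

rng = np.random.default_rng(0)
nv, nh, lr = 6, 3, 0.1
W = 0.01 * rng.normal(size=(nv, nh))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0):
    ph0 = sigmoid(v0 @ W)                          # hidden probabilities given data
    h0 = (rng.random(nh) < ph0).astype(float)      # sample hidden units
    pv1 = sigmoid(h0 @ W.T)                        # reconstruct visible units
    v1 = (rng.random(nv) < pv1).astype(float)
    ph1 = sigmoid(v1 @ W)
    return lr * (np.outer(v0, ph0) - np.outer(v1, ph1))  # positive - negative phase

data = rng.integers(0, 2, size=(100, nv)).astype(float)
for _ in range(5):                                 # a few sweeps over the data
    for v in data:
        W += cd1_step(v)
print("trained weights:\n", W.round(2))
```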

  2. Two-Stage Machine Learning model for guideline development.

    PubMed

    Mani, S; Shankle, W R; Dick, M B; Pazzani, M J

    1999-05-01

    We present a Two-Stage Machine Learning (ML) model as a data mining method to develop practice guidelines and apply it to the problem of dementia staging. Dementia staging in clinical settings is at present complex and highly subjective because of the ambiguities and the complicated nature of existing guidelines. Our model abstracts the two-stage process used by physicians to arrive at the global Clinical Dementia Rating Scale (CDRS) score. The model incorporates learning intermediate concepts (CDRS category scores) in the first stage that then become the feature space for the second stage (global CDRS score). The sample consisted of 678 patients evaluated in the Alzheimer's Disease Research Center at the University of California, Irvine. The demographic variables, functional and cognitive test results used by physicians for the task of dementia severity staging were used as input to the machine learning algorithms. Decision tree learners and rule inducers (C4.5, Cart, C4.5 rules) were selected for our study as they give expressive models, and Naive Bayes was used as a baseline algorithm for comparison purposes. We first learned the six CDRS category scores (memory, orientation, judgement and problem solving, personal care, home and hobbies, and community affairs). These learned CDRS category scores were then used to learn the global CDRS scores. The Two-Stage ML model classified as well as or better than the published inter-rater agreements for both the category and global CDRS scoring by dementia experts. Furthermore, for the most critical distinction, normal versus very mildly impaired, the Two-Stage ML model was 28.1 and 6.6% more accurate than published performances by domain experts. Our study of the CDRS examined one of the largest, most diverse samples in the literature, suggesting that our findings are robust. The Two-Stage ML model also identified a CDRS category, Judgment and Problem Solving, which has low classification accuracy similar to published
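
    Structurally, the two-stage idea looks like this (synthetic data and scikit-learn trees as stand-ins; the study used real CDRS assessments and C4.5/CART-family learners): stage 1 predicts the six category scores from the raw features, and stage 2 predicts the global score from the predicted categories.

```python
# Two-stage pipeline: raw features -> category scores -> global score (sklearn).
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n, d, n_cat = 400, 12, 6
X = rng.normal(size=(n, d))                            # demographic + test results
cats = (X[:, :n_cat] + 0.3 * rng.normal(size=(n, n_cat)) > 0).astype(int)
global_score = (cats.sum(axis=1) >= 3).astype(int)     # toy rule standing in for CDRS

stage1 = [DecisionTreeClassifier(max_depth=3).fit(X, cats[:, j]) for j in range(n_cat)]
cat_hat = np.column_stack([m.predict(X) for m in stage1])   # learned category scores

stage2 = DecisionTreeClassifier(max_depth=3).fit(cat_hat, global_score)
print("in-sample accuracy:", stage2.score(cat_hat, global_score))
```

    The intermediate category predictions give the second stage a small, interpretable feature space, mirroring how clinicians score categories before assigning the global rating.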

  3. Modeling situated abstraction : action coalescence via multidimensional coherence.

    SciTech Connect

    Sallach, D. L.; Decision and Information Sciences; Univ. of Chicago

    2007-01-01

    Situated social agents weigh dozens of priorities, each with its own complexities. Domains of interest are intertwined, and progress in one area either complements or conflicts with other priorities. Interpretive agents address these complexities by: (1) integrating cognitive complexities through the use of radial concepts, (2) recognizing the role of emotion in prioritizing alternatives and urgencies, (3) using Miller-range constraints to avoid oversimplified notions of omniscience, and (4) constraining actions to 'moves' in multiple prototype games. Situated agent orientations are dynamically grounded in pragmatic considerations as well as intertwined with internal and external priorities. HokiPoki is a situated abstraction designed to shape and focus strategic agent orientations. The design integrates four pragmatic pairs: (1) problem and solution, (2) dependence and power, (3) constraint and affordance, and (4) (agent) intent and effect. In this way, agents are empowered to address multiple facets of a situation in an exploratory, or even arbitrary, order. HokiPoki is open both to the internal orientation of the agent as it evolves and to the communications and actions of other agents.

  4. Symbolic LTL Compilation for Model Checking: Extended Abstract

    NASA Technical Reports Server (NTRS)

    Rozier, Kristin Y.; Vardi, Moshe Y.

    2007-01-01

    In Linear Temporal Logic (LTL) model checking, we check LTL formulas representing desired behaviors against a formal model of the system designed to exhibit these behaviors. To accomplish this task, the LTL formulas must be translated into automata [21]. We focus on LTL compilation by investigating LTL satisfiability checking via a reduction to model checking. Having shown that symbolic LTL compilation algorithms are superior to explicit automata construction algorithms for this task [16], we concentrate here on seeking a better symbolic algorithm. We present experimental data comparing algorithmic variations such as normal forms, encoding methods, and variable ordering and examine their effects on performance metrics including processing time and scalability. Safety critical systems, such as air traffic control, life support systems, hazardous environment controls, and automotive control systems, pervade our daily lives, yet testing and simulation alone cannot adequately verify their reliability [3]. Model checking is a promising approach to formal verification for safety critical systems which involves creating a formal mathematical model of the system and translating desired safety properties into a formal specification for this model. The complement of the specification is then checked against the system model. When the model does not satisfy the specification, model-checking tools accompany this negative answer with a counterexample, which points to an inconsistency between the system and the desired behaviors and aids debugging efforts.

  5. Towards an Abstraction-Friendly Programming Model for High Productivity and High Performance Computing

    SciTech Connect

    Liao, C; Quinlan, D; Panas, T

    2009-10-06

    General purpose languages, such as C++, permit the construction of various high level abstractions to hide redundant, low level details and accelerate programming productivity. Example abstractions include functions, data structures, classes, templates and so on. However, the use of abstractions significantly impedes static code analyses and optimizations, including parallelization, applied to the abstractions' complex implementations. As a result, there is a common perception that performance is inversely proportional to the level of abstraction. On the other hand, programming large scale, possibly heterogeneous high-performance computing systems is notoriously difficult and programmers are less likely to abandon the help from high level abstractions when solving real-world, complex problems. Therefore, the need for programming models balancing both programming productivity and execution performance has reached a new level of criticality. We are exploring a novel abstraction-friendly programming model in order to support high productivity and high performance computing. We believe that standard or domain-specific semantics associated with high level abstractions can be exploited to aid compiler analyses and optimizations, thus helping achieve high performance without losing high productivity. We encode representative abstractions and their useful semantics into an abstraction specification file. In the meantime, an accessible, source-to-source compiler infrastructure (the ROSE compiler) is used to facilitate recognizing high level abstractions and utilizing their semantics for more optimization opportunities. Our initial work has shown that recognizing abstractions and knowing their semantics within a compiler can dramatically extend the applicability of existing optimizations, including automatic parallelization. Moreover, a new set of optimizations has become possible within an abstraction-friendly and semantics-aware programming model. In the future, we will

  6. Applying model abstraction techniques to optimize monitoring networks for detecting subsurface contaminant transport

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Improving strategies for monitoring subsurface contaminant transport includes performance comparison of competing models, developed independently or obtained via model abstraction. Model comparison and parameter discrimination involve specific performance indicators selected to better understand s...

  7. Derivation of Rigid Body Analysis Models from Vehicle Architecture Abstractions

    DTIC Science & Technology

    2011-06-17

    simultaneously with the model creation. The author has described the evolution of the car design process from the conventional approach to the new development ... models of every type have their basis in some type of physical representation of the design domain. Rather than describing three-dimensional continua of ... arrangement, while capturing just enough physical detail to be used as the basis for a meaningful representation of the design, and eventually, analyses that

  8. Developing a PLC-friendly state machine model: lessons learned

    NASA Astrophysics Data System (ADS)

    Pessemier, Wim; Deconinck, Geert; Raskin, Gert; Saey, Philippe; Van Winckel, Hans

    2014-07-01

    Modern Programmable Logic Controllers (PLCs) have become an attractive platform for controlling real-time aspects of astronomical telescopes and instruments due to their increased versatility, performance and standardization. Likewise, vendor-neutral middleware technologies such as OPC Unified Architecture (OPC UA) have recently demonstrated that they can greatly facilitate the integration of these industrial platforms into the overall control system. Many practical questions arise, however, when building multi-tiered control systems that consist of PLCs for low level control, and conventional software and platforms for higher level control. How should the PLC software be structured, so that it can rely on well-known programming paradigms on the one hand, and be mapped to a well-organized OPC UA interface on the other hand? Which programming languages of the IEC 61131-3 standard closely match the problem domains of the abstraction levels within this structure? How can the recent additions to the standard (such as the support for namespaces and object-oriented extensions) facilitate a model-based development approach? To what degree can our applications already take advantage of the more advanced parts of the OPC UA standard, such as the high expressiveness of the semantic modeling language that it defines, or the support for events, aggregation of data, automatic discovery, ... ? What are the timing and concurrency problems to be expected for the higher level tiers of the control system due to the cyclic execution of control and communication tasks by the PLCs? We try to answer these questions by demonstrating a semantic state machine model that can readily be implemented using IEC 61131 and OPC UA. One that does not aim to capture all possible states of a system, but rather one that attempts to organize the coarse-grained structure and behaviour of a system. In this paper we focus on the intricacies of this seemingly simple task, and on the lessons that we

  9. Context in Models of Human-Machine Systems

    NASA Technical Reports Server (NTRS)

    Callantine, Todd J.; Null, Cynthia H. (Technical Monitor)

    1998-01-01

    All human-machine systems models represent context. This paper proposes a theory of context through which models may be usefully related and integrated for design. The paper presents examples of context representation in various models, describes an application to developing models for the Crew Activity Tracking System (CATS), and advances context as a foundation for integrated design of complex dynamic systems.

  10. Predicting Market Impact Costs Using Nonparametric Machine Learning Models.

    PubMed

    Park, Saerom; Lee, Jaewook; Son, Youngdoo

    2016-01-01

    Market impact cost is the most significant portion of implicit transaction costs, and reducing it can reduce the overall transaction cost; it cannot, however, be measured directly. In this paper, we employed state-of-the-art nonparametric machine learning models (neural networks, Bayesian neural networks, Gaussian processes, and support vector regression) to predict market impact cost accurately and to provide a predictive model that is versatile in the number of variables. We collected a large amount of real single-transaction data for the US stock market from a Bloomberg Terminal and generated three independent input variables. As a result, most nonparametric machine learning models outperformed a state-of-the-art parametric benchmark, the I-star model, on four error measures. Although these models have difficulty separating the permanent and temporary cost directly, nonparametric machine learning models can be good alternatives for reducing transaction costs by considerably improving prediction performance.
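
    A minimal sketch of the modeling setup with two of the named nonparametric regressors from scikit-learn; the three input features and the synthetic data are placeholders for the proprietary transaction data.

        import numpy as np
        from sklearn.svm import SVR
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        # Hypothetical inputs: [order size, volatility, spread]
        X = rng.random((500, 3))
        y = 0.3 * X[:, 0] ** 0.6 * X[:, 1] + 0.1 * X[:, 2] + rng.normal(0, 0.01, 500)

        for name, model in [("SVR", SVR()), ("GP", GaussianProcessRegressor())]:
            mae = -cross_val_score(model, X, y, cv=5,
                                   scoring="neg_mean_absolute_error").mean()
            print(name, round(mae, 4))      # cross-validated error per model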

  12. Mesoscale modeling of molecular machines: cyclic dynamics and hydrodynamical fluctuations.

    PubMed

    Cressman, Andrew; Togashi, Yuichi; Mikhailov, Alexander S; Kapral, Raymond

    2008-05-01

    Proteins acting as molecular machines can undergo cyclic internal conformational motions that are coupled to ligand binding and dissociation events. In contrast to their macroscopic counterparts, nanomachines operate in a highly fluctuating environment, which influences their operation. To bridge the gap between detailed microscopic and simple phenomenological descriptions, a mesoscale approach, which combines an elastic network model of a machine with a particle-based mesoscale description of the solvent, is employed. The time scale of the cyclic hinge motions of the machine prototype is strongly affected by hydrodynamical coupling to the solvent.

  13. X: A Comprehensive Analytic Model for Parallel Machines

    SciTech Connect

    Li, Ang; Song, Shuaiwen; Brugel, Eric; Kumar, Akash; Chavarría-Miranda, Daniel; Corporaal, Henk

    2016-05-23

    To comply continuously with Moore’s Law, modern parallel machines have become increasingly complex. Effectively tuning application performance for these machines has therefore become a daunting task. Moreover, identifying performance bottlenecks at the application and architecture levels, as well as evaluating various optimization strategies, is extremely difficult given the entanglement of numerous correlated factors. To tackle these challenges, we present a visual analytical model named “X”. It is intuitive and sufficiently flexible to track all the typical features of a parallel machine.

  14. Efficient Plasma Ion Source Modeling With Adaptive Mesh Refinement (Abstract)

    SciTech Connect

    Kim, J.S.; Vay, J.L.; Friedman, A.; Grote, D.P.

    2005-03-15

    Ion beam drivers for high energy density physics and inertial fusion energy research require high brightness beams, so there is little margin of error allowed for aberration at the emitter. Thus, accurate plasma ion source computer modeling is required to model the plasma sheath region and time-dependent effects correctly. A computer plasma source simulation module that can be used with a powerful heavy ion fusion code, WARP, or as a standalone code, is being developed. In order to treat the plasma sheath region accurately and efficiently, the module will have the capability of handling multiple spatial scale problems by using Adaptive Mesh Refinement (AMR). We will report on our progress on the project.

  15. Phase Transitions in a Model of Y-Molecules Abstract

    NASA Astrophysics Data System (ADS)

    Holz, Danielle; Ruth, Donovan; Toral, Raul; Gunton, James

    Immunoglobulin is a Y-shaped molecule that functions as an antibody to neutralize pathogens. In special cases where there is a high concentration of immunoglobulin molecules, self-aggregation can occur and the molecules undergo phase transitions, which prevents the molecules from performing their function. We used a simplified two-dimensional model of Y-molecules with three identical arms on a triangular lattice, simulated in the grand canonical ensemble. The molecules were permitted to be placed, removed, rotated or moved on the lattice. Once phase coexistence was found, we used histogram reweighting and multicanonical sampling to calculate the phase diagram.

  16. Abstract: Sample Size Planning for Latent Curve Models.

    PubMed

    Lai, Keke

    2011-11-30

    When designing a study that uses structural equation modeling (SEM), an important task is to decide on an appropriate sample size. Historically, this task has been approached from the power-analytic perspective, where the goal is to obtain sufficient power to reject a false null hypothesis. However, hypothesis testing only tells whether a population effect is zero and fails to address the question of the population effect size. Moreover, significance tests in the SEM context often reject the null hypothesis too easily, and therefore the problem in practice is having too much power rather than not enough. An alternative means to infer the population effect is forming confidence intervals (CIs). A CI is more informative than hypothesis testing because a CI provides a range of plausible values for the population effect size of interest. Given the close relationship between CI and sample size, the sample size for an SEM study can be planned with the goal of obtaining sufficiently narrow CIs for the population model parameters of interest. Latent curve models (LCMs) are an application of SEM with a mean structure to the study of change over time. The sample size planning method for LCMs from the CI perspective is based on maximum likelihood and the expected information matrix. Given a sample, forming a CI for a model parameter of interest in an LCM requires the sample covariance matrix S, the sample mean vector x̄, and the sample size N. Therefore, the width (w) of the resulting CI can be considered a function of S, x̄, and N. Inverting the CI-formation process gives the sample size planning process. The inverted process requires a proxy for the population covariance matrix Σ, the population mean vector μ, and the desired width ω as input, and it returns N as output. The specification of the input information for sample size planning needs to be performed on the basis of a systematic literature review. In the context of covariance structure analysis, Lai and Kelley
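
    A schematic sketch of the inversion idea under strong simplifying assumptions: a single normal-theory parameter whose CI width shrinks as 1/sqrt(N). The real LCM procedure works from the expected information matrix and proxies for Σ, μ, and the desired width ω.

        import math

        def ci_width(sigma, n, z=1.96):
            """Width of a normal-theory CI for a mean-like parameter."""
            return 2 * z * sigma / math.sqrt(n)

        def plan_sample_size(sigma, omega):
            """Invert the CI-formation process: smallest N with width <= omega."""
            n = 2
            while ci_width(sigma, n) > omega:
                n += 1
            return n

        print(plan_sample_size(sigma=1.0, omega=0.2))  # 385 for z = 1.96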

  17. Modelling machine ensembles with discrete event dynamical system theory

    NASA Technical Reports Server (NTRS)

    Hunter, Dan

    1990-01-01

    Discrete Event Dynamical System (DEDS) theory can be utilized as a control strategy for the future complex machine ensembles that will be required for in-space construction. The control strategy involves orchestrating a set of interactive submachines to perform a set of tasks under a given set of constraints, such as minimum time, minimum energy, or maximum machine utilization. Machine ensembles can be hierarchically modeled as a global model that combines the operations of the individual submachines. These submachines are represented in the global model as local models. Local models, from the perspective of DEDS theory, are described by the following: a set of system and transition states, an event alphabet describing the actions that take a submachine from one state to another, an initial system state, a partial function that maps the current state and event alphabet to the next state, and the time required for each event to occur. Each submachine in the machine ensemble is represented by a unique local model. The global model combines the local models so that they can operate in parallel under the additional logistic and physical constraints due to submachine interactions. The global model is constructed from the states, events, event functions, and timing requirements of the local models. Supervisory control can be implemented in the global model by various methods such as task scheduling (open-loop control) or a feedback DEDS controller (closed-loop control).
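
    A minimal sketch of such a timed local model: states, an event alphabet, an initial state, a partial transition function, and per-event durations. The example submachine and its timings are hypothetical.

        # Timed DEDS local model: (states, events, initial state,
        # partial transition function, event durations). Illustrative values.
        class LocalModel:
            def __init__(self, states, init, delta, duration):
                self.states, self.state = states, init
                self.delta, self.duration = delta, duration
                self.clock = 0.0

            def fire(self, event):
                nxt = self.delta.get((self.state, event))  # partial function
                if nxt is None:
                    raise ValueError(f"{event} not enabled in {self.state}")
                self.clock += self.duration[event]
                self.state = nxt

        robot = LocalModel(
            states={"idle", "grasp", "move"}, init="idle",
            delta={("idle", "pick"): "grasp", ("grasp", "carry"): "move",
                   ("move", "release"): "idle"},
            duration={"pick": 2.0, "carry": 5.0, "release": 1.0})
        robot.fire("pick")
        robot.fire("carry")
        print(robot.state, robot.clock)      # move 7.0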

  18. Component based modelling of piezoelectric ultrasonic actuators for machining applications

    NASA Astrophysics Data System (ADS)

    Saleem, A.; Salah, M.; Ahmed, N.; Silberschmidt, V. V.

    2013-07-01

    Ultrasonically Assisted Machining (UAM) is an emerging technology that has been utilized to improve surface finish in machining processes such as turning, milling, and drilling. In this context, piezoelectric ultrasonic transducers are used to vibrate the cutting tip at a predetermined amplitude and frequency while machining. However, modelling and simulation of these transducers is a tedious and difficult task, due to the inherent nonlinearities associated with smart materials. Therefore, this paper presents a component-based model of ultrasonic transducers that mimics the nonlinear behaviour of such a system. The system is decomposed into components, a mathematical model of each component is created, and the whole-system model is assembled by aggregating the component models. System parameters are identified using a finite element technique, and the resulting model is used to simulate the system in MATLAB/Simulink. Various operating conditions are simulated to demonstrate the system performance.

  19. Modeling powder encapsulation in dosator-based machines: I. Theory.

    PubMed

    Khawam, Ammar

    2011-12-15

    Automatic encapsulation machines use one of two dosing principles: dosing disc or dosator. Dosator-based machines compress the powder into plugs that are transferred into capsules. The encapsulation process in dosator-based capsule machines was modeled in this work, and a model was proposed to predict the weight and length of the produced plugs. According to the model, the plug weight is a function of piston dimensions, powder-bed height, bulk powder density and precompression densification inside the dosator, while the plug length is a function of piston height, set piston displacement, spring stiffness and powder compressibility. Powder densification within the dosator can be achieved by precompression, compression or both. Precompression densification depends on the powder-to-piston height ratio, while compression densification depends on the piston displacement against the powder. This article provides the theoretical basis of the encapsulation model, including applications and limitations. The model will be applied to experimental data separately.
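
    As a hedged illustration of the kind of relationship described: plug weight estimated as dosator volume times bulk density times a densification factor. The formula below is an assumed stand-in for illustration, not the published model equations.

        import math

        def plug_weight(piston_diameter_mm, fill_height_mm, bulk_density_g_cm3,
                        densification=1.0):
            """Illustrative plug-weight estimate: cylindrical dosator volume
            * bulk density * precompression densification factor (assumed
            form, not the article's equations)."""
            radius_cm = piston_diameter_mm / 20.0            # mm -> cm
            volume_cm3 = math.pi * radius_cm ** 2 * (fill_height_mm / 10.0)
            return volume_cm3 * bulk_density_g_cm3 * densification

        print(round(plug_weight(3.4, 12.0, 0.45, densification=1.15), 3))  # grams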

  20. Agent Based Computing Machine

    DTIC Science & Technology

    2005-12-09

    coordinates as in cellular automata systems. But using biology as a model suggests that the most general systems must provide for partial, but constrained ... system called an "agent based computing" machine (ABC Machine). The ABC Machine is motivated by cellular biochemistry and is based upon a concept

  1. Committee of machine learning predictors of hydrological models uncertainty

    NASA Astrophysics Data System (ADS)

    Kayastha, Nagendra; Solomatine, Dimitri

    2014-05-01

    In machine-learning-based prediction of uncertainty, the results of various sampling schemes, namely Monte Carlo sampling (MCS), generalized likelihood uncertainty estimation (GLUE), Markov chain Monte Carlo (MCMC), the shuffled complex evolution Metropolis algorithm (SCEMUA), differential evolution adaptive Metropolis (DREAM), particle swarm optimization (PSO) and adaptive cluster covering (ACCO) [1], are used to build predictive models. These models predict the uncertainty (quantiles of the pdf) of the deterministic output of a hydrological model [2]. Inputs to these models are specially identified representative variables (past precipitation events and flows). The trained machine learning models are then employed to predict the model output uncertainty specific to the new input data. For each sampling scheme, three machine learning methods, namely artificial neural networks, model trees, and locally weighted regression, are applied to predict output uncertainties. The problem here is that different sampling algorithms result in different data sets used to train different machine learning models, which leads to several models (21 predictive uncertainty models in total). There is no clear evidence as to which model is best, since there is no basis for comparison. A solution could be to form a committee of all the models and to use a dynamic averaging scheme to generate the final output [3]. This approach is applied to estimate the uncertainty of streamflow simulations from a conceptual hydrological model (HBV) in the Nzoia catchment in Kenya. [1] N. Kayastha, D. L. Shrestha and D. P. Solomatine. Experiments with several methods of parameter uncertainty estimation in hydrological modeling. Proc. 9th Intern. Conf. on Hydroinformatics, Tianjin, China, September 2010. [2] D. L. Shrestha, N. Kayastha, D. P. Solomatine and R. Price. Encapsulation of parametric uncertainty statistics by various predictive machine learning models: MLUE method, Journal of Hydroinformatics, in press
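
    A minimal sketch of a committee with a dynamic (performance-weighted) average of the member models' quantile predictions; the inverse-recent-error weighting rule and the numbers are illustrative assumptions.

        import numpy as np

        def committee_predict(preds, recent_errors, eps=1e-9):
            """Combine quantile predictions from several uncertainty models.
            preds: (n_models,) predictions for one time step.
            recent_errors: (n_models,) recent absolute errors per model.
            Weights ~ 1/error, an assumed dynamic-averaging rule."""
            w = 1.0 / (np.asarray(recent_errors) + eps)
            w /= w.sum()
            return float(np.dot(w, preds))

        preds = [12.1, 11.4, 13.0]    # e.g. 90% quantile of streamflow, m^3/s
        errors = [0.8, 0.3, 1.5]      # recent performance of each member model
        print(committee_predict(preds, errors))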

  2. Parallel phase model : a programming model for high-end parallel machines with manycores.

    SciTech Connect

    Wu, Junfeng; Wen, Zhaofang; Heroux, Michael Allen; Brightwell, Ronald Brian

    2009-04-01

    This paper presents a parallel programming model, Parallel Phase Model (PPM), for next-generation high-end parallel machines based on a distributed memory architecture consisting of a networked cluster of nodes with a large number of cores on each node. PPM has a unified high-level programming abstraction that facilitates the design and implementation of parallel algorithms to exploit both the parallelism of the many cores and the parallelism at the cluster level. The programming abstraction will be suitable for expressing both fine-grained and coarse-grained parallelism. It includes a few high-level parallel programming language constructs that can be added as an extension to an existing (sequential or parallel) programming language such as C; and the implementation of PPM also includes a light-weight runtime library that runs on top of an existing network communication software layer (e.g. MPI). Design philosophy of PPM and details of the programming abstraction are also presented. Several unstructured applications that inherently require high-volume random fine-grained data accesses have been implemented in PPM with very promising results.

  3. Modelling and Control of Mini-Flying Machines

    NASA Astrophysics Data System (ADS)

    Castillo, Pedro; Lozano, Rogelio; Dzul, Alejandro E.

    Problems in the motion control of aircraft are of perennial interest to the control engineer as they tend to be complex and nonlinear in nature. Modelling and Control of Mini-Flying Machines is an exposition of models developed for various types of mini-aircraft: planar Vertical Take-off and Landing aircraft; helicopters; quadrotor mini-rotorcraft; other fixed-wing aircraft; and blimps. For each of these it propounds: detailed models derived from Euler-Lagrange methods; appropriate nonlinear control strategies and convergence properties; real-time experimental comparisons of the performance of control algorithms; a review of the principal sensors, on-board electronics, real-time architecture and communications systems for mini-flying machine control, including discussion of their performance; and a detailed explanation of the use of the Kalman filter for flying machine localization. http://www.springeronline.com/alert/article?a=1_1fva7w_172cml_63f_6

  4. Abstract Model of the SATS Concept of Operations: Initial Results and Recommendations

    NASA Technical Reports Server (NTRS)

    Dowek, Gilles; Munoz, Cesar; Carreno, Victor A.

    2004-01-01

    An abstract mathematical model of the concept of operations for the Small Aircraft Transportation System (SATS) is presented. The Concept of Operations consists of several procedures that describe nominal operations for SATS. Several safety properties of the system are proven using formal techniques. The final goal of the verification effort is to show that under nominal operations, aircraft are safely separated. The abstract model was written and formally verified in the Prototype Verification System (PVS).

  5. Restricted Boltzmann machines for the long range Ising models

    NASA Astrophysics Data System (ADS)

    Aoki, Ken-Ichi; Kobayashi, Tamao

    2016-12-01

    We set up restricted Boltzmann machines (RBMs) to reproduce the long range Ising (LRI) models of the Ohmic type in one dimension. The RBM parameters are tuned using the standard machine learning procedure with an additional method of configuration with probability (CwP). The quality of the resultant RBM is evaluated through the susceptibility with respect to the external magnetic field. We compare the results with those obtained by the block decimation renormalization group (BDRG) method, and our RBMs pass the test with satisfactory precision.
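
    A minimal sketch of a binary RBM trained with one step of contrastive divergence (CD-1). Sizes, hyperparameters, and the placeholder training data are illustrative assumptions; real training would use LRI spin configurations mapped from {-1,+1} to {0,1}, and the CwP method is not reproduced here.

        import numpy as np

        rng = np.random.default_rng(1)
        n_vis, n_hid, lr = 8, 4, 0.1                   # illustrative sizes
        W = 0.01 * rng.standard_normal((n_vis, n_hid))

        def sigmoid(x):
            return 1.0 / (1.0 + np.exp(-x))

        def cd1_update(v0):
            """One contrastive-divergence step (biases omitted for brevity)."""
            global W
            ph0 = sigmoid(v0 @ W)                      # p(h=1 | v0)
            h0 = (rng.random(n_hid) < ph0).astype(float)
            pv1 = sigmoid(h0 @ W.T)                    # reconstruction
            v1 = (rng.random(n_vis) < pv1).astype(float)
            ph1 = sigmoid(v1 @ W)
            W += lr * (np.outer(v0, ph0) - np.outer(v1, ph1))

        for _ in range(1000):
            # Placeholder data; real training would draw LRI configurations
            v = (rng.random(n_vis) < 0.5).astype(float)
            cd1_update(v)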

  6. A non linear analytical model of switched reluctance machines

    NASA Astrophysics Data System (ADS)

    Sofiane, Y.; Tounzi, A.; Piriou, F.

    2002-06-01

    Nowadays, switched reluctance machines are widely used. To determine their performance and to design control strategies, the linear analytical model is generally used. Unfortunately, it is not very accurate. To obtain accurate modelling results, numerical models based on the 2D or 3D Finite Element Method are used instead. However, this approach is very expensive in terms of computation time, and while it is suitable for studying the behaviour of a whole device, it is not, a priori, adapted to developing control strategies for electrical machines. This paper presents a nonlinear analytical model in terms of variable inductances. The theoretical development of the proposed model is introduced. The model is then applied to study the behaviour of a complete controlled switched reluctance machine. The parameters of the structure are identified from a 2D numerical model; they can also be determined from an experimental bench. The results given by the proposed model are compared to those from the 2D-FEM approach and from the classical linear analytical model.

  7. Three dimensional CAD model of the Ignitor machine

    NASA Astrophysics Data System (ADS)

    Orlandi, S.; Zanaboni, P.; Macco, A.; Sioli, V.; Risso, E.

    1998-11-01

    The final, global product of all the structural and thermomechanical design activities is a complete three dimensional CAD (AutoCAD and Intergraph Design Review) model of the IGNITOR machine. With this powerful tool, any interface, modification, or upgrading of the machine design is managed as an integrated part of the general effort aimed at the construction of the Ignitor facility. The activities that are underway to complete the design of the core of the experiment, and that will be described, concern the following: the cryogenic cooling system; the radial press, the center post, and the mechanical supports (legs) of the entire machine; and the inner mechanical supports of major components such as the plasma chamber and the outer poloidal field coils.

  8. Adding Abstraction and Reuse to a Network Modelling Tool Using the Reuseware Composition Framework

    NASA Astrophysics Data System (ADS)

    Johannes, Jendrik; Fernández, Miguel A.

    Domain-specific modelling (DSM) environments enable experts in a certain domain to actively participate in model-driven development. The development of DSM environments needs to be cost-efficient, since they are used by only a limited group of domain experts. Various model-driven technologies promise to allow this cost-efficient development. [1] presented experiences in developing a DSM environment for telecommunication network modelling, and identified challenges that need to be addressed by new modelling technologies. In this paper, we present the results of addressing one of these challenges - abstraction and reuse support - with the Reuseware Composition Framework. We show how we identified the abstraction and reuse features required in the telecommunication DSM environment in a case study and extended the existing environment with these features using Reuseware. We discuss the advantages of using this technology and propose a process for further improving the abstraction and reuse capabilities of the DSM environment in the future.

  9. Applying Machine Trust Models to Forensic Investigations

    NASA Astrophysics Data System (ADS)

    Wojcik, Marika; Venter, Hein; Eloff, Jan; Olivier, Martin

    Digital forensics involves the identification, preservation, analysis and presentation of electronic evidence for use in legal proceedings. In the presence of contradictory evidence, forensic investigators need a means to determine which evidence can be trusted. This is particularly true in a trust model environment where computerised agents may make trust-based decisions that influence interactions within the system. This paper focuses on the analysis of evidence in trust-based environments and the determination of the degree to which evidence can be trusted. The trust model proposed in this work may be implemented in a tool for conducting trust-based forensic investigations. The model takes into account the trust environment and parameters that influence interactions in a computer network being investigated. Also, it allows for crimes to be reenacted to create more substantial evidentiary proof.

  10. Global ocean modeling on the Connection Machine

    SciTech Connect

    Smith, R.D.; Dukowicz, J.K.; Malone, R.C.

    1993-10-01

    The authors have developed a version of the Bryan-Cox-Semtner ocean model (Bryan, 1969; Semtner, 1976; Cox, 1984) for massively parallel computers. Such models are three-dimensional, Eulerian models that use latitude and longitude as the horizontal spherical coordinates and fixed depth levels as the vertical coordinate. The incompressible Navier-Stokes equations, with a turbulent eddy viscosity, and the mass continuity equation are solved, subject to the hydrostatic and Boussinesq approximations. The traditional model formulation uses a rigid-lid approximation (vertical velocity = 0 at the ocean surface) to eliminate fast surface waves. These waves would otherwise require that a very short time step be used in numerical simulations, which would greatly increase the computational cost. To solve the equations with the rigid-lid assumption, the equations of motion are split into two parts: a set of two-dimensional "barotropic" equations describing the vertically-averaged flow, and a set of three-dimensional "baroclinic" equations describing temperature, salinity and deviations of the horizontal velocities from the vertically-averaged flow.

  11. Bilingual Cluster Based Models for Statistical Machine Translation

    NASA Astrophysics Data System (ADS)

    Yamamoto, Hirofumi; Sumita, Eiichiro

    We propose a domain-specific model for statistical machine translation. It is well known that domain-specific language models perform well in automatic speech recognition. We show that domain-specific language and translation models also benefit statistical machine translation. However, there are two problems with using domain-specific models. The first is the data sparseness problem, which we overcome with an adaptation technique. The second issue is domain prediction. In order to perform adaptation, the domain must be provided; however, in many cases the domain is not known or changes dynamically. For these cases, not only the target sentence but also the domain must be predicted. This paper focuses on the domain prediction problem for statistical machine translation. In the proposed method, a bilingual training corpus is automatically clustered into sub-corpora. Each sub-corpus is deemed to be a domain. The domain of a source sentence is predicted by using its similarity to the sub-corpora. The predicted domain (sub-corpus) specific language and translation models are then used for the translation decoding. This approach gave an improvement of 2.7 BLEU points on the IWSLT05 Japanese-to-English evaluation corpus (improving the score from 52.4 to 55.1). This is a substantial gain and indicates the validity of the proposed bilingual cluster-based models.

  12. Abstract Machines for Polymorphous Computing

    DTIC Science & Technology

    2007-12-01

    In this paper, the scope of the word "configuration" is expanded to include also the mapping of the application onto the reconfigurable ... optimization. The focus of this paper is thus on the on-line refinement component and its interaction with the configuration store. For a given instance ... Mattson, J. Namkoong, J. D. Owens, B. Towles, and A. Chang, "Imagine: Media Processing with Streams," IEEE Micro, March/April 2001, pp. 35-46.

  13. The rise of machine consciousness: studying consciousness with computational models.

    PubMed

    Reggia, James A

    2013-08-01

    Efforts to create computational models of consciousness have accelerated over the last two decades, creating a field that has become known as artificial consciousness. There have been two main motivations for this controversial work: to develop a better scientific understanding of the nature of human/animal consciousness and to produce machines that genuinely exhibit conscious awareness. This review begins by briefly explaining some of the concepts and terminology used by investigators working on machine consciousness, and summarizes key neurobiological correlates of human consciousness that are particularly relevant to past computational studies. Models of consciousness developed over the last twenty years are then surveyed. These models are largely found to fall into five categories based on the fundamental issue that their developers have selected as being most central to consciousness: a global workspace, information integration, an internal self-model, higher-level representations, or attention mechanisms. For each of these five categories, an overview of past work is given, a representative example is presented in some detail to illustrate the approach, and comments are provided on the contributions and limitations of the methodology. Three conclusions are offered about the state of the field based on this review: (1) computational modeling has become an effective and accepted methodology for the scientific study of consciousness, (2) existing computational models have successfully captured a number of neurobiological, cognitive, and behavioral correlates of conscious information processing as machine simulations, and (3) no existing approach to artificial consciousness has presented a compelling demonstration of phenomenal machine consciousness, or even clear evidence that artificial phenomenal consciousness will eventually be possible. The paper concludes by discussing the importance of continuing work in this area, considering the ethical issues it raises

  14. Transferable Atomic Multipole Machine Learning Models for Small Organic Molecules.

    PubMed

    Bereau, Tristan; Andrienko, Denis; von Lilienfeld, O Anatole

    2015-07-14

    Accurate representation of the molecular electrostatic potential, which is often expanded in distributed multipole moments, is crucial for an efficient evaluation of intermolecular interactions. Here we introduce a machine learning model for multipole coefficients of atom types H, C, O, N, S, F, and Cl in any molecular conformation. The model is trained on quantum-chemical results for atoms in varying chemical environments drawn from thousands of organic molecules. Multipoles in systems with neutral, cationic, and anionic molecular charge states are treated with individual models. The models' predictive accuracy and applicability are illustrated by evaluating intermolecular interaction energies of nearly 1,000 dimers and the cohesive energy of the benzene crystal.

  15. Parallelizing the track-target model for the MIMD machine

    SciTech Connect

    Zhong Xiong, W.; Swietlik, C.

    1992-09-01

    Military tracking-target systems are important analysis tools for modelling the major functions of a strategic defense system operating against a ballistic missile threat during a simulated end-to-end scenario. As demands grow for modelling more trajectories with increasing numbers of missile types, so have the demands for processing power. Argonne National Laboratory has developed a parallel version of this tracking-target model. The parallel version has exhibited speedups of up to a factor of 6.3 on a shared-memory multiprocessor machine. This paper documents a project to implement the tracking-target model in a parallel processing environment.

  16. Modelling, abstraction, and computation in systems biology: A view from computer science.

    PubMed

    Melham, Tom

    2013-04-01

    Systems biology is centrally engaged with computational modelling across multiple scales and at many levels of abstraction. Formal modelling, precise and formalised abstraction relationships, and computation also lie at the heart of computer science--and over the past decade a growing number of computer scientists have been bringing their discipline's core intellectual and computational tools to bear on biology in fascinating new ways. This paper explores some of the apparent points of contact between the two fields, in the context of a multi-disciplinary discussion on conceptual foundations of systems biology.

  17. Stochastic Local Interaction (SLI) model: Bridging machine learning and geostatistics

    NASA Astrophysics Data System (ADS)

    Hristopulos, Dionissios T.

    2015-12-01

    Machine learning and geostatistics are powerful mathematical frameworks for modeling spatial data. Both approaches, however, suffer from poor scaling of the required computational resources for large data applications. We present the Stochastic Local Interaction (SLI) model, which employs a local representation to improve computational efficiency. SLI combines geostatistics and machine learning with ideas from statistical physics and computational geometry. It is based on a joint probability density function defined by an energy functional which involves local interactions implemented by means of kernel functions with adaptive local kernel bandwidths. SLI is expressed in terms of an explicit, typically sparse, precision (inverse covariance) matrix. This representation leads to a semi-analytical expression for interpolation (prediction), which is valid in any number of dimensions and avoids the computationally costly covariance matrix inversion.
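
    A schematic sketch of the key computational point: with a sparse precision (inverse covariance) matrix, prediction reduces to solving a local linear system instead of inverting a dense covariance. The kernel construction below is a simplified stand-in for the SLI energy functional, and all values are illustrative.

        import numpy as np

        rng = np.random.default_rng(2)
        pts = rng.random((60, 2))                      # sample locations

        # Sparse precision matrix from local interactions (simplified stand-in
        # for the SLI energy functional): neighbours within a fixed radius.
        d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
        J = np.where((d > 0) & (d < 0.2), 1.0, 0.0)    # local kernel, radius 0.2
        Q = np.diag(J.sum(1) + 0.1) - J                # diagonally dominant -> SPD

        obs = np.arange(40)                            # observed sites
        unk = np.arange(40, 60)                        # prediction sites
        x_obs = np.sin(3 * pts[obs, 0]) + 0.05 * rng.standard_normal(40)

        # GMRF conditional mean: x_u = -Q_uu^{-1} Q_uo x_o (semi-analytical;
        # only a sparse local system needs to be solved)
        x_pred = -np.linalg.solve(Q[np.ix_(unk, unk)], Q[np.ix_(unk, obs)] @ x_obs)
        print(x_pred[:3])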

  18. 97. View of International Business Machine (IBM) digital computer model ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    97. View of International Business Machine (IBM) digital computer model 7090 magnetic core installation, international telephone and telegraph (ITT) Artic Services Inc., Official photograph BMEWS site II, Clear, AK, by unknown photographer, 17 September 1965, BMEWS, clear as negative no. A-6604. - Clear Air Force Station, Ballistic Missile Early Warning System Site II, One mile west of mile marker 293.5 on Parks Highway, 5 miles southwest of Anderson, Anderson, Denali Borough, AK

  19. Hydrogen-atom abstraction from a model amino acid: dependence on the attacking radical.

    PubMed

    Amos, Ruth I J; Chan, Bun; Easton, Christopher J; Radom, Leo

    2015-01-22

    We have used computational chemistry to examine the reactivity of a model amino acid toward hydrogen abstraction by HO•, HOO•, and Br•. The trends in the calculated condensed-phase (acetic acid) free energy barriers are in accord with experimental relative reactivities. Our calculations suggest that HO• is likely to be the abstracting species for reactions with hydrogen peroxide. For HO• abstractions, the barriers decrease as the site of reaction becomes more remote from the electron-withdrawing α-substituents, in accord with a diminishing polar deactivating effect. We find that the transition structures for α- and β-abstractions have additional hydrogen-bonding interactions, which lead to lower gas-phase vibrationless electronic barriers at these positions. Such favorable interactions become less important in a polar solvent such as acetic acid, and this leads to larger calculated barriers when the effect of solvation is taken into account. For Br• abstractions, the α-barrier is the smallest while the β-barrier is the largest, with the barrier gradually becoming smaller further along the side chain. We attribute the low barrier for the α-abstraction in this case to the partial reflection of the thermodynamic effect of the captodatively stabilized α-radical product in the more product-like transition structure, while the trend of decreasing barriers in the order β > γ > δ ∼ ε is explained by the diminishing polar deactivating effect. More generally, the favorable influence of thermodynamic effects on the α-abstraction barrier is found to be smaller when the transition structure for hydrogen abstraction is earlier.

  20. Modeling and analysis of uncertainty in on-machine form characterization of diamond-machined optical micro-structured surfaces

    NASA Astrophysics Data System (ADS)

    Zhu, Wu-Le; Zhu, Zhiwei; Ren, Mingjun; Ehmann, Kornel F.; Ju, Bing-Feng

    2016-12-01

    Ultra-precision diamond machining is widely used in the manufacture of optical micro-structured surfaces with sub-micron form accuracy. As optical performance is highly-dependent on surface form accuracy, it is critically important to use reliable form characterization methods for surface quality control. To ascertain the characteristics of real machined surfaces, a reliable on-machine spiral scanning approach with high fidelity is presented in this paper. However, since many uncertainty contributors that lead to significant variations in the characterization results are unavoidable, an error analysis model is developed to identify the associated uncertainties to facilitate the reliable quantification of the demanding specifications of the manufactured surfaces. To accomplish this, both the diamond machining process and the on-machine spiral scanning procedure are investigated. Through the proposed model, via the Monte Carlo method, the estimation of form error parameters of a compound eye lens array is conducted in correlation with form deviations, scanning centering errors, measurement drift and noise, etc. Application experiments, using an on-machine scanning tunneling microscope, verify the proposed model and also confirm its potential superiority over the conventional off-machine raster scanning method for surface characterization and quality control.
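
    A schematic Monte Carlo sketch of how uncertainty contributors can be propagated into a form-error estimate: sample the uncertain inputs, evaluate a measurement model, and summarize the output distribution. The disturbance model and magnitudes are illustrative assumptions, not the paper's error model.

        import numpy as np

        rng = np.random.default_rng(3)

        def form_error(centering_err, drift, noise_sd, n_pts=2000):
            """Peak-to-valley form error of a nominally flat profile under
            assumed disturbance sources (illustrative model)."""
            t = np.linspace(0.0, 1.0, n_pts)
            profile = (centering_err * np.sin(2 * np.pi * t)   # centering error
                       + drift * t                             # measurement drift
                       + rng.normal(0.0, noise_sd, n_pts))     # sensor noise
            return profile.max() - profile.min()

        # Monte Carlo propagation: sample uncertain inputs, collect the output
        samples = [form_error(rng.normal(0, 0.02), rng.normal(0, 0.01), 0.005)
                   for _ in range(2000)]
        print(np.mean(samples), np.percentile(samples, 95))    # e.g. micrometres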

  1. Geochemistry Model Abstraction and Sensitivity Studies for the 21 PWR CSNF Waste Package

    SciTech Connect

    P. Bernot; S. LeStrange; E. Thomas; K. Zarrabi; S. Arthur

    2002-10-29

    The CSNF geochemistry model abstraction, as directed by the TWP (BSC 2002b), was developed to provide regression analyses of EQ6 cases to obtain abstracted values of pH (and in some cases HCO₃⁻ concentration) for use in the Configuration Generator Model. The pH of the system is the controlling factor for U mineralization, the CSNF degradation rate, and the HCO₃⁻ concentration in solution. The abstraction encompasses a large variety of combinations of the degradation rates of materials. The "base case" used EQ6 simulations examining differing steel/alloy corrosion rates, drip rates, and percentages of fuel exposure. Other values, such as the pH/HCO₃⁻-dependent fuel corrosion rate and the corrosion rate of A516, were kept constant. Relationships were developed for pH as a function of these differing rates, to be used in the calculation of total C and, subsequently, the fuel rate. An additional refinement to the abstraction was the addition of abstracted pH values for cases with limited O₂ for waste package corrosion and a flushing fluid other than J-13, which had been used in all EQ6 calculations up to this point. These abstractions also used EQ6 simulations with varying combinations of material corrosion rates to abstract pH (and HCO₃⁻ in the limited-O₂ cases) as a function of the corrosion rates of the waste package materials. The goodness of fit for most of the abstracted values was above an R² of 0.9. Values below this threshold occurred at the very beginning of waste package corrosion, when large variations in the system pH are observed. However, the significance of the F-statistic for all the abstractions showed that the variable relationships are significant. For the abstraction, an analysis of the minerals that may form the "sludge" in the waste package was also presented. This analysis indicates that a number of different iron and aluminum minerals may form in the waste package other than those

  2. "Machine" consciousness and "artificial" thought: an operational architectonics model guided approach.

    PubMed

    Fingelkurts, Andrew A; Fingelkurts, Alexander A; Neves, Carlos F H

    2012-01-05

    Instead of using low-level neurophysiology mimicking and exploratory programming methods commonly used in the machine consciousness field, the hierarchical operational architectonics (OA) framework of brain and mind functioning proposes an alternative conceptual-theoretical framework as a new direction in the area of model-driven machine (robot) consciousness engineering. The unified brain-mind theoretical OA model explicitly captures (though in an informal way) the basic essence of brain functional architecture, which indeed constitutes a theory of consciousness. The OA describes the neurophysiological basis of the phenomenal level of brain organization. In this context the problem of producing man-made "machine" consciousness and "artificial" thought is a matter of duplicating all levels of the operational architectonics hierarchy (with its inherent rules and mechanisms) found in the brain electromagnetic field. We hope that the conceptual-theoretical framework described in this paper will stimulate the interest of mathematicians and/or computer scientists to abstract and formalize principles of hierarchy of brain operations which are the building blocks for phenomenal consciousness and thought.

  3. Technical Work Plan for: Near Field Environment: Engineered System: Radionuclide Transport Abstraction Model Report

    SciTech Connect

    J.D. Schreiber

    2006-12-08

    This technical work plan (TWP) describes work activities to be performed by the Near-Field Environment Team. The objective of the work scope covered by this TWP is to generate Revision 03 of EBS Radionuclide Transport Abstraction, referred to herein as the radionuclide transport abstraction (RTA) report. The RTA report is being revised primarily to address condition reports (CRs), to address issues identified by the Independent Validation Review Team (IVRT), to address the potential impact of transport, aging, and disposal (TAD) canister design on transport models, and to ensure integration with other models that are closely associated with the RTA report and being developed or revised in other analysis/model reports in response to IVRT comments. The RTA report will be developed in accordance with the most current version of LP-SIII.10Q-BSC and will reflect current administrative procedures (LP-3.15Q-BSC, ''Managing Technical Product Inputs''; LP-SIII.2Q-BSC, ''Qualification of Unqualified Data''; etc.), and will develop related Document Input Reference System (DIRS) reports and data qualifications as applicable in accordance with prevailing procedures. The RTA report consists of three models: the engineered barrier system (EBS) flow model, the EBS transport model, and the EBS-unsaturated zone (UZ) interface model. The flux-splitting submodel in the EBS flow model will change, so the EBS flow model will be validated again. The EBS transport model and validation of the model will be substantially revised in Revision 03 of the RTA report, which is the main subject of this TWP. The EBS-UZ interface model may be changed in Revision 03 of the RTA report due to changes in the conceptualization of the UZ transport abstraction model (a particle tracker transport model based on the discrete fracture transfer function will be used instead of the dual-continuum transport model previously used). Validation of the EBS-UZ interface model will be revised to be consistent with

  4. Modelling fate and transport of pesticides in river catchments with drinking water abstractions

    NASA Astrophysics Data System (ADS)

    Desmet, Nele; Seuntjens, Piet; Touchant, Kaatje

    2010-05-01

    When drinking water is abstracted from surface water, the presence of pesticides may have a large impact on the purification costs. In order to respect imposed thresholds at points of drinking water abstraction in a river catchment, sustainable pesticide management strategies might be required in certain areas. To improve management strategies, a sound understanding of the emission routes, the transport, the environmental fate and the sources of pesticides is needed. However, pesticide monitoring data on which measures are founded, are generally scarce. Data scarcity hampers the interpretation and the decision making. In such a case, a modelling approach can be very useful as a tool to obtain complementary information. Modelling allows to take into account temporal and spatial variability in both discharges and concentrations. In the Netherlands, the Meuse river is used for drinking water abstraction and the government imposes the European drinking water standard for individual pesticides (0.1 ?g.L-1) for surface waters at points of drinking water abstraction. The reported glyphosate concentrations in the Meuse river frequently exceed the standard and this enhances the request for targeted measures. In this study, a model for the Meuse river was developed to estimate the contribution of influxes at the Dutch-Belgian border on the concentration levels detected at the drinking water intake 250 km downstream and to assess the contribution of the tributaries to the glyphosate loads. The effects of glyphosate decay on environmental fate were considered as well. Our results show that the application of a river model allows to asses fate and transport of pesticides in a catchment in spite of monitoring data scarcity. Furthermore, the model provides insight in the contribution of different sub basins to the pollution level. The modelling results indicate that the effect of local measures to reduce pesticides concentrations in the river at points of drinking water

  6. Editorial: Mathematical Methods and Modeling in Machine Fault Diagnosis

    SciTech Connect

    Yan, Ruqiang; Chen, Xuefeng; Li, Weihua; Sheng, Shuangwen

    2014-12-18

    Modern mathematics has commonly been utilized as an effective tool to model mechanical equipment so that their dynamic characteristics can be studied analytically. This will help identify potential failures of mechanical equipment by observing change in the equipment’s dynamic parameters. On the other hand, dynamic signals are also important and provide reliable information about the equipment’s working status. Modern mathematics has also provided us with a systematic way to design and implement various signal processing methods, which are used to analyze these dynamic signals, and to enhance intrinsic signal components that are directly related to machine failures. This special issue is aimed at stimulating not only new insights on mathematical methods for modeling but also recently developed signal processing methods, such as sparse decomposition with potential applications in machine fault diagnosis. Finally, the papers included in this special issue provide a glimpse into some of the research and applications in the field of machine fault diagnosis through applications of the modern mathematical methods.

  7. Modeling the Swift BAT Trigger Algorithm with Machine Learning

    NASA Astrophysics Data System (ADS)

    Graff, Philip B.; Lien, Amy Y.; Baker, John G.; Sakamoto, Takanori

    2016-02-01

    To draw inferences about gamma-ray burst (GRB) source populations based on Swift observations, it is essential to understand the detection efficiency of the Swift Burst Alert Telescope (BAT). This study considers the problem of modeling the Swift/BAT triggering algorithm for long GRBs, a computationally expensive procedure, and models it using machine learning algorithms. A large sample of simulated GRBs from Lien et al. is used to train various models: random forests, boosted decision trees (with AdaBoost), support vector machines, and artificial neural networks. The best models have accuracies of ≳97% (≲3% error), which is a significant improvement on a cut in GRB flux, which has an accuracy of 89.6% (10.4% error). These models are then used to measure the detection efficiency of Swift as a function of redshift z, which is used to perform Bayesian parameter estimation on the GRB rate distribution. We find a local GRB rate density of n0 ~ 0.48 (+0.41/-0.23) Gpc^-3 yr^-1, with power-law indices of n1 ~ 1.7 (+0.6/-0.5) and n2 ~ -5.9 (+5.7/-0.1) for GRBs above and below a break point of z1 ~ 6.8 (+2.8/-3.2). This methodology improves upon earlier studies by more accurately modeling Swift detection and using this for fully Bayesian model fitting.
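
    A minimal sketch of the approach: train a fast classifier on simulated bursts to stand in for the expensive trigger simulation. The features, labels, and data below are synthetic placeholders, not the Lien et al. sample.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(4)
        # Synthetic stand-ins for simulated GRB properties (flux, duration, ...)
        X = rng.random((5000, 3))
        y = (X[:, 0] * X[:, 1] ** 0.5 > 0.3).astype(int)  # placeholder trigger label

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
        print("accuracy:", clf.score(X_te, y_te))

        # The trained model then serves as a fast detection-efficiency estimator,
        # e.g. by averaging predicted trigger probability in redshift bins.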

  8. Modeling of autoresonant control of a parametrically excited screen machine

    NASA Astrophysics Data System (ADS)

    Abolfazl Zahedi, S.; Babitsky, Vladimir

    2016-10-01

    Modelling of nonlinear dynamic response of a screen machine described by the nonlinear coupled differential equations and excited by the system of autoresonant control is presented. The displacement signal of the screen is fed to the screen excitation directly by means of positive feedback. Negative feedback is used to fix the level of screen amplitude response within the expected range. The screen is anticipated to vibrate with a parametric resonance and the excitation, stabilization and control response of the system are studied in the stable mode. Autoresonant control is thoroughly investigated and output tracking is reported. The control developed provides the possibility of self-tuning and self-adaptation mechanisms that allow the screen machine to maintain a parametric resonant mode of oscillation under a wide range of uncertainty of mass and viscosity.

  9. Model-based object classification using unification grammars and abstract representations

    NASA Astrophysics Data System (ADS)

    Liburdy, Kathleen A.; Schalkoff, Robert J.

    1993-04-01

    The design and implementation of a high level computer vision system which performs object classification is described. General object labelling and functional analysis require models of classes which display a wide range of geometric variations. A large representational gap exists between abstract criteria such as 'graspable' and current geometric image descriptions. The vision system developed and described in this work addresses this problem and implements solutions based on a fusion of semantics, unification, and formal language theory. Object models are represented using unification grammars, which provide a framework for the integration of structure and semantics. A methodology for the derivation of symbolic image descriptions capable of interacting with the grammar-based models is described and implemented. A unification-based parser developed for this system achieves object classification by determining if the symbolic image description can be unified with the abstract criteria of an object model. Future research directions are indicated.

  10. Model-Driven Engineering of Machine Executable Code

    NASA Astrophysics Data System (ADS)

    Eichberg, Michael; Monperrus, Martin; Kloppenburg, Sven; Mezini, Mira

    Implementing static analyses of machine-level executable code is labor intensive and complex. We show how to leverage model-driven engineering to facilitate the design and implementation of programs doing static analyses. Further, we report on important lessons learned on the benefits and drawbacks while using the following technologies: using the Scala programming language as target of code generation, using XML-Schema to express a metamodel, and using XSLT to implement (a) transformations and (b) a lint like tool. Finally, we report on the use of Prolog for writing model transformations.

  11. Vehicle Concept Model Abstractions for Integrated Geometric, Inertial, Rigid Body, Powertrain, and FE Analysis

    DTIC Science & Technology

    2011-01-01

    including abstractions specific to ... The nomenclature "simplified model" has also been applied to attribute-based FEMs. We avoid this terminology because these models, while small in terms of element count, involve modeling decisions ...

  12. Machine learning and docking models for Mycobacterium tuberculosis topoisomerase I.

    PubMed

    Ekins, Sean; Godbole, Adwait Anand; Kéri, György; Orfi, Lászlo; Pato, János; Bhat, Rajeshwari Subray; Verma, Rinkee; Bradley, Erin K; Nagaraja, Valakunja

    2017-03-01

    There is a shortage of compounds that are directed towards new targets apart from those targeted by the FDA approved drugs used against Mycobacterium tuberculosis. Topoisomerase I (Mttopo I) is an essential mycobacterial enzyme and a promising target in this regard. However, it suffers from a shortage of known inhibitors. We have previously used computational approaches such as homology modeling and docking to propose 38 FDA approved drugs for testing and identified several active molecules. To follow on from this, we now describe the in vitro testing of a library of 639 compounds. These data were used to create machine learning models for Mttopo I which were further validated. The combined Mttopo I Bayesian model had a 5 fold cross validation receiver operator characteristic of 0.74 and sensitivity, specificity and concordance values above 0.76 and was used to select commercially available compounds for testing in vitro. The recently described crystal structure of Mttopo I was also compared with the previously described homology model and then used to dock the Mttopo I actives norclomipramine and imipramine. In summary, we describe our efforts to identify small molecule inhibitors of Mttopo I using a combination of machine learning modeling and docking studies in conjunction with screening of the selected molecules for enzyme inhibition. We demonstrate the experimental inhibition of Mttopo I by small molecule inhibitors and show that the enzyme can be readily targeted for lead molecule development.

  13. Calibrating Building Energy Models Using Supercomputer Trained Machine Learning Agents

    SciTech Connect

    Sanyal, Jibonananda; New, Joshua Ryan; Edwards, Richard; Parker, Lynne Edwards

    2014-01-01

    Building Energy Modeling (BEM) is an approach to model the energy usage in buildings for design and retrofit purposes. EnergyPlus is the flagship Department of Energy software that performs BEM for different types of buildings. The input to EnergyPlus can extend to a few thousand parameters, which have to be calibrated manually by an expert for realistic energy modeling. This makes calibration challenging and expensive, rendering building energy modeling infeasible for smaller projects. In this paper, we describe the Autotune research, which employs machine learning algorithms to generate agents for the different kinds of standard reference buildings in the U.S. building stock. The parametric space and the variety of building locations and types make this a challenging computational problem necessitating the use of supercomputers. Millions of EnergyPlus simulations are run on supercomputers and subsequently used to train machine learning algorithms to generate agents. These agents, once created, can then run in a fraction of the time, thereby allowing cost-effective calibration of building models.

  14. Identifying crop vulnerability to groundwater abstraction: modelling and expert knowledge in a GIS.

    PubMed

    Procter, Chris; Comber, Lex; Betson, Mark; Buckley, Dennis; Frost, Andy; Lyons, Hester; Riding, Alison; Voyce, Kevin

    2006-11-01

    Water use is expected to increase and climate change scenarios indicate the need for more frequent water abstraction. Abstracting groundwater may have a detrimental effect on soil moisture availability for crop growth and yields. This work presents an elegant and robust method for identifying zones of crop vulnerability to abstraction. Archive groundwater level datasets were used to generate a composite groundwater surface that was subtracted from a digital terrain model. The result was the depth from surface to groundwater and identified areas underlain by shallow groundwater. Knowledge from an expert agronomist was used to define classes of risk in terms of their depth below ground level. Combining information on the permeability of geological drift types further refined the assessment of the risk of crop growth vulnerability. The nature of the mapped output is one that is easy to communicate to the intended farming audience because of the general familiarity of mapped information. Such Geographic Information System (GIS)-based products can play a significant role in the characterisation of catchments under the EU Water Framework Directive especially in the process of public liaison that is fundamental to the setting of priorities for management change. The creation of a baseline allows the impact of future increased water abstraction rates to be modelled and the vulnerability maps are in a format that can be readily understood by the various stakeholders. This methodology can readily be extended to encompass additional data layers and for a range of groundwater vulnerability issues including water resources, ecological impacts, nitrate and phosphorus.
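
    A minimal raster sketch of the core GIS operation described: subtracting a composite groundwater surface from a digital terrain model to obtain depth to groundwater, then classifying it into risk bands. The arrays and thresholds are illustrative assumptions, not the expert-defined classes.

        import numpy as np

        rng = np.random.default_rng(5)
        dtm = 50 + 5 * rng.random((100, 100))           # terrain elevation, m
        groundwater = 48 + 4 * rng.random((100, 100))   # composite groundwater surface, m

        depth = dtm - groundwater                       # depth to groundwater, m

        # Illustrative risk classes (hypothetical thresholds)
        risk = np.select([depth < 1.0, depth < 3.0], ["high", "moderate"],
                         default="low")
        print(dict(zip(*np.unique(risk, return_counts=True))))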

  15. Medical record review conduction model for improving interrater reliability of abstracting medical-related information.

    PubMed

    Engel, Lisa; Henderson, Courtney; Fergenbaum, Jennifer; Colantonio, Angela

    2009-09-01

    Medical record review (MRR) is often used in clinical research and evaluation, yet there is limited literature regarding best practices in conducting an MRR, and there are few studies reporting interrater reliability (IRR) from MRR data. The aim of this research was twofold: (a) to develop an MRR abstraction tool and standardize the MRR process and (b) to examine the IRR from MRR data. This study introduces the MRR-Conduction Model, which was used to implement an MRR, and examines the IRR between two abstractors who collected preinjury medical and psychiatric, incident-related medical, and postinjury head symptom information from the medical records of 47 neurologically injured workers. Results showed that the percentage agreement was ≥85% and the unweighted kappa statistic was ≥0.60 for most variables, indicating substantial IRR. An effective and reliable MRR to abstract medical-related information requires planning and time. The MRR-Conduction Model is proposed to guide the process of creating an MRR.
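
    The two reported reliability statistics can be reproduced as follows; the ratings below are invented, and scikit-learn's `cohen_kappa_score` is used for the unweighted kappa.

    ```python
    # Percentage agreement and unweighted Cohen's kappa between two
    # abstractors on one categorical variable. Ratings are made up.
    from sklearn.metrics import cohen_kappa_score

    rater_a = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0]
    rater_b = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1, 1, 0]

    agreement = sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)
    kappa = cohen_kappa_score(rater_a, rater_b)
    print(f"percent agreement: {agreement:.0%}, unweighted kappa: {kappa:.2f}")
    ```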

  16. Modeling of Unsteady Three-dimensional Flows in Multistage Machines

    NASA Technical Reports Server (NTRS)

    Hall, Kenneth C.; Pratt, Edmund T., Jr.; Kurkov, Anatole (Technical Monitor)

    2003-01-01

    Despite many years of development, the accurate and reliable prediction of unsteady aerodynamic forces acting on turbomachinery blades remains less than satisfactory, especially when viewed next to the great success investigators have had in predicting steady flows. Hall and Silkowski (1997) have proposed that one of the main reasons for the discrepancy between theory and experiment and/or industrial experience is that many of the current unsteady aerodynamic theories model a single blade row in an infinitely long duct, ignoring potentially important multistage effects. However, unsteady flows are made up of acoustic, vortical, and entropic waves. These waves provide a mechanism for the rotors and stators of multistage machines to communicate with one another. In other words, wave behavior makes unsteady flows fundamentally a multistage (and three-dimensional) phenomenon. In this research program, we have as goals (1) the development of computationally efficient computer models of the unsteady aerodynamic response of blade rows embedded in a multistage machine (these models will ultimately be capable of analyzing three-dimensional viscous transonic flows), and (2) the use of these computer codes to study a number of important multistage phenomena.

  17. A salamander's flexible spinal network for locomotion, modeled at two levels of abstraction.

    PubMed

    Knüsel, Jeremie; Bicanski, Andrej; Ryczko, Dimitri; Cabelguen, Jean-Marie; Ijspeert, Auke Jan

    2013-08-01

    Animals have to coordinate a large number of muscles in different ways to efficiently move at various speeds and in different and complex environments. This coordination is in large part based on central pattern generators (CPGs). These neural networks are capable of producing complex rhythmic patterns when activated and modulated by relatively simple control signals. Although the generation of particular gaits by CPGs has been successfully modeled at many levels of abstraction, the principles underlying the generation and selection of a diversity of patterns of coordination in a single neural network are still not well understood. The present work specifically addresses the flexibility of the spinal locomotor networks in salamanders. We compare an abstract oscillator model and a CPG network composed of integrate-and-fire neurons, according to their ability to account for different axial patterns of coordination, and in particular the transition in gait between swimming and stepping modes. The topology of the network is inspired by models of the lamprey CPG, complemented by additions based on experimental data from isolated spinal cords of salamanders. Oscillatory centers of the limbs are included in a way that preserves the flexibility of the axial network. Similarly to the selection of forward and backward swimming in lamprey models via different excitation to the first axial segment, we can account for the modification of the axial coordination pattern between swimming and forward stepping on land in the salamander model, via different uncoupled frequencies in limb versus axial oscillators (for the same level of excitation). These results transfer partially to a more realistic model based on formal spiking neurons, and we discuss the difference between the abstract oscillator model and the model built with formal spiking neurons.
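
    At the abstract-oscillator level of description, a minimal sketch of an axial chain driven by a limb oscillator with a different intrinsic frequency might look like this; the network size, frequencies and coupling strengths are invented, not the paper's fitted values.

    ```python
    # Toy chain of coupled phase oscillators: 20 axial segments with
    # nearest-neighbour coupling, plus one "limb" oscillator whose intrinsic
    # frequency differs from the axial one, as in the gait-switching idea
    # described above. Forward-Euler integration.
    import numpy as np

    n_axial, w_axial, w_limb, k = 20, 2 * np.pi * 1.0, 2 * np.pi * 0.5, 4.0
    rng = np.random.default_rng(4)
    phases = rng.uniform(0, 2 * np.pi, n_axial + 1)  # axial chain + one limb
    omega = np.full(n_axial + 1, w_axial)
    omega[-1] = w_limb                               # limb oscillator frequency

    dt = 0.001
    for _ in range(20000):
        d = np.zeros_like(phases)
        for i in range(n_axial):                     # nearest-neighbour coupling
            for j in (i - 1, i + 1):
                if 0 <= j < n_axial:
                    d[i] += k * np.sin(phases[j] - phases[i])
            d[i] += k * np.sin(phases[-1] - phases[i])  # limb drive onto axis
        phases += dt * (omega + d)

    lags = (np.diff(phases[:n_axial]) + np.pi) % (2 * np.pi) - np.pi
    print("mean intersegmental phase lag (rad):", round(float(lags.mean()), 3))
    ```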

  18. Applications and modelling of bulk HTSs in brushless ac machines

    NASA Astrophysics Data System (ADS)

    Barnes, G. J.; McCulloch, M. D.; Dew-Hughes, D.

    2000-06-01

    The use of high temperature superconducting material in its bulk form for engineering applications is attractive due to the large power densities that can be achieved. In brushless electrical machines, there are essentially four properties that can be exploited; their hysteretic nature, their flux shielding properties, their ability to trap large flux densities and their ability to produce levitation. These properties translate to hysteresis machines, reluctance machines, trapped-field synchronous machines and linear motors respectively. Each one of these machines is addressed separately and computer simulations that reveal the current and field distributions within the machines are used to explain their operation.

  19. A hot-atom reaction kinetic model for H abstraction from solid surfaces

    NASA Astrophysics Data System (ADS)

    Kammler, Th.; Kolovos-Vellianitis, D.; Küppers, J.

    2000-07-01

    Measurements of the abstraction reaction kinetics in the interaction of gaseous H atoms with D adsorbed on metal and semiconductor surfaces, H(g)+D(ad)/S→ products, have shown that the kinetics of the HD products are at variance with the expectations drawn from the operation of Eley-Rideal mechanisms. Furthermore, in addition to HD product molecules, D2 products were observed which are not expected in an Eley-Rideal scenario. Products and kinetics of abstraction reactions on Ni(100), Pt(111), and Cu(111) surfaces were recently explained by a random-walk model based solely on the operation of hot-atom mechanistic steps. Based on the same reaction scenario, the present work provides numerical solutions of the appropriate kinetic equations in the limit of the steady-state approximation for hot-atom species. It is shown that the HD and D2 product kinetics derived from global kinetic rate constants are the same as those obtained from local probabilities in the random-walk model. The rate constants of the hot-atom kinetics provide a background for the interpretation of measured data, which was missing up to now. Assuming that reconstruction affects the competition between hot-atom sticking and hot-atom reaction, the application of the present model to D abstraction from Cu(100) surfaces reproduces the essential characteristics of the experimentally determined kinetics.

  20. Modeling the Swift BAT Trigger Algorithm with Machine Learning

    NASA Technical Reports Server (NTRS)

    Graff, Philip B.; Lien, Amy Y.; Baker, John G.; Sakamoto, Takanori

    2015-01-01

    To draw inferences about gamma-ray burst (GRB) source populations based on Swift observations, it is essential to understand the detection efficiency of the Swift burst alert telescope (BAT). This study considers the problem of modeling the Swift BAT triggering algorithm for long GRBs, a computationally expensive procedure, and models it using machine learning algorithms. A large sample of simulated GRBs from Lien et al. (2014) is used to train various models: random forests, boosted decision trees (with AdaBoost), support vector machines, and artificial neural networks. The best models have accuracies of ≳97% (≲3% error), which is a significant improvement on a cut in GRB flux, which has an accuracy of 89.6% (10.4% error). These models are then used to measure the detection efficiency of Swift as a function of redshift z, which is used to perform Bayesian parameter estimation on the GRB rate distribution. We find a local GRB rate density of η₀ ≈ 0.48 (+0.41/−0.23) Gpc⁻³ yr⁻¹ with power-law indices of η₁ ≈ 1.7 (+0.6/−0.5) and η₂ ≈ −5.9 (+5.7/−0.1) for GRBs above and below a break point of z₁ ≈ 6.8 (+2.8/−3.2). This methodology is able to improve upon earlier studies by more accurately modeling Swift detection and using this for fully Bayesian model fitting. The code used in this analysis is publicly available online.
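
    The quoted rate density is a broken power law in redshift; a direct transcription of the central values, assuming the conventional continuity condition at the break, is:

    ```python
    # GRB rate density as a broken power law (central values only;
    # units of Gpc^-3 yr^-1).
    def grb_rate_density(z, n0=0.48, n1=1.7, n2=-5.9, z1=6.8):
        """R(z) = n0 (1+z)^n1 below the break z1; continuous above it."""
        if z <= z1:
            return n0 * (1.0 + z) ** n1
        # continuity at z1 fixes the normalization of the high-z branch
        return n0 * (1.0 + z1) ** (n1 - n2) * (1.0 + z) ** n2

    for z in (0.0, 2.0, 6.8, 9.0):
        print(f"z = {z}: R = {grb_rate_density(z):.2f} Gpc^-3 yr^-1")
    ```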

  1. An initial-abstraction, constant-loss model for unit hydrograph modeling for applicable watersheds in Texas

    USGS Publications Warehouse

    Asquith, William H.; Roussel, Meghan C.

    2007-01-01

    Estimation of representative hydrographs from design storms, which are known as design hydrographs, provides for cost-effective, risk-mitigated design of drainage structures such as bridges, culverts, roadways, and other infrastructure. During 2001-07, the U.S. Geological Survey (USGS), in cooperation with the Texas Department of Transportation, investigated runoff hydrographs, design storms, unit hydrographs, and watershed-loss models to enhance design hydrograph estimation in Texas. Design hydrographs ideally should mimic the general volume, peak, and shape of observed runoff hydrographs. Design hydrographs commonly are estimated in part by unit hydrographs. A unit hydrograph is defined as the runoff hydrograph that results from a unit pulse of excess rainfall uniformly distributed over the watershed at a constant rate for a specific duration. A time-distributed, watershed-loss model is required for modeling by unit hydrographs. This report develops a specific time-distributed, watershed-loss model known as an initial-abstraction, constant-loss model. For this watershed-loss model, a watershed is conceptualized to have the capacity to store or abstract an absolute depth of rainfall at and near the beginning of a storm. Depths of total rainfall less than this initial abstraction do not produce runoff. The watershed also is conceptualized to have the capacity to remove rainfall at a constant rate (loss) after the initial abstraction is satisfied. Additional rainfall inputs after the initial abstraction is satisfied contribute to runoff if the rainfall rate (intensity) is larger than the constant loss. The initial-abstraction, constant-loss model thus is a two-parameter model. The initial-abstraction, constant-loss model is investigated through detailed computational and statistical analysis of observed rainfall and runoff data for 92 USGS streamflow-gaging stations (watersheds) in Texas with contributing drainage areas from 0.26 to 166 square miles. The analysis is
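
    A worked example of the two-parameter loss model: rainfall first fills the initial abstraction, after which a constant loss rate is subtracted and the remainder becomes excess (runoff-producing) rainfall. The depths and parameter values below are illustrative only.

    ```python
    # Initial-abstraction, constant-loss model applied to a hyetograph.
    def excess_rainfall(rain, dt, ia=1.0, const_loss=0.5):
        """rain: depths per interval (in); dt: interval (h); losses in in, in/h."""
        remaining_ia, excess = ia, []
        for p in rain:
            abstracted = min(p, remaining_ia)      # fill initial abstraction
            remaining_ia -= abstracted
            after_ia = p - abstracted
            excess.append(max(0.0, after_ia - const_loss * dt))
        return excess

    hyetograph = [0.3, 0.8, 1.2, 0.9, 0.4, 0.1]    # inches per 1-h interval
    print(excess_rainfall(hyetograph, dt=1.0))     # -> excess depths per hour
    ```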

  2. Ecological footprint model using the support vector machine technique.

    PubMed

    Ma, Haibo; Chang, Wenjuan; Cui, Guangbai

    2012-01-01

    The per capita ecological footprint (EF) is one of the most widely recognized measures of environmental sustainability. It aims to quantify the Earth's biological resources required to support human activity. In this paper, we summarize relevant previous literature, and present five factors that influence per capita EF. These factors are: national gross domestic product (GDP), urbanization (independent of economic development), distribution of income (measured by the Gini coefficient), export dependence (measured by the percentage of exports to total GDP), and service intensity (measured by the percentage of services to total GDP). A new ecological footprint model based on a support vector machine (SVM), a machine-learning method based on the structural risk minimization principle from statistical learning theory, was developed to calculate the per capita EF of 24 nations using data from 123 nations. The calculation accuracy was measured by average absolute error and average relative error, which were 0.004883 and 0.351078%, respectively. Our results demonstrate that the EF model based on SVM has good calculation performance.
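
    The modeling setup can be sketched as below; the split into 99 training and 24 test nations follows the abstract, but the features, targets and SVR hyperparameters are stand-ins.

    ```python
    # Support vector regression mapping the five national indicators to
    # per capita EF. Feature values and targets are random stand-ins.
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVR

    rng = np.random.default_rng(5)
    X = rng.uniform(size=(123, 5))  # GDP, urbanization, Gini, export %, service %
    y = X @ np.array([2.0, 0.5, -0.3, 0.4, 0.6]) + 0.05 * rng.normal(size=123)

    model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
    model.fit(X[:99], y[:99])                  # train on 99 nations
    pred = model.predict(X[99:])               # predict the remaining 24
    print("mean absolute error:", float(np.abs(pred - y[99:]).mean()))
    ```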

  3. Ecological Footprint Model Using the Support Vector Machine Technique

    PubMed Central

    Ma, Haibo; Chang, Wenjuan; Cui, Guangbai

    2012-01-01

    The per capita ecological footprint (EF) is one of the most widely recognized measures of environmental sustainability. It aims to quantify the Earth's biological resources required to support human activity. In this paper, we summarize relevant previous literature, and present five factors that influence per capita EF. These factors are: national gross domestic product (GDP), urbanization (independent of economic development), distribution of income (measured by the Gini coefficient), export dependence (measured by the percentage of exports to total GDP), and service intensity (measured by the percentage of services to total GDP). A new ecological footprint model based on a support vector machine (SVM), a machine-learning method based on the structural risk minimization principle from statistical learning theory, was developed to calculate the per capita EF of 24 nations using data from 123 nations. The calculation accuracy was measured by average absolute error and average relative error, which were 0.004883 and 0.351078%, respectively. Our results demonstrate that the EF model based on SVM has good calculation performance. PMID:22291949

  4. Influence of Material Models Used in Finite Element Modeling on Cutting Forces in Machining

    NASA Astrophysics Data System (ADS)

    Jivishov, Vusal; Rzayev, Elchin

    2016-08-01

    Finite element modeling of machining is significantly influenced by various modeling input parameters such as boundary conditions, mesh size and distribution, and the properties of workpiece and tool materials. The flow stress model of the workpiece material is the most critical input parameter. However, it is very difficult to obtain experimental values under the same conditions as in machining operations. This paper analyses the influence of different material models for two steels (AISI 1045 and hardened AISI 52100) in finite element modelling of cutting forces. In this study, the machining process is scaled by a constant ratio of the variable depth of cut h to the cutting edge radius rβ. The simulation results are compared with experimental measurements. This comparison illustrates some of the capabilities and limitations of FEM modelling.

  5. Ontological modelling of knowledge management for human-machine integrated design of ultra-precision grinding machine

    NASA Astrophysics Data System (ADS)

    Hong, Haibo; Yin, Yuehong; Chen, Xing

    2016-11-01

    Despite the rapid development of computer science and information technology, an efficient human-machine integrated enterprise information system for designing complex mechatronic products is still not fully accomplished, partly because of the inharmonious communication among collaborators. Therefore, one challenge in human-machine integration is how to establish an appropriate knowledge management (KM) model to support integration and sharing of heterogeneous product knowledge. Aiming at the diversity of design knowledge, this article proposes an ontology-based model to reach an unambiguous and normative representation of knowledge. First, an ontology-based human-machine integrated design framework is described, then corresponding ontologies and sub-ontologies are established according to different purposes and scopes. Second, a similarity calculation-based ontology integration method composed of ontology mapping and ontology merging is introduced. The ontology searching-based knowledge sharing method is then developed. Finally, a case of human-machine integrated design of a large ultra-precision grinding machine is used to demonstrate the effectiveness of the method.

  6. A Reference Model for Virtual Machine Launching Overhead

    SciTech Connect

    Wu, Hao; Ren, Shangping; Garzoglio, Gabriele; Timm, Steven; Bernabeu, Gerard; Chadwick, Keith; Noh, Seo-Young

    2014-01-01

    Cloud bursting is one of the key research topics in the cloud computing communities. A well designed cloud bursting module enables private clouds to automatically launch virtual machines (VMs) to public clouds when more resources are needed. One of the main challenges in developing a cloud bursting module is to decide when and where to launch a VM so that all resources are most effectively and efficiently utilized and the system performance is optimized. However, based on system operational data obtained from FermiCloud, a private cloud developed by the Fermi National Accelerator Laboratory for scientific workflows, the VM launching overhead is not a constant. It varies with physical resource utilization, such as CPU and I/O device utilizations, at the time when a VM is launched. Hence, to make judicious decisions as to when and where a VM should be launched, a VM launching overhead reference model is needed. In this paper, we first develop a VM launching overhead reference model based on operational data we have obtained on FermiCloud. Second, we apply the developed reference model on FermiCloud and compare calculated VM launching overhead values based on the model with measured overhead values on FermiCloud. Our empirical results on FermiCloud indicate that the developed reference model is accurate. We believe, with the guidance of the developed reference model, efficient resource allocation algorithms can be developed for cloud bursting process to minimize the operational cost and resource waste.
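
    A minimal sketch of what such a reference model could look like, assuming a linear dependence of launch time on host utilization at launch (the paper's actual functional form is not given in this abstract); all data are synthetic.

    ```python
    # Regress measured VM launch time on CPU and I/O utilization at launch.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(6)
    cpu, io = rng.uniform(0, 1, 500), rng.uniform(0, 1, 500)
    launch_s = 30 + 40 * cpu + 25 * io + rng.normal(0, 3, 500)  # mock seconds

    X = np.column_stack([cpu, io])
    ref = LinearRegression().fit(X, launch_s)
    print("predicted overhead at 80% CPU, 50% I/O:",
          round(float(ref.predict([[0.8, 0.5]])[0]), 1), "s")
    ```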

  7. Physiological model of motion analysis for machine vision

    NASA Astrophysics Data System (ADS)

    Young, Richard A.; Lesperance, Ronald M.

    1993-09-01

    We studied the spatio-temporal shape of "receptive fields" of simple cells in the monkey visual cortex. Receptive fields are maps of the regions in space and time that affect a cell's electrical responses. Fields with no change in shape over time responded to all directions of motion; fields with changing shape over time responded to only some directions of motion. A Gaussian Derivative (GD) model fit these fields well, in a transformed variable space that aligned the centers and principal axes of the field and model in space-time. The model accounts for fields that vary in orientation, location, spatial scale, motion properties, and number of lobes. The model requires only ten parameters (the minimum possible) to describe fields in two dimensions of space and one of time. A difference-of-offset-Gaussians (DOOG) provides a plausible physiological means to form GD model fields. Because of its simplicity, the GD model improves the efficiency of machine vision systems for analyzing motion. An implementation produced robust local estimates of the direction and speed of moving objects in real scenes.
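
    A minimal construction of a GD-style space-time field: a spatial derivative of a Gaussian whose center drifts over time, which is exactly the kind of shape change across time slices that confers direction selectivity. The scales and drift speed below are arbitrary.

    ```python
    # First spatial derivative of a Gaussian, with the center moving at a
    # constant speed, giving a direction-selective space-time field.
    import numpy as np

    x = np.linspace(-3, 3, 61)               # space (arbitrary units)
    t = np.linspace(0, 1, 5)                 # time slices
    sigma, speed = 0.8, 1.5                  # spatial scale, drift speed

    def gd_field(x, t, sigma, speed):
        xc = x[None, :] - speed * t[:, None] # shift the center over time
        g = np.exp(-xc**2 / (2 * sigma**2))
        return -xc / sigma**2 * g            # d/dx of the Gaussian

    field = gd_field(x, t, sigma, speed)     # shape: (time, space)
    print("peak location per time slice:", x[np.abs(field).argmax(axis=1)])
    ```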

  8. Modelling the sensitivity of river reaches to water abstraction: RAPHSA- a hydroecology tool for environmental managers

    NASA Astrophysics Data System (ADS)

    Klaar, Megan; Laize, Cedric; Maddock, Ian; Acreman, Mike; Tanner, Kath; Peet, Sarah

    2014-05-01

    A key challenge for environmental managers is the determination of environmental flows which allow a maximum yield of water resources to be taken from surface and sub-surface sources, whilst ensuring sufficient water remains in the environment to support biota and habitats. It has long been known that sensitivity to changes in water levels resulting from river and groundwater abstractions varies between rivers. Whilst assessment at the catchment scale is ideal for determining broad pressures on water resources and ecosystems, assessment of the sensitivity of reaches to changes in flow has previously been done on a site-by-site basis, often with the application of detailed but time-consuming techniques (e.g. PHABSIM). While this is appropriate for a limited number of sites, it is costly in terms of money and time, and therefore not appropriate for application at the national level required by responsible licensing authorities. To address this need, the Environment Agency (England) is developing an operational tool to predict relationships between physical habitat and flow which may be applied by field staff to rapidly determine the sensitivity of physical habitat to flow alteration for use in water resource management planning. An initial model of river sensitivity to abstraction (defined as the change in physical habitat related to changes in river discharge) was developed using site characteristics and data from 66 individual PHABSIM surveys throughout the UK (Booker & Acreman, 2008). By applying multivariate multiple linear regression analysis to the data to define habitat availability-flow curves using resource intensity as predictor variables, the model (known as RAPHSA, Rapid Assessment of Physical Habitat Sensitivity to Abstraction) is able to take a risk-based approach to modeled certainty. Site-specific information gathered using desk-based work, or a variable amount of field work, can be used to predict the shape of the habitat-flow curves, with the

  9. Fault Modeling of Extreme Scale Applications Using Machine Learning

    SciTech Connect

    Vishnu, Abhinav; Dam, Hubertus van; Tallent, Nathan R.; Kerbyson, Darren J.; Hoisie, Adolfy

    2016-05-01

    Faults are commonplace in large scale systems. These systems experience a variety of faults such as transient, permanent and intermittent. Multi-bit faults are typically not corrected by the hardware, resulting in an error. This paper attempts to answer an important question: given a multi-bit fault in main memory, will it result in an application error — and hence a recovery algorithm should be invoked — or can it be safely ignored? We propose an application fault modeling methodology to answer this question. Given a fault signature (a set of attributes comprising system and application state), we use machine learning to create a model which predicts whether a multi-bit permanent/transient main memory fault will likely result in an error. We present the design elements such as the fault injection methodology for covering important data structures, the application and system attributes which should be used for learning the model, the supervised learning algorithms (and potentially ensembles), and important metrics. Lastly, we use three applications — NWChem, LULESH and SVM — as examples for demonstrating the effectiveness of the proposed fault modeling methodology.
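
    The learning step described above reduces to supervised classification of fault signatures; here is a hedged sketch with synthetic features and labels (the real attributes and learner choices are in the paper, not this abstract).

    ```python
    # Classifier mapping a fault signature (system/application state at the
    # time of a multi-bit fault) to "results in error" vs "safely ignorable".
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(7)
    X = rng.uniform(size=(2000, 12))   # e.g. data-structure id, access rate...
    y = (X[:, 0] + X[:, 3] > 1.0).astype(int)  # mock "fault leads to error"

    clf = GradientBoostingClassifier(random_state=7)
    acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
    print("cross-validated accuracy:", round(acc.mean(), 3))
    ```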

  10. Predictive Surface Roughness Model for End Milling of Machinable Glass Ceramic

    NASA Astrophysics Data System (ADS)

    Mohan Reddy, M.; Gorin, Alexander; Abou-El-Hossein, K. A.

    2011-02-01

    Machinable glass ceramic is an attractive advanced ceramic material for producing high-accuracy miniaturized components for many applications in industries such as aerospace, electronics, biomedical, automotive and environmental communications, due to its wear resistance, high hardness, high compressive strength, good corrosion resistance and excellent high-temperature properties. Many research works have been conducted in the last few years to investigate the performance of different machining operations when processing various advanced ceramics. Micro end-milling is one of the machining methods used to meet the demand for micro parts. Selecting proper machining parameters is important to obtain a good surface finish during machining of machinable glass ceramic. Therefore, this paper describes the development of a predictive model for the surface roughness of machinable glass ceramic in terms of speed and feed rate in micro end-milling operations.
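
    One plausible form of such a predictive model is a second-order response surface in speed and feed rate; the polynomial form and every measurement below are assumptions for illustration, not values from the paper.

    ```python
    # Second-order response-surface model for surface roughness Ra as a
    # function of cutting speed and feed rate. Measurements are fabricated.
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import PolynomialFeatures
    from sklearn.linear_model import LinearRegression

    speed = np.array([20, 20, 40, 40, 60, 60, 80, 80], float)      # m/min
    feed = np.array([2, 6, 2, 6, 2, 6, 2, 6], float)               # um/tooth
    ra = np.array([0.42, 0.71, 0.38, 0.63, 0.35, 0.58, 0.37, 0.61])  # um

    X = np.column_stack([speed, feed])
    model = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
    model.fit(X, ra)
    print("predicted Ra at 50 m/min, 4 um/tooth:",
          round(float(model.predict([[50.0, 4.0]])[0]), 3), "um")
    ```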

  11. Machine learning and cosmological simulations - I. Semi-analytical models

    NASA Astrophysics Data System (ADS)

    Kamdar, Harshil M.; Turk, Matthew J.; Brunner, Robert J.

    2016-01-01

    We present a new exploratory framework to model galaxy formation and evolution in a hierarchical Universe by using machine learning (ML). Our motivations are two-fold: (1) presenting a new, promising technique to study galaxy formation, and (2) quantitatively analysing the extent of the influence of dark matter halo properties on galaxies in the backdrop of semi-analytical models (SAMs). We use the influential Millennium Simulation and the corresponding Munich SAM to train and test various sophisticated ML algorithms (k-Nearest Neighbors, decision trees, random forests, and extremely randomized trees). By using only essential dark matter halo physical properties for haloes of M > 10^12 M⊙ and a partial merger tree, our model predicts the hot gas mass, cold gas mass, bulge mass, total stellar mass, black hole mass and cooling radius at z = 0 for each central galaxy in a dark matter halo for the Millennium run. Our results provide a unique and powerful phenomenological framework to explore the galaxy-halo connection that is built upon SAMs and demonstrably place ML as a promising and a computationally efficient tool to study small-scale structure formation.
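
    The learning task reduces to regression from halo properties to baryonic properties; below is a sketch with a mock halo catalog (the real features and targets come from the Millennium run and the Munich SAM).

    ```python
    # Predict a baryonic property of the central galaxy (here, stellar mass)
    # from dark matter halo properties using a tree ensemble.
    import numpy as np
    from sklearn.ensemble import ExtraTreesRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(8)
    n = 5000
    halo_mass = rng.uniform(12.0, 15.0, n)      # log10(M / Msun)
    spin, conc = rng.uniform(0, 0.1, n), rng.uniform(4, 12, n)
    mstar = 9.0 + 0.6 * (halo_mass - 12.0) + 0.02 * conc + rng.normal(0, 0.1, n)

    X = np.column_stack([halo_mass, spin, conc])
    X_tr, X_te, y_tr, y_te = train_test_split(X, mstar, random_state=8)
    reg = ExtraTreesRegressor(n_estimators=300, random_state=8).fit(X_tr, y_tr)
    print("held-out R^2 for log stellar mass:", round(reg.score(X_te, y_te), 3))
    ```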

  12. Modeling the Virtual Machine Launching Overhead under Fermicloud

    SciTech Connect

    Garzoglio, Gabriele; Wu, Hao; Ren, Shangping; Timm, Steven; Bernabeu, Gerard; Noh, Seo-Young

    2014-11-12

    FermiCloud is a private cloud developed by the Fermi National Accelerator Laboratory for scientific workflows. The Cloud Bursting module of FermiCloud enables FermiCloud, when more computational resources are needed, to automatically launch virtual machines to available resources such as public clouds. One of the main challenges in developing the cloud bursting module is to decide when and where to launch a VM so that all resources are most effectively and efficiently utilized and the system performance is optimized. However, based on FermiCloud's system operational data, the VM launching overhead is not a constant. It varies with physical resource (CPU, memory, I/O device) utilization at the time when a VM is launched. Hence, to make judicious decisions as to when and where a VM should be launched, a VM launch overhead reference model is needed. This paper develops a VM launch overhead reference model based on operational data obtained on FermiCloud and uses the reference model to guide the cloud bursting process.

  13. Modeling Stochastic Kinetics of Molecular Machines at Multiple Levels: From Molecules to Modules

    PubMed Central

    Chowdhury, Debashish

    2013-01-01

    A molecular machine is either a single macromolecule or a macromolecular complex. In spite of the striking superficial similarities between these natural nanomachines and their man-made macroscopic counterparts, there are crucial differences. Molecular machines in a living cell operate stochastically in an isothermal environment far from thermodynamic equilibrium. In this mini-review we present a catalog of the molecular machines and an inventory of the essential toolbox for theoretically modeling these machines. The tool kits include (1) nonequilibrium statistical-physics techniques for modeling machines and machine-driven processes, and (2) statistical-inference methods for reverse engineering a functional machine from the empirical data. The cell is often likened to a microfactory in which the machineries are organized in modular fashion; each module consists of strongly coupled multiple machines, but different modules interact weakly with each other. This microfactory has its own automated supply chain and delivery system. Buoyed by the success achieved in modeling individual molecular machines, we advocate integration of these models in the near future to develop models of functional modules. A system-level description of the cell from the perspective of molecular machinery (the mechanome) is likely to emerge from further integrations that we envisage here. PMID:23746505

  14. Modeling stochastic kinetics of molecular machines at multiple levels: from molecules to modules.

    PubMed

    Chowdhury, Debashish

    2013-06-04

    A molecular machine is either a single macromolecule or a macromolecular complex. In spite of the striking superficial similarities between these natural nanomachines and their man-made macroscopic counterparts, there are crucial differences. Molecular machines in a living cell operate stochastically in an isothermal environment far from thermodynamic equilibrium. In this mini-review we present a catalog of the molecular machines and an inventory of the essential toolbox for theoretically modeling these machines. The tool kits include (1) nonequilibrium statistical-physics techniques for modeling machines and machine-driven processes, and (2) statistical-inference methods for reverse engineering a functional machine from the empirical data. The cell is often likened to a microfactory in which the machineries are organized in modular fashion; each module consists of strongly coupled multiple machines, but different modules interact weakly with each other. This microfactory has its own automated supply chain and delivery system. Buoyed by the success achieved in modeling individual molecular machines, we advocate integration of these models in the near future to develop models of functional modules. A system-level description of the cell from the perspective of molecular machinery (the mechanome) is likely to emerge from further integrations that we envisage here.

  15. Access, Equity, and Opportunity. Women in Machining: A Model Program.

    ERIC Educational Resources Information Center

    Warner, Heather

    The Women in Machining (WIM) program is a Machine Action Project (MAP) initiative that was developed in response to a local skilled metalworking labor shortage, despite a virtual absence of women and people of color from area shops. The project identified post-war stereotypes and other barriers that must be addressed if women are to have an equal…

  16. Parameterizing Phrase Based Statistical Machine Translation Models: An Analytic Study

    ERIC Educational Resources Information Center

    Cer, Daniel

    2011-01-01

    The goal of this dissertation is to determine the best way to train a statistical machine translation system. I first develop a state-of-the-art machine translation system called Phrasal and then use it to examine a wide variety of potential learning algorithms and optimization criteria and arrive at two very surprising results. First, despite the…

  17. Abstractive dissociation of oxygen over Al(111): a nonadiabatic quantum model.

    PubMed

    Katz, Gil; Kosloff, Ronnie; Zeiri, Yehuda

    2004-02-22

    The dissociation of oxygen on a clean aluminum surface is studied theoretically. A nonadiabatic quantum dynamical model is used, based on four electronically distinct potential energy surfaces characterized by the extent of charge transfer from the metal to the adsorbate. A flat surface approximation is used to reduce the computation complexity. The conservation of the helicopter angular momentum allows Boltzmann averaging of the outcome of the propagation of a three degrees of freedom wave function. The dissociation event is simulated by solving the time-dependent Schrödinger equation for a period of 30 femtoseconds. As a function of incident kinetic energy, the dissociation yield follows the experimental trend. An attempt at simulation employing only the lowest adiabatic surface failed, qualitatively disagreeing with both experiment and nonadiabatic calculations. The final products, adsorptive dissociation and abstractive dissociation, are obtained by carrying out a semiclassical molecular dynamics simulation with surface hopping which describes the back charge transfer from an oxygen atom negative ion to the surface. The final adsorbed oxygen pair distribution compares well with experiment. By running the dynamical events backward in time, a correlation is established between the products and the initial conditions which lead to their production. Qualitative agreement is thus obtained with recent experiments that show suppression of abstraction by rotational excitation.

  18. Alternative Models of Service, Centralized Machine Operations. Phase II Report. Volume II.

    ERIC Educational Resources Information Center

    Technology Management Corp., Alexandria, VA.

    A study was conducted to determine if the centralization of playback machine operations for the national free library program would be feasible, economical, and desirable. An alternative model of playback machine services was constructed and compared with existing network operations considering both cost and service. The alternative model was…

  19. AZOrange - High performance open source machine learning for QSAR modeling in a graphical programming environment

    PubMed Central

    2011-01-01

    Background Machine learning has a vast range of applications. In particular, advanced machine learning methods are routinely and increasingly used in quantitative structure activity relationship (QSAR) modeling. QSAR data sets often encompass tens of thousands of compounds and the size of proprietary, as well as public data sets, is rapidly growing. Hence, there is a demand for computationally efficient machine learning algorithms, easily available to researchers without extensive machine learning knowledge. In granting the scientific principles of transparency and reproducibility, Open Source solutions are increasingly acknowledged by regulatory authorities. Thus, an Open Source state-of-the-art high performance machine learning platform, interfacing multiple, customized machine learning algorithms for both graphical programming and scripting, to be used for large scale development of QSAR models of regulatory quality, is of great value to the QSAR community. Results This paper describes the implementation of the Open Source machine learning package AZOrange. AZOrange is specially developed to support batch generation of QSAR models in providing the full work flow of QSAR modeling, from descriptor calculation to automated model building, validation and selection. The automated work flow relies upon the customization of the machine learning algorithms and a generalized, automated model hyper-parameter selection process. Several high performance machine learning algorithms are interfaced for efficient data set specific selection of the statistical method, promoting model accuracy. Using the high performance machine learning algorithms of AZOrange does not require programming knowledge as flexible applications can be created, not only at a scripting level, but also in a graphical programming environment. Conclusions AZOrange is a step towards meeting the needs for an Open Source high performance machine learning platform, supporting the efficient development of

  20. Crystal structure representations for machine learning models of formation energies

    SciTech Connect

    Faber, Felix; Lindmaa, Alexander; von Lilienfeld, O. Anatole; Armiento, Rickard

    2015-04-20

    We introduce and evaluate a set of feature vector representations of crystal structures for machine learning (ML) models of formation energies of solids. ML models of atomization energies of organic molecules have been successful using a Coulomb matrix representation of the molecule. We consider three ways to generalize such representations to periodic systems: (i) a matrix where each element is related to the Ewald sum of the electrostatic interaction between two different atoms in the unit cell repeated over the lattice; (ii) an extended Coulomb-like matrix that takes into account a number of neighboring unit cells; and (iii) an ansatz that mimics the periodicity and the basic features of the elements in the Ewald sum matrix using a sine function of the crystal coordinates of the atoms. The representations are compared for a Laplacian kernel with Manhattan norm, trained to reproduce formation energies using a dataset of 3938 crystal structures obtained from the Materials Project. For training sets consisting of 3000 crystals, the generalization error in predicting formation energies of new structures corresponds to (i) 0.49, (ii) 0.64, and (iii) 0.37 eV/atom for the respective representations.

  1. Derivation of a model of the exciter of a brushless synchronous machine

    NASA Astrophysics Data System (ADS)

    Vleeshouwers, J. M.

    1992-06-01

    The modeling of the brushless exciter for a machine used in a wind turbine is addressed. A brushless exciter reduces the susceptibility of the machine to atmospheric conditions, and therefore the need for maintenance, compared to a synchronous machine equipped with brushes and sliprings. Furthermore, no large excitation winding power supply is needed. In large wind turbines which apply a synchronous machine, these advantages will be vital. A brushless exciter is usually constructed as a small synchronous machine with a rectifier. According to manufacturers, exciters are designed to function as a current transformer. The method which was developed in an earlier research project to model the synchronous machine with rectifier is concluded to be applicable to modeling the exciter, provided that the effect of resistances on the commutation may be neglected. This restricts the technique to modeling exciters of machines in the 100 kW range and larger. For smaller exciters the existing modeling approach is not applicable. Measurements of a small exciter (of a 37.5 kVA machine) show that higher harmonics in the exciter contribute significantly to its behavior. Based on experimental data, a simple linear first-order dynamic model was developed for the small exciter. The model parameters can be deduced from the steady-state current gain and a simple dynamic experiment.
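
    The "simple linear first-order dynamic model" can be written as tau di/dt + i = K i_exc, with the gain K taken from the steady-state current transfer and tau from a dynamic experiment. A minimal step-response simulation with invented parameter values:

    ```python
    # First-order exciter model: tau * di/dt + i = K * i_exc.
    K, tau, dt = 8.0, 0.15, 1e-3    # gain (A/A), time constant (s), step (s)
    i_exc = 1.0                     # step in exciter field (input) current
    i_field = 0.0                   # main-machine field (output) current
    trace = []
    for _ in range(1000):
        i_field += dt / tau * (K * i_exc - i_field)  # forward-Euler update
        trace.append(i_field)

    print("output after 3*tau:", round(trace[int(3 * tau / dt)], 2),
          "-> steady state", K * i_exc)
    ```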

  2. Mutation-selection dynamics and error threshold in an evolutionary model for Turing machines.

    PubMed

    Musso, Fabio; Feverati, Giovanni

    2012-01-01

    We investigate the mutation-selection dynamics for an evolutionary computation model based on Turing machines. The use of Turing machines allows for very simple mechanisms of code growth and code activation/inactivation through point mutations. To any value of the point mutation probability corresponds a maximum amount of active code that can be maintained by selection, and the Turing machines that reach it are said to be at the error threshold. Simulations with our model show that the Turing machine population evolves toward the error threshold. Mathematical descriptions of the model point out that this behaviour is due more to the mutation-selection dynamics than to the intrinsic nature of the Turing machines. This indicates that this result is much more general than the model considered here and could play a role also in biological evolution.

  3. On problems in defining abstract and metaphysical concepts--emergence of a new model.

    PubMed

    Nahod, Bruno; Nahod, Perina Vukša

    2014-12-01

    Basic anthropological terminology is the first project covering terms from the domain of the social sciences under the Croatian Special Field Terminology program (Struna). Problems that have been sporadically noticed, or whose existence could have been presumed during the processing of terms mainly from technical fields and sciences, have finally emerged in "anthropology". The principles of the General Theory of Terminology (GTT), which are followed in Struna, were put to a truly exacting test, and sometimes stretched beyond their limits, when applied to concepts that do not necessarily have references in the physical world; namely, abstract and metaphysical concepts. We are currently developing a new terminographical model based on Idealized Cognitive Models (ICM), which will hopefully ensure a better cross-field implementation of various types of concepts and their relations. The goal of this paper is to introduce the theoretical bases of our model. Additionally, we will present a pilot study of the series of experiments in which we are trying to investigate the nature of conceptual categorization in special languages and its proposed difference from categorization in general language.

  4. Comparison of two different surfaces for 3d model abstraction in support of remote sensing simulations

    SciTech Connect

    Pope, Paul A; Ranken, Doug M

    2010-01-01

    A method for abstracting a 3D model by shrinking a triangular mesh, defined upon a best fitting ellipsoid surrounding the model, onto the model's surface has been previously described. This "shrinkwrap" process enables a semi-regular mesh to be defined upon an object's surface. This creates a useful data structure for conducting remote sensing simulations and image processing. However, using a best fitting ellipsoid having a graticule-based tessellation to seed the shrinkwrap process suffers from a mesh which is too dense at the poles. To achieve a more regular mesh, the use of a best fitting, subdivided icosahedron was tested. By subdividing each of the twenty facets of the icosahedron into regular triangles of a predetermined size, arbitrarily dense, highly-regular starting meshes can be created. Comparisons of the meshes resulting from these two seed surfaces are described. Use of a best fitting icosahedron-based mesh as the seed surface in the shrinkwrap process is preferable to using a best fitting ellipsoid. The impacts to remote sensing simulations, specifically generation of synthetic imagery, is illustrated.
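
    The icosahedron-based seed surface can be generated by midpoint subdivision with reprojection to the sphere; a sketch follows (using scipy's convex hull to recover the 20 facets; the subdivision scheme is the standard midpoint split, assumed here rather than taken from the paper).

    ```python
    # Icosahedron vertices, then one round of 4-way facet subdivision with
    # new vertices projected onto the unit sphere. Repeat for denser meshes.
    import numpy as np
    from scipy.spatial import ConvexHull

    phi = (1 + 5 ** 0.5) / 2
    base = []
    for a in (-1.0, 1.0):
        for b in (-phi, phi):
            base += [(0, a, b), (a, b, 0), (b, 0, a)]
    verts = np.array(base)
    verts /= np.linalg.norm(verts, axis=1, keepdims=True)
    faces = ConvexHull(verts).simplices           # 20 triangular facets

    def subdivide(verts, faces):
        """Split every triangle into four; push midpoints onto the sphere."""
        verts = list(map(tuple, verts))
        index = {v: i for i, v in enumerate(verts)}
        def midpoint(i, j):
            m = (np.array(verts[i]) + np.array(verts[j])) / 2.0
            m = tuple(m / np.linalg.norm(m))
            if m not in index:
                index[m] = len(verts)
                verts.append(m)
            return index[m]
        out = []
        for i, j, k in faces:
            a, b, c = midpoint(i, j), midpoint(j, k), midpoint(k, i)
            out += [(i, a, c), (a, j, b), (c, b, k), (a, b, c)]
        return np.array(verts), np.array(out)

    verts, faces = subdivide(verts, faces)
    print(len(verts), "vertices,", len(faces), "facets")  # 42 vertices, 80 facets
    ```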

  5. Guidelines for Developing and Reporting Machine Learning Predictive Models in Biomedical Research: A Multidisciplinary View

    PubMed Central

    2016-01-01

    Background As more and more researchers are turning to big data for new opportunities of biomedical discoveries, machine learning models, as the backbone of big data analysis, are mentioned more often in biomedical journals. However, owing to the inherent complexity of machine learning methods, they are prone to misuse. Because of the flexibility in specifying machine learning models, the results are often insufficiently reported in research articles, hindering reliable assessment of model validity and consistent interpretation of model outputs. Objective To attain a set of guidelines on the use of machine learning predictive models within clinical settings to make sure the models are correctly applied and sufficiently reported so that true discoveries can be distinguished from random coincidence. Methods A multidisciplinary panel of machine learning experts, clinicians, and traditional statisticians were interviewed, using an iterative process in accordance with the Delphi method. Results The process produced a set of guidelines that consists of (1) a list of reporting items to be included in a research article and (2) a set of practical sequential steps for developing predictive models. Conclusions A set of guidelines was generated to enable correct application of machine learning models and consistent reporting of model specifications and results in biomedical research. We believe that such guidelines will accelerate the adoption of big data analysis, particularly with machine learning methods, in the biomedical research community. PMID:27986644

  6. Experimental "evolutional machines": mathematical and experimental modeling of biological evolution

    NASA Astrophysics Data System (ADS)

    Brilkov, A. V.; Loginov, I. A.; Morozova, E. V.; Shuvaev, A. N.; Pechurkin, N. S.

    Experimentalists possess model systems of two major types for the study of evolution: continuous cultivation in the chemostat, and long-term development in closed laboratory microecosystems with several trophic structures. If evolutionary changes, or transfers from one steady state to another as the result of changing qualitative properties of the system, take place in such systems, the main characteristics of these evolution steps can be measured. By now this has not been realized from the point of view of methodology, though a lot of data on the work of both types of evolutionary machines has been collected. In our experiments with long-term continuous cultivation we used bacterial strains containing, in plasmids, the cloned genes of bioluminescence and green fluorescent protein, whose expression level can be easily changed and controlled. In spite of the apparent kinetic diversity of evolutionary transfers in the two types of systems, the general mechanisms characterizing the increase of the energy flow used by populations of the primary producer can be revealed through their study. According to the energy approach, at spontaneous transfer from one steady state to another, e.g. in the process of microevolution, competition or selection, heat dissipation, characterizing the rate of entropy growth, should increase rather than decrease or remain steady, as usually believed. The results of our observations of experimental evolution require further development of the thermodynamic theory of open and closed biological systems and further study of general mechanisms of biological

  7. (abstract) Modeling Protein Families and Human Genes: Hidden Markov Models and a Little Beyond

    NASA Technical Reports Server (NTRS)

    Baldi, Pierre

    1994-01-01

    We will first give a brief overview of Hidden Markov Models (HMMs) and their use in Computational Molecular Biology. In particular, we will describe a detailed application of HMMs to the G-Protein-Coupled-Receptor Superfamily. We will also describe a number of analytical results on HMMs that can be used in discrimination tests and database mining. We will then discuss the limitations of HMMs and some new directions of research. We will conclude with some recent results on the application of HMMs to human gene modeling and parsing.
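
    As a brief illustration of the HMM machinery mentioned above, the forward algorithm computes the likelihood of a sequence under a toy two-state model; profile HMMs for protein families add match/insert/delete states but use the same recursion in spirit. All parameters below are arbitrary.

    ```python
    # Forward algorithm: P(observation sequence | HMM) for a 2-state,
    # 3-symbol toy model.
    import numpy as np

    start = np.array([0.6, 0.4])                # P(initial state)
    trans = np.array([[0.8, 0.2], [0.3, 0.7]])  # P(next state | state)
    emit = np.array([[0.5, 0.3, 0.2],           # P(symbol | state)
                     [0.1, 0.4, 0.5]])

    def forward_likelihood(obs):
        alpha = start * emit[:, obs[0]]
        for o in obs[1:]:
            alpha = (alpha @ trans) * emit[:, o]
        return alpha.sum()

    print("P(sequence | model) =", forward_likelihood([0, 2, 1, 1, 2]))
    ```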

  8. Job shop scheduling model for non-identic machine with fixed delivery time to minimize tardiness

    NASA Astrophysics Data System (ADS)

    Kusuma, K. K.; Maruf, A.

    2016-02-01

    Scheduling problems with non-identical machines, low utilization, and fixed delivery times are frequent in the manufacturing industry. This paper proposes a mathematical model to minimize total tardiness for non-identical machines in a job shop environment. The model is categorized as an integer linear programming model and uses a branch-and-bound algorithm as the solution method. We use fixed delivery time as the main constraint and different processing times to process a job. The result of this proposed model shows that the utilization of production machines can be increased with minimal tardiness, using fixed delivery time as a constraint.
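
    A minimal sketch of such an integer linear program: jobs assigned to positions on non-identical machines (machine-dependent processing times), fixed delivery times, and total tardiness as the objective. The instance data are invented, and PuLP's bundled CBC branch-and-bound solver stands in for the paper's solver.

    ```python
    import pulp

    p = {("J1", "M1"): 4, ("J1", "M2"): 6,   # processing time per machine
         ("J2", "M1"): 3, ("J2", "M2"): 2,
         ("J3", "M1"): 5, ("J3", "M2"): 4}
    due = {"J1": 5, "J2": 4, "J3": 8}        # fixed delivery times
    jobs, machines, slots = ["J1", "J2", "J3"], ["M1", "M2"], [1, 2, 3]
    big_m = sum(p.values())

    prob = pulp.LpProblem("total_tardiness", pulp.LpMinimize)
    x = pulp.LpVariable.dicts("x", (jobs, machines, slots), cat="Binary")
    c = pulp.LpVariable.dicts("C", jobs, lowBound=0)   # completion times
    t = pulp.LpVariable.dicts("T", jobs, lowBound=0)   # tardiness

    prob += pulp.lpSum(t[j] for j in jobs)
    for j in jobs:                           # every job gets exactly one slot
        prob += pulp.lpSum(x[j][m][k] for m in machines for k in slots) == 1
    for m in machines:
        for k in slots:                      # at most one job per slot
            prob += pulp.lpSum(x[j][m][k] for j in jobs) <= 1
    for j in jobs:                           # completion time and tardiness
        for m in machines:
            for k in slots:
                load = pulp.lpSum(p[jj, m] * x[jj][m][kk]
                                  for jj in jobs for kk in slots if kk <= k)
                prob += c[j] >= load - big_m * (1 - x[j][m][k])
        prob += t[j] >= c[j] - due[j]

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    print("total tardiness:", pulp.value(prob.objective))
    ```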

  9. A Consistent Information Criterion for Support Vector Machines in Diverging Model Spaces

    PubMed Central

    Zhang, Xiang; Wu, Yichao; Wang, Lan; Li, Runze

    2015-01-01

    Information criteria have been popularly used in model selection and proved to possess nice theoretical properties. For classification, Claeskens et al. (2008) proposed support vector machine information criterion for feature selection and provided encouraging numerical evidence. Yet no theoretical justification was given there. This work aims to fill the gap and to provide some theoretical justifications for support vector machine information criterion in both fixed and diverging model spaces. We first derive a uniform convergence rate for the support vector machine solution and then show that a modification of the support vector machine information criterion achieves model selection consistency even when the number of features diverges at an exponential rate of the sample size. This consistency result can be further applied to selecting the optimal tuning parameter for various penalized support vector machine methods. Finite-sample performance of the proposed information criterion is investigated using Monte Carlo studies and one real-world gene selection problem. PMID:27239164

  10. DFT modeling of chemistry on the Z machine

    NASA Astrophysics Data System (ADS)

    Mattsson, Thomas

    2013-06-01

    Density Functional Theory (DFT) has proven remarkably accurate in predicting properties of matter under shock compression for a wide-range of elements and compounds: from hydrogen to xenon via water. Materials where chemistry plays a role are of particular interest for many applications. For example the deep interiors of Neptune, Uranus, and hundreds of similar exoplanets are composed of molecular ices of carbon, hydrogen, oxygen, and nitrogen at pressures of several hundred GPa and temperatures of many thousand Kelvin. High-quality thermophysical experimental data and high-fidelity simulations including chemical reaction are necessary to constrain planetary models over a large range of conditions. As examples of where chemical reactions are important, and demonstration of the high fidelity possible for these both structurally and chemically complex systems, we will discuss shock- and re-shock of liquid carbon dioxide (CO2) in the range 100 to 800 GPa, shock compression of the hydrocarbon polymers polyethylene (PE) and poly(4-methyl-1-pentene) (PMP), and finally simulations of shock compression of glow discharge polymer (GDP) including the effects of doping with germanium. Experimental results from Sandia's Z machine have time and again validated the DFT simulations at extreme conditions and the combination of experiment and DFT provide reliable data for evaluating existing and constructing future wide-range equations of state models for molecular compounds like CO2 and polymers like PE, PMP, and GDP. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Company, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  11. Using financial risk measures for analyzing generalization performance of machine learning models.

    PubMed

    Takeda, Akiko; Kanamori, Takafumi

    2014-09-01

    We propose a unified machine learning model (UMLM) for two-class classification, regression and outlier (or novelty) detection via a robust optimization approach. The model embraces various machine learning models such as support vector machine-based and minimax probability machine-based classification and regression models. The unified framework makes it possible to compare and contrast existing learning models and to explain their differences and similarities. In this paper, after relating existing learning models to UMLM, we show some theoretical properties for UMLM. Concretely, we show an interpretation of UMLM as minimizing a well-known financial risk measure (worst-case value-at-risk (VaR) or conditional VaR), derive generalization bounds for UMLM using such a risk measure, and prove that solving problems of UMLM leads to estimators with the minimized generalization bounds. Those theoretical properties are applicable to related existing learning models.
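
    The two financial risk measures named above are easy to state empirically: VaR at level alpha is the alpha-quantile of the loss distribution, and CVaR is the mean loss beyond it. A small numeric check on a synthetic loss sample:

    ```python
    # Empirical VaR and CVaR on a stand-in loss distribution.
    import numpy as np

    rng = np.random.default_rng(9)
    losses = rng.normal(0.0, 1.0, 100000)
    alpha = 0.95

    var = np.quantile(losses, alpha)         # value-at-risk at level alpha
    cvar = losses[losses >= var].mean()      # conditional value-at-risk
    print(f"VaR_{alpha:.2f} = {var:.3f}, CVaR_{alpha:.2f} = {cvar:.3f}")
    ```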

  12. A Sustainable Model for Integrating Current Topics in Machine Learning Research into the Undergraduate Curriculum

    ERIC Educational Resources Information Center

    Georgiopoulos, M.; DeMara, R. F.; Gonzalez, A. J.; Wu, A. S.; Mollaghasemi, M.; Gelenbe, E.; Kysilka, M.; Secretan, J.; Sharma, C. A.; Alnsour, A. J.

    2009-01-01

    This paper presents an integrated research and teaching model that has resulted from an NSF-funded effort to introduce results of current Machine Learning research into the engineering and computer science curriculum at the University of Central Florida (UCF). While in-depth exposure to current topics in Machine Learning has traditionally occurred…

  13. A Model for Predicting Integrated Man-Machine System Reliability: Model Logic and Description

    DTIC Science & Technology

    1974-11-01

    A MODEL FOR PREDICTING INTEGRATED MAN-MACHINE SYSTEMS RELIABILITY, prepared for the Navy (UNCLASSIFIED). An existing model ... from 4 to 20 members was substantially modified so as to allow its use for system reliability and system availability predictive purposes. The resultant new model is...

  14. Distributed model for electromechanical interaction in rotordynamics of cage rotor electrical machines

    NASA Astrophysics Data System (ADS)

    Laiho, Antti; Holopainen, Timo P.; Klinge, Paul; Arkkio, Antero

    2007-05-01

    In this work the effects of the electromechanical interaction on rotordynamics and vibration characteristics of cage rotor electrical machines were considered. An eccentric rotor motion distorts the electromagnetic field in the air-gap between the stator and rotor inducing a total force, the unbalanced magnetic pull, exerted on the rotor. In this paper a low-order parametric model for the unbalanced magnetic pull is coupled with a three-dimensional finite element structural model of the electrical machine. The main contribution of the work is to present a computationally efficient electromechanical model for vibration analysis of cage rotor machines. In this model, the interaction between the mechanical and electromagnetic systems is distributed over the air gap of the machine. This enables the inclusion of rotor and stator deflections into the analysis and, thus, yields more realistic prediction for the effects of electromechanical interaction. The model was tested by implementing it for two electrical machines with nominal speeds close to one of the rotor bending critical speeds. Rated machine data was used in order to predict the effects of the electromechanical interaction on vibration characteristics of the example machines.
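
    At its crudest, the interaction described above can be pictured as the unbalanced magnetic pull acting as a negative radial stiffness that lowers the rotor's bending critical speed; the back-of-envelope sketch below uses illustrative numbers, not values from the paper.

    ```python
    # Linearized unbalanced magnetic pull as a negative stiffness k_ump.
    import math

    m = 120.0       # modal mass of the rotor, kg (assumed)
    k = 2.0e7       # mechanical bending stiffness, N/m (assumed)
    k_ump = 3.0e6   # linearized magnetic pull coefficient, N/m (assumed)

    f_mech = math.sqrt(k / m) / (2 * math.pi)
    f_coupled = math.sqrt((k - k_ump) / m) / (2 * math.pi)
    print(f"critical speed: {f_mech:.1f} Hz mechanical only, "
          f"{f_coupled:.1f} Hz with magnetic pull")
    ```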

  15. Estimation and forecasting of machine health condition using ARMA/GARCH model

    NASA Astrophysics Data System (ADS)

    Pham, Hong Thom; Yang, Bo-Suk

    2010-02-01

    This paper proposes a hybrid model of autoregressive moving average (ARMA) and generalized autoregressive conditional heteroscedasticity (GARCH) to estimate and forecast machine state based on vibration signals. The main idea in this study is to employ the linear ARMA model and the nonlinear GARCH model to explain the wear and fault condition of the machine, respectively. The successful outcomes of the ARMA/GARCH prediction model can give a clear indication of future machine states, which enhances the worth of machine condition monitoring as well as condition-based maintenance in practical applications. The advantage of the proposed model is verified by empirical results from its application to a real system, a methane compressor in a petrochemical plant.
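
    A sketch of the estimate-and-forecast workflow using the Python arch package, with an autoregressive mean standing in for the ARMA part (arch supports AR rather than full ARMA means) and a GARCH(1,1) conditional variance; the series below is synthetic, not vibration data.

    ```python
    # AR mean + GARCH(1,1) variance, fitted and forecast with `arch`.
    import numpy as np
    from arch import arch_model

    rng = np.random.default_rng(10)
    n = 1000
    y = np.zeros(n)
    for i in range(2, n):                    # synthetic AR(2)-ish signal
        y[i] = 0.5 * y[i - 1] - 0.2 * y[i - 2] + rng.normal(0, 1 + 0.001 * i)

    model = arch_model(y, mean="AR", lags=2, vol="GARCH", p=1, q=1)
    res = model.fit(disp="off")
    fc = res.forecast(horizon=5)
    print(fc.mean.iloc[-1].values)           # 5-step-ahead mean forecast
    print(fc.variance.iloc[-1].values)       # 5-step-ahead variance forecast
    ```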

  16. What good are abstract and what-if models? Lessons from the Gaïa hypothesis.

    PubMed

    Dutreuil, Sébastien

    2014-08-01

    This article on the epistemology of computational models stems from an analysis of the Gaïa hypothesis (GH). It begins with James Kirchner's criticisms of the central computational model of GH, Daisyworld. Among other things, the model has been criticized for being too abstract, describing fictional entities (fictive daisies on an imaginary planet) and trying to answer counterfactual (what-if) questions (how would a planet look if life had no influence on it?). For these reasons the model has been considered not testable, and therefore not legitimate in science, and in any case not very interesting, since it explores non-actual issues. This criticism implicitly assumes that science should only be involved in the making of models that are "actual" (as opposed to what-if) and "specific" (as opposed to abstract). I challenge both of these criticisms in this article. First, by showing that although testability (understood as the comparison of model output with empirical data) is an important procedure for explanatory models, there are plenty of models that are not testable. The fact that these are not testable (in this restricted sense) has nothing to do with their being "abstract" or "what-if" but with their being predictive models. Secondly, I argue that "abstract" and "what-if" models aim at (respectable) epistemic purposes distinct from those pursued by "actual and specific" models. Abstract models are used to propose how-possibly explanations or to pursue theorizing. What-if models are used to attribute causal or explanatory power to a variable of interest. The fact that they aim at different epistemic goals entails that it may not be accurate to consider the choice between different kinds of model as a "strategy".
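
    For readers unfamiliar with the model under discussion, a compact Daisyworld runs as follows (standard Watson-Lovelock parameter values, forward-Euler integration); it illustrates the albedo feedback itself, not any result claimed in this article.

    ```python
    # Daisyworld: black and white daisies alter planetary albedo, which
    # feeds back on the local temperatures that set their growth rates.
    S, sigma, q, gamma = 917.0, 5.67e-8, 2.06e9, 0.3
    A_w, A_b, A_g, p = 0.75, 0.25, 0.5, 1.0

    def run(lum, a_w=0.2, a_b=0.2, dt=0.05, steps=4000):
        for _ in range(steps):
            x = max(p - a_w - a_b, 0.0)                # bare ground fraction
            A = a_w * A_w + a_b * A_b + x * A_g        # planetary albedo
            Te4 = S * lum * (1 - A) / sigma            # effective temp^4
            beta = lambda Ai: max(0.0, 1 - 0.003265 *
                                  (295.5 - (q * (A - Ai) + Te4) ** 0.25) ** 2)
            a_w += dt * a_w * (x * beta(A_w) - gamma)
            a_b += dt * a_b * (x * beta(A_b) - gamma)
            a_w, a_b = max(a_w, 0.001), max(a_b, 0.001)  # seed populations
        return Te4 ** 0.25, a_w, a_b

    for L in (0.7, 1.0, 1.3):
        T, aw, ab = run(L)
        print(f"L={L}: T={T - 273.15:.1f} C, white={aw:.2f}, black={ab:.2f}")
    ```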

  17. Modelling of the dynamic behaviour of hard-to-machine alloys

    NASA Astrophysics Data System (ADS)

    Hokka, M.; Leemet, T.; Shrot, A.; Bäker, M.; Kuokkala, V.-T.

    2012-08-01

    Machining of titanium alloys and nickel-based superalloys can be difficult due to their combination of high strength, ductility, and excellent overall high-temperature performance. Machining of these alloys can, however, be improved by simulating the processes and optimizing the machining parameters. The simulations, in turn, need accurate material models that predict the material behaviour in the range of strains and strain rates that occur in machining. In this work, the behaviour of the titanium 15-3-3-3 alloy and the nickel-based superalloy 625 was characterized in compression, and Johnson-Cook material model parameters were obtained from the results. For the titanium alloy, the adiabatic Johnson-Cook model predicts the softening of the material adequately, but the high strain-hardening rate of Alloy 625 in the model prevents strain localization, and no shear bands formed when this model was used. For Alloy 625, the Johnson-Cook model was therefore modified to decrease the strain-hardening rate at large strains. The models were used in simulations of orthogonal cutting. For both materials, the models are able to predict the serrated chip formation frequently observed in the machining of these alloys. The machining forces also match relatively well, but some differences can be seen in the details of the experimentally obtained and simulated chip shapes.
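
    For reference, the standard Johnson-Cook flow stress used in such simulations multiplies a strain-hardening term, a strain-rate term, and a thermal-softening term. The parameter values below are generic placeholders, not the fitted values reported for the Ti-15-3-3-3 alloy or Alloy 625.

```python
import numpy as np

def johnson_cook(strain, strain_rate, T,
                 A=900e6, B=700e6, n=0.4, C=0.03, m=1.0,
                 eps0=1.0, T_room=293.0, T_melt=1900.0):
    """Flow stress [Pa]: (A + B*eps^n) * (1 + C*ln(rate/eps0)) * (1 - T*^m)."""
    T_star = (T - T_room) / (T_melt - T_room)          # homologous temperature
    return (A + B * strain**n) \
        * (1.0 + C * np.log(strain_rate / eps0)) \
        * (1.0 - np.clip(T_star, 0.0, 1.0)**m)

print(johnson_cook(strain=0.2, strain_rate=1e4, T=600.0) / 1e6, "MPa")
```

    The modification described for Alloy 625 would act on the first (strain-hardening) factor, reducing its growth at large strains so that strain can localize into shear bands.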

  18. Abstraction and Consolidation

    ERIC Educational Resources Information Center

    Monaghan, John; Ozmantar, Mehmet Fatih

    2004-01-01

    What is involved in consolidating a new mathematical abstraction? This paper examines the work of one student who was working on a task designed to consolidate two recently constructed absolute function abstractions. The study adopts an activity theoretic model of abstraction in context. Selected protocol data are presented. The initial state of…

  19. ABSTRACTION OF INFORMATION FROM 2- AND 3-DIMENSIONAL PORFLOW MODELS INTO A 1-D GOLDSIM MODEL - 11404

    SciTech Connect

    Taylor, G.; Hiergesell, R.

    2010-11-16

    The Savannah River National Laboratory has developed a 'hybrid' approach to Performance Assessment modeling which has been used for a number of Performance Assessments. This hybrid approach uses a multi-dimensional modeling platform (PorFlow) to develop deterministic flow fields and perform contaminant transport, while the GoldSim modeling platform is used to develop the sensitivity and uncertainty analyses. Because these codes perform complementary tasks, it is essential that they produce very similar results for the deterministic cases. This paper discusses two very different waste forms, one with no engineered barriers and one with engineered barriers, each of which presents different challenges to the abstraction of data. The hybrid approach used at SRNL employs a 2-D unsaturated zone (UZ) model and a 3-D saturated zone (SZ) model in the PorFlow modeling platform. The UZ model consists of the waste zone and the unsaturated zone between the waste zone and the water table. The SZ model extends from source cells beneath the waste form to the points of interest. Both models contain 'buffer' cells so that modeling domain boundaries do not adversely affect the calculation. The information pipeline between the two models is the contaminant flux: the domain contaminant flux from the UZ model, typically in units of moles (or curies) per year, is used as a boundary condition for the source cells in the SZ model. The GoldSim component of the hybrid approach is an integrated UZ-SZ model. It is a 1-D representation of the SZ, and typically of the UZ as well, although, depending on the waste form being analyzed, it may contain pseudo-2-D elements as discussed below. A waste form at the Savannah River Site (SRS) which has no engineered barriers is commonly referred to as a slit trench. A slit trench, as its name implies, is an unlined trench, typically 6 m deep, 6 m wide, and 200 m long. Low-level waste consisting of soil, debris, rubble, wood

  20. Improving protein–protein interactions prediction accuracy using protein evolutionary information and relevance vector machine model

    PubMed Central

    An, Ji‐Yong; Meng, Fan‐Rong; Chen, Xing; Yan, Gui‐Ying; Hu, Ji‐Pu

    2016-01-01

    Abstract Predicting protein–protein interactions (PPIs) is a challenging task, and it is essential for constructing protein interaction networks, which in turn are important for facilitating our understanding of the mechanisms of biological systems. Although a number of high-throughput technologies have been proposed to predict PPIs, there are unavoidable shortcomings, including high cost, time intensity, and inherently high false positive rates. For these reasons, many computational methods have been proposed for predicting PPIs. However, the problem is still far from being solved. In this article, we propose a novel computational method called RVM-BiGP that combines the relevance vector machine (RVM) model and bi-gram probabilities (BiGP) for PPI detection from protein sequences. The major improvements are as follows: (1) protein sequences are represented using a BiGP feature representation on a Position Specific Scoring Matrix (PSSM), in which the protein evolutionary information is contained; (2) to reduce the influence of noise, Principal Component Analysis (PCA) is used to reduce the dimension of the BiGP vector; (3) the powerful and robust RVM algorithm is used for classification. Five-fold cross-validation experiments were executed on yeast and Helicobacter pylori datasets and achieved very high accuracies of 94.57% and 90.57%, respectively. These results are significantly better than previous methods. To further evaluate the proposed method, we compared it with the state-of-the-art support vector machine (SVM) classifier on the yeast dataset. The experimental results demonstrate that our RVM-BiGP method is significantly better than the SVM-based method. In addition, we achieved 97.15% accuracy on the imbalanced yeast dataset, which is higher than that on the balanced yeast dataset. The promising experimental results show the efficiency and robustness of the proposed method, which can be an automatic
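
    A sketch of the feature pipeline under stated assumptions: the bi-gram probability descriptor below follows the usual definition on a row-normalized PSSM, B[i, j] = sum_k P[k, i] * P[k+1, j], giving a 400-dimensional vector per protein. scikit-learn has no relevance vector machine, so an SVM stands in for the final classifier here; the paper's point is that an RVM in this slot performs better. The data are synthetic stand-ins.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def bigp(pssm):
    """pssm: (L, 20) row-normalized PSSM -> (400,) bi-gram feature vector."""
    return (pssm[:-1].T @ pssm[1:]).ravel()

rng = np.random.default_rng(0)
# Stand-in data: 200 proteins, each summarized by one synthetic 50-row PSSM.
X = np.array([bigp(rng.dirichlet(np.ones(20), size=50)) for _ in range(200)])
y = rng.integers(0, 2, 200)  # interact / not interact (stand-in labels)

# PCA for noise/dimension reduction, then the classifier.
clf = make_pipeline(PCA(n_components=50), SVC(kernel="rbf"))
clf.fit(X, y)
print(clf.score(X, y))
```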

  1. Machine Learning Models for Detection of Regions of High Model Form Uncertainty in RANS

    NASA Astrophysics Data System (ADS)

    Ling, Julia; Templeton, Jeremy

    2015-11-01

    Reynolds Averaged Navier Stokes (RANS) models are widely used because of their computational efficiency and ease-of-implementation. However, because they rely on inexact turbulence closures, they suffer from significant model form uncertainty in many flows. Many RANS models make use of the Boussinesq hypothesis, which assumes a non-negative, scalar eddy viscosity that provides a linear relation between the Reynolds stresses and the mean strain rate. In many flows of engineering relevance, this eddy viscosity assumption is violated, leading to inaccuracies in the RANS predictions. For example, in near wall regions, the Boussinesq hypothesis fails to capture the correct Reynolds stress anisotropy. In regions of flow curvature, the linear relation between Reynolds stresses and mean strain rate may be inaccurate. This model form uncertainty cannot be quantified by simply varying the model parameters, as it is rooted in the model structure itself. Machine learning models were developed to detect regions of high model form uncertainty. These machine learning models consisted of binary classifiers that predicted, on a point-by-point basis, whether or not key RANS assumptions were violated. These classifiers were trained and evaluated for their sensitivity, specificity, and generalizability on a database of canonical flows.
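
    A minimal sketch of the point-wise classification idea: local flow features at each cell feed a binary classifier that flags where a linear eddy-viscosity assumption is violated. The feature names, labels, and data below are synthetic stand-ins; in practice the labels come from comparing RANS fields against higher-fidelity (DNS/LES) data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 5000
# Illustrative nondimensional features per grid point, e.g. strain/rotation
# ratio, wall-distance Reynolds number, turbulence intensity.
X = rng.normal(size=(n, 3))
y = (X[:, 0] + 0.5 * X[:, 1] ** 2 > 1.0).astype(int)  # synthetic "violated" flag

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```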

  2. Study of the machining process of nano-electrical discharge machining based on combined atomistic-continuum modeling method

    NASA Astrophysics Data System (ADS)

    Zhang, Guojun; Guo, Jianwen; Ming, Wuyi; Huang, Yu; Shao, Xinyu; Zhang, Zhen

    2014-01-01

    Nano-electrical discharge machining (nano-EDM) is an attractive technique for manufacturing parts with nanoscale precision; however, the incompleteness of its theory impedes the development of more advanced nano-EDM technology. In this paper, a computational simulation model combining a molecular dynamics simulation model and the two-temperature model for the single-discharge process in nano-EDM is constructed to study the machining mechanism of nano-EDM from a thermal point of view. The melting process is analyzed: before the heated material melts, a thermal compressive stress higher than 3 GPa is induced; after the material melts, the compressive stress is relieved. The cooling and solidification processes are also analyzed. It is found that during the cooling of the melted material, a tensile stress higher than 3 GPa arises, which leads to the disintegration of the material. The formation of the white layer is attributed to homogeneous solidification, and the resultant residual stress is analyzed.
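
    A 1-D explicit finite-difference sketch of the two-temperature model that such combined simulations couple to molecular dynamics: electrons absorb the discharge energy and exchange it with the lattice at a coupling rate G, so the two temperatures stay out of equilibrium on femtosecond timescales. All material constants are order-of-magnitude placeholders, not the paper's values.

```python
import numpy as np

nx, dx = 200, 1e-9       # 200 nm domain, 1 nm grid spacing
dt, steps = 1e-17, 2000  # 10 as time step, 20 fs total
Ce, Cl = 2.0e4, 2.5e6    # electron / lattice heat capacity [J m^-3 K^-1]
ke, G = 100.0, 1.0e17    # electron conductivity, e-ph coupling (placeholders)

Te = np.full(nx, 300.0)  # electron temperature [K]
Tl = np.full(nx, 300.0)  # lattice temperature [K]

for n in range(steps):
    lap = np.zeros(nx)
    lap[1:-1] = (Te[2:] - 2.0 * Te[1:-1] + Te[:-2]) / dx**2
    S = np.zeros(nx)
    if n * dt < 1.0e-14:  # discharge deposits energy near x = 0 for 10 fs
        S[:5] = 1.0e23    # volumetric source [W m^-3] (placeholder)
    dTe = dt / Ce * (ke * lap - G * (Te - Tl) + S)
    dTl = dt / Cl * (G * (Te - Tl))
    Te, Tl = Te + dTe, Tl + dTl

print(f"peak electron T: {Te.max():.0f} K, peak lattice T: {Tl.max():.1f} K")
```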

  3. The Modelling Of Basing Holes Machining Of Automatically Replaceable Cubical Units For Reconfigurable Manufacturing Systems With Low-Waste Production

    NASA Astrophysics Data System (ADS)

    Bobrovskij, N. M.; Levashkin, D. G.; Bobrovskij, I. N.; Melnikov, P. A.; Lukyanov, A. A.

    2017-01-01

    This article addresses the machining-accuracy problems of the basing holes of automatically replaceable cubical units (carriers) for reconfigurable manufacturing systems (RMS) with low-waste production. Results of modeling the machining of the units' basing holes on the basis of dimensional-chain analysis are presented. The influence of machining parameters on the accuracy of the center-to-center spacing between basing holes is shown. A mathematical model of the machining accuracy of the carriers' basing holes is offered.

  4. A stochastic model for the cell formation problem considering machine reliability

    NASA Astrophysics Data System (ADS)

    Esmailnezhad, Bahman; Fattahi, Parviz; Kheirkhah, Amir Saman

    2015-03-01

    This paper presents a new mathematical model to solve the cell formation problem in cellular manufacturing systems, where inter-arrival times, processing times, and machine breakdown times are probabilistic. The objective function maximizes the number of operations of each part with a higher arrival rate within one cell. Because a queue forms behind each machine, queuing theory is used to formulate the model. To solve the model, two metaheuristic algorithms, a modified particle swarm optimization and a genetic algorithm, are proposed. For the generation of initial solutions in these algorithms, a new heuristic method is developed which always creates feasible solutions. Both metaheuristic algorithms are compared against global solutions obtained from the Lingo software's branch and bound (B&B). A statistical method is also used to compare the solutions of the two metaheuristic algorithms. The results of numerical examples indicate that considering machine breakdowns has a significant effect on the block structures of machine-part matrices.

  5. Human factors model concerning the man-machine interface of mining crewstations

    NASA Technical Reports Server (NTRS)

    Rider, James P.; Unger, Richard L.

    1989-01-01

    The U.S. Bureau of Mines is developing a computer model to analyze the human factors aspects of mining machine operator compartments. The model will be used as a research tool and as a design aid. It will have the capability to perform the following: simulated anthropometric or reach assessment, visibility analysis, illumination analysis, structural analysis of the protective canopy, operator fatigue analysis, and computation of an ingress-egress rating. The model will make extensive use of graphics to simplify data input and output. Two-dimensional orthographic projections of the machine and its operator compartment are digitized, and the data are rebuilt into a three-dimensional representation of the mining machine. Anthropometric data from either an individual or any size population may be used. The model is intended for use by equipment manufacturers and mining companies during initial design work on new machines. In addition to its use in machine design, the model should prove helpful as an accident investigation tool and for determining the effects of machine modifications made in the field on the critical areas of visibility and control reachability.

  6. Comparing statistical and machine learning classifiers: alternatives for predictive modeling in human factors research.

    PubMed

    Carnahan, Brian; Meyer, Gérard; Kuntz, Lois-Ann

    2003-01-01

    Multivariate classification models play an increasingly important role in human factors research. In the past, these models have been based primarily on discriminant analysis and logistic regression. Models developed from machine learning research offer the human factors professional a viable alternative to these traditional statistical classification methods. To illustrate this point, two machine learning approaches--genetic programming and decision tree induction--were used to construct classification models designed to predict whether or not a student truck driver would pass his or her commercial driver license (CDL) examination. The models were developed and validated using the curriculum scores and CDL exam performances of 37 student truck drivers who had completed a 320-hr driver training course. Results indicated that the machine learning classification models were superior to discriminant analysis and logistic regression in terms of predictive accuracy. Actual or potential applications of this research include the creation of models that more accurately predict human performance outcomes.
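
    The comparison the study performs, in miniature: a decision tree versus logistic regression on the same tabular data. The scores and labels below are synthetic stand-ins for the curriculum scores and CDL outcomes, and the paper's other learner, genetic programming, has no standard scikit-learn counterpart, so it is omitted.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(70, 10, size=(37, 5))                          # 37 students, 5 scores
y = (X.mean(axis=1) + rng.normal(0, 3, 37) > 70).astype(int)  # pass/fail stand-in

for name, model in [("logistic regression", LogisticRegression(max_iter=1000)),
                    ("decision tree", DecisionTreeClassifier(max_depth=3))]:
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: {acc:.2f}")
```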

  7. Thermal Error Modeling of a Machine Tool Using Data Mining Scheme

    NASA Astrophysics Data System (ADS)

    Wang, Kun-Chieh; Tseng, Pai-Chang

    In this paper, a knowledge discovery technique is used to build an effective and transparent mathematical thermal error model for machine tools. Our proposed thermal error modeling methodology (called KRL) integrates K-means clustering (KM), rough-set theory (RS), and a linear regression model (LR). First, to explore the machine tool's thermal behavior, an integrated system is designed to simultaneously measure the temperature rises at selected characteristic points and the thermal deformations at the spindle nose under realistic machining conditions. Second, the obtained data are classified by the KM method and further reduced by the RS scheme, and a linear thermal error model is established by the LR technique. To evaluate the performance of our proposed model, an adaptive neuro-fuzzy inference system (ANFIS) thermal error model is introduced for comparison. Finally, a verification experiment is carried out, and the results reveal that the proposed KRL model is effective in predicting thermal behavior in machine tools. The KRL model is transparent, easily understood by users, and can be easily programmed or modified for different machining conditions.
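
    A sketch of the KRL chain under stated assumptions: K-means groups the temperature channels by similarity, one representative per cluster is kept (a simple stand-in for the rough-set reduction step), and a linear regression maps the kept temperatures to spindle thermal deformation. All data are synthetic.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
T = rng.normal(size=(300, 12)).cumsum(axis=0)                # 12 temperature channels
z = 2.0 * T[:, 0] - 0.5 * T[:, 7] + rng.normal(0, 0.5, 300)  # deformation [um]

km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(T.T)  # cluster the sensors
keep = [np.where(km.labels_ == c)[0][0] for c in range(4)]     # one per cluster

lr = LinearRegression().fit(T[:, keep], z)                     # linear error model
print("kept sensors:", keep, " R^2:", lr.score(T[:, keep], z))
```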

  8. Precision holding prediction model for moving joint surfaces of large machine tool

    NASA Astrophysics Data System (ADS)

    Wang, Mulan; Chen, Xuanyu; Ding, Wenzheng; Xu, Kaiyun

    2017-01-01

    In large machine tools, plastic guide rails are more and more widely used because of their good mechanical properties. Based on the actual operating conditions of the machine tool, this paper analyzes the precision-holding performance of the main bearing surface of a large machine tool with a moving plastic guide rail. The precision-holding performance of the plastic sliding guide rail is studied in detail from several aspects, such as the lubrication condition, the operating parameters of the machine tool, and the material properties. A precision-holding model of the moving joint surface of the plastic-coated guide rail is established. Experimental research on the accuracy of the guide rail is also carried out, verifying the validity of the theoretical model.

  9. Modelling of internal architecture of kinesin nanomotor as a machine language.

    PubMed

    Khataee, H R; Ibrahim, M Y

    2012-09-01

    Kinesin is a protein-based natural nanomotor that transports molecular cargoes within cells by walking along microtubules. The kinesin nanomotor is considered a bio-nanoagent which is able to sense the cell through its sensors (i.e. its heads and tail), make decisions internally, and perform actions on the cell through its actuator (i.e. its motor domain). The study maps the agent-based architectural model of the internal decision-making process of the kinesin nanomotor to a machine language using an automata algorithm. The applied algorithm receives the internal agent-based architectural model of the kinesin nanomotor as a deterministic finite automaton (DFA) and generates a regular machine language. The generated language was accepted by the architectural DFA model of the nanomotor and was in good agreement with its natural behaviour. The internal agent-based architectural model indicates the degree of autonomy and intelligence of the nanomotor's interactions with its cell. Thus, the developed regular machine language can model the degree of autonomy and intelligence of kinesin's interactions with its cell as a language. Modelling the internal architectures of autonomous and intelligent bio-nanosystems as machine languages can lay the foundation for the concept of bio-nanoswarms and the next phases of bio-nanorobotic systems development.
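
    A toy deterministic finite automaton in the spirit of the paper: states are steps of kinesin's walking cycle and inputs are chemical events. The state names and alphabet below are invented for illustration, not the paper's exact model.

```python
# Transition table: (state, event) -> next state.
KINESIN_DFA = {
    ("both_heads_bound", "ATP_binds"): "neck_linker_docked",
    ("neck_linker_docked", "step"):    "one_head_detached",
    ("one_head_detached", "rebind"):   "both_heads_bound",
}

def run(dfa, start, events):
    state = start
    for e in events:
        state = dfa[(state, e)]  # raises KeyError on an illegal event
    return state

print(run(KINESIN_DFA, "both_heads_bound", ["ATP_binds", "step", "rebind"]))
# -> both_heads_bound: one full step of the walking cycle accepted
```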

  10. Spindle Thermal Error Optimization Modeling of a Five-axis Machine Tool

    NASA Astrophysics Data System (ADS)

    Guo, Qianjian; Fan, Shuo; Xu, Rufeng; Cheng, Xiang; Zhao, Guoyong; Yang, Jianguo

    2017-03-01

    To address the low machining accuracy and uncontrollable thermal errors of NC machine tools, spindle thermal error measurement, modeling, and compensation are investigated for a two-turntable five-axis machine tool. Measurement experiments on heat sources and thermal errors are carried out, and the grey relational analysis (GRA) method is introduced for selecting the temperature variables used in thermal error modeling. To analyze the influence of different heat sources on spindle thermal errors, an artificial neural network (ANN) model is presented, and the artificial bee colony (ABC) algorithm is introduced to train the link weights of the ANN; the resulting ABC-NN (artificial bee colony-based neural network) modeling method is proposed and used to predict spindle thermal errors. To test the prediction performance of the ABC-NN model, an experimental system is developed, and the predictions of least squares regression (LSR), ANN, and ABC-NN are compared with measured spindle thermal errors. Experimental results show that the prediction accuracy of the ABC-NN model is higher than that of LSR and ANN, with residual errors smaller than 3 μm, demonstrating that the new modeling method is feasible. The proposed research provides guidance for compensating thermal errors and improving the machining accuracy of NC machine tools.

  11. Learning with Technology: Video Modeling with Concrete-Representational-Abstract Sequencing for Students with Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Yakubova, Gulnoza; Hughes, Elizabeth M.; Shinaberry, Megan

    2016-01-01

    The purpose of this study was to determine the effectiveness of a video modeling intervention with concrete-representational-abstract instructional sequence in teaching mathematics concepts to students with autism spectrum disorder (ASD). A multiple baseline across skills design of single-case experimental methodology was used to determine the…

  12. Teaching Subtraction and Multiplication with Regrouping Using the Concrete-Representational-Abstract Sequence and Strategic Instruction Model

    ERIC Educational Resources Information Center

    Flores, Margaret M.; Hinton, Vanessa; Strozier, Shaunita D.

    2014-01-01

    Based on Common Core Standards (2010), mathematics interventions should emphasize conceptual understanding of numbers and operations as well as fluency. For students at risk for failure, the concrete-representational-abstract (CRA) sequence and the Strategic Instruction Model (SIM) have been shown effective in teaching computation with an emphasis…

  13. Quantum Turing machine and brain model represented by Fock space

    NASA Astrophysics Data System (ADS)

    Iriyama, Satoshi; Ohya, Masanori

    2016-05-01

    Adaptive dynamics is a new mathematics for treating complex phenomena, for example, chaos, quantum algorithms, and psychological phenomena. In this paper, we briefly review the notion of adaptive dynamics and explain the definition of the generalized Turing machine (GTM) and the recognition process represented by the Fock space. Moreover, we show that there exists a quantum channel, described by the GKSL master equation, that achieves the Chaos Amplifier used in [M. Ohya and I. V. Volovich, J. Opt. B 5(6) (2003) 639; M. Ohya and I. V. Volovich, Rep. Math. Phys. 52(1) (2003) 25].

  14. Research Abstracts.

    ERIC Educational Resources Information Center

    Plotnick, Eric

    2001-01-01

    Presents research abstracts from the ERIC Clearinghouse on Information and Technology. Topics include: classroom communication apprehension and distance education; outcomes of a distance-delivered science course; the NASA/Kennedy Space Center Virtual Science Mentor program; survey of traditional and distance learning higher education members;…

  15. Research Abstracts.

    ERIC Educational Resources Information Center

    Plotnik, Eric

    2001-01-01

    Presents six research abstracts from the ERIC (Educational Resources Information Center) database. Topics include: effectiveness of distance versus traditional on-campus education; improved attribution recall from diversification of environmental context during computer-based instruction; qualitative analysis of situated Web-based learning;…

  16. Abstract Constructions.

    ERIC Educational Resources Information Center

    Pietropola, Anne

    1998-01-01

    Describes a lesson designed to culminate a year of eighth-grade art classes in which students explore elements of design and space by creating 3-D abstract constructions. Outlines the process of using foam board and markers to create various shapes and optical effects. (DSK)

  17. An Introduction to Topic Modeling as an Unsupervised Machine Learning Way to Organize Text Information

    ERIC Educational Resources Information Center

    Snyder, Robin M.

    2015-01-01

    The field of topic modeling has become increasingly important over the past few years. Topic modeling is an unsupervised machine learning way to organize text (or image or DNA, etc.) information such that related pieces of text can be identified. This paper/session will present/discuss the current state of topic modeling, why it is important, and…
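
    A minimal end-to-end example of the technique the paper surveys: latent Dirichlet allocation over a bag-of-words matrix, the standard unsupervised way to group related texts. The four "documents" below are placeholders.

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

docs = ["machine learning model training data",
        "rotor vibration spindle thermal error",
        "training data for learning models",
        "thermal deformation of machine spindles"]

vec = CountVectorizer().fit(docs)
counts = vec.transform(docs)                 # document-term count matrix
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)

# Show the top words of each discovered topic.
terms = vec.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[-4:][::-1]]
    print(f"topic {k}: {top}")
```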

  18. A Framework for Modeling Human-Machine Interactions

    NASA Technical Reports Server (NTRS)

    Shafto, Michael G.; Rosekind, Mark R. (Technical Monitor)

    1996-01-01

    Modern automated flight-control systems employ a variety of different behaviors, or modes, for managing the flight. While developments in cockpit automation have resulted in workload reduction and economical advantages, they have also given rise to an ill-defined class of human-machine problems, sometimes referred to as 'automation surprises'. Our interest in applying formal methods for describing human-computer interaction stems from our ongoing research on cockpit automation. In this area of aeronautical human factors, there is much concern about how flight crews interact with automated flight-control systems, so that the likelihood of making errors, in particular mode-errors, is minimized and the consequences of such errors are contained. The goal of the ongoing research on formal methods in this context is: (1) to develop a framework for describing human interaction with control systems; (2) to formally categorize such automation surprises; and (3) to develop tests for identification of these categories early in the specification phase of a new human-machine system.

  19. Scientist-Centered Workflow Abstractions via Generic Actors, Workflow Templates, and Context-Awareness for Groundwater Modeling and Analysis

    SciTech Connect

    Chin, George; Sivaramakrishnan, Chandrika; Critchlow, Terence J.; Schuchardt, Karen L.; Ngu, Anne Hee Hiong

    2011-07-04

    A drawback of existing scientific workflow systems is the lack of support for domain scientists in designing and executing their own scientific workflows. Many domain scientists avoid developing and using workflows because the basic objects of workflows are too low-level, and high-level tools and mechanisms to aid in workflow construction and use are largely unavailable. In our research, we are prototyping higher-level abstractions and tools to better support scientists in their workflow activities. Specifically, we are developing generic actors that provide abstract interfaces to specific functionality, workflow templates that encapsulate workflow and data patterns that can be reused and adapted by scientists, and context-awareness mechanisms to gather contextual information from the workflow environment on behalf of the scientist. To evaluate these scientist-centered abstractions on real problems, we apply them to construct and execute scientific workflows in the specific domain of groundwater modeling and analysis.

  20. Improving Domain-specific Machine Translation by Constraining the Language Model

    DTIC Science & Technology

    2012-07-01

    ...of greater amounts of training data in the two models, especially in the target language model (Brants et al., 2007). Och (2005) reports findings... train with the largest language models (NIST, 2006). The highest scoring Arabic-English system used a 1-trillion-word language model (Och, 2006)... References: Brants, T.; Popat, A. C.; Xu, P.; Och, F. J.; Dean, J. Large Language Models in Machine Translation. Joint Meeting of the Conference on Empirical

  1. Combining Psychological Models with Machine Learning to Better Predict People’s Decisions

    DTIC Science & Technology

    2012-03-09

    ...in some applications (Kaelbling, Littman, & Cassandra, 1998; Neumann & Morgenstern, 1944; Russell & Norvig, 2003). However, research into people's... scientists often model people's decisions through machine learning techniques (Russell & Norvig, 2003). These models are based on statistical methods such as... A., & Kraus, S. (2011). Using aspiration adaptation theory to improve learning. In AAMAS (p. 423-430). Russell, S. J., & Norvig, P. (2003

  2. Genetic Optimization of Training Sets for Improved Machine Learning Models of Molecular Properties.

    PubMed

    Browning, Nicholas J; Ramakrishnan, Raghunathan; von Lilienfeld, O Anatole; Roethlisberger, Ursula

    2017-04-06

    The training of molecular models of quantum mechanical properties based on statistical machine learning requires large data sets which exemplify the map from chemical structure to molecular property. Intelligent a priori selection of training examples is often difficult or impossible to achieve, as prior knowledge may be unavailable. Ordinarily, representative selection of training molecules from such data sets is achieved through random sampling. We use genetic algorithms to optimize the composition of training sets consisting of tens of thousands of small organic molecules. The resulting machine learning models are considerably more accurate: in the limit of small training sets, mean absolute errors for out-of-sample predictions are reduced by up to ∼75%. We discuss and present optimized training sets consisting of 10 molecular classes for all molecular properties studied. We show that these classes can be used to design improved training sets for the generation of machine learning models of the same properties in similar but unrelated molecular sets.
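
    A bare-bones version of the idea, with all settings illustrative: a genetic algorithm evolves which training molecules to keep, scoring each candidate subset by the out-of-sample error of a model trained on it. Kernel ridge regression stands in for the paper's learner, the data are synthetic, and mutation is omitted (crossover only) for brevity.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 10))
y = X @ rng.normal(size=10) + rng.normal(0, 0.1, 400)
X_val, y_val = X[300:], y[300:]        # held-out molecules for scoring

def fitness(idx):
    """Negative validation MAE of a model trained on the subset `idx`."""
    m = KernelRidge(kernel="rbf", alpha=1e-3).fit(X[idx], y[idx])
    return -mean_absolute_error(y_val, m.predict(X_val))

# Population: 20 candidate subsets of 40 training molecules each.
pop = [rng.choice(300, size=40, replace=False) for _ in range(20)]
for _ in range(30):                    # generations
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]
    children = []
    for _ in range(10):                # crossover: sample from the parents' union
        a, b = rng.choice(10, 2, replace=False)
        pool = np.union1d(parents[a], parents[b])
        children.append(rng.choice(pool, size=40, replace=False))
    pop = parents + children

best = max(pop, key=fitness)
print("best subset validation MAE:", -fitness(best))
```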

  3. Experience with abstract notation one

    NASA Technical Reports Server (NTRS)

    Harvey, James D.; Weaver, Alfred C.

    1990-01-01

    The development of computer science has produced a vast number of machine architectures, programming languages, and compiler technologies. The cross product of these three characteristics defines the spectrum of previous and present data representation methodologies. With regard to computer networks, the uniqueness of these methodologies presents an obstacle when disparate host environments are to be interconnected. Interoperability within a heterogeneous network relies upon the establishment of data representation commonality. The International Standards Organization (ISO) is currently developing the abstract syntax notation one standard (ASN.1) and the basic encoding rules standard (BER) that collectively address this problem. When used within the presentation layer of the open systems interconnection reference model, these two standards provide the data representation commonality required to facilitate interoperability. The details of a compiler that was built to automate the use of ASN.1 and BER are described. From this experience, insights into both standards are given and potential problems relating to this development effort are discussed.

  4. A Review of Current Machine Learning Methods Used for Cancer Recurrence Modeling and Prediction

    SciTech Connect

    Hemphill, Geralyn M.

    2016-09-27

    Cancer has been characterized as a heterogeneous disease consisting of many different subtypes. The early diagnosis and prognosis of a cancer type has become a necessity in cancer research. A major challenge in cancer management is the classification of patients into appropriate risk groups for better treatment and follow-up. Such risk assessment is critically important in order to optimize the patient’s health and the use of medical resources, as well as to avoid cancer recurrence. This paper focuses on the application of machine learning methods for predicting the likelihood of a recurrence of cancer. It is not meant to be an extensive review of the literature on the subject of machine learning techniques for cancer recurrence modeling. Other recent papers have performed such a review, and I will rely heavily on the results and outcomes from these papers. The electronic databases that were used for this review include PubMed, Google, and Google Scholar. Query terms used include “cancer recurrence modeling”, “cancer recurrence and machine learning”, “cancer recurrence modeling and machine learning”, and “machine learning for cancer recurrence and prediction”. The most recent and most applicable papers to the topic of this review have been included in the references. It also includes a list of modeling and classification methods to predict cancer recurrence.

  5. Machine learning for many-body physics: The case of the Anderson impurity model

    NASA Astrophysics Data System (ADS)

    Arsenault, Louis-François; Lopez-Bezanilla, Alejandro; von Lilienfeld, O. Anatole; Millis, Andrew J.

    2014-10-01

    Machine learning methods are applied to finding the Green's function of the Anderson impurity model, a basic model system of quantum many-body condensed-matter physics. Different methods of parametrizing the Green's function are investigated; a representation in terms of Legendre polynomials is found to be superior due to its limited number of coefficients and its applicability to state of the art methods of solution. The dependence of the errors on the size of the training set is determined. The results indicate that a machine learning approach to dynamical mean-field theory may be feasible.

  6. A Rapid Compression Machine Modelling Study of the Heptane Isomers

    SciTech Connect

    Silke, E J; Curran, H J; Simmie, J M; Pitz, W J; Westbrook, C K

    2005-05-10

    Previously we have reported on the combustion behavior of all nine isomers of heptane in a rapid compression machine (RCM) with stoichiometric fuel and "air" mixtures at a compressed gas pressure of 15 atm. The dependence of autoignition delay times on molecular structure was illustrated. Here, we report some additional experimental work that was performed in order to address unusual results regarding significant differences in the ignition delay times recorded at the same fuel and oxygen composition, but with different fractions of nitrogen and argon diluent gases. Moreover, we have begun to simulate these experiments with detailed chemical kinetic mechanisms. These mechanisms are based on previous studies of other alkane molecules, in particular, n-heptane and iso-octane. We have focused our attention on n-heptane in order to systematically redevelop the chemistry and thermochemistry for this C7 isomer, with the intention of extending the knowledge gained to the other eight isomers. The addition of new reaction types, not included previously, has had a significant impact on the simulations, particularly at low temperatures.

  7. Predicting Mouse Liver Microsomal Stability with “Pruned” Machine Learning Models and Public Data

    PubMed Central

    Perryman, Alexander L.; Stratton, Thomas P.; Ekins, Sean; Freundlich, Joel S.

    2015-01-01

    Purpose: Mouse efficacy studies are a critical hurdle to advance translational research of potential therapeutic compounds for many diseases. Although mouse liver microsomal (MLM) stability studies are not a perfect surrogate for in vivo studies of metabolic clearance, they are the initial model system used to assess metabolic stability. Consequently, we explored the development of machine learning models that can enhance the probability of identifying compounds possessing MLM stability. Methods: Published assays on MLM half-life values were identified in PubChem, reformatted, and curated to create a training set with 894 unique small molecules. These data were used to construct machine learning models assessed with internal cross-validation, external tests with a published set of antitubercular compounds, and independent validation with an additional diverse set of 571 compounds (PubChem data on percent metabolism). Results: "Pruning" out the moderately unstable/moderately stable compounds from the training set produced models with superior predictive power. Bayesian models displayed the best predictive power for identifying compounds with a half-life ≥1 hour. Conclusions: Our results suggest the pruning strategy may be of general benefit to improve test set enrichment and provide machine learning models with enhanced predictive value for the MLM stability of small organic molecules. This is the most exhaustive study to date of machine learning approaches with MLM data from public sources. PMID:26415647
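
    The "pruning" step in miniature: compounds whose half-life falls in a gray zone around the stability cutoff are dropped, and a (naive) Bayesian classifier is trained on the remaining clearly stable/clearly unstable examples. The cutoffs, descriptors, and half-lives below are illustrative stand-ins, not the curated PubChem data.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
X = rng.normal(size=(894, 8))                           # descriptor matrix (stand-in)
half_life = np.exp(X[:, 0] + rng.normal(0, 0.5, 894))   # hours (stand-in)

stable, unstable = half_life >= 1.0, half_life <= 0.5
keep = stable | unstable            # prune the 0.5-1.0 h "moderate" gray zone
y = stable[keep].astype(int)

clf = GaussianNB().fit(X[keep], y)
print(f"kept {keep.sum()} of 894 after pruning; "
      f"train accuracy {clf.score(X[keep], y):.2f}")
```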

  8. Lateral-Directional Parameter Estimation on the X-48B Aircraft Using an Abstracted, Multi-Objective Effector Model

    NASA Technical Reports Server (NTRS)

    Ratnayake, Nalin A.; Waggoner, Erin R.; Taylor, Brian R.

    2011-01-01

    The problem of parameter estimation on hybrid-wing-body aircraft is complicated by the fact that many design candidates for such aircraft involve a large number of aerodynamic control effectors that act in coplanar motion. This adds to the complexity already present in the parameter estimation problem for any aircraft with a closed-loop control system. Decorrelation of flight and simulation data must be performed in order to ascertain individual surface derivatives with any sort of mathematical confidence. Non-standard control surface configurations, such as clamshell surfaces and drag-rudder modes, further complicate the modeling task. In this paper, time-decorrelation techniques are applied to a model structure selected through stepwise regression for simulated and flight-generated lateral-directional parameter estimation data. A virtual effector model that uses mathematical abstractions to describe the multi-axis effects of clamshell surfaces is developed and applied. Comparisons are made between time history reconstructions and observed data in order to assess the accuracy of the regression model. The Cramér-Rao lower bounds of the estimated parameters are used to assess the uncertainty of the regression model relative to alternative models. Stepwise regression was found to be a useful technique for lateral-directional model design for hybrid-wing-body aircraft, as suggested by available flight data. Based on the results of this study, linear regression parameter estimation methods using abstracted effectors are expected to perform well for hybrid-wing-body aircraft properly equipped for the task.

  9. Application of autoregressive distributed lag model to thermal error compensation of machine tools

    NASA Astrophysics Data System (ADS)

    Miao, Enming; Niu, Pengcheng; Fei, Yetai; Yan, Yan

    2011-12-01

    Since thermal error in precision CNC machine tools cannot be ignored, it is essential to construct a simple and effective thermal error compensation mathematical model. In this paper, three modeling methods are introduced in detail and compared: the first is a multiple linear regression model; the second is a congruence model, which combines multiple linear regression with an AR model of its residual error; and the third is an autoregressive distributed lag (ADL) model. Multiple linear regression analysis is used most commonly in thermal error compensation, since it is a simple and quick modeling method, but thermal error is nonlinear and interactive, so it is difficult to build a precise least-squares model of it. The congruence model and the ADL model belong to time-series analysis methods, which have the advantage of establishing a precise mathematical model. The distinction between the two is that the congruence model divides the parameters into two parts and estimates them separately, whereas the ADL model estimates the parameters jointly, so the congruence model is less accurate in modeling than the ADL model. Based upon an actual example, this paper concludes that the ADL model is a good way to improve the modeling accuracy of thermal error compensation for precision CNC machine tools.
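
    An ADL(1,1) thermal-error model in its simplest form, fit by ordinary least squares: deformation y_t regressed on its own lag and on current and lagged temperature x_t, i.e. y_t = c + a*y_{t-1} + b0*x_t + b1*x_{t-1}. The data and coefficients below are synthetic stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400
x = np.cumsum(rng.normal(0, 0.1, n)) + 20.0  # temperature [C] (synthetic)
y = np.zeros(n)
for t in range(1, n):                        # "true" dynamics to recover
    y[t] = 0.8 * y[t - 1] + 0.5 * x[t] - 0.3 * x[t - 1] + rng.normal(0, 0.05)

# Design matrix for y_t = c + a*y_{t-1} + b0*x_t + b1*x_{t-1}.
Z = np.column_stack([np.ones(n - 1), y[:-1], x[1:], x[:-1]])
coef, *_ = np.linalg.lstsq(Z, y[1:], rcond=None)
print("c, a, b0, b1 =", np.round(coef, 3))   # should be near 0, 0.8, 0.5, -0.3
```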

  10. Nonlinear and Digital Man-machine Control Systems Modeling

    NASA Technical Reports Server (NTRS)

    Mekel, R.

    1972-01-01

    An adaptive modeling technique is examined by which controllers can be synthesized to provide corrective dynamics to a human operator's mathematical model in closed-loop control systems. The technique utilizes a class of Liapunov functions formulated for this purpose, Liapunov's stability criterion, and a model-reference system configuration. The Liapunov function is formulated to possess variable characteristics to take the identification dynamics into consideration. The time derivative of the Liapunov function generates the identification and control laws for the mathematical model system. These laws permit the realization of a controller which updates the human operator's model parameters so that the model and the human operator produce the same response when subjected to the same stimulus. A very useful feature is the development of a digital computer program which is easily implemented and modified concurrently with experimentation. The program permits the modeling process to interact with the experimentation process in a mutually beneficial way.

  11. State Machine Modeling of the Space Launch System Solid Rocket Boosters

    NASA Technical Reports Server (NTRS)

    Harris, Joshua A.; Patterson-Hine, Ann

    2013-01-01

    The Space Launch System is a Shuttle-derived heavy-lift vehicle currently in development to serve as NASA's premiere launch vehicle for space exploration. The Space Launch System is a multistage rocket with two Solid Rocket Boosters and multiple payloads, including the Multi-Purpose Crew Vehicle. Planned Space Launch System destinations include near-Earth asteroids, the Moon, Mars, and Lagrange points. The Space Launch System is a complex system with many subsystems, requiring considerable systems engineering and integration. To this end, state machine analysis offers a method to support engineering and operational efforts, identify and avert undesirable or potentially hazardous system states, and evaluate system requirements. Finite State Machines model a system as a finite number of states, with transitions between states controlled by state-based and event-based logic. State machines are a useful tool for understanding complex system behaviors and evaluating "what-if" scenarios. This work contributes to a state machine model of the Space Launch System developed at NASA Ames Research Center. The Space Launch System Solid Rocket Booster avionics and ignition subsystems are modeled using MATLAB/Stateflow software. This model is integrated into a larger model of Space Launch System avionics used for verification and validation of Space Launch System operating procedures and design requirements. This includes testing both nominal and off-nominal system states and command sequences.
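
    A drastically simplified finite state machine for a booster ignition sequence, in the spirit of the Stateflow model: states, events, and transitions here are invented for illustration and are not SLS requirements. Illegal (state, event) pairs are flagged, which is how such a model surfaces off-nominal command sequences.

```python
# Transition table: (state, event) -> next state.
TRANSITIONS = {
    ("safed", "arm_cmd"):      "armed",
    ("armed", "ignition_cmd"): "ignited",
    ("armed", "safe_cmd"):     "safed",      # off-nominal path: abort the arm
    ("ignited", "burnout"):    "separated",
}

def step(state, event):
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"illegal event {event!r} in state {state!r}")

s = "safed"
for e in ["arm_cmd", "ignition_cmd", "burnout"]:  # nominal sequence
    s = step(s, e)
print(s)  # -> separated

try:
    step("safed", "ignition_cmd")                 # off-nominal command sequence
except ValueError as err:
    print("caught:", err)
```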

  12. A mechanistic ultrasonic vibration amplitude model during rotary ultrasonic machining of CFRP composites.

    PubMed

    Ning, Fuda; Wang, Hui; Cong, Weilong; Fernando, P K S C

    2017-04-01

    Rotary ultrasonic machining (RUM) has been investigated for machining brittle and ductile materials as well as composites. Ultrasonic vibration amplitude, as one of the most important input variables, affects almost all the output variables in RUM. Numerous investigations on measuring ultrasonic vibration amplitude without RUM machining have been reported. In recent years, ultrasonic vibration amplitude measurement during RUM of ductile materials has been investigated, and it was found that the ultrasonic vibration amplitude with RUM differed from that without RUM under the same input variables. RUM is primarily used in machining of brittle materials through brittle fracture removal. For this reason, the method for measuring ultrasonic vibration amplitude in RUM of ductile materials is not feasible for RUM of brittle materials; however, no methods for measuring ultrasonic vibration amplitude in RUM of brittle materials have been reported. In this study, ultrasonic vibration amplitude in RUM of brittle materials is investigated by establishing a mechanistic amplitude model through the cutting force. Pilot experiments are conducted to validate the model. The results show no significant differences between amplitude values calculated by the model and those obtained from experiments. The model provides a relationship between ultrasonic vibration amplitude and input variables, which is a foundation for building models to predict other output variables in RUM.

  13. Modeling aspects of estuarine eutrophication. (Latest citations from the Selected Water Resources Abstracts database). Published Search

    SciTech Connect

    Not Available

    1993-05-01

    The bibliography contains citations concerning mathematical modeling of existing water quality stresses in estuaries, harbors, bays, and coves. Both physical hydraulic and numerical models for estuarine circulation are discussed. (Contains a minimum of 96 citations and includes a subject term index and title list.)

  14. Fractured rock hydrogeology: Modeling studies. (Latest citations from the Selected Water Resources Abstracts database). Published Search

    SciTech Connect

    Not Available

    1993-07-01

    The bibliography contains citations concerning the use of mathematical and conceptual models in describing the hydraulic parameters of fluid flow in fractured rock. Topics include the use of tracers, solute and mass transport studies, and slug test analyses. The use of modeling techniques in injection well performance prediction is also discussed. (Contains 250 citations and includes a subject term index and title list.)

  15. ShrinkWrap: 3D model abstraction for remote sensing simulation

    SciTech Connect

    Pope, Paul A

    2009-01-01

    Remote sensing simulations often require the use of 3D models of objects of interest. There are a multitude of these models available from various commercial sources. There are image processing, computational, database storage, and data access advantages to having a regularized, encapsulating, triangular mesh representing the surface of a 3D object model. However, this is usually not how these models are stored. They can have too much detail in some areas and not enough in others. They can have a mix of planar geometric primitives (triangles, quadrilaterals, n-sided polygons) representing not only the surface of the model but also interior features, and the exterior mesh is usually neither regularized nor encapsulating. This paper presents a method called SHRINKWRAP which can be used to process 3D object models to achieve output models having the aforementioned desirable traits. The method works by collapsing an encapsulating sphere, which has a regularized triangular mesh on its surface, onto the surface of the model. A GUI has been developed to make it easy to leverage this capability. The SHRINKWRAP processing chain and use of the GUI are described and illustrated.

  16. Modeling powder encapsulation in dosator-based machines: II. Experimental evaluation.

    PubMed

    Khawam, Ammar; Schultz, Leon

    2011-12-15

    A theoretical model was previously derived to predict powder encapsulation in dosator-based machines, and its theoretical basis was discussed earlier. In this part, the model was evaluated experimentally using two powder formulations with substantially different flow behavior. Encapsulation experiments were performed using a Zanasi encapsulation machine under two sets of experimental conditions. Model-predicted outcomes such as encapsulation fill weight and plug height were compared to those experimentally obtained. Results showed a high correlation between predicted and actual outcomes, demonstrating the model's success in predicting the encapsulation of both formulations. The model is a potentially useful in silico analysis tool for capsule dosage form development in accordance with quality-by-design (QbD) principles.

  17. Abstraction and art.

    PubMed Central

    Gortais, Bernard

    2003-01-01

    In a given social context, artistic creation comprises a set of processes, which relate to the activity of the artist and the activity of the spectator. Through these processes we see and understand that the world is vaster than it is said to be. Artistic processes are mediated experiences that open up the world. A successful work of art expresses a reality beyond actual reality: it suggests an unknown world using the means and the signs of the known world. Artistic practices incorporate the means of creation developed by science and technology and change forms as they change. Artists and the public follow different processes of abstraction at different levels, in the definition of the means of creation, of representation and of perception of a work of art. This paper examines how the processes of abstraction are used within the framework of the visual arts and abstract painting, which appeared during a period of growing importance for the processes of abstraction in science and technology, at the beginning of the twentieth century. The development of digital platforms and new man-machine interfaces allow multimedia creations. This is performed under the constraint of phases of multidisciplinary conceptualization using generic representation languages, which tend to abolish traditional frontiers between the arts: visual arts, drama, dance and music. PMID:12903659

  18. SAINT: A combined simulation language for modeling man-machine systems

    NASA Technical Reports Server (NTRS)

    Seifert, D. J.

    1979-01-01

    SAINT (Systems Analysis of Integrated Networks of Tasks) is a network modeling and simulation technique for the design and analysis of complex man-machine systems. SAINT provides the conceptual framework for representing systems that consist of discrete task elements, continuous state variables, and interactions between them. It also provides a mechanism for combining human performance models and dynamic system behaviors in a single modeling structure. The SAINT technique is described and applications of SAINT are discussed.

  19. Multiscale Modeling and Analysis of an Ultra-Precision Damage Free Machining Method

    NASA Astrophysics Data System (ADS)

    Guan, Chaoliang; Peng, Wenqiang

    2016-06-01

    Ensuring that laser-induced damage of optical elements does not occur under high laser flux is key to the success of a laser fusion ignition system. A US government survey identified processing-induced defects, which lower the laser-induced damage threshold (LIDT), as one of three major challenges. Cracks and scratches caused by brittle and plastic removal machining are fatal flaws. The hydrodynamic effect polishing (HEP) method can produce a damage-free surface on quartz glass. The material removal mechanism of this typical ultra-precision machining process was modeled at multiple scales. At the atomic scale, chemical modeling illustrated the weakening and breaking of chemical bonds. At the particle scale, micro-contact modeling gave the boundary of the elastic removal mode of the materials. At the slurry scale, hydrodynamic flow modeling showed the dynamic pressure and shear stress distributions, which correlate with the machining effect. An experiment was conducted on a numerically controlled system, and one quartz glass optical component was polished in the elastic mode. Results show that damage is removed layer by layer as the removal depth increases, owing to the high damage-free machining capability of HEP, and the LIDT of the sample was greatly improved.

  20. Horizontal-axis washing machines offer large savings: New models entering North American market

    SciTech Connect

    Shepard, M.

    1992-12-31

    Long popular in Europe, new horizontal-axis clothes washers are entering the North American market, creating opportunities for government and utility conservation efforts. Unlike vertical-axis machines, which immerse the clothes in water, horizontal-axis designs use a tumbling action and require far less water, water-heating energy, and detergent. One development in this area is the recent reintroduction by the Frigidaire Company of a full-size, front-load, horizontal-axis washing machine. The new model is an improved version of an earlier design that was discontinued in mid-1991 during changes in manufacturing facilities. It is available under the Sears Kenmore, White-Westinghouse, and Gibson labels. While several European and commercial-grade front-load washers are sold in the US, they are all considerably more expensive than the Frigidaire machine, making it the most efficient clothes washer currently available in a mainstream North American consumer product line.

  1. The Academy for Community College Leadership Advancement, Innovation, and Modeling (ACCLAIM): Abstract.

    ERIC Educational Resources Information Center

    North Carolina State Univ., Raleigh. Academy for Community Coll. Leadership Advancement, Innovation, and Modeling.

    The Academy for Community College Leadership, Innovation, and Modeling (ACCLAIM) is a 3-year pilot project funded by the W. K. Kellogg Foundation, North Carolina State University (NCSU), and the community college systems of Maryland, Virginia, South Carolina, and North Carolina. ACCLAIM's purpose is to help the region's community colleges assume a…

  2. ERGONOMICS ABSTRACTS 48347-48982.

    ERIC Educational Resources Information Center

    Ministry of Technology, London (England). Warren Spring Lab.

    In this collection of ergonomics abstracts and annotations the following areas of concern are represented: general references; methods, facilities, and equipment relating to ergonomics; systems of man and machines; visual, auditory, and other sensory inputs and processes (including speech and intelligibility); input channels; body measurements;…

  3. Modeling and predicting abstract concept or idea introduction and propagation through geopolitical groups

    NASA Astrophysics Data System (ADS)

    Jaenisch, Holger M.; Handley, James W.; Hicklen, Michael L.

    2007-04-01

    This paper describes a novel capability for modeling known idea propagation transformations and predicting responses to new ideas from geopolitical groups. Ideas are captured using semantic words that are text based and bear cognitive definitions. We demonstrate a unique algorithm for converting these into analytical predictive equations. Using the illustrative idea of "proposing a gasoline price increase of $1 per gallon from $2" and its changing perceived impact throughout 5 demographic groups, we identify 13 cost-of-living Diplomatic, Information, Military, and Economic (DIME) features common across all 5 demographic groups. This enables the modeling and monitoring of the Political, Military, Economic, Social, Information, and Infrastructure (PMESII) effects of each group's response to this idea and how their "perception" of this proposal changes. Our algorithm and results are summarized in this paper.

  4. Fractured rock hydrogeology (excluding modeling). (Latest citations from the Selected Water Resources Abstracts database). Published Search

    SciTech Connect

    Not Available

    1992-11-01

    The bibliography contains citations concerning the nature and occurrence of groundwater in fractured crystalline and sedimentary rocks. Techniques for determining connectivity and hydraulic conductivity, pollutant distribution in fractures, and site studies in specific geologic environments are among the topics discussed. Citations pertaining to modeling studies of fractured rock hydrogeology are addressed in a separate bibliography. (Contains a minimum of 54 citations and includes a subject term index and title list.)

  5. Fractured rock hydrogeology (excluding modeling). (Latest citations from the Selected Water Resources abstracts database). Published Search

    SciTech Connect

    Not Available

    1994-01-01

    The bibliography contains citations concerning the nature and occurrence of groundwater in fractured crystalline and sedimentary rocks. Techniques for determining connectivity and hydraulic conductivity, pollutant distribution in fractures, and site studies in specific geologic environments are among the topics discussed. Citations pertaining to modeling studies of fractured rock hydrogeology are addressed in a separate bibliography. (Contains a minimum of 62 citations and includes a subject term index and title list.)

  6. Fault Modeling of Extreme Scale Applications Using Machine Learning

    DOE PAGES

    Vishnu, Abhinav; Dam, Hubertus van; Tallent, Nathan R.; ...

    2016-05-01

    Faults are commonplace in large scale systems. These systems experience a variety of faults such as transient, permanent and intermittent. Multi-bit faults are typically not corrected by the hardware, resulting in an error. This paper attempts to answer an important question: Given a multi-bit fault in main memory, will it result in an application error — and hence a recovery algorithm should be invoked — or can it be safely ignored? We propose an application fault modeling methodology to answer this question. Given a fault signature (a set of attributes comprising system and application state), we use machine learning to create a model which predicts whether a multi-bit permanent/transient main memory fault will likely result in an error. We present the design elements: the fault injection methodology for covering important data structures, the application and system attributes which should be used for learning the model, the supervised learning algorithms (and potentially ensembles), and important metrics. Lastly, we use three applications — NWChem, LULESH and SVM — as examples to demonstrate the effectiveness of the proposed fault modeling methodology.

  7. A tool for urban soundscape evaluation applying Support Vector Machines for developing a soundscape classification model.

    PubMed

    Torija, Antonio J; Ruiz, Diego P; Ramos-Ridao, Angel F

    2014-06-01

    To ensure appropriate soundscape management in urban environments, urban-planning authorities need a range of tools that enable such a task to be performed. An essential step in managing urban areas from a sound standpoint is the evaluation of the soundscape in the area. It has been widely acknowledged that a subjective and acoustical categorization of a soundscape is the first step in evaluating it, providing a basis for designing or adapting it to match people's expectations as well. This work therefore proposes a model for the automatic classification of urban soundscapes based on underlying acoustical and perceptual criteria, intended to serve as a tool for comprehensive urban soundscape evaluation. Because of the great complexity associated with the problem, two machine learning techniques, Support Vector Machines (SVM) and Support Vector Machines trained with Sequential Minimal Optimization (SMO), are implemented in developing the classification model. The results indicate that the SMO model outperforms the SVM model in the specific task of soundscape classification. With the implementation of the SMO algorithm, the classification model achieves an outstanding performance (91.3% of instances correctly classified).
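
    A minimal sketch of this kind of classifier may be useful. It is not the authors' implementation: it assumes scikit-learn, synthetic stand-ins for the acoustical/perceptual feature vectors and soundscape category labels, and uses SVC, whose libsvm backend is itself an SMO-type solver, in place of the two configurations compared in the paper.

      # Minimal sketch (not the authors' code): classify urban soundscapes
      # from acoustical/perceptual feature vectors with a support vector
      # machine. Features and labels below are synthetic stand-ins.
      import numpy as np
      from sklearn.svm import SVC
      from sklearn.model_selection import cross_val_score
      from sklearn.preprocessing import StandardScaler
      from sklearn.pipeline import make_pipeline

      rng = np.random.default_rng(0)
      X = rng.normal(size=(200, 12))    # 12 hypothetical descriptors
      y = rng.integers(0, 4, size=200)  # 4 hypothetical soundscape classes

      # SVC is trained by an SMO-type solver, mirroring the SMO variant
      # that performed best in the study.
      model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
      print("mean CV accuracy: %.3f" % cross_val_score(model, X, y, cv=5).mean())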

  8. The Use of Machine Aids in Dynamic Multi-Task Environments: A Comparison of an Optimal Model to Human Behavior.

    DTIC Science & Technology

    1982-06-01

    Unproductive machines, however, were used far more frequently than indicated by the optimal model. Increasing the cost of using machines was found to have a greater inhibiting effect on their use than did decreasing machine productivity. … The cognitive interface is like the storm front between a warm air mass and a cold air mass.

  9. Learning with Technology: Video Modeling with Concrete-Representational-Abstract Sequencing for Students with Autism Spectrum Disorder.

    PubMed

    Yakubova, Gulnoza; Hughes, Elizabeth M; Shinaberry, Megan

    2016-07-01

    The purpose of this study was to determine the effectiveness of a video modeling intervention with concrete-representational-abstract instructional sequence in teaching mathematics concepts to students with autism spectrum disorder (ASD). A multiple baseline across skills design of single-case experimental methodology was used to determine the effectiveness of the intervention on the acquisition and maintenance of addition, subtraction, and number comparison skills for four elementary school students with ASD. Findings supported the effectiveness of the intervention in improving skill acquisition and maintenance at a 3-week follow-up. Implications for practice and future research are discussed.

  10. A study of sound transmission in an abstract middle ear using physical and finite element models

    PubMed Central

    Gonzalez-Herrera, Antonio; Olson, Elizabeth S.

    2015-01-01

    The classical picture of middle ear (ME) transmission has the tympanic membrane (TM) as a piston and the ME cavity as a vacuum. In reality, the TM moves in a complex multiphasic pattern and substantial pressure is radiated into the ME cavity by the motion of the TM. This study explores ME transmission with a simple model, using a tube terminated with a plastic membrane. Membrane motion was measured with a laser interferometer and pressure on both sides of the membrane with micro-sensors that could be positioned close to the membrane without disturbance. A finite element model of the system explored the experimental results. Both experimental and theoretical results show resonances that are in some cases primarily acoustical or mechanical and sometimes produced by coupled acousto-mechanics. The largest membrane motions were a result of the membrane's mechanical resonances. At these resonant frequencies, sound transmission through the system was larger with the membrane in place than it was when the membrane was absent. PMID:26627771

  11. Modeling Physical Processes at the Nanoscale—Insight into Self-Organization of Small Systems (abstract)

    NASA Astrophysics Data System (ADS)

    Proykova, Ana

    2009-04-01

    Essential contributions have been made in the field of finite-size systems of ingredients interacting with potentials of various ranges. Theoretical simulations have revealed peculiar size effects on stability, ground state structure, phases, and phase transformation of systems confined in space and time. Models developed in the field of pure physics (atomic and molecular clusters) have been extended and successfully transferred to finite-size systems that seem very different—small-scale financial markets, autoimmune reactions, and social group reactions to advertisements. The models show that small-scale markets diverge unexpectedly fast as a result of small fluctuations; autoimmune reactions are sequences of two discontinuous phase transitions; and social groups possess critical behavior (social percolation) under the influence of an external field (advertisement). Some predicted size-dependent properties have been experimentally observed. These findings lead to the hypothesis that restrictions on an object's size determine the object's total internal (configuration) and external (environmental) interactions. Since phases are emergent phenomena produced by self-organization of a large number of particles, the occurrence of a phase in a system containing a small number of ingredients is remarkable.

  12. Derivative Free Optimization of Complex Systems with the Use of Statistical Machine Learning Models

    DTIC Science & Technology

    2015-09-12

    We extensively tested our software on a complex problem of protein alignment in a derivative-free setting; the ridge regression models did not produce a noticeable … (Final performance report AFRL-AFOSR-VA-TR-2015-0278, Katya Scheinberg; period covered 15-08-2011 to 14-08-2014.)

  13. Ghosts in the Machine. Interoceptive Modeling for Chronic Pain Treatment

    PubMed Central

    Di Lernia, Daniele; Serino, Silvia; Cipresso, Pietro; Riva, Giuseppe

    2016-01-01

    Pain is a complex and multidimensional perception, embodied in our daily experiences through interoceptive appraisal processes. The article reviews the recent literature about interoception along with predictive coding theories and tries to explain a missing link between the sense of the physiological condition of the entire body and the perception of pain in chronic conditions, which are characterized by interoceptive deficits. Understanding chronic pain from an interoceptive point of view allows us to better comprehend the multidimensional nature of this specific organic information, integrating the input of several sources from Gifford's Mature Organism Model to Melzack's neuromatrix. The article proposes the concept of residual interoceptive images (ghosts), to explain the diffuse multilevel nature of chronic pain perceptions. Lastly, we introduce a treatment concept, forged upon the possibility to modify the interoceptive chronic representation of pain through external input in a process that we call interoceptive modeling, with the ultimate goal of reducing pain in chronic subjects. PMID:27445681

  14. Analytical Modeling of a Novel Transverse Flux Machine for Direct Drive Wind Turbine Applications

    SciTech Connect

    Hasan, IIftekhar; Husain, Tausif; Uddin, Md Wasi; Sozer, Yilmaz; Husain, Iqbal; Muljadi, Eduard

    2015-09-02

    This paper presents a nonlinear analytical model of a novel double sided flux concentrating Transverse Flux Machine (TFM) based on the Magnetic Equivalent Circuit (MEC) model. The analytical model uses a series-parallel combination of flux tubes to predict the flux paths through different parts of the machine including air gaps, permanent magnets (PM), stator, and rotor. The two-dimensional MEC model approximates the complex three-dimensional flux paths of the TFM and includes the effects of magnetic saturation. The model is capable of adapting to any geometry which makes it a good alternative for evaluating prospective designs of TFM as compared to finite element solvers which are numerically intensive and require more computation time. A single phase, 1 kW, 400 rpm machine is analytically modeled and its resulting flux distribution, no-load EMF and torque, verified with Finite Element Analysis (FEA). The results are found to be in agreement with less than 5% error, while reducing the computation time by 25 times.
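
    The series-parallel flux-tube idea can be sketched compactly. This is a toy illustration, not the paper's model: the geometry, permeabilities and MMF below are invented, and real MEC models additionally make the iron permeability flux-dependent to capture saturation.

      # Toy magnetic equivalent circuit (not the paper's model): flux tubes
      # are reluctances R = l / (mu * A) combined in series and parallel,
      # and the flux follows from flux = MMF / R_total.
      import math

      MU0 = 4e-7 * math.pi

      def reluctance(length_m, area_m2, mu_r=1.0):
          # Reluctance of one flux tube of given length and cross-section.
          return length_m / (mu_r * MU0 * area_m2)

      def series(*rels):
          return sum(rels)

      def parallel(*rels):
          return 1.0 / sum(1.0 / r for r in rels)

      # Hypothetical path: stator core -> air gap -> two parallel rotor paths.
      r_total = series(
          reluctance(0.10, 4e-4, mu_r=2000),            # stator iron
          reluctance(0.001, 4e-4),                      # air gap (dominant)
          parallel(reluctance(0.08, 4e-4, mu_r=2000),
                   reluctance(0.08, 4e-4, mu_r=2000)),  # rotor return paths
      )
      mmf = 1200.0                                      # ampere-turns (invented)
      print("air-gap flux: %.2e Wb" % (mmf / r_total))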

  15. Analytical Modeling of a Novel Transverse Flux Machine for Direct Drive Wind Turbine Applications: Preprint

    SciTech Connect

    Hasan, IIftekhar; Husain, Tausif; Uddin, Md Wasi; Sozer, Yilmaz; Husain, Iqbal; Muljadi, Eduard

    2015-08-24

    This paper presents a nonlinear analytical model of a novel double-sided flux concentrating Transverse Flux Machine (TFM) based on the Magnetic Equivalent Circuit (MEC) model. The analytical model uses a series-parallel combination of flux tubes to predict the flux paths through different parts of the machine including air gaps, permanent magnets, stator, and rotor. The two-dimensional MEC model approximates the complex three-dimensional flux paths of the TFM and includes the effects of magnetic saturation. The model is capable of adapting to any geometry that makes it a good alternative for evaluating prospective designs of TFM compared to finite element solvers that are numerically intensive and require more computation time. A single-phase, 1-kW, 400-rpm machine is analytically modeled, and its resulting flux distribution, no-load EMF, and torque are verified with finite element analysis. The results are found to be in agreement, with less than 5% error, while reducing the computation time by 25 times.

  16. Equational Abstractions

    DTIC Science & Technology

    2007-01-01

  17. A model for a multi-class classification machine

    NASA Astrophysics Data System (ADS)

    Rau, Albrecht; Nadal, Jean-Pierre

    1992-06-01

    We consider the properties of multi-class neural networks, where each neuron can be in several different states. The motivations for considering such systems are manifold. In image processing, for example, the different states correspond to the different grey tone levels. Another multi-class classification task implemented on a feed-forward network is the analysis of DNA sequences or the prediction of the secondary structure of proteins from the sequence of amino acids. To investigate the behaviour of such systems, one specific dynamical rule, the “winner-take-all” rule, is studied. Gauge invariances of the model are analysed. For a multi-class perceptron with N Q-state input neurons and a Q′-state output neuron, the maximal number of patterns that can be stored in the large N limit is found to be proportional to N(Q − 1)f(Q′), where f(Q′) is a slowly increasing and bounded function of order 1.
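
    In LaTeX notation, a hedged restatement of the rule and the capacity result (the symbols are ours, not necessarily the paper's): the output neuron adopts the state with the largest post-synaptic field, and the storage capacity scales as quoted.

      % Winner-take-all rule for a Q'-state output neuron (notation ours):
      \[
        \sigma_{\mathrm{out}} = \operatorname*{arg\,max}_{\tau \in \{1,\dots,Q'\}} h_\tau,
        \qquad
        h_\tau = \sum_{i=1}^{N} w_{i\tau}(S_i), \quad S_i \in \{1,\dots,Q\},
      \]
      % and the maximal number of storable patterns in the large-N limit:
      \[
        p_{\max} \propto N\,(Q-1)\,f(Q'),
      \]
      % with f(Q') a slowly increasing, bounded function of order 1.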

  18. Law machines: scale models, forensic materiality and the making of modern patent law.

    PubMed

    Pottage, Alain

    2011-10-01

    Early US patent law was machine made. Before the Patent Office took on the function of examining patent applications in 1836, questions of novelty and priority were determined in court, within the forum of the infringement action. And at all levels of litigation, from the circuit courts up to the Supreme Court, working models were the media through which doctrine, evidence and argument were made legible, communicated and interpreted. A model could be set on a table, pointed at, picked up, rotated or upended so as to display a point of interest to a particular audience within the courtroom, and, crucially, set in motion to reveal the 'mode of operation' of a machine. The immediate object of demonstration was to distinguish the intangible invention from its tangible embodiment, but models also 'machined' patent law itself. Demonstrations of patent claims with models articulated and resolved a set of conceptual tensions that still make the definition and apprehension of the invention difficult, even today, but they resolved these tensions in the register of materiality, performativity and visibility, rather than the register of conceptuality. The story of models tells us something about how inventions emerge and subsist within the context of patent litigation and patent doctrine, and it offers a starting point for renewed reflection on the question of how technology becomes property.

  19. Using Machine Learning to Create Turbine Performance Models (Presentation)

    SciTech Connect

    Clifton, A.

    2013-04-01

    Wind turbine power output is known to be a strong function of wind speed, but is also affected by turbulence and shear. In this work, new aerostructural simulations of a generic 1.5 MW turbine are used to explore atmospheric influences on power output. Most significant is the hub height wind speed, followed by hub height turbulence intensity and then wind speed shear across the rotor disk. These simulation data are used to train regression trees that predict the turbine response for any combination of wind speed, turbulence intensity, and wind shear that might be expected at a turbine site. For a randomly selected atmospheric condition, the accuracy of the regression tree power predictions is three times higher than that of the traditional power curve methodology. The regression tree method can also be applied to turbine test data and used to predict turbine performance at a new site. No new data is required in comparison to the data that are usually collected for a wind resource assessment. Implementing the method requires turbine manufacturers to create a turbine regression tree model from test site data. Such an approach could significantly reduce bias in power predictions that arise because of different turbulence and shear at the new site, compared to the test site.
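
    The regression-tree step can be sketched as follows. This is not NREL's code: it assumes scikit-learn and a synthetic stand-in for the aerostructural simulation data, with hub-height wind speed, turbulence intensity and shear as the predictors named in the abstract.

      # Minimal sketch: a regression tree predicting power from wind speed,
      # turbulence intensity and shear, instead of a wind-speed-only curve.
      import numpy as np
      from sklearn.tree import DecisionTreeRegressor
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(1)
      n = 5000
      ws = rng.uniform(3, 25, n)          # hub-height wind speed [m/s]
      ti = rng.uniform(0.05, 0.25, n)     # turbulence intensity [-]
      shear = rng.uniform(0.0, 0.4, n)    # shear exponent [-]
      # Toy response: cubic below rated power, flat above, degraded by TI.
      power = np.clip(0.5 * ws**3, 0, 1500) * (1 - 0.8 * ti) \
              + rng.normal(0, 20, n)

      X = np.column_stack([ws, ti, shear])
      X_tr, X_te, y_tr, y_te = train_test_split(X, power, random_state=0)
      tree = DecisionTreeRegressor(min_samples_leaf=20).fit(X_tr, y_tr)
      print("held-out R^2: %.3f" % tree.score(X_te, y_te))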

  20. A paradigm for data-driven predictive modeling using field inversion and machine learning

    NASA Astrophysics Data System (ADS)

    Parish, Eric J.; Duraisamy, Karthik

    2016-01-01

    We propose a modeling paradigm, termed field inversion and machine learning (FIML), that seeks to comprehensively harness data from sources such as high-fidelity simulations and experiments to aid the creation of improved closure models for computational physics applications. In contrast to inferring model parameters, this work uses inverse modeling to obtain corrective, spatially distributed functional terms, offering a route to directly address model-form errors. Once the inference has been performed over a number of problems that are representative of the deficient physics in the closure model, machine learning techniques are used to reconstruct the model corrections in terms of variables that appear in the closure model. These reconstructed functional forms are then used to augment the closure model in a predictive computational setting. As a first demonstrative example, a scalar ordinary differential equation is considered, wherein the model equation has missing and deficient terms. Following this, the methodology is extended to the prediction of turbulent channel flow. In both of these applications, the approach is demonstrated to be able to successfully reconstruct functional corrections and yield accurate predictive solutions while providing a measure of model form uncertainties.

  1. River Flow Forecasting: a Hybrid Model of Self Organizing Maps and Least Square Support Vector Machine

    NASA Astrophysics Data System (ADS)

    Ismail, S.; Samsudin, R.; Shabri, A.

    2010-10-01

    Successful river flow time series forecasting is a major goal and an essential procedure in water resources planning and management. This study introduced a new hybrid model based on a combination of two familiar non-linear methods of mathematical modeling, the Self Organizing Map (SOM) and the Least Square Support Vector Machine (LSSVM), referred to as the SOM-LSSVM model. The hybrid model uses the SOM algorithm to cluster the training data into several disjoint clusters, and an individual LSSVM is then used to forecast the river flow. The feasibility of the proposed model is evaluated on actual river flow data from the Bernam River, located in Selangor, Malaysia. The results have been compared to those obtained using LSSVM and artificial neural network (ANN) models. The experimental results show that the SOM-LSSVM model outperforms the other models for forecasting river flow, indicating that the proposed model forecasts more precisely and provides a promising alternative technique for river flow forecasting.
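
    The cluster-then-regress structure can be sketched briefly. This is not the authors' code: KMeans stands in for the SOM clustering stage, scikit-learn's SVR (RBF kernel) stands in for the LSSVM, and the flow series and lagged-flow inputs are synthetic.

      # Minimal sketch of a hybrid cluster+regression forecaster:
      # cluster the training patterns, then fit one regressor per cluster.
      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.svm import SVR

      rng = np.random.default_rng(2)
      q = np.abs(np.cumsum(rng.normal(size=1000))) + 5.0  # synthetic flow
      X = np.column_stack([q[2:-1], q[1:-2], q[0:-3]])    # lags 1..3
      y = q[3:]

      km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
      models = {c: SVR(kernel="rbf", C=10.0)
                     .fit(X[km.labels_ == c], y[km.labels_ == c])
                for c in range(3)}

      x_new = X[-1:]
      c = km.predict(x_new)[0]   # route the new pattern to its cluster
      print("next-day flow forecast: %.2f" % models[c].predict(x_new)[0])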

  2. Integration of Error Compensation of Coordinate Measuring Machines into Feature Measurement: Part I-Model Development.

    PubMed

    Calvo, Roque; D'Amato, Roberto; Gómez, Emilio; Domingo, Rosario

    2016-09-29

    The development of an error compensation model for coordinate measuring machines (CMMs) and its integration into feature measurement is presented. CMMs are widespread and dependable instruments in industry and laboratories for dimensional measurement. From the tip probe sensor to the machine display, there is a complex transformation of probed point coordinates through the geometrical feature model that makes the assessment of accuracy and uncertainty measurement results difficult. Therefore, error compensation is not standardized, conversely to other simpler instruments. Detailed coordinate error compensation models are generally based on CMM as a rigid-body and it requires a detailed mapping of the CMM's behavior. In this paper a new model type of error compensation is proposed. It evaluates the error from the vectorial composition of length error by axis and its integration into the geometrical measurement model. The non-explained variability by the model is incorporated into the uncertainty budget. Model parameters are analyzed and linked to the geometrical errors and uncertainty of CMM response. Next, the outstanding measurement models of flatness, angle, and roundness are developed. The proposed models are useful for measurement improvement with easy integration into CMM signal processing, in particular in industrial environments where built-in solutions are sought. A battery of implementation tests are presented in Part II, where the experimental endorsement of the model is included.
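
    As a hedged illustration of the "vectorial composition of length error by axis" (our notation, simplified relative to the paper's model): each axis contributes a length-dependent error term, the terms compose vectorially, and the variability the model does not explain is carried into the uncertainty budget.

      % Simplified per-axis error model and vectorial composition:
      \[
        e_k = a_k + b_k L_k, \quad k \in \{x, y, z\},
        \qquad
        e(L) = \sqrt{e_x^2 + e_y^2 + e_z^2},
      \]
      % where L_k are the axis displacements of the measured length and
      % a_k, b_k are calibration parameters estimated per machine.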

  3. Integration of Error Compensation of Coordinate Measuring Machines into Feature Measurement: Part I—Model Development

    PubMed Central

    Calvo, Roque; D’Amato, Roberto; Gómez, Emilio; Domingo, Rosario

    2016-01-01

    The development of an error compensation model for coordinate measuring machines (CMMs) and its integration into feature measurement is presented. CMMs are widespread and dependable instruments in industry and laboratories for dimensional measurement. From the tip probe sensor to the machine display, there is a complex transformation of probed point coordinates through the geometrical feature model that makes the assessment of accuracy and uncertainty measurement results difficult. Therefore, error compensation is not standardized, conversely to other simpler instruments. Detailed coordinate error compensation models are generally based on CMM as a rigid-body and it requires a detailed mapping of the CMM’s behavior. In this paper a new model type of error compensation is proposed. It evaluates the error from the vectorial composition of length error by axis and its integration into the geometrical measurement model. The non-explained variability by the model is incorporated into the uncertainty budget. Model parameters are analyzed and linked to the geometrical errors and uncertainty of CMM response. Next, the outstanding measurement models of flatness, angle, and roundness are developed. The proposed models are useful for measurement improvement with easy integration into CMM signal processing, in particular in industrial environments where built-in solutions are sought. A battery of implementation tests are presented in Part II, where the experimental endorsement of the model is included. PMID:27690052

  4. A Model-Free Machine Learning Method for Risk Classification and Survival Probability Prediction.

    PubMed

    Geng, Yuan; Lu, Wenbin; Zhang, Hao Helen

    2014-01-01

    Risk classification and survival probability prediction are two major goals in survival data analysis since they play an important role in patients' risk stratification, long-term diagnosis, and treatment selection. In this article, we propose a new model-free machine learning framework for risk classification and survival probability prediction based on weighted support vector machines. The new procedure does not require any specific parametric or semiparametric model assumption on the data, and is therefore capable of capturing nonlinear covariate effects. We use numerous simulation examples to demonstrate the finite sample performance of the proposed method under various settings. Applications to glioma tumor data and breast cancer gene expression survival data illustrate the new methodology in real data analysis.

  5. A mathematical model of the controlled axial flow divider for mobile machines

    NASA Astrophysics Data System (ADS)

    Mulyukin, V. L.; Karelin, D. L.; Belousov, A. M.

    2016-06-01

    The authors present a mathematical model of the axial adjustable flow divider that allows one to define the parameters of the feed pump and the hydraulic motor-wheels in the multi-circuit hydrostatic transmission of mobile machines. A worked example illustrates built-in features of the model that allow the mutual influence of the pressure and flow values on all input and output circuits of the system to be clearly evaluated.

  6. Applications of hand-arm models in the investigation of the interaction between man and machine.

    PubMed

    Jahn, R; Hesse, M

    1986-08-01

    The mode of vibration of hand-held tools cannot be considered without knowledge of the influence of the operator's hand-arm system. Therefore some technical applications of hand-arm models were realized for drill hammers by the University of Dortmund. These applications are a software program to simulate the motion of machine components, a horizontal drilling jig, and a chucking device in a drilling rig.

  7. The modified nodal analysis method applied to the modeling of the thermal circuit of an asynchronous machine

    NASA Astrophysics Data System (ADS)

    Nedelcu, O.; Salisteanu, C. I.; Popa, F.; Salisteanu, B.; Oprescu, C. V.; Dogaru, V.

    2017-01-01

    The complexity of the electrical circuits, or of the equivalent thermal circuits, to be analyzed and solved requires careful consideration of the solving method, since the chosen method determines the amount of calculation required. The heating and ventilation systems of electrical machines that have to be modeled result in complex equivalent electrical circuits of large dimensions, which requires the use of the most efficient methods for solving them. The purpose of the thermal calculation of electrical machines is to establish the heating, i.e., the overruns of temperature (over-temperatures) in some parts of the machine compared to the ambient temperature, in a given operating mode of the machine. The paper presents the application of the modified nodal analysis method to the modeling of the thermal circuit of an asynchronous machine.

  8. Hypoglycemia prediction using machine learning models for patients with type 2 diabetes.

    PubMed

    Sudharsan, Bharath; Peeples, Malinda; Shomali, Mansur

    2015-01-01

    Minimizing the occurrence of hypoglycemia in patients with type 2 diabetes is a challenging task since these patients typically check only 1 to 2 self-monitored blood glucose (SMBG) readings per day. We trained a probabilistic model using machine learning algorithms and SMBG values from real patients. Hypoglycemia was defined as an SMBG value < 70 mg/dL. We validated our model using multiple data sets. In addition, we trained a second model, which used patient SMBG values and information about patient medication administration. The optimal number of SMBG values needed by the model was approximately 10 per week. The sensitivity of the model for predicting a hypoglycemia event in the next 24 hours was 92% and the specificity was 70%. In the model that incorporated medication information, the prediction window was for the hour of hypoglycemia, and the specificity improved to 90%. Our machine learning models can predict hypoglycemia events with a high degree of sensitivity and specificity. These models, which have been validated retrospectively, could be useful tools for reducing hypoglycemia in vulnerable patients if implemented in real time.
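
    A minimal sketch of such a probabilistic predictor (not the published model; scikit-learn and synthetic SMBG summary features are assumed):

      # Toy hypoglycemia predictor: logistic regression on summary features
      # of ~10 recent SMBG readings; label = any reading < 70 mg/dL next day.
      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(3)
      n = 2000
      mean_bg = rng.normal(140, 30, n)                  # mean SMBG [mg/dL]
      min_bg = mean_bg - np.abs(rng.normal(40, 15, n))  # min SMBG [mg/dL]
      std_bg = np.abs(rng.normal(25, 10, n))            # SMBG variability
      # Toy label: low and variable glucose raises hypoglycemia risk.
      p = 1 / (1 + np.exp((min_bg - 80) / 10 - 0.02 * std_bg))
      y = rng.random(n) < p

      X = np.column_stack([mean_bg, min_bg, std_bg])
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
      clf = LogisticRegression().fit(X_tr, y_tr)
      print("P(hypoglycemia, next 24 h): %.2f" % clf.predict_proba(X_te[:1])[0, 1])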

  9. Machine learning models for lung cancer classification using array comparative genomic hybridization.

    PubMed

    Aliferis, C F; Hardin, D; Massion, P P

    2002-01-01

    Array CGH is a recently introduced technology that measures changes in the gene copy number of hundreds of genes in a single experiment. The primary goal of this study was to develop machine learning models that classify non-small cell lung cancers according to histopathology type and to compare several machine learning methods on this learning task. DNA from the tumors of 37 patients (21 squamous carcinomas and 16 adenocarcinomas) was extracted and hybridized onto a 452-BAC-clone array. The following algorithms were used: KNN, decision tree induction, support vector machines and feed-forward neural networks. Performance was measured via leave-one-out classification accuracy. The best multi-gene model found had a leave-one-out accuracy of 89.2%. Decision trees performed more poorly than the other methods on this learning task and dataset. We conclude that gene copy numbers as measured by array CGH are, collectively, an excellent indicator of histological subtype. Several interesting research directions are discussed.

  10. Bayesian reliability modeling and assessment solution for NC machine tools under small-sample data

    NASA Astrophysics Data System (ADS)

    Yang, Zhaojun; Kan, Yingnan; Chen, Fei; Xu, Binbin; Chen, Chuanhai; Yang, Chuangui

    2015-11-01

    Although Markov chain Monte Carlo (MCMC) algorithms are accurate, many factors may cause instability when they are utilized in reliability analysis; such instability makes these algorithms unsuitable for widespread engineering applications. Thus, a reliability modeling and assessment solution aimed at small-sample data of numerical control (NC) machine tools is proposed on the basis of Bayes theories. An expert-judgment process of fusing multi-source prior information is developed to obtain the Weibull parameters' prior distributions and reduce the subjective bias of usual expert-judgment methods. The grid approximation method is applied to the two-parameter Weibull distribution to derive the formulas for the parameters' posterior distributions and overcome the calculation difficulty of high-dimensional integration. The method is then applied to real data from a type of NC machine tool to implement a reliability assessment and obtain the mean time between failures (MTBF). The relative error of the proposed method is 5.8020×10⁻⁴ compared with the MTBF obtained by the MCMC algorithm, indicating that the proposed method is as accurate as MCMC. The newly developed solution for reliability modeling and assessment of NC machine tools under small-sample data is easy, practical, and highly suitable for widespread application in the engineering field; in addition, the solution does not reduce accuracy.
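
    The grid-approximation step lends itself to a compact sketch. The failure times and the flat prior below are invented; the abstract's expert-judgment prior fusion is not reproduced here.

      # Grid approximation for a two-parameter Weibull posterior:
      # posterior ~ prior * likelihood on a (shape, scale) grid, then
      # MTBF = eta * Gamma(1 + 1/beta) averaged over the posterior.
      import numpy as np
      from scipy.special import gamma as gamma_fn
      from scipy.stats import weibull_min

      t = np.array([120.0, 340.0, 90.0, 410.0, 260.0, 180.0])  # hours (toy)

      beta = np.linspace(0.5, 3.0, 200)   # shape grid
      eta = np.linspace(50, 800, 300)     # scale grid
      B, E = np.meshgrid(beta, eta, indexing="ij")

      loglik = sum(weibull_min.logpdf(ti, c=B, scale=E) for ti in t)
      post = np.exp(loglik - loglik.max())  # flat prior assumed
      post /= post.sum()

      mtbf = (post * E * gamma_fn(1.0 + 1.0 / B)).sum()
      print("posterior-mean MTBF: %.1f h" % mtbf)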

  11. Application of Machine-Learning Models to Predict Tacrolimus Stable Dose in Renal Transplant Recipients

    PubMed Central

    Tang, Jie; Liu, Rong; Zhang, Yue-Li; Liu, Mou-Ze; Hu, Yong-Fang; Shao, Ming-Jie; Zhu, Li-Jun; Xin, Hua-Wen; Feng, Gui-Wen; Shang, Wen-Jun; Meng, Xiang-Guang; Zhang, Li-Rong; Ming, Ying-Zi; Zhang, Wei

    2017-01-01

    Tacrolimus has a narrow therapeutic window and considerable variability in clinical use. Our goal was to compare the performance of multiple linear regression (MLR) and eight machine learning techniques in pharmacogenetic algorithm-based prediction of tacrolimus stable dose (TSD) in a large Chinese cohort. A total of 1,045 renal transplant patients were recruited, 80% of which were randomly selected as the “derivation cohort” to develop dose-prediction algorithm, while the remaining 20% constituted the “validation cohort” to test the final selected algorithm. MLR, artificial neural network (ANN), regression tree (RT), multivariate adaptive regression splines (MARS), boosted regression tree (BRT), support vector regression (SVR), random forest regression (RFR), lasso regression (LAR) and Bayesian additive regression trees (BART) were applied and their performances were compared in this work. Among all the machine learning models, RT performed best in both derivation [0.71 (0.67–0.76)] and validation cohorts [0.73 (0.63–0.82)]. In addition, the ideal rate of RT was 4% higher than that of MLR. To our knowledge, this is the first study to use machine learning models to predict TSD, which will further facilitate personalized medicine in tacrolimus administration in the future. PMID:28176850

  12. Application of Machine-Learning Models to Predict Tacrolimus Stable Dose in Renal Transplant Recipients.

    PubMed

    Tang, Jie; Liu, Rong; Zhang, Yue-Li; Liu, Mou-Ze; Hu, Yong-Fang; Shao, Ming-Jie; Zhu, Li-Jun; Xin, Hua-Wen; Feng, Gui-Wen; Shang, Wen-Jun; Meng, Xiang-Guang; Zhang, Li-Rong; Ming, Ying-Zi; Zhang, Wei

    2017-02-08

    Tacrolimus has a narrow therapeutic window and considerable variability in clinical use. Our goal was to compare the performance of multiple linear regression (MLR) and eight machine learning techniques in pharmacogenetic algorithm-based prediction of tacrolimus stable dose (TSD) in a large Chinese cohort. A total of 1,045 renal transplant patients were recruited, 80% of which were randomly selected as the "derivation cohort" to develop dose-prediction algorithm, while the remaining 20% constituted the "validation cohort" to test the final selected algorithm. MLR, artificial neural network (ANN), regression tree (RT), multivariate adaptive regression splines (MARS), boosted regression tree (BRT), support vector regression (SVR), random forest regression (RFR), lasso regression (LAR) and Bayesian additive regression trees (BART) were applied and their performances were compared in this work. Among all the machine learning models, RT performed best in both derivation [0.71 (0.67-0.76)] and validation cohorts [0.73 (0.63-0.82)]. In addition, the ideal rate of RT was 4% higher than that of MLR. To our knowledge, this is the first study to use machine learning models to predict TSD, which will further facilitate personalized medicine in tacrolimus administration in the future.

  13. Application of Machine-Learning Models to Predict Tacrolimus Stable Dose in Renal Transplant Recipients

    NASA Astrophysics Data System (ADS)

    Tang, Jie; Liu, Rong; Zhang, Yue-Li; Liu, Mou-Ze; Hu, Yong-Fang; Shao, Ming-Jie; Zhu, Li-Jun; Xin, Hua-Wen; Feng, Gui-Wen; Shang, Wen-Jun; Meng, Xiang-Guang; Zhang, Li-Rong; Ming, Ying-Zi; Zhang, Wei

    2017-02-01

    Tacrolimus has a narrow therapeutic window and considerable variability in clinical use. Our goal was to compare the performance of multiple linear regression (MLR) and eight machine learning techniques in pharmacogenetic algorithm-based prediction of tacrolimus stable dose (TSD) in a large Chinese cohort. A total of 1,045 renal transplant patients were recruited, 80% of which were randomly selected as the “derivation cohort” to develop dose-prediction algorithm, while the remaining 20% constituted the “validation cohort” to test the final selected algorithm. MLR, artificial neural network (ANN), regression tree (RT), multivariate adaptive regression splines (MARS), boosted regression tree (BRT), support vector regression (SVR), random forest regression (RFR), lasso regression (LAR) and Bayesian additive regression trees (BART) were applied and their performances were compared in this work. Among all the machine learning models, RT performed best in both derivation [0.71 (0.67–0.76)] and validation cohorts [0.73 (0.63–0.82)]. In addition, the ideal rate of RT was 4% higher than that of MLR. To our knowledge, this is the first study to use machine learning models to predict TSD, which will further facilitate personalized medicine in tacrolimus administration in the future.

  14. Physics-informed machine learning approach for reconstructing Reynolds stress modeling discrepancies based on DNS data

    NASA Astrophysics Data System (ADS)

    Wang, Jian-Xun; Wu, Jin-Long; Xiao, Heng

    2017-03-01

    Turbulence modeling is a critical component in numerical simulations of industrial flows based on Reynolds-averaged Navier-Stokes (RANS) equations. However, after decades of efforts in the turbulence modeling community, universally applicable RANS models with predictive capabilities are still lacking. Large discrepancies in the RANS-modeled Reynolds stresses are the main source that limits the predictive accuracy of RANS models. Identifying these discrepancies is of significance to possibly improve the RANS modeling. In this work, we propose a data-driven, physics-informed machine learning approach for reconstructing discrepancies in RANS modeled Reynolds stresses. The discrepancies are formulated as functions of the mean flow features. By using a modern machine learning technique based on random forests, the discrepancy functions are trained by existing direct numerical simulation (DNS) databases and then used to predict Reynolds stress discrepancies in different flows where data are not available. The proposed method is evaluated by two classes of flows: (1) fully developed turbulent flows in a square duct at various Reynolds numbers and (2) flows with massive separations. In separated flows, two training flow scenarios of increasing difficulties are considered: (1) the flow in the same periodic hills geometry yet at a lower Reynolds number and (2) the flow in a different hill geometry with a similar recirculation zone. Excellent predictive performances were observed in both scenarios, demonstrating the merits of the proposed method.
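
    The discrepancy-learning step can be sketched in a few lines. This is not the authors' implementation: the mean-flow features, the discrepancy field and the baseline stresses are synthetic stand-ins, and scikit-learn's random forest replaces whatever configuration the paper used.

      # Train a random forest mapping mean-flow features to the DNS-vs-RANS
      # Reynolds-stress discrepancy, then correct a new baseline prediction.
      import numpy as np
      from sklearn.ensemble import RandomForestRegressor

      rng = np.random.default_rng(4)
      feats = rng.normal(size=(3000, 5))     # e.g. strain/rotation invariants
      delta_tau = (0.3 * feats[:, 0] - 0.1 * feats[:, 1]**2
                   + rng.normal(0, 0.02, 3000))   # toy discrepancy field

      rf = RandomForestRegressor(n_estimators=200, random_state=0)
      rf.fit(feats, delta_tau)

      tau_rans = rng.normal(size=100)        # baseline RANS stresses (toy)
      feats_new = rng.normal(size=(100, 5))
      tau_corrected = tau_rans + rf.predict(feats_new)
      print("mean correction: %.3f" % rf.predict(feats_new).mean())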

  15. A Physics-Informed Machine Learning Framework for RANS-based Predictive Turbulence Modeling

    NASA Astrophysics Data System (ADS)

    Xiao, Heng; Wu, Jinlong; Wang, Jianxun; Ling, Julia

    2016-11-01

    Numerical models based on the Reynolds-averaged Navier-Stokes (RANS) equations are widely used in turbulent flow simulations in support of engineering design and optimization. In these models, turbulence modeling introduces significant uncertainties in the predictions. In light of the decades-long stagnation encountered by the traditional approach of turbulence model development, data-driven methods have been proposed as a promising alternative. We will present a data-driven, physics-informed machine-learning framework for predictive turbulence modeling based on RANS models. The framework consists of three components: (1) prediction of discrepancies in RANS modeled Reynolds stresses based on machine learning algorithms, (2) propagation of improved Reynolds stresses to quantities of interests with a modified RANS solver, and (3) quantitative, a priori assessment of predictive confidence based on distance metrics in the mean flow feature space. Merits of the proposed framework are demonstrated in a class of flows featuring massive separations. Significant improvements over the baseline RANS predictions are observed. The favorable results suggest that the proposed framework is a promising path toward RANS-based predictive turbulence in the era of big data. (SAND2016-7435 A).

  16. Machine learning methods enable predictive modeling of antibody feature:function relationships in RV144 vaccinees.

    PubMed

    Choi, Ickwon; Chung, Amy W; Suscovich, Todd J; Rerks-Ngarm, Supachai; Pitisuttithum, Punnee; Nitayaphan, Sorachai; Kaewkungwal, Jaranit; O'Connell, Robert J; Francis, Donald; Robb, Merlin L; Michael, Nelson L; Kim, Jerome H; Alter, Galit; Ackerman, Margaret E; Bailey-Kellogg, Chris

    2015-04-01

    The adaptive immune response to vaccination or infection can lead to the production of specific antibodies to neutralize the pathogen or recruit innate immune effector cells for help. The non-neutralizing role of antibodies in stimulating effector cell responses may have been a key mechanism of the protection observed in the RV144 HIV vaccine trial. In an extensive investigation of a rich set of data collected from RV144 vaccine recipients, we here employ machine learning methods to identify and model associations between antibody features (IgG subclass and antigen specificity) and effector function activities (antibody dependent cellular phagocytosis, cellular cytotoxicity, and cytokine release). We demonstrate via cross-validation that classification and regression approaches can effectively use the antibody features to robustly predict qualitative and quantitative functional outcomes. This integration of antibody feature and function data within a machine learning framework provides a new, objective approach to discovering and assessing multivariate immune correlates.

  17. Experimental study on light induced influence model to mice using support vector machine

    NASA Astrophysics Data System (ADS)

    Ji, Lei; Zhao, Zhimin; Yu, Yinshan; Zhu, Xingyue

    2014-08-01

    Previous researchers have studied the different influences of light irradiation on animals, including retinal damage, changes in internal indices, and so on. However, a model of light-induced damage to animals that uses physiological indicators as features in a machine learning method has not previously been established. This study was designed to evaluate the changes in microvascular diameter, serum absorption spectrum and blood flow under light irradiation of different wavelengths, powers and exposure times with a support vector machine (SVM). Micro-images of the mice's auricles were recorded and the vessel diameters were calculated by a computer program. The serum absorption spectra were analyzed. The results show that training sample rates of 20% and 50% yield almost the same correct recognition rate. Better performance and accuracy were achieved by the third-order polynomial kernel SVM with the quadratic optimization method, which worked suitably for predicting light-induced damage to organisms.

  18. Use of different sampling schemes in machine learning-based prediction of hydrological models' uncertainty

    NASA Astrophysics Data System (ADS)

    Kayastha, Nagendra; Solomatine, Dimitri; Lal Shrestha, Durga; van Griensven, Ann

    2013-04-01

    In recent years, a lot of attention in the hydrologic literature has been given to model parameter uncertainty analysis. The robustness of uncertainty estimation depends on the efficiency of the sampling method used to generate the best-fit responses (outputs) and on ease of use. This paper aims to investigate: (1) how sampling strategies affect the uncertainty estimations of hydrological models, (2) how to use this information in machine learning predictors of model uncertainty. Sampling of parameters may employ various algorithms. We compared seven different algorithms, namely Monte Carlo (MC) simulation, generalized likelihood uncertainty estimation (GLUE), Markov chain Monte Carlo (MCMC), the shuffled complex evolution metropolis algorithm (SCEMUA), differential evolution adaptive metropolis (DREAM), particle swarm optimization (PSO) and adaptive cluster covering (ACCO) [1]. These methods were applied to estimate the uncertainty of streamflow simulation using the conceptual model HBV and the semi-distributed hydrological model SWAT. The Nzoia catchment in West Kenya is considered as the case study. The results are compared and analysed based on the shape of the posterior distribution of parameters and the uncertainty results on model outputs. The MLUE method [2] uses the results of Monte Carlo sampling (or any other sampling scheme) to build a machine learning (regression) model U able to predict the uncertainty (quantiles of the pdf) of a hydrological model H's outputs. Inputs to these models are specially identified representative variables (past events' precipitation and flows). The trained machine learning models are then employed to predict the model output uncertainty which is specific for the new input data. The problem here is that different sampling algorithms result in different data sets used to train such a model U, which leads to several models (and there is no clear evidence which model is the best since there is no basis for comparison). A solution could be to form a committee of all models U and

  19. Transient modeling and parameter identification based on wavelet and correlation filtering for rotating machine fault diagnosis

    NASA Astrophysics Data System (ADS)

    Wang, Shibin; Huang, Weiguo; Zhu, Z. K.

    2011-05-01

    At constant rotating speed, localized faults in rotating machines tend to result in periodic shocks and thus arouse periodic transients in the vibration signal. Transient feature analysis has always been a crucial problem for localized fault detection, and its key aim is to identify the transient model and its parameters (frequency, damping ratio and time index), as well as the time interval, i.e. period, between transients. Based on wavelet and correlation filtering, a technique incorporating transient modeling and parameter identification is proposed for rotating machine fault feature detection. With the proposed method, both the parameters of a single transient and the period between transients can be identified from the vibration signal, and localized faults can be detected based on these parameters, especially the period. First, a simulation signal is used to test the performance of the proposed method. Then the method is applied to the vibration signals of different types of bearings with localized faults in the outer race, the inner race and the rolling element, respectively, and all the results show that the period between transients, representing the localized fault characteristic, is successfully detected. The method is also utilized in gearbox fault diagnosis and its effectiveness is verified through identifying the parameters of the transient model and the period. Moreover, it can be concluded that for bearing fault detection the single-side wavelet model is more suitable than the double-side one, while the double-side model is more suitable for gearbox fault detection. This research thus provides an effective method of localized fault detection for rotating machine fault diagnosis through transient modeling and parameter identification.
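
    In LaTeX notation, a hedged restatement of the single-side transient model and the correlation-filtering parameter choice (symbols ours): the transient is a one-sided damped sinusoid, and its parameters are those maximizing the normalized correlation with the measured signal x.

      % One-sided damped-sinusoid transient atom:
      \[
        \psi_{f,\zeta,\tau}(t) =
          e^{-\zeta\,2\pi f\,(t-\tau)}\,
          \sin\!\bigl(2\pi f\,(t-\tau)\bigr)\,\mathbf{1}_{\{t \ge \tau\}},
      \]
      % parameters identified by correlation filtering:
      \[
        (\hat{f},\hat{\zeta},\hat{\tau}) =
          \operatorname*{arg\,max}_{f,\zeta,\tau}
          \frac{\langle x,\psi_{f,\zeta,\tau}\rangle}
               {\lVert x\rVert\,\lVert\psi_{f,\zeta,\tau}\rVert},
      \]
      % with the fault period estimated from the spacing between
      % successive values of \hat{\tau}.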

  20. A Critical Review for Developing Accurate and Dynamic Predictive Models Using Machine Learning Methods in Medicine and Health Care.

    PubMed

    Alanazi, Hamdan O; Abdullah, Abdul Hanan; Qureshi, Kashif Naseer

    2017-04-01

    Recently, Artificial Intelligence (AI) has been used widely in the medicine and health care sector. In machine learning, classification or prediction is a major field of AI. Today, the study of existing predictive models based on machine learning methods is extremely active. Doctors need accurate predictions of the outcomes of their patients' diseases. In addition, for accurate predictions, timing is another significant factor that influences treatment decisions. In this paper, existing predictive models in medicine and health care are critically reviewed. Furthermore, the most prominent machine learning methods are explained, and the confusion between statistical approaches and machine learning is clarified. A review of the related literature reveals that the predictions of existing predictive models differ even when the same dataset is used. Therefore, existing predictive models are essential, and current methods must be improved.

  1. EBS Radionuclide Transport Abstraction

    SciTech Connect

    R. Schreiner

    2001-06-27

    The purpose of this work is to develop the Engineered Barrier System (EBS) radionuclide transport abstraction model, as directed by a written development plan (CRWMS M&O 1999a). This abstraction is the conceptual model that will be used to determine the rate of release of radionuclides from the EBS to the unsaturated zone (UZ) in the total system performance assessment-license application (TSPA-LA). In particular, this model will be used to quantify the time-dependent radionuclide releases from a failed waste package (WP) and their subsequent transport through the EBS to the emplacement drift wall/UZ interface. The development of this conceptual model will allow Performance Assessment Operations (PAO) and its Engineered Barrier Performance Department to provide a more detailed and complete EBS flow and transport abstraction. The results from this conceptual model will allow PAO to address portions of the key technical issues (KTIs) presented in three NRC Issue Resolution Status Reports (IRSRs): (1) the Evolution of the Near-Field Environment (ENFE), Revision 2 (NRC 1999a), (2) the Container Life and Source Term (CLST), Revision 2 (NRC 1999b), and (3) the Thermal Effects on Flow (TEF), Revision 1 (NRC 1998). The conceptual model for flow and transport in the EBS will be referred to as the "EBS RT Abstraction" in this analysis/modeling report (AMR). The scope of this abstraction and report is limited to flow and transport processes. More specifically, this AMR does not discuss elements of the TSPA-SR and TSPA-LA that relate to the EBS but are discussed in other AMRs. These elements include corrosion processes, radionuclide solubility limits, waste form dissolution rates, and concentrations of colloidal particles that are generally represented as boundary conditions or input parameters for the EBS RT Abstraction. In effect, this AMR provides the algorithms for transporting radionuclides using the flow geometry and radionuclide concentrations determined by other

  2. Ascertaining Validity in the Abstract Realm of PMESII Simulation Models: An Analysis of the Peace Support Operations Model (PSOM)

    DTIC Science & Technology

    2009-06-01

    … the situation we wish to model (Perla, 1990, p. 276). This problem is amplified when attempting to model irregular warfare. … decisions made during the course of those events by players representing opposing sides (Perla, 1990, p. 274). PSOM is a campaign-level … exploration of human decision processes in the context of military action (Perla, 1990, p. 261).

  3. Uncertainty "escalation" and use of machine learning to forecast residual and data model uncertainties

    NASA Astrophysics Data System (ADS)

    Solomatine, Dimitri

    2016-04-01

    When speaking about model uncertainty, many authors implicitly mean data uncertainty (mainly in parameters or inputs), which is described probabilistically by distributions. Often, however, one should look at the residual uncertainty as well. It is hence reasonable to classify the main approaches to uncertainty analysis with respect to the two main types of model uncertainty that can be distinguished: A. The residual uncertainty of models. In this case the model parameters and/or model inputs are considered to be fixed (deterministic), i.e. the model is considered to be optimal (calibrated) and deterministic. Model error is considered as the manifestation of uncertainty. If there is enough past data about the model errors (i.e. its uncertainty), it is possible to build a statistical or machine learning model of uncertainty trained on this data. The following methods can be mentioned: (a) the quantile regression (QR) method of Koenker and Bassett, in which linear regression is used to build predictive models for distribution quantiles [1]; (b) a more recent approach that takes into account the input variables influencing such uncertainty and uses more advanced (non-linear) machine learning methods (neural networks, model trees, etc.), the UNEEC method [2,3,7]; (c) the even more recent DUBRAUE method (Dynamic Uncertainty Model By Regression on Absolute Error), an autoregressive model of model residuals (it corrects the model residual first and then carries out the uncertainty prediction by an autoregressive statistical model) [5]. B. The data uncertainty (parametric and/or input): in this case we study the propagation of uncertainty (typically presented probabilistically) from parameters or inputs to the model outputs. In the case of simple functions representing models, analytical approaches can be used, or approximation methods (e.g., the first-order second moment method). However, for real complex non-linear models implemented in software there is no other choice except using
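
    A minimal sketch of the type-A (residual) approach, with gradient-boosted quantile regression as a nonlinear stand-in for the QR/UNEEC machinery cited above (scikit-learn and synthetic data assumed):

      # Learn the 5% and 95% quantiles of a model's error as functions of
      # its inputs, giving an input-dependent 90% residual band.
      import numpy as np
      from sklearn.ensemble import GradientBoostingRegressor

      rng = np.random.default_rng(5)
      X = rng.uniform(0, 10, size=(2000, 2))      # e.g. recent rainfall, flow
      resid = rng.normal(0, 0.2 + 0.1 * X[:, 0])  # heteroscedastic error

      q_lo = GradientBoostingRegressor(loss="quantile", alpha=0.05).fit(X, resid)
      q_hi = GradientBoostingRegressor(loss="quantile", alpha=0.95).fit(X, resid)

      x_new = np.array([[8.0, 3.0]])
      print("90%% residual band: [%.2f, %.2f]"
            % (q_lo.predict(x_new)[0], q_hi.predict(x_new)[0]))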

  4. Mathematical concepts for modeling human behavior in complex man-machine systems

    NASA Technical Reports Server (NTRS)

    Johannsen, G.; Rouse, W. B.

    1979-01-01

    Many human behavior (e.g., manual control) models have been found to be inadequate for describing processes in certain real complex man-machine systems. An attempt is made to find a way to overcome this problem by examining the range of applicability of existing mathematical models with respect to the hierarchy of human activities in real complex tasks. Automobile driving is chosen as a baseline scenario, and a hierarchy of human activities is derived by analyzing this task in general terms. A structural description leads to a block diagram and a time-sharing computer analogy.

  5. Sensitivity Analysis of a Spatio-Temporal Avalanche Forecasting Model Based on Support Vector Machines

    NASA Astrophysics Data System (ADS)

    Matasci, G.; Pozdnoukhov, A.; Kanevski, M.

    2009-04-01

    The recent progress in environmental monitoring technologies allows capturing extensive amounts of data that can be used to assist in avalanche forecasting. While it is not straightforward to directly obtain the stability factors with the available technologies, snow-pack profiles and especially meteorological parameters are becoming more and more available at finer spatial and temporal scales. Being very useful for improving physical modelling, these data are also of particular interest for use with contemporary data-driven machine learning techniques. Thus, the use of a support vector machine classifier opens ways to discriminate the "safe" and "dangerous" conditions in the feature space of factors related to avalanche activity, based on historical observations. The input space of factors is constructed from a number of direct and indirect snowpack and weather observations pre-processed with heuristic and physical models into a high-dimensional, spatially varying vector of input parameters. The particular system presented in this work is implemented for the avalanche-prone site of Ben Nevis, Lochaber region, in Scotland. A data-driven model for spatio-temporal avalanche danger forecasting provides an avalanche danger map for this local (5x5 km) region at a resolution of 10 m, based on weather and avalanche observations made by forecasters on a daily basis at the site. We present further work aimed at overcoming "black-box" type modelling, a disadvantage the machine learning methods are often criticized for. It explores what the data-driven method of support vector machines has to offer to improve the interpretability of the forecast, uncovers the properties of the developed system with respect to highlighting which important features led to a particular prediction (both in time and space), and presents the analysis of sensitivity of the prediction with respect to the varying input parameters. The purpose of the

  6. Constructing and validating readability models: the method of integrating multilevel linguistic features with machine learning.

    PubMed

    Sung, Yao-Ting; Chen, Ju-Ling; Cha, Ji-Her; Tseng, Hou-Chiang; Chang, Tao-Hsing; Chang, Kuo-En

    2015-06-01

    Multilevel linguistic features have been proposed for discourse analysis, but there have been few applications of multilevel linguistic features to readability models and also few validations of such models. Most traditional readability formulae are based on generalized linear models (GLMs; e.g., discriminant analysis and multiple regression), but these models have to comply with certain statistical assumptions about data properties and include all of the data in formulae construction without pruning the outliers in advance. The use of such readability formulae tends to produce a low text classification accuracy, while using a support vector machine (SVM) in machine learning can enhance the classification outcome. The present study constructed readability models by integrating multilevel linguistic features with SVM, which is more appropriate for text classification. Taking the Chinese language as an example, this study developed 31 linguistic features as the predicting variables at the word, semantic, syntax, and cohesion levels, with grade levels of texts as the criterion variable. The study compared four types of readability models by integrating unilevel and multilevel linguistic features with GLMs and an SVM. The results indicate that adopting a multilevel approach in readability analysis provides a better representation of the complexities of both texts and the reading comprehension process.

  7. Field tests and machine learning approaches for refining algorithms and correlations of driver's model parameters.

    PubMed

    Tango, Fabio; Minin, Luca; Tesauri, Francesco; Montanari, Roberto

    2010-03-01

    This paper describes the field tests on a driving simulator carried out to validate the algorithms and the correlations of dynamic parameters, specifically driving task demand and drivers' distraction, able to predict drivers' intentions. These parameters belong to the driver's model developed by AIDE (Adaptive Integrated Driver-vehicle InterfacE) European Integrated Project. Drivers' behavioural data have been collected from the simulator tests to model and validate these parameters using machine learning techniques, specifically the adaptive neuro fuzzy inference systems (ANFIS) and the artificial neural network (ANN). Two models of task demand and distraction have been developed, one for each adopted technique. The paper provides an overview of the driver's model, the description of the task demand and distraction modelling and the tests conducted for the validation of these parameters. A test comparing predicted and expected outcomes of the modelled parameters for each machine learning technique has been carried out: for distraction, in particular, promising results (low prediction errors) have been obtained by adopting an artificial neural network.

  8. Structure Based Thermostability Prediction Models for Protein Single Point Mutations with Machine Learning Tools.

    PubMed

    Jia, Lei; Yarlagadda, Ramya; Reed, Charles C

    2015-01-01

    Thermostability issue of protein point mutations is a common occurrence in protein engineering. An application which predicts the thermostability of mutants can be helpful for guiding decision making process in protein design via mutagenesis. An in silico point mutation scanning method is frequently used to find "hot spots" in proteins for focused mutagenesis. ProTherm (http://gibk26.bio.kyutech.ac.jp/jouhou/Protherm/protherm.html) is a public database that consists of thousands of protein mutants' experimentally measured thermostability. Two data sets based on two differently measured thermostability properties of protein single point mutations, namely the unfolding free energy change (ddG) and melting temperature change (dTm) were obtained from this database. Folding free energy change calculation from Rosetta, structural information of the point mutations as well as amino acid physical properties were obtained for building thermostability prediction models with informatics modeling tools. Five supervised machine learning methods (support vector machine, random forests, artificial neural network, naïve Bayes classifier, K nearest neighbor) and partial least squares regression are used for building the prediction models. Binary and ternary classifications as well as regression models were built and evaluated. Data set redundancy and balancing, the reverse mutations technique, feature selection, and comparison to other published methods were discussed. Rosetta calculated folding free energy change ranked as the most influential features in all prediction models. Other descriptors also made significant contributions to increasing the accuracy of the prediction models.

  9. A hybrid prognostic model for multistep ahead prediction of machine condition

    NASA Astrophysics Data System (ADS)

    Roulias, D.; Loutas, T. H.; Kostopoulos, V.

    2012-05-01

    Prognostics are the future trend in condition based maintenance. In the current framework a data driven prognostic model is developed. The typical procedure of developing such a model comprises a) the selection of features which correlate well with the gradual degradation of the machine and b) the training of a mathematical tool. In this work the data are taken from a laboratory scale single stage gearbox under multi-sensor monitoring. Tests monitoring the condition of the gear pair from healthy state until total breakdown following several days of continuous operation were conducted. After basic pre-processing of the derived data, an indicator that correlated well with the gearbox condition was obtained. Subsequently, the time series is split into a few distinguishable time regions via an intelligent data clustering scheme. Each operating region is modelled with a feed-forward artificial neural network (FFANN) scheme. The performance of the proposed model is tested by applying the system to predict the machine degradation level on unseen data. The results show the plausibility and effectiveness of the model in following the trend of the time series even in the case that a sudden change occurs. Moreover, the model shows the ability to generalise for application to similar mechanical assets.
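
    A minimal sketch of the multistep-ahead part of the scheme, assuming scikit-learn: a feed-forward network is trained on lagged values of a synthetic degradation indicator and then applied recursively, feeding each prediction back as an input (the clustering into operating regions is omitted).

    # Sketch: recursive multistep-ahead prediction with a feed-forward
    # ANN on a synthetic degradation indicator.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    t = np.linspace(0, 10, 400)
    x = 0.02 * t**2 + 0.05 * np.sin(3 * t)        # degradation indicator

    lags = 5
    Xw = np.array([x[i:i + lags] for i in range(len(x) - lags)])
    yw = x[lags:]
    ann = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000,
                       random_state=0).fit(Xw[:300], yw[:300])

    window = list(x[300:300 + lags])              # recursive prediction loop
    preds = []
    for _ in range(20):
        p = ann.predict([window[-lags:]])[0]      # feed prediction back in
        preds.append(p)
        window.append(p)
    print("first 3 multistep predictions:", np.round(preds[:3], 3))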

  10. Abstracts of SIG Sessions.

    ERIC Educational Resources Information Center

    Proceedings of the ASIS Annual Meeting, 1995

    1995-01-01

    Presents abstracts of 15 special interest group (SIG) sessions. Topics include navigation and information utilization in the Internet, natural language processing, automatic indexing, image indexing, classification, users' models of database searching, online public access catalogs, education for information professions, information services,…

  11. Some cases of machining large-scale parts: Characterization and modelling of heavy turning, deep drilling and broaching

    NASA Astrophysics Data System (ADS)

    Haddag, B.; Nouari, M.; Moufki, A.

    2016-10-01

    Machining large-scale parts involves extreme loading at the cutting zone. This paper presents an overview of some cases of machining large-scale parts: heavy turning, deep drilling and broaching processes. It focuses on experimental characterization and modelling methods of these processes. Observed phenomena and/or measured cutting forces are reported. The paper also discusses the predictive ability of the proposed models to reproduce experimental data.

  12. Shared Consensus Machine Learning Models for Predicting Blood Stage Malaria Inhibition.

    PubMed

    Verras, Andreas; Waller, Christopher Lee; Gedeck, Peter; Green, Darren; Kogej, Thierry; Raichurkar, Anandkumar V; Panda, Manoranjan; Shelat, Anang A; Clark, Julie A; Guy, R Kiplin; Papadatos, George; Burrows, Jeremy N

    2017-03-03

    The development of new antimalarial therapies is essential, and lowering the barrier of entry for the screening and discovery of new lead compound classes can spur drug development at organizations that may not have large compound screening libraries or the resources to conduct high throughput screens. Machine learning models have long been established to be more robust and to have a larger domain of applicability with larger training sets. Screens over multiple data sets to find compounds with potential malaria blood stage inhibitory activity have been used to generate multiple Bayesian models. Here we describe a method by which Bayesian QSAR models, which contain information on thousands to millions of proprietary compounds, can be shared between collaborators at both for-profit and not-for-profit institutions. This model-sharing paradigm allows for the development of consensus models that have increased predictive power over any single model, yet does not reveal the identity of any compounds in the training sets.

  13. EBS Radionuclide Transport Abstraction

    SciTech Connect

    J. Prouty

    2006-07-14

    The purpose of this report is to develop and analyze the engineered barrier system (EBS) radionuclide transport abstraction model, consistent with Level I and Level II model validation, as identified in Technical Work Plan for: Near-Field Environment and Transport: Engineered Barrier System: Radionuclide Transport Abstraction Model Report Integration (BSC 2005 [DIRS 173617]). The EBS radionuclide transport abstraction (or EBS RT Abstraction) is the conceptual model used in the total system performance assessment (TSPA) to determine the rate of radionuclide releases from the EBS to the unsaturated zone (UZ). The EBS RT Abstraction conceptual model consists of two main components: a flow model and a transport model. Both models are developed mathematically from first principles in order to show explicitly what assumptions, simplifications, and approximations are incorporated into the models used in the TSPA. The flow model defines the pathways for water flow in the EBS and specifies how the flow rate is computed in each pathway. Input to this model includes the seepage flux into a drift. The seepage flux is potentially split by the drip shield, with some (or all) of the flux being diverted by the drip shield and some passing through breaches in the drip shield that might result from corrosion or seismic damage. The flux through drip shield breaches is potentially split by the waste package, with some (or all) of the flux being diverted by the waste package and some passing through waste package breaches that might result from corrosion or seismic damage. Neither the drip shield nor the waste package survives an igneous intrusion, so the flux splitting submodel is not used in the igneous scenario class. The flow model is validated in an independent model validation technical review. The drip shield and waste package flux splitting algorithms are developed and validated using experimental data. The transport model considers advective transport and diffusive transport

  14. [Research, design and application of model NSE-1 neck muscle training machine for pilots].

    PubMed

    Cheng, Haiping; Wang, Zhijie; Liu, Songyang; Yang, Yi; Zhao, Guang; Cong, Hong; Han, Xueping; Liu, Min; Yu, Mengsun

    2011-04-01

    Pain in the cervical region of air force pilots, who are exposed to high G-forces, is a specific occupational health problem. To minimize neck problems, the cervical muscles need specific strength exercise, and it is important that neck training be carried out with optimal resistance. The model NSE-1 neck training machine for pilots was designed for neck strengthening exercises under safe and effective conditions. In order to realize the functions of changeable velocity and resistance (CVR) training and neck isometric contractive exercises, techniques of adaptive hydraulics, sensing, optical and auditory biological feedback, and signal processing were applied to this machine. The training system mainly consists of mechanical parts (including the chair for flexion and extension, the chair for right and left lateral flexion, and the components of hydraulics and torque transformer) and software for signal processing and biological feedback. Eleven volunteers were selected for experiments with neck isometric contractive exercises three times a week for 6 weeks, with CVR training (flexion, extension, right and left lateral flexion) once a week. The increase in relative strength of the neck (flexion, extension, left and right lateral flexion) was 70.8%, 83.7%, 78.6% and 75.2%, respectively, after training. Results show that neck strength can be increased safely, effectively and rapidly with the NSE-1 neck training machine.

  15. Biosimilarity Assessments of Model IgG1-Fc Glycoforms Using a Machine Learning Approach.

    PubMed

    Kim, Jae Hyun; Joshi, Sangeeta B; Tolbert, Thomas J; Middaugh, C Russell; Volkin, David B; Smalter Hall, Aaron

    2016-02-01

    Biosimilarity assessments are performed to decide whether 2 preparations of complex biomolecules can be considered "highly similar." In this work, a machine learning approach is demonstrated as a mathematical tool for such assessments using a variety of analytical data sets. As proof-of-principle, physical stability data sets from 8 samples, 4 well-defined immunoglobulin G1-Fc (fragment crystallizable) glycoforms in 2 different formulations, were examined (see More et al., companion article in this issue). The data sets included triplicate measurements from 3 analytical methods across different pH and temperature conditions (2066 data features). Established machine learning techniques were used to determine whether the data sets contain sufficient discriminative power for this application. The support vector machine classifier identified the 8 distinct samples with high accuracy. For these data sets, there exists a minimum threshold of information quality and volume needed to grant sufficient discriminative power. Generally, data from multiple analytical techniques, multiple pH conditions, and at least 200 representative features were required to achieve the highest discriminative accuracy. In addition to classification accuracy tests, various methods such as sample space visualization, similarity analysis based on Euclidean distance, and feature ranking by mutual information scores are demonstrated to display their effectiveness as modeling tools for biosimilarity assessments.
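
    A minimal sketch of two of the demonstrated tools, assuming scikit-learn: mutual-information feature ranking followed by SVM classification of samples; the 2066 real analytical features are replaced by synthetic ones, with a handful made artificially informative.

    # Sketch: mutual-information feature ranking + SVM classification of
    # glycoform samples; data are synthetic stand-ins.
    import numpy as np
    from sklearn.feature_selection import mutual_info_classif
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    rng = np.random.default_rng(3)
    y = np.repeat(np.arange(8), 10)         # 8 samples, 10 replicates each
    X = rng.normal(size=(80, 500))          # 500 stand-in features
    X[:, :50] += y[:, None] * 0.5           # make 50 features informative

    mi = mutual_info_classif(X, y, random_state=3)
    top = np.argsort(mi)[::-1][:200]        # keep the 200 best features
    print("CV accuracy:",
          cross_val_score(SVC(), X[:, top], y, cv=4).mean())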

  16. A comparison of optimal MIMO linear and nonlinear models for brain-machine interfaces.

    PubMed

    Kim, S-P; Sanchez, J C; Rao, Y N; Erdogmus, D; Carmena, J M; Lebedev, M A; Nicolelis, M A L; Principe, J C

    2006-06-01

    The field of brain-machine interfaces requires the estimation of a mapping from spike trains collected in motor cortex areas to the hand kinematics of the behaving animal. This paper presents a systematic investigation of several linear (Wiener filter, LMS adaptive filters, gamma filter, subspace Wiener filters) and nonlinear models (time-delay neural network and local linear switching models) applied to datasets from two experiments in monkeys performing motor tasks (reaching for food and target hitting). Ensembles of 100-200 cortical neurons were simultaneously recorded in these experiments, and even larger neuronal samples are anticipated in the future. Due to the large size of the models (thousands of parameters), the major issue studied was the generalization performance. Every parameter of the models (not only the weights) was selected optimally using signal processing and machine learning techniques. The models were also compared statistically with respect to the Wiener filter as the baseline. Each of the optimization procedures produced improvements over that baseline for either one of the two datasets or both.
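
    A minimal sketch of the baseline decoder in this comparison: a Wiener-filter-style linear map from time-lagged spike counts to a kinematic variable, here fit as ridge regression on simulated data (the regularization is a stand-in for the paper's per-parameter optimization).

    # Sketch: linear (Wiener-filter-style) decoding of hand velocity from
    # time-lagged spike counts; spikes and kinematics are simulated.
    import numpy as np
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(4)
    spikes = rng.poisson(2.0, size=(1000, 50))    # 50 neurons, 1000 bins
    vel = spikes[:, :5].sum(axis=1) + rng.normal(size=1000)

    lags = 10                                     # embed 10 past time bins
    X = np.hstack([np.roll(spikes, k, axis=0) for k in range(lags)])[lags:]
    y = vel[lags:]
    dec = Ridge(alpha=10.0).fit(X[:800], y[:800])
    print("decoding R^2 on held-out data:", dec.score(X[800:], y[800:]))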

  17. Supercomputer Assisted Generation of Machine Learning Agents for the Calibration of Building Energy Models

    SciTech Connect

    Sanyal, Jibonananda; New, Joshua Ryan; Edwards, Richard

    2013-01-01

    Building Energy Modeling (BEM) is an approach to model the energy usage in buildings for design and retrofit purposes. EnergyPlus is the flagship Department of Energy software that performs BEM for different types of buildings. The input to EnergyPlus can often extend to the order of a few thousand parameters, which have to be calibrated manually by an expert for realistic energy modeling. This makes it challenging and expensive, thereby making building energy modeling unfeasible for smaller projects. In this paper, we describe the "Autotune" research which employs machine learning algorithms to generate agents for the different kinds of standard reference buildings in the U.S. building stock. The parametric space and the variety of building locations and types make this a challenging computational problem necessitating the use of supercomputers. Millions of EnergyPlus simulations are run on supercomputers which are subsequently used to train machine learning algorithms to generate agents. These agents, once created, can then run in a fraction of the time thereby allowing cost-effective calibration of building models.

  18. Operating Comfort Prediction Model of Human-Machine Interface Layout for Cabin Based on GEP.

    PubMed

    Deng, Li; Wang, Guohua; Chen, Bo

    2015-01-01

    In view of the evaluation and decision-making problem in human-machine interface layout design for cabins, an operating comfort prediction model based on GEP (Gene Expression Programming) is proposed, using operating comfort to evaluate layout schemes. Joint angles are used to describe the operating posture of the upper limb and are taken as independent variables to establish the comfort model of operating posture. Factor analysis is adopted to reduce the variable dimension; the model's input variables are reduced from 16 joint angles to 4 comfort impact factors, and the output variable is the operating comfort score. A Chinese virtual human body model is built with CATIA software and used to simulate and evaluate the operators' operating comfort. With 22 groups of evaluation data as training and validation samples, the GEP algorithm is used to obtain the best-fitting function between the joint angles and operating comfort; operating comfort can then be predicted quantitatively. The prediction results for the human-machine interface layout of a driller control room show that the GEP-based operating comfort prediction model is fast and efficient, has good prediction accuracy, and can improve design efficiency.

  19. Operating Comfort Prediction Model of Human-Machine Interface Layout for Cabin Based on GEP

    PubMed Central

    Deng, Li; Wang, Guohua; Chen, Bo

    2015-01-01

    In view of the evaluation and decision-making problem in human-machine interface layout design for cabins, an operating comfort prediction model based on GEP (Gene Expression Programming) is proposed, using operating comfort to evaluate layout schemes. Joint angles are used to describe the operating posture of the upper limb and are taken as independent variables to establish the comfort model of operating posture. Factor analysis is adopted to reduce the variable dimension; the model's input variables are reduced from 16 joint angles to 4 comfort impact factors, and the output variable is the operating comfort score. A Chinese virtual human body model is built with CATIA software and used to simulate and evaluate the operators' operating comfort. With 22 groups of evaluation data as training and validation samples, the GEP algorithm is used to obtain the best-fitting function between the joint angles and operating comfort; operating comfort can then be predicted quantitatively. The prediction results for the human-machine interface layout of a driller control room show that the GEP-based operating comfort prediction model is fast and efficient, has good prediction accuracy, and can improve design efficiency. PMID:26448740

  20. A comparison of optimal MIMO linear and nonlinear models for brain machine interfaces

    NASA Astrophysics Data System (ADS)

    Kim, S.-P.; Sanchez, J. C.; Rao, Y. N.; Erdogmus, D.; Carmena, J. M.; Lebedev, M. A.; Nicolelis, M. A. L.; Principe, J. C.

    2006-06-01

    The field of brain-machine interfaces requires the estimation of a mapping from spike trains collected in motor cortex areas to the hand kinematics of the behaving animal. This paper presents a systematic investigation of several linear (Wiener filter, LMS adaptive filters, gamma filter, subspace Wiener filters) and nonlinear models (time-delay neural network and local linear switching models) applied to datasets from two experiments in monkeys performing motor tasks (reaching for food and target hitting). Ensembles of 100-200 cortical neurons were simultaneously recorded in these experiments, and even larger neuronal samples are anticipated in the future. Due to the large size of the models (thousands of parameters), the major issue studied was the generalization performance. Every parameter of the models (not only the weights) was selected optimally using signal processing and machine learning techniques. The models were also compared statistically with respect to the Wiener filter as the baseline. Each of the optimization procedures produced improvements over that baseline for either one of the two datasets or both.

  1. Selecting statistical or machine learning techniques for regional landslide susceptibility modelling by evaluating spatial prediction

    NASA Astrophysics Data System (ADS)

    Goetz, Jason; Brenning, Alexander; Petschko, Helene; Leopold, Philip

    2015-04-01

    With so many techniques now available for landslide susceptibility modelling, it can be challenging to decide which technique to apply. Generally speaking, the criteria for model selection should be tied closely to the end users' purpose, which could be spatial prediction, spatial analysis or both. In our research, we focus on comparing the spatial predictive abilities of landslide susceptibility models. We illustrate how spatial cross-validation, a statistical approach for assessing spatial prediction performance, can be applied with the area under the receiver operating characteristic curve (AUROC) as a prediction measure for model comparison. Several machine learning and statistical techniques are evaluated for prediction in Lower Austria: support vector machine, random forest, bundling with penalized linear discriminant analysis, logistic regression, weights of evidence, and the generalized additive model. In addition to predictive performance, the importance of predictor variables in each model was estimated using spatial cross-validation by calculating the change in AUROC performance when variables are randomly permuted. The susceptibility modelling techniques were tested in three areas of interest in Lower Austria, which have unique geologic conditions associated with landslide occurrence. Overall, for the majority of comparisons, we found few practically or even statistically significant differences in AUROCs; that is, the models' prediction performances were very similar. Therefore, in addition to prediction, the ability to interpret models for spatial analysis and the qualitative properties of the prediction surface (map) are considered and discussed. The measure of variable importance provided some insight into model behaviour for prediction, in particular for "black-box" models. However, there were no clear patterns across the areas of interest as to why certain variables were given more importance than others.
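
    A minimal sketch of spatial cross-validation with AUROC, assuming scikit-learn: spatial blocks serve as CV groups so that test points are spatially separated from training points. The coordinates, predictors, and the two stand-in models are synthetic placeholders, not the Lower Austria data.

    # Sketch: spatial cross-validation of susceptibility models using
    # spatial blocks as folds and AUROC as the prediction measure.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import GroupKFold

    rng = np.random.default_rng(5)
    xy = rng.uniform(0, 100, size=(1000, 2))       # point coordinates
    X = rng.normal(size=(1000, 6))                 # terrain predictors
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=1000) > 0).astype(int)
    blocks = (xy[:, 0] // 25).astype(int)          # 4 spatial strips

    for name, mdl in [("GLM", LogisticRegression(max_iter=1000)),
                      ("RF", RandomForestClassifier(random_state=5))]:
        aucs = []
        for tr, te in GroupKFold(n_splits=4).split(X, y, groups=blocks):
            p = mdl.fit(X[tr], y[tr]).predict_proba(X[te])[:, 1]
            aucs.append(roc_auc_score(y[te], p))
        print(name, "spatial-CV AUROC:", np.mean(aucs).round(3))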

  2. One- and two-dimensional Stirling machine simulation using experimentally generated reversing flow turbulence models

    SciTech Connect

    Goldberg, L.F.

    1990-08-01

    The activities described in this report do not constitute a continuum but rather a series of linked smaller investigations in the general area of one- and two-dimensional Stirling machine simulation. The initial impetus for these investigations was the development and construction of the Mechanical Engineering Test Rig (METR) under a grant awarded by NASA to Dr. Terry Simon at the Department of Mechanical Engineering, University of Minnesota. The purpose of the METR is to provide experimental data on oscillating turbulent flows in Stirling machine working fluid flow path components (heater, cooler, regenerator, etc.) with particular emphasis on laminar/turbulent flow transitions. Hence, the initial goals for the grant awarded by NASA were, broadly, to provide computer simulation backup for the design of the METR and to analyze the results produced. This was envisaged in two phases: First, to apply an existing one-dimensional Stirling machine simulation code to the METR and second, to adapt a two-dimensional fluid mechanics code which had been developed for simulating high Rayleigh number buoyant cavity flows to the METR. The key aspect of this latter component was the development of an appropriate turbulence model suitable for generalized application to Stirling simulation. A final step was then to apply the two-dimensional code to an existing Stirling machine for which adequate experimental data exist. The work described herein was carried out over a period of three years on a part-time basis. Forty percent of the first year's funding was provided as a match to the NASA funds by the Underground Space Center, University of Minnesota, which also made its computing facilities available to the project at no charge.

  3. Solar Flare Prediction Model with Three Machine-learning Algorithms using Ultraviolet Brightening and Vector Magnetograms

    NASA Astrophysics Data System (ADS)

    Nishizuka, N.; Sugiura, K.; Kubo, Y.; Den, M.; Watari, S.; Ishii, M.

    2017-02-01

    We developed a flare prediction model using machine learning, which is optimized to predict the maximum class of flares occurring in the following 24 hr. Machine learning is used to devise algorithms that can learn from and make decisions on a huge amount of data. We used solar observation data during the period 2010–2015, such as vector magnetograms, ultraviolet (UV) emission, and soft X-ray emission taken by the Solar Dynamics Observatory and the Geostationary Operational Environmental Satellite. We detected active regions (ARs) from the full-disk magnetogram, from which ∼60 features were extracted with their time differentials, including magnetic neutral lines, the current helicity, the UV brightening, and the flare history. After standardizing the feature database, we fully shuffled and randomly separated it into two for training and testing. To investigate which algorithm is best for flare prediction, we compared three machine-learning algorithms: the support vector machine, k-nearest neighbors (k-NN), and extremely randomized trees. The prediction score, the true skill statistic, was higher than 0.9 with a fully shuffled data set, which is higher than that for human forecasts. It was found that k-NN has the highest performance among the three algorithms. The ranking of the feature importance showed that previous flare activity is most effective, followed by the length of magnetic neutral lines, the unsigned magnetic flux, the area of UV brightening, and the time differentials of features over 24 hr, all of which are strongly correlated with the flux emergence dynamics in an AR.
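
    A minimal sketch of the skill measure used above: the true skill statistic (TSS = TPR - FPR) computed for a k-NN classifier, with synthetic stand-ins for the roughly 60 magnetogram and UV features.

    # Sketch: k-NN flare classification scored with the true skill
    # statistic; features and labels are simulated.
    import numpy as np
    from sklearn.metrics import confusion_matrix
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier

    rng = np.random.default_rng(6)
    X = rng.normal(size=(2000, 60))
    y = (X[:, 0] + X[:, 1] > 1.2).astype(int)      # 1 = flare above class

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=6)
    pred = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr).predict(X_te)
    tn, fp, fn, tp = confusion_matrix(y_te, pred).ravel()
    tss = tp / (tp + fn) - fp / (fp + tn)          # TSS = TPR - FPR
    print("true skill statistic:", round(tss, 3))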

  4. Seismic Consequence Abstraction

    SciTech Connect

    M. Gross

    2004-10-25

    The primary purpose of this model report is to develop abstractions for the response of engineered barrier system (EBS) components to seismic hazards at a geologic repository at Yucca Mountain, Nevada, and to define the methodology for using these abstractions in a seismic scenario class for the Total System Performance Assessment - License Application (TSPA-LA). A secondary purpose of this model report is to provide information for criticality studies related to seismic hazards. The seismic hazards addressed herein are vibratory ground motion, fault displacement, and rockfall due to ground motion. The EBS components are the drip shield, the waste package, and the fuel cladding. The requirements for development of the abstractions and the associated algorithms for the seismic scenario class are defined in ''Technical Work Plan For: Regulatory Integration Modeling of Drift Degradation, Waste Package and Drip Shield Vibratory Motion and Seismic Consequences'' (BSC 2004 [DIRS 171520]). The development of these abstractions will provide a more complete representation of flow into and transport from the EBS under disruptive events. The results from this development will also address portions of integrated subissue ENG2, Mechanical Disruption of Engineered Barriers, including the acceptance criteria for this subissue defined in Section 2.2.1.3.2.3 of the ''Yucca Mountain Review Plan, Final Report'' (NRC 2003 [DIRS 163274]).

  5. Monkey models for brain-machine interfaces: the need for maintaining diversity.

    PubMed

    Nuyujukian, Paul; Fan, Joline M; Gilja, Vikash; Kalanithi, Paul S; Chestek, Cindy A; Shenoy, Krishna V

    2011-01-01

    Brain-machine interfaces (BMIs) aim to help disabled patients by translating neural signals from the brain into control signals for guiding prosthetic arms, computer cursors, and other assistive devices. Animal models are central to the development of these systems and have helped enable the successful translation of the first generation of BMIs. As we move toward next-generation systems, we face the question of which animal models will aid broader patient populations and achieve even higher performance, robustness, and functionality. We review here four general types of rhesus monkey models employed in BMI research, and describe two additional, complementary models. Given the physiological diversity of neurological injury and disease, we suggest a need to maintain the current diversity of animal models and to explore additional alternatives, as each mimic different aspects of injury or disease.

  6. Etch proximity correction through machine-learning-driven etch bias model

    NASA Astrophysics Data System (ADS)

    Shim, Seongbo; Shin, Youngsoo

    2016-03-01

    Accurate prediction of etch bias has become more important as the technology node shrinks. Simulation is not a feasible solution at full-chip level due to excessive runtime, so etch proximity correction (EPC) often relies on empirically obtained rules or models. However, simple rules alone cannot accurately correct various pattern shapes, and the few empirical parameters in model-based EPC are still not enough to achieve satisfactory OCV. We propose a new approach to etch bias modeling through a machine learning (ML) technique. A segment of interest (and its surroundings) is characterized by some geometric and optical parameters, which are received by an artificial neural network (ANN), which then outputs the predicted etch bias of the segment. The ANN is used as our etch bias model for the new EPC proposed in this paper. The new etch bias model and EPC are implemented in a commercial OPC tool and demonstrated using a 20 nm technology DRAM gate layer.

  7. Evaluating machine learning and statistical prediction techniques for landslide susceptibility modeling

    NASA Astrophysics Data System (ADS)

    Goetz, J. N.; Brenning, A.; Petschko, H.; Leopold, P.

    2015-08-01

    Statistical and, more recently, machine learning prediction methods have been gaining popularity in the field of landslide susceptibility modeling. In particular, these data-driven approaches show promise when tackling the challenge of mapping landslide-prone areas for large regions, which may not have sufficient geotechnical data for physically-based methods. Currently, there is no single best method for empirical susceptibility modeling. Therefore, this study presents a comparison of traditional statistical and novel machine learning models applied for regional scale landslide susceptibility modeling. These methods were evaluated by spatial k-fold cross-validation estimation of the predictive performance, assessment of variable importance for gaining insights into model behavior and by the appearance of the prediction (i.e. susceptibility) map. The modeling techniques applied were logistic regression (GLM), generalized additive models (GAM), weights of evidence (WOE), the support vector machine (SVM), random forest classification (RF), and bootstrap aggregated classification trees (bundling) with penalized discriminant analysis (BPLDA). These modeling methods were tested for three areas in the province of Lower Austria, Austria. The areas are characterized by different geological and morphological settings. Random forest and bundling classification techniques had the overall best predictive performances. However, the performances of all modeling techniques were, for the most part, not significantly different from each other; depending on the areas of interest, the overall median estimated area under the receiver operating characteristic curve (AUROC) differences ranged from 2.9 to 8.9 percentage points. The overall median estimated true positive rate (TPR) differences, measured at a 10% false positive rate (FPR), ranged from 11 to 15 percentage points. The relative importance of each predictor was generally different between the modeling methods. However, slope angle, surface roughness and plan

  8. Three-Phase Unbalanced Transient Dynamics and Powerflow for Modeling Distribution Systems With Synchronous Machines

    SciTech Connect

    Elizondo, Marcelo A.; Tuffner, Francis K.; Schneider, Kevin P.

    2016-01-01

    Unlike transmission systems, distribution feeders in North America operate under unbalanced conditions at all times, and generally have a single strong voltage source. When a distribution feeder is connected to a strong substation source, the system is dynamically very stable, even for large transients. However, if a distribution feeder, or part of the feeder, is separated from the substation and begins to operate as an islanded microgrid, transient dynamics become more of an issue. To assess the impact of transient dynamics at the distribution level, it is not appropriate to use traditional transmission solvers, which generally assume transposed lines and balanced loads. Full electromagnetic solvers capture a high level of detail, but that required detail makes it difficult to model large systems. This paper proposes an electromechanical transient model of a synchronous machine for distribution-level modeling and microgrids. This approach includes not only the machine model, but also its interface with an unbalanced network solver, and a powerflow method to solve unbalanced conditions without a strong reference bus. The presented method is validated against a full electromagnetic transient simulation.

  9. Modeling and Design of a Nonlinear Temperature-Humidity Controller Used in a Mushroom-Drying Machine

    NASA Astrophysics Data System (ADS)

    Wu, Xiuhua; Luo, Haiyan; Shi, Minhui

    The drying process of many kinds of farm produce in a closed room, such as a mushroom-drying machine, is generally complicated, nonlinear and time-delayed, with temperature and humidity as the main controlled elements. Accurate control of temperature and humidity has long been a problem of interest, and building a more accurate mathematical model of how the two vary is both difficult and very important. In this paper, a mathematical model is put forward after considering many aspects and analyzing the actual working conditions. The model shows that the changes of temperature and humidity in the drying machine are not simply linear but form an affine nonlinear process. Controlling this process exactly is the key factor that influences the quality of the dried mushrooms. The differential geometry theories and methods are used to analyze and solve the model of these small-environment elements, and finally a nonlinear controller satisfying the optimal quadratic performance index is designed, which proves more feasible and practical than conventional control.

  10. The applications of machine learning algorithms in the modeling of estrogen-like chemicals.

    PubMed

    Liu, Huanxiang; Yao, Xiaojun; Gramatica, Paola

    2009-06-01

    Increasing concern is being shown by the scientific community, government regulators, and the public about endocrine-disrupting chemicals that, in the environment, are adversely affecting human and wildlife health through a variety of mechanisms, mainly estrogen receptor-mediated mechanisms of toxicity. Because of the large number of such chemicals in the environment, there is a great need for an effective means of rapidly assessing endocrine-disrupting activity in the toxicology assessment process. When faced with the challenging task of screening large libraries of molecules for biological activity, the benefits of computational predictive models based on quantitative structure-activity relationships to identify possible estrogens become immediately obvious. Recently, in order to improve the accuracy of prediction, some machine learning techniques were introduced to build more effective predictive models. In this review we will focus our attention on some recent advances in the use of these methods in modeling estrogen-like chemicals. The advantages and disadvantages of the machine learning algorithms used in solving this problem, the importance of the validation and performance assessment of the built models as well as their applicability domains will be discussed.

  11. Integrating Machine Learning into a Crowdsourced Model for Earthquake-Induced Damage Assessment

    NASA Technical Reports Server (NTRS)

    Rebbapragada, Umaa; Oommen, Thomas

    2011-01-01

    On January 12th, 2010, a catastrophic magnitude-7.0 earthquake devastated the country of Haiti. In the aftermath of an earthquake, it is important to rapidly assess damaged areas in order to mobilize the appropriate resources. The Haiti damage assessment effort introduced a promising model that uses crowdsourcing to map damaged areas in freely available remotely-sensed data. This paper proposes the application of machine learning methods to improve this model. Specifically, we apply work on learning from multiple, imperfect experts to the assessment of volunteer reliability, and propose the use of image segmentation to automate the detection of damaged areas. We wrap both tasks in an active learning framework in order to shift volunteer effort from mapping a full catalog of images to the generation of high-quality training data. We hypothesize that the integration of machine learning into this model improves its reliability, maintains the speed of damage assessment, and allows the model to scale to higher data volumes.

  12. A Genetic Algorithm Based Support Vector Machine Model for Blood-Brain Barrier Penetration Prediction

    PubMed Central

    Zhang, Daqing; Xiao, Jianfeng; Zhou, Nannan; Zheng, Mingyue; Luo, Xiaomin; Jiang, Hualiang; Chen, Kaixian

    2015-01-01

    Blood-brain barrier (BBB) is a highly complex physical barrier determining what substances are allowed to enter the brain. Support vector machine (SVM) is a kernel-based machine learning method that is widely used in QSAR study. For a successful SVM model, the kernel parameters for SVM and feature subset selection are the most important factors affecting prediction accuracy. In most studies, they are treated as two independent problems, but it has been proven that they could affect each other. We designed and implemented a genetic algorithm (GA) to optimize kernel parameters and feature subset selection for SVM regression and applied it to the BBB penetration prediction. The results show that our GA/SVM model is more accurate than other currently available log BB models. Therefore, optimizing both SVM parameters and the feature subset simultaneously with a genetic algorithm is a better approach than methods that treat the two problems separately. Analysis of our log BB model suggests that the carboxylic acid group, polar surface area (PSA)/hydrogen-bonding ability, lipophilicity, and molecular charge play important roles in BBB penetration. Among these properties, lipophilicity enhances BBB penetration while all the others are negatively correlated with it. PMID:26504797
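
    A toy sketch of the GA/SVM idea, assuming scikit-learn: each chromosome carries a binary feature mask plus log-scaled C and gamma, and evolves against cross-validated R2. The population size, operators, and data are illustrative, not the paper's settings.

    # Sketch: a small genetic algorithm jointly evolving an SVR feature
    # mask and kernel parameters; y is a stand-in for log BB values.
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVR

    rng = np.random.default_rng(7)
    X = rng.normal(size=(120, 12))
    y = X[:, 0] - 2 * X[:, 3] + rng.normal(scale=0.2, size=120)

    def fitness(ch):
        mask, logC, logG = ch[:12].astype(bool), ch[12], ch[13]
        if not mask.any():
            return -np.inf
        svr = SVR(C=10.0**logC, gamma=10.0**logG)
        return cross_val_score(svr, X[:, mask], y, cv=3, scoring="r2").mean()

    pop = [np.concatenate([rng.integers(0, 2, 12), rng.uniform(-2, 2, 2)])
           for _ in range(20)]
    for gen in range(15):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:10]                     # elitist selection
        children = []
        for _ in range(10):                    # crossover + mutation
            a, b = rng.choice(10, 2, replace=False)
            cut = rng.integers(1, 13)
            child = np.concatenate([parents[a][:cut], parents[b][cut:]])
            flip = rng.integers(0, 12)
            child[flip] = 1 - child[flip]      # flip one mask bit
            child[12:] += rng.normal(scale=0.1, size=2)
            children.append(child)
        pop = parents + children
    best = max(pop, key=fitness)
    print("best CV R^2:", round(fitness(best), 3),
          "| features kept:", int(best[:12].sum()))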

  13. A model-based analysis of impulsivity using a slot-machine gambling paradigm.

    PubMed

    Paliwal, Saee; Petzschner, Frederike H; Schmitz, Anna Katharina; Tittgemeyer, Marc; Stephan, Klaas E

    2014-01-01

    Impulsivity plays a key role in decision-making under uncertainty. It is a significant contributor to problem and pathological gambling (PG). Standard assessments of impulsivity by questionnaires, however, have various limitations, partly because impulsivity is a broad, multi-faceted concept. What remains unclear is which of these facets contribute to shaping gambling behavior. In the present study, we investigated impulsivity as expressed in a gambling setting by applying computational modeling to data from 47 healthy male volunteers who played a realistic, virtual slot-machine gambling task. Behaviorally, we found that impulsivity, as measured independently by the 11th revision of the Barratt Impulsiveness Scale (BIS-11), correlated significantly with an aggregate read-out of the following gambling responses: bet increases (BIs), machines switches (MS), casino switches (CS), and double-ups (DUs). Using model comparison, we compared a set of hierarchical Bayesian belief-updating models, i.e., the Hierarchical Gaussian Filter (HGF) and Rescorla-Wagner reinforcement learning (RL) models, with regard to how well they explained different aspects of the behavioral data. We then examined the construct validity of our winning models with multiple regression, relating subject-specific model parameter estimates to the individual BIS-11 total scores. In the most predictive model (a three-level HGF), the two free parameters encoded uncertainty-dependent mechanisms of belief updates and significantly explained BIS-11 variance across subjects. Furthermore, in this model, decision noise was a function of trial-wise uncertainty about winning probability. Collectively, our results provide a proof of concept that hierarchical Bayesian models can characterize the decision-making mechanisms linked to the impulsive traits of an individual. These novel indices of gambling mechanisms unmasked during actual play may be useful for online prevention measures for at-risk players and future
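
    As a compact illustration of the simpler of the two model families compared above, the following sketch implements the basic Rescorla-Wagner update (the HGF involves hierarchical uncertainty estimates and is not reproduced here); the outcomes are simulated, not the task data.

    # Sketch: Rescorla-Wagner learning of a slot-machine win probability:
    # belief v moves toward each outcome by learning rate alpha.
    import numpy as np

    rng = np.random.default_rng(8)
    outcomes = rng.binomial(1, 0.3, size=100)   # 1 = win, true p = 0.3
    alpha, v = 0.1, 0.5                         # learning rate, prior belief
    for o in outcomes:
        v = v + alpha * (o - v)                 # prediction-error update
    print("final belief about win probability:", round(v, 3))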

  14. A model-based analysis of impulsivity using a slot-machine gambling paradigm

    PubMed Central

    Paliwal, Saee; Petzschner, Frederike H.; Schmitz, Anna Katharina; Tittgemeyer, Marc; Stephan, Klaas E.

    2014-01-01

    Impulsivity plays a key role in decision-making under uncertainty. It is a significant contributor to problem and pathological gambling (PG). Standard assessments of impulsivity by questionnaires, however, have various limitations, partly because impulsivity is a broad, multi-faceted concept. What remains unclear is which of these facets contribute to shaping gambling behavior. In the present study, we investigated impulsivity as expressed in a gambling setting by applying computational modeling to data from 47 healthy male volunteers who played a realistic, virtual slot-machine gambling task. Behaviorally, we found that impulsivity, as measured independently by the 11th revision of the Barratt Impulsiveness Scale (BIS-11), correlated significantly with an aggregate read-out of the following gambling responses: bet increases (BIs), machines switches (MS), casino switches (CS), and double-ups (DUs). Using model comparison, we compared a set of hierarchical Bayesian belief-updating models, i.e., the Hierarchical Gaussian Filter (HGF) and Rescorla–Wagner reinforcement learning (RL) models, with regard to how well they explained different aspects of the behavioral data. We then examined the construct validity of our winning models with multiple regression, relating subject-specific model parameter estimates to the individual BIS-11 total scores. In the most predictive model (a three-level HGF), the two free parameters encoded uncertainty-dependent mechanisms of belief updates and significantly explained BIS-11 variance across subjects. Furthermore, in this model, decision noise was a function of trial-wise uncertainty about winning probability. Collectively, our results provide a proof of concept that hierarchical Bayesian models can characterize the decision-making mechanisms linked to the impulsive traits of an individual. These novel indices of gambling mechanisms unmasked during actual play may be useful for online prevention measures for at-risk players and

  15. Accurate modeling of switched reluctance machine based on hybrid trained WNN

    NASA Astrophysics Data System (ADS)

    Song, Shoujun; Ge, Lefei; Ma, Shaojie; Zhang, Man

    2014-04-01

    According to the strong nonlinear electromagnetic characteristics of switched reluctance machines (SRM), a novel accurate modeling method is proposed based on a hybrid trained wavelet neural network (WNN), which combines an improved genetic algorithm (GA) with the gradient descent (GD) method to train the network. In the novel method, the WNN is trained by the GD method starting from initial weights obtained via improved GA optimization, so that the global parallel searching capability of the stochastic algorithm and the local convergence speed of the deterministic algorithm are combined to enhance training accuracy, stability and speed. Based on the measured electromagnetic characteristics of a 3-phase 12/8-pole SRM, the nonlinear simulation model is built by the hybrid trained WNN in Matlab. The phase current and mechanical characteristics from simulation under different working conditions agree well with those from experiments, which indicates the accuracy of the model for dynamic and static performance evaluation of the SRM and verifies the effectiveness of the proposed modeling method.

  16. Study of Two-Dimensional Compressible Non-Acoustic Modeling of Stirling Machine Type Components

    NASA Technical Reports Server (NTRS)

    Tew, Roy C., Jr.; Ibrahim, Mounir B.

    2001-01-01

    A two-dimensional (2-D) computer code was developed for modeling enclosed volumes of gas with oscillating boundaries, such as Stirling machine components. An existing 2-D incompressible flow computer code, CAST, was used as the starting point for the project. CAST was modified to use the compressible non-acoustic Navier-Stokes equations to model an enclosed volume including an oscillating piston. The devices modeled have low Mach numbers and are sufficiently small that the time required for acoustics to propagate across them is negligible. Therefore, acoustics were excluded to enable more time efficient computation. Background information about the project is presented. The compressible non-acoustic flow assumptions are discussed. The governing equations used in the model are presented in transport equation format. A brief description is given of the numerical methods used. Comparisons of code predictions with experimental data are then discussed.

  17. Modelling of classification rules on metabolic patterns including machine learning and expert knowledge.

    PubMed

    Baumgartner, Christian; Böhm, Christian; Baumgartner, Daniela

    2005-04-01

    Machine learning has great potential to mine candidate markers from high-dimensional metabolic data without any a priori knowledge. As an example, we investigated metabolic patterns of three severe metabolic disorders, PAHD, MCADD, and 3-MCCD, for which we constructed classification models for disease screening and diagnosis using a decision tree paradigm and logistic regression analysis (LRA). For the LRA model-building process we assessed the relevance of established diagnostic flags, which have been developed from the biochemical knowledge of newborn metabolism, and compared the models' error rates with those of the decision tree classifier. Both approaches yielded comparable classification accuracy in terms of sensitivity (>95.2%), while the LRA models built on flags showed significantly enhanced specificity. The false positive rate did not exceed 0.001%.

  18. A machine learning approach to the potential-field method for implicit modeling of geological structures

    NASA Astrophysics Data System (ADS)

    Gonçalves, Ítalo Gomes; Kumaira, Sissa; Guadagnin, Felipe

    2017-06-01

    Implicit modeling has experienced a rise in popularity over the last decade due to its advantages in terms of speed and reproducibility in comparison with manual digitization of geological structures. The potential-field method consists of interpolating a scalar function that indicates which side of a geological boundary a given point belongs to, based on cokriging of point data and structural orientations. This work proposes a vector potential-field solution from a machine learning perspective, recasting the problem as multi-class classification, which alleviates some of the original method's assumptions. The potentials related to each geological class are interpreted in a compositional data framework. Variogram modeling is avoided through the use of maximum likelihood to train the model, and an uncertainty measure is introduced. The methodology was applied to the modeling of a sample dataset provided with the software Move™. The calculations were implemented in the R language and 3D visualizations were prepared with the rgl package.
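
    A minimal sketch of the recast-as-classification idea (in Python with scikit-learn, whereas the paper's implementation is in R): a classifier maps coordinates to unit labels, and the entropy of the predicted class probabilities serves as an uncertainty measure. Plain logistic regression stands in for the paper's compositional, maximum-likelihood machinery.

    # Sketch: implicit geological modeling as multi-class classification,
    # with predictive entropy as an uncertainty measure; data synthetic.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(9)
    pts = rng.uniform(0, 1, size=(200, 3))          # sample coordinates
    labels = (pts[:, 2] * 3).astype(int)            # 3 stacked units by depth

    clf = LogisticRegression(max_iter=1000).fit(pts, labels)
    grid = rng.uniform(0, 1, size=(5, 3))           # points to classify
    proba = clf.predict_proba(grid)
    entropy = -(proba * np.log(proba + 1e-12)).sum(axis=1)
    print("predicted units:", clf.predict(grid))
    print("uncertainty (entropy):", entropy.round(3))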

  19. Accurate modeling of switched reluctance machine based on hybrid trained WNN

    SciTech Connect

    Song, Shoujun; Ge, Lefei; Ma, Shaojie; Zhang, Man

    2014-04-15

    According to the strong nonlinear electromagnetic characteristics of switched reluctance machines (SRM), a novel accurate modeling method is proposed based on a hybrid trained wavelet neural network (WNN), which combines an improved genetic algorithm (GA) with the gradient descent (GD) method to train the network. In the novel method, the WNN is trained by the GD method starting from initial weights obtained via improved GA optimization, so that the global parallel searching capability of the stochastic algorithm and the local convergence speed of the deterministic algorithm are combined to enhance training accuracy, stability and speed. Based on the measured electromagnetic characteristics of a 3-phase 12/8-pole SRM, the nonlinear simulation model is built by the hybrid trained WNN in Matlab. The phase current and mechanical characteristics from simulation under different working conditions agree well with those from experiments, which indicates the accuracy of the model for dynamic and static performance evaluation of the SRM and verifies the effectiveness of the proposed modeling method.

  20. Discriminative feature-rich models for syntax-based machine translation.

    SciTech Connect

    Dixon, Kevin R.

    2012-12-01

    This report describes the campus executive LDRD "Discriminative Feature-Rich Models for Syntax-Based Machine Translation," which was an effort to foster a better relationship between Sandia and Carnegie Mellon University (CMU). The primary purpose of the LDRD was to fund the research of a promising graduate student at CMU; in this case, Kevin Gimpel was selected from the pool of candidates. This report gives a brief overview of Kevin Gimpel's research.

  1. Chemical Kinetics of Hydrogen Atom Abstraction from Allylic Sites by (3)O2; Implications for Combustion Modeling and Simulation.

    PubMed

    Zhou, Chong-Wen; Simmie, John M; Somers, Kieran P; Goldsmith, C Franklin; Curran, Henry J

    2017-03-09

    Hydrogen atom abstraction from allylic C-H bonds by molecular oxygen plays a very important role in determining the reactivity of fuel molecules having allylic hydrogen atoms. Rate constants for hydrogen atom abstraction by molecular oxygen from molecules with allylic sites have been calculated. A series of molecules with primary, secondary, tertiary, and super secondary allylic hydrogen atoms of the alkene, furan, and alkylbenzene families are taken into consideration. Those molecules include propene, 2-butene, isobutene, 2-methylfuran, and toluene containing the primary allylic hydrogen atom; 1-butene, 1-pentene, 2-ethylfuran, ethylbenzene, and n-propylbenzene containing the secondary allylic hydrogen atom; 3-methyl-1-butene, 2-isopropylfuran, and isopropylbenzene containing the tertiary allylic hydrogen atom; and 1,4-pentadiene containing super secondary allylic hydrogen atoms. The M06-2X/6-311++G(d,p) level of theory was used to optimize the geometries of all of the reactants, transition states, and products, and also for the hindered rotation treatments of the lower frequency modes. The G4 level of theory was used to calculate the electronic single point energies for those species to determine the 0 K barriers to reaction. Conventional transition state theory with Eckart tunnelling corrections was used to calculate the rate constants. The comparison of our calculated rate constants with the available experimental results from the literature shows good agreement for the reactions of propene and isobutene with molecular oxygen. The rate constant for toluene with O2 is about an order of magnitude slower than that experimentally derived from a comprehensive model proposed by Oehlschlaeger and coauthors. The results clearly indicate the need for a more detailed investigation of the combustion kinetics of toluene oxidation and its key pyrolysis and oxidation intermediates. Despite this, our computed barriers and rate constants retain an important internal consistency. Rate constants
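
    For reference, conventional transition state theory with a tunnelling correction, as used above, gives the bimolecular rate constant in the standard textbook form (the generic expression, not necessarily the authors' exact working equations):

    k(T) = \kappa(T)\,\frac{k_{\mathrm{B}}T}{h}\,\frac{Q^{\ddagger}(T)}{Q_{\mathrm{A}}(T)\,Q_{\mathrm{B}}(T)}\,\exp\!\left(-\frac{E_{0}}{k_{\mathrm{B}}T}\right)

    where \kappa(T) is the Eckart tunnelling correction, Q^{\ddagger} and Q_{\mathrm{A}}, Q_{\mathrm{B}} are the partition functions of the transition state and the reactants, and E_{0} is the 0 K barrier, here obtained from the G4 single point energies.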

  2. Machine learning approaches for estimation of prediction interval for the model output.

    PubMed

    Shrestha, Durga L; Solomatine, Dimitri P

    2006-03-01

    A novel method for estimating prediction uncertainty using machine learning techniques is presented. Uncertainty is expressed in the form of the two quantiles (constituting the prediction interval) of the underlying distribution of prediction errors. The idea is to partition the input space into different zones or clusters having similar model errors using fuzzy c-means clustering. The prediction interval is constructed for each cluster on the basis of the empirical distribution of the errors associated with all instances belonging to the cluster under consideration, and propagated from each cluster to the examples according to their membership grades in each cluster. Then a regression model is built for in-sample data using the computed prediction limits as targets, and finally, this model is applied to estimate the prediction intervals (limits) for out-of-sample data. The method was tested on artificial and real hydrologic data sets using various machine learning techniques. Preliminary results show that the method is superior to other methods for estimating prediction intervals. A new method for evaluating the performance of prediction interval estimation is proposed as well.
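
    A minimal sketch of the clustering-based interval idea, with two simplifications flagged loudly: hard k-means stands in for fuzzy c-means (so there is no membership-grade propagation), and the final regression step on the limits is omitted; the data, cluster count, and quantile levels are illustrative.

    # Sketch: cluster-wise empirical prediction intervals around a simple
    # regression model; k-means replaces the paper's fuzzy c-means.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(10)
    X = rng.uniform(-3, 3, size=(500, 1))
    y = np.sin(X[:, 0]) + rng.normal(scale=0.1 + 0.1 * np.abs(X[:, 0]))

    model = LinearRegression().fit(X, y)
    errors = y - model.predict(X)                 # in-sample model errors
    km = KMeans(n_clusters=4, n_init=10, random_state=10).fit(X)
    lo = {c: np.quantile(errors[km.labels_ == c], 0.05) for c in range(4)}
    hi = {c: np.quantile(errors[km.labels_ == c], 0.95) for c in range(4)}

    x_new = np.array([[1.5]])                     # out-of-sample example
    c = km.predict(x_new)[0]
    p = model.predict(x_new)[0]
    print(f"90% prediction interval: [{p + lo[c]:.2f}, {p + hi[c]:.2f}]")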

  3. Simulation of abrasive flow machining process for 2D and 3D mixture models

    NASA Astrophysics Data System (ADS)

    Dash, Rupalika; Maity, Kalipada

    2015-12-01

    Improvement of surface finish and material removal has been quite a challenge in a finishing operation such as abrasive flow machining (AFM). Factors that affect the surface finish and material removal in the abrasive flow machining process are media viscosity, extrusion pressure, piston velocity, and particle size. Performing experiments for all the parameters and accurately obtaining an optimized parameter in a short time are difficult to accomplish because the operation requires a precise finish. Computational fluid dynamics (CFD) simulation was employed to accurately determine optimum parameters. In the current work, a 2D model was designed, and the flow analysis, force calculation, and material removal prediction were performed and compared with the available experimental data. Another 3D model for a swaging die finishing using AFM was simulated at different viscosities of the media to study the effects on the controlling parameters. A CFD simulation was performed by using commercially available ANSYS FLUENT. Two phases were considered for the flow analysis, and multiphase mixture model was taken into account. The fluid was considered to be a

  4. Machine learning models identify molecules active against the Ebola virus in vitro

    PubMed Central

    Ekins, Sean; Freundlich, Joel S.; Clark, Alex M.; Anantpadma, Manu; Davey, Robert A.; Madrid, Peter

    2016-01-01

    The search for small molecule inhibitors of Ebola virus (EBOV) has led to several high throughput screens over the past 3 years. These have identified a range of FDA-approved active pharmaceutical ingredients (APIs) with anti-EBOV activity in vitro, several of which are also active in a mouse infection model. There are millions of additional commercially-available molecules that could be screened for potential activity as anti-EBOV compounds. One way to prioritize compounds for testing is to generate computational models based on the high throughput screening data and then virtually screen compound libraries. In the current study, we have generated Bayesian machine learning models with viral pseudotype entry assay and EBOV replication assay data. We have validated the models internally and externally. We have also used these models to computationally score the MicroSource library of drugs to select those likely to be potential inhibitors. Three of the highest scoring molecules that were not in the model training sets, quinacrine, pyronaridine and tilorone, were tested in vitro and had EC50 values of 350, 420 and 230 nM, respectively. Pyronaridine is a component of a combination therapy for malaria that was recently approved by the European Medicines Agency, which may make it more readily accessible for clinical testing. Like other known antimalarial drugs active against EBOV, it shares the 4-aminoquinoline scaffold. Tilorone is an investigational antiviral agent that has shown a broad array of biological activities including cell growth inhibition in cancer cells, antifibrotic properties, α7 nicotinic receptor agonist activity, radioprotective activity and activation of hypoxia inducible factor-1. Quinacrine is an antimalarial that also has use as an anthelmintic. Our results suggest that data sets with fewer than 1,000 molecules can produce validated machine learning models that can in turn be utilized to identify novel EBOV inhibitors in vitro. PMID:26834994

  5. Modeling complex responses of FM-sensitive cells in the auditory midbrain using a committee machine.

    PubMed

    Chang, T R; Chiu, T W; Sun, X; Poon, Paul W F

    2013-11-06

    Frequency modulation (FM) is an important building block of complex sounds that include speech signals. Exploring the neural mechanisms of FM coding with computer modeling could help understand how speech sounds are processed in the brain. Here, we modeled the single unit responses of auditory neurons recorded from the midbrain of anesthetized rats. These neurons displayed spectral temporal receptive fields (STRFs) that had multiple-trigger features, and were more complex than those with single-trigger features. Their responses have not been modeled satisfactorily with simple artificial neural networks, unlike neurons with simple-trigger features. To improve model performance, here we tested an approach with the committee machine. For a given neuron, the peri-stimulus time histogram (PSTH) was first generated in response to a repeated random FM tone, and peaks in the PSTH were segregated into groups based on the similarity of their pre-spike FM trigger features. Each group was then modeled using an artificial neural network with simple architecture, and, when necessary, by increasing the number of neurons in the hidden layer. After initial training, the artificial neural networks with their optimized weighting coefficients were pooled into a committee machine for training. Finally, the model performance was tested by prediction of the response of the same cell to a novel FM tone. The results showed improvement over simple artificial neural networks, supporting that trigger-feature-based modeling can be extended to cells with complex responses. This article is part of a Special Issue entitled Neural Coding 2012.
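
    A minimal sketch of a committee machine in the sense used above: several small networks trained on the same task, with their predictions averaged. The FM trigger-feature data are synthetic stand-ins, and the per-group training and PSTH construction of the study are not reproduced.

    # Sketch: committee of small MLPs vs. a single MLP on a nonlinear
    # regression task; the committee output is the member average.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(11)
    X = rng.normal(size=(600, 8))            # pre-spike trigger features
    y = (np.tanh(X[:, 0] * X[:, 1]) + 0.5 * X[:, 2]
         + rng.normal(scale=0.1, size=600))

    members = [MLPRegressor(hidden_layer_sizes=(8,), max_iter=3000,
                            random_state=s).fit(X[:500], y[:500])
               for s in range(5)]
    committee = np.mean([m.predict(X[500:]) for m in members], axis=0)
    single = members[0].predict(X[500:])
    print("single-net MSE:", np.mean((single - y[500:]) ** 2).round(4))
    print("committee MSE: ", np.mean((committee - y[500:]) ** 2).round(4))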

  6. A hybrid flowshop scheduling model considering dedicated machines and lot-splitting for the solar cell industry

    NASA Astrophysics Data System (ADS)

    Wang, Li-Chih; Chen, Yin-Yann; Chen, Tzu-Li; Cheng, Chen-Yang; Chang, Chin-Wei

    2014-10-01

    This paper studies a solar cell industry scheduling problem, which is similar to traditional hybrid flowshop scheduling (HFS). In a typical HFS problem, the allocation of machine resources for each order should be scheduled in advance. However, the challenge in solar cell manufacturing is that the number of machines can be adjusted dynamically to complete the jobs. An optimal production scheduling model is developed to explore these issues, considering practical characteristics such as the hybrid flowshop, a parallel machine system, dedicated machines, and sequence-independent and sequence-dependent job setup times. The objective of this model is to minimise the makespan and to decide the processing sequence of the orders/lots in each stage, the lot-splitting decisions for the orders, and the number of machines used to satisfy the demands in each stage. From the experimental results, lot-splitting has a significant effect on shortening the makespan, and the improvement is influenced by the processing times and setup times of the orders. Therefore, the threshold point to improve the makespan can be identified. In addition, the model also indicates that more lot-splitting, that is, greater flexibility in allocating orders/lots to machines, results in better scheduling performance.

  7. Hybrid wavelet-support vector machine approach for modelling rainfall-runoff process.

    PubMed

    Komasi, Mehdi; Sharghi, Soroush

    2016-01-01

    Because of the importance of water resources management, the need for accurate modeling of the rainfall-runoff process has grown rapidly in the past decades. Recently, the support vector machine (SVM) approach has been used by hydrologists for rainfall-runoff modeling and other fields of hydrology. Similar to other artificial intelligence models, such as the artificial neural network (ANN) and the adaptive neural fuzzy inference system, the SVM model is based on autoregressive properties. In this paper, wavelet analysis was linked to the SVM model concept for modeling the rainfall-runoff process of the Aghchai and Eel River watersheds. The main time series of the two variables, rainfall and runoff, were decomposed into multiple frequency-band time series by wavelet theory; these time series were then fed as input data to the SVM model in order to predict the runoff discharge one day ahead. The obtained results show that the wavelet-SVM model can predict both short- and long-term runoff discharges by considering the seasonality effects. Also, the proposed hybrid model is more appropriate than classical autoregressive ones such as ANN and SVM because it uses the multi-scale time series of rainfall and runoff data in the modeling process.
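
    A rough sketch of the hybrid wavelet-SVM chain under stated assumptions: PyWavelets decomposes each series into sub-series, which then feed a support vector regressor. The wavelet family, decomposition level, and synthetic rainfall/runoff series are illustrative choices, not those of the study:

```python
import numpy as np
import pywt
from sklearn.svm import SVR

def wavelet_subseries(x, wavelet="db4", level=3):
    """Split a series into level+1 sub-series, one per wavelet band."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    bands = []
    for i in range(len(coeffs)):
        isolated = [np.zeros_like(c) for c in coeffs]
        isolated[i] = coeffs[i]          # keep one band, zero the others
        bands.append(pywt.waverec(isolated, wavelet)[: len(x)])
    return np.column_stack(bands)        # shape (len(x), level + 1)

rng = np.random.default_rng(2)
rain = rng.gamma(2.0, 1.0, size=500)                 # placeholder rainfall
runoff = np.convolve(rain, [0.3, 0.2, 0.1], "same")  # placeholder runoff

# Inputs at day t: wavelet bands of rainfall and runoff; target: runoff at t+1.
X = np.hstack([wavelet_subseries(rain), wavelet_subseries(runoff)])[:-1]
y = runoff[1:]

model = SVR(kernel="rbf", C=10.0).fit(X[:400], y[:400])
print("one-day-ahead predictions:", model.predict(X[400:405]))
```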

  8. Gain scheduled continuous-time model predictive controller with experimental validation on AC machine

    NASA Astrophysics Data System (ADS)

    Wang, Liuping; Gan, Lu

    2013-08-01

    Linear controllers with gain scheduling have been successfully used in the control of nonlinear systems for the past several decades. This paper proposes the design of a gain scheduled continuous-time model predictive controller with constraints. Using an induction machine as an illustrative example, the paper shows the four steps involved in the design of a gain scheduled predictive controller: (i) linearisation of a nonlinear plant according to operating conditions; (ii) the design of linear predictive controllers for the family of linear models; (iii) a gain scheduled predictive control law that optimises a multiple model objective function with constraints, and that also ensures smooth transitions (i.e. bumpless transfer) between the predictive controllers; (iv) experimental validation of the gain scheduled predictive control system with constraints.

  9. Glucose Oxidase Biosensor Modeling and Predictors Optimization by Machine Learning Methods †

    PubMed Central

    Gonzalez-Navarro, Felix F.; Stilianova-Stoytcheva, Margarita; Renteria-Gutierrez, Livier; Belanche-Muñoz, Lluís A.; Flores-Rios, Brenda L.; Ibarra-Esquer, Jorge E.

    2016-01-01

    Biosensors are small analytical devices incorporating a biological recognition element and a physico-chemical transducer to convert a biological signal into an electrical reading. Their technological appeal resides in their fast performance, high sensitivity and continuous measuring capabilities; however, a full understanding of their behavior is still an open research question. This paper aims to contribute to this growing field of biotechnology, with a focus on Glucose-Oxidase Biosensor (GOB) modeling through statistical learning methods from a regression perspective. We model the amperometric response of a GOB under different operating conditions, such as temperature, benzoquinone concentration, pH and glucose concentration, by means of several machine learning algorithms. Since the sensitivity of a GOB response is strongly related to these operating variables, their interactions should be optimized to maximize the output signal, for which a genetic algorithm and simulated annealing are used. We report a model that shows a good generalization error and is consistent with the optimization. PMID:27792165

  10. Modelling and simulation for table tennis referee regulation based on finite state machine.

    PubMed

    Cui, Jianjiang; Liu, Zixuan; Xu, Long

    2016-10-13

    As referees' decisions are made manually in traditional table tennis matches, many factors in a match, such as fatigue and subjective tendency, may lead to unjust decisions. Based on the finite state machine (FSM), this paper presents a model for table tennis referee regulation to substitute for manual decisions. In this model, the trajectory of the ball is recorded through a binocular visual system, while the complete rules extracted from the International Table Tennis Federation (ITTF) rules are described based on the FSM. The final decision for the competition is made based on expert system theory. Simulation results show that the proposed model has high accuracy and can be generalised to other similar games such as badminton, volleyball, etc.
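
    A toy illustration of encoding rally rules as a finite state machine. The states and events below are a drastically simplified stand-in for the ITTF rule set and the paper's expert system:

```python
# Transition table for one simplified table tennis rally:
# (current state, observed event) -> next state.
TRANSITIONS = {
    ("serve", "ball_hits_server_court"): "server_bounce",
    ("server_bounce", "ball_hits_receiver_court"): "rally",
    ("server_bounce", "ball_hits_net_then_receiver_court"): "let",  # replay
    ("rally", "ball_returned"): "rally",
    ("rally", "ball_misses_table"): "point_scored",
    ("serve", "ball_misses_table"): "point_scored",
}

def referee(events):
    """Walk the FSM over a stream of events; unknown transitions are faults."""
    state = "serve"
    for event in events:
        state = TRANSITIONS.get((state, event), "fault")
        if state in ("point_scored", "let", "fault"):
            return state
    return state

# In practice the events would come from the binocular ball-tracking system.
print(referee(["ball_hits_server_court",
               "ball_hits_receiver_court",
               "ball_misses_table"]))   # -> point_scored
```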

  11. MIP Models and Hybrid Algorithms for Simultaneous Job Splitting and Scheduling on Unrelated Parallel Machines

    PubMed Central

    Eroglu, Duygu Yilmaz; Ozmutlu, H. Cenk

    2014-01-01

    We developed mixed integer programming (MIP) models and hybrid genetic-local search algorithms for the scheduling problem of unrelated parallel machines with job sequence- and machine-dependent setup times and with the job splitting property. The first contribution of this paper is to introduce novel algorithms that perform splitting and scheduling simultaneously with a variable number of subjobs. We proposed a simple chromosome structure constituted by random key numbers in the hybrid genetic-local search algorithm (GAspLA). Random key numbers are used frequently in genetic algorithms, but they create additional difficulty when hybrid factors in local search are implemented. We developed algorithms that adapt the results of local search into the genetic algorithm with a minimum relocation operation of the genes' random key numbers; this is the second contribution of the paper. The third contribution is three new MIP models that perform splitting and scheduling simultaneously. The fourth contribution is the implementation of GAspLAMIP, which lets us verify the optimality of GAspLA for the studied combinations. The proposed methods are tested on a set of problems taken from the literature and the results validate the effectiveness of the proposed algorithms. PMID:24977204

  12. MIP models and hybrid algorithms for simultaneous job splitting and scheduling on unrelated parallel machines.

    PubMed

    Eroglu, Duygu Yilmaz; Ozmutlu, H Cenk

    2014-01-01

    We developed mixed integer programming (MIP) models and hybrid genetic-local search algorithms for the scheduling problem of unrelated parallel machines with job sequence- and machine-dependent setup times and with the job splitting property. The first contribution of this paper is to introduce novel algorithms that perform splitting and scheduling simultaneously with a variable number of subjobs. We proposed a simple chromosome structure constituted by random key numbers in the hybrid genetic-local search algorithm (GAspLA). Random key numbers are used frequently in genetic algorithms, but they create additional difficulty when hybrid factors in local search are implemented. We developed algorithms that adapt the results of local search into the genetic algorithm with a minimum relocation operation of the genes' random key numbers; this is the second contribution of the paper. The third contribution is three new MIP models that perform splitting and scheduling simultaneously. The fourth contribution is the implementation of GAspLAMIP, which lets us verify the optimality of GAspLA for the studied combinations. The proposed methods are tested on a set of problems taken from the literature and the results validate the effectiveness of the proposed algorithms.

  13. State Event Models for the Formal Analysis of Human-Machine Interactions

    NASA Technical Reports Server (NTRS)

    Combefis, Sebastien; Giannakopoulou, Dimitra; Pecheur, Charles

    2014-01-01

    The work described in this paper was motivated by our experience with applying a framework for formal analysis of human-machine interactions (HMI) to a realistic model of an autopilot. The framework is built around a formally defined conformance relation called "full-control" between an actual system and the mental model according to which the system is operated. Systems are well designed if they can be described by relatively simple, full-control mental models for their human operators. For this reason, our framework supports automated generation of minimal full-control mental models for HMI systems, where both the system and the mental models are described as labelled transition systems (LTSs). The autopilot that we analysed was developed in the NASA Ames HMI prototyping tool ADEPT. In this paper, we describe how we extended the models that our HMI analysis framework handles to allow adequate representation of ADEPT models. We then provide a property-preserving reduction from these extended models to LTSs, to enable application of our LTS-based formal analysis algorithms. Finally, we briefly discuss the analyses we were able to perform on the autopilot model with our extended framework.

  14. Unified error model based spatial error compensation for four types of CNC machining center: Part I-Singular function based unified error model

    NASA Astrophysics Data System (ADS)

    Fan, Kaiguo; Yang, Jianguo; Yang, Liyan

    2015-08-01

    To unify the error model for four types of CNC machining center, the comprehensive error model of each type was established using the homogeneous transformation matrix (HTM). The internal rules relating the HTMs to the kinematic chains were analyzed in this research. The analysis shows that HTM elements associated with motion axes located behind the reference coordinate system have positive values; conversely, HTM elements associated with motion axes located in front of the reference coordinate system have negative values. To express these internal rules, the singular function was introduced into the HTMs, and a unified error model for the four types of CNC machining center was established based on the HTM and the singular function. The unified error model includes 18 error elements, which are the main factors affecting the machining accuracy of CNC machine tools. The practical results show that the unified error model is suitable not only for vertical machining centers but also for horizontal machining centers.
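
    A small sketch of how comprehensive errors can be composed from per-axis homogeneous transformation matrices under the usual small-angle convention; the error values below are made up for illustration:

```python
import numpy as np

def htm(rx=0.0, ry=0.0, rz=0.0, tx=0.0, ty=0.0, tz=0.0):
    """4x4 homogeneous transform with small-angle rotational errors
    (rx, ry, rz, in rad) and translational errors (tx, ty, tz)."""
    return np.array([[1.0, -rz,  ry, tx],
                     [ rz, 1.0, -rx, ty],
                     [-ry,  rx, 1.0, tz],
                     [0.0, 0.0, 0.0, 1.0]])

# Error HTMs for two axes in a chain (illustrative magnitudes: microradians
# of angular error, millimetres of positioning error).
T_x_axis = htm(rz=20e-6, ty=5e-3)
T_y_axis = htm(rx=-10e-6, tx=2e-3)

# Composing along the kinematic chain gives the comprehensive error
# at the tool tip relative to the ideal position.
tool_ideal = np.array([0.0, 0.0, 100.0, 1.0])
tool_actual = T_x_axis @ T_y_axis @ tool_ideal
print("positional error:", tool_actual[:3] - tool_ideal[:3])
```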

  15. The Development of Surface Profile Models in Abrasive Slurry Jet Micro-machining of Brittle and Ductile materials

    NASA Astrophysics Data System (ADS)

    Nouraei, Hooman

    In low-pressure abrasive slurry jet micro-machining (ASJM), a slurry jet of fine abrasive particles is used to erode micro-sized features, such as holes and channels, in a variety of brittle and ductile materials with a high degree of accuracy and repeatability, without the need for a patterned mask. ASJM causes no tool wear or thermal damage, applies small forces on the workpiece, allows multilevel etching on a single substrate, and is relatively quick and inexpensive. In this study, for the first time, the mechanics of micro-slurry jet erosion and its relation to the fluid flow of the impinging jet were investigated using a newly developed ASJM system. Existing surface evolution models, previously developed for abrasive air jet machining (AJM), were evaluated and modified through the use of computational fluid dynamics (CFD) models for profile modeling of micro-channels and micro-holes machined with ASJM in brittle materials. A novel numerical-empirical model was also developed to compensate for the shortcomings of existing surface evolution models and provide a higher degree of accuracy in predicting the profiles of features machined with ASJM in ductile materials. In addition, the effect of process parameters on the minimum feature size attainable with ASJM as a maskless process was examined, and it was shown that the size of machined features could be further reduced.

  16. Thermal Error Modeling Method with the Jamming of Temperature-Sensitive Points' Volatility on CNC Machine Tools

    NASA Astrophysics Data System (ADS)

    MIAO, Enming; LIU, Yi; XU, Jianguo; LIU, Hui

    2017-03-01

    Aiming at the deficient robustness of thermal error compensation models of CNC machine tools, the mechanism for improving the models' robustness is studied using the Leaderway-V450 machining center as the object. Through analysis of actual spindle air cutting experimental data on the Leaderway-V450 machine, it is found that the temperature-sensitive points used for modeling are volatile, and this volatility directly leads to large changes in the degree of collinearity among the modeling independent variables. Thus, the forecasting accuracy of the multivariate regression model is severely affected, and the forecasting robustness becomes poor as well. To overcome this effect, a modeling method that establishes thermal error models using a single temperature variable under the jamming of temperature-sensitive points' volatility is put forward. Based on actual thermal error data measured in different seasons, it is shown that the single temperature variable model can reduce the loss of forecasting accuracy resulting from the volatility of the temperature-sensitive points; in particular, for the prediction of cross-quarter data, forecasting accuracy improves by about 5 μm or more. The goal of improving the robustness of the thermal error models is thus realized, which can provide a reference for selecting the modeling independent variable in thermal error compensation applications for CNC machine tools.

  17. Computational modeling of skin reflectance spectra for biological parameter estimation through machine learning

    NASA Astrophysics Data System (ADS)

    Vyas, Saurabh; Van Nguyen, Hien; Burlina, Philippe; Banerjee, Amit; Garza, Luis; Chellappa, Rama

    2012-06-01

    A computational skin reflectance model is used here to provide the reflectance, absorption, scattering, and transmittance based on the constitutive biological components that make up the layers of the skin. The changes in reflectance are mapped back to deviations in model parameters, which include melanosome level, collagen level and blood oxygenation. The computational model implemented in this work is based on the Kubelka-Munk multi-layer reflectance model and the Fresnel equations that describe a generic N-layer model structure. This treats the skin as a multi-layered material, with each layer characterized by specific absorption and scattering coefficients, reflectance spectra and transmittance based on the model parameters. These model parameters include melanosome level, collagen level, blood oxygenation, blood level, dermal depth, and subcutaneous tissue reflectance. We use this model, coupled with support vector machine based regression (SVR), to predict the biological parameters that make up the layers of the skin. In the proposed approach, the physics-based forward mapping is used to generate a large set of training exemplars. The samples in this dataset are then used as training inputs for the SVR algorithm to learn the inverse mapping. This approach was tested on VIS-range hyperspectral data. Performance validation of the proposed approach was performed by measuring the prediction error on the skin constitutive parameters and exhibited very promising results.
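
    A compact sketch of the forward-then-inverse strategy: a forward model generates training exemplars, and an SVR learns the inverse mapping from spectrum to parameter. The toy forward function below merely stands in for the Kubelka-Munk/Fresnel computation and is purely illustrative:

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(3)

def toy_forward(melanosome, wavelengths):
    """Placeholder for the physics-based forward model: maps a biological
    parameter to a reflectance spectrum (illustrative shape only)."""
    return np.exp(-melanosome * (wavelengths / 700.0) ** -2)

wavelengths = np.linspace(450, 700, 50)            # VIS range, nm
params = rng.uniform(0.01, 0.4, size=300)          # melanosome fraction
spectra = np.array([toy_forward(p, wavelengths) for p in params])

# Learn the inverse mapping: spectrum -> parameter.
inverse = SVR(kernel="rbf", C=100.0).fit(spectra[:250], params[:250])
pred = inverse.predict(spectra[250:255])
print(np.c_[params[250:255], pred])                # truth vs. prediction
```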

  18. Modeling of variable speed refrigerated display cabinets based on adaptive support vector machine

    NASA Astrophysics Data System (ADS)

    Cao, Zhikun; Han, Hua; Gu, Bo

    2010-01-01

    In this paper, the adaptive support vector machine (ASVM) method is introduced to the field of intelligent modeling of refrigerated display cabinets and used to construct a highly precise mathematical model of their performance. A model for a variable speed open vertical display cabinet was constructed using preprocessing techniques for the measured data, including the elimination of outlying data points by the use of an exponentially weighted moving average (EWMA). Using dynamic loss coefficient adjustment, the SVM was adapted for use in this application. From there, the objective function for energy use per unit of display area, total energy consumption (TEC) divided by total display area (TDA), was constructed and solved using the ASVM method. When compared to the results achieved using a back-propagation neural network (BPNN) model, the ASVM model for the refrigerated display cabinet was characterized by its simple structure, fast convergence speed and high prediction accuracy. The ASVM model also has better noise rejection properties than the original SVM model. The theoretical analysis and experimental results presented in this paper show that modeling the display cabinet using the ASVM method is feasible.
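
    A minimal sketch of the EWMA-based preprocessing step, assuming a simple k-sigma rejection rule around the smoothed signal (the threshold, smoothing constant and data are illustrative):

```python
import numpy as np

def ewma_filter(x, alpha=0.2, k=3.0):
    """Flag points deviating more than k standard deviations from an
    exponentially weighted moving average (illustrative parameters)."""
    smooth = np.empty_like(x, dtype=float)
    smooth[0] = x[0]
    for t in range(1, len(x)):
        smooth[t] = alpha * x[t] + (1 - alpha) * smooth[t - 1]
    resid = x - smooth
    keep = np.abs(resid) <= k * np.std(resid)
    return x[keep], keep

rng = np.random.default_rng(4)
power = 1.5 + 0.1 * rng.normal(size=200)   # placeholder cabinet energy readings
power[[50, 120]] += 2.0                    # inject two outliers
cleaned, mask = ewma_filter(power)
print("removed", (~mask).sum(), "outlying points")
```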

  19. Deriving global parameter estimates for the Noah land surface model using FLUXNET and machine learning

    NASA Astrophysics Data System (ADS)

    Chaney, Nathaniel W.; Herman, Jonathan D.; Ek, Michael B.; Wood, Eric F.

    2016-11-01

    With their origins in numerical weather prediction and climate modeling, land surface models aim to accurately partition the surface energy balance. An overlooked challenge in these schemes is the role of model parameter uncertainty, particularly at unmonitored sites. This study provides global parameter estimates for the Noah land surface model using 85 eddy covariance sites in the global FLUXNET network. The at-site parameters are first calibrated using a Latin Hypercube-based ensemble of the most sensitive parameters, determined by the Sobol method, to be the minimum stomatal resistance (rs,min), the Zilitinkevich empirical constant (Czil), and the bare soil evaporation exponent (fxexp). Calibration leads to an increase in the mean Kling-Gupta Efficiency performance metric from 0.54 to 0.71. These calibrated parameter sets are then related to local environmental characteristics using the Extra-Trees machine learning algorithm. The fitted Extra-Trees model is used to map the optimal parameter sets over the globe at a 5 km spatial resolution. The leave-one-out cross validation of the mapped parameters using the Noah land surface model suggests that there is the potential to skillfully relate calibrated model parameter sets to local environmental characteristics. The results demonstrate the potential to use FLUXNET to tune the parameterizations of surface fluxes in land surface models and to provide improved parameter estimates over the globe.
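
    A short sketch of the mapping step, assuming hypothetical site descriptors and calibrated parameter triples; scikit-learn's ExtraTreesRegressor plays the role of the Extra-Trees algorithm used in the study, and all values are synthetic:

```python
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor

rng = np.random.default_rng(5)

# Hypothetical site descriptors (e.g., climate and soil attributes) and
# the three calibrated Noah parameters per FLUXNET site (values made up).
site_features = rng.normal(size=(85, 6))
calibrated = np.column_stack([
    40 + 10 * site_features[:, 0],     # rs,min
    0.1 + 0.02 * site_features[:, 1],  # Czil
    2.0 + 0.5 * site_features[:, 2],   # fxexp
])

# Multi-output regression from local environment to parameter sets.
model = ExtraTreesRegressor(n_estimators=300, random_state=0)
model.fit(site_features, calibrated)

# "Mapping" step: predict parameter sets for unmonitored locations.
new_sites = rng.normal(size=(3, 6))
print(model.predict(new_sites))   # one (rs,min, Czil, fxexp) triple per site
```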

  20. The Art of Abstracting.

    ERIC Educational Resources Information Center

    Cremmins, Edward T.

    A three-stage analytical reading method for the composition of informative and indicative abstracts by authors and abstractors is presented in this monograph, along with background information on the abstracting process and a discussion of professional considerations in abstracting. An introduction to abstracts and abstracting precedes general…

  1. Dynamic model of heat and mass transfer in rectangular adsorber of a solar adsorption machine

    NASA Astrophysics Data System (ADS)

    Chekirou, W.; Boukheit, N.; Karaali, A.

    2016-10-01

    This paper presents the study of a rectangular adsorber of a solar adsorption cooling machine. The modeling and analysis of the adsorber are the key point of such studies because of the complex coupled heat and mass transfer phenomena that occur during the working cycle. The adsorber is heated by solar energy and contains a porous medium constituted of activated carbon AC-35 reacting by adsorption with methanol. To study the effect of the solar collector type on the system's performance, the model takes into account the variation of ambient temperature and solar intensity along a simulated day, corresponding to a total daily insolation of 26.12 MJ/m² with an average ambient temperature of 27.7 °C, which is useful for knowing the daily thermal behavior of the rectangular adsorber.

  2. Simulation modeling and tracing optimal trajectory of robotic mining machine effector

    NASA Astrophysics Data System (ADS)

    Fryanov, VN; Pavlova, LD

    2017-02-01

    Within the framework of the robotic coal mine design for deep-level coal beds with high gas content in the seismically active areas of the southern Kuzbass, the motion path parameters for the effector of a robotic mining machine are evaluated. The simulation model is intended for selecting the minimum-energy optimal trajectory of the robot effector, calculating stresses and strains in a coal bed in a variable perimeter shortwall in the course of coal extraction, determining the coordinates of the coal bed edge area with the maximum disintegration of coal, and choosing the direction in which the robot effector contacts that area to break coal at minimum energy input. It is suggested that the model be used in engineering the robot's intelligence.

  3. EBS Radionuclide Transport Abstraction

    SciTech Connect

    J.D. Schreiber

    2005-08-25

    The purpose of this report is to develop and analyze the engineered barrier system (EBS) radionuclide transport abstraction model, consistent with Level I and Level II model validation, as identified in "Technical Work Plan for: Near-Field Environment and Transport: Engineered Barrier System: Radionuclide Transport Abstraction Model Report Integration" (BSC 2005 [DIRS 173617]). The EBS radionuclide transport abstraction (or EBS RT Abstraction) is the conceptual model used in the total system performance assessment for the license application (TSPA-LA) to determine the rate of radionuclide releases from the EBS to the unsaturated zone (UZ). The EBS RT Abstraction conceptual model consists of two main components: a flow model and a transport model. Both models are developed mathematically from first principles in order to show explicitly what assumptions, simplifications, and approximations are incorporated into the models used in the TSPA-LA. The flow model defines the pathways for water flow in the EBS and specifies how the flow rate is computed in each pathway. Input to this model includes the seepage flux into a drift. The seepage flux is potentially split by the drip shield, with some (or all) of the flux being diverted by the drip shield and some passing through breaches in the drip shield that might result from corrosion or seismic damage. The flux through drip shield breaches is potentially split by the waste package, with some (or all) of the flux being diverted by the waste package and some passing through waste package breaches that might result from corrosion or seismic damage. Neither the drip shield nor the waste package survives an igneous intrusion, so the flux splitting submodel is not used in the igneous scenario class. The flow model is validated in an independent model validation technical review. The drip shield and waste package flux splitting algorithms are developed and validated using experimental data. The transport model considers

  4. Fuzzy texture model and support vector machine hybridization for land cover classification of remotely sensed images

    NASA Astrophysics Data System (ADS)

    Jenicka, S.; Suruliandi, A.

    2014-01-01

    Accuracy of land cover classification in remotely sensed images relies on the utilized classifier and extracted features. Texture features are significant in land cover classification. Traditional texture models capture only patterns with discrete boundaries, whereas fuzzy patterns should be classified by assigning due weightage to uncertainty. When a remotely sensed image contains noise, the image may have fuzzy patterns characterizing land covers and fuzzy boundaries separating them. Therefore, a fuzzy texture model is proposed for the effective classification of land covers in remotely sensed images. The model uses a Sugeno fuzzy inference system. A support vector machine (SVM) is used for the precise, fast classification of image pixels. The model is a hybrid of a fuzzy texture model and an SVM for the land cover classification of remotely sensed images. To support this proposal, experiments were conducted in three steps. In the first two steps, the proposed texture model was validated for supervised classifications and segmentation of a standard benchmark database. In the third step, the land cover classification of a remotely sensed image of LISS-IV (an Indian remote sensing satellite) is performed using a multivariate version of the proposed model. The classified image has 95.54% classification accuracy.

  5. The use of machine learning algorithms to design a generalized simplified denitrification model

    NASA Astrophysics Data System (ADS)

    Oehler, F.; Rutherford, J. C.; Coco, G.

    2010-10-01

    We propose to use machine learning (ML) algorithms to design a simplified denitrification model. Boosted regression trees (BRT) and artificial neural networks (ANN) were used to analyse the relationships and the relative influences of different input variables on total denitrification, and an ANN was designed as a simplified model to simulate total nitrogen emissions from the denitrification process. To calibrate the BRT and ANN models and test this method, we used a database obtained by collating datasets from the literature. We used bootstrapping to compute confidence intervals for the calibration and validation process. Both ML algorithms clearly outperformed a commonly used simplified model of nitrogen emissions, NEMIS, which is based on denitrification potential, temperature, soil water content and nitrate concentration. The ML models used soil organic matter % in place of a denitrification potential, and pH as a fifth input variable. The BRT analysis reaffirms the importance of temperature, soil water content and nitrate concentration. Generalization, although limited to the data space of the database used to build the ML models, could be improved if pH is used to differentiate between soil types. Further improvements in model performance and generalization could be achieved by adding more data.
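
    A minimal sketch of bootstrapped confidence intervals around an ANN's prediction, in the spirit of the methodology above; the five inputs and their relationship to emissions are synthetic placeholders, not the collated literature data:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(6)

# Placeholder denitrification dataset: five inputs (soil organic matter %,
# temperature, soil water content, nitrate, pH) and N emission as target.
X = rng.uniform(size=(300, 5))
y = X @ np.array([0.5, 0.8, 1.2, 0.9, 0.3]) + 0.05 * rng.normal(size=300)

# Bootstrap the training set to get a confidence band on a prediction.
x_new = rng.uniform(size=(1, 5))
preds = []
for b in range(30):
    idx = rng.integers(0, len(X), size=len(X))    # resample with replacement
    net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=b)
    preds.append(net.fit(X[idx], y[idx]).predict(x_new)[0])

lo, hi = np.percentile(preds, [2.5, 97.5])
print(f"prediction: {np.mean(preds):.3f}, 95% CI: [{lo:.3f}, {hi:.3f}]")
```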

  6. Modeling and Control of a Double-effect Absorption Refrigerating Machine

    NASA Astrophysics Data System (ADS)

    Hihara, Eiji; Yamamoto, Yuuji; Saito, Takamoto; Nagaoka, Yoshikazu; Nishiyama, Noriyuki

    For the purpose of improving the response to cooling load variations and the part load characteristics, the optimal operation of a double-effect absorption refrigerating machine was investigated. The test machine was designed so that the energy input and the weak solution flow rate could be controlled continuously. It is composed of a gas-fired high-temperature generator, a separator, a low-temperature generator, an absorber, a condenser, an evaporator, and high- and low-temperature heat exchangers. The working fluid is a lithium bromide-water solution. The standard output is 80 kW. Based on the experimental data, a simulation model of the static characteristics was developed. The experiments and simulation analysis indicate that there is an optimal weak solution flow rate that maximizes the coefficient of performance under any given cooling load condition. The optimal condition is closely related to the refrigerant steam flow rate flowing from the separator to the high-temperature heat exchanger with the medium solution. The heat transfer performance of the heat exchangers in the components influences the COP; the change in the overall heat transfer coefficient of the absorber has a larger effect on the COP than that of the other components.

  7. CATIA-V 3D Modeling for Design Integration of the Ignitor Machine Load Assembly^*

    NASA Astrophysics Data System (ADS)

    Bianchi, A.; Parodi, B.; Gardella, F.; Coppi, B.

    2007-11-01

    In the framework of the ANSALDO industrial contribution to the Ignitor engineering design, the detailed design of all components of the machine core (Load Assembly) has been completed. The machine Central Post, Central Solenoid, and Poloidal Field Coil systems, the Plasma Chamber and First Wall system, the surrounding mechanical structures, the Vacuum Cryostat and the polyethylene-boron sheets attached to it for neutron shielding have all been analyzed to confirm that they can withstand both normal and off-normal operating loads, as well as the Plasma Chamber and First Wall baking operations, with proper safety margins, for the maximum plasma parameter scenario at 13 T/11 MA and for the reduced scenarios at 9 T/7 MA (limiter) and at 9 T/6 MA (double null). Both 3D and 2D drawings of each individual component have been produced using the Dassault Systèmes CATIA-V software. After they were all integrated into a single 3D CATIA model of the Load Assembly, the electro-fluidic and fluidic lines that supply electrical currents and helium cooling gas to the coils were added and mechanically incorporated with the components listed above. A global seismic analysis of the Load Assembly with SSE/OBE response spectra has also been performed to verify that it is able to withstand such external events. ^*Work supported in part by ENEA of Italy and by the US D.O.E.

  8. One- and two-dimensional Stirling machine simulation using experimentally generated flow turbulence models

    NASA Technical Reports Server (NTRS)

    Goldberg, Louis F.

    1990-01-01

    Investigations of one- and two-dimensional (1- or 2-D) simulations of Stirling machines centered around experimental data generated by the University of Minnesota Mechanical Engineering Test Rig (METR) are covered. This rig was used to investigate oscillating flows about a zero mean, with emphasis on laminar/turbulent flow transitions in tubes. The Space Power Demonstrator Engine (SPDE), and in particular its heater, was the subject of the simulations. The heater was treated as a 1- or 2-D entity in an otherwise 1-D system. The 2-D flow effects impacted the transient flow predictions in the heater itself but did not have a major impact on overall system performance. Information propagation effects may be a significant issue in the simulation (if not the performance) of high-frequency, high-pressure Stirling machines. This was investigated further by comparing a simulation against an experimentally validated analytic solution for the fluid dynamics of a transmission line. The applicability of the pressure-linking algorithm for compressible flows may be limited by characteristic number (defined as flow path information traverses per cycle); this warrants further study. Lastly, the METR was simulated in 1- and 2-D. A two-parameter k-ω foldback function turbulence model was developed and tested against a limited set of METR experimental data.

  9. A geometric process model for M/PH(M/PH)/1/K queue with new service machine procurement lead time

    NASA Astrophysics Data System (ADS)

    Yu, Miaomiao; Tang, Yinghui; Fu, Yonghong

    2013-06-01

    In this article, we consider a geometric process model for an M/PH(M/PH)/1/K queue with new service machine procurement lead time. A maintenance policy (N - 1, N) based on the number of failures of the service machine is introduced into the system. We assume that a failed service machine will not be 'as good as new' after repair, and that the spare service machine for replacement is only available by order. More specifically, we suppose that the procurement lead time for delivering the spare service machine follows a phase-type (PH) distribution. Under these assumptions, we apply the matrix-analytic method to derive the steady state probabilities of the system, from which we obtain several system performance measures. Finally, employing an important lemma, the explicit expression of the long-run average cost rate for the service machine is derived, and the direct search method is implemented to determine the optimal value of N that minimises the average cost rate.

  10. Hidden Markov models and other machine learning approaches in computational molecular biology

    SciTech Connect

    Baldi, P.

    1995-12-31

    This tutorial was one of eight tutorials selected for presentation at the Third International Conference on Intelligent Systems for Molecular Biology, held in the United Kingdom from July 16 to 19, 1995. Computational tools are increasingly needed to process the massive amounts of data, to organize and classify sequences, to detect weak similarities, to separate coding from non-coding regions, and to reconstruct the underlying evolutionary history. The fundamental problem in machine learning is the same as in scientific reasoning in general, as well as in statistical modeling: to come up with a good model for the data. In this tutorial four classes of models are reviewed: Hidden Markov models; artificial neural networks; belief networks; and stochastic grammars. When dealing with DNA and protein primary sequences, Hidden Markov models are one of the most flexible and powerful tools for alignments and database searches. In this tutorial, attention is focused on the theory of Hidden Markov models and how to apply them to problems in molecular biology.
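
    As a concrete illustration of the HMM machinery discussed here, a scaled forward-algorithm sketch for a toy two-state coding/non-coding model over DNA symbols (all probabilities are made up):

```python
import numpy as np

# Toy two-state HMM over DNA symbols; computes a sequence log-likelihood
# with the scaled forward algorithm to avoid numerical underflow.
symbols = {"A": 0, "C": 1, "G": 2, "T": 3}

pi = np.array([0.5, 0.5])                       # initial state distribution
A = np.array([[0.9, 0.1],                       # transition probabilities
              [0.2, 0.8]])
B = np.array([[0.15, 0.35, 0.35, 0.15],         # emissions for "coding"
              [0.25, 0.25, 0.25, 0.25]])        # emissions for "noncoding"

def log_likelihood(seq):
    obs = [symbols[c] for c in seq]
    alpha = pi * B[:, obs[0]]
    log_p = 0.0
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        scale = alpha.sum()                     # rescale at every step
        log_p += np.log(scale)
        alpha /= scale
    return log_p + np.log(alpha.sum())

print(log_likelihood("ACGCGGTCA"))
```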

  11. Model for noise-induced hearing loss using support vector machine

    NASA Astrophysics Data System (ADS)

    Qiu, Wei; Ye, Jun; Liu-White, Xiaohong; Hamernik, Roger P.

    2005-09-01

    Contemporary noise standards are based on the assumption that an energy metric, such as the equivalent noise level, is sufficient for estimating the potential of a noise stimulus to cause noise-induced hearing loss (NIHL). Available data from laboratory-based experiments (Lei et al., 1994; Hamernik and Qiu, 2001) indicate that while an energy metric may be necessary, it is not sufficient for the prediction of NIHL. A support vector machine (SVM) NIHL prediction model was constructed based on a 550-subject (noise-exposed chinchilla) database. Training of the model used data from 367 noise-exposed subjects; the model was tested using the remaining 183 subjects. Input variables for the model included acoustic, audiometric, and biological variables, while output variables were PTS and cell loss. The results show that an energy parameter is not sufficient to predict NIHL, especially in complex noise environments. With the kurtosis and other noise and biological parameters included as additional inputs, the performance of the SVM prediction model was significantly improved. The SVM prediction model has the potential to reliably predict noise-induced hearing loss. [Work supported by NIOSH.]

  12. Ecophysiological Modeling of Grapevine Water Stress in Burgundy Terroirs by a Machine-Learning Approach.

    PubMed

    Brillante, Luca; Mathieu, Olivier; Lévêque, Jean; Bois, Benjamin

    2016-01-01

    In a climate change scenario, successful modeling of the relationships between plants, soil and meteorology is crucial for sustainable agricultural production, especially for perennial crops. Grapevines (Vitis vinifera L. cv Chardonnay) located in eight experimental plots (Burgundy, France) along a hillslope were monitored weekly for 3 years for leaf water potentials, both at predawn (Ψpd) and at midday (Ψstem). The water stress experienced by the grapevines was modeled as a function of meteorological data (minimum and maximum temperature, rainfall) and soil characteristics (soil texture, gravel content, slope) by a gradient boosting machine. Model performance was assessed by comparison with carbon isotope discrimination (δ(13)C) of grape sugars at harvest and by the use of a test set. The developed models reached outstanding prediction performance (RMSE < 0.08 MPa for Ψstem and < 0.06 MPa for Ψpd), comparable to measurement accuracy. Model predictions at a daily time step improved the correlation with δ(13)C data with respect to the trend observed at a weekly time scale. The role of each predictor in these models was described in order to understand how temperature, rainfall, soil texture, gravel content and slope affect grapevine water status in the studied context. This work proposes a straightforward strategy to simulate plant water stress in field conditions, at a local scale, to investigate ecological relationships in the vineyard and adapt cultural practices to future conditions.
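
    A brief sketch of a gradient boosting machine fitted to placeholder predictors and water potential targets, with the relative influence of each predictor read off the fitted model; the data are synthetic, not the Burgundy measurements:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(7)

# Placeholder predictors: min/max temperature, rainfall, clay %, gravel %,
# slope; target: midday stem water potential (MPa). Values are synthetic.
X = rng.uniform(size=(400, 6))
psi_stem = -(0.3 + 0.8 * X[:, 1] - 0.5 * X[:, 2] + 0.05 * rng.normal(size=400))

gbm = GradientBoostingRegressor(n_estimators=500, learning_rate=0.02,
                                max_depth=3, random_state=0)
gbm.fit(X[:320], psi_stem[:320])

rmse = np.sqrt(np.mean((gbm.predict(X[320:]) - psi_stem[320:]) ** 2))
print(f"test RMSE: {rmse:.3f} MPa")
print("relative influence of each predictor:", gbm.feature_importances_)
```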

  13. Rotary ultrasonic machining of CFRP: a mechanistic predictive model for cutting force.

    PubMed

    Cong, W L; Pei, Z J; Sun, X; Zhang, C L

    2014-02-01

    Cutting force is one of the most important output variables in rotary ultrasonic machining (RUM) of carbon fiber reinforced plastic (CFRP) composites. Many experimental investigations on cutting force in RUM of CFRP have been reported. However, in the literature, there are no cutting force models for RUM of CFRP. This paper develops a mechanistic predictive model for cutting force in RUM of CFRP. The material removal mechanism of CFRP in RUM has been analyzed first. The model is based on the assumption that brittle fracture is the dominant mode of material removal. CFRP micromechanical analysis has been conducted to represent CFRP as an equivalent homogeneous material to obtain the mechanical properties of CFRP from its components. Based on this model, relationships between input variables (including ultrasonic vibration amplitude, tool rotation speed, feedrate, abrasive size, and abrasive concentration) and cutting force can be predicted. The relationships between input variables and important intermediate variables (indentation depth, effective contact time, and maximum impact force of single abrasive grain) have been investigated to explain predicted trends of cutting force. Experiments are conducted to verify the model, and experimental results agree well with predicted trends from this model.

  14. The use of machine learning algorithms to design a generalized simplified denitrification model

    NASA Astrophysics Data System (ADS)

    Oehler, F.; Rutherford, J. C.; Coco, G.

    2010-04-01

    We designed generalized simplified models using machine learning (ML) algorithms to assess denitrification at the catchment scale. In particular, we designed an artificial neural network (ANN) to simulate total nitrogen emissions from the denitrification process. Boosted regression trees (BRT, another ML algorithm) were also used to analyse the relationships and the relative influences of different input variables on total denitrification. To calibrate the ANN and BRT models, we used a large database obtained by collating datasets from the literature. We developed a simple methodology to give confidence intervals for the calibration and validation process. Both ML algorithms clearly outperformed a commonly used simplified model of nitrogen emissions, NEMIS, which is based on denitrification potential, temperature, soil water content and nitrate concentration. The ML models used soil organic matter % in place of a denitrification potential, and pH as a fifth input variable. The BRT analysis reaffirms the importance of temperature, soil water content and nitrate concentration. The generality of the ANN model may also be improved if pH is used to differentiate between soil types. Further improvements in model performance can be achieved by lessening dataset effects.

  15. Ecophysiological Modeling of Grapevine Water Stress in Burgundy Terroirs by a Machine-Learning Approach

    PubMed Central

    Brillante, Luca; Mathieu, Olivier; Lévêque, Jean; Bois, Benjamin

    2016-01-01

    In a climate change scenario, successful modeling of the relationships between plants, soil and meteorology is crucial for sustainable agricultural production, especially for perennial crops. Grapevines (Vitis vinifera L. cv Chardonnay) located in eight experimental plots (Burgundy, France) along a hillslope were monitored weekly for 3 years for leaf water potentials, both at predawn (Ψpd) and at midday (Ψstem). The water stress experienced by the grapevines was modeled as a function of meteorological data (minimum and maximum temperature, rainfall) and soil characteristics (soil texture, gravel content, slope) by a gradient boosting machine. Model performance was assessed by comparison with carbon isotope discrimination (δ13C) of grape sugars at harvest and by the use of a test set. The developed models reached outstanding prediction performance (RMSE < 0.08 MPa for Ψstem and < 0.06 MPa for Ψpd), comparable to measurement accuracy. Model predictions at a daily time step improved the correlation with δ13C data with respect to the trend observed at a weekly time scale. The role of each predictor in these models was described in order to understand how temperature, rainfall, soil texture, gravel content and slope affect grapevine water status in the studied context. This work proposes a straightforward strategy to simulate plant water stress in field conditions, at a local scale, to investigate ecological relationships in the vineyard and adapt cultural practices to future conditions. PMID:27375651

  16. Maraging Steel Machining Improvements

    DTIC Science & Technology

    2007-04-23

    DTIC report documentation (technical success story, April 2007; dates covered 01-12-2006 to 23-04-2007). Only fragments of the abstract survive extraction: the report concerns consumers of cobalt-strengthened maraging steel, for whom an increase in production requires reducing the machining time of certain operations.

  17. Mathematical modeling and multi-criteria optimization of rotary electrical discharge machining process

    NASA Astrophysics Data System (ADS)

    Shrinivas Balraj, U.

    2015-12-01

    In this paper, mathematical modeling of three performance characteristics, namely material removal rate, surface roughness and electrode wear rate, in rotary electrical discharge machining of RENE80 nickel superalloy is carried out using a regression approach. The parameters considered are peak current, pulse on time, pulse off time and electrode rotational speed. The regression approach is very effective for mathematical modeling when the performance characteristic is influenced by many variables. Modeling these characteristics is helpful in predicting performance under any given combination of input process parameters. The adequacy of the developed models is tested by the correlation coefficient and analysis of variance, and it is observed that the models adequately establish the relationship between input parameters and performance characteristics. Further, multi-criteria optimization of the process parameter levels is carried out using a grey-based Taguchi method, with experiments planned on Taguchi's L9 orthogonal array. The proposed method employs a single grey relational grade as a performance index to obtain the optimum levels of parameters. It is found that peak current and electrode rotational speed are influential on these characteristics. Confirmation experiments conducted to validate the optimal parameters reveal improvements in material removal rate, surface roughness and electrode wear rate of 13.84%, 12.91% and 19.42%, respectively.
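
    A small numeric sketch of grey relational analysis on an L9-style response table: responses are normalized, grey relational coefficients computed, and their mean taken as the grade. All response values and the distinguishing coefficient ζ = 0.5 are illustrative:

```python
import numpy as np

# Illustrative responses for nine experiments: MRR is larger-the-better;
# surface roughness (Ra) and electrode wear rate (EWR) smaller-the-better.
mrr = np.array([4.1, 5.3, 6.0, 4.8, 5.9, 6.4, 5.1, 6.2, 7.0])
ra  = np.array([3.2, 2.9, 3.5, 2.7, 3.0, 3.3, 2.6, 2.8, 3.1])
ewr = np.array([0.9, 1.1, 1.3, 0.8, 1.0, 1.2, 0.7, 0.9, 1.1])

def normalize(x, larger_better):
    if larger_better:
        return (x - x.min()) / (x.max() - x.min())
    return (x.max() - x) / (x.max() - x.min())

def grey_relational_coeff(z, zeta=0.5):
    delta = 1.0 - z                      # deviation from the ideal sequence
    return (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())

coeffs = np.column_stack([
    grey_relational_coeff(normalize(mrr, True)),
    grey_relational_coeff(normalize(ra, False)),
    grey_relational_coeff(normalize(ewr, False)),
])
grade = coeffs.mean(axis=1)              # equal weights for the three criteria
print("best experiment (1-indexed):", grade.argmax() + 1)
```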

  18. Evaluation models for soil nutrient based on support vector machine and artificial neural networks.

    PubMed

    Li, Hao; Leng, Weijia; Zhou, Yibing; Chen, Fudi; Xiu, Zhilong; Yang, Dazuo

    2014-01-01

    Soil nutrient content is an important aspect of soil fertility and has environmental effects. Traditional approaches to evaluating soil nutrient are laborious, which creates great difficulties in practical applications. In this paper, we present a series of comprehensive evaluation models for soil nutrient using support vector machines (SVM), multiple linear regression (MLR), and artificial neural networks (ANNs), respectively. We took the content of organic matter, total nitrogen, alkali-hydrolysable nitrogen, rapidly available phosphorus, and rapidly available potassium as independent variables, while the evaluation level of soil nutrient content was taken as the dependent variable. Results show that the average prediction accuracies of the SVM models are 77.87% and 83.00%, respectively, while the general regression neural network (GRNN) model's average prediction accuracy is 92.86%, indicating that SVM and GRNN models can be used effectively to assess the level of soil nutrient with suitable variables. In practical applications, both SVM and GRNN models can be used for determining the level of soil nutrient.

  19. A Reordering Model Using a Source-Side Parse-Tree for Statistical Machine Translation

    NASA Astrophysics Data System (ADS)

    Hashimoto, Kei; Yamamoto, Hirofumi; Okuma, Hideo; Sumita, Eiichiro; Tokuda, Keiichi

    This paper presents a reordering model using a source-side parse-tree for phrase-based statistical machine translation. The proposed model is an extension of IST-ITG (imposing source tree on inversion transduction grammar) constraints. In the proposed method, the target-side word order is obtained by rotating nodes of the source-side parse-tree. We modeled the node rotation, monotone or swap, using word alignments based on a training parallel corpus and source-side parse-trees. The model efficiently suppresses erroneous target word orderings, especially global orderings. Furthermore, the proposed method conducts a probabilistic evaluation of target word reorderings. In English-to-Japanese and English-to-Chinese translation experiments, the proposed method resulted in a 0.49-point improvement (29.31 to 29.80) and a 0.33-point improvement (18.60 to 18.93) in word BLEU-4 compared with IST-ITG constraints, respectively. This indicates the validity of the proposed reordering model.

  20. Investigating driver injury severity patterns in rollover crashes using support vector machine models.

    PubMed

    Chen, Cong; Zhang, Guohui; Qian, Zhen; Tarefder, Rafiqul A; Tian, Zong

    2016-05-01

    Rollover crash is one of the major types of traffic crashes that induce fatal injuries. It is important to investigate the factors that affect rollover crashes and their influence on driver injury severity outcomes. This study employs support vector machine (SVM) models to investigate driver injury severity patterns in rollover crashes based on two-year crash data gathered in New Mexico. The impacts of various explanatory variables are examined in terms of crash and environmental information, vehicle features, and driver demographics and behavior characteristics. A classification and regression tree (CART) model is utilized to identify significant variables and SVM models with polynomial and Gaussian radius basis function (RBF) kernels are used for model performance evaluation. It is shown that the SVM models produce reasonable prediction performance and the polynomial kernel outperforms the Gaussian RBF kernel. Variable impact analysis reveals that factors including comfortable driving environment conditions, driver alcohol or drug involvement, seatbelt use, number of travel lanes, driver demographic features, maximum vehicle damages in crashes, crash time, and crash location are significantly associated with driver incapacitating injuries and fatalities. These findings provide insights for better understanding rollover crash causes and the impacts of various explanatory factors on driver injury severity patterns.

  1. Data on Support Vector Machines (SVM) model to forecast photovoltaic power.

    PubMed

    Malvoni, M; De Giorgi, M G; Congedo, P M

    2016-12-01

    The data concern the photovoltaic (PV) power forecast by a hybrid model that considers weather variations and applies a technique to reduce the input data size, as presented in the paper entitled "Photovoltaic forecast based on hybrid pca-lssvm using dimensionality reducted data" (M. Malvoni, M.G. De Giorgi, P.M. Congedo, 2015) [1]. The quadratic Renyi entropy criterion together with principal component analysis (PCA) is applied to the Least Squares Support Vector Machines (LS-SVM) to predict the PV power in the day-ahead time frame. The data shared here represent the results of the proposed approach. Hourly PV power predictions 1, 3, 6, 12 and 24 hours ahead, and for different data reduction sizes, are provided in the Supplementary material.
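
    A rough sketch of a PCA-plus-SVR pipeline for day-ahead prediction; scikit-learn's epsilon-SVR stands in for the LS-SVM used in the paper, and the features, component count and data are assumptions:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.svm import SVR

rng = np.random.default_rng(8)

# Placeholder inputs: 24 hourly weather/power features per day; target is
# PV power one day ahead (synthetic data, not the shared dataset).
X = rng.normal(size=(365, 24))
y = X[:, :4].sum(axis=1) + 0.1 * rng.normal(size=365)

# Dimensionality reduction with PCA feeding an SVR, loosely mirroring
# the PCA + LS-SVM chain described above.
model = make_pipeline(PCA(n_components=6), SVR(kernel="rbf", C=10.0))
model.fit(X[:300], y[:300])
print("day-ahead forecasts:", model.predict(X[300:305]))
```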

  2. Object Classification via Planar Abstraction

    NASA Astrophysics Data System (ADS)

    Oesau, Sven; Lafarge, Florent; Alliez, Pierre

    2016-06-01

    We present a supervised machine learning approach for the classification of objects from sampled point data. The main idea consists in first abstracting the input object into planar parts at several scales, then discriminating between the different classes of objects solely through features derived from these planar shapes. Abstracting into planar shapes provides a means both to reduce the computational complexity and to improve robustness to defects inherent in the acquisition process. Measuring statistical properties of, and relationships between, planar shapes offers invariance to scale and orientation. A random forest is then used for solving the multiclass classification problem. We demonstrate the potential of our approach on a set of indoor objects from the Princeton shape benchmark and on objects acquired from indoor scenes, and compare the performance of our method with other point-based shape descriptors.

  3. Development of hardware system using temperature and vibration maintenance models integration concepts for conventional machines monitoring: a case study

    NASA Astrophysics Data System (ADS)

    Adeyeri, Michael Kanisuru; Mpofu, Khumbulani; Kareem, Buliaminu

    2016-12-01

    This article describes the integration of temperature and vibration models for maintenance monitoring of conventional machinery parts whose optimal functioning is affected by abnormal changes in temperature and vibration values, resulting in machine failures, breakdowns, poor product quality, inability to meet customer demand, and poor inventory control, to mention a few. The work entails the use of temperature and vibration sensors as monitoring probes, programmed on a microcontroller in the C language. The developed hardware consists of an ADXL345 vibration sensor, an AD594/595 temperature sensor with a type K thermocouple, a microcontroller, a graphic liquid crystal display, a real-time clock, etc. The hardware is divided into two units working cooperatively: one based at the workstation (mainly meant to monitor machine behaviour) and the other at the base station (meant to receive the machine information transmitted from the workstation). The hardware was calibrated, tested using model verification, and validated through least-squares and regression analysis of data read from the gearboxes of the extruding and cutting machines used for polyethylene bag production. The results confirmed the correlation between time, vibration and temperature, reflecting the effective formulation of the developed concept.

  4. Development of a model of machine hand eye coordination and program specifications for a topological machine vision system

    NASA Technical Reports Server (NTRS)

    1972-01-01

    A unified approach to computer vision and manipulation is developed which is called choreographic vision. In the model, objects to be viewed by a projected robot in the Viking missions to Mars are seen as objects to be manipulated within choreographic contexts controlled by a multimoded remote, supervisory control system on Earth. A new theory of context relations is introduced as a basis for choreographic programming languages. A topological vision model is developed for recognizing objects by shape and contour. This model is integrated with a projected vision system consisting of a multiaperture image dissector TV camera and a ranging laser system. System program specifications integrate eye-hand coordination and topological vision functions and an aerospace multiprocessor implementation is described.

  5. Gaussian-binary restricted Boltzmann machines for modeling natural image statistics.

    PubMed

    Melchior, Jan; Wang, Nan; Wiskott, Laurenz

    2017-01-01

    We present a theoretical analysis of Gaussian-binary restricted Boltzmann machines (GRBMs) from the perspective of density models. The key aspect of this analysis is to show that GRBMs can be formulated as a constrained mixture of Gaussians, which gives a much better insight into the model's capabilities and limitations. We further show that GRBMs are capable of learning meaningful features without using a regularization term and that the results are comparable to those of independent component analysis. This is illustrated for both a two-dimensional blind source separation task and for modeling natural image patches. Our findings exemplify that reported difficulties in training GRBMs are due to the failure of the training algorithm rather than the model itself. Based on our analysis we derive a better training setup and show empirically that it leads to faster and more robust training of GRBMs. Finally, we compare different sampling algorithms for training GRBMs and show that Contrastive Divergence performs better than training methods that use a persistent Markov chain.

  6. Using machine learning tools to model complex toxic interactions with limited sampling regimes.

    PubMed

    Bertin, Matthew J; Moeller, Peter; Guillette, Louis J; Chapman, Robert W

    2013-03-19

    A major impediment to understanding the impact of environmental stress, including toxins and other pollutants, on organisms is that organisms are rarely challenged by only one or a few stressors in natural systems. Laboratory experiments, which practical considerations limit to a few stressors at a few levels, are therefore hard to link to real-world conditions. In addition, while the existence of complex interactions among stressors can be identified by current statistical methods, these methods do not provide a means to construct mathematical models of these interactions. In this paper, we offer a two-step process by which complex interactions of stressors on biological systems can be modeled in an experimental design that is within the limits of practicality. We begin with the notion that environmental conditions circumscribe an n-dimensional hyperspace within which biological processes or end points are embedded. We then randomly sample this hyperspace to establish experimental conditions that span the range of the relevant parameters and conduct the experiment(s) based upon these selected conditions. Models of the complex interactions of the parameters are then extracted using machine learning tools, specifically artificial neural networks. This approach can rapidly generate highly accurate models of biological responses to complex interactions among environmentally relevant toxins, identify critical subspaces where nonlinear responses exist, and provide an expedient means of designing traditional experiments to test the impact of complex mixtures on biological responses. Further, this can be accomplished with an astonishingly small sample size.

  7. Copper Conductivity Model Development and Validation Using Flyer Plate Experiments on the Z-machine

    NASA Astrophysics Data System (ADS)

    Riford, L.; Lemke, R. W.; Cochrane, K.

    2015-11-01

    Magnetically accelerated flyer plate experiments conducted on Sandia's Z-machine provide insight into a multitude of materials problems at high energies and densities, including conductivity model development and validation. In an experiment with ten Cu flyer plates of thicknesses 500-1000 μm, VISAR measurements exhibit a characteristic jump in the velocity correlated with magnetic field burn-through and the expansion of melted material at the free surface. The experiment is modeled using Sandia's shock and multiphysics MHD code ALEGRA. Simulated free surface velocities are within 1% of the measured data early in time, but divergence occurs at the feature, where the simulation indicates a slower burn-through time. The cause was found to be in the compressed regime of the Cu conductivity model. The model was improved by lowering the conductivity in the region 12.5-16 g/cc and 350-16000 K with a novel parameter-based optimization method using the velocity feature as a figure of merit. Sandia National Laboratories is a multiprogram laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U. S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  8. Development of robust calibration models using support vector machines for spectroscopic monitoring of blood glucose

    PubMed Central

    Barman, Ishan; Kong, Chae-Ryon; Dingari, Narahara Chari; Dasari, Ramachandra R.; Feld, Michael S.

    2010-01-01

    Sample-to-sample variability has proven to be a major challenge in achieving calibration transfer in quantitative biological Raman spectroscopy. Multiple morphological and optical parameters, such as tissue absorption and scattering, physiological glucose dynamics and skin heterogeneity, vary significantly across a human population, introducing non-analyte-specific features into the calibration model. In this paper, we show that fluctuations of such parameters in human subjects introduce curved (non-linear) effects in the relationship between the concentrations of the analyte of interest and the mixture Raman spectra. To account for these curved effects, we propose the use of support vector machines (SVM) as a non-linear regression method over conventional linear regression techniques such as partial least squares (PLS). Using transcutaneous blood glucose detection as an example, we demonstrate that application of SVM enables a significant improvement (at least 30%) in cross-validation accuracy over PLS when measurements from multiple human volunteers are employed in the calibration set. Furthermore, using physical tissue models with randomized analyte concentrations and varying turbidities, we show that fluctuations in turbidity alone cause curved effects which can only be adequately modeled using non-linear regression techniques. The enhanced levels of accuracy obtained with the SVM-based calibration models open up avenues for prospective prediction in humans and thus for clinical translation of the technology. PMID:21050004
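
    As a hedged illustration of why a non-linear regressor can outperform PLS when a multiplicative parameter such as turbidity varies across samples, the sketch below compares the two on a synthetic curved calibration; the two pseudo-spectral channels and all constants are invented for illustration and are not Raman data.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.svm import SVR
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)
    conc = rng.uniform(50, 300, 200)            # "glucose concentration" targets
    turbidity = rng.uniform(0.5, 1.5, 200)      # subject-dependent scattering

    # Two pseudo-spectral channels distorted multiplicatively by turbidity,
    # giving a curved concentration-signal relationship.
    X = np.column_stack([conc * turbidity, conc / turbidity])
    X += rng.normal(0, 5, X.shape)

    for name, model in [("PLS", PLSRegression(n_components=2)),
                        ("SVR", SVR(kernel="rbf", C=100.0, epsilon=1.0))]:
        r2 = cross_val_score(model, X, conc, cv=5, scoring="r2").mean()
        print(f"{name}: mean cross-validated R^2 = {r2:.3f}")
    ```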

  9. Modeling workflow to design machine translation applications for public health practice

    PubMed Central

    Turner, Anne M.; Brownstein, Megumu K.; Cole, Kate; Karasz, Hilary; Kirchhoff, Katrin

    2014-01-01

    Objective Provide a detailed understanding of the information workflow processes related to translating health promotion materials for limited English proficiency individuals in order to inform the design of context-driven machine translation (MT) tools for public health (PH). Materials and Methods We applied a cognitive work analysis framework to investigate the translation information workflow processes of two large health departments in Washington State. Researchers conducted interviews, performed a task analysis, and validated results with PH professionals to model translation workflow and identify functional requirements for a translation system for PH. Results The study resulted in a detailed description of work related to translation of PH materials, an information workflow diagram, and a description of attitudes towards MT technology. We identified a number of themes that hold design implications for incorporating MT in PH translation practice. A PH translation tool prototype was designed based on these findings. Discussion This study underscores the importance of understanding the work context and information workflow for which systems will be designed. Based on themes and translation information workflow processes, we identified key design guidelines for incorporating MT into PH translation work. Primary amongst these is that MT should be followed by human review for translations to be of high quality and for the technology to be adopted into practice. Conclusion The time and costs of creating multilingual health promotion materials are barriers to translation. PH personnel were interested in MT's potential to improve access to low-cost translated PH materials, but expressed concerns about ensuring quality. We outline design considerations and a potential machine translation tool to best fit MT systems into PH practice. PMID:25445922

  10. A Hybrid Short-Term Traffic Flow Prediction Model Based on Singular Spectrum Analysis and Kernel Extreme Learning Machine

    PubMed Central

    Lin, Ciyun; Yang, Zhaosheng; Bing, Qichun; Zhou, Xiyang

    2016-01-01

    Short-term traffic flow prediction is one of the most important issues in the field of intelligent transport systems (ITS). Because of its uncertainty and nonlinearity, short-term traffic flow prediction is a challenging task. In order to improve its accuracy, a hybrid model (SSA-KELM) is proposed based on singular spectrum analysis (SSA) and the kernel extreme learning machine (KELM). SSA is used to filter out the noise of the traffic flow time series. The filtered traffic flow data are then used to train the KELM model; the optimal input form of the proposed model is determined by phase space reconstruction, and the parameters of the model are optimized by a gravitational search algorithm (GSA). Finally, case validation is carried out using measured data from an expressway in Xiamen, China, and the SSA-KELM model is compared with several well-known prediction models, including the support vector machine, the extreme learning machine, and a single KELM model. The experimental results demonstrate that the performance of the proposed model is superior to that of the comparison models. Apart from the accuracy improvement, the proposed model is more robust. PMID:27551829
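
    A compact sketch of the SSA filtering step alone, assuming a one-dimensional traffic-flow series; the window length, component count, and the synthetic series are illustrative choices, and the KELM and GSA stages are omitted.

    ```python
    import numpy as np

    def ssa_filter(series, window=24, n_components=3):
        """Denoise a series by keeping the leading SSA components."""
        n = len(series)
        k = n - window + 1
        # Trajectory (Hankel) matrix: lagged copies of the series as columns.
        traj = np.column_stack([series[i:i + window] for i in range(k)])
        u, s, vt = np.linalg.svd(traj, full_matrices=False)
        # Rank-truncated reconstruction keeps trend/periodic structure.
        low_rank = (u[:, :n_components] * s[:n_components]) @ vt[:n_components]
        # Diagonal averaging maps the matrix back to a series.
        out = np.zeros(n)
        counts = np.zeros(n)
        for j in range(k):
            out[j:j + window] += low_rank[:, j]
            counts[j:j + window] += 1
        return out / counts

    # Synthetic daily-periodic "traffic flow" with measurement noise.
    flow = (np.sin(np.linspace(0, 8 * np.pi, 288))
            + np.random.default_rng(2).normal(0, 0.3, 288))
    smoothed = ssa_filter(flow)
    ```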

  11. A Hybrid Short-Term Traffic Flow Prediction Model Based on Singular Spectrum Analysis and Kernel Extreme Learning Machine.

    PubMed

    Shang, Qiang; Lin, Ciyun; Yang, Zhaosheng; Bing, Qichun; Zhou, Xiyang

    2016-01-01

    Short-term traffic flow prediction is one of the most important issues in the field of intelligent transport systems (ITS). Because of its uncertainty and nonlinearity, short-term traffic flow prediction is a challenging task. In order to improve its accuracy, a hybrid model (SSA-KELM) is proposed based on singular spectrum analysis (SSA) and the kernel extreme learning machine (KELM). SSA is used to filter out the noise of the traffic flow time series. The filtered traffic flow data are then used to train the KELM model; the optimal input form of the proposed model is determined by phase space reconstruction, and the parameters of the model are optimized by a gravitational search algorithm (GSA). Finally, case validation is carried out using measured data from an expressway in Xiamen, China, and the SSA-KELM model is compared with several well-known prediction models, including the support vector machine, the extreme learning machine, and a single KELM model. The experimental results demonstrate that the performance of the proposed model is superior to that of the comparison models. Apart from the accuracy improvement, the proposed model is more robust.

  12. Mathematical Modeling and Simulation of the Pressing Section of a Paper Machine Including Dynamic Capillary Effect

    NASA Astrophysics Data System (ADS)

    Printsypar, G.; Iliev, O.; Rief, S.

    2011-12-01

    Paper production is a challenging problem which attracts the attention of many scientists. The process of interest here takes place in the pressing section of a paper machine, where the paper layer is dried by pressing it against fabrics, i.e., press felts. The paper-felt sandwich is transported through the press nips at high speed (for more details see [3]). Since the natural drainage of water in the felts takes much longer than the drying in the pressing section, we include the dynamic capillary effect in our considerations. The dynamic capillary pressure-saturation relation proposed by Hassanizadeh and Gray (see [2]) is adopted for the pressing process. Another issue taken into account while modeling the pressing section is the appearance of fully saturated regions. We consider two flow regimes, one-phase water flow and two-phase air-water flow, which leads to a free boundary problem. We also account for the complexity of the porous structure of the paper-felt sandwich: apart from the two flow regimes, the computational domain is divided by layers into nonoverlapping subdomains. The system of equations describing transport processes in the pressing section is then stated taking all these features into account. The presented model is discretized by the finite volume method. We carry out numerical experiments for different configurations of the pressing section (roll press, shoe press) and for parameters typical of the paper-felt sandwich during the paper production process. The experiments show that the dynamic capillary effect has a significant influence on the distribution of pressure even for small values of the material coefficient (see Fig. 1). The obtained results are in agreement with the laboratory experiment performed in [1], which states that the distribution of the pressure is not symmetric, with the maximum value occurring in front of the center of the pressing nip and the minimum value less than entry

  13. Gaussian-binary restricted Boltzmann machines for modeling natural image statistics

    PubMed Central

    Wang, Nan; Wiskott, Laurenz

    2017-01-01

    We present a theoretical analysis of Gaussian-binary restricted Boltzmann machines (GRBMs) from the perspective of density models. The key aspect of this analysis is to show that GRBMs can be formulated as a constrained mixture of Gaussians, which gives a much better insight into the model’s capabilities and limitations. We further show that GRBMs are capable of learning meaningful features without using a regularization term and that the results are comparable to those of independent component analysis. This is illustrated for both a two-dimensional blind source separation task and for modeling natural image patches. Our findings exemplify that reported difficulties in training GRBMs are due to the failure of the training algorithm rather than the model itself. Based on our analysis we derive a better training setup and show empirically that it leads to faster and more robust training of GRBMs. Finally, we compare different sampling algorithms for training GRBMs and show that Contrastive Divergence performs better than training methods that use a persistent Markov chain. PMID:28152552

  14. Light and short arc rubs in rotating machines: Experimental tests and modelling

    NASA Astrophysics Data System (ADS)

    Pennacchi, P.; Bachschmid, N.; Tanzi, E.

    2009-10-01

    Rotor-to-stator rub is a non-linear phenomenon which has been analyzed many times in the rotordynamics literature, but very often these studies are devoted simply to highlighting non-linearities, using very simple rotors, rather than to presenting reliable models. However, rotor-to-stator rub is actually one of the most common faults during the operation of rotating machinery. The frequency of its occurrence is increasing due to the trend of reducing the radial clearance between the seal and the rotor in modern turbine units, pumps and compressors in order to increase efficiency. Often the rub occurs between rotor and seals, and the analysis of the phenomenon cannot set aside consideration of their different relative stiffnesses. This paper presents some experimental results obtained by means of a test rig in which the rub conditions of real machines are reproduced. In particular, short arc rubs are considered and the shaft is stiffer than the obstacle. A model suitable for real rotating machinery is then presented, and the simulations obtained are compared with the experimental results. The model is able to reproduce the behaviour of the test rig.

  15. Accurate Models of Formation Enthalpy Created using Machine Learning and Voronoi Tessellations

    NASA Astrophysics Data System (ADS)

    Ward, Logan; Liu, Rosanne; Krishna, Amar; Hegde, Vinay; Agrawal, Ankit; Choudhary, Alok; Wolverton, Chris

    Several groups in the past decade have used high-throughput Density Functional Theory (DFT) to predict the properties of hundreds of thousands of compounds. These databases provide the unique capability of quickly querying the properties of many compounds. Here, we explore how these datasets can also be used to create models that predict the properties of compounds at rates several orders of magnitude faster than DFT. Our method relies on using Voronoi tessellations to derive attributes that quantitatively characterize the local environment around each atom, which are then used as input to a machine learning model. In this presentation, we will discuss the application of this technique to predicting the formation enthalpy of compounds using data from the Open Quantum Materials Database (OQMD). To date, we have found that this technique can be used to create models that are about twice as accurate as those created using the Coulomb Matrix and Partial Radial Distribution approaches, and equally fast to evaluate.
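
    A rough sketch of the first step under simplifying assumptions: deriving an elementary Voronoi attribute (the number of neighbouring cells per atom) with scipy and feeding aggregated statistics to a regressor. The descriptors used in the actual work are far richer (face areas, elemental properties, etc.), and the target values below are synthetic.

    ```python
    import numpy as np
    from scipy.spatial import Voronoi
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(3)

    def coordination_numbers(points):
        """Count Voronoi neighbours (cells sharing a face) for each point."""
        vor = Voronoi(points)
        counts = np.zeros(len(points))
        for a, b in vor.ridge_points:   # pairs of input points sharing a ridge
            counts[a] += 1
            counts[b] += 1
        return counts

    # One feature vector per "structure": mean and spread of coordination numbers.
    X, y = [], []
    for _ in range(50):
        atoms = rng.uniform(0, 1, size=(30, 3))   # fake atomic positions
        cn = coordination_numbers(atoms)
        X.append([cn.mean(), cn.std()])
        y.append(-0.1 * cn.mean() + rng.normal(0, 0.01))  # fake formation enthalpy

    model = RandomForestRegressor(random_state=0).fit(X, y)
    ```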

  16. Geometric dimension model of virtual astronaut body for ergonomic analysis of man-machine space system

    NASA Astrophysics Data System (ADS)

    Qianxiang, Zhou

    2012-07-01

    It is very important to clarify the geometric characteristics of human body segments and to construct analysis models for ergonomic design and the application of ergonomic virtual humans. Typical anthropometric data of 1122 Chinese men aged 20-35 years were collected using a three-dimensional laser scanner for the human body. According to the correlations between different parameters, curve fitting was performed between seven trunk parameters and ten body parameters with the SPSS 16.0 software. It can be concluded that hip circumference and shoulder breadth are the most important parameters in the models, and these two parameters correlate highly with the other parameters of the human body. By comparison with conventional regression curves, the present regression equations with the seven trunk parameters forecast the geometric dimensions of the head, neck, height, and the four limbs more accurately. Therefore, this is greatly valuable for the ergonomic design and analysis of man-machine systems. The result will be very useful for astronaut body model analysis and application.

  17. Modelling effect of magnetic field on material removal in dry electrical discharge machining

    NASA Astrophysics Data System (ADS)

    Abhishek, Gupta; Suhas, S. Joshi

    2017-02-01

    One of the reasons for the increased material removal rate in magnetic field assisted dry electrical discharge machining (EDM) is the confinement of the plasma due to Lorentz forces. This paper presents a mathematical model to evaluate the effect of an external magnetic field on crater depth and diameter in the single- and multiple-discharge dry EDM process. The model incorporates three main effects of the magnetic field: plasma confinement, mean free path reduction, and pulsating magnetic field effects. Upon application of an external magnetic field, Lorentz forces developed across the plasma column confine it. The magnetic field also reduces the mean free path of electrons, due to an increase in the plasma pressure and the cycloidal path taken by the electrons between the electrodes. As the mean free path of the electrons decreases, more ionization occurs in the plasma column and the current density in the inter-electrode gap eventually increases. The model results for crater depth and diameter in the single-discharge dry EDM process show an error of 9%-10% relative to the respective experimental values.

  18. Curriculum Assessment Using Artificial Neural Network and Support Vector Machine Modeling Approaches: A Case Study. IR Applications. Volume 29

    ERIC Educational Resources Information Center

    Chen, Chau-Kuang

    2010-01-01

    Artificial Neural Network (ANN) and Support Vector Machine (SVM) approaches have been on the cutting edge of science and technology for pattern recognition and data classification. In the ANN model, classification accuracy can be achieved by using the feed-forward of inputs, back-propagation of errors, and the adjustment of connection weights. In…

  19. Revisiting Arabic Diglossic Switching in Light of the MLF Model and Its Sub-Models: The 4-M Model and the Abstract Level Model.

    ERIC Educational Resources Information Center

    Boussofara-Omar, Naima

    2003-01-01

    Discusses two problematic cases that arose when the Matrix Language frame model of codeswitching was applied to Arabic diglossic switching: a co-occurrence of system morphemes from both varieties of Arabic within a single CP; and CPs in which the word order is that of the dialect but the system morphemes are from Standard Arabic and CPs in which…

  20. Machine Shop Grinding Machines.

    ERIC Educational Resources Information Center

    Dunn, James

    This curriculum manual is one in a series of machine shop curriculum manuals intended for use in full-time secondary and postsecondary classes, as well as part-time adult classes. The curriculum can also be adapted to open-entry, open-exit programs. Its purpose is to equip students with basic knowledge and skills that will enable them to enter the…

  1. Estimating the domain of applicability for machine learning QSAR models: a study on aqueous solubility of drug discovery molecules

    NASA Astrophysics Data System (ADS)

    Schroeter, Timon Sebastian; Schwaighofer, Anton; Mika, Sebastian; Ter Laak, Antonius; Suelzle, Detlev; Ganzer, Ursula; Heinrich, Nikolaus; Müller, Klaus-Robert

    2007-09-01

    We investigate the use of different Machine Learning methods to construct models for aqueous solubility. Models are based on about 4000 compounds, including an in-house set of 632 drug discovery molecules of Bayer Schering Pharma. For each method, we also consider an appropriate way to obtain error bars, in order to estimate the domain of applicability (DOA) for each model. Here, we investigate error bars from a Bayesian model (Gaussian Process (GP)), an ensemble-based approach (Random Forest), and approaches based on the Mahalanobis distance to the training data (for Support Vector Machine and Ridge Regression models). We evaluate all approaches in terms of their prediction accuracy (in cross-validation, and on an external validation set of 536 molecules) and in terms of how faithfully the individual error bars represent the actual prediction error.
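
    A small sketch of the Gaussian Process route to a domain-of-applicability estimate: the predictive standard deviation grows for queries far from the training data, flagging them as out of domain. Descriptors and property values here are synthetic stand-ins, not solubility data.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(4)
    X_train = rng.normal(0, 1, size=(200, 5))       # in-domain descriptors
    y_train = X_train @ rng.normal(size=5) + rng.normal(0, 0.1, 200)

    gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
    gp.fit(X_train, y_train)

    X_query = np.vstack([rng.normal(0, 1, (5, 5)),   # similar to training set
                         rng.normal(5, 1, (5, 5))])  # far outside it
    mean, std = gp.predict(X_query, return_std=True)
    # Large `std` flags molecules outside the model's domain of applicability.
    print(std.round(2))
    ```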

  2. Estimating the domain of applicability for machine learning QSAR models: a study on aqueous solubility of drug discovery molecules

    NASA Astrophysics Data System (ADS)

    Schroeter, Timon Sebastian; Schwaighofer, Anton; Mika, Sebastian; Ter Laak, Antonius; Suelzle, Detlev; Ganzer, Ursula; Heinrich, Nikolaus; Müller, Klaus-Robert

    2007-12-01

    We investigate the use of different Machine Learning methods to construct models for aqueous solubility. Models are based on about 4000 compounds, including an in-house set of 632 drug discovery molecules of Bayer Schering Pharma. For each method, we also consider an appropriate way to obtain error bars, in order to estimate the domain of applicability (DOA) for each model. Here, we investigate error bars from a Bayesian model (Gaussian Process (GP)), an ensemble-based approach (Random Forest), and approaches based on the Mahalanobis distance to the training data (for Support Vector Machine and Ridge Regression models). We evaluate all approaches in terms of their prediction accuracy (in cross-validation, and on an external validation set of 536 molecules) and in terms of how faithfully the individual error bars represent the actual prediction error.

  3. An Insight to the Modeling of 1 × 1 Rib Loop Formation Process on Circular Weft Knitting Machine using Computer

    NASA Astrophysics Data System (ADS)

    Ray, Sadhan Chandra

    2015-10-01

    The mechanics of single jersey loop formation is well reported in the literature. However, no model of the double jersey loop formation process is available in the accessible international literature. It was therefore planned to develop a computer model of the 1 × 1 rib loop formation process on a dial and cylinder machine, so that the influence of various input variables on the final loop length, as well as on the profile of tension on the yarn inside the Knitting Zone (KZ), can be understood. The model provides an insight into the mechanics of the 1 × 1 rib loop formation system on a dial and cylinder machine. Moreover, the degree of agreement between predicted and measured values of loop length and cam forces, together with a theoretical analysis of the model, justifies the acceptability of the model.

  4. Machine Learning

    NASA Astrophysics Data System (ADS)

    Hoffmann, Achim; Mahidadia, Ashesh

    The purpose of this chapter is to present fundamental ideas and techniques of machine learning suitable for the field of this book, i.e., for automated scientific discovery. The chapter focuses on those symbolic machine learning methods which produce results that are suitable to be interpreted and understood by humans. This is particularly important in the context of automated scientific discovery, as the scientific theories to be produced by machines are usually meant to be interpreted by humans. This chapter contains some of the most influential ideas and concepts in machine learning research to give the reader a basic insight into the field. After the introduction in Sect. 1, general ideas of how learning problems can be framed are given in Sect. 2. The section provides useful perspectives to better understand what learning algorithms actually do. Section 3 presents the Version space model, an early learning algorithm as well as a conceptual framework that provides important insight into the general mechanisms behind most learning algorithms. In Sect. 4, a family of learning algorithms for learning classification rules, the AQ family, is presented. The AQ family belongs to the early approaches in machine learning. Section 5 presents the basic principles of decision tree learners, which belong to the most influential class of inductive learning algorithms today. A more recent group of learning systems is presented in Sect. 6, which learn relational concepts within the framework of logic programming. This is a particularly interesting group of learning systems since the framework also allows background knowledge to be incorporated, which may assist in generalisation. Section 7 discusses Association Rules - a technique that comes from the related field of Data mining. Section 8 presents the basic idea of the Naive Bayesian Classifier. While this is a very popular learning technique, the learning result is not well suited for

  5. Support Vector Machine Model for Automatic Detection and Classification of Seismic Events

    NASA Astrophysics Data System (ADS)

    Barros, Vesna; Barros, Lucas

    2016-04-01

    The automated processing of multiple seismic signals to detect, localize and classify seismic events is a central tool in both natural hazards monitoring and nuclear treaty verification. However, false detections and missed detections caused by station noise and incorrect classification of arrivals are still an issue, and events are often unclassified or poorly classified. Machine learning techniques can therefore be used in automatic processing to classify the huge database of seismic recordings and provide more confidence in the final output. Applied in the context of the International Monitoring System (IMS) - a global sensor network developed for the Comprehensive Nuclear-Test-Ban Treaty (CTBT) - we propose a fully automatic method for seismic event detection and classification based on a supervised pattern recognition technique called the Support Vector Machine (SVM). According to Kortström et al. (2015), the advantages of using SVM include its ability to handle a large number of features and its effectiveness in high-dimensional spaces. Our objective is to detect seismic events from one IMS seismic station located in an area of high seismicity and mining activity and classify them as earthquakes or quarry blasts. The aim is to create a flexible and easily adjustable SVM method that can be applied to different regions and datasets. Taking this a step further, accurate results for seismic stations could lead to a modification of the model and its parameters to make it applicable to other waveform technologies used to monitor nuclear explosions, such as infrasound and hydroacoustic waveforms. As authorized users, we have direct access to all IMS data and bulletins through a secure signatory account. A set of significant seismic waveforms containing different types of events (e.g. earthquakes, quarry blasts) and noise is being analysed to train the model and learn the typical pattern of the signal from these events. Moreover, comparing the performance of the support

  6. Modeling Plan-Related Clinical Complications Using Machine Learning Tools in a Multiplan IMRT Framework

    SciTech Connect

    Zhang, Hao H.; D'Souza, Warren D.; Shi, Leyuan; Meyer, Robert R.

    2009-08-01

    Purpose: To predict organ-at-risk (OAR) complications as a function of dose-volume (DV) constraint settings without explicit plan computation in a multiplan intensity-modulated radiotherapy (IMRT) framework. Methods and Materials: Several plans were generated by varying the DV constraints (input features) on the OARs (multiplan framework), and the DV levels achieved by the OARs in the plans (plan properties) were modeled as a function of the imposed DV constraint settings. OAR complications were then predicted for each of the plans by using the imposed DV constraints alone (features) or in combination with modeled DV levels (plan properties) as input to machine learning (ML) algorithms. These ML approaches were used to model two OAR complications after head-and-neck and prostate IMRT: xerostomia, and Grade 2 rectal bleeding. Two-fold cross-validation was used for model verification and mean errors are reported. Results: Errors for modeling the achieved DV values as a function of constraint settings were 0-6%. In the head-and-neck case, the mean absolute prediction error of the saliva flow rate normalized to the pretreatment saliva flow rate was 0.42% with a 95% confidence interval of (0.41-0.43%). In the prostate case, an average prediction accuracy of 97.04% with a 95% confidence interval of (96.67-97.41%) was achieved for Grade 2 rectal bleeding complications. Conclusions: ML can be used for predicting OAR complications during treatment planning allowing for alternative DV constraint settings to be assessed within the planning framework.

  7. Prediction of chronic damage in systemic lupus erythematosus by using machine-learning models

    PubMed Central

    Perricone, Carlo; Galvan, Giulio; Morelli, Francesco; Vicente, Luis Nunes; Leccese, Ilaria; Massaro, Laura; Cipriano, Enrica; Spinelli, Francesca Romana; Alessandri, Cristiano; Valesini, Guido; Conti, Fabrizio

    2017-01-01

    Objective The increased survival of Systemic Lupus Erythematosus (SLE) patients implies the development of chronic damage, which occurs in up to 50% of cases. Its prevention is a major goal in SLE management. We aimed at predicting chronic damage in a large monocentric SLE cohort by using neural networks. Methods We enrolled 413 SLE patients (M/F 30/383; mean age ± SD 46.3 ± 11.9 years; mean disease duration ± SD 174.6 ± 112.1 months). Chronic damage was assessed by the SLICC/ACR Damage Index (SDI). We applied Recurrent Neural Networks (RNNs) as a machine-learning model to predict the risk of chronic damage. The clinical data sequences registered for each patient during follow-up were used for building and testing the RNNs. Results At the first visit to the Lupus Clinic, 35.8% of patients had an SDI ≥ 1. For the RNN model, two groups of patients were analyzed: patients with SDI = 0 at baseline who developed damage during follow-up (N = 38), and patients who remained without damage (SDI = 0). We created a mathematical model with an AUC value of 0.77 that is able to predict damage development. A threshold value of 0.35 (sensitivity 0.74, specificity 0.76) seemed able to identify patients at risk of developing damage. Conclusion We applied RNNs to identify a prediction model for SLE chronic damage. The use of longitudinal data from the Sapienza Lupus Cohort, including laboratory and clinical items, made it possible to construct a mathematical model that can potentially identify patients at risk of developing damage. PMID:28329014

  8. Hybrid polylingual object model: an efficient and seamless integration of Java and native components on the Dalvik virtual machine.

    PubMed

    Huang, Yukun; Chen, Rong; Wei, Jingbo; Pei, Xilong; Cao, Jing; Prakash Jayaraman, Prem; Ranjan, Rajiv

    2014-01-01

    JNI in the Android platform is often observed to have low efficiency and high coding complexity. Although many researchers have investigated the JNI mechanism, few of them solve the efficiency and complexity problems of JNI in the Android platform simultaneously. In this paper, a hybrid polylingual object (HPO) model is proposed to allow a CAR object to be accessed as a Java object, and vice versa, in the Dalvik virtual machine. It is an acceptable substitute for JNI for reusing CAR-compliant components in Android applications in a seamless and efficient way. A metadata injection mechanism is designed to support the automatic mapping and reflection between CAR objects and Java objects. A prototype virtual machine, called HPO-Dalvik, is implemented by extending the Dalvik virtual machine to support the HPO model. Lifespan management, garbage collection, and data type transformation of HPO objects are also handled automatically in the HPO-Dalvik virtual machine. The experimental results show that the HPO model outperforms standard JNI, with lower overhead on the native side and better execution performance, while requiring no JNI bridging code.

  9. Hidden Markov model and support vector machine based decoding of finger movements using electrocorticography

    NASA Astrophysics Data System (ADS)

    Wissel, Tobias; Pfeiffer, Tim; Frysch, Robert; Knight, Robert T.; Chang, Edward F.; Hinrichs, Hermann; Rieger, Jochem W.; Rose, Georg

    2013-10-01

    Objective. Support vector machines (SVM) have developed into a gold standard for accurate classification in brain-computer interfaces (BCI). The choice of the most appropriate classifier for a particular application depends on several characteristics in addition to decoding accuracy. Here we investigate the implementation of hidden Markov models (HMM) for online BCIs and discuss strategies to improve their performance. Approach. We compare the SVM, serving as a reference, and HMMs for classifying discrete finger movements obtained from electrocorticograms of four subjects performing a finger tapping experiment. The classifier decisions are based on a subset of low-frequency time domain and high gamma oscillation features. Main results. We show that differences in decoding performance between the two approaches are due mainly to the way features are extracted and selected, and depend less on the classifier. An additional gain in HMM performance of up to 6% was obtained by introducing model constraints. Comparable accuracies of up to 90% were achieved with both SVM and HMM, with the high gamma cortical response providing the most important decoding information for both techniques. Significance. We discuss technical HMM characteristics and adaptations in the context of the presented data as well as for general BCI applications. Our findings suggest that HMMs and their characteristics are promising for efficient online BCIs.
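
    A schematic of per-class HMM decoding under stated assumptions: it uses the third-party hmmlearn package, one Gaussian HMM per finger class, and classification by maximum log-likelihood; the feature trajectories are synthetic stand-ins for ECoG features, not the authors' pipeline.

    ```python
    import numpy as np
    from hmmlearn.hmm import GaussianHMM

    rng = np.random.default_rng(11)

    def make_trials(offset, n_trials=20, length=50, dim=4):
        """Synthetic feature trajectories for one movement class."""
        return [rng.normal(offset, 1.0, size=(length, dim)) for _ in range(n_trials)]

    classes = {0: make_trials(0.0), 1: make_trials(1.5)}   # two "fingers"
    models = {}
    for label, trials in classes.items():
        X = np.vstack(trials)                  # concatenated sequences
        lengths = [len(t) for t in trials]     # per-trial lengths
        models[label] = GaussianHMM(n_components=3, random_state=0).fit(X, lengths)

    test = rng.normal(1.5, 1.0, size=(50, 4))  # unseen class-1 trial
    pred = max(models, key=lambda k: models[k].score(test))
    print("predicted finger:", pred)
    ```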

  10. A practical approach to model selection for support vector machines with a Gaussian kernel.

    PubMed

    Varewyck, Matthias; Martens, Jean-Pierre

    2011-04-01

    When learning a support vector machine (SVM) from a set of labeled development patterns, the ultimate goal is to get a classifier attaining a low error rate on new patterns. This so-called generalization ability obviously depends on the choices of the learning parameters that control the learning process. Model selection is the method for identifying appropriate values for these parameters. In this paper, a novel model selection method for SVMs with a Gaussian kernel is proposed. Its aim is to find suitable values for the kernel parameter γ and the cost parameter C with a minimum amount of central processing unit time. The determination of the kernel parameter is based on the argument that, for most patterns, the decision function of the SVM should consist of a sufficiently large number of significant contributions. A unique property of the proposed method is that it retrieves the kernel parameter as a simple analytical function of the dimensionality of the feature space and the dispersion of the classes in that space. An experimental evaluation on a test bed of 17 classification problems has shown that the new method favorably competes with two recently published methods: the classification of new patterns is equally good, but the computational effort to identify the learning parameters is substantially lower.
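
    For contrast, here is a sketch of the conventional cross-validated grid search over γ and C that such analytical model selection aims to undercut in CPU time; the dataset is a synthetic stand-in.

    ```python
    from sklearn.datasets import make_classification
    from sklearn.model_selection import GridSearchCV
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=400, n_features=10, random_state=0)
    grid = GridSearchCV(
        SVC(kernel="rbf"),
        param_grid={"gamma": [1e-3, 1e-2, 1e-1, 1.0],
                    "C": [0.1, 1.0, 10.0, 100.0]},
        cv=5,
    )
    grid.fit(X, y)
    print(grid.best_params_)   # every (gamma, C) pair costs a full CV run
    ```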

  11. Manifest: A computer program for 2-D flow modeling in Stirling machines

    NASA Technical Reports Server (NTRS)

    Gedeon, David

    1989-01-01

    A computer program named Manifest is discussed. Manifest is a program one might want to use to model the fluid dynamics in the manifolds commonly found between the heat exchangers and regenerators of Stirling machines; but not just in the manifolds - in the regenerators as well. And in all sorts of other places too, such as: in heaters or coolers, or perhaps even in cylinder spaces. There are probably nonStirling uses for Manifest also. In broad strokes, Manifest will: (1) model oscillating internal compressible laminar fluid flow in a wide range of two-dimensional regions, either filled with porous materials or empty; (2) present a graphics-based user-friendly interface, allowing easy selection and modification of region shape and boundary condition specification; (3) run on a personal computer, or optionally (in the case of its number-crunching module) on a supercomputer; and (4) allow interactive examination of the solution output so the user can view vector plots of flow velocity, contour plots of pressure and temperature at various locations and tabulate energy-related integrals of interest.

  12. Working with Simple Machines

    ERIC Educational Resources Information Center

    Norbury, John W.

    2006-01-01

    A set of examples is provided that illustrate the use of work as applied to simple machines. The ramp, pulley, lever and hydraulic press are common experiences in the life of a student, and their theoretical analysis therefore makes the abstract concept of work more real. The mechanical advantage of each of these systems is also discussed so that…

  13. Contaminant dispersion prediction and source estimation with integrated Gaussian-machine learning network model for point source emission in atmosphere.

    PubMed

    Ma, Denglong; Zhang, Zaoxiao

    2016-07-05

    Gas dispersion models are important for predicting gas concentrations when contaminant gas leakage occurs. Intelligent network models such as the radial basis function (RBF) network, the back propagation (BP) neural network, and the support vector machine (SVM) can be used for gas dispersion prediction. However, the prediction results from these network models, which take many inputs based on the original monitoring parameters, are not in good agreement with experimental data. Therefore, a new series of machine learning algorithm (MLA) models, combining the classic Gaussian model with MLA methods, is presented. The prediction results from the new models are greatly improved. Among these models, the Gaussian-SVM model performs best, and its computation time is close to that of the classic Gaussian dispersion model. Finally, the Gaussian-MLA models were applied to identifying the emission source parameters with the particle swarm optimization (PSO) method. The estimation performance of PSO with Gaussian-MLA is better than that with the Gaussian model, the Lagrangian stochastic (LS) dispersion model, or network models based on the original monitoring parameters. Hence, the new prediction model based on Gaussian-MLA is potentially a good method for predicting contaminant gas dispersion, as well as a good forward model for the emission source parameter identification problem.
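
    A hedged sketch of the hybrid idea: a classic Gaussian plume estimate, rather than raw monitoring parameters, is fed to an SVM together with wind speed. The source strength, dispersion coefficients, and "observations" are all synthetic stand-ins.

    ```python
    import numpy as np
    from sklearn.svm import SVR

    rng = np.random.default_rng(5)

    def gaussian_plume(q, u, y, z, h, sy, sz):
        """Gaussian plume concentration for a continuous point source."""
        return (q / (2 * np.pi * u * sy * sz)
                * np.exp(-y**2 / (2 * sy**2))
                * (np.exp(-(z - h)**2 / (2 * sz**2))
                   + np.exp(-(z + h)**2 / (2 * sz**2))))

    n = 300
    u = rng.uniform(1, 8, n)                  # wind speed
    x = rng.uniform(50, 500, n)               # downwind receptor distance
    y = rng.uniform(-50, 50, n)               # crosswind offset
    sy, sz = 0.08 * x, 0.06 * x               # crude dispersion coefficients
    base = gaussian_plume(1e4, u, y, 2.0, 10.0, sy, sz)

    # "Observed" concentrations deviate nonlinearly from the plume estimate.
    obs = base * (1 + 0.3 * np.tanh(u - 4)) + rng.normal(0, 0.01, n)

    features = np.column_stack([base, u])     # Gaussian output + state parameter
    model = SVR(kernel="rbf", C=10.0).fit(features, obs)
    ```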

  14. Multiscale modeling of biological functions: from enzymes to molecular machines (Nobel Lecture).

    PubMed

    Warshel, Arieh

    2014-09-15

    A detailed understanding of the action of biological molecules is a pre-requisite for rational advances in health sciences and related fields. Here, the challenge is to move from available structural information to a clear understanding of the underlying function of the system. In light of the complexity of macromolecular complexes, it is essential to use computer simulations to describe how the molecular forces are related to a given function. However, using a full and reliable quantum mechanical representation of large molecular systems has been practically impossible. The solution to this (and related) problems has emerged from the realization that large systems can be spatially divided into a region where the quantum mechanical description is essential (e.g. a region where bonds are being broken), with the remainder of the system being represented on a simpler level by empirical force fields. This idea has been particularly effective in the development of the combined quantum mechanics/molecular mechanics (QM/MM) models. Here, the coupling between the electrostatic effects of the quantum and classical subsystems has been a key to the advances in describing the functions of enzymes and other biological molecules. The same idea of representing complex systems in different resolutions in both time and length scales has been found to be very useful in modeling the action of complex systems. In such cases, starting with coarse grained (CG) representations that were originally found to be very useful in simulating protein folding, and augmenting them with a focus on electrostatic energies, has led to models that are particularly effective in probing the action of molecular machines. The same multiscale idea is likely to play a major role in modeling of even more complex systems, including cells and collections of cells.

  15. Multiscale Modeling of Biological Functions: From Enzymes to Molecular Machines (Nobel Lecture)

    PubMed Central

    Warshel, Arieh

    2016-01-01

    A detailed understanding of the action of biological molecules is a pre-requisite for rational advances in health sciences and related fields. Here, the challenge is to move from available structural information to a clear understanding of the underlying function of the system. In light of the complexity of macromolecular complexes, it is essential to use computer simulations to describe how the molecular forces are related to a given function. However, using a full and reliable quantum mechanical representation of large molecular systems has been practically impossible. The solution to this (and related) problems has emerged from the realization that large systems can be spatially divided into a region where the quantum mechanical description is essential (e.g. a region where bonds are being broken), with the remainder of the system being represented on a simpler level by empirical force fields. This idea has been particularly effective in the development of the combined quantum mechanics/molecular mechanics (QM/MM) models. Here, the coupling between the electrostatic effects of the quantum and classical subsystems has been a key to the advances in describing the functions of enzymes and other biological molecules. The same idea of representing complex systems in different resolutions in both time and length scales has been found to be very useful in modeling the action of complex systems. In such cases, starting with coarse grained (CG) representations that were originally found to be very useful in simulating protein folding, and augmenting them with a focus on electrostatic energies, has led to models that are particularly effective in probing the action of molecular machines. The same multiscale idea is likely to play a major role in modeling of even more complex systems, including cells and collections of cells. PMID:25060243

  16. Highly predictive support vector machine (SVM) models for anthrax toxin lethal factor (LF) inhibitors.

    PubMed

    Zhang, Xia; Amin, Elizabeth Ambrose

    2016-01-01

    Anthrax is a highly lethal, acute infectious disease caused by the rod-shaped, Gram-positive bacterium Bacillus anthracis. The anthrax toxin lethal factor (LF), a zinc metalloprotease secreted by the bacilli, plays a key role in anthrax pathogenesis and is chiefly responsible for anthrax-related toxemia and host death, partly via inactivation of mitogen-activated protein kinase kinase (MAPKK) enzymes and consequent disruption of key cellular signaling pathways. Antibiotics such as fluoroquinolones are capable of clearing the bacilli but have no effect on LF-mediated toxemia; LF itself therefore remains the preferred target for toxin inactivation. However, currently no LF inhibitor is available on the market as a therapeutic, partly due to the insufficiency of existing LF inhibitor scaffolds in terms of efficacy, selectivity, and toxicity. In the current work, we present novel support vector machine (SVM) models with high prediction accuracy that are designed to rapidly identify potential novel, structurally diverse LF inhibitor chemical matter from compound libraries. These SVM models were trained and validated using 508 compounds with published LF biological activity data and 847 inactive compounds deposited in the PubChem BioAssay database. One model, M1, demonstrated particularly favorable selectivity toward highly active compounds by correctly predicting 39 (95.12%) out of 41 nanomolar-level LF inhibitors, 46 (93.88%) out of 49 inactives, and 844 (99.65%) out of 847 PubChem inactives in external, unbiased test sets. These models are expected to facilitate the prediction of LF inhibitory activity for existing molecules, as well as identification of novel potential LF inhibitors from large datasets.

  17. Compliance modeling and analysis of a 3-RPS parallel kinematic machine module

    NASA Astrophysics Data System (ADS)

    Zhang, Jun; Zhao, Yanqin; Dai, Jiansheng

    2014-07-01

    The compliance modeling and rigidity performance evaluation of lower-mobility parallel manipulators remain two overwhelming challenges at the conceptual design stage due to their geometric complexities. Using screw theory, this paper explores the compliance modeling and eigencompliance evaluation of a newly patented 1T2R spindle head whose topological architecture is a 3-RPS parallel mechanism. The kinematic definitions and inverse position analysis are briefly addressed first to provide the necessary information for compliance modeling. By considering the 3-RPS parallel kinematic machine (PKM) as a typical compliant parallel device, whose three limb assemblages have bending, extending and torsional deflections, an analytical compliance model for the spindle head is established with screw theory and the analytical stiffness matrix of the platform is formulated. Based on the eigenscrew decomposition, the eigencompliance and corresponding eigenscrews are analyzed and the platform's compliance properties are physically interpreted as the suspension of six screw springs. The distributions of the stiffness constants of the six screw springs throughout the workspace are predicted in a quick manner with a piece-by-piece calculation algorithm. The numerical simulation reveals a strong dependency of the platform's compliance on its configuration, in that the distributions are axially symmetric due to structural features. Finally, the effects of design variables such as structural, configurational and dimensional parameters on the system's rigidity characteristics are investigated, with the purpose of providing useful information for the structural design and performance improvement of the PKM. Compared with previous efforts in the compliance analysis of PKMs, the present methodology is more intuitive and universal and can thus be easily applied to evaluate the overall rigidity performance of other PKMs with high efficiency.
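
    As a simplified numerical counterpart, the sketch below eigendecomposes a symmetric 6×6 stiffness matrix and inverts it to a compliance matrix; the paper's eigenscrew decomposition additionally pairs each eigenvalue with a screw axis, and the matrix entries here are placeholders rather than the 3-RPS model.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    A = rng.normal(size=(6, 6))
    K = A @ A.T + 6 * np.eye(6)           # symmetric positive-definite stiffness

    eigvals, eigvecs = np.linalg.eigh(K)  # principal stiffness constants (ascending)
    C = np.linalg.inv(K)                  # compliance matrix
    print("stiffness eigenvalues:", eigvals.round(2))
    # Direction of the smallest stiffness, i.e. largest compliance:
    print("most compliant direction:", eigvecs[:, 0].round(2))
    ```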

  18. In situ monitoring and machine modeling of snowpack evolution in complex terrains

    NASA Astrophysics Data System (ADS)

    Frolik, J.; Skalka, C.

    2014-12-01

    It is well known that snowpack evolution depends on a variety of landscape conditions, including tree cover, slope, wind exposure, etc. In this presentation we report on methods that combine modern in-situ sensor technologies with machine learning-based algorithms to obtain improved models of snowpack evolution. Snowcloud is an embedded data collection system for snow hydrology field research campaigns that leverages distributed wireless sensor network technology to provide data at low cost and high spatial-temporal resolution. The system is compact, allowing it to be deployed readily within dense canopies and/or on steep slopes. The system has demonstrated robustness over multiple seasons of operation, showing that it is applicable not only to short-term strategic monitoring but to extended studies as well. We have used data collected by Snowcloud deployments to develop improved models of snowpack evolution using genetic programming (GP). Such models can be used to augment existing sensor infrastructure to obtain better areal snow depth and snow-water equivalent estimates. The presented work will discuss three multi-season deployments and present data (collected at 1-3 hour intervals and at multiple locations) on snow depth variation throughout the season. The three deployment sites (Eastern Sierra Mountains, CA; Hubbard Brook Experimental Forest, NH; and Sulitjelma, Norway) are varied not only geographically but also in terrain within each small study area (~2.5 hectare). We will also discuss models generated by inductive (GP) learning, including non-linear regression techniques and their evaluation, and how short-term Snowcloud field campaigns can augment existing infrastructure.
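
    A sketch of the genetic-programming step, assuming the third-party gplearn package; the features (canopy, slope, wind exposure), the snow-depth response, and all hyperparameters are illustrative assumptions rather than the deployed models.

    ```python
    import numpy as np
    from gplearn.genetic import SymbolicRegressor

    rng = np.random.default_rng(10)
    X = rng.uniform(0, 1, size=(200, 3))   # canopy, slope, wind exposure indices
    depth = (2.0 - 1.2 * X[:, 0] + 0.5 * X[:, 1] * X[:, 2]
             + rng.normal(0, 0.05, 200))   # synthetic snow depth response

    gp = SymbolicRegressor(population_size=500, generations=20, random_state=0)
    gp.fit(X, depth)
    print(gp._program)   # evolved closed-form expression for snow depth
    ```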

  19. Abstraction and Consolidation

    ERIC Educational Resources Information Center

    Monaghan, John; Ozmantar, Mehmet Fatih

    2006-01-01

    The framework for this paper is a recently developed theory of abstraction in context. The paper reports on data collected from one student working on tasks concerned with absolute value functions. It examines the relationship between mathematical constructions and abstractions. It argues that an abstraction is a consolidated construction that can…

  20. Chaotic Boltzmann machines

    NASA Astrophysics Data System (ADS)

    Suzuki, Hideyuki; Imura, Jun-Ichi; Horio, Yoshihiko; Aihara, Kazuyuki

    2013-04-01

    The chaotic Boltzmann machine proposed in this paper is a chaotic pseudo-billiard system that works as a Boltzmann machine. Chaotic Boltzmann machines are shown numerically to have computing abilities comparable to conventional (stochastic) Boltzmann machines. Since no randomness is required, efficient hardware implementation is expected. Moreover, the ferromagnetic phase transition of the Ising model is shown to be characterised by the largest Lyapunov exponent of the proposed system. In general, a method to relate probabilistic models to nonlinear dynamics by derandomising Gibbs sampling is presented.
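
    For reference, the stochastic baseline being derandomised is ordinary Gibbs sampling of an Ising model, sketched below on a small random coupling matrix; the chaotic Boltzmann machine replaces the random draw in the inner loop with deterministic pseudo-billiard dynamics, which this sketch does not attempt to reproduce.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    n = 16
    W = rng.normal(0, 0.5, (n, n))
    W = (W + W.T) / 2                 # symmetric couplings
    np.fill_diagonal(W, 0.0)          # no self-coupling
    s = rng.choice([-1, 1], size=n)   # initial spin configuration

    for sweep in range(1000):         # Gibbs sweeps at unit temperature
        for i in range(n):
            field = W[i] @ s          # local field on unit i
            p_up = 1.0 / (1.0 + np.exp(-2.0 * field))
            s[i] = 1 if rng.random() < p_up else -1   # the random draw
    ```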

  1. Chaotic Boltzmann machines.

    PubMed

    Suzuki, Hideyuki; Imura, Jun-ichi; Horio, Yoshihiko; Aihara, Kazuyuki

    2013-01-01

    The chaotic Boltzmann machine proposed in this paper is a chaotic pseudo-billiard system that works as a Boltzmann machine. Chaotic Boltzmann machines are shown numerically to have computing abilities comparable to conventional (stochastic) Boltzmann machines. Since no randomness is required, efficient hardware implementation is expected. Moreover, the ferromagnetic phase transition of the Ising model is shown to be characterised by the largest Lyapunov exponent of the proposed system. In general, a method to relate probabilistic models to nonlinear dynamics by derandomising Gibbs sampling is presented.

  2. Reification of abstract concepts to improve comprehension using interactive virtual environments and a knowledge-based design: a renal physiology model.

    PubMed

    Alverson, Dale C; Saiki, Stanley M; Caudell, Thomas P; Goldsmith, Timothy; Stevens, Susan; Saland, Linda; Colleran, Kathleen; Brandt, John; Danielson, Lee; Cerilli, Lisa; Harris, Alexis; Gregory, Martin C; Stewart, Randall; Norenberg, Jeffery; Shuster, George; Panaoitis; Holten, James; Vergera, Victor M; Sherstyuk, Andrei; Kihmm, Kathleen; Lui, Jack; Wang, Kin Lik

    2006-01-01

    Several abstract concepts in medical education are difficult to teach and comprehend. In order to address this challenge, we have been applying the approach of reification of abstract concepts using interactive virtual environments and a knowledge-based design. Reification is the process of making abstract concepts and events, beyond the realm of direct human experience, concrete and accessible to teachers and learners. Entering virtual worlds and simulations not otherwise easily accessible provides an opportunity to create, study, and evaluate the emergence of knowledge and comprehension from the direct interaction of learners with otherwise complex abstract ideas and principles by bringing them to life. Using a knowledge-based design process and appropriate subject matter experts, knowledge structure methods are applied in order to prioritize, characterize important relationships, and create a concept map that can be integrated into the reified models that are subsequently developed. Applying these principles, our interdisciplinary team has been developing a reified model of the nephron, into which important physiologic functions can be integrated and rendered in a three-dimensional virtual environment called Flatland, a virtual environment development software tool, within which learners can interact using off-the-shelf hardware. The nephron model can be driven dynamically by a rules-based artificial intelligence engine, applying the rules and concepts developed in conjunction with the subject matter experts. In the future, the nephron model can be used to interactively demonstrate a number of physiologic principles or a variety of pathological processes that may be difficult to teach and understand. In addition, this approach to reification can be applied to a host of other physiologic and pathological concepts in other systems. These methods will require further evaluation to determine their impact and role in learning.

  3. Solving the AI Planning Plus Scheduling Problem Using Model Checking via Automatic Translation from the Abstract Plan Preparation Language (APPL) to the Symbolic Analysis Laboratory (SAL)

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; Munoz, Cesar A.; Siminiceanu, Radu I.

    2007-01-01

    This paper describes a translator from a new planning language named the Abstract Plan Preparation Language (APPL) to the Symbolic Analysis Laboratory (SAL) model checker. This translator has been developed in support of the Spacecraft Autonomy for Vehicles and Habitats (SAVH) project sponsored by the Exploration Technology Development Program, which is seeking to mature autonomy technology for the vehicles and operations centers of Project Constellation.

  4. Dry machinability of aluminum alloys.

    SciTech Connect

    Shareef, I.; Natarajan, M.; Ajayi, O. O.; Energy Technology; Department of IMET

    2005-01-01

    Adverse effects of the use of cutting fluids and environmental concerns regarding cutting fluid disposability are compelling industry to adopt dry or near-dry machining, with the aim of eliminating or significantly reducing the use of metal working fluids. With EPA regulations on metal cutting pending, dry machining is becoming a hot topic of research and investigation both in industry and in federal research labs. Although the need for dry machining may be apparent, most manufacturers still consider dry machining to be impractical and, even if possible, very expensive. This perception is mainly due to the lack of appropriate cutting tools that can withstand the intense heat and built-up-edge (BUE) formation during dry machining. The challenge of heat dissipation without coolant requires a completely different approach to tooling. Special tooling utilizing high-performance multi-layer, multi-component, heat-resisting, low-friction coatings could be a plausible answer to the challenge of dry machining. In pursuit of this goal, Argonne National Labs has introduced nano-crystalline near frictionless carbon (NFC) diamond-like coatings (DLC), while industrial efforts have led to the introduction of composite coatings such as titanium aluminum nitride (TiAlN), tungsten carbide/carbon (WC/C) and others. Although these coatings are considered to be very promising, they have not been tested from either a tribological or a dry machining applications point of view. As such, a research program in partnership with federal labs and industrial sponsors has started with the goal of exploring the feasibility of dry machining using newly developed coatings such as near frictionless carbon (NFC) coatings, titanium aluminum nitride (TiAlN), and multi-layer multi-component nano coatings such as TiAlCrYN and TiAlN/YN. Although various coatings are under investigation as part of the overall dry machinability program, this extended abstract deals with a systematic investigation of dry

  5. Machine Learning Based Multi-Physical-Model Blending for Enhancing Renewable Energy Forecast -- Improvement via Situation Dependent Error Correction

    SciTech Connect

    Lu, Siyuan; Hwang, Youngdeok; Khabibrakhmanov, Ildar; Marianno, Fernando J.; Shao, Xiaoyan; Zhang, Jie; Hodge, Bri-Mathias; Hamann, Hendrik F.

    2015-07-15

    With the increasing penetration of solar and wind energy into the total energy supply mix, the pressing need for accurate energy forecasting has become well recognized. Here we report the development of a machine-learning based model-blending approach for statistically combining multiple meteorological models to improve the accuracy of solar/wind power forecasts. Importantly, we demonstrate that, in addition to the parameters to be predicted (such as solar irradiance and power), including additional atmospheric state parameters which collectively define weather situations as machine learning input provides further enhanced accuracy for the blended result. Functional analysis of variance shows that the error of each individual model has a substantial dependence on the weather situation. The machine-learning approach effectively reduces such situation-dependent error and thus produces more accurate results than conventional multi-model ensemble approaches based on simplistic equally or unequally weighted model averaging. Validation results over an extended period of time show an over 30% improvement in solar irradiance/power forecast accuracy compared to forecasts based on the best individual model.
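
    A schematic of situation-dependent blending under stated assumptions: two synthetic "model" forecasts with complementary, cloud-dependent biases are combined by a learned regressor that also sees the weather-situation variable. The model names, biases, and data are invented for illustration.

    ```python
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(8)
    n = 2000
    cloud = rng.uniform(0, 1, n)                  # weather-situation variable
    truth = 800 * (1 - 0.7 * cloud) + rng.normal(0, 20, n)     # "irradiance"
    model_a = truth + 60 * cloud + rng.normal(0, 30, n)        # biased when cloudy
    model_b = truth - 40 * (1 - cloud) + rng.normal(0, 30, n)  # biased when clear

    X = np.column_stack([model_a, model_b, cloud])  # forecasts + state parameter
    X_tr, X_te, y_tr, y_te = train_test_split(X, truth, random_state=0)
    blend = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)

    # The learned blend corrects each model's situation-dependent bias:
    avg_r2 = 1 - np.mean((X_te[:, :2].mean(1) - y_te) ** 2) / np.var(y_te)
    print("simple average R^2:", avg_r2)
    print("learned blend  R^2:", blend.score(X_te, y_te))
    ```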

  6. Abstraction and Problem Reformulation

    NASA Technical Reports Server (NTRS)

    Giunchiglia, Fausto

    1992-01-01

    In work done jointly with Toby Walsh, the author has provided a sound theoretical foundation to the process of reasoning with abstraction (GW90c, GW89, GW90b, GW90a). The notion of abstraction formalized in this work can be informally described as: (property 1) the process of mapping a representation of a problem, called (following historical convention (Sac74)) the 'ground' representation, onto a new representation, called the 'abstract' representation, which (property 2) helps deal with the problem in the original search space by preserving certain desirable properties and (property 3) is simpler to handle as it is constructed from the ground representation by "throwing away details". One desirable property preserved by an abstraction is provability; often there is a relationship between provability in the ground representation and provability in the abstract representation. Another can be deduction or, possibly, inconsistency. By 'throwing away details' we usually mean that the problem is described in a language with a smaller search space (for instance a propositional language or a language without variables) in which formulae of the abstract representation are obtained from the formulae of the ground representation by the use of some terminating rewriting technique. Often we require that the use of abstraction results in more efficient reasoning. However, it might simply increase the number of facts asserted (e.g. by allowing, in practice, the exploration of deeper search spaces or by implementing some form of learning). Among all abstractions, three very important classes have been identified. They relate the set of facts provable in the ground space to those provable in the abstract space. We call TI abstractions all those abstractions where the abstractions of all the provable facts of the ground space are provable in the abstract space; TD abstractions all those abstractions where the 'unabstractions' of all the provable facts of the abstract space are

  7. Abstraction in mathematics.

    PubMed Central

    Ferrari, Pier Luigi

    2003-01-01

    Some current interpretations of abstraction in mathematical settings are examined from different perspectives, including history and learning. It is argued that abstraction is a complex concept and that it cannot be reduced to generalization or decontextualization only. In particular, the links between abstraction processes and the emergence of new objects are shown. The role that representations have in abstraction is discussed, taking into account both the historical and the educational perspectives. As languages play a major role in mathematics, some ideas from functional linguistics are applied to explain to what extent mathematical notations are to be considered abstract. Finally, abstraction is examined from the perspective of mathematics education, to show that the teaching ideas resulting from one-dimensional interpretations of abstraction have proved utterly unsuccessful. PMID:12903658

  8. Advancing brain-machine interfaces: moving beyond linear state space models

    PubMed Central

    Rouse, Adam G.; Schieber, Marc H.

    2015-01-01

    Advances in recent years have dramatically improved output control by Brain-Machine Interfaces (BMIs). Such devices nevertheless remain robotic and limited in their movements compared to normal human motor performance. Most current BMIs rely on transforming recorded neural activity to a linear state space composed of a set number of fixed degrees of freedom. Here we consider a variety of ways in which BMI design might be advanced further by applying non-linear dynamics observed in normal motor behavior. We consider (i) the dynamic range and precision of natural movements, (ii) differences between cortical activity and actual body movement, (iii) kinematic and muscular synergies, and (iv) the implications of large neuronal populations. We advance the hypothesis that a given population of recorded neurons may transmit more useful information than can be captured by a single, linear model across all movement phases and contexts. We argue that incorporating these various non-linear characteristics will be an important next step in advancing BMIs to more closely match natural motor performance. PMID:26283932

  9. Application of machine learning algorithms for clinical predictive modeling: a data-mining approach in SCT.

    PubMed

    Shouval, R; Bondi, O; Mishan, H; Shimoni, A; Unger, R; Nagler, A

    2014-03-01

    Data collected from hematopoietic SCT (HSCT) centers are becoming more abundant and complex owing to the formation of organized registries and the incorporation of biological data. Typically, conventional statistical methods are used for the development of outcome prediction models and risk scores. However, these analyses carry inherent properties that limit their ability to cope with large data sets with multiple variables and samples. Machine learning (ML), a field stemming from artificial intelligence, is part of a wider approach for data analysis termed data mining (DM). It enables prediction in complex data scenarios familiar to practitioners and researchers. Technological and commercial applications are all around us, gradually entering clinical research. In the following review, we would like to expose hematologists and stem cell transplanters to the concepts, clinical applications, strengths, and limitations of such methods and discuss current research in HSCT. The aim of this review is to encourage utilization of ML and DM techniques in the field of HSCT, including prediction of transplantation outcome and donor selection.

  10. Fullrmc, a rigid body reverse monte carlo modeling package enabled with machine learning and artificial intelligence

    DOE PAGES

    Aoun, Bachir

    2016-01-22

    Here, a new Reverse Monte Carlo (RMC) package ‘fullrmc’ for atomic or rigid body and molecular, amorphous or crystalline materials is presented. fullrmc's main purpose is to provide fully modular, fast and flexible software, thoroughly documented, complex-molecule enabled, written in a modern programming language (Python, Cython, C, and C++ where performance is needed) and complying with modern programming practices. fullrmc's approach to solving an atomic or molecular structure is different from existing RMC algorithms and software. In a nutshell, traditional RMC methods and software randomly adjust atom positions until the whole system has the greatest consistency with a set of experimental data. In contrast, fullrmc applies smart moves endorsed with reinforcement machine learning to groups of atoms. While fullrmc allows running traditional RMC modelling, the uniqueness of this approach resides in its ability to customize the grouping of atoms in any convenient way with no additional programming effort, and to apply smart and more physically meaningful moves to the defined groups of atoms. fullrmc also provides a unique way, at almost no additional computational cost, to recur a group's selection, allowing the system to escape local minima by refining a group's position or by exploring, through and beyond disallowed positions and energy barriers, the unrestricted three-dimensional space around a group.

  12. Machine learning based compartment models with permeability for white matter microstructure imaging.

    PubMed

    Nedjati-Gilani, Gemma L; Schneider, Torben; Hall, Matt G; Cawley, Niamh; Hill, Ioana; Ciccarelli, Olga; Drobnjak, Ivana; Wheeler-Kingshott, Claudia A M Gandini; Alexander, Daniel C

    2017-04-15

    Some microstructure parameters, such as permeability, remain elusive because mathematical models that accurately express their relationship to the MR signal are intractable. Here, we propose to use computational models learned from simulations to estimate these parameters. We demonstrate the approach in an example which estimates water residence time in brain white matter. The residence time τi of water inside axons is a potentially important biomarker for white matter pathologies of the human central nervous system, as myelin damage is hypothesised to affect axonal permeability, and thus τi. We construct a computational model using Monte Carlo simulations and machine learning (specifically here a random forest regressor) in order to learn a mapping between features derived from diffusion weighted MR signals and ground truth microstructure parameters, including τi. We test our numerical model using simulated and in vivo human brain data. Simulation results show that estimated parameters have strong correlations with the ground truth parameters (R² = {0.88, 0.95, 0.82, 0.99} for volume fraction, residence time, axon radius, and diffusivity, respectively), and provide a marked improvement over the most widely used Kärger model (R² = {0.75, 0.60, 0.11, 0.99}). The trained model also estimates sensible microstructure parameters from in vivo human brain data acquired from healthy controls, matching values found in the literature, and provides better reproducibility than the Kärger model at both the voxel and ROI levels. Finally, we acquire data from two Multiple Sclerosis (MS) patients and compare to the values in healthy subjects. We find that in the splenium of the corpus callosum (CC-S) the estimate of the residence time is 0.57±0.05 s for the healthy subjects, while in the MS patient with a lesion in CC-S it is 0.33±0.12 s in the normal appearing white matter (NAWM) and 0.19±0.11 s in the lesion. In the corticospinal tracts (CST) the estimate of the residence time is 0.52±0
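
    A schematic of the learn-from-simulation approach, assuming scikit-learn's RandomForestRegressor; the feature construction below is a synthetic stand-in for the Monte Carlo diffusion simulations, so all values are illustrative only:

    ```python
    # Sketch: learn a mapping from (simulated) signal features to microstructure
    # parameters, then invert unseen signals by regression (synthetic stand-in data).
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(1)
    n = 5000
    # Ground-truth parameters: [volume fraction, residence time tau_i, radius, diffusivity]
    params = np.column_stack([rng.uniform(0.3, 0.8, n),
                              rng.uniform(0.1, 1.0, n),
                              rng.uniform(0.5, 5.0, n),
                              rng.uniform(1.0, 3.0, n)])
    # Stand-in for features derived from simulated diffusion-weighted signals.
    signals = np.tanh(params @ rng.normal(size=(4, 12))) + rng.normal(0, 0.01, (n, 12))

    forest = RandomForestRegressor(n_estimators=100).fit(signals[:4000], params[:4000])
    pred = forest.predict(signals[4000:])
    for i, name in enumerate(["volume fraction", "tau_i", "radius", "diffusivity"]):
        resid = pred[:, i] - params[4000:, i]
        r2 = 1 - resid.var() / params[4000:, i].var()
        print(f"{name}: R^2 = {r2:.2f}")
    ```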

  13. Implications of the Turing machine model of computation for processor and programming language design

    NASA Astrophysics Data System (ADS)

    Hunter, Geoffrey

    2004-01-01

    A computational process is classified according to the theoretical model that is capable of executing it; computational processes that require a non-predeterminable amount of intermediate storage for their execution are Turing-machine (TM) processes, while those whose storage is predeterminable are Finite Automaton (FA) processes. Simple processes (such as a traffic light controller) are executable by a Finite Automaton, whereas the most general kind of computation requires a Turing Machine for its execution. This implies that a TM process must have a non-predeterminable amount of memory allocated to it at intermediate instants of its execution, i.e. dynamic memory allocation. Many processes encountered in practice are TM processes. The implication for computational practice is that the hardware (CPU) architecture and its operating system must facilitate dynamic memory allocation, and that the programming language used to specify TM processes must have statements with the semantic attribute of dynamic memory allocation, for in Alan Turing's thesis on computation (1936) the "standard description" of a process is invariant over the most general data that the process is designed to process; i.e. the program describing the process should never have to be modified to allow for differences in the data that is to be processed in different instantiations; i.e. data-invariant programming. Any non-trivial program is partitioned into sub-programs (procedures, subroutines, functions, modules, etc.). Examination of the calls/returns between the subprograms reveals that they are nodes in a tree structure; this tree structure is independent of the programming language used to encode (define) the process. Each sub-program typically needs some memory for its own use (to store values intermediate between its received data and its computed results); this locally required memory is not needed before the subprogram commences execution, and it is not needed after its execution terminates.

  14. Aerodynamic Properties Analysis of Rapid Prototyped Models Versus Conventional Machined Models

    NASA Technical Reports Server (NTRS)

    Springer, A.; Cooper, K.

    1998-01-01

    Initial studies of the aerodynamic characteristics of proposed launch vehicles can be made more accurately if lower cost, high fidelity aerodynamic models are available for wind tunnel testing early in the design phase. This paper discusses the results of a study undertaken at NASA's Marshall Space Flight Center to determine if four rapid prototyping methods using a variety of materials are suitable for the design and manufacturing of high speed wind tunnel models in direct testing applications. It also gives an analysis of whether these materials and processes are of sufficient strength and fidelity to withstand the testing environment. In addition to test data, costs and turn-around times for the various models are given. Based on the results of this study, it can be concluded that rapid prototyping models show promise in limited direct application for preliminary aerodynamic development studies at subsonic, transonic, and supersonic speeds.

  15. Effects of imbalance and geometric error on precision grinding machines

    SciTech Connect

    Bibler, J.E.

    1997-06-01

    To study balancing in grinding, a simple mechanical system was examined. It was essential to study such a well-defined system, as opposed to a large, complex system such as a machining center. The use of a compact, well-defined system enabled easy quantification of the imbalance force input and its phase angle to any geometric decentering, and a good understanding of the machine mode shapes. It is important to understand a simple system such as the one examined here, given that imbalance is so intimately coupled to machine dynamics. It is possible to extend the results presented here to industrial machines, although that is not part of this work. In addition to the empirical testing, a simple mechanical system was modelled to look at how mode shapes, balance, and geometric error interplay to yield spindle error motion. The results of this model will be presented along with the results from a more global grinding model. The global model, presented at ASPE in November 1996, allows one to examine the effects of changing global machine parameters such as stiffness and damping. This geometrically abstract, one-dimensional model will be presented to demonstrate the usefulness of an abstract approach for first-order understanding, but it will not be the main focus of this thesis. 19 refs., 36 figs., 10 tables.

  16. Is searching full text more effective than searching abstracts?

    PubMed Central

    Lin, Jimmy

    2009-01-01

    Background: With the growing availability of full-text articles online, scientists and other consumers of the life sciences literature now have the ability to go beyond searching bibliographic records (title, abstract, metadata) to directly access full-text content. Motivated by this emerging trend, I posed the following question: is searching full text more effective than searching abstracts? This question is answered by comparing text retrieval algorithms on MEDLINE® abstracts, full-text articles, and spans (paragraphs) within full-text articles using data from the TREC 2007 genomics track evaluation. Two retrieval models are examined: bm25 and the ranking algorithm implemented in the open-source Lucene search engine. Results: Experiments show that treating an entire article as an indexing unit does not consistently yield higher effectiveness compared to abstract-only search. However, retrieval based on spans, or paragraph-sized segments of full-text articles, consistently outperforms abstract-only search. Results suggest that highest overall effectiveness may be achieved by combining evidence from spans and full articles. Conclusion: Users searching full text are more likely to find relevant articles than searching only abstracts. This finding affirms the value of full-text collections for text retrieval and provides a starting point for future work in exploring algorithms that take advantage of rapidly-growing digital archives. Experimental results also highlight the need to develop distributed text retrieval algorithms, since full-text articles are significantly longer than abstracts and may require the computational resources of multiple machines in a cluster. The MapReduce programming model provides a convenient framework for organizing such computations. PMID:19192280
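
    To make the comparison concrete, here is a self-contained Okapi BM25 scorer applied to a toy corpus indexed two ways, as whole "abstracts" and as paragraph-sized "spans". The study itself used bm25 and Lucene over TREC 2007 genomics data; everything below is an illustrative reimplementation, not the paper's code:

    ```python
    # Sketch: Okapi BM25 over two indexing units (whole abstracts vs. spans).
    import math
    from collections import Counter

    def bm25_scores(query, docs, k1=1.2, b=0.75):
        """Score each tokenized document in `docs` against `query` with Okapi BM25."""
        N = len(docs)
        avgdl = sum(len(d) for d in docs) / N
        df = Counter(t for d in docs for t in set(d))        # document frequencies
        scores = []
        for d in docs:
            tf = Counter(d)
            s = 0.0
            for q in query:
                idf = math.log((N - df[q] + 0.5) / (df[q] + 0.5) + 1)
                s += idf * tf[q] * (k1 + 1) / (tf[q] + k1 * (1 - b + b * len(d) / avgdl))
            scores.append(s)
        return scores

    abstracts = [["gene", "expression", "in", "cancer"], ["protein", "folding", "models"]]
    spans = [["gene", "expression"], ["expression", "changes", "in", "cancer"],
             ["protein", "folding", "models"]]
    query = ["cancer", "expression"]
    print("abstract-level:", bm25_scores(query, abstracts))
    print("span-level:    ", bm25_scores(query, spans))
    ```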

  17. Modelling and simulation of effect of ultrasonic vibrations on machining of Ti6Al4V.

    PubMed

    Patil, Sandip; Joshi, Shashikant; Tewari, Asim; Joshi, Suhas S

    2014-02-01

    Titanium alloys cause high heat generation and consequent rapid wear of cutting tool edges during machining. Ultrasonic assisted turning (UAT) has been found to be very effective in the machining of various materials, especially "difficult-to-cut" materials like Ti6Al4V. The present work is a comprehensive study involving 2D FE transient simulation of UAT in the DEFORM framework and its experimental characterization. The simulation shows that UAT reduces the stress level on the cutting tool during machining compared to continuous turning (CT), barring the penetration stage, wherein both tools are subjected to identical stress levels. There is a 40-45% reduction in cutting forces and about a 48% reduction in cutting temperature in UAT relative to CT. However, the magnitude of the reduction decreases with an increase in cutting speed. The experimental analysis of the UAT process shows that the surface roughness in UAT is lower than in CT, and the UATed surfaces have a matte finish as against the glossy finish on the CTed surfaces. Microstructural observations of the chips and machined surfaces in both processes reveal that the intensity of thermal softening and shear band formation is reduced in UAT compared to CT.

  18. Abstracts and program proceedings of the 1994 meeting of the International Society for Ecological Modelling North American Chapter

    SciTech Connect

    Kercher, J.R.

    1994-06-01

    This document contains information about the 1994 meeting of the International Society for Ecological Modelling North American Chapter. The topics discussed include: extinction risk assessment modelling, ecological risk analysis of uranium mining, impacts of pesticides, demography, habitats, atmospheric deposition, and climate change.

  19. MOAtox: A comprehensive mode of action and acute aquatic toxicity database for predictive model development (SETAC abstract)

    EPA Science Inventory

    The mode of toxic action (MOA) has been recognized as a key determinant of chemical toxicity and as an alternative to chemical class-based predictive toxicity modeling. However, the development of quantitative structure activity relationship (QSAR) and other models has been limit...

  20. (abstract) A Test of the Theoretical Models of Bipolar Outflows: The Bipolar Outflow in Mon R2

    NASA Technical Reports Server (NTRS)

    Xie, Taoling; Goldsmith, Paul; Patel, Nimesh

    1993-01-01

    We report some results of a study of the massive bipolar outflow in the central region of the relatively nearby giant molecular cloud Monoceros R2. We make a quantitative comparison of our results with the Shu et al. outflow model, which incorporates a radially directed wind sweeping up the ambient material into a shell. We find that this simple model naturally explains the shape of this thin shell. Although Shu's model in its simplest form predicts, for reasonable parameters, too much mass at very small polar angles, as previously pointed out by Masson and Chernin, it provides a reasonably good fit to the mass distribution at larger polar angles. It is possible that this discrepancy is due to inhomogeneities of the ambient molecular gas, which are not considered by the model. We also discuss the constraints imposed by these results on recent jet-driven outflow models.

  1. Modeling the stress dependence of Barkhausen phenomena for stress axis linear and noncollinear with applied magnetic field (abstract)

    SciTech Connect

    Sablik, M.J.; Augustyniak, B.; Chmielewski, M.

    1996-04-01

    The almost linear dependence of the maximum Barkhausen noise signal amplitude on stress has made it a tool for nondestructive evaluation of residual stress. Recently, a model has been developed to account for the stress dependence of the Barkhausen noise signal. The model uses the development of Alessandro et al., who use coupled Langevin equations to derive an expression for the Barkhausen noise power spectrum. The model joins this expression to the magnetomechanical hysteresis model of Sablik et al., obtaining both a hysteretic and stress-dependent result for the magnetic-field-dependent Barkhausen noise envelope and, specifically, the almost linear stress dependence of the Barkhausen noise maximum observed experimentally. In this paper, we extend the model to derive the angular dependence, observed by Kwun, of the Barkhausen noise amplitude when the stress axis is taken at different angles relative to the magnetic field. We also apply the model to the experimental observation that in XC10 French steel there is an apparent almost linear correlation with stress of both the hysteresis loss and the integral of the Barkhausen noise signal over applied field H. Further, the two quantities, Barkhausen noise integral and hysteresis loss, are linearly correlated with each other. The model shows how that behavior is to be expected for the measured steel because of its sharply rising hysteresis curve. © 1996 American Institute of Physics.

  2. Modelling of the radial forging process of a hollow billet with the mandrel on the lever radial forging machine

    NASA Astrophysics Data System (ADS)

    Karamyshev, A. P.; Nekrasov, I. I.; Pugin, A. I.; Fedulov, A. A.

    2016-04-01

    The finite-element method (FEM) has been used in scientific research on the modelling of forming processes. Among others, the process of the multistage radial forging of hollow billets has been modelled. The model includes both the thermal problem, concerning preliminary heating of the billet taking into account thermal expansion, and the deformation problem, when the billet is forged in a special machine. The latter part of the model describes such features of the process as die calibration, die movement, initial die temperature, friction conditions, etc. The results obtained can be used to define the necessary process parameters and die calibration.

  3. Gasoline surrogate modeling of gasoline ignition in a rapid compression machine and comparison to experiments

    SciTech Connect

    Mehl, M; Kukkadapu, G; Kumar, K; Sarathy, S M; Pitz, W J; Sung, S J

    2011-09-15

    The use of gasoline in homogeneous charge compression ignition (HCCI) engines and in dual-fuel diesel-gasoline engines has increased the need to understand its compression ignition processes under engine-like conditions. These processes need to be studied under well-controlled conditions in order to quantify low temperature heat release and to provide fundamental validation data for chemical kinetic models. With this in mind, an experimental campaign has been undertaken in a rapid compression machine (RCM) to measure the ignition of gasoline mixtures over a wide range of compression temperatures and for different compression pressures. By measuring the pressure history during ignition, information on the first stage ignition (when observed) and second stage ignition is captured, along with information on the phasing of the heat release. Heat release processes during ignition are important because gasoline is known to exhibit low temperature heat release, intermediate temperature heat release and high temperature heat release. In an HCCI engine, the occurrence of low-temperature and intermediate-temperature heat release can be exploited to obtain higher load operation and has become a topic of much interest for engine researchers. Consequently, it is important to understand these processes under well-controlled conditions. A four-component gasoline surrogate model (including n-heptane, iso-octane, toluene, and 2-pentene) has been developed to simulate real gasolines. An appropriate surrogate mixture of the four components has been developed to simulate the specific gasoline used in the RCM experiments. This chemical kinetic surrogate model was then used to simulate the RCM experimental results for real gasoline. The experimental and modeling results covered ultra-lean to stoichiometric mixtures, compressed temperatures of 640-950 K, and compression pressures of 20 and 40 bar. The agreement between the experiments and model is encouraging in terms of first

  4. Robust Brain-Machine Interface Design Using Optimal Feedback Control Modeling and Adaptive Point Process Filtering

    PubMed Central

    Carmena, Jose M.

    2016-01-01

    Much progress has been made in brain-machine interfaces (BMI) using decoders such as Kalman filters and finding their parameters with closed-loop decoder adaptation (CLDA). However, current decoders do not model the spikes directly, and hence may limit the processing time-scale of BMI control and adaptation. Moreover, while specialized CLDA techniques for intention estimation and assisted training exist, a unified and systematic CLDA framework that generalizes across different setups is lacking. Here we develop a novel closed-loop BMI training architecture that allows for processing, control, and adaptation using spike events, enables robust control and extends to various tasks. Moreover, we develop a unified control-theoretic CLDA framework within which intention estimation, assisted training, and adaptation are performed. The architecture incorporates an infinite-horizon optimal feedback-control (OFC) model of the brain’s behavior in closed-loop BMI control, and a point process model of spikes. The OFC model infers the user’s motor intention during CLDA—a process termed intention estimation. OFC is also used to design an autonomous and dynamic assisted training technique. The point process model allows for neural processing, control and decoder adaptation with every spike event and at a faster time-scale than current decoders; it also enables dynamic spike-event-based parameter adaptation unlike current CLDA methods that use batch-based adaptation on much slower adaptation time-scales. We conducted closed-loop experiments in a non-human primate over tens of days to dissociate the effects of these novel CLDA components. The OFC intention estimation improved BMI performance compared with current intention estimation techniques. OFC assisted training allowed the subject to consistently achieve proficient control. Spike-event-based adaptation resulted in faster and more consistent performance convergence compared with batch-based methods, and was robust to

  6. Realistic modelling of the tool kinematics of radial-axial ring rolling machines in finite element simulation

    NASA Astrophysics Data System (ADS)

    Schwich, Gideon; Jenkouk, Vahid; Hirt, Gerhard

    2016-10-01

    Simulating metal forming processes by means of Finite Element programs requires all tool motions to be defined beforehand. This is one of the major difficulties of conventional Finite Element Analysis (FEA) for simulating ring rolling processes, since in reality the motions are controlled by closed-loop control systems according to current sensor values. A solution is given by integrating control algorithms into the Finite Element model. In a previous publication the authors presented a method in which the algorithms of an industrial control system for ring rolling machines are coupled with the Finite Element model. Although this approach enables modelling with realistic kinematic conditions, it has the major drawback that the algorithms of the control system used are not disclosed to the users. Hence, it is not possible to modify the controller for new processes and process optimization. In this paper, therefore, a set of reasonable and simple control algorithms is introduced, which can be used as a basis for further improvements of existing ring rolling control algorithms. The developed approach considers all relevant sensors of ring rolling machines. Using the developed model, a ring rolling simulation is carried out and compared to the corresponding experimental results. The results show very good agreement in terms of the ring geometry and the machine loads.
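
    The flavor of such a disclosed, simple control algorithm can be sketched as follows: a proportional law correcting the commanded radial feed from a sensed ring growth rate, stepped against a mock process response standing in for the FE model. The gains and the response law are invented for illustration; the paper's actual algorithms are not reproduced here:

    ```python
    # Sketch: closed-loop tool-motion control driven by a sensed value, as would
    # be embedded in the FE simulation (mock process response; illustrative gains).

    def update_feed(feed, sensed_rate, target_rate, kp=0.2):
        """Proportional correction of the radial feed from the growth-rate error."""
        return max(0.0, feed + kp * (target_rate - sensed_rate))

    feed = 1.0           # mm/s, commanded radial feed
    target_rate = 2.0    # mm/s, desired ring growth rate from the pass schedule
    diameter = 300.0     # mm, sensed ring outer diameter
    dt = 0.1             # s, control/simulation time step
    for _ in range(100):
        sensed_rate = 1.8 * feed            # mock response; the FE model in reality
        feed = update_feed(feed, sensed_rate, target_rate)
        diameter += sensed_rate * dt
    print(f"converged feed: {feed:.2f} mm/s, final diameter: {diameter:.1f} mm")
    ```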

  7. Abstract and keywords.

    PubMed

    Peh, W C G; Ng, K H

    2008-09-01

    The abstract of a scientific paper represents a concise, accurate and factual mini-version of the paper contents. Abstract format may vary according to the individual journal. For original articles, a structured abstract usually consists of the following headings: aims (or objectives), materials and methods, results and conclusion. A few keywords that capture the main topics of the paper help indexing in the medical literature.

  8. Neural control and adaptive neural forward models for insect-like, energy-efficient, and adaptable locomotion of walking machines.

    PubMed

    Manoonpong, Poramate; Parlitz, Ulrich; Wörgötter, Florentin

    2013-01-01

    Living creatures, like walking animals, have found fascinating solutions for the problem of locomotion control. Their movements show the impression of elegance including versatile, energy-efficient, and adaptable locomotion. During the last few decades, roboticists have tried to imitate such natural properties with artificial legged locomotion systems by using different approaches including machine learning algorithms, classical engineering control techniques, and biologically-inspired control mechanisms. However, their levels of performance are still far from the natural ones. By contrast, animal locomotion mechanisms seem to largely depend not only on central mechanisms (central pattern generators, CPGs) and sensory feedback (afferent-based control) but also on internal forward models (efference copies). They are used to a different degree in different animals. Generally, CPGs organize basic rhythmic motions which are shaped by sensory feedback while internal models are used for sensory prediction and state estimations. According to this concept, we present here adaptive neural locomotion control consisting of a CPG mechanism with neuromodulation and local leg control mechanisms based on sensory feedback and adaptive neural forward models with efference copies. This neural closed-loop controller enables a walking machine to perform a multitude of different walking patterns including insect-like leg movements and gaits as well as energy-efficient locomotion. In addition, the forward models allow the machine to autonomously adapt its locomotion to deal with a change of terrain, losing of ground contact during stance phase, stepping on or hitting an obstacle during swing phase, leg damage, and even to promote cockroach-like climbing behavior. Thus, the results presented here show that the employed embodied neural closed-loop system can be a powerful way for developing robust and adaptable machines.
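
    As a flavor of the CPG component, here is a minimal discrete-time two-neuron oscillator (an SO(2)-type network of the kind used in this line of walking-machine research; whether it matches this paper's exact CPG is not asserted). The parameter phi sets the output frequency, which is one simple handle for neuromodulation of walking rhythm; all values are illustrative:

    ```python
    # Sketch: two-neuron SO(2)-type oscillator as a minimal CPG (illustrative values).
    import math

    def cpg_outputs(phi=0.2, alpha=1.01, steps=200):
        """Iterate the two-neuron network; phi controls the oscillation frequency."""
        w11, w12 = alpha * math.cos(phi), alpha * math.sin(phi)
        w21, w22 = -alpha * math.sin(phi), alpha * math.cos(phi)
        o1, o2 = 0.1, 0.1                      # small nonzero state starts the rhythm
        history = []
        for _ in range(steps):
            o1, o2 = (math.tanh(w11 * o1 + w12 * o2),
                      math.tanh(w21 * o1 + w22 * o2))
            history.append((o1, o2))
        return history

    slow = cpg_outputs(phi=0.1)    # period of roughly 2*pi/phi steps
    fast = cpg_outputs(phi=0.4)    # higher phi, faster stepping rhythm
    print(slow[-3:])
    print(fast[-3:])
    ```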

  10. A Bayesian network model for predicting aquatic toxicity mode of action using two dimensional theoretical molecular descriptors-abstract

    EPA Science Inventory

    The mode of toxic action (MoA) has been recognized as a key determinant of chemical toxicity but MoA classification in aquatic toxicology has been limited. We developed a Bayesian network model to classify aquatic toxicity mode of action using a recently published dataset contain...

  11. Database machines

    NASA Technical Reports Server (NTRS)

    Stiefel, M. L.

    1983-01-01

    The functions and performance characteristics of database machines (DBMs), including machines currently being studied in research laboratories and those currently offered on a commercial basis, are discussed. The cost/benefit considerations that must be recognized in selecting a DBM are discussed, as well as the future outlook for such machines.

  12. Parametric modeling and optimization of laser scanning parameters during laser assisted machining of Inconel 718

    NASA Astrophysics Data System (ADS)

    Venkatesan, K.; Ramanujam, R.; Kuppan, P.

    2016-04-01

    This paper presents the parametric effects, microstructure, micro-hardness, and optimization of laser scanning parameters (LSP) in heating experiments during laser assisted machining of Inconel 718 alloy. The laser source used for the experiments is a continuous wave Nd:YAG laser with a maximum power of 2 kW. The experimental parameters in the present study are cutting speed in the range of 50-100 m/min, feed rate of 0.05-0.1 mm/rev, laser power of 1.25-1.75 kW, and approach angle of 60-90° of the laser beam axis to the tool. The plan of experiments is based on a central composite rotatable design L31 (4³) orthogonal array. The surface temperature is measured on-line using an infrared pyrometer. Parametric significance on surface temperature is analysed using response surface methodology (RSM), analysis of variance (ANOVA), and 3D surface graphs. The structural change of the material surface is observed using an optical microscope, and the heat-affected depth is quantified by Vickers hardness testing. The results indicate that laser power and approach angle are the most significant parameters affecting the surface temperature. The optimum ranges of laser power and approach angle were identified as 1.25-1.5 kW and 60-65° using an overlaid contour plot. The developed second-order regression models are found to be in good agreement with experimental values, with R² values of 0.96 and 0.94 for surface temperature and heat-affected depth, respectively.
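
    The reported second-order regression can be sketched as a degree-2 polynomial response-surface fit. The data below are synthetic placeholders for the 31-run design, not the study's measurements, and the response law is invented:

    ```python
    # Sketch: second-order (quadratic) response-surface fit of surface temperature
    # on laser power and approach angle (synthetic placeholder data, 31 runs).
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import PolynomialFeatures

    rng = np.random.default_rng(2)
    power = rng.uniform(1.25, 1.75, 31)        # kW
    angle = rng.uniform(60, 90, 31)            # degrees
    temp = 400 * power - 0.2 * (angle - 70) ** 2 + rng.normal(0, 5, 31)  # invented response

    X = np.column_stack([power, angle])
    surface = make_pipeline(PolynomialFeatures(degree=2), LinearRegression()).fit(X, temp)
    print("R^2 on the design points:", round(surface.score(X, temp), 2))
    print("predicted T at 1.4 kW, 65 deg:", round(surface.predict([[1.4, 65]])[0], 1))
    ```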

  13. The Aachen miniaturized heart-lung machine--first results in a small animal model.

    PubMed

    Schnoering, Heike; Arens, Jutta; Sachweh, Joerg S; Veerman, Melanie; Tolba, Rene; Schmitz-Rode, Thomas; Steinseifer, Ulrich; Vazquez-Jimenez, Jaime F

    2009-11-01

    Congenital heart surgery most often incorporates extracorporeal circulation. Due to foreign surface contact and the administration of foreign blood in many children, inflammatory response and hemolysis are important matters of debate. This is particularly an issue in premature and low birth-weight newborns. Taking these considerations into account, the Aachen miniaturized heart-lung machine (MiniHLM) with a total static priming volume of 102 mL (including tubing) was developed and tested in a small animal model. Fourteen female Chinchilla Bastard rabbits were operated on using two different kinds of circuits. In eight animals, a conventional HLM with a Dideco Kids oxygenator and Stöckert roller pump (Sorin Group, Milan, Italy) was used, and the Aachen MiniHLM was employed in six animals. Outcome parameters were hemolysis and blood gas analysis including lactate. The rabbits were anesthetized, and a standard median sternotomy was performed. The ascending aorta and the right atrium were cannulated. After initiating cardiopulmonary bypass, the aorta was cross-clamped, and cardiac arrest was induced by blood cardioplegia. Blood samples for hemolysis and blood gas analysis were drawn before, during, and after cardiopulmonary bypass. After 1 h of aortic clamp time, all animals were weaned from cardiopulmonary bypass. Blood gas analysis revealed adequate oxygenation and perfusion during cardiopulmonary bypass, irrespective of the employed perfusion system. The use of the Aachen MiniHLM resulted in a statistically significant smaller decrease in fibrinogen during cardiopulmonary bypass. A trend toward a smaller increase in free hemoglobin during bypass in the MiniHLM group was also observed. This newly developed Aachen MiniHLM with low priming volume, reduced hemolysis, and excellent gas transfer (O2 and CO2) may reduce circuit-induced complications during heart surgery in neonates.

  14. Seismic waves modeling with the Fourier pseudo-spectral method on massively parallel machines.

    NASA Astrophysics Data System (ADS)

    Klin, Peter

    2015-04-01

    The Fourier pseudo-spectral method (FPSM) is an approach for the 3D numerical modeling of wave propagation, which is based on the discretization of the spatial domain in a structured grid and relies on global spatial differential operators for the solution of the wave equation. This last peculiarity is advantageous from the accuracy point of view but poses difficulties for an efficient implementation of the method on parallel computers with distributed memory architecture. The 1D spatial domain decomposition approach has so far been commonly adopted in parallel implementations of the FPSM, but it implies an intensive data exchange among all the processors involved in the computation, which can degrade performance because of communication latencies. Moreover, the scalability of the 1D domain decomposition is limited, since the number of processors cannot exceed the number of grid points along the direction in which the domain is partitioned. This limitation inhibits an efficient exploitation of computational environments with a very large number of processors. In order to overcome the limitations of the 1D domain decomposition, we implemented a parallel version of the FPSM based on a 2D domain decomposition, which allows a higher degree of parallelism and scalability on massively parallel machines with several thousands of processing elements. The parallel programming is essentially achieved using the MPI protocol, but OpenMP parts are also included in order to exploit single-processor multi-threading capabilities, when available. The developed tool is aimed at the numerical simulation of seismic wave propagation and in particular is intended for earthquake ground motion research. We show the scalability tests performed up to 16k processing elements on the IBM Blue Gene/Q computer at CINECA (Italy), as well as the application to the simulation of earthquake ground motion in the alluvial plain of the Po river (Italy).
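
    The scalability argument can be made concrete with the rank-to-subdomain mapping: a 1D (slab) decomposition caps the usable rank count at the grid points along one axis, while a 2D (pencil) decomposition caps it at the product of two axes. A pure-Python sketch of that mapping follows (a production code would do this with MPI, e.g. mpi4py; all names here are illustrative):

    ```python
    # Sketch: rank -> subdomain mapping for a 2D ("pencil") domain decomposition.

    def pencil_extents(nx, ny, px, py, rank):
        """Place `rank` on a px-by-py process grid and return its x and y ranges."""
        ix, iy = rank % px, rank // px
        def split(n, p, i):                     # near-even 1D partition of n points
            base, rem = divmod(n, p)
            lo = i * base + min(i, rem)
            return lo, lo + base + (1 if i < rem else 0)
        return split(nx, px, ix), split(ny, py, iy)

    nx = ny = 1024
    print("1D (slab) decomposition, max usable ranks:", nx)         # one axis only
    print("2D (pencil) decomposition, max usable ranks:", nx * ny)  # two axes
    for rank in range(4):
        print("rank", rank, "->", pencil_extents(nx, ny, 2, 2, rank))
    ```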

  15. H-atom abstraction reaction for organic substrates via mononuclear copper(II)-superoxo species as a model for DbetaM and PHM.

    PubMed

    Fujii, Tatsuya; Yamaguchi, Syuhei; Hirota, Shun; Masuda, Hideki

    2008-01-07

    Hydrogen atom abstraction reactions have been implicated in oxygenation reactions catalyzed by copper monooxygenases such as peptidylglycine alpha-hydroxylating monooxygenase (PHM) and dopamine beta-monooxygenase (DbetaM). We have investigated mononuclear copper(I) and copper(II) complexes with bis[(6-neopentylamino-2-pyridyl)methyl][(2-pyridyl)methyl]amine (BNPA) as functional models for these enzymes. The reaction of [Cu(II)(bnpa)]2+ with H2O2 affords a quasi-stable mononuclear copper(II)-hydroperoxo complex, [Cu(II)(bnpa)(OOH)]+ (4), which is stabilized by hydrophobic interactions and hydrogen bonds in the vicinity of the copper(II) ion. On the other hand, the reaction of [Cu(I)(bnpa)]+ (1) with O2 generates a trans-mu-1,2-peroxo dicopper(II) complex, [Cu(II)2(bnpa)2(O2(2-))]2+ (2). Interestingly, the same reactions carried out in the presence of exogenous substrates such as TEMPO-H produce the mononuclear copper(II)-hydroperoxo complex 4. Under these conditions, the H-atom abstraction reaction proceeds via the mononuclear copper(II)-superoxo intermediate [Cu(II)(bnpa)(O2-)]+ (3), as confirmed from indirect observations using a spin trap reagent. Reactions with several substrates having different bond dissociation energies (BDE) indicate that, under our experimental conditions, the H-atom abstraction reaction proceeds for substrates with a weak X-H bond (BDE < 72.6 kcal mol(-1)). These investigations indicate that the copper(II)-hydroperoxo complex is a useful tool for elucidating H-atom abstraction reaction mechanisms for exogenous substrates. This useful functionality of the complex has been achieved via careful control of experimental conditions and the choice of appropriate ligands.

  16. Estimating the period and Q of the Chandler Wobble from observations and models of its excitation (Abstract)

    NASA Astrophysics Data System (ADS)

    Gross, R.; Nastula, J.

    2015-08-01

    Any irregularly shaped solid body rotating about some axis that is not aligned with its figure axis will freely wobble as it rotates. For the Earth, this free wobble is known as the Chandler wobble in honor of S.C. Chandler, Jr., who first observed it in 1891. Unlike the forced wobbles of the Earth, such as the annual wobble, whose periods are the same as the periods of the forcing mechanisms, the period of the free Chandler wobble is a function of the internal structure and rheology of the Earth, and its decay time constant, or quality factor Q, is a function of the dissipation mechanism(s), such as mantle anelasticity, that act to damp it. Improved estimates of the period and Q of the Chandler wobble can therefore be used to improve our understanding of these properties of the Earth. Here, estimates of the period and Q of the Chandler wobble are obtained by finding those values that minimize the power within the Chandler band of the difference between observed and modeled polar motion excitation spanning 1962-2010. Atmosphere, ocean, and hydrology models are used to model the excitation caused by both mass and motion variations within these global geophysical fluids. Direct observations of the excitation caused by mass variations, as determined from GRACE time-varying gravitational field measurements, are also used. The resulting estimates of the period and Q of the Chandler wobble will be presented along with a discussion of the robustness of the estimates.

  17. Technical Abstracts, 1988

    SciTech Connect

    Kotowski, M.

    1989-05-01

    This document is a compilation of the abstracts from unclassified documents published by Mechanical Engineering at Lawrence Livermore National Laboratory (LLNL) during the calendar year 1988. Many abstracts summarize work completed and published in report form. These are UCRL-90,000 and 100,000 series documents, which include the full text of articles to be published in journals and of papers to be presented at meetings, and UCID reports, which are informal documents. Not all UCIDs contain abstracts: short summaries were generated when abstracts were not included. Technical Abstracts also provides brief descriptions of those documents assigned to the MISC (miscellaneous) category. These are generally viewgraphs or photographs presented at meetings. The abstracts cover the broad range of technologies within Mechanical Engineering and are grouped by the principal author's division. An eighth category is devoted to abstracts presented at the CUBE symposium sponsored jointly by LLNL, Los Alamos National Laboratory, and Sandia Laboratories. Within these areas, abstracts are listed numerically. An author index and title index are provided at the back of the book for cross referencing. The publications listed may be obtained by contacting LLNL's TID library or the National Technical Information Service, US Department of Commerce, 5285 Port Royal Road, Springfield, VA 22161. Further information may be obtained by contacting the author directly or the persons listed in the introduction of each subject area.

  18. Paper Abstract Animals

    ERIC Educational Resources Information Center

    Sutley, Jane

    2010-01-01

    Abstraction is, in effect, a simplification and reduction of shapes with an absence of detail designed to comprise the essence of the more naturalistic images being depicted. Without even intending to, young children consistently create interesting, and sometimes beautiful, abstract compositions. A child's creations, moreover, will always seem to…

  19. Leadership Abstracts, Volume 10.

    ERIC Educational Resources Information Center

    Milliron, Mark D., Ed.

    1997-01-01

    The abstracts in this series provide brief discussions of issues related to leadership, administration, professional development, technology, and education in community colleges. Volume 10 for 1997 contains the following 12 abstracts: (1) "On Community College Renewal" (Nathan L. Hodges and Mark D. Milliron); (2) "The Community College Niche in a…

  20. Is It Really Abstract?

    ERIC Educational Resources Information Center

    Kernan, Christine

    2011-01-01

    For this author, one of the most enjoyable aspects of teaching elementary art is the willingness of students to embrace the different styles of art introduced to them. In this article, she describes a project that allows upper-elementary students to learn about abstract art and the lives of some of the master abstract artists, implement the idea…

  1. Designing for Mathematical Abstraction

    ERIC Educational Resources Information Center

    Pratt, Dave; Noss, Richard

    2010-01-01

    Our focus is on the design of systems (pedagogical, technical, social) that encourage mathematical abstraction, a process we refer to as "designing for abstraction." In this paper, we draw on detailed design experiments from our research on children's understanding about chance and distribution to re-present this work as a case study in designing…

  2. Leadership Abstracts, 1996.

    ERIC Educational Resources Information Center

    Johnson, Larry, Ed.

    1996-01-01

    The abstracts in this series provide two-page discussions of issues related to leadership, administration, professional development, technology, and education in community colleges. Volume 9 for 1996 includes the following 12 abstracts: (1) "Tech-Prep + School-To-Work: Working Together To Foster Educational Reform," (Roderick F. Beaumont); (2)…

  3. Organizational Communication Abstracts--1975.

    ERIC Educational Resources Information Center

    Falcione, Raymond L.; And Others

    This document includes nearly 700 brief abstracts of works published in 1975 that are relevant to the field of organizational communication. The introduction presents a rationale for the project, a review of research methods developed by the authors for the preparation of abstracts, a statement of limitations as to the completeness of the coverage…

  4. Abstract Datatypes in PVS

    NASA Technical Reports Server (NTRS)

    Owre, Sam; Shankar, Natarajan

    1997-01-01

    PVS (Prototype Verification System) is a general-purpose environment for developing specifications and proofs. This document deals primarily with the abstract datatype mechanism in PVS which generates theories containing axioms and definitions for a class of recursive datatypes. The concepts underlying the abstract datatype mechanism are illustrated using ordered binary trees as an example. Binary trees are described by a PVS abstract datatype that is parametric in its value type. The type of ordered binary trees is then presented as a subtype of binary trees where the ordering relation is also taken as a parameter. We define the operations of inserting an element into, and searching for an element in an ordered binary tree; the bulk of the report is devoted to PVS proofs of some useful properties of these operations. These proofs illustrate various approaches to proving properties of abstract datatype operations. They also describe the built-in capabilities of the PVS proof checker for simplifying abstract datatype expressions.
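
    For readers without PVS at hand, the same ordered-binary-tree operations can be sketched in Python, with the ordering relation passed as a parameter in the spirit of the PVS subtype's ordering parameter (this is an illustration, not the report's PVS text):

    ```python
    # Sketch: ordered binary tree with insert/search parameterized by an ordering.
    from dataclasses import dataclass
    from typing import Callable, Optional

    @dataclass
    class Node:
        value: int
        left: Optional["Node"] = None
        right: Optional["Node"] = None

    def insert(t: Optional[Node], v: int, lt: Callable[[int, int], bool]) -> Node:
        """Insert v, keeping the ordering invariant; duplicates are dropped."""
        if t is None:
            return Node(v)
        if lt(v, t.value):
            t.left = insert(t.left, v, lt)
        elif lt(t.value, v):
            t.right = insert(t.right, v, lt)
        return t

    def search(t: Optional[Node], v: int, lt: Callable[[int, int], bool]) -> bool:
        """Membership test relying on the same ordering used for insertion."""
        if t is None:
            return False
        if lt(v, t.value):
            return search(t.left, v, lt)
        if lt(t.value, v):
            return search(t.right, v, lt)
        return True

    lt = lambda a, b: a < b
    root = None
    for x in [5, 2, 8, 1]:
        root = insert(root, x, lt)
    print(search(root, 8, lt), search(root, 3, lt))   # True False
    ```

    Properties such as "search finds exactly the inserted values" are the kind of obligations the report discharges with the PVS proof checker.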

  5. Abstraction of Drift Seepage

    SciTech Connect

    J.T. Birkholzer

    2004-11-01

    This model report documents the abstraction of drift seepage, conducted to provide seepage-relevant parameters and their probability distributions for use in Total System Performance Assessment for License Application (TSPA-LA). Drift seepage refers to the flow of liquid water into waste emplacement drifts. Water that seeps into drifts may contact waste packages and potentially mobilize radionuclides, and may result in advective transport of radionuclides through breached waste packages [''Risk Information to Support Prioritization of Performance Assessment Models'' (BSC 2003 [DIRS 168796], Section 3.3.2)]. The unsaturated rock layers overlying and hosting the repository form a natural barrier that reduces the amount of water entering emplacement drifts by natural subsurface processes. For example, drift seepage is limited by the capillary barrier forming at the drift crown, which decreases or even eliminates water flow from the unsaturated fractured rock into the drift. During the first few hundred years after waste emplacement, when above-boiling rock temperatures will develop as a result of heat generated by the decay of the radioactive waste, vaporization of percolation water is an additional factor limiting seepage. Estimating the effectiveness of these natural barrier capabilities and predicting the amount of seepage into drifts is an important aspect of assessing the performance of the repository. The TSPA-LA therefore includes a seepage component that calculates the amount of seepage into drifts [''Total System Performance Assessment (TSPA) Model/Analysis for the License Application'' (BSC 2004 [DIRS 168504], Section 6.3.3.1)]. The TSPA-LA calculation is performed with a probabilistic approach that accounts for the spatial and temporal variability and inherent uncertainty of seepage-relevant properties and processes. Results are used for subsequent TSPA-LA components that may handle, for example, waste package corrosion or radionuclide transport.

  6. Using detailed inter-network simulation and model abstraction to investigate and evaluate joint battlespace infosphere (JBI) support technologies

    NASA Astrophysics Data System (ADS)

    Green, David M.; Dallaire, Joel D.; Reaper, Jerome H.

    2004-08-01

    The Joint Battlespace Infosphere (JBI) program is performing a technology investigation into global communications, data mining and warehousing, and data fusion technologies by focusing on techniques and methodologies that support twenty-first-century military distributed collaboration. Advancement of these technologies is vitally important if military decision makers are to have the right data, in the right format, at the right time and place to support making the right decisions within available timelines. A quantitative understanding of individual and combinational effects arising from the application of technologies within a framework is presently far too complex to evaluate at more than a cursory depth. In order to facilitate quantitative analysis under these circumstances, the Distributed Information Enterprise Modeling and Simulation (DIEMS) team was formed to apply modeling and simulation (M&S) techniques to help address JBI analysis challenges. The DIEMS team has been tasked with utilizing collaborative distributed M&S architectures to quantitatively evaluate JBI technologies and tradeoffs. This paper first presents a high-level view of the DIEMS project. Once this approach has been established, a more concentrated view of the detailed communications simulation techniques used in generating the underlying support data sets is presented.

  7. Agenda, extended abstracts, and bibliographies for a workshop on Deposit modeling, mineral resources assessment, and their role in sustainable development

    USGS Publications Warehouse

    Briskey, Joseph A.; Schulz, Klaus J.

    2002-01-01

    Global demand for mineral resources continues to increase because of increasing global population and the desire and efforts to improve living standards worldwide. The ability to meet this growing demand for minerals is affected by the concerns about possible environmental degradation associated with minerals production and by competing land uses. Informed planning and decisions concerning sustainability and resource development require a long-term perspective and an integrated approach to land-use, resource, and environmental management worldwide. This, in turn, requires unbiased information on the global distribution of identified and especially undiscovered resources, the economic and political factors influencing their development, and the potential environmental consequences of their exploitation. The purpose of the IGC workshop is to review the state-of-the-art in mineral-deposit modeling and quantitative resource assessment and to examine their role in the sustainability of mineral use. The workshop will address such questions as: Which of the available mineral-deposit models and assessment methods are best suited for predicting the locations, deposit types, and amounts of undiscovered nonfuel mineral resources remaining in the world? What is the availability of global geologic, mineral deposit, and mineral-exploration information? How can mineral-resource assessments be used to address economic and environmental issues? Presentations will include overviews of assessment methods used in previous national and other small-scale assessments of large regions as well as resulting assessment products and their uses.

  8. Constructing query-driven dynamic machine learning model with application to protein-ligand binding sites prediction.

    PubMed

    Yu, Dong-Jun; Hu, Jun; Li, Qian-Mu; Tang, Zhen-Min; Yang, Jing-Yu; Shen, Hong-Bin

    2015-01-01

    We are facing an era in which annotated biological data are rapidly and continuously generated. How to effectively incorporate new annotated data into the learning step is crucial for enhancing the performance of a bioinformatics prediction model. Although machine-learning-based methods have been extensively used for dealing with various biological problems, existing approaches usually train static prediction models based on fixed training datasets. Static approaches have several disadvantages, such as low scalability and impracticality when the training dataset is huge. In view of this, we propose a dynamic learning framework for constructing query-driven prediction models. The key difference between the proposed framework and existing approaches is that the training set for the machine learning algorithm is generated dynamically according to the query input, as opposed to training a general model regardless of queries as in traditional static methods. Accordingly, a query-driven predictor based on a smaller set of data specifically selected from the entire annotated base dataset is applied to the query. This way of constructing the dynamic model makes it possible to update the annotated base dataset flexibly, and using the most relevant core subset as the training set gives the constructed model better generalization ability on the query, a "part could be better than all" phenomenon. Following the new framework, we have implemented a dynamic protein-ligand binding sites predictor called OSML (On-site model for ligand binding sites prediction). Computer experiments on 10 different ligand types at three hierarchically organized levels show that OSML outperforms most existing predictors. The results indicate that the current dynamic framework is a promising direction for bridging the gap between rapidly accumulating annotated biological data and effective machine-learning-based predictors.
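
    The core idea, selecting a query-specific training subset from the annotated base and fitting a fresh model per query, can be sketched as follows. This is not OSML itself; the nearest-neighbour selection, the stand-in classifier, and all parameter values are assumptions for illustration.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.ensemble import RandomForestClassifier

def query_driven_predict(X_base, y_base, x_query, k=200):
    """Train a fresh model on the k annotated samples most similar to the
    query, then predict for the query ("part could be better than all")."""
    nn = NearestNeighbors(n_neighbors=min(k, len(X_base))).fit(X_base)
    _, idx = nn.kneighbors(x_query.reshape(1, -1))
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X_base[idx[0]], y_base[idx[0]])
    return model.predict(x_query.reshape(1, -1))[0]

# Toy usage with random data standing in for annotated sequence features.
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 32))
y = rng.integers(0, 2, size=5000)
print(query_driven_predict(X, y, X[0]))
```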

  9. Machine learning methods for empirical streamflow simulation: a comparison of model accuracy, interpretability, and uncertainty in seasonal watersheds

    NASA Astrophysics Data System (ADS)

    Shortridge, Julie E.; Guikema, Seth D.; Zaitchik, Benjamin F.

    2016-07-01

    In the past decade, machine learning methods for empirical rainfall-runoff modeling have seen extensive development and been proposed as a useful complement to physical hydrologic models, particularly in basins where data to support process-based models are limited. However, the majority of research has focused on a small number of methods, such as artificial neural networks, despite the development of multiple other approaches for non-parametric regression in recent years. Furthermore, this work has often evaluated model performance based on predictive accuracy alone, while not considering broader objectives, such as model interpretability and uncertainty, that are important if such methods are to be used for planning and management decisions. In this paper, we use multiple regression and machine learning approaches (including generalized additive models, multivariate adaptive regression splines, artificial neural networks, random forests, and M5 cubist models) to simulate monthly streamflow in five highly seasonal rivers in the highlands of Ethiopia and compare their performance in terms of predictive accuracy, error structure and bias, model interpretability, and uncertainty when faced with extreme climate conditions. While the relative predictive performance of models differed across basins, data-driven approaches were able to achieve reduced errors when compared to physical models developed for the region. Methods such as random forests and generalized additive models may have advantages in terms of visualization and interpretation of model structure, which can be useful in providing insights into physical watershed function. However, the uncertainty associated with model predictions under extreme climate conditions should be carefully evaluated, since certain models (especially generalized additive models and multivariate adaptive regression splines) become highly variable when faced with high temperatures.
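
    A minimal version of the kind of comparison described, cross-validated error of a linear baseline against one machine learning method, might look as follows on synthetic stand-in data; the features, data, and model choices are hypothetical (the study's GAM, MARS, and cubist models are not reproduced here).

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor

# Synthetic stand-in for monthly predictors (rainfall, temperature, lags)
# and monthly streamflow; real data would come from the study basins.
rng = np.random.default_rng(1)
X = rng.normal(size=(240, 6))                      # 20 years of monthly data
y = 2.0 * X[:, 0] + np.sin(X[:, 1]) + rng.normal(scale=0.3, size=240)

for name, model in [("linear regression", LinearRegression()),
                    ("random forest", RandomForestRegressor(random_state=0))]:
    rmse = -cross_val_score(model, X, y, cv=5,
                            scoring="neg_root_mean_squared_error").mean()
    print(f"{name:>18}: CV RMSE = {rmse:.3f}")
```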

  10. (abstract) A Polarimetric Model for Effects of Brine Infiltrated Snow Cover and Frost Flowers on Sea Ice Backscatter

    NASA Technical Reports Server (NTRS)

    Nghiem, S. V.; Kwok, R.; Yueh, S. H.

    1995-01-01

    A polarimetric scattering model is developed to study the effects of snow cover and frost flowers with brine infiltration on thin sea ice. Leads containing thin sea ice in the Arctic ice pack are important to heat exchange with the atmosphere and salt flux into the upper ocean. Surface characteristics of thin sea ice in leads are dominated by the formation of frost flowers with high salinity. In many cases, the thin sea ice layer is covered by snow, which wicks up brine from the sea ice by capillary force. Snow and frost flowers have a significant impact on polarimetric signatures of thin ice, which needs to be studied for assessing the retrieval of geophysical parameters such as ice thickness. The frost flower or snow layer is modeled as a heterogeneous mixture consisting of randomly oriented ellipsoids and brine infiltration in an air background. Ice crystals are characterized with three different axial lengths to depict their nonspherical shape. Under the covering multispecies medium, the columnar sea-ice layer is an inhomogeneous anisotropic medium composed of ellipsoidal brine inclusions preferentially oriented in the vertical direction in an ice background. The underlying medium is homogeneous sea water. This configuration is described with layered inhomogeneous media containing multiple species of scatterers. The species are allowed to have different sizes, shapes, and permittivities. The strong permittivity fluctuation theory is extended to account for the multiple species in the derivation of effective permittivities, with distributions of scatterer orientations characterized by Eulerian rotation angles. Polarimetric backscattering coefficients are obtained consistently with the same physical description used in the effective permittivity calculation. The multispecies model allows the inclusion of high-permittivity species to study the effects of brine-infiltrated snow cover and frost flowers on thin ice. The results suggest that the frost cover with a rough interface

  11. SWAT and River-2D Modelling of Pinder River for Analysing Snow Trout Habitat under Different Flow Abstraction Scenarios

    NASA Astrophysics Data System (ADS)

    Nale, J. P.; Gosain, A. K.; Khosa, R.

    2015-12-01

    Pinder River, one of the major headstreams of River Ganga, originates in the Pindari Glaciers of the Kumaon Himalayas and, after passing through rugged gorges, meets the Alaknanda at Karanprayag, forming one of the five celestial confluences of the Upper Ganga region. While other sub-basins of the Upper Ganga are facing severe ecological losses, the Pinder basin is still in a pristine state and is well known for its beautiful valleys besides hosting unique and rare biodiversity. A proposed 252 MW run-of-river hydroelectric project at Devsari on this river has been a major concern on account of its perceived potential for egregious environmental and social impacts. In this context, the study presented here analyses the expected changes in aquatic habitat conditions after this project is operational (under different operation policies). The SWAT hydrological modelling platform has been used to derive streamflow simulations under various scenarios ranging from present to likely future conditions. To analyse habitat conditions, a two-dimensional hydraulic-habitat model, 'River-2D', a module of the iRIC software, is used. Snow trout has been identified as the target keystone species, and its habitat preferences, in the form of flow depths, flow velocities and substrate conditions, are obtained from diverse sources in the related literature and are provided as Habitat Suitability Indices to River-2D. Bed morphology constitutes an important River-2D input and has been obtained, for the designated 1 km long study reach of the Pinder up to Karanprayag, from a combination of actual field observations supplemented by SRTM 1 Arc-Second Global digital elevation data. Monthly Weighted Usable Area for three different life stages (spawning, juvenile and adult) of snow trout is obtained corresponding to seven different flow discharges ranging from 10 cumec to 1000 cumec. Comparing the present and proposed future river flow conditions obtained from SWAT modelling, losses in Weighted Usable Area, for the
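
    Weighted Usable Area is conventionally computed as the sum over mesh cells of cell area times a composite suitability index, often taken as the product of the depth, velocity, and substrate indices. The sketch below assumes that common formulation and invented cell values; River-2D's exact weighting may differ.

```python
import numpy as np

def weighted_usable_area(cell_area, si_depth, si_velocity, si_substrate):
    """Weighted Usable Area: sum over cells of area times a composite
    suitability index (here the product of the per-variable indices)."""
    composite = si_depth * si_velocity * si_substrate
    return float(np.sum(cell_area * composite))

# Hypothetical 4-cell reach: areas in m^2, suitability indices in [0, 1].
area = np.array([250.0, 250.0, 300.0, 200.0])
print(weighted_usable_area(area,
                           np.array([0.9, 0.4, 0.7, 0.1]),
                           np.array([0.8, 0.5, 0.6, 0.2]),
                           np.array([1.0, 1.0, 0.5, 0.5])))
```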

  12. (abstract) Using TOPEX/Poseidon Sea Level Observations to Test the Sensitivity of an Ocean Model to Wind Forcing

    NASA Technical Reports Server (NTRS)

    Fu, Lee-Lueng; Chao, Yi

    1996-01-01

    It has been demonstrated that current-generation global ocean general circulation models (OGCMs) are able to simulate large-scale sea level variations fairly well. In this study, a GFDL/MOM-based OGCM was used to investigate its sensitivity to different wind forcing. Simulations of global sea level using wind forcing from the ERS-1 scatterometer and the NMC operational analysis were compared with observations made by the TOPEX/Poseidon (T/P) radar altimeter over a two-year period. The results demonstrate the sensitivity of the OGCM to the quality of wind forcing, as well as the value of the synergistic use of two spaceborne sensors in advancing the study of wind-driven ocean dynamics.

  13. Machine learning-based kinetic modeling: a robust and reproducible solution for quantitative analysis of dynamic PET data.

    PubMed

    Pan, Leyun; Cheng, Caixia; Haberkorn, Uwe; Dimitrakopoulou-Strauss, Antonia

    2017-04-05

    A variety of compartment models are used for the quantitative analysis of dynamic positron emission tomography (PET) data. Traditionally, these models use an iterative fitting (IF) method to minimize the least-squares difference between measured and calculated values over time, which may encounter problems such as overfitting of model parameters and a lack of reproducibility, especially when handling noisy or erroneous data. In this paper, a machine learning (ML) based kinetic modeling method is introduced, which can fully utilize a historical reference database to build a moderate kinetic model that deals with noisy data directly rather than trying to smooth the noise in the image. Owing to the database, the presented method is also capable of automatically adjusting the models using a multi-thread grid parameter searching technique. Furthermore, a candidate competition concept is proposed to combine the advantages of the ML and IF modeling methods, finding a balance between fitting the historical data and fitting the unseen target curve. The machine-learning-based method provides a robust and reproducible solution that is user-independent for VOI-based and pixel-wise quantitative analysis of dynamic PET data.

  14. [Prediction model of net photosynthetic rate of ginseng under forest based on optimized parameters support vector machine].

    PubMed

    Wu, Hai-wei; Yu, Hai-ye; Zhang, Lei

    2011-05-01

    Using the K-fold cross-validation method with two support vector machine functions, four kernel functions, grid search, a genetic algorithm, and particle swarm optimization, the authors constructed support vector machine models with the best penalty parameter c and the best correlation coefficient. Using information granulation technology, the authors constructed a P particle and an epsilon particle from the factors affecting net photosynthetic rate, thereby reducing the dimensionality of the determinants. The P particle comprises the percentages of visible-spectrum components; the epsilon particle comprises leaf temperature, scattered radiation, air temperature, and so on. This technology makes it possible to obtain the best correlation among photosynthetically active radiation, the visible spectrum, and individual net photosynthetic rate. The authors constructed training and forecasting sets comprising photosynthetically active radiation, the P particle, and the epsilon particle. The results show that the epsilon-SVR-RBF-genetic algorithm model, the nu-SVR-linear-grid-search model, and the nu-SVR-RBF-genetic algorithm model achieve correlation coefficients of up to 97% on the forecasting set comprising photosynthetically active radiation and the P particle. The penalty parameter c of the nu-SVR-linear-grid-search model is the smallest, so that model's generalization ability is the best. Forecasting the set comprising photosynthetically active radiation, the P particle, and the epsilon particle with this model yields a correlation coefficient of up to 96%.
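
    A minimal sketch of the grid-search portion of this workflow, using K-fold cross-validation to select the penalty parameter c for an epsilon-SVR with an RBF kernel, is shown below on synthetic stand-in data; the genetic algorithm and particle swarm variants are not reproduced, and all data values are hypothetical.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV

# Synthetic stand-in: photosynthetically active radiation plus other factors
# predicting net photosynthetic rate (units and values are hypothetical).
rng = np.random.default_rng(7)
X = rng.uniform(0, 1, size=(200, 4))
y = 10 * X[:, 0] / (0.2 + X[:, 0]) + rng.normal(scale=0.2, size=200)

grid = GridSearchCV(
    SVR(kernel="rbf"),
    param_grid={"C": [0.1, 1, 10, 100], "gamma": [0.01, 0.1, 1],
                "epsilon": [0.01, 0.1]},
    cv=5,  # K-fold cross-validation as in the abstract
)
grid.fit(X, y)
print("best penalty parameter C:", grid.best_params_["C"])
print("CV R^2 of best model:", round(grid.best_score_, 3))
```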

  15. The evolving market structures of gambling: case studies modelling the socioeconomic assignment of gaming machines in Melbourne and Sydney, Australia.

    PubMed

    Marshall, David C; Baker, Robert G V

    2002-01-01

    The expansion of gambling industries worldwide is intertwined with the growing government dependence on gambling revenue for fiscal assignments. In Australia, electronic gaming machines (EGMs) have dominated recent gambling industry growth. As EGMs have proliferated, growing recognition has emerged that EGM distribution closely reflects levels of socioeconomic disadvantage. More machines are located in less advantaged regions. This paper analyses time-series socioeconomic distributions of EGMs in Melbourne, Australia, an immature EGM market, and then compares the findings with the mature market in Sydney. Similar findings in both cities suggest that market assignment of EGMs transcends differences in historical and legislative environments. This indicates that similar underlying structures are evident in both markets. Modelling the spatial structures of gambling markets provides an opportunity to identify regions most at risk of gambling related problems. Subsequently, policies can be formulated which ensure fiscal revenue from gambling can be better targeted towards regions likely to be most afflicted by excessive gambling-related problems.

  16. Support vector machines for predictive modeling in heterogeneous catalysis: a comprehensive introduction and overfitting investigation based on two real applications.

    PubMed

    Baumes, L A; Serra, J M; Serna, P; Corma, A

    2006-01-01

    This work provides an introduction to support vector machines (SVMs) for predictive modeling in heterogeneous catalysis, describing the methodology step by step and highlighting the points that make this technique an attractive approach. We first investigate linear SVMs, working in detail through a simple example based on experimental data derived from a study aiming at optimizing olefin epoxidation catalysts by high-throughput experimentation. This case study was chosen to illustrate SVM features in a visual manner because of the few catalytic variables investigated. It is shown how SVMs transform the original data into another representation space of higher dimensionality. The concepts of Vapnik-Chervonenkis dimension and structural risk minimization are introduced. The SVM methodology is then evaluated on a second catalytic application, light paraffin isomerization. Finally, we discuss why the SVM is a strategic method compared with other machine learning techniques, such as neural networks or induction trees, and why emphasis is put on the problem of overfitting.
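
    The overfitting concern the authors emphasize can be illustrated by comparing training and test accuracy for SVMs with different kernels on a toy dataset standing in for catalyst descriptors; a large train/test gap is the overfitting signal. Everything below is an assumed, minimal illustration, not the paper's data or protocol.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Toy high-throughput-style dataset standing in for catalyst descriptors.
X, y = make_classification(n_samples=120, n_features=5, n_informative=3,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for kernel in ("linear", "rbf"):
    clf = SVC(kernel=kernel).fit(X_tr, y_tr)
    # A large train/test gap is the overfitting signal discussed above.
    print(kernel, "train:", round(clf.score(X_tr, y_tr), 2),
          "test:", round(clf.score(X_te, y_te), 2))
```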

  17. Modelling and calibration technique of laser triangulation sensors for integration in robot arms and articulated arm coordinate measuring machines.

    PubMed

    Santolaria, Jorge; Guillomía, David; Cajal, Carlos; Albajez, José A; Aguilar, Juan J

    2009-01-01

    A technique for intrinsic and extrinsic calibration of a laser triangulation sensor (LTS) integrated in an articulated arm coordinate measuring machine (AACMM) is presented in this paper. After applying a novel approach to the AACMM kinematic parameter identification problem, by means of a single calibration gauge object, a one-step calibration method has been developed to obtain both intrinsic parameters (laser plane, CCD sensor, and camera geometry) and extrinsic parameters related to the AACMM main frame. This allows the integration of the LTS and AACMM mathematical models without the need for additional optimization methods after the prior sensor calibration, usually done in a coordinate measuring machine (CMM) before the assembly of the sensor in the arm. The experimental test results for accuracy and repeatability show the suitable performance of this technique, resulting in a reliable, quick and friendly calibration method for the AACMM end user. The presented method is also valid for sensor integration in robot arms and CMMs.

  18. Toward the Development of a Fundamentally Based Chemical Model for Cyclopentanone: High-Pressure-Limit Rate Constants for H Atom Abstraction and Fuel Radical Decomposition

    SciTech Connect

    Zhou, Chong-Wen; Simmie, John M.; Pitz, William J.; Curran, Henry J.

    2016-08-25

    Theoretical aspects of the development of a chemical kinetic model for the pyrolysis and combustion of a cyclic ketone, cyclopentanone, are considered. We present calculated thermodynamic and kinetic data for the first time for the principal species including 2- and 3-oxo-cyclopentyl radicals, which are in reasonable agreement with the literature. Furthermore, these radicals can be formed via H atom abstraction reactions by H and Ö atoms and OH, HO2, and CH3 radicals, the rate constants of which have been calculated. Abstraction from the β-hydrogen atom is the dominant process when OH is involved, but the reverse holds true for HO2 radicals. We also determined the subsequent β-scission of the radicals formed, and it is shown that recent tunable VUV photoionization mass spectrometry experiments can be interpreted in this light. The bulk of the calculations used the composite model chemistry G4, which was benchmarked in the simplest case with a coupled cluster treatment, CCSD(T), in the complete basis set limit.

  19. Toward the Development of a Fundamentally Based Chemical Model for Cyclopentanone: High-Pressure-Limit Rate Constants for H Atom Abstraction and Fuel Radical Decomposition

    DOE PAGES

    Zhou, Chong-Wen; Simmie, John M.; Pitz, William J.; ...

    2016-08-25

    Theoretical aspects of the development of a chemical kinetic model for the pyrolysis and combustion of a cyclic ketone, cyclopentanone, are considered. We present calculated thermodynamic and kinetic data for the first time for the principal species including 2- and 3-oxo-cyclopentyl radicals, which are in reasonable agreement with the literature. Furthermore, these radicals can be formed via H atom abstraction reactions by H and Ö atoms and OH, HO2, and CH3 radicals, the rate constants of which have been calculated. Abstraction from the β-hydrogen atom is the dominant process when OH is involved, but the reverse holds true for HO2 radicals. We also determined the subsequent β-scission of the radicals formed, and it is shown that recent tunable VUV photoionization mass spectrometry experiments can be interpreted in this light. The bulk of the calculations used the composite model chemistry G4, which was benchmarked in the simplest case with a coupled cluster treatment, CCSD(T), in the complete basis set limit.

  20. Toward the Development of a Fundamentally Based Chemical Model for Cyclopentanone: High-Pressure-Limit Rate Constants for H Atom Abstraction and Fuel Radical Decomposition.

    PubMed

    Zhou, Chong-Wen; Simmie, John M; Pitz, William J; Curran, Henry J

    2016-09-15

    Theoretical aspects of the development of a chemical kinetic model for the pyrolysis and combustion of a cyclic ketone, cyclopentanone, are considered. Calculated thermodynamic and kinetic data are presented for the first time for the principal species including 2- and 3-oxo-cyclopentyl radicals, which are in reasonable agreement with the literature. These radicals can be formed via H atom abstraction reactions by Ḣ and Ö atoms and ȮH, HȮ2, and ĊH3 radicals, the rate constants of which have been calculated. Abstraction from the β-hydrogen atom is the dominant process when ȮH is involved, but the reverse holds true for HȮ2 radicals. The subsequent β-scission of the radicals formed is also determined, and it is shown that recent tunable VUV photoionization mass spectrometry experiments can be interpreted in this light. The bulk of the calculations used the composite model chemistry G4, which was benchmarked in the simplest case with a coupled cluster treatment, CCSD(T), in the complete basis set limit.
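
    High-pressure-limit rate constants of the kind reported here are usually expressed in the modified Arrhenius form k(T) = A T^n exp(-Ea/RT). The helper below evaluates that form; the parameter values are placeholders, not the paper's fitted coefficients.

```python
import math

R = 8.314  # gas constant, J mol^-1 K^-1

def rate_constant(A, n, Ea, T):
    """Modified Arrhenius form k(T) = A * T**n * exp(-Ea / (R*T)),
    the usual representation for high-pressure-limit rate constants."""
    return A * T**n * math.exp(-Ea / (R * T))

# Purely illustrative parameters (NOT the paper's fitted values).
print(rate_constant(A=1.0e6, n=2.0, Ea=20_000.0, T=1000.0))
```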

  1. Meeting Abstracts - Annual Meeting 2016.

    PubMed

    2016-04-01

    The AMCP Abstracts program provides a forum through which authors can share their insights and outcomes of advanced managed care practice through publication in AMCP's Journal of Managed Care & Specialty Pharmacy (JMCP). Most of the reviewed and unreviewed abstracts are presented as posters so that interested AMCP meeting attendees can review findings and query authors. The Student/Resident/Fellow poster presentation (unreviewed) is Wednesday, April 20, 2016, and the Professional poster presentation (reviewed) is Thursday, April 21. The Professional posters will also be displayed on Friday, April 22. The reviewed abstracts are published in the JMCP Meeting Abstracts supplement. The AMCP Managed Care & Specialty Pharmacy Annual Meeting 2016 in San Francisco, California, is expected to attract more than 3,500 managed care pharmacists and other health care professionals who manage and evaluate drug therapies, develop and manage networks, and work with medical managers and information specialists to improve the care of all individuals enrolled in managed care programs. Abstracts were submitted in the following categories: Research Report: describe completed original research on managed care pharmacy services or health care interventions. Examples include (but are not limited to) observational studies using administrative claims, reports of the impact of unique benefit design strategies, and analyses of the effects of innovative administrative or clinical programs. Economic Model: describe models that predict the effect of various benefit design or clinical decisions on a population. For example, an economic model could be used to predict the budget impact of a new pharmaceutical product on a health care system. Solving Problems in Managed Care: describe the specific steps taken to introduce a needed change, develop and implement a new system or program, plan and organize an administrative function, or solve other types of problems in managed care settings. These

  2. Automatic Abstraction in Planning

    NASA Technical Reports Server (NTRS)

    Christensen, J.

    1991-01-01

    Traditionally, abstraction in planning has been accomplished by either state abstraction or operator abstraction, neither of which has been fully automatic. We present a new method, predicate relaxation, for automatically performing state abstraction. PABLO, a nonlinear hierarchical planner, implements predicate relaxation. Theoretical as well as empirical results are presented which demonstrate the potential advantages of using predicate relaxation in planning. We also present a new definition of hierarchical operators that allows us to guarantee a limited form of completeness. This new definition is shown to be, in some ways, more flexible than previous definitions of hierarchical operators. Finally, a Classical Truth Criterion is presented that is proven to be sound and complete for a planning formalism general enough to include most classical planning formalisms based on the STRIPS assumption.

  3. Searching Sociological Abstracts.

    ERIC Educational Resources Information Center

    Kerbel, Sandra Sandor

    1981-01-01

    Describes the scope, content, and retrieval characteristics of Sociological Abstracts, an online database of literature in the social sciences. Sample searches are displayed, and the strengths and weaknesses of the database are summarized. (FM)

  4. Conference Abstracts: AEDS '82.

    ERIC Educational Resources Information Center

    Journal of Computers in Mathematics and Science Teaching, 1982

    1982-01-01

    Abstracts from nine selected papers presented at the 1982 Association for Educational Data Systems (AEDS) conference are provided. Copies of conference proceedings may be obtained for fifteen dollars from the Association. (MP)

  5. Abstracts of SIG Sessions.

    ERIC Educational Resources Information Center

    Proceedings of the ASIS Annual Meeting, 1997

    1997-01-01

    Presents abstracts of SIG Sessions. Highlights include digital collections; information retrieval methods; public interest/fair use; classification and indexing; electronic publication; funding; globalization; information technology projects; interface design; networking in developing countries; metadata; multilingual databases; networked…

  6. Abstracts of contributed papers

    SciTech Connect

    Not Available

    1994-08-01

    This volume contains 571 abstracts of contributed papers to be presented during the Twelfth US National Congress of Applied Mechanics. Abstracts are arranged in the order in which they fall in the program -- the main sessions are listed chronologically in the Table of Contents. The Author Index is in alphabetical order and lists each paper number (matching the schedule in the Final Program) with its corresponding page number in the book.

  7. Weibull Multiplicative Model and Machine Learning Models for Full-Automatic Dark-Spot Detection from SAR Images

    NASA Astrophysics Data System (ADS)

    Taravat, A.; Del Frate, F.

    2013-09-01

    As a major aspect of marine pollution, oil release into the sea has serious biological and environmental impacts. Among remote sensing systems, which offer a non-destructive investigation method, synthetic aperture radar (SAR) can provide valuable synoptic information about the position and size of an oil spill due to its wide-area coverage and day/night, all-weather capabilities. In this paper we present a new automated method for oil-spill monitoring, based on the combination of the Weibull Multiplicative Model and machine learning techniques to differentiate between dark spots and the background. First, a filter based on the Weibull Multiplicative Model is applied to each sub-image. Second, the sub-image is segmented by two different neural network techniques (Pulse-Coupled Neural Networks and Multilayer Perceptron Neural Networks). As the last step, a very simple filtering process is used to eliminate false targets. The proposed approaches were tested on 20 ENVISAT and ERS2 images containing dark spots, with the same parameters used in all tests. For the overall dataset, average accuracies of 94.05% and 95.20% were obtained for the PCNN and MLP methods, respectively. The average computational time for dark-spot detection on a 256 × 256 image is about 4 s for PCNN segmentation using IDL software, which is at present the fastest in this field. Our experimental results demonstrate that the proposed approach is fast, robust and effective, and can be applied to future spaceborne SAR images.

  8. Using the Relevance Vector Machine Model Combined with Local Phase Quantization to Predict Protein-Protein Interactions from Protein Sequences.

    PubMed

    An, Ji-Yong; Meng, Fan-Rong; You, Zhu-Hong; Fang, Yu-Hong; Zhao, Yu-Jun; Zhang, Ming

    2016-01-01

    We propose a novel computational method known as RVM-LPQ that combines the Relevance Vector Machine (RVM) model and Local Phase Quantization (LPQ) to predict PPIs from protein sequences. The main improvements are the results of representing protein sequences using the LPQ feature representation on a Position Specific Scoring Matrix (PSSM), reducing the influence of noise using a Principal Component Analysis (PCA), and using a Relevance Vector Machine (RVM) based classifier. We perform 5-fold cross-validation experiments on Yeast and Human datasets, and we achieve very high accuracies of 92.65% and 97.62%, respectively, which is significantly better than previous works. To further evaluate the proposed method, we compare it with the state-of-the-art support vector machine (SVM) classifier on the Yeast dataset. The experimental results demonstrate that our RVM-LPQ method is obviously better than the SVM-based method. The promising experimental results show the efficiency and simplicity of the proposed method, which can be an automatic decision support tool for future proteomics research.

  9. Using the Relevance Vector Machine Model Combined with Local Phase Quantization to Predict Protein-Protein Interactions from Protein Sequences

    PubMed Central

    An, Ji-Yong; Meng, Fan-Rong; You, Zhu-Hong; Fang, Yu-Hong; Zhao, Yu-Jun; Zhang, Ming

    2016-01-01

    We propose a novel computational method known as RVM-LPQ that combines the Relevance Vector Machine (RVM) model and Local Phase Quantization (LPQ) to predict PPIs from protein sequences. The main improvements are the results of representing protein sequences using the LPQ feature representation on a Position Specific Scoring Matrix (PSSM), reducing the influence of noise using a Principal Component Analysis (PCA), and using a Relevance Vector Machine (RVM) based classifier. We perform 5-fold cross-validation experiments on Yeast and Human datasets, and we achieve very high accuracies of 92.65% and 97.62%, respectively, which is significantly better than previous works. To further evaluate the proposed method, we compare it with the state-of-the-art support vector machine (SVM) classifier on the Yeast dataset. The experimental results demonstrate that our RVM-LPQ method is obviously better than the SVM-based method. The promising experimental results show the efficiency and simplicity of the proposed method, which can be an automatic decision support tool for future proteomics research. PMID:27314023

  10. Modeling and Control of a Double-effect Absorption Refrigerating Machine

    NASA Astrophysics Data System (ADS)

    Hihara, Eiji; Yamamoto, Yuuji; Saito, Takamoto; Nagaoka, Yoshikazu; Nishiyama, Noriyuki

    Because the heat capacity of absorption refrigerating machines is large compared with that of vapor compression refrigerating machines, their dynamic characteristics under changing cooling load conditions need improvement. A control method for the energy input and the weak-solution flow rate following cooling load variations was investigated. Because changes in cooling load and cooling capacity are moderate, the optimal operating conditions corresponding to the cooling load can be estimated from steady-state characteristics. If the relation between the cooling load and the optimal operating conditions is well known, feed-forward control can be employed. In this report a new control algorithm, called MOL (Multi-variable Open Loop) control, is proposed. Compared with conventional proportional control of the chilled-water outlet temperature, MOL control enables smooth changes in cooling capacity and a reduction in fuel consumption.

  11. Human-machine interactions

    DOEpatents

    Forsythe, J. Chris; Xavier, Patrick G.; Abbott, Robert G.; Brannon, Nathan G.; Bernard, Michael L.; Speed, Ann E.

    2009-04-28

    Digital technology utilizing a cognitive model based on human naturalistic decision-making processes, including pattern recognition and episodic memory, can reduce the dependency of human-machine interactions on the abilities of a human user and can enable a machine to more closely emulate human-like responses. Such a cognitive model can enable digital technology to use cognitive capacities fundamental to human-like communication and cooperation to interact with humans.

  12. Modeling of Residual Stress and Machining Distortion in Aerospace Components (PREPRINT)

    DTIC Science & Technology

    2010-03-01

    ...material utilization and aircraft system efficiency, which result in lower environmental impact. Distortion of machined titanium and nickel alloys... such as casting, forging, rolling, etc., in which the material is heated to very high temperatures. A typical wrought component of a titanium- or nickel-based alloy begins as an ingot; the cast structure is broken down into billet form, which is then forged into the rough shape of the component...

  13. Designing a stencil compiler for the Connection Machine model CM-5

    SciTech Connect

    Brickner, R.G.; Holian, K.; Thiagarajan, B.; Johnsson, S.L. |

    1994-12-31

    In this paper the authors present the design of a stencil compiler for the Connection Machine system CM-5. The stencil compiler will optimize the data motion between processing nodes, minimize the data motion within a node, and minimize the data motion between registers and local memory in a node. The compiler will natively support two-dimensional stencils, but stencils in three dimensions will be automatically decomposed. Lower dimensional stencils are treated as degenerate stencils. The compiler will be integrated as part of the CM Fortran programming system. Much of the compiler code will be adapted from the CM-2/200 stencil compiler, which is part of CMSSL (the Connection Machine Scientific Software Library) Release 3.1 for the CM-2/200, and the compiler will be available as part of the Connection Machine Scientific Software Library (CMSSL) for the CM-5. In addition to setting down design considerations, they report on the implementation status of the stencil compiler. In particular, they discuss optimization strategies and status of code conversion from CM-2/200 to CM-5 architecture, and report on the measured performance of prototype target code which the compiler will generate.
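
    For readers unfamiliar with the term, a stencil updates each grid point from a fixed pattern of neighbours, and it is exactly this regular data motion that a stencil compiler schedules across processors, local memory, and registers. A minimal five-point example in NumPy follows; it is illustrative only and unrelated to the CM Fortran or CMSSL implementation.

```python
import numpy as np

def five_point_stencil(u):
    """Apply a 2-D five-point averaging stencil to the interior of u.
    Shifted-array slicing keeps the data motion regular, which is the
    kind of access pattern a stencil compiler schedules explicitly."""
    out = u.copy()
    out[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                              u[1:-1, :-2] + u[1:-1, 2:])
    return out

u = np.random.default_rng(3).random((8, 8))
print(five_point_stencil(u)[1:-1, 1:-1].shape)  # updated interior: (6, 6)
```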

  14. Interpreting support vector machine models for multivariate group wise analysis in neuroimaging.

    PubMed

    Gaonkar, Bilwaj; Shinohara, Russell T.; Davatzikos, Christos

    2015-08-01

    Machine learning based classification algorithms like support vector machines (SVMs) have shown great promise for turning high-dimensional neuroimaging data into clinically useful decision criteria. However, tracing the imaging-based patterns that contribute significantly to classifier decisions remains an open problem. This is an issue of critical importance in imaging studies seeking to determine which anatomical or physiological imaging features contribute to the classifier's decision, thereby allowing users to critically evaluate the findings of such machine learning methods and to understand disease mechanisms. The majority of published work addresses the question of statistical inference for support vector classification using permutation tests based on SVM weight vectors. Such permutation testing ignores the SVM margin, which is critical in SVM theory. In this work we emphasize the use of a statistic that explicitly accounts for the SVM margin and show that the null distributions associated with this statistic are asymptotically normal. Further, our experiments show that this statistic is far less conservative than weight-based permutation tests and yet specific enough to tease out multivariate patterns in the data. Thus, we can better understand the multivariate patterns that the SVM uses for neuroimaging-based classification.
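
    For context, the weight-based permutation test that the paper argues against can be sketched in a few lines: permute the labels, refit the SVM, and compare the observed weights against the permutation null. The margin-aware statistic proposed in the paper is not reproduced here, and all data below are synthetic.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X = rng.normal(size=(80, 50))            # stand-in for imaging features
y = rng.integers(0, 2, size=80)

def svm_weights(X, y):
    return LinearSVC(C=1.0, max_iter=20000).fit(X, y).coef_.ravel()

observed = svm_weights(X, y)
# Null distribution from label permutations (the weight-based test the
# paper criticizes; its margin-aware statistic is not implemented here).
null = np.array([svm_weights(X, rng.permutation(y)) for _ in range(200)])
p_values = (np.abs(null) >= np.abs(observed)).mean(axis=0)
print("features with p < 0.05:", int((p_values < 0.05).sum()))
```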

  15. Interpreting support vector machine models for multivariate group wise analysis in neuroimaging

    PubMed Central

    Gaonkar, Bilwaj; Shinohara, Russell T; Davatzikos, Christos

    2015-01-01

    Machine learning based classification algorithms like support vector machines (SVMs) have shown great promise for turning high-dimensional neuroimaging data into clinically useful decision criteria. However, tracing the imaging-based patterns that contribute significantly to classifier decisions remains an open problem. This is an issue of critical importance in imaging studies seeking to determine which anatomical or physiological imaging features contribute to the classifier’s decision, thereby allowing users to critically evaluate the findings of such machine learning methods and to understand disease mechanisms. The majority of published work addresses the question of statistical inference for support vector classification using permutation tests based on SVM weight vectors. Such permutation testing ignores the SVM margin, which is critical in SVM theory. In this work we emphasize the use of a statistic that explicitly accounts for the SVM margin and show that the null distributions associated with this statistic are asymptotically normal. Further, our experiments show that this statistic is far less conservative than weight-based permutation tests and yet specific enough to tease out multivariate patterns in the data. Thus, we can better understand the multivariate patterns that the SVM uses for neuroimaging-based classification. PMID:26210913

  16. An in vivo autotransplant model of renal preservation: cold storage versus machine perfusion in the prevention of ischemia/reperfusion injury.

    PubMed

    La Manna, Gaetano; Conte, Diletta; Cappuccilli, Maria Laura; Nardo, Bruno; D'Addio, Francesca; Puviani, Lorenza; Comai, Giorgia; Bianchi, Francesca; Bertelli, Riccardo; Lanci, Nicole; Donati, Gabriele; Scolari, Maria Piera; Faenza, Alessandro; Stefoni, Sergio

    2009-07-01

    There is increasing evidence that organ preservation by machine perfusion is able to limit ischemia/reperfusion injury in kidney transplantation. This study was designed to compare the efficiency of hypothermic organ preservation by machine perfusion or cold storage in an animal model of kidney autotransplantation. Twelve pigs underwent left nephrectomy after a warm ischemic time; the organs were preserved by machine perfusion (n = 6) or cold storage (n = 6) and then autotransplanted with immediate contralateral nephrectomy. The following parameters were compared between the two groups of animals: hematological and urine indexes of renal function, blood/gas analysis values, histological features, tissue adenosine-5'-triphosphate (ATP) content, perforin gene expression in kidney biopsies, and organ weight changes before and after preservation. The amount of cellular ATP was significantly higher in organs preserved by machine perfusion; moreover, the study of apoptosis induction revealed enhanced perforin expression in the kidneys that underwent simple hypothermic preservation compared with the machine-preserved ones. Organ weight was significantly decreased after cold storage but remained quite stable for machine-perfused kidneys. The present model suggests that organ preservation by hypothermic machine perfusion is able to better control cellular impairment in comparison with cold storage.

  17. Correlation Electron Temperature Fluctuation Measurements on Alcator C-Mod and ASDEX Upgrade: Cross Machine Comparisons and Transport Model Validation

    NASA Astrophysics Data System (ADS)

    White, A. E.; Creely, A. J.; Freethy, S.; Cao, N.; Conway, G. D.; Goerler, T.; Happel, T.; Howard, N. T.; Inman, C.; Rice, J. E.; Rodriguez Fernandez, P.; Sung, C.; Alcator C-Mod Team; ASDEX Upgrade Team

    2016-10-01

    Correlation Electron Cyclotron Emission diagnostics have been developed for Alcator C-Mod and ASDEX Upgrade. Long-wavelength (k_theta*rho_s < 0.5) electron temperature fluctuations have been measured in the core plasma (r/a > 0.5), enabling cross-machine comparisons as well as multi-machine transport model validation, using nonlinear simulations with the GENE and GYRO codes and reduced models such as TGLF. Electron temperature fluctuations, and their correlation with density fluctuations, which can be measured with coupled radiometer/reflectometer diagnostics, provide valuable constraints on gyrokinetic models. Recent results in transport model validation at both C-Mod and AUG are presented. This work is supported by the US DOE under Grants DE-SC0006419 and DEFC02-99ER54512-CMOD.

  18. Metacognition and abstract reasoning.

    PubMed

    Markovits, Henry; Thompson, Valerie A; Brisson, Janie

    2015-05-01

    The nature of people's meta-representations of deductive reasoning is critical to understanding how people control their own reasoning processes. We conducted two studies to examine whether people have a metacognitive representation of abstract validity and whether familiarity alone acts as a separate metacognitive cue. In Study 1, participants were asked to make a series of (1) abstract conditional inferences, (2) concrete conditional inferences with premises having many potential alternative antecedents and thus specifically conducive to the production of responses consistent with conditional logic, or (3) concrete problems with premises having relatively few potential alternative antecedents. Participants gave confidence ratings after each inference. Results show that confidence ratings were positively correlated with logical performance on abstract problems and concrete problems with many potential alternatives, but not with concrete problems with content less conducive to normative responses. Confidence ratings were higher with few alternatives than for abstract content. Study 2 used a generation of contrary-to-fact alternatives task to improve levels of abstract logical performance. The resulting increase in logical performance was mirrored by increases in mean confidence ratings. Results provide evidence for a metacognitive representation based on logical validity, and show that familiarity acts as a separate metacognitive cue.

  19. Gaussian Process Regression for Predictive But Interpretable Machine Learning Models: An Example of Predicting Mental Workload across Tasks.

    PubMed

    Caywood, Matthew S; Roberts, Daniel M; Colombe, Jeffrey B; Greenwald, Hal S; Weiland, Monica Z

    2016-01-01

    There is increasing interest in real-time brain-computer interfaces (BCIs) for the passive monitoring of human cognitive state, including cognitive workload. Too often, however, effective BCIs based on machine learning techniques may function as "black boxes" that are difficult to analyze or interpret. In an effort toward more interpretable BCIs, we studied a family of N-back working memory tasks using a machine learning model, Gaussian Process Regression (GPR), which was both powerful and amenable to analysis. Participants performed the N-back task with three stimulus variants, auditory-verbal, visual-spatial, and visual-numeric, each at three working memory loads. GPR models were trained and tested on EEG data from all three task variants combined, in an effort to identify a model that could be predictive of mental workload demand regardless of stimulus modality. To provide a comparison for GPR performance, a model was additionally trained using multiple linear regression (MLR). The GPR model was effective when trained on individual participant EEG data, resulting in an average standardized mean squared error (sMSE) between true and predicted N-back levels of 0.44. In comparison, the MLR model using the same data resulted in an average sMSE of 0.55. We additionally demonstrate how GPR can be used to identify which EEG features are relevant for prediction of cognitive workload in an individual participant. A fraction of EEG features accounted for the majority of the model's predictive power; using only the top 25% of features performed nearly as well as using 100% of features. Subsets of features identified by linear models (ANOVA) were not as efficient as subsets identified by GPR. This raises the possibility of BCIs that require fewer model features while capturing all of the information needed to achieve high predictive accuracy.
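
    A compact sketch of the GPR-versus-MLR comparison on synthetic stand-in features; the EEG data, kernel choice, and the exact sMSE definition here are assumptions, with sMSE approximated as MSE divided by the target variance.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for per-trial EEG features and N-back load targets.
rng = np.random.default_rng(5)
X = rng.normal(size=(150, 20))
y = X[:, :3].sum(axis=1) + rng.normal(scale=0.5, size=150)  # workload proxy

gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
for name, model in [("GPR", gpr), ("MLR", LinearRegression())]:
    mse = -cross_val_score(model, X, y, cv=5,
                           scoring="neg_mean_squared_error").mean()
    print(f"{name}: standardized CV MSE = {mse / y.var():.2f}")
```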

  20. A new method for the prediction of chatter stability lobes based on dynamic cutting force simulation model and support vector machine

    NASA Astrophysics Data System (ADS)

    Peng, Chong; Wang, Lun; Liao, T. Warren

    2015-10-01

    Chatter has become a critical factor hindering machining quality and productivity in machining processes. To avoid cutting chatter, a new method based on a dynamic cutting force simulation model and a support vector machine (SVM) is presented for the prediction of chatter stability lobes. The cutting force is selected as the monitoring signal, and wavelet energy entropy theory is used to extract the feature vectors. A support vector machine is constructed using the MATLAB LIBSVM toolbox for pattern classification based on the feature vectors derived from the experimental cutting data. Combined with the dynamic cutting force simulation model, the stability lobes diagram (SLD) can then be estimated. Finally, the predicted results are compared with existing methods such as the zero-order analytical (ZOA) and semi-discretization (SD) methods, as well as actual cutting experimental results, to confirm the validity of this new method.
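
    The feature-extraction step can be sketched as follows: compute relative wavelet sub-band energies of the force signal plus their Shannon entropy, then feed the features to an SVM. The abstract's MATLAB/LIBSVM pipeline is mirrored here in Python with PyWavelets and scikit-learn; the wavelet, decomposition level, toy signals, and the exact definition of "wavelet energy entropy" are assumptions.

```python
import numpy as np
import pywt  # PyWavelets
from sklearn.svm import SVC

def wavelet_energy_entropy(signal, wavelet="db4", level=4):
    """Feature vector of relative wavelet sub-band energies plus their
    Shannon entropy (one common reading of 'wavelet energy entropy')."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    energies = np.array([np.sum(c**2) for c in coeffs])
    p = energies / energies.sum()
    entropy = -np.sum(p * np.log(p + 1e-12))
    return np.append(p, entropy)

# Toy cutting-force signals: broadband noise (stable) vs a strong tone
# plus noise (chatter-like); real features would come from measured forces.
rng = np.random.default_rng(2)
t = np.linspace(0, 1, 1024)
stable = [rng.normal(size=1024) for _ in range(20)]
chatter = [np.sin(2 * np.pi * 120 * t) + rng.normal(size=1024)
           for _ in range(20)]
X = np.array([wavelet_energy_entropy(s) for s in stable + chatter])
y = np.array([0] * 20 + [1] * 20)
print("training accuracy:", SVC(kernel="rbf").fit(X, y).score(X, y))
```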

  1. A Semantic Theory of Abstractions: A Preliminary Report

    NASA Technical Reports Server (NTRS)

    Nayak, P. Pandurang; Levy, Alon Y.; Lum, Henry, Jr. (Technical Monitor)

    1994-01-01

    In this paper we present a semantic theory of abstractions based on viewing abstractions as interpretations between theories. This theory captures important aspects of abstractions not captured in the theory of abstractions presented by Giunchiglia and Walsh. Instead of viewing abstractions as syntactic mappings, we view abstractions as a two step process: the intended domain model is first abstracted and then a set of (abstract) formulas is constructed to capture the abstracted domain model. Viewing and justifying abstractions as model level transformations is both natural and insightful. We provide a precise characterization of the abstract theory that exactly implements the intended abstraction, and show that this theory, while being axiomatizable, is not always finitely axiomatizable. A simple corollary of the latter result disproves a conjecture made by Tenenberg that if a theory is finitely axiomatizable, then predicate abstraction of that theory leads to a finitely axiomatizable theory.

  2. Gaussian Process Regression for Predictive But Interpretable Machine Learning Models: An Example of Predicting Mental Workload across Tasks

    PubMed Central

    Caywood, Matthew S.; Roberts, Daniel M.; Colombe, Jeffrey B.; Greenwald, Hal S.; Weiland, Monica Z.

    2017-01-01

    There is increasing interest in real-time brain-computer interfaces (BCIs) for the passive monitoring of human cognitive state, including cognitive workload. Too often, however, effective BCIs based on machine learning techniques may function as “black boxes” that are difficult to analyze or interpret. In an effort toward more interpretable BCIs, we studied a family of N-back working memory tasks using a machine learning model, Gaussian Process Regression (GPR), which was both powerful and amenable to analysis. Participants performed the N-back task with three stimulus variants, auditory-verbal, visual-spatial, and visual-numeric, each at three working memory loads. GPR models were trained and tested on EEG data from all three task variants combined, in an effort to identify a model that could be predictive of mental workload demand regardless of stimulus modality. To provide a comparison for GPR performance, a model was additionally trained using multiple linear regression (MLR). The GPR model was effective when trained on individual participant EEG data, resulting in an average standardized mean squared error (sMSE) between true and predicted N-back levels of 0.44. In comparison, the MLR model using the same data resulted in an average sMSE of 0.55. We additionally demonstrate how GPR can be used to identify which EEG features are relevant for prediction of cognitive workload in an individual participant. A fraction of EEG features accounted for the majority of the model’s predictive power; using only the top 25% of features performed nearly as well as using 100% of features. Subsets of features identified by linear models (ANOVA) were not as efficient as subsets identified by GPR. This raises the possibility of BCIs that require fewer model features while capturing all of the information needed to achieve high predictive accuracy. PMID:28123359

  3. Electric machine

    DOEpatents

    El-Refaie, Ayman Mohamed Fawzi [Niskayuna, NY; Reddy, Patel Bhageerath [Madison, WI

    2012-07-17

    An interior permanent magnet electric machine is disclosed. The interior permanent magnet electric machine comprises a rotor comprising a plurality of radially placed magnets each having a proximal end and a distal end, wherein each magnet comprises a plurality of magnetic segments and at least one magnetic segment towards the distal end comprises a high resistivity magnetic material.

  4. Evaluation of different time domain peak models using extreme learning machine-based peak detection for EEG signal.

    PubMed

    Adam, Asrul; Ibrahim, Zuwairie; Mokhtar, Norrima; Shapiai, Mohd Ibrahim; Cumming, Paul; Mubin, Marizan

    2016-01-01

    Various peak models have been introduced to detect and analyze peaks in the time-domain analysis of electroencephalogram (EEG) signals. In general, a peak model in the time-domain analysis consists of a set of signal parameters, such as amplitude, width, and slope. Models including those proposed by Dumpala, Acir, Liu, and Dingle are routinely used to detect peaks in EEG signals acquired in clinical studies of epilepsy or eye blink. The optimal peak model is the one with the most reliable peak detection performance in a particular application. A fair measure of the performance of different models requires a common and unbiased platform. In this study, we evaluate the performance of the four different peak models using an extreme learning machine (ELM)-based peak detection algorithm. We found that the Dingle model gave the best performance, with 72% accuracy in the analysis of real EEG data. Statistical analysis confirmed that the Dingle model afforded significantly better mean testing accuracy than the Acir and Liu models, which were in the range 37-52%, while showing no significant difference compared to the Dumpala model.
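
    An ELM reduces to a randomly weighted hidden layer followed by a least-squares readout, which is small enough to write out. The sketch below is a generic ELM on synthetic peak-parameter features (amplitude, width, slopes); it is not the paper's implementation or data.

```python
import numpy as np

class TinyELM:
    """Minimal extreme learning machine: a fixed random hidden layer
    followed by a least-squares readout (a sketch, not the paper's code)."""
    def __init__(self, n_hidden=50, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)

    def fit(self, X, y):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        self.beta, *_ = np.linalg.lstsq(self._hidden(X), y, rcond=None)
        return self

    def predict(self, X):
        return (self._hidden(X) @ self.beta > 0.5).astype(int)

# Synthetic stand-in for peak features: amplitude, width, and two slopes.
rng = np.random.default_rng(4)
X = rng.normal(size=(200, 4))
y = (X[:, 0] - X[:, 1] > 0).astype(int)   # toy peak / non-peak labels
print("accuracy:", (TinyELM().fit(X, y).predict(X) == y).mean())
```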

  5. Financial and environmental modelling of water hardness--implications for utilising harvested rainwater in washing machines.

    PubMed

    Morales-Pinzón, Tito; Lurueña, Rodrigo; Gabarrell, Xavier; Gasol, Carles M; Rieradevall, Joan

    2014-02-01

    A study was conducted to determine the financial and environmental effects of water quality on rainwater harvesting systems. The potential for replacing tap water used in washing machines with rainwater was studied, so the analysis presented in this paper is valid for applications, including washing machines, where tap water hardness may be important. A wide range of conditions was analysed: rainfall (284-1,794 mm/year); water hardness (14-315 mg/L CaCO3); tap water prices (0.85-2.65 Euros/m(3)) in different Spanish urban areas (from individual buildings to whole neighbourhoods); and other scenarios (including materials and water storage capacity). Rainfall was essential for rainwater harvesting, but tap water prices and water hardness were the main factors for consideration in the financial and the environmental analyses, respectively. Local tap water hardness and prices can cause greater financial and environmental impacts than the type of material used for the water storage tank or the volume of the tank. The use of rainwater as a substitute for hard water in washing machines favours the financial case. Although tap water hardness significantly affects the financial analysis, the greatest effect was found in the environmental analysis. When hard tap water needed to be replaced, a water price of 1 Euro/m(3) could render the use of rainwater financially feasible when using large-scale rainwater harvesting systems. When the water hardness was greater than 300 mg/L CaCO3, the financial analysis revealed that a net present value greater than 270 Euros/dwelling could be obtained at the neighbourhood scale, along with a reduction in Global Warming Potential (100 years) of between 35 and 101 kg CO2 eq./dwelling/year.
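
    The net-present-value figure quoted above follows the standard discounting formula, sketched below with invented cash flows; the tank cost, harvested volume, water price, horizon, and discount rate are all placeholders, not the study's values.

```python
def npv(rate, cashflows):
    """Net present value of yearly cash flows; cashflows[0] is year 0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

# Hypothetical neighbourhood-scale system: up-front tank cost, then yearly
# savings = harvested volume (m^3) * tap water price (Euros/m^3).
capex = -2000.0
yearly_saving = 180.0 * 1.0          # 180 m^3/yr at 1 Euro/m^3
flows = [capex] + [yearly_saving] * 25
print(f"NPV over 25 years at 3%: {npv(0.03, flows):.0f} Euros")
```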

  6. Quantum Boltzmann Machine

    NASA Astrophysics Data System (ADS)

    Kulchytskyy, Bohdan; Andriyash, Evgeny; Amin, Mohammed; Melko, Roger

    The field of machine learning has been revolutionized by the recent improvements in the training of deep networks. Their architecture is based on a set of stacked layers of simpler modules. One of the most successful building blocks, known as a restricted Boltzmann machine, is an energetic model based on the classical Ising Hamiltonian. In our work, we investigate the benefits of quantum effects on the learning capacity of Boltzmann machines by extending the underlying Hamiltonian with a transverse field. For this purpose, we employ exact and stochastic training procedures on data sets with physical origins.
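
    The transverse-field extension mentioned here is conventionally written as the transverse-field Ising Hamiltonian below (a standard form; the authors' exact parameterization may differ). Setting Γ = 0 recovers the classical Ising energy of an ordinary Boltzmann machine.

```latex
H \;=\; -\sum_{i<j} w_{ij}\,\sigma^{z}_{i}\sigma^{z}_{j}
        \;-\; \sum_{i} b_{i}\,\sigma^{z}_{i}
        \;-\; \Gamma \sum_{i} \sigma^{x}_{i}
```

    Here the σ's are Pauli operators, w_ij and b_i are the couplings and biases being learned, and Γ sets the strength of the quantum (transverse-field) term.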

  7. Leadership Abstracts, 2001.

    ERIC Educational Resources Information Center

    Wilson, Cynthia, Ed.

    2001-01-01

    This is volume 14 of Leadership Abstracts, a newsletter published by the League for Innovation (California). Issue 1 of February 2001, "Developmental Education: A Policy Primer," discusses developmental programs in the community college. According to the article, community college trustees and presidents would serve their constituents well by…

  8. Abstract Film and Beyond.

    ERIC Educational Resources Information Center

    Le Grice, Malcolm

    A theoretical and historical account of the main preoccupations of makers of abstract films is presented in this book. The book's scope includes discussion of nonrepresentational forms as well as examination of experiments in the manipulation of time in films. The ten chapters discuss the following topics: art and cinematography, the first…

  9. Leadership Abstracts, 1993.

    ERIC Educational Resources Information Center

    Doucette, Don, Ed.

    1993-01-01

    This document includes 10 issues of Leadership Abstracts (volume 6, 1993), a newsletter published by the League for Innovation in the Community College (California). The featured articles are: (1) "Reinventing Government" by David T. Osborne; (2) "Community College Workforce Training Programs: Expanding the Mission to Meet Critical Needs" by…

  10. Leadership Abstracts, 1999.

    ERIC Educational Resources Information Center

    Leadership Abstracts, 1999

    1999-01-01

    This document contains five Leadership Abstracts publications published February-December 1999. The article, "Teaching the Teachers: Meeting the National Teacher Preparation Challenge," authored by George R. Boggs and Sadie Bragg, examines the community college role and makes recommendations and a call to action for teacher education.…

  11. Computers in Abstract Algebra

    ERIC Educational Resources Information Center

    Nwabueze, Kenneth K.

    2004-01-01

    The current emphasis on flexible modes of mathematics delivery involving new information and communication technology (ICT) at the university level is perhaps a reaction to the recent change in the objectives of education. Abstract algebra seems to be one area of mathematics virtually crying out for computer instructional support because of the…

  12. 2002 NASPSA Conference Abstracts.

    ERIC Educational Resources Information Center

    Journal of Sport & Exercise Psychology, 2002

    2002-01-01

    Contains abstracts from the 2002 conference of the North American Society for the Psychology of Sport and Physical Activity. The publication is divided into three sections: the preconference workshop, "Effective Teaching Methods in the Classroom;" symposia (motor development, motor learning and control, and sport psychology); and free…

  13. Reasoning abstractly about resources

    NASA Technical Reports Server (NTRS)

    Clement, B.; Barrett, A.

    2001-01-01

    This paper describes a way to schedule high-level activities before distributing them across multiple rovers in order to coordinate the resultant use of shared resources, regardless of how each rover decides to perform its activities. We present an algorithm for summarizing the metric resource requirements of an abstract activity based on the resource usages of its potential refinements.

  14. Conference Abstracts: AEDS '84.

    ERIC Educational Resources Information Center

    Baird, William E.

    1985-01-01

    The Association of Educational Data Systems (AEDS) conference included 102 presentations. Abstracts of seven of these presentations are provided. Topic areas considered include LOGO, teaching probability through a computer game, writing effective computer assisted instructional materials, computer literacy, research on instructional…

  15. Leadership Abstracts, 2002.

    ERIC Educational Resources Information Center

    Wilson, Cynthia, Ed.; Milliron, Mark David, Ed.

    2002-01-01

    This 2002 volume of Leadership Abstracts contains issue numbers 1-12. Articles include: (1) "Skills Certification and Workforce Development: Partnering with Industry and Ourselves," by Jeffrey A. Cantor; (2) "Starting Again: The Brookhaven Success College," by Alice W. Villadsen; (3) "From Digital Divide to Digital Democracy," by Gerardo E. de los…

  16. Annual Conference Abstracts

    ERIC Educational Resources Information Center

    Journal of Engineering Education, 1972

    1972-01-01

    Includes abstracts of papers presented at the 80th Annual Conference of the American Society for Engineering Education. The broad areas include aerospace, affiliate and associate member council, agricultural engineering, biomedical engineering, continuing engineering studies, chemical engineering, civil engineering, computers, cooperative…

  17. Abstracts of SIG Sessions.

    ERIC Educational Resources Information Center

    Proceedings of the ASIS Annual Meeting, 1994

    1994-01-01

    Includes abstracts of 18 special interest group (SIG) sessions. Highlights include natural language processing, information science and terminology science, classification, knowledge-intensive information systems, information value and ownership issues, economics and theories of information science, information retrieval interfaces, fuzzy thinking…

  18. RESEARCH ABSTRACTS, VOLUME VI.

    ERIC Educational Resources Information Center

    COLETTE, SISTER M.

    This sixth volume of research abstracts presents reports of 35 research studies completed by candidates for the master's degree at the Cardinal Stritch College in 1964. Twenty-nine studies are concerned with reading, and six are concerned with the education of the mentally handicapped. Of the reading studies, five pertain to the junior high level…

  19. Learning Abstracts, 1999.

    ERIC Educational Resources Information Center

    League for Innovation in the Community Coll.

    This document contains volume two of Learning Abstracts, a bimonthly newsletter from the League for Innovation in the Community College. Articles in these seven issues include: (1) "Get on the Fast Track to Learning: An Accelerated Associate Degree Option" (Gerardo E. de los Santos and Deborah J. Cruise); (2) "The Learning College:…

  20. Annual Conference Abstracts

    ERIC Educational Resources Information Center

    Engineering Education, 1976

    1976-01-01

    Presents the abstracts of 158 papers presented at the American Society for Engineering Education's annual conference at Knoxville, Tennessee, June 14-17, 1976. Included are engineering topics covering education, aerospace, agriculture, biomedicine, chemistry, computers, electricity, acoustics, environment, mechanics, and women. (SL)

  1. Making the Abstract Concrete

    ERIC Educational Resources Information Center

    Potter, Lee Ann

    2005-01-01

    President Ronald Reagan nominated a woman to serve on the United States Supreme Court. He did so through a single-page form letter, completed in part by hand and in part by typewriter, announcing Sandra Day O'Connor as his nominee. While the document serves as evidence of a historic event, it is also a tangible illustration of abstract concepts…

  2. Abstraction through Game Play

    ERIC Educational Resources Information Center

    Avraamidou, Antri; Monaghan, John; Walker, Aisha

    2012-01-01

    This paper examines the computer game play of an 11-year-old boy. In the course of building a virtual house he developed and used, without assistance, an artefact and an accompanying strategy to ensure that his house was symmetric. We argue that the creation and use of this artefact-strategy is a mathematical abstraction. The discussion…

  3. The Local Edge Machine: inference of dynamic models of gene regulation.

    PubMed

    McGoff, Kevin A; Guo, Xin; Deckard, Anastasia; Kelliher, Christina M; Leman, Adam R; Francey, Lauren J; Hogenesch, John B; Haase, Steven B; Harer, John L

    2016-10-19

    We present a novel approach, the Local Edge Machine, for the inference of regulatory interactions directly from time-series gene expression data. We demonstrate its performance, robustness, and scalability on in silico datasets with varying behaviors, sizes, and degrees of complexity. Moreover, we demonstrate its ability to incorporate biological prior information and make informative predictions on a well-characterized in vivo system using data from budding yeast that have been synchronized in the cell cycle. Finally, we use an atlas of transcription data in a mammalian circadian system to illustrate how the method can be used for discovery in the context of large complex networks.

  4. Perspex machine: VII. The universal perspex machine

    NASA Astrophysics Data System (ADS)

    Anderson, James A. D. W.

    2006-01-01

    We describe a non-linear perspex machine which is very much easier to program than the original perspex machine. We then show how to map the whole of perspex space into a unit cube. This allows us to construct a fractal of perspex machines with the cardinality of a real-numbered line or space. This fractal is the universal perspex machine. It can solve, in unit time, the halting problem for itself and for all perspex machines instantiated in real-numbered space, including all Turing machines. We cite an experiment that has been proposed to test the physical reality of the perspex machine's model of time, but we make no claim that the physical universe works this way or that it has the cardinality of the perspex machine. We leave it that the perspex machine provides an upper bound on the computational properties of physical things, including manufactured computers and biological organisms, that have a cardinality no greater than the real-number line.

  5. Support vector machine model for diagnosis of lymph node metastasis in gastric cancer with multidetector computed tomography: a preliminary study

    PubMed Central

    2011-01-01

    Background Lymph node metastasis (LNM) of gastric cancer is an important prognostic factor for long-term survival, but the imaging techniques commonly used in the stomach cannot satisfactorily assess gastric cancer lymph node status: they cannot achieve both high sensitivity and specificity. As a machine-learning method, the Support Vector Machine (SVM) has the potential to solve this complex issue. Methods The institutional review board approved this retrospective study. 175 consecutive patients with gastric cancer who underwent MDCT before surgery were included. We evaluated tumor and lymph node indicators on CT images, including serosal invasion, tumor classification, tumor maximum diameter, number of lymph nodes, maximum lymph node size and lymph node station, which reflect the biological behavior of gastric cancer. Univariate analysis was used to analyze the relationship between the six image indicators and LNM. An SVM model was built with these indicators as the input index; the output index, confirmed by surgery and histopathology, was whether the patient's lymph node metastasis was positive or negative. A standard machine-learning technique called k-fold cross-validation (5-fold in our study) was used to train and test the SVM models. We evaluated the diagnostic capability of the SVM models for lymph node metastasis with receiver operating characteristic (ROC) curves, and a radiologist classified the lymph node metastasis of patients using maximum lymph node size on CT images as the criterion. We compared the areas under the ROC curves (AUC) of the radiologist and the SVM models. Results Of the 175 cases, 134 had lymph node metastasis and 41 did not. All six image indicators showed statistically significant differences between the LNM-negative and LNM-positive groups. The means of the sensitivity, specificity and AUC of the SVM models with 5-fold cross-validation were 88.5%, 78.5% and 0.876, respectively. While the…
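
    As a hedged, minimal sketch of the evaluation setup described above, the Python fragment below trains an SVM on six synthetic stand-in indicators and scores it with 5-fold cross-validated ROC AUC. The generated dataset, its class balance and every parameter choice are illustrative assumptions, not the study's actual data or configuration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# 175 patients, 6 image indicators, imbalanced outcome (~134 LNM-positive, ~41 negative)
X, y = make_classification(n_samples=175, n_features=6, n_informative=4,
                           weights=[0.23, 0.77], random_state=0)

model = make_pipeline(StandardScaler(), SVC())
auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")  # 5-fold CV as in the study
print("mean AUC over 5 folds: %.3f" % auc.mean())
```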

  6. A comparison of numerical and machine-learning modeling of soil water content with limited input data

    NASA Astrophysics Data System (ADS)

    Karandish, Fatemeh; Šimůnek, Jiří

    2016-12-01

    Soil water content (SWC) is a key factor in optimizing the usage of water resources in agriculture since it provides information to make an accurate estimation of crop water demand. Methods for predicting SWC that have simple data requirements are needed to achieve an optimal irrigation schedule, especially for various water-saving irrigation strategies that are required to resolve both food and water security issues under conditions of water shortages. Thus, a two-year field investigation was carried out to provide a dataset to compare the effectiveness of HYDRUS-2D, a physically-based numerical model, with various machine-learning models, including Multiple Linear Regressions (MLR), Adaptive Neuro-Fuzzy Inference Systems (ANFIS), and Support Vector Machines (SVM), for simulating time series of SWC data under water stress conditions. SWC was monitored using TDRs during the maize growing seasons of 2010 and 2011. Eight combinations of six simple, independent parameters, including pan evaporation and average air temperature as atmospheric parameters, cumulative growth degree days (cGDD) and crop coefficient (Kc) as crop factors, and water deficit (WD) and irrigation depth (In) as crop stress factors, were adopted for the estimation of SWCs in the machine-learning models. Having Root Mean Square Errors (RMSE) in the range of 0.54-2.07 mm, HYDRUS-2D ranked first for the SWC estimation, while the ANFIS and SVM models with input datasets of cGDD, Kc, WD and In ranked next with RMSEs ranging from 1.27 to 1.9 mm and mean bias errors of -0.07 to 0.27 mm, respectively. However, the MLR models did not perform well for SWC forecasting, mainly due to non-linear changes of SWCs under the irrigation process. The results demonstrated that despite requiring only simple input data, the ANFIS and SVM models could be favorably used for SWC predictions under water stress conditions, especially when there is a lack of data. However, process-based numerical models are undoubtedly a…

  7. Developing a support vector machine based QSPR model for prediction of half-life of some herbicides.

    PubMed

    Samghani, Kobra; HosseinFatemi, Mohammad

    2016-07-01

    The half-lives (t1/2) of 58 herbicides were modeled by a quantitative structure-property relationship (QSPR) based on molecular structure descriptors. After the calculation and screening of a large number of molecular descriptors, the most relevant ones, selected by stepwise multiple linear regression, were used to develop linear and nonlinear models using multiple linear regression (MLR) and a support vector machine (SVM), respectively. Comparison of the statistical parameters of the linear and nonlinear models indicates the suitability of the SVM over the MLR model for predicting the half-life of herbicides. The statistical parameters R(2) and standard error for the training set of the SVM model were 0.96 and 0.087, respectively, and were 0.93 and 0.092 for the test set. The SVM model was evaluated by a leave-one-out cross-validation test, whose results indicate the robustness and predictability of the model. The established SVM model was used to predict the half-life of other herbicides located within the applicability domain of the model, which was determined via the leverage approach. The results of this study indicate that the relationship among the selected molecular descriptors and herbicide half-life is non-linear. This emphasizes that the degradation of herbicides in the environment is a very complex process that can be affected by various environmental and structural features, and therefore a simple linear model cannot successfully predict it.
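
    A rough sketch of this QSPR workflow under stated assumptions: univariate F-score selection stands in for the paper's stepwise MLR descriptor selection, scikit-learn's SVR stands in for its SVM, and the descriptor matrix and half-life target are synthetic.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(58, 200))                        # 58 herbicides x calculated descriptors
y = X[:, :5].sum(axis=1) + 0.1 * rng.normal(size=58)  # synthetic half-life target

model = make_pipeline(StandardScaler(),
                      SelectKBest(f_regression, k=5),  # stand-in for stepwise MLR selection
                      SVR(C=10.0))
pred = cross_val_predict(model, X, y, cv=LeaveOneOut())  # leave-one-out validation
r2 = 1 - ((y - pred) ** 2).sum() / ((y - y.mean()) ** 2).sum()
print("LOO R^2: %.2f" % r2)
```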

  8. A hybrid model of self organizing maps and least square support vector machine for river flow forecasting

    NASA Astrophysics Data System (ADS)

    Ismail, S.; Shabri, A.; Samsudin, R.

    2012-11-01

    Successful river flow forecasting is a major goal and an essential procedure in water resource planning and management. Many forecasting techniques have been used for river flow forecasting. This study proposed a hybrid model based on a combination of two methods, the Self Organizing Map (SOM) and the Least Squares Support Vector Machine (LSSVM), referred to as the SOM-LSSVM model, for river flow forecasting. The hybrid model uses the SOM algorithm to cluster the entire dataset into several disjoint clusters, where monthly river flow data with similar input patterns are grouped together from a high-dimensional input space onto a low-dimensional output layer. By doing this, data with similar input patterns are mapped to neighbouring neurons in the SOM's output layer. After the dataset has been decomposed into several disjoint clusters, an individual LSSVM is applied to forecast the river flow. The feasibility of the proposed model is evaluated on actual river flow data from the Bernam River located in Selangor, Malaysia. The performance of the SOM-LSSVM model was compared with single models such as ARIMA, ANN and LSSVM using various performance indicators. The experimental results show that the SOM-LSSVM model outperforms the single ARIMA, ANN and LSSVM models for river flow forecasting. It forecasts more precisely and provides a promising alternative technique for river flow forecasting.
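
    The sketch below illustrates the SOM-then-local-model idea on synthetic flow data, assuming the third-party minisom package is available; SVR is used as a stand-in for LSSVM, and the lag structure, SOM size and training settings are arbitrary assumptions.

```python
import numpy as np
from minisom import MiniSom   # third-party package, assumed installed
from sklearn.svm import SVR

rng = np.random.default_rng(1)
lags = 4
flow = np.sin(np.linspace(0, 20, 300)) + 0.1 * rng.normal(size=300)  # synthetic monthly flow
X = np.array([flow[i:i + lags] for i in range(len(flow) - lags)])
y = flow[lags:]

som = MiniSom(2, 2, lags, sigma=0.8, learning_rate=0.5, random_seed=0)
som.train_random(X, 500)                         # cluster flow patterns onto a 2x2 map
keys = [som.winner(x) for x in X]                # SOM node (cluster) for each pattern

models = {}
for k in set(keys):
    idx = [i for i, kk in enumerate(keys) if kk == k]
    models[k] = SVR().fit(X[idx], y[idx])        # one local regressor per cluster

x_new = X[-1]
print(models[som.winner(x_new)].predict([x_new]))  # route a new pattern to its local model
```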

  9. 3D Modelling of Tunnel Excavation Using Pressurized Tunnel Boring Machine in Overconsolidated Soils

    NASA Astrophysics Data System (ADS)

    Demagh, Rafik; Emeriault, Fabrice

    2013-06-01

    The construction of shallow tunnels in urban areas requires a prior assessment of their effects on existing structures. In the case of shield tunnel boring machines (TBM), the various construction stages carried out constitute a highly three-dimensional problem of soil/structure interaction and are not easy to represent in a complete numerical simulation. Consequently, the tunnelling-induced soil movements are quite difficult to evaluate. A 3D simulation procedure, using a finite differences code, namely FLAC3D, taking into account, in an explicit manner, the main sources of movement in the soil mass, is proposed in this paper. It is illustrated by the particular case of Toulouse Subway Line B, for which experimental data are available and where the soil is saturated and highly overconsolidated. A comparison between the numerical simulation results and the in situ measurements shows that the proposed 3D simulation procedure is relevant, in particular regarding the adopted representation of the different operations performed by the tunnel boring machine (excavation, confining pressure, shield advancement, installation of the tunnel lining, grouting of the annular void, etc.). Furthermore, a parametric study enabled a better understanding of the origin of the singular behaviour observed at the ground surface and within the soil mass, until now not reported in the literature.

  10. Comparison of machine learning techniques with classical statistical models in predicting health outcomes.

    PubMed

    Song, Xiaowei; Mitnitski, Arnold; Cox, Jafna; Rockwood, Kenneth

    2004-01-01

    Several machine learning techniques (multilayer and single-layer perceptron, logistic regression, least-squares linear separation and support vector machines) are applied to calculate the risk of death from two biomedical data sets, one from patient care records and another from a population survey. Each dataset contained multiple sources of information: history of related symptoms and other illnesses, physical examination findings, laboratory tests, medications (patient records dataset), health attitudes, and disabilities in activities of daily living (survey dataset). Each technique showed very good mortality prediction in the acute patient data sample (AUC up to 0.89) and fair prediction accuracy for six-year mortality (AUC from 0.70 to 0.76) in individuals from the epidemiological survey database. The results suggest that the nature of the data is of primary importance, rather than the learning technique. However, the consistently superior performance of the artificial neural network (multilayer perceptron) indicates that nonlinear relationships (which cannot be discerned by linear separation techniques) can provide additional improvement in correctly predicting health outcomes.

  11. Combining Structural Modeling with Ensemble Machine Learning to Accurately Predict Protein Fold Stability and Binding Affinity Effects upon Mutation

    PubMed Central

    Garcia Lopez, Sebastian; Kim, Philip M.

    2014-01-01

    Advances in sequencing have led to a rapid accumulation of mutations, some of which are associated with diseases. However, to draw mechanistic conclusions, a biochemical understanding of these mutations is necessary. For coding mutations, accurate prediction of significant changes in either the stability of proteins or their affinity to their binding partners is required. Traditional methods have used semi-empirical force fields, while newer methods employ machine learning of sequence and structural features. Here, we show how combining both of these approaches leads to a marked boost in accuracy. We introduce ELASPIC, a novel ensemble machine learning approach that is able to predict stability effects upon mutation in both domain cores and domain-domain interfaces. We combine semi-empirical energy terms, sequence conservation, and a wide variety of molecular details with a Stochastic Gradient Boosting of Decision Trees (SGB-DT) algorithm. The accuracy of our predictions surpasses existing methods by a considerable margin, achieving correlation coefficients of 0.77 for stability, and 0.75 for affinity predictions. Notably, we integrated homology modeling to enable proteome-wide prediction and show that accurate prediction on modeled structures is possible. Lastly, ELASPIC showed significant differences between various types of disease-associated mutations, as well as between disease and common neutral mutations. Unlike pure sequence-based prediction methods that try to predict phenotypic effects of mutations, our predictions unravel the molecular details governing the protein instability, and help us better understand the molecular causes of diseases. PMID:25243403
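
    As an illustration only (not ELASPIC itself), the fragment below combines energy-like terms and a conservation feature in a gradient-boosted tree regressor, the family of learner named in the abstract; all features, targets and weights are synthetic.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
energy_terms = rng.normal(size=(n, 4))    # stand-ins for semi-empirical energy terms
conservation = rng.uniform(size=(n, 1))   # stand-in for sequence conservation
X = np.hstack([energy_terms, conservation])
ddg = (energy_terms @ np.array([0.8, -0.5, 0.3, 0.1])
       + conservation[:, 0] + 0.2 * rng.normal(size=n))  # synthetic stability target

X_tr, X_te, y_tr, y_te = train_test_split(X, ddg, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)  # boosted decision trees
print("test correlation: %.2f" % pearsonr(y_te, model.predict(X_te))[0])
```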

  12. Landscape epidemiology and machine learning: A geospatial approach to modeling West Nile virus risk in the United States

    NASA Astrophysics Data System (ADS)

    Young, Sean Gregory

    The complex interactions between human health and the physical landscape and environment have been recognized, if not fully understood, since the ancient Greeks. Landscape epidemiology, sometimes called spatial epidemiology, is a sub-discipline of medical geography that uses environmental conditions as explanatory variables in the study of disease or other health phenomena. This theory suggests that pathogenic organisms (whether germs or larger vector and host species) are subject to environmental conditions that can be observed on the landscape, and by identifying where such organisms are likely to exist, areas at greatest risk of the disease can be derived. Machine learning is a sub-discipline of artificial intelligence that can be used to create predictive models from large and complex datasets. West Nile virus (WNV) is a relatively new infectious disease in the United States, and has a fairly well-understood transmission cycle that is believed to be highly dependent on environmental conditions. This study takes a geospatial approach to the study of WNV risk, using both landscape epidemiology and machine learning techniques. A combination of remotely sensed and in situ variables are used to predict WNV incidence with a correlation coefficient as high as 0.86. A novel method of mitigating the small numbers problem is also tested and ultimately discarded. Finally a consistent spatial pattern of model errors is identified, indicating the chosen variables are capable of predicting WNV disease risk across most of the United States, but are inadequate in the northern Great Plains region of the US.

  13. Machine performance assessment and enhancement for a hexapod machine

    SciTech Connect

    Mou, J.I.; King, C.

    1998-03-19

    The focus of this study is to develop a sensor-fused process modeling and control methodology to model, assess, and then enhance the performance of a hexapod machine for precision product realization. A deterministic modeling technique was used to derive models for machine performance assessment and enhancement. A sensor fusion methodology was adopted to identify the parameters of the derived models. Empirical models and computational algorithms were also derived and implemented to model, assess, and then enhance the machine performance. The developed sensor fusion algorithms can be implemented on a PC-based open architecture controller to receive information from various sensors, assess the status of the process, determine the proper action, and deliver the command to actuators for task execution. This will enhance a hexapod machine's capability to produce workpieces within the imposed dimensional tolerances.

  14. MAP3S/RAINE modeling abstracts, 1980. [Concise descriptions of models and availability for calculation of airborne concentration of sulfur dioxide and sulfate]

    SciTech Connect

    Michael, P.

    1980-07-01

    The MultiState Atmospheric Power Production Pollution Study (MAP3S) has produced as a primary research output a number of numerical models for the calculation of airborne concentrations of sulfur dioxide and sulfate resulting from anthropogenic sources. Concise descriptions of these models, and of related modeling developments, are collected in this report. For each model, or model component, there is included a listing of the authors, a summary of what it is the model calculates and the method used, a list of references, and a statement of availability.

  15. Generalized Abstract Symbolic Summaries

    NASA Technical Reports Server (NTRS)

    Person, Suzette; Dwyer, Matthew B.

    2009-01-01

    Current techniques for validating and verifying program changes often consider the entire program, even for small changes, leading to enormous V&V costs over a program's lifetime. This is due, in large part, to the use of syntactic program techniques which are necessarily imprecise. Building on recent advances in symbolic execution of heap-manipulating programs, in this paper, we develop techniques for performing abstract semantic differencing of program behaviors that offer the potential for improved precision.

  16. CNC electrical discharge machining centers

    SciTech Connect

    Jaggars, S.R.

    1991-10-01

    Computer numerical control (CNC) electrical discharge machining (EDM) centers were investigated to evaluate the application and cost effectiveness of establishing this capability at Allied-Signal Inc., Kansas City Division (KCD). In line with this investigation, metal samples were designed, prepared, and machined on an existing 15-year-old EDM machine and on two current-technology CNC EDM machining centers at outside vendors. The results were recorded and evaluated. The study revealed that CNC EDM centers are a capability that should be established at KCD. From the information gained, a machine specification was written and a machine was purchased and installed in the Engineering Shop. The older machine was exchanged for a new model. Additional machines were installed in the Tool Design and Fabrication and Precision Microfinishing departments. The Engineering Shop machine will be principally used for the following purposes: producing deep cavities with small corner radii, machining simulated casting models, machining difficult-to-machine materials, and polishing difficult-to-hand-polish mold cavities. 2 refs., 18 figs., 3 tabs.

  17. Machine learning methods applied to pharmacokinetic modelling of remifentanil in healthy volunteers: a multi-method comparison.

    PubMed

    Poynton, M R; Choi, B M; Kim, Y M; Park, I S; Noh, G J; Hong, S O; Boo, Y K; Kang, S H

    2009-01-01

    This study compared the blood concentrations of remifentanil obtained in a previous clinical investigation with the predicted remifentanil concentrations produced by different pharmacokinetic models: a non-linear mixed effects model created by the software NONMEM; an artificial neural network (ANN) model; a support vector machine (SVM) model; and multi-method ensembles. The ensemble created from the mean of the ANN and the non-linear mixed effects model predictions achieved the smallest error and the highest correlation coefficient. The SVM model produced the highest error and the lowest correlation coefficient. Paired t-tests indicated that there was insufficient evidence that the predicted values of the ANN, SVM and two multi-method ensembles differed from the actual measured values at alpha = 0.05. The ensemble method combining the ANN and non-linear mixed effects model predictions outperformed either method alone. These results indicated a potential advantage of ensembles in improving the accuracy and reducing the variance of pharmacokinetic models.
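
    A minimal sketch of the winning multi-method ensemble, assuming hypothetical concentration arrays: the two models' predictions are averaged and agreement with the measured values is checked by a paired t-test, as in the abstract.

```python
import numpy as np
from scipy.stats import ttest_rel

# hypothetical measured and predicted remifentanil concentrations (ng/ml)
measured    = np.array([2.1, 3.4, 1.8, 4.0, 2.9, 3.1])
pred_nonmem = np.array([2.0, 3.6, 1.7, 4.2, 2.7, 3.0])  # mixed-effects model output
pred_ann    = np.array([2.3, 3.2, 1.9, 3.9, 3.0, 3.3])  # neural network output

ensemble = (pred_nonmem + pred_ann) / 2.0    # mean multi-method ensemble
t, p = ttest_rel(measured, ensemble)
print("paired t-test p-value: %.3f" % p)     # large p: no evidence of systematic bias
```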

  18. Use of Machine Learning Techniques for Identification of Robust Teleconnections to East African Rainfall Variability in Observations and Models

    NASA Technical Reports Server (NTRS)

    Roberts, J. Brent; Robertson, Franklin R.; Funk, Chris

    2014-01-01

    Providing advance warning of East African rainfall variations is a particular focus of several groups, including those participating in the Famine Early Warning Systems Network. Both seasonal and long-term model projections of climate variability are being used to examine the societal impacts of hydrometeorological variability on seasonal to interannual and longer time scales. The NASA / USAID SERVIR project, which leverages satellite and modeling-based resources for environmental decision making in developing nations, is focusing on the evaluation of both seasonal and climate model projections to develop downscaled scenarios for use in impact modeling. The utility of these projections relies on the ability of current models to capture the embedded relationships between East African rainfall and evolving forcing within the coupled ocean-atmosphere-land climate system. Previous studies have posited relationships between variations in El Niño, the Walker circulation, Pacific decadal variability (PDV), and anthropogenic forcing. This study applies machine learning methods (e.g. clustering, probabilistic graphical models, nonlinear PCA) to observational datasets in an attempt to expose the importance of local and remote forcing mechanisms of East African rainfall variability. The ability of the NASA Goddard Earth Observing System (GEOS5) coupled model to capture the associated relationships will be evaluated using Coupled Model Intercomparison Project Phase 5 (CMIP5) simulations.

  19. Foundations of the Bandera Abstraction Tools

    NASA Technical Reports Server (NTRS)

    Hatcliff, John; Dwyer, Matthew B.; Pasareanu, Corina S.; Robby

    2003-01-01

    Current research is demonstrating that model-checking and other forms of automated finite-state verification can be effective for checking properties of software systems. Due to the exponential costs associated with model-checking, multiple forms of abstraction are often necessary to obtain system models that are tractable for automated checking. The Bandera Tool Set provides multiple forms of automated support for compiling concurrent Java software systems to models that can be supplied to several different model-checking tools. In this paper, we describe the foundations of Bandera's data abstraction mechanism which is used to reduce the cardinality (and the program's state-space) of data domains in software to be model-checked. From a technical standpoint, the form of data abstraction used in Bandera is simple, and it is based on classical presentations of abstract interpretation. We describe the mechanisms that Bandera provides for declaring abstractions, for attaching abstractions to programs, and for generating abstracted programs and properties. The contributions of this work are the design and implementation of various forms of tool support required for effective application of data abstraction to software components written in a programming language like Java which has a rich set of linguistic features.

  20. Evaluating the C-section Rate of Different Physician Practices: Using Machine Learning to Model Standard Practice

    PubMed Central

    Caruana, Rich; Niculescu, Radu S.; Rao, R. Bharat; Simms, Cynthia

    2003-01-01

    The C-section rate of a population of 22,175 expectant mothers is 16.8%; yet the 17 physician groups that serve this population have vastly different group C-section rates, ranging from 13% to 23%. Our goal is to determine retrospectively if the variations in the observed rates can be attributed to variations in the intrinsic risk of the patient sub-populations (i.e. some groups contain more “high-risk C-section” patients), or differences in physician practice (i.e. some groups do more C-sections). We apply machine learning to this problem by training models to predict standard practice from retrospective data. We then use the models of standard practice to evaluate the C-section rate of each physician practice. Our results indicate that although there is variation in intrinsic risk among the groups, there is also much variation in physician practice. PMID:14728149

  1. A Comparison of a Machine Learning Model with EuroSCORE II in Predicting Mortality after Elective Cardiac Surgery: A Decision Curve Analysis

    PubMed Central

    Allyn, Jérôme; Allou, Nicolas; Augustin, Pascal; Philip, Ivan; Martinet, Olivier; Belghiti, Myriem; Provenchere, Sophie; Montravers, Philippe; Ferdynus, Cyril

    2017-01-01

    Background The benefits of cardiac surgery are sometimes difficult to predict, and the decision to operate on a given individual is complex. Machine Learning and Decision Curve Analysis (DCA) are recent methods developed to create and evaluate prediction models. Methods and findings We conducted a retrospective cohort study using a prospectively collected database from December 2005 to December 2012, from a cardiac surgical center at a university hospital. Different models for predicting in-hospital mortality after elective cardiac surgery, including EuroSCORE II, a logistic regression model and a machine learning model, were compared by ROC and DCA. Of the 6,520 patients having elective cardiac surgery with cardiopulmonary bypass, 6.3% died. Mean age was 63.4 years (standard deviation 14.4), and mean EuroSCORE II was 3.7 (4.8)%. The area under the ROC curve (95% CI) for the machine learning model (0.795 (0.755–0.834)) was significantly higher than that of EuroSCORE II or the logistic regression model (respectively, 0.737 (0.691–0.783) and 0.742 (0.698–0.785), p < 0.0001). Decision Curve Analysis showed that the machine learning model, in this monocentric study, has a greater benefit whatever the probability threshold. Conclusions According to ROC and DCA, the machine learning model is more accurate in predicting mortality after elective cardiac surgery than EuroSCORE II. These results support the use of machine learning methods in the field of medical prediction. PMID:28060903
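
    The following sketch computes the net benefit at a few probability thresholds, the quantity plotted in a decision curve analysis, using the standard formulation (net benefit = TP/n - FP/n * pt/(1 - pt)) on synthetic, well-calibrated predictions; it is not the paper's code.

```python
import numpy as np

def net_benefit(y_true, y_prob, threshold):
    """Net benefit of treating patients whose predicted risk exceeds the threshold."""
    treat = y_prob >= threshold
    n = len(y_true)
    tp = np.sum(treat & (y_true == 1))
    fp = np.sum(treat & (y_true == 0))
    return tp / n - fp / n * threshold / (1 - threshold)

rng = np.random.default_rng(0)
y_prob = rng.uniform(size=1000)                           # hypothetical model outputs
y_true = (rng.uniform(size=1000) < y_prob).astype(int)    # well-calibrated toy outcomes

for pt in (0.05, 0.10, 0.20):
    print("threshold %.2f -> net benefit %.3f" % (pt, net_benefit(y_true, y_prob, pt)))
```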

  2. Modeling using support vector machines on imbalanced data: A case study on the prediction of the sightings of Irrawaddy dolphins

    NASA Astrophysics Data System (ADS)

    Ying, Liew Chin; Labadin, Jane; Chai, Wang Yin; Tuen, Andrew Alek; Peter, Cindy

    2015-05-01

    The support vector machine (SVM) is a powerful machine learning algorithm for classification, particularly in medical, image processing and text analysis related studies. Nonetheless, its application in ecology is scarce. This study aims to demonstrate and compare the classification performance of SVM models developed with class weights and models developed with a systematic random under-sampling technique in predicting a one-class independent dataset. The data used are typical imbalanced real-world data with 700 data points, of which only 11% are sighted data points. Conversely, the one-class independent real-world dataset used for prediction, with twenty data points, consists of sighted data only. Both datasets are characterized by seven attributes. The results show that the former models reported overall accuracy between 87.62% and 90% with G-mean between 0% and 30.07% (0% to 9.09% sensitivity and 97.34% to 100% specificity), while the ROC-AUC values ranged between 75.92% and 88.78%. The latter models reported overall accuracy between 67.39% and 78.26% with G-mean between 66.51% and 76.30% (78.26% to 95.65% sensitivity and 52.17% to 60.87% specificity), while the ROC-AUC values ranged between 72.59% and 85.82%. Nevertheless, the former models could barely predict the independent dataset successfully: the majority of the models failed to predict a single sighted data point, and the best prediction accuracy reported was 30%. The classification performance of the latter models is surprisingly encouraging, with the majority of the models achieving more than 30% prediction accuracy and many attaining 65%, more than double the performance of the former models. The current study thus suggests that, where highly imbalanced ecology data are concerned, modeling using SVMs with a systematic random under-sampling technique is a more promising means than w-SVM of obtaining much more rewarding classification…
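
    A hedged sketch contrasting the two strategies compared above, class-weighted SVM (w-SVM) versus random under-sampling of the majority class, scored by G-mean; the 700-point imbalanced dataset is regenerated synthetically and all settings are assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def g_mean(y_true, y_pred):
    sens = recall_score(y_true, y_pred, pos_label=1)   # sensitivity
    spec = recall_score(y_true, y_pred, pos_label=0)   # specificity
    return np.sqrt(sens * spec)

# ~700 points with ~11% positives, mimicking the sightings imbalance
X, y = make_classification(n_samples=700, weights=[0.89, 0.11], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

w_svm = SVC(class_weight="balanced").fit(X_tr, y_tr)   # weighted SVM (w-SVM)

rng = np.random.default_rng(0)
maj, minr = np.where(y_tr == 0)[0], np.where(y_tr == 1)[0]
keep = np.concatenate([rng.choice(maj, size=len(minr), replace=False), minr])
u_svm = SVC().fit(X_tr[keep], y_tr[keep])              # under-sampled SVM

for name, m in [("weighted", w_svm), ("under-sampled", u_svm)]:
    print(name, "G-mean: %.2f" % g_mean(y_te, m.predict(X_te)))
```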

  3. Rolling forecasting model of PM2.5 concentration based on support vector machine and particle swarm optimization

    NASA Astrophysics Data System (ADS)

    Zhang, Chang-Jiang; Dai, Li-Jie; Ma, Lei-Ming

    2016-10-01

    Current PM2.5 model forecasts deviate greatly from the measured concentrations. In order to solve this problem, the Support Vector Machine (SVM) and Particle Swarm Optimization (PSO) are combined to build a rolling forecasting model, with the important SVM parameters (C and γ) optimized by PSO. The data (from February to July 2015), consisting of measured PM2.5 concentrations, PM2.5 model forecast concentrations and five main model forecast meteorological factors, are provided by the Shanghai Meteorological Bureau in Pudong New Area. The rolling model is used to forecast hourly PM2.5 concentrations up to 12 hours in advance and the nighttime average concentration (mean value from 9 pm to 8 am the next day) during the upcoming day. The training data and the optimal parameters of the SVM model differ in every forecast; that is to say, a different (dynamic) model is built for every forecast. The SVM model is compared with a Radial Basis Function Neural Network (RBFNN), Multi-variable Linear Regression (MLR) and WRF-CHEM. Experimental results show that the proposed model improves the forecasting accuracy of hourly PM2.5 concentrations 12 hours in advance and of the nighttime average concentration during the upcoming day. The SVM model performs better than MLR, RBFNN and WRF-CHEM, and greatly improves the forecasting accuracy of PM2.5 concentration one hour in advance, consistent with the results of previous research. The rolling forecasting model can be applied to PM2.5 concentration forecasting and can help meteorological administrations in PM2.5 concentration monitoring and forecasting.
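
    The fragment below is a compact, assumption-laden stand-in for the PSO-SVM idea: a small particle swarm searches log-scaled (C, gamma) for an SVR by cross-validated error. The data are synthetic and the swarm constants are conventional defaults, not the paper's values.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                      # stand-ins for forecast factors
y = X @ np.array([1.0, -0.5, 0.3, 0.0, 0.2]) + 0.1 * rng.normal(size=200)

def fitness(log_c, log_g):
    # higher is better: negative cross-validated MSE of the candidate SVR
    model = SVR(C=10.0 ** log_c, gamma=10.0 ** log_g)
    return cross_val_score(model, X, y, cv=3, scoring="neg_mean_squared_error").mean()

n_particles, n_iter = 10, 15
pos = rng.uniform(-2, 2, size=(n_particles, 2))    # particles in log10(C), log10(gamma)
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([fitness(*p) for p in pos])
gbest = pbest[pbest_f.argmax()]

for _ in range(n_iter):
    r1, r2 = rng.uniform(size=(2, n_particles, 1))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, -3, 3)
    f = np.array([fitness(*p) for p in pos])
    improved = f > pbest_f                          # update personal and global bests
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmax()]

print("best C=%.3g, gamma=%.3g" % (10 ** gbest[0], 10 ** gbest[1]))
```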

  4. Models of logistic regression analysis, support vector machine, and back-propagation neural network based on serum tumor markers in colorectal cancer diagnosis.

    PubMed

    Zhang, B; Liang, X L; Gao, H Y; Ye, L S; Wang, Y G

    2016-05-13

    We evaluated the application of three machine learning algorithms, including logistic regression, support vector machine and back-propagation neural network, for diagnosing congenital heart disease and colorectal cancer. By inspecting related serum tumor marker levels in colorectal cancer patients and healthy subjects, early diagnosis models for colorectal cancer were built using the three machine learning algorithms to assess their corresponding diagnostic values. Except for serum alpha-fetoprotein, the levels of 11 other serum markers of patients in the colorectal cancer group were higher than those in the benign colorectal disease group (P < 0.05). The results of logistic regression analysis indicated that individual detection of serum carcinoembryonic antigen, CA199, CA242, CA125, and CA153, as well as their combined detection, was effective for diagnosing colorectal cancer; combined detection had a better diagnostic effect, with a sensitivity of 94.2% and specificity of 97.7%. Combining serum carcinoembryonic antigen, CA199, CA242, CA125, and CA153, support vector machine and back-propagation neural network diagnosis models were built with diagnostic accuracies of 82 and 75%, sensitivities of 85 and 80%, and specificities of 80 and 70%, respectively. Colorectal cancer diagnosis models based on the three machine learning algorithms showed high diagnostic value and can help obtain evidence for the early diagnosis of colorectal cancer.

  5. Groundwater abstraction pollution risk assessment.

    PubMed

    Lytton, L; Howe, S; Sage, R; Greenaway, P

    2003-01-01

    A generic groundwater pollution risk assessment methodology has been developed to enable the evaluation and ranking of the potential risk of pollution to groundwater abstractions. The ranking can then be used to prioritise risk management or mitigation procedures in a robust and quantifiable framework and thus inform business investment decisions. The risk assessment considers the three components of the pollution transport model: source-pathway-receptor. For groundwater abstractions these correspond to land use (with associated pollutants and shallow subsurface characteristics), the aquifer and the abstraction borehole. A hierarchical approach was chosen to allow the risk assessment to be carried out successfully with data of different quality for different parts of the model. The 400-day groundwater protection zone defines the catchment boundary that forms the spatial limit of the land use audit for each receptor. A risk score is obtained for each land use (potential pollution source) within the catchment. These scores are derived by considering the characteristics (such as load, persistence and toxicity) of all pollutants pertaining to each land use, their on-site management and the potential for the unsaturated subsurface to attenuate their effects in the event of a release. Risk scores are also applied to the aquifer characteristics (as pollutant pathway) and to the abstraction borehole (as pollutant receptor). Each risk score is accompanied by an uncertainty score which provides a guide to the confidence in the data used to compile the risk assessment. The application of the methodology has highlighted a number of problems in this type of work, and results of initial case studies are being used to trial alternative scoring methods and a simpler approach to accelerate the process of pollution risk assessment.

  6. Research Abstracts of 1980.

    DTIC Science & Technology

    1980-12-01

    Research abstracts of 1980 from the Naval Dental Research Institute, Naval Medical Research and Development Command, Bethesda, Maryland; includes abstracts presented at the American Association for Dental Research, 58th Annual Session, Los Angeles, California, March 20-23, 1980.

  7. Research Abstracts of 1979.

    DTIC Science & Technology

    1979-12-01

    Research abstracts of 1979 (NDRI-PR 79-11, December 1979) from the Naval Dental Research Institute, Naval Base, Great Lakes, Illinois; Naval Medical Research and Development Command, Bethesda, Maryland.

  8. Cheminformatics models based on machine learning approaches for design of USP1/UAF1 abrogators as anticancer agents.

    PubMed

    Wahi, Divya; Jamal, Salma; Goyal, Sukriti; Singh, Aditi; Jain, Ritu; Rana, Preeti; Grover, Abhinav

    2015-06-01

    Cancer cells have upregulated DNA repair mechanisms, enabling them to survive the DNA damage induced during repeated rapid cell divisions and targeted chemotherapeutic treatments. Targeting cancer cell proliferation and survival via inhibition of DNA repair pathways is currently a very promising anti-tumor approach. The deubiquitinating enzyme USP1 is known to promote DNA repair by complexing with UAF1. The USP1/UAF1 complex is responsible for regulating DNA break repair pathways such as the trans-lesion synthesis pathway, the Fanconi anemia pathway and homologous recombination; thus, USP1/UAF1 inhibition is an efficient anti-cancer strategy. The recent availability of high-throughput screening data for anti-USP1/UAF1 activity prompted us to compute bioactivity-predictive models that could help in screening for potential USP1/UAF1 inhibitors with anti-cancer properties. The current study utilizes a publicly available high-throughput screening data set of chemical compounds evaluated for their potential USP1/UAF1 inhibitory effect. A machine learning approach was devised for the generation of computational models that could predict the potential anti-USP1/UAF1 biological activity of novel anticancer compounds. The efficacy of active compounds was additionally screened by applying a SMARTS filter to eliminate molecules with non-drug-like features, and structural fragment analysis was performed to explore the structural properties of the molecules. We demonstrated that modern machine learning approaches can be efficiently employed to build predictive computational models whose predictive performance is statistically accurate. The structure fragment analysis revealed structures that could play an important role in the identification of USP1/UAF1 inhibitors.
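
    The SMARTS filtering step can be illustrated with RDKit as below; the acyl chloride pattern and candidate SMILES are hypothetical examples, since the paper's actual filter definitions are not given in the abstract.

```python
from rdkit import Chem

reactive = Chem.MolFromSmarts("C(=O)Cl")       # acyl chloride, an example structural alert
candidates = ["CCO", "CC(=O)Cl", "c1ccccc1O"]  # hypothetical screening hits (SMILES)

kept = []
for smi in candidates:
    mol = Chem.MolFromSmiles(smi)
    if mol is not None and not mol.HasSubstructMatch(reactive):
        kept.append(smi)                       # passes the alert filter
print(kept)                                    # ['CCO', 'c1ccccc1O']
```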

  9. A general procedure to generate models for urban environmental-noise pollution using feature selection and machine learning methods.

    PubMed

    Torija, Antonio J; Ruiz, Diego P

    2015-02-01

    The prediction of environmental noise in urban environments requires the solution of a complex and non-linear problem, since there are complex relationships among the multitude of variables involved in the characterization and modelling of environmental noise and environmental-noise magnitudes. Moreover, the inclusion of the great spatial heterogeneity characteristic of urban environments seems to be essential in order to achieve an accurate environmental-noise prediction in cities. This problem is addressed in this paper, where a procedure based on feature-selection techniques and machine-learning regression methods is proposed and applied to this environmental problem. Three machine-learning regression methods, which are considered very robust in solving non-linear problems, are used to estimate the energy-equivalent sound-pressure level descriptor (LAeq). These three methods are: (i) multilayer perceptron (MLP), (ii) sequential minimal optimisation (SMO), and (iii) Gaussian processes for regression (GPR). In addition, because of the high number of input variables involved in environmental-noise modelling and estimation in urban environments, which make LAeq prediction models quite complex and costly in terms of time and resources for application to real situations, three different techniques are used to approach feature selection or data reduction. The feature-selection techniques used are: (i) correlation-based feature-subset selection (CFS), (ii) wrapper for feature-subset selection (WFS), and the data reduction technique is principal-component analysis (PCA). The subsequent analysis leads to a proposal of different schemes, depending on the needs regarding data collection and accuracy. The use of WFS as the feature-selection technique with the implementation of SMO or GPR as regression algorithm provides the best LAeq estimation (R(2)=0.94 and mean absolute error (MAE)=1.14-1.16 dB(A)).
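
    As a sketch of one scheme proposed above (PCA data reduction followed by Gaussian processes for regression), the fragment below builds a scikit-learn pipeline and reports MAE in dB(A); the urban predictor matrix and the LAeq target are synthetic stand-ins.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 25))                          # many urban/traffic variables
laeq = 55 + X[:, :3] @ np.array([3.0, 1.5, 0.5]) + rng.normal(size=300)  # dB(A) target

X_tr, X_te, y_tr, y_te = train_test_split(X, laeq, random_state=0)
model = make_pipeline(StandardScaler(),
                      PCA(n_components=5),              # data reduction step
                      GaussianProcessRegressor(normalize_y=True))
model.fit(X_tr, y_tr)
print("MAE: %.2f dB(A)" % mean_absolute_error(y_te, model.predict(X_te)))
```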

  10. A prediction model of drug-induced ototoxicity developed by an optimal support vector machine (SVM) method.

    PubMed

    Zhou, Shu; Li, Guo-Bo; Huang, Lu-Yi; Xie, Huan-Zhang; Zhao, Ying-Lan; Chen, Yu-Zong; Li, Lin-Li; Yang, Sheng-Yong

    2014-08-01

    Drug-induced ototoxicity, as a toxic side effect, is an important issue that needs to be considered in drug discovery. Nevertheless, the current experimental methods used to evaluate drug-induced ototoxicity are often time-consuming and expensive, indicating that they are not suitable for large-scale evaluation of drug-induced ototoxicity in the early stage of drug discovery. In this investigation, we therefore established an effective computational prediction model of drug-induced ototoxicity using an optimal support vector machine (SVM) method, GA-CG-SVM. Three GA-CG-SVM models were developed based on three training sets containing agents bearing different risk levels of drug-induced ototoxicity. For comparison, models based on naïve Bayesian (NB) and recursive partitioning (RP) methods were also built on the same training sets. Among all the prediction models, GA-CG-SVM model II showed the best performance, offering prediction accuracies of 85.33% and 83.05% for two independent test sets, respectively. Overall, the good performance of GA-CG-SVM model II indicates that it could be used for the prediction of drug-induced ototoxicity in the early stage of drug discovery.

  11. A Boltzmann machine for the organization of intelligent machines

    NASA Technical Reports Server (NTRS)

    Moed, Michael C.; Saridis, George N.

    1989-01-01

    In the present technological society, there is a major need to build machines that can execute intelligent tasks operating in uncertain environments with minimum interaction with a human operator. Although some designers have built smart robots utilizing heuristic ideas, there is no systematic approach to designing such machines in an engineering manner. Recently, cross-disciplinary research from the fields of computers, systems, AI and information theory has served to set the foundations of the emerging area of the design of intelligent machines. Since 1977 Saridis has been developing an approach, defined as Hierarchical Intelligent Control, designed to organize, coordinate and execute anthropomorphic tasks by a machine with minimum interaction with a human operator. This approach utilizes analytical (probabilistic) models to describe and control the various functions of the intelligent machine, structured by the intuitively defined principle of Increasing Precision with Decreasing Intelligence (IPDI) (Saridis 1979). This principle, even though it resembles the managerial structure of organizational systems (Levis 1988), has been derived on an analytic basis by Saridis (1988). The purpose is to derive analytically a Boltzmann machine suitable for optimal connection of nodes in a neural net (Fahlman, Hinton, Sejnowski, 1985). This machine will then serve to search for the optimal design of the organization level of an intelligent machine. To accomplish this, some mathematical theory of intelligent machines is first outlined. Definitions of the variables associated with the principle, such as machine intelligence, machine knowledge, and precision, are then made (Saridis, Valavanis 1988). A procedure to establish the Boltzmann machine on an analytic basis is then presented and illustrated by an example in designing the organization level of an Intelligent Machine. A new search technique, the Modified Genetic Algorithm, is presented and proved…

  12. Interactional Metadiscourse in Research Article Abstracts

    ERIC Educational Resources Information Center

    Gillaerts, Paul; Van de Velde, Freek

    2010-01-01

    This paper deals with interpersonality in research article abstracts analysed in terms of interactional metadiscourse. The evolution in the distribution of three prominent interactional markers comprised in Hyland's (2005a) model, viz. hedges, boosters and attitude markers, is investigated in three decades of abstract writing in the field of…

  13. Abstraction and context in concept representation.

    PubMed Central

    Hampton, James A

    2003-01-01

    This paper develops the notion of abstraction in the context of the psychology of concepts, and discusses its relation to context dependence in knowledge representation. Three general approaches to modelling conceptual knowledge from the domain of cognitive psychology are discussed, which serve to illustrate a theoretical dimension of increasing levels of abstraction. PMID:12903660

  14. Support Vector Machine and Artificial Neural Network Models for the Classification of Grapevine Varieties Using a Portable NIR Spectrophotometer

    PubMed Central

    Gutiérrez, Salvador; Tardaguila, Javier; Fernández-Novales, Juan; Diago, María P.

    2015-01-01

    The identification of different grapevine varieties, currently attended using visual ampelometry, DNA analysis and, very recently, hyperspectral analysis under laboratory conditions, is an issue of great importance in the wine industry. This work presents support vector machine and artificial neural network modelling for grapevine varietal classification from in-field leaf spectroscopy. Modelling was attempted at two scales: site-specific and global. Spectral measurements were obtained on the near-infrared (NIR) spectral range between 1600 and 2400 nm under field conditions in a non-destructive way using a portable spectrophotometer. For the site-specific approach, spectra were collected from the adaxial side of 400 individual leaves of 20 grapevine (Vitis vinifera L.) varieties one week after veraison. For the global model, two additional sets of spectra were collected one week before harvest from two different vineyards in another vintage, each consisting of 48 measurements from individual leaves of six varieties. Several combinations of spectral scatter correction and smoothing filtering were studied. For the training of the models, support vector machines and artificial neural networks were employed using the pre-processed spectra as input and the varieties as the classes of the models. The pre-processing study showed no influence of whether scatter correction was used or not, and a second-derivative Savitzky-Golay filter with a window size of 5 yielded the best outcomes. For the site-specific model, with 20 classes, the best classifiers achieved an overall score of 87.25% correctly classified samples. These results were compared under the same conditions with a model trained using partial least squares discriminant analysis, which showed worse performance in every case. For the global model, a 6-class dataset involving samples from three different vineyards, two years and leaves…

  15. Machine learning approach identifies new pathways associated with demyelination in a viral model of multiple sclerosis.

    PubMed

    Ulrich, Reiner; Kalkuhl, Arno; Deschl, Ulrich; Baumgärtner, Wolfgang

    2010-01-01

    Theiler's murine encephalomyelitis is an experimentally virus-induced inflammatory demyelinating disease of the spinal cord, displaying clinical and pathological similarities to chronic progressive multiple sclerosis. The aim of this study was to identify pathways associated with chronic demyelination using an assumption-free combined microarray and immunohistology approach. Movement control as determined by rotarod assay significantly worsened in Theiler's murine encephalomyelitis-virus-infected SJL/J mice from 42 to 196 days after infection (dpi). In the spinal cords, inflammatory changes were detected 14 to 196 dpi, and demyelination progressively increased from 42 to 196 dpi. Microarray analysis revealed 1001 differentially expressed genes over the study period. The dominating changes, as revealed by k-means and functional annotation clustering, included up-regulations related to intrathecal antibody production and antigen processing and presentation via major histocompatibility class II molecules. A random forest machine learning algorithm revealed that down-regulated lipid and cholesterol biosynthesis, differentially expressed neurite morphogenesis and up-regulated toll-like receptor-4-induced pathways were intimately associated with demyelination as measured by immunohistology. In conclusion, although transcriptional changes were dominated by the adaptive immune response, the main pathways associated with demyelination included up-regulation of toll-like receptor 4 and down-regulation of cholesterol biosynthesis. Cholesterol biosynthesis is a rate-limiting step of myelination, and its down-regulation is suggested to be involved in chronic demyelination by inhibiting remyelination.

  16. Protein Kinase Classification with 2866 Hidden Markov Models and One Support Vector Machine

    NASA Technical Reports Server (NTRS)

    Weber, Ryan; New, Michael H.; Fonda, Mark (Technical Monitor)

    2002-01-01

    The main application considered in this paper is predicting true kinases from randomly permuted kinases that share the same length and amino acid distributions as the true kinases. Numerous methods already exist for this classification task, such as HMMs, motif-matchers, and sequence comparison algorithms. We build on some of these efforts by creating a vector from the output of thousands of structurally based HMMs, created offline with Pfam-A seed alignments using SAM-T99, which must then be combined into an overall classification for the protein. We then use a Support Vector Machine with polynomial and chi-squared kernels to classify this large ensemble Pfam-vector. In particular, the chi-squared kernel SVM performs better in some respects than the HMMs and the BLAST pairwise comparisons when predicting true from false kinases, but no one algorithm is best for all purposes or in all instances, so we consider the particular strengths and weaknesses of each.
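
    A minimal sketch of classifying a nonnegative vector of per-HMM scores with a chi-squared kernel SVM, using scikit-learn's chi2_kernel as a callable kernel; the 50-dimensional score vectors and the toy labels are assumptions standing in for the paper's 2866 HMM outputs.

```python
import numpy as np
from sklearn.metrics.pairwise import chi2_kernel
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n, n_hmms = 400, 50                      # 50 stands in for the paper's 2866 HMM scores
X = rng.gamma(2.0, size=(n, n_hmms))     # chi-squared kernel expects nonnegative inputs
y = (X[:, :5].mean(axis=1) > X[:, 5:10].mean(axis=1)).astype(int)  # toy kinase label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = SVC(kernel=chi2_kernel).fit(X_tr, y_tr)   # callable kernel returns the Gram matrix
print("accuracy: %.2f" % clf.score(X_te, y_te))
```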

  17. SPOCK: A SPICE based circuit code for modeling pulsed power machines

    SciTech Connect

    Ingermanson, R.; Parks, D.

    1996-12-31

    SPICE is an industry-standard electrical circuit simulation code developed by the University of California at Berkeley over the last twenty years. The authors have developed a number of new SPICE devices of interest to the pulsed power community: plasma opening switches, plasma radiation sources, bremsstrahlung diodes, magnetically insulated transmission lines, and explosively driven flux compressors. These new devices are integrated into SPICE using S-Cubed's MIRIAD technology to create a user-friendly circuit code that runs on Unix workstations or under Windows NT or Windows 95. The new circuit code is called SPOCK, the "S-Cubed Power Optimizing Circuit Kit." SPOCK allows the user to easily run optimization studies by setting up runs in which any circuit parameters can be systematically varied. Results can be plotted as 1-D line plots, 2-D contour plots, or 3-D "bedsheet" plots. The authors demonstrate SPOCK's capabilities on a color laptop computer, performing realtime analysis of typical configurations of such machines as HAWK and ACE4.

  18. Modelling and analysing track cycling Omnium performances using statistical and machine learning techniques.

    PubMed

    Ofoghi, Bahadorreza; Zeleznikow, John; Dwyer, Dan; Macmahon, Clare

    2013-01-01

    This article describes the use of an unsupervised machine learning technique and statistical approaches (e.g., the Kolmogorov-Smirnov test) to assist cycling experts in the crucial decision-making processes of athlete selection, training, and strategic planning in the track cycling Omnium. The Omnium is a multi-event competition that was included in the summer Olympic Games for the first time in 2012. Presently, selectors and cycling coaches make decisions based on experience and intuition; they rarely have access to objective data. We analysed both the old five-event (first raced internationally in 2007) and new six-event (first raced internationally in 2011) Omniums and found that the addition of the elimination race component to the Omnium has, contrary to expectations, not favoured track endurance riders. We also determined the inter-relationships between the individual events, and between those events and the final standings of riders. In further analysis, we found that there is no maximum ranking (poorest performance) in any individual event that riders can afford whilst still winning a medal. We also determined the times required to finish the timed components that are necessary for medal winning. The results of this study take account of the scoring system of the Omnium and inform decision-making toward successful participation in future major Omnium competitions.
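
    A minimal sketch of the Kolmogorov-Smirnov comparison, testing whether toy event rankings of medalists and non-medalists come from the same distribution (hypothetical numbers):

      import numpy as np
      from scipy.stats import ks_2samp

      rng = np.random.default_rng(2)
      medalists = rng.normal(loc=5.0, scale=2.0, size=30)       # toy event rankings
      non_medalists = rng.normal(loc=10.0, scale=4.0, size=60)

      stat, p = ks_2samp(medalists, non_medalists)
      print(f"KS statistic={stat:.3f}, p={p:.3g}")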

  19. Plant microRNA-Target Interaction Identification Model Based on the Integration of Prediction Tools and Support Vector Machine

    PubMed Central

    Meng, Jun; Shi, Lin; Luan, Yushi

    2014-01-01

    Background Confident identification of microRNA-target interactions is significant for studying the function of microRNA (miRNA). Although some computational miRNA target prediction methods have been proposed for plants, the results of different methods tend to be inconsistent and usually produce many false positives. To address these issues, we developed an integrated model for identifying plant miRNA-target interactions. Results Three online miRNA target prediction toolkits and machine learning algorithms were integrated to identify and analyze Arabidopsis thaliana miRNA-target interactions. Principal component analysis (PCA) feature extraction and self-training technology were introduced to improve the performance. Results showed that the proposed model outperformed existing methods. The results were validated using degradome-sequencing-supported Arabidopsis thaliana miRNA-target interactions. The model constructed on Arabidopsis thaliana was also applied to Oryza sativa and Vitis vinifera to demonstrate that it is effective for other plant species. Conclusions The integrated model of online predictors and a local PCA-SVM classifier yielded credible, high-quality miRNA-target interactions. The supervised learning algorithm of the PCA-SVM classifier was employed in plant miRNA target identification for the first time. Its performance can be substantially improved if more experimentally proven training samples are provided. PMID:25051153
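
    A minimal sketch of the PCA-SVM classifier with self-training, using scikit-learn's SelfTrainingClassifier, where -1 marks unlabelled candidate interactions (toy data; the actual toolkit-derived features are not reproduced):

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.pipeline import make_pipeline
      from sklearn.semi_supervised import SelfTrainingClassifier
      from sklearn.svm import SVC

      rng = np.random.default_rng(3)
      X = rng.normal(size=(200, 50))   # toy feature vectors for candidate miRNA-target pairs
      y = rng.integers(0, 2, 200)
      y[100:] = -1                     # -1 = unlabelled candidates for self-training

      model = make_pipeline(
          PCA(n_components=10),
          SelfTrainingClassifier(SVC(probability=True), threshold=0.9),
      )
      model.fit(X, y)
      print(model.predict(X[:5]))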

  20. Method and system employing finite state machine modeling to identify one of a plurality of different electric load types

    SciTech Connect

    Du, Liang; Yang, Yi; Harley, Ronald Gordon; Habetler, Thomas G.; He, Dawei

    2016-08-09

    A system identifies one of a plurality of different electric load types. The system includes a plurality of sensors structured to sense a voltage signal and a current signal for each of the different electric loads, and a processor. The processor acquires a voltage and current waveform from the sensors for a corresponding one of the different electric load types; calculates a power or current RMS profile of the waveform; quantizes the power or current RMS profile into a set of quantized state-values; evaluates a state-duration for each of the quantized state-values; evaluates a plurality of state-types based on the power or current RMS profile and the quantized state-values; generates a state-sequence that describes a corresponding finite state machine model of a generalized load start-up or transient profile for the corresponding electric load type; and identifies the corresponding electric load type.
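
    A minimal sketch of the quantization step: map an RMS power profile to quantized state-values and collapse repeats into a (state, duration) sequence (the thresholding scheme here is an assumption, not the patented method):

      import numpy as np

      def to_state_sequence(power, n_levels=4):
          # Quantize an RMS power profile into discrete states, then collapse
          # runs of the same state into (state, duration) pairs.
          edges = np.linspace(power.min(), power.max(), n_levels + 1)
          states = np.clip(np.digitize(power, edges) - 1, 0, n_levels - 1)
          seq, start = [], 0
          for i in range(1, len(states) + 1):
              if i == len(states) or states[i] != states[start]:
                  seq.append((int(states[start]), i - start))
                  start = i
          return seq

      profile = np.array([0.1, 0.1, 3.0, 3.2, 3.1, 1.5, 1.4, 1.5, 1.5])  # toy start-up
      print(to_state_sequence(profile))    # [(0, 2), (3, 3), (1, 4)]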

  1. Towards better modelling of drug-loading in solid lipid nanoparticles: Molecular dynamics, docking experiments and Gaussian Processes machine learning.

    PubMed

    Hathout, Rania M; Metwally, Abdelkader A

    2016-11-01

    This study is one of a series applying computer-oriented processes and tools to dig for information, analyse data and extract correlations and meaningful outcomes. In this context, binding energies can be used to model and predict the mass of drug loaded in solid lipid nanoparticles, after molecular docking of literature-gathered drugs (using the MOE® software package) onto tripalmitin matrices simulated with GROMACS®. Gaussian processes, a supervised machine learning artificial intelligence technique, were then used to correlate the drugs' descriptors (e.g. M.W., xLogP, TPSA and fragment complexity) with their molecular docking binding energies. A lower percentage bias was obtained compared to previous studies, which allows the loaded mass of any drug in the investigated solid lipid nanoparticles to be estimated accurately by simply reducing its chemical structure to its main features (descriptors).
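
    A minimal sketch of the Gaussian-process step, regressing toy four-descriptor vectors onto docking binding energies with predictive uncertainty (hypothetical data and kernel choice):

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF, WhiteKernel

      rng = np.random.default_rng(4)
      X = rng.normal(size=(30, 4))   # descriptors: M.W., xLogP, TPSA, fragment complexity (toy)
      y = X @ np.array([0.5, -1.0, 0.3, 0.2]) + rng.normal(scale=0.05, size=30)  # toy energies

      gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True).fit(X, y)
      mean, std = gp.predict(X[:3], return_std=True)   # prediction with uncertainty
      print(mean, std)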

  2. Role of hydrogen abstraction acetylene addition mechanisms in the formation of chlorinated naphthalenes. 2. Kinetic modeling and the detailed mechanism of ring closure.

    PubMed

    McIntosh, Grant J; Russell, Douglas K

    2014-12-26

    The dominant formation mechanisms of chlorinated phenylacetylenes, naphthalenes, and phenylvinylacetylenes in relatively low pressure and temperature (∼40 Torr and 1000 K) pyrolysis systems are explored. Mechanism elucidation is achieved through a combination of theoretical and experimental techniques, the former employing a novel simplification of kinetic modeling which utilizes rate constants in a probabilistic framework. Contemporary formation schemes of the compounds of interest generally require successive additions of acetylene to phenyl radicals. As such, infrared laser powered homogeneous pyrolyses of dichloro- or trichloroethylene were perturbed with 1,2,4- or 1,2,3-trichlorobenzene. The resulting changes in product identities were compared with the major products expected from conventional pathways, aided by the results of our previous computational work. This analysis suggests that a Bittner-Howard growth mechanism, with a novel amendment to the conventional scheme made just prior to ring closure, describes the major products well. Expected products from a number of other potentially operative channels are shown to be incongruent with experiment, further supporting the role of Bittner-Howard channels as the unique pathway to naphthalene growth. A simple quantitative analysis which performs very well is achieved by considering the reaction scheme as a probability tree, with relative rate constants being cast as branching probabilities. This analysis describes all chlorinated phenylacetylene, naphthalene, and phenylvinylacetylene congeners. The scheme is then tested in a more general system, i.e., not enforcing a hydrogen abstraction/acetylene addition mechanism, by pyrolyzing mixtures of di- and trichloroethylene without the addition of an aromatic precursor. The model indicates that these mechanisms are still likely to be operative.
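
    The closing quantitative analysis treats the reaction scheme as a probability tree, with relative rate constants cast as branching probabilities; a minimal sketch of that bookkeeping (hypothetical rate constants, not the paper's fitted values):

      # Each branching point converts competing rate constants into probabilities;
      # a product over the path gives the expected fraction of each end product.
      def branch(rates):
          total = sum(rates.values())
          return {k: v / total for k, v in rates.items()}

      step1 = branch({"addition": 8.0, "abstraction": 2.0})       # hypothetical
      step2 = branch({"ring_closure": 6.0, "fragmentation": 4.0})  # hypothetical

      yield_naphthalene = step1["addition"] * step2["ring_closure"]
      print(f"predicted branching fraction: {yield_naphthalene:.2f}")   # 0.48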

  3. Writing a successful research abstract.

    PubMed

    Bliss, Donna Z

    2012-01-01

    Writing and submitting a research abstract provides timely dissemination of the findings of a study and offers peer input for the subsequent development of a quality manuscript. Acceptance of abstracts is competitive. Understanding the expected content of an abstract, the abstract review process and tips for skillful writing will improve the chance of acceptance.

  4. Assessing Multi-Person and Person-Machine Distributed Decision Making Using an Extended Psychological Distancing Model

    DTIC Science & Technology

    1990-02-01

    human-to-human communication patterns during situation assessment and cooperative problem solving tasks. The research proposed for the second URRP year...Hardware development. In order to create an environment within which to study multi-channeled human-to-human communication, a multi-media observation...that machine-to-human communication can be used to increase cohesion between humans and intelligent machines and to promote human-machine team

  5. Abstraction of Seepage into Drifts

    SciTech Connect

    M.L. Wilson; C.K. Ho

    2000-09-26

    A total-system performance assessment (TSPA) for a potential nuclear-waste repository requires an estimate of the amount of water that might contact waste. This paper describes the model used for part of that estimation in a recent TSPA for the Yucca Mountain site. The discussion is limited to estimation of how much water might enter emplacement drifts; additional considerations related to flow within the drifts, and how much water might actually contact waste, are not addressed here. The unsaturated zone at Yucca Mountain is being considered for the potential repository, and a drift opening in unsaturated rock tends to act as a capillary barrier and divert much of the percolating water around it. For TSPA, the important questions regarding seepage are how many waste packages might be subjected to water flow and how much flow those packages might see. Because of heterogeneity of the rock and uncertainty about the future (how the climate will evolve, etc.), it is not possible to predict seepage amounts or locations with certainty. Thus, seepage is treated as a stochastic quantity in TSPA simulations, with the magnitude and spatial distribution of seepage sampled from uncertainty distributions. The distillation of the essential components of process modeling into a form suitable for use in TSPA simulations is referred to as abstraction. In the following sections, seepage process models and abstractions are summarized, and then some illustrative results are presented.
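
    A minimal sketch of treating seepage as a stochastic quantity: sample the seeping fraction of packages and the flow magnitude from placeholder uncertainty distributions (the TSPA's site-specific distributions are not reproduced here):

      import numpy as np

      rng = np.random.default_rng(5)
      n_realizations, n_packages = 1000, 500

      # hypothetical uncertainty distributions: fraction of packages seeing
      # seepage, and seep flow where it occurs
      seep_fraction = rng.beta(2, 8, size=n_realizations)
      flows = [rng.lognormal(mean=0.0, sigma=1.0, size=int(f * n_packages))
               for f in seep_fraction]
      mean_flow = np.array([f.sum() / n_packages for f in flows])
      print("mean seepage per package over realizations:", mean_flow.mean())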

  6. Integrating Subcellular Location for Improving Machine Learning Models of Remote Homology Detection in Eukaryotic Organisms

    SciTech Connect

    Shah, Anuj R.; Oehmen, Chris S.; Harper, Jill K.; Webb-Robertson, Bobbie-Jo M.

    2007-02-23

    Motivation: At the center of bioinformatics, genomics, and proteomics is the need for highly accurate genome annotations. Producing high-quality, reliable annotations depends on identifying sequences that are related evolutionarily (homologs) from which to infer function. Homology detection is one of the oldest tasks in bioinformatics; however, most approaches still fail when presented with sequences that have low residue similarity despite a distant evolutionary relationship (remote homology). Recently, discriminative approaches such as support vector machines (SVMs) have demonstrated a vast improvement in sensitivity for remote homology detection. These methods, however, have focused on only one aspect of the sequence at a time, e.g., sequence similarity or motif-based scores. Supplementary information, such as the subcellular location of a protein within the cell, would give further clues to possible homologous pairs, and could additionally eliminate false relationships that are impossible given the proteins' locations. We have developed a method, SVM-SimLoc, that integrates subcellular location with sequence similarity information into a protein family classifier, and compared it to one of the most accurate sequence-based SVM approaches, SVM-Pairwise. Results: The SCOP 1.53 benchmark data set was utilized to assess the performance of SVM-SimLoc. As cellular location prediction depends on the type of sequence, eukaryotic or prokaryotic, the analysis is restricted to the 2630 eukaryotic sequences in the benchmark dataset, evaluating a total of 27 protein families. We demonstrate that the integration of sequence similarity and subcellular location yields notably more accurate results than using sequence similarity independently, at a significance level of 0.006.
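
    A minimal sketch of feature-level integration of similarity scores with predicted subcellular-location probabilities in an SVM (toy data; simple concatenation here, not the SVM-SimLoc construction itself):

      import numpy as np
      from sklearn.svm import SVC

      rng = np.random.default_rng(6)
      sim_scores = rng.random((120, 64))           # pairwise-similarity features (toy)
      loc_probs = rng.dirichlet(np.ones(5), 120)   # predicted location probabilities (toy)
      X = np.hstack([sim_scores, loc_probs])       # simple feature-level integration
      y = rng.integers(0, 2, 120)                  # 1 = same protein family

      clf = SVC(kernel="rbf").fit(X, y)
      print("training accuracy:", clf.score(X, y))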

  7. Machine-learning model observer for detection and localization tasks in clinical SPECT-MPI

    NASA Astrophysics Data System (ADS)

    Parages, Felipe M.; O'Connor, J. Michael; Pretorius, P. Hendrik; Brankov, Jovan G.

    2016-03-01

    In this work we propose a machine-learning model observer based on Naive-Bayes classification (NB-MO) for the diagnostic tasks of detection, localization and assessment of perfusion defects in clinical SPECT Myocardial Perfusion Imaging (MPI), with the goal of evaluating several image reconstruction methods used in clinical practice. NB-MO uses image features extracted from polar maps to predict the lesion detection, localization and severity scores given by human readers in a series of 3D SPECT-MPI studies. The population used to tune (i.e. train) the NB-MO consisted of simulated SPECT-MPI cases - divided into normal cases and cases with lesions of variable size and location - reconstructed using the filtered backprojection (FBP) method. An ensemble of five human specialists (physicians) read a subset of the simulated reconstructed images and assigned a perfusion score to each region of the left ventricle (LV). Polar maps generated from the simulated volumes, along with their corresponding human scores, were used to train five NB-MOs (one per human reader), which were subsequently applied (i.e. tested) on three sets of clinical SPECT-MPI polar maps in order to predict human detection and localization scores. The clinical "testing" population comprises healthy individuals and patients suffering from coronary artery disease (CAD) in three possible regions, namely LAD, LCx and RCA. Each clinical case was reconstructed using three reconstruction strategies: FBP with no SC (i.e. scatter compensation), OSEM with the Triple Energy Window (TEW) SC method, and OSEM with Effective Source Scatter Estimation (ESSE) SC. Alternative Free-Response (AFROC) analysis of the perfusion scores shows that NB-MO predicts a higher human performance for scatter-compensated reconstructions, in agreement with what has been reported in the published literature. These results suggest that NB-MO has good potential to generalize well to reconstruction methods not used during training, even for reasonably dissimilar datasets.
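
    A minimal sketch of the model-observer idea using scikit-learn's Gaussian Naive Bayes on toy polar-map features (the paper's feature extraction and per-reader training data are not reproduced):

      import numpy as np
      from sklearn.naive_bayes import GaussianNB

      rng = np.random.default_rng(7)
      X = rng.normal(size=(150, 17))   # toy features from 17 polar-map segments
      y = rng.integers(0, 5, 150)      # human perfusion score per case/region (0-4, toy)

      mo = GaussianNB().fit(X, y)      # the paper trains one such observer per reader
      print(mo.predict(X[:5]))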

  8. Perspective: Web-based machine learning models for real-time screening of thermoelectric materials properties

    NASA Astrophysics Data System (ADS)

    Gaultois, Michael W.; Oliynyk, Anton O.; Mar, Arthur; Sparks, Taylor D.; Mulholland, Gregory J.; Meredig, Bryce

    2016-05-01

    The experimental search for new thermoelectric materials remains largely confined to a limited set of successful chemical and structural families, such as chalcogenides, skutterudites, and Zintl phases. In principle, computational tools such as density functional theory (DFT) offer the possibility of rationally guiding experimental synthesis efforts toward very different chemistries. However, in practice, predicting thermoelectric properties from first principles remains a challenging endeavor [J. Carrete et al., Phys. Rev. X 4, 011019 (2014)], and experimental researchers generally do not directly use computation to drive their own synthesis efforts. To bridge this practical gap between experimental needs and computational tools, we report an open machine learning-based recommendation engine (http://thermoelectrics.citrination.com) for materials researchers that suggests promising new thermoelectric compositions based on pre-screening about 25 000 known materials and also evaluates the feasibility of user-designed compounds. We show this engine can identify interesting chemistries very different from known thermoelectrics. Specifically, we describe the experimental characterization of one example set of compounds derived from our engine, RE12Co5Bi (RE = Gd, Er), which exhibits surprising thermoelectric performance given its unprecedentedly high loading with metallic d and f block elements and warrants further investigation as a new thermoelectric material platform. We show that our engine predicts this family of materials to have low thermal and high electrical conductivities, but modest Seebeck coefficient, all of which are confirmed experimentally. We note that the engine also predicts materials that may simultaneously optimize all three properties entering into zT; we selected RE12Co5Bi for this study due to its interesting chemical composition and known facile synthesis.

  9. Rosen's (M,R) system as an X-machine.

    PubMed

    Palmer, Michael L; Williams, Richard A; Gatherer, Derek

    2016-11-07

    Robert Rosen's (M,R) system is an abstract biological network architecture that is allegedly both irreducible to sub-models of its component states and non-computable on a Turing machine. (M,R) stands as an obstacle to both reductionist and mechanistic presentations of systems biology, principally due to its self-referential structure. If (M,R) has the properties claimed for it, computational systems biology will not be possible, or at best will be a science of approximate simulations rather than accurate models. Several attempts have been made, at both empirical and theoretical levels, to disprove this assertion by instantiating (M,R) in software architectures. So far, these efforts have been inconclusive. In this paper, we attempt to demonstrate why, by showing how both finite state machine and stream X-machine formal architectures fail to capture the self-referential requirements of (M,R). We then show that a solution may be found in communicating X-machines, which remove self-reference using parallel computation, and we synthesise such machine architectures with object-orientation to create a formal basis for future software instantiations of (M,R) systems.
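
    A minimal stream X-machine sketch in Python: a finite control whose transitions are functions that read an input stream and update a memory value (labels hypothetical; the communicating, parallel variant the paper ultimately requires is not shown):

      def stream_x_machine(states, transitions, q0, memory, stream):
          # transitions maps (state, input) to a function returning
          # (next_state, new_memory)
          assert q0 in states
          q = q0
          for sym in stream:
              fn = transitions.get((q, sym))
              if fn is None:
                  raise ValueError(f"no transition from {q!r} on {sym!r}")
              q, memory = fn(memory)
          return q, memory

      transitions = {
          ("idle", "start"):   lambda m: ("running", m + 1),
          ("running", "tick"): lambda m: ("running", m + 1),
          ("running", "stop"): lambda m: ("idle", m),
      }
      print(stream_x_machine({"idle", "running"}, transitions, "idle", 0,
                             ["start", "tick", "tick", "stop"]))   # ('idle', 3)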

  10. A support vector machine model provides an accurate transcript-level-based diagnostic for major depressive disorder

    PubMed Central

    Yu, J S; Xue, A Y; Redei, E E; Bagheri, N

    2016-01-01

    Major depressive disorder (MDD) is a critical cause of morbidity and disability with an economic cost of hundreds of billions of dollars each year, necessitating more effective treatment strategies and novel approaches to translational research. A notable barrier in addressing this public health threat involves reliable identification of the disorder, as many affected individuals remain undiagnosed or misdiagnosed. An objective blood-based diagnostic test using transcript levels of a panel of markers would provide an invaluable tool for MDD, as the infrastructure—including equipment, trained personnel, billing, and governmental approval—for similar tests is well established in clinics worldwide. Here we present a supervised classification model utilizing support vector machines (SVMs) for the analysis of transcriptomic data readily obtained from a peripheral blood specimen. The model was trained on data from subjects with MDD (n=32) and age- and gender-matched controls (n=32). This SVM model provides a cross-validated sensitivity and specificity of 90.6% for the diagnosis of MDD using a panel of 10 transcripts. We applied a logistic function to the SVM output to quantify a likelihood-of-depression score. This score gives the probability of an MDD diagnosis and allows the tuning of specificity and sensitivity for individual patients, bringing personalized medicine closer in psychiatry. PMID:27779627
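
    A minimal sketch of the classifier-plus-likelihood idea with scikit-learn, where Platt scaling plays the role of the logistic mapping (toy data; the 10-transcript panel is not reproduced):

      import numpy as np
      from sklearn.model_selection import cross_val_score
      from sklearn.svm import SVC

      rng = np.random.default_rng(8)
      X = rng.normal(size=(64, 10))    # toy transcript levels for a 10-marker panel
      y = np.repeat([0, 1], 32)        # 32 controls, 32 MDD subjects

      clf = SVC(kernel="linear", probability=True)   # Platt scaling -> probabilities
      print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
      clf.fit(X, y)
      print("likelihood of depression:", clf.predict_proba(X[:3])[:, 1])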

  11. Workout Machine

    NASA Technical Reports Server (NTRS)

    1995-01-01

    The Orbotron is a tri-axial exercise machine patterned after a NASA training simulator for astronaut orientation in the microgravity of space. It has three orbiting rings corresponding to roll, pitch and yaw. The user is in the middle of the inner ring, with the stomach remaining at the center of all axes, eliminating dizziness. Human power starts the rings spinning, unlike the NASA air-powered system. Marketed by Fantasy Factory (formerly Orbotron, Inc.), the machine can improve aerobic capacity, strength and endurance in five- to seven-minute workouts.

  12. Improving model predictions for RNA interference activities that use support vector machine regression by combining and filtering features

    PubMed Central

    Peek, Andrew S

    2007-01-01

    Background RNA interference (RNAi) is a naturally occurring phenomenon that results in the suppression of a target RNA sequence through a variety of possible methods and pathways. To dissect the factors that make siRNA sequences effective, a regression kernel Support Vector Machine (SVM) approach was used to quantitatively model RNA interference activities. Results Eight overall feature mapping methods were compared in their abilities to build SVM regression models that predict published siRNA activities. The primary factors in predictive SVM models are position-specific nucleotide compositions. The secondary factors are position-independent sequence motifs (N-grams) and guide-strand to passenger-strand sequence thermodynamics. Finally, the factors that are least contributory, but still predictive of efficacy, are measures of intramolecular guide strand secondary structure and target strand secondary structure. Of these, the site of the 5'-most base of the guide strand is the most informative. Conclusion The capacity of specific feature mapping methods to build predictive models of RNAi activity suggests the relative biological importance of these features. Some feature mapping methods are more informative in building predictive models, and overall t-test filtering provides a way to remove noisy features and to make comparisons among datasets. Together, these features can yield predictive SVM regression models with increased accuracy between predicted and observed activities, both within datasets by cross-validation and between independently collected RNAi activity datasets. Feature filtering should be approached carefully, in that it is possible to reduce feature set size without substantially degrading predictive performance, but the features retained in the candidate models become increasingly distinct. Software to perform feature prediction and SVM training and testing on nucleic acid sequences can be found at
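
    A minimal sketch of SVM regression with univariate feature filtering, using an F-test as a stand-in for the paper's t-test filtering (toy data):

      import numpy as np
      from sklearn.feature_selection import SelectKBest, f_regression
      from sklearn.pipeline import make_pipeline
      from sklearn.svm import SVR

      rng = np.random.default_rng(9)
      X = rng.random((200, 300))       # toy position-specific nucleotide features
      y = X[:, :5].sum(axis=1) + rng.normal(scale=0.1, size=200)   # toy siRNA activity

      model = make_pipeline(SelectKBest(f_regression, k=50), SVR(kernel="rbf"))
      model.fit(X, y)
      print("R^2:", model.score(X, y))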

  13. RVMAB: Using the Relevance Vector Machine Model Combined with Average Blocks to Predict the Interactions of Proteins from Protein Sequences.

    PubMed

    An, Ji-Yong; You, Zhu-Hong; Meng, Fan-Rong; Xu, Shu-Juan; Wang, Yin

    2016-05-18

    Protein-Protein Interactions (PPIs) play essential roles in most cellular processes. Knowledge of PPIs is becoming increasingly important, which has prompted the development of technologies capable of discovering large-scale PPIs. Although many high-throughput biological technologies have been proposed to detect PPIs, they have unavoidable shortcomings, including cost, time intensity, and inherently high false positive and false negative rates. For these reasons, in silico methods are attracting much attention due to their good performance in predicting PPIs. In this paper, we propose a novel computational method known as RVM-AB that combines the Relevance Vector Machine (RVM) model and Average Blocks (AB) to predict PPIs from protein sequences. The main improvements result from representing protein sequences using the AB feature representation on a Position Specific Scoring Matrix (PSSM), reducing the influence of noise using Principal Component Analysis (PCA), and using a Relevance Vector Machine (RVM) based classifier. We performed five-fold cross-validation experiments on yeast and Helicobacter pylori datasets, and achieved very high accuracies of 92.98% and 95.58% respectively, which is significantly better than previous work. In addition, we obtained good prediction accuracies of 88.31%, 89.46%, 91.08%, 91.55%, and 94.81% on five other independent datasets (C. elegans, M. musculus, H. sapiens, H. pylori, and E. coli) for cross-species prediction. To further evaluate the proposed method, we compared it with the state-of-the-art support vector machine (SVM) classifier on the yeast dataset. The experimental results demonstrate that our RVM-AB method is clearly better than the SVM-based method. The promising experimental results show the efficiency and simplicity of the proposed method, which can serve as an automatic decision support tool. To facilitate extensive studies for future proteomics research, we developed a freely
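
    A minimal sketch of the Average Blocks (AB) step on a PSSM; the block count here is an assumption, and the downstream RVM is not shown (scikit-learn ships no RVM, so PCA plus any probabilistic classifier could stand in):

      import numpy as np

      def average_blocks(pssm, n_blocks=4):
          # Split a PSSM (sequence_length x 20) into consecutive blocks along
          # the sequence and average each block's rows, giving a fixed-length
          # feature vector regardless of sequence length.
          blocks = np.array_split(pssm, n_blocks, axis=0)
          return np.concatenate([b.mean(axis=0) for b in blocks])

      pssm = np.random.default_rng(10).normal(size=(137, 20))   # toy PSSM
      features = average_blocks(pssm)
      print(features.shape)    # (80,) = n_blocks * 20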

  14. Analysis of complex networks using aggressive abstraction.

    SciTech Connect

    Colbaugh, Richard; Glass, Kristin.; Willard, Gerald

    2008-10-01

    This paper presents a new methodology for analyzing complex networks in which the network of interest is first abstracted to a much simpler (but equivalent) representation, the required analysis is performed on the abstraction, and the analytic conclusions are then mapped back to the original network and interpreted there. We begin by identifying a broad and important class of complex networks that admit abstractions which are simultaneously dramatically simplifying and property preserving -- we call these aggressive abstractions -- and which can therefore be analyzed using the proposed approach. We then introduce and develop two forms of aggressive abstraction: 1) finite state abstraction, in which dynamical networks with uncountable state spaces are modeled using finite state systems, and 2) one-dimensional abstraction, whereby high-dimensional network dynamics are captured in a meaningful way using a single scalar variable. In each case, the property-preserving nature of the abstraction process is rigorously established and efficient algorithms are presented for computing the abstraction. The considerable potential of the proposed approach to complex network analysis is illustrated through case studies involving vulnerability analysis of technological networks and predictive analysis of social processes.
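
    A toy illustration of the first form, finite state abstraction: partition a continuous state space into intervals and record which interval-to-interval transitions the dynamics can make (hypothetical one-dimensional dynamics; the paper's property-preservation conditions are not checked here):

      import numpy as np

      f = lambda x: 0.5 * x + 0.3                      # hypothetical dynamics on [0, 1]
      edges = np.linspace(0.0, 1.0, 6)                 # 5 abstract states
      samples = np.linspace(0.0, 1.0, 1001)
      src = np.clip(np.digitize(samples, edges) - 1, 0, 4)
      dst = np.clip(np.digitize(f(samples), edges) - 1, 0, 4)
      transitions = sorted(set(zip(src.tolist(), dst.tolist())))
      print(transitions)    # abstract transition relation of the finite system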

  15. Finding Feasible Abstract Counter-Examples

    NASA Technical Reports Server (NTRS)

    Pasareanu, Corina S.; Dwyer, Matthew B.; Visser, Willem; Clancy, Daniel (Technical Monitor)

    2002-01-01

    A strength of model checking is its ability to automate the detection of subtle system errors and produce traces that exhibit those errors. Given the high computational cost of model checking, most researchers advocate the use of aggressive property-preserving abstractions. Unfortunately, the more aggressively a system is abstracted, the more infeasible behavior it will have. Thus, while abstraction enables efficient model checking, it also threatens the usefulness of model checking as a defect detection tool, since it may be difficult to determine whether a counter-example is feasible and hence worth developer time to analyze. We have explored several strategies for addressing this problem by extending an explicit-state model checker, Java PathFinder (JPF), to search for and analyze counter-examples in the presence of abstractions. We demonstrate that these techniques effectively preserve the defect detection ability of model checking in the presence of aggressive abstraction by applying them to check properties of several abstracted multi-threaded Java programs. These new capabilities are not specific to JPF and can be easily adapted to other model checking frameworks; we describe how this was done for the Bandera toolset.
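
    A toy illustration, in Python rather than JPF, of how an over-aggressive abstraction admits a counter-example that no concrete execution can realize (all names hypothetical):

      # Abstract an integer counter by parity only. The concrete system
      # x := x + 2 preserves parity, but the abstraction allows parity
      # to change, so the abstract trace [0, 1] is a spurious counter-example.
      def concrete_step(x):
          return x + 2

      trace = [0, 1]   # abstract counter-example: parity flips from 0 to 1
      feasible = any((x0 % 2, concrete_step(x0) % 2) == tuple(trace)
                     for x0 in range(10))
      print("abstract trace feasible concretely?", feasible)   # False -> spurious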

  16. Simulation models relevant to the protection of synchronous machines and transformers

    NASA Astrophysics Data System (ADS)

    Muthumuni, Dharshana De Silva

    2001-07-01

    The purpose of this research is to develop models which can be used to produce realistic test waveforms for the evaluation of protection systems used for generators and transformers. Software models of generators and transformers which have the capability to calculate voltage and current waveforms in the presence of internal faults are presented in this thesis. The thesis also presents accurate models of current transformers used in differential current protection schemes. These include air gapped current transformers which are widely used in transformer and generator protection. The models of generators and transformers can be used with the models of current transformers to obtain test waveforms to evaluate a protection system. The models are validated by comparing the results obtained from simulations with recorded waveforms.

  17. Circuit Model for Parameter Study of Marx Generators on Megajoule Machines

    DTIC Science & Technology

    2001-06-01

    Sandia National Laboratories (SNL) is planning to redesign the pulsed power driver on Z, including the Marx generators, the intermediate-store water...capacitors, the laser-triggered switches and the pulse-forming lines, to increase the energy delivered to a Z-pinch load. The present Marx system...generator. A circuit model has been developed using MICROCAP to model one of the 36 Marx generators. The model contains the capacitors, inter-stage

  18. Machine-learning techniques for building a diagnostic model for very mild dementia.

    PubMed

    Chen, Rong; Herskovits, Edward H

    2010-08-01

    Many researchers have sought to construct diagnostic models to differentiate individuals with very mild dementia (VMD) from healthy elderly people, based on structural magnetic-resonance (MR) images. These models have, for the most part, been based on discriminant analysis or logistic regression, with few reports of alternative approaches. To determine the relative strengths of different approaches to analyzing structural MR data to distinguish people with VMD from normal elderly control subjects, we evaluated seven different classification approaches, each of which we used to generate a diagnostic model from a training data set acquired from 83 subjects (33 VMD and 50 control). We then evaluated each diagnostic model using an independent data set acquired from 30 subjects (13 VMD and 17 controls). We found that there were significant performance differences across these seven diagnostic models. Relative to the diagnostic models generated by discriminant analysis and logistic regression, the diagnostic models generated by other high-performance diagnostic-model-generation algorithms manifested increased generalizability when diagnostic models were generated from all atlas structures.
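
    A minimal sketch of this kind of multi-model comparison on a held-out test set, using four common classifiers as stand-ins (toy data; the paper evaluated seven approaches on MR-derived features):

      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.linear_model import LogisticRegression
      from sklearn.svm import SVC

      rng = np.random.default_rng(11)
      X_train, y_train = rng.normal(size=(83, 20)), rng.integers(0, 2, 83)  # toy MR features
      X_test, y_test = rng.normal(size=(30, 20)), rng.integers(0, 2, 30)

      models = {
          "LDA": LinearDiscriminantAnalysis(),
          "logistic": LogisticRegression(max_iter=1000),
          "SVM": SVC(),
          "random forest": RandomForestClassifier(random_state=0),
      }
      for name, m in models.items():
          print(name, m.fit(X_train, y_train).score(X_test, y_test))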

  19. Analytical Model for Chip Formation in Case of Orthogonal Machining Process

    NASA Astrophysics Data System (ADS)

    Salvatore, Ferdinando; Mabrouki, Tarek; Hamdi, Hédi

    2011-01-01

    The present work presents an analytical methodology for modelling chip formation. To that end, a "decomposition approach" was adopted, based on the assumption that material removal is the sum of two contributions: ploughing and pure cutting. This analytical model was calibrated against a finite element model and experimental data in terms of temperature and force evolutions. The global aim is to offer the industrial community an efficient, fast-running analytical model of material removal in orthogonal cutting.
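
    The decomposition can be stated compactly; with hypothetical symbols (the paper's notation is not reproduced here), each measured force component splits as

      $F_i = F_i^{\mathrm{plough}} + F_i^{\mathrm{cut}}, \qquad i \in \{\text{cutting}, \text{thrust}\}$

    with the ploughing and pure-cutting terms calibrated separately against the finite element model and the experiments.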

  20. Wacky Machines

    ERIC Educational Resources Information Center

    Fendrich, Jean

    2002-01-01

    Collectors everywhere know that local antique shops and flea markets are treasure troves just waiting to be plundered. Science teachers might take a hint from these hobbyists, for the next community yard sale might be a repository of old, quirky items that are just the things to get students thinking about simple machines. By introducing some…