Science.gov

Sample records for abstract machine model

  1. The Abstract Machine Model for Transaction-based System Control

    SciTech Connect

    Chassin, David P.

    2003-01-31

    Recent work applying statistical mechanics to economic modeling has demonstrated the effectiveness of using thermodynamic theory to address the complexities of large-scale economic systems. Transaction-based control systems depend on the conjecture that when control of thermodynamic systems is based on price-mediated strategies (e.g., auctions, markets), the optimal allocation of resources in a market-based control system results in an emergent optimal control of the thermodynamic system. This paper proposes an abstract machine model as the necessary precursor for demonstrating this conjecture and establishes the dynamic laws as the basis for a special theory of emergence applied to the global behavior and control of complex adaptive systems. The abstract machine in a large system is the analog of a particle in thermodynamic theory. These laws permit the establishment of a theory of dynamic control of complex system behavior based on statistical mechanics. Thus we may be better able to engineer a few simple control laws for a very small number of device types which, when deployed in very large numbers and operated as a system of many interacting markets, yield stable and optimal control of the thermodynamic system.
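
    A minimal sketch of the price-mediated allocation that transaction-based control relies on: a uniform-price double auction clearing hypothetical device bids against supply offers. The bid format and midpoint pricing rule are illustrative assumptions, not the paper's mechanism.

```python
# Sketch of price-mediated resource allocation (uniform-price double auction),
# with invented bids; not the paper's actual market mechanism.
def clear_market(demand_bids, supply_offers):
    """demand_bids/supply_offers: lists of (price, quantity).
    Returns (clearing_price, traded_quantity)."""
    demand = sorted(demand_bids, key=lambda b: -b[0])   # willing to pay most first
    supply = sorted(supply_offers, key=lambda o: o[0])  # cheapest first
    traded, price = 0.0, None
    di, si, dq, sq = 0, 0, 0.0, 0.0
    while di < len(demand) and si < len(supply) and demand[di][0] >= supply[si][0]:
        q = min(demand[di][1] - dq, supply[si][1] - sq)
        traded += q
        price = 0.5 * (demand[di][0] + supply[si][0])   # midpoint of marginal pair
        dq += q; sq += q
        if dq >= demand[di][1]: di += 1; dq = 0.0
        if sq >= supply[si][1]: si += 1; sq = 0.0
    return price, traded

# Two buyers (e.g., thermostat loads) against two sellers of capacity.
print(clear_market([(10, 5), (6, 5)], [(4, 4), (7, 8)]))
```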

  2. Abstract quantum computing machines and quantum computational logics

    NASA Astrophysics Data System (ADS)

    Chiara, Maria Luisa Dalla; Giuntini, Roberto; Sergioli, Giuseppe; Leporini, Roberto

    2016-06-01

    Classical and quantum parallelism are deeply different, although it is sometimes claimed that quantum Turing machines are nothing but special examples of classical probabilistic machines. We introduce the concepts of deterministic state machine, classical probabilistic state machine and quantum state machine. On this basis, we discuss the question: To what extent can quantum state machines be simulated by classical probabilistic state machines? Each state machine is devoted to a single task determined by its program. Real computers, however, behave differently, being able to solve different kinds of problems. This capacity can be modeled, in the quantum case, by the mathematical notion of abstract quantum computing machine, whose different programs determine different quantum state machines. The computations of abstract quantum computing machines can be linguistically described by the formulas of a particular form of quantum logic, termed quantum computational logic.
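
    A toy numerical contrast between the two machine notions discussed: a classical probabilistic state machine evolves a probability vector with a stochastic matrix, while a quantum state machine evolves an amplitude vector with a unitary, so interference can undo mixing. The two-state example is illustrative, not the authors' formalism.

```python
# Classical probabilistic state machine vs quantum state machine, in two states.
import numpy as np

# Classical: stochastic transition matrix acting on a probability vector.
P = np.array([[0.5, 0.5],
              [0.5, 0.5]])
p0 = np.array([1.0, 0.0])
p2 = P @ (P @ p0)            # two steps: stays at the uniform distribution

# Quantum: Hadamard unitary acting on amplitudes; interference after two steps.
H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)
psi0 = np.array([1.0, 0.0])
psi2 = H @ (H @ psi0)        # H is its own inverse: amplitudes return to |0>

print("classical after 2 steps:", p2)                     # [0.5 0.5]
print("quantum probs after 2 steps:", np.abs(psi2) ** 2)  # [1. 0.]
```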

  3. Programming the Navier-Stokes computer: An abstract machine model and a visual editor

    NASA Technical Reports Server (NTRS)

    Middleton, David; Crockett, Tom; Tomboulian, Sherry

    1988-01-01

    The Navier-Stokes computer is a parallel computer designed to solve Computational Fluid Dynamics problems. Each processor contains several floating point units which can be configured under program control to implement a vector pipeline with several inputs and outputs. Since the development of an effective compiler for this computer appears to be very difficult, machine level programming seems necessary and support tools for this process have been studied. These support tools are organized into a graphical program editor. A programming process is described by which appropriate computations may be efficiently implemented on the Navier-Stokes computer. The graphical editor would support this programming process, verifying various programmer choices for correctness and deducing values such as pipeline delays and network configurations. Step by step details are provided and demonstrated with two example programs.

  4. Abstraction Augmented Markov Models.

    PubMed

    Caragea, Cornelia; Silvescu, Adrian; Caragea, Doina; Honavar, Vasant

    2010-12-13

    High accuracy sequence classification often requires the use of higher order Markov models (MMs). However, the number of MM parameters increases exponentially with the range of direct dependencies between sequence elements, thereby increasing the risk of overfitting when the data set is limited in size. We present abstraction augmented Markov models (AAMMs) that effectively reduce the number of numeric parameters of kth-order MMs by successively grouping strings of length k (i.e., k-grams) into abstraction hierarchies. We evaluate AAMMs on three protein subcellular localization prediction tasks. The results of our experiments show that abstraction makes it possible to construct predictive models that use a significantly smaller number of features (by one to three orders of magnitude) as compared to MMs. AAMMs are competitive with and, in some cases, significantly outperform MMs. Moreover, the results show that AAMMs often perform significantly better than variable order Markov models, such as decomposed context tree weighting, prediction by partial match, and probabilistic suffix trees.
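
    A minimal sketch of the core AAMM move: merge k-gram contexts whose next-symbol distributions are similar, so merged contexts share parameters. The greedy L1-distance merging below is a stand-in for the paper's agglomerative abstraction hierarchies; the tolerance and helper names are invented.

```python
# Group k-gram contexts with similar next-symbol distributions so that merged
# contexts share Markov parameters (the AAMM idea, greatly simplified).
from collections import defaultdict

def context_counts(seq, k):
    counts = defaultdict(lambda: defaultdict(int))
    for i in range(len(seq) - k):
        counts[seq[i:i+k]][seq[i+k]] += 1
    return counts

def l1(p, q, alphabet):
    def norm(c):
        t = sum(c.values())
        return {a: c.get(a, 0) / t for a in alphabet}
    pn, qn = norm(p), norm(q)
    return sum(abs(pn[a] - qn[a]) for a in alphabet)

def merge_contexts(counts, alphabet, tol=0.5):
    groups = []                       # each group: ([contexts], pooled counts)
    for ctx, dist in counts.items():
        for g in groups:
            if l1(g[1], dist, alphabet) < tol:
                g[0].append(ctx)      # share parameters with this abstraction
                for a, n in dist.items():
                    g[1][a] = g[1].get(a, 0) + n
                break
        else:
            groups.append(([ctx], dict(dist)))
    return groups

seq = "ACGTACGTAACCGGTTACGT"
counts = context_counts(seq, 2)
groups = merge_contexts(counts, "ACGT")
print(f"{len(counts)} contexts -> {len(groups)} abstractions")
```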

  5. Automatic Review of Abstract State Machines by Meta Property Verification

    NASA Technical Reports Server (NTRS)

    Arcaini, Paolo; Gargantini, Angelo; Riccobene, Elvinia

    2010-01-01

    A model review is a validation technique aimed at determining whether a model is of sufficient quality; it allows defects to be identified early in the system development, reducing the cost of fixing them. In this paper we propose a technique to perform automatic review of Abstract State Machine (ASM) formal specifications. We first detect a family of typical vulnerabilities and defects that a developer can introduce during the modeling activity using ASMs, and we express such faults as violations of meta-properties that guarantee certain quality attributes of the specification. These meta-properties are then mapped to temporal logic formulas and model checked for their violation. As a proof of concept, we also report the result of applying this ASM review process to several specifications.

  6. Multimodeling and Model Abstraction

    USDA-ARS's Scientific Manuscript database

    The multiplicity of models of the same process or phenomenon is commonplace in environmental modeling. The last 10 years have brought marked interest in making use of the variety of conceptual approaches instead of attempting to find the best model or using a single preferred model. Two systematic approa...

  7. Modelling Metamorphism by Abstract Interpretation

    NASA Astrophysics Data System (ADS)

    Dalla Preda, Mila; Giacobazzi, Roberto; Debray, Saumya; Coogan, Kevin; Townsend, Gregg M.

    Metamorphic malware apply semantics-preserving transformations to their own code in order to foil detection systems based on signature matching. In this paper we consider the problem of automatically extracting metamorphic signatures from such malware. We introduce a semantics for self-modifying code, called phase semantics, and prove its correctness by showing that it is an abstract interpretation of the standard trace semantics. Phase semantics precisely models the metamorphic code behavior by providing a set of traces of programs which correspond to the possible evolutions of the metamorphic code during execution. We show that metamorphic signatures can be automatically extracted by abstract interpretation of the phase semantics, and that regular metamorphism can be modelled as a finite-state automaton abstraction of the phase semantics.

  8. Abstract models of molecular walkers

    NASA Astrophysics Data System (ADS)

    Semenov, Oleg

    Recent advances in single-molecule chemistry have led to designs for artificial multi-pedal walkers that follow tracks of chemicals. The walkers, called molecular spiders, consist of a rigid chemically inert body and several flexible enzymatic legs. The legs can reversibly bind to chemical substrates on a surface, and through their enzymatic action convert them to products. We study abstract models of molecular spiders to evaluate how efficiently they can perform two tasks: molecular transport of cargo over tracks and search for targets on finite surfaces. For the single-spider model our simulations show a transient behavior wherein certain spiders move superdiffusively over significant distances and times. This gives the spiders potential as a faster-than-diffusion transport mechanism. However, analysis shows that single-spider motion eventually decays into an ordinary diffusive motion, owing to the ever increasing size of the region of products. Inspired by cooperative behavior of natural molecular walkers, we propose a symmetric exclusion process (SEP) model for multiple walkers interacting as they move over a one-dimensional lattice. We show that when walkers are sequentially released from the origin, the collective effect is to prevent the leading walkers from moving too far backwards. Hence, there is an effective outward pressure on the leading walkers that keeps them moving superdiffusively for longer times. Despite this improvement the leading spider eventually slows down and moves diffusively, similarly to a single spider. The slowdown happens because all spiders behind the leading spiders never encounter substrates, and thus they are never biased. They cannot keep up with leading spiders, and cannot put enough pressure on them. Next, we investigate search properties of a single and multiple spiders moving over one- and two-dimensional surfaces with various absorbing and reflecting boundaries. For the single-spider model we evaluate by how much the

  9. Machine characterization based on an abstract high-level language machine

    NASA Technical Reports Server (NTRS)

    Saavedra-Barrera, Rafael H.; Smith, Alan Jay; Miya, Eugene

    1989-01-01

    Measurements are presented for a large number of machines ranging from small workstations to supercomputers. The authors combine these measurements into groups of parameters which relate to specific aspects of the machine implementation, and use these groups to provide overall machine characterizations. The authors also define the concept of pershapes, which represent the level of performance of a machine for different types of computation. A metric based on pershapes is introduced that provides a quantitative way of measuring how similar two machines are in terms of their performance distributions. The metric is related to the extent to which pairs of machines have varying relative performance levels depending on which benchmark is used.

  10. Abstraction and model evaluation in category learning.

    PubMed

    Vanpaemel, Wolf; Storms, Gert

    2010-05-01

    Thirty previously published data sets, from seminal category learning tasks, are reanalyzed using the varying abstraction model (VAM). Unlike a prototype-versus-exemplar analysis, which focuses on extreme levels of abstraction only, a VAM analysis also considers the possibility of partial abstraction. Whereas most data sets support no abstraction when only the extreme possibilities are considered, we show that evidence for abstraction can be provided using the broader view on abstraction provided by the VAM. The present results generalize earlier demonstrations of partial abstraction (Vanpaemel & Storms, 2008), in which only a small number of data sets was analyzed. Following the dominant modus operandi in category learning research, Vanpaemel and Storms evaluated the models on their best fit, a practice known to ignore the complexity of the models under consideration. In the present study, in contrast, model evaluation not only relies on the maximal likelihood, but also on the marginal likelihood, which is sensitive to model complexity. Finally, using a large recovery study, it is demonstrated that, across the 30 data sets, complexity differences between the models in the VAM family are small. This indicates that a (computationally challenging) complexity-sensitive model evaluation method is uncalled for, and that the use of a (computationally straightforward) complexity-insensitive model evaluation method is justified.

  11. Towards Compatible and Interderivable Semantic Specifications for the Scheme Programming Language, Part II: Reduction Semantics and Abstract Machines

    NASA Astrophysics Data System (ADS)

    Biernacka, Małgorzata; Danvy, Olivier

    We present a context-sensitive reduction semantics for a lambda-calculus with explicit substitutions and we show that the functional implementation of this small-step semantics mechanically corresponds to that of the abstract machine for Core Scheme presented by Clinger at PLDI’98, including first-class continuations. Starting from this reduction semantics, (1) we refocus it into a small-step abstract machine; (2) we fuse the transition function of this abstract machine with its driver loop, obtaining a big-step abstract machine which is staged; (3) we compress its corridor transitions, obtaining an eval/continue abstract machine; and (4) we unfold its ground closures, which yields an abstract machine that essentially coincides with Clinger’s machine. This lambda-calculus with explicit substitutions therefore aptly accounts for Core Scheme, including Clinger’s permutations and unpermutations.
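
    To make the eval/continue shape concrete, here is a generic eval/continue abstract machine for call-by-value lambda calculus. It is a textbook CEK-style machine under invented encodings, not Biernacka and Danvy's derived Core Scheme machine.

```python
# A tiny eval/continue ("CEK"-style) abstract machine for call-by-value
# lambda calculus. Terms: ('var', x) | ('lam', x, body) | ('app', f, a).
def run(term):
    state = ('eval', term, {}, ('halt',))
    while True:
        if state[0] == 'eval':
            _, t, env, k = state
            if t[0] == 'var':
                state = ('continue', k, env[t[1]])
            elif t[0] == 'lam':
                state = ('continue', k, ('clo', t[1], t[2], env))
            else:   # application: evaluate operator, remember the operand
                state = ('eval', t[1], env, ('arg', t[2], env, k))
        else:       # continue: feed value v to continuation k
            _, k, v = state
            if k[0] == 'halt':
                return v
            if k[0] == 'arg':         # operator done; evaluate the operand
                _, a, env, k2 = k
                state = ('eval', a, env, ('fun', v, k2))
            else:                     # ('fun', closure, k): apply the closure
                _, clo, k2 = k
                _, x, body, cenv = clo
                state = ('eval', body, {**cenv, x: v}, k2)

# (\x.x) (\y.y)  ==>  the closure for \y.y
identity = ('lam', 'y', ('var', 'y'))
print(run(('app', ('lam', 'x', ('var', 'x')), identity))[0:2])
```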

  12. Model-based machine learning

    PubMed Central

    Bishop, Christopher M.

    2013-01-01

    Several decades of research in the field of machine learning have resulted in a multitude of different algorithms for solving a broad range of problems. To tackle a new application, a researcher typically tries to map their problem onto one of these existing methods, often influenced by their familiarity with specific algorithms and by the availability of corresponding software implementations. In this study, we describe an alternative methodology for applying machine learning, in which a bespoke solution is formulated for each new application. The solution is expressed through a compact modelling language, and the corresponding custom machine learning code is then generated automatically. This model-based approach offers several major advantages, including the opportunity to create highly tailored models for specific scenarios, as well as rapid prototyping and comparison of a range of alternative models. Furthermore, newcomers to the field of machine learning do not have to learn about the huge range of traditional methods, but instead can focus their attention on understanding a single modelling environment. In this study, we show how probabilistic graphical models, coupled with efficient inference algorithms, provide a very flexible foundation for model-based machine learning, and we outline a large-scale commercial application of this framework involving tens of millions of users. We also describe the concept of probabilistic programming as a powerful software environment for model-based machine learning, and we discuss a specific probabilistic programming language called Infer.NET, which has been widely used in practical applications. PMID:23277612

  13. Model-based machine learning.

    PubMed

    Bishop, Christopher M

    2013-02-13

    Several decades of research in the field of machine learning have resulted in a multitude of different algorithms for solving a broad range of problems. To tackle a new application, a researcher typically tries to map their problem onto one of these existing methods, often influenced by their familiarity with specific algorithms and by the availability of corresponding software implementations. In this study, we describe an alternative methodology for applying machine learning, in which a bespoke solution is formulated for each new application. The solution is expressed through a compact modelling language, and the corresponding custom machine learning code is then generated automatically. This model-based approach offers several major advantages, including the opportunity to create highly tailored models for specific scenarios, as well as rapid prototyping and comparison of a range of alternative models. Furthermore, newcomers to the field of machine learning do not have to learn about the huge range of traditional methods, but instead can focus their attention on understanding a single modelling environment. In this study, we show how probabilistic graphical models, coupled with efficient inference algorithms, provide a very flexible foundation for model-based machine learning, and we outline a large-scale commercial application of this framework involving tens of millions of users. We also describe the concept of probabilistic programming as a powerful software environment for model-based machine learning, and we discuss a specific probabilistic programming language called Infer.NET, which has been widely used in practical applications.
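
    A small illustration of the model-based stance: write the generative model once and derive inference from it, rather than selecting a named algorithm. Grid-based Bayesian inference for a Bernoulli parameter stands in for what a probabilistic programming system such as Infer.NET automates; all names and numbers here are illustrative.

```python
# "Model as code": state the generative model, then mechanically derive the
# posterior. Grid inference for a coin bias under a flat prior.
import numpy as np

def model_loglik(theta, data):
    # Generative model: each observation ~ Bernoulli(theta).
    heads = sum(data)
    return heads * np.log(theta) + (len(data) - heads) * np.log(1 - theta)

data = [1, 1, 0, 1, 1, 1, 0, 1]               # observed flips
thetas = np.linspace(0.01, 0.99, 99)          # grid over the latent parameter
log_post = np.array([model_loglik(t, data) for t in thetas])
post = np.exp(log_post - log_post.max())
post /= post.sum()

print("posterior mean bias:", float(np.sum(thetas * post)))
```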

  14. Abstracts

    NASA Astrophysics Data System (ADS)

    2012-09-01

    Measuring cosmological parameters with GRBs: status and perspectives; New interpretation of the Amati relation; The SED Machine - a dedicated transient spectrograph; PTF10iue - evidence for an internal engine in a unique Type Ic SN; Direct evidence for the collapsar model of long gamma-ray bursts; On pair instability supernovae and gamma-ray bursts; Pan-STARRS1 observations of ultraluminous SNe; The influence of rotation on the critical neutrino luminosity in core-collapse supernovae; General relativistic magnetospheres of slowly rotating and oscillating neutron stars; Host galaxies of short GRBs; GRB 100418A: a bridge between GRB-associated hypernovae and SNe; Two super-luminous SNe at z ~ 1.5 from the SNLS; Prospects for very-high-energy gamma-ray bursts with the Cherenkov Telescope Array; The dynamics and radiation of relativistic flows from massive stars; The search for light echoes from the supernova explosion of 1181 AD; The proto-magnetar model for gamma-ray bursts; Stellar black holes at the dawn of the universe; MAXI J0158-744: the discovery of a supersoft X-ray transient; Wide-band spectra of magnetar burst emission; Dust formation and evolution in envelope-stripped core-collapse supernovae; The host galaxies of dark gamma-ray bursts; Keck observations of 150 GRB host galaxies; Search for properties of GRBs at large redshift; The early emission from SNe; Spectral properties of SN shock breakout; MAXI observation of GRBs and short X-ray transients; A three-dimensional view of SN 1987A using light echo spectroscopy; X-ray study of the southern extension of the SNR Puppis A; All-sky survey of short X-ray transients by MAXI GSC; Development of the CALET gamma-ray burst monitor (CGBM)

  15. SATURATED ZONE FLOW AND TRANSPORT MODEL ABSTRACTION

    SciTech Connect

    B.W. ARNOLD

    2004-10-27

    The purpose of the saturated zone (SZ) flow and transport model abstraction task is to provide radionuclide-transport simulation results for use in the total system performance assessment (TSPA) for license application (LA) calculations. This task includes assessment of uncertainty in parameters that pertain to both groundwater flow and radionuclide transport in the models used for this purpose. This model report documents the following: (1) The SZ transport abstraction model, which consists of a set of radionuclide breakthrough curves at the accessible environment for use in the TSPA-LA simulations of radionuclide releases into the biosphere. These radionuclide breakthrough curves contain information on radionuclide-transport times through the SZ. (2) The SZ one-dimensional (1-D) transport model, which is incorporated in the TSPA-LA model to simulate the transport, decay, and ingrowth of radionuclide decay chains in the SZ. (3) The analysis of uncertainty in groundwater-flow and radionuclide-transport input parameters for the SZ transport abstraction model and the SZ 1-D transport model. (4) The analysis of the background concentration of alpha-emitting species in the groundwater of the SZ.
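
    A sketch of the decay-and-ingrowth bookkeeping such a 1-D transport abstraction must perform, combining a two-member Bateman chain with a breakthrough-curve transport-time distribution. Half-lives, times, and weights are invented for illustration and bear no relation to the report's parameter values.

```python
# Two-member decay chain (parent -> daughter) evaluated over the transport
# times implied by a breakthrough curve. Illustrative numbers only.
import numpy as np

half_life = {"parent": 1.0e4, "daughter": 5.0e3}           # years (made up)
lam = {k: np.log(2) / v for k, v in half_life.items()}

def chain_at(t, n_parent0=1.0):
    """Bateman solution for a 2-member chain, unit initial parent."""
    lp, ld = lam["parent"], lam["daughter"]
    parent = n_parent0 * np.exp(-lp * t)
    daughter = n_parent0 * lp / (ld - lp) * (np.exp(-lp * t) - np.exp(-ld * t))
    return parent, daughter

# Breakthrough curve: transport-time distribution through the SZ (made up).
travel_times = np.array([2.0e3, 5.0e3, 1.0e4])    # years
weights = np.array([0.2, 0.5, 0.3])               # fraction arriving each time

arrived = sum(w * np.array(chain_at(t)) for w, t in zip(weights, travel_times))
print("fraction arriving as (parent, daughter):", arrived)
```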

  16. Abstracts

    ERIC Educational Resources Information Center

    American Biology Teacher, 1977

    1977-01-01

    Included are over 50 abstracts of papers being presented at the 1977 National Association of Biology Teachers Convention. Included in each abstract are the title, author, and summary of the paper. Topics include photographic techniques, environmental studies, and biological instruction. (MA)

  17. Directory of Energy Information Administration Model Abstracts

    SciTech Connect

    Not Available

    1986-07-16

    This directory partially fulfills the requirements of Section 8c of the documentation order, which states in part that: The Office of Statistical Standards will annually publish an EIA document based on the collected abstracts and the appendices. This report contains brief statements about each model's title, acronym, purpose, and status, followed by more detailed information on characteristics, uses, and requirements. Sources for additional information are identified. All models active through March 1985 are included. The main body of this directory is an alphabetical list of all active EIA models. Appendix A identifies major EIA modeling systems and the models within these systems, and Appendix B identifies active EIA models by type (basic, auxiliary, and developing). EIA also leases models developed by proprietary software vendors. Documentation for these proprietary models is the responsibility of the companies from which they are leased. EIA has recently leased models from Chase Econometrics, Inc., Data Resources, Inc. (DRI), the Oak Ridge National Laboratory (ORNL), and Wharton Econometric Forecasting Associates (WEFA). Leased models are not abstracted here. The directory is intended for the use of energy and energy-policy analysts in the public and private sectors.

  18. Model Checking Abstract PLEXIL Programs with SMART

    NASA Technical Reports Server (NTRS)

    Siminiceanu, Radu I.

    2007-01-01

    We describe a method to automatically generate discrete-state models of abstract Plan Execution Interchange Language (PLEXIL) programs that can be analyzed using model checking tools. Starting from a high-level description of a PLEXIL program or a family of programs with common characteristics, the generator lays the framework that models the principles of program execution. The concrete parts of the program are not automatically generated, but require the modeler to introduce them by hand. As a case study, we generate models to verify properties of the PLEXIL macro constructs that are introduced as shorthand notation. After an exhaustive analysis, we conclude that the macro definitions obey the intended semantics and behave as expected, contingent on a few specific requirements on the timing semantics of micro-steps in the concrete executive implementation.

  19. Multivariate cross-classification: applying machine learning techniques to characterize abstraction in neural representations

    PubMed Central

    Kaplan, Jonas T.; Man, Kingson; Greening, Steven G.

    2015-01-01

    Here we highlight an emerging trend in the use of machine learning classifiers to test for abstraction across patterns of neural activity. When a classifier algorithm is trained on data from one cognitive context, and tested on data from another, conclusions can be drawn about the role of a given brain region in representing information that abstracts across those cognitive contexts. We call this kind of analysis Multivariate Cross-Classification (MVCC), and review several domains where it has recently made an impact. MVCC has been important in establishing correspondences among neural patterns across cognitive domains, including motor-perception matching and cross-sensory matching. It has been used to test for similarity between neural patterns evoked by perception and those generated from memory. Other work has used MVCC to investigate the similarity of representations for semantic categories across different kinds of stimulus presentation, and in the presence of different cognitive demands. We use these examples to demonstrate the power of MVCC as a tool for investigating neural abstraction and discuss some important methodological issues related to its application. PMID:25859202

  20. Multivariate cross-classification: applying machine learning techniques to characterize abstraction in neural representations.

    PubMed

    Kaplan, Jonas T; Man, Kingson; Greening, Steven G

    2015-01-01

    Here we highlight an emerging trend in the use of machine learning classifiers to test for abstraction across patterns of neural activity. When a classifier algorithm is trained on data from one cognitive context, and tested on data from another, conclusions can be drawn about the role of a given brain region in representing information that abstracts across those cognitive contexts. We call this kind of analysis Multivariate Cross-Classification (MVCC), and review several domains where it has recently made an impact. MVCC has been important in establishing correspondences among neural patterns across cognitive domains, including motor-perception matching and cross-sensory matching. It has been used to test for similarity between neural patterns evoked by perception and those generated from memory. Other work has used MVCC to investigate the similarity of representations for semantic categories across different kinds of stimulus presentation, and in the presence of different cognitive demands. We use these examples to demonstrate the power of MVCC as a tool for investigating neural abstraction and discuss some important methodological issues related to its application.
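
    A minimal sketch of the MVCC procedure on synthetic data: a linear classifier is trained on patterns from one cognitive context and tested on patterns from another that shares only the category signal. The simulated "voxel" data and context offsets are stand-ins for real neuroimaging patterns.

```python
# Train on one context, test on another: the classifier transfers only if the
# category information abstracts across contexts.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n, voxels = 100, 50
signal = rng.normal(size=voxels)          # category signal shared by contexts

def make_context(offset):
    y = rng.integers(0, 2, size=n)
    X = rng.normal(size=(n, voxels)) + offset   # context-specific shift
    X += np.outer(2 * y - 1, signal)            # shared category information
    return X, y

X_percep, y_percep = make_context(offset=0.0)   # e.g., perception trials
X_memory, y_memory = make_context(offset=1.0)   # e.g., memory trials

clf = LinearSVC(max_iter=5000).fit(X_percep, y_percep)
print("cross-classification accuracy:", clf.score(X_memory, y_memory))
```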

  1. Machine learning in sedimentation modelling.

    PubMed

    Bhattacharya, B; Solomatine, D P

    2006-03-01

    The paper presents machine learning (ML) models that predict sedimentation in the harbour basin of the Port of Rotterdam. The important factors affecting the sedimentation process, such as waves, wind, tides, surge and river discharge, are studied, the corresponding time series data are analysed, missing values are estimated, and the most important variables behind the process are chosen as the inputs. Two ML methods are used: MLP ANN and the M5 model tree. The latter is a collection of piece-wise linear regression models, each being an expert for a particular region of the input space. The models are trained on the data collected during 1992-1998 and tested on the data of 1999-2000. The predictive accuracy of the models is found to be adequate for potential use in operational decision making.
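
    A sketch of the described setup using one of the two methods (an MLP regressor) on synthetic wave/discharge inputs, with an early period for training and a later one for testing, mirroring the 1992-1998/1999-2000 split. The data-generating function is invented.

```python
# Train an MLP on synthetic hydro-meteorological inputs to predict
# sedimentation; early period trains, late period tests.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
n = 500
waves = rng.gamma(2.0, 1.0, n)
discharge = rng.gamma(3.0, 2.0, n)
sediment = 0.8 * waves + 0.3 * np.sqrt(discharge) + rng.normal(0, 0.2, n)

X = np.column_stack([waves, discharge])
train, test = slice(0, 400), slice(400, n)    # train early, test late
mlp = MLPRegressor(hidden_layer_sizes=(8,), max_iter=3000, random_state=0)
mlp.fit(X[train], sediment[train])
print("R^2 on held-out period:", mlp.score(X[test], sediment[test]))
```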

  2. Rough set models of Physarum machines

    NASA Astrophysics Data System (ADS)

    Pancerz, Krzysztof; Schumann, Andrew

    2015-04-01

    In this paper, we consider transition system models of behaviour of Physarum machines in terms of rough set theory. A Physarum machine, a biological computing device implemented in the plasmodium of Physarum polycephalum (true slime mould), is a natural transition system. In the behaviour of Physarum machines, one can notice some ambiguity in Physarum motions that influences exact anticipation of states of machines in time. To model this ambiguity, we propose to use rough set models created over transition systems. Rough sets are an appropriate tool to deal with rough (ambiguous, imprecise) concepts in the universe of discourse.
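
    A minimal sketch of the rough-set construction over states: states indistinguishable by an observed attribute form equivalence classes, and a target set of states is bracketed by lower and upper approximations. The toy states and attribute are invented, not an actual Physarum machine.

```python
# Lower/upper rough-set approximations of a target set of states under an
# indiscernibility relation induced by one observed attribute.
from collections import defaultdict

states = ["s0", "s1", "s2", "s3", "s4"]
observe = {"s0": "A", "s1": "A", "s2": "B", "s3": "B", "s4": "C"}  # attribute
target = {"s1", "s2", "s3"}        # e.g., states the plasmodium may reach

classes = defaultdict(set)         # equivalence classes of indiscernible states
for s in states:
    classes[observe[s]].add(s)

lower = {s for c in classes.values() if c <= target for s in c}
upper = {s for c in classes.values() if c & target for s in c}

print("lower approximation:", lower)   # certainly in the target
print("upper approximation:", upper)   # possibly in the target
```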

  3. Abstract models for the synthesis of optimization algorithms.

    NASA Technical Reports Server (NTRS)

    Meyer, G. G. L.; Polak, E.

    1971-01-01

    Systematic approach to the problem of synthesis of optimization algorithms. Abstract models for algorithms are developed which guide the inventive process toward 'conceptual' algorithms which may consist of operations that are inadmissible in a practical method. Once the abstract models are established, a set of methods for converting 'conceptual' algorithms falling into the class defined by the abstract models into 'implementable' iterative procedures is presented.

  4. Metagenomic Classification Using an Abstraction Augmented Markov Model

    PubMed Central

    Zhu, Xiujun (Sylvia)

    2016-01-01

    The abstraction augmented Markov model (AAMM) is an extension of a Markov model that can be used for the analysis of genetic sequences. It is developed using the frequencies of all possible consecutive words of the same length (p-mers). This article reviews the theory behind AAMMs and applies it to metagenomic classification. PMID:26618474

  5. "Schema Abstraction" in a Multiple-Trace Memory Model.

    ERIC Educational Resources Information Center

    Psychological Review, 1986

    1986-01-01

    A simulation model of episodic memory, MINERVA 2, is applied to the learning of concepts, as represented by the schema-abstraction task. The model successfully predicts basic findings from the schema-abstraction literature, including some that have been cited as evidence against exemplar theories of concepts. (Author/LMO)
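
    A sketch of the multiple-trace mechanism behind MINERVA 2: every stored exemplar is activated in proportion to the cube of its similarity to a probe, and the summed "echo" resembles the prototype even though no prototype is ever stored. Dimensions and noise levels are illustrative.

```python
# Multiple-trace "schema abstraction": the echo of a probe over noisy stored
# exemplars recovers the prototype.
import numpy as np

rng = np.random.default_rng(2)
prototype = rng.choice([-1, 1], size=20)

# Store noisy exemplars only; the prototype itself is never stored.
traces = np.array([np.where(rng.random(20) < 0.2, -prototype, prototype)
                   for _ in range(30)])

probe = np.where(rng.random(20) < 0.2, -prototype, prototype)
sim = traces @ probe / probe.size          # similarity in [-1, 1]
echo = (sim ** 3) @ traces                 # cubed activation, summed traces

match = np.sign(echo) @ prototype / prototype.size
print("echo/prototype agreement:", match)  # near 1: abstraction emerges
```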

  6. Modelling abstraction licensing strategies ahead of the UK's water abstraction licensing reform

    NASA Astrophysics Data System (ADS)

    Klaar, M. J.

    2012-12-01

    Within England and Wales, river water abstractions are licensed and regulated by the Environment Agency (EA), who uses compliance with the Environmental Flow Indicator (EFI) to ascertain where abstraction may cause undesirable effects on river habitats and species. The EFI is a percentage deviation from natural flow represented using a flow duration curve. The allowable percentage deviation changes with different flows, and also changes depending on an assessment of the sensitivity of the river to changes in flow (Table 1). Within UK abstraction licensing, resource availability is expressed as a surplus or deficit of water resources in relation to the EFI, and utilises the concept of 'hands-off-flows' (HOFs) at the specified flow statistics detailed in Table 1. Use of a HOF system enables abstraction to cease at set flows, but also enables abstraction to occur at periods of time when more water is available. Compliance at low flows (Q95) is used by the EA to determine the hydrological classification and compliance with the Water Framework Directive (WFD) for identifying waterbodies where flow may be causing or contributing to a failure in good ecological status (GES; Table 2). This compliance assessment shows where the scenario flows are below the EFI and by how much, to help target measures for further investigation and assessment. Currently, the EA is reviewing the EFI methodology in order to assess whether or not it can be used within the reformed water abstraction licensing system which is being planned by the Department for Environment, Food and Rural Affairs (DEFRA) to ensure the licensing system is resilient to the challenges of climate change and population growth, while allowing abstractors to meet their water needs efficiently, and better protect the environment. In order to assess the robustness of the EFI, a simple model has been created which allows a number of abstraction, flow and licensing scenarios to be run to determine WFD compliance using the
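
    A sketch of the EFI-style compliance test described: derive Q95 from a flow duration curve, apply an allowable percentage deviation to obtain a hands-off flow, and count the days on which a requested abstraction may run. The synthetic flow series, deviation, and rates are invented.

```python
# Flow duration curve, hands-off flow (HOF), and abstraction compliance check.
import numpy as np

rng = np.random.default_rng(3)
natural_flow = rng.lognormal(mean=1.0, sigma=0.8, size=3650)   # daily, m^3/s

q95 = np.percentile(natural_flow, 5)     # flow exceeded 95% of the time
allowed_deviation = 0.10                 # e.g., 10% deviation allowed at Q95
hof = q95 * (1 - allowed_deviation)      # hands-off flow threshold

abstraction_rate = 0.3                   # requested take, m^3/s
take = np.where(natural_flow - abstraction_rate >= hof, abstraction_rate, 0.0)
print(f"Q95 = {q95:.2f}, HOF = {hof:.2f}, days abstraction allowed:",
      int((take > 0).sum()))
```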

  7. Vibration absorber modeling for handheld machine tool

    NASA Astrophysics Data System (ADS)

    Abdullah, Mohd Azman; Mustafa, Mohd Muhyiddin; Jamil, Jazli Firdaus; Salim, Mohd Azli; Ramli, Faiz Redza

    2015-05-01

    Handheld machine tools produce continuous vibration to the users during operation. This vibration causes harmful effects to the health of users for repeated operations in a long period of time. In this paper, a dynamic vibration absorber (DVA) is designed and modeled to reduce the vibration generated by the handheld machine tool. Several designs and models of vibration absorbers with various stiffness properties are simulated, tested and optimized in order to diminish the vibration. Ordinary differential equation is used to derive and formulate the vibration phenomena in the machine tool with and without the DVA. The final transfer function of the DVA is later analyzed using commercial available mathematical software. The DVA with optimum properties of mass and stiffness is developed and applied on the actual handheld machine tool. The performance of the DVA is experimentally tested and validated by the final result of vibration reduction.
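
    A sketch of the classic undamped analysis underlying such a design: for a two-degree-of-freedom system, the primary-mass amplitude vanishes when the absorber is tuned so that sqrt(k2/m2) equals the forcing frequency. Parameter values are made up, not the paper's optimized design.

```python
# Undamped dynamic vibration absorber: primary-mass amplitude vs frequency.
import numpy as np

m1, k1 = 1.0, 1000.0        # primary mass and stiffness
m2, k2 = 0.1, 100.0         # absorber tuned so sqrt(k2/m2) = sqrt(1000) rad/s

def primary_amplitude(w, f0=1.0):
    det = (k1 + k2 - m1 * w**2) * (k2 - m2 * w**2) - k2**2
    return abs(f0 * (k2 - m2 * w**2) / det)

for w in (20.0, np.sqrt(k2 / m2), 40.0):
    print(f"omega = {w:6.2f} rad/s -> |X1| = {primary_amplitude(w):.2e}")
```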

  8. Quantum mechanical hamiltonian models of turing machines

    NASA Astrophysics Data System (ADS)

    Benioff, Paul

    1982-11-01

    Quantum mechanical Hamiltonian models, which represent an arbitrary but finite number of steps of any Turing machine computation, are constructed here on a finite lattice of spin-1/2 systems. Different regions of the lattice correspond to different components of the Turing machine (plus recording system). Successive states of any machine computation are represented in the model by spin configuration states. Both time-independent and time-dependent Hamiltonian models are constructed here. The time-independent models do not dissipate energy or degrade the system state as they evolve. They operate close to the quantum limit in that the total system energy uncertainty/computation speed is close to the limit given by the time-energy uncertainty relation. However, the model evolution is time global and the Hamiltonian is more complex. The time-dependent models do not degrade the system state. Also they are time local and the Hamiltonian is less complex.

  9. Coupling Radar Rainfall to Hydrological Models for Water Abstraction Management

    NASA Astrophysics Data System (ADS)

    Asfaw, Alemayehu; Shucksmith, James; Smith, Andrea; MacDonald, Ken

    2015-04-01

    The impacts of climate change and growing water use are likely to put considerable pressure on water resources and the environment. In the UK, a reform to surface water abstraction policy has recently been proposed which aims to increase the efficiency of using available water resources whilst minimising impacts on the aquatic environment. Key aspects of this reform include the consideration of dynamic rather than static abstraction licensing as well as introducing water trading concepts. Dynamic licensing will permit varying levels of abstraction dependent on environmental conditions (i.e. river flow and quality). The practical implementation of an effective dynamic abstraction strategy requires suitable flow forecasting techniques to inform abstraction asset management. Potentially the predicted availability of water resources within a catchment can be coupled to predicted demand and current storage to inform a cost-effective water resource management strategy which minimises environmental impacts. The aim of this work is to use a historical analysis of a UK case study catchment to compare potential water resource availability under a modelled dynamic abstraction scenario informed by a flow forecasting model against observed abstraction under a conventional abstraction regime. The work also demonstrates the impacts of modelling uncertainties on the accuracy of predicted water availability over a range of forecast lead times. The study utilised the conceptual rainfall-runoff model PDM (Probability-Distributed Model, developed by the Centre for Ecology & Hydrology) set up in the Dove River catchment (UK), using 1 km2 resolution radar rainfall as inputs and 15 min resolution gauged flow data for calibration and validation. Data assimilation procedures are implemented to improve flow predictions using observed flow data. Uncertainties in the radar rainfall data used in the model are quantified using an artificial statistical error model described by a Gaussian distribution and

  10. How Pupils Use a Model for Abstract Concepts in Genetics

    ERIC Educational Resources Information Center

    Venville, Grady; Donovan, Jenny

    2008-01-01

    The purpose of this research was to explore the way pupils of different age groups use a model to understand abstract concepts in genetics. Pupils from early childhood to late adolescence were taught about genes and DNA using an analogical model (the wool model) during their regular biology classes. Changing conceptual understandings of the…

  12. Concrete Model Checking with Abstract Matching and Refinement

    NASA Technical Reports Server (NTRS)

    Pasareanu, Corina S.; Pelanek, Radek; Visser, Willem

    2005-01-01

    We propose an abstraction-based model checking method which relies on refinement of an under-approximation of the feasible behaviors of the system under analysis. The method preserves errors to safety properties, since all analyzed behaviors are feasible by definition. The method does not require an abstract transition relation to be generated, but instead executes the concrete transitions while storing abstract versions of the concrete states, as specified by a set of abstraction predicates. For each explored transition, the method checks, with the help of a theorem prover, whether there is any loss of precision introduced by abstraction. The results of these checks are used to decide termination or to refine the abstraction, by generating new abstraction predicates. If the (possibly infinite) concrete system under analysis has a finite bisimulation quotient, then the method is guaranteed to eventually explore an equivalent finite bisimilar structure. We illustrate the application of the approach for checking concurrent programs. We also show how a lightweight variant can be used for efficient software testing.
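
    A minimal sketch of search with abstract matching: concrete transitions are executed, but a state is pruned when its abstraction (a tuple of predicate values) has been seen before. The toy counter system and predicates are invented; the run below deliberately misses the error state, which is exactly the precision loss the paper's refinement step would repair by adding predicates.

```python
# Concrete execution with abstract matching: visited set keyed by predicate
# abstraction, so at most one concrete state per abstract state is explored.
def successors(state):               # concrete transitions: (x, y) counters
    x, y = state
    return [(x + 1, y), (x, y + 1)] if x + y < 20 else []

predicates = [lambda s: s[0] > s[1], lambda s: s[0] + s[1] >= 10]

def alpha(state):                    # abstraction of a concrete state
    return tuple(p(state) for p in predicates)

def search(init, is_error):
    frontier, seen = [init], {alpha(init)}
    while frontier:
        s = frontier.pop()
        if is_error(s):
            return s                 # feasible by construction: we executed it
        for t in successors(s):      # concrete transitions only
            if alpha(t) not in seen:
                seen.add(alpha(t))
                frontier.append(t)
    return None

# None here: two coarse predicates collapse the space and prune the path to
# (3, 7); refinement would add predicates until the error becomes reachable.
print("error state found:", search((0, 0), lambda s: s == (3, 7)))
```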

  14. An Investigation of System Identification Techniques for Simulation Model Abstraction

    DTIC Science & Technology

    2000-02-01

    This report summarizes research into the application of system identification techniques to simulation model abstraction. System identification produces simplified mathematical models that approximate the dynamic behaviors of the underlying stochastic simulations. The system identification techniques were applied to "Mission Simulation," a simulation of a squadron of aircraft performing battlefield air interdiction. Four state-space system

  15. Suggestive modeling for machine vision

    NASA Astrophysics Data System (ADS)

    Fitzgibbon, Andrew W.; Fisher, Robert B.

    1992-11-01

    Traditional modeling techniques, with roots in CAD systems, do not provide a rich enough modeling environment for computer vision. The models themselves describe the structure rather than appearance of objects, and rarely provide facilities for the recording of the additional information required by a vision system. Encoding appearance explicitly ensures quick access and use of the model, and yields model features that correspond to observable data features. We describe the Suggestive Modelling System (SMS) which has been designed specifically for vision applications, combining the geometric object model with vision-specific annotations. Among SMS's features are: (1) A novel separation of surface shape, extent and position; (2) Encoding of underconstrained positions for subcomponents such as spheres and discs; (3) Incorporation of uncertain property values; (4) Cheap encoding of viewpoint- dependent information in addition to the body-centered model; (5) Hierarchical models; (6) Symbolic labels for each primitive; and (7) Parallel curve, surface, and volume-based representations simplify project management. We will describe how this approach reflects more faithfully the capabilities of current scene analysis algorithms than traditional methods. Results from the Imagine 2 vision system demonstrate the applicability of the models to complex real-world industrial inspection and recognition tasks. In addition a number of other vision-related applications in which the SMS paradigm has proved useful will be discussed.

  16. An abstract specification language for Markov reliability models

    NASA Technical Reports Server (NTRS)

    Butler, R. W.

    1985-01-01

    Markov models can be used to compute the reliability of virtually any fault tolerant system. However, the process of delineating all of the states and transitions in a model of a complex system can be devastatingly tedious and error-prone. An approach to this problem is presented utilizing an abstract model definition language. This high level language is described in a nonformal manner and illustrated by example.

  17. An abstract language for specifying Markov reliability models

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.

    1986-01-01

    Markov models can be used to compute the reliability of virtually any fault tolerant system. However, the process of delineating all of the states and transitions in a model of a complex system can be devastatingly tedious and error-prone. An approach to this problem is presented utilizing an abstract model definition language. This high level language is described in a nonformal manner and illustrated by example.
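
    A sketch of what such an abstract specification buys: declare component failure/repair rates once, generate the state space and generator matrix mechanically, and solve for state probabilities. The two-component system and rates are invented, and this is not the syntax of the language the report describes.

```python
# Generate a CTMC generator matrix from a high-level component description,
# then solve p(t) = p0 * expm(Q t) for the system-failure probability.
import numpy as np
from scipy.linalg import expm

lam, mu = 1e-3, 1e-1            # failure and repair rates per hour (made up)
states = [(0, 0), (1, 0), (0, 1), (1, 1)]   # (comp1 failed?, comp2 failed?)
index = {s: i for i, s in enumerate(states)}

Q = np.zeros((4, 4))
for s in states:
    for c in (0, 1):
        t = list(s)
        t[c] = 1 - t[c]                     # flip one component's status
        rate = lam if s[c] == 0 else mu     # fail if up, repair if down
        Q[index[s], index[tuple(t)]] += rate
np.fill_diagonal(Q, -Q.sum(axis=1))

p0 = np.array([1.0, 0.0, 0.0, 0.0])
p = p0 @ expm(Q * 1000.0)                   # distribution after 1000 hours
print("P(system failed, both components down):", p[index[(1, 1)]])
```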

  18. Modeling electronic quantum transport with machine learning

    NASA Astrophysics Data System (ADS)

    Lopez-Bezanilla, Alejandro; von Lilienfeld, O. Anatole

    2014-06-01

    We present a machine learning approach to solve electronic quantum transport equations of one-dimensional nanostructures. The transmission coefficients of disordered systems were computed to provide training and test data sets to the machine. The system's representation encodes energetic as well as geometrical information to characterize similarities between disordered configurations, while the Euclidean norm is used as a measure of similarity. Errors for out-of-sample predictions systematically decrease with training set size, enabling the accurate and fast prediction of new transmission coefficients. The remarkable performance of our model to capture the complexity of interference phenomena lends further support to its viability in dealing with transport problems of undulatory nature.
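
    A minimal sketch of the learning setup described: vector descriptors compared through the Euclidean norm inside a Gaussian kernel, with kernel ridge regression predicting transmission coefficients. The random descriptors and target function are synthetic stand-ins for the paper's tight-binding data sets.

```python
# Kernel ridge regression with a Euclidean-distance (Gaussian) kernel.
import numpy as np

rng = np.random.default_rng(4)
n, sites = 200, 30
X = rng.normal(size=(n, sites))                  # site-disorder "descriptors"
T = np.exp(-0.5 * np.linalg.norm(X, axis=1))     # fake transmission coefficient

def gaussian_kernel(A, B, sigma=5.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)   # Euclidean norm^2
    return np.exp(-d2 / (2 * sigma**2))

train, test = slice(0, 150), slice(150, n)
K = gaussian_kernel(X[train], X[train])
alpha = np.linalg.solve(K + 1e-8 * np.eye(150), T[train])  # ridge fit
pred = gaussian_kernel(X[test], X[train]) @ alpha
print("mean abs error:", float(np.mean(np.abs(pred - T[test]))))
```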

  19. Sensing position and speed by recording magnetization transitions on mechanically functional machine members (abstract)

    NASA Astrophysics Data System (ADS)

    Garshelis, I. J.

    1997-04-01

    Conventional means of sensing position and speed of moving machine members for control purposes typically requires the use of supplementary, ad hoc devices. Many mechanically functional moving machine members are fabricated from ferromagnetic steels and may, thus, provide an opportunity to themselves carry positionally relevant information in the form of local regions of deliberately instilled remanent magnetization, Mr. To avoid ambiguities associated with the imprecise borders of such regions as well as their possibly preexisting presence, information is more reliably carried in the form of local transitions in the polarity of Mr from a quiescent bias. The presence and physical location of such transitions relative to reference features either on the member itself or on other members undergoing correlated motion constitutes stored information. The presence of a transition is signaled by the transitory appearance of the external field associated with ∇·Mr as the transition containing region passes by a magnetic-field detecting device fixed to the machine frame. Implanting and removing transitions from parts while in motion is readily accomplished by pulsed currents and biasing magnets. While the whole process of storing, reading, and erasing bits of information in magnetic form follows the concepts and principles of conventional magnetic recording, profoundly different quantitative factors, conditions, and performance requirements affect the implementation of the described sensing system. In particular, the coercivity, Hc, of commonly used steels is 3-30 Oe versus 300-1200 Oe in recording media and both the thickness of the media and the air gaps separating the media surface from the heads used in conventional systems are each 2-3 orders of magnitude smaller than their counterparts in the described system, where speed may also be variable down to zero. While the combined effect of these factors is to greatly diminish the attainable density of recorded

  20. Particle Tracking Model and Abstraction of Transport Processes

    SciTech Connect

    B. Robinson

    2004-10-21

    The purpose of this report is to document the abstraction model being used in total system performance assessment (TSPA) model calculations for radionuclide transport in the unsaturated zone (UZ). The UZ transport abstraction model uses the particle-tracking method that is incorporated into the finite element heat and mass model (FEHM) computer code (Zyvoloski et al. 1997 [DIRS 100615]) to simulate radionuclide transport in the UZ. This report outlines the assumptions, design, and testing of a model for calculating radionuclide transport in the UZ at Yucca Mountain. In addition, methods for determining and inputting transport parameters are outlined for use in the TSPA for license application (LA) analyses. Process-level transport model calculations are documented in another report for the UZ (BSC 2004 [DIRS 164500]). Three-dimensional, dual-permeability flow fields generated to characterize UZ flow (documented by BSC 2004 [DIRS 169861]; DTN: LB03023DSSCP9I.001 [DIRS 163044]) are converted to make them compatible with the FEHM code for use in this abstraction model. This report establishes the numerical method and demonstrates the use of the model that is intended to represent UZ transport in the TSPA-LA. Capability of the UZ barrier for retarding the transport is demonstrated in this report, and by the underlying process model (BSC 2004 [DIRS 164500]). The technical scope, content, and management of this report are described in the planning document ''Technical Work Plan for: Unsaturated Zone Transport Model Report Integration'' (BSC 2004 [DIRS 171282]). Deviations from the technical work plan (TWP) are noted within the text of this report, as appropriate. The latest version of this document is being prepared principally to correct parameter values found to be in error due to transcription errors, changes in source data that were not captured in the report, calculation errors, and errors in interpretation of source data.

  1. Derivation of Rigid Body Analysis Models from Vehicle Architecture Abstractions

    DTIC Science & Technology

    2011-06-17

    models of every type have their basis in some type of physical representation of the design domain. Rather than describing three-dimensional continua of...arrangement, while capturing just enough physical detail to be used as the basis for a meaningful representation of the design, and eventually, analyses that...permit architecture assessment. The design information captured by the abstractions is available at the very earliest stages of the vehicle

  2. Of Models and Machines: Implementing Bounded Rationality.

    PubMed

    Dick, Stephanie

    2015-09-01

    This essay explores the early history of Herbert Simon's principle of bounded rationality in the context of his Artificial Intelligence research in the mid 1950s. It focuses in particular on how Simon and his colleagues at the RAND Corporation translated a model of human reasoning into a computer program, the Logic Theory Machine. They were motivated by a belief that computers and minds were the same kind of thing--namely, information-processing systems. The Logic Theory Machine program was a model of how people solved problems in elementary mathematical logic. However, in making this model actually run on their 1950s computer, the JOHNNIAC, Simon and his colleagues had to navigate many obstacles and material constraints quite foreign to the human experience of logic. They crafted new tools and engaged in new practices that accommodated the affordances of their machine, rather than reflecting the character of human cognition and its bounds. The essay argues that tracking this implementation effort shows that "internal" cognitive practices and "external" tools and materials are not so easily separated as they are in Simon's principle of bounded rationality--the latter often shaping the dynamics of the former.

  3. Modelling the influence of irrigation abstractions on Scotland's water resources.

    PubMed

    Dunn, S M; Chalmers, N; Stalham, M; Lilly, A; Crabtree, B; Johnston, L

    2003-01-01

    Legislation to control abstraction of water in Scotland is limited and for purposes such as irrigation there are no restrictions in place over most of the country. This situation is set to change with implementation of the European Water Framework Directive. As a first step towards the development of appropriate policy for irrigation control there is a need to assess the current scale of irrigation practices in Scotland. This paper presents a modelling approach that has been used to quantify spatially the volume of water abstractions across the country for irrigation of potato crops under typical climatic conditions. A water balance model was developed to calculate soil moisture deficits and identify the potential need for irrigation. The results were then combined with spatial data on potato cropping and integrated to the sub-catchment scale to identify the river systems most at risk from over-abstraction. The results highlight that the areas that have greatest need for irrigation of potatoes are all concentrated in the central east-coast area of Scotland. The difference between irrigation demand in wet and dry years is very significant, although spatial patterns of the distribution are similar.
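
    A sketch of the water-balance logic described: accumulate a soil moisture deficit from daily evapotranspiration minus rainfall, and trigger irrigation when the deficit passes a crop threshold. Thresholds, doses, and the two synthetic seasons are invented, but they reproduce the wet-year/dry-year contrast the paper highlights.

```python
# Daily soil-moisture-deficit water balance driving an irrigation trigger.
def irrigation_demand(rain, pet, trigger_smd=40.0, dose=25.0, max_smd=120.0):
    """rain, pet: daily series (mm). Returns total irrigation applied (mm)."""
    smd, applied = 0.0, 0.0
    for r, e in zip(rain, pet):
        smd = min(max(smd + e - r, 0.0), max_smd)   # deficit grows with ET
        if smd >= trigger_smd:                      # irrigate the potato crop
            applied += dose
            smd -= dose
    return applied

dry_summer = irrigation_demand(rain=[0.5] * 90, pet=[3.5] * 90)
wet_summer = irrigation_demand(rain=[3.0] * 90, pet=[3.5] * 90)
print(f"dry year: {dry_summer:.0f} mm, wet year: {wet_summer:.0f} mm")
```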

  4. Towards a generalized energy prediction model for machine tools.

    PubMed

    Bhinge, Raunak; Park, Jinkyoo; Law, Kincho H; Dornfeld, David A; Helu, Moneer; Rachuri, Sudarsan

    2017-04-01

    Energy prediction of machine tools can deliver many advantages to a manufacturing enterprise, ranging from energy-efficient process planning to machine tool monitoring. Physics-based, energy prediction models have been proposed in the past to understand the energy usage pattern of a machine tool. However, uncertainties in both the machine and the operating environment make it difficult to predict the energy consumption of the target machine reliably. Taking advantage of the opportunity to collect extensive, contextual, energy-consumption data, we discuss a data-driven approach to develop an energy prediction model of a machine tool in this paper. First, we present a methodology that can efficiently and effectively collect and process data extracted from a machine tool and its sensors. We then present a data-driven model that can be used to predict the energy consumption of the machine tool for machining a generic part. Specifically, we use Gaussian Process (GP) Regression, a non-parametric machine-learning technique, to develop the prediction model. The energy prediction model is then generalized over multiple process parameters and operations. Finally, we apply this generalized model with a method to assess uncertainty intervals to predict the energy consumed to machine any part using a Mori Seiki NVD1500 machine tool. Furthermore, the same model can be used during process planning to optimize the energy-efficiency of a machining process.
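
    A minimal sketch of the described approach: Gaussian Process regression from process parameters to power draw, returning an uncertainty interval with each prediction. The features, data-generating function, and kernel settings are invented; the paper instruments a real Mori Seiki NVD1500 and generalizes across operations, which this omits.

```python
# GP regression from machining parameters to power, with predictive std.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(5)
# features: feed rate (mm/min), spindle speed (rpm), depth of cut (mm)
X = rng.uniform([100, 1000, 0.5], [1000, 5000, 3.0], size=(80, 3))
power = (50 + 0.05 * X[:, 0] + 0.01 * X[:, 1] + 40 * X[:, 2]
         + rng.normal(0, 5, 80))                    # fake watts

gp = GaussianProcessRegressor(kernel=RBF([300, 1500, 1.0]) + WhiteKernel(25.0),
                              normalize_y=True).fit(X, power)
mean, std = gp.predict([[500, 3000, 1.5]], return_std=True)
print(f"predicted power: {mean[0]:.1f} W +/- {2 * std[0]:.1f} W")
```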

  5. Modeling Electronic Quantum Transport with Machine Learning

    DOE PAGES

    Lopez Bezanilla, Alejandro; von Lilienfeld Toal, Otto A.

    2014-06-11

    We present a machine learning approach to solve electronic quantum transport equations of one-dimensional nanostructures. The transmission coefficients of disordered systems were computed to provide training and test data sets to the machine. The system’s representation encodes energetic as well as geometrical information to characterize similarities between disordered configurations, while the Euclidean norm is used as a measure of similarity. Errors for out-of-sample predictions systematically decrease with training set size, enabling the accurate and fast prediction of new transmission coefficients. The remarkable performance of our model to capture the complexity of interference phenomena lends further support to its viability in dealing with transport problems of undulatory nature.

  6. Modeling Electronic Quantum Transport with Machine Learning

    SciTech Connect

    Lopez Bezanilla, Alejandro; von Lilienfeld Toal, Otto A.

    2014-06-11

    We present a machine learning approach to solve electronic quantum transport equations of one-dimensional nanostructures. The transmission coefficients of disordered systems were computed to provide training and test data sets to the machine. The system’s representation encodes energetic as well as geometrical information to characterize similarities between disordered configurations, while the Euclidean norm is used as a measure of similarity. Errors for out-of-sample predictions systematically decrease with training set size, enabling the accurate and fast prediction of new transmission coefficients. The remarkable performance of our model to capture the complexity of interference phenomena lends further support to its viability in dealing with transport problems of undulatory nature.

  7. A Machine Learning Approach to Student Modeling.

    DTIC Science & Technology

    1984-05-01

    machine learning, and describe ACM, a student modeling system that incorporates this approach. This system begins with a set of overly general rules, which it uses to search a problem space until it arrives at the same answer as the student. The ACM computer program then uses the solution path it has discovered to determine positive and negative instances of its initial rules, and employs a discrimination learning mechanism to place additional conditions on these rules. The revised rules will reproduce the solution path without search, and constitute a cognitive model of

  8. Exploiting mid-range DNA patterns for sequence classification: binary abstraction Markov models

    PubMed Central

    Shepard, Samuel S.; McSweeny, Andrew; Serpen, Gursel; Fedorov, Alexei

    2012-01-01

    Messenger RNA sequences possess specific nucleotide patterns distinguishing them from non-coding genomic sequences. In this study, we explore the utilization of modified Markov models to analyze sequences up to 44 bp, far beyond the 8-bp limit of conventional Markov models, for exon/intron discrimination. In order to analyze nucleotide sequences of this length, their information content is first reduced by conversion into shorter binary patterns via the application of numerous abstraction schemes. After the conversion of genomic sequences to binary strings, homogeneous Markov models trained on the binary sequences are used to discriminate between exons and introns. We term this approach the Binary Abstraction Markov Model (BAMM). High-quality abstraction schemes for exon/intron discrimination are selected using optimization algorithms on supercomputers. The best MM classifiers are then combined using support vector machines into a single classifier. With this approach, over 95% classification accuracy is achieved without taking reading frame into account. With further development, the BAMM approach can be applied to sequences lacking the genetic code such as ncRNAs and 5′-untranslated regions. PMID:22344692
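
    A minimal sketch of the BAMM pipeline: abstract nucleotides to a binary alphabet (here purine/pyrimidine, one of many possible schemes), train one Markov model per class on the binary strings, and classify by log-likelihood. The toy sequences and order k are illustrative; the paper optimizes abstraction schemes on supercomputers and combines classifiers with SVMs, which this omits.

```python
# Binary abstraction + per-class Markov chains + log-likelihood classification.
from collections import Counter
from math import log

ABSTRACT = str.maketrans("AGCT", "RRYY")     # purine = R, pyrimidine = Y

def train(seqs, k=3):
    counts, totals = Counter(), Counter()
    for s in seqs:
        b = s.translate(ABSTRACT)
        for i in range(len(b) - k):
            counts[b[i:i+k] + b[i+k]] += 1   # context + next symbol
            totals[b[i:i+k]] += 1
    return counts, totals

def loglik(seq, model, k=3, alpha=1.0):
    counts, totals = model                   # Laplace-smoothed, binary alphabet
    b = seq.translate(ABSTRACT)
    return sum(log((counts[b[i:i+k] + b[i+k]] + alpha) /
                   (totals[b[i:i+k]] + 2 * alpha))
               for i in range(len(b) - k))

exons = ["ATGGCGGCTAGCGGA", "ATGGCCGCGGCGAGT"]      # toy training data
introns = ["GTAAGTTTTTATTTT", "GTATGTAAATATATA"]
m_ex, m_in = train(exons), train(introns)
query = "ATGGCGGCGGCTAGC"
print("exon" if loglik(query, m_ex) > loglik(query, m_in) else "intron")
```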

  9. Engagement Angle Modeling for Multiple-circle Continuous Machining and Its Application in the Pocket Machining

    NASA Astrophysics Data System (ADS)

    WU, Shixiong; MA, Wei; BAI, Haiping; WANG, Chengyong; SONG, Yuexian

    2017-03-01

    The progressive cutting based on auxiliary paths is an effective machining method for the material accumulating region inside the mould pocket. But the method is commonly based on the radial depth of cut as the control parameter; furthermore, there is no more appropriate adjustment and control approach. End-users often fail to set the parameter correctly, which leads to excessive tool load in the process of actual machining. In order to control the machining load and tool-path more reasonably, an engagement angle modeling method for multiple-circle continuous machining is presented. The distribution mode of multiple circles, the dynamic changing process of the engagement angle, and the extreme and average values of the engagement angle are carefully considered. Based on the engagement angle model, numerous application techniques for mould pocket machining are presented, involving the calculation of the milling force in multiple-circle continuous machining, rough and finish machining path planning, load control for the material accumulating region inside the pocket, and other aspects. Simulation and actual machining experiments show that the engagement angle modeling method for multiple-circle continuous machining is correct and reliable, and that the related application techniques for pocket machining are feasible and effective. The proposed research contributes to effective analysis and control of the tool load and to reasonable tool-path planning for the material accumulating region inside the mould pocket.
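
    For orientation, the textbook relation for a straight peripheral-milling pass already links the engagement angle to the radial depth of cut; the sketch below states that baseline only, and the harder multiple-circle continuous case is exactly what the paper's fuller model addresses.

        import math

        def engagement_angle(tool_radius, radial_depth):
            # Straight-path baseline: theta = arccos(1 - ae / r), in radians.
            # The multiple-circle continuous case needs the paper's fuller model,
            # which tracks extreme and average engagement along the path.
            ratio = 1.0 - radial_depth / tool_radius
            return math.acos(max(-1.0, min(1.0, ratio)))

        # Example: 10 mm end mill (r = 5 mm) at 2 mm radial depth of cut.
        print(math.degrees(engagement_angle(5.0, 2.0)))  # ~53.1 degrees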

  11. Modeling quantum physics with machine learning

    NASA Astrophysics Data System (ADS)

    Lopez-Bezanilla, Alejandro; Arsenault, Louis-Francois; Millis, Andrew; Littlewood, Peter; von Lilienfeld, Anatole

    2014-03-01

    Machine Learning (ML) is a systematic way of inferring new results from sparse information. It directly allows for the resolution of computationally expensive sets of equations by making sense of accumulated knowledge and is therefore an attractive method for providing computationally inexpensive 'solvers' for some of the important systems of condensed matter physics. In this talk a non-linear regression statistical model is introduced to demonstrate the utility of ML methods in solving quantum physics related problems, and is applied to the calculation of electronic transport in 1D channels. DOE contract number DE-AC02-06CH11357.

  12. Entity-Centric Abstraction and Modeling Framework for Transportation Architectures

    NASA Technical Reports Server (NTRS)

    Lewe, Jung-Ho; DeLaurentis, Daniel A.; Mavris, Dimitri N.; Schrage, Daniel P.

    2007-01-01

    A comprehensive framework for representing transportation architectures is presented. After discussing a series of preceding perspectives and formulations, the intellectual underpinning of the novel framework using an entity-centric abstraction of transportation is described. The entities include endogenous and exogenous factors, and functional expressions are offered that relate these and their evolution. The end result is a Transportation Architecture Field which permits analysis of future concepts under a holistic perspective. A simulation model which stems from the framework is presented and exercised, producing results which quantify improvements in air transportation due to advanced aircraft technologies. Finally, a modeling hypothesis and its accompanying criteria are proposed to test further use of the framework for evaluating new transportation solutions.

  13. Information Model for Machine-Tool-Performance Tests

    PubMed Central

    Lee, Y. Tina; Soons, Johannes A.; Donmez, M. Alkan

    2001-01-01

    This report specifies an information model of machine-tool-performance tests in the EXPRESS [1] language. The information model provides a mechanism for describing the properties and results of machine-tool-performance tests. The objective of the information model is a standardized, computer-interpretable representation that allows for efficient archiving and exchange of performance test data throughout the life cycle of the machine. The report also demonstrates the implementation of the information model using three different implementation methods. PMID:27500031

  14. Finite State Machines and Modal Models in Ptolemy II

    DTIC Science & Technology

    2009-11-01

    Finite State Machines and Modal Models in Ptolemy II. Edward A. Lee, Electrical Engineering and Computer Sciences, University of California at Berkeley. This report describes the usage and semantics of finite-state machines (FSMs) and modal models in Ptolemy II. FSMs are actors whose behavior is described using a

  15. Workshop on Fielded Applications of Machine Learning Held in Amherst, Massachusetts on 30 June-1 July 1993. Abstracts.

    DTIC Science & Technology

    1993-01-01

    engineering has led to many AI systems that are now regularly used in industry and elsewhere. The ultimate test of machine learning, the subfield of AI that...applications of machine learning suggest the time was ripe for a meeting on this topic. For this reason, Pat Langley (Siemens Corporate Research) and Yves Kodratoff (Université de Paris-Sud) organized an invited workshop on applications of machine learning. The goal of the gathering was to familiarize

  16. A rule-based approach to model checking of UML state machines

    NASA Astrophysics Data System (ADS)

    Grobelna, Iwona; Grobelny, Michał; Stefanowicz, Łukasz

    2016-12-01

    In the paper a new approach to formal verification of control process specifications expressed by means of UML state machines in version 2.x is proposed. In contrast to other approaches from the literature, we use an abstract and universal rule-based logical model suitable both for model checking (using the nuXmv model checker) and for logical synthesis in the form of rapid prototyping. Hence, a prototype implementation in the hardware description language VHDL can be obtained that fully reflects the primary, already formally verified specification in the form of UML state machines. The presented approach increases assurance that the implemented system meets the user-defined requirements.
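
    A rule-based logical model lends itself to exhaustive state exploration. The sketch below is an illustrative explicit-state reachability check over a toy transition system, not the symbolic algorithm that nuXmv applies; all names are invented for the example.

        from collections import deque

        def successors(state):
            # Toy rule-based transitions over (mode, counter) states.
            mode, counter = state
            if mode == "idle":
                return [("run", counter)]
            if mode == "run":
                return [("idle", counter + 1)] if counter < 3 else [("halt", counter)]
            return []

        def violates(state):
            # Toy safety requirement: the machine may only halt with counter == 3.
            return state[0] == "halt" and state[1] != 3

        frontier, seen = deque([("idle", 0)]), {("idle", 0)}
        while frontier:
            s = frontier.popleft()
            assert not violates(s), f"counterexample: {s}"
            for t in successors(s):
                if t not in seen:
                    seen.add(t)
                    frontier.append(t)
        print(f"verified {len(seen)} reachable states")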

  17. A Machine-Learning-Driven Sky Model.

    PubMed

    Satylmys, Pynar; Bashford-Rogers, Thomas; Chalmers, Alan; Debattista, Kurt

    2017-01-01

    Sky illumination is responsible for much of the lighting in a virtual environment. A machine-learning-based approach can compactly represent sky illumination from both existing analytic sky models and from captured environment maps. The proposed approach can approximate the captured lighting at a significantly reduced memory cost and enable smooth transitions of sky lighting to be created from a small set of environment maps captured at discrete times of day. The authors' results demonstrate accuracy close to the ground truth for both analytical and capture-based methods. The approach has a low runtime overhead, so it can be used as a generic approach for both offline and real-time applications.

  18. Modeling Patient Treatment With Medical Records: An Abstraction Hierarchy to Understand User Competencies and Needs.

    PubMed

    St-Maurice, Justin D; Burns, Catherine M

    2017-07-28

    Health care is a complex sociotechnical system. Patient treatment is evolving and needs to incorporate the use of technology and new patient-centered treatment paradigms. Cognitive work analysis (CWA) is an effective framework for understanding complex systems, and work domain analysis (WDA) is useful for understanding complex ecologies. Although previous applications of CWA have described patient treatment, due to their scope of work patients were previously characterized as biomedical machines, rather than patient actors involved in their own care. An abstraction hierarchy that characterizes patients as beings with complex social values and priorities is needed. This can help better understand treatment in a modern approach to care. The purpose of this study was to perform a WDA to represent the treatment of patients with medical records. The methods to develop this model included the analysis of written texts and collaboration with subject matter experts. Our WDA represents the ecology through its functional purposes, abstract functions, generalized functions, physical functions, and physical forms. Compared with other work domain models, this model is able to articulate the nuanced balance between medical treatment, patient education, and limited health care resources. Concepts in the analysis were similar to the modeling choices of other WDAs but combined them into a comprehensive, systematic, and contextual overview. The model is helpful to understand user competencies and needs. Future models could be developed to model the patient's domain and enable the exploration of the shared decision-making (SDM) paradigm. Our work domain model links treatment goals, decision-making constraints, and task workflows. This model can be used by system developers who would like to use ecological interface design (EID) to improve systems. Our hierarchy is the first in a future set that could explore new treatment paradigms. Future hierarchies could model the patient as a

  19. Robustness of thermal error compensation model of CNC machine tool

    NASA Astrophysics Data System (ADS)

    Lang, Xianli; Miao, Enming; Gong, Yayun; Niu, Pengcheng; Xu, Zhishang

    2013-01-01

    Thermal error is the major factor restricting the accuracy of CNC machining. Modeling accuracy is the key to thermal error compensation, which can achieve precision machining on a CNC machine tool. Traditional thermal error compensation models mostly focus on fitting accuracy without considering the robustness of the models, which makes it difficult to put the research results into practice. In this paper, a model robustness experiment is carried out at different spindle speeds on a Leaderway V-450 machine tool. Fuzzy clustering combined with grey relevance analysis is used to select temperature-sensitive points of thermal error. A multiple linear regression model (MLR) and a distributed lag model (DL) are established from the multi-batch experimental data, and a robustness analysis is given that demonstrates the difference between fitting precision and prediction precision in engineering applications and provides a reference method for choosing a thermal error compensation model of a CNC machine tool in practical engineering applications.

  20. Selected translated abstracts of Russian-language climate-change publications. 4: General circulation models

    SciTech Connect

    Burtis, M.D.; Razuvaev, V.N.; Sivachok, S.G.

    1996-10-01

    This report presents English-translated abstracts of important Russian-language literature concerning general circulation models as they relate to climate change. In addition to the bibliographic citations and abstracts translated into English, this report presents the original citations and abstracts in Russian. Author and title indexes are included to assist the reader in locating abstracts of particular interest.

  1. Modeling of cumulative tool wear in machining metal matrix composites

    SciTech Connect

    Hung, N.P.; Tan, V.K.; Oon, B.E.

    1995-12-31

    Metal matrix composites (MMCs) are notoriously known for their low machinability because of the abrasive and brittle reinforcement. Although a near-net-shape product could be produced, finish machining is still required for the final shape and dimension. The classical Taylor's tool life equation that relates tool life and cutting conditions has traditionally been used to study machinability. The turning operation is commonly used to investigate the machinability of a material; tedious and costly milling experiments have to be performed separately, while a facing test is not applicable for Taylor's model since the facing speed varies as the tool moves radially. Collecting intensive machining data for MMCs is often difficult because of the constraints on size, cost of the material, and the availability of sophisticated machine tools. A more flexible model and machinability testing technique are, therefore, sought. This study presents and verifies new models for turning, facing, and milling operations. Different cutting conditions were utilized to assess the machinability of MMCs reinforced with silicon carbide or alumina particles. Experimental data show that tool wear does not depend on the order of different cutting speeds since abrasion is the main wear mechanism. Correlation between data for turning, milling, and facing is presented. It is more economical to rank machinability using data for facing and then to convert the data for turning and milling, if required. Subsurface damages such as work-hardened and cracked matrix alloy, and fractured and delaminated particles are discussed.
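
    For reference, the classical Taylor tool life equation takes the following form (a standard relation, stated here in LaTeX notation; n and C are empirical constants for a given tool-workpiece pair):

        % Taylor's tool life equation: V is cutting speed, T is tool life.
        V \, T^{\,n} = C

    The abstract's objection to facing tests follows directly: the relation presumes a single constant cutting speed V, which holds in turning but not in facing, where the speed varies as the tool moves radially.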

  2. Generative Modeling for Machine Learning on the D-Wave

    SciTech Connect

    Thulasidasan, Sunil

    2016-11-15

    These are slides on Generative Modeling for Machine Learning on the D-Wave. The following topics are detailed: generative models; Boltzmann machines: a generative model; restricted Boltzmann machines; learning parameters: RBM training; practical ways to train RBM; D-Wave as a Boltzmann sampler; mapping RBM onto the D-Wave; Chimera restricted RBM; mapping binary RBM to Ising model; experiments; data; D-Wave effective temperature, parameters noise, etc.; experiments: contrastive divergence (CD) 1 step; after 50 steps of CD; after 100 steps of CD; D-Wave (experiments 1, 2, 3); D-Wave observations.
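
    As background for the contrastive-divergence items in this outline, here is a minimal numpy sketch of CD-1 training for a small binary RBM on toy data; in the setting the slides describe, the D-Wave sampler would stand in for the Gibbs negative phase.

        import numpy as np

        rng = np.random.default_rng(0)

        def sigmoid(x):
            return 1.0 / (1.0 + np.exp(-x))

        # Tiny binary RBM trained with one step of contrastive divergence (CD-1).
        n_vis, n_hid, lr = 6, 4, 0.1
        W = 0.01 * rng.normal(size=(n_vis, n_hid))
        b_v, b_h = np.zeros(n_vis), np.zeros(n_hid)
        data = rng.integers(0, 2, size=(100, n_vis)).astype(float)  # toy data

        for epoch in range(50):
            for v0 in data:
                ph0 = sigmoid(v0 @ W + b_h)                    # positive phase
                h0 = (rng.random(n_hid) < ph0).astype(float)
                pv1 = sigmoid(h0 @ W.T + b_v)                  # one Gibbs step back
                v1 = (rng.random(n_vis) < pv1).astype(float)
                ph1 = sigmoid(v1 @ W + b_h)
                W += lr * (np.outer(v0, ph0) - np.outer(v1, ph1))  # CD-1 update
                b_v += lr * (v0 - v1)
                b_h += lr * (ph0 - ph1)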

  3. Two-Stage Machine Learning model for guideline development.

    PubMed

    Mani, S; Shankle, W R; Dick, M B; Pazzani, M J

    1999-05-01

    We present a Two-Stage Machine Learning (ML) model as a data mining method to develop practice guidelines and apply it to the problem of dementia staging. Dementia staging in clinical settings is at present complex and highly subjective because of the ambiguities and the complicated nature of existing guidelines. Our model abstracts the two-stage process used by physicians to arrive at the global Clinical Dementia Rating Scale (CDRS) score. The model incorporates learning intermediate concepts (CDRS category scores) in the first stage that then become the feature space for the second stage (global CDRS score). The sample consisted of 678 patients evaluated in the Alzheimer's Disease Research Center at the University of California, Irvine. The demographic variables, functional and cognitive test results used by physicians for the task of dementia severity staging were used as input to the machine learning algorithms. Decision tree learners and rule inducers (C4.5, Cart, C4.5 rules) were selected for our study as they give expressive models, and Naive Bayes was used as a baseline algorithm for comparison purposes. We first learned the six CDRS category scores (memory, orientation, judgement and problem solving, personal care, home and hobbies, and community affairs). These learned CDRS category scores were then used to learn the global CDRS scores. The Two-Stage ML model classified as well as or better than the published inter-rater agreements for both the category and global CDRS scoring by dementia experts. Furthermore, for the most critical distinction, normal versus very mildly impaired, the Two-Stage ML model was 28.1 and 6.6% more accurate than published performances by domain experts. Our study of the CDRS examined one of the largest, most diverse samples in the literature, suggesting that our findings are robust. The Two-Stage ML model also identified a CDRS category, Judgment and Problem Solving, which has low classification accuracy similar to published
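
    A minimal sketch of the two-stage idea follows, with scikit-learn decision trees standing in for C4.5/CART and synthetic placeholders for the clinical features and CDRS scores; none of the study's data or exact learners are reproduced.

        import numpy as np
        from sklearn.tree import DecisionTreeClassifier

        rng = np.random.default_rng(0)

        # Synthetic placeholders: X stands for the demographic, functional and
        # cognitive features; Y_cat for six category scores; y_global for the
        # global score. Real CDRS data would replace all three.
        n, d, n_cat = 300, 12, 6
        X = rng.normal(size=(n, d))
        Y_cat = (X[:, :n_cat] > 0).astype(int)
        y_global = (Y_cat.sum(axis=1) > 3).astype(int)

        # Stage 1: one classifier per intermediate concept (category score).
        stage1 = [DecisionTreeClassifier(max_depth=3).fit(X, Y_cat[:, j])
                  for j in range(n_cat)]

        # Stage 2: predicted category scores become the new feature space.
        Z = np.column_stack([clf.predict(X) for clf in stage1])
        stage2 = DecisionTreeClassifier(max_depth=3).fit(Z, y_global)

        x_new = rng.normal(size=d)
        z_new = np.array([[clf.predict(x_new.reshape(1, -1))[0] for clf in stage1]])
        print(stage2.predict(z_new)[0])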

  4. Semi-supervised prediction of protein subcellular localization using abstraction augmented Markov models

    PubMed Central

    2010-01-01

    Background Determination of protein subcellular localization plays an important role in understanding protein function. Knowledge of the subcellular localization is also essential for genome annotation and drug discovery. Supervised machine learning methods for predicting the localization of a protein in a cell rely on the availability of large amounts of labeled data. However, because of the high cost and effort involved in labeling the data, the amount of labeled data is quite small compared to the amount of unlabeled data. Hence, there is a growing interest in developing semi-supervised methods for predicting protein subcellular localization from large amounts of unlabeled data together with small amounts of labeled data. Results In this paper, we present an Abstraction Augmented Markov Model (AAMM) based approach to the semi-supervised protein subcellular localization prediction problem. We investigate the effectiveness of AAMMs in exploiting unlabeled data. We compare semi-supervised AAMMs with: (i) Markov models (MMs), which do not take advantage of unlabeled data; (ii) an expectation maximization (EM) based approach; and (iii) a co-training based approach to semi-supervised training of MMs (both of which make use of unlabeled data). Conclusions The results of our experiments on three protein subcellular localization data sets show that semi-supervised AAMMs: (i) can effectively exploit unlabeled data; (ii) are more accurate than both the MMs and the EM based semi-supervised MMs; and (iii) are comparable in performance to, and in some cases outperform, the co-training based semi-supervised MMs. PMID:21034431

  5. Developing a PLC-friendly state machine model: lessons learned

    NASA Astrophysics Data System (ADS)

    Pessemier, Wim; Deconinck, Geert; Raskin, Gert; Saey, Philippe; Van Winckel, Hans

    2014-07-01

    Modern Programmable Logic Controllers (PLCs) have become an attractive platform for controlling real-time aspects of astronomical telescopes and instruments due to their increased versatility, performance and standardization. Likewise, vendor-neutral middleware technologies such as OPC Unified Architecture (OPC UA) have recently demonstrated that they can greatly facilitate the integration of these industrial platforms into the overall control system. Many practical questions arise, however, when building multi-tiered control systems that consist of PLCs for low level control, and conventional software and platforms for higher level control. How should the PLC software be structured, so that it can rely on well-known programming paradigms on the one hand, and be mapped to a well-organized OPC UA interface on the other hand? Which programming languages of the IEC 61131-3 standard closely match the problem domains of the abstraction levels within this structure? How can the recent additions to the standard (such as the support for namespaces and object-oriented extensions) facilitate a model based development approach? To what degree can our applications already take advantage of the more advanced parts of the OPC UA standard, such as the high expressiveness of the semantic modeling language that it defines, or the support for events, aggregation of data, automatic discovery, ... ? What are the timing and concurrency problems to be expected for the higher level tiers of the control system due to the cyclic execution of control and communication tasks by the PLCs? We try to answer these questions by demonstrating a semantic state machine model that can readily be implemented using IEC 61131 and OPC UA: one that does not aim to capture all possible states of a system, but rather attempts to organize the coarse-grained structure and behaviour of a system. In this paper we focus on the intricacies of this seemingly simple task, and on the lessons that we

  6. Context in Models of Human-Machine Systems

    NASA Technical Reports Server (NTRS)

    Callantine, Todd J.; Null, Cynthia H. (Technical Monitor)

    1998-01-01

    All human-machine systems models represent context. This paper proposes a theory of context through which models may be usefully related and integrated for design. The paper presents examples of context representation in various models, describes an application to developing models for the Crew Activity Tracking System (CATS), and advances context as a foundation for integrated design of complex dynamic systems.

  7. Predicting Market Impact Costs Using Nonparametric Machine Learning Models

    PubMed Central

    Park, Saerom; Lee, Jaewook; Son, Youngdoo

    2016-01-01

    Market impact cost is the most significant portion of implicit transaction costs that can reduce the overall transaction cost, although it cannot be measured directly. In this paper, we employed state-of-the-art nonparametric machine learning models: neural networks, Bayesian neural network, Gaussian process, and support vector regression, to predict market impact cost accurately and to provide the predictive model that is versatile in the number of variables. We collected a large amount of real single transaction data of the US stock market from the Bloomberg Terminal and generated three independent input variables. As a result, most nonparametric machine learning models outperformed a state-of-the-art benchmark parametric model, the I-star model, in four error measures. Although these models encounter certain difficulties in separating the permanent and temporary cost directly, nonparametric machine learning models can be good alternatives in reducing transaction costs by considerably improving prediction performance. PMID:26926235
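
    A hedged sketch of such a model comparison on synthetic data follows (three of the four model families, using scikit-learn; the paper's Bloomberg inputs and exact variable definitions are not reproduced):

        import numpy as np
        from sklearn.svm import SVR
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.neural_network import MLPRegressor
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)

        # Toy stand-ins for three input variables (e.g., normalized order size,
        # volatility, turnover are plausible choices; the paper's differ).
        X = rng.random((400, 3))
        y = 0.5 * np.sqrt(X[:, 0]) * X[:, 1] + 0.05 * rng.normal(size=400)

        models = {
            "SVR": SVR(C=1.0, epsilon=0.01),
            "GP": GaussianProcessRegressor(),
            "MLP": MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000,
                                random_state=0),
        }
        for name, m in models.items():
            mae = -cross_val_score(m, X, y, cv=5,
                                   scoring="neg_mean_absolute_error").mean()
            print(name, round(mae, 4))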

  9. Towards an Abstraction-Friendly Programming Model for High Productivity and High Performance Computing

    SciTech Connect

    Liao, C; Quinlan, D; Panas, T

    2009-10-06

    General purpose languages, such as C++, permit the construction of various high level abstractions to hide redundant, low level details and accelerate programming productivity. Example abstractions include functions, data structures, classes, templates and so on. However, the use of abstractions significantly impedes static code analyses and optimizations, including parallelization, applied to the abstractions' complex implementations. As a result, there is a common perception that performance is inversely proportional to the level of abstraction. On the other hand, programming large scale, possibly heterogeneous high-performance computing systems is notoriously difficult and programmers are less likely to abandon the help from high level abstractions when solving real-world, complex problems. Therefore, the need for programming models balancing both programming productivity and execution performance has reached a new level of criticality. We are exploring a novel abstraction-friendly programming model in order to support high productivity and high performance computing. We believe that standard or domain-specific semantics associated with high level abstractions can be exploited to aid compiler analyses and optimizations, thus helping achieve high performance without losing high productivity. We encode representative abstractions and their useful semantics into an abstraction specification file. In the meantime, an accessible, source-to-source compiler infrastructure (the ROSE compiler) is used to facilitate recognizing high level abstractions and utilizing their semantics for more optimization opportunities. Our initial work has shown that recognizing abstractions and knowing their semantics within a compiler can dramatically extend the applicability of existing optimizations, including automatic parallelization. Moreover, a new set of optimizations have become possible within an abstraction-friendly and semantics-aware programming model. In the future, we will

  10. Mesoscale modeling of molecular machines: cyclic dynamics and hydrodynamical fluctuations.

    PubMed

    Cressman, Andrew; Togashi, Yuichi; Mikhailov, Alexander S; Kapral, Raymond

    2008-05-01

    Proteins acting as molecular machines can undergo cyclic internal conformational motions that are coupled to ligand binding and dissociation events. In contrast to their macroscopic counterparts, nanomachines operate in a highly fluctuating environment, which influences their operation. To bridge the gap between detailed microscopic and simple phenomenological descriptions, a mesoscale approach, which combines an elastic network model of a machine with a particle-based mesoscale description of the solvent, is employed. The time scale of the cyclic hinge motions of the machine prototype is strongly affected by hydrodynamical coupling to the solvent.

  11. X: A Comprehensive Analytic Model for Parallel Machines

    SciTech Connect

    Li, Ang; Song, Shuaiwen; Brugel, Eric; Kumar, Akash; Chavarría-Miranda, Daniel; Corporaal, Henk

    2016-05-23

    To continuously comply with Moore’s Law, modern parallel machines become increasingly complex. Effectively tuning application performance for these machines therefore becomes a daunting task. Moreover, identifying performance bottlenecks at the application and architecture level, as well as evaluating various optimization strategies, becomes extremely difficult when numerous correlated factors are entangled. To tackle these challenges, we present a visual analytical model named “X”. It is intuitive and sufficiently flexible to track all the typical features of a parallel machine.

  12. Modeling situated abstraction: action coalescence via multidimensional coherence.

    SciTech Connect

    Sallach, D. L.; Decision and Information Sciences; Univ. of Chicago

    2007-01-01

    Situated social agents weigh dozens of priorities, each with its own complexities. Domains of interest are intertwined, and progress in one area either complements or conflicts with other priorities. Interpretive agents address these complexities through: (1) integrating cognitive complexities through the use of radial concepts, (2) recognizing the role of emotion in prioritizing alternatives and urgencies, (3) using Miller-range constraints to avoid oversimplified notions of omniscience, and (4) constraining actions to 'moves' in multiple prototype games. Situated agent orientations are dynamically grounded in pragmatic considerations as well as intertwined with internal and external priorities. HokiPoki is a situated abstraction designed to shape and focus strategic agent orientations. The design integrates four pragmatic pairs: (1) problem and solution, (2) dependence and power, (3) constraint and affordance, and (4) (agent) intent and effect. In this way, agents are empowered to address multiple facets of a situation in an exploratory, or even arbitrary, order. HokiPoki is open to the internal orientation of the agent as it evolves, but also to the communications and actions of other agents.

  13. Symbolic LTL Compilation for Model Checking: Extended Abstract

    NASA Technical Reports Server (NTRS)

    Rozier, Kristin Y.; Vardi, Moshe Y.

    2007-01-01

    In Linear Temporal Logic (LTL) model checking, we check LTL formulas representing desired behaviors against a formal model of the system designed to exhibit these behaviors. To accomplish this task, the LTL formulas must be translated into automata [21]. We focus on LTL compilation by investigating LTL satisfiability checking via a reduction to model checking. Having shown that symbolic LTL compilation algorithms are superior to explicit automata construction algorithms for this task [16], we concentrate here on seeking a better symbolic algorithm. We present experimental data comparing algorithmic variations such as normal forms, encoding methods, and variable ordering and examine their effects on performance metrics including processing time and scalability. Safety critical systems, such as air traffic control, life support systems, hazardous environment controls, and automotive control systems, pervade our daily lives, yet testing and simulation alone cannot adequately verify their reliability [3]. Model checking is a promising approach to formal verification for safety critical systems which involves creating a formal mathematical model of the system and translating desired safety properties into a formal specification for this model. The complement of the specification is then checked against the system model. When the model does not satisfy the specification, model-checking tools accompany this negative answer with a counterexample, which points to an inconsistency between the system and the desired behaviors and aids debugging efforts.

  14. Applying model abstraction techniques to optimize monitoring networks for detecting subsurface contaminant transport

    USDA-ARS?s Scientific Manuscript database

    Improving strategies for monitoring subsurface contaminant transport includes performance comparison of competing models, developed independently or obtained via model abstraction. Model comparison and parameter discrimination involve specific performance indicators selected to better understand s...

  15. Particle Tracking Model and Abstraction of Transport Processes

    SciTech Connect

    B. Robinson

    2000-04-07

    The purpose of the transport methodology and component analysis is to provide the numerical methods for simulating radionuclide transport and model setup for transport in the unsaturated zone (UZ) site-scale model. The particle-tracking method of simulating radionuclide transport is incorporated into the FEHM computer code and the resulting changes in the FEHM code are to be submitted to the software configuration management system. This Analysis and Model Report (AMR) outlines the assumptions, design, and testing of a model for calculating radionuclide transport in the unsaturated zone at Yucca Mountain. In addition, methods for determining colloid-facilitated transport parameters are outlined for use in the Total System Performance Assessment (TSPA) analyses. Concurrently, process-level flow model calculations are being carried out in a PMR for the unsaturated zone. The computer code TOUGH2 is being used to generate three-dimensional, dual-permeability flow fields that are supplied to the Performance Assessment group for subsequent transport simulations. These flow fields are converted to input files compatible with the FEHM code, which for this application simulates radionuclide transport using the particle-tracking algorithm outlined in this AMR. Therefore, this AMR establishes the numerical method and demonstrates the use of the model, but the specific breakthrough curves presented do not necessarily represent the behavior of the Yucca Mountain unsaturated zone.

  16. Modelling machine ensembles with discrete event dynamical system theory

    NASA Technical Reports Server (NTRS)

    Hunter, Dan

    1990-01-01

    Discrete Event Dynamical System (DEDS) theory can be utilized as a control strategy for future complex machine ensembles that will be required for in-space construction. The control strategy involves orchestrating a set of interactive submachines to perform a set of tasks for a given set of constraints such as minimum time, minimum energy, or maximum machine utilization. Machine ensembles can be hierarchically modeled as a global model that combines the operations of the individual submachines. These submachines are represented in the global model as local models. Local models, from the perspective of DEDS theory, are described by the following: a set of system and transition states, an event alphabet that portrays actions that take a submachine from one state to another, an initial system state, a partial function that maps the current state and event alphabet to the next state, and the time required for the event to occur. Each submachine in the machine ensemble is represented by a unique local model. The global model combines the local models such that the local models can operate in parallel under the additional logistic and physical constraints due to submachine interactions. The global model is constructed from the states, events, event functions, and timing requirements of the local models. Supervisory control can be implemented in the global model by various methods such as task scheduling (open-loop control) or implementing a feedback DEDS controller (closed-loop control).
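
    The local/global structure described here can be made concrete with a small synchronous-composition sketch (illustrative only; timing, constraints and supervisory control are omitted, and all names are invented for the example):

        from itertools import product

        class LocalModel:
            # A local model: states, event alphabet, transition map
            # (state, event) -> next state, and an initial state.
            def __init__(self, states, alphabet, delta, init):
                self.states, self.alphabet = states, alphabet
                self.delta, self.init = delta, init

        def compose(m1, m2):
            # Global model: shared events occur jointly, private events interleave.
            delta = {}
            for (s1, s2), e in product(product(m1.states, m2.states),
                                       m1.alphabet | m2.alphabet):
                in1, in2 = e in m1.alphabet, e in m2.alphabet
                n1 = m1.delta.get((s1, e)) if in1 else s1
                n2 = m2.delta.get((s2, e)) if in2 else s2
                if (not in1 or n1 is not None) and (not in2 or n2 is not None):
                    delta[((s1, s2), e)] = (n1, n2)
            return LocalModel(set(product(m1.states, m2.states)),
                              m1.alphabet | m2.alphabet, delta,
                              (m1.init, m2.init))

        # Two toy submachines synchronizing on the shared event "handoff".
        robot = LocalModel({"idle", "move"}, {"start", "handoff"},
                           {("idle", "start"): "move", ("move", "handoff"): "idle"},
                           "idle")
        crane = LocalModel({"wait", "lift"}, {"handoff", "lower"},
                           {("wait", "handoff"): "lift", ("lift", "lower"): "wait"},
                           "wait")
        plant = compose(robot, crane)  # the global model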

  18. Modeling powder encapsulation in dosator-based machines: I. Theory.

    PubMed

    Khawam, Ammar

    2011-12-15

    Automatic encapsulation machines have two dosing principles: dosing disc and dosator. Dosator-based machines compress the powder to plugs that are transferred into capsules. The encapsulation process in dosator-based capsule machines was modeled in this work. A model was proposed to predict the weight and length of produced plugs. According to the model, the plug weight is a function of piston dimensions, powder-bed height, bulk powder density and precompression densification inside dosator while plug length is a function of piston height, set piston displacement, spring stiffness and powder compressibility. Powder densification within the dosator can be achieved by precompression, compression or both. Precompression densification depends on the powder to piston height ratio while compression densification depends on piston displacement against powder. This article provides the theoretical basis of the encapsulation model, including applications and limitations. The model will be applied to experimental data separately.
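
    To illustrate the flavor of such a model (a guessed simplification for illustration, not the paper's equations): if the dosator chamber is assumed to capture the full powder column beneath it, the plug weight follows from piston area, powder-bed height and bulk density, with precompression densification expressed as the bed-to-piston height ratio.

        import math

        def plug_weight(piston_diameter, piston_height, bed_height, bulk_density):
            # Illustrative only -- a guessed simplification, not Khawam's equations.
            # Assumes the chamber captures the full powder column beneath it, so
            # the precompression densification factor is bed_height / piston_height
            # (meaningful when bed_height >= piston_height).
            area = math.pi * (piston_diameter / 2.0) ** 2
            densification = bed_height / piston_height
            return bulk_density * area * piston_height * densification

        # Example: 6 mm dosator, 12 mm piston height, 30 mm bed, 0.5 g/cm^3.
        print(plug_weight(0.6, 1.2, 3.0, 0.5), "g")  # dimensions in cm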

  19. Component based modelling of piezoelectric ultrasonic actuators for machining applications

    NASA Astrophysics Data System (ADS)

    Saleem, A.; Salah, M.; Ahmed, N.; Silberschmidt, V. V.

    2013-07-01

    Ultrasonically Assisted Machining (UAM) is an emerging technology that has been utilized to improve the surface finish in machining processes such as turning, milling, and drilling. In this context, piezoelectric ultrasonic transducers are used to vibrate the cutting tip while machining at a predetermined amplitude and frequency. However, modelling and simulation of these transducers is a tedious and difficult task. This is due to the inherent nonlinearities associated with smart materials. Therefore, this paper presents a component-based model of ultrasonic transducers that mimics the nonlinear behaviour of such a system. The system is decomposed into components, a mathematical model of each component is created, and the whole system model is assembled by aggregating the components' models. System parameters are identified using a finite element technique, and the model has then been used to simulate the system in Matlab/SIMULINK. Various operating conditions are tested to demonstrate the system performance.

  20. Committee of machine learning predictors of hydrological models uncertainty

    NASA Astrophysics Data System (ADS)

    Kayastha, Nagendra; Solomatine, Dimitri

    2014-05-01

    In prediction of uncertainty based on machine learning methods, the results of various sampling schemes, namely Monte Carlo sampling (MCS), generalized likelihood uncertainty estimation (GLUE), Markov chain Monte Carlo (MCMC), the shuffled complex evolution metropolis algorithm (SCEMUA), differential evolution adaptive metropolis (DREAM), particle swarm optimization (PSO) and adaptive cluster covering (ACCO) [1], are used to build predictive models. These models predict the uncertainty (quantiles of the pdf) of a deterministic output from a hydrological model [2]. Inputs to these models are specially identified representative variables (past precipitation events and flows). The trained machine learning models are then employed to predict the model output uncertainty specific to the new input data. For each sampling scheme, three machine learning methods, namely artificial neural networks, model trees, and locally weighted regression, are applied to predict output uncertainties. The problem here is that different sampling algorithms result in different data sets used to train different machine learning models, which leads to several models (21 predictive uncertainty models). There is no clear evidence which model is the best since there is no basis for comparison. A solution could be to form a committee of all models and to use a dynamic averaging scheme to generate the final output [3]. This approach is applied to estimate the uncertainty of streamflow simulations from a conceptual hydrological model, HBV, in the Nzoia catchment in Kenya. [1] N. Kayastha, D. L. Shrestha and D. P. Solomatine. Experiments with several methods of parameter uncertainty estimation in hydrological modeling. Proc. 9th Intern. Conf. on Hydroinformatics, Tianjin, China, September 2010. [2] D. L. Shrestha, N. Kayastha, D. P. Solomatine, and R. Price. Encapsulation of parametric uncertainty statistics by various predictive machine learning models: MLUE method, Journal of Hydroinformatics, in press
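
    One possible dynamic averaging scheme is inverse-error weighting, sketched below; the committee scheme of reference [3] may differ in its details.

        import numpy as np

        def committee_predict(preds, errors, eps=1e-9):
            # preds:  (n_models, n_samples) uncertainty predictions (e.g., quantiles)
            # errors: (n_models, n_samples) recent errors of each model on
            #         similar inputs; weights are inversely proportional to them.
            w = 1.0 / (np.asarray(errors) + eps)
            w /= w.sum(axis=0, keepdims=True)
            return (w * np.asarray(preds)).sum(axis=0)

        # Three committee members' predictions for four new inputs.
        preds = np.array([[1.0, 1.2, 0.9, 1.1],
                          [1.1, 1.0, 1.0, 1.3],
                          [0.9, 1.1, 0.8, 1.0]])
        errors = np.array([[0.1, 0.3, 0.2, 0.1],
                           [0.2, 0.1, 0.3, 0.2],
                           [0.3, 0.2, 0.1, 0.3]])
        print(committee_predict(preds, errors))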

  1. Parallel phase model: a programming model for high-end parallel machines with manycores.

    SciTech Connect

    Wu, Junfeng; Wen, Zhaofang; Heroux, Michael Allen; Brightwell, Ronald Brian

    2009-04-01

    This paper presents a parallel programming model, Parallel Phase Model (PPM), for next-generation high-end parallel machines based on a distributed memory architecture consisting of a networked cluster of nodes with a large number of cores on each node. PPM has a unified high-level programming abstraction that facilitates the design and implementation of parallel algorithms to exploit both the parallelism of the many cores and the parallelism at the cluster level. The programming abstraction will be suitable for expressing both fine-grained and coarse-grained parallelism. It includes a few high-level parallel programming language constructs that can be added as an extension to an existing (sequential or parallel) programming language such as C; and the implementation of PPM also includes a light-weight runtime library that runs on top of an existing network communication software layer (e.g. MPI). Design philosophy of PPM and details of the programming abstraction are also presented. Several unstructured applications that inherently require high-volume random fine-grained data accesses have been implemented in PPM with very promising results.

  2. Simulation model for Vuilleumier cycle machines and analysis of characteristics

    NASA Astrophysics Data System (ADS)

    Sekiya, Hiroshi; Terada, Fusao

    1992-11-01

    Numerical analysis using the computer is useful in predicting and evaluating the performance of the Vuilleumier (VM) cycle machine in research and development. The 3rd-order method must be employed particularly in the case of detailed analysis of performance and design optimization. This paper describes our simulation model for the VM machine, which is based on that method. The working space is divided into thirty-eight control volumes for the VM heat pump test machine, and the fundamental equations are derived rigorously by applying the conservation equations of mass, momentum, and energy to each control volume, using a staggered mesh. These equations are solved simultaneously by the Adams-Moulton method. Then, the test machine is investigated in terms of the pressure and temperature fluctuations of the working gas, the energy flow, and the performance at each speed of revolution. The calculated results are examined in comparison with the experimental ones.

  3. Efficient Plasma Ion Source Modeling With Adaptive Mesh Refinement (Abstract)

    SciTech Connect

    Kim, J.S.; Vay, J.L.; Friedman, A.; Grote, D.P.

    2005-03-15

    Ion beam drivers for high energy density physics and inertial fusion energy research require high brightness beams, so there is little margin of error allowed for aberration at the emitter. Thus, accurate plasma ion source computer modeling is required to model the plasma sheath region and time-dependent effects correctly. A computer plasma source simulation module that can be used with a powerful heavy ion fusion code, WARP, or as a standalone code, is being developed. In order to treat the plasma sheath region accurately and efficiently, the module will have the capability of handling multiple spatial scale problems by using Adaptive Mesh Refinement (AMR). We will report on our progress on the project.

  4. Abstract: Sample Size Planning for Latent Curve Models.

    PubMed

    Lai, Keke

    2011-11-30

    When designing a study that uses structural equation modeling (SEM), an important task is to decide an appropriate sample size. Historically, this task is approached from the power analytic perspective, where the goal is to obtain sufficient power to reject a false null hypothesis. However, hypothesis testing only tells if a population effect is zero and fails to address the question about the population effect size. Moreover, significance tests in the SEM context often reject the null hypothesis too easily, and therefore the problem in practice is having too much power instead of not enough power. An alternative means to infer the population effect is forming confidence intervals (CIs). A CI is more informative than hypothesis testing because a CI provides a range of plausible values for the population effect size of interest. Given the close relationship between CI and sample size, the sample size for an SEM study can be planned with the goal to obtain sufficiently narrow CIs for the population model parameters of interest. Latent curve modeling (LCM) is an application of SEM with a mean structure to studying change over time. The sample size planning method for LCM from the CI perspective is based on maximum likelihood and the expected information matrix. Given a sample, forming a CI for a model parameter of interest in LCM requires the sample covariance matrix S, the sample mean vector x̄, and the sample size N. Therefore, the width (w) of the resulting CI can be considered a function of S, x̄, and N. Inverting the CI formation process gives the sample size planning process. The inverted process requires a proxy for the population covariance matrix Σ, the population mean vector μ, and the desired width ω as input, and it returns N as output. The specification of the input information for sample size planning needs to be performed based on a systematic literature review. In the context of covariance structure analysis, Lai and Kelley
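
    The inversion logic is easiest to see in a simple one-parameter case (an illustrative textbook example, not the LCM computation itself). In LaTeX notation, for a single mean with known standard deviation sigma:

        % CI width for a single mean, and its inversion for sample size planning:
        w = 2\, z_{1-\alpha/2}\, \frac{\sigma}{\sqrt{N}}
        \quad\Longrightarrow\quad
        N = \left( \frac{2\, z_{1-\alpha/2}\, \sigma}{\omega} \right)^{2}

    For LCM parameters, the width depends on S, x̄, and N through the expected information matrix, so the inversion is carried out numerically rather than in closed form.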

  5. Phase Transitions in a Model of Y-Molecules

    NASA Astrophysics Data System (ADS)

    Holz, Danielle; Ruth, Donovan; Toral, Raul; Gunton, James

    Immunoglobulin is a Y-shaped molecule that functions as an antibody to neutralize pathogens. In special cases where there is a high concentration of immunoglobulin molecules, self-aggregation can occur and the molecules undergo phase transitions. This prevents the molecules from performing their function. We used a simplified two-dimensional model of Y-molecules with three identical arms on a triangular lattice, simulated in the grand canonical ensemble. The molecules were permitted to be placed, removed, rotated, or moved on the lattice. Once phase coexistence was found, we used histogram reweighting and multicanonical sampling to calculate our phase diagram.

  6. Modelling and Control of Mini-Flying Machines

    NASA Astrophysics Data System (ADS)

    Castillo, Pedro; Lozano, Rogelio; Dzul, Alejandro E.

    Problems in the motion control of aircraft are of perennial interest to the control engineer as they tend to be of complex and nonlinear nature. Modelling and Control of Mini-Flying Machines is an exposition of models developed for various types of mini-aircraft: planar Vertical Take-off and Landing aircraft; helicopters; quadrotor mini-rotorcraft; other fixed-wing aircraft; blimps; for each of which it propounds: detailed models derived from Euler-Lagrange methods; appropriate nonlinear control strategies and convergence properties; real-time experimental comparisons of the performance of control algorithms; review of the principal sensors, on-board electronics, real-time architecture and communications systems for mini-flying machine control, including discussion of their performance; detailed explanation of the use of the Kalman filter to flying machine localization. http://www.springeronline.com/alert/article?a=1_1fva7w_172cml_63f_6

  7. Hydro-abrasive jet machining modeling for computer control and optimization

    NASA Astrophysics Data System (ADS)

    Groppetti, R.; Jovane, F.

    1993-06-01

    Use of hydro-abrasive jet machining (HAJM) for machining a wide variety of materials—metals, polymers, ceramics, fiber-reinforced composites, metal-matrix composites, and bonded or hybridized materials—primarily for two- and three-dimensional cutting and also for drilling, turning, milling, and deburring, has been reported. However, the potential of this innovative process has not been explored fully. This article discusses process control, integration, and optimization of HAJM to establish a platform for the implementation of real-time adaptive control constraint (ACC), adaptive control optimization (ACO), and CAD/CAM integration. It presents the approach followed and the main results obtained during the development, implementation, automation, and integration of a HAJM cell and its computerized controller. After a critical analysis of the process variables and models reported in the literature to identify process variables and to define a process model suitable for HAJM real-time control and optimization, to correlate process variables and parameters with machining results, and to avoid expensive and time-consuming experiments for determination of the optimal machining conditions, a process prediction and optimization model was identified and implemented. Then, the configuration of the HAJM cell, architecture, and multiprogramming operation of the controller in terms of monitoring, control, process result prediction, and process condition optimization were analyzed. This prediction and optimization model for selection of optimal machining conditions using multi-objective programming was analyzed. Based on the definition of an economy function and a productivity function, with suitable constraints relevant to required machining quality, required kerfing depth, and available resources, the model was applied to test cases based on experimental results.

  8. An Interactive Simulation System for Modeling Stands, Harvests, and Machines

    Treesearch

    Jingxin Wang; W. Dale Greene

    1999-01-01

    An interactive computer simulation program models stand, harvest, and machine factors and evaluates their interactions while performing felling, skidding, or forwarding activities. A stand generator allows the user to generate either natural or planted stands. Felling with chainsaws, drive-to-tree feller-bunchers, or harvesters and extraction with grapple skidders or...

  9. Restricted Boltzmann machines for the long range Ising models

    NASA Astrophysics Data System (ADS)

    Aoki, Ken-Ichi; Kobayashi, Tamao

    2016-12-01

    We set up restricted Boltzmann machines (RBM) to reproduce the long range Ising (LRI) models of the Ohmic type in one dimension. The RBM parameters are tuned by using the standard machine learning procedure with an additional method of configuration with probability (CwP). The quality of the resultant RBM is evaluated through the susceptibility with respect to the external magnetic field. We compare the results with those obtained by the block decimation renormalization group (BDRG) method, and our RBM clears the test with satisfactory precision.

  10. Control of discrete event systems modeled as hierarchical state machines

    NASA Technical Reports Server (NTRS)

    Brave, Y.; Heymann, M.

    1991-01-01

    The authors examine a class of discrete event systems (DESs) modeled as asynchronous hierarchical state machines (AHSMs). For this class of DESs, they provide an efficient method for testing reachability, which is an essential step in many control synthesis procedures. This method utilizes the asynchronous nature and hierarchical structure of AHSMs, thereby illustrating the advantage of the AHSM representation as compared with its equivalent (flat) state machine representation. An application of the method is presented where an online minimally restrictive solution is proposed for the problem of maintaining a controlled AHSM within prescribed legal bounds.

  11. A nonlinear analytical model of switched reluctance machines

    NASA Astrophysics Data System (ADS)

    Sofiane, Y.; Tounzi, A.; Piriou, F.

    2002-06-01

    Nowadays, switched reluctance machines are widely used. To determine their performances and to elaborate control strategies, the linear analytical model is generally used. Unfortunately, this model is not very accurate. To obtain accurate modelling results, numerical models based on either the 2D or 3D finite element method are then used. However, this approach is very expensive in terms of computation time and is suitable, at most, for studying the behaviour of a whole device; it is not, a priori, adapted to elaborating control strategies for electrical machines. This paper deals with a nonlinear analytical model in terms of variable inductances. The theoretical development of the proposed model is introduced. Then, the model is applied to study the behaviour of a whole controlled switched reluctance machine. The parameters of the structure are identified from a 2D numerical model. They can also be determined from an experimental bench. Finally, the results given by the proposed model are compared to those obtained from the 2D FEM approach and from the classical linear analytical model.
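
    For context, the linear analytical model mentioned above rests on the standard reluctance-torque expression, while the nonlinear model lets the inductance vary with both rotor position and current (standard machine-theory relations, in LaTeX notation):

        % Linear model: phase inductance depends on rotor position only.
        T(\theta, i) = \frac{1}{2}\, i^{2}\, \frac{dL(\theta)}{d\theta}
        % Nonlinear model: with L = L(\theta, i), torque follows from the
        % co-energy W'(\theta, i) = \int_0^{i} \psi(\theta, i')\, di':
        T(\theta, i) = \frac{\partial W'(\theta, i)}{\partial \theta}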

  12. Machine learning models in breast cancer survival prediction.

    PubMed

    Montazeri, Mitra; Montazeri, Mohadeseh; Montazeri, Mahdieh; Beigzadeh, Amin

    2016-01-01

    Breast cancer is one of the most common cancers, with a high mortality rate among women. With early diagnosis of breast cancer, survival will increase from 56% to more than 86%. Therefore, an accurate and reliable system is necessary for the early diagnosis of this cancer. The proposed model is a combination of rules and different machine learning techniques. Machine learning models can help physicians reduce the number of false decisions. They try to exploit patterns and relationships among a large number of cases and predict the outcome of a disease using historical cases stored in datasets. The objective of this study is to propose a rule-based classification method with machine learning techniques for the prediction of different types of breast cancer survival. We use a dataset with eight attributes that includes the records of 900 patients, of whom 876 (97.3%) were female and 24 (2.7%) were male. Naive Bayes (NB), Trees Random Forest (TRF), 1-Nearest Neighbor (1NN), AdaBoost (AD), Support Vector Machine (SVM), RBF Network (RBFN), and Multilayer Perceptron (MLP) machine learning techniques with 10-fold cross-validation were used with the proposed model for the prediction of breast cancer survival. The performance of the machine learning techniques was evaluated with accuracy, precision, sensitivity, specificity, and area under the ROC curve. Out of 900 patients, 803 were alive and 97 were dead. In this study, the Trees Random Forest (TRF) technique showed better results in comparison to the other techniques (NB, 1NN, AD, SVM, RBFN, MLP). The accuracy, sensitivity, and area under the ROC curve of TRF are 96%, 96%, and 93%, respectively. However, the 1NN machine learning technique provided poor performance (accuracy 91%, sensitivity 91% and area under ROC curve 78%). This study demonstrates that the Trees Random Forest model (TRF), which is a rule-based classification model, was the best model with the highest level of

  13. Three dimensional CAD model of the Ignitor machine

    NASA Astrophysics Data System (ADS)

    Orlandi, S.; Zanaboni, P.; Macco, A.; Sioli, V.; Risso, E.

    1998-11-01

    The final, global product of all the structural and thermomechanical design activities is a complete three-dimensional CAD (AutoCAD and Intergraph Design Review) model of the IGNITOR machine. With this powerful tool, any interface, modification, or upgrading of the machine design is managed as an integrated part of the general effort aimed at the construction of the Ignitor facility. The activities that are underway to complete the design of the core of the experiment, and that will be described, concern the following: the cryogenic cooling system; the radial press, the center post, and the mechanical supports (legs) of the entire machine; and the inner mechanical supports of major components such as the plasma chamber and the outer poloidal field coils.

  14. Abstract Model of the SATS Concept of Operations: Initial Results and Recommendations

    NASA Technical Reports Server (NTRS)

    Dowek, Gilles; Munoz, Cesar; Carreno, Victor A.

    2004-01-01

    An abstract mathematical model of the concept of operations for the Small Aircraft Transportation System (SATS) is presented. The concept of operations consists of several procedures that describe nominal operations for SATS. Several safety properties of the system are proven using formal techniques. The final goal of the verification effort is to show that under nominal operations, aircraft are safely separated. The abstract model was written and formally verified in the Prototype Verification System (PVS).

  15. Applying Machine Trust Models to Forensic Investigations

    NASA Astrophysics Data System (ADS)

    Wojcik, Marika; Venter, Hein; Eloff, Jan; Olivier, Martin

    Digital forensics involves the identification, preservation, analysis and presentation of electronic evidence for use in legal proceedings. In the presence of contradictory evidence, forensic investigators need a means to determine which evidence can be trusted. This is particularly true in a trust model environment where computerised agents may make trust-based decisions that influence interactions within the system. This paper focuses on the analysis of evidence in trust-based environments and the determination of the degree to which evidence can be trusted. The trust model proposed in this work may be implemented in a tool for conducting trust-based forensic investigations. The model takes into account the trust environment and parameters that influence interactions in a computer network being investigated. Also, it allows for crimes to be reenacted to create more substantial evidentiary proof.

  16. Global ocean modeling on the Connection Machine

    SciTech Connect

    Smith, R.D.; Dukowicz, J.K.; Malone, R.C.

    1993-10-01

    The authors have developed a version of the Bryan-Cox-Semtner ocean model (Bryan, 1969; Semtner, 1976; Cox, 1984) for massively parallel computers. Such models are three-dimensional, Eulerian models that use latitude and longitude as the horizontal spherical coordinates and fixed depth levels as the vertical coordinate. The incompressible Navier-Stokes equations, with a turbulent eddy viscosity, and mass continuity equation are solved, subject to the hydrostatic and Boussinesq approximations. The traditional model formulation uses a rigid-lid approximation (vertical velocity = 0 at the ocean surface) to eliminate fast surface waves. These waves would otherwise require that a very short time step be used in numerical simulations, which would greatly increase the computational cost. To solve the equations with the rigid-lid assumption, the equations of motion are split into two parts: a set of two-dimensional "barotropic" equations describing the vertically-averaged flow, and a set of three-dimensional "baroclinic" equations describing temperature, salinity and deviations of the horizontal velocities from the vertically-averaged flow.

  17. Problems in modeling man machine control behavior in biodynamic environments

    NASA Technical Reports Server (NTRS)

    Jex, H. R.

    1972-01-01

    Some current problems in modeling man-machine control behavior in a biodynamic environment are reviewed. The review is given in two parts: (1) a review of the models appropriate for manual control behavior and the added elements necessary to deal with biodynamic interfaces; and (2) a review of some biodynamic interface pilot/vehicle problems which have occurred, been solved, or need to be solved.

  18. Bilingual Cluster Based Models for Statistical Machine Translation

    NASA Astrophysics Data System (ADS)

    Yamamoto, Hirofumi; Sumita, Eiichiro

    We propose a domain-specific model for statistical machine translation. It is well known that domain-specific language models perform well in automatic speech recognition. We show that domain-specific language and translation models also benefit statistical machine translation. However, there are two problems with using domain-specific models. The first is the data sparseness problem; we employ an adaptation technique to overcome it. The second issue is domain prediction. In order to perform adaptation, the domain must be provided; however, in many cases the domain is not known or changes dynamically. For these cases, not only the translation target sentence but also the domain must be predicted. This paper focuses on the domain prediction problem for statistical machine translation. In the proposed method, a bilingual training corpus is automatically clustered into sub-corpora, and each sub-corpus is deemed to be a domain. The domain of a source sentence is predicted by using its similarity to the sub-corpora. The predicted domain (sub-corpus) specific language and translation models are then used for the translation decoding. This approach gave an improvement of 2.7 in BLEU score on the IWSLT05 Japanese-to-English evaluation corpus (improving the score from 52.4 to 55.1). This is a substantial gain and indicates the validity of the proposed bilingual cluster-based models.
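
    A toy sketch of the domain-prediction step, assuming TF-IDF features, k-means clustering, and cosine similarity as stand-ins for the paper's clustering and similarity measures; the four-sentence corpus is invented.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import cosine_similarity

# Toy bilingual corpus of (source, target) pairs; a real system would use IWSLT-scale data.
corpus = [
    ("kippu wa ikura desu ka", "how much is the ticket"),
    ("eki wa doko desu ka", "where is the station"),
    ("kono ryouri wa oishii", "this dish is delicious"),
    ("mizu wo kudasai", "water please"),
]

# Cluster sentence pairs into "domains" using source and target text jointly.
vectorizer = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4))
X = vectorizer.fit_transform(src + " " + tgt for src, tgt in corpus)
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

def predict_domain(source_sentence):
    """Pick the sub-corpus (domain) whose centroid is most similar to the input."""
    v = vectorizer.transform([source_sentence])
    sims = cosine_similarity(v, kmeans.cluster_centers_)
    return int(sims.argmax())

print(predict_domain("kippu wo kudasai"))  # index of the predicted domain cluster
```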

  19. Knowledge in formation: The machine-modeled frame of mind

    SciTech Connect

    Shore, B.

    1996-12-31

    Artificial Intelligence researchers have used the digital computer as a model for the human mind in two different ways. Most obviously, the computer has been used as a tool on which simulations of thinking-as-programs are developed and tested. Less obvious, but of great significance, is the use of the computer as a conceptual model for the human mind. This essay traces the sources of this machine-modeled conception of cognition in a great variety of social institutions and everyday experience, treating them as "cultural models" which have contributed to the naturalness of the mind-as-machine paradigm for many Americans. The roots of these models antedate the actual development of modern computers, and take the form of a "modularity schema" that has shaped the cultural and cognitive landscape of modernity. The essay concludes with a consideration of some of the cognitive consequences of this extension of machine logic into modern life, and proposes an important distinction between information-processing models of thought and meaning-making in how human cognition is conceptualized.

  20. Adding Abstraction and Reuse to a Network Modelling Tool Using the Reuseware Composition Framework

    NASA Astrophysics Data System (ADS)

    Johannes, Jendrik; Fernández, Miguel A.

    Domain-specific modelling (DSM) environments enable experts in a certain domain to actively participate in model-driven development. The development of DSM environments needs to be cost-efficient, since they are only used by a limited group of domain experts. Different model-driven technologies promise to allow this cost-efficient development. [1] presented experiences in developing a DSM environment for telecommunication network modelling, and identified challenges that need to be addressed by new modelling technologies. In this paper, we present the results of addressing one of these challenges - abstraction and reuse support - with the Reuseware Composition Framework. We show how we identified the abstraction and reuse features required in the telecommunication DSM environment in a case study and extended the existing environment with these features using Reuseware. We discuss the advantages of using this technology and propose a process for further improving the abstraction and reuse capabilities of the DSM environment in the future.

  1. The rise of machine consciousness: studying consciousness with computational models.

    PubMed

    Reggia, James A

    2013-08-01

    Efforts to create computational models of consciousness have accelerated over the last two decades, creating a field that has become known as artificial consciousness. There have been two main motivations for this controversial work: to develop a better scientific understanding of the nature of human/animal consciousness and to produce machines that genuinely exhibit conscious awareness. This review begins by briefly explaining some of the concepts and terminology used by investigators working on machine consciousness, and summarizes key neurobiological correlates of human consciousness that are particularly relevant to past computational studies. Models of consciousness developed over the last twenty years are then surveyed. These models are largely found to fall into five categories based on the fundamental issue that their developers have selected as being most central to consciousness: a global workspace, information integration, an internal self-model, higher-level representations, or attention mechanisms. For each of these five categories, an overview of past work is given, a representative example is presented in some detail to illustrate the approach, and comments are provided on the contributions and limitations of the methodology. Three conclusions are offered about the state of the field based on this review: (1) computational modeling has become an effective and accepted methodology for the scientific study of consciousness, (2) existing computational models have successfully captured a number of neurobiological, cognitive, and behavioral correlates of conscious information processing as machine simulations, and (3) no existing approach to artificial consciousness has presented a compelling demonstration of phenomenal machine consciousness, or even clear evidence that artificial phenomenal consciousness will eventually be possible. The paper concludes by discussing the importance of continuing work in this area, considering the ethical issues it raises.

  2. Transferable Atomic Multipole Machine Learning Models for Small Organic Molecules.

    PubMed

    Bereau, Tristan; Andrienko, Denis; von Lilienfeld, O Anatole

    2015-07-14

    Accurate representation of the molecular electrostatic potential, which is often expanded in distributed multipole moments, is crucial for an efficient evaluation of intermolecular interactions. Here we introduce a machine learning model for multipole coefficients of atom types H, C, O, N, S, F, and Cl in any molecular conformation. The model is trained on quantum-chemical results for atoms in varying chemical environments drawn from thousands of organic molecules. Multipoles in systems with neutral, cationic, and anionic molecular charge states are treated with individual models. The models' predictive accuracy and applicability are illustrated by evaluating intermolecular interaction energies of nearly 1,000 dimers and the cohesive energy of the benzene crystal.
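
    The sketch below illustrates the general recipe: learning an atomic coefficient from a descriptor of the local chemical environment with kernel ridge regression. The descriptor, the synthetic target, and the hyperparameters are assumptions for illustration, not the published model.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

# Toy stand-in for the paper's setting: learn an atomic multipole-like coefficient
# from a geometric fingerprint of the atom's chemical environment.
rng = np.random.default_rng(1)

def descriptor(neighbor_distances):
    """Hypothetical environment fingerprint: histogram of neighbor distances."""
    return np.histogram(neighbor_distances, bins=8, range=(0.5, 4.5))[0].astype(float)

# Synthetic training set: environments -> surrogate "QM-derived" coefficients.
envs = [rng.uniform(0.8, 4.0, size=rng.integers(4, 12)) for _ in range(300)]
X = np.array([descriptor(e) for e in envs])
y = np.array([np.exp(-e).sum() for e in envs])  # placeholder target, not real QM data

model = KernelRidge(kernel="laplacian", alpha=1e-3, gamma=0.1).fit(X, y)
test_env = rng.uniform(0.8, 4.0, size=6)
print(model.predict([descriptor(test_env)]))
```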

  3. Parallelizing the track-target model for the MIMD machine

    SciTech Connect

    Zhong Xiong, W.; Swietlik, C.

    1992-09-01

    Military Tracking-Target systems are important analysis tools for modelling the major functions of a strategic defense system operating against a ballistic missile threat during a simulated end-to-end scenario. As demands grow for modelling more trajectories with increasing numbers of missile types, so have demands for more processing power. Argonne National Laboratory has developed a parallel version of this Tracking-Target model, which has exhibited speedups of up to a factor of 6.3 on a shared-memory multiprocessor machine. This paper documents the project to implement the Tracking-Target model in a parallel processing environment.

  4. Abstract Machines for Polymorphous Computing

    DTIC Science & Technology

    2007-12-01

    In this paper, the scope of the word “configuration” is expanded to include also the mapping of the application onto the reconfigurable...optimization. The focus of this paper is thus on the on-line refinement component and its interaction with the configuration store. For a given instance...Mattson, J. Namkoong, J. D. Owens, B. Towles, and A. Chang, “Imagine: Media Processing with Streams,” IEEE Micro, March/April 2001, pp. 35-46. [27

  5. Development of machine learning models for diagnosis of glaucoma.

    PubMed

    Kim, Seong Jae; Cho, Kyong Jin; Oh, Sejong

    2017-01-01

    The study aimed to develop machine learning models that have strong prediction power and interpretability for the diagnosis of glaucoma based on retinal nerve fiber layer (RNFL) thickness and visual field (VF). We collected various candidate features from examinations of RNFL thickness and VF, and also developed synthesized features from the original features. We then selected the features best suited for classification (diagnosis) through feature evaluation. We used 100 cases of data as a test dataset and 399 cases of data as a training and validation dataset. To develop the glaucoma prediction model, we considered four machine learning algorithms: C5.0, random forest (RF), support vector machine (SVM), and k-nearest neighbor (KNN). We repeatedly composed a learning model using the training dataset and evaluated it using the validation dataset, finally selecting the learning model that produced the highest validation accuracy. We analyzed the quality of the models using several measures. The random forest model shows the best performance, while the C5.0, SVM, and KNN models show similar accuracy. In the random forest model, the classification accuracy is 0.98, sensitivity is 0.983, specificity is 0.975, and AUC is 0.979. The developed prediction models show high accuracy, sensitivity, specificity, and AUC in classifying between glaucomatous and healthy eyes, and can be used to predict glaucoma for unseen examination records. Clinicians may reference the prediction results to make better decisions. Multiple learning models may be combined to increase prediction accuracy. The C5.0 model includes decision rules for prediction, which can be used to explain the reasons for specific predictions.
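
    To illustrate the interpretability point about C5.0, the sketch below trains a small decision tree (a stand-in for C5.0, which is not available in scikit-learn) on synthetic data and prints its decision rules; the feature names are hypothetical.

```python
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, export_text

# Illustrative only: a small decision tree standing in for the study's C5.0 model,
# trained on synthetic "RNFL thickness / visual field" style features.
X, y = make_classification(n_samples=499, n_features=6, n_informative=4,
                           random_state=0)
feature_names = [f"feature_{i}" for i in range(6)]  # hypothetical names

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Human-readable rules: the interpretability payoff of rule-based models.
print(export_text(tree, feature_names=feature_names))
```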

  6. Stochastic Local Interaction (SLI) model: Bridging machine learning and geostatistics

    NASA Astrophysics Data System (ADS)

    Hristopulos, Dionissios T.

    2015-12-01

    Machine learning and geostatistics are powerful mathematical frameworks for modeling spatial data. Both approaches, however, suffer from poor scaling of the required computational resources for large data applications. We present the Stochastic Local Interaction (SLI) model, which employs a local representation to improve computational efficiency. SLI combines geostatistics and machine learning with ideas from statistical physics and computational geometry. It is based on a joint probability density function defined by an energy functional which involves local interactions implemented by means of kernel functions with adaptive local kernel bandwidths. SLI is expressed in terms of an explicit, typically sparse, precision (inverse covariance) matrix. This representation leads to a semi-analytical expression for interpolation (prediction), which is valid in any number of dimensions and avoids the computationally costly covariance matrix inversion.
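
    A schematic of the local-interaction idea, assuming a k-nearest-neighbor kernel weighting and a crude adaptive bandwidth; the actual SLI energy functional and bandwidth adaptation are richer than this sketch.

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.spatial import cKDTree

# Simplified local-interaction precision matrix (not the full SLI formulation).
rng = np.random.default_rng(2)
pts = rng.uniform(0, 10, size=(500, 2))
vals = np.sin(pts[:, 0]) + 0.1 * rng.normal(size=500)

k = 8
tree = cKDTree(pts)
dists, idx = tree.query(pts, k=k + 1)  # first neighbor is the point itself

n = len(pts)
J = lil_matrix((n, n))
for i in range(n):
    h = dists[i, 1:].mean()  # adaptive local bandwidth
    for d, j in zip(dists[i, 1:], idx[i, 1:]):
        w = np.exp(-(d / h) ** 2)  # kernel interaction weight
        J[i, j] -= w
        J[i, i] += w
J = J.tocsr()  # explicit, sparse precision (inverse covariance) matrix

# Local prediction at site i: the conditional mean of a Gaussian with this
# precision matrix reduces to a weighted average over the interaction neighborhood.
i = 0
neighbors = idx[i, 1:]
weights = -J[i, neighbors].toarray().ravel()
print(vals[neighbors] @ weights / weights.sum(), "vs actual", vals[i])
```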

  7. 97. View of International Business Machine (IBM) digital computer model ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    97. View of International Business Machine (IBM) digital computer model 7090 magnetic core installation, international telephone and telegraph (ITT) Artic Services Inc., Official photograph BMEWS site II, Clear, AK, by unknown photographer, 17 September 1965, BMEWS, clear as negative no. A-6604. - Clear Air Force Station, Ballistic Missile Early Warning System Site II, One mile west of mile marker 293.5 on Parks Highway, 5 miles southwest of Anderson, Anderson, Denali Borough, AK

  8. Modeling and analysis of uncertainty in on-machine form characterization of diamond-machined optical micro-structured surfaces

    NASA Astrophysics Data System (ADS)

    Zhu, Wu-Le; Zhu, Zhiwei; Ren, Mingjun; Ehmann, Kornel F.; Ju, Bing-Feng

    2016-12-01

    Ultra-precision diamond machining is widely used in the manufacture of optical micro-structured surfaces with sub-micron form accuracy. As optical performance is highly dependent on surface form accuracy, it is critically important to use reliable form characterization methods for surface quality control. To ascertain the characteristics of real machined surfaces, a reliable on-machine spiral scanning approach with high fidelity is presented in this paper. However, since many unavoidable uncertainty contributors lead to significant variations in the characterization results, an error analysis model is developed to identify the associated uncertainties and thereby facilitate reliable quantification of the demanding specifications of the manufactured surfaces. To accomplish this, both the diamond machining process and the on-machine spiral scanning procedure are investigated. Using the proposed model and the Monte Carlo method, the form error parameters of a compound eye lens array are estimated, accounting for form deviations, scanning centering errors, measurement drift, noise, etc. Application experiments using an on-machine scanning tunneling microscope verify the proposed model and also confirm its potential superiority over the conventional off-machine raster scanning method for surface characterization and quality control.

  9. Transferable Atomic Multipole Machine Learning Models for Small Organic Molecules

    SciTech Connect

    Bereau, Tristan; Andrienko, Denis; von Lilienfeld, O. Anatole

    2015-07-01

    Accurate representation of the molecular electrostatic potential, which is often expanded in distributed multipole moments, is crucial for an efficient evaluation of intermolecular interactions. Here we introduce a machine learning model for multipole coefficients of atom types H, C, O, N, S, F, and Cl in any molecular conformation. The model is trained on quantum chemical results for atoms in varying chemical environments drawn from thousands of organic molecules. Multipoles in systems with neutral, cationic, and anionic molecular charge states are treated with individual models. The models’ predictive accuracy and applicability are illustrated by evaluating intermolecular interaction energies of nearly 1,000 dimers and the cohesive energy of the benzene crystal.

  10. "Machine" consciousness and "artificial" thought: an operational architectonics model guided approach.

    PubMed

    Fingelkurts, Andrew A; Fingelkurts, Alexander A; Neves, Carlos F H

    2012-01-05

    Instead of using low-level neurophysiology mimicking and exploratory programming methods commonly used in the machine consciousness field, the hierarchical operational architectonics (OA) framework of brain and mind functioning proposes an alternative conceptual-theoretical framework as a new direction in the area of model-driven machine (robot) consciousness engineering. The unified brain-mind theoretical OA model explicitly captures (though in an informal way) the basic essence of brain functional architecture, which indeed constitutes a theory of consciousness. The OA describes the neurophysiological basis of the phenomenal level of brain organization. In this context the problem of producing man-made "machine" consciousness and "artificial" thought is a matter of duplicating all levels of the operational architectonics hierarchy (with its inherent rules and mechanisms) found in the brain electromagnetic field. We hope that the conceptual-theoretical framework described in this paper will stimulate the interest of mathematicians and/or computer scientists to abstract and formalize principles of hierarchy of brain operations which are the building blocks for phenomenal consciousness and thought. Copyright © 2010 Elsevier B.V. All rights reserved.

  11. Modelling, abstraction, and computation in systems biology: A view from computer science.

    PubMed

    Melham, Tom

    2013-04-01

    Systems biology is centrally engaged with computational modelling across multiple scales and at many levels of abstraction. Formal modelling, precise and formalised abstraction relationships, and computation also lie at the heart of computer science--and over the past decade a growing number of computer scientists have been bringing their discipline's core intellectual and computational tools to bear on biology in fascinating new ways. This paper explores some of the apparent points of contact between the two fields, in the context of a multi-disciplinary discussion on conceptual foundations of systems biology.

  12. Editorial: Mathematical Methods and Modeling in Machine Fault Diagnosis

    DOE PAGES

    Yan, Ruqiang; Chen, Xuefeng; Li, Weihua; ...

    2014-12-18

    Modern mathematics has commonly been utilized as an effective tool to model mechanical equipment so that their dynamic characteristics can be studied analytically. This will help identify potential failures of mechanical equipment by observing change in the equipment’s dynamic parameters. On the other hand, dynamic signals are also important and provide reliable information about the equipment’s working status. Modern mathematics has also provided us with a systematic way to design and implement various signal processing methods, which are used to analyze these dynamic signals, and to enhance intrinsic signal components that are directly related to machine failures. This special issue is aimed at stimulating not only new insights on mathematical methods for modeling but also recently developed signal processing methods, such as sparse decomposition with potential applications in machine fault diagnosis. Finally, the papers included in this special issue provide a glimpse into some of the research and applications in the field of machine fault diagnosis through applications of the modern mathematical methods.

  13. Editorial: Mathematical Methods and Modeling in Machine Fault Diagnosis

    SciTech Connect

    Yan, Ruqiang; Chen, Xuefeng; Li, Weihua; Sheng, Shuangwen

    2014-12-18

    Modern mathematics has commonly been utilized as an effective tool to model mechanical equipment so that their dynamic characteristics can be studied analytically. This will help identify potential failures of mechanical equipment by observing change in the equipment’s dynamic parameters. On the other hand, dynamic signals are also important and provide reliable information about the equipment’s working status. Modern mathematics has also provided us with a systematic way to design and implement various signal processing methods, which are used to analyze these dynamic signals, and to enhance intrinsic signal components that are directly related to machine failures. This special issue is aimed at stimulating not only new insights on mathematical methods for modeling but also recently developed signal processing methods, such as sparse decomposition with potential applications in machine fault diagnosis. Finally, the papers included in this special issue provide a glimpse into some of the research and applications in the field of machine fault diagnosis through applications of the modern mathematical methods.

  14. Modeling of autoresonant control of a parametrically excited screen machine

    NASA Astrophysics Data System (ADS)

    Abolfazl Zahedi, S.; Babitsky, Vladimir

    2016-10-01

    The modelling of the nonlinear dynamic response of a screen machine, described by nonlinear coupled differential equations and excited by an autoresonant control system, is presented. The displacement signal of the screen is fed back to the screen excitation directly by means of positive feedback. Negative feedback is used to keep the screen amplitude response within the expected range. The screen is anticipated to vibrate at a parametric resonance, and the excitation, stabilization and control response of the system are studied in the stable mode. Autoresonant control is thoroughly investigated and output tracking is reported. The control developed provides self-tuning and self-adaptation mechanisms that allow the screen machine to maintain a parametric resonant mode of oscillation under a wide range of uncertainty in mass and viscosity.
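
    A generic illustration of autoresonant excitation, not the paper's screen equations: positive feedback drives the oscillator in phase with its velocity, and a negative-feedback limiter caps the amplitude near a target level. All constants are assumed.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Generic autoresonant-control sketch (hypothetical parameters).
OMEGA0, ZETA = 2 * np.pi * 5.0, 0.02       # natural frequency, damping (assumed)
F_MAX, A_TARGET = 5.0, 0.8                 # drive limit, desired amplitude

def rhs(t, state):
    x, v = state
    drive = F_MAX * np.sign(v)             # positive feedback: force in phase with velocity
    limiter = max(0.0, abs(x) - A_TARGET)  # negative feedback above the target level
    force = drive * np.exp(-5.0 * limiter)
    return [v, -2 * ZETA * OMEGA0 * v - OMEGA0**2 * x + force]

sol = solve_ivp(rhs, (0, 10), [1e-3, 0.0], max_step=1e-3)
print("steady-state amplitude ~", np.abs(sol.y[0][-5000:]).max())
```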

  15. Modeling the Swift BAT Trigger Algorithm with Machine Learning

    NASA Astrophysics Data System (ADS)

    Graff, Philip B.; Lien, Amy Y.; Baker, John G.; Sakamoto, Takanori

    2016-02-01

    To draw inferences about gamma-ray burst (GRB) source populations based on Swift observations, it is essential to understand the detection efficiency of the Swift burst alert telescope (BAT). This study considers the problem of modeling the Swift/BAT triggering algorithm for long GRBs, a computationally expensive procedure, and models it using machine learning algorithms. A large sample of simulated GRBs from Lien et al. is used to train various models: random forests, boosted decision trees (with AdaBoost), support vector machines, and artificial neural networks. The best models have accuracies of ≳97% (≲3% error), which is a significant improvement on a cut in GRB flux, which has an accuracy of 89.6% (10.4% error). These models are then used to measure the detection efficiency of Swift as a function of redshift z, which is used to perform Bayesian parameter estimation on the GRB rate distribution. We find a local GRB rate density of n_0 ~ 0.48 (+0.41/−0.23) Gpc⁻³ yr⁻¹ with power-law indices of n_1 ~ 1.7 (+0.6/−0.5) and n_2 ~ −5.9 (+5.7/−0.1) for GRBs above and below a break point of z_1 ~ 6.8 (+2.8/−3.2). This methodology is able to improve upon earlier studies by more accurately modeling Swift detection and using this for fully Bayesian model fitting.
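
    The sketch below mimics this workflow at a toy scale: a random forest is trained on synthetic GRB properties with an invented trigger rule, and the resulting detection efficiency is read off per redshift bin. None of the features or thresholds correspond to the real BAT pipeline or the Lien et al. simulations.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for a simulated GRB catalog with a noisy trigger rule.
rng = np.random.default_rng(3)
n = 20000
z = rng.uniform(0.1, 10, n)
log_lum = rng.normal(51.5, 1.0, n)                       # log10 luminosity (assumed)
flux_proxy = log_lum - 2 * np.log10(z + 1)               # crude brightness proxy
triggered = (flux_proxy + rng.normal(0, 0.3, n)) > 50.2  # invented trigger rule

X = np.column_stack([z, log_lum, flux_proxy])
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, triggered)

# Detection efficiency vs redshift: mean predicted trigger probability per z bin.
bins = np.linspace(0.1, 10, 11)
which = np.digitize(z, bins)
proba = clf.predict_proba(X)[:, 1]
for b in range(1, len(bins)):
    sel = which == b
    print(f"z in [{bins[b-1]:.1f}, {bins[b]:.1f}): eff = {proba[sel].mean():.2f}")
```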

  16. Hydrogen-atom abstraction from a model amino acid: dependence on the attacking radical.

    PubMed

    Amos, Ruth I J; Chan, Bun; Easton, Christopher J; Radom, Leo

    2015-01-22

    We have used computational chemistry to examine the reactivity of a model amino acid toward hydrogen abstraction by HO•, HOO•, and Br•. The trends in the calculated condensed-phase (acetic acid) free energy barriers are in accord with experimental relative reactivities. Our calculations suggest that HO• is likely to be the abstracting species for reactions with hydrogen peroxide. For HO• abstractions, the barriers decrease as the site of reaction becomes more remote from the electron-withdrawing α-substituents, in accord with a diminishing polar deactivating effect. We find that the transition structures for α- and β-abstractions have additional hydrogen-bonding interactions, which lead to lower gas-phase vibrationless electronic barriers at these positions. Such favorable interactions become less important in a polar solvent such as acetic acid, and this leads to larger calculated barriers when the effect of solvation is taken into account. For Br• abstractions, the α-barrier is the smallest while the β-barrier is the largest, with the barrier gradually becoming smaller further along the side chain. We attribute the low barrier for the α-abstraction in this case to the partial reflection of the thermodynamic effect of the captodatively stabilized α-radical product in the more product-like transition structure, while the trend of decreasing barriers in the order β > γ > δ ∼ ε is explained by the diminishing polar deactivating effect. More generally, the favorable influence of thermodynamic effects on the α-abstraction barrier is found to be smaller when the transition structure for hydrogen abstraction is earlier.

  17. Model-Driven Engineering of Machine Executable Code

    NASA Astrophysics Data System (ADS)

    Eichberg, Michael; Monperrus, Martin; Kloppenburg, Sven; Mezini, Mira

    Implementing static analyses of machine-level executable code is labor-intensive and complex. We show how to leverage model-driven engineering to facilitate the design and implementation of programs doing static analyses. Further, we report on important lessons learned about the benefits and drawbacks of the following technologies: using the Scala programming language as the target of code generation, using XML Schema to express a metamodel, and using XSLT to implement (a) transformations and (b) a lint-like tool. Finally, we report on the use of Prolog for writing model transformations.

  18. First Look at Photometric Reduction via Mixed-Model Regression (Poster abstract)

    NASA Astrophysics Data System (ADS)

    Dose, E.

    2016-12-01

    (Abstract only) Mixed-model regression is proposed as a new approach to photometric reduction, especially for variable-star photometry in several filters. Mixed-model regression adds to normal multivariate regression certain "random effects": categorical-variable terms that model and extract specific systematic errors such as image-to-image zero-point fluctuations (cirrus effect) or even errors in comp-star catalog magnitudes.
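
    A minimal sketch of the idea using statsmodels: fixed effects for catalog magnitude and airmass, plus a random intercept per image to absorb zero-point fluctuations. The data and column names are invented, not taken from the poster.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic photometry: each image carries a zero-point shift (the "cirrus effect")
# that the per-image random intercept should absorb.
rng = np.random.default_rng(4)
n_images, n_stars = 30, 20
rows = []
for img in range(n_images):
    zp_shift = rng.normal(0, 0.03)   # per-image zero-point fluctuation
    airmass = rng.uniform(1.0, 2.0)
    for star in range(n_stars):
        cat_mag = rng.uniform(10, 15)
        inst_mag = cat_mag + 0.12 * airmass + zp_shift + rng.normal(0, 0.01)
        rows.append(dict(image=img, cat_mag=cat_mag, airmass=airmass,
                         inst_mag=inst_mag))
df = pd.DataFrame(rows)

# Mixed model: fixed effects + random intercept grouped by image.
model = smf.mixedlm("inst_mag ~ cat_mag + airmass", df, groups=df["image"])
result = model.fit()
print(result.params[["cat_mag", "airmass"]])  # recovered fixed effects
```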

  19. Geochemistry Model Abstraction and Sensitivity Studies for the 21 PWR CSNF Waste Package

    SciTech Connect

    P. Bernot; S. LeStrange; E. Thomas; K. Zarrabi; S. Arthur

    2002-10-29

    The CSNF geochemistry model abstraction, as directed by the TWP (BSC 2002b), was developed to provide regression analysis of EQ6 cases to obtain abstracted values of pH (and in some cases HCO₃⁻ concentration) for use in the Configuration Generator Model. The pH of the system is the controlling factor over U mineralization, CSNF degradation rate, and HCO₃⁻ concentration in solution. The abstraction encompasses a large variety of combinations for the degradation rates of materials. The "base case" used EQ6 simulations looking at differing steel/alloy corrosion rates, drip rates, and percent fuel exposure. Other values, such as the pH/HCO₃⁻-dependent fuel corrosion rate and the corrosion rate of A516, were kept constant. Relationships were developed for pH as a function of these differing rates to be used in the calculation of total C and, subsequently, the fuel rate. An additional refinement to the abstraction was the addition of abstracted pH values for cases where there was limited O₂ for waste package corrosion and a flushing fluid other than J-13, which had been used in all EQ6 calculations up to this point. These abstractions also used EQ6 simulations with varying combinations of corrosion rates of materials to abstract the pH (and HCO₃⁻ in the limited-O₂ cases) as a function of WP materials corrosion rates. The goodness of fit for most of the abstracted values was above an R² of 0.9. Those below this value occurred at the very beginning of WP corrosion, when large variations in the system pH are observed. However, the significance of the F-statistic for all the abstractions showed that the variable relationships are significant. For the abstraction, an analysis of the minerals that may form the "sludge" in the waste package was also presented. This analysis indicates that a number of different iron and aluminum minerals may form in the waste package other than those

  20. Technical Work Plan for: Near Field Environment: Engineered System: Radionuclide Transport Abstraction Model Report

    SciTech Connect

    J.D. Schreiber

    2006-12-08

    This technical work plan (TWP) describes work activities to be performed by the Near-Field Environment Team. The objective of the work scope covered by this TWP is to generate Revision 03 of EBS Radionuclide Transport Abstraction, referred to herein as the radionuclide transport abstraction (RTA) report. The RTA report is being revised primarily to address condition reports (CRs), to address issues identified by the Independent Validation Review Team (IVRT), to address the potential impact of transport, aging, and disposal (TAD) canister design on transport models, and to ensure integration with other models that are closely associated with the RTA report and being developed or revised in other analysis/model reports in response to IVRT comments. The RTA report will be developed in accordance with the most current version of LP-SIII.10Q-BSC and will reflect current administrative procedures (LP-3.15Q-BSC, "Managing Technical Product Inputs"; LP-SIII.2Q-BSC, "Qualification of Unqualified Data"; etc.), and will develop related Document Input Reference System (DIRS) reports and data qualifications as applicable in accordance with prevailing procedures. The RTA report consists of three models: the engineered barrier system (EBS) flow model, the EBS transport model, and the EBS-unsaturated zone (UZ) interface model. The flux-splitting submodel in the EBS flow model will change, so the EBS flow model will be validated again. The EBS transport model and validation of the model will be substantially revised in Revision 03 of the RTA report, which is the main subject of this TWP. The EBS-UZ interface model may be changed in Revision 03 of the RTA report due to changes in the conceptualization of the UZ transport abstraction model (a particle-tracker transport model based on the discrete fracture transfer function will be used instead of the dual-continuum transport model previously used). Validation of the EBS-UZ interface model will be revised to be consistent with

  1. Machine learning and docking models for Mycobacterium tuberculosis topoisomerase I.

    PubMed

    Ekins, Sean; Godbole, Adwait Anand; Kéri, György; Orfi, Lászlo; Pato, János; Bhat, Rajeshwari Subray; Verma, Rinkee; Bradley, Erin K; Nagaraja, Valakunja

    2017-03-01

    There is a shortage of compounds directed towards new targets, apart from those targeted by the FDA-approved drugs used against Mycobacterium tuberculosis. Topoisomerase I (Mttopo I) is an essential mycobacterial enzyme and a promising target in this regard. However, it suffers from a shortage of known inhibitors. We have previously used computational approaches such as homology modeling and docking to propose 38 FDA-approved drugs for testing and identified several active molecules. To follow on from this, we now describe the in vitro testing of a library of 639 compounds. These data were used to create machine learning models for Mttopo I, which were further validated. The combined Mttopo I Bayesian model had a 5-fold cross-validation receiver operating characteristic (ROC AUC) of 0.74 and sensitivity, specificity and concordance values above 0.76, and was used to select commercially available compounds for testing in vitro. The recently described crystal structure of Mttopo I was also compared with the previously described homology model and then used to dock the Mttopo I actives norclomipramine and imipramine. In summary, we describe our efforts to identify small-molecule inhibitors of Mttopo I using a combination of machine learning modeling and docking studies in conjunction with screening of the selected molecules for enzyme inhibition. We demonstrate the experimental inhibition of Mttopo I by small-molecule inhibitors and show that the enzyme can be readily targeted for lead molecule development. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. Calibrating Building Energy Models Using Supercomputer Trained Machine Learning Agents

    SciTech Connect

    Sanyal, Jibonananda; New, Joshua Ryan; Edwards, Richard; Parker, Lynne Edwards

    2014-01-01

    Building Energy Modeling (BEM) is an approach to model the energy usage in buildings for design and retrofit purposes. EnergyPlus is the flagship Department of Energy software that performs BEM for different types of buildings. The input to EnergyPlus often extends to a few thousand parameters, which have to be calibrated manually by an expert for realistic energy modeling. This makes calibration challenging and expensive, rendering building energy modeling infeasible for smaller projects. In this paper, we describe the Autotune research, which employs machine learning algorithms to generate agents for the different kinds of standard reference buildings in the U.S. building stock. The parametric space and the variety of building locations and types make this a challenging computational problem necessitating the use of supercomputers. Millions of EnergyPlus simulations are run on supercomputers and subsequently used to train machine learning algorithms to generate agents. These agents, once created, can run in a fraction of the time, thereby allowing cost-effective calibration of building models.

  3. Modelling fate and transport of pesticides in river catchments with drinking water abstractions

    NASA Astrophysics Data System (ADS)

    Desmet, Nele; Seuntjens, Piet; Touchant, Kaatje

    2010-05-01

    When drinking water is abstracted from surface water, the presence of pesticides may have a large impact on purification costs. In order to respect imposed thresholds at points of drinking water abstraction in a river catchment, sustainable pesticide management strategies might be required in certain areas. To improve management strategies, a sound understanding of the emission routes, transport, environmental fate and sources of pesticides is needed. However, the pesticide monitoring data on which measures are founded are generally scarce, and this scarcity hampers interpretation and decision making. In such cases, a modelling approach can be a very useful tool for obtaining complementary information. Modelling makes it possible to take into account temporal and spatial variability in both discharges and concentrations. In the Netherlands, the Meuse river is used for drinking water abstraction, and the government imposes the European drinking water standard for individual pesticides (0.1 µg L⁻¹) for surface waters at points of drinking water abstraction. The reported glyphosate concentrations in the Meuse river frequently exceed this standard, which strengthens the case for targeted measures. In this study, a model of the Meuse river was developed to estimate the contribution of influxes at the Dutch-Belgian border to the concentration levels detected at the drinking water intake 250 km downstream, and to assess the contribution of the tributaries to the glyphosate loads. The effects of glyphosate decay on environmental fate were considered as well. Our results show that the application of a river model makes it possible to assess the fate and transport of pesticides in a catchment in spite of monitoring data scarcity. Furthermore, the model provides insight into the contribution of different sub-basins to the pollution level. The modelling results indicate that the effect of local measures to reduce pesticide concentrations in the river at points of drinking water

  4. Model-based object classification using unification grammars and abstract representations

    NASA Astrophysics Data System (ADS)

    Liburdy, Kathleen A.; Schalkoff, Robert J.

    1993-04-01

    The design and implementation of a high level computer vision system which performs object classification is described. General object labelling and functional analysis require models of classes which display a wide range of geometric variations. A large representational gap exists between abstract criteria such as `graspable' and current geometric image descriptions. The vision system developed and described in this work addresses this problem and implements solutions based on a fusion of semantics, unification, and formal language theory. Object models are represented using unification grammars, which provide a framework for the integration of structure and semantics. A methodology for the derivation of symbolic image descriptions capable of interacting with the grammar-based models is described and implemented. A unification-based parser developed for this system achieves object classification by determining if the symbolic image description can be unified with the abstract criteria of an object model. Future research directions are indicated.

  5. Modelling difficulties in abstract thinking in psychosis: the importance of socio-developmental background.

    PubMed

    Berg, A O; Melle, I; Zuber, V; Simonsen, C; Nerhus, M; Ueland, T; Andreassen, O A; Sundet, K; Vaskinn, A

    2017-01-01

    Abstract thinking is important in the modern understanding of neurocognitive abilities, and its impairment is a symptom of thought disorder in psychosis. In patients with psychosis, we assessed whether socio-developmental background influences abstract thinking, and its association with executive functioning and clinical psychosis symptoms. Participants (n = 174) had a diagnosis of psychotic or bipolar disorder, were 17-65 years old, had an intelligence quotient (IQ) > 70, were fluent in a Scandinavian language, and had completed their full primary education in Norway. Immigrants (n = 58) were matched (1:2) with participants without a history of migration (n = 116). All participants completed a neurocognitive and clinical assessment. Socio-developmental background was operationalised as the human development index (HDI) of the country of birth in the year of birth. Structural equation modelling was used to find the model with the best fit. This model, χ²(33) = 96.591, p < .001, confirmed a significant indirect effect of HDI scores on abstract thinking through executive functioning, but not through clinical psychosis symptoms. This study found that socio-developmental background influences abstract thinking in psychosis through an indirect effect via executive functioning. We should take socio-developmental background into account in the interpretation of neurocognitive performance in patients with psychosis, and prioritise cognitive remediation in the treatment of immigrant patients.

  6. Using machine learning to model dose-response relationships.

    PubMed

    Linden, Ariel; Yarnold, Paul R; Nallamothu, Brahmajee K

    2016-12-01

    Establishing the relationship between various doses of an exposure and a response variable is integral to many studies in health care. Linear parametric models, widely used for estimating dose-response relationships, have several limitations. This paper employs the optimal discriminant analysis (ODA) machine-learning algorithm to determine the degree to which exposure dose can be distinguished based on the distribution of the response variable. By framing the dose-response relationship as a classification problem, machine learning can provide the same functionality as conventional models, but can additionally make individual-level predictions, which may be helpful in practical applications like establishing responsiveness to prescribed drug regimens. Using data from a study measuring the responses of blood flow in the forearm to the intra-arterial administration of isoproterenol (separately for 9 black and 13 white men, and pooled), we compare the results estimated from a generalized estimating equations (GEE) model with those estimated using ODA. Generalized estimating equations and ODA both identified many statistically significant dose-response relationships, separately by race and for pooled data. Post hoc comparisons between doses indicated ODA (based on exact P values) was consistently more conservative than GEE (based on estimated P values). Compared with ODA, GEE produced twice as many instances of paradoxical confounding (findings from analysis of pooled data that are inconsistent with findings from analyses stratified by race). Given its unique advantages and greater analytic flexibility, maximum-accuracy machine-learning methods like ODA should be considered as the primary analytic approach in dose-response applications. © 2016 John Wiley & Sons, Ltd.
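
    The essence of ODA for a single continuous attribute can be sketched as an exhaustive cutpoint search that maximizes classification accuracy; the real ODA algorithm additionally handles weights, multiple classes, and exact permutation p-values. The data below are synthetic.

```python
import numpy as np

# Synthetic "blood flow response" values for two dose groups.
rng = np.random.default_rng(5)
low_dose = rng.normal(2.0, 1.0, 40)
high_dose = rng.normal(4.0, 1.0, 40)

values = np.concatenate([low_dose, high_dose])
labels = np.concatenate([np.zeros(40), np.ones(40)])

# Exhaustive search for the cutpoint that best separates the two groups.
best_acc, best_cut = 0.0, None
for cut in np.unique(values):
    acc = max(((values > cut) == labels).mean(),
              ((values <= cut) == labels).mean())  # try both rule directions
    if acc > best_acc:
        best_acc, best_cut = acc, cut

print(f"optimal cutpoint {best_cut:.2f}, training accuracy {best_acc:.2f}")
```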

  7. Support Vector Machines for Petrophysical Modelling and Lithoclassification

    NASA Astrophysics Data System (ADS)

    Al-Anazi, Ammal Fannoush Khalifah

    2011-12-01

    Given the increasing challenges of oil and gas production from partially depleted conventional or unconventional reservoirs, reservoir characterization is a key element of the reservoir development workflow. It impacts well placement, injection and production strategies, and field management. Reservoir characterization projects point and line data onto a large three-dimensional volume. The relationship between variables, e.g. porosity and permeability, is often established by regression, yet the complexities between measured variables often lead to poor correlation coefficients between the regressed variables. Recent advances in machine learning methods have provided attractive alternatives for constructing interpretation models of rock properties in heterogeneous reservoirs. Here, Support Vector Machines (SVMs), a class of learning machine formulated to output regression models and classifiers of competitive generalization capability, have been explored to determine their capability for modelling the relationship, both in regression and in classification, between reservoir rock properties. This thesis documents research on the capability of SVMs to model petrophysical and elastic properties in heterogeneous sandstone and carbonate reservoirs. Specifically, the capabilities of SVM regression and classification have been examined and compared to neural network-based methods, namely multilayered neural networks, radial basis function neural networks, general regression neural networks, probabilistic neural networks, and linear discriminant analysis. The petrophysical properties evaluated include porosity, permeability, Poisson's ratio and Young's modulus. Statistical error analysis reveals that the SVM method yields comparable or superior predictions of petrophysical and elastic rock properties and classification of the lithology compared to neural networks. The SVM method also shows uniform prediction capability under the

  8. Modeling of Unsteady Three-dimensional Flows in Multistage Machines

    NASA Technical Reports Server (NTRS)

    Hall, Kenneth C.; Pratt, Edmund T., Jr.; Kurkov, Anatole (Technical Monitor)

    2003-01-01

    Despite many years of development, the accurate and reliable prediction of unsteady aerodynamic forces acting on turbomachinery blades remains less than satisfactory, especially when viewed next to the great success investigators have had in predicting steady flows. Hall and Silkowski (1997) have proposed that one of the main reasons for the discrepancy between theory and experiment and/or industrial experience is that many of the current unsteady aerodynamic theories model a single blade row in an infinitely long duct, ignoring potentially important multistage effects. However, unsteady flows are made up of acoustic, vortical, and entropic waves. These waves provide a mechanism for the rotors and stators of multistage machines to communicate with one another. In other words, wave behavior makes unsteady flows fundamentally a multistage (and three-dimensional) phenomenon. This research program has as its goals (1) the development of computationally efficient computer models of the unsteady aerodynamic response of blade rows embedded in a multistage machine (these models will ultimately be capable of analyzing three-dimensional viscous transonic flows), and (2) the use of these computer codes to study a number of important multistage phenomena.

  9. Applications and modelling of bulk HTSs in brushless ac machines

    NASA Astrophysics Data System (ADS)

    Barnes, G. J.; McCulloch, M. D.; Dew-Hughes, D.

    2000-06-01

    The use of high temperature superconducting material in its bulk form for engineering applications is attractive due to the large power densities that can be achieved. In brushless electrical machines, there are essentially four properties that can be exploited; their hysteretic nature, their flux shielding properties, their ability to trap large flux densities and their ability to produce levitation. These properties translate to hysteresis machines, reluctance machines, trapped-field synchronous machines and linear motors respectively. Each one of these machines is addressed separately and computer simulations that reveal the current and field distributions within the machines are used to explain their operation.

  10. Identifying crop vulnerability to groundwater abstraction: modelling and expert knowledge in a GIS.

    PubMed

    Procter, Chris; Comber, Lex; Betson, Mark; Buckley, Dennis; Frost, Andy; Lyons, Hester; Riding, Alison; Voyce, Kevin

    2006-11-01

    Water use is expected to increase and climate change scenarios indicate the need for more frequent water abstraction. Abstracting groundwater may have a detrimental effect on soil moisture availability for crop growth and yields. This work presents an elegant and robust method for identifying zones of crop vulnerability to abstraction. Archive groundwater level datasets were used to generate a composite groundwater surface that was subtracted from a digital terrain model. The result was the depth from surface to groundwater and identified areas underlain by shallow groundwater. Knowledge from an expert agronomist was used to define classes of risk in terms of their depth below ground level. Combining information on the permeability of geological drift types further refined the assessment of the risk of crop growth vulnerability. The nature of the mapped output is one that is easy to communicate to the intended farming audience because of the general familiarity of mapped information. Such Geographic Information System (GIS)-based products can play a significant role in the characterisation of catchments under the EU Water Framework Directive especially in the process of public liaison that is fundamental to the setting of priorities for management change. The creation of a baseline allows the impact of future increased water abstraction rates to be modelled and the vulnerability maps are in a format that can be readily understood by the various stakeholders. This methodology can readily be extended to encompass additional data layers and for a range of groundwater vulnerability issues including water resources, ecological impacts, nitrate and phosphorus.
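
    A numpy sketch of the mapping step described above: depth to groundwater computed as terrain elevation minus the composite groundwater surface, then classified into vulnerability zones. The grids and class breaks are invented; the paper's expert-defined classes may differ.

```python
import numpy as np

# Invented raster grids standing in for the DTM and the composite groundwater surface.
rng = np.random.default_rng(6)
dtm = 50 + 5 * rng.random((100, 100))         # digital terrain model (m above datum)
gw_surface = 47 + 4 * rng.random((100, 100))  # interpolated groundwater level

depth = dtm - gw_surface  # depth from surface to groundwater (m below ground level)

# Hypothetical expert classes: <1 m high vulnerability, 1-3 m moderate, >3 m low.
risk = np.digitize(depth, bins=[1.0, 3.0])    # 0=high, 1=moderate, 2=low
labels = np.array(["high", "moderate", "low"])
unique, counts = np.unique(risk, return_counts=True)
for u, c in zip(unique, counts):
    print(f"{labels[u]}: {c} cells")
```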

  11. Modeling the Swift Bat Trigger Algorithm with Machine Learning

    NASA Technical Reports Server (NTRS)

    Graff, Philip B.; Lien, Amy Y.; Baker, John G.; Sakamoto, Takanori

    2016-01-01

    To draw inferences about gamma-ray burst (GRB) source populations based on Swift observations, it is essential to understand the detection efficiency of the Swift burst alert telescope (BAT). This study considers the problem of modeling the Swift/BAT triggering algorithm for long GRBs, a computationally expensive procedure, and models it using machine learning algorithms. A large sample of simulated GRBs from Lien et al. is used to train various models: random forests, boosted decision trees (with AdaBoost), support vector machines, and artificial neural networks. The best models have accuracies of ≥97% (≤3% error), which is a significant improvement on a cut in GRB flux, which has an accuracy of 89.6% (10.4% error). These models are then used to measure the detection efficiency of Swift as a function of redshift z, which is used to perform Bayesian parameter estimation on the GRB rate distribution. We find a local GRB rate density of n_0 ~ 0.48 (+0.41/−0.23) Gpc⁻³ yr⁻¹ with power-law indices of n_1 ~ 1.7 (+0.6/−0.5) and n_2 ~ −5.9 (+5.7/−0.1) for GRBs above and below a break point of z_1 ~ 6.8 (+2.8/−3.2). This methodology is able to improve upon earlier studies by more accurately modeling Swift detection and using this for fully Bayesian model fitting.

  12. Modeling the Swift BAT Trigger Algorithm with Machine Learning

    NASA Technical Reports Server (NTRS)

    Graff, Philip B.; Lien, Amy Y.; Baker, John G.; Sakamoto, Takanori

    2015-01-01

    To draw inferences about gamma-ray burst (GRB) source populations based on Swift observations, it is essential to understand the detection efficiency of the Swift burst alert telescope (BAT). This study considers the problem of modeling the Swift BAT triggering algorithm for long GRBs, a computationally expensive procedure, and models it using machine learning algorithms. A large sample of simulated GRBs from Lien et al. (2014) is used to train various models: random forests, boosted decision trees (with AdaBoost), support vector machines, and artificial neural networks. The best models have accuracies of ≥97% (≤3% error), which is a significant improvement on a cut in GRB flux, which has an accuracy of 89.6% (10.4% error). These models are then used to measure the detection efficiency of Swift as a function of redshift z, which is used to perform Bayesian parameter estimation on the GRB rate distribution. We find a local GRB rate density of n_0 ~ 0.48 (+0.41/−0.23) Gpc⁻³ yr⁻¹ with power-law indices of n_1 ~ 1.7 (+0.6/−0.5) and n_2 ~ −5.9 (+5.7/−0.1) for GRBs above and below a break point of z_1 ~ 6.8 (+2.8/−3.2). This methodology is able to improve upon earlier studies by more accurately modeling Swift detection and using this for fully Bayesian model fitting. The code used in this analysis is publicly available online.

  13. Improved machine learning models for predicting selective compounds.

    PubMed

    Ning, Xia; Walters, Michael; Karypis, George

    2012-01-23

    The identification of small potent compounds that selectively bind to the target under consideration with high affinities is a critical step toward successful drug discovery. However, there is still a lack of efficient and accurate computational methods to predict compound selectivity properties. In this paper, we propose a set of machine learning methods to do compound selectivity prediction. In particular, we propose a novel cascaded learning method and a multitask learning method. The cascaded method decomposes the selectivity prediction into two steps, one model for each step, so as to effectively filter out nonselective compounds. The multitask method incorporates both activity and selectivity models into one multitask model so as to better differentiate compound selectivity properties. We conducted a comprehensive set of experiments and compared the results with those of other conventional selectivity prediction methods, and our results demonstrated that the cascaded and multitask methods significantly improve the selectivity prediction performance.
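
    The cascaded idea can be sketched as two stacked classifiers, where the second is trained and applied only on compounds the first deems active. Fingerprints and labels below are synthetic, and random forests stand in for whatever base learners the authors used.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Synthetic mock fingerprints and labels (not the authors' data or pipeline).
rng = np.random.default_rng(7)
X = rng.integers(0, 2, size=(2000, 128)).astype(float)
active = X[:, :8].sum(axis=1) > 4
selective = active & (X[:, 8:16].sum(axis=1) > 4)

# Stage 1 predicts activity; stage 2 predicts selectivity among actives only.
stage1 = RandomForestClassifier(random_state=0).fit(X, active)
stage2 = RandomForestClassifier(random_state=0).fit(X[active], selective[active])

def predict_selective(x):
    """A compound is called selective only if it survives both stages."""
    if not stage1.predict(x.reshape(1, -1))[0]:
        return False  # nonselective compounds filtered out early
    return bool(stage2.predict(x.reshape(1, -1))[0])

print(predict_selective(X[0]))
```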

  14. Ecological footprint model using the support vector machine technique.

    PubMed

    Ma, Haibo; Chang, Wenjuan; Cui, Guangbai

    2012-01-01

    The per capita ecological footprint (EF) is one of the most widely recognized measures of environmental sustainability. It aims to quantify the Earth's biological resources required to support human activity. In this paper, we summarize relevant previous literature and present five factors that influence per capita EF: national gross domestic product (GDP), urbanization (independent of economic development), distribution of income (measured by the Gini coefficient), export dependence (measured by the percentage of exports to total GDP), and service intensity (measured by the percentage of services to total GDP). A new ecological footprint model based on a support vector machine (SVM), a machine-learning method built on the structural risk minimization principle from statistical learning theory, was developed to calculate the per capita EF of 24 nations using data from 123 nations. The calculation accuracy was measured by average absolute error and average relative error, which were 0.004883 and 0.351078%, respectively. Our results demonstrate that the EF model based on SVM has good calculation performance.
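
    A sketch of the regression setup, assuming an RBF-kernel support vector regressor on the five indicators named above; the data are random placeholders, not the 123-nation dataset, and the hyperparameters are arbitrary.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Placeholder data for the five national indicators (not the real dataset).
rng = np.random.default_rng(8)
n = 123
X = np.column_stack([
    rng.uniform(500, 60000, n),  # GDP per capita
    rng.uniform(0.2, 1.0, n),    # urbanization
    rng.uniform(0.25, 0.6, n),   # Gini coefficient
    rng.uniform(0.1, 1.2, n),    # export share of GDP
    rng.uniform(0.2, 0.8, n),    # service share of GDP
])
y = 0.8 + 6e-5 * X[:, 0] + 1.5 * X[:, 1] + rng.normal(0, 0.2, n)  # synthetic EF

# Scaling matters for RBF kernels, hence the pipeline.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.05))
model.fit(X[:99], y[:99])          # train on 99 nations
pred = model.predict(X[99:])       # predict the held-out 24 nations
print("mean absolute error:", np.abs(pred - y[99:]).mean())
```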

  16. Programming with models: modularity and abstraction provide powerful capabilities for systems biology

    PubMed Central

    Mallavarapu, Aneil; Thomson, Matthew; Ullian, Benjamin; Gunawardena, Jeremy

    2008-01-01

    Mathematical models are increasingly used to understand how phenotypes emerge from systems of molecular interactions. However, their current construction as monolithic sets of equations presents a fundamental barrier to progress. Overcoming this requires modularity, enabling sub-systems to be specified independently and combined incrementally, and abstraction, enabling generic properties of biological processes to be specified independently of specific instances. These, in turn, require models to be represented as programs rather than as datatypes. Programmable modularity and abstraction enables libraries of modules to be created, which can be instantiated and reused repeatedly in different contexts with different components. We have developed a computational infrastructure that accomplishes this. We show here why such capabilities are needed, what is required to implement them and what can be accomplished with them that could not be done previously. PMID:18647734

  17. Medical record review conduction model for improving interrater reliability of abstracting medical-related information.

    PubMed

    Engel, Lisa; Henderson, Courtney; Fergenbaum, Jennifer; Colantonio, Angela

    2009-09-01

    Medical record review (MRR) is often used in clinical research and evaluation, yet there is limited literature regarding best practices in conducting an MRR, and there are few studies reporting interrater reliability (IRR) from MRR data. The aim of this research was twofold: (a) to develop an MRR abstraction tool and standardize the MRR process and (b) to examine the IRR from MRR data. This study introduces the MRR-Conduction Model, which was used to implement an MRR, and examines the IRR between two abstractors who collected preinjury medical and psychiatric, incident-related medical, and postinjury head symptom information from the medical records of 47 neurologically injured workers. Results showed that the percentage agreement was ≥85% and the unweighted kappa statistic was ≥0.60 for most variables, indicating substantial IRR. An effective and reliable MRR to abstract medical-related information requires planning and time. The MRR-Conduction Model is proposed to guide the process of creating an MRR.

  18. A salamander's flexible spinal network for locomotion, modeled at two levels of abstraction.

    PubMed

    Knüsel, Jeremie; Bicanski, Andrej; Ryczko, Dimitri; Cabelguen, Jean-Marie; Ijspeert, Auke Jan

    2013-08-01

    Animals have to coordinate a large number of muscles in different ways to efficiently move at various speeds and in different and complex environments. This coordination is in large part based on central pattern generators (CPGs). These neural networks are capable of producing complex rhythmic patterns when activated and modulated by relatively simple control signals. Although the generation of particular gaits by CPGs has been successfully modeled at many levels of abstraction, the principles underlying the generation and selection of a diversity of patterns of coordination in a single neural network are still not well understood. The present work specifically addresses the flexibility of the spinal locomotor networks in salamanders. We compare an abstract oscillator model and a CPG network composed of integrate-and-fire neurons, according to their ability to account for different axial patterns of coordination, and in particular the transition in gait between swimming and stepping modes. The topology of the network is inspired by models of the lamprey CPG, complemented by additions based on experimental data from isolated spinal cords of salamanders. Oscillatory centers of the limbs are included in a way that preserves the flexibility of the axial network. Similarly to the selection of forward and backward swimming in lamprey models via different excitation to the first axial segment, we can account for the modification of the axial coordination pattern between swimming and forward stepping on land in the salamander model, via different uncoupled frequencies in limb versus axial oscillators (for the same level of excitation). These results transfer partially to a more realistic model based on formal spiking neurons, and we discuss the difference between the abstract oscillator model and the model built with formal spiking neurons.
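
    At the abstract-oscillator level of description, a chain of coupled phase oscillators is a common minimal sketch of an axial CPG (all parameter values below are illustrative, not the paper's): nearest-neighbour coupling with a constant phase bias drives the chain toward a travelling wave of activity along the body, and changing the oscillators' intrinsic frequencies shifts the coordination pattern.

    ```python
    import numpy as np

    def simulate_chain(n=20, w_axial=2.0, coupling=4.0, steps=20000, dt=1e-3):
        """Euler-integrate d(theta_i)/dt = w_i + a*sin(theta_j - theta_i -/+ lag)
        over nearest neighbours, with a constant desired intersegmental lag."""
        theta = np.random.default_rng(3).uniform(0, 2 * np.pi, n)
        w = np.full(n, w_axial)            # intrinsic frequencies (rad/s)
        phase_lag = 2 * np.pi / n          # one full wave along the body
        for _ in range(steps):
            dtheta = w.copy()
            dtheta[1:] += coupling * np.sin(theta[:-1] - theta[1:] - phase_lag)
            dtheta[:-1] += coupling * np.sin(theta[1:] - theta[:-1] + phase_lag)
            theta += dt * dtheta
        return np.angle(np.exp(1j * np.diff(theta)))   # settled phase lags

    # Near-constant intersegmental lags indicate a travelling (swimming-like) wave.
    print(simulate_chain()[:5])
    ```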

  19. An initial-abstraction, constant-loss model for unit hydrograph modeling for applicable watersheds in Texas

    USGS Publications Warehouse

    Asquith, William H.; Roussel, Meghan C.

    2007-01-01

    Estimation of representative hydrographs from design storms, which are known as design hydrographs, provides for cost-effective, risk-mitigated design of drainage structures such as bridges, culverts, roadways, and other infrastructure. During 2001-07, the U.S. Geological Survey (USGS), in cooperation with the Texas Department of Transportation, investigated runoff hydrographs, design storms, unit hydrographs, and watershed-loss models to enhance design hydrograph estimation in Texas. Design hydrographs ideally should mimic the general volume, peak, and shape of observed runoff hydrographs. Design hydrographs commonly are estimated in part by unit hydrographs. A unit hydrograph is defined as the runoff hydrograph that results from a unit pulse of excess rainfall uniformly distributed over the watershed at a constant rate for a specific duration. A time-distributed watershed-loss model is required for modeling by unit hydrographs. This report develops a specific time-distributed watershed-loss model known as an initial-abstraction, constant-loss model. For this watershed-loss model, a watershed is conceptualized to have the capacity to store or abstract an absolute depth of rainfall at and near the beginning of a storm. Depths of total rainfall less than this initial abstraction do not produce runoff. The watershed also is conceptualized to have the capacity to remove rainfall at a constant rate (loss) after the initial abstraction is satisfied. Additional rainfall inputs after the initial abstraction is satisfied contribute to runoff if the rainfall rate (intensity) is larger than the constant loss. The initial-abstraction, constant-loss model thus is a two-parameter model. The initial-abstraction, constant-loss model is investigated through detailed computational and statistical analysis of observed rainfall and runoff data for 92 USGS streamflow-gaging stations (watersheds) in Texas with contributing drainage areas from 0.26 to 166 square miles. The analysis is
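
    A worked example of the two-parameter loss model as described above (parameter values are illustrative): rainfall first fills an initial abstraction Ia, after which a constant loss rate is subtracted each time step, and only the remainder becomes excess rainfall.

    ```python
    def excess_rainfall(rain, ia=1.0, constant_loss=0.25):
        """rain: depths per time step (e.g. inches); returns excess per step."""
        remaining_ia = ia
        excess = []
        for depth in rain:
            absorbed = min(depth, remaining_ia)   # fill the initial abstraction first
            remaining_ia -= absorbed
            depth -= absorbed
            excess.append(max(0.0, depth - constant_loss))  # constant-rate loss
        return excess

    storm = [0.2, 0.6, 0.9, 0.7, 0.3, 0.1]
    print(excess_rainfall(storm))   # the early pulses produce no runoff at all
    ```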

  20. Influence of Material Models Used in Finite Element Modeling on Cutting Forces in Machining

    NASA Astrophysics Data System (ADS)

    Jivishov, Vusal; Rzayev, Elchin

    2016-08-01

    Finite element modeling of machining is significantly influenced by various modeling input parameters such as boundary conditions, mesh size and distribution, and the properties of workpiece and tool materials. The flow stress model of the workpiece material is the most critical input parameter; however, it is very difficult to obtain experimental values under the same conditions as in machining operations. This paper analyses the influence of different material models for two steels (AISI 1045 and hardened AISI 52100) in finite element modelling of cutting forces. In this study, the machining process is scaled by a constant ratio of the variable depth of cut h to the cutting edge radius rβ. The simulation results are compared with experimental measurements. This comparison illustrates some of the capabilities and limitations of FEM modelling.

  1. Kinetic modeling of α-hydrogen abstractions from unsaturated and saturated oxygenate compounds by hydrogen atoms.

    PubMed

    Paraskevas, Paschalis D; Sabbe, Maarten K; Reyniers, Marie-Françoise; Papayannakos, Nikos G; Marin, Guy B

    2014-10-09

    Hydrogen-abstraction reactions play a significant role in thermal biomass conversion processes, as well as in gasification, pyrolysis, and combustion. In this work, a group additivity model is constructed that allows prediction of reaction rates and Arrhenius parameters of hydrogen abstractions by hydrogen atoms from alcohols, ethers, esters, peroxides, ketones, aldehydes, acids, and diketones over a broad temperature range (300-2000 K). A training set of 60 reactions was developed, with rate coefficients and Arrhenius parameters calculated by the CBS-QB3 method in the high-pressure limit with tunneling corrections using Eckart tunneling coefficients. From this set of reactions, 15 group additive values were derived for the forward and the reverse reaction, 4 referring to primary and 11 to secondary contributions. The accuracy of the model is validated against an ab initio and an experimental validation set of 19 and 21 reaction rates, respectively, showing that reaction rates can be predicted with a mean factor of deviation of 2 for the ab initio and 3 for the experimental values. Hence, this work illustrates that the developed group additive model can be reliably applied for the accurate prediction of the kinetics of α-hydrogen abstractions by hydrogen atoms from a broad range of oxygenates.
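
    A minimal sketch of how a group additivity scheme of this general kind assembles a rate coefficient (the reference parameters and group additive values below are invented placeholders, not the paper's fitted values): corrections for the groups surrounding the abstracted hydrogen are summed onto a reference reaction's Arrhenius parameters.

    ```python
    import math

    R = 8.314  # J/(mol K)
    REFERENCE = {"logA": 8.5, "Ea": 40e3}            # hypothetical reference reaction
    GAV = {                                           # hypothetical group corrections
        "O-(C)(H)":  {"dlogA": -0.2, "dEa": -6e3},
        "C-(C)(H)2": {"dlogA":  0.1, "dEa": -2e3},
    }

    def rate_constant(groups, T):
        """k(T) = 10**logA * exp(-Ea / RT) with GAV-corrected parameters."""
        logA = REFERENCE["logA"] + sum(GAV[g]["dlogA"] for g in groups)
        Ea = REFERENCE["Ea"] + sum(GAV[g]["dEa"] for g in groups)
        return 10 ** logA * math.exp(-Ea / (R * T))

    print(f"k(1000 K) = {rate_constant(['O-(C)(H)'], 1000.0):.3e}")
    ```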

  2. A hot-atom reaction kinetic model for H abstraction from solid surfaces

    NASA Astrophysics Data System (ADS)

    Kammler, Th.; Kolovos-Vellianitis, D.; Küppers, J.

    2000-07-01

    Measurements of the abstraction reaction kinetics in the interaction of gaseous H atoms with D adsorbed on metal and semiconductor surfaces, H(g) + D(ad)/S → products, have shown that the kinetics of the HD products are at variance with the expectations drawn from the operation of Eley-Rideal mechanisms. Furthermore, in addition to HD product molecules, D2 products were observed which are not expected in an Eley-Rideal scenario. Products and kinetics of abstraction reactions on Ni(100), Pt(111), and Cu(111) surfaces were recently explained by a random-walk model based solely on the operation of hot-atom mechanistic steps. Based on the same reaction scenario, the present work provides numerical solutions of the appropriate kinetic equations in the limit of the steady-state approximation for hot-atom species. It is shown that the HD and D2 product kinetics derived from global kinetic rate constants are the same as those obtained from local probabilities in the random-walk model. The rate constants of the hot-atom kinetics provide a background for the interpretation of measured data, which was missing up to now. Assuming that reconstruction affects the competition between hot-atom sticking and hot-atom reaction, the application of the present model to D abstraction from Cu(100) surfaces reproduces the essential characteristics of the experimentally determined kinetics.

  3. Ontological modelling of knowledge management for human-machine integrated design of ultra-precision grinding machine

    NASA Astrophysics Data System (ADS)

    Hong, Haibo; Yin, Yuehong; Chen, Xing

    2016-11-01

    Despite the rapid development of computer science and information technology, an efficient human-machine integrated enterprise information system for designing complex mechatronic products is still not fully accomplished, partly because of the inharmonious communication among collaborators. Therefore, one challenge in human-machine integration is how to establish an appropriate knowledge management (KM) model to support integration and sharing of heterogeneous product knowledge. Aiming at the diversity of design knowledge, this article proposes an ontology-based model to reach an unambiguous and normative representation of knowledge. First, an ontology-based human-machine integrated design framework is described, then corresponding ontologies and sub-ontologies are established according to different purposes and scopes. Second, a similarity calculation-based ontology integration method composed of ontology mapping and ontology merging is introduced. The ontology searching-based knowledge sharing method is then developed. Finally, a case of human-machine integrated design of a large ultra-precision grinding machine is used to demonstrate the effectiveness of the method.

  4. Rotary ATPases: models, machine elements and technical specifications.

    PubMed

    Stewart, Alastair G; Sobti, Meghna; Harvey, Richard P; Stock, Daniela

    2013-01-01

    Rotary ATPases are molecular rotary motors involved in biological energy conversion. They either synthesize or hydrolyze the universal biological energy carrier adenosine triphosphate. Recent work has elucidated the general architecture and subunit compositions of all three sub-types of rotary ATPases. Composite models of the intact F-, V- and A-type ATPases have been constructed by fitting high-resolution X-ray structures of individual subunits or sub-complexes into low-resolution electron densities of the intact enzymes derived from electron cryo-microscopy. Electron cryo-tomography has provided new insights into the supra-molecular arrangement of eukaryotic ATP synthases within mitochondria and mass-spectrometry has started to identify specifically bound lipids presumed to be essential for function. Taken together these molecular snapshots show that nano-scale rotary engines have much in common with basic design principles of man-made machines, from the function of individual "machine elements" to the requirement of the right "fuel" and "oil" for different types of motors.

  5. A Reference Model for Virtual Machine Launching Overhead

    SciTech Connect

    Wu, Hao; Ren, Shangping; Garzoglio, Gabriele; Timm, Steven; Bernabeu, Gerard; Chadwick, Keith; Noh, Seo-Young

    2016-07-01

    Cloud bursting is one of the key research topics in the cloud computing communities. A well designed cloud bursting module enables private clouds to automatically launch virtual machines (VMs) to public clouds when more resources are needed. One of the main challenges in developing a cloud bursting module is to decide when and where to launch a VM so that all resources are most effectively and efficiently utilized and the system performance is optimized. However, based on system operational data obtained from FermiCloud, a private cloud developed by the Fermi National Accelerator Laboratory for scientific workflows, the VM launching overhead is not a constant. It varies with physical resource utilization, such as CPU and I/O device utilization, at the time when a VM is launched. Hence, to make judicious decisions as to when and where a VM should be launched, a VM launching overhead reference model is needed. In this paper, we first develop a VM launching overhead reference model based on operational data we have obtained on FermiCloud. Second, we apply the developed reference model on FermiCloud and compare calculated VM launching overhead values based on the model with measured overhead values on FermiCloud. Our empirical results on FermiCloud indicate that the developed reference model is accurate. We believe that, with the guidance of the developed reference model, efficient resource allocation algorithms can be developed for the cloud bursting process to minimize operational cost and resource waste.

  6. Physiological model of motion analysis for machine vision

    NASA Astrophysics Data System (ADS)

    Young, Richard A.; Lesperance, Ronald M.

    1993-09-01

    We studied the spatio-temporal shape of 'receptive fields' of simple cells in the monkey visual cortex. Receptive fields are maps of the regions in space and time that affect a cell's electrical responses. Fields with no change in shape over time responded to all directions of motion; fields with changing shape over time responded to only some directions of motion. A Gaussian Derivative (GD) model fit these fields well, in a transformed variable space that aligned the centers and principal axes of the field and model in space-time. The model accounts for fields that vary in orientation, location, spatial scale, motion properties, and number of lobes. The model requires only ten parameters (the minimum possible) to describe fields in two dimensions of space and one of time. A difference-of-offset-Gaussians (DOOG) provides a plausible physiological means to form GD model fields. Because of its simplicity, the GD model improves the efficiency of machine vision systems for analyzing motion. An implementation produced robust local estimates of the direction and speed of moving objects in real scenes.
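
    A minimal sketch of the core idea, with invented scales rather than the paper's fitted parameters: a spatial Gaussian derivative whose centre drifts over time is tilted in space-time (its shape changes over time), so its linear response favours one direction of edge motion over the other.

    ```python
    import numpy as np

    x = np.linspace(-4, 4, 81)          # space
    t = np.linspace(-1, 1, 41)          # time, centred on 0
    sx, st, v_pref = 1.0, 0.4, 1.0      # spatial/temporal scales, preferred speed

    def gauss(u, s):
        return np.exp(-u**2 / (2 * s**2))

    # Field: dG/dx evaluated at x - v_pref * t, under a temporal Gaussian envelope.
    X, T = np.meshgrid(x, t)
    u = X - v_pref * T
    field = (-u / sx**2) * gauss(u, sx) * gauss(T, st)

    def response(v):
        """Magnitude of the field's inner product with an edge drifting at speed v."""
        stim = np.sign(X - v * T)
        return abs(float((field * stim).sum()))

    print(f"preferred direction: {response(+1.0):.1f}, opposite: {response(-1.0):.1f}")
    ```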

  7. CRADA final report for CRADA number Y-1293-0185: Process modelling and machining operations development

    SciTech Connect

    Arnold, J.B.; Kruse, K.L.; Stone, P.K.

    1996-09-16

    Lockheed Martin Energy Systems, Inc. and Ferro Corporation (formerly W. R. Grace, the original CRADA partner) have collaborated on an effort to develop techniques and processes for the cost-effective machining of ceramic components. The purpose of this effort was to develop a machining model, grinding equipment, and machining techniques for fabricating precision ceramic components. This project was designed to support Department of Energy (DOE) technical needs in manufacturing hard materials as well as enabling U.S. industry to maintain a position of leadership in the production of precision grinding machines and the machining of structural ceramic components.

  8. The abstract geometry modeling language (AgML): experience and road map toward eRHIC

    NASA Astrophysics Data System (ADS)

    Webb, Jason; Lauret, Jerome; Perevoztchikov, Victor

    2014-06-01

    The STAR experiment has adopted an Abstract Geometry Modeling Language (AgML) as the primary description of our geometry model. AgML establishes a level of abstraction, decoupling the definition of the detector from the software libraries used to create the concrete geometry model. Thus, AgML allows us to support both our legacy GEANT 3 simulation application and our ROOT/TGeo based reconstruction software from a single source, which is demonstrably self-consistent. While AgML was developed primarily as a tool to migrate away from our legacy FORTRAN-era geometry codes, it also provides a rich syntax geared towards the rapid development of detector models. AgML has been successfully employed by users to quickly develop and integrate the descriptions of several new detectors in the RHIC/STAR experiment, including the Forward GEM Tracker (FGT) and Heavy Flavor Tracker (HFT) upgrades installed in STAR for the 2012 and 2013 runs. AgML has furthermore been heavily utilized to study future upgrades to the STAR detector as it prepares for the eRHIC era. With its track record of practical use in a live experiment in mind, we present the status, lessons learned and future of the AgML language as well as our experience in bringing the code into our production and development environments. We will discuss the path toward eRHIC and pushing the current model to accommodate detector misalignment and high-precision physics.

  9. Omnibus risk assessment via accelerated failure time kernel machine modeling.

    PubMed

    Sinnott, Jennifer A; Cai, Tianxi

    2013-12-01

    Integrating genomic information with traditional clinical risk factors to improve the prediction of disease outcomes could profoundly change the practice of medicine. However, the large number of potential markers and possible complexity of the relationship between markers and disease make it difficult to construct accurate risk prediction models. Standard approaches for identifying important markers often rely on marginal associations or linearity assumptions and may not capture non-linear or interactive effects. In recent years, much work has been done to group genes into pathways and networks. Integrating such biological knowledge into statistical learning could potentially improve model interpretability and reliability. One effective approach is to employ a kernel machine (KM) framework, which can capture nonlinear effects if nonlinear kernels are used (Scholkopf and Smola, 2002; Liu et al., 2007, 2008). For survival outcomes, KM regression modeling and testing procedures have been derived under a proportional hazards (PH) assumption (Li and Luan, 2003; Cai, Tonini, and Lin, 2011). In this article, we derive testing and prediction methods for KM regression under the accelerated failure time (AFT) model, a useful alternative to the PH model. We approximate the null distribution of our test statistic using resampling procedures. When multiple kernels are of potential interest, it may be unclear in advance which kernel to use for testing and estimation. We propose a robust Omnibus Test that combines information across kernels, and an approach for selecting the best kernel for estimation. The methods are illustrated with an application in breast cancer. © 2013, The International Biometric Society.
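
    As a loose illustration of the kernel machine idea in an AFT-style setting only (this toy ignores censoring and the paper's testing machinery, and the data are synthetic): the log of the event time is modeled as a nonlinear function of markers, and a nonlinear kernel captures structure a linear one misses.

    ```python
    import numpy as np
    from sklearn.kernel_ridge import KernelRidge

    rng = np.random.default_rng(4)
    X = rng.normal(size=(300, 10))                               # marker values
    log_t = np.sin(X[:, 0]) * X[:, 1] + rng.normal(0, 0.2, 300)  # nonlinear signal

    for kernel in ("linear", "rbf"):
        km = KernelRidge(kernel=kernel, alpha=1.0).fit(X[:200], log_t[:200])
        resid = log_t[200:] - km.predict(X[200:])
        print(f"{kernel:6s} kernel, test MSE: {np.mean(resid**2):.3f}")
    ```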

  10. Model Reduction and Thermal Regulation by Model Predictive Control of a New Cylindricity Measuring Machine

    NASA Astrophysics Data System (ADS)

    Bouderbala, K.; Girault, M.; Videcoq, E.; Nouira, H.; Salgado, J.; Petit, D.

    2015-08-01

    This paper deals with the thermal regulation at the 10 level of a high-accuracy cylindricity measurement machine subject to thermal disturbances, generated by four heat sources (laser interferometers). A reduced model identified from simulated data using the modal identification method was associated with a model predictive controller (MPC). The control was applied to minimize the thermal perturbation effects on the principal organ of the cylindricity measurement machine. A parametric study of the penalization coefficient was conducted, which validated the robustness of the controller. The association of both reduced model and MPC allowed significant reduction of the effects of the disturbances on the temperature, a promising result for future applications.

  11. Fault Modeling of Extreme Scale Applications Using Machine Learning

    SciTech Connect

    Vishnu, Abhinav; Dam, Hubertus van; Tallent, Nathan R.; Kerbyson, Darren J.; Hoisie, Adolfy

    2016-05-01

    Faults are commonplace in large scale systems. These systems experience a variety of faults such as transient, permanent and intermittent. Multi-bit faults are typically not corrected by the hardware, resulting in an error. This paper attempts to answer an important question: Given a multi-bit fault in main memory, will it result in an application error — and hence a recovery algorithm should be invoked — or can it be safely ignored? We propose an application fault modeling methodology to answer this question. Given a fault signature (a set of attributes comprising system and application state), we use machine learning to create a model which predicts whether a multi-bit permanent/transient main memory fault will likely result in error. We present the design elements such as the fault injection methodology for covering important data structures, the application and system attributes which should be used for learning the model, the supervised learning algorithms (and potentially ensembles), and important metrics. Lastly, we use three applications — NWChem, LULESH and SVM — as examples for demonstrating the effectiveness of the proposed fault modeling methodology.
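
    A minimal sketch of the fault-signature idea (the attribute names, data, and toy labeling rule are invented stand-ins for real fault-injection results): a classifier learns, from attributes of system and application state, whether an injected multi-bit fault leads to an application error.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(5)
    n = 4000
    # Hypothetical signature: [in_critical_data_structure, bits_flipped,
    #                          time_since_last_checkpoint, memory_reuse_rate]
    X = np.column_stack([rng.integers(0, 2, n), rng.integers(1, 9, n),
                         rng.random(n), rng.random(n)])
    # Toy rule: faults in critical structures that get re-read usually cause errors.
    y = ((X[:, 0] == 1) & (X[:, 3] > 0.3)).astype(int)

    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
    ```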

  13. Derivative Free Optimization of Complex Systems with the Use of Statistical Machine Learning Models

    DTIC Science & Technology

    2015-09-12

    Report AFRL-AFOSR-VA-TR-2015-0278: Derivative Free Optimization of Complex Systems with the Use of Statistical Machine Learning Models. PI: Katya Scheinberg; grant FA9550-11-1-0239. Subject terms: optimization, derivative-free optimization, statistical machine learning.

  15. Data Visualization Saliency Model: A Tool for Evaluating Abstract Data Visualizations.

    PubMed

    Matzen, Laura E; Haass, Michael J; Divis, Kristin M; Wang, Zhiyuan; Wilson, Andrew T

    2017-08-29

    Evaluating the effectiveness of data visualizations is a challenging undertaking and often relies on one-off studies that test a visualization in the context of one specific task. Researchers across the fields of data science, visualization, and human-computer interaction are calling for foundational tools and principles that could be applied to assessing the effectiveness of data visualizations in a more rapid and generalizable manner. One possibility for such a tool is a model of visual saliency for data visualizations. Visual saliency models are typically based on the properties of the human visual cortex and predict which areas of a scene have visual features (e.g. color, luminance, edges) that are likely to draw a viewer's attention. While these models can accurately predict where viewers will look in a natural scene, they typically do not perform well for abstract data visualizations. In this paper, we discuss the reasons for the poor performance of existing saliency models when applied to data visualizations. We introduce the Data Visualization Saliency (DVS) model, a saliency model tailored to address some of these weaknesses, and we test the performance of the DVS model and existing saliency models by comparing the saliency maps produced by the models to eye tracking data obtained from human viewers. Finally, we describe how modified saliency models could be used as general tools for assessing the effectiveness of visualizations, including the strengths and weaknesses of this approach.

  16. Predictive Surface Roughness Model for End Milling of Machinable Glass Ceramic

    NASA Astrophysics Data System (ADS)

    Mohan Reddy, M.; Gorin, Alexander; Abou-El-Hossein, K. A.

    2011-02-01

    Machinable glass ceramic is an attractive advanced ceramic for producing high-accuracy miniaturized components for many applications in industries such as aerospace, electronics, biomedical, automotive, and environmental communications, owing to its wear resistance, high hardness, high compressive strength, good corrosion resistance, and excellent high-temperature properties. Many research works have been conducted in the last few years to investigate the performance of different machining operations when processing various advanced ceramics. Micro end-milling is one of the machining methods able to meet the demand for micro parts. Selecting proper machining parameters is important to obtain good surface finish when machining machinable glass ceramic. Therefore, this paper describes the development of a predictive model for the surface roughness of machinable glass ceramic in terms of speed and feed rate in micro end-milling.
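
    One common form for such a predictive model is a power law fitted in log space; the sketch below, with invented measurements (not the paper's data), fits Ra = C * v**a * f**b to speed and feed-rate observations by least squares.

    ```python
    import numpy as np

    v = np.array([30, 30, 60, 60, 90, 90], dtype=float)   # cutting speed (m/min)
    f = np.array([2, 6, 2, 6, 2, 6], dtype=float)         # feed rate (um/tooth)
    ra = np.array([0.42, 0.75, 0.35, 0.66, 0.30, 0.58])   # measured Ra (um)

    # log(Ra) = log(C) + a*log(v) + b*log(f): a linear least-squares problem.
    A = np.column_stack([np.ones_like(v), np.log(v), np.log(f)])
    coef, *_ = np.linalg.lstsq(A, np.log(ra), rcond=None)
    C, a, b = np.exp(coef[0]), coef[1], coef[2]
    print(f"Ra ~ {C:.3f} * v^{a:.2f} * f^{b:.2f}")
    print("predicted Ra at v=75, f=4:", C * 75**a * 4**b)
    ```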

  17. Predictive modeling in pediatric traumatic brain injury using machine learning.

    PubMed

    Chong, Shu-Ling; Liu, Nan; Barbier, Sylvaine; Ong, Marcus Eng Hock

    2015-03-17

    Pediatric traumatic brain injury (TBI) constitutes a significant burden and diagnostic challenge in the emergency department (ED). While large North American research networks have derived clinical prediction rules for the head injured child, these may not be generalizable to practices in countries with traditionally low rates of computed tomography (CT). We aim to study predictors for moderate to severe TBI in our ED population aged < 16 years. This was a retrospective case-control study based on data from a prospective surveillance head injury database. Cases were included if patients presented from 2006 to 2014 with moderate to severe TBI. Controls were age-matched head injured children from the registry, obtained in a 4:1 control-to-case ratio. These children remained well on diagnosis and follow-up. Demographics, history, and physical examination findings were analyzed, and patients were followed up for the clinical course and the outcome measures of death and neurosurgical intervention. To predict moderate to severe TBI, we built a machine learning (ML) model and a multivariable logistic regression model and compared their performances by means of Receiver Operating Characteristic (ROC) analysis. There were 39 cases and 156 age-matched controls. The following 4 predictors remained statistically significant after multivariable analysis: involvement in a road traffic accident, a history of loss of consciousness, vomiting, and signs of base of skull fracture. The logistic regression model was created with these 4 variables, while the ML model was built with 3 extra variables, namely the presence of seizure, confusion, and clinical signs of skull fracture. At the optimal cutoff scores, the ML method improved upon the logistic regression method with respect to the area under the ROC curve (0.98 vs 0.93), sensitivity (94.9% vs 82.1%), specificity (97.4% vs 92.3%), PPV (90.2% vs 72.7%), and NPV (98.7% vs 95.4%). In this study, we demonstrated the feasibility of using machine

  18. Modeling the Virtual Machine Launching Overhead under Fermicloud

    SciTech Connect

    Garzoglio, Gabriele; Wu, Hao; Ren, Shangping; Timm, Steven; Bernabeu, Gerard; Noh, Seo-Young

    2014-11-12

    FermiCloud is a private cloud developed by the Fermi National Accelerator Laboratory for scientific workflows. The Cloud Bursting module of FermiCloud enables it, when more computational resources are needed, to automatically launch virtual machines to available resources such as public clouds. One of the main challenges in developing the cloud bursting module is to decide when and where to launch a VM so that all resources are most effectively and efficiently utilized and the system performance is optimized. However, based on FermiCloud's system operational data, the VM launching overhead is not a constant. It varies with physical resource (CPU, memory, I/O device) utilization at the time a VM is launched. Hence, to make judicious decisions as to when and where a VM should be launched, a VM launch overhead reference model is needed. This paper develops a VM launch overhead reference model based on operational data obtained on FermiCloud and uses that model to guide the cloud bursting process.

  19. Color model for fruit quality inspection with machine vision

    NASA Astrophysics Data System (ADS)

    Rao, Xiuqin; Ying, Yibin

    2005-11-01

    A real-time machine vision system for fruit quality inspection was developed, consisting of rollers, an encoder, a lighting chamber, a TMS-7DSP CCD camera (PULNIX Inc.), a computer (P4 1.8G, 128M), and a set of grading controllers. The system measures fruit size and then sorts fruit into three groups by skin color: red, yellow, and green (immature). Color models for segmenting fruit from the background and classifying fruit into groups are discussed. The RGB color model was used to segment fruit from the background: an equation in the red and blue components divides the red-blue feature space into two zones, representing the background and the fruit respectively. The HSI color model was then introduced to classify fruit into the three groups; the hue component was used as the optimum feature for this purpose because the three groups overlap least on this component. When 100 navel oranges were classified by skin color, the total error was 2.1%.
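
    A minimal sketch of this two-stage colour pipeline (the segmentation rule, hue thresholds, and synthetic image are placeholders, not the paper's calibrated values): a linear rule on (R, B) selects fruit pixels, then the mean hue of those pixels assigns a colour class.

    ```python
    import colorsys
    import numpy as np

    rng = np.random.default_rng(6)
    img = rng.integers(0, 256, size=(64, 64, 3)).astype(float)  # stand-in image

    r, b = img[..., 0], img[..., 2]
    fruit_mask = r > 1.1 * b + 20          # hypothetical R-B segmentation rule

    # Mean hue (degrees) over the fruit pixels; colorsys expects RGB in [0, 1].
    hues = np.array([colorsys.rgb_to_hsv(*px)[0] * 360
                     for px in (img[fruit_mask] / 255.0)])
    mean_hue = hues.mean() if hues.size else float("nan")

    if mean_hue < 30:
        grade = "red"
    elif mean_hue < 70:
        grade = "yellow"
    else:
        grade = "green (immature)"
    print(f"mean hue {mean_hue:.1f} deg -> {grade}")
    ```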

  20. Simple models for estimating dementia severity using machine learning.

    PubMed

    Shankle, W R; Mania, S; Dick, M B; Pazzani, M J

    1998-01-01

    Estimating dementia severity using the Clinical Dementia Rating (CDR) Scale is a two-stage process that currently is costly and impractical in community settings, and at best has an interrater reliability of 80%. Because staging of dementia severity is economically and clinically important, we used Machine Learning (ML) algorithms with an Electronic Medical Record (EMR) to identify simpler models for estimating total CDR scores. Compared to a gold standard, which required 34 attributes to derive total CDR scores, ML algorithms identified models with as few as seven attributes. The classification accuracy varied with the algorithm used, with naïve Bayes giving the highest (76%). The mildly demented severity class was the only one with significantly reduced accuracy (59%). If one groups the severity classes into normal, very mild-to-mildly demented, and moderate-to-severely demented, then classification accuracies are clinically acceptable (85%). These simple models can be used in community settings where it is currently not possible to estimate dementia severity due to time and cost constraints.

  1. Machine learning and cosmological simulations - I. Semi-analytical models

    NASA Astrophysics Data System (ADS)

    Kamdar, Harshil M.; Turk, Matthew J.; Brunner, Robert J.

    2016-01-01

    We present a new exploratory framework to model galaxy formation and evolution in a hierarchical Universe by using machine learning (ML). Our motivations are two-fold: (1) presenting a new, promising technique to study galaxy formation, and (2) quantitatively analysing the extent of the influence of dark matter halo properties on galaxies in the backdrop of semi-analytical models (SAMs). We use the influential Millennium Simulation and the corresponding Munich SAM to train and test various sophisticated ML algorithms (k-Nearest Neighbors, decision trees, random forests, and extremely randomized trees). By using only essential dark matter halo physical properties for haloes of M > 10^12 M⊙ and a partial merger tree, our model predicts the hot gas mass, cold gas mass, bulge mass, total stellar mass, black hole mass and cooling radius at z = 0 for each central galaxy in a dark matter halo for the Millennium run. Our results provide a unique and powerful phenomenological framework to explore the galaxy-halo connection that is built upon SAMs and demonstrably place ML as a promising and a computationally efficient tool to study small-scale structure formation.

  2. Parameterizing Phrase Based Statistical Machine Translation Models: An Analytic Study

    ERIC Educational Resources Information Center

    Cer, Daniel

    2011-01-01

    The goal of this dissertation is to determine the best way to train a statistical machine translation system. I first develop a state-of-the-art machine translation system called Phrasal and then use it to examine a wide variety of potential learning algorithms and optimization criteria and arrive at two very surprising results. First, despite the…

  4. Access, Equity, and Opportunity. Women in Machining: A Model Program.

    ERIC Educational Resources Information Center

    Warner, Heather

    The Women in Machining (WIM) program is a Machine Action Project (MAP) initiative that was developed in response to a local skilled metalworking labor shortage, despite a virtual absence of women and people of color from area shops. The project identified post-war stereotypes and other barriers that must be addressed if women are to have an equal…

  5. Modeling Stochastic Kinetics of Molecular Machines at Multiple Levels: From Molecules to Modules

    PubMed Central

    Chowdhury, Debashish

    2013-01-01

    A molecular machine is either a single macromolecule or a macromolecular complex. In spite of the striking superficial similarities between these natural nanomachines and their man-made macroscopic counterparts, there are crucial differences. Molecular machines in a living cell operate stochastically in an isothermal environment far from thermodynamic equilibrium. In this mini-review we present a catalog of the molecular machines and an inventory of the essential toolbox for theoretically modeling these machines. The tool kits include (1) nonequilibrium statistical-physics techniques for modeling machines and machine-driven processes; and (2) statistical-inference methods for reverse engineering a functional machine from the empirical data. The cell is often likened to a microfactory in which the machineries are organized in modular fashion; each module consists of strongly coupled multiple machines, but different modules interact weakly with each other. This microfactory has its own automated supply chain and delivery system. Buoyed by the success achieved in modeling individual molecular machines, we advocate integration of these models in the near future to develop models of functional modules. A system-level description of the cell from the perspective of molecular machinery (the mechanome) is likely to emerge from further integrations that we envisage here. PMID:23746505

  7. Alternative Models of Service, Centralized Machine Operations. Phase II Report. Volume II.

    ERIC Educational Resources Information Center

    Technology Management Corp., Alexandria, VA.

    A study was conducted to determine if the centralization of playback machine operations for the national free library program would be feasible, economical, and desirable. An alternative model of playback machine services was constructed and compared with existing network operations considering both cost and service. The alternative model was…

  8. Modelling the sensitivity of river reaches to water abstraction: RAPHSA- a hydroecology tool for environmental managers

    NASA Astrophysics Data System (ADS)

    Klaar, Megan; Laize, Cedric; Maddock, Ian; Acreman, Mike; Tanner, Kath; Peet, Sarah

    2014-05-01

    A key challenge for environmental managers is the determination of environmental flows which allow a maximum yield of water resources to be taken from surface and sub-surface sources, whilst ensuring sufficient water remains in the environment to support biota and habitats. It has long been known that sensitivity to changes in water levels resulting from river and groundwater abstractions varies between rivers. Whilst assessment at the catchment scale is ideal for determining broad pressures on water resources and ecosystems, assessment of the sensitivity of reaches to changes in flow has previously been done on a site-by-site basis, often with the application of detailed but time-consuming techniques (e.g., PHABSIM). While this is appropriate for a limited number of sites, it is costly in terms of money and time and therefore not appropriate for application at the national level required by responsible licensing authorities. To address this need, the Environment Agency (England) is developing an operational tool to predict relationships between physical habitat and flow which may be applied by field staff to rapidly determine the sensitivity of physical habitat to flow alteration for use in water resource management planning. An initial model of river sensitivity to abstraction (defined as the change in physical habitat related to changes in river discharge) was developed using site characteristics and data from 66 individual PHABSIM surveys throughout the UK (Booker & Acreman, 2008). By applying a multivariate multiple linear regression analysis to the data to define habitat availability-flow curves, using resource intensity as predictor variables, the model (known as RAPHSA: Rapid Assessment of Physical Habitat Sensitivity to Abstraction) is able to take a risk-based approach to modeled certainty. Site-specific information gathered using desk-based work, or a variable amount of field work, can be used to predict the shape of the habitat-flow curves, with the

  9. A comparative study of slope failure prediction using logistic regression, support vector machine and least square support vector machine models

    NASA Astrophysics Data System (ADS)

    Zhou, Lim Yi; Shan, Fam Pei; Shimizu, Kunio; Imoto, Tomoaki; Lateh, Habibah; Peng, Koay Swee

    2017-08-01

    A comparative study of logistic regression, support vector machine (SVM) and least square support vector machine (LSSVM) models has been done to predict the slope failure (landslide) along East-West Highway (Gerik-Jeli). The effects of two monsoon seasons (southwest and northeast) that occur in Malaysia are considered in this study. Two related factors of occurrence of slope failure are included in this study: rainfall and underground water. For each method, two predictive models are constructed, namely SOUTHWEST and NORTHEAST models. Based on the results obtained from logistic regression models, two factors (rainfall and underground water level) contribute to the occurrence of slope failure. The accuracies of the three statistical models for two monsoon seasons are verified by using Relative Operating Characteristics curves. The validation results showed that all models produced prediction of high accuracy. For the results of SVM and LSSVM, the models using RBF kernel showed better prediction compared to the models using linear kernel. The comparative results showed that, for SOUTHWEST models, three statistical models have relatively similar performance. For NORTHEAST models, logistic regression has the best predictive efficiency whereas the SVM model has the second best predictive efficiency.
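
    A minimal sketch of the kernel comparison the abstract reports, on an invented "rainfall / groundwater -> slope failure" dataset (the data-generating rule is a placeholder): an SVM with linear versus RBF kernels, scored by ROC AUC as in the paper's validation.

    ```python
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(7)
    rain = rng.gamma(2.0, 30.0, 600)            # daily rainfall (mm)
    gw = rng.normal(5.0, 1.5, 600)              # depth to groundwater (m)
    X = np.column_stack([rain, gw])
    # Nonlinear toy rule: failures need both heavy rain and a shallow water table.
    y = ((rain > 70) & (gw < 5)).astype(int)

    for kernel in ("linear", "rbf"):
        svm = make_pipeline(StandardScaler(), SVC(kernel=kernel))
        auc = cross_val_score(svm, X, y, cv=5, scoring="roc_auc").mean()
        print(f"{kernel:6s} kernel AUC: {auc:.3f}")
    ```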

  10. AZOrange - High performance open source machine learning for QSAR modeling in a graphical programming environment

    PubMed Central

    2011-01-01

    Background Machine learning has a vast range of applications. In particular, advanced machine learning methods are routinely and increasingly used in quantitative structure activity relationship (QSAR) modeling. QSAR data sets often encompass tens of thousands of compounds and the size of proprietary, as well as public data sets, is rapidly growing. Hence, there is a demand for computationally efficient machine learning algorithms, easily available to researchers without extensive machine learning knowledge. In granting the scientific principles of transparency and reproducibility, Open Source solutions are increasingly acknowledged by regulatory authorities. Thus, an Open Source state-of-the-art high performance machine learning platform, interfacing multiple, customized machine learning algorithms for both graphical programming and scripting, to be used for large scale development of QSAR models of regulatory quality, is of great value to the QSAR community. Results This paper describes the implementation of the Open Source machine learning package AZOrange. AZOrange is specially developed to support batch generation of QSAR models in providing the full work flow of QSAR modeling, from descriptor calculation to automated model building, validation and selection. The automated work flow relies upon the customization of the machine learning algorithms and a generalized, automated model hyper-parameter selection process. Several high performance machine learning algorithms are interfaced for efficient data set specific selection of the statistical method, promoting model accuracy. Using the high performance machine learning algorithms of AZOrange does not require programming knowledge as flexible applications can be created, not only at a scripting level, but also in a graphical programming environment. Conclusions AZOrange is a step towards meeting the needs for an Open Source high performance machine learning platform, supporting the efficient development of

  11. An abstract cell model that describes the self-organization of cell function in living systems.

    PubMed

    Wolkenhauer, Olaf; Hofmeyr, Jan-Hendrik S

    2007-06-07

    The principal aim of systems biology is to search for general principles that govern living systems. We develop an abstract dynamic model of a cell, rooted in Mesarović and Takahara's general systems theory. In this conceptual framework the function of the cell is delineated by the dynamic processes it can realize. We abstract basic cellular processes, i.e., metabolism, signalling, gene expression, into a mapping and consider cell functions, i.e., cell differentiation, proliferation, etc. as processes that determine the basic cellular processes that realize a particular cell function. We then postulate the existence of a 'coordination principle' that determines cell function. These ideas are condensed into a theorem: If basic cellular processes for the control and regulation of cell functions are present, then the coordination of cell functions is realized autonomously from within the system. Inspired by Robert Rosen's notion of closure to efficient causation, introduced as a necessary condition for a natural system to be an organism, we show that for a mathematical model of a self-organizing cell the associated category must be cartesian closed. Although the semantics of our cell model differ from Rosen's (M,R)-systems, the proof of our theorem supports (in parts) Rosen's argument that living cells have non-simulable properties. Whereas models that form cartesian closed categories can capture self-organization (which is a, if not the, fundamental property of living systems), conventional computer simulations of these models (such as virtual cells) cannot. Simulations can mimic living systems, but they are not like living systems.

  12. Crystal structure representations for machine learning models of formation energies

    SciTech Connect

    Faber, Felix; Lindmaa, Alexander; von Lilienfeld, O. Anatole; Armiento, Rickard

    2015-04-20

    We introduce and evaluate a set of feature vector representations of crystal structures for machine learning (ML) models of formation energies of solids. ML models of atomization energies of organic molecules have been successful using a Coulomb matrix representation of the molecule. We consider three ways to generalize such representations to periodic systems: (i) a matrix where each element is related to the Ewald sum of the electrostatic interaction between two different atoms in the unit cell repeated over the lattice; (ii) an extended Coulomb-like matrix that takes into account a number of neighboring unit cells; and (iii) an ansatz that mimics the periodicity and the basic features of the elements in the Ewald sum matrix using a sine function of the crystal coordinates of the atoms. The representations are compared for a Laplacian kernel with Manhattan norm, trained to reproduce formation energies using a dataset of 3938 crystal structures obtained from the Materials Project. For training sets consisting of 3000 crystals, the generalization error in predicting formation energies of new structures corresponds to (i) 0.49, (ii) 0.64, and (iii) 0.37 eV/atom for the respective representations.
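
    For context, a short sketch of the molecular Coulomb matrix these representations generalize: M_ii = 0.5 * Z_i**2.4 on the diagonal, M_ij = Z_i * Z_j / |R_i - R_j| off it, summarized by sorted eigenvalues so the feature vector is invariant to atom ordering (the periodic variants replace 1/r with an Ewald sum or a sine-function ansatz).

    ```python
    import numpy as np

    def coulomb_eigenvalues(Z, R):
        """Z: atomic numbers (n,); R: Cartesian coordinates in Angstrom (n, 3)."""
        n = len(Z)
        M = np.zeros((n, n))
        for i in range(n):
            for j in range(n):
                if i == j:
                    M[i, j] = 0.5 * Z[i] ** 2.4
                else:
                    M[i, j] = Z[i] * Z[j] / np.linalg.norm(R[i] - R[j])
        return np.sort(np.linalg.eigvalsh(M))[::-1]

    # Water molecule as a worked example (approximate coordinates).
    Z = np.array([8, 1, 1])
    R = np.array([[0.000, 0.000, 0.000],
                  [0.757, 0.586, 0.000],
                  [-0.757, 0.586, 0.000]])
    print(coulomb_eigenvalues(Z, R))
    ```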

  13. Derivation of a model of the exciter of a brushless synchronous machine

    NASA Astrophysics Data System (ADS)

    Vleeshouwers, J. M.

    1992-06-01

    The modeling of the brushless exciter for a machine used in a wind turbine is addressed. A brushless exciter reduces the susceptibility of the machine to atmospheric conditions, and therefore the need for maintenance, compared to a synchronous machine equipped with brushes and sliprings. Furthermore, no large excitation winding power supply is needed. In large wind turbines which apply a synchronous machine, these advantages will be vital. A brushless exciter is usually constructed as a small synchronous machine with a rectifier. According to manufacturers, exciters are designed to function as a current transformer. The method developed in an earlier research project to model the synchronous machine with rectifier is concluded to be applicable to modeling the exciter, provided that the effect of resistances on the commutation may be neglected. This restricts the technique to modeling exciters of machines in the 100 kW range and larger. For smaller exciters the existing modeling approach is not applicable. Measurements of a small exciter (of a 37.5 kVA machine) show that higher harmonics in the exciter contribute significantly to its behavior. Based on experimental data, a simple linear first-order dynamic model was developed for the small exciter. The model parameters can be deduced from the steady-state current gain and a simple dynamic experiment.

  14. Mutation-selection dynamics and error threshold in an evolutionary model for Turing machines.

    PubMed

    Musso, Fabio; Feverati, Giovanni

    2012-01-01

    We investigate the mutation-selection dynamics for an evolutionary computation model based on Turing machines. The use of Turing machines allows for very simple mechanisms of code growth and code activation/inactivation through point mutations. To any value of the point mutation probability corresponds a maximum amount of active code that can be maintained by selection, and the Turing machines that reach it are said to be at the error threshold. Simulations with our model show that the Turing machine population evolves toward the error threshold. Mathematical descriptions of the model point out that this behaviour is due more to the mutation-selection dynamics than to the intrinsic nature of the Turing machines. This indicates that the result is much more general than the model considered here and could also play a role in biological evolution.

  15. Abstractive dissociation of oxygen over Al(111): a nonadiabatic quantum model.

    PubMed

    Katz, Gil; Kosloff, Ronnie; Zeiri, Yehuda

    2004-02-22

    The dissociation of oxygen on a clean aluminum surface is studied theoretically. A nonadiabatic quantum dynamical model is used, based on four electronically distinct potential energy surfaces characterized by the extent of charge transfer from the metal to the adsorbate. A flat surface approximation is used to reduce the computation complexity. The conservation of the helicopter angular momentum allows Boltzmann averaging of the outcome of the propagation of a three degrees of freedom wave function. The dissociation event is simulated by solving the time-dependent Schrödinger equation for a period of 30 femtoseconds. As a function of incident kinetic energy, the dissociation yield follows the experimental trend. An attempt at simulation employing only the lowest adiabatic surface failed, qualitatively disagreeing with both experiment and nonadiabatic calculations. The final products, adsorptive dissociation and abstractive dissociation, are obtained by carrying out a semiclassical molecular dynamics simulation with surface hopping which describes the back charge transfer from an oxygen atom negative ion to the surface. The final adsorbed oxygen pair distribution compares well with experiment. By running the dynamical events backward in time, a correlation is established between the products and the initial conditions which lead to their production. Qualitative agreement is thus obtained with recent experiments that show suppression of abstraction by rotational excitation.

  16. Experimental "evolutional machines": mathematical and experimental modeling of biological evolution

    NASA Astrophysics Data System (ADS)

    Brilkov, A. V.; Loginov, I. A.; Morozova, E. V.; Shuvaev, A. N.; Pechurkin, N. S.

    Experimentalists possess model systems of two major types for the study of evolution: continuous cultivation in the chemostat, and long-term development in closed laboratory microecosystems with several trophic structures. If evolutionary changes, or transfers from one steady state to another as a result of changing qualitative properties of the system, take place in such systems, the main characteristics of these evolutionary steps can be measured. So far this has not been realized methodologically, though a lot of data on the operation of both types of evolutionary machines has been collected. In our experiments with long-term continuous cultivation we used bacterial strains carrying on plasmids the cloned genes of bioluminescence and green fluorescent protein, whose expression level can be easily changed and controlled. In spite of the apparent kinetic diversity of evolutionary transfers in the two types of systems, studying them can reveal the general mechanisms characterizing the increase of the energy flow used by populations of the primary producer. According to the energy approach, at a spontaneous transfer from one steady state to another (e.g., in the process of microevolution, competition, or selection), heat dissipation, which characterizes the rate of entropy growth, should increase rather than decrease or remain steady as usually believed. The results of our observations of experimental evolution require further development of the thermodynamic theory of open and closed biological systems and further study of the general mechanisms of biological evolution.

  17. Guidelines for Developing and Reporting Machine Learning Predictive Models in Biomedical Research: A Multidisciplinary View.

    PubMed

    Luo, Wei; Phung, Dinh; Tran, Truyen; Gupta, Sunil; Rana, Santu; Karmakar, Chandan; Shilton, Alistair; Yearwood, John; Dimitrova, Nevenka; Ho, Tu Bao; Venkatesh, Svetha; Berk, Michael

    2016-12-16

    As more and more researchers are turning to big data for new opportunities of biomedical discoveries, machine learning models, as the backbone of big data analysis, are mentioned more often in biomedical journals. However, owing to the inherent complexity of machine learning methods, they are prone to misuse. Because of the flexibility in specifying machine learning models, the results are often insufficiently reported in research articles, hindering reliable assessment of model validity and consistent interpretation of model outputs. The objective was to attain a set of guidelines on the use of machine learning predictive models within clinical settings, to make sure the models are correctly applied and sufficiently reported so that true discoveries can be distinguished from random coincidence. A multidisciplinary panel of machine learning experts, clinicians, and traditional statisticians was interviewed, using an iterative process in accordance with the Delphi method. The process produced a set of guidelines that consists of (1) a list of reporting items to be included in a research article and (2) a set of practical sequential steps for developing predictive models. A set of guidelines was generated to enable correct application of machine learning models and consistent reporting of model specifications and results in biomedical research. We believe that such guidelines will accelerate the adoption of big data analysis, particularly with machine learning methods, in the biomedical research community.

  18. Guidelines for Developing and Reporting Machine Learning Predictive Models in Biomedical Research: A Multidisciplinary View

    PubMed Central

    2016-01-01

    Background As more and more researchers are turning to big data for new opportunities of biomedical discoveries, machine learning models, as the backbone of big data analysis, are mentioned more often in biomedical journals. However, owing to the inherent complexity of machine learning methods, they are prone to misuse. Because of the flexibility in specifying machine learning models, the results are often insufficiently reported in research articles, hindering reliable assessment of model validity and consistent interpretation of model outputs. Objective To attain a set of guidelines on the use of machine learning predictive models within clinical settings to make sure the models are correctly applied and sufficiently reported so that true discoveries can be distinguished from random coincidence. Methods A multidisciplinary panel of machine learning experts, clinicians, and traditional statisticians were interviewed, using an iterative process in accordance with the Delphi method. Results The process produced a set of guidelines that consists of (1) a list of reporting items to be included in a research article and (2) a set of practical sequential steps for developing predictive models. Conclusions A set of guidelines was generated to enable correct application of machine learning models and consistent reporting of model specifications and results in biomedical research. We believe that such guidelines will accelerate the adoption of big data analysis, particularly with machine learning methods, in the biomedical research community. PMID:27986644

  19. Complex hybrid models combining deterministic and machine learning components for numerical climate modeling and weather prediction.

    PubMed

    Krasnopolsky, Vladimir M; Fox-Rabinovitz, Michael S

    2006-03-01

    A new practical application of neural network (NN) techniques to environmental numerical modeling has been developed. Namely, a new type of numerical model, a complex hybrid environmental model based on a synergetic combination of deterministic and machine learning model components, has been introduced. Conceptual and practical possibilities of developing hybrid models are discussed in this paper for applications to climate modeling and weather prediction. The approach presented here uses NNs as a statistical or machine learning technique to develop highly accurate and fast emulations for time-consuming model physics components (model physics parameterizations). The NN emulations of the most time-consuming model physics components, short- and long-wave radiation parameterizations or full model radiation, presented in this paper are combined with the remaining deterministic components (like model dynamics) of the original complex environmental model, a general circulation model or global climate model (GCM), to constitute a hybrid GCM (HGCM). The parallel GCM and HGCM simulations produce very similar results, but the HGCM is significantly faster. The speed-up of model calculations opens the opportunity for model improvement. Examples of developed HGCMs illustrate the feasibility and efficiency of the new approach for modeling complex multidimensional interdisciplinary systems.
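
    The emulation idea can be illustrated in a few lines: train a cheap neural network on input-output pairs generated by an expensive physics routine, then call the network in place of the routine. The sketch below uses scikit-learn and a purely synthetic stand-in for a radiation parameterization; it is not the authors' GCM code.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(1)

        # Stand-in for an expensive parameterization: maps a column "state"
        # vector to a heating-rate-like scalar (purely synthetic here).
        def slow_physics(x):
            return np.sin(x).sum(axis=1) + 0.5 * np.cos(2 * x).sum(axis=1)

        X_train = rng.uniform(-2, 2, size=(20000, 10))
        y_train = slow_physics(X_train)

        # Train a cheap NN emulation of the parameterization.
        emulator = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=300)
        emulator.fit(X_train, y_train)

        X_test = rng.uniform(-2, 2, size=(1000, 10))
        err = np.abs(emulator.predict(X_test) - slow_physics(X_test)).mean()
        print(f"mean absolute emulation error: {err:.3f}")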

  20. Job shop scheduling model for non-identic machine with fixed delivery time to minimize tardiness

    NASA Astrophysics Data System (ADS)

    Kusuma, K. K.; Maruf, A.

    2016-02-01

    Scheduling problems involving non-identical machines with low utilization and fixed delivery times are frequent in the manufacturing industry. This paper proposes a mathematical model to minimize total tardiness for non-identical machines in a job shop environment. The model is formulated as an integer linear programming model and solved with a branch and bound algorithm. Fixed delivery times are the main constraint, and processing times for a job differ from machine to machine. The results of the proposed model show that the utilization of the production machines can be increased with minimal tardiness when fixed delivery times are used as a constraint.
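
    A positional integer-programming formulation of this kind of problem can be sketched as follows; this is a minimal toy instance using the PuLP modeller and its bundled CBC branch-and-bound solver, with illustrative data rather than the authors' model.

        import pulp

        # Toy instance: processing time p[j][m] of job j on (non-identical)
        # machine m, and a fixed delivery (due) time d[j] for each job.
        p = {0: {0: 4, 1: 6}, 1: {0: 3, 1: 2}, 2: {0: 5, 1: 4}}
        d = {0: 6, 1: 5, 2: 9}
        jobs, machines = list(p), [0, 1]
        slots = range(len(jobs))                   # positions on each machine
        M = sum(max(p[j].values()) for j in jobs)  # big-M constant

        prob = pulp.LpProblem("min_total_tardiness", pulp.LpMinimize)
        x = pulp.LpVariable.dicts("x", (jobs, machines, slots), cat="Binary")
        T = pulp.LpVariable.dicts("T", jobs, lowBound=0)
        prob += pulp.lpSum(T[j] for j in jobs)

        for j in jobs:   # each job takes exactly one slot on one machine
            prob += pulp.lpSum(x[j][m][k] for m in machines for k in slots) == 1
        for m in machines:
            for k in slots:   # each slot holds at most one job
                prob += pulp.lpSum(x[j][m][k] for j in jobs) <= 1

        # Tardiness: completion of job j is the work in its slot and all
        # earlier slots on its machine; the big-M relaxes unused (m, k).
        for j in jobs:
            for m in machines:
                for k in slots:
                    load = pulp.lpSum(p[i][m] * x[i][m][kk]
                                      for i in jobs for kk in slots if kk <= k)
                    prob += T[j] >= load - d[j] - M * (1 - x[j][m][k])

        prob.solve(pulp.PULP_CBC_CMD(msg=False))
        print("total tardiness:", pulp.value(prob.objective))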

  1. A Consistent Information Criterion for Support Vector Machines in Diverging Model Spaces.

    PubMed

    Zhang, Xiang; Wu, Yichao; Wang, Lan; Li, Runze

    Information criteria have been popularly used in model selection and proved to possess nice theoretical properties. For classification, Claeskens et al. (2008) proposed support vector machine information criterion for feature selection and provided encouraging numerical evidence. Yet no theoretical justification was given there. This work aims to fill the gap and to provide some theoretical justifications for support vector machine information criterion in both fixed and diverging model spaces. We first derive a uniform convergence rate for the support vector machine solution and then show that a modification of the support vector machine information criterion achieves model selection consistency even when the number of features diverges at an exponential rate of the sample size. This consistency result can be further applied to selecting the optimal tuning parameter for various penalized support vector machine methods. Finite-sample performance of the proposed information criterion is investigated using Monte Carlo studies and one real-world gene selection problem.
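
    The flavour of such a criterion can be conveyed with a generic sketch: fit a linear SVM on each candidate feature subset and select the subset minimizing an empirical-loss term plus a BIC-style complexity penalty. This is an illustrative stand-in, not the authors' exact criterion.

        import numpy as np
        from itertools import combinations
        from sklearn.svm import LinearSVC
        from sklearn.metrics import hinge_loss

        rng = np.random.default_rng(9)
        n, p = 200, 5
        X = rng.standard_normal((n, p))
        y = np.sign(X[:, 0] + 0.5 * X[:, 1] + 0.3 * rng.standard_normal(n))

        best = None
        for size in range(1, p + 1):
            for S in combinations(range(p), size):
                svm = LinearSVC().fit(X[:, S], y)
                loss = hinge_loss(y, svm.decision_function(X[:, S]))
                ic = n * loss + len(S) * np.log(n)  # fit + complexity penalty
                if best is None or ic < best[0]:
                    best = (ic, S)
        print("selected features:", best[1])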

  2. A Consistent Information Criterion for Support Vector Machines in Diverging Model Spaces

    PubMed Central

    Zhang, Xiang; Wu, Yichao; Wang, Lan; Li, Runze

    2015-01-01

    Information criteria have been popularly used in model selection and proved to possess nice theoretical properties. For classification, Claeskens et al. (2008) proposed support vector machine information criterion for feature selection and provided encouraging numerical evidence. Yet no theoretical justification was given there. This work aims to fill the gap and to provide some theoretical justifications for support vector machine information criterion in both fixed and diverging model spaces. We first derive a uniform convergence rate for the support vector machine solution and then show that a modification of the support vector machine information criterion achieves model selection consistency even when the number of features diverges at an exponential rate of the sample size. This consistency result can be further applied to selecting the optimal tuning parameter for various penalized support vector machine methods. Finite-sample performance of the proposed information criterion is investigated using Monte Carlo studies and one real-world gene selection problem. PMID:27239164

  3. DFT modeling of chemistry on the Z machine

    NASA Astrophysics Data System (ADS)

    Mattsson, Thomas

    2013-06-01

    Density Functional Theory (DFT) has proven remarkably accurate in predicting properties of matter under shock compression for a wide range of elements and compounds: from hydrogen to xenon via water. Materials where chemistry plays a role are of particular interest for many applications. For example, the deep interiors of Neptune, Uranus, and hundreds of similar exoplanets are composed of molecular ices of carbon, hydrogen, oxygen, and nitrogen at pressures of several hundred GPa and temperatures of many thousand Kelvin. High-quality thermophysical experimental data and high-fidelity simulations including chemical reactions are necessary to constrain planetary models over a large range of conditions. As examples of where chemical reactions are important, and as a demonstration of the high fidelity possible for these both structurally and chemically complex systems, we will discuss shock and re-shock of liquid carbon dioxide (CO2) in the range 100 to 800 GPa, shock compression of the hydrocarbon polymers polyethylene (PE) and poly(4-methyl-1-pentene) (PMP), and finally simulations of shock compression of glow discharge polymer (GDP) including the effects of doping with germanium. Experimental results from Sandia's Z machine have time and again validated the DFT simulations at extreme conditions, and the combination of experiment and DFT provides reliable data for evaluating existing and constructing future wide-range equation of state models for molecular compounds like CO2 and polymers like PE, PMP, and GDP. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Company, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  4. A Sustainable Model for Integrating Current Topics in Machine Learning Research into the Undergraduate Curriculum

    ERIC Educational Resources Information Center

    Georgiopoulos, M.; DeMara, R. F.; Gonzalez, A. J.; Wu, A. S.; Mollaghasemi, M.; Gelenbe, E.; Kysilka, M.; Secretan, J.; Sharma, C. A.; Alnsour, A. J.

    2009-01-01

    This paper presents an integrated research and teaching model that has resulted from an NSF-funded effort to introduce results of current Machine Learning research into the engineering and computer science curriculum at the University of Central Florida (UCF). While in-depth exposure to current topics in Machine Learning has traditionally occurred…

  6. Using financial risk measures for analyzing generalization performance of machine learning models.

    PubMed

    Takeda, Akiko; Kanamori, Takafumi

    2014-09-01

    We propose a unified machine learning model (UMLM) for two-class classification, regression and outlier (or novelty) detection via a robust optimization approach. The model embraces various machine learning models such as support vector machine-based and minimax probability machine-based classification and regression models. The unified framework makes it possible to compare and contrast existing learning models and to explain their differences and similarities. In this paper, after relating existing learning models to UMLM, we show some theoretical properties of UMLM. Concretely, we show an interpretation of UMLM as minimizing a well-known financial risk measure (worst-case value-at-risk (VaR) or conditional VaR), derive generalization bounds for UMLM using such a risk measure, and prove that solving problems of UMLM leads to estimators with the minimized generalization bounds. Those theoretical properties are applicable to related existing learning models.
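
    The risk measures involved are easy to state concretely. The sketch below computes empirical VaR and CVaR at level alpha from a sample of losses; in UMLM these quantities appear inside training objectives rather than as post-hoc statistics.

        import numpy as np

        def cvar(losses, alpha=0.95):
            """Value-at-risk and conditional VaR (mean loss in the worst tail)."""
            losses = np.asarray(losses)
            var = np.quantile(losses, alpha)   # VaR threshold at level alpha
            tail = losses[losses >= var]       # the worst (1 - alpha) tail
            return var, tail.mean()

        rng = np.random.default_rng(2)
        losses = rng.standard_normal(100000)   # toy loss distribution
        v, cv = cvar(losses, alpha=0.95)
        print(f"VaR(95%) = {v:.3f}, CVaR(95%) = {cv:.3f}")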

  7. Categorization of Sentence Types in Medical Abstracts

    PubMed Central

    McKnight, Larry; Srinivasan, Padmini

    2003-01-01

    This study evaluated the use of machine learning techniques in the classification of sentence type. 7253 structured abstracts and 204 unstructured abstracts of Randomized Controlled Trials from MedLINE were parsed into sentences and each sentence was labeled as one of four types (Introduction, Method, Result, or Conclusion). Support Vector Machine (SVM) and Linear Classifier models were generated and evaluated on cross-validated data. Treating sentences as a simple "bag of words", the SVM model had an average ROC area of 0.92. Adding a feature for relative sentence location improved performance markedly for some models, increasing the overall average ROC area to 0.95. Linear classifier performance was significantly worse than the SVM in all datasets. Using the SVM model trained on structured abstracts to predict unstructured abstracts yielded performance similar to that of models trained with unstructured abstracts in 3 of the 4 types. We conclude that classification of sentence type seems feasible within the domain of RCTs. Identification of sentence types may be helpful for providing context to end users or for other text summarization techniques.
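
    The baseline setup is straightforward to reproduce in outline: a bag-of-words representation feeding a linear SVM, with the relative sentence location available as one extra numeric feature. A minimal scikit-learn sketch on toy data (not the study's corpus) follows.

        from sklearn.pipeline import make_pipeline
        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.svm import LinearSVC

        # Toy labelled sentences standing in for parsed RCT abstracts.
        sentences = [
            "We conducted a randomized controlled trial of drug X.",
            "Patients were assigned to treatment or placebo groups.",
            "The treatment group showed a 20 percent improvement.",
            "These findings support the use of drug X in practice.",
        ]
        labels = ["Introduction", "Method", "Result", "Conclusion"]

        # Bag-of-words + linear SVM; the relative-location feature could be
        # appended as one extra numeric column before fitting.
        clf = make_pipeline(CountVectorizer(), LinearSVC())
        clf.fit(sentences, labels)
        print(clf.predict(["Outcomes improved significantly in the drug X arm."]))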

  8. A Model for Predicting Integrated Man-Machine System Reliability: Model Logic and Description

    DTIC Science & Technology

    1974-11-01

    A MODEL FOR PREDICTING INTEGRATED MAN-MACHINE SYSTEMS RELIABILITY, prepared for the Naval ... Department (UNCLASSIFIED). ... from 4 to 20 members was substantially modified so as to allow its use for system reliability and system availability predictive purposes. The resultant new model is ...

  9. On problems in defining abstract and metaphysical concepts--emergence of a new model.

    PubMed

    Nahod, Bruno; Nahod, Perina Vukša

    2014-12-01

    Basic anthropological terminology is the first project covering terms from the domain of the social sciences under the Croatian Special Field Terminology program (Struna). Problems that had been sporadically noticed, or whose existence could have been presumed, during the processing of terms mainly from technical fields and sciences have finally emerged in "anthropology". The principles of the General Theory of Terminology (GTT), which are followed in Struna, were put to a truly exacting test, and sometimes stretched beyond their limits, when applied to concepts that do not necessarily have referents in the physical world; namely, abstract and metaphysical concepts. We are currently developing a new terminographical model based on Idealized Cognitive Models (ICM), which will hopefully ensure a better cross-field implementation of various types of concepts and their relations. The goal of this paper is to introduce the theoretical bases of our model. Additionally, we present a pilot study from the series of experiments in which we are trying to investigate the nature of conceptual categorization in special languages and its proposed difference from categorization in general language.

  10. Comparison of two different surfaces for 3d model abstraction in support of remote sensing simulations

    SciTech Connect

    Pope, Paul A; Ranken, Doug M

    2010-01-01

    A method for abstracting a 3D model by shrinking a triangular mesh, defined upon a best-fitting ellipsoid surrounding the model, onto the model's surface has been previously described. This "shrinkwrap" process enables a semi-regular mesh to be defined upon an object's surface. This creates a useful data structure for conducting remote sensing simulations and image processing. However, using a best-fitting ellipsoid having a graticule-based tessellation to seed the shrinkwrap process suffers from a mesh which is too dense at the poles. To achieve a more regular mesh, the use of a best-fitting, subdivided icosahedron was tested. By subdividing each of the twenty facets of the icosahedron into regular triangles of a predetermined size, arbitrarily dense, highly regular starting meshes can be created. Comparisons of the meshes resulting from these two seed surfaces are described. Use of a best-fitting icosahedron-based mesh as the seed surface in the shrinkwrap process is preferable to using a best-fitting ellipsoid. The impact on remote sensing simulations, specifically the generation of synthetic imagery, is illustrated.
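
    The seed surface favoured here, a subdivided icosahedron, is simple to generate: split each triangular facet into four and push the new vertices onto the enclosing sphere (an ellipsoid is then obtained by scaling the axes). The sketch below is a generic construction, not the authors' code.

        import numpy as np

        def icosahedron():
            """Vertices and triangular faces of a unit icosahedron."""
            phi = (1 + 5 ** 0.5) / 2
            v = np.array([[-1, phi, 0], [1, phi, 0], [-1, -phi, 0], [1, -phi, 0],
                          [0, -1, phi], [0, 1, phi], [0, -1, -phi], [0, 1, -phi],
                          [phi, 0, -1], [phi, 0, 1], [-phi, 0, -1], [-phi, 0, 1]],
                         dtype=float)
            v /= np.linalg.norm(v, axis=1, keepdims=True)
            f = [(0, 11, 5), (0, 5, 1), (0, 1, 7), (0, 7, 10), (0, 10, 11),
                 (1, 5, 9), (5, 11, 4), (11, 10, 2), (10, 7, 6), (7, 1, 8),
                 (3, 9, 4), (3, 4, 2), (3, 2, 6), (3, 6, 8), (3, 8, 9),
                 (4, 9, 5), (2, 4, 11), (6, 2, 10), (8, 6, 7), (9, 8, 1)]
            return v, f

        def subdivide(v, faces):
            """Split each triangle into 4, projecting new points to the sphere."""
            verts, cache, out = list(v), {}, []
            def mid(a, b):
                key = tuple(sorted((a, b)))
                if key not in cache:
                    m = (verts[a] + verts[b]) / 2
                    verts.append(m / np.linalg.norm(m))
                    cache[key] = len(verts) - 1
                return cache[key]
            for a, b, c in faces:
                ab, bc, ca = mid(a, b), mid(b, c), mid(c, a)
                out += [(a, ab, ca), (b, bc, ab), (c, ca, bc), (ab, bc, ca)]
            return np.array(verts), out

        v, f = icosahedron()
        for _ in range(2):        # two rounds: 20 -> 80 -> 320 facets
            v, f = subdivide(v, f)
        print(len(v), "vertices,", len(f), "near-regular triangles")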

  11. (abstract) Modeling Protein Families and Human Genes: Hidden Markov Models and a Little Beyond

    NASA Technical Reports Server (NTRS)

    Baldi, Pierre

    1994-01-01

    We will first give a brief overview of Hidden Markov Models (HMMs) and their use in Computational Molecular Biology. In particular, we will describe a detailed application of HMMs to the G-Protein-Coupled-Receptor Superfamily. We will also describe a number of analytical results on HMMs that can be used in discrimination tests and database mining. We will then discuss the limitations of HMMs and some new directions of research. We will conclude with some recent results on the application of HMMs to human gene modeling and parsing.
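
    The core HMM computation referred to, scoring a sequence against a model, is the forward algorithm. A minimal numpy sketch for a toy two-state model follows; a real profile HMM would have many more states and protein-alphabet emissions.

        import numpy as np

        A = np.array([[0.9, 0.1],        # state transition probabilities
                      [0.2, 0.8]])
        B = np.array([[0.7, 0.2, 0.1],   # emission probabilities per state
                      [0.1, 0.3, 0.6]])
        pi = np.array([0.5, 0.5])        # initial state distribution

        def forward_likelihood(obs):
            """Likelihood of an observation sequence under the HMM."""
            alpha = pi * B[:, obs[0]]
            for o in obs[1:]:
                alpha = (alpha @ A) * B[:, o]
            return alpha.sum()

        print(forward_likelihood([0, 1, 2, 2, 0]))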

  13. Distributed model for electromechanical interaction in rotordynamics of cage rotor electrical machines

    NASA Astrophysics Data System (ADS)

    Laiho, Antti; Holopainen, Timo P.; Klinge, Paul; Arkkio, Antero

    2007-05-01

    In this work the effects of the electromechanical interaction on the rotordynamics and vibration characteristics of cage rotor electrical machines were considered. An eccentric rotor motion distorts the electromagnetic field in the air gap between the stator and rotor, inducing a total force, the unbalanced magnetic pull, exerted on the rotor. In this paper a low-order parametric model for the unbalanced magnetic pull is coupled with a three-dimensional finite element structural model of the electrical machine. The main contribution of the work is to present a computationally efficient electromechanical model for vibration analysis of cage rotor machines. In this model, the interaction between the mechanical and electromagnetic systems is distributed over the air gap of the machine. This enables the inclusion of rotor and stator deflections in the analysis and, thus, yields a more realistic prediction of the effects of the electromechanical interaction. The model was tested by implementing it for two electrical machines with nominal speeds close to one of the rotor bending critical speeds. Rated machine data were used in order to predict the effects of the electromechanical interaction on the vibration characteristics of the example machines.

  14. Estimation and forecasting of machine health condition using ARMA/GARCH model

    NASA Astrophysics Data System (ADS)

    Pham, Hong Thom; Yang, Bo-Suk

    2010-02-01

    This paper proposes a hybrid model of the autoregressive moving average (ARMA) and generalized autoregressive conditional heteroscedasticity (GARCH) models to estimate and forecast machine state based on vibration signals. The main idea in this study is to employ the linear ARMA model and the nonlinear GARCH model to explain the wear and fault condition of the machine, respectively. The successful outcomes of the ARMA/GARCH prediction model give a clear description of future machine states, which enhances the value of machine condition monitoring as well as condition-based maintenance in practical applications. The advantage of the proposed model is verified by empirical results from its application to a real system, a methane compressor in a petrochemical plant.
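
    Assuming the third-party Python arch package, an AR mean model with a GARCH(1,1) variance, roughly the linear/nonlinear split proposed here, can be fitted and used for forecasting in a few lines; the series below is synthetic, not compressor data.

        import numpy as np
        from arch import arch_model

        # Toy vibration-feature series standing in for real monitoring data.
        rng = np.random.default_rng(3)
        signal = np.cumsum(0.01 * rng.standard_normal(500)) \
                 + 0.1 * rng.standard_normal(500)

        # AR mean model (trend/wear part) with a GARCH(1,1) variance
        # (fault-related volatility part).
        am = arch_model(signal, mean="AR", lags=2, vol="GARCH", p=1, q=1)
        res = am.fit(disp="off")
        print(res.params)
        print(res.forecast(horizon=5).mean)   # 5-step-ahead mean forecast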

  15. Modelling of the dynamic behaviour of hard-to-machine alloys

    NASA Astrophysics Data System (ADS)

    Hokka, M.; Leemet, T.; Shrot, A.; Bäker, M.; Kuokkala, V.-T.

    2012-08-01

    Machining of titanium alloys and nickel-based superalloys can be difficult due to their excellent mechanical properties, which combine high strength, ductility, and excellent overall high-temperature performance. Machining of these alloys can, however, be improved by simulating the processes and optimizing the machining parameters. The simulations need accurate material models that predict the material behaviour in the range of strains and strain rates that occur in machining processes. In this work, the behaviour of the titanium 15-3-3-3 alloy and the nickel-based superalloy 625 were characterized in compression, and Johnson-Cook material model parameters were obtained from the results. For the titanium alloy, the adiabatic Johnson-Cook model predicts the softening of the material adequately, but the high strain hardening rate of Alloy 625 in the model prevents strain localization, and no shear bands were formed when using this model. For Alloy 625, the Johnson-Cook model was therefore modified to decrease the strain hardening rate at large strains. The models were used in simulations of orthogonal cutting of the material. For both materials, the models are able to predict the serrated chip formation frequently observed in the machining of these alloys. The machining forces also match relatively well, but some differences can be seen in the details of the experimentally obtained and simulated chip shapes.
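
    The Johnson-Cook flow stress referred to here has the closed form sigma = (A + B*eps^n)(1 + C*ln(epsdot/epsdot0))(1 - T*^m), with T* the homologous temperature. A direct implementation follows; the parameter values are illustrative, not the paper's fitted constants.

        import numpy as np

        def johnson_cook(strain, strain_rate, T, A, B, n, C, m,
                         eps0=1.0, T_room=293.0, T_melt=1900.0):
            """Johnson-Cook flow stress (MPa):
            sigma = (A + B*eps^n) * (1 + C*ln(rate/eps0)) * (1 - T*^m)."""
            T_star = (T - T_room) / (T_melt - T_room)  # homologous temperature
            return ((A + B * strain ** n)
                    * (1.0 + C * np.log(strain_rate / eps0))
                    * (1.0 - T_star ** m))

        # Illustrative (not fitted) parameters for a Ti alloy.
        print(johnson_cook(strain=0.5, strain_rate=1e3, T=600.0,
                           A=900, B=700, n=0.4, C=0.03, m=0.8))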

  16. The Application of Machine Learning to Student Modelling.

    ERIC Educational Resources Information Center

    Self, John

    1986-01-01

    Considers possibility of developing a computer tutor around an explicit concept-learning theory derived from machine learning techniques. Some problems with using the focusing (and similar) algorithms in this role are discussed and possible solutions are developed. Design for a guided discovery learning system for tutoring concepts is proposed.…

  17. Association Rule-based Predictive Model for Machine Failure in Industrial Internet of Things

    NASA Astrophysics Data System (ADS)

    Kwon, Jung-Hyok; Lee, Sol-Bee; Park, Jaehoon; Kim, Eui-Jik

    2017-09-01

    This paper proposes an association rule-based predictive model for machine failure in industrial Internet of things (IIoT), which can accurately predict the machine failure in real manufacturing environment by investigating the relationship between the cause and type of machine failure. To develop the predictive model, we consider three major steps: 1) binarization, 2) rule creation, 3) visualization. The binarization step translates item values in a dataset into one or zero, then the rule creation step creates association rules as IF-THEN structures using the Lattice model and Apriori algorithm. Finally, the created rules are visualized in various ways for users’ understanding. An experimental implementation was conducted using R Studio version 3.3.2. The results show that the proposed predictive model realistically predicts machine failure based on association rules.
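
    Although the study used R, the same binarization / Apriori / rule-creation pipeline can be sketched in Python with the mlxtend package; the items and records below are hypothetical.

        import pandas as pd
        from mlxtend.frequent_patterns import apriori, association_rules

        # Binarized failure records: each row is one event, each column a
        # one/zero item (hypothetical causes and failure types).
        data = pd.DataFrame({
            "overheat":     [1, 1, 0, 1, 0, 1],
            "vibration":    [0, 1, 1, 1, 0, 1],
            "bearing_fail": [0, 1, 0, 1, 0, 1],
            "motor_fail":   [1, 0, 0, 1, 0, 0],
        }).astype(bool)

        # Frequent itemsets, then IF-THEN rules above a confidence threshold.
        frequent = apriori(data, min_support=0.3, use_colnames=True)
        rules = association_rules(frequent, metric="confidence", min_threshold=0.7)
        print(rules[["antecedents", "consequents", "support", "confidence"]])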

  18. A Collaborative 20 Questions Model for Target Search with Human-Machine Interaction

    DTIC Science & Technology

    2013-05-01

    ... automatic target recognition (ATR) sensor. In the ATR setting, the objective of the human-machine interaction is to collaborate on estimating an unknown target location, where the human is repeatedly queried about the target location in order to improve ATR performance. We propose a 20 questions framework for studying the ...

  19. Improving protein–protein interactions prediction accuracy using protein evolutionary information and relevance vector machine model

    PubMed Central

    An, Ji‐Yong; Meng, Fan‐Rong; Chen, Xing; Yan, Gui‐Ying; Hu, Ji‐Pu

    2016-01-01

    Predicting protein-protein interactions (PPIs) is a challenging task and essential to the construction of protein interaction networks, which is important for facilitating our understanding of the mechanisms of biological systems. Although a number of high-throughput technologies have been proposed to predict PPIs, there are unavoidable shortcomings, including high cost, time intensity, and inherently high false positive rates. For these reasons, many computational methods have been proposed for predicting PPIs. However, the problem is still far from being solved. In this article, we propose a novel computational method called RVM-BiGP that combines the relevance vector machine (RVM) model and Bi-gram Probabilities (BiGP) for PPI detection from protein sequences. The major improvements include: (1) protein sequences are represented using the Bi-gram probabilities (BiGP) feature representation on a Position Specific Scoring Matrix (PSSM), in which the protein evolutionary information is contained; (2) to reduce the influence of noise, the Principal Component Analysis (PCA) method is used to reduce the dimension of the BiGP vector; (3) the powerful and robust Relevance Vector Machine (RVM) algorithm is used for classification. Five-fold cross-validation experiments executed on yeast and Helicobacter pylori datasets achieved very high accuracies of 94.57% and 90.57%, respectively. Experimental results are significantly better than those of previous methods. To further evaluate the proposed method, we compare it with the state-of-the-art support vector machine (SVM) classifier on the yeast dataset. The experimental results demonstrate that our RVM-BiGP method is significantly better than the SVM-based method. In addition, we achieved 97.15% accuracy on the imbalanced yeast dataset, which is higher than that on the balanced yeast dataset. The promising experimental results show the efficiency and robustness of the proposed method, which can be an automatic ...

  20. Study of the machining process of nano-electrical discharge machining based on combined atomistic-continuum modeling method

    NASA Astrophysics Data System (ADS)

    Zhang, Guojun; Guo, Jianwen; Ming, Wuyi; Huang, Yu; Shao, Xinyu; Zhang, Zhen

    2014-01-01

    Nano-electrical discharge machining (nano-EDM) is an attractive measure to manufacture parts with nanoscale precision, however, due to the incompleteness of its theories, the development of more advanced nano-EDM technology is impeded. In this paper, a computational simulation model combining the molecular dynamics simulation model and the two-temperature model for single discharge process in nano-EDM is constructed to study the machining mechanism of nano-EDM from the thermal point of view. The melting process is analyzed. Before the heated material gets melted, thermal compressive stress higher than 3 GPa is induced. After the material gets melted, the compressive stress gets relieved. The cooling and solidifying processes are also analyzed. It is found that during the cooling process of the melted material, tensile stress higher than 3 GPa arises, which leads to the disintegration of material. The formation of the white layer is attributed to the homogeneous solidification, and additionally, the resultant residual stress is analyzed.

  1. Machine Learning Models for Detection of Regions of High Model Form Uncertainty in RANS

    NASA Astrophysics Data System (ADS)

    Ling, Julia; Templeton, Jeremy

    2015-11-01

    Reynolds Averaged Navier Stokes (RANS) models are widely used because of their computational efficiency and ease-of-implementation. However, because they rely on inexact turbulence closures, they suffer from significant model form uncertainty in many flows. Many RANS models make use of the Boussinesq hypothesis, which assumes a non-negative, scalar eddy viscosity that provides a linear relation between the Reynolds stresses and the mean strain rate. In many flows of engineering relevance, this eddy viscosity assumption is violated, leading to inaccuracies in the RANS predictions. For example, in near wall regions, the Boussinesq hypothesis fails to capture the correct Reynolds stress anisotropy. In regions of flow curvature, the linear relation between Reynolds stresses and mean strain rate may be inaccurate. This model form uncertainty cannot be quantified by simply varying the model parameters, as it is rooted in the model structure itself. Machine learning models were developed to detect regions of high model form uncertainty. These machine learning models consisted of binary classifiers that predicted, on a point-by-point basis, whether or not key RANS assumptions were violated. These classifiers were trained and evaluated for their sensitivity, specificity, and generalizability on a database of canonical flows.
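
    The classifier stage is conventional supervised learning once point-wise features and violation labels are available. The sketch below trains a random forest on synthetic stand-in features and labels; the feature choice and labelling rule are assumptions for illustration, not the paper's.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(4)

        # Hypothetical point-wise flow features (e.g. wall distance, strain/
        # rotation ratio) and a 0/1 label marking eddy-viscosity violation.
        X = rng.uniform(0, 1, size=(5000, 2))
        y = ((X[:, 0] < 0.15) | (X[:, 1] > 0.8)).astype(int)  # synthetic rule

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                                  random_state=0)
        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        clf.fit(X_tr, y_tr)
        print("held-out accuracy:", clf.score(X_te, y_te))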

  2. What good are abstract and what-if models? Lessons from the Gaïa hypothesis.

    PubMed

    Dutreuil, Sébastien

    2014-08-01

    This article on the epistemology of computational models stems from an analysis of the Gaïa hypothesis (GH). It begins with James Kirchner's criticisms of the central computational model of GH: Daisyworld. Among other things, the model has been criticized for being too abstract, describing fictional entities (fictive daisies on an imaginary planet) and trying to answer counterfactual (what-if) questions (what would a planet look like if life had no influence on it?). For these reasons the model has been considered not testable, and therefore not legitimate in science, and in any case not very interesting since it explores non-actual issues. This criticism implicitly assumes that science should only be involved in the making of models that are "actual" (by opposition to what-if) and "specific" (by opposition to abstract). I challenge both of these criticisms in this article. First, by showing that although testability (understood as the comparison of model output with empirical data) is an important procedure for explanatory models, there are plenty of models that are not testable. The fact that these are not testable (in this restricted sense) has nothing to do with their being "abstract" or "what-if" but with their being predictive models. Secondly, I argue that "abstract" and "what-if" models aim at (respectable) epistemic purposes distinct from those pursued by "actual and specific" models. Abstract models are used to propose how-possibly explanations or to pursue theorizing. What-if models are used to attribute causal or explanatory power to a variable of interest. The fact that they aim at different epistemic goals entails that it may not be accurate to consider the choice between different kinds of model as a "strategy".

  3. A comparison of machine learning and Bayesian modelling for molecular serotyping.

    PubMed

    Newton, Richard; Wernisch, Lorenz

    2017-08-11

    Streptococcus pneumoniae is a human pathogen that is a major cause of infant mortality. Identifying the pneumococcal serotype is an important step in monitoring the impact of vaccines used to protect against disease. Genomic microarrays provide an effective method for molecular serotyping. Previously we developed an empirical Bayesian model for the classification of serotypes from a molecular serotyping array. With only a few samples available, a model-driven approach was the only option. In the meantime, several thousand samples have been made available to us, providing an opportunity to investigate serotype classification by machine learning methods, which could complement the Bayesian model. We compare the performance of the original Bayesian model with two machine learning algorithms: Gradient Boosting Machines and Random Forests. We present our results as an example of a generic strategy whereby a preliminary probabilistic model is complemented or replaced by a machine learning classifier once enough data are available. Despite the availability of thousands of serotyping arrays, a problem encountered when applying machine learning methods is the lack of training data containing mixtures of serotypes, due to the large number of possible combinations. Most of the available training data comprises samples with only a single serotype. To overcome the lack of training data we implemented an iterative analysis, creating artificial training data of serotype mixtures by combining raw data from single-serotype arrays. With the enhanced training set the machine learning algorithms outperform the original Bayesian model. However, for serotypes currently lacking sufficient training data the best performing implementation was a combination of the results of the Bayesian model and the Gradient Boosting Machine. As well as being an effective method for classifying biological data, machine learning can also be used as an efficient method for revealing subtle biological ...
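
    The mixture-augmentation trick can be sketched independently of the real array data: combine single-serotype signal vectors (here hypothetical, via an element-wise maximum) to manufacture mixed training samples, then train one presence/absence classifier per serotype.

        import numpy as np
        from sklearn.ensemble import GradientBoostingClassifier

        rng = np.random.default_rng(5)
        N_PROBES, N_SEROTYPES = 30, 4

        # Hypothetical single-serotype array profiles (one signature each).
        signatures = rng.uniform(0.5, 1.0, (N_SEROTYPES, N_PROBES)) \
                     * (rng.random((N_SEROTYPES, N_PROBES)) < 0.3)

        def sample(serotypes):
            """Noisy array signal for a (possibly mixed) sample."""
            return signatures[list(serotypes)].max(axis=0) \
                   + 0.05 * rng.standard_normal(N_PROBES)

        # Build training data of singles plus artificial pairwise mixtures,
        # then train one presence/absence classifier per serotype.
        X, Y = [], []
        for _ in range(2000):
            k = rng.choice([1, 2])
            members = tuple(rng.choice(N_SEROTYPES, size=k, replace=False))
            X.append(sample(members))
            Y.append([int(s in members) for s in range(N_SEROTYPES)])
        X, Y = np.array(X), np.array(Y)

        models = [GradientBoostingClassifier().fit(X, Y[:, s])
                  for s in range(N_SEROTYPES)]
        test = sample((0, 2))
        print([int(m.predict(test[None, :])[0]) for m in models])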

  4. Abstraction and Consolidation

    ERIC Educational Resources Information Center

    Monaghan, John; Ozmantar, Mehmet Fatih

    2004-01-01

    What is involved in consolidating a new mathematical abstraction? This paper examines the work of one student who was working on a task designed to consolidate two recently constructed absolute function abstractions. The study adopts an activity theoretic model of abstraction in context. Selected protocol data are presented. The initial state of…

  5. The Modelling Of Basing Holes Machining Of Automatically Replaceable Cubical Units For Reconfigurable Manufacturing Systems With Low-Waste Production

    NASA Astrophysics Data System (ADS)

    Bobrovskij, N. M.; Levashkin, D. G.; Bobrovskij, I. N.; Melnikov, P. A.; Lukyanov, A. A.

    2017-01-01

    The article is devoted to solving the problem of basing-hole machining accuracy for automatically replaceable cubical units (carriers) in reconfigurable manufacturing systems with low-waste production (RMS). Results of modeling the machining of the basing holes of automatically replaceable units on the basis of dimensional-chain analysis are presented. The influence of machining parameters on the accuracy of the center-to-center spacing between basing holes is shown. A mathematical model of the accuracy of carrier basing-hole machining is offered.

  6. A stochastic model for the cell formation problem considering machine reliability

    NASA Astrophysics Data System (ADS)

    Esmailnezhad, Bahman; Fattahi, Parviz; Kheirkhah, Amir Saman

    2015-03-01

    This paper presents a new mathematical model for solving the cell formation problem in cellular manufacturing systems, where inter-arrival times, processing times, and machine breakdown times are probabilistic. The objective function maximizes the number of operations of each part with a higher arrival rate within one cell. Because a queue forms behind each machine, queuing theory is used to formulate the model. To solve the model, two metaheuristic algorithms, a modified particle swarm optimization and a genetic algorithm, are proposed. For the generation of initial solutions in these algorithms, a new heuristic method is developed which always creates feasible solutions. Both metaheuristic algorithms are compared against global solutions obtained from Lingo software's branch and bound (B&B). Also, a statistical method is used for the comparison of the solutions of the two metaheuristic algorithms. The results of numerical examples indicate that considering machine breakdown has a significant effect on the block structures of machine-part matrices.

  7. ABSTRACTION OF INFORMATION FROM 2- AND 3-DIMENSIONAL PORFLOW MODELS INTO A 1-D GOLDSIM MODEL - 11404

    SciTech Connect

    Taylor, G.; Hiergesell, R.

    2010-11-16

    The Savannah River National Laboratory has developed a 'hybrid' approach to Performance Assessment modeling which has been used for a number of Performance Assessments. This hybrid approach uses a multi-dimensional modeling platform (PorFlow) to develop deterministic flow fields and perform contaminant transport. The GoldSim modeling platform is used to develop the sensitivity and uncertainty analyses. Because these codes perform complementary tasks, it is essential that they produce very similar results for the deterministic cases. This paper discusses two very different waste forms, one with no engineered barriers and one with engineered barriers, each of which presents different challenges to the abstraction of data. The hybrid approach to Performance Assessment modeling used at the SRNL uses a 2-D unsaturated zone (UZ) and a 3-D saturated zone (SZ) model in the PorFlow modeling platform. The UZ model consists of the waste zone and the unsaturated zone between the waste zone and the water table. The SZ model consists of source cells beneath the waste form out to the points of interest. Both models contain 'buffer' cells so that modeling domain boundaries do not adversely affect the calculation. The information pipeline between the two models is the contaminant flux. The domain contaminant flux from the UZ model, typically in units of moles (or Curies) per year, is used as a boundary condition for the source cells in the SZ. The GoldSim modeling component of the hybrid approach is an integrated UZ-SZ model. The model is a 1-D representation of the SZ and typically 1-D in the UZ, but, depending on the waste form being analyzed, it may contain pseudo-2-D elements as discussed below. A waste form at the Savannah River Site (SRS) which has no engineered barriers is commonly referred to as a slit trench. A slit trench, as its name implies, is an unlined trench, typically 6 m deep, 6 m wide, and 200 m long. Low level waste consisting of soil, debris, rubble, wood ...

  8. Human factors model concerning the man-machine interface of mining crewstations

    NASA Technical Reports Server (NTRS)

    Rider, James P.; Unger, Richard L.

    1989-01-01

    The U.S. Bureau of Mines is developing a computer model to analyze the human factors aspects of mining machine operator compartments. The model will be used as a research tool and as a design aid. It will have the capability to perform the following: simulated anthropometric or reach assessment, visibility analysis, illumination analysis, structural analysis of the protective canopy, operator fatigue analysis, and computation of an ingress-egress rating. The model will make extensive use of graphics to simplify data input and output. Two-dimensional orthographic projections of the machine and its operator compartment are digitized and the data rebuilt into a three-dimensional representation of the mining machine. Anthropometric data from either an individual or any size population may be used. The model is intended for use by equipment manufacturers and mining companies during initial design work on new machines. In addition to its use in machine design, the model should prove helpful as an accident investigation tool and for determining the effects of machine modifications made in the field on the critical areas of visibility and control reachability.

  9. Vacation model for Markov machine repair problem with two heterogeneous unreliable servers and threshold recovery

    NASA Astrophysics Data System (ADS)

    Jain, Madhu; Meena, Rakesh Kumar

    2017-06-01

    A Markov model of a multi-component machining system comprising two unreliable heterogeneous servers and mixed standby support has been studied. The repair of broken-down machines is done on the basis of a bi-level threshold policy for the activation of the servers. A server returns to render repair when the pre-specified workload of failed machines has built up. The first (second) repairman turns on only when a workload of N1 (N2) failed machines has accumulated in the system. Both servers may go on vacation when all the machines are in good condition and there are no pending repair jobs for the repairmen. The Runge-Kutta method is implemented to solve the set of governing equations used to formulate the Markov model. Various system metrics, including the mean queue length, machine availability, and throughput, are derived to determine the performance of the machining system. To demonstrate the computational tractability of the present investigation, a numerical illustration is provided. A cost function is also constructed to determine the optimal repair rate of the server by minimizing the expected cost incurred on the system. A hybrid soft computing method is considered to develop an adaptive neuro-fuzzy inference system (ANFIS). The validation of the numerical results obtained by the Runge-Kutta approach is also facilitated by computational results generated by ANFIS.
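
    The numerical core, Runge-Kutta integration of the Chapman-Kolmogorov equations, can be shown on a stripped-down machine-repair chain with a single repairman (a toy stand-in for the paper's two-server threshold model).

        import numpy as np

        # Toy machine-repair Markov chain: state i = number of failed
        # machines (0..N), failure rate lam per working machine, repair mu.
        N, lam, mu = 5, 0.3, 1.0

        def dp_dt(p):
            """Forward (Chapman-Kolmogorov) equations dp/dt."""
            dp = np.zeros_like(p)
            for i in range(N + 1):
                rate_fail = (N - i) * lam         # a working machine fails
                rate_fix = mu if i > 0 else 0.0   # a failed machine is fixed
                dp[i] -= (rate_fail + rate_fix) * p[i]
                if i < N:
                    dp[i + 1] += rate_fail * p[i]
                if i > 0:
                    dp[i - 1] += rate_fix * p[i]
            return dp

        def rk4_step(p, h):
            k1 = dp_dt(p)
            k2 = dp_dt(p + h / 2 * k1)
            k3 = dp_dt(p + h / 2 * k2)
            k4 = dp_dt(p + h * k3)
            return p + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

        p = np.eye(N + 1)[0]                      # start with all machines up
        for _ in range(2000):
            p = rk4_step(p, 0.01)
        print("steady-state probabilities:", np.round(p, 4))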

  10. Utilisation of Modeling, Stress Analysis, Kinematics Optimisation, and Hypothetical Estimation of Lifetime in the Design Process of Mobile Working Machines

    NASA Astrophysics Data System (ADS)

    Izrael, Gregor; Bukoveczky, Juraj; Gulan, Ladislav

    2011-12-01

    The contribution deals with several methods used in the construction process, such as model creation, verification of the technical parameters of the machine, and life estimation of selected modules. The life cycle of mobile working machines, and of their load-carrying modules respectively, is determined by investigation and subsequent processing of results gained from service measurements. The machine life claimed by a producer is only relative, because the life of these machines depends not only on how a particular machine is operated but also on the state of the material handled by the machine and, to a great extent, on the operators, their observance of safety regulations, and the prescribed working conditions.

  11. Temperature Control of Fimbriation Circuit Switch in Uropathogenic Escherichia coli: Quantitative Analysis via Automated Model Abstraction

    PubMed Central

    Kuwahara, Hiroyuki; Myers, Chris J.; Samoilov, Michael S.

    2010-01-01

    Uropathogenic Escherichia coli (UPEC) represent the predominant cause of urinary tract infections (UTIs). A key UPEC molecular virulence mechanism is type 1 fimbriae, whose expression is controlled by the orientation of an invertible chromosomal DNA element—the fim switch. Temperature has been shown to act as a major regulator of fim switching behavior and is overall an important indicator as well as functional feature of many urologic diseases, including UPEC host-pathogen interaction dynamics. Given this panoptic physiological role of temperature during UTI progression and notable empirical challenges to its direct in vivo studies, in silico modeling of corresponding biochemical and biophysical mechanisms essential to UPEC pathogenicity may significantly aid our understanding of the underlying disease processes. However, rigorous computational analysis of biological systems, such as the fim switch temperature control circuit, has hitherto presented a notoriously demanding problem due to both the substantial complexity of the gene regulatory networks involved as well as their often characteristically discrete and stochastic dynamics. To address these issues, we have developed an approach that enables automated multiscale abstraction of biological system descriptions based on reaction kinetics. Implemented as a computational tool, this method has allowed us to efficiently analyze the modular organization and behavior of the E. coli fimbriation switch circuit at different temperature settings, thus facilitating new insights into this mode of UPEC molecular virulence regulation. In particular, our results suggest that, with respect to its role in shutting down fimbriae expression, the primary function of FimB recombinase may be to effect a controlled down-regulation (rather than increase) of the ON-to-OFF fim switching rate via temperature-dependent suppression of competing dynamics mediated by recombinase FimE. Our computational analysis further implies that this down

  12. A Wavelet Support Vector Machine Combination Model for Singapore Tourist Arrival to Malaysia

    NASA Astrophysics Data System (ADS)

    Rafidah, A.; Shabri, Ani; Nurulhuda, A.; Suhaila, Y.

    2017-08-01

    In this study, a wavelet support vector machine model (WSVM) is proposed and applied to the prediction of the monthly Singapore tourist arrivals time series. The WSVM model is a combination of wavelet analysis and the support vector machine (SVM). The study has two parts: in the first part we compare kernel functions, and in the second part we compare the developed model with the single SVM model. The results showed that the linear kernel function performs better than the RBF kernel, and that WSVM outperforms the single SVM model in forecasting monthly Singapore tourist arrivals to Malaysia.
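
    A minimal wavelet-plus-SVM pipeline can be sketched with PyWavelets and scikit-learn: denoise the series by discarding the finest detail coefficients, then fit a support vector regressor on lagged values. The series and settings below are illustrative, not the study's data.

        import numpy as np
        import pywt
        from sklearn.svm import SVR

        # Toy monthly-arrivals series: trend + seasonality + noise.
        rng = np.random.default_rng(6)
        t = np.arange(144)
        series = 50 + 0.2 * t + 10 * np.sin(2 * np.pi * t / 12) \
                 + 2 * rng.standard_normal(t.size)

        # Wavelet denoising: drop the finest detail coefficients.
        coeffs = pywt.wavedec(series, "db4", level=2)
        coeffs[-1] = np.zeros_like(coeffs[-1])
        smooth = pywt.waverec(coeffs, "db4")[:series.size]

        # One-step-ahead SVR forecast on lagged values of the smooth series.
        LAGS = 12
        X = np.array([smooth[i:i + LAGS] for i in range(len(smooth) - LAGS)])
        y = smooth[LAGS:]
        model = SVR(kernel="linear").fit(X[:-1], y[:-1])
        print("next-month forecast:", model.predict(X[-1:])[0])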

  13. Thermal Error Modeling of a Machine Tool Using Data Mining Scheme

    NASA Astrophysics Data System (ADS)

    Wang, Kun-Chieh; Tseng, Pai-Chang

    In this paper a knowledge discovery technique is used to build an effective and transparent mathematical thermal error model for machine tools. Our proposed thermal error modeling methodology (called KRL) integrates the schemes of K-means theory (KM), rough-set theory (RS), and the linear regression model (LR). First, to explore the machine tool's thermal behavior, an integrated system is designed to simultaneously measure the temperature ascents at selected characteristic points and the thermal deformations at the spindle nose under suitable real machining conditions. Second, the obtained data are classified by the KM method, further reduced by the RS scheme, and a linear thermal error model is established by the LR technique. To evaluate the performance of our proposed model, an adaptive neuro-fuzzy inference system (ANFIS) thermal error model is introduced for comparison. Finally, a verification experiment is carried out and the results reveal that the proposed KRL model is effective in predicting thermal behavior in machine tools. Our proposed KRL model is transparent, easily understood by users, and can be easily programmed or modified for different machining conditions.
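
    The KM and LR stages are easy to illustrate; the rough-set reduction is replaced here by simply keeping one representative sensor per cluster, an admitted simplification, and the data are synthetic.

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(7)

        # Hypothetical temperature ascents at 8 sensor points (rows: samples)
        # and the measured spindle thermal deformation (microns).
        temps = rng.uniform(0, 10, size=(200, 8))
        deform = 2.0 * temps[:, 0] + 1.5 * temps[:, 4] \
                 + 0.5 * rng.standard_normal(200)

        # KM step: group correlated sensors, keep one per group.
        km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(temps.T)
        reps = [np.where(km.labels_ == c)[0][0] for c in range(3)]

        # LR step: linear thermal error model on the reduced variables.
        lr = LinearRegression().fit(temps[:, reps], deform)
        print("representative sensors:", reps,
              "R^2:", lr.score(temps[:, reps], deform))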

  14. Precision holding prediction model for moving joint surfaces of large machine tool

    NASA Astrophysics Data System (ADS)

    Wang, Mulan; Chen, Xuanyu; Ding, Wenzheng; Xu, Kaiyun

    2017-01-01

    In large machine tools, plastic guide rails are more and more widely used because of their good mechanical properties. Based on the actual operating conditions of the machine tool, this paper analyzes the precision holding performance of the main bearing surface of a large machine tool with a moving plastic guide rail. The precision holding performance of the plastic sliding guide rail is studied in detail from several aspects, such as the lubrication conditions, the operating parameters of the machine tool, and the material properties. A precision holding model of the moving joint surface of the plastic-coated guide rail is established. At the same time, experimental research on the accuracy of the guide rail is carried out, which verifies the validity of the theoretical model.

  15. Quantum turing machine and brain model represented by Fock space

    NASA Astrophysics Data System (ADS)

    Iriyama, Satoshi; Ohya, Masanori

    2016-05-01

    The adaptive dynamics is known as a new mathematics for treating complex phenomena, for example chaos, quantum algorithms and psychological phenomena. In this paper, we briefly review the notion of adaptive dynamics, and explain the definition of the generalized Turing machine (GTM) and the recognition process represented by the Fock space. Moreover, we show that there exists a quantum channel, described by the GKSL master equation, that achieves the Chaos Amplifier used in [M. Ohya and I. V. Volovich, J. Opt. B 5(6) (2003) 639; M. Ohya and I. V. Volovich, Rep. Math. Phys. 52(1) (2003) 25].

  16. Spindle Thermal Error Optimization Modeling of a Five-axis Machine Tool

    NASA Astrophysics Data System (ADS)

    Guo, Qianjian; Fan, Shuo; Xu, Rufeng; Cheng, Xiang; Zhao, Guoyong; Yang, Jianguo

    2017-05-01

    Aiming at the problems of low machining accuracy and uncontrollable thermal errors of NC machine tools, spindle thermal error measurement, modeling and compensation for a two-turntable five-axis machine tool are researched. Measurement experiments on heat sources and thermal errors are carried out, and the GRA (grey relational analysis) method is introduced into the selection of temperature variables used for thermal error modeling. In order to analyze the influence of different heat sources on spindle thermal errors, an ANN (artificial neural network) model is presented, and the ABC (artificial bee colony) algorithm is introduced to train the link weights of the ANN; a new ABC-NN (artificial bee colony-based neural network) modeling method is proposed and used in the prediction of spindle thermal errors. In order to test the prediction performance of the ABC-NN model, an experimental system is developed, and the prediction results of LSR (least squares regression), ANN and ABC-NN are compared with the measured spindle thermal errors. Experimental results show that the prediction accuracy of the ABC-NN model is higher than that of LSR and ANN, and the residual error is smaller than 3 μm, so the new modeling method is feasible. The proposed research provides instruction for compensating thermal errors and improving the machining accuracy of NC machine tools.
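
    The GRA step used for temperature-variable selection reduces to a short computation: normalize each candidate sensor series against the thermal-error reference, form grey relational coefficients, and rank sensors by their mean (the grade). A generic sketch with synthetic series follows.

        import numpy as np

        def grey_relational_grades(reference, candidates, rho=0.5):
            """Grey relational grade of each candidate series vs. reference."""
            def norm(x):
                return (x - x.min()) / (x.max() - x.min())
            ref = norm(reference)
            grades = []
            for c in candidates:
                delta = np.abs(ref - norm(c))
                coeff = (delta.min() + rho * delta.max()) \
                        / (delta + rho * delta.max())
                grades.append(coeff.mean())
            return np.array(grades)

        rng = np.random.default_rng(8)
        thermal_error = np.cumsum(rng.random(100))        # reference series
        sensors = [thermal_error + rng.standard_normal(100) * s
                   for s in (0.5, 2.0, 5.0)]              # candidate sensors
        print(grey_relational_grades(thermal_error, sensors))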

  18. Modelling of internal architecture of kinesin nanomotor as a machine language.

    PubMed

    Khataee, H R; Ibrahim, M Y

    2012-09-01

    Kinesin is a protein-based natural nanomotor that transports molecular cargoes within cells by walking along microtubules. The kinesin nanomotor is considered a bio-nanoagent which is able to sense the cell through its sensors (i.e. its heads and tail), make decisions internally and perform actions on the cell through its actuator (i.e. its motor domain). The study maps the agent-based architectural model of the internal decision-making process of the kinesin nanomotor to a machine language using an automata algorithm. The applied automata algorithm receives the internal agent-based architectural model of the kinesin nanomotor as a deterministic finite automaton (DFA) model and generates a regular machine language. The generated regular machine language was accepted by the architectural DFA model of the nanomotor and was also in good agreement with its natural behaviour. The internal agent-based architectural model of the kinesin nanomotor indicates the degree of autonomy and intelligence of the nanomotor's interactions with its cell. Thus, our developed regular machine language can model the degree of autonomy and intelligence of kinesin nanomotor interactions with its cell as a language. Modelling the internal architectures of autonomous and intelligent bio-nanosystems as machine languages can lay the foundation for the concept of bio-nanoswarms and the next phases of bio-nanorobotic systems development.
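
    The DFA formalism invoked here is compact to state in code. The sketch below defines a toy automaton whose states and input symbols are hypothetical stand-ins for the nanomotor's sensing/decision/stepping phases; it is not the authors' actual transition table.

        # Minimal deterministic finite automaton (DFA): a transition table,
        # a start state, and a set of accepting states.
        DELTA = {
            ("sense", "atp"):    "decide",
            ("decide", "bind"):  "step",
            ("step", "release"): "sense",
        }
        START, ACCEPT = "sense", {"sense"}

        def accepts(symbols):
            state = START
            for s in symbols:
                state = DELTA.get((state, s))
                if state is None:        # undefined transition: reject
                    return False
            return state in ACCEPT

        print(accepts(["atp", "bind", "release"]))   # True
        print(accepts(["atp", "release"]))           # False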

  19. Evaluation of machining methods for trabecular metal implants in a rabbit intramedullary osseointegration model.

    PubMed

    Deglurkar, Mukund; Davy, Dwight T; Stewart, Matthew; Goldberg, Victor M; Welter, Jean F

    2007-02-01

    Implant success is dependent in part on the interaction of the implant with the surrounding tissues. Porous tantalum implants (Trabecular Metal, TM) have been shown to have excellent osseointegration. Machining this material to complex shapes with close tolerances is difficult because of its open structure and the ductile nature of metallic tantalum. Conventional machining results in occlusion of most of the surface porosity by the smearing of soft metal. This study compared TM samples finished by three processing techniques: conventional machining, electrical discharge machining, and nonmachined, "as-prepared." The TM samples were studied in a rabbit distal femoral intramedullary osseointegration model and in cell culture. We assessed the effects of these machining methods at 4, 8, and 12 weeks after implant placement. The finishing technique had a profound effect on the physical presentation of the implant interface: conventional machining reduced surface porosity to 30% compared to bulk porosities in the 70% range. Bone ongrowth was similar in all groups, while bone ingrowth was significantly greater in the nonmachined samples. The resulting mechanical properties of the bone implant-interface were similar in all three groups, with only interface stiffness and interface shear modulus being significantly higher in the machined samples.

  20. Machine learning algorithms outperform conventional regression models in predicting development of hepatocellular carcinoma.

    PubMed

    Singal, Amit G; Mukherjee, Ashin; Elmunzer, B Joseph; Higgins, Peter D R; Lok, Anna S; Zhu, Ji; Marrero, Jorge A; Waljee, Akbar K

    2013-11-01

    Predictive models for hepatocellular carcinoma (HCC) have been limited by modest accuracy and lack of validation. Machine-learning algorithms offer a novel methodology, which may improve HCC risk prognostication among patients with cirrhosis. Our study's aim was to develop and compare predictive models for HCC development among cirrhotic patients, using conventional regression analysis and machine-learning algorithms. We enrolled 442 patients with Child A or B cirrhosis at the University of Michigan between January 2004 and September 2006 (UM cohort) and prospectively followed them until HCC development, liver transplantation, death, or study termination. Regression analysis and machine-learning algorithms were used to construct predictive models for HCC development, which were tested on an independent validation cohort from the Hepatitis C Antiviral Long-term Treatment against Cirrhosis (HALT-C) Trial. Both models were also compared with the previously published HALT-C model. Discrimination was assessed using receiver operating characteristic curve analysis, and diagnostic accuracy was assessed with net reclassification improvement and integrated discrimination improvement statistics. After a median follow-up of 3.5 years, 41 patients developed HCC. The UM regression model had a c-statistic of 0.61 (95% confidence interval (CI) 0.56-0.67), whereas the machine-learning algorithm had a c-statistic of 0.64 (95% CI 0.60-0.69) in the validation cohort. The HALT-C model had a c-statistic of 0.60 (95% CI 0.50-0.70) in the validation cohort and was outperformed by the machine-learning algorithm. The machine-learning algorithm had significantly better diagnostic accuracy as assessed by net reclassification improvement (P<0.001) and integrated discrimination improvement (P=0.04). Machine-learning algorithms improve the accuracy of risk stratifying patients with cirrhosis and can be used to accurately identify patients at high-risk for developing HCC.
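
    As a hedged sketch of the kind of validation comparison reported above, the snippet below contrasts the c-statistic (area under the ROC curve) of a conventional regression model and a generic machine learning model on a held-out set; the data is simulated, not the UM or HALT-C cohorts.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Simulated cohort with a rare outcome, loosely mimicking HCC development.
X, y = make_classification(n_samples=800, n_features=12, weights=[0.9],
                           random_state=0)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.4, random_state=0)

for name, model in [("regression", LogisticRegression(max_iter=1000)),
                    ("machine learning", RandomForestClassifier(random_state=0))]:
    prob = model.fit(X_tr, y_tr).predict_proba(X_va)[:, 1]
    print(f"{name}: c-statistic = {roc_auc_score(y_va, prob):.2f}")
```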

  1. Machine Learning Algorithms Outperform Conventional Regression Models in Predicting Development of Hepatocellular Carcinoma

    PubMed Central

    Singal, Amit G.; Mukherjee, Ashin; Elmunzer, B. Joseph; Higgins, Peter DR; Lok, Anna S.; Zhu, Ji; Marrero, Jorge A; Waljee, Akbar K

    2015-01-01

    Background Predictive models for hepatocellular carcinoma (HCC) have been limited by modest accuracy and lack of validation. Machine learning algorithms offer a novel methodology, which may improve HCC risk prognostication among patients with cirrhosis. Our study's aim was to develop and compare predictive models for HCC development among cirrhotic patients, using conventional regression analysis and machine learning algorithms. Methods We enrolled 442 patients with Child A or B cirrhosis at the University of Michigan between January 2004 and September 2006 (UM cohort) and prospectively followed them until HCC development, liver transplantation, death, or study termination. Regression analysis and machine learning algorithms were used to construct predictive models for HCC development, which were tested on an independent validation cohort from the Hepatitis C Antiviral Long-term Treatment against Cirrhosis (HALT-C) Trial. Both models were also compared to the previously published HALT-C model. Discrimination was assessed using receiver operating characteristic curve analysis and diagnostic accuracy was assessed with net reclassification improvement and integrated discrimination improvement statistics. Results After a median follow-up of 3.5 years, 41 patients developed HCC. The UM regression model had a c-statistic of 0.61 (95%CI 0.56-0.67), whereas the machine learning algorithm had a c-statistic of 0.64 (95%CI 0.60–0.69) in the validation cohort. The machine learning algorithm had significantly better diagnostic accuracy as assessed by net reclassification improvement (p<0.001) and integrated discrimination improvement (p=0.04). The HALT-C model had a c-statistic of 0.60 (95%CI 0.50-0.70) in the validation cohort and was outperformed by the machine learning algorithm (p=0.047). Conclusion Machine learning algorithms improve the accuracy of risk stratifying patients with cirrhosis and can be used to accurately identify patients at high-risk for developing HCC

  2. Slip-line modeling of machining with a rounded-edge tool—Part I: new model and theory

    NASA Astrophysics Data System (ADS)

    Fang, N.

    2003-04-01

    The effect of tool edge roundness attracts growing attention from the international machining research community due to the ever-accelerating application of precision, super-precision, micro-, and nano-machining technologies in a wide variety of modern industries. A new slip-line model for machining with a rounded-edge tool and its associated hodograph are proposed in this paper. The model consists of 27 slip-line sub-regions, each sub-region having its own physical meaning. It is demonstrated that the model simultaneously takes into account nine effects, such as the shear-zone effect and the size effect, which commonly occur in machining. Eight groups of machining parameters, such as the ploughing (parasitic or non-cutting) force and the chip up-curl radius, can be simultaneously predicted from the model. Furthermore, the model incorporates eight slip-line models previously developed for machining during the last six decades as special cases. An additional special case that involves a parallel-sided shear zone can also be derived from the new model. A mathematical formulation of the model is established based on Dewhurst and Collins's (1973) matrix technique for numerically solving slip-line problems. A purely analytical equation is proposed to predict the thickness of the primary shear zone. This equation is also employed to predict the shear strain-rate in the primary shear zone.

  3. Technical note: Evaluation of three machine learning models for surface ocean CO2 mapping

    NASA Astrophysics Data System (ADS)

    Zeng, Jiye; Matsunaga, Tsuneo; Saigusa, Nobuko; Shirai, Tomoko; Nakaoka, Shin-ichiro; Tan, Zheng-Hong

    2017-04-01

    Reconstructing surface ocean CO2 from scarce measurements plays an important role in estimating oceanic CO2 uptake. There are varying degrees of difference among the 14 models included in the Surface Ocean CO2 Mapping (SOCOM) inter-comparison initiative, in which five models used neural networks. This investigation evaluates the two neural network approaches used in SOCOM, self-organizing maps and feedforward neural networks, and introduces a third machine learning model, the support vector machine, for ocean CO2 mapping. This technical note provides a practical guide to selecting among the models.
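
    As an illustration of the support vector machine approach introduced here, the sketch below fits an SVM regressor to synthetic stand-ins for predictor fields; the predictor choices and hyper-parameters are assumptions, not SOCOM settings.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 3))            # toy columns: SST, SSS, log(chl)
pco2 = 360 + 10 * X[:, 0] - 5 * X[:, 1] + rng.normal(0, 2, 500)

# Scale inputs, then fit an RBF support vector regressor.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=1.0))
model.fit(X[:400], pco2[:400])
print(model.score(X[400:], pco2[400:]))  # R^2 on held-out points
```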

  4. An Introduction to Topic Modeling as an Unsupervised Machine Learning Way to Organize Text Information

    ERIC Educational Resources Information Center

    Snyder, Robin M.

    2015-01-01

    The field of topic modeling has become increasingly important over the past few years. Topic modeling is an unsupervised machine learning way to organize text (or image or DNA, etc.) information such that related pieces of text can be identified. This paper/session will present/discuss the current state of topic modeling, why it is important, and…
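
    For readers new to the technique, here is a self-contained toy example of unsupervised topic modeling; latent Dirichlet allocation is used as one common algorithm, chosen for illustration rather than taken from the paper.

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

docs = ["the cat sat on the mat", "dogs and cats are pets",
        "stocks fell as markets closed", "investors sold shares today"]

# Bag-of-words counts, then LDA with two latent topics.
vectorizer = CountVectorizer(stop_words="english").fit(docs)
X = vectorizer.transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[::-1][:3]]
    print(f"topic {k}: {top}")           # related words cluster together
```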

  5. A Framework for Modeling Human-Machine Interactions

    NASA Technical Reports Server (NTRS)

    Shafto, Michael G.; Rosekind, Mark R. (Technical Monitor)

    1996-01-01

    Modern automated flight-control systems employ a variety of different behaviors, or modes, for managing the flight. While developments in cockpit automation have resulted in workload reduction and economical advantages, they have also given rise to an ill-defined class of human-machine problems, sometimes referred to as 'automation surprises'. Our interest in applying formal methods for describing human-computer interaction stems from our ongoing research on cockpit automation. In this area of aeronautical human factors, there is much concern about how flight crews interact with automated flight-control systems, so that the likelihood of making errors, in particular mode-errors, is minimized and the consequences of such errors are contained. The goal of the ongoing research on formal methods in this context is: (1) to develop a framework for describing human interaction with control systems; (2) to formally categorize such automation surprises; and (3) to develop tests for identification of these categories early in the specification phase of a new human-machine system.

  7. A hybrid analytical model for open-circuit field calculation of multilayer interior permanent magnet machines

    NASA Astrophysics Data System (ADS)

    Zhang, Zhen; Xia, Changliang; Yan, Yan; Geng, Qiang; Shi, Tingna

    2017-08-01

    Due to the complicated rotor structure and nonlinear saturation of rotor bridges, it is difficult to build a fast and accurate analytical field calculation model for multilayer interior permanent magnet (IPM) machines. In this paper, a hybrid analytical model suitable for the open-circuit field calculation of multilayer IPM machines is proposed by coupling the magnetic equivalent circuit (MEC) method and the subdomain technique. In the proposed analytical model, the rotor magnetic field is calculated by the MEC method based on Kirchhoff's laws, while the field in the stator slot, slot opening and air gap is calculated by the subdomain technique based on Maxwell's equations. To solve the whole field distribution of multilayer IPM machines, coupled boundary conditions on the rotor surface are deduced to join the rotor MEC with the analytical field solution of the stator slot, slot opening and air gap. The hybrid analytical model can be used to calculate the open-circuit air-gap field distribution, back electromotive force (EMF) and cogging torque of multilayer IPM machines. Compared with finite element analysis (FEA), it offers faster modeling, lower computational cost and shorter computing time, while achieving comparable accuracy. The analytical model is helpful and applicable for the open-circuit field calculation of multilayer IPM machines with any size and pole/slot number combination.

  8. Learning with Technology: Video Modeling with Concrete-Representational-Abstract Sequencing for Students with Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Yakubova, Gulnoza; Hughes, Elizabeth M.; Shinaberry, Megan

    2016-01-01

    The purpose of this study was to determine the effectiveness of a video modeling intervention with concrete-representational-abstract instructional sequence in teaching mathematics concepts to students with autism spectrum disorder (ASD). A multiple baseline across skills design of single-case experimental methodology was used to determine the…

  9. Teaching Subtraction and Multiplication with Regrouping Using the Concrete-Representational-Abstract Sequence and Strategic Instruction Model

    ERIC Educational Resources Information Center

    Flores, Margaret M.; Hinton, Vanessa; Strozier, Shaunita D.

    2014-01-01

    Based on Common Core Standards (2010), mathematics interventions should emphasize conceptual understanding of numbers and operations as well as fluency. For students at risk for failure, the concrete-representational-abstract (CRA) sequence and the Strategic Instruction Model (SIM) have been shown effective in teaching computation with an emphasis…

  11. Product Quality Modelling Based on Incremental Support Vector Machine

    NASA Astrophysics Data System (ADS)

    Wang, J.; Zhang, W.; Qin, B.; Shi, W.

    2012-05-01

    Incremental support vector machine (ISVM) is a learning method developed in recent years on the foundations of statistical learning theory. It is suitable for problems with sequentially arriving field data and has been widely used for product quality prediction and production process optimization. However, traditional ISVM learning does not consider the quality of the incremental data, which may contain noise and redundant samples; this can affect learning speed and accuracy to a great extent. In order to improve SVM training speed and accuracy, a modified incremental support vector machine (MISVM) is proposed in this paper. Firstly, the margin vectors are extracted according to the Karush-Kuhn-Tucker (KKT) condition; then the distance from each margin vector to the final decision hyperplane is calculated to evaluate its importance, and margin vectors whose distance exceeds a specified value are removed; finally, the original SVs and the remaining margin vectors are used to update the SVM. The proposed MISVM can not only eliminate unimportant samples such as noise, but also preserve the important ones. The MISVM was tested on two public data sets and one field data set of zinc coating weight in strip hot-dip galvanizing, and the results show that the proposed method can improve prediction accuracy and training speed effectively. Furthermore, it can provide the necessary decision support and analysis tools for automatic control of product quality, and can also be extended to other process industries, such as chemical and manufacturing processes.
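
    A simplified sketch of the update scheme described above, under stated assumptions: scikit-learn's batch SVC stands in for a true incremental solver, the absolute decision-function value serves as the distance-to-hyperplane measure, and the retention threshold is invented.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=600, n_features=5, random_state=1)
(X0, y0), (X1, y1) = (X[:300], y[:300]), (X[300:], y[300:])  # initial + batch

svm = SVC(kernel="linear").fit(X0, y0)

# Points near the hyperplane (small |decision function|) are candidate
# margin vectors; distant batch points are discarded before the update.
dist = np.abs(svm.decision_function(X1))
keep = dist < 1.5

X_up = np.vstack([svm.support_vectors_, X1[keep]])
y_up = np.concatenate([y0[svm.support_], y1[keep]])
svm_updated = SVC(kernel="linear").fit(X_up, y_up)
print(f"kept {keep.sum()} of {len(X1)} new points for the update")
```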

  12. Interpreting linear support vector machine models with heat map molecule coloring.

    PubMed

    Rosenbaum, Lars; Hinselmann, Georg; Jahn, Andreas; Zell, Andreas

    2011-03-25

    Model-based virtual screening plays an important role in the early drug discovery stage. The outcomes of high-throughput screenings are a valuable source for machine learning algorithms to infer such models. Besides a strong performance, the interpretability of a machine learning model is a desired property to guide the optimization of a compound in later drug discovery stages. Linear support vector machines have been shown to achieve convincing performance on large-scale data sets. The goal of this study is to present a heat map molecule coloring technique to interpret linear support vector machine models. Based on the weights of a linear model, the visualization approach colors each atom and bond of a compound according to its importance for activity. We evaluated our approach on a toxicity data set, a chromosome aberration data set, and the maximum unbiased validation data sets. The experiments show that our method sensibly visualizes structure-property and structure-activity relationships of a linear support vector machine model. The coloring of ligands in the binding pocket of several crystal structures of a maximum unbiased validation data set target indicates that our approach assists in determining the correct ligand orientation in the binding pocket. Additionally, the heat map coloring enables the identification of substructures important for the binding of an inhibitor. In combination with heat map coloring, linear support vector machine models can help to guide the modification of a compound in later stages of drug discovery. Particularly substructures identified as important by our method might be a starting point for optimization of a lead compound. The heat map coloring should be considered as complementary to structure-based modeling approaches. As such, it helps to get a better understanding of the binding mode of an inhibitor.
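
    A toy sketch of the weight-to-atom aggregation behind the heat map coloring: each atom is scored by summing the learned weights of the substructure features that contain it. The features, weights, and feature-to-atom map below are invented; the paper derives them from a trained linear SVM over real molecular fingerprints.

```python
from collections import defaultdict

# Invented weights a linear SVM might assign to substructure features.
feature_weight = {"aromatic_ring": 0.8, "nitro_group": -1.2, "methyl": 0.1}
# Invented mapping from each feature to the atom indices it covers.
feature_atoms = {"aromatic_ring": [0, 1, 2, 3, 4, 5],
                 "nitro_group": [6, 7, 8],
                 "methyl": [9]}

atom_score = defaultdict(float)
for feat, w in feature_weight.items():
    for atom in feature_atoms[feat]:
        atom_score[atom] += w     # positive = activity-favoring (warm color)

for atom, score in sorted(atom_score.items()):
    print(f"atom {atom:2d}: {score:+.2f}")
```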

  13. Interpreting linear support vector machine models with heat map molecule coloring

    PubMed Central

    2011-01-01

    Background Model-based virtual screening plays an important role in the early drug discovery stage. The outcomes of high-throughput screenings are a valuable source for machine learning algorithms to infer such models. Besides a strong performance, the interpretability of a machine learning model is a desired property to guide the optimization of a compound in later drug discovery stages. Linear support vector machines have been shown to achieve convincing performance on large-scale data sets. The goal of this study is to present a heat map molecule coloring technique to interpret linear support vector machine models. Based on the weights of a linear model, the visualization approach colors each atom and bond of a compound according to its importance for activity. Results We evaluated our approach on a toxicity data set, a chromosome aberration data set, and the maximum unbiased validation data sets. The experiments show that our method sensibly visualizes structure-property and structure-activity relationships of a linear support vector machine model. The coloring of ligands in the binding pocket of several crystal structures of a maximum unbiased validation data set target indicates that our approach assists in determining the correct ligand orientation in the binding pocket. Additionally, the heat map coloring enables the identification of substructures important for the binding of an inhibitor. Conclusions In combination with heat map coloring, linear support vector machine models can help to guide the modification of a compound in later stages of drug discovery. Particularly substructures identified as important by our method might be a starting point for optimization of a lead compound. The heat map coloring should be considered as complementary to structure-based modeling approaches. As such, it helps to get a better understanding of the binding mode of an inhibitor. PMID:21439031

  14. Abstract Constructions.

    ERIC Educational Resources Information Center

    Pietropola, Anne

    1998-01-01

    Describes a lesson designed to culminate a year of eighth-grade art classes in which students explore elements of design and space by creating 3-D abstract constructions. Outlines the process of using foam board and markers to create various shapes and optical effects. (DSK)

  15. Abstract Planters.

    ERIC Educational Resources Information Center

    Burnham, Barrie

    2001-01-01

    Describes an art project where students create slab planters that are functional clay objects. Discusses how the students prepare their drawings for the planter and how to create the planter. Explains that the subject matter is depicted in a progression from reality to abstraction. (CMK)

  16. Research Abstracts.

    ERIC Educational Resources Information Center

    Plotnik, Eric

    2001-01-01

    Presents six research abstracts from the ERIC (Educational Resources Information Center) database. Topics include: effectiveness of distance versus traditional on-campus education; improved attribution recall from diversification of environmental context during computer-based instruction; qualitative analysis of situated Web-based learning;…

  17. Research Abstracts.

    ERIC Educational Resources Information Center

    Plotnick, Eric

    2001-01-01

    Presents research abstracts from the ERIC Clearinghouse on Information and Technology. Topics include: classroom communication apprehension and distance education; outcomes of a distance-delivered science course; the NASA/Kennedy Space Center Virtual Science Mentor program; survey of traditional and distance learning higher education members;…

  19. Automating Construction of Machine Learning Models With Clinical Big Data: Proposal Rationale and Methods.

    PubMed

    Luo, Gang; Stone, Bryan L; Johnson, Michael D; Tarczy-Hornoch, Peter; Wilcox, Adam B; Mooney, Sean D; Sheng, Xiaoming; Haug, Peter J; Nkoy, Flory L

    2017-08-29

    To improve health outcomes and cut health care costs, we often need to conduct prediction/classification using large clinical datasets (aka, clinical big data), for example, to identify high-risk patients for preventive interventions. Machine learning has been proposed as a key technology for doing this. Machine learning has won most data science competitions and could support many clinical activities, yet only 15% of hospitals use it for even limited purposes. Despite familiarity with data, health care researchers often lack machine learning expertise to directly use clinical big data, creating a hurdle in realizing value from their data. Health care researchers can work with data scientists with deep machine learning knowledge, but it takes time and effort for both parties to communicate effectively. Facing a shortage in the United States of data scientists and hiring competition from companies with deep pockets, health care systems have difficulty recruiting data scientists. Building and generalizing a machine learning model often requires hundreds to thousands of manual iterations by data scientists to select the following: (1) hyper-parameter values and complex algorithms that greatly affect model accuracy and (2) operators and periods for temporally aggregating clinical attributes (eg, whether a patient's weight kept rising in the past year). This process becomes infeasible with limited budgets. This study's goal is to enable health care researchers to directly use clinical big data, make machine learning feasible with limited budgets and data scientist resources, and realize value from data. This study will allow us to achieve the following: (1) finish developing the new software, Automated Machine Learning (Auto-ML), to automate model selection for machine learning with clinical big data and validate Auto-ML on seven benchmark modeling problems of clinical importance; (2) apply Auto-ML and novel methodology to two new modeling problems crucial for care

  20. Automating Construction of Machine Learning Models With Clinical Big Data: Proposal Rationale and Methods

    PubMed Central

    Stone, Bryan L; Johnson, Michael D; Tarczy-Hornoch, Peter; Wilcox, Adam B; Mooney, Sean D; Sheng, Xiaoming; Haug, Peter J; Nkoy, Flory L

    2017-01-01

    Background To improve health outcomes and cut health care costs, we often need to conduct prediction/classification using large clinical datasets (aka, clinical big data), for example, to identify high-risk patients for preventive interventions. Machine learning has been proposed as a key technology for doing this. Machine learning has won most data science competitions and could support many clinical activities, yet only 15% of hospitals use it for even limited purposes. Despite familiarity with data, health care researchers often lack machine learning expertise to directly use clinical big data, creating a hurdle in realizing value from their data. Health care researchers can work with data scientists with deep machine learning knowledge, but it takes time and effort for both parties to communicate effectively. Facing a shortage in the United States of data scientists and hiring competition from companies with deep pockets, health care systems have difficulty recruiting data scientists. Building and generalizing a machine learning model often requires hundreds to thousands of manual iterations by data scientists to select the following: (1) hyper-parameter values and complex algorithms that greatly affect model accuracy and (2) operators and periods for temporally aggregating clinical attributes (eg, whether a patient’s weight kept rising in the past year). This process becomes infeasible with limited budgets. Objective This study’s goal is to enable health care researchers to directly use clinical big data, make machine learning feasible with limited budgets and data scientist resources, and realize value from data. Methods This study will allow us to achieve the following: (1) finish developing the new software, Automated Machine Learning (Auto-ML), to automate model selection for machine learning with clinical big data and validate Auto-ML on seven benchmark modeling problems of clinical importance; (2) apply Auto-ML and novel methodology to two new

  1. Improving Domain-specific Machine Translation by Constraining the Language Model

    DTIC Science & Technology

    2012-07-01

    of greater amounts of training data in the two models, especially in the target language model (Brants et al., 2007). Och (2005) reports findings...train with the largest language models (NIST, 2006). The highest scoring Arabic-English system used a 1-trillion-word language model (Och, 2006...References Brants, T.; Popat, A. C.; Xu, P.; Och, F. J.; Dean, J. Large Language Models in Machine Translation. Joint Meeting of the Conference on Empirical

  2. Scientist-Centered Workflow Abstractions via Generic Actors, Workflow Templates, and Context-Awareness for Groundwater Modeling and Analysis

    SciTech Connect

    Chin, George; Sivaramakrishnan, Chandrika; Critchlow, Terence J.; Schuchardt, Karen L.; Ngu, Anne Hee Hiong

    2011-07-04

    A drawback of existing scientific workflow systems is the lack of support to domain scientists in designing and executing their own scientific workflows. Many domain scientists avoid developing and using workflows because the basic objects of workflows are too low-level and high-level tools and mechanisms to aid in workflow construction and use are largely unavailable. In our research, we are prototyping higher-level abstractions and tools to better support scientists in their workflow activities. Specifically, we are developing generic actors that provide abstract interfaces to specific functionality, workflow templates that encapsulate workflow and data patterns that can be reused and adapted by scientists, and context-awareness mechanisms to gather contextual information from the workflow environment on behalf of the scientist. To evaluate these scientist-centered abstractions on real problems, we apply them to construct and execute scientific workflows in the specific domain area of groundwater modeling and analysis.

  3. PredicT-ML: a tool for automating machine learning model building with big clinical data.

    PubMed

    Luo, Gang

    2016-01-01

    Predictive modeling is fundamental to transforming large clinical data sets, or "big clinical data," into actionable knowledge for various healthcare applications. Machine learning is a major predictive modeling approach, but two barriers make its use in healthcare challenging. First, a machine learning tool user must choose an algorithm and assign one or more model parameters called hyper-parameters before model training. The algorithm and hyper-parameter values used typically impact model accuracy by over 40 %, but their selection requires many labor-intensive manual iterations that can be difficult even for computer scientists. Second, many clinical attributes are repeatedly recorded over time, requiring temporal aggregation before predictive modeling can be performed. Many labor-intensive manual iterations are required to identify a good pair of aggregation period and operator for each clinical attribute. Both barriers result in time and human resource bottlenecks, and preclude healthcare administrators and researchers from asking a series of what-if questions when probing opportunities to use predictive models to improve outcomes and reduce costs. This paper describes our design of and vision for PredicT-ML (prediction tool using machine learning), a software system that aims to overcome these barriers and automate machine learning model building with big clinical data. The paper presents the detailed design of PredicT-ML. PredicT-ML will open the use of big clinical data to thousands of healthcare administrators and researchers and increase the ability to advance clinical research and improve healthcare.
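
    Not PredicT-ML itself, but a generic sketch of the two automated steps it targets: temporal aggregation of a repeatedly recorded attribute and automated hyper-parameter search. The column names, the 180-day window, and the search space are all illustrative assumptions.

```python
import numpy as np
import pandas as pd
from scipy.stats import randint
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

# Barrier 2: aggregate a repeatedly recorded attribute over a chosen period.
records = pd.DataFrame({"patient": [1, 1, 1, 2, 2],
                        "day": [0, 90, 170, 10, 400],
                        "weight": [80, 82, 85, 60, 59]})
recent = records[records.day <= 180]                  # aggregation period
print(recent.groupby("patient")["weight"].agg(["mean", "max"]))  # operators

# Barrier 1: search algorithm hyper-parameters automatically.
X = np.random.default_rng(0).normal(size=(200, 4))    # stand-in features
y = (X[:, 0] + X[:, 1] > 0).astype(int)
search = RandomizedSearchCV(RandomForestClassifier(random_state=0),
                            {"n_estimators": randint(50, 300),
                             "max_depth": randint(2, 12)},
                            n_iter=10, cv=3, random_state=0).fit(X, y)
print(search.best_params_)
```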

  4. Combining Psychological Models with Machine Learning to Better Predict People’s Decisions

    DTIC Science & Technology

    2012-03-09

    in some applications (Kaelbling, Littman, & Cassandra, 1998; Neumann & Morgenstern, 1944; Russell & Norvig, 2003). However, research into people’s...scientists often model people’s decisions through machine learning techniques (Russell & Norvig, 2003). These models are based on statistical methods such as...A., & Kraus, S. (2011). Using aspiration adaptation theory to improve learning. In Aamas (p. 423-430). Russell, S. J., & Norvig, P. (2003

  5. Tool wear predictive model based on least squares support vector machines

    NASA Astrophysics Data System (ADS)

    Shi, Dongfeng; Gindy, Nabil N.

    2007-05-01

    The development of tool wear monitoring systems for machining processes has been well recognised in industry due to the ever-increasing demand for product quality and productivity improvement. This paper presents a new tool wear predictive model combining least squares support vector machines (LS-SVM) and the principal component analysis (PCA) technique. The corresponding tool wear monitoring system is developed on the platform of PXI and LabVIEW. PCA is first applied to extract features from the multiple sensory signals acquired from the machining process. Then, the LS-SVM-based tool wear prediction model is constructed by learning the correlation between the extracted features and the actual tool wear. The effectiveness of the proposed predictive model and the corresponding tool wear monitoring system is demonstrated by experimental results from broaching trials.

  6. Genetic Optimization of Training Sets for Improved Machine Learning Models of Molecular Properties.

    PubMed

    Browning, Nicholas J; Ramakrishnan, Raghunathan; von Lilienfeld, O Anatole; Roethlisberger, Ursula

    2017-04-06

    The training of molecular models of quantum mechanical properties based on statistical machine learning requires large data sets which exemplify the map from chemical structure to molecular property. Intelligent a priori selection of training examples is often difficult or impossible to achieve, as prior knowledge may be unavailable. Ordinarily representative selection of training molecules from such data sets is achieved through random sampling. We use genetic algorithms for the optimization of training set composition consisting of tens of thousands of small organic molecules. The resulting machine learning models are considerably more accurate: in the limit of small training sets, mean absolute errors for out-of-sample predictions are reduced by up to ∼75%. We discuss and present optimized training sets consisting of 10 molecular classes for all molecular properties studied. We show that these classes can be used to design improved training sets for the generation of machine learning models of the same properties in similar but unrelated molecular sets.
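
    A toy, mutation-only genetic algorithm in the spirit of the study: candidate training subsets are the "genomes" and fitness is the out-of-sample error of a model trained on the subset. The data, the kernel ridge stand-in regressor, and all GA settings are assumptions.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 8))
y = np.sin(X[:, 0]) + 0.1 * X[:, 1]
X_test, y_test = X[300:], y[300:]
pool = np.arange(300)                  # indices eligible for training

def fitness(subset):
    m = KernelRidge(kernel="rbf", alpha=1e-3).fit(X[subset], y[subset])
    return -np.mean(np.abs(m.predict(X_test) - y_test))  # negative MAE

pop = [rng.choice(pool, 30, replace=False) for _ in range(20)]
for _ in range(15):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                 # elitist selection
    children = []
    for p in parents:
        child = p.copy()               # mutation: swap one training molecule
        child[rng.integers(len(child))] = rng.choice(np.setdiff1d(pool, child))
        children.append(child)
    pop = parents + children
pop.sort(key=fitness, reverse=True)
print(f"best subset MAE: {-fitness(pop[0]):.4f}")
```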

  7. Modelling of Tool Wear and Residual Stress during Machining of AISI H13 Tool Steel

    SciTech Connect

    Outeiro, Jose C.; Pina, Jose C.; Umbrello, Domenico; Rizzuti, Stefania

    2007-05-17

    Residual stresses can enhance or impair the ability of a component to withstand loading conditions in service (fatigue, creep, stress corrosion cracking, etc.), depending on their nature: compressive or tensile, respectively. This poses enormous problems in structural assembly, as it affects the structural integrity of the whole part. In addition, tool wear issues are of critical importance in manufacturing since they affect component quality, tool life and machining cost. Therefore, prediction and control of both tool wear and residual stresses in machining are absolutely necessary. In this work, a two-dimensional finite element model using an implicit Lagrangian formulation with automatic remeshing was applied to simulate the orthogonal cutting process of AISI H13 tool steel. To validate the model, the predicted and experimentally measured chip geometry, cutting forces, temperatures, tool wear and residual stresses on the machining-affected layers were compared. The proposed FE model allowed us to investigate the influence of tool geometry, cutting regime parameters and tool wear on the residual stress distribution in the machined surface and subsurface of AISI H13 tool steel. The obtained results permit the conclusion that, in order to reduce the magnitude of surface residual stresses, the cutting speed should be increased, the uncut chip thickness (or feed) should be reduced, and machining with honed tools having large cutting edge radii produces better results than chamfered tools. Moreover, increasing tool wear increases the magnitude of surface residual stresses.

  8. A Review of Current Machine Learning Methods Used for Cancer Recurrence Modeling and Prediction

    SciTech Connect

    Hemphill, Geralyn M.

    2016-09-27

    Cancer has been characterized as a heterogeneous disease consisting of many different subtypes. The early diagnosis and prognosis of a cancer type has become a necessity in cancer research. A major challenge in cancer management is the classification of patients into appropriate risk groups for better treatment and follow-up. Such risk assessment is critically important in order to optimize the patient’s health and the use of medical resources, as well as to avoid cancer recurrence. This paper focuses on the application of machine learning methods for predicting the likelihood of a recurrence of cancer. It is not meant to be an extensive review of the literature on the subject of machine learning techniques for cancer recurrence modeling. Other recent papers have performed such a review, and I will rely heavily on the results and outcomes from these papers. The electronic databases that were used for this review include PubMed, Google, and Google Scholar. Query terms used include “cancer recurrence modeling”, “cancer recurrence and machine learning”, “cancer recurrence modeling and machine learning”, and “machine learning for cancer recurrence and prediction”. The most recent and most applicable papers to the topic of this review have been included in the references. It also includes a list of modeling and classification methods to predict cancer recurrence.

  9. Colour Model for Outdoor Machine Vision for Tropical Regions and its Comparison with the CIE Model

    NASA Astrophysics Data System (ADS)

    Sahragard, Nasrolah; Ramli, Abdul Rahman B.; Hamiruce Marhaban, Mohammad; Mansor, Shattri B.

    2011-02-01

    Accurate modeling of daylight and surface reflectance is very useful for most outdoor machine vision applications, specifically those based on color recognition. The existing CIE daylight model has drawbacks that limit its ability to predict the color of incident light. These limitations include the lack of consideration of ambient light, of the effects of light reflected off the ground, and of context-specific information. A previously developed color model was only tested for a few geographical locations in North America, and its applicability to other places in the world is open to question. Besides, existing surface reflectance models are not easily applied to outdoor images. A reflectance model combining diffuse and specular reflection in normalized HSV color space can be used to predict color. In this paper, a new daylight color model showing the color of daylight for a broad range of sky conditions is developed, suited to the weather conditions of tropical places such as Malaysia. A comparison of this daylight color model with the CIE daylight model is discussed. The colors of matte and specular surfaces have been estimated by use of the developed color model and surface reflection function. The results are shown to be highly reliable.

  10. A Rapid Compression Machine Modelling Study of the Heptane Isomers

    SciTech Connect

    Silke, E J; Curran, H J; Simmie, J M; Pitz, W J; Westbrook, C K

    2005-05-10

    Previously we have reported on the combustion behavior of all nine isomers of heptane in a rapid compression machine (RCM) with stoichiometric fuel and "air" mixtures at a compressed gas pressure of 15 atm. The dependence of autoignition delay times on molecular structure was illustrated. Here, we report some additional experimental work that was performed in order to address unusual results regarding significant differences in the ignition delay times recorded at the same fuel and oxygen composition, but with different fractions of nitrogen and argon diluent gases. Moreover, we have begun to simulate these experiments with detailed chemical kinetic mechanisms. These mechanisms are based on previous studies of other alkane molecules, in particular, n-heptane and iso-octane. We have focused our attention on n-heptane in order to systematically redevelop the chemistry and thermochemistry for this C7 isomer with the intention of extending the greater knowledge gained to the other eight isomers. The addition of new reaction types, which were not included previously, has had a significant impact on the simulations, particularly at low temperatures.

  11. Machine learning for many-body physics: The case of the Anderson impurity model

    SciTech Connect

    Arsenault, Louis-François; Lopez-Bezanilla, Alejandro; von Lilienfeld, O. Anatole; Millis, Andrew J.

    2014-10-31

    We applied machine learning methods in order to find the Green's function of the Anderson impurity model, a basic model system of quantum many-body condensed-matter physics. Furthermore, different methods of parametrizing the Green's function are investigated; a representation in terms of Legendre polynomials is found to be superior due to its limited number of coefficients and its applicability to state of the art methods of solution. The dependence of the errors on the size of the training set is determined. Our results indicate that a machine learning approach to dynamical mean-field theory may be feasible.

  12. Machine learning for many-body physics: The case of the Anderson impurity model

    NASA Astrophysics Data System (ADS)

    Arsenault, Louis-François; Lopez-Bezanilla, Alejandro; von Lilienfeld, O. Anatole; Millis, Andrew J.

    2014-10-01

    Machine learning methods are applied to finding the Green's function of the Anderson impurity model, a basic model system of quantum many-body condensed-matter physics. Different methods of parametrizing the Green's function are investigated; a representation in terms of Legendre polynomials is found to be superior due to its limited number of coefficients and its applicability to state of the art methods of solution. The dependence of the errors on the size of the training set is determined. The results indicate that a machine learning approach to dynamical mean-field theory may be feasible.
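
    A hedged sketch of the representation idea the study favors: compress Green's-function-like curves into a few Legendre coefficients and learn the parameter-to-coefficient map. The synthetic curves, polynomial degree, and kernel ridge regressor here are illustrative choices, not the paper's setup.

```python
import numpy as np
from numpy.polynomial import legendre
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(1)
x = np.linspace(-1, 1, 200)                   # rescaled (imaginary-time) grid
params = rng.uniform(0.5, 3.0, size=(100, 1)) # stand-in model parameters
curves = np.array([-np.exp(-p * (x + 1)) for p in params[:, 0]])

# Each 200-point curve is compressed into 8 Legendre coefficients.
coeffs = np.array([legendre.legfit(x, g, deg=7) for g in curves])

model = KernelRidge(kernel="rbf", alpha=1e-6, gamma=1.0)
model.fit(params[:80], coeffs[:80])
pred = model.predict(params[80:])
recon = legendre.legval(x, pred.T)            # back to curves on the grid
print(np.max(np.abs(recon - curves[80:])))    # worst reconstruction error
```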

  13. Global Abstracts.

    PubMed

    Weber, Ellen J

    2017-10-01

    Editor's note: EMJ has partnered with the journals of multiple international emergency medicine societies to share from each a highlighted research study, as selected by their editors. This edition will feature an abstract from each publication. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  14. Predicting Mouse Liver Microsomal Stability with “Pruned” Machine Learning Models and Public Data

    PubMed Central

    Perryman, Alexander L.; Stratton, Thomas P.; Ekins, Sean; Freundlich, Joel S.

    2015-01-01

    Purpose Mouse efficacy studies are a critical hurdle to advance translational research of potential therapeutic compounds for many diseases. Although mouse liver microsomal (MLM) stability studies are not a perfect surrogate for in vivo studies of metabolic clearance, they are the initial model system used to assess metabolic stability. Consequently, we explored the development of machine learning models that can enhance the probability of identifying compounds possessing MLM stability. Methods Published assays on MLM half-life values were identified in PubChem, reformatted, and curated to create a training set with 894 unique small molecules. These data were used to construct machine learning models assessed with internal cross-validation, external tests with a published set of antitubercular compounds, and independent validation with an additional diverse set of 571 compounds (PubChem data on percent metabolism). Results “Pruning” out the moderately unstable/moderately stable compounds from the training set produced models with superior predictive power. Bayesian models displayed the best predictive power for identifying compounds with a half-life ≥1 hour. Conclusions Our results suggest the pruning strategy may be of general benefit to improve test set enrichment and provide machine learning models with enhanced predictive value for the MLM stability of small organic molecules. This study represents the most exhaustive study to date of using machine learning approaches with MLM data from public sources. PMID:26415647

  15. Predicting Mouse Liver Microsomal Stability with "Pruned" Machine Learning Models and Public Data.

    PubMed

    Perryman, Alexander L; Stratton, Thomas P; Ekins, Sean; Freundlich, Joel S

    2016-02-01

    Mouse efficacy studies are a critical hurdle to advance translational research of potential therapeutic compounds for many diseases. Although mouse liver microsomal (MLM) stability studies are not a perfect surrogate for in vivo studies of metabolic clearance, they are the initial model system used to assess metabolic stability. Consequently, we explored the development of machine learning models that can enhance the probability of identifying compounds possessing MLM stability. Published assays on MLM half-life values were identified in PubChem, reformatted, and curated to create a training set with 894 unique small molecules. These data were used to construct machine learning models assessed with internal cross-validation, external tests with a published set of antitubercular compounds, and independent validation with an additional diverse set of 571 compounds (PubChem data on percent metabolism). "Pruning" out the moderately unstable / moderately stable compounds from the training set produced models with superior predictive power. Bayesian models displayed the best predictive power for identifying compounds with a half-life ≥1 h. Our results suggest the pruning strategy may be of general benefit to improve test set enrichment and provide machine learning models with enhanced predictive value for the MLM stability of small organic molecules. This study represents the most exhaustive study to date of using machine learning approaches with MLM data from public sources.
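
    A minimal sketch of the pruning strategy with invented fingerprints, half-lives, and a 30-60 min "moderate" band; only the ≥1 h stability cutoff comes from the abstract, and a Bernoulli naive Bayes classifier stands in for the paper's Bayesian models.

```python
import numpy as np
from sklearn.naive_bayes import BernoulliNB

rng = np.random.default_rng(0)
fingerprints = rng.integers(0, 2, size=(894, 128))  # toy substructure bits
half_life_min = rng.exponential(45, 894)            # toy MLM half-lives (min)

stable = half_life_min >= 60     # >= 1 h, the stability cutoff in the paper
middle = (half_life_min > 30) & (half_life_min < 60)  # ambiguous band
keep = ~middle                   # pruning: train only on clear-cut compounds

clf = BernoulliNB().fit(fingerprints[keep], stable[keep])
print(clf.predict_proba(fingerprints[:3])[:, 1])    # P(stable), 3 compounds
```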

  16. New models for energy beam machining enable accurate generation of free forms.

    PubMed

    Axinte, Dragos; Billingham, John; Bilbao Guillerna, Aitor

    2017-09-01

    We demonstrate that, despite differences in their nature, many energy beam controlled-depth machining processes (for example, waterjet, pulsed laser, focused ion beam) can be modeled using the same mathematical framework-a partial differential evolution equation that requires only simple calibrations to capture the physics of each process. The inverse problem can be solved efficiently through the numerical solution of the adjoint problem and leads to beam paths that generate prescribed three-dimensional features with minimal error. The viability of this modeling approach has been demonstrated by generating accurate free-form surfaces using three processes that operate at very different length scales and with different physical principles for material removal: waterjet, pulsed laser, and focused ion beam machining. Our approach can be used to accurately machine materials that are hard to process by other means for scalable applications in a wide variety of industries.

  17. New models for energy beam machining enable accurate generation of free forms

    PubMed Central

    Axinte, Dragos; Billingham, John; Bilbao Guillerna, Aitor

    2017-01-01

    We demonstrate that, despite differences in their nature, many energy beam controlled-depth machining processes (for example, waterjet, pulsed laser, focused ion beam) can be modeled using the same mathematical framework—a partial differential evolution equation that requires only simple calibrations to capture the physics of each process. The inverse problem can be solved efficiently through the numerical solution of the adjoint problem and leads to beam paths that generate prescribed three-dimensional features with minimal error. The viability of this modeling approach has been demonstrated by generating accurate free-form surfaces using three processes that operate at very different length scales and with different physical principles for material removal: waterjet, pulsed laser, and focused ion beam machining. Our approach can be used to accurately machine materials that are hard to process by other means for scalable applications in a wide variety of industries. PMID:28948223

  18. Experience with abstract notation one

    NASA Technical Reports Server (NTRS)

    Harvey, James D.; Weaver, Alfred C.

    1990-01-01

    The development of computer science has produced a vast number of machine architectures, programming languages, and compiler technologies. The cross product of these three characteristics defines the spectrum of previous and present data representation methodologies. With regard to computer networks, the uniqueness of these methodologies presents an obstacle when disparate host environments are to be interconnected. Interoperability within a heterogeneous network relies upon the establishment of data representation commonality. The International Standards Organization (ISO) is currently developing the abstract syntax notation one standard (ASN.1) and the basic encoding rules standard (BER) that collectively address this problem. When used within the presentation layer of the open systems interconnection reference model, these two standards provide the data representation commonality required to facilitate interoperability. The details of a compiler that was built to automate the use of ASN.1 and BER are described. From this experience, insights into both standards are given and potential problems relating to this development effort are discussed.

  19. Numerically Controlled Machining Of Wind-Tunnel Models

    NASA Technical Reports Server (NTRS)

    Kovtun, John B.

    1990-01-01

    A new procedure is described for constructing dynamic models and parts for wind-tunnel tests or radio-controlled flight tests. It involves the use of a single-phase numerical control (NC) technique to produce highly accurate, symmetrical models in less time.

  20. Remotely sensed data assimilation technique to develop machine learning models for use in water management

    NASA Astrophysics Data System (ADS)

    Zaman, Bushra

    Increasing population and water conflicts are making water management one of the most important issues of the present world. It has become absolutely necessary to find ways to manage water more efficiently. Technological advancement has introduced various techniques for data acquisition and analysis, and these tools can be used to address some of the critical issues that challenge water resource management. This research used machine learning techniques and information acquired through remote sensing to solve problems related to soil moisture estimation and crop identification on large spatial scales. In this dissertation, solutions were proposed in three problem areas that can be important in the decision-making process related to water management in irrigated systems. A data assimilation technique was used to build a machine learning model that generated soil moisture estimates commensurate with the scale of the data. The research was taken further by developing a multivariate machine learning algorithm to predict root zone soil moisture both in space and time. Further, a model was developed for supervised classification of multi-spectral reflectance data using a multi-class machine learning algorithm. The procedure was designed for classifying crops, but the model is data dependent and can be used with other datasets, and hence can be applied to other landcover classification problems. The dissertation compared the performance of relevance vector machines and support vector machines in estimating soil moisture. A multivariate relevance vector machine algorithm was tested in the spatio-temporal prediction of soil moisture, and the multi-class relevance vector machine model was used for classifying different crop types. It was concluded that the classification scheme may uncover important data patterns contributing greatly to knowledge bases, and to scientific and medical research. The results for the soil moisture models would give a rough idea to farmers

  1. Fusing Dual-Event Datasets for Mycobacterium Tuberculosis Machine Learning Models and their Evaluation

    PubMed Central

    Ekins, Sean; Freundlich, Joel S.; Reynolds, Robert C.

    2013-01-01

    The search for new tuberculosis treatments continues as we need to find molecules that can act more quickly, be accommodated in multi-drug regimens, and overcome ever increasing levels of drug resistance. Multiple large scale phenotypic high-throughput screens against Mycobacterium tuberculosis (Mtb) have generated dose response data, enabling the generation of machine learning models. These models also incorporated cytotoxicity data and were recently validated with a large external dataset. A cheminformatics data-fusion approach followed by Bayesian machine learning, Support Vector Machine or Recursive Partitioning model development (based on publicly available Mtb screening data) was used to compare individual datasets and subsequent combined models. A set of 1924 commercially available molecules with promising antitubercular activity (and lack of relative cytotoxicity to Vero cells) were used to evaluate the predictive nature of the models. We demonstrate that combining three datasets incorporating antitubercular and cytotoxicity data in Vero cells from our previous screens results in external validation receiver operator curve (ROC) of 0.83 (Bayesian or RP Forest). Models that do not have the highest five-fold cross validation ROC scores can outperform other models in a test set dependent manner. We demonstrate with predictions for a recently published set of Mtb leads from GlaxoSmithKline that no single machine learning model may be enough to identify compounds of interest. Dataset fusion represents a further useful strategy for machine learning construction as illustrated with Mtb. Coverage of chemistry and Mtb target spaces may also be limiting factors for the whole-cell screening data generated to date. PMID:24144044

  2. The Use of Machine Aids in Dynamic Multi-Task Environments: A Comparison of an Optimal Model to Human Behavior.

    DTIC Science & Technology

    1982-06-01

    the model. Subject performance was found to vary with the type of decision considered. When searching for tasks to deal with, subjects employed simple ...is present to indicate whether a man or a machine is servicing a task. The case of multiple machine aids, having various abilities is a simple ...cost" in modeling man/machine systems is a simple way of representing the penalty for procrastination. Holding cost does not behave exactly like a

  3. Nonlinear and Digital Man-machine Control Systems Modeling

    NASA Technical Reports Server (NTRS)

    Mekel, R.

    1972-01-01

    An adaptive modeling technique is examined by which controllers can be synthesized to provide corrective dynamics to a human operator's mathematical model in closed-loop control systems. The technique utilizes a class of Liapunov functions formulated for this purpose, Liapunov's stability criterion, and a model-reference system configuration. The Liapunov function is formulated to possess variable characteristics in order to take the identification dynamics into consideration. The time derivative of the Liapunov function generates the identification and control laws for the mathematical model system. These laws permit the realization of a controller which updates the human operator's mathematical model parameters so that model and human operator produce the same response when subjected to the same stimulus. A very useful feature is the development of a digital computer program which is easily implemented and modified concurrently with experimentation. The program permits the modeling process to interact with the experimentation process in a mutually beneficial way.

  4. State Machine Modeling of the Space Launch System Solid Rocket Boosters

    NASA Technical Reports Server (NTRS)

    Harris, Joshua A.; Patterson-Hine, Ann

    2013-01-01

    The Space Launch System is a Shuttle-derived heavy-lift vehicle currently in development to serve as NASA's premiere launch vehicle for space exploration. The Space Launch System is a multistage rocket with two Solid Rocket Boosters and multiple payloads, including the Multi-Purpose Crew Vehicle. Planned Space Launch System destinations include near-Earth asteroids, the Moon, Mars, and Lagrange points. The Space Launch System is a complex system with many subsystems, requiring considerable systems engineering and integration. To this end, state machine analysis offers a method to support engineering and operational efforts, identify and avert undesirable or potentially hazardous system states, and evaluate system requirements. Finite State Machines model a system as a finite number of states, with transitions between states controlled by state-based and event-based logic. State machines are a useful tool for understanding complex system behaviors and evaluating "what-if" scenarios. This work contributes to a state machine model of the Space Launch System developed at NASA Ames Research Center. The Space Launch System Solid Rocket Booster avionics and ignition subsystems are modeled using MATLAB/Stateflow software. This model is integrated into a larger model of Space Launch System avionics used for verification and validation of Space Launch System operating procedures and design requirements. This includes testing both nominal and off-nominal system states and command sequences.
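
    Below is a minimal event-driven finite state machine in the spirit described, with invented states and events for a booster ignition sequence; the NASA model is built in MATLAB/Stateflow and is far more detailed.

```python
# (state, event) -> next state; any undefined pair is off-nominal.
TRANSITIONS = {
    ("safed", "arm_command"): "armed",
    ("armed", "ignition_command"): "ignited",
    ("armed", "abort"): "safed",
    ("ignited", "burnout"): "separated",
}

def run(events, state="safed"):
    """Drive the machine; undefined transitions flag off-nominal sequences,
    the kind of 'what-if' scenario such models help evaluate."""
    for ev in events:
        nxt = TRANSITIONS.get((state, ev))
        if nxt is None:
            return f"off-nominal: '{ev}' is not valid in state '{state}'"
        state = nxt
    return state

print(run(["arm_command", "ignition_command", "burnout"]))  # separated
print(run(["ignition_command"]))   # off-nominal: ignition while safed
```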

  5. A mechanistic ultrasonic vibration amplitude model during rotary ultrasonic machining of CFRP composites.

    PubMed

    Ning, Fuda; Wang, Hui; Cong, Weilong; Fernando, P K S C

    2017-04-01

    Rotary ultrasonic machining (RUM) has been investigated for machining brittle, ductile, and composite materials. Ultrasonic vibration amplitude, as one of the most important input variables, affects almost all the output variables in RUM. Numerous investigations on measuring ultrasonic vibration amplitude without RUM machining have been reported. In recent years, ultrasonic vibration amplitude measurement with RUM of ductile materials has been investigated. It was found that the ultrasonic vibration amplitude with RUM differed from that without RUM under the same input variables. RUM is primarily used for machining brittle materials through brittle fracture removal. For this reason, the method for measuring ultrasonic vibration amplitude in RUM of ductile materials is not feasible for RUM of brittle materials. However, there are no reported methods for measuring ultrasonic vibration amplitude in RUM of brittle materials. In this study, ultrasonic vibration amplitude in RUM of brittle materials is investigated by establishing a mechanistic amplitude model through cutting force. Pilot experiments are conducted to validate the calculation model. The results show no significant differences between amplitude values calculated by the model and those obtained from experimental investigations. The model provides a relationship between ultrasonic vibration amplitude and input variables, which is a foundation for building models to predict other output variables in RUM.

  6. Application of autoregressive distributed lag model to thermal error compensation of machine tools

    NASA Astrophysics Data System (ADS)

    Miao, Enming; Niu, Pengcheng; Fei, Yetai; Yan, Yan

    2011-12-01

    Since thermal error in precision CNC machine tools cannot be ignored, it is essential to construct a simple and effective thermal error compensation mathematical model. In this paper, three modeling methods are introduced in detail and compared: the first is the multiple linear regression model; the second is the congruence model, which combines multiple linear regression with an AR model of its residual error; and the third is the autoregressive distributed lag (ADL) model. Multiple linear regression analysis is the most commonly used approach to thermal error compensation, since it is a simple and quick modeling method, but thermal error is nonlinear and interactive, so a precise least-squares model of it is difficult to obtain. The congruence model and the ADL model belong to time series analysis, which has the advantage of establishing a precise mathematical model. The distinction between the two is that the congruence model splits the parameters into two parts and estimates them separately, whereas the ADL model estimates all parameters jointly, so the congruence model is less accurate than the ADL model. Based on an actual example, this paper concludes that the ADL model is a good way to improve the accuracy of thermal error modeling for precision CNC machine tools.
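
    A minimal sketch of fitting an ADL(p, q) model by least squares, under the assumption that the thermal error y is driven by lagged values of itself and of a temperature reading x. The synthetic data and coefficients are illustrative only.

    ```python
    import numpy as np

    def fit_adl(y, x, p=1, q=1):
        """Least-squares fit of an ADL(p, q) model:
        y[t] = c + sum_i a_i*y[t-i] + sum_j b_j*x[t-j].  Returns [c, a_i, b_j]."""
        m = max(p, q)
        rows = []
        for t in range(m, len(y)):
            row = [1.0]
            row += [y[t - i] for i in range(1, p + 1)]
            row += [x[t - j] for j in range(0, q + 1)]
            rows.append(row)
        A = np.asarray(rows)
        coef, *_ = np.linalg.lstsq(A, y[m:], rcond=None)
        return coef

    # Hypothetical demo: thermal error y driven by a drifting temperature x.
    rng = np.random.default_rng(0)
    x = np.cumsum(rng.normal(0, 0.1, 500))          # slowly drifting temperature
    y = np.zeros(500)
    for t in range(1, 500):
        y[t] = 0.8 * y[t - 1] + 0.5 * x[t] - 0.3 * x[t - 1] + rng.normal(0, 0.01)

    print(fit_adl(y, x, p=1, q=1))   # approximately [0, 0.8, 0.5, -0.3]
    ```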

  7. Modeling powder encapsulation in dosator-based machines: II. Experimental evaluation.

    PubMed

    Khawam, Ammar; Schultz, Leon

    2011-12-15

    A theoretical model was previously derived to predict powder encapsulation in dosator-based machines. The theoretical basis of the model was discussed earlier. In this part; the model was evaluated experimentally using two powder formulations with substantially different flow behavior. Encapsulation experiments were performed using a Zanasi encapsulation machine under two sets of experimental conditions. Model predicted outcomes such as encapsulation fill weight and plug height were compared to those experimentally obtained. Results showed a high correlation between predicted and actual outcomes demonstrating the model's success in predicting the encapsulation of both formulations. The model is a potentially useful in silico analysis tool that can be used for capsule dosage form development in accordance to quality by design (QbD) principles.

  8. Probabilistic Regularized Extreme Learning Machine for Robust Modeling of Noise Data.

    PubMed

    Lu, XinJiang; Ming, Li; Liu, WenBo; Li, Han-Xiong

    2017-08-17

    The extreme learning machine (ELM) has been extensively studied in the machine learning field and has been widely implemented due to its simplified algorithm and reduced computational costs. However, it is less effective for modeling data with non-Gaussian noise or data containing outliers. Here, a probabilistic regularized ELM is proposed to improve modeling performance with data containing non-Gaussian noise and/or outliers. While traditional ELM minimizes modeling error by using a worst-case scenario principle, the proposed method constructs a new objective function to minimize both mean and variance of this modeling error. Thus, the proposed method considers the modeling error distribution. A solution method is then developed for this new objective function and the proposed method is further proved to be more robust when compared with traditional ELM, even when subject to noise or outliers. Several experimental cases demonstrate that the proposed method has better modeling performance for problems with non-Gaussian noise or outliers.
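
    For reference, a minimal sketch of the standard ridge-regularized ELM baseline that the proposed probabilistic method extends: a random hidden layer followed by closed-form output weights. This is the conventional algorithm on toy data, not the paper's probabilistic variant.

    ```python
    import numpy as np

    def elm_fit(X, y, n_hidden=50, lam=1e-2, rng=np.random.default_rng(0)):
        """Ridge-regularized ELM: random hidden layer, closed-form output weights."""
        W = rng.normal(size=(X.shape[1], n_hidden))
        b = rng.normal(size=n_hidden)
        H = np.tanh(X @ W + b)                        # random feature map
        beta = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ y)
        return W, b, beta

    def elm_predict(X, W, b, beta):
        return np.tanh(X @ W + b) @ beta

    # Toy regression with heavy-tailed (non-Gaussian) noise
    rng = np.random.default_rng(1)
    X = rng.uniform(-3, 3, size=(200, 1))
    y = np.sin(X[:, 0]) + rng.standard_t(df=2, size=200) * 0.1
    params = elm_fit(X, y)
    print("training MSE:", np.mean((elm_predict(X, *params) - y) ** 2))
    ```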

  9. An economic production quantity model for deteriorating items with preventive maintenance policy and random machine breakdown

    NASA Astrophysics Data System (ADS)

    Agus Widyadana, Gede; Wee, Hui Ming

    2012-10-01

    In recent years, many studies on economic production quantity (EPQ) models with machine breakdown and preventive maintenance have been published, but few have developed integrated models for deteriorating items. In this study, we develop EPQ models for deteriorating items with preventive maintenance, random machine breakdown and immediate corrective action. Corrective and preventive maintenance times are assumed to be stochastic, and unfulfilled demands are lost sales. Two EPQ models, with uniformly and exponentially distributed corrective and maintenance times, are developed. An example and sensitivity analysis are given to illustrate the models. For the exponential distribution model, it is shown that the corrective time parameter is one of the most sensitive parameters for the optimal total cost.

  10. River suspended sediment modelling using the CART model: A comparative study of machine learning techniques.

    PubMed

    Choubin, Bahram; Darabi, Hamid; Rahmati, Omid; Sajedi-Hosseini, Farzaneh; Kløve, Bjørn

    2017-10-02

    Suspended sediment load (SSL) modelling is an important issue in integrated environmental and water resources management, as sediment affects water quality and aquatic habitats. Although classification and regression tree (CART) algorithms have been applied successfully to ecological and geomorphological modelling, their applicability to SSL estimation in rivers has not yet been investigated. In this study, we evaluated the use of a CART model to estimate SSL based on hydro-meteorological data. We also compared the accuracy of the CART model with that of the four most commonly used models for time series modelling of SSL, i.e. adaptive neuro-fuzzy inference system (ANFIS), multi-layer perceptron (MLP) neural network and two kernels of support vector machines (RBF-SVM and P-SVM). The models were calibrated using river discharge, stage, rainfall and monthly SSL data for the Kareh-Sang River gauging station in the Haraz watershed in northern Iran, where sediment transport is a considerable issue. In addition, different combinations of input data with various time lags were explored to estimate SSL. The best input combination was identified through trial and error, percent bias (PBIAS), Taylor diagrams and violin plots for each model. For evaluating the capability of the models, different statistics such as Nash-Sutcliffe efficiency (NSE), Kling-Gupta efficiency (KGE) and percent bias (PBIAS) were used. The results showed that the CART model performed best in predicting SSL (NSE=0.77, KGE=0.8, PBIAS<±15), followed by RBF-SVM (NSE=0.68, KGE=0.72, PBIAS<±15). Thus the CART model can be a helpful tool in basins where hydro-meteorological data are readily available.
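
    A minimal sketch of the CART-for-SSL idea using a decision-tree regressor and the Nash-Sutcliffe efficiency. The synthetic inputs and the lag structure are hypothetical stand-ins for the hydro-meteorological data described above.

    ```python
    import numpy as np
    from sklearn.tree import DecisionTreeRegressor

    def nse(obs, sim):
        """Nash-Sutcliffe efficiency, one of the evaluation metrics used above."""
        return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

    # Hypothetical inputs: discharge Q, stage h, rainfall P.
    rng = np.random.default_rng(0)
    Q = rng.gamma(2.0, 5.0, 300)
    h = rng.uniform(0.0, 2.0, 300)
    P = rng.gamma(1.0, 3.0, 300)
    ssl = 40 * Q ** 1.3 + 10 * P + rng.normal(0, 50, 300)   # synthetic SSL target

    # One candidate input combination, with a one-step lag on discharge.
    X = np.column_stack([Q[1:], Q[:-1], h[1:], P[1:]])
    y = ssl[1:]
    cart = DecisionTreeRegressor(max_depth=6, random_state=0).fit(X[:200], y[:200])
    print("NSE on held-out data:", round(nse(y[200:], cart.predict(X[200:])), 2))
    ```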

  11. SAINT: A combined simulation language for modeling man-machine systems

    NASA Technical Reports Server (NTRS)

    Seifert, D. J.

    1979-01-01

    SAINT (Systems Analysis of Integrated Networks of Tasks) is a network modeling and simulation technique for the design and analysis of complex man-machine systems. SAINT provides the conceptual framework for representing systems that consist of discrete task elements, continuous state variables, and interactions between them. It also provides a mechanism for combining human performance models and dynamic system behaviors in a single modeling structure. The SAINT technique is described and its applications are discussed.

  12. Multiscale Modeling and Analysis of an Ultra-Precision Damage Free Machining Method

    NASA Astrophysics Data System (ADS)

    Guan, Chaoliang; Peng, Wenqiang

    2016-06-01

    Under high laser flux, avoiding laser-induced damage of optical elements is key to the success of a laser fusion ignition system. A US government survey identified the reduction of the laser-induced damage threshold (LIDT) by processing defects as one of three major challenges. Cracks and scratches caused by brittle- and plastic-removal machining are fatal flaws. The hydrodynamic effect polishing (HEP) method can obtain a damage-free surface on quartz glass. The material removal mechanism of this typical ultra-precision machining process was modeled at multiple scales. At the atomic scale, chemical modeling illustrated the weakening and breaking of chemical bonds. At the particle scale, micro-contact modeling gave the elastic removal mode boundary of the material. At the slurry scale, hydrodynamic flow modeling showed the dynamic pressure and shear stress distributions, which determine the machining effect. An experiment was conducted on a numerically controlled system, and one quartz glass optical component was polished in the elastic mode. Results show that damage is removed layer by layer as the removal depth increases, owing to the high damage-free machining ability of HEP, and that the LIDT of the sample was greatly improved.

  13. Lateral-Directional Parameter Estimation on the X-48B Aircraft Using an Abstracted, Multi-Objective Effector Model

    NASA Technical Reports Server (NTRS)

    Ratnayake, Nalin A.; Waggoner, Erin R.; Taylor, Brian R.

    2011-01-01

    The problem of parameter estimation on hybrid-wing-body aircraft is complicated by the fact that many design candidates for such aircraft involve a large number of aerodynamic control effectors that act in coplanar motion. This adds to the complexity already present in the parameter estimation problem for any aircraft with a closed-loop control system. Decorrelation of flight and simulation data must be performed in order to ascertain individual surface derivatives with any sort of mathematical confidence. Non-standard control surface configurations, such as clamshell surfaces and drag-rudder modes, further complicate the modeling task. In this paper, time-decorrelation techniques are applied to a model structure selected through stepwise regression for simulated and flight-generated lateral-directional parameter estimation data. A virtual effector model that uses mathematical abstractions to describe the multi-axis effects of clamshell surfaces is developed and applied. Comparisons are made between time history reconstructions and observed data in order to assess the accuracy of the regression model. The Cramér-Rao lower bounds of the estimated parameters are used to assess the uncertainty of the regression model relative to alternative models. Stepwise regression was found to be a useful technique for lateral-directional model design for hybrid-wing-body aircraft, as suggested by available flight data. Based on the results of this study, linear regression parameter estimation methods using abstracted effectors are expected to perform well for hybrid-wing-body aircraft properly equipped for the task.

  14. Programming and machining of complex parts based on CATIA solid modeling

    NASA Astrophysics Data System (ADS)

    Zhu, Xiurong

    2017-09-01

    Complex parts were designed using CATIA solid modeling, programming, and machining simulation, illustrating the importance of programming and process planning in CNC machining. In the design process, the working principle was first analyzed in depth; the dimensions and dimension chains were then designed in relation to one another, and backstepping together with several other methods was used to calculate the final dimensions of the parts. After careful study and repeated testing, 6061 aluminum alloy was chosen as the part material. According to the actual conditions at the processing site, the machining process must take comprehensive account of various factors. The simulation should be based on the actual machining process rather than on shape alone. The approach can serve as a reference for machining.

  15. Horizontal-axis washing machines offer large savings: New models entering North American market

    SciTech Connect

    Shepard, M.

    1992-12-31

    Long popular in Europe, new horizontal-axis clothes washers are entering the North American market, creating opportunities for government and utility conservation efforts. Unlike vertical-axis machines, which immerse the clothes in water, horizontal-axis designs use a tumbling action and require far less water, water-heating energy, and detergent. One development in this area is the recent reintroduction by the Frigidaire Company of a full-size, front-load, horizontal-axis washing machine. The new model is an improved version of an earlier design that was discontinued in mid-1991 during changes in manufacturing facilities. It is available under the Sears Kenmore, White-Westinghouse, and Gibson labels. While several European and commercial-grade front-load washers are sold in the US, they are all considerably more expensive than the Frigidaire machine, making it the most efficient clothes washer currently available in a mainstream North American consumer product line.

  16. Beyond modeling abstractions: learning nouns over developmental time in atypical populations and individuals

    PubMed Central

    Sims, Clare E.; Schilling, Savannah M.; Colunga, Eliana

    2013-01-01

    Connectionist models that capture developmental change over time have much to offer in the field of language development research. Several models in the literature have made good contact with developmental data, effectively captured behavioral tasks, and accurately represented linguistic input available to young children. However, fewer models of language development have truly captured the process of developmental change over time. In this review paper, we discuss several prominent connectionist models of early word learning, focusing on semantic development, as well as our recent work modeling the emergence of word learning biases in different populations. We also discuss the potential of these kinds of models to capture children’s language development at the individual level. We argue that a modeling approach that truly captures change over time has the potential to inform theory, guide research, and lead to innovations in early language intervention. PMID:24324450

  17. The Synthesis of Precise Rotating Machine Mathematical Model, Operating Natural Signals and Virtual Data

    NASA Astrophysics Data System (ADS)

    Zhilenkov, A. A.; Kapitonov, A. A.

    2017-07-01

    It is known that synchronous machine catalogue data are presented for the case of a two-phase machine in a rotating coordinate system, e.g., for description by the Park-Gorev equation system. Nevertheless, many problems require control of phase currents and voltages, for instance when modeling systems in which synchronous generators supply powerful rectifiers. Modeling of complex systems with synchronous generators, semiconductor converters, etc. (with the phase-current control necessary for power-switch commutation algorithms) becomes achievable with the equation system described in this article. The given model can be used in digital control systems with an internal model. It does not require large computing resources and provides sufficient modeling accuracy.
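
    For context, a sketch of the standard amplitude-invariant Park (dq0) transform that links the phase quantities controlled here to the rotating-frame description used in Park-Gorev models; the paper's contribution is a model that works directly in phase coordinates.

    ```python
    import numpy as np

    def abc_to_dq0(ia, ib, ic, theta):
        """Amplitude-invariant Park transform: phase currents -> rotating dq0 frame."""
        k = 2.0 / 3.0
        d = k * (ia * np.cos(theta)
                 + ib * np.cos(theta - 2 * np.pi / 3)
                 + ic * np.cos(theta + 2 * np.pi / 3))
        q = -k * (ia * np.sin(theta)
                  + ib * np.sin(theta - 2 * np.pi / 3)
                  + ic * np.sin(theta + 2 * np.pi / 3))
        z = k * 0.5 * (ia + ib + ic)
        return d, q, z

    # Balanced sinusoidal phase currents map to constant d (and zero q, z).
    t = np.linspace(0, 0.04, 5)
    theta = 2 * np.pi * 50 * t
    ia = np.cos(theta)
    ib = np.cos(theta - 2 * np.pi / 3)
    ic = np.cos(theta + 2 * np.pi / 3)
    print(abc_to_dq0(ia, ib, ic, theta))   # d ~ 1, q ~ 0, z ~ 0
    ```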

  18. Algerian Abstract

    NASA Image and Video Library

    2017-09-27

    Algerian Abstract - April 8th, 1985 Description: What look like pale yellow paint streaks slashing through a mosaic of mottled colors are ridges of wind-blown sand that make up Erg Iguidi, an area of ever-shifting sand dunes extending from Algeria into Mauritania in northwestern Africa. Erg Iguidi is one of several Saharan ergs, or sand seas, where individual dunes often surpass 500 meters (nearly a third of a mile) in both width and height. Credit: USGS/NASA/Landsat 5 To learn more about the Landsat satellite go to: landsat.gsfc.nasa.gov/ NASA Goddard Space Flight Center enables NASA’s mission through four scientific endeavors: Earth Science, Heliophysics, Solar System Exploration, and Astrophysics. Goddard plays a leading role in NASA’s accomplishments by contributing compelling scientific knowledge to advance the Agency’s mission.

  19. Dynamic modelling and analysis of multi-machine power systems including wind farms

    NASA Astrophysics Data System (ADS)

    Tabesh, Ahmadreza

    2005-11-01

    This thesis introduces a small-signal dynamic model, based on a frequency response approach, for the analysis of a multi-machine power system, with special focus on an induction machine based wind farm. The proposed approach is an alternative to the conventional eigenvalue analysis method widely employed for small-signal dynamic analyses of power systems. The proposed modelling approach is successfully applied and evaluated for a power system that includes (i) multiple synchronous generators and (ii) a wind farm based on either fixed-speed, variable-speed, or doubly-fed induction machine based wind energy conversion units. The salient features of the proposed method, as compared with the conventional eigenvalue analysis method, are: (i) computational efficiency, since the proposed method utilizes the open-loop transfer-function matrix of the system; (ii) performance indices that are obtainable from frequency response data and quantitatively describe the dynamic behavior of the system; and (iii) the capability to formulate various wind energy conversion units within a wind farm in modular form. The developed small-signal dynamic model is applied to a set of multi-machine study systems and the results are validated by comparison (i) with digital time-domain simulation results obtained from the PSCAD/EMTDC software tool and (ii), where applicable, with eigenvalue analysis results.

  20. Hydro-abrasive jet machining modeling for computer control and optimization

    SciTech Connect

    Groppetti, R.; Jovane, F.

    1993-06-01

    Use of hydro-abrasive jet machining (HAJM) for machining a wide variety of materials--metals, polymers, ceramics, fiber-reinforced composites, metal-matrix composites, and bonded or hybridized materials--primarily for two- and three-dimensional cutting and also for drilling, turning, milling, and deburring, has been reported. However, the potential of this innovative process has not been explored fully. This article discusses process control, integration, and optimization of HAJM to establish a platform for the implementation of real-time adaptive control constraint (ACC), adaptive control optimization (ACO), and CAD/CAM integration. It presents the approach followed and the main results obtained during the development, implementation, automation, and integration of a HAJM cell and its computerized controller. The process variables and models reported in the literature were critically analyzed in order to identify the relevant variables, to define a process model suitable for HAJM real-time control and optimization, to correlate process variables and parameters with machining results, and to avoid expensive and time-consuming experiments for determining optimal machining conditions; on this basis, a process prediction and optimization model was identified and implemented. Then, the configuration of the HAJM cell, the architecture, and the multiprogramming operation of the controller in terms of monitoring, control, process result prediction, and process condition optimization were analyzed.

  1. Numerical Modeling of Laser Machining on the Ceramic Surface

    SciTech Connect

    Hardalov, Chavdar Momchilov; Christov, Christo Georgiev; Mihalev, Mihail Stoyanov

    2007-04-23

    A computer model of laser modification of ceramic surfaces is created. The particle distribution in ceramics with various temperature gradients of the surface tension, under heating by both continuous and pulsed laser beams, has been determined. The computer simulations show a significant influence of the Marangoni effect on both the particle and temperature distributions within the melt pool in the case of high-temperature ceramics. The enthalpy-porosity technique for describing the liquid-solid phase change process has been implemented in the model as well.

  2. Vehicle Concept Model Abstractions For Integrated Geometric, Inertial ,Rigid Body, Powertrain and FE Analysis

    DTIC Science & Technology

    2011-06-17

    validation before prototyping. This serialized optimization process provides critical CAE support for NVH assessment to the designer during the...requiring a comprehensive geometric description. They can be used to optimize the architecture layout of a vehicle, conduct iterative design studies...Once the concept vehicle model meets the minimum requirements, detailed models should be developed for localized optimization and final design

  3. Modeling aspects of estuarine eutrophication. (Latest citations from the Selected Water Resources Abstracts database). Published Search

    SciTech Connect

    Not Available

    1993-05-01

    The bibliography contains citations concerning mathematical modeling of existing water quality stresses in estuaries, harbors, bays, and coves. Both physical hydraulic and numerical models for estuarine circulation are discussed. (Contains a minimum of 96 citations and includes a subject term index and title list.)

  4. Fractured rock hydrogeology: Modeling studies. (Latest citations from the Selected Water Resources Abstracts database). Published Search

    SciTech Connect

    Not Available

    1993-07-01

    The bibliography contains citations concerning the use of mathematical and conceptual models in describing the hydraulic parameters of fluid flow in fractured rock. Topics include the use of tracers, solute and mass transport studies, and slug test analyses. The use of modeling techniques in injection well performance prediction is also discussed. (Contains 250 citations and includes a subject term index and title list.)

  5. ShrinkWrap: 3D model abstraction for remote sensing simulation

    SciTech Connect

    Pope, Paul A

    2009-01-01

    Remote sensing simulations often require the use of 3D models of objects of interest. There are a multitude of these models available from various commercial sources. There are image processing, computational, database storage, and data access advantages to having a regularized, encapsulating, triangular mesh representing the surface of a 3D object model. However, this is usually not how these models are stored. They can have too much detail in some areas, and not enough detail in others. They can have a mix of planar geometric primitives (triangles, quadrilaterals, n-sided polygons) representing not only the surface of the model, but also interior features. And the exterior mesh is usually neither regularized nor encapsulating. This paper presents a method called SHRINKWRAP which can be used to process 3D object models to achieve output models having the aforementioned desirable traits. The method works by collapsing an encapsulating sphere, which has a regularized triangular mesh on its surface, onto the surface of the model. A GUI has been developed to make it easy to leverage this capability. The SHRINKWRAP processing chain and use of the GUI are described and illustrated.
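
    A toy version of the shrink-wrap idea, collapsing points sampled on an encapsulating sphere onto a surface given by a signed distance function. This is an illustration of the concept only, not the SHRINKWRAP implementation.

    ```python
    import numpy as np

    def shrink_wrap(vertices, sdf, steps=1000):
        """Move points on an encapsulating sphere radially inward until they
        reach the surface defined by sdf (negative inside, positive outside)."""
        pts = vertices.copy()
        for _ in range(steps):
            outside = sdf(pts) > 0
            pts[outside] *= 0.995          # shrink outside points toward the origin
        return pts

    def box_sdf(p, half=np.array([1.0, 0.5, 0.25])):
        """Signed distance to an axis-aligned box centred at the origin."""
        q = np.abs(p) - half
        outside = np.linalg.norm(np.maximum(q, 0.0), axis=1)
        inside = np.minimum(q.max(axis=1), 0.0)
        return outside + inside

    # Random sample of a radius-2 sphere (stand-in for a regular triangular mesh)
    rng = np.random.default_rng(0)
    v = rng.normal(size=(1000, 3))
    v = 2.0 * v / np.linalg.norm(v, axis=1, keepdims=True)
    wrapped = shrink_wrap(v, box_sdf)
    print(np.abs(box_sdf(wrapped)).max())   # all points end up near the surface
    ```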

  6. Abstraction and art.

    PubMed Central

    Gortais, Bernard

    2003-01-01

    In a given social context, artistic creation comprises a set of processes, which relate to the activity of the artist and the activity of the spectator. Through these processes we see and understand that the world is vaster than it is said to be. Artistic processes are mediated experiences that open up the world. A successful work of art expresses a reality beyond actual reality: it suggests an unknown world using the means and the signs of the known world. Artistic practices incorporate the means of creation developed by science and technology and change forms as they change. Artists and the public follow different processes of abstraction at different levels, in the definition of the means of creation, of representation and of perception of a work of art. This paper examines how the processes of abstraction are used within the framework of the visual arts and abstract painting, which appeared during a period of growing importance for the processes of abstraction in science and technology, at the beginning of the twentieth century. The development of digital platforms and new man-machine interfaces allow multimedia creations. This is performed under the constraint of phases of multidisciplinary conceptualization using generic representation languages, which tend to abolish traditional frontiers between the arts: visual arts, drama, dance and music. PMID:12903659

  7. Abstraction and art.

    PubMed

    Gortais, Bernard

    2003-07-29

    In a given social context, artistic creation comprises a set of processes, which relate to the activity of the artist and the activity of the spectator. Through these processes we see and understand that the world is vaster than it is said to be. Artistic processes are mediated experiences that open up the world. A successful work of art expresses a reality beyond actual reality: it suggests an unknown world using the means and the signs of the known world. Artistic practices incorporate the means of creation developed by science and technology and change forms as they change. Artists and the public follow different processes of abstraction at different levels, in the definition of the means of creation, of representation and of perception of a work of art. This paper examines how the processes of abstraction are used within the framework of the visual arts and abstract painting, which appeared during a period of growing importance for the processes of abstraction in science and technology, at the beginning of the twentieth century. The development of digital platforms and new man-machine interfaces allow multimedia creations. This is performed under the constraint of phases of multidisciplinary conceptualization using generic representation languages, which tend to abolish traditional frontiers between the arts: visual arts, drama, dance and music.

  8. The Sausage Machine: A New Two-Stage Parsing Model.

    ERIC Educational Resources Information Center

    Frazier, Lyn; Fodor, Janet Dean

    1978-01-01

    The human sentence parsing device assigns phrase structure to sentences in two steps. The first stage parser assigns lexical and phrasal nodes to substrings of words. The second stage parser then adds higher nodes to link these phrasal packages together into a complete phrase marker. This model is compared with others. (Author/RD)

  9. Modelling rollover behaviour of excavator-based forest machines

    Treesearch

    M.W. Veal; S.E. Taylor; Robert B. Rummer

    2003-01-01

    This poster presentation provides results from analytical and computer simulation models of rollover behaviour of hydraulic excavators. These results are being used as input to the operator protective structure standards development process. Results from rigid body mechanics and computer simulation methods agree well with field rollover test data. These results show...

  10. Fault Modeling of Extreme Scale Applications Using Machine Learning

    DOE PAGES

    Vishnu, Abhinav; Dam, Hubertus van; Tallent, Nathan R.; ...

    2016-05-01

    Faults are commonplace in large scale systems. These systems experience a variety of faults such as transient, permanent and intermittent. Multi-bit faults are typically not corrected by the hardware, resulting in an error. This paper attempts to answer an important question: Given a multi-bit fault in main memory, will it result in an application error — and hence a recovery algorithm should be invoked — or can it be safely ignored? We propose an application fault modeling methodology to answer this question. Given a fault signature (a set of attributes comprising system and application state), we use machine learning to create a model which predicts whether a multi-bit permanent/transient main memory fault will likely result in an error. We present the design elements, such as the fault injection methodology for covering important data structures, the application and system attributes which should be used for learning the model, the supervised learning algorithms (and potentially ensembles), and important metrics. Lastly, we use three applications — NWChem, LULESH and SVM — as examples for demonstrating the effectiveness of the proposed fault modeling methodology.
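
    A minimal sketch of the learning step with synthetic data: a fault signature of system/application attributes is mapped to whether the injected fault produced an application error. The features, the synthetic labels, and the random-forest choice are illustrative assumptions, not the paper's actual design.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import classification_report

    # Hypothetical fault signatures: three attributes per injected fault.
    rng = np.random.default_rng(0)
    n = 2000
    X = np.column_stack([
        rng.integers(0, 64, n),        # flipped-bit position within the word
        rng.uniform(0, 1, n),          # fraction of the data structure still live
        rng.integers(0, 2, n),         # fault landed in a read-mostly region?
    ])
    # Synthetic ground truth: high bits in live, writable data tend to cause errors.
    y = ((X[:, 0] > 40) & (X[:, 1] > 0.5) & (X[:, 2] == 0)).astype(int)

    Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(Xtr, ytr)
    print(classification_report(yte, clf.predict(Xte)))
    ```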

  11. A tool for urban soundscape evaluation applying Support Vector Machines for developing a soundscape classification model.

    PubMed

    Torija, Antonio J; Ruiz, Diego P; Ramos-Ridao, Angel F

    2014-06-01

    To ensure appropriate soundscape management in urban environments, urban-planning authorities need a range of tools that enable such a task to be performed. An essential step in managing urban areas from a sound standpoint is the evaluation of the soundscape in the area, and it has been widely acknowledged that a subjective and acoustical categorization of a soundscape is the first step in evaluating it, providing a basis for designing or adapting it to match people's expectations as well. To this end, this work proposes a model for the automatic classification of urban soundscapes based on underlying acoustical and perceptual criteria, intended as a tool for comprehensive urban soundscape evaluation. Because of the great complexity associated with the problem, two machine learning techniques, Support Vector Machines (SVM) and Support Vector Machines trained with Sequential Minimal Optimization (SMO), are implemented in developing the classification model. The results indicate that the SMO model outperforms the SVM model in the specific task of soundscape classification. With the implementation of the SMO algorithm, the classification model achieves an outstanding performance (91.3% of instances correctly classified).

  12. Ghosts in the Machine. Interoceptive Modeling for Chronic Pain Treatment

    PubMed Central

    Di Lernia, Daniele; Serino, Silvia; Cipresso, Pietro; Riva, Giuseppe

    2016-01-01

    Pain is a complex and multidimensional perception, embodied in our daily experiences through interoceptive appraisal processes. The article reviews the recent literature about interoception along with predictive coding theories and tries to explain a missing link between the sense of the physiological condition of the entire body and the perception of pain in chronic conditions, which are characterized by interoceptive deficits. Understanding chronic pain from an interoceptive point of view allows us to better comprehend the multidimensional nature of this specific organic information, integrating the input of several sources from Gifford's Mature Organism Model to Melzack's neuromatrix. The article proposes the concept of residual interoceptive images (ghosts), to explain the diffuse multilevel nature of chronic pain perceptions. Lastly, we introduce a treatment concept, forged upon the possibility to modify the interoceptive chronic representation of pain through external input in a process that we call interoceptive modeling, with the ultimate goal of reducing pain in chronic subjects. PMID:27445681

  13. (abstract) Generic Modeling of a Life Support System for Process Technology Comparisons

    NASA Technical Reports Server (NTRS)

    Ferrall, J. F.; Seshan, P. K.; Rohatgi, N. K.; Ganapathi, G. B.

    1993-01-01

    This paper describes a simulation model called the Life Support Systems Analysis Simulation Tool (LiSSA-ST), the spreadsheet program called the Life Support Systems Analysis Trade Tool (LiSSA-TT), and the Generic Modular Flow Schematic (GMFS) modeling technique. Results of using the LiSSA-ST and the LiSSA-TT will be presented for comparing life support systems and process technology options for a Lunar Base and a Mars Exploration Mission.

  14. Machine Visual Motion Detection Modeled On Vertebrate Retina

    NASA Astrophysics Data System (ADS)

    Blackburn, M. R.; Nguyen, H. G.; Kaomea, P. K.

    1988-12-01

    Real-time motion analysis would be very useful for autonomous undersea vehicle (AUV) navigation, target tracking, homing, and obstacle avoidance. The perception of motion is well developed in animals from insects to man, providing solutions to similar problems. We have therefore applied a model of the motion analysis subnetwork in the vertebrate retina to visual navigation in the AUV. The model is currently implemented in the C programming language as a discrete- time serial approximation of a continuous-time parallel process. Running on an IBM-PC/AT with digitized video camera images, the system can detect and describe motion in a 16 by 16 receptor field at the rate of 4 updates per second. The system responds accurately with direction and speed information to images moving across the visual field at velocities less than 8 degrees of visual angle per second at signal-to-noise ratios greater than 3. The architecture is parallel and its sparse connections do not require long-term modifications. The model is thus appropriate for implementation in VLSI optoelectronics.

  15. Model-Based Fault Diagnosis in Electric Drives Using Machine Learning

    DTIC Science & Technology

    2005-08-09

    Model-Based Fault Diagnosis in Electric Drives Using Machine Learning Yi L. Murphey, Senior Member, IEEE, M. Abul Masrur, Senior...detection and diagnosis system for electric motors [19]. Their system used a transient empirical predictor modeled by a dynamic recurrent neural...networks and wavelet packet decomposition. Their diagnosis system was tested on a 373-kW and a 597-kW induction motor, and its diagnostics accuracy

  16. ERGONOMICS ABSTRACTS 48347-48982.

    ERIC Educational Resources Information Center

    Ministry of Technology, London (England). Warren Spring Lab.

    IN THIS COLLECTION OF ERGONOMICS ABSTRACTS AND ANNOTATIONS THE FOLLOWING AREAS OF CONCERN ARE REPRESENTED--GENERAL REFERENCES, METHODS, FACILITIES, AND EQUIPMENT RELATING TO ERGONOMICS, SYSTEMS OF MAN AND MACHINES, VISUAL, AUDITORY, AND OTHER SENSORY INPUTS AND PROCESSES (INCLUDING SPEECH AND INTELLIGIBILITY), INPUT CHANNELS, BODY MEASUREMENTS,…

  17. ERGONOMICS ABSTRACTS 48347-48982.

    ERIC Educational Resources Information Center

    Ministry of Technology, London (England). Warren Spring Lab.

    IN THIS COLLECTION OF ERGONOMICS ABSTRACTS AND ANNOTATIONS THE FOLLOWING AREAS OF CONCERN ARE REPRESENTED--GENERAL REFERENCES, METHODS, FACILITIES, AND EQUIPMENT RELATING TO ERGONOMICS, SYSTEMS OF MAN AND MACHINES, VISUAL, AUDITORY, AND OTHER SENSORY INPUTS AND PROCESSES (INCLUDING SPEECH AND INTELLIGIBILITY), INPUT CHANNELS, BODY MEASUREMENTS,…

  18. Interpretation of machine-learning-based disruption models for plasma control

    NASA Astrophysics Data System (ADS)

    Parsons, Matthew S.

    2017-08-01

    While machine learning techniques have been applied within the context of fusion for predicting plasma disruptions in tokamaks, they are typically interpreted with a simple ‘yes/no’ prediction or perhaps a probability forecast. These techniques take input signals, which could be real-time signals from machine diagnostics, to make a prediction of whether a transient event will occur. A major criticism of these methods is that, due to the nature of machine learning, there is no clear correlation between the input signals and the output prediction result. Here a simple method is proposed that could be applied to any existing prediction model to determine how sensitive the state of a plasma is at any given time with respect to the input signals. This is accomplished by computing the gradient of the decision function, which effectively identifies the quickest path away from a disruption as a function of the input signals and therefore could be used in a plasma control setting to avoid disruptions. A numerical example based on a support vector machine model is provided for illustration, and the application to real data is left as an open opportunity.
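
    A minimal sketch of the proposed sensitivity computation for an RBF-kernel SVM, where the gradient of the decision function has a closed form. The two-signal toy data are illustrative; the code assumes a numeric gamma was passed to the classifier.

    ```python
    import numpy as np
    from sklearn.svm import SVC

    def decision_gradient(clf, x):
        """Analytic gradient of an RBF-SVM decision function at point x.
        Its negative points along the quickest path away from the disruptive
        side of the boundary, as proposed above."""
        sv = clf.support_vectors_
        dual = clf.dual_coef_[0]                 # alpha_i * y_i
        diff = x - sv                            # shape (n_sv, n_features)
        k = np.exp(-clf.gamma * np.sum(diff ** 2, axis=1))
        return (dual * k) @ (-2.0 * clf.gamma * diff)

    # Toy two-signal "plasma state": class 1 marks the disruptive region.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(400, 2))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
    clf = SVC(kernel="rbf", gamma=0.5).fit(X, y)
    print(decision_gradient(clf, np.array([0.1, 0.0])))  # per-signal sensitivity
    ```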

  19. Analytical Modeling of a Novel Transverse Flux Machine for Direct Drive Wind Turbine Applications: Preprint

    SciTech Connect

    Hasan, Iftekhar; Husain, Tausif; Uddin, Md Wasi; Sozer, Yilmaz; Husain, Iqbal; Muljadi, Eduard

    2015-08-24

    This paper presents a nonlinear analytical model of a novel double-sided flux concentrating Transverse Flux Machine (TFM) based on the Magnetic Equivalent Circuit (MEC) model. The analytical model uses a series-parallel combination of flux tubes to predict the flux paths through different parts of the machine including air gaps, permanent magnets, stator, and rotor. The two-dimensional MEC model approximates the complex three-dimensional flux paths of the TFM and includes the effects of magnetic saturation. The model is capable of adapting to any geometry that makes it a good alternative for evaluating prospective designs of TFM compared to finite element solvers that are numerically intensive and require more computation time. A single-phase, 1-kW, 400-rpm machine is analytically modeled, and its resulting flux distribution, no-load EMF, and torque are verified with finite element analysis. The results are found to be in agreement, with less than 5% error, while reducing the computation time by 25 times.

  20. Analytical Modeling of a Novel Transverse Flux Machine for Direct Drive Wind Turbine Applications

    SciTech Connect

    Hasan, Iftekhar; Husain, Tausif; Uddin, Md Wasi; Sozer, Yilmaz; Husain, Iqbal; Muljadi, Eduard

    2015-09-02

    This paper presents a nonlinear analytical model of a novel double sided flux concentrating Transverse Flux Machine (TFM) based on the Magnetic Equivalent Circuit (MEC) model. The analytical model uses a series-parallel combination of flux tubes to predict the flux paths through different parts of the machine including air gaps, permanent magnets (PM), stator, and rotor. The two-dimensional MEC model approximates the complex three-dimensional flux paths of the TFM and includes the effects of magnetic saturation. The model is capable of adapting to any geometry, which makes it a good alternative for evaluating prospective designs of TFM as compared to finite element solvers, which are numerically intensive and require more computation time. A single phase, 1 kW, 400 rpm machine is analytically modeled, and its resulting flux distribution, no-load EMF, and torque are verified with Finite Element Analysis (FEA). The results are found to be in agreement, with less than 5% error, while reducing the computation time by 25 times.

  1. Solving the Bose-Hubbard Model with Machine Learning

    NASA Astrophysics Data System (ADS)

    Saito, Hiroki

    2017-09-01

    Motivated by the recent successful application of artificial neural networks to quantum many-body problems [G. Carleo and M. Troyer, Science 355, 602 (2017)], a method to calculate the ground state of the Bose-Hubbard model using a feedforward neural network is proposed. The results are in good agreement with those obtained by exact diagonalization and the Gutzwiller approximation. The method of neural-network quantum states is promising for solving quantum many-body problems of ultracold atoms in optical lattices.

  2. A new second order model for Stirling machines

    NASA Astrophysics Data System (ADS)

    de Cicco, A.; Locorriere, R.; Naso, V.; Bartolini, C. M.

    In order to arrive at a more realistic analysis of Stirling cycle machinery than the classical one due to Finkelstein (1962, 1967), it is necessary to take into account working fluid characteristics, together with the external and regenerative cyclic real heat transfers. The scheme of the generalized model is presently followed to evaluate such secondary effects as those of aerodynamic and thermal losses in each of the components of a Stirling device, from the viewpoint of mass and energy conservation. The dimensionless equations derived allow the evaluation of both engine performance and parametric analysis results.

  3. Vehicle Concept Model Abstractions for Integrated Geometric, Inertial, Rigid Body, Powertrain, and FE Analysis

    DTIC Science & Technology

    2011-01-01

    optimization and final design validation before prototyping. This serialized optimization process provides critical CAE support for NVH assessment to the...detail design. Concept modeling methodologies have been integrated with a goal programming optimization algorithm [4]. The very critical issue of...conceptual design information. They are sufficient to quickly perform vehicle performance evaluations and optimize the vehicle architecture layout based

  4. The Academy for Community College Leadership Advancement, Innovation, and Modeling (ACCLAIM): Abstract.

    ERIC Educational Resources Information Center

    North Carolina State Univ., Raleigh. Academy for Community Coll. Leadership Advancement, Innovation, and Modeling.

    The Academy for Community College Leadership, Innovation, and Modeling (ACCLAIM) is a 3-year pilot project funded by the W. K. Kellogg Foundation, North Carolina State University (NCSU), and the community college systems of Maryland, Virginia, South Carolina, and North Carolina. ACCLAIM's purpose is to help the region's community colleges assume a…

  5. A Simple Computational Model of a jellyfish-like flying machine

    NASA Astrophysics Data System (ADS)

    Fang, Fang; Ristroph, Leif; Shelley, Michael

    2013-11-01

    We explore theoretically the aerodynamics of a jellyfish-like flying machine recently fabricated at NYU. This experimental device achieves flight and hovering by opening and closing a set of flapping wings. It displays orientational flight stability without additional control surfaces or feedback control. Our model machine consists of two symmetric massless flapping wings connected to a body with mass and moment of inertia. A vortex sheet shedding and wake model is used for the flow simulation. Use of the Fast Multipole Method (FMM), and adaptive addition/deletion of vortices, allows us to simulate for long times and resolve complex wakes. We use our model to explore the physical parameters that maintain body hovering, its ascent and descent, and investigate the stability of these states.

  6. Analytical prediction for electromagnetic performance of interior permanent magnet machines based on subdomain model

    NASA Astrophysics Data System (ADS)

    Shin, Kyung-Hun; Park, Hyung-II; Cho, Han-Wook; Choi, Jang-Young

    2017-05-01

    This paper presents an analytical model for the computation of the electromagnetic performance in interior permanent magnet (IPM) machines that accounts for the stator and the complex rotor structure. Using the subdomain method, we propose a simplified analytical model that considers the magnetic properties of the IPM machine. The analytical solutions are derived by solving the field-governing equations in each simple and regular subdomain, i.e., magnet, barrier, air gap, slot opening, and slot, and then applying the boundary conditions to the interfaces between these subdomains. The analytical model accurately accounts for the influence of the interaction between the slots, the relative recoil permeability of the magnets, and the boundary conditions. The magnetic field and electromagnetic performance obtained using the analytical method are compared with those obtained using finite element analysis. Finally, the analytical predictions are compared with the measured data in order to confirm the validity of the methods proposed in this paper.

  7. A model for a multi-class classification machine

    NASA Astrophysics Data System (ADS)

    Rau, Albrecht; Nadal, Jean-Pierre

    1992-06-01

    We consider the properties of multi-class neural networks, where each neuron can be in several different states. The motivations for considering such systems are manifold. In image processing, for example, the different states correspond to the different grey tone levels. Another multi-class classification task implemented on a feed-forward network is the analysis of DNA sequences or the prediction of the secondary structure of proteins from the sequence of amino acids. To investigate the behaviour of such systems, one specific dynamical rule - the “winner-take-all” rule - is studied. Gauge invariances of the model are analysed. For a multi-class perceptron with N Q-state input neurons and a Q′-state output neuron, the maximal number of patterns that can be stored in the large N limit is found to be proportional to N(Q - 1)f(Q′), where f(Q′) is a slowly increasing and bounded function of order 1.

  8. Law machines: scale models, forensic materiality and the making of modern patent law.

    PubMed

    Pottage, Alain

    2011-10-01

    Early US patent law was machine made. Before the Patent Office took on the function of examining patent applications in 1836, questions of novelty and priority were determined in court, within the forum of the infringement action. And at all levels of litigation, from the circuit courts up to the Supreme Court, working models were the media through which doctrine, evidence and argument were made legible, communicated and interpreted. A model could be set on a table, pointed at, picked up, rotated or upended so as to display a point of interest to a particular audience within the courtroom, and, crucially, set in motion to reveal the 'mode of operation' of a machine. The immediate object of demonstration was to distinguish the intangible invention from its tangible embodiment, but models also 'machined' patent law itself. Demonstrations of patent claims with models articulated and resolved a set of conceptual tensions that still make the definition and apprehension of the invention difficult, even today, but they resolved these tensions in the register of materiality, performativity and visibility, rather than the register of conceptuality. The story of models tells us something about how inventions emerge and subsist within the context of patent litigation and patent doctrine, and it offers a starting point for renewed reflection on the question of how technology becomes property.

  9. Modeling and predicting abstract concept or idea introduction and propagation through geopolitical groups

    NASA Astrophysics Data System (ADS)

    Jaenisch, Holger M.; Handley, James W.; Hicklen, Michael L.

    2007-04-01

    This paper describes a novel capability for modeling known idea propagation transformations and predicting responses to new ideas from geopolitical groups. Ideas are captured using semantic words that are text based and bear cognitive definitions. We demonstrate a unique algorithm for converting these into analytical predictive equations. Using the illustrative idea of "proposing a gasoline price increase of $1 per gallon from $2" and its changing perceived impact across 5 demographic groups, we identify 13 cost-of-living Diplomatic, Information, Military, and Economic (DIME) features common across all 5 demographic groups. This enables the modeling and monitoring of the Political, Military, Economic, Social, Information, and Infrastructure (PMESII) effects of this idea on each group and how their "perception" of this proposal changes. Our algorithm and results are summarized in this paper.

  10. Fractured rock hydrogeology (excluding modeling). (Latest citations from the Selected Water Resources Abstracts database). Published Search

    SciTech Connect

    Not Available

    1992-11-01

    The bibliography contains citations concerning the nature and occurrence of groundwater in fractured crystalline and sedimentary rocks. Techniques for determining connectivity and hydraulic conductivity, pollutant distribution in fractures, and site studies in specific geologic environments are among the topics discussed. Citations pertaining to modeling studies of fractured rock hydrogeology are addressed in a separate bibliography. (Contains a minimum of 54 citations and includes a subject term index and title list.)

  11. Fractured rock hydrogeology (excluding modeling). (Latest citations from the Selected Water Resources abstracts database). Published Search

    SciTech Connect

    Not Available

    1994-01-01

    The bibliography contains citations concerning the nature and occurrence of groundwater in fractured crystalline and sedimentary rocks. Techniques for determining connectivity and hydraulic conductivity, pollutant distribution in fractures, and site studies in specific geologic environments are among the topics discussed. Citations pertaining to modeling studies of fractured rock hydrogeology are addressed in a separate bibliography. (Contains a minimum of 62 citations and includes a subject term index and title list.)

  12. Using Machine Learning to Create Turbine Performance Models (Presentation)

    SciTech Connect

    Clifton, A.

    2013-04-01

    Wind turbine power output is known to be a strong function of wind speed, but it is also affected by turbulence and shear. In this work, new aerostructural simulations of a generic 1.5 MW turbine are used to explore atmospheric influences on power output. Most significant is the hub-height wind speed, followed by hub-height turbulence intensity and then wind speed shear across the rotor disk. These simulation data are used to train regression trees that predict the turbine response for any combination of wind speed, turbulence intensity, and wind shear that might be expected at a turbine site. For a randomly selected atmospheric condition, the accuracy of the regression tree power predictions is three times higher than that of the traditional power curve methodology. The regression tree method can also be applied to turbine test data and used to predict turbine performance at a new site. No data beyond those usually collected for a wind resource assessment are required. Implementing the method requires turbine manufacturers to create a turbine regression tree model from test site data. Such an approach could significantly reduce the bias in power predictions that arises because turbulence and shear at the new site differ from those at the test site.
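
    A minimal sketch of the regression-tree approach with a synthetic stand-in for the simulation data: power is predicted from hub-height wind speed, turbulence intensity, and shear. The crude power curve used to generate the data is an assumption for illustration, not the generic 1.5 MW turbine response.

    ```python
    import numpy as np
    from sklearn.tree import DecisionTreeRegressor

    rng = np.random.default_rng(0)
    ws = rng.uniform(3, 25, 5000)          # hub-height wind speed [m/s]
    ti = rng.uniform(0.05, 0.25, 5000)     # turbulence intensity [-]
    shear = rng.uniform(0.0, 0.4, 5000)    # power-law shear exponent [-]

    # Crude stand-in power curve: cubic in wind speed, clipped at rated power,
    # with an ad hoc turbulence/shear penalty.
    power = np.clip(0.5 * 1.225 * 4657 * 0.45 * ws ** 3 / 1e3, 0, 1500)   # [kW]
    power *= 1.0 - 0.5 * ti - 0.1 * shear

    X = np.column_stack([ws, ti, shear])
    tree = DecisionTreeRegressor(min_samples_leaf=20).fit(X[:4000], power[:4000])
    pred = tree.predict(X[4000:])
    rmse = np.sqrt(np.mean((pred - power[4000:]) ** 2))
    print(f"held-out RMS error: {rmse:.1f} kW")
    ```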

  13. Learning with Technology: Video Modeling with Concrete-Representational-Abstract Sequencing for Students with Autism Spectrum Disorder.

    PubMed

    Yakubova, Gulnoza; Hughes, Elizabeth M; Shinaberry, Megan

    2016-07-01

    The purpose of this study was to determine the effectiveness of a video modeling intervention with concrete-representational-abstract instructional sequence in teaching mathematics concepts to students with autism spectrum disorder (ASD). A multiple baseline across skills design of single-case experimental methodology was used to determine the effectiveness of the intervention on the acquisition and maintenance of addition, subtraction, and number comparison skills for four elementary school students with ASD. Findings supported the effectiveness of the intervention in improving skill acquisition and maintenance at a 3-week follow-up. Implications for practice and future research are discussed.

  14. Compounding as Abstract Operation in Semantic Space: Investigating relational effects through a large-scale, data-driven computational model.

    PubMed

    Marelli, Marco; Gagné, Christina L; Spalding, Thomas L

    2017-09-01

    In many languages, compounding is a fundamental process for the generation of novel words. When this process is productive (as, e.g., in English), native speakers can juxtapose two words to create novel compounds that can be readily understood by other speakers. The present paper proposes a large-scale, data-driven computational system for compound semantic processing based on distributional semantics, the CAOSS model (Compounding as Abstract Operation in Semantic Space). In CAOSS, word meanings are represented as vectors encoding their lexical co-occurrences in a reference corpus. Given two constituent words, their composed representation (the compound) is computed by using matrices representing the abstract properties of constituent roles (modifier vs. head). The matrices are also induced through examples of language usage. The model is then validated against behavioral results concerning the processing of novel compounds, and in particular relational effects on response latencies. The effects of relational priming and relational dominance are considered. CAOSS predictions are shown to pattern with previous results, in terms of both the impact of relational information and the dissociations related to the different constituent roles. The simulations indicate that relational information is implicitly reflected in language usage, suggesting that human speakers can learn these aspects from language experience and automatically apply them to the processing of new word combinations. The present model is flexible enough to emulate this procedure, suggesting that relational effects might emerge as a by-product of nuanced operations across distributional patterns.
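
    A minimal sketch of the CAOSS composition step: compound vectors are modeled as M·u + H·v, with the role matrices M and H estimated by ridge regression from example compounds. Random vectors stand in for real distributional vectors; the dimensions and regularization are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    d, n = 50, 400                                   # vector size, training pairs
    U = rng.normal(size=(n, d))                      # modifier vectors
    V = rng.normal(size=(n, d))                      # head vectors
    M_true = rng.normal(size=(d, d)) / d ** 0.5      # hidden role matrices used
    H_true = rng.normal(size=(d, d)) / d ** 0.5      # to synthesize "observed" data
    C = U @ M_true.T + V @ H_true.T                  # observed compound vectors

    # Ridge regression for the stacked role matrices [M H]
    Z = np.hstack([U, V])                            # (n, 2d)
    lam = 1e-2
    W = np.linalg.solve(Z.T @ Z + lam * np.eye(2 * d), Z.T @ C)   # (2d, d)
    M_hat, H_hat = W[:d].T, W[d:].T

    def compose(u, v):
        """Predicted compound meaning for modifier u and head v."""
        return M_hat @ u + H_hat @ v

    print(np.allclose(compose(U[0], V[0]), C[0], atol=1e-2))
    ```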

  15. Monte Carlo simulation of domain growth in the kinetic Ising model on the connection machine

    NASA Astrophysics Data System (ADS)

    Amar, Jacques G.; Sullivan, Francis

    1989-10-01

    A fast multispin algorithm for the Monte Carlo simulation of the two-dimensional spin-exchange kinetic Ising model, previously described by Sullivan and Mountain and used by Amar et al., has been adapted for use on the Connection Machine and applied as a first test in a calculation of domain growth. Features of the code include: (a) the use of demon bits, (b) the simulation of several runs simultaneously to improve the efficiency of the code, (c) the use of virtual processors to simulate easily and efficiently a larger system size, (d) the use of the (NEWS) grid for fast communication between neighbouring processors and updating of boundary layers, (e) the implementation of an efficient random number generator much faster than that provided by Thinking Machines Corp., and (f) the use of the LISP function "funcall" to select which processors to update. Overall speed of the code when run on a (128x128) processor machine is about 130 million attempted spin-exchanges per second, about 9 times faster than the comparable code using hardware vectorised-logic operations and 64-bit multispin coding on the Cyber 205. The same code can be used on a larger machine (65 536 processors) and should produce speeds in excess of 500 million attempted spin-exchanges per second.
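
    For reference, a serial toy version of the spin-exchange (Kawasaki) dynamics that the multispin Connection Machine code parallelizes: propose a nearest-neighbour exchange and accept it with the Metropolis rule. Lattice size, temperature, and sweep count are illustrative choices.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    L, T, sweeps = 64, 1.0, 50                  # lattice size, temperature (J = 1)
    spins = rng.choice([-1, 1], size=(L, L))    # conserved 50/50 composition

    def site_energy(s, i, j):
        """Ising energy of site (i, j) with its four neighbours (periodic)."""
        return -s[i, j] * (s[(i + 1) % L, j] + s[(i - 1) % L, j]
                           + s[i, (j + 1) % L] + s[i, (j - 1) % L])

    for _ in range(sweeps * L * L):
        i, j = rng.integers(0, L, size=2)
        if rng.random() < 0.5:                  # pick a right or down neighbour
            i2, j2 = (i + 1) % L, j
        else:
            i2, j2 = i, (j + 1) % L
        if spins[i, j] == spins[i2, j2]:
            continue                            # exchanging equal spins does nothing
        # The shared bond is counted twice below, but it is invariant under the
        # exchange, so the double count cancels in the energy difference.
        e0 = site_energy(spins, i, j) + site_energy(spins, i2, j2)
        spins[i, j], spins[i2, j2] = spins[i2, j2], spins[i, j]
        dE = site_energy(spins, i, j) + site_energy(spins, i2, j2) - e0
        if dE > 0 and rng.random() >= np.exp(-dE / T):
            spins[i, j], spins[i2, j2] = spins[i2, j2], spins[i, j]   # reject

    energy = -np.mean(spins * (np.roll(spins, 1, 0) + np.roll(spins, 1, 1)))
    print("energy per spin:", energy)
    ```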

  16. Abstraction of mechanistic sorption model results for performance assessment calculations at Yucca Mountain, Nevada

    SciTech Connect

    Turner, D.R.; Pabalan, R.T.

    1999-11-01

    Sorption onto minerals in the geologic setting may help to mitigate potential radionuclide transport from the proposed high-level radioactive waste repository at Yucca Mountain (YM), Nevada. An approach is developed for including aspects of more mechanistic sorption models into current probabilistic performance assessment (PA) calculations. Data on water chemistry from the vicinity of YM are screened and used to calculate the ranges in parameters that could exert control on radionuclide sorption behavior. Using a diffuse-layer surface complexation model, sorption parameters for Np(V) and U(VI) are calculated based on the chemistry of each water sample. Model results suggest that log normal probability distribution functions (PDFs) of sorption parameters are appropriate for most of the samples, but the calculated range is almost five orders of magnitude for Np(V) sorption and nine orders of magnitude for U(VI) sorption. Calculated sorption parameters may also vary at a single sample location by almost a factor of 10 over time periods of the order of days to years due to changes in chemistry, although sampling and analytical methodologies may introduce artifacts that add uncertainty to the evaluation of these fluctuations. Finally, correlation coefficients between the calculated Np(V) and U(VI) sorption parameters can be included as input into PA sampling routines, so that the value selected for one radionuclide sorption parameter is conditioned by its statistical relationship to the others. The approaches outlined here can be adapted readily to current PA efforts, using site-specific information to provide geochemical constraints on PDFs for radionuclide transport parameters.

  17. Abstract: Comparing Semiparametric and Parametric Methods for Modeling Interactions Among Latent Variables.

    PubMed

    Baldasaro, Ruth E; Bauer, Daniel J

    2011-11-30

    Many approaches have been proposed to estimate interactions among latent variables. These methods often assume a specific functional form for the interaction, such as a bilinear interaction. Theory is seldom specific enough to provide a functional form for an interaction, however, so a more exploratory, diagnostic approach may often be required. Bauer (2005) proposed a semiparametric approach that allows for the estimation of interaction effects of unknown functional form among latent variables. A structural equation mixture model (SEMM) is first fit to the data. Then an approximation of the interaction is obtained by aggregating over the mixing components. A simulation study is used to compare the performance of this semiparametric approach with two parametric approaches: the latent moderated structures approach (Klein & Moosbrugger, 2000) and the unconstrained product-indicator approach (Marsh, Wen, & Hau, 2004). Data were generated from four functional forms: main effects only, quadratic trend, bilinear interaction, and exponential interaction. Estimates of bias and root mean squared error of approximation were calculated by comparing the surface used to generate the data and the model-implied surface constructed from each approach. As expected, the parametric approaches were more efficient than the SEMM. For the main effects model, bias was similar for both the SEMM and parametric approaches. For the bilinear interaction, the parametric approaches provided nearly identical results, although the SEMM approach was slightly more biased. When the parametric approaches assumed a bilinear interaction and the data were generated from a quadratic trend or an exponential interaction, the parametric approaches generated biased estimates of the true surface. The SEMM approach approximated the true data generation surface with a similarly low level of bias for all the nonlinear surfaces. For example, Figure 1 shows the true surface for the bilinear interaction along with the

  18. A paradigm for data-driven predictive modeling using field inversion and machine learning

    NASA Astrophysics Data System (ADS)

    Parish, Eric J.; Duraisamy, Karthik

    2016-01-01

    We propose a modeling paradigm, termed field inversion and machine learning (FIML), that seeks to comprehensively harness data from sources such as high-fidelity simulations and experiments to aid the creation of improved closure models for computational physics applications. In contrast to inferring model parameters, this work uses inverse modeling to obtain corrective, spatially distributed functional terms, offering a route to directly address model-form errors. Once the inference has been performed over a number of problems that are representative of the deficient physics in the closure model, machine learning techniques are used to reconstruct the model corrections in terms of variables that appear in the closure model. These reconstructed functional forms are then used to augment the closure model in a predictive computational setting. As a first demonstrative example, a scalar ordinary differential equation is considered, wherein the model equation has missing and deficient terms. Following this, the methodology is extended to the prediction of turbulent channel flow. In both of these applications, the approach is demonstrated to be able to successfully reconstruct functional corrections and yield accurate predictive solutions while providing a measure of model form uncertainties.
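
    A minimal sketch of the two FIML steps on a scalar ODE, loosely in the spirit of the paper's first example; the true and deficient model equations, the piecewise-linear corrective field, and the optimizer choice are all illustrative assumptions.

      import numpy as np
      from scipy.integrate import solve_ivp
      from scipy.optimize import minimize
      from sklearn.ensemble import RandomForestRegressor

      # "Truth" and deficient model for a scalar ODE (illustrative choices):
      # truth: dy/dt = -y**1.5; deficient model: dy/dt = -y.
      t_eval = np.linspace(0.0, 2.0, 21)
      truth = solve_ivp(lambda t, y: [-y[0] ** 1.5], (0, 2), [1.0],
                        t_eval=t_eval).y[0]

      def run_model(beta):
          # Deficient model augmented with a corrective field beta(t),
          # represented by values at t_eval and linearly interpolated.
          f = lambda t, y: [-y[0] * np.interp(t, t_eval, beta)]
          return solve_ivp(f, (0, 2), [1.0], t_eval=t_eval).y[0]

      # Step 1 -- field inversion: find the beta field minimizing the misfit.
      res = minimize(lambda b: np.sum((run_model(b) - truth) ** 2),
                     x0=np.ones_like(t_eval), method="L-BFGS-B")
      beta = res.x

      # Step 2 -- machine learning: regress beta on model-local variables
      # (here just y itself), for reuse in predictive settings.
      ml = RandomForestRegressor(n_estimators=200, random_state=0)
      ml.fit(run_model(beta).reshape(-1, 1), beta)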

  19. Machine Learning Techniques for Combining Multi-Model Climate Projections (Invited)

    NASA Astrophysics Data System (ADS)

    Monteleoni, C.

    2013-12-01

    The threat of climate change is one of the greatest challenges currently facing society. Given the profound impact machine learning has made on the natural sciences to which it has been applied, such as the field of bioinformatics, machine learning is poised to accelerate discovery in climate science. Recent advances in the fledgling field of climate informatics have demonstrated the promise of machine learning techniques for problems in climate science. A key problem in climate science is how to combine the projections of the multi-model ensemble of global climate models that inform the Intergovernmental Panel on Climate Change (IPCC). I will present three approaches to this problem. Our Tracking Climate Models (TCM) work demonstrated the promise of an algorithm for online learning with expert advice for this task. Given temperature projections and hindcasts from 20 IPCC global climate models, and over 100 years of historical temperature data, TCM generated predictions that tracked the changing sequence of which model currently predicts best. On historical data, at both annual and monthly time-scales, and in future simulations, TCM consistently outperformed the average over climate models, the existing benchmark in climate science, at both global and continental scales. We then extended TCM to take into account climate model projections at higher spatial resolutions, and to model geospatial neighborhood influence between regions. Our second algorithm enables neighborhood influence by modifying the transition dynamics of the Hidden Markov Model from which TCM is derived, allowing the performance of spatial neighbors to influence the temporal switching probabilities for the best climate model at a given location. We recently applied a third technique, sparse matrix completion, in which we create a sparse (incomplete) matrix from climate model projections/hindcasts and observed temperature data, and apply a matrix completion algorithm to recover it, yielding
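
    A generic sketch of online learning with expert advice via multiplicative weights, the algorithm family behind TCM; this is not the authors' exact algorithm, and the learning rate and demo data are arbitrary choices.

      import numpy as np

      def track_experts(predictions, observations, eta=1.0):
          # predictions: (T, K) array, one column per climate model (expert);
          # observations: (T,) observed values. Returns the learner's forecasts.
          T, K = predictions.shape
          w = np.ones(K) / K
          out = np.empty(T)
          for t in range(T):
              out[t] = w @ predictions[t]                # weighted forecast
              losses = (predictions[t] - observations[t]) ** 2
              w *= np.exp(-eta * losses)                 # down-weight poor experts
              w /= w.sum()
          return out

      # Demo with two synthetic "models", one of which drifts away over time:
      rng = np.random.default_rng(0)
      obs = np.sin(np.linspace(0, 10, 200))
      preds = np.column_stack([obs + 0.1 * rng.normal(size=200),
                               obs + np.linspace(0, 2, 200)])
      forecast = track_experts(preds, obs)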

  20. A study of sound transmission in an abstract middle ear using physical and finite element models

    PubMed Central

    Gonzalez-Herrera, Antonio; Olson, Elizabeth S.

    2015-01-01

    The classical picture of middle ear (ME) transmission has the tympanic membrane (TM) as a piston and the ME cavity as a vacuum. In reality, the TM moves in a complex multiphasic pattern and substantial pressure is radiated into the ME cavity by the motion of the TM. This study explores ME transmission with a simple model, using a tube terminated with a plastic membrane. Membrane motion was measured with a laser interferometer and pressure on both sides of the membrane with micro-sensors that could be positioned close to the membrane without disturbance. A finite element model of the system explored the experimental results. Both experimental and theoretical results show resonances that are in some cases primarily acoustical or mechanical and sometimes produced by coupled acousto-mechanics. The largest membrane motions were a result of the membrane's mechanical resonances. At these resonant frequencies, sound transmission through the system was larger with the membrane in place than it was when the membrane was absent. PMID:26627771

  1. Modeling Physical Processes at the Nanoscale—Insight into Self-Organization of Small Systems (abstract)

    NASA Astrophysics Data System (ADS)

    Proykova, Ana

    2009-04-01

    Essential contributions have been made in the field of finite-size systems of ingredients interacting with potentials of various ranges. Theoretical simulations have revealed peculiar size effects on stability, ground state structure, phases, and phase transformation of systems confined in space and time. Models developed in the field of pure physics (atomic and molecular clusters) have been extended and successfully transferred to finite-size systems that seem very different—small-scale financial markets, autoimmune reactions, and social group reactions to advertisements. The models show that small-scale markets diverge unexpectedly fast as a result of small fluctuations; autoimmune reactions are sequences of two discontinuous phase transitions; and social groups possess critical behavior (social percolation) under the influence of an external field (advertisement). Some predicted size-dependent properties have been experimentally observed. These findings lead to the hypothesis that restrictions on an object's size determine the object's total internal (configuration) and external (environmental) interactions. Since phases are emergent phenomena produced by self-organization of a large number of particles, the occurrence of a phase in a system containing a small number of ingredients is remarkable.

  2. River Flow Forecasting: a Hybrid Model of Self Organizing Maps and Least Square Support Vector Machine

    NASA Astrophysics Data System (ADS)

    Ismail, S.; Samsudin, R.; Shabri, A.

    2010-10-01

    Successful river flow time series forecasting is a major goal and an essential procedure that is necessary in water resources planning and management. This study introduces a new hybrid model based on a combination of two familiar non-linear methods of mathematical modeling: the Self Organizing Map (SOM) and the Least Square Support Vector Machine (LSSVM), referred to as the SOM-LSSVM model. The hybrid model uses the SOM algorithm to cluster the training data into several disjoint clusters, and an individual LSSVM is used to forecast the river flow within each cluster. The feasibility of the proposed model is evaluated on actual river flow data from the Bernam River located in Selangor, Malaysia. The results are compared with those obtained using LSSVM and artificial neural network (ANN) models. The experimental results show that the SOM-LSSVM model outperforms the other models for forecasting river flow. This indicates that the proposed model forecasts more precisely and provides a promising alternative technique for river flow forecasting.
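
    The cluster-then-regress structure can be sketched as follows, with k-means standing in for the SOM and scikit-learn's SVR standing in for the LSSVM; the lagged-flow features, synthetic data and all settings are illustrative assumptions.

      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.svm import SVR

      rng = np.random.default_rng(3)
      flow = np.cumsum(rng.normal(size=1000)) + 50.0   # synthetic daily flows
      X = np.column_stack([flow[i:i - 3] for i in range(3)])  # 3 lagged values
      y = flow[3:]

      # Cluster the input patterns (KMeans here, SOM in the paper), then fit
      # one regressor per cluster (SVR here, LSSVM in the paper).
      km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
      models = {c: SVR().fit(X[km.labels_ == c], y[km.labels_ == c])
                for c in range(4)}

      def predict(x):
          c = km.predict(x.reshape(1, -1))[0]   # route to the matching cluster
          return models[c].predict(x.reshape(1, -1))[0]

      print(predict(X[-1]))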

  3. Prediction modeling using EHR data: challenges, strategies, and a comparison of machine learning approaches.

    PubMed

    Wu, Jionglin; Roy, Jason; Stewart, Walter F

    2010-06-01

    Electronic health record (EHR) databases contain vast amounts of information about patients. Machine learning techniques such as Boosting and the support vector machine (SVM) can potentially identify patients at high risk for serious conditions, such as heart disease, from EHR data. However, these techniques have not yet been widely tested. The objectives were to model detection of heart failure more than 6 months before the actual date of clinical diagnosis using machine learning techniques applied to EHR data, and to compare the performance of logistic regression, SVM, and Boosting, along with various variable selection methods, in heart failure prediction. Geisinger Clinic primary care patients with data in the EHR from 2001 to 2006 who were diagnosed with heart failure between 2003 and 2006 were identified. Controls were randomly selected, matched on sex, age, and clinic, for this nested case-control study. The area under the receiver operating characteristic curve (AUC) was computed for each method using 10-fold cross-validation. The number of variables selected by each method was compared. Logistic regression with model selection based on the Bayesian information criterion provided the most parsimonious model, with about 10 variables selected on average, while maintaining a high AUC (0.77 in 10-fold cross-validation). Boosting with a strict variable importance threshold provided similar performance. Heart failure was predicted more than 6 months before clinical diagnosis, with an AUC of about 0.76, using logistic regression and Boosting. These results were achieved even with strict model selection criteria. SVM had the poorest performance, possibly because of imbalanced data.
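
    The evaluation protocol, 10-fold cross-validated AUC for several classifiers, can be sketched as follows on synthetic data (a stand-in for the Geisinger EHR data, which is not public; all settings are illustrative).

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.ensemble import GradientBoostingClassifier
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC

      # Synthetic stand-in for the case-control data.
      X, y = make_classification(n_samples=2000, n_features=50,
                                 n_informative=10, random_state=0)

      models = {
          "logistic": make_pipeline(StandardScaler(),
                                    LogisticRegression(max_iter=1000)),
          "boosting": GradientBoostingClassifier(random_state=0),
          "svm": make_pipeline(StandardScaler(), SVC()),
      }
      for name, model in models.items():
          auc = cross_val_score(model, X, y, cv=10, scoring="roc_auc")
          print(f"{name}: AUC = {auc.mean():.3f} +/- {auc.std():.3f}")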

  4. A Model-Free Machine Learning Method for Risk Classification and Survival Probability Prediction.

    PubMed

    Geng, Yuan; Lu, Wenbin; Zhang, Hao Helen

    2014-01-01

    Risk classification and survival probability prediction are two major goals in survival data analysis, since they play an important role in patients' risk stratification, long-term diagnosis, and treatment selection. In this article, we propose a new model-free machine learning framework for risk classification and survival probability prediction based on weighted support vector machines. The new procedure does not require any specific parametric or semiparametric model assumption on the data, and is therefore capable of capturing nonlinear covariate effects. We use numerous simulation examples to demonstrate the finite sample performance of the proposed method under various settings. Applications to glioma tumor data and breast cancer gene expression survival data illustrate the new methodology in real data analysis.

  5. Integration of Error Compensation of Coordinate Measuring Machines into Feature Measurement: Part I—Model Development

    PubMed Central

    Calvo, Roque; D’Amato, Roberto; Gómez, Emilio; Domingo, Rosario

    2016-01-01

    The development of an error compensation model for coordinate measuring machines (CMMs) and its integration into feature measurement is presented. CMMs are widespread and dependable instruments in industry and laboratories for dimensional measurement. From the tip probe sensor to the machine display, there is a complex transformation of probed point coordinates through the geometrical feature model that makes the assessment of accuracy and uncertainty measurement results difficult. Therefore, error compensation is not standardized, conversely to other simpler instruments. Detailed coordinate error compensation models are generally based on CMM as a rigid-body and it requires a detailed mapping of the CMM’s behavior. In this paper a new model type of error compensation is proposed. It evaluates the error from the vectorial composition of length error by axis and its integration into the geometrical measurement model. The non-explained variability by the model is incorporated into the uncertainty budget. Model parameters are analyzed and linked to the geometrical errors and uncertainty of CMM response. Next, the outstanding measurement models of flatness, angle, and roundness are developed. The proposed models are useful for measurement improvement with easy integration into CMM signal processing, in particular in industrial environments where built-in solutions are sought. A battery of implementation tests are presented in Part II, where the experimental endorsement of the model is included. PMID:27690052

  6. Integration of Error Compensation of Coordinate Measuring Machines into Feature Measurement: Part I-Model Development.

    PubMed

    Calvo, Roque; D'Amato, Roberto; Gómez, Emilio; Domingo, Rosario

    2016-09-29

    The development of an error compensation model for coordinate measuring machines (CMMs) and its integration into feature measurement is presented. CMMs are widespread and dependable instruments in industry and laboratories for dimensional measurement. From the tip probe sensor to the machine display, there is a complex transformation of probed point coordinates through the geometrical feature model that makes the assessment of accuracy and uncertainty measurement results difficult. Therefore, error compensation is not standardized, conversely to other simpler instruments. Detailed coordinate error compensation models are generally based on CMM as a rigid-body and it requires a detailed mapping of the CMM's behavior. In this paper a new model type of error compensation is proposed. It evaluates the error from the vectorial composition of length error by axis and its integration into the geometrical measurement model. The non-explained variability by the model is incorporated into the uncertainty budget. Model parameters are analyzed and linked to the geometrical errors and uncertainty of CMM response. Next, the outstanding measurement models of flatness, angle, and roundness are developed. The proposed models are useful for measurement improvement with easy integration into CMM signal processing, in particular in industrial environments where built-in solutions are sought. A battery of implementation tests are presented in Part II, where the experimental endorsement of the model is included.

  7. Calibration drift in regression and machine learning models for acute kidney injury.

    PubMed

    Davis, Sharon E; Lasko, Thomas A; Chen, Guanhua; Siew, Edward D; Matheny, Michael E

    2017-03-31

    Predictive analytics create opportunities to incorporate personalized risk estimates into clinical decision support. Models must be well calibrated to support decision-making, yet calibration deteriorates over time. This study explored the influence of modeling methods on performance drift and connected observed drift with data shifts in the patient population. Using 2003 admissions to Department of Veterans Affairs hospitals nationwide, we developed 7 parallel models for hospital-acquired acute kidney injury using common regression and machine learning methods, validating each over 9 subsequent years. Discrimination was maintained for all models. Calibration declined as all models increasingly overpredicted risk. However, the random forest and neural network models maintained calibration across ranges of probability, capturing more admissions than did the regression models. The magnitude of overprediction increased over time for the regression models while remaining stable and small for the machine learning models. Changes in the rate of acute kidney injury were strongly linked to increasing overprediction, while changes in predictor-outcome associations corresponded with diverging patterns of calibration drift across methods. Efficient and effective updating protocols will be essential for maintaining accuracy of, user confidence in, and safety of personalized risk predictions to support decision-making. Model updating protocols should be tailored to account for variations in calibration drift across methods and respond to periods of rapid performance drift rather than be limited to regularly scheduled annual or biannual intervals.

  8. A mathematical model of the controlled axial flow divider for mobile machines

    NASA Astrophysics Data System (ADS)

    Mulyukin, V. L.; Karelin, D. L.; Belousov, A. M.

    2016-06-01

    The authors present a mathematical model of the axial adjustable flow divider that allows one to define the parameters of the feed pump and the hydraulic motor-wheels in the multi-circuit hydrostatic transmission of mobile machines. Characteristic curves built with the model allow one, for example, to evaluate clearly the mutual influence of pressure and flow values across all input and output circuits of the system.

  9. Applications of hand-arm models in the investigation of the interaction between man and machine.

    PubMed

    Jahn, R; Hesse, M

    1986-08-01

    The mode of vibration of hand-held tools cannot be considered without knowledge of the influence of the operator's hand-arm system. Therefore some technical applications of hand-arm models were realized for drill hammers by the University of Dortmund. These applications are a software program to simulate the motion of machine components, a horizontal drilling jig, and a chucking device in a drilling rig.

  10. RMP model based optimization of power system stabilizers in multi-machine power system.

    PubMed

    Baek, Seung-Mook; Park, Jung-Wook

    2009-01-01

    This paper describes the nonlinear parameter optimization of a power system stabilizer (PSS) by using the reduced multivariate polynomial (RMP) algorithm with the one-shot property. The RMP model estimates the Hessian matrix of second-order partial derivatives after identifying the trajectory sensitivities, which can be computed from hybrid system modeling with a differential-algebraic-impulsive-switched (DAIS) structure for a power system. Then, any nonlinear controller in the power system can be optimized by achieving a desired performance measure, mathematically represented by an objective function (OF). In this paper, the output saturation limiter of the PSS, which is used to improve low-frequency oscillation damping performance during a large disturbance, is optimally tuned exploiting the Hessian estimated by the RMP model. Its performance is evaluated with several case studies on both a single-machine infinite bus (SMIB) system and a multi-machine power system (MMPS) by time-domain simulation. In particular, all nonlinear parameters of the multiple PSSs on the IEEE benchmark two-area four-machine power system are optimized to be robust against various disturbances by using the weighted sum of the OFs.

  11. The modified nodal analysis method applied to the modeling of the thermal circuit of an asynchronous machine

    NASA Astrophysics Data System (ADS)

    Nedelcu, O.; Salisteanu, C. I.; Popa, F.; Salisteanu, B.; Oprescu, C. V.; Dogaru, V.

    2017-01-01

    The complexity of the electrical circuits, or of the equivalent thermal circuits, to be analyzed and solved determines how much computation a given solution method requires, so the solving method must be chosen accordingly. The heating and ventilation systems of electrical machines that have to be modeled result in complex equivalent electrical circuits of large dimensions, which requires the use of the most efficient methods of solving them. The purpose of the thermal calculation of electrical machines is to establish the heating, i.e. the temperature rises (over-temperatures) in parts of the machine relative to the ambient temperature, in a given operating mode of the machine. The paper presents the application of the modified nodal analysis method to the modeling of the thermal circuit of an asynchronous machine.
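
    For illustration, plain nodal analysis of a small equivalent thermal circuit (a special case of the modified method, which additionally handles imposed-temperature branches) reduces to stamping conductances into a matrix and solving a linear system; all node assignments and numerical values below are hypothetical.

      import numpy as np

      n = 3                          # machine-part nodes; node 0 is ambient
      G = np.zeros((n, n))           # thermal conductance matrix (W/K)
      P = np.zeros(n)                # injected losses (W)

      def stamp(i, j, g):
          # Stamp thermal conductance g between nodes i and j (0 = ambient).
          if i: G[i - 1, i - 1] += g
          if j: G[j - 1, j - 1] += g
          if i and j:
              G[i - 1, j - 1] -= g
              G[j - 1, i - 1] -= g

      stamp(1, 2, 5.0); stamp(2, 3, 3.0); stamp(1, 0, 2.0); stamp(3, 0, 4.0)
      P[0] = 120.0                   # e.g., stator copper losses at node 1
      P[2] = 60.0                    # e.g., friction losses at node 3

      theta = np.linalg.solve(G, P)  # over-temperatures above ambient (K)
      print(theta)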

  12. Hierarchical analytical and simulation modelling of human-machine systems with interference

    NASA Astrophysics Data System (ADS)

    Braginsky, M. Ya; Tarakanov, D. V.; Tsapko, S. G.; Tsapko, I. V.; Baglaeva, E. A.

    2017-01-01

    The article considers the principles of building an analytical and simulation model of the human operator and of the industrial control system hardware and software. E-networks, an extension of Petri nets, are used as the mathematical apparatus. This approach allows simulating complex parallel distributed processes in human-machine systems. A structural, hierarchical approach is used to build the mathematical model of the human operator. The upper level of the human operator model is a logical dynamic model of decision making based on E-networks. The lower level reflects the psychophysiological characteristics of the human operator.

  13. A model of unsteady spatially inhomogeneous flow in a radial-axial blade machine

    NASA Astrophysics Data System (ADS)

    Ambrozhevich, A. V.; Munshtukov, D. A.

    A two-dimensional model of the gasdynamic process in a radial-axial blade machine is proposed which allows for the instantaneous local state of the field of flow parameters, changes in the set angles along the median profile line, profile losses, and centrifugal and Coriolis forces. The model also allows for the injection of cooling air and completion of fuel combustion in the flow. The model is equally applicable to turbines and compressors. The use of the method of singularities provides for a unified and relatively simple description of various factors affecting the flow and, therefore, for computational efficiency.

  14. Predictive modeling of human operator cognitive state via sparse and robust support vector machines.

    PubMed

    Zhang, Jian-Hua; Qin, Pan-Pan; Raisch, Jörg; Wang, Ru-Bin

    2013-10-01

    The accurate prediction of the temporal variations in human operator cognitive state (HCS) is of great practical importance in many real-world safety-critical situations. However, since the relationship between the HCS and electrophysiological responses of the operator is basically unknown, complicated and uncertain, only data-based modeling method can be employed. This paper is aimed at constructing a data-driven computationally intelligent model, based on multiple psychophysiological and performance measures, to accurately estimate the HCS in the context of a safety-critical human-machine system. The advanced least squares support vector machines (LS-SVM), whose parameters are optimized by grid search and cross-validation techniques, are adopted for the purpose of predictive modeling of the HCS. The sparse and weighted LS-SVM (WLS-SVM) were proposed by Suykens et al. to overcome the deficiency of the standard LS-SVM in lacking sparseness and robustness. This paper adopted those two improved LS-SVM algorithms to model the HCS based solely on a set of physiological and operator performance data. The results showed that the sparse LS-SVM can obtain HCS models with sparseness with almost no loss of modeling accuracy, while the WLS-SVM leads to models which are robust in case of noisy training data. Both intelligent system modeling approaches are shown to be capable of capturing the temporal fluctuation trends of the HCS because of their superior generalization performance.
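
    A minimal sketch of standard LS-SVM regression, the base algorithm from which the sparse and weighted variants discussed in the paper are derived; the RBF kernel, hyperparameters and demo data are illustrative assumptions.

      import numpy as np

      def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
          # LS-SVM regression: solve the KKT system
          # [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y], RBF kernel K.
          n = len(y)
          sq = np.sum((X[:, None] - X[None, :]) ** 2, axis=-1)
          K = np.exp(-sq / (2.0 * sigma ** 2))
          A = np.zeros((n + 1, n + 1))
          A[0, 1:] = A[1:, 0] = 1.0
          A[1:, 1:] = K + np.eye(n) / gamma
          sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
          b, alpha = sol[0], sol[1:]

          def predict(Xnew):
              sq = np.sum((Xnew[:, None] - X[None, :]) ** 2, axis=-1)
              return np.exp(-sq / (2.0 * sigma ** 2)) @ alpha + b
          return predict

      rng = np.random.default_rng(4)
      X = rng.uniform(-3, 3, size=(80, 1))
      y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=80)
      predict = lssvm_fit(X, y)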

  15. Hypoglycemia prediction using machine learning models for patients with type 2 diabetes.

    PubMed

    Sudharsan, Bharath; Peeples, Malinda; Shomali, Mansur

    2015-01-01

    Minimizing the occurrence of hypoglycemia in patients with type 2 diabetes is a challenging task since these patients typically check only 1 to 2 self-monitored blood glucose (SMBG) readings per day. We trained a probabilistic model using machine learning algorithms and SMBG values from real patients. Hypoglycemia was defined as a SMBG value < 70 mg/dL. We validated our model using multiple data sets. In addition, we trained a second model, which used patient SMBG values and information about patient medication administration. The optimal number of SMBG values needed by the model was approximately 10 per week. The sensitivity of the model for predicting a hypoglycemia event in the next 24 hours was 92% and the specificity was 70%. In the model that incorporated medication information, the prediction window was the hour of hypoglycemia, and the specificity improved to 90%. Our machine learning models can predict hypoglycemia events with a high degree of sensitivity and specificity. These models, which have been validated retrospectively, could be useful tools for reducing hypoglycemia in vulnerable patients if implemented in real time.

  16. Application of Machine-Learning Models to Predict Tacrolimus Stable Dose in Renal Transplant Recipients

    NASA Astrophysics Data System (ADS)

    Tang, Jie; Liu, Rong; Zhang, Yue-Li; Liu, Mou-Ze; Hu, Yong-Fang; Shao, Ming-Jie; Zhu, Li-Jun; Xin, Hua-Wen; Feng, Gui-Wen; Shang, Wen-Jun; Meng, Xiang-Guang; Zhang, Li-Rong; Ming, Ying-Zi; Zhang, Wei

    2017-02-01

    Tacrolimus has a narrow therapeutic window and considerable variability in clinical use. Our goal was to compare the performance of multiple linear regression (MLR) and eight machine learning techniques in pharmacogenetic algorithm-based prediction of tacrolimus stable dose (TSD) in a large Chinese cohort. A total of 1,045 renal transplant patients were recruited, 80% of which were randomly selected as the “derivation cohort” to develop dose-prediction algorithm, while the remaining 20% constituted the “validation cohort” to test the final selected algorithm. MLR, artificial neural network (ANN), regression tree (RT), multivariate adaptive regression splines (MARS), boosted regression tree (BRT), support vector regression (SVR), random forest regression (RFR), lasso regression (LAR) and Bayesian additive regression trees (BART) were applied and their performances were compared in this work. Among all the machine learning models, RT performed best in both derivation [0.71 (0.67-0.76)] and validation cohorts [0.73 (0.63-0.82)]. In addition, the ideal rate of RT was 4% higher than that of MLR. To our knowledge, this is the first study to use machine learning models to predict TSD, which will further facilitate personalized medicine in tacrolimus administration in the future.

  17. Bayesian reliability modeling and assessment solution for NC machine tools under small-sample data

    NASA Astrophysics Data System (ADS)

    Yang, Zhaojun; Kan, Yingnan; Chen, Fei; Xu, Binbin; Chen, Chuanhai; Yang, Chuangui

    2015-11-01

    Although Markov chain Monte Carlo (MCMC) algorithms are accurate, many factors may cause instability when they are utilized in reliability analysis; such instability makes these algorithms unsuitable for widespread engineering applications. Thus, a reliability modeling and assessment solution aimed at small-sample data of numerical control (NC) machine tools is proposed on the basis of Bayes theories. An expert-judgment process of fusing multi-source prior information is developed to obtain the Weibull parameters' prior distributions and reduce the subjective bias of usual expert-judgment methods. The grid approximation method is applied to the two-parameter Weibull distribution to derive the formulas for the parameters' posterior distributions and solve the calculation difficulty of high-dimensional integration. The method is then applied to the real data of a type of NC machine tool to implement a reliability assessment and obtain the mean time between failures (MTBF). The relative error of the proposed method is 5.8020×10⁻⁴ compared with the MTBF obtained by the MCMC algorithm. This result indicates that the proposed method is as accurate as MCMC. The newly developed solution for reliability modeling and assessment of NC machine tools under small-sample data is easy, practical, and highly suitable for widespread application in the engineering field; in addition, the solution does not reduce accuracy.
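
    The grid-approximation step can be sketched as follows for a two-parameter Weibull model; the flat prior, grid ranges and synthetic failure times are placeholders, whereas the paper fuses multi-source expert-judgment priors.

      import numpy as np
      from scipy.special import gamma as gamma_fn

      rng = np.random.default_rng(5)
      data = 1000.0 * rng.weibull(1.5, size=8)   # small failure-time sample (h)

      # Joint posterior of Weibull shape k and scale lam on a grid
      # (flat prior here for brevity).
      k_grid = np.linspace(0.3, 4.0, 200)
      lam_grid = np.linspace(100.0, 3000.0, 200)
      K, LAM = np.meshgrid(k_grid, lam_grid, indexing="ij")

      def log_lik(k, lam):
          t = data
          return np.sum(np.log(k / lam) + (k - 1.0) * np.log(t / lam)
                        - (t / lam) ** k, axis=-1)

      ll = log_lik(K[..., None], LAM[..., None])
      post = np.exp(ll - ll.max())
      post /= post.sum()

      # Posterior mean of MTBF = lam * Gamma(1 + 1/k).
      mtbf = np.sum(post * LAM * gamma_fn(1.0 + 1.0 / K))
      print(f"posterior-mean MTBF: {mtbf:.0f} h")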

  18. Application of Machine-Learning Models to Predict Tacrolimus Stable Dose in Renal Transplant Recipients

    PubMed Central

    Tang, Jie; Liu, Rong; Zhang, Yue-Li; Liu, Mou-Ze; Hu, Yong-Fang; Shao, Ming-Jie; Zhu, Li-Jun; Xin, Hua-Wen; Feng, Gui-Wen; Shang, Wen-Jun; Meng, Xiang-Guang; Zhang, Li-Rong; Ming, Ying-Zi; Zhang, Wei

    2017-01-01

    Tacrolimus has a narrow therapeutic window and considerable variability in clinical use. Our goal was to compare the performance of multiple linear regression (MLR) and eight machine learning techniques in pharmacogenetic algorithm-based prediction of tacrolimus stable dose (TSD) in a large Chinese cohort. A total of 1,045 renal transplant patients were recruited, 80% of which were randomly selected as the “derivation cohort” to develop dose-prediction algorithm, while the remaining 20% constituted the “validation cohort” to test the final selected algorithm. MLR, artificial neural network (ANN), regression tree (RT), multivariate adaptive regression splines (MARS), boosted regression tree (BRT), support vector regression (SVR), random forest regression (RFR), lasso regression (LAR) and Bayesian additive regression trees (BART) were applied and their performances were compared in this work. Among all the machine learning models, RT performed best in both derivation [0.71 (0.67–0.76)] and validation cohorts [0.73 (0.63–0.82)]. In addition, the ideal rate of RT was 4% higher than that of MLR. To our knowledge, this is the first study to use machine learning models to predict TSD, which will further facilitate personalized medicine in tacrolimus administration in the future. PMID:28176850

  19. Application of Machine-Learning Models to Predict Tacrolimus Stable Dose in Renal Transplant Recipients.

    PubMed

    Tang, Jie; Liu, Rong; Zhang, Yue-Li; Liu, Mou-Ze; Hu, Yong-Fang; Shao, Ming-Jie; Zhu, Li-Jun; Xin, Hua-Wen; Feng, Gui-Wen; Shang, Wen-Jun; Meng, Xiang-Guang; Zhang, Li-Rong; Ming, Ying-Zi; Zhang, Wei

    2017-02-08

    Tacrolimus has a narrow therapeutic window and considerable variability in clinical use. Our goal was to compare the performance of multiple linear regression (MLR) and eight machine learning techniques in pharmacogenetic algorithm-based prediction of tacrolimus stable dose (TSD) in a large Chinese cohort. A total of 1,045 renal transplant patients were recruited, 80% of which were randomly selected as the "derivation cohort" to develop dose-prediction algorithm, while the remaining 20% constituted the "validation cohort" to test the final selected algorithm. MLR, artificial neural network (ANN), regression tree (RT), multivariate adaptive regression splines (MARS), boosted regression tree (BRT), support vector regression (SVR), random forest regression (RFR), lasso regression (LAR) and Bayesian additive regression trees (BART) were applied and their performances were compared in this work. Among all the machine learning models, RT performed best in both derivation [0.71 (0.67-0.76)] and validation cohorts [0.73 (0.63-0.82)]. In addition, the ideal rate of RT was 4% higher than that of MLR. To our knowledge, this is the first study to use machine learning models to predict TSD, which will further facilitate personalized medicine in tacrolimus administration in the future.

  20. Ex vivo normothermic machine perfusion is safe, simple, and reliable: results from a large animal model.

    PubMed

    Nassar, Ahmed; Liu, Qiang; Farias, Kevin; D'Amico, Giuseppe; Tom, Cynthia; Grady, Patrick; Bennett, Ana; Diago Uso, Teresa; Eghtesad, Bijan; Kelly, Dympna; Fung, John; Abu-Elmagd, Kareem; Miller, Charles; Quintini, Cristiano

    2015-02-01

    Normothermic machine perfusion (NMP) is an emerging preservation modality that holds the potential to prevent the injury associated with low temperature and to promote the organ repair that follows ischemic cell damage. While several animal studies have shown its superiority over cold storage (CS), few studies in the literature have focused on the safety, feasibility, and reliability of this technology, which represent key factors in its implementation into clinical practice. The aim of the present study is to report safety and performance data on NMP of DCD porcine livers. After 60 minutes of warm ischemia time, 20 pig livers were preserved using either NMP (n = 15; physiologic perfusion temperature) or CS (n = 5) for a preservation time of 10 hours. Livers were then tested on a transplant simulation model for 24 hours. Machine safety was assessed by measuring system failure events, the ability to monitor perfusion parameters, sterility, and vessel integrity. The ability of the machine to preserve injured organs was assessed by liver function tests, hemodynamic parameters, and histology. No system failures were recorded. Target hemodynamic parameters were easily achieved and vascular complications were not encountered. Liver function parameters as well as histology showed significant differences between the 2 groups: NMP livers showed preserved liver function and histological architecture, while CS livers presented postreperfusion parameters consistent with unrecoverable cell injury. Our study shows that NMP is safe, reliable, and provides superior graft preservation compared to CS in our DCD porcine model. © The Author(s) 2014.

  1. Experimental study on light induced influence model to mice using support vector machine

    NASA Astrophysics Data System (ADS)

    Ji, Lei; Zhao, Zhimin; Yu, Yinshan; Zhu, Xingyue

    2014-08-01

    Previous researchers have studied the different effects of light irradiation on animals, including retinal damage, changes in internal indices, and so on. However, a model of light-induced damage to animals that uses physiological indicators as features in a machine learning method had not previously been established. This study was designed to evaluate the changes in microvascular diameter, serum absorption spectrum, and blood flow induced by light irradiation of different wavelengths, powers and exposure times with a support vector machine (SVM). Micrographs of the mice auricle were recorded and the vessel diameters were calculated by a computer program. The serum absorption spectra were analyzed. The results show that training sample rates of 20% and 50% give almost the same correct recognition rate. Better performance and accuracy were achieved by an SVM with a third-order polynomial kernel trained by quadratic optimization, which proved suitable for predicting light-induced damage to organisms.

  2. Machine learning methods enable predictive modeling of antibody feature:function relationships in RV144 vaccinees.

    PubMed

    Choi, Ickwon; Chung, Amy W; Suscovich, Todd J; Rerks-Ngarm, Supachai; Pitisuttithum, Punnee; Nitayaphan, Sorachai; Kaewkungwal, Jaranit; O'Connell, Robert J; Francis, Donald; Robb, Merlin L; Michael, Nelson L; Kim, Jerome H; Alter, Galit; Ackerman, Margaret E; Bailey-Kellogg, Chris

    2015-04-01

    The adaptive immune response to vaccination or infection can lead to the production of specific antibodies to neutralize the pathogen or recruit innate immune effector cells for help. The non-neutralizing role of antibodies in stimulating effector cell responses may have been a key mechanism of the protection observed in the RV144 HIV vaccine trial. In an extensive investigation of a rich set of data collected from RV144 vaccine recipients, we here employ machine learning methods to identify and model associations between antibody features (IgG subclass and antigen specificity) and effector function activities (antibody dependent cellular phagocytosis, cellular cytotoxicity, and cytokine release). We demonstrate via cross-validation that classification and regression approaches can effectively use the antibody features to robustly predict qualitative and quantitative functional outcomes. This integration of antibody feature and function data within a machine learning framework provides a new, objective approach to discovering and assessing multivariate immune correlates.

  3. A model predictive current control of flux-switching permanent magnet machines for torque ripple minimization

    NASA Astrophysics Data System (ADS)

    Huang, Wentao; Hua, Wei; Yu, Feng

    2017-05-01

    Due to the high airgap flux density generated by the magnets and the special doubly salient structure, the cogging torque of the flux-switching permanent magnet (FSPM) machine is considerable, which limits its further application. Based on model predictive current control (MPCC) and compensation control theory, a compensating-current MPCC (CC-MPCC) scheme is proposed and implemented to counteract the dominant components of the cogging torque of an existing three-phase 12/10 FSPM prototype machine, and thus to alleviate the influence of the cogging torque and improve the smoothness of the electromagnetic torque as well as the speed; a comprehensive cost function is designed to evaluate the switching states. Simulation results indicate that the proposed CC-MPCC scheme can suppress the torque ripple significantly and offer satisfactory dynamic performance in comparison with the conventional MPCC strategy. Finally, experimental results validate both the theoretical and simulated predictions.

  4. Physics-informed machine learning approach for reconstructing Reynolds stress modeling discrepancies based on DNS data

    NASA Astrophysics Data System (ADS)

    Wang, Jian-Xun; Wu, Jin-Long; Xiao, Heng

    2017-03-01

    Turbulence modeling is a critical component in numerical simulations of industrial flows based on Reynolds-averaged Navier-Stokes (RANS) equations. However, after decades of efforts in the turbulence modeling community, universally applicable RANS models with predictive capabilities are still lacking. Large discrepancies in the RANS-modeled Reynolds stresses are the main source that limits the predictive accuracy of RANS models. Identifying these discrepancies is of significance to possibly improve the RANS modeling. In this work, we propose a data-driven, physics-informed machine learning approach for reconstructing discrepancies in RANS modeled Reynolds stresses. The discrepancies are formulated as functions of the mean flow features. By using a modern machine learning technique based on random forests, the discrepancy functions are trained by existing direct numerical simulation (DNS) databases and then used to predict Reynolds stress discrepancies in different flows where data are not available. The proposed method is evaluated by two classes of flows: (1) fully developed turbulent flows in a square duct at various Reynolds numbers and (2) flows with massive separations. In separated flows, two training flow scenarios of increasing difficulties are considered: (1) the flow in the same periodic hills geometry yet at a lower Reynolds number and (2) the flow in a different hill geometry with a similar recirculation zone. Excellent predictive performances were observed in both scenarios, demonstrating the merits of the proposed method.
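
    The regression step of the approach can be sketched as follows, with synthetic stand-ins for the mean-flow features and the DNS-derived discrepancies; feature count, data sizes and forest settings are illustrative assumptions.

      import numpy as np
      from sklearn.ensemble import RandomForestRegressor

      rng = np.random.default_rng(6)
      # Rows: mesh points of flows with DNS data; columns: mean-flow features
      # (synthetic stand-ins for, e.g., strain rate or pressure gradient).
      X_train = rng.normal(size=(5000, 4))
      # Target: DNS-vs-RANS Reynolds stress discrepancy (synthetic stand-in).
      d_train = (np.tanh(X_train[:, 0]) * X_train[:, 1]
                 + 0.05 * rng.normal(size=5000))

      rf = RandomForestRegressor(n_estimators=100, random_state=0)
      rf.fit(X_train, d_train)

      # Predicted discrepancies for a new flow are added to the baseline
      # RANS Reynolds stresses before propagating through the solver.
      X_new = rng.normal(size=(1000, 4))
      correction = rf.predict(X_new)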

  5. A Physics-Informed Machine Learning Framework for RANS-based Predictive Turbulence Modeling

    NASA Astrophysics Data System (ADS)

    Xiao, Heng; Wu, Jinlong; Wang, Jianxun; Ling, Julia

    2016-11-01

    Numerical models based on the Reynolds-averaged Navier-Stokes (RANS) equations are widely used in turbulent flow simulations in support of engineering design and optimization. In these models, turbulence modeling introduces significant uncertainties in the predictions. In light of the decades-long stagnation encountered by the traditional approach of turbulence model development, data-driven methods have been proposed as a promising alternative. We will present a data-driven, physics-informed machine-learning framework for predictive turbulence modeling based on RANS models. The framework consists of three components: (1) prediction of discrepancies in RANS modeled Reynolds stresses based on machine learning algorithms, (2) propagation of improved Reynolds stresses to quantities of interests with a modified RANS solver, and (3) quantitative, a priori assessment of predictive confidence based on distance metrics in the mean flow feature space. Merits of the proposed framework are demonstrated in a class of flows featuring massive separations. Significant improvements over the baseline RANS predictions are observed. The favorable results suggest that the proposed framework is a promising path toward RANS-based predictive turbulence in the era of big data. (SAND2016-7435 A).

  6. Use of different sampling schemes in machine learning-based prediction of hydrological models' uncertainty

    NASA Astrophysics Data System (ADS)

    Kayastha, Nagendra; Solomatine, Dimitri; Lal Shrestha, Durga; van Griensven, Ann

    2013-04-01

    In recent years, considerable attention in the hydrological literature has been given to model parameter uncertainty analysis. The robustness of an uncertainty estimate depends on the efficiency of the sampling method used to generate the best-fit responses (outputs) and on its ease of use. This paper aims to investigate: (1) how sampling strategies affect the uncertainty estimates of hydrological models, and (2) how to use this information in machine learning predictors of model uncertainty. Sampling of parameters may employ various algorithms. We compared seven different algorithms, namely Monte Carlo (MC) simulation, generalized likelihood uncertainty estimation (GLUE), Markov chain Monte Carlo (MCMC), the shuffled complex evolution metropolis algorithm (SCEMUA), differential evolution adaptive metropolis (DREAM), particle swarm optimization (PSO) and adaptive cluster covering (ACCO) [1]. These methods were applied to estimate the uncertainty of streamflow simulation using the conceptual model HBV and the semi-distributed hydrological model SWAT. The Nzoia catchment in West Kenya is considered as the case study. The results are compared and analysed based on the shape of the posterior distribution of the parameters and the uncertainty results on the model outputs. The MLUE method [2] uses the results of Monte Carlo sampling (or any other sampling scheme) to build a machine learning (regression) model U able to predict the uncertainty (quantiles of the pdf) of the outputs of a hydrological model H. Inputs to these models are specially identified representative variables (past event precipitation and flows). The trained machine learning models are then employed to predict the model output uncertainty specific to the new input data. The problem here is that different sampling algorithms result in different data sets used to train such a model U, which leads to several models (and there is no clear evidence which model is the best, since there is no basis for comparison). A solution could be to form a committee of all models U and

  7. Transient modeling and parameter identification based on wavelet and correlation filtering for rotating machine fault diagnosis

    NASA Astrophysics Data System (ADS)

    Wang, Shibin; Huang, Weiguo; Zhu, Z. K.

    2011-05-01

    At constant rotating speed, localized faults in a rotating machine tend to produce periodic shocks and thus arouse periodic transients in the vibration signal. Transient feature analysis has always been a crucial problem for localized fault detection, and its key aim is to identify the model and parameters (frequency, damping ratio and time index) of the transient, and the time interval, i.e. period, between transients. Based on wavelet and correlation filtering, a technique incorporating transient modeling and parameter identification is proposed for rotating machine fault feature detection. With the proposed method, both the parameters of a single transient and the period between transients can be identified from the vibration signal, and localized faults can be detected based on these parameters, especially the period. First, a simulation signal is used to test the performance of the proposed method. Then the method is applied to the vibration signals of different types of bearings with localized faults in the outer race, the inner race and the rolling element, respectively, and all the results show that the period between transients, representing the localized fault characteristic, is successfully detected. The method is also utilized in gearbox fault diagnosis and its effectiveness is verified through identifying the parameters of the transient model and the period. Moreover, it can be concluded that for bearing fault detection the single-side wavelet model is more suitable than the double-side one, while the double-side model is more suitable for gearbox fault detection. This research thus provides an effective method of localized fault detection for rotating machine fault diagnosis through transient modeling and parameter identification.
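
    A minimal sketch of correlation filtering with a single-side Laplace (damped-sine) wavelet; the sampling rate, parameter grid and synthetic fault signal are illustrative assumptions, not the paper's test data.

      import numpy as np

      fs = 10_000                                # sampling rate (Hz), assumed
      t = np.arange(0, 0.5, 1.0 / fs)

      def laplace_wavelet(t, f, zeta):
          # Single-side Laplace (damped sine) model of a fault transient.
          tp = np.maximum(t, 0.0)                # avoid overflow for t < 0
          w = np.where(t >= 0,
                       np.exp(-zeta * 2 * np.pi * f * tp)
                       * np.sin(2 * np.pi * f * tp),
                       0.0)
          n = np.linalg.norm(w)
          return w / n if n else w

      # Synthetic fault signal: a 1 kHz transient every 0.05 s plus noise.
      rng = np.random.default_rng(7)
      sig = 0.2 * rng.normal(size=t.size)
      for t0 in np.arange(0.01, 0.5, 0.05):
          sig += laplace_wavelet(t - t0, 1000.0, 0.05)

      # Correlation filtering: keep the (frequency, damping) pair whose wavelet
      # correlates best with the signal; correlation peaks locate the
      # transients, and their spacing estimates the fault period.
      grid = [(f, z) for f in (500.0, 1000.0, 2000.0) for z in (0.02, 0.05, 0.1)]
      best = max(grid, key=lambda p: np.max(
          np.correlate(sig, laplace_wavelet(t, *p), mode="same")))
      corr = np.correlate(sig, laplace_wavelet(t, *best), mode="same")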

  8. Support Vector Machines Model of Computed Tomography for Assessing Lymph Node Metastasis in Esophageal Cancer with Neoadjuvant Chemotherapy.

    PubMed

    Wang, Zhi-Long; Zhou, Zhi-Guo; Chen, Ying; Li, Xiao-Ting; Sun, Ying-Shi

    The aim of this study was to diagnose lymph node metastasis of esophageal cancer by a support vector machines model based on computed tomography. A total of 131 esophageal cancer patients with preoperative chemotherapy and radical surgery were included. Various indicators (tumor thickness, tumor length, tumor CT value, total number of lymph nodes, and long axis and short axis sizes of the largest lymph node) on CT images before and after neoadjuvant chemotherapy were recorded. A support vector machines model based on these CT indicators was built to predict lymph node metastasis. The support vector machines model diagnosed lymph node metastasis better than the preoperative short axis size of the largest lymph node on CT; the areas under the receiver operating characteristic curves were 0.887 and 0.705, respectively. The support vector machine model of CT images can help diagnose lymph node metastasis in esophageal cancer with preoperative chemotherapy.

  9. A Critical Review for Developing Accurate and Dynamic Predictive Models Using Machine Learning Methods in Medicine and Health Care.

    PubMed

    Alanazi, Hamdan O; Abdullah, Abdul Hanan; Qureshi, Kashif Naseer

    2017-04-01

    Recently, Artificial Intelligence (AI) has been widely used in the medicine and health care sector. In machine learning, classification or prediction is a major field of AI. Today, the study of existing predictive models based on machine learning methods is extremely active. Doctors need accurate predictions of the outcomes of their patients' diseases. In addition, for accurate predictions, timing is another significant factor that influences treatment decisions. In this paper, existing predictive models in medicine and health care are critically reviewed. Furthermore, the most prominent machine learning methods are explained, and the confusion between statistical approaches and machine learning is clarified. A review of the related literature reveals that the predictions of existing predictive models differ even when the same dataset is used. Therefore, existing predictive models are essential, and current methods must be improved.

  10. Sensitivity Analysis of a Spatio-Temporal Avalanche Forecasting Model Based on Support Vector Machines

    NASA Astrophysics Data System (ADS)

    Matasci, G.; Pozdnoukhov, A.; Kanevski, M.

    2009-04-01

    The recent progress in environmental monitoring technologies allows capturing extensive amounts of data that can be used to assist avalanche forecasting. While it is not straightforward to directly obtain stability factors with the available technologies, snow-pack profiles and especially meteorological parameters are becoming more and more available at finer spatial and temporal scales. Besides being very useful for improving physical modelling, these data are also of particular interest for use with contemporary data-driven machine learning techniques. Thus, the use of a support vector machine classifier opens ways to discriminate the "safe" and "dangerous" conditions in the feature space of factors related to avalanche activity, based on historical observations. The input space of factors is constructed from a number of direct and indirect snowpack and weather observations pre-processed with heuristic and physical models into a high-dimensional, spatially varying vector of input parameters. The particular system presented in this work is implemented for the avalanche-prone site of Ben Nevis, Lochaber region, in Scotland. A data-driven model for spatio-temporal avalanche danger forecasting provides an avalanche danger map for this local (5x5 km) region at a resolution of 10 m, based on weather and avalanche observations made by forecasters on a daily basis at the site. We present further work aimed at overcoming "black-box" modelling, a disadvantage for which machine learning methods are often criticized. It explores what the data-driven support vector machine method has to offer to improve the interpretability of the forecast, uncovers the properties of the developed system with respect to highlighting the important features that led to a particular prediction (both in time and space), and presents an analysis of the sensitivity of the prediction with respect to the varying input parameters. The purpose of the

  11. Mathematical concepts for modeling human behavior in complex man-machine systems

    NASA Technical Reports Server (NTRS)

    Johannsen, G.; Rouse, W. B.

    1979-01-01

    Many human behavior (e.g., manual control) models have been found to be inadequate for describing processes in certain real complex man-machine systems. An attempt is made to find a way to overcome this problem by examining the range of applicability of existing mathematical models with respect to the hierarchy of human activities in real complex tasks. Automobile driving is chosen as a baseline scenario, and a hierarchy of human activities is derived by analyzing this task in general terms. A structural description leads to a block diagram and a time-sharing computer analogy.

  13. Analysis of pilot-aircraft performance and reliability via the application of man-machine models

    NASA Technical Reports Server (NTRS)

    Kleinman, D. L.

    1975-01-01

    The present approach to analytic modeling, which utilizes human response theory together with modern control theory, is discussed. Analytic modeling of human performance has progressed to the point where it can be used with some confidence in the design and performance analysis of man-machine systems. An example of its use in helping design a flight director for the hover control of a VTOL vehicle is described. How the method could be used for analysis of human response to system failure is sketched.

  14. Fast and accurate modeling of molecular atomization energies with machine learning.

    PubMed

    Rupp, Matthias; Tkatchenko, Alexandre; Müller, Klaus-Robert; von Lilienfeld, O Anatole

    2012-02-03

    We introduce a machine learning model to predict atomization energies of a diverse set of organic molecules, based on nuclear charges and atomic positions only. The problem of solving the molecular Schrödinger equation is mapped onto a nonlinear statistical regression problem of reduced complexity. Regression models are trained on and compared to atomization energies computed with hybrid density-functional theory. Cross validation over more than seven thousand organic molecules yields a mean absolute error of ∼10  kcal/mol. Applicability is demonstrated for the prediction of molecular atomization potential energy curves.
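
    The regression setup can be sketched as follows with scikit-learn's kernel ridge regression; the descriptors here are random stand-ins for the sorted Coulomb-matrix eigenvalues used in the paper, and the kernel choice and hyperparameters are illustrative assumptions.

      import numpy as np
      from sklearn.kernel_ridge import KernelRidge

      rng = np.random.default_rng(8)
      # Stand-in descriptors: the paper encodes each molecule by its "Coulomb
      # matrix" (C_ii = 0.5*Z_i**2.4, C_ij = Z_i*Z_j/|R_i - R_j|); random
      # placeholder vectors are used here instead of its sorted eigenvalues.
      X = rng.normal(size=(500, 23))
      E = X @ rng.normal(size=23) + 0.1 * rng.normal(size=500)  # fake energies

      model = KernelRidge(kernel="laplacian", alpha=1e-3, gamma=1e-2)
      model.fit(X[:400], E[:400])
      mae = np.abs(model.predict(X[400:]) - E[400:]).mean()
      print(f"hold-out MAE: {mae:.3f}")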

  15. Uncertainty "escalation" and use of machine learning to forecast residual and data model uncertainties

    NASA Astrophysics Data System (ADS)

    Solomatine, Dimitri

    2016-04-01

    When speaking about model uncertainty, many authors implicitly assume data uncertainty (mainly in parameters or inputs), which is described probabilistically by distributions. Often, however, it is useful to look into the residual uncertainty as well. It is hence reasonable to classify the main approaches to uncertainty analysis with respect to the two main types of model uncertainty that can be distinguished: A. The residual uncertainty of models. In this case the model parameters and/or model inputs are considered to be fixed (deterministic), i.e. the model is considered to be optimal (calibrated) and deterministic. Model error is considered as the manifestation of uncertainty. If there is enough past data about the model errors (i.e. its uncertainty), it is possible to build a statistical or machine learning model of uncertainty trained on this data. The following methods can be mentioned: (a) the quantile regression (QR) method of Koenker and Basset, in which linear regression is used to build predictive models for distribution quantiles [1]; (b) a more recent approach that takes into account the input variables influencing such uncertainty and uses more advanced non-linear machine learning methods (neural networks, model trees etc.), the UNEEC method [2,3,7]; (c) the even more recent DUBRAUE method (Dynamic Uncertainty Model By Regression on Absolute Error), an autoregressive model of model residuals (it corrects the model residual first and then carries out the uncertainty prediction by an autoregressive statistical model) [5]. B. The data uncertainty (parametric and/or input): in this case we study the propagation of uncertainty (typically represented probabilistically) from parameters or inputs to the model outputs. In the case of simple functions representing models, analytical approaches can be used, or approximation methods (e.g., the first-order second moment method). However, for real complex non-linear models implemented in software there is no other choice except using
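
    Method (a), quantile regression of model residuals, can be sketched as follows, using quantile gradient boosting rather than the linear quantile regression of Koenker and Basset; the synthetic data and quantile levels are illustrative assumptions.

      import numpy as np
      from sklearn.ensemble import GradientBoostingRegressor

      rng = np.random.default_rng(9)
      # Inputs: representative variables (e.g., recent precipitation, flows);
      # target: the hydrological model's residuals. All data synthetic.
      X = rng.normal(size=(2000, 3))
      resid = X[:, 0] + (1.0 + np.abs(X[:, 1])) * rng.normal(size=2000)

      bounds = {}
      for q in (0.05, 0.95):
          m = GradientBoostingRegressor(loss="quantile", alpha=q,
                                        random_state=0)
          bounds[q] = m.fit(X, resid).predict(X)
      # [bounds[0.05], bounds[0.95]] gives a 90% predictive interval for the
      # residual, to be added to the deterministic model output.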

  16. Estimating the complexity of 3D structural models using machine learning methods

    NASA Astrophysics Data System (ADS)

    Mejía-Herrera, Pablo; Kakurina, Maria; Royer, Jean-Jacques

    2016-04-01

    Quantifying the complexity of 3D geological structural models can play a major role in natural resources exploration surveys, in predicting environmental hazards, and in forecasting fossil resources. This paper proposes a structural complexity index which can be used to help define the degree of effort necessary to build a 3D model for a given degree of confidence, and also to identify locations where additional effort is required to meet a given acceptable level of risk. In this work, it is considered that the structural complexity index can be estimated using machine learning methods on raw geo-data. More precisely, the metrics for measuring the complexity can be approximated as the degree of difficulty associated with predicting the distribution of geological objects from partial information on the actual structural distribution of materials. The proposed methodology is tested on a set of 3D synthetic structural models for which the degree of effort during their construction is assessed using various parameters (such as number of faults, number of parts in a surface object, number of borders, ...), the rank of geological elements contained in each model, and, finally, their level of deformation (folding and faulting). The results show how the estimated complexity of a 3D model can be approximated by the quantity of partial data needed for machine learning algorithms to reproduce the actual 3D model at a given precision.
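
    One way to make this operational is sketched below on synthetic data: complexity is proxied by the prediction error a simple learner incurs when trained on increasing fractions of the model's points. The k-NN learner, sampling fractions, and error summary are illustrative choices, not the paper's exact metric.

    ```python
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    def complexity_index(points, labels, fractions=(0.01, 0.05, 0.1, 0.2), seed=0):
        """Proxy for structural complexity: error of a learner trained on partial data
        when reproducing the full 3D label field (illustrative, not the paper's metric)."""
        rng = np.random.default_rng(seed)
        errors = []
        for f in fractions:
            idx = rng.choice(len(points), max(5, int(f * len(points))), replace=False)
            clf = KNeighborsClassifier(n_neighbors=5).fit(points[idx], labels[idx])
            errors.append(1.0 - clf.score(points, labels))
        return float(np.mean(errors))  # higher = harder to predict = more complex

    rng = np.random.default_rng(1)
    pts = rng.uniform(0, 1, (5000, 3))
    flat = (pts[:, 2] > 0.5).astype(int)                                  # simple layered model
    folded = (pts[:, 2] + 0.3 * np.sin(8 * pts[:, 0]) > 0.5).astype(int)  # deformed model
    print(complexity_index(pts, flat), complexity_index(pts, folded))     # folded should score higher
    ```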

  17. A hybrid prognostic model for multistep ahead prediction of machine condition

    NASA Astrophysics Data System (ADS)

    Roulias, D.; Loutas, T. H.; Kostopoulos, V.

    2012-05-01

    Prognostics are the future trend in condition-based maintenance. In the current framework a data-driven prognostic model is developed. The typical procedure for developing such a model comprises (a) the selection of features which correlate well with the gradual degradation of the machine and (b) the training of a mathematical tool. In this work the data are taken from a laboratory-scale single-stage gearbox under multi-sensor monitoring. Tests monitoring the condition of the gear pair from the healthy state until total breakdown, over several days of continuous operation, were conducted. After basic pre-processing of the acquired data, an indicator that correlated well with the gearbox condition was obtained. Subsequently the time series is split into a few distinguishable time regions via an intelligent data clustering scheme. Each operating region is modelled with a feed-forward artificial neural network (FFANN) scheme. The performance of the proposed model is tested by applying the system to predict the machine degradation level on unseen data. The results show the plausibility and effectiveness of the model in following the trend of the time series even when a sudden change occurs. Moreover the model shows the ability to generalise for application to similar mechanical assets.
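
    A rough sketch of the regime-splitting idea with scikit-learn stand-ins (k-means for the clustering scheme, an MLP for the FFANN); the degradation indicator here is a hypothetical monotonic series, not the gearbox data.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.neural_network import MLPRegressor

    def lagged(series, n_lags, horizon):
        """Build (lag-vector, future-value) pairs for multistep-ahead prediction."""
        X = np.array([series[i:i + n_lags] for i in range(len(series) - n_lags - horizon + 1)])
        y = series[n_lags + horizon - 1:]
        return X, y

    # hypothetical degradation indicator extracted from vibration features
    indicator = np.cumsum(np.abs(np.random.default_rng(0).normal(0.01, 0.05, 2000)))
    X, y = lagged(indicator, n_lags=20, horizon=5)           # 5-step-ahead prediction

    regimes = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
    models = {r: MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
                 .fit(X[regimes == r], y[regimes == r])
              for r in np.unique(regimes)}
    # at run time: assign a new lag vector to its regime, then use that regime's FFANN
    ```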

  18. Field tests and machine learning approaches for refining algorithms and correlations of driver's model parameters.

    PubMed

    Tango, Fabio; Minin, Luca; Tesauri, Francesco; Montanari, Roberto

    2010-03-01

    This paper describes the field tests on a driving simulator carried out to validate the algorithms and the correlations of dynamic parameters, specifically driving task demand and drivers' distraction, which are able to predict drivers' intentions. These parameters belong to the driver's model developed by the AIDE (Adaptive Integrated Driver-vehicle InterfacE) European Integrated Project. Drivers' behavioural data have been collected from the simulator tests to model and validate these parameters using machine learning techniques, specifically adaptive neuro-fuzzy inference systems (ANFIS) and artificial neural networks (ANN). Two models of task demand and distraction have been developed, one for each adopted technique. The paper provides an overview of the driver's model, a description of the task demand and distraction modelling, and the tests conducted for the validation of these parameters. A test comparing predicted and expected outcomes of the modelled parameters for each machine learning technique has been carried out: for distraction, in particular, promising results (low prediction errors) have been obtained by adopting an artificial neural network.

  19. Structure Based Thermostability Prediction Models for Protein Single Point Mutations with Machine Learning Tools.

    PubMed

    Jia, Lei; Yarlagadda, Ramya; Reed, Charles C

    2015-01-01

    The thermostability of protein point mutations is a common concern in protein engineering. An application that predicts the thermostability of mutants can be helpful for guiding the decision-making process in protein design via mutagenesis. An in silico point mutation scanning method is frequently used to find "hot spots" in proteins for focused mutagenesis. ProTherm (http://gibk26.bio.kyutech.ac.jp/jouhou/Protherm/protherm.html) is a public database containing experimentally measured thermostability data for thousands of protein mutants. Two data sets based on two differently measured thermostability properties of protein single point mutations, namely the unfolding free energy change (ddG) and the melting temperature change (dTm), were obtained from this database. Folding free energy changes calculated with Rosetta, structural information about the point mutations, and amino acid physical properties were used to build thermostability prediction models with informatics modeling tools. Five supervised machine learning methods (support vector machine, random forests, artificial neural network, naïve Bayes classifier, K nearest neighbor) and partial least squares regression were used for building the prediction models. Binary and ternary classification as well as regression models were built and evaluated. Data set redundancy and balancing, the reverse mutations technique, feature selection, and comparison to other published methods are discussed. The Rosetta-calculated folding free energy change ranked as the most influential feature in all prediction models. Other descriptors also made significant contributions to the accuracy of the prediction models.
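
    The classifier-comparison step might look like the following sketch, with scikit-learn stand-ins for the five supervised methods and synthetic placeholders for the ProTherm-derived features and stabilizing/destabilizing labels.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score
    from sklearn.naive_bayes import GaussianNB
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.neural_network import MLPClassifier
    from sklearn.svm import SVC

    # X: Rosetta ddG estimates, structural descriptors, amino acid properties (hypothetical)
    # y: 1 = stabilizing mutation (ddG below a chosen cutoff), 0 = destabilizing
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 15))
    y = (X[:, 0] + 0.3 * X[:, 1] + rng.normal(0, 0.5, 1000) < 0).astype(int)

    models = [SVC(), RandomForestClassifier(random_state=0),
              MLPClassifier(max_iter=1000, random_state=0),
              GaussianNB(), KNeighborsClassifier()]
    for m in models:  # the five supervised methods named in the abstract
        print(type(m).__name__, cross_val_score(m, X, y, cv=5).mean())
    ```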

  20. Constructing and validating readability models: the method of integrating multilevel linguistic features with machine learning.

    PubMed

    Sung, Yao-Ting; Chen, Ju-Ling; Cha, Ji-Her; Tseng, Hou-Chiang; Chang, Tao-Hsing; Chang, Kuo-En

    2015-06-01

    Multilevel linguistic features have been proposed for discourse analysis, but there have been few applications of multilevel linguistic features to readability models and also few validations of such models. Most traditional readability formulae are based on generalized linear models (GLMs; e.g., discriminant analysis and multiple regression), but these models have to comply with certain statistical assumptions about data properties and include all of the data in formulae construction without pruning the outliers in advance. The use of such readability formulae tends to produce a low text classification accuracy, while using a support vector machine (SVM) in machine learning can enhance the classification outcome. The present study constructed readability models by integrating multilevel linguistic features with SVM, which is more appropriate for text classification. Taking the Chinese language as an example, this study developed 31 linguistic features as the predicting variables at the word, semantic, syntax, and cohesion levels, with grade levels of texts as the criterion variable. The study compared four types of readability models by integrating unilevel and multilevel linguistic features with GLMs and an SVM. The results indicate that adopting a multilevel approach in readability analysis provides a better representation of the complexities of both texts and the reading comprehension process.

  1. Some cases of machining large-scale parts: Characterization and modelling of heavy turning, deep drilling and broaching

    NASA Astrophysics Data System (ADS)

    Haddag, B.; Nouari, M.; Moufki, A.

    2016-10-01

    Machining large-scale parts involves extreme loading at the cutting zone. This paper presents an overview of some cases of machining large-scale parts: heavy turning, deep drilling and broaching processes. It focuses on experimental characterization and modelling methods of these processes. Observed phenomena and/or measured cutting forces are reported. The paper also discusses the predictive ability of the proposed models to reproduce experimental data.

  2. Learning in closed-loop brain-machine interfaces: modeling and experimental validation.

    PubMed

    Héliot, Rodolphe; Ganguly, Karunesh; Jimenez, Jessica; Carmena, Jose M

    2010-10-01

    Closed-loop operation of a brain-machine interface (BMI) relies on the subject's ability to learn an inverse transformation of the plant to be controlled. In this paper, we propose a model of the learning process that takes place during closed-loop BMI operation. We first explore the properties of the model and show that it is able to learn an inverse model of the controlled plant. Then, we compare the model predictions to actual experimental neural and behavioral data from nonhuman primates operating a BMI, which show high accordance of the model with the experimental data. Applying tools from control theory to this learning model will help in the design of a new generation of neural information decoders that maximize learning speed for BMI users.

  3. [Research, design and application of model NSE-1 neck muscle training machine for pilots].

    PubMed

    Cheng, Haiping; Wang, Zhijie; Liu, Songyang; Yang, Yi; Zhao, Guang; Cong, Hong; Han, Xueping; Liu, Min; Yu, Mengsun

    2011-04-01

    Pain in the cervical region of air force pilots, who are exposed to high G-forces, is a specific occupational health problem. To minimize neck problems, the cervical muscles need specific strength exercise. It is important that training for the neck be carried out with optimal resistance. The model NSE-1 neck training machine for pilots was designed for neck strengthening exercises under safe and effective conditions. In order to realize the functions of changeable velocity and resistance (CVR) training and neck isometric contractive exercises, techniques of adaptive hydraulics, sensing, optical and auditory biological feedback, and signal processing were applied to this machine. The training system mainly consists of mechanical parts (including the chair for flexion and extension, the chair for right and left lateral flexion, the components of hydraulics and torque transformation, etc.) and the software for signal processing and biological feedback. Eleven volunteers were selected for experiments of neck isometric contractive exercises, three times a week for 6 weeks, with CVR training (flexion, extension, right and left lateral flexion) once a week. The increase in relative strength of the neck (flexion, extension, left and right lateral flexion) was 70.8%, 83.7%, 78.6% and 75.2%, respectively, after training. Results show that neck strength can be increased safely, effectively and rapidly with the NSE-1 neck training machine.

  4. Biosimilarity Assessments of Model IgG1-Fc Glycoforms Using a Machine Learning Approach.

    PubMed

    Kim, Jae Hyun; Joshi, Sangeeta B; Tolbert, Thomas J; Middaugh, C Russell; Volkin, David B; Smalter Hall, Aaron

    2016-02-01

    Biosimilarity assessments are performed to decide whether 2 preparations of complex biomolecules can be considered "highly similar." In this work, a machine learning approach is demonstrated as a mathematical tool for such assessments using a variety of analytical data sets. As proof of principle, physical stability data sets from 8 samples, 4 well-defined immunoglobulin G1 Fragment crystallizable (IgG1-Fc) glycoforms in 2 different formulations, were examined (see More et al., companion article in this issue). The data sets included triplicate measurements from 3 analytical methods across different pH and temperature conditions (2066 data features). Established machine learning techniques were used to determine whether the data sets contain sufficient discriminative power in this application. The support vector machine classifier identified the 8 distinct samples with high accuracy. For these data sets, there exists a minimum threshold of information quality and volume needed to grant enough discriminative power. Generally, data from multiple analytical techniques, multiple pH conditions, and at least 200 representative features were required to achieve the highest discriminative accuracy. In addition to classification accuracy tests, various methods such as sample space visualization, similarity analysis based on Euclidean distance, and feature ranking by mutual information scores are demonstrated to display their effectiveness as modeling tools for biosimilarity assessments. Copyright © 2016 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
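
    A hedged sketch of three of the demonstrated tools - mutual-information feature ranking, SVM classification of the samples, and Euclidean-distance similarity analysis - using synthetic stand-ins for the 8 samples and 2066 data features.

    ```python
    import numpy as np
    from sklearn.feature_selection import mutual_info_classif
    from sklearn.metrics import pairwise_distances
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    # X: stability readouts across pH/temperature conditions; y: sample identity (hypothetical)
    rng = np.random.default_rng(0)
    X = rng.normal(size=(240, 2066))
    y = np.repeat(np.arange(8), 30)
    X += y[:, None] * 0.05                       # inject class-dependent signal for the demo

    scores = mutual_info_classif(X, y, random_state=0)   # feature ranking
    top = np.argsort(scores)[::-1][:200]                 # keep ~200 informative features
    acc = cross_val_score(SVC(kernel="linear"), X[:, top], y, cv=5).mean()

    # similarity analysis: Euclidean distances between class centroids
    centroids = np.array([X[y == k][:, top].mean(axis=0) for k in range(8)])
    D = pairwise_distances(centroids)
    ```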

  5. Supercomputer Assisted Generation of Machine Learning Agents for the Calibration of Building Energy Models

    SciTech Connect

    Sanyal, Jibonananda; New, Joshua Ryan; Edwards, Richard

    2013-01-01

    Building Energy Modeling (BEM) is an approach to model the energy usage in buildings for design and retrofit purposes. EnergyPlus is the flagship Department of Energy software that performs BEM for different types of buildings. The input to EnergyPlus can often extend to the order of a few thousand parameters, which have to be calibrated manually by an expert for realistic energy modeling. This makes the process challenging and expensive, thereby making building energy modeling infeasible for smaller projects. In this paper, we describe the "Autotune" research, which employs machine learning algorithms to generate agents for the different kinds of standard reference buildings in the U.S. building stock. The parametric space and the variety of building locations and types make this a challenging computational problem necessitating the use of supercomputers. Millions of EnergyPlus simulations are run on supercomputers and are subsequently used to train machine learning algorithms to generate agents. These agents, once created, can then run in a fraction of the time, thereby allowing cost-effective calibration of building models.

  6. Operating Comfort Prediction Model of Human-Machine Interface Layout for Cabin Based on GEP.

    PubMed

    Deng, Li; Wang, Guohua; Chen, Bo

    2015-01-01

    In view of the evaluation and decision-making problem of human-machine interface layout design for cabins, an operating comfort prediction model based on GEP (Gene Expression Programming) is proposed, using operating comfort to evaluate layout schemes. Joint angles are used to describe the operating posture of the upper limb and are taken as independent variables to establish the comfort model of operating posture. Factor analysis is adopted to reduce the variable dimension; the model's input variables are reduced from 16 joint angles to 4 comfort impact factors, and the output variable is the operating comfort score. A Chinese virtual human body model is built in CATIA software and used to simulate and evaluate the operators' operating comfort. With 22 groups of evaluation data as training and validation samples, the GEP algorithm is used to obtain the best-fitting function between the joint angles and operating comfort; operating comfort can then be predicted quantitatively. The operating comfort prediction results for the human-machine interface layout of a driller control room show that the GEP-based operating comfort prediction model is fast and efficient, has good prediction accuracy, and can improve design efficiency.
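
    A sketch of the dimensionality-reduction step under stated assumptions: scikit-learn's factor analysis reduces 16 joint angles to 4 factors, and an ordinary linear regression stands in for the GEP-evolved expression, which common libraries do not provide. All data are hypothetical.

    ```python
    import numpy as np
    from sklearn.decomposition import FactorAnalysis
    from sklearn.linear_model import LinearRegression

    # 22 hypothetical evaluations: 16 joint angles -> 4 comfort impact factors -> comfort score
    rng = np.random.default_rng(0)
    angles = rng.uniform(0, 120, (22, 16))       # joint angles in degrees
    comfort = rng.uniform(1, 10, 22)             # expert comfort ratings

    fa = FactorAnalysis(n_components=4, random_state=0)
    factors = fa.fit_transform(angles)           # 16 angles -> 4 factors
    reg = LinearRegression().fit(factors, comfort)   # stand-in for the GEP function
    predicted = reg.predict(fa.transform(angles))    # predicted operating comfort
    ```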

  7. A comparison of optimal MIMO linear and nonlinear models for brain-machine interfaces.

    PubMed

    Kim, S-P; Sanchez, J C; Rao, Y N; Erdogmus, D; Carmena, J M; Lebedev, M A; Nicolelis, M A L; Principe, J C

    2006-06-01

    The field of brain-machine interfaces requires the estimation of a mapping from spike trains collected in motor cortex areas to the hand kinematics of the behaving animal. This paper presents a systematic investigation of several linear (Wiener filter, LMS adaptive filters, gamma filter, subspace Wiener filters) and nonlinear models (time-delay neural network and local linear switching models) applied to datasets from two experiments in monkeys performing motor tasks (reaching for food and target hitting). Ensembles of 100-200 cortical neurons were simultaneously recorded in these experiments, and even larger neuronal samples are anticipated in the future. Due to the large size of the models (thousands of parameters), the major issue studied was the generalization performance. Every parameter of the models (not only the weights) was selected optimally using signal processing and machine learning techniques. The models were also compared statistically with respect to the Wiener filter as the baseline. Each of the optimization procedures produced improvements over that baseline for either one of the two datasets or both.

  8. Operating Comfort Prediction Model of Human-Machine Interface Layout for Cabin Based on GEP

    PubMed Central

    Deng, Li; Wang, Guohua; Chen, Bo

    2015-01-01

    In view of the evaluation and decision-making problem of human-machine interface layout design for cabins, an operating comfort prediction model based on GEP (Gene Expression Programming) is proposed, using operating comfort to evaluate layout schemes. Joint angles are used to describe the operating posture of the upper limb and are taken as independent variables to establish the comfort model of operating posture. Factor analysis is adopted to reduce the variable dimension; the model's input variables are reduced from 16 joint angles to 4 comfort impact factors, and the output variable is the operating comfort score. A Chinese virtual human body model is built in CATIA software and used to simulate and evaluate the operators' operating comfort. With 22 groups of evaluation data as training and validation samples, the GEP algorithm is used to obtain the best-fitting function between the joint angles and operating comfort; operating comfort can then be predicted quantitatively. The operating comfort prediction results for the human-machine interface layout of a driller control room show that the GEP-based operating comfort prediction model is fast and efficient, has good prediction accuracy, and can improve design efficiency. PMID:26448740

  9. A comparison of optimal MIMO linear and nonlinear models for brain machine interfaces

    NASA Astrophysics Data System (ADS)

    Kim, S.-P.; Sanchez, J. C.; Rao, Y. N.; Erdogmus, D.; Carmena, J. M.; Lebedev, M. A.; Nicolelis, M. A. L.; Principe, J. C.

    2006-06-01

    The field of brain-machine interfaces requires the estimation of a mapping from spike trains collected in motor cortex areas to the hand kinematics of the behaving animal. This paper presents a systematic investigation of several linear (Wiener filter, LMS adaptive filters, gamma filter, subspace Wiener filters) and nonlinear models (time-delay neural network and local linear switching models) applied to datasets from two experiments in monkeys performing motor tasks (reaching for food and target hitting). Ensembles of 100-200 cortical neurons were simultaneously recorded in these experiments, and even larger neuronal samples are anticipated in the future. Due to the large size of the models (thousands of parameters), the major issue studied was the generalization performance. Every parameter of the models (not only the weights) was selected optimally using signal processing and machine learning techniques. The models were also compared statistically with respect to the Wiener filter as the baseline. Each of the optimization procedures produced improvements over that baseline for either one of the two datasets or both.

  10. Solar Flare Prediction Model with Three Machine-learning Algorithms using Ultraviolet Brightening and Vector Magnetograms

    NASA Astrophysics Data System (ADS)

    Nishizuka, N.; Sugiura, K.; Kubo, Y.; Den, M.; Watari, S.; Ishii, M.

    2017-02-01

    We developed a flare prediction model using machine learning, which is optimized to predict the maximum class of flares occurring in the following 24 hr. Machine learning is used to devise algorithms that can learn from and make decisions on a huge amount of data. We used solar observation data during the period 2010–2015, such as vector magnetograms, ultraviolet (UV) emission, and soft X-ray emission taken by the Solar Dynamics Observatory and the Geostationary Operational Environmental Satellite. We detected active regions (ARs) from the full-disk magnetogram, from which ∼60 features were extracted with their time differentials, including magnetic neutral lines, the current helicity, the UV brightening, and the flare history. After standardizing the feature database, we fully shuffled and randomly separated it into two for training and testing. To investigate which algorithm is best for flare prediction, we compared three machine-learning algorithms: the support vector machine, k-nearest neighbors (k-NN), and extremely randomized trees. The prediction score, the true skill statistic, was higher than 0.9 with a fully shuffled data set, which is higher than that for human forecasts. It was found that k-NN has the highest performance among the three algorithms. The ranking of the feature importance showed that previous flare activity is most effective, followed by the length of magnetic neutral lines, the unsigned magnetic flux, the area of UV brightening, and the time differentials of features over 24 hr, all of which are strongly correlated with the flux emergence dynamics in an AR.
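
    A sketch of the evaluation loop on hypothetical stand-in data: the true skill statistic used as the prediction score is computed from the confusion matrix, and the three algorithms are compared on a shuffled split. The feature and label construction is synthetic, not the SDO/GOES data.

    ```python
    import numpy as np
    from sklearn.ensemble import ExtraTreesClassifier
    from sklearn.metrics import confusion_matrix
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.svm import SVC

    def true_skill_statistic(y_true, y_pred):
        """TSS = hit rate - false alarm rate."""
        tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
        return tp / (tp + fn) - fp / (fp + tn)

    # hypothetical stand-ins for the ~60 active-region features and flare labels
    rng = np.random.default_rng(1)
    X = rng.normal(size=(5000, 60))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=5000) > 1.5).astype(int)

    Xtr, Xte, ytr, yte = train_test_split(X, y, shuffle=True, random_state=0)
    for clf in (SVC(), KNeighborsClassifier(), ExtraTreesClassifier(random_state=0)):
        print(type(clf).__name__, true_skill_statistic(yte, clf.fit(Xtr, ytr).predict(Xte)))
    ```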

  11. One- and two-dimensional Stirling machine simulation using experimentally generated reversing flow turbulence models

    SciTech Connect

    Goldberg, L.F.

    1990-08-01

    The activities described in this report do not constitute a continuum but rather a series of linked smaller investigations in the general area of one- and two-dimensional Stirling machine simulation. The initial impetus for these investigations was the development and construction of the Mechanical Engineering Test Rig (METR) under a grant awarded by NASA to Dr. Terry Simon at the Department of Mechanical Engineering, University of Minnesota. The purpose of the METR is to provide experimental data on oscillating turbulent flows in Stirling machine working fluid flow path components (heater, cooler, regenerator, etc.) with particular emphasis on laminar/turbulent flow transitions. Hence, the initial goals for the grant awarded by NASA were, broadly, to provide computer simulation backup for the design of the METR and to analyze the results produced. This was envisaged in two phases: first, to apply an existing one-dimensional Stirling machine simulation code to the METR and, second, to adapt a two-dimensional fluid mechanics code, which had been developed for simulating high Rayleigh number buoyant cavity flows, to the METR. The key aspect of this latter component was the development of an appropriate turbulence model suitable for generalized application to Stirling simulation. A final step was then to apply the two-dimensional code to an existing Stirling machine for which adequate experimental data exist. The work described herein was carried out over a period of three years on a part-time basis. Forty percent of the first year's funding was provided as a match to the NASA funds by the Underground Space Center, University of Minnesota, which also made its computing facilities available to the project at no charge.

  12. Selecting statistical or machine learning techniques for regional landslide susceptibility modelling by evaluating spatial prediction

    NASA Astrophysics Data System (ADS)

    Goetz, Jason; Brenning, Alexander; Petschko, Helene; Leopold, Philip

    2015-04-01

    With so many techniques now available for landslide susceptibility modelling, it can be challenging to decide which technique to apply. Generally speaking, the criteria for model selection should be tied closely to the end users' purpose, which could be spatial prediction, spatial analysis or both. In our research, we focus on comparing the spatial predictive abilities of landslide susceptibility models. We illustrate how spatial cross-validation, a statistical approach for assessing spatial prediction performance, can be applied with the area under the receiver operating characteristic curve (AUROC) as a prediction measure for model comparison. Several machine learning and statistical techniques are evaluated for prediction in Lower Austria: support vector machine, random forest, bundling with penalized linear discriminant analysis, logistic regression, weights of evidence, and the generalized additive model. In addition to predictive performance, the importance of predictor variables in each model was estimated using spatial cross-validation by calculating the change in AUROC performance when variables are randomly permuted. The susceptibility modelling techniques were tested in three areas of interest in Lower Austria, which have distinct geologic conditions associated with landslide occurrence. Overall, we found for the majority of comparisons that there were few practical or even statistically significant differences in AUROCs; that is, the models' prediction performances were very similar. Therefore, in addition to prediction, the ability to interpret models for spatial analysis and the qualitative characteristics of the prediction surface (map) are considered and discussed. The measure of variable importance provided some insight into model behaviour for prediction, in particular for "black-box" models. However, there were no clear patterns across the areas of interest as to why certain variables were given more importance than others.
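
    The spatial cross-validation and permutation-importance machinery might be sketched as follows, with grouped folds standing in for the spatial partitioning and synthetic data in place of the Lower Austria inventory.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import GroupKFold

    rng = np.random.default_rng(0)
    X = rng.normal(size=(2000, 8))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, 2000) > 0).astype(int)
    blocks = rng.integers(0, 4, 2000)        # hypothetical spatial block ids as CV groups

    def spatial_cv_auroc(X, y, groups):
        """AUROC under spatially grouped folds (simplified spatial cross-validation)."""
        aucs = []
        for tr, te in GroupKFold(n_splits=4).split(X, y, groups):
            clf = RandomForestClassifier(random_state=0).fit(X[tr], y[tr])
            aucs.append(roc_auc_score(y[te], clf.predict_proba(X[te])[:, 1]))
        return float(np.mean(aucs))

    base = spatial_cv_auroc(X, y, blocks)
    for j in range(X.shape[1]):              # permutation-based variable importance
        Xp = X.copy()
        Xp[:, j] = rng.permutation(Xp[:, j])
        print(f"feature {j}: dAUROC = {base - spatial_cv_auroc(Xp, y, blocks):+.3f}")
    ```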

  13. Slow Speed Machining of Titanium

    DTIC Science & Technology

    1981-10-01

    Report MRL-R-833, "Slow Speed Machining of Titanium," D.M. Turley. Materials Research Laboratories, Department of Defence, Commonwealth of Australia, October 1981. Approved for public release; title, report number, and abstract unclassified. Abstract fragment: "Catastrophic-shear type..."

  14. A Hybrid dasymetric and machine learning approach to high-resolution residential electricity consumption modeling

    SciTech Connect

    Morton, April M; Nagle, Nicholas N; Piburn, Jesse O; Stewart, Robert N; McManamay, Ryan A

    2017-01-01

    As urban areas continue to grow and evolve in a world of increasing environmental awareness, the need for detailed information regarding residential energy consumption patterns has become increasingly important. Though current modeling efforts mark significant progress in the effort to better understand the spatial distribution of energy consumption, the majority of techniques are highly dependent on region-specific data sources and often require building- or dwelling-level details that are not publicly available for many regions in the United States. Furthermore, many existing methods do not account for errors in input data sources and may not accurately reflect inherent uncertainties in model outputs. We propose an alternative and more general hybrid approach to high-resolution residential electricity consumption modeling by merging a dasymetric model with a complementary machine learning algorithm. The method's flexible data requirement and statistical framework ensure that the model both is applicable to a wide range of regions and considers errors in input data sources.

  15. Etch proximity correction through machine-learning-driven etch bias model

    NASA Astrophysics Data System (ADS)

    Shim, Seongbo; Shin, Youngsoo

    2016-03-01

    Accurate prediction of etch bias has become more important as the technology node shrinks. Simulation is not a feasible solution at full-chip level due to excessive runtime, so etch proximity correction (EPC) often relies on empirically obtained rules or models. However, simple rules alone cannot accurately correct various pattern shapes, and the few empirical parameters in model-based EPC are still not enough to achieve satisfactory OCV. We propose a new approach to etch bias modeling through machine learning (ML) techniques. A segment of interest (and its surroundings) is characterized by geometric and optical parameters, which are received by an artificial neural network (ANN); the network then outputs the predicted etch bias of the segment. The ANN is used as the etch bias model for the new EPC proposed in this paper. The new etch bias model and EPC are implemented in a commercial OPC tool and demonstrated using a 20 nm technology DRAM gate layer.

  16. Monkey models for brain-machine interfaces: the need for maintaining diversity.

    PubMed

    Nuyujukian, Paul; Fan, Joline M; Gilja, Vikash; Kalanithi, Paul S; Chestek, Cindy A; Shenoy, Krishna V

    2011-01-01

    Brain-machine interfaces (BMIs) aim to help disabled patients by translating neural signals from the brain into control signals for guiding prosthetic arms, computer cursors, and other assistive devices. Animal models are central to the development of these systems and have helped enable the successful translation of the first generation of BMIs. As we move toward next-generation systems, we face the question of which animal models will aid broader patient populations and achieve even higher performance, robustness, and functionality. We review here four general types of rhesus monkey models employed in BMI research, and describe two additional, complementary models. Given the physiological diversity of neurological injury and disease, we suggest a need to maintain the current diversity of animal models and to explore additional alternatives, as each mimic different aspects of injury or disease.

  17. Identification of nonlinear system using extreme learning machine based Hammerstein model

    NASA Astrophysics Data System (ADS)

    Tang, Yinggan; Li, Zhonghui; Guan, Xinping

    2014-09-01

    In this paper, a new method for nonlinear system identification via an extreme learning machine neural network based Hammerstein model (ELM-Hammerstein) is proposed. The ELM-Hammerstein model consists of a static ELM neural network followed by a linear dynamic subsystem. The identification of the nonlinear system is achieved by determining the structure of the ELM-Hammerstein model and estimating its parameters. The Lipschitz quotient criterion is adopted to determine the structure of the ELM-Hammerstein model from input-output data. A generalized ELM algorithm is proposed to estimate the parameters of the ELM-Hammerstein model, where the parameters of the linear dynamic part and the output weights of the ELM neural network are estimated simultaneously. The proposed method obtains more accurate identification results with lower computational complexity. Three simulation examples demonstrate its effectiveness.
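
    For reference, the static nonlinearity can be a minimal ELM: a random, untrained hidden layer with output weights fitted by least squares. The sketch below is a generic ELM, not the paper's generalized joint-estimation algorithm.

    ```python
    import numpy as np

    class ELM:
        """Minimal extreme learning machine: random hidden layer, least-squares output weights."""
        def __init__(self, n_hidden=50, seed=0):
            self.n_hidden = n_hidden
            self.rng = np.random.default_rng(seed)

        def fit(self, X, y):
            self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
            self.b = self.rng.normal(size=self.n_hidden)
            H = np.tanh(X @ self.W + self.b)      # random, untrained nonlinear features
            self.beta = np.linalg.pinv(H) @ y     # only the output weights are estimated
            return self

        def predict(self, X):
            return np.tanh(X @ self.W + self.b) @ self.beta

    # Hammerstein structure: u -> static ELM nonlinearity v -> linear dynamic block,
    # e.g. y[t] = a1 * y[t-1] + b0 * v[t] (illustrative first-order linear part)
    ```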

  18. Evaluating machine learning and statistical prediction techniques for landslide susceptibility modeling

    NASA Astrophysics Data System (ADS)

    Goetz, J. N.; Brenning, A.; Petschko, H.; Leopold, P.

    2015-08-01

    Statistical and, more recently, machine learning prediction methods have been gaining popularity in the field of landslide susceptibility modeling. In particular, these data-driven approaches show promise when tackling the challenge of mapping landslide-prone areas for large regions, which may not have sufficient geotechnical data for physically-based methods. Currently, there is no single best method for empirical susceptibility modeling. Therefore, this study presents a comparison of traditional statistical and novel machine learning models applied to regional-scale landslide susceptibility modeling. These methods were evaluated by spatial k-fold cross-validation estimation of the predictive performance, assessment of variable importance for gaining insights into model behavior, and by the appearance of the prediction (i.e. susceptibility) map. The modeling techniques applied were logistic regression (GLM), generalized additive models (GAM), weights of evidence (WOE), the support vector machine (SVM), random forest classification (RF), and bootstrap aggregated classification trees (bundling) with penalized discriminant analysis (BPLDA). These modeling methods were tested in three areas in the province of Lower Austria, Austria, characterized by different geological and morphological settings. Random forest and bundling classification techniques had the best overall predictive performances. However, the performances of all modeling techniques were for the most part not significantly different from each other; depending on the area of interest, the overall median estimated area under the receiver operating characteristic curve (AUROC) differences ranged from 2.9 to 8.9 percentage points, and the overall median estimated true positive rate (TPR) differences, measured at a 10% false positive rate (FPR), ranged from 11 to 15 percentage points. The relative importance of each predictor generally differed between the modeling methods. However, slope angle, surface roughness and plan

  19. Prediction of effluent concentration in a wastewater treatment plant using machine learning models.

    PubMed

    Guo, Hong; Jeong, Kwanho; Lim, Jiyeon; Jo, Jeongwon; Kim, Young Mo; Park, Jong-pyo; Kim, Joon Ha; Cho, Kyung Hwa

    2015-06-01

    With the growing amount of food waste, integrated food waste and wastewater treatment has been regarded as an efficient treatment approach. However, the food waste load on a conventional treatment process may lead to high concentrations of total nitrogen (T-N) that affect the effluent water quality. The objective of this study is to establish two machine learning models - artificial neural networks (ANNs) and support vector machines (SVMs) - in order to predict the 1-day-interval T-N concentration of effluent from a wastewater treatment plant in Ulsan, Korea. Daily water quality data and meteorological data were used, and the performance of both models was evaluated in terms of the coefficient of determination (R2), Nash-Sutcliffe efficiency (NSE), and relative efficiency criteria (drel). Additionally, Latin-Hypercube one-factor-at-a-time (LH-OAT) sampling and a pattern search algorithm were applied for sensitivity analysis and model parameter optimization, respectively. Results showed that both models could be effectively applied to the 1-day-interval prediction of effluent T-N concentration. The SVM model showed higher prediction accuracy in the training stage and similar results in the validation stage. However, the sensitivity analysis demonstrated that the ANN model was superior for 1-day-interval T-N concentration prediction in terms of the cause-and-effect relationship between T-N concentration and model inputs for integrated food waste and wastewater treatment. This study suggests an efficient and robust nonlinear time-series modeling method for early prediction of the water quality of an integrated food waste and wastewater treatment process. Copyright © 2015. Published by Elsevier B.V.

  20. Multi-center machine learning in imaging psychiatry: A meta-model approach.

    PubMed

    Dluhoš, Petr; Schwarz, Daniel; Cahn, Wiepke; van Haren, Neeltje; Kahn, René; Španiel, Filip; Horáček, Jiří; Kašpárek, Tomáš; Schnack, Hugo

    2017-07-15

    One of the biggest problems in automated diagnosis of psychiatric disorders from medical images is the lack of sufficiently large samples for training. Sample size is especially important in the case of highly heterogeneous disorders such as schizophrenia, where machine learning models built on relatively low numbers of subjects may suffer from poor generalizability. Via multicenter studies and consortium initiatives, researchers have tried to solve this problem by combining data sets from multiple sites. The necessary sharing of (raw) data is, however, often hindered by legal and ethical issues. Moreover, in the case of very large samples, the computational complexity might become too large. The solution to this problem could be distributed learning. In this paper we investigated the possibility of creating a meta-model by combining support vector machine (SVM) classifiers trained on the local datasets, without the need for sharing medical images or any other personal data. Validation was done in a 4-center setup comprising 480 first-episode schizophrenia patients and healthy controls in total. We built SVM models to separate patients from controls based on three different kinds of imaging features derived from structural MRI scans, and compared models built on the joint multicenter data to the meta-models. The results showed that the combined meta-model had high similarity to the model built on all data pooled together and comparable classification performance on all three imaging features. Both similarity and performance were superior to those of the local models. We conclude that combining models is thus a viable alternative that facilitates data sharing and creating bigger and more informative models. Copyright © 2017 Elsevier Inc. All rights reserved.
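
    A toy sketch of the distributed idea: each center trains an SVM locally, and only the fitted models are pooled; averaging their decision values is one simple combination rule among the several one could choose. The data and the 4-center split are synthetic placeholders.

    ```python
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    sites = []
    for s in range(4):                       # 4 centers; raw data never leaves a site
        X = rng.normal(size=(120, 30))
        y = (X[:, 0] + rng.normal(0, 1, 120) > 0).astype(int)
        sites.append(SVC(kernel="linear").fit(X, y))

    def meta_predict(models, X):
        """Meta-model: average the local SVMs' decision values, then threshold."""
        score = np.mean([m.decision_function(X) for m in models], axis=0)
        return (score > 0).astype(int)

    X_new = rng.normal(size=(10, 30))
    print(meta_predict(sites, X_new))
    ```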

  1. EBS Radionuclide Transport Abstraction

    SciTech Connect

    R. Schreiner

    2001-06-27

    The purpose of this work is to develop the Engineered Barrier System (EBS) radionuclide transport abstraction model, as directed by a written development plan (CRWMS M&O 1999a). This abstraction is the conceptual model that will be used to determine the rate of release of radionuclides from the EBS to the unsaturated zone (UZ) in the total system performance assessment-license application (TSPA-LA). In particular, this model will be used to quantify the time-dependent radionuclide releases from a failed waste package (WP) and their subsequent transport through the EBS to the emplacement drift wall/UZ interface. The development of this conceptual model will allow Performance Assessment Operations (PAO) and its Engineered Barrier Performance Department to provide a more detailed and complete EBS flow and transport abstraction. The results from this conceptual model will allow PAO to address portions of the key technical issues (KTIs) presented in three NRC Issue Resolution Status Reports (IRSRs): (1) the Evolution of the Near-Field Environment (ENFE), Revision 2 (NRC 1999a), (2) the Container Life and Source Term (CLST), Revision 2 (NRC 1999b), and (3) the Thermal Effects on Flow (TEF), Revision 1 (NRC 1998). The conceptual model for flow and transport in the EBS will be referred to as the "EBS RT Abstraction" in this analysis/modeling report (AMR). The scope of this abstraction and report is limited to flow and transport processes. More specifically, this AMR does not discuss elements of the TSPA-SR and TSPA-LA that relate to the EBS but are discussed in other AMRs. These elements include corrosion processes, radionuclide solubility limits, waste form dissolution rates and concentrations of colloidal particles that are generally represented as boundary conditions or input parameters for the EBS RT Abstraction. In effect, this AMR provides the algorithms for transporting radionuclides using the flow geometry and radionuclide concentrations determined by other

  2. Integrating Machine Learning into a Crowdsourced Model for Earthquake-Induced Damage Assessment

    NASA Technical Reports Server (NTRS)

    Rebbapragada, Umaa; Oommen, Thomas

    2011-01-01

    On January 12th, 2010, a catastrophic 7.0M earthquake devastated the country of Haiti. In the aftermath of an earthquake, it is important to rapidly assess damaged areas in order to mobilize the appropriate resources. The Haiti damage assessment effort introduced a promising model that uses crowdsourcing to map damaged areas in freely available remotely-sensed data. This paper proposes the application of machine learning methods to improve this model. Specifically, we apply work on learning from multiple, imperfect experts to the assessment of volunteer reliability, and propose the use of image segmentation to automate the detection of damaged areas. We wrap both tasks in an active learning framework in order to shift volunteer effort from mapping a full catalog of images to the generation of high-quality training data. We hypothesize that the integration of machine learning into this model improves its reliability, maintains the speed of damage assessment, and allows the model to scale to higher data volumes.

  3. The applications of machine learning algorithms in the modeling of estrogen-like chemicals.

    PubMed

    Liu, Huanxiang; Yao, Xiaojun; Gramatica, Paola

    2009-06-01

    Increasing concern is being shown by the scientific community, government regulators, and the public about endocrine-disrupting chemicals that, in the environment, are adversely affecting human and wildlife health through a variety of mechanisms, mainly estrogen receptor-mediated mechanisms of toxicity. Because of the large number of such chemicals in the environment, there is a great need for an effective means of rapidly assessing endocrine-disrupting activity in the toxicology assessment process. When faced with the challenging task of screening large libraries of molecules for biological activity, the benefits of computational predictive models based on quantitative structure-activity relationships to identify possible estrogens become immediately obvious. Recently, in order to improve the accuracy of prediction, some machine learning techniques were introduced to build more effective predictive models. In this review we will focus our attention on some recent advances in the use of these methods in modeling estrogen-like chemicals. The advantages and disadvantages of the machine learning algorithms used in solving this problem, the importance of the validation and performance assessment of the built models as well as their applicability domains will be discussed.

  4. Modeling and Designing of a Nonlinear Temperature-Humidity Controller Used in a Mushroom-Drying Machine

    NASA Astrophysics Data System (ADS)

    Wu, Xiuhua; Luo, Haiyan; Shi, Minhui

    Drying of many kinds of farm produce in a closed room, as in a mushroom-drying machine, is generally a complicated nonlinear, time-delay process in which temperature and humidity are the main controlled variables. Accurate control of the temperature and humidity is a long-standing problem, and building an accurate mathematical model of how the two vary is difficult and important. In this paper, a mathematical model is put forward after considering many aspects and analyzing the actual working conditions. From the model it can be seen that the changes of temperature and humidity in the drying machine are not simply linear but form an affine nonlinear process. Controlling the process exactly is the key factor influencing the quality of the dried mushrooms. Differential-geometric theories and methods are used to analyze and solve the model of these small-environment variables, and finally a nonlinear controller satisfying an optimal quadratic performance index is designed, which proves more feasible and practical than conventional control.

  5. Classification of signaling proteins based on molecular star graph descriptors using Machine Learning models.

    PubMed

    Fernandez-Lozano, Carlos; Cuiñas, Rubén F; Seoane, José A; Fernández-Blanco, Enrique; Dorado, Julian; Munteanu, Cristian R

    2015-11-07

    Signaling proteins are an important topic in drug development due to the increased importance of finding fast, accurate and cheap methods to evaluate new molecular targets involved in specific diseases. The complexity of the protein structure hinders the direct association of the signaling activity with the molecular structure. Therefore, the proposed solution involves the use of protein star graphs for encoding the peptide sequence information into specific topological indices calculated with the S2SNet tool. The Quantitative Structure-Activity Relationship classification model obtained with Machine Learning techniques is able to predict new signaling peptides. The best classification model, the first signaling prediction model of its kind, is based on eleven descriptors and was obtained using the Support Vector Machines-Recursive Feature Elimination (SVM-RFE) technique with the Laplacian kernel (RFE-LAP), achieving an AUROC of 0.961. The prediction performance of the model was assessed by testing a set of 3114 proteins of unknown function from the PDB database. Important signaling pathways are presented for three UniprotIDs (34 PDBs) with a signaling prediction greater than 98.0%. Copyright © 2015 Elsevier Ltd. All rights reserved.

  6. A Genetic Algorithm Based Support Vector Machine Model for Blood-Brain Barrier Penetration Prediction

    PubMed Central

    Zhang, Daqing; Xiao, Jianfeng; Zhou, Nannan; Zheng, Mingyue; Luo, Xiaomin; Jiang, Hualiang; Chen, Kaixian

    2015-01-01

    The blood-brain barrier (BBB) is a highly complex physical barrier determining what substances are allowed to enter the brain. The support vector machine (SVM) is a kernel-based machine learning method that is widely used in QSAR studies. For a successful SVM model, the kernel parameters and the feature subset selection are the most important factors affecting prediction accuracy. In most studies, they are treated as two independent problems, but it has been proven that they can affect each other. We designed and implemented a genetic algorithm (GA) to optimize kernel parameters and feature subset selection for SVM regression and applied it to BBB penetration prediction. The results show that our GA/SVM model is more accurate than other currently available log BB models. Therefore, optimizing both SVM parameters and the feature subset simultaneously with a genetic algorithm is a better approach than methods that treat the two problems separately. Analysis of our log BB model suggests that the carboxylic acid group, polar surface area (PSA)/hydrogen-bonding ability, lipophilicity, and molecular charge play an important role in BBB penetration. Among those properties relevant to BBB penetration, lipophilicity enhances BBB penetration while all the others are negatively correlated with it. PMID:26504797
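
    A compact sketch of the joint optimization, assuming synthetic data: each chromosome encodes log-scaled SVM hyperparameters plus a feature bit mask, and fitness is cross-validated performance. Population size, generation count, and mutation rates are arbitrary illustrative values, not the paper's settings.

    ```python
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVR

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 20))
    y = X[:, 0] - 2 * X[:, 3] + rng.normal(0, 0.1, 200)   # placeholder log BB values
    N_FEAT = X.shape[1]

    def fitness(chrom):
        """Chromosome = [log2(C), log2(gamma), feature bit mask]; fitness = CV R^2."""
        mask = chrom[2:].astype(bool)
        if not mask.any():
            return -np.inf
        svm = SVR(C=2.0 ** chrom[0], gamma=2.0 ** chrom[1])
        return cross_val_score(svm, X[:, mask], y, cv=3).mean()

    pop = np.column_stack([rng.uniform(-5, 10, 30), rng.uniform(-10, 2, 30),
                           rng.integers(0, 2, (30, N_FEAT))])
    for gen in range(20):
        scores = np.array([fitness(c) for c in pop])
        parents = pop[np.argsort(scores)[-10:]]           # truncation selection
        children = []
        for _ in range(20):
            a, b = parents[rng.integers(0, 10, 2)]
            cut = rng.integers(1, N_FEAT + 2)
            child = np.concatenate([a[:cut], b[cut:]])    # one-point crossover
            child[:2] += rng.normal(0, 0.3, 2)            # mutate kernel parameters
            flip = rng.random(N_FEAT) < 0.05              # mutate feature mask
            child[2:][flip] = 1 - child[2:][flip]
            children.append(child)
        pop = np.vstack([parents, children])

    best = pop[np.argmax([fitness(c) for c in pop])]
    ```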

  7. Three-Phase Unbalanced Transient Dynamics and Powerflow for Modeling Distribution Systems With Synchronous Machines

    SciTech Connect

    Elizondo, Marcelo A.; Tuffner, Francis K.; Schneider, Kevin P.

    2016-01-01

    Unlike transmission systems, distribution feeders in North America operate under unbalanced conditions at all times, and generally have a single strong voltage source. When a distribution feeder is connected to a strong substation source, the system is dynamically very stable, even for large transients. However if a distribution feeder, or part of the feeder, is separated from the substation and begins to operate as an islanded microgrid, transient dynamics become more of an issue. To assess the impact of transient dynamics at the distribution level, it is not appropriate to use traditional transmission solvers, which generally assume transposed lines and balanced loads. Full electromagnetic solvers capture a high level of detail, but it is difficult to model large systems because of the required detail. This paper proposes an electromechanical transient model of synchronous machine for distribution-level modeling and microgrids. This approach includes not only the machine model, but also its interface with an unbalanced network solver, and a powerflow method to solve unbalanced conditions without a strong reference bus. The presented method is validated against a full electromagnetic transient simulation.

  8. Abstracts of SIG Sessions.

    ERIC Educational Resources Information Center

    Proceedings of the ASIS Annual Meeting, 1995

    1995-01-01

    Presents abstracts of 15 special interest group (SIG) sessions. Topics include navigation and information utilization in the Internet, natural language processing, automatic indexing, image indexing, classification, users' models of database searching, online public access catalogs, education for information professions, information services,…

  9. Ascertaining Validity in the Abstract Realm of PMESII Simulation Models: An Analysis of the Peace Support Operations Model (PSOM)

    DTIC Science & Technology

    2009-06-01

    the situation we wish to model (Perla, 1990, p. 276). This problem is amplified when attempting to model irregular warfare. In FM 3-07, the newest... turn affects decisions made during the course of those events by players representing opposing sides (Perla, 1990, p. 274). PSOM is a campaign-level... exploration of human decision processes in the context of military action (Perla, 1990, p. 261). An action model that is disconnected from the

  10. Quantifying surgical complexity with machine learning: looking beyond patient factors to improve surgical models.

    PubMed

    Van Esbroeck, Alexander; Rubinfeld, Ilan; Hall, Bruce; Syed, Zeeshan

    2014-11-01

    To investigate the use of machine learning to empirically determine the risk of individual surgical procedures and to improve surgical models with this information. American College of Surgeons National Surgical Quality Improvement Program (ACS NSQIP) data from 2005 to 2009 were used to train support vector machine (SVM) classifiers to learn the relationship between textual constructs in current procedural terminology (CPT) descriptions and mortality, morbidity, Clavien 4 complications, and surgical-site infections (SSI) within 30 days of surgery. The procedural risk scores produced by the SVM classifiers were validated on data from 2010 in univariate and multivariate analyses. The procedural risk scores produced by the SVM classifiers achieved moderate-to-high levels of discrimination in univariate analyses (area under receiver operating characteristic curve: 0.871 for mortality, 0.789 for morbidity, 0.791 for SSI, 0.845 for Clavien 4 complications). Addition of these scores also substantially improved multivariate models comprising patient factors and previously proposed correlates of procedural risk (net reclassification improvement and integrated discrimination improvement: 0.54 and 0.001 for mortality, 0.46 and 0.011 for morbidity, 0.68 and 0.022 for SSI, 0.44 and 0.001 for Clavien 4 complications; P < .05 for all comparisons). Similar improvements were noted in discrimination and calibration for other statistical measures, and in subcohorts comprising patients with general or vascular surgery. Machine learning provides clinically useful estimates of surgical risk for individual procedures. This information can be measured in an entirely data-driven manner and substantially improves multifactorial models to predict postoperative complications. Copyright © 2014 Elsevier Inc. All rights reserved.
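
    A minimal sketch of deriving procedural risk scores from procedure text, assuming hypothetical CPT-like descriptions and outcomes: a linear SVM on TF-IDF features yields a decision value per procedure that can feed downstream multivariate risk models.

    ```python
    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import LinearSVC

    # hypothetical CPT-style description texts and 30-day outcomes per case
    texts = ["laparoscopic cholecystectomy",
             "open repair ruptured abdominal aortic aneurysm",
             "excision of skin lesion",
             "pancreatectomy with reconstruction"] * 50
    y = np.tile([0, 1, 0, 1], 50)

    clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
    clf.fit(texts, y)
    risk_score = clf.decision_function(texts)  # procedural risk score per case
    ```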

  11. A model-based analysis of impulsivity using a slot-machine gambling paradigm.

    PubMed

    Paliwal, Saee; Petzschner, Frederike H; Schmitz, Anna Katharina; Tittgemeyer, Marc; Stephan, Klaas E

    2014-01-01

    Impulsivity plays a key role in decision-making under uncertainty. It is a significant contributor to problem and pathological gambling (PG). Standard assessments of impulsivity by questionnaires, however, have various limitations, partly because impulsivity is a broad, multi-faceted concept. What remains unclear is which of these facets contribute to shaping gambling behavior. In the present study, we investigated impulsivity as expressed in a gambling setting by applying computational modeling to data from 47 healthy male volunteers who played a realistic, virtual slot-machine gambling task. Behaviorally, we found that impulsivity, as measured independently by the 11th revision of the Barratt Impulsiveness Scale (BIS-11), correlated significantly with an aggregate read-out of the following gambling responses: bet increases (BIs), machines switches (MS), casino switches (CS), and double-ups (DUs). Using model comparison, we compared a set of hierarchical Bayesian belief-updating models, i.e., the Hierarchical Gaussian Filter (HGF) and Rescorla-Wagner reinforcement learning (RL) models, with regard to how well they explained different aspects of the behavioral data. We then examined the construct validity of our winning models with multiple regression, relating subject-specific model parameter estimates to the individual BIS-11 total scores. In the most predictive model (a three-level HGF), the two free parameters encoded uncertainty-dependent mechanisms of belief updates and significantly explained BIS-11 variance across subjects. Furthermore, in this model, decision noise was a function of trial-wise uncertainty about winning probability. Collectively, our results provide a proof of concept that hierarchical Bayesian models can characterize the decision-making mechanisms linked to the impulsive traits of an individual. These novel indices of gambling mechanisms unmasked during actual play may be useful for online prevention measures for at-risk players and future
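
    For concreteness, the Rescorla-Wagner belief update named above (the simpler alternative to the HGF in the model comparison) takes one line per trial; the outcomes below are hypothetical slot-machine wins, and the learning rate is an illustrative value.

    ```python
    import numpy as np

    def rescorla_wagner(outcomes, alpha=0.1, v0=0.5):
        """Trial-by-trial belief about win probability under a Rescorla-Wagner update."""
        v = np.empty(len(outcomes) + 1)
        v[0] = v0
        for t, o in enumerate(outcomes):
            v[t + 1] = v[t] + alpha * (o - v[t])  # prediction-error-driven update
        return v

    wins = np.random.default_rng(0).random(200) < 0.3  # hypothetical win/loss sequence
    belief = rescorla_wagner(wins.astype(float))
    ```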

  12. A model-based analysis of impulsivity using a slot-machine gambling paradigm

    PubMed Central

    Paliwal, Saee; Petzschner, Frederike H.; Schmitz, Anna Katharina; Tittgemeyer, Marc; Stephan, Klaas E.

    2014-01-01

    Impulsivity plays a key role in decision-making under uncertainty. It is a significant contributor to problem and pathological gambling (PG). Standard assessments of impulsivity by questionnaires, however, have various limitations, partly because impulsivity is a broad, multi-faceted concept. What remains unclear is which of these facets contribute to shaping gambling behavior. In the present study, we investigated impulsivity as expressed in a gambling setting by applying computational modeling to data from 47 healthy male volunteers who played a realistic, virtual slot-machine gambling task. Behaviorally, we found that impulsivity, as measured independently by the 11th revision of the Barratt Impulsiveness Scale (BIS-11), correlated significantly with an aggregate read-out of the following gambling responses: bet increases (BIs), machines switches (MS), casino switches (CS), and double-ups (DUs). Using model comparison, we compared a set of hierarchical Bayesian belief-updating models, i.e., the Hierarchical Gaussian Filter (HGF) and Rescorla–Wagner reinforcement learning (RL) models, with regard to how well they explained different aspects of the behavioral data. We then examined the construct validity of our winning models with multiple regression, relating subject-specific model parameter estimates to the individual BIS-11 total scores. In the most predictive model (a three-level HGF), the two free parameters encoded uncertainty-dependent mechanisms of belief updates and significantly explained BIS-11 variance across subjects. Furthermore, in this model, decision noise was a function of trial-wise uncertainty about winning probability. Collectively, our results provide a proof of concept that hierarchical Bayesian models can characterize the decision-making mechanisms linked to the impulsive traits of an individual. These novel indices of gambling mechanisms unmasked during actual play may be useful for online prevention measures for at-risk players and

  15. EBS Radionuclide Transport Abstraction

    SciTech Connect

    J. Prouty

    2006-07-14

    The purpose of this report is to develop and analyze the engineered barrier system (EBS) radionuclide transport abstraction model, consistent with Level I and Level II model validation, as identified in Technical Work Plan for: Near-Field Environment and Transport: Engineered Barrier System: Radionuclide Transport Abstraction Model Report Integration (BSC 2005 [DIRS 173617]). The EBS radionuclide transport abstraction (or EBS RT Abstraction) is the conceptual model used in the total system performance assessment (TSPA) to determine the rate of radionuclide releases from the EBS to the unsaturated zone (UZ). The EBS RT Abstraction conceptual model consists of two main components: a flow model and a transport model. Both models are developed mathematically from first principles in order to show explicitly what assumptions, simplifications, and approximations are incorporated into the models used in the TSPA. The flow model defines the pathways for water flow in the EBS and specifies how the flow rate is computed in each pathway. Input to this model includes the seepage flux into a drift. The seepage flux is potentially split by the drip shield, with some (or all) of the flux being diverted by the drip shield and some passing through breaches in the drip shield that might result from corrosion or seismic damage. The flux through drip shield breaches is potentially split by the waste package, with some (or all) of the flux being diverted by the waste package and some passing through waste package breaches that might result from corrosion or seismic damage. Neither the drip shield nor the waste package survives an igneous intrusion, so the flux splitting submodel is not used in the igneous scenario class. The flow model is validated in an independent model validation technical review. The drip shield and waste package flux splitting algorithms are developed and validated using experimental data. The transport model considers advective transport and diffusive transport

  16. Discriminative feature-rich models for syntax-based machine translation.

    SciTech Connect

    Dixon, Kevin R.

    2012-12-01

    This report describes the campus executive LDRD “Discriminative Feature-Rich Models for Syntax-Based Machine Translation,” which was an effort to foster a better relationship between Sandia and Carnegie Mellon University (CMU). The primary purpose of the LDRD was to fund the research of a promising graduate student at CMU; in this case, Kevin Gimpel was selected from the pool of candidates. This report gives a brief overview of Kevin Gimpel's research.

  17. A machine learning approach to the potential-field method for implicit modeling of geological structures

    NASA Astrophysics Data System (ADS)

    Gonçalves, Ítalo Gomes; Kumaira, Sissa; Guadagnin, Felipe

    2017-06-01

    Implicit modeling has experienced a rise in popularity over the last decade due to its advantages in terms of speed and reproducibility in comparison with manual digitization of geological structures. The potential-field method consists in interpolating a scalar function that indicates which side of a geological boundary a given point belongs to, based on cokriging of point data and structural orientations. This work proposes a vector potential-field solution from a machine learning perspective, recasting the problem as multi-class classification, which alleviates some of the original method's assumptions. The potentials related to each geological class are interpreted in a compositional data framework. Variogram modeling is avoided through the use of maximum likelihood to train the model, and an uncertainty measure is introduced. The methodology was applied to the modeling of a sample dataset provided with the software Move™. The calculations were implemented in the R language and 3D visualizations were prepared with the rgl package.
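
    A minimal sketch of the recast-as-classification idea follows, using a generic scikit-learn classifier in place of the paper's compositional maximum-likelihood formulation; the borehole points, unit labels, and the entropy-based uncertainty measure are all illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Toy borehole observations: (x, z) coordinates labelled with a geological unit.
X = np.array([[0.1, 0.2], [0.2, 0.8], [0.5, 0.5], [0.8, 0.1], [0.9, 0.9],
              [0.3, 0.3], [0.7, 0.6], [0.4, 0.9], [0.6, 0.2], [0.15, 0.55]])
y = np.array([0, 1, 1, 0, 2, 0, 2, 1, 0, 1])   # three illustrative units

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Evaluate the implicit model on a regular grid; the predicted class draws the
# geological map, and the entropy of the class probabilities gives a simple
# uncertainty measure.
gx, gz = np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 1, 50))
grid = np.column_stack([gx.ravel(), gz.ravel()])
proba = clf.predict_proba(grid)
units = proba.argmax(axis=1).reshape(gx.shape)
entropy = (-proba * np.log(proba + 1e-12)).sum(axis=1).reshape(gx.shape)
print(units.shape, float(entropy.max()))
```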

  18. Study of Two-Dimensional Compressible Non-Acoustic Modeling of Stirling Machine Type Components

    NASA Technical Reports Server (NTRS)

    Tew, Roy C., Jr.; Ibrahim, Mounir B.

    2001-01-01

    A two-dimensional (2-D) computer code was developed for modeling enclosed volumes of gas with oscillating boundaries, such as Stirling machine components. An existing 2-D incompressible flow computer code, CAST, was used as the starting point for the project. CAST was modified to use the compressible non-acoustic Navier-Stokes equations to model an enclosed volume including an oscillating piston. The devices modeled have low Mach numbers and are sufficiently small that the time required for acoustics to propagate across them is negligible. Therefore, acoustics were excluded to enable more time efficient computation. Background information about the project is presented. The compressible non-acoustic flow assumptions are discussed. The governing equations used in the model are presented in transport equation format. A brief description is given of the numerical methods used. Comparisons of code predictions with experimental data are then discussed.

  19. Accurate modeling of switched reluctance machine based on hybrid trained WNN

    NASA Astrophysics Data System (ADS)

    Song, Shoujun; Ge, Lefei; Ma, Shaojie; Zhang, Man

    2014-04-01

    According to the strong nonlinear electromagnetic characteristics of switched reluctance machine (SRM), a novel accurate modeling method is proposed based on hybrid trained wavelet neural network (WNN) which combines improved genetic algorithm (GA) with gradient descent (GD) method to train the network. In the novel method, the WNN is trained by the GD method starting from initial weights obtained via improved GA optimization, so that the global parallel searching capability of the stochastic algorithm and the local convergence speed of the deterministic algorithm are combined to enhance the training accuracy, stability and speed. Based on the measured electromagnetic characteristics of a 3-phase 12/8-pole SRM, the nonlinear simulation model is built by hybrid trained WNN in Matlab. The phase current and mechanical characteristics from simulation under different working conditions agree well with those from experiments, which indicates the accuracy of the model for dynamic and static performance evaluation of SRM and verifies the effectiveness of the proposed modeling method.
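
    The two-stage training scheme can be sketched as follows: a bare-bones GA supplies initial weights and plain gradient descent refines them. The toy target, the Morlet wavelet activation, and the finite-difference gradient below are stand-ins, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for the SRM flux/current surface: one input, one output.
x = np.linspace(-1, 1, 64)
y = np.sin(3 * x)

H = 6  # hidden wavelet neurons

def wnn(theta, x):
    # Morlet wavelet activation; theta packs [translations, dilations, weights].
    t, s, w = theta[:H], theta[H:2*H], theta[2*H:]
    u = (x[:, None] - t) / (np.abs(s) + 1e-3)
    return (np.cos(1.75 * u) * np.exp(-u ** 2 / 2)) @ w

def mse(theta):
    return float(np.mean((wnn(theta, x) - y) ** 2))

# --- Stage 1: a bare-bones GA supplies the initial weights ---
pop = rng.normal(0, 1, size=(40, 3 * H))
for gen in range(60):
    fit = np.array([mse(p) for p in pop])
    parents = pop[np.argsort(fit)[:20]]                 # truncation selection
    kids = parents[rng.integers(0, 20, 40)].copy()
    for k, c in enumerate(rng.integers(1, 3 * H, 40)):  # one-point crossover
        kids[k, c:] = parents[rng.integers(0, 20), c:]
    kids += rng.normal(0, 0.1, kids.shape)              # mutation
    pop = kids
theta = pop[np.argmin([mse(p) for p in pop])]

# --- Stage 2: gradient descent (finite differences) refines them ---
lr, eps = 0.05, 1e-5
for step in range(500):
    g = np.array([(mse(theta + eps * e) - mse(theta - eps * e)) / (2 * eps)
                  for e in np.eye(3 * H)])
    theta -= lr * g

print(f"MSE after GA init + GD refinement: {mse(theta):.4f}")
```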

  20. A wearable computing platform for developing cloud-based machine learning models for health monitoring applications.

    PubMed

    Patel, Shyamal; McGinnis, Ryan S; Silva, Ikaro; DiCristofaro, Steve; Mahadevan, Nikhil; Jortberg, Elise; Franco, Jaime; Martin, Albert; Lust, Joseph; Raj, Milan; McGrane, Bryan; DePetrillo, Paolo; Aranyosi, A J; Ceruolo, Melissa; Pindado, Jesus; Ghaffari, Roozbeh

    2016-08-01

    Wearable sensors have the potential to enable clinical-grade ambulatory health monitoring outside the clinic. Technological advances have enabled development of devices that can measure vital signs with great precision and significant progress has been made towards extracting clinically meaningful information from these devices in research studies. However, translating measurement accuracies achieved in the controlled settings such as the lab and clinic to unconstrained environments such as the home remains a challenge. In this paper, we present a novel wearable computing platform for unobtrusive collection of labeled datasets and a new paradigm for continuous development, deployment and evaluation of machine learning models to ensure robust model performance as we transition from the lab to home. Using this system, we train activity classification models across two studies and track changes in model performance as we go from constrained to unconstrained settings.

  1. Modelling of classification rules on metabolic patterns including machine learning and expert knowledge.

    PubMed

    Baumgartner, Christian; Böhm, Christian; Baumgartner, Daniela

    2005-04-01

    Machine learning has great potential for mining candidate markers from high-dimensional metabolic data without any a priori knowledge. As an example, we investigated metabolic patterns of three severe metabolic disorders, PAHD, MCADD, and 3-MCCD, on which we constructed classification models for disease screening and diagnosis using a decision tree paradigm and logistic regression analysis (LRA). For the LRA model-building process we assessed the relevance of established diagnostic flags, which have been developed from the biochemical knowledge of newborn metabolism, and compared the models' error rates with those of the decision tree classifier. Both approaches yielded comparable classification accuracy in terms of sensitivity (>95.2%), while the LRA models built on flags showed significantly enhanced specificity. The false positive rate did not exceed 0.001%.
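
    A sketch of the two compared paradigms on synthetic screening data, reporting sensitivity and specificity, is given below; the metabolite panel and the flag-like log-ratio driving the labels are invented for illustration and are not the paper's diagnostic cut-offs.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)

# Toy metabolite panel: 1000 newborns x 10 log-concentrations; the label is
# driven by one flag-like log-ratio (purely illustrative, not real cut-offs).
X = np.log(rng.lognormal(size=(1000, 10)))
y = (X[:, 0] - X[:, 1] > 1.25).astype(int)   # "diagnostic flag" as a log-ratio

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0, stratify=y)
for name, clf in [("decision tree", DecisionTreeClassifier(max_depth=4, random_state=0)),
                  ("logistic regression (LRA)", LogisticRegression(max_iter=2000))]:
    y_hat = clf.fit(X_tr, y_tr).predict(X_te)
    tn, fp, fn, tp = confusion_matrix(y_te, y_hat).ravel()
    print(f"{name}: sensitivity = {tp/(tp+fn):.3f}, specificity = {tn/(tn+fp):.3f}")
```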

  2. Accurate modeling of switched reluctance machine based on hybrid trained WNN

    SciTech Connect

    Song, Shoujun; Ge, Lefei; Ma, Shaojie; Zhang, Man

    2014-04-15

    According to the strong nonlinear electromagnetic characteristics of switched reluctance machine (SRM), a novel accurate modeling method is proposed based on hybrid trained wavelet neural network (WNN) which combines improved genetic algorithm (GA) with gradient descent (GD) method to train the network. In the novel method, WNN is trained by GD method based on the initial weights obtained per improved GA optimization, and the global parallel searching capability of stochastic algorithm and local convergence speed of deterministic algorithm are combined to enhance the training accuracy, stability and speed. Based on the measured electromagnetic characteristics of a 3-phase 12/8-pole SRM, the nonlinear simulation model is built by hybrid trained WNN in Matlab. The phase current and mechanical characteristics from simulation under different working conditions meet well with those from experiments, which indicates the accuracy of the model for dynamic and static performance evaluation of SRM and verifies the effectiveness of the proposed modeling method.

  3. Modelling Soil Water Retention Using Support Vector Machines with Genetic Algorithm Optimisation

    PubMed Central

    Lamorski, Krzysztof; Sławiński, Cezary; Moreno, Felix; Barna, Gyöngyi; Skierucha, Wojciech; Arrue, José L.

    2014-01-01

    This work presents point pedotransfer function (PTF) models of the soil water retention curve. The developed models allowed for estimation of the soil water content for the specified soil water potentials: –0.98, –3.10, –9.81, –31.02, –491.66, and –1554.78 kPa, based on the following soil characteristics: soil granulometric composition, total porosity, and bulk density. Support Vector Machines (SVM) methodology was used for model development. A new methodology for elaboration of retention function models is proposed. As an alternative to previous attempts known from the literature, the ν-SVM method was used for model development, and the results were compared with those of the formerly used C-SVM method. Genetic algorithms were used as the optimisation framework for searching the models' parameters. A new form of the aim function used in the parameter search is proposed, which allowed for the development of models with better prediction capabilities. This new aim function avoids overestimation of models, which is typically encountered when root mean squared error is used as an aim function. The elaborated models showed good agreement with measured soil water retention data. The achieved coefficients of determination were in the range 0.67–0.92. The studies demonstrated the usability of the ν-SVM methodology together with genetic algorithm optimisation for retention modelling, which gave better performing models than the other tested approaches. PMID:24772030

  4. Modelling soil water retention using support vector machines with genetic algorithm optimisation.

    PubMed

    Lamorski, Krzysztof; Sławiński, Cezary; Moreno, Felix; Barna, Gyöngyi; Skierucha, Wojciech; Arrue, José L

    2014-01-01

    This work presents point pedotransfer function (PTF) models of the soil water retention curve. The developed models allowed for estimation of the soil water content for the specified soil water potentials: -0.98, -3.10, -9.81, -31.02, -491.66, and -1554.78 kPa, based on the following soil characteristics: soil granulometric composition, total porosity, and bulk density. Support Vector Machines (SVM) methodology was used for model development. A new methodology for elaboration of retention function models is proposed. As an alternative to previous attempts known from the literature, the ν-SVM method was used for model development, and the results were compared with those of the formerly used C-SVM method. Genetic algorithms were used as the optimisation framework for searching the models' parameters. A new form of the aim function used in the parameter search is proposed, which allowed for the development of models with better prediction capabilities. This new aim function avoids overestimation of models, which is typically encountered when root mean squared error is used as an aim function. The elaborated models showed good agreement with measured soil water retention data. The achieved coefficients of determination were in the range 0.67-0.92. The studies demonstrated the usability of the ν-SVM methodology together with genetic algorithm optimisation for retention modelling, which gave better performing models than the other tested approaches.
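
    A minimal sketch of the ν-SVM versus C-SVM comparison on synthetic pedotransfer-style data with scikit-learn follows; the features, target, and hyperparameters are placeholders, and the paper's custom aim function and GA search are not reproduced here.

```python
import numpy as np
from sklearn.svm import NuSVR, SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Toy pedotransfer inputs: sand fraction, clay fraction, porosity, bulk density.
X = rng.uniform(size=(120, 4))
# Illustrative water-content target at one potential (not real soil physics).
theta = 0.1 + 0.3 * X[:, 2] - 0.15 * X[:, 0] + 0.05 * rng.normal(size=120)

for name, model in [("nu-SVR", NuSVR(nu=0.5, C=10.0, gamma="scale")),
                    ("C-SVR", SVR(C=10.0, epsilon=0.01, gamma="scale"))]:
    rmse = -cross_val_score(model, X, theta, cv=5,
                            scoring="neg_root_mean_squared_error").mean()
    print(f"{name}: CV RMSE = {rmse:.4f}")
```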

  5. Feature combination networks for the interpretation of statistical machine learning models: application to Ames mutagenicity

    PubMed Central

    2014-01-01

    Background A new algorithm has been developed to enable the interpretation of black box models. The developed algorithm is agnostic to the learning algorithm and open to all structure-based descriptors such as fragments, keys and hashed fingerprints. The algorithm has provided meaningful interpretation of Ames mutagenicity predictions from both random forest and support vector machine models built on a variety of structural fingerprints. A fragmentation algorithm is utilised to investigate the model's behaviour on specific substructures present in the query. An output is formulated summarising causes of activation and deactivation. The algorithm is able to identify multiple causes of activation or deactivation, in addition to identifying localised deactivations where the prediction for the query is active overall. No loss in performance is seen as there is no change in the prediction; the interpretation is produced directly on the model's behaviour for the specific query. Results Models have been built using multiple learning algorithms including support vector machine and random forest. The models were built on public Ames mutagenicity data and a variety of fingerprint descriptors were used. These models produced good performance in both internal and external validation with accuracies around 82%. The models were used to evaluate the interpretation algorithm. The interpretations revealed links that align closely with understood mechanisms for Ames mutagenicity. Conclusion This methodology allows for a greater utilisation of the predictions made by black box models and can expedite further study based on the output for a (quantitative) structure activity model. Additionally the algorithm could be utilised for chemical dataset investigation and knowledge extraction/human SAR development. PMID:24661325

  6. Simulation of abrasive flow machining process for 2D and 3D mixture models

    NASA Astrophysics Data System (ADS)

    Dash, Rupalika; Maity, Kalipada

    2015-12-01

    Improvement of surface finish and material removal has been quite a challenge in a finishing operation such as abrasive flow machining (AFM). Factors that affect the surface finish and material removal are media viscosity, extrusion pressure, piston velocity, and particle size in the abrasive flow machining process. Performing experiments for all the parameters and accurately obtaining an optimized parameter in a short time are difficult to accomplish because the operation requires a precise finish. Computational fluid dynamics (CFD) simulation was employed to accurately determine the optimum parameters. In the current work, a 2D model was designed, and the flow analysis, force calculation, and material removal prediction were performed and compared with the available experimental data. Another 3D model for a swaging die finishing using AFM was simulated at different viscosities of the media to study the effects on the controlling parameters. The CFD simulation was performed using the commercially available ANSYS FLUENT. Two phases were considered for the flow analysis, and a multiphase mixture model was taken into account. The fluid was considered to be a

  7. Seismic Consequence Abstraction

    SciTech Connect

    M. Gross

    2004-10-25

    The primary purpose of this model report is to develop abstractions for the response of engineered barrier system (EBS) components to seismic hazards at a geologic repository at Yucca Mountain, Nevada, and to define the methodology for using these abstractions in a seismic scenario class for the Total System Performance Assessment - License Application (TSPA-LA). A secondary purpose of this model report is to provide information for criticality studies related to seismic hazards. The seismic hazards addressed herein are vibratory ground motion, fault displacement, and rockfall due to ground motion. The EBS components are the drip shield, the waste package, and the fuel cladding. The requirements for development of the abstractions and the associated algorithms for the seismic scenario class are defined in ''Technical Work Plan For: Regulatory Integration Modeling of Drift Degradation, Waste Package and Drip Shield Vibratory Motion and Seismic Consequences'' (BSC 2004 [DIRS 171520]). The development of these abstractions will provide a more complete representation of flow into and transport from the EBS under disruptive events. The results from this development will also address portions of integrated subissue ENG2, Mechanical Disruption of Engineered Barriers, including the acceptance criteria for this subissue defined in Section 2.2.1.3.2.3 of the ''Yucca Mountain Review Plan, Final Report'' (NRC 2003 [DIRS 163274]).

  8. Use of machine learning methods to reduce predictive error of groundwater models.

    PubMed

    Xu, Tianfang; Valocchi, Albert J; Choi, Jaesik; Amir, Eyal

    2014-01-01

    Quantitative analyses of groundwater flow and transport typically rely on a physically-based model, which is inherently subject to error. Errors in model structure, parameters, and data lead to both random and systematic errors even in the output of a calibrated model. We develop complementary data-driven models (DDMs) to reduce the predictive error of physically-based groundwater models. Two machine learning techniques, instance-based weighting and support vector regression, are used to build the DDMs. This approach is illustrated using two real-world case studies of the Republican River Compact Administration model and the Spokane Valley-Rathdrum Prairie model. The two groundwater models have different hydrogeologic settings, parameterization, and calibration methods. In the first case study, cluster analysis is introduced for data preprocessing to make the DDMs more robust and computationally efficient. The DDMs reduce the root-mean-square error (RMSE) of the temporal, spatial, and spatiotemporal prediction of piezometric head of the groundwater model by 82%, 60%, and 48%, respectively. In the second case study, the DDMs reduce the RMSE of the temporal prediction of piezometric head of the groundwater model by 77%. It is further demonstrated that the effectiveness of the DDMs depends on the existence and extent of the structure in the error of the physically-based model. © 2013, National GroundWater Association.
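
    The residual-learning idea can be sketched in a few lines: fit an SVR to the error of a physics model and add its prediction back. The synthetic "physics model", the structured error, and the choice of predictors below are assumptions for illustration only.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)

t = np.linspace(0, 10, 400)                        # time, illustrative units
head_obs = 5 + np.sin(t) + 0.3 * np.sin(5 * t)     # "observed" piezometric head
head_phys = 5 + np.sin(t)                          # physics-model output with a
                                                   # structured (systematic) error

# Train a data-driven model on the physics model's residual, using time and
# the physics output as predictors (the predictor choice is illustrative).
X = np.column_stack([t, head_phys])
resid = head_obs - head_phys

idx = rng.permutation(400)
train, test = idx[:300], idx[300:]
ddm = SVR(C=10.0, gamma=2.0, epsilon=0.01).fit(X[train], resid[train])

corrected = head_phys[test] + ddm.predict(X[test])
rmse_phys = np.sqrt(np.mean((head_phys[test] - head_obs[test]) ** 2))
rmse_corr = np.sqrt(np.mean((corrected - head_obs[test]) ** 2))
print(f"RMSE physics-only: {rmse_phys:.3f}   physics + DDM: {rmse_corr:.3f}")
```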

  9. Machine learning models identify molecules active against the Ebola virus in vitro

    PubMed Central

    Ekins, Sean; Freundlich, Joel S.; Clark, Alex M.; Anantpadma, Manu; Davey, Robert A.; Madrid, Peter

    2016-01-01

    The search for small molecule inhibitors of Ebola virus (EBOV) has led to several high throughput screens over the past 3 years. These have identified a range of FDA-approved active pharmaceutical ingredients (APIs) with anti-EBOV activity in vitro, several of which are also active in a mouse infection model. There are millions of additional commercially-available molecules that could be screened for potential activities as anti-EBOV compounds. One way to prioritize compounds for testing is to generate computational models based on the high throughput screening data and then virtually screen compound libraries. In the current study, we have generated Bayesian machine learning models with viral pseudotype entry assay and EBOV replication assay data. We have validated the models internally and externally. We have also used these models to computationally score the MicroSource library of drugs to select those likely to be potential inhibitors. Three of the highest scoring molecules that were not in the model training sets, quinacrine, pyronaridine and tilorone, were tested in vitro and had EC50 values of 350, 420 and 230 nM, respectively. Pyronaridine is a component of a combination therapy for malaria that was recently approved by the European Medicines Agency, which may make it more readily accessible for clinical testing. Like other known antimalarial drugs active against EBOV, it shares the 4-aminoquinoline scaffold. Tilorone is an investigational antiviral agent that has shown a broad array of biological activities including cell growth inhibition in cancer cells, antifibrotic properties, α7 nicotinic receptor agonist activity, radioprotective activity and activation of hypoxia inducible factor-1. Quinacrine is an antimalarial but also has use as an anthelmintic. Our results suggest that data sets with fewer than 1,000 molecules can produce validated machine learning models that can in turn be utilized to identify novel EBOV inhibitors in vitro. PMID:26834994
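
    A sketch of the general workflow with a Bernoulli naive Bayes classifier (one common reading of "Bayesian machine learning models" on binary fingerprints) is given below; the fingerprints, activity labels, and library are synthetic placeholders, not the assay data used in the paper.

```python
import numpy as np
from sklearn.naive_bayes import BernoulliNB

rng = np.random.default_rng(0)

# Toy binary fingerprints (rows = molecules, columns = substructure bits).
X_train = rng.integers(0, 2, size=(500, 128))
# Illustrative labels: activity loosely tied to two "pharmacophore" bits.
y_train = ((X_train[:, 3] & X_train[:, 17]) | (rng.random(500) < 0.05)).astype(int)

model = BernoulliNB().fit(X_train, y_train)

# Score an unscreened library and take the top-ranked molecules forward.
X_library = rng.integers(0, 2, size=(10000, 128))
scores = model.predict_proba(X_library)[:, 1]
top = np.argsort(scores)[::-1][:10]
print("indices of top-scoring library molecules:", top)
```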

  10. Modeling complex responses of FM-sensitive cells in the auditory midbrain using a committee machine.

    PubMed

    Chang, T R; Chiu, T W; Sun, X; Poon, Paul W F

    2013-11-06

    Frequency modulation (FM) is an important building block of complex sounds that include speech signals. Exploring the neural mechanisms of FM coding with computer modeling could help understand how speech sounds are processed in the brain. Here, we modeled the single unit responses of auditory neurons recorded from the midbrain of anesthetized rats. These neurons displayed spectral temporal receptive fields (STRFs) that had multiple-trigger features, and were more complex than those with single-trigger features. Their responses have not been modeled satisfactorily with simple artificial neural networks, unlike neurons with simple-trigger features. To improve model performance, here we tested an approach with the committee machine. For a given neuron, the peri-stimulus time histogram (PSTH) was first generated in response to a repeated random FM tone, and peaks in the PSTH were segregated into groups based on the similarity of their pre-spike FM trigger features. Each group was then modeled using an artificial neural network with simple architecture, and, when necessary, by increasing the number of neurons in the hidden layer. After initial training, the artificial neural networks with their optimized weighting coefficients were pooled into a committee machine for training. Finally, the model performance was tested by prediction of the response of the same cell to a novel FM tone. The results showed improvement over simple artificial neural networks, supporting that trigger-feature-based modeling can be extended to cells with complex responses. This article is part of a Special Issue entitled Neural Coding 2012.
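
    The committee idea in its generic form, several simple networks pooled by averaging, can be sketched as below; the bootstrap resampling stands in for the paper's trigger-feature grouping, and the FM-response data are synthetic.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Toy stimulus feature (e.g., a pre-spike FM slope) and a firing-rate target
# with two response bumps, standing in for complex trigger features.
x = rng.uniform(-1, 1, size=(400, 1))
rate = (np.exp(-((x[:, 0] + 0.5) ** 2) / 0.02)
        + 0.6 * np.exp(-((x[:, 0] - 0.5) ** 2) / 0.05)
        + 0.02 * rng.normal(size=400))

# Committee machine: several simple networks trained on bootstrap resamples,
# pooled by averaging their outputs.
experts = []
for i in range(5):
    idx = rng.integers(0, 400, 400)
    net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=i)
    experts.append(net.fit(x[idx], rate[idx]))

def committee(x_new):
    return np.mean([e.predict(x_new) for e in experts], axis=0)

print(committee(np.array([[-0.5], [0.0], [0.5]])))
```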

  11. A hybrid flowshop scheduling model considering dedicated machines and lot-splitting for the solar cell industry

    NASA Astrophysics Data System (ADS)

    Wang, Li-Chih; Chen, Yin-Yann; Chen, Tzu-Li; Cheng, Chen-Yang; Chang, Chin-Wei

    2014-10-01

    This paper studies a solar cell industry scheduling problem, which is similar to traditional hybrid flowshop scheduling (HFS). In a typical HFS problem, the allocation of machine resources for each order should be scheduled in advance. In solar cell manufacturing, however, the number of machines can be adjusted dynamically to complete the jobs, which poses an additional challenge. An optimal production scheduling model is developed to explore these issues, considering practical characteristics such as the hybrid flowshop, a parallel machine system, dedicated machines, sequence-independent job setup times and sequence-dependent job setup times. The objective of this model is to minimise the makespan and to decide the processing sequence of the orders/lots in each stage, the lot-splitting decisions for the orders, and the number of machines used to satisfy the demands in each stage. From the experimental results, lot-splitting has a significant effect on shortening the makespan, and the improvement is influenced by the processing time and the setup time of orders. Therefore, the threshold point for improving the makespan can be identified. In addition, the model also indicates that more lot-splitting, that is, greater flexibility in allocating orders/lots to machines, will result in better scheduling performance.

  12. Gain scheduled continuous-time model predictive controller with experimental validation on AC machine

    NASA Astrophysics Data System (ADS)

    Wang, Liuping; Gan, Lu

    2013-08-01

    Linear controllers with gain scheduling have been successfully used in the control of nonlinear systems for the past several decades. This paper proposes the design of a gain scheduled continuous-time model predictive controller with constraints. Using an induction machine as an illustrative example, the paper will show the four steps involved in the design of a gain scheduled predictive controller: (i) linearisation of a nonlinear plant according to operating conditions; (ii) the design of linear predictive controllers for the family of linear models; (iii) a gain scheduled predictive control law that will optimise a multiple model objective function with constraints, and that will also ensure smooth transitions (i.e. bumpless transfer) between the predictive controllers; (iv) experimental validation of the gain scheduled predictive control system with constraints.
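
    The scheduling mechanism of step (iii) can be illustrated, in a much simplified PI form, by interpolating controller gains between design points; all gains and operating points below are invented, and the paper's constrained predictive control law is not reproduced.

```python
import numpy as np

# Gains designed offline for a family of linearised models, indexed by the
# scheduling variable (e.g., normalised rotor speed); values are illustrative.
operating_points = np.array([0.0, 0.5, 1.0])
Kp_table = np.array([2.0, 1.4, 0.9])
Ki_table = np.array([5.0, 3.2, 2.1])

def scheduled_gains(speed):
    """Interpolate controller gains between design points; interpolation is
    one common way to obtain smooth (bumpless) transitions."""
    kp = np.interp(speed, operating_points, Kp_table)
    ki = np.interp(speed, operating_points, Ki_table)
    return kp, ki

for v in (0.1, 0.55, 0.9):
    print(v, "->", scheduled_gains(v))
```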

  13. Glucose Oxidase Biosensor Modeling and Predictors Optimization by Machine Learning Methods †

    PubMed Central

    Gonzalez-Navarro, Felix F.; Stilianova-Stoytcheva, Margarita; Renteria-Gutierrez, Livier; Belanche-Muñoz, Lluís A.; Flores-Rios, Brenda L.; Ibarra-Esquer, Jorge E.

    2016-01-01

    Biosensors are small analytical devices incorporating a biological recognition element and a physico-chemical transducer to convert a biological signal into an electrical reading. Nowadays, their technological appeal resides in their fast performance, high sensitivity and continuous measuring capabilities; however, a full understanding is still an open research topic. This paper aims to contribute to this growing field of biotechnology, with a focus on Glucose-Oxidase Biosensor (GOB) modeling through statistical learning methods from a regression perspective. We model the amperometric response of a GOB as a function of its operating variables under different conditions, such as temperature, benzoquinone, pH and glucose concentrations, by means of several machine learning algorithms. Since the sensitivity of a GOB response is strongly related to these variables, their interactions should be optimized to maximize the output signal, for which a genetic algorithm and simulated annealing are used. We report a model that shows a good generalization error and is consistent with the optimization. PMID:27792165

  14. Modelling and simulation for table tennis referee regulation based on finite state machine.

    PubMed

    Cui, Jianjiang; Liu, Zixuan; Xu, Long

    2016-10-13

    As referees' decisions are made manually in traditional table tennis matches, many factors in a match, such as fatigue and subjective tendency, may lead to unjust decisions. Based on a finite state machine (FSM), this paper presents a model for table tennis referee regulation to substitute for manual decisions. In this model, the trajectory of the ball is recorded through a binocular visual system, while the complete rules extracted from the International Table Tennis Federation (ITTF) rules are described based on the FSM. The final decision for the competition is made based on expert system theory. Simulation results show that the proposed model has high accuracy, and can be generalised to other similar games such as badminton, volleyball, etc.
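
    A drastically reduced FSM for a single rally gives the flavor of the approach; the states, events, and rules below are a toy subset, not the ITTF rule set or the binocular tracking front end.

```python
# States and transitions follow a drastically reduced rule set: a rally starts
# with a serve, the ball must then be returned legally, and any violation
# ends the rally with a point to the other player.
class RallyFSM:
    def __init__(self):
        self.state = "SERVE"          # SERVE -> IN_RALLY -> POINT_A / POINT_B

    def on_event(self, event):
        if self.state == "SERVE":
            self.state = "IN_RALLY" if event == "legal_serve" else "POINT_B"
        elif self.state == "IN_RALLY":
            if event == "ball_returned":
                pass                   # rally continues, stay in IN_RALLY
            elif event == "ball_missed_by_B":
                self.state = "POINT_A"
            else:                      # net, double bounce, out of bounds...
                self.state = "POINT_B"
        return self.state

fsm = RallyFSM()
for e in ["legal_serve", "ball_returned", "ball_missed_by_B"]:
    print(e, "->", fsm.on_event(e))
```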

  15. Hybrid wavelet-support vector machine approach for modelling rainfall-runoff process.

    PubMed

    Komasi, Mehdi; Sharghi, Soroush

    2016-01-01

    Because of the importance of water resources management, the need for accurate modeling of the rainfall-runoff process has grown rapidly in the past decades. Recently, the support vector machine (SVM) approach has been used by hydrologists for rainfall-runoff modeling and other fields of hydrology. Similar to other artificial intelligence models, such as the artificial neural network (ANN) and the adaptive neural fuzzy inference system, the SVM model is based on autoregressive properties. In this paper, wavelet analysis was linked to the SVM model concept for modeling the rainfall-runoff process of the Aghchai and Eel River watersheds. In this way, the main time series of the two variables, rainfall and runoff, were decomposed into multiple time series at different frequencies by wavelet theory; then, these time series were imposed as input data on the SVM model in order to predict the runoff discharge one day ahead. The obtained results show that the wavelet SVM model can predict both short- and long-term runoff discharges by considering the seasonality effects. Also, the proposed hybrid model is relatively more appropriate than classical autoregressive ones such as ANN and SVM because it uses the multi-scale time series of rainfall and runoff data in the modeling process.
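
    A minimal sketch of the hybrid scheme follows, using the stationary wavelet transform from PyWavelets (an assumed dependency) to decompose a synthetic rainfall series into time-aligned sub-series that feed an SVR; the data and the same-day prediction setup are simplifications of the paper's one-day-ahead design.

```python
import numpy as np
import pywt                      # PyWavelets
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Synthetic daily rainfall and runoff with a seasonal component (illustrative).
n = 256                          # length must be divisible by 2**level for swt
day = np.arange(n)
rain = np.clip(rng.gamma(2, 1, n) + 2 * np.sin(2 * np.pi * day / 64), 0, None)
runoff = np.convolve(rain, [0.5, 0.3, 0.2], mode="same") + 0.1 * rng.normal(size=n)

# The stationary wavelet transform keeps every sub-series aligned with time,
# so each decomposition level can be used directly as a model input.
levels = pywt.swt(rain, "db4", level=2)          # [(cA2, cD2), (cA1, cD1)]
X = np.column_stack([c for pair in levels for c in pair])
y = runoff

model = SVR(C=10.0, gamma="scale").fit(X[:-30], y[:-30])
pred = model.predict(X[-30:])
print("RMSE on held-out tail:", float(np.sqrt(np.mean((pred - y[-30:]) ** 2))))
```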

  16. Estimation of the applicability domain of kernel-based machine learning models for virtual screening

    PubMed Central

    2010-01-01

    Background The virtual screening of large compound databases is an important application of structure-activity relationship models. Due to the high structural diversity of these data sets, it is impossible for machine learning based QSAR models, which rely on a specific training set, to give reliable results for all compounds. Thus, it is important to consider the subset of the chemical space in which the model is applicable. The approaches to this problem that have been published so far mostly use vectorial descriptor representations to define this domain of applicability of the model. Unfortunately, these cannot be extended easily to structured kernel-based machine learning models. For this reason, we propose three approaches to estimate the domain of applicability of a kernel-based QSAR model. Results We evaluated three kernel-based applicability domain estimations using three different structured kernels on three virtual screening tasks. Each experiment consisted of the training of a kernel-based QSAR model using support vector regression and the ranking of a disjoint screening data set according to the predicted activity. For each prediction, the applicability of the model for the respective compound is quantitatively described using a score obtained by an applicability domain formulation. The suitability of the applicability domain estimation is evaluated by comparing the model performance on the subsets of the screening data sets obtained by different thresholds for the applicability scores. This comparison indicates that it is possible to separate the part of the chemical space in which the model gives reliable predictions from the part consisting of structures too dissimilar to the training set to apply the model successfully. A closer inspection reveals that the virtual screening performance of the model is considerably improved if half of the molecules, those with the lowest applicability scores, are omitted from the screening. Conclusion The proposed
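
    One of the simplest kernel-based applicability scores, the maximum kernel similarity to any training compound, can be sketched as follows; the descriptors, kernel parameters, and threshold are illustrative, and the paper evaluates several more refined formulations.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVR

rng = np.random.default_rng(0)

X_train = rng.normal(size=(200, 16))               # toy descriptor vectors
y_train = X_train[:, 0] - 0.5 * X_train[:, 1] + 0.1 * rng.normal(size=200)
model = SVR(kernel="rbf", gamma=0.05, C=10.0).fit(X_train, y_train)

# Screening set: half drawn near the training distribution, half far from it.
X_near = rng.normal(size=(100, 16))
X_far = rng.normal(loc=4.0, size=(100, 16))
X_screen = np.vstack([X_near, X_far])

# Applicability score: the largest kernel similarity to any training compound.
score = rbf_kernel(X_screen, X_train, gamma=0.05).max(axis=1)
reliable = score > np.quantile(score, 0.5)         # keep the better half
print("fraction of 'near' compounds kept:", reliable[:100].mean())
```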

  17. An Abstract Data Interface

    NASA Astrophysics Data System (ADS)

    Allan, D. J.

    The Abstract Data Interface (ADI) is a system within which both abstract data models and their mappings on to file formats can be defined. The data model system is object-oriented and closely follows the Common Lisp Object System (CLOS) object model. Programming interfaces in both C and Fortran are supplied, and are designed to be simple enough for use by users with limited software skills. The prototype system supports access to those FITS formats most commonly used in the X-ray community, as well as the Starlink NDF data format. New interfaces can be rapidly added to the system---these may communicate directly with the file system, other ADI objects or elsewhere (e.g., a network connection).

  18. MIP Models and Hybrid Algorithms for Simultaneous Job Splitting and Scheduling on Unrelated Parallel Machines

    PubMed Central

    Ozmutlu, H. Cenk

    2014-01-01

    We developed mixed integer programming (MIP) models and hybrid genetic-local search algorithms for the scheduling problem of unrelated parallel machines with job sequence and machine-dependent setup times and with the job splitting property. The first contribution of this paper is to introduce novel algorithms which perform splitting and scheduling simultaneously with a variable number of subjobs. We proposed a simple chromosome structure constituted of random key numbers in the hybrid genetic-local search algorithm (GAspLA). Random key numbers are used frequently in genetic algorithms, but they create additional difficulty when hybrid factors in local search are implemented. We developed algorithms that satisfy the adaptation of the results of local search into the genetic algorithm with minimum relocation operation of the genes' random key numbers. This is the second contribution of the paper. The third contribution of this paper is three new MIP models which perform splitting and scheduling simultaneously. The fourth contribution of this paper is the implementation of the GAspLAMIP. This implementation lets us verify the optimality of GAspLA for the studied combinations. The proposed methods are tested on a set of problems taken from the literature and the results validate the effectiveness of the proposed algorithms. PMID:24977204

  19. MIP models and hybrid algorithms for simultaneous job splitting and scheduling on unrelated parallel machines.

    PubMed

    Eroglu, Duygu Yilmaz; Ozmutlu, H Cenk

    2014-01-01

    We developed mixed integer programming (MIP) models and hybrid genetic-local search algorithms for the scheduling problem of unrelated parallel machines with job sequence and machine-dependent setup times and with the job splitting property. The first contribution of this paper is to introduce novel algorithms which perform splitting and scheduling simultaneously with a variable number of subjobs. We proposed a simple chromosome structure constituted of random key numbers in the hybrid genetic-local search algorithm (GAspLA). Random key numbers are used frequently in genetic algorithms, but they create additional difficulty when hybrid factors in local search are implemented. We developed algorithms that satisfy the adaptation of the results of local search into the genetic algorithm with minimum relocation operation of the genes' random key numbers. This is the second contribution of the paper. The third contribution of this paper is three new MIP models which perform splitting and scheduling simultaneously. The fourth contribution of this paper is the implementation of the GAspLAMIP. This implementation lets us verify the optimality of GAspLA for the studied combinations. The proposed methods are tested on a set of problems taken from the literature and the results validate the effectiveness of the proposed algorithms.
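
    A miniature version of the splitting-plus-scheduling MIP can be written with the PuLP modeling library (an assumption; the paper does not name a tool): continuous variables split each job across unrelated machines, binaries trigger setups, and the makespan is minimized. Sequence-dependent setups are omitted for brevity, and all data are invented.

```python
import pulp

# 3 jobs, 2 unrelated machines; p[j][m] = processing time of the whole job j
# on machine m, s[j][m] = setup time paid once if any part of j runs on m.
p = [[6, 9], [8, 5], [7, 7]]
s = [[1, 2], [2, 1], [1, 1]]
J, M = range(3), range(2)

prob = pulp.LpProblem("job_splitting_makespan", pulp.LpMinimize)
x = pulp.LpVariable.dicts("frac", (J, M), lowBound=0, upBound=1)   # split fractions
y = pulp.LpVariable.dicts("used", (J, M), cat="Binary")            # setup indicator
cmax = pulp.LpVariable("makespan", lowBound=0)

prob += cmax                                           # objective: minimize makespan
for j in J:
    prob += pulp.lpSum(x[j][m] for m in M) == 1        # the whole job is processed
    for m in M:
        prob += x[j][m] <= y[j][m]                     # setup if a split runs here
for m in M:
    prob += pulp.lpSum(p[j][m] * x[j][m] + s[j][m] * y[j][m] for j in J) <= cmax

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("makespan:", pulp.value(cmax))
```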

  20. State Event Models for the Formal Analysis of Human-Machine Interactions

    NASA Technical Reports Server (NTRS)

    Combefis, Sebastien; Giannakopoulou, Dimitra; Pecheur, Charles

    2014-01-01

    The work described in this paper was motivated by our experience with applying a framework for formal analysis of human-machine interactions (HMI) to a realistic model of an autopilot. The framework is built around a formally defined conformance relation called "full-control" between an actual system and the mental model according to which the system is operated. Systems are well-designed if they can be described by relatively simple, full-control, mental models for their human operators. For this reason, our framework supports automated generation of minimal full-control mental models for HMI systems, where both the system and the mental models are described as labelled transition systems (LTS). The autopilot that we analysed has been developed in the NASA Ames HMI prototyping tool ADEPT. In this paper, we describe how we extended the models that our HMI analysis framework handles to allow adequate representation of ADEPT models. We then provide a property-preserving reduction from these extended models to LTSs, to enable application of our LTS-based formal analysis algorithms. Finally, we briefly discuss the analyses we were able to perform on the autopilot model with our extended framework.

  1. Chemical Kinetics of Hydrogen Atom Abstraction from Allylic Sites by (3)O2; Implications for Combustion Modeling and Simulation.

    PubMed

    Zhou, Chong-Wen; Simmie, John M; Somers, Kieran P; Goldsmith, C Franklin; Curran, Henry J

    2017-03-09

    Hydrogen atom abstraction from allylic C-H bonds by molecular oxygen plays a very important role in determining the reactivity of fuel molecules having allylic hydrogen atoms. Rate constants for hydrogen atom abstraction by molecular oxygen from molecules with allylic sites have been calculated. A series of molecules with primary, secondary, tertiary, and super secondary allylic hydrogen atoms of the alkene, furan, and alkylbenzene families are taken into consideration. Those molecules include propene, 2-butene, isobutene, 2-methylfuran, and toluene, containing the primary allylic hydrogen atom; 1-butene, 1-pentene, 2-ethylfuran, ethylbenzene, and n-propylbenzene, containing the secondary allylic hydrogen atom; 3-methyl-1-butene, 2-isopropylfuran, and isopropylbenzene, containing the tertiary allylic hydrogen atom; and 1,4-pentadiene, containing super secondary allylic hydrogen atoms. The M06-2X/6-311++G(d,p) level of theory was used to optimize the geometries of all of the reactants, transition states, and products, and to apply hindered-rotor treatments to the lower-frequency modes. The G4 level of theory was used to calculate the electronic single point energies for those species to determine the 0 K barriers to reaction. Conventional transition state theory with Eckart tunnelling corrections was used to calculate the rate constants. The comparison of our calculated rate constants with the available experimental results from the literature shows good agreement for the reactions of propene and isobutene with molecular oxygen. The rate constant for toluene with O2 is about an order of magnitude slower than that experimentally derived from a comprehensive model proposed by Oehlschlaeger and coauthors. The results clearly indicate the need for a more detailed investigation of the combustion kinetics of toluene oxidation and its key pyrolysis and oxidation intermediates. Despite this, our computed barriers and rate constants retain an important internal consistency. Rate constants
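
    The rate-constant machinery reduces to the Eyring expression times a tunnelling factor; the sketch below uses the simpler Wigner correction in place of the paper's Eckart treatment, in a schematic pseudo-first-order form, with an invented barrier and imaginary frequency.

```python
import numpy as np

kB = 1.380649e-23      # Boltzmann constant, J/K
h  = 6.62607015e-34    # Planck constant, J*s
R  = 8.314462618       # gas constant, J/(mol*K)
c  = 2.99792458e10     # speed of light, cm/s

def tst_rate(T, dG_act, nu_imag):
    """Canonical transition state theory rate with a Wigner tunnelling factor.
    dG_act : Gibbs free energy of activation, J/mol (illustrative value)
    nu_imag: magnitude of the imaginary TS frequency, cm^-1 (illustrative)"""
    u = h * nu_imag * c / (kB * T)         # hv/kT for the reactive mode
    kappa = 1.0 + u ** 2 / 24.0            # Wigner tunnelling correction
    return kappa * (kB * T / h) * np.exp(-dG_act / (R * T))

for T in (800.0, 1200.0, 1600.0):
    print(f"T = {T:6.0f} K   k = {tst_rate(T, 170e3, 1800.0):.3e} s^-1")
```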

  2. Unified error model based spatial error compensation for four types of CNC machining center: Part I-Singular function based unified error model

    NASA Astrophysics Data System (ADS)

    Fan, Kaiguo; Yang, Jianguo; Yang, Liyan

    2015-08-01

    To unify the error model for four types of CNC machining center, the comprehensive error model of each type of CNC machining center was established using the homogeneous transformation matrix (HTM). The internal rules between the HTMs and the kinematic chains were analyzed in this research. The analysis results show that the HTM elements associated with the motion axes which are at the rear of the reference coordinate system are positive. On the contrary, the HTM elements associated with the motion axes which are at the front of the reference coordinate system are negative. To express these internal rules, the singular function was introduced into the HTMs, and a unified error model for four types of CNC machining center was established based on the HTM and the singular function. The unified error model includes 18 error elements, which are the main factors affecting the machining accuracy of CNC machine tools. The practical results show that the unified error model is suitable not only for vertical machining centers but also for horizontal machining centers.
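
    The HTM bookkeeping itself is compact: each axis contributes an ideal motion matrix and a linearised error matrix, and chaining them yields the volumetric error at the tool. The two-axis chain and error values below are illustrative; the paper's singular-function sign rules are not reproduced.

```python
import numpy as np

def htm(dx=0., dy=0., dz=0., ea=0., eb=0., ec=0.):
    """Homogeneous transformation with small translational errors (dx, dy, dz)
    and small angular errors (ea, eb, ec) in radians, linearised."""
    return np.array([[1.0,  -ec,   eb, dx],
                     [ ec,  1.0,  -ea, dy],
                     [-eb,   ea,  1.0, dz],
                     [0.0,  0.0,  0.0, 1.0]])

# Ideal motion of two axes followed by their error HTMs (values illustrative).
T_x_ideal = htm(dx=100.0)                       # X axis commanded to 100 mm
T_x_err   = htm(dy=0.004, ec=2e-5)              # straightness + yaw errors
T_z_ideal = htm(dz=-50.0)
T_z_err   = htm(dx=-0.002, eb=1e-5)

p = np.array([0.0, 0.0, 0.0, 1.0])              # tool reference point
tool_actual = T_x_ideal @ T_x_err @ T_z_ideal @ T_z_err @ p
tool_ideal  = T_x_ideal @ T_z_ideal @ p
print("volumetric error vector (mm):", (tool_actual - tool_ideal)[:3])
```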

  3. A Novel Extreme Learning Machine Classification Model for e-Nose Application Based on the Multiple Kernel Approach.

    PubMed

    Jian, Yulin; Huang, Daoyu; Yan, Jia; Lu, Kun; Huang, Ying; Wen, Tailai; Zeng, Tanyue; Zhong, Shijie; Xie, Qilong

    2017-06-19

    A novel classification model, named the quantum-behaved particle swarm optimization (QPSO)-based weighted multiple kernel extreme learning machine (QWMK-ELM), is proposed in this paper. Experimental validation is carried out with two different electronic nose (e-nose) datasets. Being different from the existing multiple kernel extreme learning machine (MK-ELM) algorithms, the combination coefficients of base kernels are regarded as external parameters of single-hidden layer feedforward neural networks (SLFNs). The combination coefficients of base kernels, the model parameters of each base kernel, and the regularization parameter are optimized by QPSO simultaneously before implementing the kernel extreme learning machine (KELM) with the composite kernel function. Four types of common single kernel functions (Gaussian kernel, polynomial kernel, sigmoid kernel, and wavelet kernel) are utilized to constitute different composite kernel functions. Moreover, the method is also compared with other existing classification methods: extreme learning machine (ELM), kernel extreme learning machine (KELM), k-nearest neighbors (KNN), support vector machine (SVM), multi-layer perceptron (MLP), radial basis function neural network (RBFNN), and probabilistic neural network (PNN). The results have demonstrated that the proposed QWMK-ELM outperforms the aforementioned methods, not only in precision, but also in efficiency for gas classification.

  4. A Novel Extreme Learning Machine Classification Model for e-Nose Application Based on the Multiple Kernel Approach

    PubMed Central

    Jian, Yulin; Huang, Daoyu; Yan, Jia; Lu, Kun; Huang, Ying; Wen, Tailai; Zeng, Tanyue; Zhong, Shijie; Xie, Qilong

    2017-01-01

    A novel classification model, named the quantum-behaved particle swarm optimization (QPSO)-based weighted multiple kernel extreme learning machine (QWMK-ELM), is proposed in this paper. Experimental validation is carried out with two different electronic nose (e-nose) datasets. Being different from the existing multiple kernel extreme learning machine (MK-ELM) algorithms, the combination coefficients of base kernels are regarded as external parameters of single-hidden layer feedforward neural networks (SLFNs). The combination coefficients of base kernels, the model parameters of each base kernel, and the regularization parameter are optimized by QPSO simultaneously before implementing the kernel extreme learning machine (KELM) with the composite kernel function. Four types of common single kernel functions (Gaussian kernel, polynomial kernel, sigmoid kernel, and wavelet kernel) are utilized to constitute different composite kernel functions. Moreover, the method is also compared with other existing classification methods: extreme learning machine (ELM), kernel extreme learning machine (KELM), k-nearest neighbors (KNN), support vector machine (SVM), multi-layer perceptron (MLP), radial basis function neural network (RBFNN), and probabilistic neural network (PNN). The results have demonstrated that the proposed QWMK-ELM outperforms the aforementioned methods, not only in precision, but also in efficiency for gas classification. PMID:28629202
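
    The KELM core has a closed-form solution, so a weighted two-kernel version fits in a few lines; the kernel weights below are fixed by hand where the paper would obtain them from QPSO, and the e-nose data are synthetic placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy e-nose readings: 2 gas classes, 8 sensor features (all illustrative).
X = rng.normal(size=(100, 8));  X[50:] += 1.0
T = np.zeros((100, 2));  T[:50, 0] = 1;  T[50:, 1] = 1   # one-hot targets

def gauss(A, B, g):       # Gaussian base kernel
    d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-g * d)

def poly(A, B, c, d):     # polynomial base kernel
    return (A @ B.T + c) ** d

def composite(A, B, w):   # weighted kernel sum; w would come from QPSO
    return w[0] * gauss(A, B, 0.1) + w[1] * poly(A, B, 1.0, 2)

w, C = np.array([0.7, 0.3]), 100.0          # kernel weights and regularisation
K = composite(X, X, w)
beta = np.linalg.solve(K + np.eye(len(X)) / C, T)   # KELM closed-form solution

X_new = rng.normal(size=(5, 8)) + 1.0
pred = composite(X_new, X, w) @ beta
print("predicted classes:", pred.argmax(axis=1))    # expect class 1
```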

  5. The Development of Surface Profile Models in Abrasive Slurry Jet Micro-machining of Brittle and Ductile materials

    NASA Astrophysics Data System (ADS)

    Nouraei, Hooman

    In low-pressure abrasive slurry jet micro-machining (ASJM), a slurry jet of fine abrasive particles is used to erode micro-sized features such as holes and channels in a variety of brittle and ductile materials with a high degree of accuracy and repeatability, without the need for a patterned mask. ASJM causes no tool wear or thermal damage, applies small forces on the workpiece, allows multilevel etching on a single substrate, and is relatively quick and inexpensive. In this study, for the first time, the mechanics of micro-slurry jet erosion and its relation to the fluid flow of the impinging jet were investigated using a newly developed ASJM system. Existing surface evolution models, previously developed for abrasive air jet machining (AJM), were evaluated and modified through the use of computational fluid dynamic (CFD) models for profile modeling of micro-channels and micro-holes machined with ASJM in brittle materials. A novel numerical-empirical model was also developed in order to compensate for the shortcomings of existing surface evolution models and provide a higher degree of accuracy in predicting the profiles of features in ductile materials machined with ASJM. In addition, the effect of process parameters on the minimum feature size attainable with ASJM as a maskless process was also examined, and it was shown that the size of machined features could be further reduced.

  6. Thermal Error Modeling Method with the Jamming of Temperature-Sensitive Points' Volatility on CNC Machine Tools

    NASA Astrophysics Data System (ADS)

    MIAO, Enming; LIU, Yi; XU, Jianguo; LIU, Hui

    2017-05-01

    Aiming at the deficient robustness of thermal error compensation models of CNC machine tools, the mechanism for improving the models' robustness is studied by taking the Leaderway-V450 machining center as the object. Through analysis of actual spindle air cutting experimental data on the Leaderway-V450 machine, it is found that the temperature-sensitive points used for modeling are volatile, and this volatility directly leads to large changes in the degree of collinearity among the modeling independent variables. Thus, the forecasting accuracy of the multivariate regression model is severely affected, and the forecasting robustness becomes poor too. To overcome this effect, a modeling method that establishes thermal error models using a single temperature variable under the jamming of temperature-sensitive points' volatility is put forward. Based on actual thermal error data measured in different seasons, it is shown that the single temperature variable model can reduce the loss of forecasting accuracy resulting from the volatility of temperature-sensitive points; especially for the prediction of cross-quarter data, the improvement in forecasting accuracy is about 5 μm or more. The goal of improving the robustness of the thermal error models is thus realized, which can provide a reference for selecting the modeling independent variable in the application of thermal error compensation of CNC machine tools.

  7. Thermal Error Modeling Method with the Jamming of Temperature-Sensitive Points' Volatility on CNC Machine Tools

    NASA Astrophysics Data System (ADS)

    MIAO, Enming; LIU, Yi; XU, Jianguo; LIU, Hui

    2017-03-01

    Aiming at the deficient robustness of thermal error compensation models of CNC machine tools, the mechanism for improving the models' robustness is studied by taking the Leaderway-V450 machining center as the object. Through analysis of actual spindle air cutting experimental data on the Leaderway-V450 machine, it is found that the temperature-sensitive points used for modeling are volatile, and this volatility directly leads to large changes in the degree of collinearity among the modeling independent variables. Thus, the forecasting accuracy of the multivariate regression model is severely affected, and the forecasting robustness becomes poor too. To overcome this effect, a modeling method that establishes thermal error models using a single temperature variable under the jamming of temperature-sensitive points' volatility is put forward. Based on actual thermal error data measured in different seasons, it is shown that the single temperature variable model can reduce the loss of forecasting accuracy resulting from the volatility of temperature-sensitive points; especially for the prediction of cross-quarter data, the improvement in forecasting accuracy is about 5 μm or more. The goal of improving the robustness of the thermal error models is thus realized, which can provide a reference for selecting the modeling independent variable in the application of thermal error compensation of CNC machine tools.

  8. Modeling of variable speed refrigerated display cabinets based on adaptive support vector machine

    NASA Astrophysics Data System (ADS)

    Cao, Zhikun; Han, Hua; Gu, Bo

    2010-01-01

    In this paper the adaptive support vector machine (ASVM) method is introduced to the field of intelligent modeling of refrigerated display cabinets and used to construct a highly precise mathematical model of their performance. A model for a variable speed open vertical display cabinet was constructed using preprocessing techniques for the measured data, including the elimination of outlying data points by the use of an exponentially weighted moving average (EWMA). Using dynamic loss coefficient adjustment, the adaptation of the SVM for use in this application was achieved. From there, the objective function for energy use per unit of display area, total energy consumption (TEC) divided by total display area (TDA), was constructed and solved using the ASVM method. When compared to the results achieved using a back-propagation neural network (BPNN) model, the ASVM model for the refrigerated display cabinet was characterized by its simple structure, fast convergence speed and high prediction accuracy. The ASVM model also has better noise rejection properties than the original SVM model. The theoretical analysis and experimental results presented in this paper show that it is feasible to model the display cabinet using the ASVM method.
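
    The EWMA preprocessing can be read as a simple outlier screen; the sketch below flags samples far from the running average and replaces them, with the thresholds and data invented for illustration.

```python
import numpy as np

def ewma_filter(x, lam=0.2, k=3.0):
    """Flag points that deviate from an exponentially weighted moving average
    by more than k estimated standard deviations, then replace them with the
    EWMA value (one simple reading of the preprocessing described above)."""
    m, s = x[0], np.std(x)
    cleaned = x.copy()
    for i in range(1, len(x)):
        if abs(x[i] - m) > k * s:
            cleaned[i] = m                    # replace the outlying sample
        m = lam * cleaned[i] + (1 - lam) * m  # update the moving average
    return cleaned

rng = np.random.default_rng(0)
power = 1.0 + 0.05 * rng.normal(size=200)     # cabinet power draw, kW (toy)
power[[40, 120]] = 3.0                        # two spurious spikes
print(ewma_filter(power)[[40, 120]])          # spikes replaced by the EWMA
```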

  9. Computational modeling of skin reflectance spectra for biological parameter estimation through machine learning

    NASA Astrophysics Data System (ADS)

    Vyas, Saurabh; Van Nguyen, Hien; Burlina, Philippe; Banerjee, Amit; Garza, Luis; Chellappa, Rama

    2012-06-01

    A computational skin reflectance model is used here to provide the reflectance, absorption, scattering, and transmittance based on the constitutive biological components that make up the layers of the skin. The changes in reflectance are mapped back to deviations in model parameters, which include melanosome level, collagen level and blood oxygenation. The computational model implemented in this work is based on the Kubelka-Munk multi-layer reflectance model and the Fresnel Equations that describe a generic N-layer model structure. This assumes the skin as a multi-layered material, with each layer consisting of specific absorption, scattering coefficients, reflectance spectra and transmittance based on the model parameters. These model parameters include melanosome level, collagen level, blood oxygenation, blood level, dermal depth, and subcutaneous tissue reflectance. We use this model, coupled with support vector machine based regression (SVR), to predict the biological parameters that make up the layers of the skin. In the proposed approach, the physics-based forward mapping is used to generate a large set of training exemplars. The samples in this dataset are then used as training inputs for the SVR algorithm to learn the inverse mapping. This approach was tested on VIS-range hyperspectral data. Performance validation of the proposed approach was performed by measuring the prediction error on the skin constitutive parameters and exhibited very promising results.

  10. Deriving global parameter estimates for the Noah land surface model using FLUXNET and machine learning

    NASA Astrophysics Data System (ADS)

    Chaney, Nathaniel W.; Herman, Jonathan D.; Ek, Michael B.; Wood, Eric F.

    2016-11-01

    With their origins in numerical weather prediction and climate modeling, land surface models aim to accurately partition the surface energy balance. An overlooked challenge in these schemes is the role of model parameter uncertainty, particularly at unmonitored sites. This study provides global parameter estimates for the Noah land surface model using 85 eddy covariance sites in the global FLUXNET network. The at-site parameters are first calibrated using a Latin Hypercube-based ensemble of the most sensitive parameters, determined by the Sobol method, to be the minimum stomatal resistance (rs,min), the Zilitinkevich empirical constant (Czil), and the bare soil evaporation exponent (fxexp). Calibration leads to an increase in the mean Kling-Gupta Efficiency performance metric from 0.54 to 0.71. These calibrated parameter sets are then related to local environmental characteristics using the Extra-Trees machine learning algorithm. The fitted Extra-Trees model is used to map the optimal parameter sets over the globe at a 5 km spatial resolution. The leave-one-out cross validation of the mapped parameters using the Noah land surface model suggests that there is the potential to skillfully relate calibrated model parameter sets to local environmental characteristics. The results demonstrate the potential to use FLUXNET to tune the parameterizations of surface fluxes in land surface models and to provide improved parameter estimates over the globe.
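
    The parameter-mapping step can be sketched directly with scikit-learn's ExtraTreesRegressor; the site characteristics, calibrated parameters, and grid below are synthetic placeholders for the FLUXNET-derived data.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor

rng = np.random.default_rng(0)

# 85 "sites": environmental characteristics (e.g., mean temperature, aridity,
# vegetation fraction) and the parameters calibrated at each site. All values
# here are synthetic placeholders.
site_env = rng.uniform(size=(85, 3))
calibrated = np.column_stack([
    40 + 200 * site_env[:, 2],                 # e.g., an rs_min-like parameter
    0.1 * site_env[:, 0] + 0.05,               # e.g., a Czil-like parameter
    1 + 3 * site_env[:, 1],                    # e.g., an fxexp-like parameter
]) + 0.05 * rng.normal(size=(85, 3))

mapper = ExtraTreesRegressor(n_estimators=300, random_state=0)
mapper.fit(site_env, calibrated)

# "Global" application: predict parameter sets wherever the environmental
# characteristics are known (here, a toy grid of 1000 cells).
grid_env = rng.uniform(size=(1000, 3))
grid_params = mapper.predict(grid_env)
print(grid_params.shape)   # (1000, 3): one parameter set per grid cell
```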

  11. Software model of a machine vision system based on the common house fly.

    PubMed

    Madsen, Robert; Barrett, Steven; Wilcox, Michael

    2005-01-01

    The vision system of the common house fly has many properties, such as hyperacuity and parallel structure, which would be advantageous in a machine vision system. A software model has been developed which is ultimately intended to be a tool to guide the design of an analog real time vision system. The model starts by laying out cartridges over an image. The cartridges are analogous to the ommatidia of the fly's eye and contain seven photoreceptors, each with a Gaussian profile. The spacing between photoreceptors is variable, providing for more or less detail as needed. The cartridges provide information on what type of features they see, and neighboring cartridges share information to construct a feature map.

  12. Quick Estimation Model for the Concentration of Indoor Airborne Culturable Bacteria: An Application of Machine Learning.

    PubMed

    Liu, Zhijian; Li, Hao; Cao, Guoqing

    2017-07-30

    Indoor airborne culturable bacteria are sometimes harmful to human health. Therefore, a quick estimation of their concentration is particularly necessary. However, measuring the indoor microorganism concentration (e.g., bacteria) usually requires a large amount of time, economic cost, and manpower. In this paper, we aim to provide a quick solution: using knowledge-based machine learning to estimate the concentration of indoor airborne culturable bacteria from several measurable indoor environmental indicators, including indoor particulate matter (PM2.5 and PM10), temperature, relative humidity, and CO₂ concentration. Our results show that a general regression neural network (GRNN) model can provide a sufficiently quick and decent estimation based on model training and testing using an experimental database with 249 data groups.
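
    A GRNN is essentially Nadaraya-Watson kernel regression, so a self-contained sketch is short; the indicator data, target, and bandwidth below are invented, with 249 rows merely echoing the size of the experimental database.

```python
import numpy as np

def grnn_predict(X_train, y_train, X_new, sigma=0.5):
    """General regression neural network = Nadaraya-Watson kernel regression:
    the prediction is a Gaussian-weighted average of the training targets."""
    d2 = ((X_new[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2 * sigma ** 2))
    return (w @ y_train) / w.sum(axis=1)

rng = np.random.default_rng(0)
# Toy indoor-environment indicators: PM2.5, PM10, temperature, RH, CO2.
X = rng.uniform(size=(249, 5))
y = 200 + 300 * X[:, 0] + 100 * X[:, 3] + 10 * rng.normal(size=249)  # CFU/m3-like

print(grnn_predict(X[:-10], y[:-10], X[-10:], sigma=0.3))
```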

  13. Dynamic model of heat and mass transfer in rectangular adsorber of a solar adsorption machine

    NASA Astrophysics Data System (ADS)

    Chekirou, W.; Boukheit, N.; Karaali, A.

    2016-10-01

    This paper presents the study of a rectangular adsorber of a solar adsorption cooling machine. The modeling and analysis of the adsorber are the key point of such studies because of the complex coupled heat and mass transfer phenomena that occur during the working cycle. The adsorber is heated by solar energy and contains a porous medium constituted of activated carbon AC-35 reacting by adsorption with methanol. To study the effect of the solar collector type on the system's performance, the model takes into account the variation of ambient temperature and solar intensity along a simulated day, corresponding to a total daily insolation of 26.12 MJ/m2 with an average ambient temperature of 27.7 °C, which is useful for knowing the daily thermal behavior of the rectangular adsorber.

  14. Simulation modeling and tracing optimal trajectory of robotic mining machine effector

    NASA Astrophysics Data System (ADS)

    Fryanov, VN; Pavlova, LD

    2017-02-01

    Within the framework of the robotic coal mine design for deep-level coal beds with high gas content in the seismically active areas of the southern Kuzbass, the motion path parameters for an effector of a robotic mining machine are evaluated. The simulation model is used to select the minimum-energy optimal trajectory for the robot effector, to calculate stresses and strains in a coal bed in a variable-perimeter shortwall in the course of coal extraction, to determine the coordinates of the coal bed edge area with the maximum disintegration of coal, and to choose the direction in which the robot effector contacts that area and breaks coal at the minimum energy input. It is suggested to use the model in the engineering of the robot intelligence.

  15. Machine Learning Models and Pathway Genome Data Base for Trypanosoma cruzi Drug Discovery.

    PubMed

    Ekins, Sean; de Siqueira-Neto, Jair Lage; McCall, Laura-Isobel; Sarker, Malabika; Yadav, Maneesh; Ponder, Elizabeth L; Kallel, E Adam; Kellar, Danielle; Chen, Steven; Arkin, Michelle; Bunin, Barry A; McKerrow, James H; Talcott, Carolyn

    2015-01-01

    Chagas disease is a neglected tropical disease (NTD) caused by the eukaryotic parasite Trypanosoma cruzi. The current clinical and preclinical pipeline for T. cruzi is extremely sparse and lacks drug target diversity. In the present study we developed a computational approach that utilized data from several public whole-cell, phenotypic high throughput screens that have been completed for T. cruzi by the Broad Institute, including a single screen of over 300,000 molecules in the search for chemical probes as part of the NIH Molecular Libraries program. We have also compiled and curated relevant biological and chemical compound screening data including (i) compounds and biological activity data from the literature, (ii) high throughput screening datasets, and (iii) predicted metabolites of T. cruzi metabolic pathways. This information was used to help us identify compounds and their potential targets. We have constructed a Pathway Genome Data Base for T. cruzi. In addition, we have developed Bayesian machine learning models that were used to virtually screen libraries of compounds. Ninety-seven compounds were selected for in vitro testing, and 11 of these were found to have EC50 < 10 μM. We progressed five compounds to an in vivo mouse efficacy model of Chagas disease and validated that the machine learning model could identify in vitro active compounds not in the training set, as well as known positive controls. The antimalarial pyronaridine possessed 85.2% efficacy in the acute Chagas mouse model. We have also proposed potential targets (for future verification) for this compound based on structural similarity to known compounds with targets in T. cruzi. We have demonstrated how combining chemoinformatics and bioinformatics for T. cruzi drug discovery can bring interesting in vivo active molecules to light that may have been overlooked. The approach we have taken is broadly applicable to other NTDs.
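
    A hedged sketch of the virtual-screening step (placeholder fingerprints and labels; the authors' descriptors and Bayesian implementation differ in detail):

        import numpy as np
        from sklearn.naive_bayes import BernoulliNB

        rng = np.random.default_rng(0)
        fps = rng.integers(0, 2, size=(1000, 512))   # binary structural fingerprints
        active = rng.integers(0, 2, size=1000)       # 1 = active in phenotypic screen

        clf = BernoulliNB().fit(fps, active)
        library = rng.integers(0, 2, size=(200, 512))
        scores = clf.predict_proba(library)[:, 1]
        top = np.argsort(scores)[::-1][:10]          # rank the library, pick top hits
        print(top, scores[top])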

  16. Machine Learning Models and Pathway Genome Data Base for Trypanosoma cruzi Drug Discovery

    PubMed Central

    McCall, Laura-Isobel; Sarker, Malabika; Yadav, Maneesh; Ponder, Elizabeth L.; Kallel, E. Adam; Kellar, Danielle; Chen, Steven; Arkin, Michelle; Bunin, Barry A.; McKerrow, James H.; Talcott, Carolyn

    2015-01-01

    Background Chagas disease is a neglected tropical disease (NTD) caused by the eukaryotic parasite Trypanosoma cruzi. The current clinical and preclinical pipeline for T. cruzi is extremely sparse and lacks drug target diversity. Methodology/Principal Findings In the present study we developed a computational approach that utilized data from several public whole-cell, phenotypic high throughput screens that have been completed for T. cruzi by the Broad Institute, including a single screen of over 300,000 molecules in the search for chemical probes as part of the NIH Molecular Libraries program. We have also compiled and curated relevant biological and chemical compound screening data including (i) compounds and biological activity data from the literature, (ii) high throughput screening datasets, and (iii) predicted metabolites of T. cruzi metabolic pathways. This information was used to help us identify compounds and their potential targets. We have constructed a Pathway Genome Data Base for T. cruzi. In addition, we have developed Bayesian machine learning models that were used to virtually screen libraries of compounds. Ninety-seven compounds were selected for in vitro testing, and 11 of these were found to have EC50 < 10 μM. We progressed five compounds to an in vivo mouse efficacy model of Chagas disease and validated that the machine learning model could identify in vitro active compounds not in the training set, as well as known positive controls. The antimalarial pyronaridine possessed 85.2% efficacy in the acute Chagas mouse model. We have also proposed potential targets (for future verification) for this compound based on structural similarity to known compounds with targets in T. cruzi. Conclusions/Significance We have demonstrated how combining chemoinformatics and bioinformatics for T. cruzi drug discovery can bring interesting in vivo active molecules to light that may have been overlooked. The approach we have taken is broadly applicable to other NTDs.

  17. One- and two-dimensional Stirling machine simulation using experimentally generated flow turbulence models

    NASA Technical Reports Server (NTRS)

    Goldberg, Louis F.

    1990-01-01

    Investigations of one- and two-dimensional (1- or 2-D) simulations of Stirling machines, centered around experimental data generated by the U. of Minnesota Mechanical Engineering Test Rig (METR), are covered. This rig was used to investigate oscillating flows about a zero mean, with emphasis on laminar/turbulent flow transitions in tubes. The Space Power Demonstrator Engine (SPDE), and in particular its heater, was the subject of the simulations. The heater was treated as a 1- or 2-D entity in an otherwise 1-D system. The 2-D flow effects impacted the transient flow predictions in the heater itself but did not have a major impact on overall system performance. Information propagation effects may be a significant issue in the simulation (if not the performance) of high-frequency, high-pressure Stirling machines. This was investigated further by comparing a simulation against an experimentally validated analytic solution for the fluid dynamics of a transmission line. The applicability of the pressure-linking algorithm for compressible flows may be limited by characteristic number (defined as the number of times information traverses the flow path per cycle); this warrants further study. Lastly, the METR was simulated in 1- and 2-D. A two-parameter k-ω foldback function turbulence model was developed and tested against a limited set of METR experimental data.

  18. Modeling and Control of a Double-effect Absorption Refrigerating Machine

    NASA Astrophysics Data System (ADS)

    Hihara, Eiji; Yamamoto, Yuuji; Saito, Takamoto; Nagaoka, Yoshikazu; Nishiyama, Noriyuki

    For the purpose of improving the response to cooling load variations and the part-load characteristics, the optimal operation of a double-effect absorption refrigerating machine was investigated. The test machine was designed to be able to control energy input and weak solution flow rate continuously. It is composed of a gas-fired high-temperature generator, a separator, a low-temperature generator, an absorber, a condenser, an evaporator, and high- and low-temperature heat exchangers. The working fluid is a lithium bromide-water solution. The standard output is 80 kW. Based on the experimental data, a simulation model of the static characteristics was developed. The experiments and simulation analysis indicate that there is an optimal weak solution flow rate which maximizes the coefficient of performance under any given cooling load condition. The optimal condition is closely related to the refrigerant steam flow rate flowing from the separator to the high-temperature heat exchanger with the medium solution. The heat transfer performance of the heat exchangers in the components influences the COP; the change in the overall heat transfer coefficient of the absorber has a greater effect on the COP than that of the other components.

  19. Design and performance of a multimodal vibration-based energy harvester model for machine rotational frequencies

    NASA Astrophysics Data System (ADS)

    Sun, Shilong; Tse, Peter W.

    2017-06-01

    This paper presents the design of a vibration-based energy harvester model whose resonance frequency can be tuned with the help of various cantilever-type beam structures: T-folded, E-folded without a tip mass, E-folded with one tip mass, and E-folded with two tip masses. The main contribution is an optimal structure that can scavenge destructive vibration into the highest possible electric energy even when the attached machine is running at a low rotational frequency. The finite element method and experimental verification were used to search for the optimal design that makes the operational bandwidth broader and yields the maximum power output. The results show that the E-folded design with two tip masses offers first three resonance frequencies varying from 18.18 Hz to 26.8 Hz. Such a low-frequency range covers the common range of rotational frequencies of most rotary machines. The experiments show that the maximum electrical output can be guaranteed and harvested by an external circuit tailor-made for the beam structures.

  20. CATIA-V 3D Modeling for Design Integration of the Ignitor Machine Load Assembly^*

    NASA Astrophysics Data System (ADS)

    Bianchi, A.; Parodi, B.; Gardella, F.; Coppi, B.

    2007-11-01

    In the framework of the ANSALDO industrial contribution to the Ignitor engineering design, the detailed design of all components of the machine core (Load Assembly) has been completed. The machine Central Post, Central Solenoid, and Poloidal Field Coil systems, the Plasma Chamber and First Wall system, the surrounding mechanical structures, the Vacuum Cryostat, and the polyethylene boron sheets attached to it for neutron shielding have all been analyzed to confirm that they can withstand both normal and off-normal operating loads, as well as the Plasma Chamber and First Wall baking operations, with proper safety margins, for the maximum plasma parameters scenario at 13 T/11 MA and for the reduced scenarios at 9 T/7 MA (limiter) and at 9 T/6 MA (double null). Both 3D and 2D drawings of each individual component have been produced using the Dassault Systèmes CATIA-V software. After they have all been integrated into a single 3D CATIA model of the Load Assembly, the electro-fluidic and fluidic lines which supply electrical currents and helium cooling gas to the coils have been added and mechanically incorporated with the components listed above. A global seismic analysis of the Load Assembly with SSE/OBE response spectra has also been performed to verify that it is able to withstand such external events. ^*Work supported in part by ENEA of Italy and by the US D.O.E.

  1. A 3D finite element model for the vibration analysis of asymmetric rotating machines

    NASA Astrophysics Data System (ADS)

    Lazarus, A.; Prabel, B.; Combescure, D.

    2010-08-01

    This paper suggests a 3D finite element method based on the modal theory in order to analyse linear periodically time-varying systems. Presentation of the method is given through the particular case of asymmetric rotating machines. First, Hill governing equations of asymmetric rotating oscillators with two degrees of freedom are investigated. These differential equations with periodic coefficients are solved with classic Floquet theory leading to parametric quasimodes. These mathematical entities are found to have the same fundamental properties as classic eigenmodes, but contain several harmonics possibly responsible for parametric instabilities. Extension to the vibration analysis (stability, frequency spectrum) of asymmetric rotating machines with multiple degrees of freedom is achieved with a fully 3D finite element model including stator and rotor coupling. Due to Hill expansion, the usual degrees of freedom are duplicated and associated with the relevant harmonic of the Floquet solutions in the frequency domain. Parametric quasimodes as well as steady-state response of the whole system are ingeniously computed with a component-mode synthesis method. Finally, experimental investigations are performed on a test rig composed of an asymmetric rotor running on nonisotropic supports. Numerical and experimental results are compared to highlight the potential of the numerical method.
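
    The Floquet machinery underlying the parametric quasimodes can be illustrated on a single Hill/Mathieu-type oscillator: propagate the identity matrix over one period of the time-varying coefficients and test the monodromy-matrix eigenvalues (Floquet multipliers) for stability. The parameters below are illustrative only:

        import numpy as np
        from scipy.integrate import solve_ivp

        omega = 2.0 * np.pi            # parametric excitation frequency
        T = 2.0 * np.pi / omega        # period of the time-varying coefficients

        def rhs(t, y, eps=0.3, w0=3.0):
            # Hill/Mathieu-type equation: x'' + w0^2 (1 + eps*cos(omega t)) x = 0
            x, v = y
            return [v, -w0 ** 2 * (1 + eps * np.cos(omega * t)) * x]

        # Columns of the monodromy matrix: propagate the identity over one period.
        cols = []
        for y0 in ([1.0, 0.0], [0.0, 1.0]):
            sol = solve_ivp(rhs, (0.0, T), y0, rtol=1e-10, atol=1e-12)
            cols.append(sol.y[:, -1])
        M = np.column_stack(cols)
        mults = np.linalg.eigvals(M)   # Floquet multipliers
        print(mults, "stable" if np.all(np.abs(mults) <= 1 + 1e-9) else "unstable")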

  2. Collective I/O Tuning Using Analytical and Machine-Learning Models

    SciTech Connect

    Isaila, Florin; Balaprakash, Prasanna; Wild, Stefan M.; Kimpe, Dries; Latham, Rob; Ross, Rob; Hovland, Paul

    2015-01-01

    The ever larger demand of scientific applications for computation and data is currently driving a continuous increase in the scale of parallel computers. The inherent complexity of scaling up a computing system, in terms of both the hardware and the software stack, exposes an increasing number of factors impacting performance and complicating the process of optimization. In particular, the optimization of parallel I/O has become increasingly challenging due to the deepening storage hierarchy and the well-known performance variability of shared storage systems. This paper focuses on model-based autotuning of the two-phase collective I/O algorithm from a popular MPI distribution on the Blue Gene/Q architecture. We propose a novel hybrid model, constructed as a composition of analytical models for communication and storage operations and black-box models for the performance of the individual operations. We perform an in-depth study of the complexity involved in performance modeling, including architecture, software stack, and noise. In particular, we address the challenges of modeling the performance of shared storage systems by building a benchmark that helps synthesize factors such as topology, file caching, and noise. The experimental results show that the hybrid approach produces significantly better results than state-of-the-art machine learning approaches and shows a higher robustness to noise, at the cost of a higher modeling complexity.
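
    A toy sketch of the hybrid composition (all costs and configurations below are invented for illustration): an analytical term models the communication/storage cost, a black-box regressor absorbs the machine-specific residual, and autotuning picks the configuration with the lowest predicted time:

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        def analytical_cost(cfg):
            # cfg = (num_aggregators, buffer_bytes); a toy two-phase I/O estimate.
            aggregators, buffer_bytes = cfg
            data = 1 << 30                              # 1 GiB written collectively
            bw = 2e9                                    # assumed storage bandwidth
            latency = 1e-4 * (data / buffer_bytes)      # per-request overhead
            return data / (bw * aggregators) + latency

        rng = np.random.default_rng(0)
        cfgs = [(a, b) for a in (4, 8, 16, 32) for b in (1 << 20, 4 << 20, 16 << 20)]
        X = np.array(cfgs, dtype=float)
        # Pretend measured runtimes = analytical part + noisy machine-specific part.
        measured = np.array([analytical_cost(c) for c in cfgs]) + rng.random(len(cfgs))
        residual = measured - np.array([analytical_cost(c) for c in cfgs])

        black_box = RandomForestRegressor(random_state=0).fit(X, residual)
        predicted = np.array([analytical_cost(c) for c in cfgs]) + black_box.predict(X)
        print("best configuration:", cfgs[int(np.argmin(predicted))])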

  3. Fuzzy texture model and support vector machine hybridization for land cover classification of remotely sensed images

    NASA Astrophysics Data System (ADS)

    Jenicka, S.; Suruliandi, A.

    2014-01-01

    Accuracy of land cover classification in remotely sensed images relies on the utilized classifier and extracted features. Texture features are significant in land cover classification. Traditional texture models capture only patterns with discrete boundaries, whereas fuzzy patterns should be classified by assigning due weightage to uncertainty. When a remotely sensed image contains noise, the image may have fuzzy patterns characterizing land covers and fuzzy boundaries separating them. Therefore, a fuzzy texture model is proposed for the effective classification of land covers in remotely sensed images. The model uses a Sugeno fuzzy inference system. A support vector machine (SVM) is used for the precise, fast classification of image pixels. The model is a hybrid of a fuzzy texture model and an SVM for the land cover classification of remotely sensed images. To support this proposal, experiments were conducted in three steps. In the first two steps, the proposed texture model was validated for supervised classifications and segmentation of a standard benchmark database. In the third step, the land cover classification of a remotely sensed image of LISS-IV (an Indian remote sensing satellite) is performed using a multivariate version of the proposed model. The classified image has 95.54% classification accuracy.
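
    A simplified stand-in for the hybrid pipeline (plain local statistics replace the fuzzy texture model, and the data are synthetic):

        import numpy as np
        from scipy.ndimage import uniform_filter
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        image = rng.random((64, 64))
        labels = (image > 0.5).astype(int)            # toy land-cover labels

        # Per-pixel texture features: local mean and variance in a 5x5 window.
        mean = uniform_filter(image, size=5)
        var = uniform_filter(image ** 2, size=5) - mean ** 2
        X = np.stack([mean.ravel(), var.ravel()], axis=1)
        y = labels.ravel()

        clf = SVC(kernel="rbf").fit(X[::4], y[::4])   # train on a pixel subsample
        print("accuracy:", (clf.predict(X) == y).mean())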

  4. Support vector machine-based open crop model (SBOCM): Case of rice production in China.

    PubMed

    Su, Ying-Xue; Xu, Huan; Yan, Li-Jiao

    2017-03-01

    Existing crop models produce unsatisfactory simulation results and are operationally complicated. The present study, however, demonstrated the unique advantages of statistical crop models for large-scale simulation. Using rice as the research crop, a support vector machine-based open crop model (SBOCM) was developed by integrating developmental stage and yield prediction models. Basic geographical information obtained by surface weather observation stations in China and the 1:1000000 soil database published by the Chinese Academy of Sciences were used. Based on the principle of scale compatibility of modeling data, an open reading frame was designed for the dynamic daily input of meteorological data and output of rice development and yield records. This was used to generate rice developmental stage and yield prediction models, which were integrated into the SBOCM system. The parameters, methods, error resources, and other factors were analyzed. Although not a crop physiology simulation model, the proposed SBOCM can be used for perennial simulation and one-year rice predictions within certain scale ranges. It is convenient for data acquisition, regionally applicable, parametrically simple, and effective for multi-scale factor integration. It has the potential for future integration with extensive social and economic factors to improve the prediction accuracy and practicability.

  5. Machine learning algorithms for modeling groundwater level changes in agricultural regions of the U.S.

    NASA Astrophysics Data System (ADS)

    Sahoo, S.; Russo, T. A.; Elliott, J.; Foster, I.

    2017-05-01

    Climate, groundwater extraction, and surface water flows have complex nonlinear relationships with groundwater level in agricultural regions. To better understand the relative importance of each driver and predict groundwater level change, we develop a new ensemble modeling framework based on spectral analysis, machine learning, and uncertainty analysis, as an alternative to complex and computationally expensive physical models. We apply and evaluate this new approach in the context of two aquifer systems supporting agricultural production in the United States: the High Plains aquifer (HPA) and the Mississippi River Valley alluvial aquifer (MRVA). We select input data sets by using a combination of mutual information, genetic algorithms, and lag analysis, and then use the selected data sets in a Multilayer Perceptron network architecture to simulate seasonal groundwater level change. As expected, model results suggest that irrigation demand has the highest influence on groundwater level change for a majority of the wells. The subset of groundwater observations not used in model training or cross-validation correlates strongly (R > 0.8) with model results for 88 and 83% of the wells in the HPA and MRVA, respectively. In both aquifer systems, the error in the modeled cumulative groundwater level change during testing (2003-2012) was less than 2 m over a majority of the area. We conclude that our modeling framework can serve as an alternative approach to simulating groundwater level change and water availability, especially in regions where subsurface properties are unknown.
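
    A hedged sketch of the regression step, with synthetic stand-ins for the selected input data sets:

        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        # Stand-ins for selected inputs (precipitation, temperature, irrigation, ...).
        X = rng.random((400, 5))
        dh = rng.normal(size=400)        # seasonal groundwater level change (m)

        mlp = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
        print(cross_val_score(mlp, X, dh, cv=5, scoring="r2"))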

  6. The use of machine learning algorithms to design a generalized simplified denitrification model

    NASA Astrophysics Data System (ADS)

    Oehler, F.; Rutherford, J. C.; Coco, G.

    2010-10-01

    We propose to use machine learning (ML) algorithms to design a simplified denitrification model. Boosted regression trees (BRT) and artificial neural networks (ANN) were used to analyse the relationships and the relative influences of different input variables towards total denitrification, and an ANN was designed as a simplified model to simulate total nitrogen emissions from the denitrification process. To calibrate the BRT and ANN models and test this method, we used a database obtained by collating datasets from the literature. We used bootstrapping to compute confidence intervals for the calibration and validation process. Both ML algorithms clearly outperformed a commonly used simplified model of nitrogen emissions, NEMIS, which is based on denitrification potential, temperature, soil water content and nitrate concentration. The ML models used soil organic matter % in place of a denitrification potential and pH as a fifth input variable. The BRT analysis reaffirms the importance of temperature, soil water content and nitrate concentration. Generalization, although limited to the data space of the database used to build the ML models, could be improved if pH is used to differentiate between soil types. Further improvements in model performance and generalization could be achieved by adding more data.

  7. Semi-supervised Machine Learning for Analysis of Hydrogeochemical Data and Models

    NASA Astrophysics Data System (ADS)

    Vesselinov, Velimir; O'Malley, Daniel; Alexandrov, Boian; Moore, Bryan

    2017-04-01

    Data- and model-based analyses such as uncertainty quantification, sensitivity analysis, and decision support using complex physics models with numerous model parameters typically require a huge number of model evaluations (on the order of 10^6). Furthermore, model simulations of complex physics may require substantial computational time. For example, accounting for simultaneously occurring physical processes such as fluid flow and biogeochemical reactions in a heterogeneous porous medium may require several hours of wall-clock computational time. To address these issues, we have developed a novel methodology for semi-supervised machine learning based on Non-negative Matrix Factorization (NMF) coupled with customized k-means clustering. The algorithm allows for automated, robust Blind Source Separation (BSS) of groundwater types (contamination sources) based on model-free analyses of observed hydrogeochemical data. We have also developed reduced order modeling tools, which couple support vector regression (SVR), genetic algorithms (GA), and artificial and convolutional neural networks (ANN/CNN). SVR is applied to predict the model behavior within prior uncertainty ranges associated with the model parameters. ANN and CNN procedures are applied to upscale heterogeneity of the porous medium. In the upscaling process, fine-scale high-resolution models of heterogeneity are applied to inform coarse-resolution models, which have improved computational efficiency while capturing the impact of fine-scale effects at the coarse scale of interest. These techniques are tested independently on a series of synthetic problems. We also present a decision analysis related to contaminant remediation where the developed reduced order models are applied to reproduce groundwater flow and contaminant transport in a synthetic heterogeneous aquifer. The tools are coded in Julia and are a part of the MADS high-performance computational framework (https://github.com/madsjulia/Mads.jl).
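
    A conceptual sketch of the NMF-based blind source separation (synthetic mixing data; the authors' customized clustering differs):

        import numpy as np
        from sklearn.decomposition import NMF
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(0)
        true_sources = rng.random((3, 10))            # 3 geochemical signatures
        mixing = rng.random((50, 3))                  # 50 wells, mixed proportions
        data = mixing @ true_sources                  # observed concentrations

        nmf = NMF(n_components=3, init="nndsvda", max_iter=1000, random_state=0)
        weights = nmf.fit_transform(data)             # per-well source contributions
        sources = nmf.components_
        labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(sources)
        print(weights.shape, sources.shape, labels)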

  8. Maraging Steel Machining Improvements

    DTIC Science & Technology

    2007-04-23

    [Garbled DTIC report documentation page; recoverable abstract fragment: "…consumers of cobalt-strengthened maraging steel. An increase in production requires them to reduce the machining time of certain operations producing … maraging steel."]

  9. Business Machines

    ERIC Educational Resources Information Center

    Pactor, Paul

    1970-01-01

    The U.S. Department of Labor has projected a 106 percent increase in the demand for office machine operators over the next 10 years. Machines with a high frequency of use include printing calculators, 10-key adding machines, and key punch machines. The 12th grade is the logical time for teaching business machines. (CH)

  10. A geometric process model for M/PH(M/PH)/1/K queue with new service machine procurement lead time

    NASA Astrophysics Data System (ADS)

    Yu, Miaomiao; Tang, Yinghui; Fu, Yonghong

    2013-06-01

    In this article, we consider a geometric process model for an M/PH(M/PH)/1/K queue with new service machine procurement lead time. A maintenance policy (N - 1, N) based on the number of failures of the service machine is introduced into the system. It is assumed that a failed service machine after repair will not be 'as good as new' and that the spare service machine for replacement is available only by order. More specifically, we suppose that the procurement lead time for delivering the spare service machine follows a phase-type (PH) distribution. Under such assumptions, we apply the matrix-analytic method to develop the steady state probabilities of the system, and then we obtain some system performance measures. Finally, employing an important lemma, the explicit expression of the long-run average cost rate for the service machine is derived, and the direct search method is also implemented to determine the optimal value of N for minimising the average cost rate.
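
    Under strong simplifying assumptions (deterministic means in place of the PH distributions, and invented costs), the renewal-reward structure of such a policy can be sketched as follows; the matrix-analytic treatment in the article is considerably more involved:

        # Geometric process: operating times shrink as X_k = X_1 / a**(k-1) and
        # repair times grow as Y_k = Y_1 * b**(k-1); replace at the Nth failure.
        def average_cost(N, x1=100.0, y1=5.0, a=1.1, b=1.05,
                         c_down=10.0, c_replace=500.0, lead_time=20.0):
            up = sum(x1 / a ** (k - 1) for k in range(1, N + 1))
            down = sum(y1 * b ** (k - 1) for k in range(1, N)) + lead_time
            cost = c_down * down + c_replace
            return cost / (up + down)        # long-run cost per unit time

        best = min(range(1, 30), key=average_cost)
        print(best, average_cost(best))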

  11. Hidden Markov models and other machine learning approaches in computational molecular biology

    SciTech Connect

    Baldi, P.

    1995-12-31

    This tutorial was one of eight tutorials selected to be presented at the Third International Conference on Intelligent Systems for Molecular Biology, which was held in the United Kingdom from July 16 to 19, 1995. Computational tools are increasingly needed to process the massive amounts of data, to organize and classify sequences, to detect weak similarities, to separate coding from non-coding regions, and to reconstruct the underlying evolutionary history. The fundamental problem in machine learning is the same as in scientific reasoning in general, as well as statistical modeling: to come up with a good model for the data. In this tutorial four classes of models are reviewed: hidden Markov models, artificial neural networks, belief networks, and stochastic grammars. When dealing with DNA and protein primary sequences, hidden Markov models are one of the most flexible and powerful tools for alignments and database searches. In this tutorial, attention is focused on the theory of hidden Markov models, and how to apply them to problems in molecular biology.
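
    A classic forward-algorithm sketch for a toy two-state DNA model of the kind the tutorial covers (all parameters are invented):

        import numpy as np

        start = np.array([0.5, 0.5])          # states: AT-rich, GC-rich
        trans = np.array([[0.9, 0.1],
                          [0.1, 0.9]])
        emit = {                              # P(base | state)
            "A": [0.35, 0.15], "T": [0.35, 0.15],
            "G": [0.15, 0.35], "C": [0.15, 0.35],
        }

        def log_likelihood(seq):
            # Scaled forward recursion to avoid numerical underflow.
            alpha = start * np.array(emit[seq[0]])
            ll = np.log(alpha.sum())
            alpha = alpha / alpha.sum()
            for base in seq[1:]:
                alpha = (alpha @ trans) * np.array(emit[base])
                ll += np.log(alpha.sum())
                alpha = alpha / alpha.sum()
            return ll

        print(log_likelihood("ATATGCGCGGCCAT"))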

  12. Shared Consensus Machine Learning Models for Predicting Blood Stage Malaria Inhibition.

    PubMed

    Verras, Andreas; Waller, Chris L; Gedeck, Peter; Green, Darren V S; Kogej, Thierry; Raichurkar, Anandkumar; Panda, Manoranjan; Shelat, Anang A; Clark, Julie; Guy, R Kiplin; Papadatos, George; Burrows, Jeremy

    2017-03-27

    The development of new antimalarial therapies is essential, and lowering the barrier of entry for the screening and discovery of new lead compound classes can spur drug development at organizations that may not have large compound screening libraries or resources to conduct high-throughput screens. It is well established that machine learning models become more robust and gain a larger domain of applicability with larger training sets. Screens over multiple data sets to find compounds with potential malaria blood-stage inhibitory activity have been used to generate multiple Bayesian models. Here we describe a method by which Bayesian quantitative structure-activity relationship models, which contain information on thousands to millions of proprietary compounds, can be shared between collaborators at both for-profit and not-for-profit institutions. This model-sharing paradigm allows for the development of consensus models that have increased predictive power over any single model and yet does not reveal the identity of any compounds in the training sets.
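
    A hedged sketch of the consensus idea: each collaborator shares only a fitted model, never its training structures, and predictions are averaged to rank candidates (synthetic fingerprints here; the published work uses Bayesian QSAR models over proprietary descriptors):

        import numpy as np
        from sklearn.naive_bayes import BernoulliNB

        def private_model(seed, n=500, bits=256):
            # Each organization trains on its own (private) fingerprint data.
            r = np.random.default_rng(seed)
            X = r.integers(0, 2, size=(n, bits))
            y = r.integers(0, 2, size=n)
            return BernoulliNB().fit(X, y)

        models = [private_model(s) for s in (1, 2, 3)]
        rng = np.random.default_rng(0)
        candidates = rng.integers(0, 2, size=(20, 256))
        consensus = np.mean([m.predict_proba(candidates)[:, 1] for m in models],
                            axis=0)
        print(np.argsort(consensus)[::-1][:5])    # top-ranked candidates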

  13. Ecophysiological Modeling of Grapevine Water Stress in Burgundy Terroirs by a Machine-Learning Approach.

    PubMed

    Brillante, Luca; Mathieu, Olivier; Lévêque, Jean; Bois, Benjamin

    2016-01-01

    In a climate change scenario, successful modeling of the relationships among plant, soil, and meteorology is crucial for sustainable agricultural production, especially for perennial crops. Grapevines (Vitis vinifera L. cv Chardonnay) located in eight experimental plots (Burgundy, France) along a hillslope were monitored weekly for 3 years for leaf water potentials, both at predawn (Ψpd) and at midday (Ψstem). The water stress experienced by grapevine was modeled as a function of meteorological data (minimum and maximum temperature, rainfall) and soil characteristics (soil texture, gravel content, slope) by a gradient boosting machine. Model performance was assessed by comparison with carbon isotope discrimination (δ(13)C) of grape sugars at harvest and by the use of a test set. The developed models reached outstanding prediction performance (RMSE < 0.08 MPa for Ψstem and < 0.06 MPa for Ψpd), comparable to measurement accuracy. Model predictions at a daily time step improved the correlation with δ(13)C data with respect to the observed trend at a weekly time scale. The role of each predictor in these models was described in order to understand how temperature, rainfall, soil texture, gravel content, and slope affect grapevine water status in the studied context. This work proposes a straightforward strategy to simulate plant water stress under field conditions, at a local scale; to investigate ecological relationships in the vineyard; and to adapt cultural practices to future conditions.
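
    A minimal sketch of the modeling step with synthetic stand-ins for the weather and soil predictors:

        import numpy as np
        from sklearn.ensemble import GradientBoostingRegressor
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        # Columns: Tmin, Tmax, rainfall, clay %, gravel %, slope.
        X = rng.random((300, 6))
        psi_stem = -1.2 + 0.5 * rng.random(300)       # MPa, toy values

        X_tr, X_te, y_tr, y_te = train_test_split(X, psi_stem, random_state=0)
        gbm = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
        rmse = np.sqrt(np.mean((gbm.predict(X_te) - y_te) ** 2))
        print("test RMSE (MPa):", rmse)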

  14. Model for noise-induced hearing loss using support vector machine

    NASA Astrophysics Data System (ADS)

    Qiu, Wei; Ye, Jun; Liu-White, Xiaohong; Hamernik, Roger P.

    2005-09-01

    Contemporary noise standards are based on the assumption that an energy metric such as the equivalent noise level is sufficient for estimating the potential of a noise stimulus to cause noise-induced hearing loss (NIHL). Available data, from laboratory-based experiments (Lei et al., 1994; Hamernik and Qiu, 2001) indicate that while an energy metric may be necessary, it is not sufficient for the prediction of NIHL. A support vector machine (SVM) NIHL prediction model was constructed, based on a 550-subject (noise-exposed chinchillas) database. Training of the model used data from 367 noise-exposed subjects. The model was tested using the remaining 183 subjects. Input variables for the model included acoustic, audiometric, and biological variables, while output variables were PTS and cell loss. The results show that an energy parameter is not sufficient to predict NIHL, especially in complex noise environments. With the kurtosis and other noise and biological parameters included as additional inputs, the performance of the SVM prediction model was significantly improved. The SVM prediction model has the potential to reliably predict noise-induced hearing loss. [Work supported by NIOSH.]

  15. The use of machine learning algorithms to design a generalized simplified denitrification model

    NASA Astrophysics Data System (ADS)

    Oehler, F.; Rutherford, J. C.; Coco, G.

    2010-04-01

    We designed generalized simplified models using machine learning algorithms (ML) to assess denitrification at the catchment scale. In particular, we designed an artificial neural network (ANN) to simulate total nitrogen emissions from the denitrification process. Boosted regression trees (BRT, another ML) was also used to analyse the relationships and the relative influences of different input variables towards total denitrification. To calibrate the ANN and BRT models, we used a large database obtained by collating datasets from the literature. We developed a simple methodology to give confidence intervals for the calibration and validation process. Both ML algorithms clearly outperformed a commonly used simplified model of nitrogen emissions, NEMIS. NEMIS is based on denitrification potential, temperature, soil water content and nitrate concentration. The ML models used soil organic matter % in place of a denitrification potential and pH as a fifth input variable. The BRT analysis reaffirms the importance of temperature, soil water content and nitrate concentration. Generality of the ANN model may also be improved if pH is used to differentiate between soil types. Further improvements in model performance can be achieved by lessening dataset effects.

  16. Rotary ultrasonic machining of CFRP: a mechanistic predictive model for cutting force.

    PubMed

    Cong, W L; Pei, Z J; Sun, X; Zhang, C L

    2014-02-01

    Cutting force is one of the most important output variables in rotary ultrasonic machining (RUM) of carbon fiber reinforced plastic (CFRP) composites. Many experimental investigations on cutting force in RUM of CFRP have been reported. However, in the literature, there are no cutting force models for RUM of CFRP. This paper develops a mechanistic predictive model for cutting force in RUM of CFRP. The material removal mechanism of CFRP in RUM has been analyzed first. The model is based on the assumption that brittle fracture is the dominant mode of material removal. CFRP micromechanical analysis has been conducted to represent CFRP as an equivalent homogeneous material to obtain the mechanical properties of CFRP from its components. Based on this model, relationships between input variables (including ultrasonic vibration amplitude, tool rotation speed, feedrate, abrasive size, and abrasive concentration) and cutting force can be predicted. The relationships between input variables and important intermediate variables (indentation depth, effective contact time, and maximum impact force of single abrasive grain) have been investigated to explain predicted trends of cutting force. Experiments are conducted to verify the model, and experimental results agree well with predicted trends from this model.

  17. Ecophysiological Modeling of Grapevine Water Stress in Burgundy Terroirs by a Machine-Learning Approach

    PubMed Central

    Brillante, Luca; Mathieu, Olivier; Lévêque, Jean; Bois, Benjamin

    2016-01-01

    In a climate change scenario, successful modeling of the relationships among plant, soil, and meteorology is crucial for sustainable agricultural production, especially for perennial crops. Grapevines (Vitis vinifera L. cv Chardonnay) located in eight experimental plots (Burgundy, France) along a hillslope were monitored weekly for 3 years for leaf water potentials, both at predawn (Ψpd) and at midday (Ψstem). The water stress experienced by grapevine was modeled as a function of meteorological data (minimum and maximum temperature, rainfall) and soil characteristics (soil texture, gravel content, slope) by a gradient boosting machine. Model performance was assessed by comparison with carbon isotope discrimination (δ13C) of grape sugars at harvest and by the use of a test set. The developed models reached outstanding prediction performance (RMSE < 0.08 MPa for Ψstem and < 0.06 MPa for Ψpd), comparable to measurement accuracy. Model predictions at a daily time step improved the correlation with δ13C data with respect to the observed trend at a weekly time scale. The role of each predictor in these models was described in order to understand how temperature, rainfall, soil texture, gravel content, and slope affect grapevine water status in the studied context. This work proposes a straightforward strategy to simulate plant water stress under field conditions, at a local scale; to investigate ecological relationships in the vineyard; and to adapt cultural practices to future conditions. PMID:27375651

  18. The Art of Abstracting.

    ERIC Educational Resources Information Center

    Cremmins, Edward T.

    A three-stage analytical reading method for the composition of informative and indicative abstracts by authors and abstractors is presented in this monograph, along with background information on the abstracting process and a discussion of professional considerations in abstracting. An introduction to abstracts and abstracting precedes general…

  19. Classifier Model Based on Machine Learning Algorithms: Application to Differential Diagnosis of Suspicious Thyroid Nodules via Sonography.

    PubMed

    Wu, Hongxun; Deng, Zhaohong; Zhang, Bingjie; Liu, Qianyun; Chen, Junyong

    2016-06-24

    The purpose of this article is to construct classifier models using machine learning algorithms and to evaluate their diagnostic performances for differentiating malignant from benign thyroid nodules. This study included 970 histopathologically proven thyroid nodules in 970 patients. Two radiologists retrospectively reviewed ultrasound images, and nodules were graded according to a five-tier sonographic scoring system. Statistically significant variables based on an experienced radiologist's observations were obtained with attribute optimization using fivefold cross-validation and applied as the input nodes to build models for predicting malignancy of nodules. The performances of the machine learning algorithms and radiologists were compared using ROC curve analysis. Diagnosis by the experienced radiologist achieved the highest predictive accuracy of 88.66% with a specificity of 85.33%, whereas the radial basis function (RBF)-neural network (NN) achieved the highest sensitivity of 92.31%. The AUC value for diagnosis by the experienced radiologist (AUC = 0.9135) was greater than those for diagnosis by the less experienced radiologist, the naïve Bayes classifier, the support vector machine, and the RBF-NN (AUC = 0.8492, 0.8811, 0.9033, and 0.9103, respectively; p < 0.05). The machine learning algorithms underperformed with respect to the experienced radiologist's readings used to construct them, and the RBF-NN outperformed the other machine learning algorithm models.

  20. A computational visual saliency model based on statistics and machine learning.

    PubMed

    Lin, Ru-Je; Lin, Wei-Song

    2014-08-01

    Identifying the type of stimuli that attracts human visual attention has been an appealing topic for scientists for many years. In particular, marking the salient regions in images is useful for both psychologists and many computer vision applications. In this paper, we propose a computational approach for producing saliency maps using statistics and machine learning methods. Based on four assumptions, three properties (Feature-Prior, Position-Prior, and Feature-Distribution) can be derived and combined by a simple intersection operation to obtain a saliency map. These properties are implemented by a similarity computation, support vector regression (SVR) technique, statistical analysis of training samples, and information theory using low-level features. This technique is able to learn the preferences of human visual behavior while simultaneously considering feature uniqueness. Experimental results show that our approach performs better in predicting human visual attention regions than 12 other models in two test databases.

  1. Data on Support Vector Machines (SVM) model to forecast photovoltaic power.

    PubMed

    Malvoni, M; De Giorgi, M G; Congedo, P M

    2016-12-01

    The data concern the photovoltaic (PV) power forecasted by a hybrid model that considers weather variations and applies a technique to reduce the input data size, as presented in the paper entitled "Photovoltaic forecast based on hybrid pca-lssvm using dimensionality reducted data" (M. Malvoni, M.G. De Giorgi, P.M. Congedo, 2015) [1]. The quadratic Renyi entropy criterion, together with principal component analysis (PCA), is applied to the Least Squares Support Vector Machines (LS-SVM) to predict the PV power in the day-ahead time frame. The data shared here represent the results of the proposed approach. Hourly PV power predictions for 1, 3, 6, 12, and 24 hours ahead and for different data reduction sizes are provided in the Supplementary material.

  2. Multi-fidelity machine learning models for accurate bandgap predictions of solids

    DOE PAGES

    Pilania, Ghanshyam; Gubernatis, James E.; Lookman, Turab

    2016-12-28

    Here, we present a multi-fidelity co-kriging statistical learning framework that combines variable-fidelity quantum mechanical calculations of bandgaps to generate a machine-learned model that enables low-cost accurate predictions of the bandgaps at the highest fidelity level. Additionally, the adopted Gaussian process regression formulation allows us to predict the underlying uncertainties as a measure of our confidence in the predictions. Using a set of 600 elpasolite compounds as an example dataset and semi-local and hybrid exchange correlation functionals within density functional theory as two levels of fidelity, we demonstrate the excellent learning performance of the method against actual high fidelity quantum mechanical calculations of the bandgaps. The presented statistical learning method is not restricted to bandgaps or electronic structure methods and extends the utility of high throughput property predictions in a significant way.

  3. Multi-fidelity machine learning models for accurate bandgap predictions of solids

    SciTech Connect

    Pilania, Ghanshyam; Gubernatis, James E.; Lookman, Turab

    2016-12-28

    Here, we present a multi-fidelity co-kriging statistical learning framework that combines variable-fidelity quantum mechanical calculations of bandgaps to generate a machine-learned model that enables low-cost accurate predictions of the bandgaps at the highest fidelity level. Additionally, the adopted Gaussian process regression formulation allows us to predict the underlying uncertainties as a measure of our confidence in the predictions. Using a set of 600 elpasolite compounds as an example dataset and semi-local and hybrid exchange correlation functionals within density functional theory as two levels of fidelity, we demonstrate the excellent learning performance of the method against actual high fidelity quantum mechanical calculations of the bandgaps. The presented statistical learning method is not restricted to bandgaps or electronic structure methods and extends the utility of high throughput property predictions in a significant way.

  4. A Reordering Model Using a Source-Side Parse-Tree for Statistical Machine Translation

    NASA Astrophysics Data System (ADS)

    Hashimoto, Kei; Yamamoto, Hirofumi; Okuma, Hideo; Sumita, Eiichiro; Tokuda, Keiichi

    This paper presents a reordering model using a source-side parse-tree for phrase-based statistical machine translation. The proposed model is an extension of IST-ITG (imposing source tree on inversion transduction grammar) constraints. In the proposed method, the target-side word order is obtained by rotating nodes of the source-side parse-tree. We modeled the node rotation, monotone or swap, using word alignments based on a training parallel corpus and source-side parse-trees. The model efficiently suppresses erroneous target word orderings, especially global orderings. Furthermore, the proposed method conducts a probabilistic evaluation of target word reorderings. In English-to-Japanese and English-to-Chinese translation experiments, the proposed method resulted in a 0.49-point improvement (29.31 to 29.80) and a 0.33-point improvement (18.60 to 18.93) in word BLEU-4 compared with IST-ITG constraints, respectively. This indicates the validity of the proposed reordering model.

  5. Evaluation models for soil nutrient based on support vector machine and artificial neural networks.

    PubMed

    Li, Hao; Leng, Weijia; Zhou, Yibing; Chen, Fudi; Xiu, Zhilong; Yang, Dazuo

    2014-01-01

    Soil nutrient is an important aspect that contributes to the soil fertility and environmental effects. Traditional evaluation approaches of soil nutrient are quite hard to operate, making great difficulties in practical applications. In this paper, we present a series of comprehensive evaluation models for soil nutrient by using support vector machine (SVM), multiple linear regression (MLR), and artificial neural networks (ANNs), respectively. We took the content of organic matter, total nitrogen, alkali-hydrolysable nitrogen, rapidly available phosphorus, and rapidly available potassium as independent variables, while the evaluation level of soil nutrient content was taken as dependent variable. Results show that the average prediction accuracies of SVM models are 77.87% and 83.00%, respectively, while the general regression neural network (GRNN) model's average prediction accuracy is 92.86%, indicating that SVM and GRNN models can be used effectively to assess the levels of soil nutrient with suitable dependent variables. In practical applications, both SVM and GRNN models can be used for determining the levels of soil nutrient.

  6. Structure-guided expansion of kinase fragment libraries driven by support vector machine models.

    PubMed

    Erickson, Jon A; Mader, Mary M; Watson, Ian A; Webster, Yue W; Higgs, Richard E; Bell, Michael A; Vieth, Michal

    2010-03-01

    This work outlines a new de novo design process for the creation of novel kinase inhibitor libraries. It relies on a profiling paradigm that generates a substantial amount of kinase inhibitor data from which highly predictive QSAR models can be constructed. In addition, a broad diversity of X-ray structure information is needed for binding mode prediction. This is important for scaffold and substituent site selection. Borrowing from FBDD, the process involves fragmentation of known actives, proposition of binding mode hypotheses for the fragments, and model-driven recombination using a pharmacophore derived from known kinase inhibitor structures. The support vector machine method, using Merck atom pair derived fingerprint descriptors, was used to build models from activity data from 6 kinase assays. These models were qualified prospectively by selecting and testing compounds from the internal compound collection. Overall hit and enrichment rates of 82% and 2.5%, respectively, qualified the models for use in library design. Using the process, 7 novel libraries were designed, synthesized and tested against these same 6 kinases. The libraries performed excellently, yielding a 92% hit rate for the 179 compounds that made up the 7 libraries. The results of one library designed to include known literature compounds, as well as an analysis of overall substituent frequency, are discussed.

  7. Investigating driver injury severity patterns in rollover crashes using support vector machine models.

    PubMed

    Chen, Cong; Zhang, Guohui; Qian, Zhen; Tarefder, Rafiqul A; Tian, Zong

    2016-05-01

    Rollover crash is one of the major types of traffic crashes that induce fatal injuries. It is important to investigate the factors that affect rollover crashes and their influence on driver injury severity outcomes. This study employs support vector machine (SVM) models to investigate driver injury severity patterns in rollover crashes based on two-year crash data gathered in New Mexico. The impacts of various explanatory variables are examined in terms of crash and environmental information, vehicle features, and driver demographics and behavior characteristics. A classification and regression tree (CART) model is utilized to identify significant variables and SVM models with polynomial and Gaussian radius basis function (RBF) kernels are used for model performance evaluation. It is shown that the SVM models produce reasonable prediction performance and the polynomial kernel outperforms the Gaussian RBF kernel. Variable impact analysis reveals that factors including comfortable driving environment conditions, driver alcohol or drug involvement, seatbelt use, number of travel lanes, driver demographic features, maximum vehicle damages in crashes, crash time, and crash location are significantly associated with driver incapacitating injuries and fatalities. These findings provide insights for better understanding rollover crash causes and the impacts of various explanatory factors on driver injury severity patterns.

  8. Early Colorectal Cancer Detected by Machine Learning Model Using Gender, Age, and Complete Blood Count Data.

    PubMed

    Hornbrook, Mark C; Goshen, Ran; Choman, Eran; O'Keeffe-Rosetti, Maureen; Kinar, Yaron; Liles, Elizabeth G; Rust, Kristal C

    2017-08-23

    Machine learning tools identify patients with blood counts indicating greater likelihood of colorectal cancer and warranting colonoscopy referral. The objective was to validate a machine learning colorectal cancer detection model on a US community-based insured adult population. Eligible colorectal cancer cases (439 females, 461 males) with complete blood counts before diagnosis were identified from Kaiser Permanente Northwest Region's Tumor Registry. Control patients (n = 9108) were randomly selected from KPNW's population who had no cancers, received ≥1 blood count, had continuous enrollment from 180 days prior to the blood count through 24 months after the count, and were aged 40-89. For each control, one blood count was randomly selected as the pseudo-colorectal cancer diagnosis date for matching to cases, and assigned a "calendar year" based on the count date. For each calendar year, 18 controls were randomly selected to match the general enrollment's 10-year age groups and lengths of continuous enrollment. Prediction performance was evaluated by area under the curve, specificity, and odds ratios. The area under the receiver operating characteristic curve for detecting colorectal cancer was 0.80 ± 0.01. At 99% specificity, the odds ratio for association of a high-risk detection score with colorectal cancer was 34.7 (95% CI 28.9-40.4). The detection model had the highest accuracy in identifying right-sided colorectal cancers. ColonFlag(®) identifies individuals with tenfold higher risk of undiagnosed colorectal cancer at curable stages (0/I/II), flags colorectal tumors 180-360 days prior to usual clinical diagnosis, and is more accurate at identifying right-sided (compared to left-sided) colorectal cancers.

  9. EBS Radionuclide Transport Abstraction

    SciTech Connect

    J.D. Schreiber

    2005-08-25

    The purpose of this report is to develop and analyze the engineered barrier system (EBS) radionuclide transport abstraction model, consistent with Level I and Level II model validation, as identified in ''Technical Work Plan for: Near-Field Environment and Transport: Engineered Barrier System: Radionuclide Transport Abstraction Model Report Integration'' (BSC 2005 [DIRS 173617]). The EBS radionuclide transport abstraction (or EBS RT Abstraction) is the conceptual model used in the total system performance assessment for the license application (TSPA-LA) to determine the rate of radionuclide releases from the EBS to the unsaturated zone (UZ). The EBS RT Abstraction conceptual model consists of two main components: a flow model and a transport model. Both models are developed mathematically from first principles in order to show explicitly what assumptions, simplifications, and approximations are incorporated into the models used in the TSPA-LA. The flow model defines the pathways for water flow in the EBS and specifies how the flow rate is computed in each pathway. Input to this model includes the seepage flux into a drift. The seepage flux is potentially split by the drip shield, with some (or all) of the flux being diverted by the drip shield and some passing through breaches in the drip shield that might result from corrosion or seismic damage. The flux through drip shield breaches is potentially split by the waste package, with some (or all) of the flux being diverted by the waste package and some passing through waste package breaches that might result from corrosion or seismic damage. Neither the drip shield nor the waste package survives an igneous intrusion, so the flux splitting submodel is not used in the igneous scenario class. The flow model is validated in an independent model validation technical review. The drip shield and waste package flux splitting algorithms are developed and validated using experimental data. The transport model considers

  10. Machine listening intelligence

    NASA Astrophysics Data System (ADS)

    Cella, C. E.

    2017-05-01

    This manifesto paper will introduce machine listening intelligence, an integrated research framework for acoustic and musical signals modelling, based on signal processing, deep learning and computational musicology.

  11. Prediction of Aerosol Optical Depth in West Asia: Machine Learning Methods versus Numerical Models

    NASA Astrophysics Data System (ADS)

    Omid Nabavi, Seyed; Haimberger, Leopold; Abbasi, Reyhaneh; Samimi, Cyrus

    2017-04-01

    Dust-prone areas of West Asia are releasing increasingly large amounts of dust particles during warm months. Because of the lack of ground-based observations in the region, this phenomenon is mainly monitored through remotely sensed aerosol products. The recent development of mesoscale Numerical Models (NMs) has offered an unprecedented opportunity to predict dust emission, and subsequently Aerosol Optical Depth (AOD), at finer spatial and temporal resolutions. Nevertheless, the significant uncertainties in input data and in the simulation of dust activation and transport limit the performance of numerical models in dust prediction. The presented study aims to evaluate whether machine-learning algorithms (MLAs), which require much less computational expense, can yield the same or even better performance than NMs. Deep blue (DB) AOD, which is observed by satellites but also predicted by MLAs and NMs, is used for validation. We concentrate our evaluations on the dry Iraq plains, known as the main origin of recently intensified dust storms in West Asia. Here we examine the performance of four MLAs: a Linear regression Model (LM), Support Vector Machine (SVM), Artificial Neural Network (ANN), and Multivariate Adaptive Regression Splines (MARS). The Weather Research and Forecasting model coupled to Chemistry (WRF-Chem) and the Dust REgional Atmosphere Model (DREAM) are included as NMs. The MACC aerosol re-analysis of the European Centre for Medium-Range Weather Forecasts (ECMWF) is also included, although it has assimilated satellite-based AOD data. Using the Recursive Feature Elimination (RFE) method, nine environmental features including soil moisture and temperature, NDVI, dust source function, albedo, dust uplift potential, vertical velocity, precipitation and the 9-month SPEI drought index are selected for dust (AOD) modeling by MLAs. During the feature selection process, we noticed that NDVI and SPEI are of the highest importance in MLA predictions. The data set was divided
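
    A hedged sketch of the feature-selection step (synthetic data; the feature names follow the list above, and the estimator choice is an assumption):

        import numpy as np
        from sklearn.feature_selection import RFE
        from sklearn.svm import SVR

        rng = np.random.default_rng(0)
        names = ["soil_moisture", "soil_temp", "NDVI", "source_fn", "albedo",
                 "dust_uplift", "w_vertical", "precip", "SPEI9"]
        X = rng.random((500, len(names)))
        aod = rng.random(500)                          # deep-blue AOD, toy values

        selector = RFE(SVR(kernel="linear"), n_features_to_select=5).fit(X, aod)
        print([n for n, keep in zip(names, selector.support_) if keep])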

  12. ABSTRACTION OF DRIFT SEEPAGE

    SciTech Connect

    Michael L. Wilson

    2001-02-08

    Drift seepage refers to flow of liquid water into repository emplacement drifts, where it can potentially contribute to degradation of the engineered systems and release and transport of radionuclides within the drifts. Because of these important effects, seepage into emplacement drifts is listed as a ''principal factor for the postclosure safety case'' in the screening criteria for grading of data in Attachment 1 of AP-3.15Q, Rev. 2, ''Managing Technical Product Inputs''. Abstraction refers to distillation of the essential components of a process model into a form suitable for use in total-system performance assessment (TSPA). Thus, the purpose of this analysis/model is to put the information generated by the seepage process modeling in a form appropriate for use in the TSPA for the Site Recommendation. This report also supports the Unsaturated-Zone Flow and Transport Process Model Report. The scope of the work is discussed below. This analysis/model is governed by the ''Technical Work Plan for Unsaturated Zone Flow and Transport Process Model Report'' (CRWMS M&O 2000a). Details of this activity are in Addendum A of the technical work plan. The original Work Direction and Planning Document is included as Attachment 7 of Addendum A. Note that the Work Direction and Planning Document contains tasks identified for both Performance Assessment Operations (PAO) and Natural Environment Program Operations (NEPO). Only the PAO tasks are documented here. The planning for the NEPO activities is now in Addendum D of the same technical work plan and the work is documented in a separate report (CRWMS M&O 2000b). The Project has been reorganized since the document was written. The responsible organizations in the new structure are the Performance Assessment Department and the Unsaturated Zone Department, respectively. The work plan for the seepage abstraction calls for determining an appropriate abstraction methodology, determining uncertainties in seepage, and providing

  13. Development of hardware system using temperature and vibration maintenance models integration concepts for conventional machines monitoring: a case study

    NASA Astrophysics Data System (ADS)

    Adeyeri, Michael Kanisuru; Mpofu, Khumbulani; Kareem, Buliaminu

    2016-12-01

    This article describes the integration of temperature and vibration models for maintenance monitoring of conventional machinery parts, whose optimal functioning is affected by abnormal changes in temperature and vibration values, resulting in machine failures and breakdowns, poor product quality, inability to meet customers' demand, and poor inventory control, among other problems. The work entails the use of temperature and vibration sensors as monitoring probes, programmed in a microcontroller using the C language. The developed hardware consists of an ADXL345 vibration sensor, an AD594/595 temperature sensor with a type-K thermocouple, a microcontroller, a graphic liquid crystal display, a real-time clock, etc. The hardware is divided into two units, one based at the workstation (mainly meant to monitor machine behaviour) and the other at the base station (meant to receive the machine information transmitted from the workstation), working cooperatively for effective operation. The resulting hardware was calibrated, tested using model verification, and validated through least-squares and regression analysis of data read from the gearboxes of the extruding and cutting machines used for polyethylene bag production. The results confirmed the correlation existing among time, vibration and temperature, reflecting the effective formulation of the developed concept.
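
    A minimal sketch of the least-squares validation idea, assuming synthetic gearbox readings; the slopes, noise levels, and units are illustrative only.

    ```python
    # Least-squares fits relating temperature and vibration to elapsed time.
    import numpy as np

    rng = np.random.default_rng(0)
    t = np.arange(0.0, 60.0, 1.0)                          # minutes
    temp = 40.0 + 0.15 * t + rng.normal(0, 0.5, t.size)    # deg C
    vib = 2.0 + 0.04 * t + rng.normal(0, 0.2, t.size)      # mm/s RMS

    for name, y in (("temperature", temp), ("vibration", vib)):
        slope, intercept = np.polyfit(t, y, 1)             # least squares
        pred = slope * t + intercept
        r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
        print(f"{name}: slope={slope:.3f}, intercept={intercept:.2f}, R^2={r2:.3f}")
    ```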

  14. Development of a model of machine hand eye coordination and program specifications for a topological machine vision system

    NASA Technical Reports Server (NTRS)

    1972-01-01

    A unified approach to computer vision and manipulation is developed which is called choreographic vision. In the model, objects to be viewed by a projected robot in the Viking missions to Mars are seen as objects to be manipulated within choreographic contexts controlled by a multimoded remote, supervisory control system on Earth. A new theory of context relations is introduced as a basis for choreographic programming languages. A topological vision model is developed for recognizing objects by shape and contour. This model is integrated with a projected vision system consisting of a multiaperture image dissector TV camera and a ranging laser system. System program specifications integrate eye-hand coordination and topological vision functions, and an aerospace multiprocessor implementation is described.

  15. Copper Conductivity Model Development and Validation Using Flyer Plate Experiments on the Z-machine

    NASA Astrophysics Data System (ADS)

    Riford, L.; Lemke, R. W.; Cochrane, K.

    2015-11-01

    Magnetically accelerated flyer plate experiments done on Sandia's Z-machine provide insight into a multitude of materials problems at high energies and densities, including conductivity model development and validation. In an experiment with ten Cu flyer plates of thicknesses 500-1000 μm, VISAR measurements exhibit a characteristic jump in the velocity correlated with magnetic field burn-through and the expansion of melted material at the free surface. The experiment is modeled using Sandia's shock and multiphysics MHD code ALEGRA. Simulated free surface velocities are within 1% of the measured data early in time, but divergence occurs at the feature, where the simulation indicates a slower burn-through. The cause was found to be in the compressed regime of the Cu conductivity model. The model was improved by lowering the conductivity in the region 12.5-16 g/cc and 350-16000 K with a novel parameter-based optimization method using the velocity feature as a figure of merit. Sandia National Laboratories is a multiprogram laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U. S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
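
    The optimization described above can be pictured as tuning a conductivity scaling until a simulated velocity feature matches the measurement. In the sketch below, run_alegra_sim is a hypothetical stand-in for an actual ALEGRA run, and every number is invented.

    ```python
    # Illustrative parameter-based optimization on a velocity-feature
    # figure of merit; the "simulator" is a placeholder function.
    from scipy.optimize import minimize_scalar

    measured_jump_time = 2.45e-6  # s, stand-in for the VISAR feature

    def run_alegra_sim(conductivity_scale):
        # Placeholder: jump time as a function of a conductivity scaling
        # applied in the compressed regime (12.5-16 g/cc, 350-16000 K).
        return 2.60e-6 * conductivity_scale ** 0.5

    def figure_of_merit(scale):
        # Mismatch between the simulated and measured jump times.
        return abs(run_alegra_sim(scale) - measured_jump_time)

    result = minimize_scalar(figure_of_merit, bounds=(0.5, 1.5), method="bounded")
    print(f"optimal conductivity scaling: {result.x:.3f}")
    ```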

  16. Gaussian-binary restricted Boltzmann machines for modeling natural image statistics.

    PubMed

    Melchior, Jan; Wang, Nan; Wiskott, Laurenz

    2017-01-01

    We present a theoretical analysis of Gaussian-binary restricted Boltzmann machines (GRBMs) from the perspective of density models. The key aspect of this analysis is to show that GRBMs can be formulated as a constrained mixture of Gaussians, which gives a much better insight into the model's capabilities and limitations. We further show that GRBMs are capable of learning meaningful features without using a regularization term and that the results are comparable to those of independent component analysis. This is illustrated for both a two-dimensional blind source separation task and for modeling natural image patches. Our findings exemplify that reported difficulties in training GRBMs are due to the failure of the training algorithm rather than the model itself. Based on our analysis we derive a better training setup and show empirically that it leads to faster and more robust training of GRBMs. Finally, we compare different sampling algorithms for training GRBMs and show that Contrastive Divergence performs better than training methods that use a persistent Markov chain.
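
    As background for the training setup discussed above, here is a minimal CD-1 loop for a Gaussian-binary RBM with unit visible variance; the data and hyperparameters are toys, not the authors' configuration.

    ```python
    # One-step Contrastive Divergence for a GRBM (unit visible variance).
    import numpy as np

    rng = np.random.default_rng(0)
    n_vis, n_hid, lr = 4, 8, 0.01
    W = 0.01 * rng.standard_normal((n_vis, n_hid))
    b = np.zeros(n_vis)   # Gaussian visible biases
    c = np.zeros(n_hid)   # binary hidden biases

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    data = rng.standard_normal((1000, n_vis))  # stand-in whitened patches

    for epoch in range(10):
        for v0 in data:
            h0 = sigmoid(c + v0 @ W)                   # positive phase
            h_samp = (rng.random(n_hid) < h0).astype(float)
            v1 = b + W @ h_samp + rng.standard_normal(n_vis)  # Gaussian step
            h1 = sigmoid(c + v1 @ W)                   # negative phase
            W += lr * (np.outer(v0, h0) - np.outer(v1, h1))
            b += lr * (v0 - v1)
            c += lr * (h0 - h1)
    ```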

  17. Using machine learning tools to model complex toxic interactions with limited sampling regimes.

    PubMed

    Bertin, Matthew J; Moeller, Peter; Guillette, Louis J; Chapman, Robert W

    2013-03-19

    A major impediment to understanding the impact of environmental stress, including toxins and other pollutants, on organisms is that organisms are rarely challenged by only one or a few stressors in natural systems. Laboratory experiments, which practical considerations limit to a few stressors at a few levels, are therefore hard to link to real-world conditions. In addition, while the existence of complex interactions among stressors can be identified by current statistical methods, these methods do not provide a means to construct mathematical models of these interactions. In this paper, we offer a two-step process by which complex interactions of stressors on biological systems can be modeled in an experimental design that is within the limits of practicality. We begin with the notion that environmental conditions circumscribe an n-dimensional hyperspace within which biological processes or end points are embedded. We then randomly sample this hyperspace to establish experimental conditions that span the range of the relevant parameters and conduct the experiment(s) based upon these selected conditions. Models of the complex interactions of the parameters are then extracted using machine learning tools, specifically artificial neural networks. This approach can rapidly generate highly accurate models of biological responses to complex interactions among environmentally relevant toxins, identify critical subspaces where nonlinear responses exist, and provide an expedient means of designing traditional experiments to test the impact of complex mixtures on biological responses. Further, this can be accomplished with an astonishingly small sample size.
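
    A sketch of the two-step process under an invented response surface: step 1 randomly samples the stressor hyperspace, step 2 extracts an interaction model with an artificial neural network.

    ```python
    # Random sampling of a stressor hyperspace followed by ANN modeling.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    n_stressors, n_experiments = 5, 200

    # Step 1: experimental conditions spanning the parameter ranges.
    X = rng.uniform(0.0, 1.0, (n_experiments, n_stressors))

    # Stand-in biological endpoint with a nonlinear two-stressor interaction.
    y = (np.sin(3 * X[:, 0]) * X[:, 1] + 0.5 * X[:, 2]
         + rng.normal(0, 0.05, n_experiments))

    # Step 2: extract the interaction model.
    model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000,
                         random_state=0).fit(X, y)
    print("training R^2:", model.score(X, y))
    ```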

  18. Development of robust calibration models using support vector machines for spectroscopic monitoring of blood glucose

    PubMed Central

    Barman, Ishan; Kong, Chae-Ryon; Dingari, Narahara Chari; Dasari, Ramachandra R.; Feld, Michael S.

    2010-01-01

    Sample-to-sample variability has proven to be a major challenge in achieving calibration transfer in quantitative biological Raman spectroscopy. Multiple morphological and optical parameters, such as tissue absorption and scattering, physiological glucose dynamics and skin heterogeneity, vary significantly in a human population, introducing non-analyte-specific features into the calibration model. In this paper, we show that fluctuations of such parameters in human subjects introduce curved (non-linear) effects in the relationship between the concentrations of the analyte of interest and the mixture Raman spectra. To account for these curved effects, we propose the use of support vector machines (SVM) as a non-linear regression method over conventional linear regression techniques such as partial least squares (PLS). Using transcutaneous blood glucose detection as an example, we demonstrate that application of SVM enables a significant improvement (at least 30%) in cross-validation accuracy over PLS when measurements from multiple human volunteers are employed in the calibration set. Furthermore, using physical tissue models with randomized analyte concentrations and varying turbidities, we show that fluctuations in turbidity alone cause curved effects which can only be adequately modeled using non-linear regression techniques. The enhanced levels of accuracy obtained with the SVM-based calibration models open up avenues for prospective prediction in humans and thus for clinical translation of the technology. PMID:21050004
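
    A minimal sketch comparing non-linear SVM regression with PLS under cross-validation, on synthetic spectra with a turbidity-like curvature; this is not the authors' Raman data or exact pipeline.

    ```python
    # SVM vs. PLS calibration on synthetic, turbidity-distorted spectra.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVR

    rng = np.random.default_rng(0)
    n_samples, n_channels = 200, 50
    conc = rng.uniform(50, 300, n_samples)        # stand-in glucose, mg/dL
    turbidity = rng.uniform(0.5, 1.5, n_samples)  # sample-to-sample variation
    basis = rng.standard_normal(n_channels)

    # Curved (non-linear) dependence of the spectra on concentration.
    spectra = np.outer(conc * turbidity ** 0.5, basis)
    spectra += rng.normal(0, 1.0, spectra.shape)

    for name, model in [("PLS", PLSRegression(n_components=5)),
                        ("SVM", make_pipeline(StandardScaler(), SVR(C=100.0)))]:
        r2 = cross_val_score(model, spectra, conc, cv=5).mean()
        print(f"{name}: mean cross-validated R^2 = {r2:.3f}")
    ```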

  19. Improving protein-protein interactions prediction accuracy using protein evolutionary information and relevance vector machine model.

    PubMed

    An, Ji-Yong; Meng, Fan-Rong; You, Zhu-Hong; Chen, Xing; Yan, Gui-Ying; Hu, Ji-Pu

    2016-10-01

    Predicting protein-protein interactions (PPIs) is a challenging task and essential to construct the protein interaction networks, which is important for facilitating our understanding of the mechanisms of biological systems. Although a number of high-throughput technologies have been proposed to predict PPIs, there are unavoidable shortcomings, including high cost, time intensity, and inherently high false positive rates. For these reasons, many computational methods have been proposed for predicting PPIs. However, the problem is still far from being solved. In this article, we propose a novel computational method called RVM-BiGP that combines the relevance vector machine (RVM) model and Bi-gram Probabilities (BiGP) for PPIs detection from protein sequences. The major improvements include: (1) protein sequences are represented using the Bi-gram probabilities (BiGP) feature representation on a Position Specific Scoring Matrix (PSSM), in which the protein evolutionary information is contained; (2) to reduce the influence of noise, the Principal Component Analysis (PCA) method is used to reduce the dimension of the BiGP vector; (3) the powerful and robust Relevance Vector Machine (RVM) algorithm is used for classification. Five-fold cross-validation experiments executed on yeast and Helicobacter pylori datasets achieved very high accuracies of 94.57 and 90.57%, respectively. Experimental results are significantly better than previous methods. To further evaluate the proposed method, we compare it with the state-of-the-art support vector machine (SVM) classifier on the yeast dataset. The experimental results demonstrate that our RVM-BiGP method is significantly better than the SVM-based method. In addition, we achieved 97.15% accuracy on the imbalanced yeast dataset, which is higher than that on the balanced yeast dataset. The promising experimental results show the efficiency and robustness of the proposed method, which can be an automatic decision support tool for future
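
    A sketch of the feature pipeline only, under our reading of the abstract: bi-gram probabilities computed from a PSSM, followed by PCA for noise reduction. The PSSM is random here (a real one comes from PSI-BLAST), and the RVM classifier is omitted since scikit-learn provides none.

    ```python
    # BiGP features from a PSSM, then PCA dimension reduction.
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    seq_len, n_aa = 120, 20
    pssm = rng.random((seq_len, n_aa))        # stand-in PSSM, L x 20

    # Bi-gram probabilities: correlate each row with the next, giving a
    # 20 x 20 matrix flattened into a 400-dimensional feature vector.
    bigram = pssm[:-1].T @ pssm[1:]
    feature = bigram.ravel()

    # PCA across a set of proteins reduces the 400-dim BiGP vectors.
    features = rng.random((300, 400))         # stand-in protein set
    reduced = PCA(n_components=50).fit_transform(features)
    print(feature.shape, reduced.shape)       # (400,) (300, 50)
    ```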

  20. Modeling workflow to design machine translation applications for public health practice.

    PubMed

    Turner, Anne M; Brownstein, Megumu K; Cole, Kate; Karasz, Hilary; Kirchhoff, Katrin

    2015-02-01

    Provide a detailed understanding of the information workflow processes related to translating health promotion materials for limited English proficiency individuals in order to inform the design of context-driven machine translation (MT) tools for public health (PH). We applied a cognitive work analysis framework to investigate the translation information workflow processes of two large health departments in Washington State. Researchers conducted interviews, performed a task analysis, and validated results with PH professionals to model translation workflow and identify functional requirements for a translation system for PH. The study resulted in a detailed description of work related to translation of PH materials, an information workflow diagram, and a description of attitudes towards MT technology. We identified a number of themes that hold design implications for incorporating MT in PH translation practice. A PH translation tool prototype was designed based on these findings. This study underscores the importance of understanding the work context and information workflow for which systems will be designed. Based on themes and translation information workflow processes, we identified key design guidelines for incorporating MT into PH translation work. Primary amongst these is that MT should be followed by human review for translations to be of high quality and for the technology to be adopted into practice. The time and costs of creating multilingual health promotion materials are barriers to translation. PH personnel were interested in MT's potential to improve access to low-cost translated PH materials, but expressed concerns about ensuring quality. We outline design considerations and a potential machine translation tool to best fit MT systems into PH practice. Copyright © 2014 Elsevier Inc. All rights reserved.

  1. A survey of supervised machine learning models for mobile-phone based pathogen identification and classification

    NASA Astrophysics Data System (ADS)

    Ceylan Koydemir, Hatice; Feng, Steve; Liang, Kyle; Nadkarni, Rohan; Tseng, Derek; Benien, Parul; Ozcan, Aydogan

    2017-03-01

    Giardia lamblia causes a disease known as giardiasis, which results in diarrhea, abdominal cramps, and bloating. Although conventional pathogen detection methods used in water analysis laboratories offer high sensitivity and specificity, they are time consuming and need experts to operate bulky equipment and analyze the samples. Here we present a field-portable and cost-effective smartphone-based waterborne pathogen detection platform that can automatically classify Giardia cysts using machine learning. Our platform enables the detection and quantification of Giardia cysts in one hour, including sample collection, labeling, filtration, and automated counting steps. We evaluated the performance of three prototypes using Giardia-spiked water samples from different sources (e.g., reagent-grade, tap, non-potable, and pond water samples). We populated a training database with >30,000 cysts and estimated our detection sensitivity and specificity using 20 different classifier models, including decision trees, nearest neighbor classifiers, support vector machines (SVMs), and ensemble classifiers, and compared their speed of training and classification, as well as predicted accuracies. Among them, cubic SVM, medium Gaussian SVM, and bagged-trees were the most promising classifier types, with accuracies of 94.1%, 94.2%, and 95%, respectively; we selected the latter as our preferred classifier for the detection and enumeration of Giardia cysts imaged using our mobile-phone fluorescence microscope. Without the need for any experts or microbiologists, this field-portable pathogen detection platform presents a useful tool for water quality monitoring in resource-limited settings.
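
    A minimal sketch of screening candidate classifier families under cross-validation, as the study does across 20 models; the data are synthetic stand-ins for the cyst image features.

    ```python
    # Cross-validated comparison of candidate classifier families.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import BaggingClassifier
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.svm import SVC
    from sklearn.tree import DecisionTreeClassifier

    X, y = make_classification(n_samples=2000, n_features=10, random_state=0)

    candidates = {
        "decision tree": DecisionTreeClassifier(random_state=0),
        "nearest neighbor": KNeighborsClassifier(),
        "cubic SVM": SVC(kernel="poly", degree=3),
        "Gaussian SVM": SVC(kernel="rbf"),
        "bagged trees": BaggingClassifier(random_state=0),
    }
    for name, clf in candidates.items():
        acc = cross_val_score(clf, X, y, cv=5).mean()
        print(f"{name}: cross-validated accuracy = {acc:.3f}")
    ```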

  2. Modeling workflow to design machine translation applications for public health practice

    PubMed Central

    Turner, Anne M.; Brownstein, Megumu K.; Cole, Kate; Karasz, Hilary; Kirchhoff, Katrin

    2014-01-01

    Objective Provide a detailed understanding of the information workflow processes related to translating health promotion materials for limited English proficiency individuals in order to inform the design of context-driven machine translation (MT) tools for public health (PH). Materials and Methods We applied a cognitive work analysis framework to investigate the translation information workflow processes of two large health departments in Washington State. Researchers conducted interviews, performed a task analysis, and validated results with PH professionals to model translation workflow and identify functional requirements for a translation system for PH. Results The study resulted in a detailed description of work related to translation of PH materials, an information workflow diagram, and a description of attitudes towards MT technology. We identified a number of themes that hold design implications for incorporating MT in PH translation practice. A PH translation tool prototype was designed based on these findings. Discussion This study underscores the importance of understanding the work context and information workflow for which systems will be designed. Based on themes and translation information workflow processes, we identified key design guidelines for incorporating MT into PH translation work. Primary amongst these is that MT should be followed by human review for translations to be of high quality and for the technology to be adopted into practice. Conclusion The time and costs of creating multilingual health promotion materials are barriers to translation. PH personnel were interested in MT's potential to improve access to low-cost translated PH materials, but expressed concerns about ensuring quality. We outline design considerations and a potential machine translation tool to best fit MT systems into PH practice. PMID:25445922

  3. Estimation of respiratory volume from thoracoabdominal breathing distances: comparison of two models of machine learning.

    PubMed

    Dumond, Rémy; Gastinger, Steven; Rahman, Hala Abdul; Faucheur, Alexis Le; Quinton, Patrice; Kang, Haitao; Prioux, Jacques

    2017-08-01

    The purposes of this study were to both improve the accuracy of respiratory volume (V) estimates using the respiratory magnetometer plethysmography (RMP) technique and facilitate the use of this technique. We compared two models of machine learning (ML) for estimating [Formula: see text]: a linear model (multiple linear regression-MLR) and a nonlinear model (artificial neural network-ANN), and we used cross-validation to validate these models. Fourteen healthy adults, aged [Formula: see text] years participated in the present study. The protocol was conducted in a laboratory test room. The anteroposterior displacements of the rib cage and abdomen, and the axial displacements of the chest wall and spine were measured using two pairs of magnetometers. [Formula: see text] was estimated from these four signals, and the respiratory volume was simultaneously measured using a spirometer ([Formula: see text]) under lying, sitting and standing conditions as well as various exercise conditions (working on computer, treadmill walking at 4 and 6 km[Formula: see text], treadmill running at 9 and 12  km [Formula: see text] and ergometer cycling at 90 and 110 W). The results from the ANN model fitted the spirometer volume significantly better than those obtained through MLR. Considering all activities, the difference between [Formula: see text] and [Formula: see text] (bias) was higher for the MLR model ([Formula: see text] L) than for the ANN model ([Formula: see text] L). Our results demonstrate that this new processing approach for RMP seems to be a valid tool for estimating V with sufficient accuracy during lying, sitting and standing and under various exercise conditions.
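
    A sketch of the model comparison on synthetic magnetometer-style signals, reporting the mean difference (bias) as the study does; the sizes and response function are illustrative.

    ```python
    # MLR vs. ANN volume estimation from four displacement signals.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import cross_val_predict
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    n = 600
    X = rng.standard_normal((n, 4))     # four displacement signals
    volume = 0.5 + X[:, 0] * X[:, 1] + 0.3 * X[:, 2] + rng.normal(0, 0.05, n)

    for name, model in [("MLR", LinearRegression()),
                        ("ANN", MLPRegressor(hidden_layer_sizes=(16,),
                                             max_iter=5000, random_state=0))]:
        pred = cross_val_predict(model, X, volume, cv=5)
        bias = np.mean(pred - volume)   # mean difference, as reported above
        print(f"{name}: bias = {bias:+.4f} L")
    ```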

  4. Including crystal structure attributes in machine learning models of formation energies via Voronoi tessellations

    NASA Astrophysics Data System (ADS)

    Ward, Logan; Liu, Ruoqian; Krishna, Amar; Hegde, Vinay I.; Agrawal, Ankit; Choudhary, Alok; Wolverton, Chris

    2017-07-01

    While high-throughput density functional theory (DFT) has become a prevalent tool for materials discovery, it is limited by the relatively large computational cost. In this paper, we explore using DFT data from high-throughput calculations to create faster, surrogate models with machine learning (ML) that can be used to guide new searches. Our method works by using decision tree models to map DFT-calculated formation enthalpies to a set of attributes consisting of two distinct types: (i) composition-dependent attributes of elemental properties (as have been used in previous ML models of DFT formation energies), combined with (ii) attributes derived from the Voronoi tessellation of the compound's crystal structure. The ML models created using this method have half the cross-validation error and similar training and evaluation speeds to models created with the Coulomb matrix and partial radial distribution function methods. For a dataset of 435 000 formation energies taken from the Open Quantum Materials Database (OQMD), our model achieves a mean absolute error of 80 meV/atom in cross validation, which is lower than the approximate error between DFT-computed and experimentally measured formation enthalpies and below 15% of the mean absolute deviation of the training set. We also demonstrate that our method can accurately estimate the formation energy of materials outside of the training set and be used to identify materials with especially large formation enthalpies. We propose that our models can be used to accelerate the discovery of new materials by identifying the most promising materials to study with DFT at little additional computational cost.
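
    A heavily simplified sketch of the core idea: derive local-environment attributes from a Voronoi tessellation of atom sites and feed them to a tree-ensemble regressor. Structures and energies are synthetic, and periodic boundary conditions are ignored.

    ```python
    # Voronoi-derived attributes feeding a tree-ensemble energy model.
    import numpy as np
    from scipy.spatial import Voronoi
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)

    def voronoi_attributes(positions):
        # Effective coordination: count Voronoi ridges touching each site,
        # then summarize over the cell (mean and spread).
        vor = Voronoi(positions)
        counts = np.zeros(len(positions))
        for i, j in vor.ridge_points:
            counts[i] += 1
            counts[j] += 1
        return [counts.mean(), counts.std()]

    # Toy dataset of 12-atom "structures" with stand-in energies.
    X = np.array([voronoi_attributes(rng.random((12, 3))) for _ in range(200)])
    y = rng.normal(-0.5, 0.2, 200)   # stand-in formation energies, eV/atom

    model = RandomForestRegressor(random_state=0).fit(X, y)
    print("training R^2:", model.score(X, y))
    ```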

  5. Machine learning and hurdle models for improving regional predictions of stream water acid neutralizing capacity

    NASA Astrophysics Data System (ADS)

    Povak, Nicholas A.; Hessburg, Paul F.; Reynolds, Keith M.; Sullivan, Timothy J.; McDonnell, Todd C.; Salter, R. Brion

    2013-06-01

    In many industrialized regions of the world, atmospherically deposited sulfur derived from industrial, nonpoint air pollution sources reduces stream water quality and results in acidic conditions that threaten aquatic resources. Accurate maps of predicted stream water acidity are an essential aid to managers who must identify acid-sensitive streams, potentially affected biota, and create resource protection strategies. In this study, we developed correlative models to predict the acid neutralizing capacity (ANC) of streams across the southern Appalachian Mountain region, USA. Models were developed using stream water chemistry data from 933 sampled locations and continuous maps of pertinent environmental and climatic predictors. Environmental predictors were averaged across the upslope contributing area for each sampled stream location and submitted to both statistical and machine-learning regression models. Predictor variables represented key aspects of the contributing geology, soils, climate, topography, and acidic deposition. To reduce model error rates, we employed hurdle modeling to screen out well-buffered sites and predict continuous ANC for the remainder of the stream network. Models predicted acid-sensitive streams in forested watersheds with small contributing areas, siliceous lithologies, cool and moist environments, low clay content soils, and moderate or higher dry sulfur deposition. Our results confirmed findings from other studies and further identified several influential climatic variables and variable interactions. Model predictions indicated that one quarter of the total stream network was sensitive to additional sulfur inputs (i.e., ANC < 100 µeq L-1), while <10% displayed much lower ANC (<50 µeq L-1). These methods may be readily adapted in other regions to assess stream water quality and potential biotic sensitivity to acidic inputs.
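
    A minimal sketch of the hurdle structure: a classifier first screens out well-buffered sites, and a regressor predicts continuous ANC for the remainder. The predictors, threshold, and coefficients are illustrative, not the study's.

    ```python
    # Two-stage (hurdle) model for stream ANC.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

    rng = np.random.default_rng(0)
    n = 1000
    X = rng.standard_normal((n, 6))                # watershed predictors
    anc = 150 + 80 * X[:, 0] - 60 * X[:, 1] + rng.normal(0, 20, n)
    well_buffered = anc >= 300                     # illustrative hurdle

    # Stage 1: screen out well-buffered sites.
    clf = RandomForestClassifier(random_state=0).fit(X, well_buffered)

    # Stage 2: continuous ANC fit only on the acid-sensitive remainder.
    reg = RandomForestRegressor(random_state=0).fit(X[~well_buffered],
                                                    anc[~well_buffered])

    X_new = rng.standard_normal((5, 6))
    for flag, val in zip(clf.predict(X_new), reg.predict(X_new)):
        print("well-buffered" if flag else f"predicted ANC = {val:.0f} ueq/L")
    ```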

  6. Robust Least-Squares Support Vector Machine With Minimization of Mean and Variance of Modeling Error.

    PubMed

    Lu, Xinjiang; Liu, Wenbo; Zhou, Chuang; Huang, Minghui

    2017-06-13

    The least-squares support vector machine (LS-SVM) is a popular data-driven modeling method and has been successfully applied to a wide range of applications. However, it has some disadvantages, including being ineffective at handling non-Gaussian noise as well as being sensitive to outliers. In this paper, a robust LS-SVM method is proposed and is shown to have more reliable performance when modeling a nonlinear system under conditions where Gaussian or non-Gaussian noise is present. The construction of a new objective function allows for a reduction of the mean of the modeling error as well as the minimization of its variance, and it does not constrain the mean of the modeling error to zero. This differs from the traditional LS-SVM, which uses a worst-case scenario approach in order to minimize the modeling error and constrains the mean of the modeling error to zero. In doing so, the proposed method takes the modeling error distribution information into consideration and is thus less conservative and more robust with regard to random noise. A solving method is then developed in order to determine the optimal parameters for the proposed robust LS-SVM. An additional analysis indicates that the proposed LS-SVM gives a smaller weight to a large-error training sample and a larger weight to a small-error training sample, and is thus more robust than the traditional LS-SVM. The effectiveness of the proposed robust LS-SVM is demonstrated using both artificial and real-life cases.
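
    For reference, the standard LS-SVM primal is the first problem below; the second is one plausible reading of the modified objective described in the abstract (penalizing both the mean and the variance of the error, with the mean unconstrained), not necessarily the authors' exact formulation.

    ```latex
    % Standard LS-SVM primal (Suykens-style):
    \min_{w,\,b,\,e}\; \tfrac{1}{2}\|w\|^2 + \tfrac{\gamma}{2}\sum_{i=1}^{N} e_i^2
    \quad \text{s.t.}\quad y_i = w^{\top}\varphi(x_i) + b + e_i .

    % A plausible mean-variance variant, as suggested by the abstract:
    \min_{w,\,b,\,e}\; \tfrac{1}{2}\|w\|^2
      + \gamma_1\,\bar{e}^{\,2}
      + \gamma_2\,\tfrac{1}{N}\sum_{i=1}^{N}\bigl(e_i - \bar{e}\bigr)^2,
    \qquad \bar{e} = \tfrac{1}{N}\sum_{i=1}^{N} e_i .
    ```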

  7. A Hybrid Short-Term Traffic Flow Prediction Model Based on Singular Spectrum Analysis and Kernel Extreme Learning Machine.

    PubMed

    Shang, Qiang; Lin, Ciyun; Yang, Zhaosheng; Bing, Qichun; Zhou, Xiyang

    2016-01-01

    Short-term traffic flow prediction is one of the most important issues in the field of intelligent transport systems (ITS). Because of its uncertainty and nonlinearity, short-term traffic flow prediction is a challenging task. In order to improve its accuracy, a hybrid model (SSA-KELM) is proposed based on singular spectrum analysis (SSA) and the kernel extreme learning machine (KELM). SSA is used to filter out the noise of the traffic flow time series. Then, the filtered traffic flow data are used to train the KELM model; the optimal input form of the proposed model is determined by phase space reconstruction, and the parameters of the model are optimized by the gravitational search algorithm (GSA). Finally, case validation is carried out using measured data from an expressway in Xiamen, China, and the SSA-KELM model is compared with several well-known prediction models, including the support vector machine, the extreme learning machine, and a single KELM model. The experimental results demonstrate that the performance of the proposed model is superior to that of the comparison models. Apart from the accuracy improvement, the proposed model is more robust.
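
    A compact sketch of the pipeline: SSA denoises the series, and a kernel method predicts the next value from lagged inputs. Kernel ridge stands in for KELM (which scikit-learn does not provide), and the traffic series is synthetic.

    ```python
    # SSA denoising followed by a kernel regression one-step predictor.
    import numpy as np
    from sklearn.kernel_ridge import KernelRidge

    rng = np.random.default_rng(0)
    t = np.arange(500)
    flow = 100 + 30 * np.sin(2 * np.pi * t / 96) + rng.normal(0, 5, t.size)

    def ssa_denoise(x, window=48, rank=4):
        # Hankel (trajectory) matrix -> truncated SVD -> diagonal averaging.
        k = len(x) - window + 1
        traj = np.column_stack([x[i:i + window] for i in range(k)])
        u, s, vt = np.linalg.svd(traj, full_matrices=False)
        approx = (u[:, :rank] * s[:rank]) @ vt[:rank]
        out, counts = np.zeros(len(x)), np.zeros(len(x))
        for i in range(k):
            out[i:i + window] += approx[:, i]
            counts[i:i + window] += 1
        return out / counts

    clean = ssa_denoise(flow)
    lags = 6
    X = np.column_stack([clean[i:len(clean) - lags + i] for i in range(lags)])
    y = clean[lags:]

    model = KernelRidge(kernel="rbf", alpha=1.0).fit(X[:-50], y[:-50])
    print("held-out R^2:", model.score(X[-50:], y[-50:]))
    ```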

  8. A Hybrid Short-Term Traffic Flow Prediction Model Based on Singular Spectrum Analysis and Kernel Extreme Learning Machine

    PubMed Central

    Lin, Ciyun; Yang, Zhaosheng; Bing, Qichun; Zhou, Xiyang

    2016-01-01

    Short-term traffic flow prediction is one of the most important issues in the field of intelligent transport systems (ITS). Because of its uncertainty and nonlinearity, short-term traffic flow prediction is a challenging task. In order to improve its accuracy, a hybrid model (SSA-KELM) is proposed based on singular spectrum analysis (SSA) and the kernel extreme learning machine (KELM). SSA is used to filter out the noise of the traffic flow time series. Then, the filtered traffic flow data are used to train the KELM model; the optimal input form of the proposed model is determined by phase space reconstruction, and the parameters of the model are optimized by the gravitational search algorithm (GSA). Finally, case validation is carried out using measured data from an expressway in Xiamen, China, and the SSA-KELM model is compared with several well-known prediction models, including the support vector machine, the extreme learning machine, and a single KELM model. The experimental results demonstrate that the performance of the proposed model is superior to that of the comparison models. Apart from the accuracy improvement, the proposed model is more robust. PMID:27551829

  9. Machine Shop Grinding Machines.

    ERIC Educational Resources Information Center

    Dunn, James

    This curriculum manual is one in a series of machine shop curriculum manuals intended for use in full-time secondary and postsecondary classes, as well as part-time adult classes. The curriculum can also be adapted to open-entry, open-exit programs. Its purpose is to equip students with basic knowledge and skills that will enable them to enter the…

  10. An unsupervised machine learning model for discovering latent infectious diseases using social media data.

    PubMed

    Lim, Sunghoon; Tucker, Conrad S; Kumara, Soundar

    2017-02-01

    The authors of this work propose an unsupervised machine learning model that has the ability to identify real-world latent infectious diseases by mining social media data. In this study, a latent infectious disease is defined as a communicable disease that has not yet been formalized by national public health institutes and explicitly communicated to the general public. Most existing approaches to modeling infectious-disease-related knowledge discovery through social media networks are top-down approaches that are based on already known information, such as the names of diseases and their symptoms. In existing top-down approaches, necessary but unknown information, such as disease names and symptoms, is mostly unidentified in social media data until national public health institutes have formalized that disease. Most of the formalizing processes for latent infectious diseases are time consuming. Therefore, this study presents a bottom-up approach for latent infectious disease discovery in a given location without prior information, such as disease names and related symptoms. Social media messages with user and temporal information are extracted during the data preprocessing stage. An unsupervised sentiment analysis model is then presented. Users' expressions about symptoms, body parts, and pain locations are also identified from social media data. Then, symptom weighting vectors for each individual and time period are created, based on their sentiment and social media expressions. Finally, latent-infectious-disease-related information is retrieved from individuals' symptom weighting vectors. Twitter data from August 2012 to May 2013 are used to validate this study. Real electronic medical records for 104 individuals, who were diagnosed with influenza in the same period, are used to serve as ground truth validation. The results are promising, with the highest precision, recall, and F1 score values of 0.773, 0.680, and 0.724, respectively. This work uses individuals
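
    An illustrative sketch of the symptom weighting idea: per-user vectors accumulate sentiment-weighted symptom mentions. The lexicon, messages, sentiment scores, and weighting rule are all invented, not the paper's.

    ```python
    # Per-user symptom weighting vectors from sentiment-scored messages.
    from collections import Counter

    symptom_lexicon = {"fever", "cough", "headache", "nausea"}

    messages = [  # (user, text, sentiment in [-1, 1])
        ("user1", "terrible fever and cough today", -0.8),
        ("user1", "headache will not quit", -0.6),
        ("user2", "lovely walk in the park", 0.7),
        ("user2", "slight cough but feeling fine", 0.1),
    ]

    weights = {}  # user -> sentiment-weighted symptom mentions
    for user, text, sentiment in messages:
        for token in text.lower().split():
            if token in symptom_lexicon:
                # Negative sentiment strengthens the symptom signal.
                weights.setdefault(user, Counter())[token] += max(0.0, -sentiment)

    print(weights)
    ```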

  11. Mathematical Modeling and Simulation of the Pressing Section of a Paper Machine Including Dynamic Capillary Effect

    NASA Astrophysics Data System (ADS)

    Printsypar, G.; Iliev, O.; Rief, S.

    2011-12-01

    Paper production is a challenging problem which attracts the attention of many scientists. The process of interest here takes place in the pressing section of a paper machine. The paper layer is dried by pressing it against fabrics, i.e. press felts. The paper-felt sandwich is transported through the press nips at high speed (for more details see [3]). Since the natural drainage of water in the felts takes much longer than the drying in the pressing section, we include the dynamic capillary effect in the consideration. The dynamic capillary pressure-saturation relation proposed by Hassanizadeh and Gray (see [2]) is adopted for the pressing process. Another issue taken into account while modeling the pressing section is the appearance of fully saturated regions. We consider two flow regimes, the one-phase water flow and the two-phase air-water flow; this leads to a free boundary problem. We also account for the complexity of the porous structure of the paper-felt sandwich. In addition to the two flow regimes, the computational domain is divided by layers into nonoverlapping subdomains. The system of equations describing transport processes in the pressing section is then stated taking all these features into account. The presented model is discretized by the finite volume method. We carry out numerical experiments for different configurations of the pressing section (roll press, shoe press) and for parameters which are typical for the paper-felt sandwich during the paper production process. The experiments show that the dynamic capillary effect has a significant influence on the distribution of pressure, even for small values of the material coefficient (see Fig. 1). The obtained results are in agreement with the laboratory experiment performed in [1], which states that the distribution of the pressure is not symmetric, with the maximum value occurring in front of the center of the pressing nip and the minimum value less than entry
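
    The Hassanizadeh-Gray relation referred to above is commonly quoted in the following form (the notation here is generic and may differ from the paper's):

    ```latex
    % Dynamic capillary pressure-saturation relation:
    p_n - p_w \;=\; p_c^{\mathrm{eq}}(S_w) \;-\; \tau\,\frac{\partial S_w}{\partial t},
    ```

    where p_n and p_w are the non-wetting and wetting phase pressures, p_c^eq the equilibrium capillary pressure curve, S_w the water saturation, and tau >= 0 the dynamic material coefficient.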

  12. Geometric dimension model of virtual astronaut body for ergonomic analysis of man-machine space system

    NASA Astrophysics Data System (ADS)

    Qianxiang, Zhou

    2012-07-01

    It is very important to clarify the geometric characteristics of human body segments and to construct an analysis model for ergonomic design and the application of ergonomic virtual humans. Typical anthropometric data of 1122 Chinese men aged 20-35 years were collected using a three-dimensional laser scanner for the human body. According to the correlations between different parameters, curves were fitted between seven trunk parameters and ten body parameters with the SPSS 16.0 software. It can be concluded that hip circumference and shoulder breadth are the most important parameters in the models, and that these two parameters have high correlations with the other parameters of the human body. By comparison with conventional regression curves, the present regression equations based on the seven trunk parameters forecast the geometric dimensions of the head, neck, height and the four limbs with higher precision. The result is therefore greatly valuable for ergonomic design and analysis of man-machine systems, and will be very useful for astronaut body model analysis and application.

  13. Modelling effect of magnetic field on material removal in dry electrical discharge machining

    NASA Astrophysics Data System (ADS)

    Gupta, Abhishek; Joshi, Suhas S.

    2017-02-01

    One of the reasons for increased material removal rate in magnetic field assisted dry electrical discharge machining (EDM) is confinement of plasma due to Lorentz forces. This paper presents a mathematical model to evaluate the effect of external magnetic field on crater depth and diameter in single- and multiple-discharge EDM process. The model incorporates three main effects of the magnetic field, which include plasma confinement, mean free path reduction and pulsating magnetic field effects. Upon the application of an external magnetic field, Lorentz forces that are developed across the plasma column confine the plasma column. Also, the magnetic field reduces the mean free path of electrons due to an increase in the plasma pressure and cycloidal path taken by the electrons between the electrodes. As the mean free path of electrons reduces, more ionization occurs in plasma column and eventually an increase in the current density at the inter-electrode gap occurs. The model results for crater depth and its diameter in single discharge dry EDM process show an error of 9%-10% over the respective experimental values.
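
    For orientation, two generic relations behind the effects listed above (standard forms; the paper's exact expressions may differ): the Lorentz force density that confines the plasma column, and the kinetic mean free path, which falls as the plasma pressure rises.

    ```latex
    \mathbf{f} \;=\; \mathbf{J}\times\mathbf{B}
    \qquad\text{(Lorentz force density on the plasma column)},

    \lambda \;=\; \frac{k_B T}{\sqrt{2}\,\pi d^2\,p}
    \qquad\text{(mean free path, decreasing with plasma pressure } p\text{)}.
    ```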

  14. Gaussian-binary restricted Boltzmann machines for modeling natural image statistics

    PubMed Central

    Wang, Nan; Wiskott, Laurenz

    2017-01-01

    We present a theoretical analysis of Gaussian-binary restricted Boltzmann machines (GRBMs) from the perspective of density models. The key aspect of this analysis is to show that GRBMs can be formulated as a constrained mixture of Gaussians, which gives a much better insight into the model’s capabilities and limitations. We further show that GRBMs are capable of learning meaningful features without using a regularization term and that the results are comparable to those of independent component analysis. This is illustrated for both a two-dimensional blind source separation task and for modeling natural image patches. Our findings exemplify that reported difficulties in training GRBMs are due to the failure of the training algorithm rather than the model itself. Based on our analysis we derive a better training setup and show empirically that it leads to faster and more robust training of GRBMs. Finally, we compare different sampling algorithms for training GRBMs and show that Contrastive Divergence performs better than training methods that use a persistent Markov chain. PMID:28152552

  15. Light and short arc rubs in rotating machines: Experimental tests and modelling

    NASA Astrophysics Data System (ADS)

    Pennacchi, P.; Bachschmid, N.; Tanzi, E.

    2009-10-01

    Rotor-to-stator rub is a non-linear phenomenon which has been analyzed many times in the rotordynamics literature, but very often these studies are devoted simply to highlighting non-linearities, using very simple rotors, rather than to presenting reliable models. However, rotor-to-stator rub is actually one of the most common faults during the operation of rotating machinery. The frequency of its occurrence is increasing due to the trend of reducing the radial clearance between the seal and the rotor in modern turbine units, pumps and compressors in order to increase efficiency. Often the rub occurs between the rotor and the seals, and the analysis of the phenomenon cannot disregard the different relative stiffnesses. This paper presents experimental results obtained by means of a test rig in which the rub conditions of real machines are reproduced. In particular, short-arc rubs are considered, and the shaft is stiffer than the obstacle. A model, suitable for real rotating machinery, is then presented, and its simulations are compared with the experimental results. The model is able to reproduce the behaviour of the test rig.

  16. Modelling effect of magnetic field on material removal in dry electrical discharge machining

    NASA Astrophysics Data System (ADS)

    Gupta, Abhishek; Joshi, Suhas S.

    2017-02-01

    One of the reasons for increased material removal rate in magnetic field assisted dry electrical discharge machining (EDM) is confinement of plasma due to Lorentz forces. This paper presents a mathematical model to evaluate the effect of external magnetic field on crater depth and diameter in single- and multiple-discharge EDM process. The model incorporates three main effects of the magnetic field, which include plasma confinement, mean free path reduction and pulsating magnetic field effects. Upon the application of an external magnetic field, Lorentz forces that are developed across the plasma column confine the plasma column. Also, the magnetic field reduces the mean free path of electrons due to an increase in the plasma pressure and cycloidal path taken by the electrons between the electrodes. As the mean free path of electrons reduces, more ionization occurs in plasma column and eventually an increase in the current density at the inter-electrode gap occurs. The model results for crater depth and its diameter in single discharge dry EDM process show an error of 9%-10% over the respective experimental values.

  17. Accurate Models of Formation Enthalpy Created using Machine Learning and Voronoi Tessellations

    NASA Astrophysics Data System (ADS)

    Ward, Logan; Liu, Rosanne; Krishna, Amar; Hegde, Vinay; Agrawal, Ankit; Choudhary, Alok; Wolverton, Chris

    Several groups in the past decade have used high-throughput Density Functional Theory to predict the properties of hundreds of thousands of compounds. These databases provide the unique capability of being able to quickly query the properties of many compounds. Here, we explore how these datasets can also be used to create models that can predict the properties of compounds at rates several orders of magnitude faster than DFT. Our method relies on using Voronoi tessellations to derive attributes that quantitatively characterize the local environment around each atom, which then are used as input to a machine learning model. In this presentation, we will discuss the application of this technique to predicting the formation enthalpy of compounds using data from the Open Quantum Materials Database (OQMD). To date, we have found that this technique can be used to create models that are about twice as accurate as those created using the Coulomb Matrix and Partial Radial Distribution approaches and are just as fast to evaluate.

  18. Nonlinear Generator Control Based on Equilibrium Point Analysis for Standard One-Machine Infinite-Bus System Model

    NASA Astrophysics Data System (ADS)

    Zhang, Jian; Fujimoto, Koji; Kawamoto, Shunji

    The aim of this letter is to show that the unstable equilibrium point of the Japanese standard one-machine infinite-bus system model is eliminated by adding a simple nonlinear complementary control input to the AVR, and that the critical clearing time of the system can then be further enhanced, in comparison with a PSS, by the proposed nonlinear generator control.
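
    As standard background (the letter's model details may differ), the classical one-machine infinite-bus dynamics are governed by the swing equation

    ```latex
    M\,\ddot{\delta} \;=\; P_m \;-\; \frac{E' V_{\infty}}{X}\,\sin\delta \;-\; D\,\dot{\delta},
    ```

    with rotor angle delta, inertia constant M, mechanical power P_m, internal EMF E', infinite-bus voltage V_infinity, total reactance X, and damping D; the stable and unstable equilibria are the two solutions of P_m = (E' V_infinity / X) sin(delta) on one swing cycle.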

  19. Curriculum Assessment Using Artificial Neural Network and Support Vector Machine Modeling Approaches: A Case Study. IR Applications. Volume 29