Science.gov

Sample records for abstract machine model

  1. The Abstract Machine Model for Transaction-based System Control

    SciTech Connect

    Chassin, David P.

    2003-01-31

    Recent work applying statistical mechanics to economic modeling has demonstrated the effectiveness of using thermodynamic theory to address the complexities of large scale economic systems. Transaction-based control systems depend on the conjecture that when control of thermodynamic systems is based on price-mediated strategies (e.g., auctions, markets), the optimal allocation of resources in a market-based control system results in an emergent optimal control of the thermodynamic system. This paper proposes an abstract machine model as the necessary precursor for demonstrating this conjecture and establishes the dynamic laws as the basis for a special theory of emergence applied to the global behavior and control of complex adaptive systems. The abstract machine in a large system amounts to the analog of a particle in thermodynamic theory. These machines permit the establishment of a theory of dynamic control of complex system behavior based on statistical mechanics. Thus we may be better able to engineer a few simple control laws for a very small number of device types, which, when deployed in very large numbers and operated as a system of many interacting markets, yield stable and optimal control of the thermodynamic system.

  2. Abstract machine based execution model for computer architecture design and efficient implementation of logic programs in parallel

    SciTech Connect

    Hermenegildo, M.V.

    1986-01-01

    The term Logic Programming refers to a variety of computer languages and execution models based on the traditional concept of Symbolic Logic. The expressive power of these languages offers promise to be of great assistance in facing the programming challenges of present and future symbolic processing applications in artificial intelligence, knowledge-based systems, and many other areas of computing. This dissertation presents an efficient parallel execution model for logic programs. The model is described from the source language level down to an Abstract Machine level, suitable for direct implementation on existing parallel systems or for the design of special purpose parallel architectures. Few assumptions are made at the source language level and, therefore, the techniques developed and the general Abstract Machine design are applicable to a variety of logic (and also functional) languages. These techniques offer efficient solutions to several areas of parallel Logic Programming implementation previously considered problematic or a source of considerable overhead, such as the detection and handling of variable binding conflicts in AND-parallelism, the specification of control and management of the execution tree, the treatment of distributed backtracking, and goal scheduling and memory management issues, etc. A parallel Abstract Machine design is offered, specifying data areas, operation, and a suitable instruction set.

  3. Abstract quantum computing machines and quantum computational logics

    NASA Astrophysics Data System (ADS)

    Chiara, Maria Luisa Dalla; Giuntini, Roberto; Sergioli, Giuseppe; Leporini, Roberto

    2016-06-01

    Classical and quantum parallelism are deeply different, although it is sometimes claimed that quantum Turing machines are nothing but special examples of classical probabilistic machines. We introduce the concepts of deterministic state machine, classical probabilistic state machine and quantum state machine. On this basis, we discuss the question: To what extent can quantum state machines be simulated by classical probabilistic state machines? Each state machine is devoted to a single task determined by its program. Real computers, however, behave differently, being able to solve different kinds of problems. This capacity can be modeled, in the quantum case, by the mathematical notion of abstract quantum computing machine, whose different programs determine different quantum state machines. The computations of abstract quantum computing machines can be linguistically described by the formulas of a particular form of quantum logic, termed quantum computational logic.
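
    The contrast drawn here between classical probabilistic and quantum state machines can be made concrete with a toy numerical comparison. The sketch below is not taken from the paper; the two-state example and matrices are assumptions for illustration. It evolves a probability vector with a stochastic matrix and an amplitude vector with a unitary, showing the interference effect that a classical probabilistic machine cannot reproduce step for step.

    ```python
    import numpy as np

    # Toy illustration (not the authors' formalism): a two-state classical
    # probabilistic machine updates a probability vector with a stochastic matrix,
    # while a two-state quantum machine updates an amplitude vector with a unitary.

    # Classical probabilistic step: column-stochastic transition matrix.
    M = np.array([[0.5, 0.5],
                  [0.5, 0.5]])
    p = np.array([1.0, 0.0])           # start deterministically in state 0
    p1 = M @ p                         # after one step
    p2 = M @ p1                        # after two steps -> still [0.5, 0.5]

    # Quantum step: Hadamard-like unitary acting on amplitudes.
    H = np.array([[1, 1],
                  [1, -1]]) / np.sqrt(2)
    psi = np.array([1.0, 0.0])         # amplitude vector for state |0>
    psi1 = H @ psi                     # superposition after one step
    psi2 = H @ psi1                    # interference: amplitudes recombine to |0>

    print("classical after 2 steps:", p2)                              # [0.5 0.5]
    print("quantum probabilities after 1 step:", np.abs(psi1) ** 2)    # [0.5 0.5]
    print("quantum probabilities after 2 steps:", np.abs(psi2) ** 2)   # [1. 0.]
    ```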

  4. Programming the Navier-Stokes computer: An abstract machine model and a visual editor

    NASA Technical Reports Server (NTRS)

    Middleton, David; Crockett, Tom; Tomboulian, Sherry

    1988-01-01

    The Navier-Stokes computer is a parallel computer designed to solve Computational Fluid Dynamics problems. Each processor contains several floating point units which can be configured under program control to implement a vector pipeline with several inputs and outputs. Since the development of an effective compiler for this computer appears to be very difficult, machine level programming seems necessary and support tools for this process have been studied. These support tools are organized into a graphical program editor. A programming process is described by which appropriate computations may be efficiently implemented on the Navier-Stokes computer. The graphical editor would support this programming process, verifying various programmer choices for correctness and deducing values such as pipeline delays and network configurations. Step by step details are provided and demonstrated with two example programs.

  5. Automatic Review of Abstract State Machines by Meta Property Verification

    NASA Technical Reports Server (NTRS)

    Arcaini, Paolo; Gargantini, Angelo; Riccobene, Elvinia

    2010-01-01

    A model review is a validation technique aimed at determining if a model is of sufficient quality and allows defects to be identified early in the system development, reducing the cost of fixing them. In this paper we propose a technique to perform automatic review of Abstract State Machine (ASM) formal specifications. We first detect a family of typical vulnerabilities and defects a developer can introduce during the modeling activity using the ASMs and we express such faults as the violation of meta-properties that guarantee certain quality attributes of the specification. These meta-properties are then mapped to temporal logic formulas and model checked for their violation. As a proof of concept, we also report the result of applying this ASM review process to several specifications.

  6. Formal modeling of virtual machines

    NASA Technical Reports Server (NTRS)

    Cremers, A. B.; Hibbard, T. N.

    1978-01-01

    Systematic software design can be based on the development of a 'hierarchy of virtual machines', each representing a 'level of abstraction' of the design process. The reported investigation presents the concept of 'data space' as a formal model for virtual machines. The presented model of a data space combines the notions of data type and mathematical machine to express the close interaction between data and control structures which takes place in a virtual machine. One of the main objectives of the investigation is to show that control-independent data type implementation is only of limited usefulness as an isolated tool of program development, and that the representation of data is generally dictated by the control context of a virtual machine. As a second objective, a better understanding is to be developed of virtual machine state structures than was heretofore provided by the view of the state space as a Cartesian product.

  7. Abstract Models of Probability

    NASA Astrophysics Data System (ADS)

    Maximov, V. M.

    2001-12-01

    Probability theory presents a mathematical formalization of intuitive ideas of independent events and of probability as a measure of randomness. It is based on axioms 1-5 of A.N. Kolmogorov [1] and their generalizations [2]. Different formalized refinements have been proposed for such notions as events, independence, random value, etc. [2,3], whereas the measure of randomness, i.e. numbers from [0,1], has remained unchanged. To be precise, we mention some attempts to generalize probability theory with negative probabilities [4]. On the other hand, physicists have tried to use negative and even complex values of probability to explain some paradoxes in quantum mechanics [5,6,7]. Only recently has the need to formalize quantum mechanics and its foundations [8] led to the construction of p-adic probabilities [9,10,11], which essentially extended our concept of probability and randomness. A natural question therefore arises: how can we describe algebraic structures whose elements can be used as a measure of randomness? As a consequence, it becomes necessary to define the types of randomness corresponding to every such algebraic structure. Possibly, this leads to a concept of randomness whose nature differs from the combinatorial-metric conception of Kolmogorov. Apparently, a discrepancy between the real type of randomness underlying some experimental data and the model of randomness used for data processing leads to paradoxes [12]. An algebraic structure whose elements can be used to estimate randomness will be called a probability set Φ. Naturally, the elements of Φ are the probabilities.

  8. Modelling Metamorphism by Abstract Interpretation

    NASA Astrophysics Data System (ADS)

    Dalla Preda, Mila; Giacobazzi, Roberto; Debray, Saumya; Coogan, Kevin; Townsend, Gregg M.

    Metamorphic malware apply semantics-preserving transformations to their own code in order to foil detection systems based on signature matching. In this paper we consider the problem of automatically extracting metamorphic signatures from such malware. We introduce a semantics for self-modifying code, called phase semantics, and prove its correctness by showing that it is an abstract interpretation of the standard trace semantics. Phase semantics precisely models metamorphic code behavior by providing the set of traces of programs that correspond to the possible evolutions of the metamorphic code during execution. We show that metamorphic signatures can be automatically extracted by abstract interpretation of the phase semantics, and that regular metamorphism can be modelled as a finite-state-automata abstraction of the phase semantics.

  9. Abstracted model for ceramic coating

    SciTech Connect

    Farmer, J C; Stockman, C

    1998-11-14

    Engineers are exploring several mechanisms to delay corrosive attack of the CAM (corrosion allowance material) by dripping water, including drip shields and ceramic coatings. Ceramic coatings deposited by high-velocity oxyfuel (HVOF) spraying have exhibited a porosity of only 2% at a thickness of 0.15 cm. The primary goal of this document is to provide a detailed description of an abstracted process-level model for Total System Performance Assessment (TSPA) that has been developed to account for the inhibition of corrosion by protective ceramic coatings. A second goal was to address as many of the issues raised during a recent peer review as possible (direct reaction of liquid water with carbon steel, stress corrosion cracking of the ceramic coating, bending stresses in coatings of finite thickness, limitations of simple correction factors, etc.). During the periods of dry oxidation (T ≥ 100°C) and humid-air corrosion (T ≤ 100°C & RH < 80%), it is assumed that the growth rate of oxide on the surface is diminished in proportion to the surface covered by solid ceramic. The mass transfer impedance imposed by a ceramic coating with gas-filled pores is assumed to be negligible. During the period of aqueous-phase corrosion (T ≤ 100°C & RH ≥ 80%), it is assumed that the overall mass transfer resistance governing the corrosion rate is due to the combined resistance of the ceramic coating & interfacial corrosion products. Two porosity models (simple cylinder & cylinder-sphere chain) are considered in estimating the mass transfer resistance of the ceramic coating. It is evident that substantial impedance to O₂ transport is encountered if pores are filled with liquid water. It may be possible to use a sealant to eliminate porosity. Spallation (rupture) of the ceramic coating is assumed to occur if the stress introduced by the expanding corrosion products at the ceramic-CAM interface exceeds the fracture stress. Since this model does not account for the possibility of

  10. Abstract models of molecular walkers

    NASA Astrophysics Data System (ADS)

    Semenov, Oleg

    Recent advances in single-molecule chemistry have led to designs for artificial multi-pedal walkers that follow tracks of chemicals. The walkers, called molecular spiders, consist of a rigid chemically inert body and several flexible enzymatic legs. The legs can reversibly bind to chemical substrates on a surface, and through their enzymatic action convert them to products. We study abstract models of molecular spiders to evaluate how efficiently they can perform two tasks: molecular transport of cargo over tracks and search for targets on finite surfaces. For the single-spider model our simulations show a transient behavior wherein certain spiders move superdiffusively over significant distances and times. This gives the spiders potential as a faster-than-diffusion transport mechanism. However, analysis shows that single-spider motion eventually decays into an ordinary diffusive motion, owing to the ever increasing size of the region of products. Inspired by cooperative behavior of natural molecular walkers, we propose a symmetric exclusion process (SEP) model for multiple walkers interacting as they move over a one-dimensional lattice. We show that when walkers are sequentially released from the origin, the collective effect is to prevent the leading walkers from moving too far backwards. Hence, there is an effective outward pressure on the leading walkers that keeps them moving superdiffusively for longer times. Despite this improvement the leading spider eventually slows down and moves diffusively, similarly to a single spider. The slowdown happens because all spiders behind the leading spiders never encounter substrates, and thus they are never biased. They cannot keep up with leading spiders, and cannot put enough pressure on them. Next, we investigate search properties of a single and multiple spiders moving over one- and two-dimensional surfaces with various absorbing and reflecting boundaries. For the single-spider model we evaluate by how much the

  11. Towards Compatible and Interderivable Semantic Specifications for the Scheme Programming Language, Part II: Reduction Semantics and Abstract Machines

    NASA Astrophysics Data System (ADS)

    Biernacka, Małgorzata; Danvy, Olivier

    We present a context-sensitive reduction semantics for a lambda-calculus with explicit substitutions and we show that the functional implementation of this small-step semantics mechanically corresponds to that of the abstract machine for Core Scheme presented by Clinger at PLDI’98, including first-class continuations. Starting from this reduction semantics, (1) we refocus it into a small-step abstract machine; (2) we fuse the transition function of this abstract machine with its driver loop, obtaining a big-step abstract machine which is staged; (3) we compress its corridor transitions, obtaining an eval/continue abstract machine; and (4) we unfold its ground closures, which yields an abstract machine that essentially coincides with Clinger’s machine. This lambda-calculus with explicit substitutions therefore aptly accounts for Core Scheme, including Clinger’s permutations and unpermutations.

  12. Paralation views: Abstractions for efficient scientific computing on the connection machine. Technical report

    SciTech Connect

    Goldman, K.J.

    1989-06-01

    An ideal parallel programming language for scientific applications should provide flexible abstraction mechanisms for writing organized and readable programs, encourage a modular programming style that permits using libraries of tested routines, and, above all, permit the programmer to write efficient programs for the target machine. These criteria are used to evaluate the languages Lisp, Connection Machine Lisp, and Paralation Lisp for writing scientific programs on the Connection Machine. As a vehicle for this exploration, the authors fix a particular non-trivial algorithm (LU decomposition with partial pivoting) and study code for implementing it in the three languages.

  13. Model-based machine learning.

    PubMed

    Bishop, Christopher M

    2013-02-13

    Several decades of research in the field of machine learning have resulted in a multitude of different algorithms for solving a broad range of problems. To tackle a new application, a researcher typically tries to map their problem onto one of these existing methods, often influenced by their familiarity with specific algorithms and by the availability of corresponding software implementations. In this study, we describe an alternative methodology for applying machine learning, in which a bespoke solution is formulated for each new application. The solution is expressed through a compact modelling language, and the corresponding custom machine learning code is then generated automatically. This model-based approach offers several major advantages, including the opportunity to create highly tailored models for specific scenarios, as well as rapid prototyping and comparison of a range of alternative models. Furthermore, newcomers to the field of machine learning do not have to learn about the huge range of traditional methods, but instead can focus their attention on understanding a single modelling environment. In this study, we show how probabilistic graphical models, coupled with efficient inference algorithms, provide a very flexible foundation for model-based machine learning, and we outline a large-scale commercial application of this framework involving tens of millions of users. We also describe the concept of probabilistic programming as a powerful software environment for model-based machine learning, and we discuss a specific probabilistic programming language called Infer.NET, which has been widely used in practical applications. PMID:23277612

  14. Integrating model abstraction into monitoring strategies

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This study was designed and performed to investigate the opportunities and benefits of integrating model abstraction techniques into monitoring strategies. The study focused on future applications of modeling to contingency planning and management of potential and actual contaminant release sites wi...

  15. SATURATED ZONE FLOW AND TRANSPORT MODEL ABSTRACTION

    SciTech Connect

    B.W. ARNOLD

    2004-10-27

    The purpose of the saturated zone (SZ) flow and transport model abstraction task is to provide radionuclide-transport simulation results for use in the total system performance assessment (TSPA) for license application (LA) calculations. This task includes assessment of uncertainty in parameters that pertain to both groundwater flow and radionuclide transport in the models used for this purpose. This model report documents the following: (1) The SZ transport abstraction model, which consists of a set of radionuclide breakthrough curves at the accessible environment for use in the TSPA-LA simulations of radionuclide releases into the biosphere. These radionuclide breakthrough curves contain information on radionuclide-transport times through the SZ. (2) The SZ one-dimensional (1-D) transport model, which is incorporated in the TSPA-LA model to simulate the transport, decay, and ingrowth of radionuclide decay chains in the SZ. (3) The analysis of uncertainty in groundwater-flow and radionuclide-transport input parameters for the SZ transport abstraction model and the SZ 1-D transport model. (4) The analysis of the background concentration of alpha-emitting species in the groundwater of the SZ.
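
    The abstraction includes decay and ingrowth along radionuclide decay chains. As a rough illustration of that ingredient only (not the TSPA-LA model itself; the chain, half-lives, and time step below are invented), a two-member chain can be integrated as follows.

    ```python
    import numpy as np

    # Minimal sketch (not the TSPA-LA model): decay and ingrowth for a
    # hypothetical two-member decay chain, parent -> daughter -> (stable),
    # integrated with a simple explicit Euler step. Half-lives are made up.

    half_life_parent = 1.0e4     # years (assumed)
    half_life_daughter = 5.0e3   # years (assumed)
    lam_p = np.log(2) / half_life_parent
    lam_d = np.log(2) / half_life_daughter

    n_parent, n_daughter = 1.0, 0.0   # initial inventories (arbitrary units)
    dt, t_end = 10.0, 1.0e5           # years

    t = 0.0
    while t < t_end:
        decay_p = lam_p * n_parent * dt
        decay_d = lam_d * n_daughter * dt
        n_parent -= decay_p
        n_daughter += decay_p - decay_d   # ingrowth from parent, minus own decay
        t += dt

    print(f"after {t_end:.0e} years: parent={n_parent:.3e}, daughter={n_daughter:.3e}")
    ```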

  16. Solicited abstract: Global hydrological modeling and models

    NASA Astrophysics Data System (ADS)

    Xu, Chong-Yu

    2010-05-01

    The origins of rainfall-runoff modeling in the broad sense can be found in the middle of the 19th century, arising in response to three types of engineering problems: (1) urban sewer design, (2) land reclamation drainage systems design, and (3) reservoir spillway design. Since then numerous empirical, conceptual and physically-based models have been developed, including event-based models using the unit hydrograph concept, Nash's linear reservoir models, the HBV model, TOPMODEL, the SHE model, etc. From the late 1980s, the evolution of global and continental-scale hydrology has placed new demands on hydrologic modellers. The macro-scale hydrological (global and regional scale) models were developed on the basis of the following motivations (Arnell, 1999). First, for a variety of operational and planning purposes, water resource managers responsible for large regions need to estimate the spatial variability of resources over large areas, at a spatial resolution finer than can be provided by observed data alone. Second, hydrologists and water managers are interested in the effects of land-use and climate variability and change over a large geographic domain. Third, there is an increasing need of using hydrologic models as a base to estimate point and non-point sources of pollution loading to streams. Fourth, hydrologists and atmospheric modellers have perceived weaknesses in the representation of hydrological processes in regional and global climate models, and developed global hydrological models to overcome the weaknesses of global climate models. Considerable progress in the development and application of global hydrological models has been achieved to date; however, large uncertainties still exist in model structure, including large-scale flow routing, parameterization, input data, etc. This presentation will focus on the global hydrological models, and the discussion includes (1) types of global hydrological models, (2) procedure of global hydrological model development

  17. Model Checking Abstract PLEXIL Programs with SMART

    NASA Technical Reports Server (NTRS)

    Siminiceanu, Radu I.

    2007-01-01

    We describe a method to automatically generate discrete-state models of abstract Plan Execution Interchange Language (PLEXIL) programs that can be analyzed using model checking tools. Starting from a high-level description of a PLEXIL program or a family of programs with common characteristics, the generator lays the framework that models the principles of program execution. The concrete parts of the program are not automatically generated, but require the modeler to introduce them by hand. As a case study, we generate models to verify properties of the PLEXIL macro constructs that are introduced as shorthand notation. After an exhaustive analysis, we conclude that the macro definitions obey the intended semantics and behave as expected, but contingent on a few specific requirements on the timing semantics of micro-steps in the concrete executive implementation.

  18. Hierarchical abstract semantic model for image classification

    NASA Astrophysics Data System (ADS)

    Ye, Zhipeng; Liu, Peng; Zhao, Wei; Tang, Xianglong

    2015-09-01

    Semantic gap limits the performance of bag-of-visual-words. To deal with this problem, a hierarchical abstract semantics method that builds abstract semantic layers, generates semantic visual vocabularies, measures semantic gap, and constructs classifiers using the Adaboost strategy is proposed. First, abstract semantic layers are proposed to narrow the semantic gap between visual features and their interpretation. Then semantic visual words are extracted as features to train semantic classifiers. One popular form of measurement is used to quantify the semantic gap. The Adaboost training strategy is used to combine weak classifiers into strong ones to further improve performance. For a testing image, the category is estimated layer-by-layer. Corresponding abstract hierarchical structures for popular datasets, including Caltech-101 and MSRC, are proposed for evaluation. The experimental results show that the proposed method is capable of narrowing semantic gaps effectively and performs better than other categorization methods.

  19. Memristor models for machine learning.

    PubMed

    Carbajal, Juan Pablo; Dambre, Joni; Hermans, Michiel; Schrauwen, Benjamin

    2015-03-01

    In the quest for alternatives to traditional complementary metal-oxide-semiconductor, it is being suggested that digital computing efficiency and power can be improved by matching the precision to the application. Many applications do not need the high precision that is being used today. In particular, large gains in area and power efficiency could be achieved by dedicated analog realizations of approximate computing engines. In this work we explore the use of memristor networks for analog approximate computation, based on a machine learning framework called reservoir computing. Most experimental investigations on the dynamics of memristors focus on their nonvolatile behavior. Hence, the volatility that is present in the developed technologies is usually unwanted and is not included in simulation models. In contrast, in reservoir computing, volatility is not only desirable but necessary. Therefore, in this work, we propose two different ways to incorporate it into memristor simulation models. The first is an extension of Strukov's model, and the second is an equivalent Wiener model approximation. We analyze and compare the dynamical properties of these models and discuss their implications for the memory and the nonlinear processing capacity of memristor networks. Our results indicate that device variability, increasingly causing problems in traditional computer design, is an asset in the context of reservoir computing. We conclude that although both models could lead to useful memristor-based reservoir computing systems, their computational performance will differ. Therefore, experimental modeling research is required for the development of accurate volatile memristor models.
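
    For readers unfamiliar with the baseline being extended, the sketch below shows the standard Strukov drift memristor with an additional state-decay term standing in for volatility. It is a generic illustration under assumed parameter values, not the extension or the Wiener approximation developed in the paper.

    ```python
    import numpy as np

    # Rough sketch, not the paper's model: the standard Strukov drift memristor,
    # dx/dt = mu_v * R_on / D**2 * i(t) for the normalized state x = w/D, with an
    # extra decay term -x/tau added here purely to illustrate how volatility
    # might enter the state equation. All parameter values are assumed.

    D = 10e-9                     # device thickness (m), assumed
    R_on, R_off = 100.0, 16e3     # ohm, assumed
    mu_v = 1e-14                  # ion mobility (m^2 V^-1 s^-1), assumed
    tau = 0.5                     # volatility time constant (s), assumed

    dt, t_end = 1e-5, 0.2
    n = int(t_end / dt)
    t = np.arange(n) * dt
    v = np.sin(2 * np.pi * 10 * t)    # sinusoidal drive voltage (V)
    x = 0.5                           # normalized internal state in [0, 1]

    for k in range(n):
        R = R_on * x + R_off * (1 - x)              # memristance
        i = v[k] / R                                # device current
        dx = mu_v * R_on / D**2 * i - x / tau       # drift term + assumed volatility
        x = float(np.clip(x + dx * dt, 0.0, 1.0))

    print("final normalized state:", x)
    ```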

  1. Rough set models of Physarum machines

    NASA Astrophysics Data System (ADS)

    Pancerz, Krzysztof; Schumann, Andrew

    2015-04-01

    In this paper, we consider transition system models of behaviour of Physarum machines in terms of rough set theory. A Physarum machine, a biological computing device implemented in the plasmodium of Physarum polycephalum (true slime mould), is a natural transition system. In the behaviour of Physarum machines, one can notice some ambiguity in Physarum motions that influences exact anticipation of states of machines in time. To model this ambiguity, we propose to use rough set models created over transition systems. Rough sets are an appropriate tool to deal with rough (ambiguous, imprecise) concepts in the universe of discourse.
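
    The rough set machinery referred to here can be illustrated generically: given an indiscernibility partition of the state space, a set of states is bracketed by lower and upper approximations. The sketch below uses an invented partition and target set and is not the paper's specific construction over Physarum transition systems.

    ```python
    # Generic rough-set machinery (not the paper's specific construction over
    # Physarum transition systems): given an indiscernibility partition of the
    # state space, a target set X is approximated from below and above.

    def rough_approximations(partition, X):
        """partition: list of sets (equivalence classes of indiscernible states);
        X: set of states of interest. Returns (lower, upper) approximations."""
        lower, upper = set(), set()
        for block in partition:
            if block <= X:            # class entirely inside X -> certainly in X
                lower |= block
            if block & X:             # class overlapping X -> possibly in X
                upper |= block
        return lower, upper

    # Hypothetical example: states grouped by indistinguishable behaviour.
    partition = [{"s0", "s1"}, {"s2"}, {"s3", "s4"}]
    X = {"s1", "s2", "s3", "s4"}       # states we want to characterise
    lower, upper = rough_approximations(partition, X)
    print("lower:", lower)             # {'s2', 's3', 's4'}
    print("upper:", upper)             # all five states
    print("boundary:", upper - lower)  # ambiguous region: {'s0', 's1'}
    ```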

  2. Relative Effectiveness of Titles, Abstracts, and Subject Headings for Machine Retrieval from the COMPENDEX Services

    ERIC Educational Resources Information Center

    Byrne, Jerry R.

    1975-01-01

    Investigated the relative merits of searching on titles, subject headings, abstracts, free-language terms, and combinations of these elements. The combination of titles and abstracts came the closest to 100 percent retrieval. (Author/PF)

  3. How Pupils Use a Model for Abstract Concepts in Genetics

    ERIC Educational Resources Information Center

    Venville, Grady; Donovan, Jenny

    2008-01-01

    The purpose of this research was to explore the way pupils of different age groups use a model to understand abstract concepts in genetics. Pupils from early childhood to late adolescence were taught about genes and DNA using an analogical model (the wool model) during their regular biology classes. Changing conceptual understandings of the…

  4. Dissipation and irreversibility for models of mechanochemical machines

    NASA Astrophysics Data System (ADS)

    Brown, Aidan; Sivak, David

    For biological systems to maintain order and achieve directed progress, they must overcome fluctuations so that reactions and processes proceed forwards more than they go in reverse. It is well known that some free energy dissipation is required to achieve irreversible forward progress, but the quantitative relationship between irreversibility and free energy dissipation is not well understood. Previous studies focused on either abstract calculations or detailed simulations that are difficult to generalize. We present results for mechanochemical models of molecular machines, exploring a range of model characteristics and behaviours. Our results describe how irreversibility and dissipation trade off in various situations, and how this trade-off can depend on details of the model. The irreversibility-dissipation trade-off points towards general principles of microscopic machine operation or process design. Our analysis identifies system parameters which can be controlled to bring performance to the Pareto frontier.

  5. Concrete Model Checking with Abstract Matching and Refinement

    NASA Technical Reports Server (NTRS)

    Pasareanu, Corina S.; Pelanek, Radek; Visser, Willem

    2005-01-01

    We propose an abstraction-based model checking method which relies on refinement of an under-approximation of the feasible behaviors of the system under analysis. The method preserves errors to safety properties, since all analyzed behaviors are feasible by definition. The method does not require an abstract transition relation to be generated, but instead executes the concrete transitions while storing abstract versions of the concrete states, as specified by a set of abstraction predicates. For each explored transition, the method checks, with the help of a theorem prover, whether there is any loss of precision introduced by abstraction. The results of these checks are used to decide termination or to refine the abstraction, by generating new abstraction predicates. If the (possibly infinite) concrete system under analysis has a finite bisimulation quotient, then the method is guaranteed to eventually explore an equivalent finite bisimilar structure. We illustrate the application of the approach for checking concurrent programs. We also show how a lightweight variant can be used for efficient software testing.
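
    A much-simplified sketch of the core exploration loop is given below: concrete states are executed, but matching is done on their abstract versions as defined by a set of predicates. The theorem-prover precision checks and the predicate refinement step are omitted, and the example system and predicates are invented.

    ```python
    from collections import deque

    # Simplified sketch of "concrete execution with abstract matching" (the
    # precision checks and predicate refinement described in the paper are
    # omitted): concrete states are explored, but a state is pruned when another
    # state with the same abstract signature has already been visited.

    def explore(initial, successors, predicates, is_error):
        """successors(s) -> iterable of concrete successor states;
        predicates: list of functions s -> bool used as the abstraction;
        is_error(s) -> bool for the safety property."""
        abstract = lambda s: tuple(p(s) for p in predicates)
        seen = {abstract(initial)}
        queue = deque([initial])
        while queue:
            s = queue.popleft()
            if is_error(s):
                return s                  # concrete (hence feasible) error state
            for t in successors(s):
                a = abstract(t)
                if a not in seen:         # abstract matching
                    seen.add(a)
                    queue.append(t)
        return None

    # Hypothetical example: a counter stepping by 1 or 2 up to 12; the "error"
    # is exceeding 10.  With the coarse predicates below the search terminates
    # without reaching the error, which is precisely the situation in which the
    # full method would refine the abstraction by adding new predicates.
    succ = lambda s: [s + 1, s + 2] if s < 12 else []
    preds = [lambda s: s > 10, lambda s: s % 2 == 0]
    print(explore(0, succ, preds, is_error=lambda s: s > 10))   # None -> refine
    ```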

  6. Model abstraction results using state-space system identifications

    NASA Astrophysics Data System (ADS)

    Popken, Douglas A.

    2000-06-01

    In this paper we report on state-space system identification approaches to dynamic behavioral abstraction of military simulation models. Two stochastic simulation models were identified under a variety of scenarios. The `Attrition Simulation' is a model of two opposing forces with multiple weapon system types. The `Mission Simulation' is a model of a squadron of aircraft performing battlefield air interdiction. Four system identification techniques: Maximum Entropy, Compartmental Models, Canonical State-Space Models, and Hidden Markov Models (HMM), were applied to these simulation models. The system identification techniques were evaluated on how well their resulting abstractions replicated the distributions of the simulation states as well as the decision outputs. Encouraging results were achieved by the HMM technique applied to the Attrition Simulation--and by the Maximum Entropy technique applied to the Mission Simulation.

  7. Abstract of the Development of a Theoretical Basis for Machine Aids for Translation from Hebrew to English.

    ERIC Educational Resources Information Center

    Price, James D.

    1969-01-01

    Chapter I, an introduction to machine translation of languages, contains a simplified description of electronic computing machines, and a discussion of the advantages and disadvantages of machine translation research. A historical background of machine translation of languages is given, together with a description of various machine translation…

  8. Of Models and Machines: Implementing Bounded Rationality.

    PubMed

    Dick, Stephanie

    2015-09-01

    This essay explores the early history of Herbert Simon's principle of bounded rationality in the context of his Artificial Intelligence research in the mid 1950s. It focuses in particular on how Simon and his colleagues at the RAND Corporation translated a model of human reasoning into a computer program, the Logic Theory Machine. They were motivated by a belief that computers and minds were the same kind of thing--namely, information-processing systems. The Logic Theory Machine program was a model of how people solved problems in elementary mathematical logic. However, in making this model actually run on their 1950s computer, the JOHNNIAC, Simon and his colleagues had to navigate many obstacles and material constraints quite foreign to the human experience of logic. They crafted new tools and engaged in new practices that accommodated the affordances of their machine, rather than reflecting the character of human cognition and its bounds. The essay argues that tracking this implementation effort shows that "internal" cognitive practices and "external" tools and materials are not so easily separated as they are in Simon's principle of bounded rationality--the latter often shaping the dynamics of the former. PMID:26685521

  9. Abstracts of the symposium on unsaturated flow and transport modeling

    SciTech Connect

    Not Available

    1982-03-01

    Abstract titles are: Recent developments in modeling variably saturated flow and transport; Unsaturated flow modeling as applied to field problems; Coupled heat and moisture transport in unsaturated soils; Influence of climatic parameters on movement of radionuclides in a multilayered saturated-unsaturated media; Modeling water and solute transport in soil containing roots; Simulation of consolidation in partially saturated soil materials; modeling of water and solute transport in unsaturated heterogeneous fields; Fluid dynamics and mass transfer in variably-saturated porous media; Solute transport through soils; One-dimensional analytical transport modeling; Convective transport of ideal tracers in unsaturated soils; Chemical transport in macropore-mesopore media under partially saturated conditions; Influence of the tension-saturated zone on contaminant migration in shallow water regimes; Influence of the spatial distribution of velocities in porous media on the form of solute transport; Stochastic vs deterministic models for solute movement in the field; and Stochastic analysis of flow and solute transport. (DMC)

  10. Modelling the influence of irrigation abstractions on Scotland's water resources.

    PubMed

    Dunn, S M; Chalmers, N; Stalham, M; Lilly, A; Crabtree, B; Johnston, L

    2003-01-01

    Legislation to control abstraction of water in Scotland is limited and for purposes such as irrigation there are no restrictions in place over most of the country. This situation is set to change with implementation of the European Water Framework Directive. As a first step towards the development of appropriate policy for irrigation control there is a need to assess the current scale of irrigation practices in Scotland. This paper presents a modelling approach that has been used to quantify spatially the volume of water abstractions across the country for irrigation of potato crops under typical climatic conditions. A water balance model was developed to calculate soil moisture deficits and identify the potential need for irrigation. The results were then combined with spatial data on potato cropping and integrated to the sub-catchment scale to identify the river systems most at risk from over-abstraction. The results highlight that the areas that have greatest need for irrigation of potatoes are all concentrated in the central east-coast area of Scotland. The difference between irrigation demand in wet and dry years is very significant, although spatial patterns of the distribution are similar.
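
    The water balance idea can be illustrated with a toy daily soil-moisture-deficit calculation; the figures below are invented and the model is far cruder than the one used in the paper, but it shows how rainfall, evapotranspiration, and a crop-specific trigger combine to yield an irrigation demand.

    ```python
    # Toy daily water balance (not the authors' model, all numbers invented):
    # the soil moisture deficit (SMD) grows with evapotranspiration and shrinks
    # with rainfall; irrigation is triggered when SMD exceeds a crop threshold.

    rainfall = [0.0, 2.0, 0.0, 0.0, 1.0, 0.0, 0.0, 5.0, 0.0, 0.0]   # mm/day
    et       = [3.0, 3.5, 4.0, 4.0, 3.0, 4.5, 4.0, 3.0, 4.0, 4.5]   # mm/day
    trigger_smd = 12.0        # mm, assumed irrigation trigger for potatoes
    application = 10.0        # mm applied per irrigation event, assumed

    smd, irrigation_total = 0.0, 0.0
    for rain, evap in zip(rainfall, et):
        smd = max(0.0, smd + evap - rain)   # deficit cannot go negative
        if smd > trigger_smd:
            smd -= application
            irrigation_total += application

    print(f"total irrigation demand over period: {irrigation_total} mm")
    ```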

  11. Situation models, mental simulations, and abstract concepts in discourse comprehension.

    PubMed

    Zwaan, Rolf A

    2016-08-01

    This article sets out to examine the role of symbolic and sensorimotor representations in discourse comprehension. It starts out with a review of the literature on situation models, showing how mental representations are constrained by linguistic and situational factors. These ideas are then extended to more explicitly include sensorimotor representations. Following Zwaan and Madden (2005), the author argues that sensorimotor and symbolic representations mutually constrain each other in discourse comprehension. These ideas are then developed further to propose two roles for abstract concepts in discourse comprehension. It is argued that they serve as pointers in memory, used (1) cataphorically, to integrate upcoming information into a sensorimotor simulation, or (2) anaphorically, to integrate previously presented information into a sensorimotor simulation. In either case, the sensorimotor representation is a specific instantiation of the abstract concept.

  12. Exploiting mid-range DNA patterns for sequence classification: binary abstraction Markov models.

    PubMed

    Shepard, Samuel S; McSweeny, Andrew; Serpen, Gursel; Fedorov, Alexei

    2012-06-01

    Messenger RNA sequences possess specific nucleotide patterns distinguishing them from non-coding genomic sequences. In this study, we explore the utilization of modified Markov models to analyze sequences up to 44 bp, far beyond the 8-bp limit of conventional Markov models, for exon/intron discrimination. In order to analyze nucleotide sequences of this length, their information content is first reduced by conversion into shorter binary patterns via the application of numerous abstraction schemes. After the conversion of genomic sequences to binary strings, homogeneous Markov models trained on the binary sequences are used to discriminate between exons and introns. We term this approach the Binary Abstraction Markov Model (BAMM). High-quality abstraction schemes for exon/intron discrimination are selected using optimization algorithms on supercomputers. The best MM classifiers are then combined using support vector machines into a single classifier. With this approach, over 95% classification accuracy is achieved without taking reading frame into account. With further development, the BAMM approach can be applied to sequences lacking the genetic code such as ncRNAs and 5'-untranslated regions. PMID:22344692
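
    A minimal sketch of the binary-abstraction idea follows, assuming a purine/pyrimidine abstraction scheme (one of many possible schemes; the paper selects optimized schemes on supercomputers and combines classifiers with support vector machines, neither of which is reproduced here), an order-4 Markov model per class, and classification by log-likelihood ratio.

    ```python
    from collections import defaultdict
    import math

    # Sketch of the binary-abstraction idea: abstract nucleotides to a binary
    # alphabet, fit one order-k Markov model per class on the binary strings,
    # and classify a new sequence by log-likelihood ratio.

    ORDER = 4
    ABSTRACTION = {"A": "1", "G": "1", "C": "0", "T": "0"}   # purine/pyrimidine (assumed scheme)

    def to_binary(seq):
        return "".join(ABSTRACTION[b] for b in seq)

    def train(binary_seqs, k=ORDER):
        counts = defaultdict(lambda: defaultdict(int))
        for s in binary_seqs:
            for i in range(k, len(s)):
                counts[s[i - k:i]][s[i]] += 1
        return counts

    def log_likelihood(s, counts, k=ORDER):
        ll = 0.0
        for i in range(k, len(s)):
            ctx, sym = s[i - k:i], s[i]
            total = sum(counts[ctx].values())
            ll += math.log((counts[ctx][sym] + 1) / (total + 2))   # Laplace smoothing
        return ll

    # Hypothetical training data; real use would employ annotated exons/introns.
    exons = [to_binary("ATGGCGGCGATCGATCGGCGGCTAGCTAGGCG")]
    introns = [to_binary("GTAAGTTTTTATATATATTTTTCTCTCAGATT")]
    exon_mm, intron_mm = train(exons), train(introns)

    test = to_binary("ATGGCGGCTAGCGATCGGCG")
    score = log_likelihood(test, exon_mm) - log_likelihood(test, intron_mm)
    print("classified as:", "exon" if score > 0 else "intron")
    ```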

  13. Uncovering protein interaction in abstracts and text using a novel linear model and word proximity networks

    PubMed Central

    Abi-Haidar, Alaa; Kaur, Jasleen; Maguitman, Ana; Radivojac, Predrag; Rechtsteiner, Andreas; Verspoor, Karin; Wang, Zhiping; Rocha, Luis M

    2008-01-01

    Background: We participated in three of the protein-protein interaction subtasks of the Second BioCreative Challenge: classification of abstracts relevant for protein-protein interaction (interaction article subtask [IAS]), discovery of protein pairs (interaction pair subtask [IPS]), and identification of text passages characterizing protein interaction (interaction sentences subtask [ISS]) in full-text documents. We approached the abstract classification task with a novel, lightweight linear model inspired by spam detection techniques, as well as an uncertainty-based integration scheme. We also used a support vector machine and singular value decomposition on the same features for comparison purposes. Our approach to the full-text subtasks (protein pair and passage identification) includes a feature expansion method based on word proximity networks. Results: Our approach to the abstract classification task (IAS) was among the top submissions for this task in terms of measures of performance used in the challenge evaluation (accuracy, F-score, and area under the receiver operating characteristic curve). We also report on a web tool that we produced using our approach: the Protein Interaction Abstract Relevance Evaluator (PIARE). Our approach to the full-text tasks resulted in one of the highest recall rates as well as mean reciprocal rank of correct passages. Conclusion: Our approach to abstract classification shows that a simple linear model, using relatively few features, can generalize and uncover the conceptual nature of protein-protein interactions from the bibliome. Because the novel approach is based on a rather lightweight linear model, it can easily be ported and applied to similar problems. In full-text problems, the expansion of word features with word proximity networks is shown to be useful, although the need for some improvements is discussed. PMID:18834489

  14. Entity-Centric Abstraction and Modeling Framework for Transportation Architectures

    NASA Technical Reports Server (NTRS)

    Lewe, Jung-Ho; DeLaurentis, Daniel A.; Mavris, Dimitri N.; Schrage, Daniel P.

    2007-01-01

    A comprehensive framework for representing transportation architectures is presented. After discussing a series of preceding perspectives and formulations, the intellectual underpinning of the novel framework using an entity-centric abstraction of transportation is described. The entities include endogenous and exogenous factors, and functional expressions are offered that relate these and their evolution. The end result is a Transportation Architecture Field which permits analysis of future concepts under a holistic perspective. A simulation model which stems from the framework is presented and exercised, producing results which quantify improvements in air transportation due to advanced aircraft technologies. Finally, a modeling hypothesis and its accompanying criteria are proposed to test further use of the framework for evaluating new transportation solutions.

  15. Modeling and analysis of pulse electrochemical machining

    NASA Astrophysics Data System (ADS)

    Wei, Bin

    Pulse Electrochemical Machining (PECM) is a potentially cost effective technology meeting the increasing needs of precision manufacturing of superalloys, like titanium alloys, into complex shapes such as turbine airfoils. This dissertation reports: (1) an assessment of the worldwide state-of-the-art PECM research and industrial practice; (2) PECM process model development; (3) PECM of a superalloy (Ti-6Al-4V); and (4) key issues in future PECM research. The assessment focuses on identifying dimensional control problems with continuous ECM and how PECM can offer a solution. Previous research on PECM system design, process mechanisms, and dimensional control is analysed, leading to a clearer understanding of key issues in PECM development such as process characterization and modeling. New interelectrode gap dynamic models describing the gap evolution with time are developed for different PECM processes with an emphasis on the frontal gaps and a typical two-dimensional case. A 'PECM cosine principle' and several tool design formulae are also derived. PECM processes are characterized using concepts such as quasi-equilibrium gap and dissolution localization. Process simulation is performed to evaluate the effects of process inputs on dimensional accuracy control. Analysis is made on three types (single-phase, homogeneous, and inhomogeneous) of models concerning the physical processes (such as the electrolyte flow, Joule heating, and bubble generation) in the interelectrode gap. A physical model is introduced for the PECM with short pulses, which addresses the effect of electrolyte conductivity change on anodic dissolution. PECM of the titanium alloy is studied from a new perspective on the pulsating current's influence on surface quality and dimension control. An experimental methodology is developed to acquire instantaneous currents and to accurately measure the coefficient of machinability. The influence of pulse parameters on the surface passivation is explained based

  16. Information Model for Machine-Tool-Performance Tests

    PubMed Central

    Lee, Y. Tina; Soons, Johannes A.; Donmez, M. Alkan

    2001-01-01

    This report specifies an information model of machine-tool-performance tests in the EXPRESS [1] language. The information model provides a mechanism for describing the properties and results of machine-tool-performance tests. The objective of the information model is a standardized, computer-interpretable representation that allows for efficient archiving and exchange of performance test data throughout the life cycle of the machine. The report also demonstrates the implementation of the information model using three different implementation methods. PMID:27500031

  17. Prototype-based models in machine learning.

    PubMed

    Biehl, Michael; Hammer, Barbara; Villmann, Thomas

    2016-01-01

    An overview is given of prototype-based models in machine learning. In this framework, observations, i.e., data, are stored in terms of typical representatives. Together with a suitable measure of similarity, the systems can be employed in the context of unsupervised and supervised analysis of potentially high-dimensional, complex datasets. We discuss basic schemes of competitive vector quantization as well as the so-called neural gas approach and Kohonen's topology-preserving self-organizing map. Supervised learning in prototype systems is exemplified in terms of learning vector quantization. Most frequently, the familiar Euclidean distance serves as a dissimilarity measure. We present extensions of the framework to nonstandard measures and give an introduction to the use of adaptive distances in relevance learning.
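
    As a concrete instance of the prototype-based framework, the sketch below implements a basic LVQ1 update rule on invented two-dimensional data: the nearest prototype is attracted to a correctly labelled sample and repelled otherwise. It is a minimal illustration, not code from the review.

    ```python
    import numpy as np

    # Minimal LVQ1 sketch (one of the prototype-based schemes surveyed): each
    # class is represented by a prototype vector; the nearest prototype is pulled
    # towards a correctly classified sample and pushed away otherwise.

    rng = np.random.default_rng(0)

    # Hypothetical 2-D data: two Gaussian blobs.
    X = np.vstack([rng.normal([0, 0], 0.5, (50, 2)),
                   rng.normal([3, 3], 0.5, (50, 2))])
    y = np.array([0] * 50 + [1] * 50)

    prototypes = np.array([[0.5, 0.5], [2.5, 2.5]], dtype=float)   # one per class
    proto_labels = np.array([0, 1])
    lr = 0.05

    for epoch in range(20):
        for xi, yi in zip(X, y):
            j = np.argmin(np.linalg.norm(prototypes - xi, axis=1))  # winner
            sign = 1.0 if proto_labels[j] == yi else -1.0            # attract/repel
            prototypes[j] += sign * lr * (xi - prototypes[j])

    pred = proto_labels[np.argmin(np.linalg.norm(
        prototypes[None, :, :] - X[:, None, :], axis=2), axis=1)]
    print("training accuracy:", (pred == y).mean())
    ```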

  18. Robustness of thermal error compensation model of CNC machine tool

    NASA Astrophysics Data System (ADS)

    Lang, Xianli; Miao, Enming; Gong, Yayun; Niu, Pengcheng; Xu, Zhishang

    2013-01-01

    Thermal error is the major factor restricting the accuracy of CNC machining. Modeling accuracy is the key to thermal error compensation, which enables precision machining on a CNC machine tool. Traditional thermal error compensation models mostly focus on fitting accuracy without considering the robustness of the models, which makes it difficult to put the research results into practice. In this paper, model robustness experiments are conducted at different spindle speeds on a Leaderway V-450 machine tool. Fuzzy clustering combined with grey relevance analysis is used to select temperature-sensitive points for the thermal error. A multiple linear regression model (MLR) and a distributed lag model (DL) are established from the multi-batch experimental data and then analysed for robustness; the analysis demonstrates the difference between fitting precision and prediction precision in engineering application, and provides a reference method for choosing a thermal error compensation model for CNC machine tools in practical engineering applications.
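
    The regression part of such a compensation model is straightforward to sketch. The example below fits a multiple linear regression of thermal error on a few temperature-sensitive points using synthetic data (all coefficients and noise levels invented, not the V-450 measurements) and evaluates it on a second batch, illustrating the distinction between fitting precision and prediction precision that the paper emphasizes.

    ```python
    import numpy as np

    # Sketch of the multiple linear regression (MLR) compensation idea with
    # synthetic data (not the V-450 measurements): thermal error is regressed on
    # a few temperature-sensitive points, then the fitted model is applied to a
    # second batch to illustrate the fitting-vs-prediction distinction.

    rng = np.random.default_rng(1)

    def make_batch(n=60, drift=0.0):
        T = rng.uniform(20, 45, (n, 3))                     # temperatures at 3 points
        error = 1.2 * T[:, 0] - 0.4 * T[:, 1] + 0.1 * T[:, 2] + 5.0
        return T, error + rng.normal(0, 0.5 + drift, n)     # microns, invented noise

    T1, e1 = make_batch()                  # batch used for fitting
    T2, e2 = make_batch(drift=0.5)         # later batch with different conditions

    A = np.hstack([T1, np.ones((len(T1), 1))])
    coef, *_ = np.linalg.lstsq(A, e1, rcond=None)

    def rmse(T, e):
        pred = np.hstack([T, np.ones((len(T), 1))]) @ coef
        return np.sqrt(np.mean((pred - e) ** 2))

    print("fitting RMSE   :", rmse(T1, e1))
    print("prediction RMSE:", rmse(T2, e2))   # typically larger -> robustness matters
    ```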

  19. Selected translated abstracts of Russian-language climate-change publications. 4: General circulation models

    SciTech Connect

    Burtis, M.D.; Razuvaev, V.N.; Sivachok, S.G.

    1996-10-01

    This report presents English-translated abstracts of important Russian-language literature concerning general circulation models as they relate to climate change. In addition to the bibliographic citations and abstracts translated into English, this report presents the original citations and abstracts in Russian. Author and title indexes are included to assist the reader in locating abstracts of particular interest.

  1. Modeling of cumulative tool wear in machining metal matrix composites

    SciTech Connect

    Hung, N.P.; Tan, V.K.; Oon, B.E.

    1995-12-31

    Metal matrix composites (MMCs) are notorious for their low machinability because of the abrasive and brittle reinforcement. Although a near-net-shape product can be produced, finish machining is still required for the final shape and dimension. The classical Taylor's tool life equation that relates tool life and cutting conditions has traditionally been used to study machinability. The turning operation is commonly used to investigate the machinability of a material; tedious and costly milling experiments have to be performed separately, while a facing test is not applicable to Taylor's model since the facing speed varies as the tool moves radially. Collecting intensive machining data for MMCs is often difficult because of the constraints on size, cost of the material, and the availability of sophisticated machine tools. A more flexible model and machinability testing technique are, therefore, sought. This study presents and verifies new models for turning, facing, and milling operations. Different cutting conditions were utilized to assess the machinability of MMCs reinforced with silicon carbide or alumina particles. Experimental data show that tool wear does not depend on the order of different cutting speeds since abrasion is the main wear mechanism. Correlation between data for turning, milling, and facing is presented. It is more economical to rank machinability using data for facing and then to convert the data for turning and milling, if required. Subsurface damages such as work-hardened and cracked matrix alloy, and fractured and delaminated particles are discussed.
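
    For reference, Taylor's tool life equation relates cutting speed V and tool life T through V·T^n = C. The worked example below uses assumed values of n and C (not data for the composites studied) simply to show how strongly tool life depends on cutting speed.

    ```python
    # Taylor's tool life equation, V * T**n = C, with illustrative constants
    # (n and C below are assumed, not values for the composites in the paper).

    n, C = 0.25, 350.0            # typical-order values for carbide tooling, assumed

    def tool_life(V, n=n, C=C):
        """Tool life T (min) at cutting speed V (m/min) from V * T**n = C."""
        return (C / V) ** (1.0 / n)

    for V in (100.0, 150.0, 200.0):
        print(f"V = {V:5.1f} m/min  ->  T = {tool_life(V):8.1f} min")
    ```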

  2. Modeling the dynamics of the chassis of construction machines

    NASA Astrophysics Data System (ADS)

    Sakhapov, R. L.; Nikolaeva, R. V.; Gatiyatullin, M. H.; Makhmutov, M. M.

    2016-08-01

    The article presents the results of a study of the transfer functions of a construction machine as a complex dynamic system. The authors constructed a dynamic model of a construction machine. The paper formulates and solves a system of nonlinear differential equations of motion of the chassis system of a construction machine on the basis of the d'Alembert-Lagrange equation. The numerical values of the transfer function coefficients for the construction machines were determined from the experimentally obtained acceleration curves, processed by the area method. The authors determined the experimental curves of the transition process of the chassis system of a construction machine. The results of the study show that the difference between the ordinates of the source curves and the calculated transients is less than 4% on average, which indicates a fairly accurate description of the process. The resulting expressions of the transfer functions of the chassis system can, with sufficient precision, be used for practical purposes in the design and development of new construction machines.

  3. Limit model of electrochemical dimensional machining of metals

    NASA Astrophysics Data System (ADS)

    Zhitnikov, V. P.; Oshmarina, E. M.; Porechny, S. S.; Fedorova, G. I.

    2014-07-01

    The method of precision electrochemical machining is studied by using a model in which the current output has the form of a step function of current density. The problems of maximum stationary and quasistationary machining are formulated and solved, which made it possible to study the nonstationary process with sufficient accuracy.

  4. Two-Stage Machine Learning model for guideline development.

    PubMed

    Mani, S; Shankle, W R; Dick, M B; Pazzani, M J

    1999-05-01

    We present a Two-Stage Machine Learning (ML) model as a data mining method to develop practice guidelines and apply it to the problem of dementia staging. Dementia staging in clinical settings is at present complex and highly subjective because of the ambiguities and the complicated nature of existing guidelines. Our model abstracts the two-stage process used by physicians to arrive at the global Clinical Dementia Rating Scale (CDRS) score. The model incorporates learning intermediate concepts (CDRS category scores) in the first stage that then become the feature space for the second stage (global CDRS score). The sample consisted of 678 patients evaluated in the Alzheimer's Disease Research Center at the University of California, Irvine. The demographic variables, functional and cognitive test results used by physicians for the task of dementia severity staging were used as input to the machine learning algorithms. Decision tree learners and rule inducers (C4.5, Cart, C4.5 rules) were selected for our study as they give expressive models, and Naive Bayes was used as a baseline algorithm for comparison purposes. We first learned the six CDRS category scores (memory, orientation, judgement and problem solving, personal care, home and hobbies, and community affairs). These learned CDRS category scores were then used to learn the global CDRS scores. The Two-Stage ML model classified as well as or better than the published inter-rater agreements for both the category and global CDRS scoring by dementia experts. Furthermore, for the most critical distinction, normal versus very mildly impaired, the Two-Stage ML model was 28.1 and 6.6% more accurate than published performances by domain experts. Our study of the CDRS examined one of the largest, most diverse samples in the literature, suggesting that our findings are robust. The Two-Stage ML model also identified a CDRS category, Judgment and Problem Solving, which has low classification accuracy similar to published
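
    The two-stage structure can be sketched on synthetic data: stage one learns each intermediate category score from the raw features, and stage two learns the global score from the stage-one outputs. The code below is only an illustration of that structure (made-up data and plain decision trees, not the C4.5/CART/Naive Bayes setup or the ADRC sample used in the study).

    ```python
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    # Sketch of a two-stage model on synthetic data: stage 1 learns each
    # intermediate category score from raw features, stage 2 learns the global
    # score from the stage-1 outputs. All data and relations are invented.

    rng = np.random.default_rng(0)
    n, n_features, n_categories = 400, 10, 6

    X = rng.normal(size=(n, n_features))
    # Invented ground truth: each category score depends on a couple of features;
    # the global score is (roughly) the rounded mean of the category scores.
    cats = np.stack([(X[:, i] + X[:, (i + 1) % n_features] > 0).astype(int) +
                     (X[:, i] > 1).astype(int) for i in range(n_categories)], axis=1)
    global_score = np.clip(np.round(cats.mean(axis=1)), 0, 2).astype(int)

    train, test = slice(0, 300), slice(300, n)

    # Stage 1: one classifier per intermediate category score.
    stage1 = [DecisionTreeClassifier(max_depth=4).fit(X[train], cats[train, j])
              for j in range(n_categories)]
    cats_pred = np.stack([clf.predict(X) for clf in stage1], axis=1)

    # Stage 2: global score from the predicted category scores.
    stage2 = DecisionTreeClassifier(max_depth=4).fit(cats_pred[train], global_score[train])
    accuracy = (stage2.predict(cats_pred[test]) == global_score[test]).mean()
    print("two-stage test accuracy:", accuracy)
    ```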

  5. Model Machine Shop for Drafting Instruction.

    ERIC Educational Resources Information Center

    Jackson, Carl R.

    The development and implementation of a two-year interdisciplinary course integrating a machine shop and drafting curriculum are described in the report. The purpose of the course is to provide a learning process in industrial drafting featuring identifiable orientation in skills that will enable the student to develop competencies that are…

  6. Developing a PLC-friendly state machine model: lessons learned

    NASA Astrophysics Data System (ADS)

    Pessemier, Wim; Deconinck, Geert; Raskin, Gert; Saey, Philippe; Van Winckel, Hans

    2014-07-01

    Modern Programmable Logic Controllers (PLCs) have become an attractive platform for controlling real-time aspects of astronomical telescopes and instruments due to their increased versatility, performance and standardization. Likewise, vendor-neutral middleware technologies such as OPC Unified Architecture (OPC UA) have recently demonstrated that they can greatly facilitate the integration of these industrial platforms into the overall control system. Many practical questions arise, however, when building multi-tiered control systems that consist of PLCs for low level control, and conventional software and platforms for higher level control. How should the PLC software be structured, so that it can rely on well-known programming paradigms on the one hand, and be mapped to a well-organized OPC UA interface on the other hand? Which programming languages of the IEC 61131-3 standard closely match the problem domains of the abstraction levels within this structure? How can the recent additions to the standard (such as the support for namespaces and object-oriented extensions) facilitate a model based development approach? To what degree can our applications already take advantage of the more advanced parts of the OPC UA standard, such as the high expressiveness of the semantic modeling language that it defines, or the support for events, aggregation of data, automatic discovery, ... ? What are the timing and concurrency problems to be expected for the higher level tiers of the control system due to the cyclic execution of control and communication tasks by the PLCs? We try to answer these questions by demonstrating a semantic state machine model that can readily be implemented using IEC 61131 and OPC UA. One that does not aim to capture all possible states of a system, but rather one that attempts to organize the coarse-grained structure and behaviour of a system. In this paper we focus on the intricacies of this seemingly simple task, and on the lessons that we

  7. Context in Models of Human-Machine Systems

    NASA Technical Reports Server (NTRS)

    Callantine, Todd J.; Null, Cynthia H. (Technical Monitor)

    1998-01-01

    All human-machine systems models represent context. This paper proposes a theory of context through which models may be usefully related and integrated for design. The paper presents examples of context representation in various models, describes an application to developing models for the Crew Activity Tracking System (CATS), and advances context as a foundation for integrated design of complex dynamic systems.

  8. Error modeling for tailored blank laser welding machine

    NASA Astrophysics Data System (ADS)

    Xin, Liming; Xu, Zhigang; Zhao, Mingyang; Zhu, Tianxu

    2008-12-01

    This paper introduces research on error modeling of a tailored blank laser welding machine with four linear axes. The error models are established on the basis of multi-body system (MBS) theory as developed in this paper. Number arrays of low-order bodies are used to describe the topological structure that generalizes and refines the MBS, and characteristic matrices are employed to represent the relative positions and orientations between any two bodies in the MBS. A position-error function, which reflects the influence of each error source on the positioning error of the machine tool, is given to describe the transmission error of the machine in detail. Based on this method, the paper puts forward the error model of the tailored blank laser welding machine. Measurement and evaluation of the error parameters begin after the error modeling of the machine is complete. A Leica laser tracker is used to measure the errors of the machine and to check the result of the error model.
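
    The abstract describes composing per-axis error contributions through multi-body characteristic matrices. The sketch below illustrates that kind of composition with 4x4 homogeneous transforms in NumPy, assuming three stacked linear axes and made-up error values; it is not the paper's model of the four-axis welding machine.

```python
# Sketch: positioning error of a multi-axis machine as a chain of 4x4
# homogeneous transforms (multi-body system style). Each axis contributes a
# nominal translation plus small, illustrative error terms (linear and angular).
import numpy as np

def translation(v):
    T = np.eye(4)
    T[:3, 3] = v
    return T

def small_rotation(rx, ry, rz):
    # First-order rotation matrix for small angular errors (rad).
    R = np.eye(4)
    R[:3, :3] = np.array([[1, -rz, ry],
                          [rz, 1, -rx],
                          [-ry, rx, 1]])
    return R

def axis_transform(nominal, lin_err, ang_err):
    # Nominal axis motion followed by its linear and angular error terms.
    return translation(nominal) @ translation(lin_err) @ small_rotation(*ang_err)

# Example: three stacked linear axes (X, Y, Z) with made-up error values (mm, rad).
T = np.eye(4)
T = T @ axis_transform([100.0, 0, 0], [0.01, 0.002, 0.0], [0, 0, 5e-5])
T = T @ axis_transform([0, 50.0, 0],  [0.0, 0.008, 0.001], [2e-5, 0, 0])
T = T @ axis_transform([0, 0, -20.0], [0.0, 0.0, 0.005],   [0, 1e-5, 0])

tool_point = T @ np.array([0.0, 0.0, 0.0, 1.0])
nominal_point = np.array([100.0, 50.0, -20.0, 1.0])
print("position error (mm):", tool_point[:3] - nominal_point[:3])
```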

  9. Predicting Market Impact Costs Using Nonparametric Machine Learning Models.

    PubMed

    Park, Saerom; Lee, Jaewook; Son, Youngdoo

    2016-01-01

    Market impact cost is the most significant portion of implicit transaction costs that can reduce the overall transaction cost, although it cannot be measured directly. In this paper, we employed state-of-the-art nonparametric machine learning models: neural networks, Bayesian neural networks, Gaussian processes, and support vector regression, to predict market impact cost accurately and to provide a predictive model that is versatile in the number of variables. We collected a large amount of real single-transaction data of the US stock market from a Bloomberg Terminal and generated three independent input variables. As a result, most nonparametric machine learning models outperformed a state-of-the-art benchmark parametric model, the I-star model, in four error measures. Although these models encounter certain difficulties in separating the permanent and temporary cost directly, nonparametric machine learning models can be good alternatives for reducing transaction costs by considerably improving prediction performance.
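
    As a rough illustration of the model comparison described, the following sketch fits the same families of nonparametric regressors with scikit-learn on a synthetic stand-in for the three-variable market impact data; the data generator, hyperparameters, and error measure are assumptions, not the paper's setup.

```python
# Sketch: comparing nonparametric regressors of the kind mentioned above on a
# synthetic stand-in for market impact data (three input variables, one cost).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.svm import SVR
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(500, 3))            # e.g. size, volatility, spread (toy)
y = 0.5 * X[:, 0] ** 0.6 + 0.2 * X[:, 1] + 0.05 * rng.normal(size=500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
models = {
    "gaussian_process": GaussianProcessRegressor(),
    "svr": SVR(C=10.0),
    "neural_net": MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, "MAE:", round(mean_absolute_error(y_te, model.predict(X_te)), 4))
```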

  11. Predicting Market Impact Costs Using Nonparametric Machine Learning Models

    PubMed Central

    Park, Saerom; Lee, Jaewook; Son, Youngdoo

    2016-01-01

    Market impact cost is the most significant portion of implicit transaction costs that can reduce the overall transaction cost, although it cannot be measured directly. In this paper, we employed state-of-the-art nonparametric machine learning models: neural networks, Bayesian neural networks, Gaussian processes, and support vector regression, to predict market impact cost accurately and to provide a predictive model that is versatile in the number of variables. We collected a large amount of real single-transaction data of the US stock market from a Bloomberg Terminal and generated three independent input variables. As a result, most nonparametric machine learning models outperformed a state-of-the-art benchmark parametric model, the I-star model, in four error measures. Although these models encounter certain difficulties in separating the permanent and temporary cost directly, nonparametric machine learning models can be good alternatives for reducing transaction costs by considerably improving prediction performance. PMID:26926235

  12. Network Modeling and Energy-Efficiency Optimization for Advanced Machine-to-Machine Sensor Networks

    PubMed Central

    Jung, Sungmo; Kim, Jong Hyun; Kim, Seoksoo

    2012-01-01

    Wireless machine-to-machine sensor networks with multiple radio interfaces are expected to have several advantages, including high spatial scalability, low event detection latency, and low energy consumption. Here, we propose a network model design method involving network approximation and an optimized multi-tiered clustering algorithm that maximizes node lifespan by minimizing energy consumption in a non-uniformly distributed network. Simulation results show that the cluster scales and network parameters determined with the proposed method facilitate a more efficient performance compared to existing methods. PMID:23202190

  13. (abstract) Simple Spreadsheet Thermal Models for Cryogenic Applications

    NASA Technical Reports Server (NTRS)

    Nash, A. E.

    1994-01-01

    Self-consistent circuit-analog thermal models, which can be run in commercial spreadsheet programs on personal computers, have been created to calculate the cooldown and steady-state performance of cryogen-cooled Dewars. The models include temperature-dependent conduction and radiation effects. The outputs of the models provide temperature distribution and Dewar performance information. These models have been used to analyze the Cryogenic Telescope Test Facility (CTTF). The facility will be on line in early 1995 for its first user, the Infrared Telescope Technology Testbed (ITTT), for the Space Infrared Telescope Facility (SIRTF) at JPL. The model algorithm as well as a comparison of the model predictions and actual performance of this facility will be presented.
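
    A minimal sketch of a circuit-analog thermal model of the kind described: one lumped node cooled through a conductance to a cryogen bath with a radiative load from the room, stepped forward with explicit Euler. The heat capacity, conductance and emissivity-area values are arbitrary, and the temperature dependence of the material properties mentioned in the abstract is omitted.

```python
# Sketch of a circuit-analog thermal model: one lumped node (the cooled mass)
# connected to a cryogen bath through a conductance, with a small radiative
# load from the environment. Explicit Euler time stepping; values are arbitrary.
sigma = 5.670e-8        # Stefan-Boltzmann constant, W m^-2 K^-4
C = 5.0e3               # node heat capacity, J/K (illustrative)
G = 0.5                 # conductance to cryogen bath, W/K (illustrative)
A_eps = 0.05            # effective radiating area * emissivity, m^2 (illustrative)
T_bath, T_env = 77.0, 300.0

T = 300.0               # initial node temperature, K
dt, t_end = 10.0, 200_000.0
for step in range(int(t_end / dt)):
    q_cond = G * (T_bath - T)                       # conduction to the bath
    q_rad = A_eps * sigma * (T_env**4 - T**4)       # radiative load from the room
    T += dt * (q_cond + q_rad) / C
print("temperature after cooldown (K):", round(T, 2))
```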

  14. Modeling situated abstraction : action coalescence via multidimensional coherence.

    SciTech Connect

    Sallach, D. L.; Decision and Information Sciences; Univ. of Chicago

    2007-01-01

    Situated social agents weigh dozens of priorities, each with its own complexities. Domains of interest are intertwined, and progress in one area either complements or conflicts with other priorities. Interpretive agents address these complexities through: (1) integrating cognitive complexities through the use of radial concepts, (2) recognizing the role of emotion in prioritizing alternatives and urgencies, (3) using Miller-range constraints to avoid oversimplified notions of omniscience, and (4) constraining actions to 'moves' in multiple prototype games. Situated agent orientations are dynamically grounded in pragmatic considerations as well as intertwined with internal and external priorities. HokiPoki is a situated abstraction designed to shape and focus strategic agent orientations. The design integrates four pragmatic pairs: (1) problem and solution, (2) dependence and power, (3) constraint and affordance, and (4) (agent) intent and effect. In this way, agents are empowered to address multiple facets of a situation in an exploratory, or even arbitrary, order. HokiPoki is open to the internal orientation of the agent as it evolves, but also to the communications and actions of other agents.

  15. Symbolic LTL Compilation for Model Checking: Extended Abstract

    NASA Technical Reports Server (NTRS)

    Rozier, Kristin Y.; Vardi, Moshe Y.

    2007-01-01

    In Linear Temporal Logic (LTL) model checking, we check LTL formulas representing desired behaviors against a formal model of the system designed to exhibit these behaviors. To accomplish this task, the LTL formulas must be translated into automata [21]. We focus on LTL compilation by investigating LTL satisfiability checking via a reduction to model checking. Having shown that symbolic LTL compilation algorithms are superior to explicit automata construction algorithms for this task [16], we concentrate here on seeking a better symbolic algorithm. We present experimental data comparing algorithmic variations such as normal forms, encoding methods, and variable ordering and examine their effects on performance metrics including processing time and scalability. Safety-critical systems, such as air traffic control, life support systems, hazardous environment controls, and automotive control systems, pervade our daily lives, yet testing and simulation alone cannot adequately verify their reliability [3]. Model checking is a promising approach to formal verification for safety-critical systems which involves creating a formal mathematical model of the system and translating desired safety properties into a formal specification for this model. The complement of the specification is then checked against the system model. When the model does not satisfy the specification, model-checking tools accompany this negative answer with a counterexample, which points to an inconsistency between the system and the desired behaviors and aids debugging efforts.

  16. Towards an Abstraction-Friendly Programming Model for High Productivity and High Performance Computing

    SciTech Connect

    Liao, C; Quinlan, D; Panas, T

    2009-10-06

    General purpose languages, such as C++, permit the construction of various high level abstractions to hide redundant, low level details and accelerate programming productivity. Example abstractions include functions, data structures, classes, templates and so on. However, the use of abstractions significantly impedes static code analyses and optimizations, including parallelization, applied to the abstractions' complex implementations. As a result, there is a common perception that performance is inversely proportional to the level of abstraction. On the other hand, programming large scale, possibly heterogeneous high-performance computing systems is notoriously difficult and programmers are less likely to abandon the help from high level abstractions when solving real-world, complex problems. Therefore, the need for programming models balancing both programming productivity and execution performance has reached a new level of criticality. We are exploring a novel abstraction-friendly programming model in order to support high productivity and high performance computing. We believe that standard or domain-specific semantics associated with high level abstractions can be exploited to aid compiler analyses and optimizations, thus helping to achieve high performance without losing high productivity. We encode representative abstractions and their useful semantics into an abstraction specification file. In the meantime, an accessible, source-to-source compiler infrastructure (the ROSE compiler) is used to facilitate recognizing high level abstractions and utilizing their semantics for more optimization opportunities. Our initial work has shown that recognizing abstractions and knowing their semantics within a compiler can dramatically extend the applicability of existing optimizations, including automatic parallelization. Moreover, a new set of optimizations has become possible within an abstraction-friendly and semantics-aware programming model. In the future, we will

  17. RAFFS: Model Checking a Robust Abstract Flash File Store

    NASA Astrophysics Data System (ADS)

    Taverne, Paul; Pronk, C. (Kees)

    This paper presents a case study in modeling and verifying a POSIX-like file store for Flash memory. This work fits in the context of Hoare's verification challenge and, in particular, Joshi and Holzmann's mini-challenge to build a verifiable file store. We have designed a simple robust file store and implemented it in the form of a Promela model. A test harness is used to exercise the file store in a number of ways. Model checking technology has been extensively used to verify the correctness of our implementation. A distinguishing feature of our approach is the (bounded) exhaustive verification of power loss recovery.

  18. A model for the synchronous machine using frequency response measurements

    SciTech Connect

    Bacalao, N.J.; Arizon, P. de; Sanchez L., R.O.

    1995-02-01

    This paper presents new techniques to improve the accuracy and speed of synchronous machine modeling in stability and transient studies. The proposed model uses frequency responses as input data, obtained either directly from measurements or calculated from the available data. The new model is flexible, as it allows changes in the level of detail in which the machine is represented, and it is possible to partly compensate for the numerical errors incurred when using large integration time steps. The model can be used in transient stability and electromagnetic transient studies such as secondary arc evaluation, load rejection and sub-synchronous resonance.

  19. Modelling machine ensembles with discrete event dynamical system theory

    NASA Technical Reports Server (NTRS)

    Hunter, Dan

    1990-01-01

    Discrete Event Dynamical System (DEDS) theory can be utilized as a control strategy for future complex machine ensembles that will be required for in-space construction. The control strategy involves orchestrating a set of interactive submachines to perform a set of tasks for a given set of constraints such as minimum time, minimum energy, or maximum machine utilization. Machine ensembles can be hierarchically modeled as a global model that combines the operations of the individual submachines. These submachines are represented in the global model as local models. Local models, from the perspective of DEDS theory, are described by the following: a set of system and transition states, an event alphabet that portrays actions that take a submachine from one state to another, an initial system state, a partial function that maps the current state and event alphabet to the next state, and the time required for the event to occur. Each submachine in the machine ensemble is represented by a unique local model. The global model combines the local models such that the local models can operate in parallel under the additional logistic and physical constraints due to submachine interactions. The global model is constructed from the states, events, event functions, and timing requirements of the local models. Supervisory control can be implemented in the global model by various methods such as task scheduling (open-loop control) or implementing a feedback DEDS controller (closed-loop control).
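
    The local-model description above maps naturally onto a small data structure. The sketch below encodes two toy submachines as (states, events, transition map, event durations) and composes them so that shared events synchronize; all names and timings are invented for illustration.

```python
# Sketch: a DEDS "local model" as (states, events, transition map, durations),
# plus a simple product composition of two submachines that synchronize on
# shared events. All names and timings are toy values.
from dataclasses import dataclass
from typing import Dict, Set, Tuple

@dataclass
class LocalModel:
    states: Set[str]
    events: Set[str]
    delta: Dict[Tuple[str, str], str]      # (state, event) -> next state
    duration: Dict[str, float]             # event -> time required
    initial: str

robot = LocalModel(
    states={"idle", "moving"}, events={"start", "done"},
    delta={("idle", "start"): "moving", ("moving", "done"): "idle"},
    duration={"start": 1.0, "done": 4.0}, initial="idle")

crane = LocalModel(
    states={"free", "busy"}, events={"start", "release"},
    delta={("free", "start"): "busy", ("busy", "release"): "free"},
    duration={"start": 1.0, "release": 2.0}, initial="free")

def step(global_state, event, models):
    """Advance every submachine that defines `event`; shared events synchronize."""
    return tuple(m.delta.get((s, event), s) for s, m in zip(global_state, models))

models = (robot, crane)
state = (robot.initial, crane.initial)
for ev in ["start", "done", "release"]:
    state = step(state, ev, models)
    print(ev, "->", state)
```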

  20. Component based modelling of piezoelectric ultrasonic actuators for machining applications

    NASA Astrophysics Data System (ADS)

    Saleem, A.; Salah, M.; Ahmed, N.; Silberschmidt, V. V.

    2013-07-01

    Ultrasonically Assisted Machining (UAM) is an emerging technology that has been utilized to improve the surface finish in machining processes such as turning, milling, and drilling. In this context, piezoelectric ultrasonic transducers are used to vibrate the cutting tip at a predetermined amplitude and frequency while machining. However, modelling and simulation of these transducers is a tedious and difficult task. This is due to the inherent nonlinearities associated with smart materials. Therefore, this paper presents a component-based model of ultrasonic transducers that mimics the nonlinear behaviour of such a system. The system is decomposed into components, a mathematical model of each component is created, and the whole system model is assembled by aggregating the basic component models. System parameters are identified using a finite element technique, and the model is then used to simulate the system in Matlab/SIMULINK. Various operating conditions are tested to demonstrate the system performance.

  1. Abstracting the principles of development using imaging and modeling

    PubMed Central

    Xiong, Fengzhu; Megason, Sean G.

    2015-01-01

    Summary Here we look at modern developmental biology with a focus on the relationship between different approaches of investigation. We argue that direct imaging is a powerful approach not only for obtaining descriptive information but also for model generation and testing that lead to mechanistic insights. Modeling, on the other hand, conceptualizes imaging data and provides guidance to perturbations. The inquiry progresses most efficiently when a trinity of approaches—quantitative imaging (measurement), modeling (theory) and perturbation (test) —are pursued in concert, but not when one approach is dominant. Using recent studies of the zebrafish system, we show how this combination has effectively advanced classic topics in developmental biology compared to a perturbation-centric approach. Finally, we show that interdisciplinary expertise and perhaps specialization are necessary for carrying out a systematic approach, and discuss the technical hurdles. PMID:25946995

  2. Parallel phase model : a programming model for high-end parallel machines with manycores.

    SciTech Connect

    Wu, Junfeng; Wen, Zhaofang; Heroux, Michael Allen; Brightwell, Ronald Brian

    2009-04-01

    This paper presents a parallel programming model, Parallel Phase Model (PPM), for next-generation high-end parallel machines based on a distributed memory architecture consisting of a networked cluster of nodes with a large number of cores on each node. PPM has a unified high-level programming abstraction that facilitates the design and implementation of parallel algorithms to exploit both the parallelism of the many cores and the parallelism at the cluster level. The programming abstraction will be suitable for expressing both fine-grained and coarse-grained parallelism. It includes a few high-level parallel programming language constructs that can be added as an extension to an existing (sequential or parallel) programming language such as C; and the implementation of PPM also includes a light-weight runtime library that runs on top of an existing network communication software layer (e.g. MPI). Design philosophy of PPM and details of the programming abstraction are also presented. Several unstructured applications that inherently require high-volume random fine-grained data accesses have been implemented in PPM with very promising results.

  3. Hydro- abrasive jet machining modeling for computer control and optimization

    NASA Astrophysics Data System (ADS)

    Groppetti, R.; Jovane, F.

    1993-06-01

    Use of hydro-abrasive jet machining (HAJM) for machining a wide variety of materials—metals, polymers, ceramics, fiber-reinforced composites, metal-matrix composites, and bonded or hybridized materials—primarily for two- and three-dimensional cutting and also for drilling, turning, milling, and deburring, has been reported. However, the potential of this innovative process has not been explored fully. This article discusses process control, integration, and optimization of HAJM to establish a platform for the implementation of real-time adaptive control constraint (ACC), adaptive control optimization (ACO), and CAD/CAM integration. It presents the approach followed and the main results obtained during the development, implementation, automation, and integration of a HAJM cell and its computerized controller. After a critical analysis of the process variables and models reported in the literature to identify process variables and to define a process model suitable for HAJM real-time control and optimization, to correlate process variables and parameters with machining results, and to avoid expensive and time-consuming experiments for determination of the optimal machining conditions, a process prediction and optimization model was identified and implemented. Then, the configuration of the HAJM cell, architecture, and multiprogramming operation of the controller in terms of monitoring, control, process result prediction, and process condition optimization were analyzed. This prediction and optimization model for selection of optimal machining conditions using multi-objective programming was analyzed. Based on the definition of an economy function and a productivity function, with suitable constraints relevant to required machining quality, required kerfing depth, and available resources, the model was applied to test cases based on experimental results.

  4. Control of discrete event systems modeled as hierarchical state machines

    NASA Technical Reports Server (NTRS)

    Brave, Y.; Heymann, M.

    1991-01-01

    The authors examine a class of discrete event systems (DESs) modeled as asynchronous hierarchical state machines (AHSMs). For this class of DESs, they provide an efficient method for testing reachability, which is an essential step in many control synthesis procedures. This method utilizes the asynchronous nature and hierarchical structure of AHSMs, thereby illustrating the advantage of the AHSM representation as compared with its equivalent (flat) state machine representation. An application of the method is presented where an online minimally restrictive solution is proposed for the problem of maintaining a controlled AHSM within prescribed legal bounds.
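
    For contrast with the efficient method described, the sketch below answers the reachability question by brute force on a small, flattened toy machine, with hierarchy only suggested by dotted state names; the AHSM approach exploits the hierarchy precisely to avoid this kind of flat search.

```python
# Brute-force reachability check on a small state machine. Hierarchical states
# are written as "super.sub" and the machine is treated as flat, which is
# exactly what the efficient AHSM method above avoids doing. Toy data only.
from collections import deque

transitions = {
    "off": ["on.init"],
    "on.init": ["on.run"],
    "on.run": ["on.pause", "off"],
    "on.pause": ["on.run"],
}

def reachable(start, target):
    seen, stack = set(), deque([start])
    while stack:
        state = stack.pop()
        if state == target:
            return True
        if state in seen:
            continue
        seen.add(state)
        stack.extend(transitions.get(state, []))
    return False

print(reachable("off", "on.pause"))   # True
print(reachable("on.run", "fault"))   # False: no transition leads to "fault"
```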

  5. Three dimensional CAD model of the Ignitor machine

    NASA Astrophysics Data System (ADS)

    Orlandi, S.; Zanaboni, P.; Macco, A.; Sioli, V.; Risso, E.

    1998-11-01

    The final, global product of all the structural and thermomechanical design activities is a complete three dimensional CAD (AutoCAD and Intergraph Design Review) model of the IGNITOR machine. With this powerful tool, any interface, modification, or upgrading of the machine design is managed as an integrated part of the general effort aimed at the construction of the Ignitor facility. The activities that are underway to complete the design of the core of the experiment, and that will be described, concern the following: the cryogenic cooling system; the radial press, the center post, and the mechanical supports (legs) of the entire machine; and the inner mechanical supports of major components such as the plasma chamber and the outer poloidal field coils.

  6. Thermal-mechanical modeling of laser ablation hybrid machining

    NASA Astrophysics Data System (ADS)

    Matin, Mohammad Kaiser

    2001-08-01

    Hard, brittle and wear-resistant materials like ceramics pose a problem when being machined using conventional machining processes. Machining ceramics even with a diamond cutting tool is very difficult and costly. Near net-shape processes, like laser evaporation, produce micro-cracks that require extra finishing. Thus it is anticipated that ceramic machining will have to continue to be explored with newly developed techniques before ceramic materials become commonplace. This numerical investigation results from the numerical simulations of the thermal and mechanical modeling of simultaneous material removal from hard-to-machine materials using both laser ablation and conventional tool cutting, utilizing the finite element method. The model is formulated using a two dimensional, planar, computational domain. The process simulation, acronymed LAHM (Laser Ablation Hybrid Machining), uses laser energy for two purposes. The first purpose is to remove the material by ablation. The second purpose is to heat the unremoved material that lies below the ablated material in order to "soften" it. The softened material is then simultaneously removed by conventional machining processes. The complete solution determines the temperature distribution and stress contours within the material and tracks the moving boundary that occurs due to material ablation. The temperature distribution is used to determine the distance below the phase change surface where sufficient "softening" has occurred, so that a cutting tool may be used to remove additional material. The model incorporated for tracking the ablative surface does not assume an isothermal melt phase (e.g. Stefan problem) for laser ablation. Both surface absorption and volume absorption of laser energy as a function of depth have been considered in the models. LAHM, from the thermal and mechanical point of view, is a complex machining process involving large deformations at high strain rates, thermal effects of the laser, removal of

  7. Global ocean modeling on the Connection Machine

    SciTech Connect

    Smith, R.D.; Dukowicz, J.K.; Malone, R.C.

    1993-10-01

    The authors have developed a version of the Bryan-Cox-Semtner ocean model (Bryan, 1969; Semtner, 1976; Cox, 1984) for massively parallel computers. Such models are three-dimensional, Eulerian models that use latitude and longitude as the horizontal spherical coordinates and fixed depth levels as the vertical coordinate. The incompressible Navier-Stokes equations, with a turbulent eddy viscosity, and the mass continuity equation are solved, subject to the hydrostatic and Boussinesq approximations. The traditional model formulation uses a rigid-lid approximation (vertical velocity = 0 at the ocean surface) to eliminate fast surface waves. These waves would otherwise require that a very short time step be used in numerical simulations, which would greatly increase the computational cost. To solve the equations with the rigid-lid assumption, the equations of motion are split into two parts: a set of two-dimensional "barotropic" equations describing the vertically-averaged flow, and a set of three-dimensional "baroclinic" equations describing temperature, salinity and deviations of the horizontal velocities from the vertically-averaged flow.

  8. An Expectation-Maximization Method for Calibrating Synchronous Machine Models

    SciTech Connect

    Meng, Da; Zhou, Ning; Lu, Shuai; Lin, Guang

    2013-07-21

    The accuracy of a power system dynamic model is essential to its secure and efficient operation. Lower confidence in model accuracy usually leads to conservative operation and lowers asset usage. To improve model accuracy, this paper proposes an expectation-maximization (EM) method to calibrate the synchronous machine model using phasor measurement unit (PMU) data. First, an extended Kalman filter (EKF) is applied to estimate the dynamic states using measurement data. Then, the parameters are calculated based on the estimated states using maximum likelihood estimation (MLE) method. The EM method iterates over the preceding two steps to improve estimation accuracy. The proposed EM method’s performance is evaluated using a single-machine infinite bus system and compared with a method where both state and parameters are estimated using an EKF method. Sensitivity studies of the parameter calibration using EM method are also presented to show the robustness of the proposed method for different levels of measurement noise and initial parameter uncertainty.
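
    A toy illustration of the two-step EM pattern (state estimation, then maximum-likelihood parameter update) on a scalar linear state-space model with an ordinary Kalman filter; the paper applies the same pattern with an EKF, a synchronous-machine model and PMU data, which this sketch does not attempt to reproduce.

```python
# Toy illustration of the EM idea: iterate (E) state estimation with a Kalman
# filter and (M) an approximate maximum-likelihood update of the dynamics
# parameter `a`, for a scalar model x[k+1] = a*x[k] + w, y[k] = x[k] + v.
import numpy as np

rng = np.random.default_rng(2)
a_true, q, r, n = 0.9, 0.05, 0.2, 400
x = np.zeros(n); y = np.zeros(n)
for k in range(1, n):
    x[k] = a_true * x[k - 1] + np.sqrt(q) * rng.normal()
    y[k] = x[k] + np.sqrt(r) * rng.normal()

a = 0.5                                  # initial parameter guess
for it in range(20):
    # E-step: Kalman filter using the current parameter estimate.
    xf = np.zeros(n); P = 1.0; xf[0] = y[0]
    for k in range(1, n):
        xp, Pp = a * xf[k - 1], a * a * P + q          # predict
        K = Pp / (Pp + r)                              # Kalman gain
        xf[k], P = xp + K * (y[k] - xp), (1 - K) * Pp  # update
    # M-step: least-squares (approximate MLE) update of `a` from filtered states.
    a = float(np.dot(xf[1:], xf[:-1]) / np.dot(xf[:-1], xf[:-1]))
print("estimated a:", round(a, 3), "true a:", a_true)
```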

  9. Abstract Model of the SATS Concept of Operations: Initial Results and Recommendations

    NASA Technical Reports Server (NTRS)

    Dowek, Gilles; Munoz, Cesar; Carreno, Victor A.

    2004-01-01

    An abstract mathematical model of the concept of operations for the Small Aircraft Transportation System (SATS) is presented. The Concept of Operations consists of several procedures that describe nominal operations for SATS. Several safety properties of the system are proven using formal techniques. The final goal of the verification effort is to show that under nominal operations, aircraft are safely separated. The abstract model was written and formally verified in the Prototype Verification System (PVS).

  10. Knowledge in formation: The machine-modeled frame of mind

    SciTech Connect

    Shore, B.

    1996-12-31

    Artificial Intelligence researchers have used the digital computer as a model for the human mind in two different ways. Most obviously, the computer has been used as a tool on which simulations of thinking-as-programs are developed and tested. Less obvious, but of great significance, is the use of the computer as a conceptual model for the human mind. This essay traces the sources of this machine-modeled conception of cognition in a great variety of social institutions and everyday experience, treating them as "cultural models" which have contributed to the naturalness of the mind-as-machine paradigm for many Americans. The roots of these models antedate the actual development of modern computers, and take the form of a "modularity schema" that has shaped the cultural and cognitive landscape of modernity. The essay concludes with a consideration of some of the cognitive consequences of this extension of machine logic into modern life, and proposes an important distinction between information processing models of thought and meaning-making in how human cognition is conceptualized.

  11. Bilingual Cluster Based Models for Statistical Machine Translation

    NASA Astrophysics Data System (ADS)

    Yamamoto, Hirofumi; Sumita, Eiichiro

    We propose a domain specific model for statistical machine translation. It is well-known that domain specific language models perform well in automatic speech recognition. We show that domain specific language and translation models also benefit statistical machine translation. However, there are two problems with using domain specific models. The first is the data sparseness problem. We employ an adaptation technique to overcome this problem. The second issue is domain prediction. In order to perform adaptation, the domain must be provided; however, in many cases the domain is not known or changes dynamically. For these cases, not only the translation target sentence but also the domain must be predicted. This paper focuses on the domain prediction problem for statistical machine translation. In the proposed method, a bilingual training corpus is automatically clustered into sub-corpora. Each sub-corpus is deemed to be a domain. The domain of a source sentence is predicted by using its similarity to the sub-corpora. The predicted domain (sub-corpus) specific language and translation models are then used for the translation decoding. This approach gave an improvement of 2.7 in BLEU score on the IWSLT05 Japanese to English evaluation corpus (improving the score from 52.4 to 55.1). This is a substantial gain and indicates the validity of the proposed bilingual cluster based models.
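
    A sketch of the domain-prediction step only, assuming TF-IDF features and k-means from scikit-learn on a tiny monolingual toy corpus: training sentences are clustered into sub-corpora and a new source sentence is routed to the nearest cluster, which is where cluster-specific language and translation models would be selected in the full method.

```python
# Sketch of the domain-prediction step: cluster training sentences (source side
# only here, for brevity), then route a new sentence to its nearest cluster so
# that cluster-specific language/translation models could be selected. Toy data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

corpus = [
    "please reserve a table for two at seven",
    "i would like to book a room with a view",
    "how much is the bus fare to the airport",
    "what time does the next train leave",
    "can i see the dinner menu",
    "is breakfast included in the room rate",
]
vec = TfidfVectorizer()
X = vec.fit_transform(corpus)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

new_sentence = ["which platform does the express train depart from"]
domain = km.predict(vec.transform(new_sentence))[0]
print("predicted domain (sub-corpus):", domain)
print("training sentences in that domain:",
      [s for s, c in zip(corpus, km.labels_) if c == domain])
```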

  12. The rise of machine consciousness: studying consciousness with computational models.

    PubMed

    Reggia, James A

    2013-08-01

    Efforts to create computational models of consciousness have accelerated over the last two decades, creating a field that has become known as artificial consciousness. There have been two main motivations for this controversial work: to develop a better scientific understanding of the nature of human/animal consciousness and to produce machines that genuinely exhibit conscious awareness. This review begins by briefly explaining some of the concepts and terminology used by investigators working on machine consciousness, and summarizes key neurobiological correlates of human consciousness that are particularly relevant to past computational studies. Models of consciousness developed over the last twenty years are then surveyed. These models are largely found to fall into five categories based on the fundamental issue that their developers have selected as being most central to consciousness: a global workspace, information integration, an internal self-model, higher-level representations, or attention mechanisms. For each of these five categories, an overview of past work is given, a representative example is presented in some detail to illustrate the approach, and comments are provided on the contributions and limitations of the methodology. Three conclusions are offered about the state of the field based on this review: (1) computational modeling has become an effective and accepted methodology for the scientific study of consciousness, (2) existing computational models have successfully captured a number of neurobiological, cognitive, and behavioral correlates of conscious information processing as machine simulations, and (3) no existing approach to artificial consciousness has presented a compelling demonstration of phenomenal machine consciousness, or even clear evidence that artificial phenomenal consciousness will eventually be possible. The paper concludes by discussing the importance of continuing work in this area, considering the ethical issues it raises

  13. Transferable Atomic Multipole Machine Learning Models for Small Organic Molecules.

    PubMed

    Bereau, Tristan; Andrienko, Denis; von Lilienfeld, O Anatole

    2015-07-14

    Accurate representation of the molecular electrostatic potential, which is often expanded in distributed multipole moments, is crucial for an efficient evaluation of intermolecular interactions. Here we introduce a machine learning model for multipole coefficients of atom types H, C, O, N, S, F, and Cl in any molecular conformation. The model is trained on quantum-chemical results for atoms in varying chemical environments drawn from thousands of organic molecules. Multipoles in systems with neutral, cationic, and anionic molecular charge states are treated with individual models. The models' predictive accuracy and applicability are illustrated by evaluating intermolecular interaction energies of nearly 1,000 dimers and the cohesive energy of the benzene crystal.

  14. Stochastic Local Interaction (SLI) model: Bridging machine learning and geostatistics

    NASA Astrophysics Data System (ADS)

    Hristopulos, Dionissios T.

    2015-12-01

    Machine learning and geostatistics are powerful mathematical frameworks for modeling spatial data. Both approaches, however, suffer from poor scaling of the required computational resources for large data applications. We present the Stochastic Local Interaction (SLI) model, which employs a local representation to improve computational efficiency. SLI combines geostatistics and machine learning with ideas from statistical physics and computational geometry. It is based on a joint probability density function defined by an energy functional which involves local interactions implemented by means of kernel functions with adaptive local kernel bandwidths. SLI is expressed in terms of an explicit, typically sparse, precision (inverse covariance) matrix. This representation leads to a semi-analytical expression for interpolation (prediction), which is valid in any number of dimensions and avoids the computationally costly covariance matrix inversion.
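
    The computational point, that an explicit sparse precision matrix turns prediction into a local calculation, can be illustrated directly: for a zero-mean Gaussian field the conditional mean at a site is a weighted sum of its neighbours divided by the diagonal entry. The precision matrix below is a crude nearest-neighbour construction on a 1-D grid, not the SLI energy functional.

```python
# Sketch of prediction with an explicit sparse precision matrix Q: for a
# zero-mean Gaussian field, the conditional mean at site i given the others is
# -(1/Q_ii) * sum_{j != i} Q_ij x_j. Q below comes from a simple tridiagonal
# nearest-neighbour coupling, which shrinks the estimate slightly toward zero.
import numpy as np
import scipy.sparse as sp

n = 200
x_true = (np.sin(np.linspace(0, 4 * np.pi, n))
          + 0.1 * np.random.default_rng(3).normal(size=n))

coupling = 4.0
Q = sp.diags([np.full(n, 1.0 + 2 * coupling),
              np.full(n - 1, -coupling),
              np.full(n - 1, -coupling)],
             [0, -1, 1], format="csr")

i = 100                                      # pretend site i was not measured
row = Q[i].toarray().ravel()
pred = -(row @ x_true - row[i] * x_true[i]) / Q[i, i]
print("true value:", round(x_true[i], 3), "interpolated:", round(float(pred), 3))
```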

  15. 97. View of International Business Machine (IBM) digital computer model ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    97. View of International Business Machine (IBM) digital computer model 7090 magnetic core installation, international telephone and telegraph (ITT) Artic Services Inc., Official photograph BMEWS site II, Clear, AK, by unknown photographer, 17 September 1965, BMEWS, clear as negative no. A-6604. - Clear Air Force Station, Ballistic Missile Early Warning System Site II, One mile west of mile marker 293.5 on Parks Highway, 5 miles southwest of Anderson, Anderson, Denali Borough, AK

  16. Editorial: Mathematical Methods and Modeling in Machine Fault Diagnosis

    SciTech Connect

    Yan, Ruqiang; Chen, Xuefeng; Li, Weihua; Sheng, Shuangwen

    2014-12-18

    Modern mathematics has commonly been utilized as an effective tool to model mechanical equipment so that their dynamic characteristics can be studied analytically. This will help identify potential failures of mechanical equipment by observing change in the equipment’s dynamic parameters. On the other hand, dynamic signals are also important and provide reliable information about the equipment’s working status. Modern mathematics has also provided us with a systematic way to design and implement various signal processing methods, which are used to analyze these dynamic signals, and to enhance intrinsic signal components that are directly related to machine failures. This special issue is aimed at stimulating not only new insights on mathematical methods for modeling but also recently developed signal processing methods, such as sparse decomposition with potential applications in machine fault diagnosis. Finally, the papers included in this special issue provide a glimpse into some of the research and applications in the field of machine fault diagnosis through applications of the modern mathematical methods.

  17. Editorial: Mathematical Methods and Modeling in Machine Fault Diagnosis

    DOE PAGES

    Yan, Ruqiang; Chen, Xuefeng; Li, Weihua; Sheng, Shuangwen

    2014-12-18

    Modern mathematics has commonly been utilized as an effective tool to model mechanical equipment so that their dynamic characteristics can be studied analytically. This will help identify potential failures of mechanical equipment by observing change in the equipment’s dynamic parameters. On the other hand, dynamic signals are also important and provide reliable information about the equipment’s working status. Modern mathematics has also provided us with a systematic way to design and implement various signal processing methods, which are used to analyze these dynamic signals, and to enhance intrinsic signal components that are directly related to machine failures. This special issue is aimed at stimulating not only new insights on mathematical methods for modeling but also recently developed signal processing methods, such as sparse decomposition with potential applications in machine fault diagnosis. Finally, the papers included in this special issue provide a glimpse into some of the research and applications in the field of machine fault diagnosis through applications of the modern mathematical methods.

  18. Generation of polyhedral models from machine vision data

    NASA Astrophysics Data System (ADS)

    Bradley, Colin H.; Wei, S.; Zhang, Y.; Loh, H. T.

    1999-11-01

    An algorithm suitable for triangulating 3D data points, produced by a machine vision system or coordinate measuring machine (CMM), is described. The algorithm is suitable for processing the data collected from objects composed of free form surface patches. The data is produced by a 3D machine vision system integrated into a computer numerically controlled CMM. The software can model very large 3D data sets, termed cloud data, using a unified, non-redundant triangular mesh. This is accomplished from the 3D data points in two steps. First, an initial data thinning is performed to reduce the copious data set size, employing 3D spatial filtering. Second, the triangulation commences from a user defined seed point, utilizing a set of heuristic rules. The triangulation algorithm interrogates the local geometric and topological information inherent in the cloud data points. The spatial filtering parameters are extracted from the cloud data set, by a series of local surface patches, and the required spatial error between the final triangulation and the cloud data. Case studies are presented that illustrate the efficacy of the technique for rapidly constructing a geometric model from 3D digitized data.

  19. Modeling of autoresonant control of a parametrically excited screen machine

    NASA Astrophysics Data System (ADS)

    Abolfazl Zahedi, S.; Babitsky, Vladimir

    2016-10-01

    Modelling of the nonlinear dynamic response of a screen machine, described by nonlinear coupled differential equations and excited by an autoresonant control system, is presented. The displacement signal of the screen is fed back directly to the screen excitation by means of positive feedback. Negative feedback is used to fix the level of the screen amplitude response within the expected range. The screen is expected to vibrate in a parametric resonance, and the excitation, stabilization and control response of the system are studied in the stable mode. Autoresonant control is thoroughly investigated and output tracking is reported. The control developed provides self-tuning and self-adaptation mechanisms that allow the screen machine to maintain a parametric resonant mode of oscillation under a wide range of uncertainty in mass and viscosity.
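
    A toy single-degree-of-freedom stand-in for the feedback scheme described: a relay (sign) feedback of the measured motion keeps the oscillator at its own resonance, while a slow negative feedback trims the drive level so the amplitude settles at a setpoint. The real system is a coupled, parametrically excited screen; all numbers here are arbitrary.

```python
# Toy autoresonant-style excitation of a single oscillator: positive feedback
# through a relay on the measured velocity sustains oscillation at resonance;
# a slow negative feedback on the estimated amplitude adjusts the drive level.
import math

wn, zeta = 2 * math.pi * 25.0, 0.02       # natural frequency (rad/s), damping
dt, steps = 1e-5, 600_000                 # 6 s of simulated time
x, v = 1e-4, 0.0                          # small initial disturbance
drive, target_amp, k_amp = 10.0, 5e-3, 500.0

for _ in range(steps):
    force = drive * (1.0 if v >= 0 else -1.0)     # positive feedback (relay)
    a = force - 2 * zeta * wn * v - wn**2 * x
    v += a * dt
    x += v * dt
    amp = math.hypot(x, v / wn)                   # phase-plane amplitude estimate
    drive = max(0.0, drive + k_amp * (target_amp - amp) * dt)  # negative feedback

print(f"settled amplitude: {amp:.4f} m at drive level {drive:.2f}")
```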

  20. Model-Driven Engineering of Machine Executable Code

    NASA Astrophysics Data System (ADS)

    Eichberg, Michael; Monperrus, Martin; Kloppenburg, Sven; Mezini, Mira

    Implementing static analyses of machine-level executable code is labor intensive and complex. We show how to leverage model-driven engineering to facilitate the design and implementation of programs doing static analyses. Further, we report on important lessons learned on the benefits and drawbacks while using the following technologies: using the Scala programming language as target of code generation, using XML-Schema to express a metamodel, and using XSLT to implement (a) transformations and (b) a lint like tool. Finally, we report on the use of Prolog for writing model transformations.

  1. Modelling, abstraction, and computation in systems biology: A view from computer science.

    PubMed

    Melham, Tom

    2013-04-01

    Systems biology is centrally engaged with computational modelling across multiple scales and at many levels of abstraction. Formal modelling, precise and formalised abstraction relationships, and computation also lie at the heart of computer science--and over the past decade a growing number of computer scientists have been bringing their discipline's core intellectual and computational tools to bear on biology in fascinating new ways. This paper explores some of the apparent points of contact between the two fields, in the context of a multi-disciplinary discussion on conceptual foundations of systems biology.

  2. Calibrating Building Energy Models Using Supercomputer Trained Machine Learning Agents

    SciTech Connect

    Sanyal, Jibonananda; New, Joshua Ryan; Edwards, Richard; Parker, Lynne Edwards

    2014-01-01

    Building Energy Modeling (BEM) is an approach to model the energy usage in buildings for design and retrofit purposes. EnergyPlus is the flagship Department of Energy software that performs BEM for different types of buildings. The input to EnergyPlus can often extend to a few thousand parameters, which have to be calibrated manually by an expert for realistic energy modeling. This makes calibration challenging and expensive, rendering building energy modeling infeasible for smaller projects. In this paper, we describe the Autotune research, which employs machine learning algorithms to generate agents for the different kinds of standard reference buildings in the U.S. building stock. The parametric space and the variety of building locations and types make this a challenging computational problem necessitating the use of supercomputers. Millions of EnergyPlus simulations are run on supercomputers and are subsequently used to train machine learning algorithms to generate agents. These agents, once created, can then run in a fraction of the time, thereby allowing cost-effective calibration of building models.
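
    A sketch of the surrogate idea under stated assumptions: a cheap analytic function stands in for EnergyPlus, a random-forest regressor plays the role of the trained agent, and calibration is a brute-force search over candidate parameters against a single metered value (the real workflow uses far more parameters, outputs and simulations).

```python
# Sketch of surrogate-assisted calibration: learn a fast regressor from
# (parameters -> simulated energy use) samples, then search the parameter space
# against a "measured" value using the surrogate instead of the simulator.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(4)

def fake_energyplus(params):
    """Stand-in for an expensive simulation: params = (insulation, infiltration)."""
    insulation, infiltration = params
    return 120.0 / (1.0 + insulation) + 30.0 * infiltration

# 1. Run the "simulator" on a parameter sample and train the surrogate agent.
samples = rng.uniform([0.5, 0.1], [5.0, 1.5], size=(300, 2))
outputs = np.array([fake_energyplus(p) for p in samples])
agent = RandomForestRegressor(n_estimators=200, random_state=0).fit(samples, outputs)

# 2. Calibrate: pick the candidate whose surrogate prediction matches the metered value.
measured = fake_energyplus((2.0, 0.6))               # pretend utility-bill value
candidates = rng.uniform([0.5, 0.1], [5.0, 1.5], size=(20_000, 2))
best = candidates[np.argmin(np.abs(agent.predict(candidates) - measured))]
print("calibrated parameters:", np.round(best, 2),
      "| surrogate prediction:", round(float(agent.predict([best])[0]), 1),
      "| measured:", round(measured, 1))
```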

  3. Geochemistry Model Abstraction and Sensitivity Studies for the 21 PWR CSNF Waste Package

    SciTech Connect

    P. Bernot; S. LeStrange; E. Thomas; K. Zarrabi; S. Arthur

    2002-10-29

    The CSNF geochemistry model abstraction, as directed by the TWP (BSC 2002b), was developed to provide regression analysis of EQ6 cases to obtain abstracted values of pH (and in some cases HCO3- concentration) for use in the Configuration Generator Model. The pH of the system is the controlling factor over U mineralization, CSNF degradation rate, and HCO3- concentration in solution. The abstraction encompasses a large variety of combinations for the degradation rates of materials. The "base case" used EQ6 simulations looking at differing steel/alloy corrosion rates, drip rates, and percent fuel exposure. Other values such as the pH/HCO3--dependent fuel corrosion rate and the corrosion rate of A516 were kept constant. Relationships were developed for pH as a function of these differing rates to be used in the calculation of total C and, subsequently, the fuel rate. An additional refinement to the abstraction was the addition of abstracted pH values for cases where there was limited O2 for waste package corrosion and a flushing fluid other than J-13, which has been used in all EQ6 calculations up to this point. These abstractions also used EQ6 simulations with varying combinations of corrosion rates of materials to abstract the pH (and HCO3- in the case of the limited-O2 cases) as a function of WP material corrosion rates. The goodness of fit for most of the abstracted values was above an R^2 of 0.9. Those below this value occurred during the time at the very beginning of WP corrosion, when large variations in the system pH are observed. However, the significance of the F-statistic for all the abstractions showed that the variable relationships are significant. For the abstraction, an analysis of the minerals that may form the "sludge" in the waste package was also presented. This analysis indicates that a number of different iron and aluminum minerals may form in the waste package other than those

  4. Modelling fate and transport of pesticides in river catchments with drinking water abstractions

    NASA Astrophysics Data System (ADS)

    Desmet, Nele; Seuntjens, Piet; Touchant, Kaatje

    2010-05-01

    When drinking water is abstracted from surface water, the presence of pesticides may have a large impact on purification costs. In order to respect imposed thresholds at points of drinking water abstraction in a river catchment, sustainable pesticide management strategies might be required in certain areas. To improve management strategies, a sound understanding of the emission routes, the transport, the environmental fate and the sources of pesticides is needed. However, pesticide monitoring data on which measures are founded are generally scarce. Data scarcity hampers interpretation and decision making. In such a case, a modelling approach can be very useful as a tool to obtain complementary information. Modelling allows temporal and spatial variability in both discharges and concentrations to be taken into account. In the Netherlands, the Meuse river is used for drinking water abstraction and the government imposes the European drinking water standard for individual pesticides (0.1 µg/L) for surface waters at points of drinking water abstraction. The reported glyphosate concentrations in the Meuse river frequently exceed the standard, and this strengthens the case for targeted measures. In this study, a model for the Meuse river was developed to estimate the contribution of influxes at the Dutch-Belgian border to the concentration levels detected at the drinking water intake 250 km downstream and to assess the contribution of the tributaries to the glyphosate loads. The effects of glyphosate decay on environmental fate were considered as well. Our results show that the application of a river model allows fate and transport of pesticides in a catchment to be assessed in spite of monitoring data scarcity. Furthermore, the model provides insight into the contribution of different sub-basins to the pollution level. The modelling results indicate that the effect of local measures to reduce pesticide concentrations in the river at points of drinking water

  5. Making the abstract concrete: the role of norms and values in experimental modeling.

    PubMed

    Peschard, Isabelle F; van Fraassen, Bas C

    2014-06-01

    Experimental modeling is the construction of theoretical models hand in hand with experimental activity. As explained in Section 1, experimental modeling starts with claims about phenomena that use abstract concepts, concepts whose conditions of realization are not yet specified; and it ends with a concrete model of the phenomenon, a model that can be tested against data. This paper argues that this process from abstract concepts to concrete models involves judgments of relevance, which are irreducibly normative. In Section 2, we show, on the basis of several case studies, how these judgments contribute to the determination of the conditions of realization of the abstract concepts and, at the same time, of the quantities that characterize the phenomenon under study. Then, in Section 3, we compare this view on modeling with other approaches that also have acknowledged the role of relevance judgments in science. To conclude, in Section 4, we discuss the possibility of a plurality of relevance judgments and introduce a distinction between locally and generally relevant factors.

  6. Modeling of Unsteady Three-dimensional Flows in Multistage Machines

    NASA Technical Reports Server (NTRS)

    Hall, Kenneth C.; Pratt, Edmund T., Jr.; Kurkov, Anatole (Technical Monitor)

    2003-01-01

    Despite many years of development, the accurate and reliable prediction of unsteady aerodynamic forces acting on turbomachinery blades remains less than satisfactory, especially when viewed next to the great success investigators have had in predicting steady flows. Hall and Silkowski (1997) have proposed that one of the main reasons for the discrepancy between theory and experiment and/or industrial experience is that many of the current unsteady aerodynamic theories model a single blade row in an infinitely long duct, ignoring potentially important multistage effects. However, unsteady flows are made up of acoustic, vortical, and entropic waves. These waves provide a mechanism for the rotors and stators of multistage machines to communicate with one another. In other words, wave behavior makes unsteady flows fundamentally a multistage (and three-dimensional) phenomenon. In this research program, we have as goals (1) the development of computationally efficient computer models of the unsteady aerodynamic response of blade rows embedded in a multistage machine (these models will ultimately be capable of analyzing three-dimensional viscous transonic flows), and (2) the use of these computer codes to study a number of important multistage phenomena.

  7. Contention Modeling for Multithreaded Distributed Shared Memory Machines: The Cray XMT

    SciTech Connect

    Secchi, Simone; Tumeo, Antonino; Villa, Oreste

    2011-07-27

    Distributed Shared Memory (DSM) machines are a wide class of multi-processor computing systems where a large virtually-shared address space is mapped on a network of physically distributed memories. High memory latency and network contention are two of the main factors that limit performance scaling of such architectures. Modern high-performance computing DSM systems have evolved toward exploitation of massive hardware multi-threading and fine-grained memory hashing to tolerate irregular latencies, avoid network hot-spots and enable high scaling. In order to model the performance of such large-scale machines, parallel simulation has been proved to be a promising approach to achieve good accuracy in reasonable times. One of the most critical factors in solving the simulation speed-accuracy trade-off is network modeling. The Cray XMT is a massively multi-threaded supercomputing architecture that belongs to the DSM class, since it implements a globally-shared address space abstraction on top of a physically distributed memory substrate. In this paper, we discuss the development of a contention-aware network model intended to be integrated in a full-system XMT simulator. We start by measuring the effects of network contention in a 128-processor XMT machine and then investigate the trade-off that exists between simulation accuracy and speed, by comparing three network models which operate at different levels of accuracy. The comparison and model validation is performed by executing a string-matching algorithm on the full-system simulator and on the XMT, using three datasets that generate noticeably different contention patterns.

  8. Modeling the Swift BAT Trigger Algorithm with Machine Learning

    NASA Technical Reports Server (NTRS)

    Graff, Philip B.; Lien, Amy Y.; Baker, John G.; Sakamoto, Takanori

    2015-01-01

    To draw inferences about gamma-ray burst (GRB) source populations based on Swift observations, it is essential to understand the detection efficiency of the Swift burst alert telescope (BAT). This study considers the problem of modeling the Swift BAT triggering algorithm for long GRBs, a computationally expensive procedure, and models it using machine learning algorithms. A large sample of simulated GRBs from Lien et al. (2014) is used to train various models: random forests, boosted decision trees (with AdaBoost), support vector machines, and artificial neural networks. The best models have accuracies of approximately 97% or better (errors of roughly 3% or less), which is a significant improvement on a cut in GRB flux, which has an accuracy of 89.6% (10.4% error). These models are then used to measure the detection efficiency of Swift as a function of redshift z, which is used to perform Bayesian parameter estimation on the GRB rate distribution. We find a local GRB rate density of eta_0 ≈ 0.48 (+0.41/-0.23) Gpc^-3 yr^-1 with power-law indices of eta_1 ≈ 1.7 (+0.6/-0.5) and eta_2 ≈ -5.9 (+5.7/-0.1) for GRBs above and below a break point of z_1 ≈ 6.8 (+2.8/-3.2). This methodology is able to improve upon earlier studies by more accurately modeling Swift detection and using this for fully Bayesian model fitting. The code used in this analysis is publicly available online.
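
    A sketch of the workflow on synthetic data: train a random-forest classifier to predict detection from toy GRB features, then tabulate detection efficiency in redshift bins. The feature generator and thresholds below are invented; the actual study trains on the Lien et al. trigger-simulation sample and also compares boosted trees, SVMs and neural networks.

```python
# Sketch: classifier-based detection model and efficiency-vs-redshift table.
# Features ("peak flux", "redshift") and the detection rule are toy inventions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(5)
n = 5000
z = rng.uniform(0.1, 10.0, n)
log_flux = rng.normal(0.0, 1.0, n) - 0.3 * z          # fainter at high z (toy)
detected = (log_flux + 0.1 * rng.normal(size=n)) > -1.5

X = np.column_stack([log_flux, z])
clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, detected)

# Detection efficiency vs redshift, averaged over the (toy) flux distribution.
for z_lo in range(0, 10, 2):
    mask = (z >= z_lo) & (z < z_lo + 2)
    eff = clf.predict_proba(X[mask])[:, 1].mean()
    print(f"z in [{z_lo}, {z_lo + 2}): efficiency ~ {eff:.2f}")
```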

  9. Machine vision algorithm generation using human visual models

    NASA Astrophysics Data System (ADS)

    Daley, Wayne D.; Doll, Theodore J.; McWhorter, Shane W.; Wasilewski, Anthony A.

    1999-01-01

    The design of robust machine vision algorithms is one of the most difficult parts of developing and integrating automated systems. Historically, most of the techniques have been developed using ad hoc methodologies. This problem is more severe in the area of natural/biological products, where it has been difficult to capture and model the natural variability to be expected in the products. This presents difficulty in performing quality and process control in the meat, fruit, and vegetable industries. While some systems have been introduced, they do not adequately address the wide range of needs. This paper will propose an algorithm development technique that utilizes models of the human visual system. It will address that subset of problems that humans perform well on, but that have proven difficult to automate with standard machine vision techniques. The basis of the technique evaluation will be the Georgia Tech Vision model. This approach demonstrates a high level of accuracy in its ability to solve difficult problems. This paper will present the approach, the results, and possibilities for implementation.

  10. Global atmospheric and ocean modeling on the connection machine

    SciTech Connect

    Atlas, S.R.

    1993-12-01

    This paper describes the high-level architecture of two parallel global climate models: an atmospheric model based on the Geophysical Fluid Dynamics Laboratory (GFDL) SKYHI model, and an ocean model descended from the Bryan-Cox-Semtner ocean general circulation model. These parallel models are being developed as part of a long-term research collaboration between Los Alamos National Laboratory (LANL) and the GFDL. The goal of this collaboration is to develop parallel global climate models which are modular in structure, portable across a wide variety of machine architectures and programming paradigms, and provide an appropriate starting point for a fully coupled model. Several design considerations have emerged as central to achieving these goals. These include the expression of the models in terms of mathematical primitives such as stencil operators, to facilitate performance optimization on different computational platforms; the isolation of communication from computation to allow flexible implementation of a single code under message-passing or data parallel programming paradigms; and judicious memory management to achieve modularity without memory explosion costs.

  11. Influence of Material Models Used in Finite Element Modeling on Cutting Forces in Machining

    NASA Astrophysics Data System (ADS)

    Jivishov, Vusal; Rzayev, Elchin

    2016-08-01

    Finite element modeling of machining is significantly influenced by various modeling input parameters such as boundary conditions, mesh size and distribution, as well as the properties of workpiece and tool materials. The flow stress model of the workpiece material is the most critical input parameter. However, it is very difficult to obtain experimental values under the same conditions as in machining operations. This paper analyses the influence of different material models for two steels (AISI 1045 and hardened AISI 52100) in finite element modelling of cutting forces. In this study, the machining process is scaled by a constant ratio of the variable depth of cut h and cutting edge radius rβ. The simulation results are compared with experimental measurements. This comparison illustrates some of the capabilities and limitations of FEM modelling.

  12. Intelligent machining of rough components from optimized CAD models

    NASA Astrophysics Data System (ADS)

    Lewis, Geoff; Thompson, William

    1995-08-01

    This paper describes a technique for automatically generating NC machine programs from CAD images of a rough work piece and an optimally positioned component. The paper briefly compares the generative and variant methods of automatic machine program development and then presents a technique based on the variant method where a reference machine program is transformed to machine the optimized component. The transformed machine program is examined to remove any redundant cutter motions and correct any invalid cutter motions. The research is part of a larger project on intelligent manufacturing systems and is being conducted at the CIM Centre, Swinburne University of Technology, Hawthorn, Australia.

  13. Rotary ATPases: models, machine elements and technical specifications.

    PubMed

    Stewart, Alastair G; Sobti, Meghna; Harvey, Richard P; Stock, Daniela

    2013-01-01

    Rotary ATPases are molecular rotary motors involved in biological energy conversion. They either synthesize or hydrolyze the universal biological energy carrier adenosine triphosphate. Recent work has elucidated the general architecture and subunit compositions of all three sub-types of rotary ATPases. Composite models of the intact F-, V- and A-type ATPases have been constructed by fitting high-resolution X-ray structures of individual subunits or sub-complexes into low-resolution electron densities of the intact enzymes derived from electron cryo-microscopy. Electron cryo-tomography has provided new insights into the supra-molecular arrangement of eukaryotic ATP synthases within mitochondria, and mass spectrometry has started to identify specifically bound lipids presumed to be essential for function. Taken together, these molecular snapshots show that nano-scale rotary engines have much in common with the basic design principles of man-made machines, from the function of individual "machine elements" to the requirement of the right "fuel" and "oil" for different types of motors.

  14. Ontological modelling of knowledge management for human-machine integrated design of ultra-precision grinding machine

    NASA Astrophysics Data System (ADS)

    Hong, Haibo; Yin, Yuehong; Chen, Xing

    2016-11-01

    Despite the rapid development of computer science and information technology, an efficient human-machine integrated enterprise information system for designing complex mechatronic products is still not fully accomplished, partly because of the inharmonious communication among collaborators. Therefore, one challenge in human-machine integration is how to establish an appropriate knowledge management (KM) model to support integration and sharing of heterogeneous product knowledge. Aiming at the diversity of design knowledge, this article proposes an ontology-based model to reach an unambiguous and normative representation of knowledge. First, an ontology-based human-machine integrated design framework is described, then corresponding ontologies and sub-ontologies are established according to different purposes and scopes. Second, a similarity calculation-based ontology integration method composed of ontology mapping and ontology merging is introduced. The ontology searching-based knowledge sharing method is then developed. Finally, a case of human-machine integrated design of a large ultra-precision grinding machine is used to demonstrate the effectiveness of the method.

  15. A salamander's flexible spinal network for locomotion, modeled at two levels of abstraction.

    PubMed

    Knüsel, Jeremie; Bicanski, Andrej; Ryczko, Dimitri; Cabelguen, Jean-Marie; Ijspeert, Auke Jan

    2013-08-01

    Animals have to coordinate a large number of muscles in different ways to efficiently move at various speeds and in different and complex environments. This coordination is in large part based on central pattern generators (CPGs). These neural networks are capable of producing complex rhythmic patterns when activated and modulated by relatively simple control signals. Although the generation of particular gaits by CPGs has been successfully modeled at many levels of abstraction, the principles underlying the generation and selection of a diversity of patterns of coordination in a single neural network are still not well understood. The present work specifically addresses the flexibility of the spinal locomotor networks in salamanders. We compare an abstract oscillator model and a CPG network composed of integrate-and-fire neurons, according to their ability to account for different axial patterns of coordination, and in particular the transition in gait between swimming and stepping modes. The topology of the network is inspired by models of the lamprey CPG, complemented by additions based on experimental data from isolated spinal cords of salamanders. Oscillatory centers of the limbs are included in a way that preserves the flexibility of the axial network. Similarly to the selection of forward and backward swimming in lamprey models via different excitation to the first axial segment, we can account for the modification of the axial coordination pattern between swimming and forward stepping on land in the salamander model, via different uncoupled frequencies in limb versus axial oscillators (for the same level of excitation). These results transfer partially to a more realistic model based on formal spiking neurons, and we discuss the difference between the abstract oscillator model and the model built with formal spiking neurons.

  16. Modeling the meaning of words: neural correlates of abstract and concrete noun processing.

    PubMed

    Mårtensson, Frida; Roll, Mikael; Apt, Pia; Horne, Merle

    2011-01-01

    We present a model relating analysis of abstract and concrete word meaning in terms of semantic features and contextual frames within a general framework of neurocognitive information processing. The approach taken here assumes concrete noun meanings to be intimately related to sensory feature constellations. These features are processed by posterior sensory regions of the brain, e.g. the occipital lobe, which handles visual information. The interpretation of abstract nouns, however, is likely to be more dependent on semantic frames and linguistic context. A greater involvement of more anteriorly located, perisylvian brain areas has previously been found for the processing of abstract words. In the present study, a word association test was carried out in order to compare semantic processing in healthy subjects (n=12) with subjects with aphasia due to perisylvian lesions (n=3) and occipital lesions (n=1). The word associations were coded into different categories depending on their semantic content. A double dissociation was found, where, compared to the controls, the perisylvian aphasic subjects had problems associating to abstract nouns and produced fewer semantic frame-based associations, whereas the occipital aphasic subject showed disturbances in concrete noun processing and made fewer semantic feature-based associations.

  17. An initial-abstraction, constant-loss model for unit hydrograph modeling for applicable watersheds in Texas

    USGS Publications Warehouse

    Asquith, William H.; Roussel, Meghan C.

    2007-01-01

    Estimation of representative hydrographs from design storms, which are known as design hydrographs, provides for cost-effective, risk-mitigated design of drainage structures such as bridges, culverts, roadways, and other infrastructure. During 2001-07, the U.S. Geological Survey (USGS), in cooperation with the Texas Department of Transportation, investigated runoff hydrographs, design storms, unit hydrographs, and watershed-loss models to enhance design hydrograph estimation in Texas. Design hydrographs ideally should mimic the general volume, peak, and shape of observed runoff hydrographs. Design hydrographs commonly are estimated in part by unit hydrographs. A unit hydrograph is defined as the runoff hydrograph that results from a unit pulse of excess rainfall uniformly distributed over the watershed at a constant rate for a specific duration. A time-distributed watershed-loss model is required for modeling by unit hydrographs. This report develops a specific time-distributed watershed-loss model known as an initial-abstraction, constant-loss model. For this watershed-loss model, a watershed is conceptualized to have the capacity to store or abstract an absolute depth of rainfall at and near the beginning of a storm. Depths of total rainfall less than this initial abstraction do not produce runoff. The watershed also is conceptualized to have the capacity to remove rainfall at a constant rate (loss) after the initial abstraction is satisfied. Additional rainfall inputs after the initial abstraction is satisfied contribute to runoff if the rainfall rate (intensity) is larger than the constant loss. The initial-abstraction, constant-loss model thus is a two-parameter model. The initial-abstraction, constant-loss model is investigated through detailed computational and statistical analysis of observed rainfall and runoff data for 92 USGS streamflow-gaging stations (watersheds) in Texas with contributing drainage areas from 0.26 to 166 square miles. The analysis is
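
    The two-parameter loss model described above reduces to a short calculation. The sketch below is a minimal interpretation of it (time step, units, and parameter values are illustrative, not taken from the report): rainfall first fills the initial abstraction, and a constant loss rate is then subtracted from each interval's remaining rainfall.

```python
def excess_rainfall(rainfall, initial_abstraction, constant_loss):
    """rainfall: depths per time step; initial_abstraction: depth stored before
    any runoff occurs; constant_loss: depth removed per time step afterwards."""
    remaining_ia = initial_abstraction
    excess = []
    for depth in rainfall:
        abstracted = min(depth, remaining_ia)      # fill the initial abstraction first
        remaining_ia -= abstracted
        available = depth - abstracted
        excess.append(max(0.0, available - constant_loss))  # constant-rate loss
    return excess

# Illustrative hyetograph (depth per interval); output is approximately [0, 0, 0.8, 0.1].
print(excess_rainfall([0.2, 0.5, 1.0, 0.3], initial_abstraction=0.5, constant_loss=0.2))
```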

  18. Kinetic modeling of α-hydrogen abstractions from unsaturated and saturated oxygenate compounds by hydrogen atoms.

    PubMed

    Paraskevas, Paschalis D; Sabbe, Maarten K; Reyniers, Marie-Françoise; Papayannakos, Nikos G; Marin, Guy B

    2014-10-01

    Hydrogen-abstraction reactions play a significant role in thermal biomass conversion processes, as well as regular gasification, pyrolysis, or combustion. In this work, a group additivity model is constructed that allows prediction of reaction rates and Arrhenius parameters of hydrogen abstractions by hydrogen atoms from alcohols, ethers, esters, peroxides, ketones, aldehydes, acids, and diketones in a broad temperature range (300-2000 K). A training set of 60 reactions was developed with rate coefficients and Arrhenius parameters calculated by the CBS-QB3 method in the high-pressure limit with tunneling corrections using Eckart tunneling coefficients. From this set of reactions, 15 group additive values were derived for the forward and the reverse reaction, 4 referring to primary and 11 to secondary contributions. The accuracy of the model is validated upon an ab initio and an experimental validation set of 19 and 21 reaction rates, respectively, showing that reaction rates can be predicted with a mean factor of deviation of 2 for the ab initio and 3 for the experimental values. Hence, this work illustrates that the developed group additive model can be reliably applied for the accurate prediction of kinetics of α-hydrogen abstractions by hydrogen atoms from a broad range of oxygenates. PMID:25209711
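
    For orientation, the sketch below gives the generic form typically used in this kind of group-additivity kinetics: a reference reaction supplies base Arrhenius parameters, and group additive values (GAVs) enter as additive corrections. The exact reference reaction, temperature dependence, and GAV definitions used in the paper are not reproduced here; this is only the general shape of the approach.

```latex
k(T) = A \exp\!\left(-\frac{E_a}{RT}\right), \qquad
\log A = \log A_{\mathrm{ref}} + \sum_i \Delta\mathrm{GAV}^{\circ}_{\log A}(i), \qquad
E_a = E_{a,\mathrm{ref}} + \sum_i \Delta\mathrm{GAV}^{\circ}_{E_a}(i)
```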

  19. Physiological model of motion analysis for machine vision

    NASA Astrophysics Data System (ADS)

    Young, Richard A.; Lesperance, Ronald M.

    1993-09-01

    We studied the spatio-temporal shape of "receptive fields" of simple cells in the monkey visual cortex. Receptive fields are maps of the regions in space and time that affect a cell's electrical responses. Fields with no change in shape over time responded to all directions of motion; fields with changing shape over time responded to only some directions of motion. A Gaussian Derivative (GD) model fit these fields well, in a transformed variable space that aligned the centers and principal axes of the field and model in space-time. The model accounts for fields that vary in orientation, location, spatial scale, motion properties, and number of lobes. The model requires only ten parameters (the minimum possible) to describe fields in two dimensions of space and one of time. A difference-of-offset-Gaussians (DOOG) provides a plausible physiological means to form GD model fields. Because of its simplicity, the GD model improves the efficiency of machine vision systems for analyzing motion. An implementation produced robust local estimates of the direction and speed of moving objects in real scenes.
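
    A minimal sketch of what such a space-time receptive-field profile can look like is given below: a first derivative of a Gaussian in space, multiplied by a Gaussian temporal envelope, with an optional drift of the spatial center that tilts the field in space-time and makes it direction selective. The parameters and the drifting-center construction are illustrative assumptions, not the paper's ten-parameter formulation.

```python
import numpy as np

def gaussian(u, sigma):
    return np.exp(-0.5 * (u / sigma) ** 2) / (np.sqrt(2.0 * np.pi) * sigma)

def gd_receptive_field(x, t, sigma_x=1.0, sigma_t=0.5, drift_speed=0.0):
    """First spatial derivative of a Gaussian whose center drifts at drift_speed;
    a nonzero drift gives a space-time-oriented (direction-selective) field."""
    xs = x[None, :] - drift_speed * t[:, None]        # drifting spatial coordinate
    g = gaussian(xs, sigma_x)
    dg_dx = -(xs / sigma_x**2) * g                    # derivative of the Gaussian in space
    return dg_dx * gaussian(t, sigma_t)[:, None]      # separable temporal envelope

x = np.linspace(-4, 4, 81)
t = np.linspace(-1.5, 1.5, 31)
static_field = gd_receptive_field(x, t, drift_speed=0.0)   # shape constant over time
moving_field = gd_receptive_field(x, t, drift_speed=2.0)   # shape changes over time
print(static_field.shape, moving_field.shape)               # (31, 81) fields in (t, x)
```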

  20. Risk Classification with an Adaptive Naive Bayes Kernel Machine Model

    PubMed Central

    Minnier, Jessica; Yuan, Ming; Liu, Jun S.; Cai, Tianxi

    2014-01-01

    Genetic studies of complex traits have uncovered only a small number of risk markers explaining a small fraction of heritability and adding little improvement to disease risk prediction. Standard single-marker methods may lack power in selecting informative markers or estimating effects. Most existing methods also typically do not account for non-linearity. Identifying markers with weak signals and estimating their joint effects among many non-informative markers remains challenging. One potential approach is to group markers based on biological knowledge such as gene structure. If markers in a group tend to have similar effects, proper usage of the group structure could improve power and efficiency in estimation. We propose a two-stage method relating markers to disease risk by taking advantage of known gene-set structures. Imposing a naive Bayes kernel machine (KM) model, we estimate gene-set specific risk models that relate each gene-set to the outcome in stage I. The KM framework efficiently models potentially non-linear effects of predictors without requiring explicit specification of functional forms. In stage II, we aggregate information across gene-sets via a regularization procedure. Estimation and computational efficiency are further improved with kernel principal component analysis. Asymptotic results for model estimation and gene-set selection are derived, and numerical studies suggest that the proposed procedure could outperform existing procedures for constructing genetic risk models. PMID:26236061

  1. The abstract geometry modeling language (AgML): experience and road map toward eRHIC

    NASA Astrophysics Data System (ADS)

    Webb, Jason; Lauret, Jerome; Perevoztchikov, Victor

    2014-06-01

    The STAR experiment has adopted an Abstract Geometry Modeling Language (AgML) as the primary description of our geometry model. AgML establishes a level of abstraction, decoupling the definition of the detector from the software libraries used to create the concrete geometry model. Thus, AgML allows us to support both our legacy GEANT 3 simulation application and our ROOT/TGeo based reconstruction software from a single source, which is demonstrably self-consistent. While AgML was developed primarily as a tool to migrate away from our legacy FORTRAN-era geometry codes, it also provides a rich syntax geared towards the rapid development of detector models. AgML has been successfully employed by users to quickly develop and integrate the descriptions of several new detectors in the RHIC/STAR experiment, including the Forward GEM Tracker (FGT) and Heavy Flavor Tracker (HFT) upgrades installed in STAR for the 2012 and 2013 runs. AgML has furthermore been heavily utilized to study future upgrades to the STAR detector as it prepares for the eRHIC era. With its track record of practical use in a live experiment in mind, we present the status, lessons learned, and future of the AgML language as well as our experience in bringing the code into our production and development environments. We will discuss the path toward eRHIC and pushing the current model to accommodate detector misalignment and high-precision physics.

  2. Modeling the Virtual Machine Launching Overhead under Fermicloud

    SciTech Connect

    Garzoglio, Gabriele; Wu, Hao; Ren, Shangping; Timm, Steven; Bernabeu, Gerard; Noh, Seo-Young

    2014-11-12

    FermiCloud is a private cloud developed by the Fermi National Accelerator Laboratory for scientific workflows. The Cloud Bursting module enables FermiCloud, when more computational resources are needed, to automatically launch virtual machines on available resources such as public clouds. One of the main challenges in developing the cloud bursting module is to decide when and where to launch a VM so that all resources are used most effectively and efficiently and the system performance is optimized. However, based on FermiCloud’s system operational data, the VM launching overhead is not constant. It varies with physical resource (CPU, memory, I/O device) utilization at the time when a VM is launched. Hence, to make judicious decisions as to when and where a VM should be launched, a VM launch overhead reference model is needed. This paper develops a VM launch overhead reference model based on operational data obtained on FermiCloud and uses the reference model to guide the cloud bursting process.
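
    A hedged sketch of what a launch-overhead reference model could look like is shown below: regress observed launch times on the host's resource utilization sampled at launch, then use the fitted model to rank candidate hosts. The feature set, functional form, and synthetic data are assumptions for illustration, not FermiCloud's actual model.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 500
cpu, mem, io = rng.uniform(0.0, 1.0, (3, n))          # utilization fractions at launch time
launch_time = 30 + 40 * cpu + 25 * io**2 + rng.normal(0, 3, n)   # toy "observed" overhead (s)

X = np.column_stack([cpu, mem, io, io**2])
model = LinearRegression().fit(X, launch_time)

# Cloud-bursting style decision: prefer the candidate host with the smallest
# predicted launch overhead.
candidates = np.array([[0.9, 0.4, 0.7, 0.49],
                       [0.2, 0.6, 0.1, 0.01]])
print("predicted overhead (s):", model.predict(candidates))
```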

  3. Machine learning and cosmological simulations - I. Semi-analytical models

    NASA Astrophysics Data System (ADS)

    Kamdar, Harshil M.; Turk, Matthew J.; Brunner, Robert J.

    2016-01-01

    We present a new exploratory framework to model galaxy formation and evolution in a hierarchical Universe by using machine learning (ML). Our motivations are two-fold: (1) presenting a new, promising technique to study galaxy formation, and (2) quantitatively analysing the extent of the influence of dark matter halo properties on galaxies in the backdrop of semi-analytical models (SAMs). We use the influential Millennium Simulation and the corresponding Munich SAM to train and test various sophisticated ML algorithms (k-Nearest Neighbors, decision trees, random forests, and extremely randomized trees). By using only essential dark matter halo physical properties for haloes of M > 10^12 M⊙ and a partial merger tree, our model predicts the hot gas mass, cold gas mass, bulge mass, total stellar mass, black hole mass and cooling radius at z = 0 for each central galaxy in a dark matter halo for the Millennium run. Our results provide a unique and powerful phenomenological framework to explore the galaxy-halo connection that is built upon SAMs and demonstrably place ML as a promising and computationally efficient tool to study small-scale structure formation.
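
    In the same spirit, the toy sketch below maps a few halo properties to a single galaxy property with extremely randomized trees. The halo "catalogue" is fabricated on the fly; it only illustrates the halo-in, galaxy-property-out regression setup, not the Millennium/SAM data or the paper's feature set.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n = 20000
log_halo_mass = rng.uniform(12.0, 15.0, n)          # log10(M/Msun), i.e. M > 1e12 Msun
spin, concentration = rng.uniform(0.0, 1.0, (2, n))
X = np.column_stack([log_halo_mass, spin, concentration])
# Toy stand-in for a semi-analytic model's stellar-mass output:
y = 0.7 * log_halo_mass + 0.3 * np.sin(3 * spin) - 0.2 * concentration + rng.normal(0, 0.1, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
reg = ExtraTreesRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("R^2 on held-out haloes:", reg.score(X_te, y_te))
```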

  4. Access, Equity, and Opportunity. Women in Machining: A Model Program.

    ERIC Educational Resources Information Center

    Warner, Heather

    The Women in Machining (WIM) program is a Machine Action Project (MAP) initiative that was developed in response to a local skilled metalworking labor shortage, despite a virtual absence of women and people of color from area shops. The project identified post-war stereotypes and other barriers that must be addressed if women are to have an equal…

  5. Parameterizing Phrase Based Statistical Machine Translation Models: An Analytic Study

    ERIC Educational Resources Information Center

    Cer, Daniel

    2011-01-01

    The goal of this dissertation is to determine the best way to train a statistical machine translation system. I first develop a state-of-the-art machine translation system called Phrasal and then use it to examine a wide variety of potential learning algorithms and optimization criteria and arrive at two very surprising results. First, despite the…

  6. Modeling Stochastic Kinetics of Molecular Machines at Multiple Levels: From Molecules to Modules

    PubMed Central

    Chowdhury, Debashish

    2013-01-01

    A molecular machine is either a single macromolecule or a macromolecular complex. In spite of the striking superficial similarities between these natural nanomachines and their man-made macroscopic counterparts, there are crucial differences. Molecular machines in a living cell operate stochastically in an isothermal environment far from thermodynamic equilibrium. In this mini-review we present a catalog of the molecular machines and an inventory of the essential toolbox for theoretically modeling these machines. The tool kits include 1), nonequilibrium statistical-physics techniques for modeling machines and machine-driven processes; and 2), statistical-inference methods for reverse engineering a functional machine from the empirical data. The cell is often likened to a microfactory in which the machineries are organized in modular fashion; each module consists of strongly coupled multiple machines, but different modules interact weakly with each other. This microfactory has its own automated supply chain and delivery system. Buoyed by the success achieved in modeling individual molecular machines, we advocate integration of these models in the near future to develop models of functional modules. A system-level description of the cell from the perspective of molecular machinery (the mechanome) is likely to emerge from further integrations that we envisage here. PMID:23746505

  7. Modeling stochastic kinetics of molecular machines at multiple levels: from molecules to modules.

    PubMed

    Chowdhury, Debashish

    2013-06-01

    A molecular machine is either a single macromolecule or a macromolecular complex. In spite of the striking superficial similarities between these natural nanomachines and their man-made macroscopic counterparts, there are crucial differences. Molecular machines in a living cell operate stochastically in an isothermal environment far from thermodynamic equilibrium. In this mini-review we present a catalog of the molecular machines and an inventory of the essential toolbox for theoretically modeling these machines. The tool kits include 1), nonequilibrium statistical-physics techniques for modeling machines and machine-driven processes; and 2), statistical-inference methods for reverse engineering a functional machine from the empirical data. The cell is often likened to a microfactory in which the machineries are organized in modular fashion; each module consists of strongly coupled multiple machines, but different modules interact weakly with each other. This microfactory has its own automated supply chain and delivery system. Buoyed by the success achieved in modeling individual molecular machines, we advocate integration of these models in the near future to develop models of functional modules. A system-level description of the cell from the perspective of molecular machinery (the mechanome) is likely to emerge from further integrations that we envisage here.

  8. Alternative Models of Service, Centralized Machine Operations. Phase II Report. Volume II.

    ERIC Educational Resources Information Center

    Technology Management Corp., Alexandria, VA.

    A study was conducted to determine if the centralization of playback machine operations for the national free library program would be feasible, economical, and desirable. An alternative model of playback machine services was constructed and compared with existing network operations considering both cost and service. The alternative model was…

  9. Modelling the sensitivity of river reaches to water abstraction: RAPHSA- a hydroecology tool for environmental managers

    NASA Astrophysics Data System (ADS)

    Klaar, Megan; Laize, Cedric; Maddock, Ian; Acreman, Mike; Tanner, Kath; Peet, Sarah

    2014-05-01

    A key challenge for environmental managers is the determination of environmental flows which allow a maximum yield of water resources to be taken from surface and sub-surface sources, whilst ensuring sufficient water remains in the environment to support biota and habitats. It has long been known that sensitivity to changes in water levels resulting from river and groundwater abstractions varies between rivers. Whilst assessment at the catchment scale is ideal for determining broad pressures on water resources and ecosystems, assessment of the sensitivity of reaches to changes in flow has previously been done on a site-by-site basis, often with the application of detailed but time-consuming techniques (e.g. PHABSIM). While this is appropriate for a limited number of sites, it is costly in terms of money and time and therefore not appropriate for application at the national level required by responsible licensing authorities. To address this need, the Environment Agency (England) is developing an operational tool to predict relationships between physical habitat and flow which may be applied by field staff to rapidly determine the sensitivity of physical habitat to flow alteration for use in water resource management planning. An initial model of river sensitivity to abstraction (defined as the change in physical habitat related to changes in river discharge) was developed using site characteristics and data from 66 individual PHABSIM surveys throughout the UK (Booker & Acreman, 2008). By applying multivariate multiple linear regression analysis to the data to define habitat availability-flow curves, using resource intensity as predictor variables, the model (known as RAPHSA: Rapid Assessment of Physical Habitat Sensitivity to Abstraction) is able to take a risk-based approach to modelled certainty. Site-specific information gathered using desk-based methods, or a variable amount of field work, can be used to predict the shape of the habitat-flow curves, with the

  10. The prototype effect revisited: Evidence for an abstract feature model of face recognition.

    PubMed

    Wallis, Guy; Siebeck, Ulrike E; Swann, Kellie; Blanz, Volker; Bülthoff, Heinrich H

    2008-01-01

    Humans typically have a remarkable memory for faces. Nonetheless, in some cases they can be fooled. Experiments described in this paper provide new evidence for an effect in which observers falsely "recognize" a face that they have never seen before. The face is a chimera (prototype) built from parts extracted from previously viewed faces. It is known that faces of this kind can be confused with truly familiar faces, a result referred to as the prototype effect. However, recent studies have failed to find evidence for a full effect, one in which the prototype is regarded not only as familiar, but as more familiar than faces which have been seen before. This study sought to reinvestigate the effect. In a pair of experiments, evidence is reported for the full effect based on both an old/new discrimination task and a familiarity ranking task. The results are shown to be consistent with a recognition model in which faces are represented as combinations of reusable, abstract features. In a final experiment, novel predictions of the model are verified by comparing the size of the prototype effect for upright and upside-down faces. Despite the fundamentally piecewise nature of the model, an explanation is provided as to how it can also account for the sensitivity of observers to configural and holistic cues. This discussion is backed up with the use of an unsupervised network model. Overall, the paper describes how an abstract feature-based model can reconcile a range of results in the face recognition literature and, in turn, lessen currently perceived differences between the representation of faces and other objects. PMID:18484826

  11. R-Models: a mathematical framework for capturing notions of abstraction and assistance in reproductive systems.

    PubMed

    Webster, Matt; Malcolm, Grant

    2012-11-01

    R-Models are an approach to capturing notions of assistance and abstraction in reproductive systems, based on labelled transition systems and Gibson's theory of affordances. R-Models incorporate a labelled transition system that describes how a reproductive system changes over the course of reproduction. The actors in the system are represented by a set of entities together with a relation describing the states in which those entities are present, and an affordance-modelling function mapping actions to sets of entities which enable those actions to be performed. We show how R-models can be classified based on whether the reproducer is assisted or unassisted in reproduction, and whether or not the reproducer is active during reproduction. We prove that all assisted and unassisted R-models have a related R-model which has the opposite classification. We discuss the relevance to the field of artificial life, give a potential application to the field of computer virology, and demonstrate reproduction modelling and classification in action using examples.

  12. Unitary dilation models of Turing machines in quantum mechanics

    SciTech Connect

    Benioff, P.

    1995-05-01

    A goal of quantum-mechanical models of the computation process is the description of operators that model changes in the information-bearing degrees of freedom. Iteration of the operators should correspond to steps in the computation, and the final state of halting computations should be stable under iteration. The problem is that operators constructed directly from the process description do not have these properties. In general these operators annihilate the halted state. If information-erasing steps are present, there are additional problems. These problems are illustrated in this paper by consideration of operators for two simple one-step processes and two simple Turing machines. In general the operators are not unitary and, if erasing steps are present, they are not even contraction operators. Various methods of extension or dilation to unitary operators are discussed. Here unitary power dilations are considered as a solution to these problems. It is seen that these dilations automatically provide a good solution to the initial- and final-state problems. For processes with erasing steps, recording steps must be included prior to the dilation, but only for the steps that erase information. Hamiltonians for these processes are also discussed. It is noted that H, described by exp(-iHΔ) = U^T, where U^T is a unitary step operator for the process and Δ a time interval, has complexity problems. These problems and those noted above are avoided here by the use of the Feynman approach to constructing Hamiltonians directly from the unitary power dilations of the model operators. It is seen that the Hamiltonians so constructed have some interesting properties.

  13. Experimental "evolutional machines": mathematical and experimental modeling of biological evolution

    NASA Astrophysics Data System (ADS)

    Brilkov, A. V.; Loginov, I. A.; Morozova, E. V.; Shuvaev, A. N.; Pechurkin, N. S.

    Experimentalists possess model systems of two major types for the study of evolution: continuous cultivation in the chemostat, and long-term development in closed laboratory microecosystems with several trophic structures. If evolutionary changes, or transfers from one steady state to another resulting from changing qualitative properties of the system, take place in such systems, the main characteristics of these evolutionary steps can be measured. So far this has not been realized from the point of view of methodology, though a lot of data on the work of both types of evolutionary machines has been collected. In our experiments with long-term continuous cultivation we used bacterial strains carrying, on plasmids, the cloned genes of bioluminescence and green fluorescent protein, whose expression level can be easily changed and controlled. In spite of the apparent kinetic diversity of evolutionary transfers in the two types of systems, the general mechanisms characterizing the increase of the energy flow used by populations of the primary producer can be revealed by studying them. According to the energy approach, at a spontaneous transfer from one steady state to another, e.g. in the process of microevolution, competition or selection, heat dissipation, which characterizes the rate of entropy growth, should increase rather than decrease or remain steady, as is usually believed. The results of our observations of experimental evolution require further development of the thermodynamic theory of open and closed biological systems and further study of the general mechanisms of biological

  14. Mathematical modeling of synergetic aspects of machine building enterprise management

    NASA Astrophysics Data System (ADS)

    Kazakov, O. D.; Andriyanov, S. V.

    2016-04-01

    A multivariate method for determining the optimal values of the leading key performance indicators of the production divisions of machine-building enterprises, viewed from the perspective of synergetics, has been developed.

  15. Complex hybrid models combining deterministic and machine learning components for numerical climate modeling and weather prediction.

    PubMed

    Krasnopolsky, Vladimir M; Fox-Rabinovitz, Michael S

    2006-03-01

    A new practical application of neural network (NN) techniques to environmental numerical modeling has been developed. Namely, a new type of numerical model, a complex hybrid environmental model based on a synergetic combination of deterministic and machine learning model components, has been introduced. Conceptual and practical possibilities of developing hybrid models are discussed in this paper for applications to climate modeling and weather prediction. The approach presented here uses NNs as a statistical or machine learning technique to develop highly accurate and fast emulations of time-consuming model physics components (model physics parameterizations). The NN emulations of the most time-consuming model physics components, short- and long-wave radiation parameterizations or full model radiation, presented in this paper are combined with the remaining deterministic components (like model dynamics) of the original complex environmental model--a general circulation model or global climate model (GCM)--to constitute a hybrid GCM (HGCM). The parallel GCM and HGCM simulations produce very similar results, but the HGCM is significantly faster. The speed-up of model calculations opens the opportunity for model improvement. Examples of developed HGCMs illustrate the feasibility and efficiency of the new approach for modeling complex multidimensional interdisciplinary systems.
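
    The core idea, emulating an expensive parameterization with a fast statistical surrogate and calling the surrogate inside the model, can be sketched in a few lines. The "parameterization" below is a toy placeholder, and the network size and training set are arbitrary assumptions; this is not the NN radiation emulation of the paper.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def expensive_parameterization(state):
    # Placeholder for a costly column-physics routine (e.g. radiation).
    return np.sin(state).sum(axis=1) + 0.1 * (state**2).sum(axis=1)

rng = np.random.default_rng(3)
X_train = rng.uniform(-1.0, 1.0, (20000, 10))       # sampled model-state columns
y_train = expensive_parameterization(X_train)

# Train a small neural-network emulator of the parameterization.
emulator = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
emulator.fit(X_train, y_train)

# Inside the hybrid model's time loop, the emulator replaces the original call.
state = rng.uniform(-1.0, 1.0, (5, 10))
print("original:", np.round(expensive_parameterization(state), 3))
print("emulated:", np.round(emulator.predict(state), 3))
```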

  16. Machinability and modeling of cutting mechanism for Titanium Metal Matrix composites

    NASA Astrophysics Data System (ADS)

    Bejjani, Roland

    Titanium Metal Matrix Composites (TiMMC) are a new class of material that is very difficult to cut, so tool life is limited. In order to optimize the machining of TiMMC, three approaches (stages) were used. First, a Taguchi design of experiments was used to identify the effects of the machining inputs (speed, feed, depth of cut) on the outputs (cutting forces, surface roughness). To enhance tool life even further, Laser Assisted Machining (LAM) was also investigated. In a second approach, and in order to better understand the cutting mechanism of TiMMC, the chip formation was analyzed and a new model for the adiabatic shear band in the chip segment was developed. In the last approach, and in order to have a better analysis tool for understanding the cutting mechanism, a new constitutive model of TiMMC for simulation purposes was developed, with an added damage model. The FEM simulation results led to predictions of temperature, stress, strain, and damage, and can be used as an analysis tool and even for industrial applications. Following experimental work and analysis, I found that cutting TiMMC at higher speeds is more efficient and productive because it increases tool life. It was found that at higher speeds, fewer hard TiC particles are broken, resulting in reduced tool abrasion wear. In order to further optimize the machining of TiMMC, an unconventional machining method was used. In fact, Laser Assisted Machining (LAM) was used and was found to increase tool life by approximately 180%. To understand the effects of the particles on the tool, micro-scale observations of hard particles with SEM microscopy were performed, and it was found that the tool/particle interaction during cutting can take three forms: the particles can be cut at the surface, pushed inside the material, or pieces of the cut particles can be pushed inside the material. No particle de-bonding was observed. Some

  17. DFT modeling of chemistry on the Z machine

    NASA Astrophysics Data System (ADS)

    Mattsson, Thomas

    2013-06-01

    Density Functional Theory (DFT) has proven remarkably accurate in predicting properties of matter under shock compression for a wide range of elements and compounds: from hydrogen to xenon via water. Materials where chemistry plays a role are of particular interest for many applications. For example, the deep interiors of Neptune, Uranus, and hundreds of similar exoplanets are composed of molecular ices of carbon, hydrogen, oxygen, and nitrogen at pressures of several hundred GPa and temperatures of many thousand kelvin. High-quality thermophysical experimental data and high-fidelity simulations including chemical reactions are necessary to constrain planetary models over a large range of conditions. As examples of where chemical reactions are important, and to demonstrate the high fidelity possible for these both structurally and chemically complex systems, we will discuss shock and re-shock of liquid carbon dioxide (CO2) in the range 100 to 800 GPa, shock compression of the hydrocarbon polymers polyethylene (PE) and poly(4-methyl-1-pentene) (PMP), and finally simulations of shock compression of glow discharge polymer (GDP), including the effects of doping with germanium. Experimental results from Sandia's Z machine have time and again validated the DFT simulations at extreme conditions, and the combination of experiment and DFT provides reliable data for evaluating existing and constructing future wide-range equation of state models for molecular compounds like CO2 and polymers like PE, PMP, and GDP. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Company, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  18. Job shop scheduling model for non-identic machine with fixed delivery time to minimize tardiness

    NASA Astrophysics Data System (ADS)

    Kusuma, K. K.; Maruf, A.

    2016-02-01

    Scheduling problems for non-identical machines with low-utilization characteristics and fixed delivery times are frequent in the manufacturing industry. This paper proposes a mathematical model to minimize total tardiness for non-identical machines in a job shop environment. The model is formulated as an integer linear programming model and solved with a branch-and-bound algorithm. Fixed delivery times are used as the main constraint, and processing times for a job differ between machines. The results of the proposed model show that the utilization of production machines can be increased, with minimal tardiness, when fixed delivery times are used as a constraint.
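
    To make the tardiness objective concrete, the sketch below states a much-simplified version of such a model (one machine, fixed due dates, big-M sequencing constraints) with PuLP, whose default solver is branch-and-bound based. The job data and the single-machine simplification are illustrative assumptions; the paper's model additionally handles non-identical machines in a job shop.

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, value

proc = {"J1": 4, "J2": 3, "J3": 6}      # processing times (hypothetical)
due  = {"J1": 5, "J2": 9, "J3": 10}     # fixed delivery (due) times (hypothetical)
jobs = list(proc)
M = sum(proc.values())                  # big-M for the sequencing constraints

prob = LpProblem("minimize_total_tardiness", LpMinimize)
C = {j: LpVariable(f"C_{j}", lowBound=proc[j]) for j in jobs}   # completion times
T = {j: LpVariable(f"T_{j}", lowBound=0) for j in jobs}         # tardiness
y = {(i, j): LpVariable(f"y_{i}_{j}", cat="Binary")
     for i in jobs for j in jobs if i < j}                      # 1 if i precedes j

prob += lpSum(T[j] for j in jobs)                               # total tardiness
for i, j in y:
    prob += C[j] >= C[i] + proc[j] - M * (1 - y[i, j])          # i before j
    prob += C[i] >= C[j] + proc[i] - M * y[i, j]                # j before i
for j in jobs:
    prob += T[j] >= C[j] - due[j]                               # tardiness definition

prob.solve()
print({j: (value(C[j]), value(T[j])) for j in jobs}, "total:", value(prob.objective))
```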

  19. A Consistent Information Criterion for Support Vector Machines in Diverging Model Spaces

    PubMed Central

    Zhang, Xiang; Wu, Yichao; Wang, Lan; Li, Runze

    2015-01-01

    Information criteria have been popularly used in model selection and proved to possess nice theoretical properties. For classification, Claeskens et al. (2008) proposed a support vector machine information criterion for feature selection and provided encouraging numerical evidence, yet no theoretical justification was given there. This work aims to fill the gap and to provide theoretical justification for the support vector machine information criterion in both fixed and diverging model spaces. We first derive a uniform convergence rate for the support vector machine solution and then show that a modification of the support vector machine information criterion achieves model selection consistency even when the number of features diverges at an exponential rate of the sample size. This consistency result can further be applied to selecting the optimal tuning parameter for various penalized support vector machine methods. Finite-sample performance of the proposed information criterion is investigated using Monte Carlo studies and one real-world gene selection problem. PMID:27239164

  20. Using financial risk measures for analyzing generalization performance of machine learning models.

    PubMed

    Takeda, Akiko; Kanamori, Takafumi

    2014-09-01

    We propose a unified machine learning model (UMLM) for two-class classification, regression, and outlier (or novelty) detection via a robust optimization approach. The model embraces various machine learning models such as support vector machine-based and minimax probability machine-based classification and regression models. The unified framework makes it possible to compare and contrast existing learning models and to explain their differences and similarities. In this paper, after relating existing learning models to UMLM, we show some theoretical properties of UMLM. Concretely, we show an interpretation of UMLM as minimizing a well-known financial risk measure (worst-case value-at-risk (VaR) or conditional VaR), derive generalization bounds for UMLM using such a risk measure, and prove that solving problems of UMLM leads to estimators with the minimized generalization bounds. These theoretical properties are applicable to related existing learning models.

  1. A Sustainable Model for Integrating Current Topics in Machine Learning Research into the Undergraduate Curriculum

    ERIC Educational Resources Information Center

    Georgiopoulos, M.; DeMara, R. F.; Gonzalez, A. J.; Wu, A. S.; Mollaghasemi, M.; Gelenbe, E.; Kysilka, M.; Secretan, J.; Sharma, C. A.; Alnsour, A. J.

    2009-01-01

    This paper presents an integrated research and teaching model that has resulted from an NSF-funded effort to introduce results of current Machine Learning research into the engineering and computer science curriculum at the University of Central Florida (UCF). While in-depth exposure to current topics in Machine Learning has traditionally occurred…

  2. Abstract Painting

    ERIC Educational Resources Information Center

    Henkes, Robert

    1978-01-01

    Abstract art provokes numerous interpretations, and as many misunderstandings. The adolescent reaction is no exception. The procedure described here can help the student to understand the abstract from at least one direction. (Author/RK)

  3. On problems in defining abstract and metaphysical concepts--emergence of a new model.

    PubMed

    Nahod, Bruno; Nahod, Perina Vukša

    2014-12-01

    Basic anthropological terminology is the first project covering terms from the domain of the social sciences under the Croatian Special Field Terminology program (Struna). Problems that had been sporadically noticed, or whose existence could have been presumed, during the processing of terms mainly from technical fields and sciences have finally emerged in "anthropology". The principles of the General Theory of Terminology (GTT), which are followed in Struna, were put to a truly exacting test, and sometimes stretched beyond their limits, when applied to concepts that do not necessarily have references in the physical world; namely, abstract and metaphysical concepts. We are currently developing a new terminographical model based on Idealized Cognitive Models (ICM), which will hopefully ensure a better cross-field implementation of various types of concepts and their relations. The goal of this paper is to introduce the theoretical bases of our model. Additionally, we present a pilot study from the series of experiments in which we are trying to investigate the nature of conceptual categorization in special languages and its proposed difference from categorization in general language.

  4. Comparison of two different surfaces for 3d model abstraction in support of remote sensing simulations

    SciTech Connect

    Pope, Paul A; Ranken, Doug M

    2010-01-01

    A method for abstracting a 3D model by shrinking a triangular mesh, defined upon a best-fitting ellipsoid surrounding the model, onto the model's surface has been previously described. This "shrinkwrap" process enables a semi-regular mesh to be defined upon an object's surface, creating a useful data structure for conducting remote sensing simulations and image processing. However, using a best-fitting ellipsoid having a graticule-based tessellation to seed the shrinkwrap process suffers from a mesh which is too dense at the poles. To achieve a more regular mesh, the use of a best-fitting, subdivided icosahedron was tested. By subdividing each of the twenty facets of the icosahedron into regular triangles of a predetermined size, arbitrarily dense, highly regular starting meshes can be created. Comparisons of the meshes resulting from these two seed surfaces are described. Use of a best-fitting icosahedron-based mesh as the seed surface in the shrinkwrap process is preferable to using a best-fitting ellipsoid. The impacts on remote sensing simulations, specifically the generation of synthetic imagery, are illustrated.

  5. (abstract) Modeling Protein Families and Human Genes: Hidden Markov Models and a Little Beyond

    NASA Technical Reports Server (NTRS)

    Baldi, Pierre

    1994-01-01

    We will first give a brief overview of Hidden Markov Models (HMMs) and their use in Computational Molecular Biology. In particular, we will describe a detailed application of HMMs to the G-Protein-Coupled-Receptor Superfamily. We will also describe a number of analytical results on HMMs that can be used in discrimination tests and database mining. We will then discuss the limitations of HMMs and some new directions of research. We will conclude with some recent results on the application of HMMs to human gene modeling and parsing.

  6. A Framework for the Abstraction of Mesoscale Modeling for Weather Simulation

    NASA Astrophysics Data System (ADS)

    Limpasuvan, V.; Ujcich, B. E.

    2009-12-01

    Widely disseminated weather forecast results (e. g. from various national centers and private companies) are useful for typical users in gauging future atmospheric disturbances. However, these canonical forecasts may not adequately meet the needs of end-users in the various scientific fields since a predetermined model, as structured by the model administrator, produces these forecasts. To perform his/her own successful forecasts, a user faces a steep learning curve involving the collection of initial condition data (e.g. radar, satellite, and reanalyses) and operation of a suitable model (and associated software/computing). In this project, we develop an intermediate (prototypical) software framework and a web-based front-end interface that allow for the abstraction of an advanced weather model upon which the end-user can perform customizable forecasts and analyses. Having such an accessible, front-end interface for a weather model can benefit educational programs at the secondary school and undergraduate level, scientific research in the fields like fluid dynamics and meteorology, and the general public. In all cases, our project allows the user to generate a localized domain of choice, run the desired forecast on a remote high-performance computer cluster, and visually see the results. For instance, an undergraduate science curriculum could incorporate the resulting weather forecast performed under this project in laboratory exercises. Scientific researchers and graduate students would be able to readily adjust key prognostic variables in the simulation within this project’s framework. The general public within the contiguous United States could also run a simplified version of the project’s software with adjustments in forecast clarity (spatial resolution) and region size (domain). Special cases of general interests, in which a detailed forecast may be required, would be over areas of possible strong weather activities.

  7. Categorization of sentence types in medical abstracts.

    PubMed

    McKnight, Larry; Srinivasan, Padmini

    2003-01-01

    This study evaluated the use of machine learning techniques in the classification of sentence type. 7253 structured abstracts and 204 unstructured abstracts of Randomized Controlled Trials from MEDLINE were parsed into sentences, and each sentence was labeled as one of four types (Introduction, Method, Result, or Conclusion). Support Vector Machine (SVM) and linear classifier models were generated and evaluated on cross-validated data. Treating sentences as a simple "bag of words", the SVM model had an average ROC area of 0.92. Adding a feature for relative sentence location improved performance markedly for some models, overall increasing the average ROC area to 0.95. Linear classifier performance was significantly worse than the SVM in all datasets. Using the SVM model trained on structured abstracts to predict unstructured abstracts yielded performance similar to that of models trained with unstructured abstracts in 3 of the 4 types. We conclude that classification of sentence type seems feasible within the domain of RCTs. Identification of sentence types may be helpful for providing context to end users or for other text summarization techniques.
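
    The two feature settings compared above (bag of words alone versus bag of words plus relative sentence location) can be sketched as below with scikit-learn. The four labelled sentences are invented for illustration; they are not from the study's corpus.

```python
import numpy as np
from scipy.sparse import hstack, csr_matrix
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC

sentences = [
    "We investigated the effect of drug X on blood pressure.",
    "Patients were randomized to treatment or placebo.",
    "Mean blood pressure fell by 10 mmHg in the treatment arm.",
    "Drug X appears effective for hypertension.",
]
labels = ["Introduction", "Method", "Result", "Conclusion"]
rel_pos = np.array([(i + 1) / len(sentences) for i in range(len(sentences))])

vectorizer = CountVectorizer()
X_bow = vectorizer.fit_transform(sentences)                  # bag of words only
X_loc = hstack([X_bow, csr_matrix(rel_pos.reshape(-1, 1))])  # + relative sentence location

clf = LinearSVC().fit(X_loc, labels)   # train on the augmented features
print(clf.predict(X_loc))
```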

  8. Modelling of the dynamic behaviour of hard-to-machine alloys

    NASA Astrophysics Data System (ADS)

    Hokka, M.; Leemet, T.; Shrot, A.; Bäker, M.; Kuokkala, V.-T.

    2012-08-01

    Machining of titanium alloys and nickel-based superalloys can be difficult due to their excellent mechanical properties, which combine high strength, ductility, and excellent overall high-temperature performance. Machining of these alloys can, however, be improved by simulating the processes and optimizing the machining parameters. The simulations, however, need accurate material models that predict the material behaviour in the range of strains and strain rates that occur in machining processes. In this work, the behaviour of the titanium 15-3-3-3 alloy and the nickel-based superalloy 625 was characterized in compression, and Johnson-Cook material model parameters were obtained from the results. For the titanium alloy, the adiabatic Johnson-Cook model predicts the softening of the material adequately, but the high strain hardening rate of Alloy 625 in the model prevents the localization of strain, and no shear bands were formed when using this model. For Alloy 625, the Johnson-Cook model was therefore modified to decrease the strain hardening rate at large strains. The models were used in simulations of orthogonal cutting of the material. For both materials, the models are able to predict the serrated chip formation frequently observed in the machining of these alloys. The machining forces also match relatively well, but some differences can be seen in the details of the experimentally obtained and simulated chip shapes.
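
    For reference, the standard Johnson-Cook flow stress form the abstract refers to is shown below (the fitted parameter values for Ti-15-3-3-3 and Alloy 625, and the paper's modified hardening term, are not reproduced here):

```latex
\sigma = \left(A + B\,\varepsilon^{\,n}\right)
         \left(1 + C \ln\frac{\dot{\varepsilon}}{\dot{\varepsilon}_0}\right)
         \left[1 - \left(\frac{T - T_{\mathrm{room}}}{T_{\mathrm{melt}} - T_{\mathrm{room}}}\right)^{m}\right]
```

    Here ε is the equivalent plastic strain, ε̇ the strain rate relative to a reference rate ε̇₀, and A, B, C, n, m are material constants; the modification described above acts on the (A + Bεⁿ) strain-hardening factor at large strains.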

  9. Improving protein–protein interactions prediction accuracy using protein evolutionary information and relevance vector machine model

    PubMed Central

    An, Ji‐Yong; Meng, Fan‐Rong; Chen, Xing; Yan, Gui‐Ying; Hu, Ji‐Pu

    2016-01-01

    Predicting protein–protein interactions (PPIs) is a challenging task and is essential for constructing protein interaction networks, which are important for facilitating our understanding of the mechanisms of biological systems. Although a number of high-throughput technologies have been proposed to predict PPIs, there are unavoidable shortcomings, including high cost, time intensity, and inherently high false positive rates. For these reasons, many computational methods have been proposed for predicting PPIs. However, the problem is still far from being solved. In this article, we propose a novel computational method called RVM-BiGP that combines the relevance vector machine (RVM) model and Bi-gram Probabilities (BiGP) for PPI detection from protein sequences. The major improvements include: (1) protein sequences are represented using the Bi-gram Probabilities (BiGP) feature representation on a Position Specific Scoring Matrix (PSSM), in which the protein evolutionary information is contained; (2) to reduce the influence of noise, the Principal Component Analysis (PCA) method is used to reduce the dimension of the BiGP vector; (3) the powerful and robust Relevance Vector Machine (RVM) algorithm is used for classification. Five-fold cross-validation experiments were executed on yeast and Helicobacter pylori datasets, achieving very high accuracies of 94.57 and 90.57%, respectively. Experimental results are significantly better than previous methods. To further evaluate the proposed method, we compare it with the state-of-the-art support vector machine (SVM) classifier on the yeast dataset. The experimental results demonstrate that our RVM-BiGP method is significantly better than the SVM-based method. In addition, we achieved 97.15% accuracy on the imbalanced yeast dataset, which is higher than that on the balanced yeast dataset. The promising experimental results show the efficiency and robustness of the proposed method, which can be an automatic

  10. A stochastic model for the cell formation problem considering machine reliability

    NASA Astrophysics Data System (ADS)

    Esmailnezhad, Bahman; Fattahi, Parviz; Kheirkhah, Amir Saman

    2015-03-01

    This paper presents a new mathematical model to solve the cell formation problem in cellular manufacturing systems, where inter-arrival times, processing times, and machine breakdown times are probabilistic. The objective function maximizes the number of operations of each part with a higher arrival rate within one cell. Because a queue forms behind each machine, queuing theory is used to formulate the model. To solve the model, two metaheuristic algorithms, a modified particle swarm optimization and a genetic algorithm, are proposed. For the generation of initial solutions in these algorithms, a new heuristic method is developed that always creates feasible solutions. Both metaheuristic algorithms are compared against global solutions obtained from Lingo software's branch and bound (B&B). A statistical method is also used to compare the solutions of the two metaheuristic algorithms. The results of numerical examples indicate that considering machine breakdowns has a significant effect on the block structures of the machine-part matrices.

  11. What good are abstract and what-if models? Lessons from the Gaïa hypothesis.

    PubMed

    Dutreuil, Sébastien

    2014-08-01

    This article on the epistemology of computational models stems from an analysis of the Gaïa hypothesis (GH). It begins with James Kirchner's criticisms of the central computational model of GH, Daisyworld. Among other things, the model has been criticized for being too abstract, describing fictional entities (fictive daisies on an imaginary planet) and trying to answer counterfactual (what-if) questions (what would a planet look like if life had no influence on it?). For these reasons the model has been considered not testable, and therefore not legitimate in science, and in any case not very interesting since it explores non-actual issues. This criticism implicitly assumes that science should only be involved in the making of models that are "actual" (as opposed to what-if) and "specific" (as opposed to abstract). I challenge both of these criticisms in this article, first by showing that although testability (understood as the comparison of model output with empirical data) is an important procedure for explanatory models, there are plenty of models that are not testable. The fact that these are not testable (in this restricted sense) has nothing to do with their being "abstract" or "what-if" but with their being predictive models. Secondly, I argue that "abstract" and "what-if" models aim at (respectable) epistemic purposes distinct from those pursued by "actual and specific" models. Abstract models are used to propose how-possibly explanations or to pursue theorizing. What-if models are used to attribute causal or explanatory power to a variable of interest. The fact that they aim at different epistemic goals entails that it may not be accurate to consider the choice between different kinds of model as a "strategy".

  12. What good are abstract and what-if models? Lessons from the Gaïa hypothesis.

    PubMed

    Dutreuil, Sébastien

    2014-08-01

    This article on the epistemology of computational models stems from an analysis of the Gaïa hypothesis (GH). It begins with James Kirchner's criticisms of the central computational model of GH, Daisyworld. Among other things, the model has been criticized for being too abstract, describing fictional entities (fictive daisies on an imaginary planet) and trying to answer counterfactual (what-if) questions (what would a planet look like if life had no influence on it?). For these reasons the model has been considered not testable, and therefore not legitimate in science, and in any case not very interesting since it explores non-actual issues. This criticism implicitly assumes that science should only be involved in the making of models that are "actual" (as opposed to what-if) and "specific" (as opposed to abstract). I challenge both of these criticisms in this article, first by showing that although testability (understood as the comparison of model output with empirical data) is an important procedure for explanatory models, there are plenty of models that are not testable. The fact that these are not testable (in this restricted sense) has nothing to do with their being "abstract" or "what-if" but with their being predictive models. Secondly, I argue that "abstract" and "what-if" models aim at (respectable) epistemic purposes distinct from those pursued by "actual and specific" models. Abstract models are used to propose how-possibly explanations or to pursue theorizing. What-if models are used to attribute causal or explanatory power to a variable of interest. The fact that they aim at different epistemic goals entails that it may not be accurate to consider the choice between different kinds of model as a "strategy". PMID:25515262

  13. Human factors model concerning the man-machine interface of mining crewstations

    NASA Technical Reports Server (NTRS)

    Rider, James P.; Unger, Richard L.

    1989-01-01

    The U.S. Bureau of Mines is developing a computer model to analyze the human factors aspects of mining machine operator compartments. The model will be used as a research tool and as a design aid. It will have the capability to perform the following: simulated anthropometric or reach assessment, visibility analysis, illumination analysis, structural analysis of the protective canopy, operator fatigue analysis, and computation of an ingress-egress rating. The model will make extensive use of graphics to simplify data input and output. Two-dimensional orthographic projections of the machine and its operator compartment are digitized, and the data are rebuilt into a three-dimensional representation of the mining machine. Anthropometric data from either an individual or any size population may be used. The model is intended for use by equipment manufacturers and mining companies during initial design work on new machines. In addition to its use in machine design, the model should prove helpful as an accident investigation tool and for determining the effects of machine modifications made in the field on the critical areas of visibility and control reachability.

  14. ABSTRACTION OF INFORMATION FROM 2- AND 3-DIMENSIONAL PORFLOW MODELS INTO A 1-D GOLDSIM MODEL - 11404

    SciTech Connect

    Taylor, G.; Hiergesell, R.

    2010-11-16

    The Savannah River National Laboratory has developed a 'hybrid' approach to Performance Assessment modeling which has been used for a number of Performance Assessments. This hybrid approach uses a multi-dimensional modeling platform (PorFlow) to develop deterministic flow fields and perform contaminant transport. The GoldSim modeling platform is used to develop the Sensitivity and Uncertainty analyses. Because these codes are performing complementary tasks, it is incumbent upon them that for the deterministic cases they produce very similar results. This paper discusses two very different waste forms, one with no engineered barriers and one with engineered barriers, each of which presents different challenges to the abstraction of data. The hybrid approach to Performance Assessment modeling used at the SRNL uses a 2-D unsaturated zone (UZ) and a 3-D saturated zone (SZ) model in the PorFlow modeling platform. The UZ model consists of the waste zone and the unsaturated zone between the waste zone and the water table. The SZ model extends from source cells beneath the waste form to the points of interest. Both models contain 'buffer' cells so that modeling domain boundaries do not adversely affect the calculation. The information pipeline between the two models is the contaminant flux. The domain contaminant flux, typically in units of moles (or Curies) per year, from the UZ model is used as a boundary condition for the source cells in the SZ model. The GoldSim modeling component of the hybrid approach is an integrated UZ-SZ model. The model is a 1-D representation of the SZ and typically 1-D in the UZ but, as discussed below, may contain pseudo-2-D elements depending on the waste form being analyzed. A waste form at the Savannah River Site (SRS) which has no engineered barriers is commonly referred to as a slit trench. A slit trench, as its name implies, is an unlined trench, typically 6 m deep, 6 m wide, and 200 m long. Low level waste consisting of soil, debris, rubble, wood

  15. Quantum turing machine and brain model represented by Fock space

    NASA Astrophysics Data System (ADS)

    Iriyama, Satoshi; Ohya, Masanori

    2016-05-01

    Adaptive dynamics is known as a new mathematics for treating complex phenomena, for example chaos, quantum algorithms and psychological phenomena. In this paper, we briefly review the notion of adaptive dynamics, and explain the definition of the generalized Turing machine (GTM) and the recognition process represented by the Fock space. Moreover, we show that there exists a quantum channel, described by the GKSL master equation, that achieves the Chaos Amplifier used in [M. Ohya and I. V. Volovich, J. Opt. B 5(6) (2003) 639; M. Ohya and I. V. Volovich, Rep. Math. Phys. 52(1) (2003) 25].

  16. A Framework for Modeling Human-Machine Interactions

    NASA Technical Reports Server (NTRS)

    Shafto, Michael G.; Rosekind, Mark R. (Technical Monitor)

    1996-01-01

    Modern automated flight-control systems employ a variety of different behaviors, or modes, for managing the flight. While developments in cockpit automation have resulted in workload reduction and economical advantages, they have also given rise to an ill-defined class of human-machine problems, sometimes referred to as 'automation surprises'. Our interest in applying formal methods for describing human-computer interaction stems from our ongoing research on cockpit automation. In this area of aeronautical human factors, there is much concern about how flight crews interact with automated flight-control systems, so that the likelihood of making errors, in particular mode-errors, is minimized and the consequences of such errors are contained. The goal of the ongoing research on formal methods in this context is: (1) to develop a framework for describing human interaction with control systems; (2) to formally categorize such automation surprises; and (3) to develop tests for identification of these categories early in the specification phase of a new human-machine system.

  17. Computationally-efficient finite-element-based thermal and electromagnetic models of electric machines

    NASA Astrophysics Data System (ADS)

    Zhou, Kan

    With the modern trend of transportation electrification, electric machines are a key component of electric/hybrid electric vehicle (EV/HEV) powertrains. It is therefore important that vehicle powertrain-level and system-level designers and control engineers have access to accurate yet computationally-efficient (CE), physics-based modeling tools of the thermal and electromagnetic (EM) behavior of electric machines. In this dissertation, CE yet sufficiently-accurate thermal and EM models for electric machines, which are suitable for use in vehicle powertrain design, optimization, and control, are developed. This includes not only creating fast and accurate thermal and EM models for specific machine designs, but also the ability to quickly generate and determine the performance of new machine designs through the application of scaling techniques to existing designs. With the developed techniques, the thermal and EM performance can be accurately and efficiently estimated. Furthermore, powertrain or system designers can easily and quickly adjust the characteristics and the performance of the machine in ways that are favorable to the overall vehicle performance.

  18. Quantum turing machine and brain model represented by Fock space

    NASA Astrophysics Data System (ADS)

    Iriyama, Satoshi; Ohya, Masanori

    2016-05-01

    Adaptive dynamics is known as a new mathematics for treating complex phenomena, for example chaos, quantum algorithms and psychological phenomena. In this paper, we briefly review the notion of adaptive dynamics, and explain the definition of the generalized Turing machine (GTM) and the recognition process represented by the Fock space. Moreover, we show that there exists a quantum channel, described by the GKSL master equation, that achieves the Chaos Amplifier used in [M. Ohya and I. V. Volovich, J. Opt. B 5(6) (2003) 639; M. Ohya and I. V. Volovich, Rep. Math. Phys. 52(1) (2003) 25].

  19. Learning with Technology: Video Modeling with Concrete-Representational-Abstract Sequencing for Students with Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Yakubova, Gulnoza; Hughes, Elizabeth M.; Shinaberry, Megan

    2016-01-01

    The purpose of this study was to determine the effectiveness of a video modeling intervention with concrete-representational-abstract instructional sequence in teaching mathematics concepts to students with autism spectrum disorder (ASD). A multiple baseline across skills design of single-case experimental methodology was used to determine the…

  20. Research Abstracts.

    ERIC Educational Resources Information Center

    Plotnick, Eric

    2001-01-01

    Presents research abstracts from the ERIC Clearinghouse on Information and Technology. Topics include: classroom communication apprehension and distance education; outcomes of a distance-delivered science course; the NASA/Kennedy Space Center Virtual Science Mentor program; survey of traditional and distance learning higher education members;…

  1. Abstract Constructions.

    ERIC Educational Resources Information Center

    Pietropola, Anne

    1998-01-01

    Describes a lesson designed to culminate a year of eighth-grade art classes in which students explore elements of design and space by creating 3-D abstract constructions. Outlines the process of using foam board and markers to create various shapes and optical effects. (DSK)

  2. A Rapid Compression Machine Modelling Study of the Heptane Isomers

    SciTech Connect

    Silke, E J; Curran, H J; Simmie, J M; Pitz, W J; Westbrook, C K

    2005-05-10

    Previously we have reported on the combustion behavior of all nine isomers of heptane in a rapid compression machine (RCM) with stoichiometric fuel and "air" mixtures at a compressed gas pressure of 15 atm. The dependence of autoignition delay times on molecular structure was illustrated. Here, we report some additional experimental work that was performed in order to address unusual results regarding significant differences in the ignition delay times recorded at the same fuel and oxygen composition, but with different fractions of nitrogen and argon diluent gases. Moreover, we have begun to simulate these experiments with detailed chemical kinetic mechanisms. These mechanisms are based on previous studies of other alkane molecules, in particular n-heptane and iso-octane. We have focused our attention on n-heptane in order to systematically redevelop the chemistry and thermochemistry for this C7 isomer, with the intention of extending the knowledge gained to the other eight isomers. The addition of new reaction types that were not included previously has had a significant impact on the simulations, particularly at low temperatures.

  3. Scientist-Centered Workflow Abstractions via Generic Actors, Workflow Templates, and Context-Awareness for Groundwater Modeling and Analysis

    SciTech Connect

    Chin, George; Sivaramakrishnan, Chandrika; Critchlow, Terence J.; Schuchardt, Karen L.; Ngu, Anne Hee Hiong

    2011-07-04

    A drawback of existing scientific workflow systems is the lack of support to domain scientists in designing and executing their own scientific workflows. Many domain scientists avoid developing and using workflows because the basic objects of workflows are too low-level and high-level tools and mechanisms to aid in workflow construction and use are largely unavailable. In our research, we are prototyping higher-level abstractions and tools to better support scientists in their workflow activities. Specifically, we are developing generic actors that provide abstract interfaces to specific functionality, workflow templates that encapsulate workflow and data patterns that can be reused and adapted by scientists, and context-awareness mechanisms to gather contextual information from the workflow environment on behalf of the scientist. To evaluate these scientist-centered abstractions on real problems, we apply them to construct and execute scientific workflows in the specific domain area of groundwater modeling and analysis.

  4. Modelling of Tool Wear and Residual Stress during Machining of AISI H13 Tool Steel

    NASA Astrophysics Data System (ADS)

    Outeiro, José C.; Umbrello, Domenico; Pina, José C.; Rizzuti, Stefania

    2007-05-01

    Residual stresses can enhance or impair the ability of a component to withstand loading conditions in service (fatigue, creep, stress corrosion cracking, etc.), depending on their nature: compressive or tensile, respectively. This poses enormous problems in structural assembly, as it affects the structural integrity of the whole part. In addition, tool wear issues are of critical importance in manufacturing since they affect component quality, tool life and machining cost. Therefore, prediction and control of both tool wear and residual stresses in machining are absolutely necessary. In this work, a two-dimensional Finite Element model using an implicit Lagrangian formulation with automatic remeshing was applied to simulate the orthogonal cutting process of AISI H13 tool steel. To validate the model, the predicted and experimentally measured chip geometry, cutting forces, temperatures, tool wear and residual stresses in the machined affected layers were compared. The proposed FE model allowed us to investigate the influence of tool geometry, cutting regime parameters and tool wear on the residual stress distribution in the machined surface and subsurface of AISI H13 tool steel. The obtained results permit the conclusion that, in order to reduce the magnitude of surface residual stresses, the cutting speed should be increased, the uncut chip thickness (or feed) should be reduced, and machining with honed tools having large cutting edge radii produces better results than with chamfered tools. Moreover, increasing tool wear increases the magnitude of surface residual stresses.

  5. Numerically Controlled Machining Of Wind-Tunnel Models

    NASA Technical Reports Server (NTRS)

    Kovtun, John B.

    1990-01-01

    A new procedure was developed for constructing dynamic models and parts for wind-tunnel tests or radio-controlled flight tests. It involves the use of a single-phase numerical control (NC) technique to produce highly accurate, symmetrical models in less time.

  6. Nonlinear and Digital Man-machine Control Systems Modeling

    NASA Technical Reports Server (NTRS)

    Mekel, R.

    1972-01-01

    An adaptive modeling technique is examined by which controllers can be synthesized to provide corrective dynamics to a human operator's mathematical model in closed-loop control systems. The technique utilizes a class of Liapunov functions formulated for this purpose, Liapunov's stability criterion, and a model-reference system configuration. The Liapunov function is formulated to possess variable characteristics to take into consideration the identification dynamics. The time derivative of the Liapunov function generates the identification and control laws for the mathematical model system. These laws permit the realization of a controller which updates the human operator's mathematical model parameters so that the model and the human operator produce the same response when subjected to the same stimulus. A very useful feature is the development of a digital computer program which is easily implemented and modified concurrently with experimentation. The program permits the modeling process to interact with the experimentation process in a mutually beneficial way.
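
    As a hedged illustration of the kind of Lyapunov-based update alluded to above (the textbook single-parameter model-reference case, not the authors' specific formulation), let e be the tracking error between operator and model output, θ the adjustable model parameter with regressor φ, and γ > 0 the adaptation gain:

```latex
V = \tfrac{1}{2} e^{2} + \tfrac{1}{2\gamma}\left(\theta - \theta^{*}\right)^{2},
\qquad
\dot{V} \le 0 \;\Longrightarrow\; \dot{\theta} = -\gamma\, e\, \varphi .
```

    The time derivative of V thus yields an identification law driven by the product of the tracking error and the regressor, which is the sense in which the Liapunov function "generates" the identification and control laws.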

  7. Remotely sensed data assimilation technique to develop machine learning models for use in water management

    NASA Astrophysics Data System (ADS)

    Zaman, Bushra

    Increasing population and water conflicts are making water management one of the most important issues of the present world. It has become absolutely necessary to find ways to manage water more efficiently. Technological advancement has introduced various techniques for data acquisition and analysis, and these tools can be used to address some of the critical issues that challenge water resource management. This research used learning machine techniques and information acquired through remote sensing, to solve problems related to soil moisture estimation and crop identification on large spatial scales. In this dissertation, solutions were proposed in three problem areas that can be important in the decision making process related to water management in irrigated systems. A data assimilation technique was used to build a learning machine model that generated soil moisture estimates commensurate with the scale of the data. The research was taken further by developing a multivariate machine learning algorithm to predict root zone soil moisture both in space and time. Further, a model was developed for supervised classification of multi-spectral reflectance data using a multi-class machine learning algorithm. The procedure was designed for classifying crops but the model is data dependent and can be used with other datasets and hence can be applied to other landcover classification problems. The dissertation compared the performance of relevance vector and the support vector machines in estimating soil moisture. A multivariate relevance vector machine algorithm was tested in the spatio-temporal prediction of soil moisture, and the multi-class relevance vector machine model was used for classifying different crop types. It was concluded that the classification scheme may uncover important data patterns contributing greatly to knowledge bases, and to scientific and medical research. The results for the soil moisture models would give a rough idea to farmers

  8. Fusing Dual-Event Datasets for Mycobacterium Tuberculosis Machine Learning Models and their Evaluation

    PubMed Central

    Ekins, Sean; Freundlich, Joel S.; Reynolds, Robert C.

    2013-01-01

    The search for new tuberculosis treatments continues as we need to find molecules that can act more quickly, be accommodated in multi-drug regimens, and overcome ever increasing levels of drug resistance. Multiple large scale phenotypic high-throughput screens against Mycobacterium tuberculosis (Mtb) have generated dose response data, enabling the generation of machine learning models. These models also incorporated cytotoxicity data and were recently validated with a large external dataset. A cheminformatics data-fusion approach followed by Bayesian machine learning, Support Vector Machine or Recursive Partitioning model development (based on publicly available Mtb screening data) was used to compare individual datasets and subsequent combined models. A set of 1924 commercially available molecules with promising antitubercular activity (and lack of relative cytotoxicity to Vero cells) were used to evaluate the predictive nature of the models. We demonstrate that combining three datasets incorporating antitubercular and cytotoxicity data in Vero cells from our previous screens results in external validation receiver operator curve (ROC) of 0.83 (Bayesian or RP Forest). Models that do not have the highest five-fold cross validation ROC scores can outperform other models in a test set dependent manner. We demonstrate with predictions for a recently published set of Mtb leads from GlaxoSmithKline that no single machine learning model may be enough to identify compounds of interest. Dataset fusion represents a further useful strategy for machine learning construction as illustrated with Mtb. Coverage of chemistry and Mtb target spaces may also be limiting factors for the whole-cell screening data generated to date. PMID:24144044

  9. State Machine Modeling of the Space Launch System Solid Rocket Boosters

    NASA Technical Reports Server (NTRS)

    Harris, Joshua A.; Patterson-Hine, Ann

    2013-01-01

    The Space Launch System is a Shuttle-derived heavy-lift vehicle currently in development to serve as NASA's premiere launch vehicle for space exploration. The Space Launch System is a multistage rocket with two Solid Rocket Boosters and multiple payloads, including the Multi-Purpose Crew Vehicle. Planned Space Launch System destinations include near-Earth asteroids, the Moon, Mars, and Lagrange points. The Space Launch System is a complex system with many subsystems, requiring considerable systems engineering and integration. To this end, state machine analysis offers a method to support engineering and operational efforts, identify and avert undesirable or potentially hazardous system states, and evaluate system requirements. Finite State Machines model a system as a finite number of states, with transitions between states controlled by state-based and event-based logic. State machines are a useful tool for understanding complex system behaviors and evaluating "what-if" scenarios. This work contributes to a state machine model of the Space Launch System developed at NASA Ames Research Center. The Space Launch System Solid Rocket Booster avionics and ignition subsystems are modeled using MATLAB/Stateflow software. This model is integrated into a larger model of Space Launch System avionics used for verification and validation of Space Launch System operating procedures and design requirements. This includes testing both nominal and off-nominal system states and command sequences.
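
    A minimal sketch of the finite-state-machine idea described above (states plus event-driven transitions), written in Python rather than MATLAB/Stateflow; the states and events are invented placeholders, not the actual SLS booster avionics model.

```python
# Toy finite state machine with event-based transitions (hypothetical booster states).
class StateMachine:
    def __init__(self, initial, transitions):
        self.state = initial
        self.transitions = transitions        # {(state, event): next_state}

    def fire(self, event):
        key = (self.state, event)
        if key not in self.transitions:
            raise ValueError(f"event {event!r} not allowed in state {self.state!r}")
        self.state = self.transitions[key]
        return self.state

booster = StateMachine("SAFED", {
    ("SAFED", "arm"): "ARMED",
    ("ARMED", "ignite"): "BURNING",
    ("ARMED", "safe"): "SAFED",
    ("BURNING", "burnout"): "SEPARATED",
})

for event in ["arm", "ignite", "burnout"]:
    print(event, "->", booster.fire(event))
# A "what-if" check: firing "ignite" from SAFED would raise an error, flagging an
# undesirable command sequence of the kind such a model is meant to catch.
```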

  10. [A study on three dimensional modeling of human body in man-machine system simulation].

    PubMed

    Wei, B; Yuan, X

    1997-12-01

    Modeling of the human body is a basic problem in human-machine system simulation. In this study a B-spline surface model of the human body was established. In the modeling, human body is split into several segments and each segment is a cubic B-spline surface. A blend surface was used to link two jointed segments. It is easy to simulate the motion of the human body by using the algorithm of axial deformation. PMID:11540444

  11. Experience with abstract notation one

    NASA Technical Reports Server (NTRS)

    Harvey, James D.; Weaver, Alfred C.

    1990-01-01

    The development of computer science has produced a vast number of machine architectures, programming languages, and compiler technologies. The cross product of these three characteristics defines the spectrum of previous and present data representation methodologies. With regard to computer networks, the uniqueness of these methodologies presents an obstacle when disparate host environments are to be interconnected. Interoperability within a heterogeneous network relies upon the establishment of data representation commonality. The International Standards Organization (ISO) is currently developing the abstract syntax notation one standard (ASN.1) and the basic encoding rules standard (BER) that collectively address this problem. When used within the presentation layer of the open systems interconnection reference model, these two standards provide the data representation commonality required to facilitate interoperability. The details of a compiler that was built to automate the use of ASN.1 and BER are described. From this experience, insights into both standards are given and potential problems relating to this development effort are discussed.
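
    To make the ASN.1/BER idea concrete, here is a hedged, minimal sketch of BER's tag-length-value encoding for a non-negative INTEGER (short-form lengths only; negative values and long-form lengths that full BER requires are omitted).

```python
def ber_encode_uint(value: int) -> bytes:
    """Encode a small non-negative integer as a BER INTEGER (tag 0x02).
    Simplified: short-form length only, no negative numbers."""
    content = value.to_bytes(max(1, (value.bit_length() + 7) // 8), "big")
    if content[0] & 0x80:            # keep the sign bit clear for positive values
        content = b"\x00" + content
    return bytes([0x02, len(content)]) + content

print(ber_encode_uint(5).hex())      # '020105'
print(ber_encode_uint(300).hex())    # '0202012c'
```

    The same tag-length-value pattern is what lets two disparate hosts agree on a common wire representation regardless of their native data layouts.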

  12. Multiscale Modeling and Analysis of an Ultra-Precision Damage Free Machining Method

    NASA Astrophysics Data System (ADS)

    Guan, Chaoliang; Peng, Wenqiang

    2016-06-01

    Under high laser flux, ensuring that laser-induced damage of optical elements does not occur is the key to the success of a laser fusion ignition system. A US government survey showed that processing defects, which decrease the laser-induced damage threshold (LIDT), are one of the three major challenges. Cracks and scratches caused by brittle- and plastic-removal machining are fatal flaws. Using the hydrodynamic effect polishing (HEP) method, a damage-free surface can be obtained on quartz glass. The material removal mechanism of this typical ultra-precision machining process was modeled at multiple scales. At the atomic scale, chemical modeling illustrated the weakening and breaking of chemical bonds. At the particle scale, micro-contact modeling gave the elastic removal mode boundary of the materials. At the slurry scale, hydrodynamic flow modeling showed the dynamic pressure and shear stress distributions, which relate to the machining effect. An experiment was conducted on a numerically controlled system, and one quartz glass optical component was polished in the elastic mode. Results show that damage is removed layer by layer as the removal depth increases, owing to the highly damage-free machining ability of HEP, and the LIDT of the sample was greatly improved.

  13. Assessing model uncertainty using hexavalent chromium and lung cancer mortality as an example [Abstract 2015

    EPA Science Inventory

    Introduction: The National Research Council recommended quantitative evaluation of uncertainty in effect estimates for risk assessment. This analysis considers uncertainty across model forms and model parameterizations with hexavalent chromium [Cr(VI)] and lung cancer mortality a...

  14. Machining Error Compensation Based on 3D Surface Model Modified by Measured Accuracy

    NASA Astrophysics Data System (ADS)

    Abe, Go; Aritoshi, Masatoshi; Tomita, Tomoki; Shirase, Keiichi

    Recently, the demand for precision machining of dies and molds with complex shapes has been increasing. Although CNC machine tools are widely used for machining, machining error compensation is still required to meet the increasing demand for machining accuracy. However, machining error compensation is an operation that requires a great deal of skill, time and cost. This paper deals with a new method of machining error compensation. The 3D surface data of the machined part are modified according to the machining error measured by a CMM (Coordinate Measuring Machine). A compensated NC program is then generated from the modified 3D surface data for machining error compensation.
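
    A minimal numerical sketch of the mirror-compensation idea described above: the measured error at each surface point is subtracted from the nominal surface before regenerating the NC program. The surface and error fields below are synthetic stand-ins, not data from the paper.

```python
import numpy as np

# Nominal 3D surface sampled on a grid (synthetic stand-in for the CAD model).
x, y = np.meshgrid(np.linspace(0, 50, 51), np.linspace(0, 50, 51))
z_nominal = 0.02 * x * y / 50.0

# Machining error measured by CMM = machined z minus nominal z (synthetic here).
error = 0.01 * np.sin(x / 8.0) + 0.005 * np.cos(y / 5.0)

# Mirror compensation: shift the target surface opposite to the measured error,
# so the next machining pass lands (approximately) on the nominal surface.
z_compensated = z_nominal - error

predicted_machined = z_compensated + error    # assuming the error field repeats
print("max residual after compensation:", np.abs(predicted_machined - z_nominal).max())
```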

  15. Lateral-Directional Parameter Estimation on the X-48B Aircraft Using an Abstracted, Multi-Objective Effector Model

    NASA Technical Reports Server (NTRS)

    Ratnayake, Nalin A.; Waggoner, Erin R.; Taylor, Brian R.

    2011-01-01

    The problem of parameter estimation on hybrid-wing-body aircraft is complicated by the fact that many design candidates for such aircraft involve a large number of aerodynamic control effectors that act in coplanar motion. This adds to the complexity already present in the parameter estimation problem for any aircraft with a closed-loop control system. Decorrelation of flight and simulation data must be performed in order to ascertain individual surface derivatives with any sort of mathematical confidence. Non-standard control surface configurations, such as clamshell surfaces and drag-rudder modes, further complicate the modeling task. In this paper, time-decorrelation techniques are applied to a model structure selected through stepwise regression for simulated and flight-generated lateral-directional parameter estimation data. A virtual effector model that uses mathematical abstractions to describe the multi-axis effects of clamshell surfaces is developed and applied. Comparisons are made between time history reconstructions and observed data in order to assess the accuracy of the regression model. The Cramér-Rao lower bounds of the estimated parameters are used to assess the uncertainty of the regression model relative to alternative models. Stepwise regression was found to be a useful technique for lateral-directional model design for hybrid-wing-body aircraft, as suggested by available flight data. Based on the results of this study, linear regression parameter estimation methods using abstracted effectors are expected to perform well for hybrid-wing-body aircraft properly equipped for the task.

  16. The Sausage Machine: A New Two-Stage Parsing Model.

    ERIC Educational Resources Information Center

    Frazier, Lyn; Fodor, Janet Dean

    1978-01-01

    The human sentence parsing device assigns phrase structure to sentences in two steps. The first stage parser assigns lexical and phrasal nodes to substrings of words. The second stage parser then adds higher nodes to link these phrasal packages together into a complete phrase marker. This model is compared with others. (Author/RD)

  17. Ghosts in the Machine. Interoceptive Modeling for Chronic Pain Treatment

    PubMed Central

    Di Lernia, Daniele; Serino, Silvia; Cipresso, Pietro; Riva, Giuseppe

    2016-01-01

    Pain is a complex and multidimensional perception, embodied in our daily experiences through interoceptive appraisal processes. The article reviews the recent literature about interoception along with predictive coding theories and tries to explain a missing link between the sense of the physiological condition of the entire body and the perception of pain in chronic conditions, which are characterized by interoceptive deficits. Understanding chronic pain from an interoceptive point of view allows us to better comprehend the multidimensional nature of this specific organic information, integrating the input of several sources from Gifford's Mature Organism Model to Melzack's neuromatrix. The article proposes the concept of residual interoceptive images (ghosts), to explain the diffuse multilevel nature of chronic pain perceptions. Lastly, we introduce a treatment concept, forged upon the possibility to modify the interoceptive chronic representation of pain through external input in a process that we call interoceptive modeling, with the ultimate goal of reducing pain in chronic subjects. PMID:27445681

  18. Machine Visual Motion Detection Modeled On Vertebrate Retina

    NASA Astrophysics Data System (ADS)

    Blackburn, M. R.; Nguyen, H. G.; Kaomea, P. K.

    1988-12-01

    Real-time motion analysis would be very useful for autonomous undersea vehicle (AUV) navigation, target tracking, homing, and obstacle avoidance. The perception of motion is well developed in animals from insects to man, providing solutions to similar problems. We have therefore applied a model of the motion analysis subnetwork in the vertebrate retina to visual navigation in the AUV. The model is currently implemented in the C programming language as a discrete- time serial approximation of a continuous-time parallel process. Running on an IBM-PC/AT with digitized video camera images, the system can detect and describe motion in a 16 by 16 receptor field at the rate of 4 updates per second. The system responds accurately with direction and speed information to images moving across the visual field at velocities less than 8 degrees of visual angle per second at signal-to-noise ratios greater than 3. The architecture is parallel and its sparse connections do not require long-term modifications. The model is thus appropriate for implementation in VLSI optoelectronics.

  19. ShrinkWrap: 3D model abstraction for remote sensing simulation

    SciTech Connect

    Pope, Paul A

    2009-01-01

    Remote sensing simulations often require the use of 3D models of objects of interest. There are a multitude of these models available from various commercial sources. There are image processing, computational, database storage, and data access advantages to having a regularized, encapsulating, triangular mesh representing the surface of a 3D object model. However, this is usually not how these models are stored. They can have too much detail in some areas, and not enough detail in others. They can have a mix of planar geometric primitives (triangles, quadrilaterals, n-sided polygons) representing not only the surface of the model, but also interior features. And the exterior mesh is usually not regularized nor encapsulating. This paper presents a method called SHRINKWRAP which can be used to process 3D object models to achieve output models having the aforementioned desirable traits. The method works by collapsing an encapsulating sphere, which has a regularized triangular mesh on its surface, onto the surface of the model. A GUI has been developed to make it easy to leverage this capability. The SHRINKWRAP processing chain and use of the GUI are described and illustrated.

  20. Modeling aspects of estuarine eutrophication. (Latest citations from the Selected Water Resources Abstracts database). Published Search

    SciTech Connect

    Not Available

    1993-05-01

    The bibliography contains citations concerning mathematical modeling of existing water quality stresses in estuaries, harbors, bays, and coves. Both physical hydraulic and numerical models for estuarine circulation are discussed. (Contains a minimum of 96 citations and includes a subject term index and title list.)

  1. Modeling of the flow stress for AISI H13 Tool Steel during Hard Machining Processes

    NASA Astrophysics Data System (ADS)

    Umbrello, Domenico; Rizzuti, Stefania; Outeiro, José C.; Shivpuri, Rajiv

    2007-04-01

    In general, the flow stress models used in computer simulation of machining processes are a function of effective strain, effective strain rate and temperature developed during the cutting process. However, these models do not adequately describe the material behavior in hard machining, where a range of material hardness between 45 and 60 HRC is involved. Thus, depending on the specific material hardness, different material models must be used in modeling the cutting process. This paper describes the development of hardness-based flow stress and fracture models for the AISI H13 tool steel, which can be applied over the range of material hardness mentioned above. These models were implemented in a non-isothermal viscoplastic numerical model to simulate the machining process for AISI H13 with various hardness values and applying different cutting regime parameters. Predicted results are validated by comparing them with experimental results found in the literature. They are found to predict reasonably well the cutting forces as well as the change in chip morphology from continuous to segmented chip as the material hardness changes.

  2. Improved quality prediction model for multistage machining process based on geometric constraint equation

    NASA Astrophysics Data System (ADS)

    Zhu, Limin; He, Gaiyun; Song, Zhanjie

    2016-03-01

    Product variation reduction is critical to improving process efficiency and product quality, especially for multistage machining processes (MMP). However, due to variation accumulation and propagation, it becomes quite difficult to predict and reduce product variation for MMP. While statistical process control can be used to control product quality, it is used mainly to monitor process changes rather than to analyze the cause of product variation. In this paper, based on a differential description of the contact kinematics of locators and part surfaces, and on the geometric constraint equation defined by the locating scheme, an improved analytical variation propagation model for MMP is presented, in which the influence of both locator position and machining error on part quality is considered, whereas traditional models usually focus on datum error and fixture error. Coordinate transformation theory is used to reflect the generation and transmission laws of error in the establishment of the model. The concept of a deviation matrix is heavily applied to establish an explicit mapping between the geometric deviation of the part and the process error sources. In each machining stage, the part deviation is formulated as three separate components corresponding to three different kinds of error sources, which can be further applied to fault identification and design optimization for complicated machining processes. An example part for MMP is given to validate the effectiveness of the methodology. The experimental results show that the model prediction and the actual measurement match well. This paper provides a method to predict part deviation under the influence of fixture error, datum error and machining error, and it enriches the methods of quality prediction for MMP.

  3. Modeling and optimizing electrodischarge machine process (EDM) with an approach based on genetic algorithm

    NASA Astrophysics Data System (ADS)

    Zabbah, Iman

    2012-01-01

    Electro-discharge machining (EDM) is the most common non-traditional production method for forming metals and non-oxide ceramics. Increasing surface smoothness, increasing the removal of filings, and decreasing proportional tool erosion play an important role in this machining, and they are directly related to the choice of input parameters. The complicated and non-linear nature of EDM has made it impossible to model the process with the usual classical methods. So far, some intelligence-based methods have been used to optimize this process; chief among them are artificial neural networks, which model the process as a black box. The problem with this kind of machining is seen when a workpiece is composed of a collection of carbon-based materials such as silicon carbide. In this article, besides using the new mono-pulse EDM technique, we design and model a fuzzy neural network. The genetic algorithm is then used to find the optimal inputs of the machine. In our research, the workpiece is a non-oxide ceramic called silicon carbide, which makes the control process more difficult. Finally, the results are compared with those of previous methods.
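
    The optimization loop described above can be sketched briefly: a genetic algorithm searches EDM input parameters against a surrogate process model. The surrogate below is an invented analytic function standing in for the paper's fuzzy neural network, and the parameter ranges are hypothetical.

```python
# Toy real-coded GA over EDM-style inputs; objective and bounds are illustrative only.
import numpy as np

rng = np.random.default_rng(1)
LOW = np.array([1.0, 10.0, 2.0])       # current (A), pulse on-time (us), gap setting
HIGH = np.array([20.0, 200.0, 12.0])

def surrogate_quality(p):
    """Higher is better: crude stand-in for removal rate minus a roughness penalty."""
    current, t_on, gap = p
    removal_rate = current * np.log1p(t_on)
    roughness_penalty = 0.02 * current * t_on / (1.0 + gap)
    return removal_rate - roughness_penalty

def ga(pop_size=40, generations=60):
    pop = rng.uniform(LOW, HIGH, size=(pop_size, 3))
    for _ in range(generations):
        fitness = np.array([surrogate_quality(ind) for ind in pop])
        # Tournament selection of parents.
        parents = pop[[max(rng.integers(pop_size, size=2), key=lambda i: fitness[i])
                       for _ in range(pop_size)]]
        # Blend crossover plus Gaussian mutation, clipped to the parameter box.
        children = 0.5 * (parents + parents[rng.permutation(pop_size)])
        children += rng.normal(scale=0.05 * (HIGH - LOW), size=children.shape)
        pop = np.clip(children, LOW, HIGH)
    best = max(pop, key=surrogate_quality)
    return best, surrogate_quality(best)

best_inputs, best_value = ga()
print("best inputs found:", best_inputs.round(2), " objective:", round(best_value, 2))
```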

  4. Modeling and optimizing electrodischarge machine process (EDM) with an approach based on genetic algorithm

    NASA Astrophysics Data System (ADS)

    Zabbah, Iman

    2011-12-01

    Electro-discharge machining (EDM) is the most common non-traditional production method for forming metals and non-oxide ceramics. Increasing surface smoothness, increasing the removal of filings, and decreasing proportional tool erosion play an important role in this machining, and they are directly related to the choice of input parameters. The complicated and non-linear nature of EDM has made it impossible to model the process with the usual classical methods. So far, some intelligence-based methods have been used to optimize this process; chief among them are artificial neural networks, which model the process as a black box. The problem with this kind of machining is seen when a workpiece is composed of a collection of carbon-based materials such as silicon carbide. In this article, besides using the new mono-pulse EDM technique, we design and model a fuzzy neural network. The genetic algorithm is then used to find the optimal inputs of the machine. In our research, the workpiece is a non-oxide ceramic called silicon carbide, which makes the control process more difficult. Finally, the results are compared with those of previous methods.

  5. Analytical Modeling of a Novel Transverse Flux Machine for Direct Drive Wind Turbine Applications

    SciTech Connect

    Hasan, IIftekhar; Husain, Tausif; Uddin, Md Wasi; Sozer, Yilmaz; Husain, Iqbal; Muljadi, Eduard

    2015-09-02

    This paper presents a nonlinear analytical model of a novel double-sided flux-concentrating Transverse Flux Machine (TFM) based on the Magnetic Equivalent Circuit (MEC) model. The analytical model uses a series-parallel combination of flux tubes to predict the flux paths through different parts of the machine, including air gaps, permanent magnets (PM), stator, and rotor. The two-dimensional MEC model approximates the complex three-dimensional flux paths of the TFM and includes the effects of magnetic saturation. The model is capable of adapting to any geometry, which makes it a good alternative for evaluating prospective designs of the TFM as compared to finite element solvers, which are numerically intensive and require more computation time. A single-phase, 1 kW, 400 rpm machine is analytically modeled, and its resulting flux distribution, no-load EMF and torque are verified with Finite Element Analysis (FEA). The results are found to be in agreement with less than 5% error, while reducing the computation time by 25 times.
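
    For orientation, the MEC idea reduces to Hopkinson's law applied to a network of flux-tube reluctances. The sketch below solves only the simplest possible series path with illustrative geometry and material numbers; the paper's series-parallel network and saturation handling are far richer.

```python
# Bare-bones magnetic equivalent circuit: one PM source driving flux through iron
# and an air gap in series. All dimensions and material values are invented.
import numpy as np

MU0 = 4e-7 * np.pi

def reluctance(length, area, mu_r=1.0):
    return length / (MU0 * mu_r * area)

# Flux-tube reluctances (series path): stator iron, air gap, rotor iron.
R_iron_stator = reluctance(0.10, 4e-4, mu_r=4000.0)
R_gap         = reluctance(0.001, 4e-4)            # 1 mm air gap dominates
R_iron_rotor  = reluctance(0.08, 4e-4, mu_r=4000.0)

# Permanent magnet modeled as an MMF source with internal reluctance.
B_r, l_pm, A_pm, mu_pm = 1.2, 0.004, 4e-4, 1.05
F_pm = B_r * l_pm / (MU0 * mu_pm)                  # magnet MMF (ampere-turns)
R_pm = reluctance(l_pm, A_pm, mu_r=mu_pm)

R_total = R_pm + R_iron_stator + R_gap + R_iron_rotor
flux = F_pm / R_total                              # Hopkinson's law (magnetic Ohm's law)
print("air-gap flux density ~", round(flux / 4e-4, 3), "T")
```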

  6. Analytical Modeling of a Novel Transverse Flux Machine for Direct Drive Wind Turbine Applications: Preprint

    SciTech Connect

    Hasan, IIftekhar; Husain, Tausif; Uddin, Md Wasi; Sozer, Yilmaz; Husain, Iqbal; Muljadi, Eduard

    2015-08-24

    This paper presents a nonlinear analytical model of a novel double-sided flux concentrating Transverse Flux Machine (TFM) based on the Magnetic Equivalent Circuit (MEC) model. The analytical model uses a series-parallel combination of flux tubes to predict the flux paths through different parts of the machine including air gaps, permanent magnets, stator, and rotor. The two-dimensional MEC model approximates the complex three-dimensional flux paths of the TFM and includes the effects of magnetic saturation. The model is capable of adapting to any geometry that makes it a good alternative for evaluating prospective designs of TFM compared to finite element solvers that are numerically intensive and require more computation time. A single-phase, 1-kW, 400-rpm machine is analytically modeled, and its resulting flux distribution, no-load EMF, and torque are verified with finite element analysis. The results are found to be in agreement, with less than 5% error, while reducing the computation time by 25 times.

  7. Abstraction and art.

    PubMed Central

    Gortais, Bernard

    2003-01-01

    In a given social context, artistic creation comprises a set of processes, which relate to the activity of the artist and the activity of the spectator. Through these processes we see and understand that the world is vaster than it is said to be. Artistic processes are mediated experiences that open up the world. A successful work of art expresses a reality beyond actual reality: it suggests an unknown world using the means and the signs of the known world. Artistic practices incorporate the means of creation developed by science and technology and change forms as they change. Artists and the public follow different processes of abstraction at different levels, in the definition of the means of creation, of representation and of perception of a work of art. This paper examines how the processes of abstraction are used within the framework of the visual arts and abstract painting, which appeared during a period of growing importance for the processes of abstraction in science and technology, at the beginning of the twentieth century. The development of digital platforms and new man-machine interfaces allow multimedia creations. This is performed under the constraint of phases of multidisciplinary conceptualization using generic representation languages, which tend to abolish traditional frontiers between the arts: visual arts, drama, dance and music. PMID:12903659

  8. A Simple Computational Model of a jellyfish-like flying machine

    NASA Astrophysics Data System (ADS)

    Fang, Fang; Ristroph, Leif; Shelley, Michael

    2013-11-01

    We explore theoretically the aerodynamics of a jellyfish-like flying machine recently fabricated at NYU. This experimental device achieves flight and hovering by opening and closing a set of flapping wings. It displays orientational flight stability without additional control surfaces or feedback control. Our model machine consists of two symmetric massless flapping wings connected to a body with mass and moment of inertia. A vortex sheet shedding and wake model is used for the flow simulation. Use of the Fast Multipole Method (FMM), and adaptive addition/deletion of vortices, allows us to simulate for long times and resolve complex wakes. We use our model to explore the physical parameters that maintain body hovering, its ascent and descent, and investigate the stability of these states.

  9. Using Machine Learning to Create Turbine Performance Models (Presentation)

    SciTech Connect

    Clifton, A.

    2013-04-01

    Wind turbine power output is known to be a strong function of wind speed, but is also affected by turbulence and shear. In this work, new aerostructural simulations of a generic 1.5 MW turbine are used to explore atmospheric influences on power output. Most significant is the hub height wind speed, followed by hub height turbulence intensity and then wind speed shear across the rotor disk. These simulation data are used to train regression trees that predict the turbine response for any combination of wind speed, turbulence intensity, and wind shear that might be expected at a turbine site. For a randomly selected atmospheric condition, the accuracy of the regression tree power predictions is three times higher than that of the traditional power curve methodology. The regression tree method can also be applied to turbine test data and used to predict turbine performance at a new site. No new data is required in comparison to the data that are usually collected for a wind resource assessment. Implementing the method requires turbine manufacturers to create a turbine regression tree model from test site data. Such an approach could significantly reduce bias in power predictions that arise because of different turbulence and shear at the new site, compared to the test site.
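
    A short sketch of the regression-tree approach described above, trained on synthetic data: power as a function of hub-height wind speed, turbulence intensity, and shear. The "turbine" below is an invented analytic surrogate, not the 1.5 MW simulation model.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
n = 5000
wind_speed = rng.uniform(3, 25, n)          # hub-height wind speed (m/s)
turb_int   = rng.uniform(0.05, 0.25, n)     # turbulence intensity
shear      = rng.uniform(0.0, 0.4, n)       # shear exponent across the rotor disk

rated = 1500.0  # kW
power = np.minimum(rated, 0.5 * 1.225 * np.pi * 38.5**2 * 0.45 * wind_speed**3 / 1000)
power *= 1 - 0.5 * turb_int - 0.1 * shear   # crude atmospheric derating
power += rng.normal(scale=20.0, size=n)     # measurement noise

X = np.column_stack([wind_speed, turb_int, shear])
tree = DecisionTreeRegressor(max_depth=8).fit(X, power)

# Predict performance for one new atmospheric condition at a hypothetical new site.
print("predicted power (kW):", tree.predict([[9.0, 0.12, 0.2]])[0].round(1))
```

    The appeal of the tree over a single binned power curve is that it conditions on all three atmospheric variables at once, which is exactly the bias-reduction argument made in the abstract.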

  10. Law machines: scale models, forensic materiality and the making of modern patent law.

    PubMed

    Pottage, Alain

    2011-10-01

    Early US patent law was machine made. Before the Patent Office took on the function of examining patent applications in 1836, questions of novelty and priority were determined in court, within the forum of the infringement action. And at all levels of litigation, from the circuit courts up to the Supreme Court, working models were the media through which doctrine, evidence and argument were made legible, communicated and interpreted. A model could be set on a table, pointed at, picked up, rotated or upended so as to display a point of interest to a particular audience within the courtroom, and, crucially, set in motion to reveal the 'mode of operation' of a machine. The immediate object of demonstration was to distinguish the intangible invention from its tangible embodiment, but models also'machined' patent law itself. Demonstrations of patent claims with models articulated and resolved a set of conceptual tensions that still make the definition and apprehension of the invention difficult, even today, but they resolved these tensions in the register of materiality, performativity and visibility, rather than the register of conceptuality. The story of models tells us something about how inventions emerge and subsist within the context of patent litigation and patent doctrine, and it offers a starting point for renewed reflection on the question of how technology becomes property.

  11. Modeling and predicting abstract concept or idea introduction and propagation through geopolitical groups

    NASA Astrophysics Data System (ADS)

    Jaenisch, Holger M.; Handley, James W.; Hicklen, Michael L.

    2007-04-01

    This paper describes a novel capability for modeling known idea propagation transformations and predicting responses to new ideas from geopolitical groups. Ideas are captured using semantic words that are text based and bear cognitive definitions. We demonstrate a unique algorithm for converting these into analytical predictive equations. Using the illustrative idea of "proposing a gasoline price increase of 1 per gallon from 2" and its changing perceived impact throughout 5 demographic groups, we identify 13 cost of living Diplomatic, Information, Military, and Economic (DIME) features common across all 5 demographic groups. This enables the modeling and monitoring of Political, Military, Economic, Social, Information, and Infrastructure (PMESII) effects of each group to this idea and how their "perception" of this proposal changes. Our algorithm and results are summarized in this paper.

  12. A paradigm for data-driven predictive modeling using field inversion and machine learning

    NASA Astrophysics Data System (ADS)

    Parish, Eric J.; Duraisamy, Karthik

    2016-01-01

    We propose a modeling paradigm, termed field inversion and machine learning (FIML), that seeks to comprehensively harness data from sources such as high-fidelity simulations and experiments to aid the creation of improved closure models for computational physics applications. In contrast to inferring model parameters, this work uses inverse modeling to obtain corrective, spatially distributed functional terms, offering a route to directly address model-form errors. Once the inference has been performed over a number of problems that are representative of the deficient physics in the closure model, machine learning techniques are used to reconstruct the model corrections in terms of variables that appear in the closure model. These reconstructed functional forms are then used to augment the closure model in a predictive computational setting. As a first demonstrative example, a scalar ordinary differential equation is considered, wherein the model equation has missing and deficient terms. Following this, the methodology is extended to the prediction of turbulent channel flow. In both of these applications, the approach is demonstrated to be able to successfully reconstruct functional corrections and yield accurate predictive solutions while providing a measure of model form uncertainties.
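
    The two-step FIML idea (invert for a distributed correction, then learn it with a regressor) can be sketched on a scalar ODE. Everything below is invented for illustration: the "true" physics, the deficient base model, and the feature choice; the paper's inverse problem is far more general.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

dt, T = 0.01, 6.0
t = np.arange(0.0, T, dt)

def solve(y0, beta_fn):
    """Forward-Euler solve of dy/dt = -y + beta(t) with a supplied correction term."""
    y = np.empty_like(t)
    y[0] = y0
    for k in range(len(t) - 1):
        y[k + 1] = y[k] + dt * (-y[k] + beta_fn(t[k]))
    return y

truth = solve(2.0, lambda tk: 0.5 * np.sin(tk))    # synthetic "high-fidelity" data
base = solve(2.0, lambda tk: 0.0)                  # deficient closure: no correction term

# Field inversion: back out the distributed correction that reconciles the base model
# with the data. For explicit Euler on a scalar ODE the inversion is direct.
beta_inferred = (truth[1:] - truth[:-1]) / dt + truth[:-1]

# Machine learning: reconstruct the correction as a function of model variables
# (here just t, for simplicity) so it can be reused predictively.
ml = RandomForestRegressor(n_estimators=50, random_state=0).fit(
    t[:-1].reshape(-1, 1), beta_inferred)

# Predictive setting: different initial condition, correction supplied by the learned model.
augmented = solve(1.0, lambda tk: ml.predict([[tk]])[0])
reference = solve(1.0, lambda tk: 0.5 * np.sin(tk))
plain = solve(1.0, lambda tk: 0.0)
print("RMS error, base model:     ", np.sqrt(np.mean((plain - reference) ** 2)).round(4))
print("RMS error, augmented model:", np.sqrt(np.mean((augmented - reference) ** 2)).round(4))
```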

  13. ERGONOMICS ABSTRACTS 48347-48982.

    ERIC Educational Resources Information Center

    Ministry of Technology, London (England). Warren Spring Lab.

    IN THIS COLLECTION OF ERGONOMICS ABSTRACTS AND ANNOTATIONS THE FOLLOWING AREAS OF CONCERN ARE REPRESENTED--GENERAL REFERENCES, METHODS, FACILITIES, AND EQUIPMENT RELATING TO ERGONOMICS, SYSTEMS OF MAN AND MACHINES, VISUAL, AUDITORY, AND OTHER SENSORY INPUTS AND PROCESSES (INCLUDING SPEECH AND INTELLIGIBILITY), INPUT CHANNELS, BODY MEASUREMENTS,…

  14. Machine Learning Techniques for Combining Multi-Model Climate Projections (Invited)

    NASA Astrophysics Data System (ADS)

    Monteleoni, C.

    2013-12-01

    The threat of climate change is one of the greatest challenges currently facing society. Given the profound impact machine learning has made on the natural sciences to which it has been applied, such as the field of bioinformatics, machine learning is poised to accelerate discovery in climate science. Recent advances in the fledgling field of climate informatics have demonstrated the promise of machine learning techniques for problems in climate science. A key problem in climate science is how to combine the projections of the multi-model ensemble of global climate models that inform the Intergovernmental Panel on Climate Change (IPCC). I will present three approaches to this problem. Our Tracking Climate Models (TCM) work demonstrated the promise of an algorithm for online learning with expert advice, for this task. Given temperature projections and hindcasts from 20 IPCC global climate models, and over 100 years of historical temperature data, TCM generated predictions that tracked the changing sequence of which model currently predicts best. On historical data, at both annual and monthly time-scales, and in future simulations, TCM consistently outperformed the average over climate models, the existing benchmark in climate science, at both global and continental scales. We then extended TCM to take into account climate model projections at higher spatial resolutions, and to model geospatial neighborhood influence between regions. Our second algorithm enables neighborhood influence by modifying the transition dynamics of the Hidden Markov Model from which TCM is derived, allowing the performance of spatial neighbors to influence the temporal switching probabilities for the best climate model at a given location. We recently applied a third technique, sparse matrix completion, in which we create a sparse (incomplete) matrix from climate model projections/hindcasts and observed temperature data, and apply a matrix completion algorithm to recover it, yielding
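
    A generic "tracking the best expert" sketch in the spirit of the TCM idea above: an exponentially weighted forecaster with a fixed-share update, so the combination can switch to whichever model is currently best. The "climate models" here are three invented constant-bias predictors of a synthetic series, not the IPCC ensemble or the authors' exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 200
truth = np.concatenate([np.full(100, 14.0), np.full(100, 15.0)])   # regime shift
experts = np.stack([truth + b + rng.normal(0, 0.1, T) for b in (-1.0, 0.0, +1.0)])

eta, alpha = 2.0, 0.05            # learning rate and share (switching) rate
w = np.ones(3) / 3
losses = []
for step in range(T):
    pred = w @ experts[:, step]                        # weighted-average forecast
    losses.append((pred - truth[step]) ** 2)
    inst = (experts[:, step] - truth[step]) ** 2       # per-expert squared loss
    w = w * np.exp(-eta * inst)                        # exponential weights update
    w /= w.sum()
    w = (1 - alpha) * w + alpha / 3                    # fixed share: allow switching

print("mean squared error of the combination:", np.mean(losses).round(4))
print("final expert weights:", w.round(3))
```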

  15. Monte Carlo simulation of domain growth in the kinetic Ising model on the connection machine

    NASA Astrophysics Data System (ADS)

    Amar, Jacques G.; Sullivan, Francis

    1989-10-01

    A fast multispin algorithm for the Monte Carlo simulation of the two-dimensional spin-exchange kinetic Ising model, previously described by Sullivan and Mountain and used by Amar et al., has been adapted for use on the Connection Machine and applied as a first test in a calculation of domain growth. Features of the code include: (a) the use of demon bits, (b) the simulation of several runs simultaneously to improve the efficiency of the code, (c) the use of virtual processors to simulate easily and efficiently a larger system size, (d) the use of the (NEWS) grid for fast communication between neighbouring processors and updating of boundary layers, (e) the implementation of an efficient random number generator much faster than that provided by Thinking Machines Corp., and (f) the use of the LISP function "funcall" to select which processors to update. Overall speed of the code when run on a (128x128) processor machine is about 130 million attempted spin-exchanges per second, about 9 times faster than the comparable code using hardware vectorised-logic operations and 64-bit multispin coding on the Cyber 205. The same code can be used on a larger machine (65 536 processors) and should produce speeds in excess of 500 million attempted spin-exchanges per second.
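
    For orientation, a serial toy version of the spin-exchange (Kawasaki) Monte Carlo move is sketched below; it omits the multispin coding, demon bits, and Connection Machine data parallelism that the paper is actually about, and the lattice size and temperature are arbitrary:

      import numpy as np

      rng = np.random.default_rng(1)

      L, T, sweeps = 32, 1.0, 100
      spins = rng.choice([-1, 1], size=(L, L))        # composition is conserved by exchanges

      def local_energy(s, i, j):
          # Sum of bond energies touching site (i, j), periodic boundaries.
          return -s[i, j] * (s[(i + 1) % L, j] + s[(i - 1) % L, j] +
                             s[i, (j + 1) % L] + s[i, (j - 1) % L])

      nbrs = [(1, 0), (-1, 0), (0, 1), (0, -1)]
      for _ in range(sweeps * L * L):
          i, j = rng.integers(L, size=2)
          di, dj = nbrs[rng.integers(4)]
          ni, nj = (i + di) % L, (j + dj) % L
          if spins[i, j] == spins[ni, nj]:
              continue                                # exchanging equal spins changes nothing
          e_before = local_energy(spins, i, j) + local_energy(spins, ni, nj)
          spins[i, j], spins[ni, nj] = spins[ni, nj], spins[i, j]
          e_after = local_energy(spins, i, j) + local_energy(spins, ni, nj)
          if rng.random() >= np.exp(-(e_after - e_before) / T):
              spins[i, j], spins[ni, nj] = spins[ni, nj], spins[i, j]   # reject: swap back

      print("total magnetization (conserved):", spins.sum())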

  16. Abstraction of mechanistic sorption model results for performance assessment calculations at Yucca Mountain, Nevada

    SciTech Connect

    Turner, D.R.; Pabalan, R.T. )

    1999-01-01

    Sorption onto minerals in the geologic setting may help to mitigate potential radionuclide transport from the proposed high-level radioactive waste repository at Yucca Mountain (YM), Nevada. An approach is developed for including aspects of more mechanistic sorption models into current probabilistic performance assessment (PA) calculations. Data on water chemistry from the vicinity of YM are screened and used to calculate the ranges in parameters that could exert control on radionuclide sorption behavior. Using a diffuse-layer surface complexation model, sorption parameters for Np(V) and U(VI) are calculated based on the chemistry of each water sample. Model results suggest that log normal probability distribution functions (PDFs) of sorption parameters are appropriate for most of the samples, but the calculated range is almost five orders of magnitude for Np(V) sorption and nine orders of magnitude for U(VI) sorption. Calculated sorption parameters may also vary at a single sample location by almost a factor of 10 over time periods of the order of days to years due to changes in chemistry, although sampling and analytical methodologies may introduce artifacts that add uncertainty to the evaluation of these fluctuations. Finally, correlation coefficients between the calculated Np(V) and U(VI) sorption parameters can be included as input into PA sampling routines, so that the value selected for one radionuclide sorption parameter is conditioned by its statistical relationship to the others. The approaches outlined here can be adapted readily to current PA efforts, using site-specific information to provide geochemical constraints on PDFs for radionuclide transport parameters.
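
    The sampling idea, drawing correlated lognormal sorption parameters for use in a probabilistic PA routine, can be sketched as follows; the medians, spreads, and correlation coefficient are placeholders rather than values from the paper:

      import numpy as np

      rng = np.random.default_rng(7)

      mu = np.log([5.0, 1.0])        # hypothetical median Kd values (mL/g) for Np(V), U(VI)
      sigma = np.array([1.5, 2.0])   # hypothetical log-space standard deviations
      rho = 0.8                      # hypothetical Np(V)-U(VI) correlation coefficient

      cov = np.array([[sigma[0]**2,               rho * sigma[0] * sigma[1]],
                      [rho * sigma[0] * sigma[1], sigma[1]**2]])
      log_kd = rng.multivariate_normal(mu, cov, size=10_000)
      kd_np, kd_u = np.exp(log_kd).T      # lognormal marginals, jointly correlated

      print("sample correlation of log Kd:",
            np.corrcoef(np.log(kd_np), np.log(kd_u))[0, 1])
      print("Np(V) Kd range spans ~%.1f orders of magnitude"
            % np.log10(kd_np.max() / kd_np.min()))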

  17. Abstraction of mechanistic sorption model results for performance assessment calculations at Yucca Mountain, Nevada

    SciTech Connect

    Turner, D.R.; Pabalan, R.T.

    1999-11-01

    Sorption onto minerals in the geologic setting may help to mitigate potential radionuclide transport from the proposed high-level radioactive waste repository at Yucca Mountain (YM), Nevada. An approach is developed for including aspects of more mechanistic sorption models into current probabilistic performance assessment (PA) calculations. Data on water chemistry from the vicinity of YM are screened and used to calculate the ranges in parameters that could exert control on radionuclide corruption behavior. Using a diffuse-layer surface complexation model, sorption parameters for Np(V) and U(VI) are calculated based on the chemistry of each water sample. Model results suggest that log normal probability distribution functions (PDFs) of sorption parameters are appropriate for most of the samples, but the calculated range is almost five orders of magnitude for Np(V) sorption and nine orders of magnitude for U(VI) sorption. Calculated sorption parameters may also vary at a single sample location by almost a factor of 10 over time periods of the order of days to years due to changes in chemistry, although sampling and analytical methodologies may introduce artifacts that add uncertainty to the evaluation of these fluctuations. Finally, correlation coefficients between the calculated Np(V) and U(VI) sorption parameters can be included as input into PA sampling routines, so that the value selected for one radionuclide sorption parameter is conditioned by its statistical relationship to the others. The approaches outlined here can be adapted readily to current PA efforts, using site-specific information to provide geochemical constraints on PDFs for radionuclide transport parameters.

  18. Learning with Technology: Video Modeling with Concrete-Representational-Abstract Sequencing for Students with Autism Spectrum Disorder.

    PubMed

    Yakubova, Gulnoza; Hughes, Elizabeth M; Shinaberry, Megan

    2016-07-01

    The purpose of this study was to determine the effectiveness of a video modeling intervention with concrete-representational-abstract instructional sequence in teaching mathematics concepts to students with autism spectrum disorder (ASD). A multiple baseline across skills design of single-case experimental methodology was used to determine the effectiveness of the intervention on the acquisition and maintenance of addition, subtraction, and number comparison skills for four elementary school students with ASD. Findings supported the effectiveness of the intervention in improving skill acquisition and maintenance at a 3-week follow-up. Implications for practice and future research are discussed. PMID:26983919

  19. A study of sound transmission in an abstract middle ear using physical and finite element models.

    PubMed

    Gonzalez-Herrera, Antonio; Olson, Elizabeth S

    2015-11-01

    The classical picture of middle ear (ME) transmission has the tympanic membrane (TM) as a piston and the ME cavity as a vacuum. In reality, the TM moves in a complex multiphasic pattern and substantial pressure is radiated into the ME cavity by the motion of the TM. This study explores ME transmission with a simple model, using a tube terminated with a plastic membrane. Membrane motion was measured with a laser interferometer and pressure on both sides of the membrane with micro-sensors that could be positioned close to the membrane without disturbance. A finite element model of the system explored the experimental results. Both experimental and theoretical results show resonances that are in some cases primarily acoustical or mechanical and sometimes produced by coupled acousto-mechanics. The largest membrane motions were a result of the membrane's mechanical resonances. At these resonant frequencies, sound transmission through the system was larger with the membrane in place than it was when the membrane was absent.

  20. Modeling Physical Processes at the Nanoscale—Insight into Self-Organization of Small Systems (abstract)

    NASA Astrophysics Data System (ADS)

    Proykova, Ana

    2009-04-01

    Essential contributions have been made in the field of finite-size systems of ingredients interacting with potentials of various ranges. Theoretical simulations have revealed peculiar size effects on stability, ground state structure, phases, and phase transformation of systems confined in space and time. Models developed in the field of pure physics (atomic and molecular clusters) have been extended and successfully transferred to finite-size systems that seem very different—small-scale financial markets, autoimmune reactions, and social group reactions to advertisements. The models show that small-scale markets diverge unexpectedly fast as a result of small fluctuations; autoimmune reactions are sequences of two discontinuous phase transitions; and social groups possess critical behavior (social percolation) under the influence of an external field (advertisement). Some predicted size-dependent properties have been experimentally observed. These findings lead to the hypothesis that restrictions on an object's size determine the object's total internal (configuration) and external (environmental) interactions. Since phases are emergent phenomena produced by self-organization of a large number of particles, the occurrence of a phase in a system containing a small number of ingredients is remarkable.

  1. Kinetic modeling of hydrocarbon autoignition at low and intermediate temperatures in a rapid compression machine

    SciTech Connect

    Curran, H J; Pitz, W J; Westbrook, C K; Griffiths, J F; Mohamed, C

    2000-11-01

    A computer model is used to examine oxidation of hydrocarbon fuels in a rapid compression machine. For one of the fuels studied, n-heptane, significant fuel consumption is computed to take place during the compression stroke under some operating conditions, while for the less reactive n-pentane, no appreciable fuel consumption occurs until after the end of compression. The third fuel studied, a 60 PRF mixture of iso-octane and n-heptane, exhibits behavior that is intermediate between that of n-heptane and n-pentane. The model results indicate that computational studies of rapid compression machine ignition must consider fuel reaction during compression in order to achieve satisfactory agreement between computed and experimental results.

  2. A mathematical model of the controlled axial flow divider for mobile machines

    NASA Astrophysics Data System (ADS)

    Mulyukin, V. L.; Karelin, D. L.; Belousov, A. M.

    2016-06-01

    The authors present a mathematical model of the adjustable axial flow divider that allows one to define the parameters of the feed pump and the hydraulic motor-wheels in the multi-circuit hydrostatic transmission of mobile machines; example characteristic curves are also constructed that make it possible to clearly evaluate the mutual influence of pressure and flow values across all input and output circuits of the system.

  3. RMP model based optimization of power system stabilizers in multi-machine power system.

    PubMed

    Baek, Seung-Mook; Park, Jung-Wook

    2009-01-01

    This paper describes the nonlinear parameter optimization of a power system stabilizer (PSS) by using the reduced multivariate polynomial (RMP) algorithm with the one-shot property. The RMP model estimates the second-order partial derivatives of the Hessian matrix after identifying the trajectory sensitivities, which can be computed from the hybrid system modeling with a set of differential-algebraic-impulsive-switched (DAIS) structure for a power system. Then, any nonlinear controller in the power system can be optimized by achieving a desired performance measure, mathematically represented by an objective function (OF). In this paper, the output saturation limiter of the PSS, which is used to improve low-frequency oscillation damping performance during a large disturbance, is optimally tuned by exploiting the Hessian estimated by the RMP model. Its performance is evaluated with several case studies on both single-machine infinite bus (SMIB) and multi-machine power system (MMPS) models by time-domain simulation. In particular, all nonlinear parameters of multiple PSSs on the IEEE benchmark two-area four-machine power system are optimized to be robust against various disturbances by using the weighted sum of the OFs. PMID:19596547

  4. A model of unsteady spatially inhomogeneous flow in a radial-axial blade machine

    NASA Astrophysics Data System (ADS)

    Ambrozhevich, A. V.; Munshtukov, D. A.

    A two-dimensional model of the gasdynamic process in a radial-axial blade machine is proposed which allows for the instantaneous local state of the field of flow parameters, changes in the set angles along the median profile line, profile losses, and centrifugal and Coriolis forces. The model also allows for the injection of cooling air and completion of fuel combustion in the flow. The model is equally applicable to turbines and compressors. The use of the method of singularities provides for a unified and relatively simple description of various factors affecting the flow and, therefore, for computational efficiency.

  5. Extreme learning machine based spatiotemporal modeling of lithium-ion battery thermal dynamics

    NASA Astrophysics Data System (ADS)

    Liu, Zhen; Li, Han-Xiong

    2015-03-01

    Due to the overwhelming complexity of the electrochemistry-related behaviors and internal structure of lithium-ion batteries, it is difficult to obtain an accurate mathematical expression of their thermal dynamics from physical principles. In this paper, a data-based thermal model suitable for online temperature distribution estimation is proposed for lithium-ion batteries. Starting from a physics-based model, a simple but effective low-order model is obtained using the Karhunen-Loeve decomposition method. The corresponding uncertain chemistry-related heat generation term in the low-order model is approximated using an extreme learning machine. All uncertain parameters in the low-order model can be determined analytically in a linear way. Finally, the temperature distribution of the whole battery can be estimated in real time based on the identified low-order model. Simulation results demonstrate the effectiveness of the proposed model, and its simple training process makes it well suited for onboard application.
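
    The extreme learning machine step, in which the hidden-layer weights are random and only the output weights are solved for linearly, can be sketched as follows on synthetic stand-in data (the feature names and target are placeholders, not the paper's heat-generation term):

      import numpy as np

      rng = np.random.default_rng(0)

      X = rng.uniform(-1, 1, (500, 3))                 # e.g. current, SOC, temperature (stand-ins)
      y = np.sin(X[:, 0]) + 0.5 * X[:, 1] * X[:, 2]    # stand-in nonlinear target

      n_hidden = 100
      W = rng.normal(size=(X.shape[1], n_hidden))      # random input weights (fixed, not trained)
      b = rng.normal(size=n_hidden)                    # random biases (fixed, not trained)
      H = np.tanh(X @ W + b)                           # hidden-layer activations
      beta, *_ = np.linalg.lstsq(H, y, rcond=None)     # output weights, solved in closed form

      y_hat = np.tanh(X @ W + b) @ beta
      print("training RMSE:", np.sqrt(np.mean((y - y_hat) ** 2)))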

  6. Hypoglycemia prediction using machine learning models for patients with type 2 diabetes.

    PubMed

    Sudharsan, Bharath; Peeples, Malinda; Shomali, Mansur

    2015-01-01

    Minimizing the occurrence of hypoglycemia in patients with type 2 diabetes is a challenging task since these patients typically check only 1 to 2 self-monitored blood glucose (SMBG) readings per day. We trained a probabilistic model using machine learning algorithms and SMBG values from real patients. Hypoglycemia was defined as a SMBG value < 70 mg/dL. We validated our model using multiple data sets. In addition, we trained a second model, which used patient SMBG values and information about patient medication administration. The optimal number of SMBG values needed by the model was approximately 10 per week. The sensitivity of the model for predicting a hypoglycemia event in the next 24 hours was 92% and the specificity was 70%. In the model that incorporated medication information, the prediction window was for the hour of hypoglycemia, and the specificity improved to 90%. Our machine learning models can predict hypoglycemia events with a high degree of sensitivity and specificity. These models, which have been validated retrospectively, could be useful tools for reducing hypoglycemia in vulnerable patients if implemented in real time.

  7. Machine Learning Methods Enable Predictive Modeling of Antibody Feature:Function Relationships in RV144 Vaccinees

    PubMed Central

    Choi, Ickwon; Chung, Amy W.; Suscovich, Todd J.; Rerks-Ngarm, Supachai; Pitisuttithum, Punnee; Nitayaphan, Sorachai; Kaewkungwal, Jaranit; O'Connell, Robert J.; Francis, Donald; Robb, Merlin L.; Michael, Nelson L.; Kim, Jerome H.; Alter, Galit; Ackerman, Margaret E.; Bailey-Kellogg, Chris

    2015-01-01

    The adaptive immune response to vaccination or infection can lead to the production of specific antibodies to neutralize the pathogen or recruit innate immune effector cells for help. The non-neutralizing role of antibodies in stimulating effector cell responses may have been a key mechanism of the protection observed in the RV144 HIV vaccine trial. In an extensive investigation of a rich set of data collected from RV144 vaccine recipients, we here employ machine learning methods to identify and model associations between antibody features (IgG subclass and antigen specificity) and effector function activities (antibody dependent cellular phagocytosis, cellular cytotoxicity, and cytokine release). We demonstrate via cross-validation that classification and regression approaches can effectively use the antibody features to robustly predict qualitative and quantitative functional outcomes. This integration of antibody feature and function data within a machine learning framework provides a new, objective approach to discovering and assessing multivariate immune correlates. PMID:25874406

  8. Experimental study on light induced influence model to mice using support vector machine

    NASA Astrophysics Data System (ADS)

    Ji, Lei; Zhao, Zhimin; Yu, Yinshan; Zhu, Xingyue

    2014-08-01

    Previous researchers have studied the various effects of light irradiation on animals, including retinal damage and changes in internal indices. However, a model of light-induced damage to animals that uses physiological indicators as features in a machine learning method has not previously been established. This study was designed to evaluate the changes in microvascular diameter, serum absorption spectrum, and blood flow caused by light irradiation of different wavelengths, powers, and exposure times using a support vector machine (SVM). Micrographs of the mouse auricle were recorded and the vessel diameters were calculated by a computer program. The serum absorption spectra were analyzed. The results show that training sample rates of 20% and 50% yield almost the same correct recognition rate. The best performance and accuracy were achieved by a third-order polynomial kernel SVM trained by quadratic optimization, which proved suitable for predicting light-induced damage to organisms.
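
    A hedged sketch of the classification setup is given below, using simulated stand-ins for the physiological features and the third-order polynomial kernel SVM named in the abstract; the 20% training rate mirrors the study's smaller training sample:

      import numpy as np
      from sklearn.svm import SVC
      from sklearn.model_selection import train_test_split
      from sklearn.preprocessing import StandardScaler
      from sklearn.pipeline import make_pipeline

      rng = np.random.default_rng(3)

      # Synthetic stand-ins: columns play the role of vessel-diameter change,
      # blood flow, and serum absorbance summaries; the label is damaged vs not.
      n = 200
      X = rng.normal(size=(n, 4))
      y = (X[:, 0] + 0.5 * X[:, 1] ** 2 - X[:, 2] > 0).astype(int)

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.2, random_state=0)
      clf = make_pipeline(StandardScaler(), SVC(kernel="poly", degree=3, C=1.0))
      clf.fit(X_tr, y_tr)                      # 20% training rate
      print("correct recognition rate:", clf.score(X_te, y_te))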

  9. Predicting Pre-planting Risk of Stagonospora nodorum blotch in Winter Wheat Using Machine Learning Models.

    PubMed

    Mehra, Lucky K; Cowger, Christina; Gross, Kevin; Ojiambo, Peter S

    2016-01-01

    Pre-planting factors have been associated with the late-season severity of Stagonospora nodorum blotch (SNB), caused by the fungal pathogen Parastagonospora nodorum, in winter wheat (Triticum aestivum). The relative importance of these factors in the risk of SNB has not been determined and this knowledge can facilitate disease management decisions prior to planting of the wheat crop. In this study, we examined the performance of multiple regression (MR) and three machine learning algorithms namely artificial neural networks, categorical and regression trees, and random forests (RF), in predicting the pre-planting risk of SNB in wheat. Pre-planting factors tested as potential predictor variables were cultivar resistance, latitude, longitude, previous crop, seeding rate, seed treatment, tillage type, and wheat residue. Disease severity assessed at the end of the growing season was used as the response variable. The models were developed using 431 disease cases (unique combinations of predictors) collected from 2012 to 2014 and these cases were randomly divided into training, validation, and test datasets. Models were evaluated based on the regression of observed against predicted severity values of SNB, sensitivity-specificity ROC analysis, and the Kappa statistic. A strong relationship was observed between late-season severity of SNB and specific pre-planting factors in which latitude, longitude, wheat residue, and cultivar resistance were the most important predictors. The MR model explained 33% of variability in the data, while machine learning models explained 47 to 79% of the total variability. Similarly, the MR model correctly classified 74% of the disease cases, while machine learning models correctly classified 81 to 83% of these cases. Results show that the RF algorithm, which explained 79% of the variability within the data, was the most accurate in predicting the risk of SNB, with an accuracy rate of 93%. The RF algorithm could allow early assessment of
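
    A schematic version of the random forest workflow, with simulated disease cases in place of the 431 observed ones and placeholder predictor columns named after the abstract's pre-planting factors, might look like this:

      import numpy as np
      import pandas as pd
      from sklearn.ensemble import RandomForestRegressor
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(5)

      n = 431
      data = pd.DataFrame({
          "latitude": rng.uniform(34, 37, n),
          "longitude": rng.uniform(-84, -76, n),
          "cultivar_resistance": rng.integers(1, 10, n),
          "wheat_residue": rng.integers(0, 2, n),
          "seeding_rate": rng.uniform(100, 200, n),
      })
      # Invented relationship standing in for observed late-season SNB severity.
      severity = (0.4 * data["latitude"] - 0.2 * data["cultivar_resistance"]
                  + 2.0 * data["wheat_residue"] + rng.normal(0, 1, n))

      X_train, X_test, y_train, y_test = train_test_split(
          data, severity, test_size=0.3, random_state=0)
      rf = RandomForestRegressor(n_estimators=500, random_state=0)
      rf.fit(X_train, y_train)

      print("R^2 on held-out cases:", rf.score(X_test, y_test))
      print("variable importance:",
            dict(zip(data.columns, rf.feature_importances_.round(3))))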

  10. Predicting Pre-planting Risk of Stagonospora nodorum blotch in Winter Wheat Using Machine Learning Models

    PubMed Central

    Mehra, Lucky K.; Cowger, Christina; Gross, Kevin; Ojiambo, Peter S.

    2016-01-01

    Pre-planting factors have been associated with the late-season severity of Stagonospora nodorum blotch (SNB), caused by the fungal pathogen Parastagonospora nodorum, in winter wheat (Triticum aestivum). The relative importance of these factors in the risk of SNB has not been determined and this knowledge can facilitate disease management decisions prior to planting of the wheat crop. In this study, we examined the performance of multiple regression (MR) and three machine learning algorithms namely artificial neural networks, categorical and regression trees, and random forests (RF), in predicting the pre-planting risk of SNB in wheat. Pre-planting factors tested as potential predictor variables were cultivar resistance, latitude, longitude, previous crop, seeding rate, seed treatment, tillage type, and wheat residue. Disease severity assessed at the end of the growing season was used as the response variable. The models were developed using 431 disease cases (unique combinations of predictors) collected from 2012 to 2014 and these cases were randomly divided into training, validation, and test datasets. Models were evaluated based on the regression of observed against predicted severity values of SNB, sensitivity-specificity ROC analysis, and the Kappa statistic. A strong relationship was observed between late-season severity of SNB and specific pre-planting factors in which latitude, longitude, wheat residue, and cultivar resistance were the most important predictors. The MR model explained 33% of variability in the data, while machine learning models explained 47 to 79% of the total variability. Similarly, the MR model correctly classified 74% of the disease cases, while machine learning models correctly classified 81 to 83% of these cases. Results show that the RF algorithm, which explained 79% of the variability within the data, was the most accurate in predicting the risk of SNB, with an accuracy rate of 93%. The RF algorithm could allow early assessment of

  11. Predicting Pre-planting Risk of Stagonospora nodorum blotch in Winter Wheat Using Machine Learning Models.

    PubMed

    Mehra, Lucky K; Cowger, Christina; Gross, Kevin; Ojiambo, Peter S

    2016-01-01

    Pre-planting factors have been associated with the late-season severity of Stagonospora nodorum blotch (SNB), caused by the fungal pathogen Parastagonospora nodorum, in winter wheat (Triticum aestivum). The relative importance of these factors in the risk of SNB has not been determined and this knowledge can facilitate disease management decisions prior to planting of the wheat crop. In this study, we examined the performance of multiple regression (MR) and three machine learning algorithms namely artificial neural networks, categorical and regression trees, and random forests (RF), in predicting the pre-planting risk of SNB in wheat. Pre-planting factors tested as potential predictor variables were cultivar resistance, latitude, longitude, previous crop, seeding rate, seed treatment, tillage type, and wheat residue. Disease severity assessed at the end of the growing season was used as the response variable. The models were developed using 431 disease cases (unique combinations of predictors) collected from 2012 to 2014 and these cases were randomly divided into training, validation, and test datasets. Models were evaluated based on the regression of observed against predicted severity values of SNB, sensitivity-specificity ROC analysis, and the Kappa statistic. A strong relationship was observed between late-season severity of SNB and specific pre-planting factors in which latitude, longitude, wheat residue, and cultivar resistance were the most important predictors. The MR model explained 33% of variability in the data, while machine learning models explained 47 to 79% of the total variability. Similarly, the MR model correctly classified 74% of the disease cases, while machine learning models correctly classified 81 to 83% of these cases. Results show that the RF algorithm, which explained 79% of the variability within the data, was the most accurate in predicting the risk of SNB, with an accuracy rate of 93%. The RF algorithm could allow early assessment of

  12. Uncertainty "escalation" and use of machine learning to forecast residual and data model uncertainties

    NASA Astrophysics Data System (ADS)

    Solomatine, Dimitri

    2016-04-01

    When speaking about model uncertainty, many authors implicitly assume the data uncertainty (mainly in parameters or inputs), which is probabilistically described by distributions. Often, however, it is useful to look into the residual uncertainty as well. It is hence reasonable to classify the main approaches to uncertainty analysis with respect to the two main types of model uncertainty that can be distinguished: A. The residual uncertainty of models. In this case the model parameters and/or model inputs are considered to be fixed (deterministic), i.e. the model is considered to be optimal (calibrated) and deterministic. Model error is considered as the manifestation of uncertainty. If there is enough past data about the model errors (i.e. its uncertainty), it is possible to build a statistical or machine learning model of uncertainty trained on this data. The following methods can be mentioned: (a) the quantile regression (QR) method by Koenker and Basset, in which linear regression is used to build predictive models for distribution quantiles [1]; (b) a more recent approach that takes into account the input variables influencing such uncertainty and uses more advanced machine learning (non-linear) methods (neural networks, model trees, etc.) - the UNEEC method [2,3,7]; (c) the even more recent DUBRAUE method (Dynamic Uncertainty Model By Regression on Absolute Error), an autoregressive model of model residuals (it corrects the model residual first and then carries out the uncertainty prediction with an autoregressive statistical model) [5]. B. The data uncertainty (parametric and/or input) - in this case we study the propagation of uncertainty (typically presented probabilistically) from parameters or inputs to the model outputs. In the case of simple functions representing models, analytical approaches can be used, or approximation methods (e.g., first-order second moment method). However, for real complex non-linear models implemented in software there is no other choice except using
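
    Method (a), linear quantile regression on model residuals, can be sketched as follows; the data are synthetic placeholders, and the bare-bones pinball-loss fit stands in for the cited QR implementations:

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(4)

      # A deterministic "model" leaves heteroscedastic errors; fit the 5% and 95%
      # residual quantiles as linear functions of an input driving the uncertainty.
      n = 500
      x = rng.uniform(0, 10, n)                      # e.g. rainfall intensity (stand-in)
      residual = rng.normal(0, 0.2 + 0.1 * x)        # model error grows with x
      X = np.column_stack([np.ones(n), x])

      def pinball(theta, tau):
          e = residual - X @ theta
          return np.mean(np.maximum(tau * e, (tau - 1) * e))

      q05 = minimize(pinball, x0=np.zeros(2), args=(0.05,), method="Nelder-Mead").x
      q95 = minimize(pinball, x0=np.zeros(2), args=(0.95,), method="Nelder-Mead").x

      coverage = np.mean((residual >= X @ q05) & (residual <= X @ q95))
      print("empirical coverage of the 90% residual band:", coverage)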

  13. Mathematical concepts for modeling human behavior in complex man-machine systems

    NASA Technical Reports Server (NTRS)

    Johannsen, G.; Rouse, W. B.

    1979-01-01

    Many human behavior (e.g., manual control) models have been found to be inadequate for describing processes in certain real complex man-machine systems. An attempt is made to find a way to overcome this problem by examining the range of applicability of existing mathematical models with respect to the hierarchy of human activities in real complex tasks. Automobile driving is chosen as a baseline scenario, and a hierarchy of human activities is derived by analyzing this task in general terms. A structural description leads to a block diagram and a time-sharing computer analogy.

  14. Estimating the complexity of 3D structural models using machine learning methods

    NASA Astrophysics Data System (ADS)

    Mejía-Herrera, Pablo; Kakurina, Maria; Royer, Jean-Jacques

    2016-04-01

    Quantifying the complexity of 3D geological structural models can play a major role in natural resources exploration surveys, in predicting environmental hazards, and in forecasting fossil resources. This paper proposes a structural complexity index which can be used to help define the degree of effort necessary to build a 3D model for a given degree of confidence, and also to identify locations where additional effort is required to meet a given acceptable risk of uncertainty. In this work, it is considered that the structural complexity index can be estimated using machine learning methods on raw geo-data. More precisely, the metric for measuring the complexity can be approximated as the degree of difficulty associated with predicting the distribution of geological objects from partial information on the actual structural distribution of materials. The proposed methodology is tested on a set of 3D synthetic structural models for which the degree of effort during their building is assessed using various parameters (such as the number of faults, the number of parts in a surface object, the number of borders, ...), the rank of geological elements contained in each model, and, finally, their level of deformation (folding and faulting). The results show how the estimated complexity of a 3D model can be approximated by the quantity of partial data necessary to reproduce the actual 3D model at a given precision, without error, using machine learning algorithms.

  15. Structure Based Thermostability Prediction Models for Protein Single Point Mutations with Machine Learning Tools

    PubMed Central

    Jia, Lei; Yarlagadda, Ramya; Reed, Charles C.

    2015-01-01

    The thermostability of protein point mutations is a common issue in protein engineering. An application which predicts the thermostability of mutants can be helpful for guiding the decision-making process in protein design via mutagenesis. An in silico point mutation scanning method is frequently used to find “hot spots” in proteins for focused mutagenesis. ProTherm (http://gibk26.bio.kyutech.ac.jp/jouhou/Protherm/protherm.html) is a public database that consists of thousands of protein mutants’ experimentally measured thermostability. Two data sets based on two differently measured thermostability properties of protein single point mutations, namely the unfolding free energy change (ddG) and the melting temperature change (dTm), were obtained from this database. Folding free energy change calculations from Rosetta, structural information of the point mutations, as well as amino acid physical properties were obtained for building thermostability prediction models with informatics modeling tools. Five supervised machine learning methods (support vector machine, random forests, artificial neural network, naïve Bayes classifier, K nearest neighbor) and partial least squares regression were used for building the prediction models. Binary and ternary classifications as well as regression models were built and evaluated. Data set redundancy and balancing, the reverse mutations technique, feature selection, and comparison to other published methods were discussed. The Rosetta-calculated folding free energy change ranked as the most influential feature in all prediction models. Other descriptors also made significant contributions to increasing the accuracy of the prediction models. PMID:26361227

  16. Structure Based Thermostability Prediction Models for Protein Single Point Mutations with Machine Learning Tools.

    PubMed

    Jia, Lei; Yarlagadda, Ramya; Reed, Charles C

    2015-01-01

    The thermostability of protein point mutations is a common issue in protein engineering. An application which predicts the thermostability of mutants can be helpful for guiding the decision-making process in protein design via mutagenesis. An in silico point mutation scanning method is frequently used to find "hot spots" in proteins for focused mutagenesis. ProTherm (http://gibk26.bio.kyutech.ac.jp/jouhou/Protherm/protherm.html) is a public database that consists of thousands of protein mutants' experimentally measured thermostability. Two data sets based on two differently measured thermostability properties of protein single point mutations, namely the unfolding free energy change (ddG) and the melting temperature change (dTm), were obtained from this database. Folding free energy change calculations from Rosetta, structural information of the point mutations, as well as amino acid physical properties were obtained for building thermostability prediction models with informatics modeling tools. Five supervised machine learning methods (support vector machine, random forests, artificial neural network, naïve Bayes classifier, K nearest neighbor) and partial least squares regression were used for building the prediction models. Binary and ternary classifications as well as regression models were built and evaluated. Data set redundancy and balancing, the reverse mutations technique, feature selection, and comparison to other published methods were discussed. The Rosetta-calculated folding free energy change ranked as the most influential feature in all prediction models. Other descriptors also made significant contributions to increasing the accuracy of the prediction models.

  17. Structure Based Thermostability Prediction Models for Protein Single Point Mutations with Machine Learning Tools.

    PubMed

    Jia, Lei; Yarlagadda, Ramya; Reed, Charles C

    2015-01-01

    The thermostability of protein point mutations is a common issue in protein engineering. An application which predicts the thermostability of mutants can be helpful for guiding the decision-making process in protein design via mutagenesis. An in silico point mutation scanning method is frequently used to find "hot spots" in proteins for focused mutagenesis. ProTherm (http://gibk26.bio.kyutech.ac.jp/jouhou/Protherm/protherm.html) is a public database that consists of thousands of protein mutants' experimentally measured thermostability. Two data sets based on two differently measured thermostability properties of protein single point mutations, namely the unfolding free energy change (ddG) and the melting temperature change (dTm), were obtained from this database. Folding free energy change calculations from Rosetta, structural information of the point mutations, as well as amino acid physical properties were obtained for building thermostability prediction models with informatics modeling tools. Five supervised machine learning methods (support vector machine, random forests, artificial neural network, naïve Bayes classifier, K nearest neighbor) and partial least squares regression were used for building the prediction models. Binary and ternary classifications as well as regression models were built and evaluated. Data set redundancy and balancing, the reverse mutations technique, feature selection, and comparison to other published methods were discussed. The Rosetta-calculated folding free energy change ranked as the most influential feature in all prediction models. Other descriptors also made significant contributions to increasing the accuracy of the prediction models. PMID:26361227

  18. Constructing and validating readability models: the method of integrating multilevel linguistic features with machine learning.

    PubMed

    Sung, Yao-Ting; Chen, Ju-Ling; Cha, Ji-Her; Tseng, Hou-Chiang; Chang, Tao-Hsing; Chang, Kuo-En

    2015-06-01

    Multilevel linguistic features have been proposed for discourse analysis, but there have been few applications of multilevel linguistic features to readability models and also few validations of such models. Most traditional readability formulae are based on generalized linear models (GLMs; e.g., discriminant analysis and multiple regression), but these models have to comply with certain statistical assumptions about data properties and include all of the data in formulae construction without pruning the outliers in advance. The use of such readability formulae tends to produce a low text classification accuracy, while using a support vector machine (SVM) in machine learning can enhance the classification outcome. The present study constructed readability models by integrating multilevel linguistic features with SVM, which is more appropriate for text classification. Taking the Chinese language as an example, this study developed 31 linguistic features as the predicting variables at the word, semantic, syntax, and cohesion levels, with grade levels of texts as the criterion variable. The study compared four types of readability models by integrating unilevel and multilevel linguistic features with GLMs and an SVM. The results indicate that adopting a multilevel approach in readability analysis provides a better representation of the complexities of both texts and the reading comprehension process.

  19. Biosimilarity Assessments of Model IgG1-Fc Glycoforms Using a Machine Learning Approach.

    PubMed

    Kim, Jae Hyun; Joshi, Sangeeta B; Tolbert, Thomas J; Middaugh, C Russell; Volkin, David B; Smalter Hall, Aaron

    2016-02-01

    Biosimilarity assessments are performed to decide whether 2 preparations of complex biomolecules can be considered "highly similar." In this work, a machine learning approach is demonstrated as a mathematical tool for such assessments using a variety of analytical data sets. As proof-of-principle, physical stability data sets from 8 samples, 4 well-defined immunoglobulin G1-Fragment crystallizable glycoforms in 2 different formulations, were examined (see More et al., companion article in this issue). The data sets included triplicate measurements from 3 analytical methods across different pH and temperature conditions (2066 data features). Established machine learning techniques were used to determine whether the data sets contain sufficient discriminative power in this application. The support vector machine classifier identified the 8 distinct samples with high accuracy. For these data sets, there exists a minimum threshold in terms of information quality and volume to grant enough discriminative power. Generally, data from multiple analytical techniques, multiple pH conditions, and at least 200 representative features were required to achieve the highest discriminative accuracy. In addition to classification accuracy tests, various methods such as sample space visualization, similarity analysis based on Euclidean distance, and feature ranking by mutual information scores are demonstrated to display their effectiveness as modeling tools for biosimilarity assessments.

  20. Supercomputer Assisted Generation of Machine Learning Agents for the Calibration of Building Energy Models

    SciTech Connect

    Sanyal, Jibonananda; New, Joshua Ryan; Edwards, Richard

    2013-01-01

    Building Energy Modeling (BEM) is an approach to model the energy usage in buildings for design and retrofit purposes. EnergyPlus is the flagship Department of Energy software that performs BEM for different types of buildings. The input to EnergyPlus can often extend in the order of a few thousand parameters which have to be calibrated manually by an expert for realistic energy modeling. This makes it challenging and expensive thereby making building energy modeling unfeasible for smaller projects. In this paper, we describe the "Autotune" research which employs machine learning algorithms to generate agents for the different kinds of standard reference buildings in the U.S. building stock. The parametric space and the variety of building locations and types make this a challenging computational problem necessitating the use of supercomputers. Millions of EnergyPlus simulations are run on supercomputers which are subsequently used to train machine learning algorithms to generate agents. These agents, once created, can then run in a fraction of the time thereby allowing cost-effective calibration of building models.

  1. Classification of Mouse Sperm Motility Patterns Using an Automated Multiclass Support Vector Machines Model1

    PubMed Central

    Goodson, Summer G.; Zhang, Zhaojun; Tsuruta, James K.; Wang, Wei; O'Brien, Deborah A.

    2011-01-01

    Vigorous sperm motility, including the transition from progressive to hyperactivated motility that occurs in the female reproductive tract, is required for normal fertilization in mammals. We developed an automated, quantitative method that objectively classifies five distinct motility patterns of mouse sperm using Support Vector Machines (SVM), a common method in supervised machine learning. This multiclass SVM model is based on more than 2000 sperm tracks that were captured by computer-assisted sperm analysis (CASA) during in vitro capacitation and visually classified as progressive, intermediate, hyperactivated, slow, or weakly motile. Parameters associated with the classified tracks were incorporated into established SVM algorithms to generate a series of equations. These equations were integrated into a binary decision tree that sequentially sorts uncharacterized tracks into distinct categories. The first equation sorts CASA tracks into vigorous and nonvigorous categories. Additional equations classify vigorous tracks as progressive, intermediate, or hyperactivated and nonvigorous tracks as slow or weakly motile. Our CASAnova software uses these SVM equations to classify individual sperm motility patterns automatically. Comparisons of motility profiles from sperm incubated with and without bicarbonate confirmed the ability of the model to distinguish hyperactivated patterns of motility that develop during in vitro capacitation. The model accurately classifies motility profiles of sperm from a mutant mouse model with severe motility defects. Application of the model to sperm from multiple inbred strains reveals strain-dependent differences in sperm motility profiles. CASAnova provides a rapid and reproducible platform for quantitative comparisons of motility in large, heterogeneous populations of mouse sperm. PMID:21349820
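
    The hierarchical, binary-decision-tree use of SVM classifiers can be sketched structurally as below; the features, labels, and decision rules are synthetic stand-ins and do not reproduce the CASAnova equations:

      import numpy as np
      from sklearn.svm import SVC

      rng = np.random.default_rng(2)

      # Fake CASA-like kinematic features and invented ground-truth labels.
      n = 600
      X = rng.normal(size=(n, 4))
      vigor = (X[:, 0] > 0).astype(int)                 # 1 = vigorous, 0 = nonvigorous
      subclass = np.where(vigor == 1,
                          np.digitize(X[:, 1], [-0.5, 0.5]),   # progressive/intermediate/hyperactivated
                          (X[:, 2] > 0).astype(int))           # slow/weakly motile

      svm_vigor = SVC(kernel="rbf").fit(X, vigor)                          # first split of the tree
      svm_vig_sub = SVC(kernel="rbf").fit(X[vigor == 1], subclass[vigor == 1])
      svm_nonvig_sub = SVC(kernel="rbf").fit(X[vigor == 0], subclass[vigor == 0])

      def classify(track):
          # Sequentially sort an uncharacterized track through the binary tree.
          track = np.atleast_2d(track)
          if svm_vigor.predict(track)[0] == 1:
              return ["progressive", "intermediate", "hyperactivated"][svm_vig_sub.predict(track)[0]]
          return ["slow", "weakly motile"][svm_nonvig_sub.predict(track)[0]]

      print(classify(rng.normal(size=4)))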

  2. Operating Comfort Prediction Model of Human-Machine Interface Layout for Cabin Based on GEP.

    PubMed

    Deng, Li; Wang, Guohua; Chen, Bo

    2015-01-01

    In view of the evaluation and decision-making problem of human-machine interface layout design for cabins, an operating comfort prediction model based on GEP (Gene Expression Programming) is proposed, using operating comfort to evaluate layout schemes. Joint angles are used to describe the operating posture of the upper limb and are taken as the independent variables to establish the comfort model of operating posture. Factor analysis is adopted to reduce the variable dimension; the model's input variables are reduced from 16 joint angles to 4 comfort impact factors, and the output variable is the operating comfort score. A Chinese virtual human body model is built in CATIA software and used to simulate and evaluate the operators' operating comfort. With 22 groups of evaluation data as training and validation samples, the GEP algorithm is used to obtain the best-fitting function between the joint angles and the operating comfort; operating comfort can then be predicted quantitatively. The operating comfort prediction results for the human-machine interface layout of a driller control room show that the GEP-based operating comfort prediction model is fast and efficient, has a good prediction effect, and can improve design efficiency.

  3. A comparison of optimal MIMO linear and nonlinear models for brain machine interfaces

    NASA Astrophysics Data System (ADS)

    Kim, S.-P.; Sanchez, J. C.; Rao, Y. N.; Erdogmus, D.; Carmena, J. M.; Lebedev, M. A.; Nicolelis, M. A. L.; Principe, J. C.

    2006-06-01

    The field of brain-machine interfaces requires the estimation of a mapping from spike trains collected in motor cortex areas to the hand kinematics of the behaving animal. This paper presents a systematic investigation of several linear (Wiener filter, LMS adaptive filters, gamma filter, subspace Wiener filters) and nonlinear models (time-delay neural network and local linear switching models) applied to datasets from two experiments in monkeys performing motor tasks (reaching for food and target hitting). Ensembles of 100-200 cortical neurons were simultaneously recorded in these experiments, and even larger neuronal samples are anticipated in the future. Due to the large size of the models (thousands of parameters), the major issue studied was the generalization performance. Every parameter of the models (not only the weights) was selected optimally using signal processing and machine learning techniques. The models were also compared statistically with respect to the Wiener filter as the baseline. Each of the optimization procedures produced improvements over that baseline for either one of the two datasets or both.

  4. Operating Comfort Prediction Model of Human-Machine Interface Layout for Cabin Based on GEP

    PubMed Central

    Deng, Li; Wang, Guohua; Chen, Bo

    2015-01-01

    In view of the evaluation and decision-making problem of human-machine interface layout design for cabins, an operating comfort prediction model based on GEP (Gene Expression Programming) is proposed, using operating comfort to evaluate layout schemes. Joint angles are used to describe the operating posture of the upper limb and are taken as the independent variables to establish the comfort model of operating posture. Factor analysis is adopted to reduce the variable dimension; the model's input variables are reduced from 16 joint angles to 4 comfort impact factors, and the output variable is the operating comfort score. A Chinese virtual human body model is built in CATIA software and used to simulate and evaluate the operators' operating comfort. With 22 groups of evaluation data as training and validation samples, the GEP algorithm is used to obtain the best-fitting function between the joint angles and the operating comfort; operating comfort can then be predicted quantitatively. The operating comfort prediction results for the human-machine interface layout of a driller control room show that the GEP-based operating comfort prediction model is fast and efficient, has a good prediction effect, and can improve design efficiency. PMID:26448740

  5. Selecting statistical or machine learning techniques for regional landslide susceptibility modelling by evaluating spatial prediction

    NASA Astrophysics Data System (ADS)

    Goetz, Jason; Brenning, Alexander; Petschko, Helene; Leopold, Philip

    2015-04-01

    With so many techniques now available for landslide susceptibility modelling, it can be challenging to decide on which technique to apply. Generally speaking, the criteria for model selection should be tied closely to end users' purpose, which could be spatial prediction, spatial analysis or both. In our research, we focus on comparing the spatial predictive abilities of landslide susceptibility models. We illustrate how spatial cross-validation, a statistical approach for assessing spatial prediction performance, can be applied with the area under the receiver operating characteristic curve (AUROC) as a prediction measure for model comparison. Several machine learning and statistical techniques are evaluated for prediction in Lower Austria: support vector machine, random forest, bundling with penalized linear discriminant analysis, logistic regression, weights of evidence, and the generalized additive model. In addition to predictive performance, the importance of predictor variables in each model was estimated using spatial cross-validation by calculating the change in AUROC performance when variables are randomly permuted. The susceptibility modelling techniques were tested in three areas of interest in Lower Austria, which have unique geologic conditions associated with landslide occurrence. Overall, we found for the majority of comparisons that there was little practical or even statistically significant difference in AUROCs; that is, the models' prediction performances were very similar. Therefore, in addition to prediction, the ability to interpret models for spatial analysis and the qualitative qualities of the prediction surface (map) are considered and discussed. The measure of variable importance provided some insight into the model behaviour for prediction, in particular for "black-box" models. However, there were no clear patterns across the areas of interest as to why certain variables were given more importance than others.
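
    The spatial cross-validation idea, scoring each spatially held-out block with AUROC, can be sketched as follows; the landslide data, the block construction by clustering coordinates, and the logistic model are all illustrative placeholders:

      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_auc_score

      rng = np.random.default_rng(6)

      # Synthetic stand-ins: coordinates plus two terrain predictors, with an
      # invented landslide-occurrence probability.
      n = 2000
      coords = rng.uniform(0, 100, (n, 2))
      slope = rng.uniform(0, 45, n)
      wetness = rng.normal(size=n)
      X = np.column_stack([slope, wetness])
      y = (rng.random(n) < 1 / (1 + np.exp(-(0.1 * slope - 3 + wetness)))).astype(int)

      # Folds are spatial blocks (k-means on coordinates), not random splits,
      # so test locations are not immediate neighbours of training locations.
      folds = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(coords)
      aucs = []
      for k in range(5):
          train, test = folds != k, folds == k
          model = LogisticRegression(max_iter=1000).fit(X[train], y[train])
          aucs.append(roc_auc_score(y[test], model.predict_proba(X[test])[:, 1]))

      print("spatial CV AUROC per fold:", np.round(aucs, 3))
      print("median AUROC:", np.median(aucs))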

  6. Evaluating machine learning and statistical prediction techniques for landslide susceptibility modeling

    NASA Astrophysics Data System (ADS)

    Goetz, J. N.; Brenning, A.; Petschko, H.; Leopold, P.

    2015-08-01

    Statistical and now machine learning prediction methods have been gaining popularity in the field of landslide susceptibility modeling. Particularly, these data driven approaches show promise when tackling the challenge of mapping landslide prone areas for large regions, which may not have sufficient geotechnical data to conduct physically-based methods. Currently, there is no best method for empirical susceptibility modeling. Therefore, this study presents a comparison of traditional statistical and novel machine learning models applied for regional scale landslide susceptibility modeling. These methods were evaluated by spatial k-fold cross-validation estimation of the predictive performance, assessment of variable importance for gaining insights into model behavior and by the appearance of the prediction (i.e. susceptibility) map. The modeling techniques applied were logistic regression (GLM), generalized additive models (GAM), weights of evidence (WOE), the support vector machine (SVM), random forest classification (RF), and bootstrap aggregated classification trees (bundling) with penalized discriminant analysis (BPLDA). These modeling methods were tested for three areas in the province of Lower Austria, Austria. The areas are characterized by different geological and morphological settings. Random forest and bundling classification techniques had the overall best predictive performances. However, the performances of all modeling techniques were, for the majority of comparisons, not significantly different from each other; depending on the areas of interest, the differences in the overall median estimated area under the receiver operating characteristic curve (AUROC) ranged from 2.9 to 8.9 percentage points. The differences in the overall median estimated true positive rate (TPR), measured at a 10% false positive rate (FPR), ranged from 11 to 15 percentage points. The relative importance of each predictor was generally different between the modeling methods. However, slope angle, surface roughness and plan

  7. Prediction of effluent concentration in a wastewater treatment plant using machine learning models.

    PubMed

    Guo, Hong; Jeong, Kwanho; Lim, Jiyeon; Jo, Jeongwon; Kim, Young Mo; Park, Jong-pyo; Kim, Joon Ha; Cho, Kyung Hwa

    2015-06-01

    With the growing amount of food waste, integrated food waste and wastewater treatment has been regarded as an efficient treatment approach. However, the food waste load on the conventional treatment process may lead to high concentrations of total nitrogen (T-N) that affect effluent water quality. The objective of this study is to establish two machine learning models, artificial neural networks (ANNs) and support vector machines (SVMs), in order to predict the 1-day-interval T-N concentration of effluent from a wastewater treatment plant in Ulsan, Korea. Daily water quality data and meteorological data were used, and the performance of both models was evaluated in terms of the coefficient of determination (R2), Nash-Sutcliffe efficiency (NSE), and relative efficiency criteria (drel). Additionally, Latin-Hypercube one-factor-at-a-time (LH-OAT) sampling and a pattern search algorithm were applied to sensitivity analysis and model parameter optimization, respectively. Results showed that both models could be effectively applied to the 1-day-interval prediction of effluent T-N concentration. The SVM model showed higher prediction accuracy in the training stage and similar results in the validation stage. However, the sensitivity analysis demonstrated that the ANN model was the superior model for 1-day-interval T-N concentration prediction in terms of the cause-and-effect relationship between T-N concentration and the model input values for integrated food waste and wastewater treatment. This study suggests an efficient and robust nonlinear time-series modeling method for early prediction of the water quality of an integrated food waste and wastewater treatment process.
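
    A compact sketch of the two-model comparison with an NSE score is given below; the daily records are synthetic stand-ins, and the sensitivity analysis and pattern-search tuning used in the study are not reproduced:

      import numpy as np
      from sklearn.neural_network import MLPRegressor
      from sklearn.svm import SVR
      from sklearn.preprocessing import StandardScaler
      from sklearn.pipeline import make_pipeline

      rng = np.random.default_rng(8)

      # Synthetic daily records standing in for water quality and meteorological
      # inputs; the target is next-day effluent T-N concentration.
      n = 730
      X = rng.normal(size=(n, 5))
      tn_next_day = 10 + 2 * X[:, 0] - 1.5 * X[:, 1] ** 2 + rng.normal(0, 1, n)

      train, test = slice(0, 500), slice(500, None)

      def nse(obs, sim):
          # Nash-Sutcliffe efficiency: 1 is perfect, 0 matches the mean of observations.
          return 1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

      for name, model in [("ANN", MLPRegressor(hidden_layer_sizes=(20,), max_iter=5000,
                                                random_state=0)),
                          ("SVM", SVR(C=10.0, epsilon=0.1))]:
          pipe = make_pipeline(StandardScaler(), model)
          pipe.fit(X[train], tn_next_day[train])
          print(name, "NSE on validation period:",
                round(nse(tn_next_day[test], pipe.predict(X[test])), 3))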

  8. Uncertainty "escalation" and use of machine learning to forecast residual and data model uncertainties

    NASA Astrophysics Data System (ADS)

    Solomatine, Dimitri

    2016-04-01

    When speaking about model uncertainty, many authors implicitly assume the data uncertainty (mainly in parameters or inputs), which is probabilistically described by distributions. Often, however, it is useful to look into the residual uncertainty as well. It is hence reasonable to classify the main approaches to uncertainty analysis with respect to the two main types of model uncertainty that can be distinguished: A. The residual uncertainty of models. In this case the model parameters and/or model inputs are considered to be fixed (deterministic), i.e. the model is considered to be optimal (calibrated) and deterministic. Model error is considered as the manifestation of uncertainty. If there is enough past data about the model errors (i.e. its uncertainty), it is possible to build a statistical or machine learning model of uncertainty trained on this data. The following methods can be mentioned: (a) the quantile regression (QR) method by Koenker and Basset, in which linear regression is used to build predictive models for distribution quantiles [1]; (b) a more recent approach that takes into account the input variables influencing such uncertainty and uses more advanced machine learning (non-linear) methods (neural networks, model trees, etc.) - the UNEEC method [2,3,7]; (c) the even more recent DUBRAUE method (Dynamic Uncertainty Model By Regression on Absolute Error), an autoregressive model of model residuals (it corrects the model residual first and then carries out the uncertainty prediction with an autoregressive statistical model) [5]. B. The data uncertainty (parametric and/or input) - in this case we study the propagation of uncertainty (typically presented probabilistically) from parameters or inputs to the model outputs. In the case of simple functions representing models, analytical approaches can be used, or approximation methods (e.g., first-order second moment method). However, for real complex non-linear models implemented in software there is no other choice except using

  9. Integrating Machine Learning into a Crowdsourced Model for Earthquake-Induced Damage Assessment

    NASA Technical Reports Server (NTRS)

    Rebbapragada, Umaa; Oommen, Thomas

    2011-01-01

    On January 12th, 2010, a catastrophic 7.0M earthquake devastated the country of Haiti. In the aftermath of an earthquake, it is important to rapidly assess damaged areas in order to mobilize the appropriate resources. The Haiti damage assessment effort introduced a promising model that uses crowdsourcing to map damaged areas in freely available remotely-sensed data. This paper proposes the application of machine learning methods to improve this model. Specifically, we apply work on learning from multiple, imperfect experts to the assessment of volunteer reliability, and propose the use of image segmentation to automate the detection of damaged areas. We wrap both tasks in an active learning framework in order to shift volunteer effort from mapping a full catalog of images to the generation of high-quality training data. We hypothesize that the integration of machine learning into this model improves its reliability, maintains the speed of damage assessment, and allows the model to scale to higher data volumes.

  10. Three-Phase Unbalanced Transient Dynamics and Powerflow for Modeling Distribution Systems With Synchronous Machines

    SciTech Connect

    Elizondo, Marcelo A.; Tuffner, Francis K.; Schneider, Kevin P.

    2016-01-01

    Unlike transmission systems, distribution feeders in North America operate under unbalanced conditions at all times, and generally have a single strong voltage source. When a distribution feeder is connected to a strong substation source, the system is dynamically very stable, even for large transients. However if a distribution feeder, or part of the feeder, is separated from the substation and begins to operate as an islanded microgrid, transient dynamics become more of an issue. To assess the impact of transient dynamics at the distribution level, it is not appropriate to use traditional transmission solvers, which generally assume transposed lines and balanced loads. Full electromagnetic solvers capture a high level of detail, but it is difficult to model large systems because of the required detail. This paper proposes an electromechanical transient model of synchronous machine for distribution-level modeling and microgrids. This approach includes not only the machine model, but also its interface with an unbalanced network solver, and a powerflow method to solve unbalanced conditions without a strong reference bus. The presented method is validated against a full electromagnetic transient simulation.

  11. A Genetic Algorithm Based Support Vector Machine Model for Blood-Brain Barrier Penetration Prediction

    PubMed Central

    Zhang, Daqing; Xiao, Jianfeng; Zhou, Nannan; Zheng, Mingyue; Luo, Xiaomin; Jiang, Hualiang; Chen, Kaixian

    2015-01-01

    Blood-brain barrier (BBB) is a highly complex physical barrier determining what substances are allowed to enter the brain. Support vector machine (SVM) is a kernel-based machine learning method that is widely used in QSAR study. For a successful SVM model, the kernel parameters for SVM and feature subset selection are the most important factors affecting prediction accuracy. In most studies, they are treated as two independent problems, but it has been proven that they could affect each other. We designed and implemented a genetic algorithm (GA) to optimize kernel parameters and feature subset selection for SVM regression and applied it to BBB penetration prediction. The results show that our GA/SVM model is more accurate than other currently available log BB models. Therefore, optimizing both SVM parameters and the feature subset simultaneously with a genetic algorithm is a better approach than methods that treat the two problems separately. Analysis of our log BB model suggests that the carboxylic acid group, polar surface area (PSA)/hydrogen-bonding ability, lipophilicity, and molecular charge play important roles in BBB penetration. Among those properties relevant to BBB penetration, lipophilicity could enhance BBB penetration while all the others are negatively correlated with BBB penetration. PMID:26504797
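
    The joint optimization described above can be sketched with a much-simplified genetic algorithm that searches SVR kernel parameters and a feature mask together. The synthetic data, GA settings, and fitness function below are assumptions for illustration, not the published GA/SVM configuration.

      # Much-simplified GA jointly searching SVR hyperparameters and a feature mask.
      import numpy as np
      from sklearn.datasets import make_regression
      from sklearn.model_selection import cross_val_score
      from sklearn.svm import SVR

      rng = np.random.default_rng(2)
      X, y = make_regression(n_samples=200, n_features=20, n_informative=8,
                             noise=5.0, random_state=2)

      def random_individual():
          return {"logC": rng.uniform(-2, 3), "logG": rng.uniform(-4, 1),
                  "mask": rng.integers(0, 2, X.shape[1]).astype(bool)}

      def fitness(ind):
          if not ind["mask"].any():
              return -np.inf
          model = SVR(C=10.0 ** ind["logC"], gamma=10.0 ** ind["logG"])
          return cross_val_score(model, X[:, ind["mask"]], y, cv=3, scoring="r2").mean()

      def crossover(a, b):
          child = {"logC": a["logC"] if rng.random() < 0.5 else b["logC"],
                   "logG": a["logG"] if rng.random() < 0.5 else b["logG"],
                   "mask": np.where(rng.random(X.shape[1]) < 0.5, a["mask"], b["mask"])}
          if rng.random() < 0.2:                          # mutation: flip one feature bit
              i = rng.integers(X.shape[1])
              child["mask"][i] = ~child["mask"][i]
          return child

      pop = [random_individual() for _ in range(20)]
      for gen in range(10):
          parents = sorted(pop, key=fitness, reverse=True)[:10]   # truncation selection
          pop = parents + [crossover(parents[rng.integers(10)], parents[rng.integers(10)])
                           for _ in range(10)]

      best = max(pop, key=fitness)
      print("best CV R^2:", round(fitness(best), 3), "features kept:", int(best["mask"].sum()))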

  12. The applications of machine learning algorithms in the modeling of estrogen-like chemicals.

    PubMed

    Liu, Huanxiang; Yao, Xiaojun; Gramatica, Paola

    2009-06-01

    Increasing concern is being shown by the scientific community, government regulators, and the public about endocrine-disrupting chemicals that, in the environment, are adversely affecting human and wildlife health through a variety of mechanisms, mainly estrogen receptor-mediated mechanisms of toxicity. Because of the large number of such chemicals in the environment, there is a great need for an effective means of rapidly assessing endocrine-disrupting activity in the toxicology assessment process. When faced with the challenging task of screening large libraries of molecules for biological activity, the benefits of computational predictive models based on quantitative structure-activity relationships to identify possible estrogens become immediately obvious. Recently, in order to improve the accuracy of prediction, some machine learning techniques were introduced to build more effective predictive models. In this review we will focus our attention on some recent advances in the use of these methods in modeling estrogen-like chemicals. The advantages and disadvantages of the machine learning algorithms used in solving this problem, the importance of the validation and performance assessment of the built models as well as their applicability domains will be discussed.

  13. Linear combinations of nonlinear models for predicting human-machine interface forces.

    PubMed

    Patton, James L; Mussa-Ivaldi, Ferdinando A

    2002-01-01

    This study presents a computational framework that capitalizes on known human neuromechanical characteristics during limb movements in order to predict human-machine interactions. A parallel-distributed approach, the mixture of nonlinear models, fits the relationship between the measured kinematics and kinetics at the handle of a robot. Each element of the mixture represented the arm and its controller as a feedforward nonlinear model of inverse dynamics plus a linear approximation of musculotendonous impedance. We evaluated this approach with data from experiments where subjects held the handle of a planar manipulandum robot and attempted to make point-to-point reaching movements. We compared the performance to the more conventional approach of a constrained, nonlinear optimization of the parameters. The mixture of nonlinear models accounted for 79 +/- 11% (mean +/- SD) of the variance in measured force, and force errors were 0.73 +/- 0.20% of the maximum exerted force. Solutions were acquired in half the time with a significantly better fit. However, both approaches suffered equally from the simplifying assumptions, namely that the human neuromechanical system consisted of a feedforward controller coupled with linear impedances and a moving state equilibrium. Hence, predictability was best limited to the first half of the movement. The mixture of nonlinear models may be useful in human-machine tasks such as in telerobotics, fly-by-wire vehicles, robotic training, and rehabilitation.

  14. A model-based analysis of impulsivity using a slot-machine gambling paradigm.

    PubMed

    Paliwal, Saee; Petzschner, Frederike H; Schmitz, Anna Katharina; Tittgemeyer, Marc; Stephan, Klaas E

    2014-01-01

    Impulsivity plays a key role in decision-making under uncertainty. It is a significant contributor to problem and pathological gambling (PG). Standard assessments of impulsivity by questionnaires, however, have various limitations, partly because impulsivity is a broad, multi-faceted concept. What remains unclear is which of these facets contribute to shaping gambling behavior. In the present study, we investigated impulsivity as expressed in a gambling setting by applying computational modeling to data from 47 healthy male volunteers who played a realistic, virtual slot-machine gambling task. Behaviorally, we found that impulsivity, as measured independently by the 11th revision of the Barratt Impulsiveness Scale (BIS-11), correlated significantly with an aggregate read-out of the following gambling responses: bet increases (BIs), machines switches (MS), casino switches (CS), and double-ups (DUs). Using model comparison, we compared a set of hierarchical Bayesian belief-updating models, i.e., the Hierarchical Gaussian Filter (HGF) and Rescorla-Wagner reinforcement learning (RL) models, with regard to how well they explained different aspects of the behavioral data. We then examined the construct validity of our winning models with multiple regression, relating subject-specific model parameter estimates to the individual BIS-11 total scores. In the most predictive model (a three-level HGF), the two free parameters encoded uncertainty-dependent mechanisms of belief updates and significantly explained BIS-11 variance across subjects. Furthermore, in this model, decision noise was a function of trial-wise uncertainty about winning probability. Collectively, our results provide a proof of concept that hierarchical Bayesian models can characterize the decision-making mechanisms linked to the impulsive traits of an individual. These novel indices of gambling mechanisms unmasked during actual play may be useful for online prevention measures for at-risk players and future
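
    The Rescorla-Wagner model used here as a comparison amounts to a one-line prediction-error update; the learning rate and the outcome sequence below are arbitrary illustrative values, not the study's fitted parameters.

      # Rescorla-Wagner value update (comparison model in the study above).
      import numpy as np

      alpha = 0.3                                  # learning rate (free parameter)
      outcomes = np.array([1, 0, 0, 1, 1, 0, 1])   # win = 1, loss = 0 on successive spins
      v = 0.5                                      # initial win-probability belief
      for r in outcomes:
          v = v + alpha * (r - v)                  # belief moves toward the prediction error
          print(round(v, 3))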

  15. A model-based analysis of impulsivity using a slot-machine gambling paradigm

    PubMed Central

    Paliwal, Saee; Petzschner, Frederike H.; Schmitz, Anna Katharina; Tittgemeyer, Marc; Stephan, Klaas E.

    2014-01-01

    Impulsivity plays a key role in decision-making under uncertainty. It is a significant contributor to problem and pathological gambling (PG). Standard assessments of impulsivity by questionnaires, however, have various limitations, partly because impulsivity is a broad, multi-faceted concept. What remains unclear is which of these facets contribute to shaping gambling behavior. In the present study, we investigated impulsivity as expressed in a gambling setting by applying computational modeling to data from 47 healthy male volunteers who played a realistic, virtual slot-machine gambling task. Behaviorally, we found that impulsivity, as measured independently by the 11th revision of the Barratt Impulsiveness Scale (BIS-11), correlated significantly with an aggregate read-out of the following gambling responses: bet increases (BIs), machines switches (MS), casino switches (CS), and double-ups (DUs). Using model comparison, we compared a set of hierarchical Bayesian belief-updating models, i.e., the Hierarchical Gaussian Filter (HGF) and Rescorla–Wagner reinforcement learning (RL) models, with regard to how well they explained different aspects of the behavioral data. We then examined the construct validity of our winning models with multiple regression, relating subject-specific model parameter estimates to the individual BIS-11 total scores. In the most predictive model (a three-level HGF), the two free parameters encoded uncertainty-dependent mechanisms of belief updates and significantly explained BIS-11 variance across subjects. Furthermore, in this model, decision noise was a function of trial-wise uncertainty about winning probability. Collectively, our results provide a proof of concept that hierarchical Bayesian models can characterize the decision-making mechanisms linked to the impulsive traits of an individual. These novel indices of gambling mechanisms unmasked during actual play may be useful for online prevention measures for at-risk players and

  16. A model-based analysis of impulsivity using a slot-machine gambling paradigm.

    PubMed

    Paliwal, Saee; Petzschner, Frederike H; Schmitz, Anna Katharina; Tittgemeyer, Marc; Stephan, Klaas E

    2014-01-01

    Impulsivity plays a key role in decision-making under uncertainty. It is a significant contributor to problem and pathological gambling (PG). Standard assessments of impulsivity by questionnaires, however, have various limitations, partly because impulsivity is a broad, multi-faceted concept. What remains unclear is which of these facets contribute to shaping gambling behavior. In the present study, we investigated impulsivity as expressed in a gambling setting by applying computational modeling to data from 47 healthy male volunteers who played a realistic, virtual slot-machine gambling task. Behaviorally, we found that impulsivity, as measured independently by the 11th revision of the Barratt Impulsiveness Scale (BIS-11), correlated significantly with an aggregate read-out of the following gambling responses: bet increases (BIs), machines switches (MS), casino switches (CS), and double-ups (DUs). Using model comparison, we compared a set of hierarchical Bayesian belief-updating models, i.e., the Hierarchical Gaussian Filter (HGF) and Rescorla-Wagner reinforcement learning (RL) models, with regard to how well they explained different aspects of the behavioral data. We then examined the construct validity of our winning models with multiple regression, relating subject-specific model parameter estimates to the individual BIS-11 total scores. In the most predictive model (a three-level HGF), the two free parameters encoded uncertainty-dependent mechanisms of belief updates and significantly explained BIS-11 variance across subjects. Furthermore, in this model, decision noise was a function of trial-wise uncertainty about winning probability. Collectively, our results provide a proof of concept that hierarchical Bayesian models can characterize the decision-making mechanisms linked to the impulsive traits of an individual. These novel indices of gambling mechanisms unmasked during actual play may be useful for online prevention measures for at-risk players and future

  17. Study of Two-Dimensional Compressible Non-Acoustic Modeling of Stirling Machine Type Components

    NASA Technical Reports Server (NTRS)

    Tew, Roy C., Jr.; Ibrahim, Mounir B.

    2001-01-01

    A two-dimensional (2-D) computer code was developed for modeling enclosed volumes of gas with oscillating boundaries, such as Stirling machine components. An existing 2-D incompressible flow computer code, CAST, was used as the starting point for the project. CAST was modified to use the compressible non-acoustic Navier-Stokes equations to model an enclosed volume including an oscillating piston. The devices modeled have low Mach numbers and are sufficiently small that the time required for acoustics to propagate across them is negligible. Therefore, acoustics were excluded to enable more time efficient computation. Background information about the project is presented. The compressible non-acoustic flow assumptions are discussed. The governing equations used in the model are presented in transport equation format. A brief description is given of the numerical methods used. Comparisons of code predictions with experimental data are then discussed.

  18. Accurate modeling of switched reluctance machine based on hybrid trained WNN

    SciTech Connect

    Song, Shoujun Ge, Lefei; Ma, Shaojie; Zhang, Man

    2014-04-15

    According to the strong nonlinear electromagnetic characteristics of switched reluctance machine (SRM), a novel accurate modeling method is proposed based on hybrid trained wavelet neural network (WNN) which combines improved genetic algorithm (GA) with gradient descent (GD) method to train the network. In the novel method, WNN is trained by GD method based on the initial weights obtained per improved GA optimization, and the global parallel searching capability of stochastic algorithm and local convergence speed of deterministic algorithm are combined to enhance the training accuracy, stability and speed. Based on the measured electromagnetic characteristics of a 3-phase 12/8-pole SRM, the nonlinear simulation model is built by hybrid trained WNN in Matlab. The phase current and mechanical characteristics from simulation under different working conditions meet well with those from experiments, which indicates the accuracy of the model for dynamic and static performance evaluation of SRM and verifies the effectiveness of the proposed modeling method.

  19. Accurate modeling of switched reluctance machine based on hybrid trained WNN

    NASA Astrophysics Data System (ADS)

    Song, Shoujun; Ge, Lefei; Ma, Shaojie; Zhang, Man

    2014-04-01

    According to the strong nonlinear electromagnetic characteristics of switched reluctance machine (SRM), a novel accurate modeling method is proposed based on hybrid trained wavelet neural network (WNN) which combines improved genetic algorithm (GA) with gradient descent (GD) method to train the network. In the novel method, WNN is trained by GD method based on the initial weights obtained per improved GA optimization, and the global parallel searching capability of stochastic algorithm and local convergence speed of deterministic algorithm are combined to enhance the training accuracy, stability and speed. Based on the measured electromagnetic characteristics of a 3-phase 12/8-pole SRM, the nonlinear simulation model is built by hybrid trained WNN in Matlab. The phase current and mechanical characteristics from simulation under different working conditions meet well with those from experiments, which indicates the accuracy of the model for dynamic and static performance evaluation of SRM and verifies the effectiveness of the proposed modeling method.

  20. Modelling soil water retention using support vector machines with genetic algorithm optimisation.

    PubMed

    Lamorski, Krzysztof; Sławiński, Cezary; Moreno, Felix; Barna, Gyöngyi; Skierucha, Wojciech; Arrue, José L

    2014-01-01

    This work presents point pedotransfer function (PTF) models of the soil water retention curve. The developed models allowed for estimation of the soil water content for the specified soil water potentials: -0.98, -3.10, -9.81, -31.02, -491.66, and -1554.78 kPa, based on the following soil characteristics: soil granulometric composition, total porosity, and bulk density. Support Vector Machines (SVM) methodology was used for model development. A new methodology for elaboration of retention function models is proposed. As an alternative to previous attempts known from the literature, the ν-SVM method was used for model development, and the results were compared with the formerly used C-SVM method. For the purpose of the models' parameter search, genetic algorithms were used as an optimisation framework. A new form of the aim function used for the model parameter search is proposed, which allowed for the development of models with better prediction capabilities. This new aim function avoids overestimation of models, which is typically encountered when root mean squared error is used as an aim function. The elaborated models showed good agreement with measured soil water retention data. The achieved coefficients of determination were in the range 0.67-0.92. The studies demonstrated the usability of the ν-SVM methodology together with genetic algorithm optimisation for retention modelling, which gave better performing models than other tested approaches. PMID:24772030
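
    A minimal ν-SVR pedotransfer sketch in the spirit of the models above: predict water content at one potential from texture, porosity, and bulk density. The synthetic soil data and the ν/C/γ values are assumptions, not the paper's calibrated settings.

      # Toy nu-SVR pedotransfer function (illustrative only).
      import numpy as np
      from sklearn.model_selection import cross_val_score
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import NuSVR

      rng = np.random.default_rng(3)
      n = 300
      sand, silt = rng.uniform(5, 80, n), rng.uniform(5, 60, n)
      clay = np.clip(100 - sand - silt, 0, None)
      porosity = rng.uniform(0.35, 0.55, n)
      bulk_density = rng.uniform(1.1, 1.7, n)
      X = np.column_stack([sand, silt, clay, porosity, bulk_density])
      theta = 0.05 + 0.004 * clay + 0.3 * porosity + rng.normal(0, 0.02, n)  # water content at -31 kPa (toy)

      model = make_pipeline(StandardScaler(), NuSVR(nu=0.5, C=10.0, gamma="scale"))
      print("CV R^2:", cross_val_score(model, X, theta, cv=5, scoring="r2").mean().round(2))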

  1. Modelling Soil Water Retention Using Support Vector Machines with Genetic Algorithm Optimisation

    PubMed Central

    Lamorski, Krzysztof; Sławiński, Cezary; Moreno, Felix; Barna, Gyöngyi; Skierucha, Wojciech; Arrue, José L.

    2014-01-01

    This work presents point pedotransfer function (PTF) models of the soil water retention curve. The developed models allowed for estimation of the soil water content for the specified soil water potentials: –0.98, –3.10, –9.81, –31.02, –491.66, and –1554.78 kPa, based on the following soil characteristics: soil granulometric composition, total porosity, and bulk density. Support Vector Machines (SVM) methodology was used for model development. A new methodology for elaboration of retention function models is proposed. As an alternative to previous attempts known from the literature, the ν-SVM method was used for model development, and the results were compared with the formerly used C-SVM method. For the purpose of the models' parameter search, genetic algorithms were used as an optimisation framework. A new form of the aim function used for the model parameter search is proposed, which allowed for the development of models with better prediction capabilities. This new aim function avoids overestimation of models, which is typically encountered when root mean squared error is used as an aim function. The elaborated models showed good agreement with measured soil water retention data. The achieved coefficients of determination were in the range 0.67–0.92. The studies demonstrated the usability of the ν-SVM methodology together with genetic algorithm optimisation for retention modelling, which gave better performing models than other tested approaches. PMID:24772030

  2. Discriminative feature-rich models for syntax-based machine translation.

    SciTech Connect

    Dixon, Kevin R.

    2012-12-01

    This report describes the campus executive LDRD "Discriminative Feature-Rich Models for Syntax-Based Machine Translation," which was an effort to foster a better relationship between Sandia and Carnegie Mellon University (CMU). The primary purpose of the LDRD was to fund the research of a promising graduate student at CMU; in this case, Kevin Gimpel was selected from the pool of candidates. This report gives a brief overview of Kevin Gimpel's research.

  3. Feature combination networks for the interpretation of statistical machine learning models: application to Ames mutagenicity

    PubMed Central

    2014-01-01

    Background A new algorithm has been developed to enable the interpretation of black box models. The developed algorithm is agnostic to the learning algorithm and open to all structure-based descriptors such as fragments, keys and hashed fingerprints. The algorithm has provided meaningful interpretation of Ames mutagenicity predictions from both random forest and support vector machine models built on a variety of structural fingerprints. A fragmentation algorithm is utilised to investigate the model’s behaviour on specific substructures present in the query. An output is formulated summarising causes of activation and deactivation. The algorithm is able to identify multiple causes of activation or deactivation in addition to identifying localised deactivations where the prediction for the query is active overall. No loss in performance is seen as there is no change in the prediction; the interpretation is produced directly on the model’s behaviour for the specific query. Results Models have been built using multiple learning algorithms including support vector machine and random forest. The models were built on public Ames mutagenicity data and a variety of fingerprint descriptors were used. These models produced good performance in both internal and external validation with accuracies around 82%. The models were used to evaluate the interpretation algorithm. The interpretations revealed links closely aligned with understood mechanisms for Ames mutagenicity. Conclusion This methodology allows for a greater utilisation of the predictions made by black box models and can expedite further study based on the output for a (quantitative) structure activity model. Additionally the algorithm could be utilised for chemical dataset investigation and knowledge extraction/human SAR development. PMID:24661325

  4. Using a Support Vector Machine (SVM) to Improve Generalization Ability of Load Model Parameters

    SciTech Connect

    Ma, Jian; Dong, Zhao Yang; Zhang, Pei

    2009-04-24

    Load modeling plays an important role in power system stability analysis and planning studies. The parameters of load models may experience variations in different application situations. Choosing appropriate parameters is critical for dynamic simulation and stability studies in power system. This paper presents a method to select the parameters with good generalization ability based on a given large number of available parameters that have been identified from dynamic simulation data in different scenarios. Principal component analysis is used to extract the major features of the given parameter sets. Reduced feature vectors are obtained by mapping the given parameter sets into principal component space. Then support vectors are found by implementing a classification problem. Load model parameters based on the obtained support vectors are built to reflect the dynamic property of the load. All of the given parameter sets were identified from simulation data based on the New England 10-machine 39-bus system, by taking into account different situations, such as load types, fault locations, fault types, and fault clearing time. The parameters obtained by support vector machine have good generalization capability, and can represent the load more accurately in most situations.
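
    The pipeline described above (project identified parameter sets into principal-component space, then use an SVM to find support vectors) can be sketched as follows; the parameter sets and the generalization labels are synthetic placeholders, not the New England 39-bus results.

      # Sketch: PCA feature reduction followed by an SVM over load-model parameter sets.
      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.svm import SVC

      rng = np.random.default_rng(4)
      params = rng.normal(size=(400, 12))                   # identified load-model parameter sets
      good = (params[:, :3].mean(axis=1) > 0).astype(int)   # placeholder "generalizes well" label

      Z = PCA(n_components=3).fit_transform(params)         # reduced feature vectors
      svc = SVC(kernel="rbf").fit(Z, good)
      support_params = params[svc.support_]                 # parameter sets backing the support vectors
      print("support vectors retained:", len(support_params))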

  5. Simulation of abrasive flow machining process for 2D and 3D mixture models

    NASA Astrophysics Data System (ADS)

    Dash, Rupalika; Maity, Kalipada

    2015-12-01

    Improvement of surface finish and material removal has been quite a challenge in a finishing operation such as abrasive flow machining (AFM). Factors that affect the surface finish and material removal are media viscosity, extrusion pressure, piston velocity, and particle size in abrasive flow machining process. Performing experiments for all the parameters and accurately obtaining an optimized parameter in a short time are difficult to accomplish because the operation requires a precise finish. Computational fluid dynamics (CFD) simulation was employed to accurately determine optimum parameters. In the current work, a 2D model was designed, and the flow analysis, force calculation, and material removal prediction were performed and compared with the available experimental data. Another 3D model for a swaging die finishing using AFM was simulated at different viscosities of the media to study the effects on the controlling parameters. A CFD simulation was performed by using commercially available ANSYS FLUENT. Two phases were considered for the flow analysis, and multiphase mixture model was taken into account. The fluid was considered to be a

  6. Abstracts of SIG Sessions.

    ERIC Educational Resources Information Center

    Proceedings of the ASIS Annual Meeting, 1995

    1995-01-01

    Presents abstracts of 15 special interest group (SIG) sessions. Topics include navigation and information utilization in the Internet, natural language processing, automatic indexing, image indexing, classification, users' models of database searching, online public access catalogs, education for information professions, information services,…

  7. Machine learning models identify molecules active against the Ebola virus in vitro.

    PubMed

    Ekins, Sean; Freundlich, Joel S; Clark, Alex M; Anantpadma, Manu; Davey, Robert A; Madrid, Peter

    2015-01-01

    The search for small molecule inhibitors of Ebola virus (EBOV) has led to several high throughput screens over the past 3 years. These have identified a range of FDA-approved active pharmaceutical ingredients (APIs) with anti-EBOV activity in vitro and several of which are also active in a mouse infection model. There are millions of additional commercially-available molecules that could be screened for potential activities as anti-EBOV compounds. One way to prioritize compounds for testing is to generate computational models based on the high throughput screening data and then virtually screen compound libraries. In the current study, we have generated Bayesian machine learning models with viral pseudotype entry assay and the EBOV replication assay data. We have validated the models internally and externally. We have also used these models to computationally score the MicroSource library of drugs to select those likely to be potential inhibitors. Three of the highest scoring molecules that were not in the model training sets, quinacrine, pyronaridine and tilorone, were tested in vitro and had EC 50 values of 350, 420 and 230 nM, respectively. Pyronaridine is a component of a combination therapy for malaria that was recently approved by the European Medicines Agency, which may make it more readily accessible for clinical testing. Like other known antimalarial drugs active against EBOV, it shares the 4-aminoquinoline scaffold. Tilorone, is an investigational antiviral agent that has shown a broad array of biological activities including cell growth inhibition in cancer cells, antifibrotic properties, α7 nicotinic receptor agonist activity, radioprotective activity and activation of hypoxia inducible factor-1. Quinacrine is an antimalarial but also has use as an anthelmintic. Our results suggest data sets with less than 1,000 molecules can produce validated machine learning models that can in turn be utilized to identify novel EBOV inhibitors in vitro. PMID:26834994
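
    A toy Bayesian virtual-screening loop in the spirit of the workflow above: fit a naive Bayes classifier on binary fingerprints and rank an unscreened library. The random fingerprints, activity labels, and the choice of scikit-learn's BernoulliNB as a stand-in for the study's Bayesian models are all assumptions.

      # Toy Bayesian virtual screening (illustrative only).
      import numpy as np
      from sklearn.naive_bayes import BernoulliNB

      rng = np.random.default_rng(5)
      train_fp = rng.integers(0, 2, size=(800, 256))           # binary structural fingerprints
      active = (train_fp[:, :8].sum(axis=1) >= 5).astype(int)  # toy anti-EBOV activity label

      model = BernoulliNB().fit(train_fp, active)

      library_fp = rng.integers(0, 2, size=(5000, 256))        # unscreened compound library
      scores = model.predict_proba(library_fp)[:, 1]
      top_hits = np.argsort(-scores)[:10]                      # compounds to prioritize for assays
      print("top-ranked library indices:", top_hits)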

  8. Machine learning models identify molecules active against the Ebola virus in vitro

    PubMed Central

    Ekins, Sean; Freundlich, Joel S.; Clark, Alex M.; Anantpadma, Manu; Davey, Robert A.; Madrid, Peter

    2016-01-01

    The search for small molecule inhibitors of Ebola virus (EBOV) has led to several high throughput screens over the past 3 years. These have identified a range of FDA-approved active pharmaceutical ingredients (APIs) with anti-EBOV activity in vitro and several of which are also active in a mouse infection model. There are millions of additional commercially-available molecules that could be screened for potential activities as anti-EBOV compounds. One way to prioritize compounds for testing is to generate computational models based on the high throughput screening data and then virtually screen compound libraries. In the current study, we have generated Bayesian machine learning models with viral pseudotype entry assay and the EBOV replication assay data. We have validated the models internally and externally. We have also used these models to computationally score the MicroSource library of drugs to select those likely to be potential inhibitors. Three of the highest scoring molecules that were not in the model training sets, quinacrine, pyronaridine and tilorone, were tested in vitro and had EC 50 values of 350, 420 and 230 nM, respectively. Pyronaridine is a component of a combination therapy for malaria that was recently approved by the European Medicines Agency, which may make it more readily accessible for clinical testing. Like other known antimalarial drugs active against EBOV, it shares the 4-aminoquinoline scaffold. Tilorone, is an investigational antiviral agent that has shown a broad array of biological activities including cell growth inhibition in cancer cells, antifibrotic properties, α7 nicotinic receptor agonist activity, radioprotective activity and activation of hypoxia inducible factor-1. Quinacrine is an antimalarial but also has use as an anthelmintic. Our results suggest data sets with less than 1,000 molecules can produce validated machine learning models that can in turn be utilized to identify novel EBOV inhibitors in vitro. PMID:26834994

  9. EBS Radionuclide Transport Abstraction

    SciTech Connect

    J. Prouty

    2006-07-14

    The purpose of this report is to develop and analyze the engineered barrier system (EBS) radionuclide transport abstraction model, consistent with Level I and Level II model validation, as identified in Technical Work Plan for: Near-Field Environment and Transport: Engineered Barrier System: Radionuclide Transport Abstraction Model Report Integration (BSC 2005 [DIRS 173617]). The EBS radionuclide transport abstraction (or EBS RT Abstraction) is the conceptual model used in the total system performance assessment (TSPA) to determine the rate of radionuclide releases from the EBS to the unsaturated zone (UZ). The EBS RT Abstraction conceptual model consists of two main components: a flow model and a transport model. Both models are developed mathematically from first principles in order to show explicitly what assumptions, simplifications, and approximations are incorporated into the models used in the TSPA. The flow model defines the pathways for water flow in the EBS and specifies how the flow rate is computed in each pathway. Input to this model includes the seepage flux into a drift. The seepage flux is potentially split by the drip shield, with some (or all) of the flux being diverted by the drip shield and some passing through breaches in the drip shield that might result from corrosion or seismic damage. The flux through drip shield breaches is potentially split by the waste package, with some (or all) of the flux being diverted by the waste package and some passing through waste package breaches that might result from corrosion or seismic damage. Neither the drip shield nor the waste package survives an igneous intrusion, so the flux splitting submodel is not used in the igneous scenario class. The flow model is validated in an independent model validation technical review. The drip shield and waste package flux splitting algorithms are developed and validated using experimental data. The transport model considers advective transport and diffusive transport

  10. Hybrid wavelet-support vector machine approach for modelling rainfall-runoff process.

    PubMed

    Komasi, Mehdi; Sharghi, Soroush

    2016-01-01

    Because of the importance of water resources management, the need for accurate modeling of the rainfall-runoff process has rapidly grown in the past decades. Recently, the support vector machine (SVM) approach has been used by hydrologists for rainfall-runoff modeling and other fields of hydrology. Similar to other artificial intelligence models, such as the artificial neural network (ANN) and the adaptive neural fuzzy inference system, the SVM model is based on autoregressive properties. In this paper, wavelet analysis was linked to the SVM model concept for modeling the rainfall-runoff process of the Aghchai and Eel River watersheds. In this way, the main time series of the two variables, rainfall and runoff, were decomposed into multiple frequency-based sub-series by wavelet theory; these sub-series were then imposed as input data on the SVM model in order to predict the runoff discharge one day ahead. The obtained results show that the wavelet SVM model can predict both short- and long-term runoff discharges by considering the seasonality effects. Also, the proposed hybrid model is relatively more appropriate than classical autoregressive ones such as ANN and SVM because it uses the multi-scale time series of rainfall and runoff data in the modeling process. PMID:27120649
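
    A hedged sketch of the wavelet-SVM idea: decompose rainfall and runoff into sub-series (here with a stationary wavelet transform from PyWavelets) and feed them to an SVR that predicts next-day runoff. The synthetic series, the db2 wavelet, the decomposition level, and the SVR parameters are all assumptions, not the paper's setup.

      # Wavelet sub-series + SVR for one-day-ahead runoff (illustrative only).
      import numpy as np
      import pywt
      from sklearn.svm import SVR

      rng = np.random.default_rng(6)
      t = np.arange(512)
      rain = np.clip(rng.gamma(1.5, 2.0, 512) * (1 + np.sin(2 * np.pi * t / 365)), 0, None)
      runoff = 0.6 * np.convolve(rain, np.ones(5) / 5, mode="same") + rng.normal(0, 0.2, 512)

      def subseries(x):
          # flatten the (approximation, detail) pairs of a 2-level SWT into columns
          return np.column_stack([c for pair in pywt.swt(x, "db2", level=2) for c in pair])

      features = np.hstack([subseries(rain), subseries(runoff)])
      X, y = features[:-1], runoff[1:]              # predict runoff one day ahead

      model = SVR(C=10.0, gamma="scale").fit(X[:400], y[:400])
      print("test RMSE:", np.sqrt(np.mean((model.predict(X[400:]) - y[400:]) ** 2)).round(3))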

  11. Hybrid wavelet-support vector machine approach for modelling rainfall-runoff process.

    PubMed

    Komasi, Mehdi; Sharghi, Soroush

    2016-01-01

    Because of the importance of water resources management, the need for accurate modeling of the rainfall-runoff process has rapidly grown in the past decades. Recently, the support vector machine (SVM) approach has been used by hydrologists for rainfall-runoff modeling and other fields of hydrology. Similar to other artificial intelligence models, such as the artificial neural network (ANN) and the adaptive neural fuzzy inference system, the SVM model is based on autoregressive properties. In this paper, wavelet analysis was linked to the SVM model concept for modeling the rainfall-runoff process of the Aghchai and Eel River watersheds. In this way, the main time series of the two variables, rainfall and runoff, were decomposed into multiple frequency-based sub-series by wavelet theory; these sub-series were then imposed as input data on the SVM model in order to predict the runoff discharge one day ahead. The obtained results show that the wavelet SVM model can predict both short- and long-term runoff discharges by considering the seasonality effects. Also, the proposed hybrid model is relatively more appropriate than classical autoregressive ones such as ANN and SVM because it uses the multi-scale time series of rainfall and runoff data in the modeling process.

  12. A hybrid flowshop scheduling model considering dedicated machines and lot-splitting for the solar cell industry

    NASA Astrophysics Data System (ADS)

    Wang, Li-Chih; Chen, Yin-Yann; Chen, Tzu-Li; Cheng, Chen-Yang; Chang, Chin-Wei

    2014-10-01

    This paper studies a solar cell industry scheduling problem, which is similar to traditional hybrid flowshop scheduling (HFS). In a typical HFS problem, the allocation of machine resources for each order should be scheduled in advance. However, the challenge in solar cell manufacturing is the number of machines that can be adjusted dynamically to complete the job. An optimal production scheduling model is developed to explore these issues, considering the practical characteristics, such as hybrid flowshop, parallel machine system, dedicated machines, sequence independent job setup times and sequence dependent job setup times. The objective of this model is to minimise the makespan and to decide the processing sequence of the orders/lots in each stage, lot-splitting decisions for the orders and the number of machines used to satisfy the demands in each stage. From the experimental results, lot-splitting has significant effect on shortening the makespan, and the improvement effect is influenced by the processing time and the setup time of orders. Therefore, the threshold point to improve the makespan can be identified. In addition, the model also indicates that more lot-splitting approaches, that is, the flexibility of allocating orders/lots to machines is larger, will result in a better scheduling performance.

  13. Recipe for uncovering predictive genes using support vector machines based on model population analysis.

    PubMed

    Li, Hong-Dong; Liang, Yi-Zeng; Xu, Qing-Song; Cao, Dong-Sheng; Tan, Bin-Bin; Deng, Bai-Chuan; Lin, Chen-Chen

    2011-01-01

    Selecting a small number of informative genes for microarray-based tumor classification is central to cancer prediction and treatment. Based on model population analysis, here we present a new approach, called Margin Influence Analysis (MIA), designed to work with support vector machines (SVM) for selecting informative genes. The rationale for performing margin influence analysis lies in the fact that the margin of support vector machines is an important factor which underlies the generalization performance of SVM models. Briefly, MIA could reveal genes which have statistically significant influence on the margin by using Mann-Whitney U test. The reason for using the Mann-Whitney U test rather than two-sample t test is that Mann-Whitney U test is a nonparametric test method without any distribution-related assumptions and is also a robust method. Using two publicly available cancerous microarray data sets, it is demonstrated that MIA could typically select a small number of margin-influencing genes and further achieves comparable classification accuracy compared to those reported in the literature. The distinguished features and outstanding performance may make MIA a good alternative for gene selection of high dimensional microarray data. (The source code in MATLAB with GNU General Public License Version 2.0 is freely available at http://code.google.com/p/mia2009/). PMID:21339535
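
    A simplified reading of the margin-influence idea above: build a population of linear SVMs on random gene subsets, record each model's margin, and test per-gene margin differences with the Mann-Whitney U test. The synthetic data and the sub-model scheme are illustrative assumptions, not the authors' exact MIA procedure.

      # Simplified margin-influence analysis sketch (illustrative only).
      import numpy as np
      from scipy.stats import mannwhitneyu
      from sklearn.svm import SVC

      rng = np.random.default_rng(7)
      X = rng.normal(size=(60, 40))                      # 60 samples x 40 "genes"
      y = (X[:, 0] - X[:, 1] > 0).astype(int)            # genes 0 and 1 carry the signal

      margins = []                                       # (gene subset, margin) pairs
      for _ in range(300):
          genes = rng.choice(40, size=10, replace=False)
          clf = SVC(kernel="linear", C=1.0).fit(X[:, genes], y)
          margins.append((set(genes), 1.0 / np.linalg.norm(clf.coef_)))

      for g in range(5):                                 # report the first few genes
          with_g = [m for s, m in margins if g in s]
          without_g = [m for s, m in margins if g not in s]
          stat, p = mannwhitneyu(with_g, without_g, alternative="greater")
          print(f"gene {g}: p = {p:.3g}")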

  14. MIP models and hybrid algorithms for simultaneous job splitting and scheduling on unrelated parallel machines.

    PubMed

    Eroglu, Duygu Yilmaz; Ozmutlu, H Cenk

    2014-01-01

    We developed mixed integer programming (MIP) models and hybrid genetic-local search algorithms for the scheduling problem of unrelated parallel machines with job sequence and machine-dependent setup times and with a job splitting property. The first contribution of this paper is to introduce novel algorithms which perform splitting and scheduling simultaneously with a variable number of subjobs. We proposed a simple chromosome structure constituted by random key numbers in the hybrid genetic-local search algorithm (GAspLA). Random key numbers are used frequently in genetic algorithms, but they create additional difficulty when local search is hybridized into the algorithm. We developed algorithms that adapt the results of the local search into the genetic algorithm with a minimum relocation of the genes' random key numbers. This is the second contribution of the paper. The third contribution of this paper is three newly developed MIP models which perform splitting and scheduling simultaneously. The fourth contribution of this paper is the implementation of GAspLAMIP. This implementation lets us verify the optimality of GAspLA for the studied combinations. The proposed methods are tested on a set of problems taken from the literature and the results validate the effectiveness of the proposed algorithms. PMID:24977204
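
    The random-key idea behind such chromosomes can be shown with one classic decoding for parallel machines: the integer part of each key assigns a job to a machine and the fractional part orders the jobs on that machine. This is only an illustration of random-key decoding; the GAspLA chromosome and its job-splitting extension are more elaborate, and setup times are omitted here.

      # Classic (Bean-style) random-key decoding for parallel machines (illustrative only).
      import numpy as np

      rng = np.random.default_rng(8)
      n_jobs, n_machines = 8, 3
      proc = rng.integers(2, 10, size=(n_jobs, n_machines))   # processing times p[j, m]

      keys = rng.uniform(0, n_machines, size=n_jobs)          # one random key per job
      assignment = keys.astype(int)                           # machine index from integer part
      order = np.argsort(keys - assignment)                   # sequence from fractional part

      makespan = 0
      for m in range(n_machines):
          jobs_on_m = [j for j in order if assignment[j] == m]
          makespan = max(makespan, sum(proc[j, m] for j in jobs_on_m))
      print("decoded schedule makespan:", makespan)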

  15. MIP Models and Hybrid Algorithms for Simultaneous Job Splitting and Scheduling on Unrelated Parallel Machines

    PubMed Central

    Ozmutlu, H. Cenk

    2014-01-01

    We developed mixed integer programming (MIP) models and hybrid genetic-local search algorithms for the scheduling problem of unrelated parallel machines with job sequence and machine-dependent setup times and with a job splitting property. The first contribution of this paper is to introduce novel algorithms which perform splitting and scheduling simultaneously with a variable number of subjobs. We proposed a simple chromosome structure constituted by random key numbers in the hybrid genetic-local search algorithm (GAspLA). Random key numbers are used frequently in genetic algorithms, but they create additional difficulty when local search is hybridized into the algorithm. We developed algorithms that adapt the results of the local search into the genetic algorithm with a minimum relocation of the genes' random key numbers. This is the second contribution of the paper. The third contribution of this paper is three newly developed MIP models which perform splitting and scheduling simultaneously. The fourth contribution of this paper is the implementation of GAspLAMIP. This implementation lets us verify the optimality of GAspLA for the studied combinations. The proposed methods are tested on a set of problems taken from the literature and the results validate the effectiveness of the proposed algorithms. PMID:24977204

  16. MIP models and hybrid algorithms for simultaneous job splitting and scheduling on unrelated parallel machines.

    PubMed

    Eroglu, Duygu Yilmaz; Ozmutlu, H Cenk

    2014-01-01

    We developed mixed integer programming (MIP) models and hybrid genetic-local search algorithms for the scheduling problem of unrelated parallel machines with job sequence and machine-dependent setup times and with a job splitting property. The first contribution of this paper is to introduce novel algorithms which perform splitting and scheduling simultaneously with a variable number of subjobs. We proposed a simple chromosome structure constituted by random key numbers in the hybrid genetic-local search algorithm (GAspLA). Random key numbers are used frequently in genetic algorithms, but they create additional difficulty when local search is hybridized into the algorithm. We developed algorithms that adapt the results of the local search into the genetic algorithm with a minimum relocation of the genes' random key numbers. This is the second contribution of the paper. The third contribution of this paper is three newly developed MIP models which perform splitting and scheduling simultaneously. The fourth contribution of this paper is the implementation of GAspLAMIP. This implementation lets us verify the optimality of GAspLA for the studied combinations. The proposed methods are tested on a set of problems taken from the literature and the results validate the effectiveness of the proposed algorithms.

  17. Seismic Consequence Abstraction

    SciTech Connect

    M. Gross

    2004-10-25

    The primary purpose of this model report is to develop abstractions for the response of engineered barrier system (EBS) components to seismic hazards at a geologic repository at Yucca Mountain, Nevada, and to define the methodology for using these abstractions in a seismic scenario class for the Total System Performance Assessment - License Application (TSPA-LA). A secondary purpose of this model report is to provide information for criticality studies related to seismic hazards. The seismic hazards addressed herein are vibratory ground motion, fault displacement, and rockfall due to ground motion. The EBS components are the drip shield, the waste package, and the fuel cladding. The requirements for development of the abstractions and the associated algorithms for the seismic scenario class are defined in ''Technical Work Plan For: Regulatory Integration Modeling of Drift Degradation, Waste Package and Drip Shield Vibratory Motion and Seismic Consequences'' (BSC 2004 [DIRS 171520]). The development of these abstractions will provide a more complete representation of flow into and transport from the EBS under disruptive events. The results from this development will also address portions of integrated subissue ENG2, Mechanical Disruption of Engineered Barriers, including the acceptance criteria for this subissue defined in Section 2.2.1.3.2.3 of the ''Yucca Mountain Review Plan, Final Report'' (NRC 2003 [DIRS 163274]).

  18. State Event Models for the Formal Analysis of Human-Machine Interactions

    NASA Technical Reports Server (NTRS)

    Combefis, Sebastien; Giannakopoulou, Dimitra; Pecheur, Charles

    2014-01-01

    The work described in this paper was motivated by our experience with applying a framework for formal analysis of human-machine interactions (HMI) to a realistic model of an autopilot. The framework is built around a formally defined conformance relation called "fullcontrol" between an actual system and the mental model according to which the system is operated. Systems are well-designed if they can be described by relatively simple, full-control, mental models for their human operators. For this reason, our framework supports automated generation of minimal full-control mental models for HMI systems, where both the system and the mental models are described as labelled transition systems (LTS). The autopilot that we analysed has been developed in the NASA Ames HMI prototyping tool ADEPT. In this paper, we describe how we extended the models that our HMI analysis framework handles to allow adequate representation of ADEPT models. We then provide a property-preserving reduction from these extended models to LTSs, to enable application of our LTS-based formal analysis algorithms. Finally, we briefly discuss the analyses we were able to perform on the autopilot model with our extended framework.

  19. Modeling of variable speed refrigerated display cabinets based on adaptive support vector machine

    NASA Astrophysics Data System (ADS)

    Cao, Zhikun; Han, Hua; Gu, Bo

    2010-01-01

    In this paper the adaptive support vector machine (ASVM) method is introduced to the field of intelligent modeling of refrigerated display cabinets and used to construct a highly precise mathematical model of their performance. A model for a variable speed open vertical display cabinet was constructed using preprocessing techniques for measured data, including the elimination of outlying data points by the use of an exponentially weighted moving average (EWMA). Using dynamic loss coefficient adjustment, the adaptation of the SVM for use in this application was achieved. From there, the objective function for energy use per unit of display area, total energy consumption (TEC)/total display area (TDA), was constructed and solved using the ASVM method. When compared to the results achieved using a back-propagation neural network (BPNN) model, the ASVM model for the refrigerated display cabinet was characterized by its simple structure, fast convergence speed and high prediction accuracy. The ASVM model also has better noise rejection properties than the original SVM model. The theoretical analysis and experimental results presented in this paper show that it is feasible to model the display cabinet using the ASVM method.
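
    The EWMA-based outlier screening mentioned above amounts to a short recursive filter; the smoothing factor, the synthetic measurement series, and the rejection threshold below are illustrative guesses, not the paper's values.

      # Minimal EWMA-based outlier screening of measured data (illustrative only).
      import numpy as np

      rng = np.random.default_rng(9)
      power = 2.0 + 0.3 * np.sin(np.linspace(0, 6, 200)) + rng.normal(0, 0.05, 200)
      power[[40, 120]] = 5.0                       # inject two outlying measurements

      lam, ewma = 0.2, power[0]
      smoothed = np.empty_like(power)
      for i, x in enumerate(power):
          ewma = lam * x + (1 - lam) * ewma        # exponentially weighted moving average
          smoothed[i] = ewma

      keep = np.abs(power - smoothed) < 3 * power.std()
      print("points flagged as outliers:", int((~keep).sum()))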

  20. Large-scale ligand-based predictive modelling using support vector machines.

    PubMed

    Alvarsson, Jonathan; Lampa, Samuel; Schaal, Wesley; Andersson, Claes; Wikberg, Jarl E S; Spjuth, Ola

    2016-01-01

    The increasing size of datasets in drug discovery makes it challenging to build robust and accurate predictive models within a reasonable amount of time. In order to investigate the effect of dataset sizes on predictive performance and modelling time, ligand-based regression models were trained on open datasets of varying sizes of up to 1.2 million chemical structures. For modelling, two implementations of support vector machines (SVM) were used. Chemical structures were described by the signatures molecular descriptor. Results showed that for the larger datasets, the LIBLINEAR SVM implementation performed on par with the well-established libsvm with a radial basis function kernel, but with dramatically less time for model building even on modest computer resources. Using a non-linear kernel proved to be infeasible for large data sizes, even with substantial computational resources on a computer cluster. To deploy the resulting models, we extended the Bioclipse decision support framework to support models from LIBLINEAR and made our models of logD and solubility available from within Bioclipse. PMID:27516811
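
    The scaling argument above can be illustrated by timing a LIBLINEAR-style linear SVM against a kernelized SVM; the dataset sizes and parameters are toy values, and the paper itself used signature descriptors and far larger sets.

      # Rough linear-vs-kernel SVM timing comparison (illustrative only).
      import time
      import numpy as np
      from sklearn.datasets import make_regression
      from sklearn.svm import LinearSVR, SVR

      X, y = make_regression(n_samples=20000, n_features=100, noise=10.0, random_state=0)

      t0 = time.time()
      linear = LinearSVR(C=1.0, max_iter=5000).fit(X, y)          # liblinear-style primal solver
      t_linear = time.time() - t0

      t0 = time.time()
      kernel = SVR(kernel="rbf", C=1.0).fit(X[:4000], y[:4000])   # kernel SVM on a subset only
      t_kernel = time.time() - t0

      print(f"LinearSVR on 20k samples: {t_linear:.1f}s, RBF SVR on 4k samples: {t_kernel:.1f}s")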

  1. Fishery landing forecasting using EMD-based least square support vector machine models

    NASA Astrophysics Data System (ADS)

    Shabri, Ani

    2015-05-01

    In this paper, a novel hybrid ensemble learning paradigm integrating ensemble empirical mode decomposition (EMD) and the least square support vector machine (LSSVM) is proposed to improve the accuracy of fishery landing forecasting. This hybrid is formulated specifically to address the modeling of fishery landings, which are highly nonlinear, non-stationary and seasonal time series that can hardly be properly modelled and accurately forecasted by traditional statistical models. In the hybrid model, EMD is used to decompose the original data into a finite and often small number of sub-series. Each sub-series is modeled and forecasted by an LSSVM model. Finally, the fishery landing forecast is obtained by aggregating the forecasting results of all sub-series. To assess the effectiveness and predictability of EMD-LSSVM, monthly fishery landing records from East Johor of Peninsular Malaysia have been used as a case study. The results show that the proposed model yields better forecasts than the Autoregressive Integrated Moving Average (ARIMA), LSSVM and EMD-ARIMA models on several criteria.

  2. An Abstract Data Interface

    NASA Astrophysics Data System (ADS)

    Allan, D. J.

    The Abstract Data Interface (ADI) is a system within which both abstract data models and their mappings on to file formats can be defined. The data model system is object-oriented and closely follows the Common Lisp Object System (CLOS) object model. Programming interfaces in both C and Fortran are supplied, and are designed to be simple enough for use by users with limited software skills. The prototype system supports access to those FITS formats most commonly used in the X-ray community, as well as the Starlink NDF data format. New interfaces can be rapidly added to the system; these may communicate directly with the file system, other ADI objects or elsewhere (e.g., a network connection).

  3. Modeling and Experimental Investigation of Methylcyclohexane Ignition in a Rapid Compression Machine

    SciTech Connect

    Pitz, W J; Naik, C V; Mhaoldúin, T N; Curran, H J; Orme, J P; Simmie, J M; Westbrook, C K

    2005-10-13

    A new mechanism for the oxidation of methylcyclohexane has been developed. The mechanism combined a newly-developed low temperature mechanism with a previously developed high temperature mechanism. Predictions from the chemical kinetic model have been compared to experimentally measured ignition delay times from a rapid compression machine. Predicted ignition delay times using the initial estimates of the methylcyclohexyl peroxy radical isomerization rate constants were much longer than those measured at low temperatures. The initial estimates of isomerization rate constants were modified based on the experimental findings of Gulati and Walker that indicate a much slower rate of isomerization. Predictions using the modified rate constants for isomerizations yielded faster ignition at lower temperatures that greatly improved the agreement between model predictions and the experimental data. These findings point to much slower isomerization rates for methylcyclohexyl peroxy radicals than previously expected.

  4. Machine Learning Models and Pathway Genome Data Base for Trypanosoma cruzi Drug Discovery

    PubMed Central

    McCall, Laura-Isobel; Sarker, Malabika; Yadav, Maneesh; Ponder, Elizabeth L.; Kallel, E. Adam; Kellar, Danielle; Chen, Steven; Arkin, Michelle; Bunin, Barry A.; McKerrow, James H.; Talcott, Carolyn

    2015-01-01

    Background Chagas disease is a neglected tropical disease (NTD) caused by the eukaryotic parasite Trypanosoma cruzi. The current clinical and preclinical pipeline for T. cruzi is extremely sparse and lacks drug target diversity. Methodology/Principal Findings In the present study we developed a computational approach that utilized data from several public whole-cell, phenotypic high throughput screens that have been completed for T. cruzi by the Broad Institute, including a single screen of over 300,000 molecules in the search for chemical probes as part of the NIH Molecular Libraries program. We have also compiled and curated relevant biological and chemical compound screening data including (i) compounds and biological activity data from the literature, (ii) high throughput screening datasets, and (iii) predicted metabolites of T. cruzi metabolic pathways. This information was used to help us identify compounds and their potential targets. We have constructed a Pathway Genome Data Base for T. cruzi. In addition, we have developed Bayesian machine learning models that were used to virtually screen libraries of compounds. Ninety-seven compounds were selected for in vitro testing, and 11 of these were found to have EC50 < 10μM. We progressed five compounds to an in vivo mouse efficacy model of Chagas disease and validated that the machine learning model could identify in vitro active compounds not in the training set, as well as known positive controls. The antimalarial pyronaridine possessed 85.2% efficacy in the acute Chagas mouse model. We have also proposed potential targets (for future verification) for this compound based on structural similarity to known compounds with targets in T. cruzi. Conclusions/ Significance We have demonstrated how combining chemoinformatics and bioinformatics for T. cruzi drug discovery can bring interesting in vivo active molecules to light that may have been overlooked. The approach we have taken is broadly applicable to other

  5. Kinetostatic modeling and analysis of an Exechon parallel kinematic machine (PKM) module

    NASA Astrophysics Data System (ADS)

    Zhao, Yanqin; Jin, Yan; Zhang, Jun

    2016-01-01

    As a newly invented parallel kinematic machine(PKM), Exechon has found its potential application in machining and assembling industries due to high rigidity and high dynamics. To guarantee the overall performance, the loading conditions and deflections of the key components must be revealed to provide basic mechanic data for component design. For this purpose, a kinetostatic model is proposed with substructure synthesis technique. The Exechon is divided into a platform subsystem, a fixed base subsystem and three limb subsystems according to its structure. By modeling the limb assemblage as a spatial beam constrained by two sets of lumped virtual springs representing the compliances of revolute joint, universal joint and spherical joint, the equilibrium equations of limb subsystems are derived with finite element method(FEM). The equilibrium equations of the platform are derived with Newton's 2nd law. By introducing deformation compatibility conditions between the platform and limb, the governing equilibrium equations of the system are derived to formulate an analytical expression for system's deflections. The platform's elastic displacements and joint reactions caused by the gravity are investigated to show a strong position-dependency and axis-symmetry due to its kinematic and structure features. The proposed kinetostatic model is a trade-off between the accuracy of FEM and concision of analytical method, thus can predict the kinetostatics throughout the workspace in a quick and succinct manner. The proposed modeling methodology and kinetostatic analysis can be further expanded to other PKMs with necessary modifications, providing useful information for kinematic calibration as well as component strength calculations.

  6. Capturing lithium-ion battery dynamics with support vector machine-based battery model

    NASA Astrophysics Data System (ADS)

    Klass, Verena; Behm, Mårten; Lindbergh, Göran

    2015-12-01

    During long and high current pulses, diffusion resistance becomes important in lithium-ion batteries. In such diffusion-intense situations, a static support vector machine-based battery model relying on instantaneous current, state-of-charge (SOC), and temperature is not sufficient to capture the time-dependent voltage characteristics. In order to account for the diffusion-related voltage dynamics, we suggest therefore the inclusion of current history in the data-driven battery model by moving averages of the recent current. The voltage estimation performance of six different dynamic battery models with additional current history input is studied during relevant test scenarios. All current history models improve the time-dependent voltage drop estimation compared to the static model, manifesting the beneficial effect of the additional current history input during diffusion-intense situations. The best diffusion resistance estimation results are obtained for the two-step voltage estimation models that incorporate a reciprocal square root of time weighing function for the current of the previous 100 s or an exponential time function with a 20 s time constant (1-8% relative error). Those current history models even improve the overall voltage estimation performance during the studied test scenarios (under 0.25% root-mean-square percentage error).
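
    The current-history idea above can be sketched by augmenting the instantaneous (current, SOC, temperature) inputs with causal moving averages of recent current before regression. The synthetic battery data, window lengths, and the SVR regressor are illustrative assumptions, not the paper's measured cells or weighing functions.

      # Static vs. current-history battery voltage model (illustrative only).
      import numpy as np
      from sklearn.svm import SVR

      rng = np.random.default_rng(10)
      n = 3000
      current = rng.normal(0, 2.0, n).cumsum() * 0.01 + rng.normal(0, 1.0, n)   # A, 1 Hz samples
      soc = np.clip(0.8 - 0.0001 * np.cumsum(np.abs(current)), 0.05, 1.0)
      temp = 25.0 + rng.normal(0, 0.2, n)

      def moving_average(x, window):
          kernel = np.ones(window) / window
          return np.convolve(x, kernel, mode="full")[: len(x)]   # causal running mean

      hist_10s = moving_average(current, 10)
      hist_100s = moving_average(current, 100)
      voltage = 3.7 + 0.4 * soc - 0.02 * current - 0.03 * hist_100s + rng.normal(0, 0.005, n)

      X_static = np.column_stack([current, soc, temp])
      X_dynamic = np.column_stack([current, soc, temp, hist_10s, hist_100s])

      for name, X in [("static", X_static), ("with current history", X_dynamic)]:
          model = SVR(C=10.0, gamma="scale").fit(X[:2000], voltage[:2000])
          rmse = np.sqrt(np.mean((model.predict(X[2000:]) - voltage[2000:]) ** 2))
          print(f"{name:>20s} model RMSE: {rmse * 1000:.1f} mV")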

  7. CATIA-V 3D Modeling for Design Integration of the Ignitor Machine Load Assembly^*

    NASA Astrophysics Data System (ADS)

    Bianchi, A.; Parodi, B.; Gardella, F.; Coppi, B.

    2007-11-01

    In the framework of the ANSALDO industrial contribution to the Ignitor engineering design, the detailed design of all components of the machine core (Load Assembly) has been completed. The machine Central Post, Central Solenoid, and Poloidal Field Coil systems, the Plasma Chamber and First Wall system, the surrounding mechanical structures, the Vacuum Cryostat and the polyethylene boron sheets attached to it for neutron shielding, have all been analyzed to confirm that they can withstand both normal and off-normal operating loads, as well as the Plasma Chamber and First Wall baking operations, with proper safety margins, for the maximum plasma parameters scenario at 13 T/11 MA and for the reduced scenarios at 9 T/7 MA (limiter) and 9 T/6 MA (double null). Both 3D and 2D drawings of each individual component have been produced using the Dassault Systèmes CATIA-V software. After they have all been integrated into a single 3D CATIA model of the Load Assembly, the electro-fluidic and fluidic lines which supply electrical currents and helium cooling gas to the coils have been added and mechanically incorporated with the components listed above. A global seismic analysis of the Load Assembly with SSE/OBE response spectra has also been performed to verify that it is able to withstand such external events. ^*Work supported in part by ENEA of Italy and by the US D.O.E.

  8. One- and two-dimensional Stirling machine simulation using experimentally generated flow turbulence models

    NASA Technical Reports Server (NTRS)

    Goldberg, Louis F.

    1990-01-01

    Investigations of one- and two-dimensional (1- or 2-D) simulations of Stirling machines centered around experimental data generated by the U. of Minnesota Mechanical Engineering Test Rig (METR) are covered. This rig was used to investigate oscillating flows about a zero mean with emphasis on laminar/turbulent flow transitions in tubes. The Space Power Demonstrator Engine (SPDE) and, in particular, its heater were the subjects of the simulations. The heater was treated as a 1- or 2-D entity in an otherwise 1-D system. The 2-D flow effects impacted the transient flow predictions in the heater itself but did not have a major impact on overall system performance. Information propagation effects may be a significant issue in the simulation (if not the performance) of high-frequency, high-pressure Stirling machines. This was investigated further by comparing a simulation against an experimentally validated analytic solution for the fluid dynamics of a transmission line. The applicability of the pressure-linking algorithm for compressible flows may be limited by the characteristic number (defined as the number of flow-path information traverses per cycle); this warrants further study. Lastly, the METR was simulated in 1- and 2-D. A two-parameter k-w foldback function turbulence model was developed and tested against a limited set of METR experimental data.

  9. Ecophysiological Modeling of Grapevine Water Stress in Burgundy Terroirs by a Machine-Learning Approach.

    PubMed

    Brillante, Luca; Mathieu, Olivier; Lévêque, Jean; Bois, Benjamin

    2016-01-01

    In a climate change scenario, successful modeling of the relationships among plants, soil, and meteorology is crucial for sustainable agricultural production, especially for perennial crops. Grapevines (Vitis vinifera L. cv Chardonnay) located in eight experimental plots (Burgundy, France) along a hillslope were monitored weekly for 3 years for leaf water potentials, both at predawn (Ψpd) and at midday (Ψstem). The water stress experienced by the grapevines was modeled as a function of meteorological data (minimum and maximum temperature, rainfall) and soil characteristics (soil texture, gravel content, slope) by a gradient boosting machine. Model performance was assessed by comparison with carbon isotope discrimination (δ(13)C) of grape sugars at harvest and by the use of a test set. The developed models reached outstanding prediction performance (RMSE < 0.08 MPa for Ψstem and < 0.06 MPa for Ψpd), comparable to measurement accuracy. Model predictions at a daily time step improved the correlation with δ(13)C data with respect to the observed trend at a weekly time scale. The role of each predictor in these models was described in order to understand how temperature, rainfall, soil texture, gravel content, and slope affect grapevine water status in the studied context. This work proposes a straightforward strategy to simulate plant water stress in field conditions, at a local scale, to investigate ecological relationships in the vineyard, and to adapt cultural practices to future conditions. PMID:27375651
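
    A minimal sketch of the modeling setup described above: predict a leaf water potential from weather and soil descriptors with a gradient boosting regressor and score it on a held-out test set. The file name, column names, and hyperparameters are placeholders for illustration, not the study's actual data or configuration.

        import numpy as np
        import pandas as pd
        from sklearn.ensemble import GradientBoostingRegressor
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import mean_squared_error

        df = pd.read_csv("vineyard_observations.csv")        # hypothetical file
        features = ["t_min", "t_max", "rainfall", "clay_pct", "sand_pct",
                    "gravel_pct", "slope"]
        X, y = df[features].values, df["psi_stem"].values     # stem water potential (MPa)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
        gbm = GradientBoostingRegressor(n_estimators=500, learning_rate=0.05,
                                        max_depth=3, random_state=0)
        gbm.fit(X_tr, y_tr)
        rmse = mean_squared_error(y_te, gbm.predict(X_te)) ** 0.5
        print(f"test RMSE: {rmse:.3f} MPa")
        # relative importance of each predictor, analogous to the abstract's analysis
        print(dict(zip(features, gbm.feature_importances_.round(3))))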

  10. Ecophysiological Modeling of Grapevine Water Stress in Burgundy Terroirs by a Machine-Learning Approach.

    PubMed

    Brillante, Luca; Mathieu, Olivier; Lévêque, Jean; Bois, Benjamin

    2016-01-01

    In a climate change scenario, successful modeling of the relationships among plants, soil, and meteorology is crucial for sustainable agricultural production, especially for perennial crops. Grapevines (Vitis vinifera L. cv Chardonnay) located in eight experimental plots (Burgundy, France) along a hillslope were monitored weekly for 3 years for leaf water potentials, both at predawn (Ψpd) and at midday (Ψstem). The water stress experienced by the grapevines was modeled as a function of meteorological data (minimum and maximum temperature, rainfall) and soil characteristics (soil texture, gravel content, slope) by a gradient boosting machine. Model performance was assessed by comparison with carbon isotope discrimination (δ(13)C) of grape sugars at harvest and by the use of a test set. The developed models reached outstanding prediction performance (RMSE < 0.08 MPa for Ψstem and < 0.06 MPa for Ψpd), comparable to measurement accuracy. Model predictions at a daily time step improved the correlation with δ(13)C data with respect to the observed trend at a weekly time scale. The role of each predictor in these models was described in order to understand how temperature, rainfall, soil texture, gravel content, and slope affect grapevine water status in the studied context. This work proposes a straightforward strategy to simulate plant water stress in field conditions, at a local scale, to investigate ecological relationships in the vineyard, and to adapt cultural practices to future conditions.

  11. Model for noise-induced hearing loss using support vector machine

    NASA Astrophysics Data System (ADS)

    Qiu, Wei; Ye, Jun; Liu-White, Xiaohong; Hamernik, Roger P.

    2005-09-01

    Contemporary noise standards are based on the assumption that an energy metric such as the equivalent noise level is sufficient for estimating the potential of a noise stimulus to cause noise-induced hearing loss (NIHL). Available data from laboratory-based experiments (Lei et al., 1994; Hamernik and Qiu, 2001) indicate that while an energy metric may be necessary, it is not sufficient for the prediction of NIHL. A support vector machine (SVM) NIHL prediction model was constructed, based on a 550-subject (noise-exposed chinchillas) database. Training of the model used data from 367 noise-exposed subjects. The model was tested using the remaining 183 subjects. Input variables for the model included acoustic, audiometric, and biological variables, while the output variables were PTS and cell loss. The results show that an energy parameter is not sufficient to predict NIHL, especially in complex noise environments. With the kurtosis and other noise and biological parameters included as additional inputs, the performance of the SVM prediction model was significantly improved. The SVM prediction model has the potential to reliably predict noise-induced hearing loss. [Work supported by NIOSH.]

  12. Hidden Markov models and other machine learning approaches in computational molecular biology

    SciTech Connect

    Baldi, P.

    1995-12-31

    This tutorial was one of eight tutorials selected for presentation at the Third International Conference on Intelligent Systems for Molecular Biology, which was held in the United Kingdom from July 16 to 19, 1995. Computational tools are increasingly needed to process the massive amounts of data, to organize and classify sequences, to detect weak similarities, to separate coding from non-coding regions, and to reconstruct the underlying evolutionary history. The fundamental problem in machine learning is the same as in scientific reasoning in general, as well as in statistical modeling: to come up with a good model for the data. In this tutorial four classes of models are reviewed: hidden Markov models, artificial neural networks, belief networks, and stochastic grammars. When dealing with DNA and protein primary sequences, hidden Markov models are among the most flexible and powerful tools for alignments and database searches. In this tutorial, attention is focused on the theory of hidden Markov models and how to apply them to problems in molecular biology.
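
    Since the tutorial centers on hidden Markov models, a toy worked example may help: the scaled forward algorithm below computes the log-likelihood of a short symbol sequence under a two-state HMM. The model parameters are invented for illustration and bear no relation to the biological models discussed in the tutorial.

        import numpy as np

        def forward_log_likelihood(obs, pi, A, B):
            """Scaled forward algorithm for a discrete hidden Markov model.

            obs : sequence of observation symbol indices (e.g., encoded residues)
            pi  : (S,) initial state probabilities
            A   : (S, S) transition matrix, A[i, j] = P(next state j | state i)
            B   : (S, V) emission matrix, B[i, k] = P(symbol k | state i)
            Returns log P(obs | model).  A toy illustration of the HMM machinery
            the tutorial covers, not the profile-HMM software it refers to.
            """
            alpha = pi * B[:, obs[0]]
            log_like = 0.0
            for t, o in enumerate(obs):
                if t > 0:
                    alpha = (alpha @ A) * B[:, o]
                c = alpha.sum()
                log_like += np.log(c)
                alpha = alpha / c        # rescale to avoid underflow on long sequences
            return log_like

        # toy 2-state model over a 4-letter alphabet (A, C, G, T -> 0..3)
        pi = np.array([0.5, 0.5])
        A = np.array([[0.9, 0.1], [0.2, 0.8]])
        B = np.array([[0.4, 0.1, 0.1, 0.4], [0.1, 0.4, 0.4, 0.1]])
        print(forward_log_likelihood([0, 3, 1, 2, 2, 0], pi, A, B))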

  13. Ecophysiological Modeling of Grapevine Water Stress in Burgundy Terroirs by a Machine-Learning Approach

    PubMed Central

    Brillante, Luca; Mathieu, Olivier; Lévêque, Jean; Bois, Benjamin

    2016-01-01

    In a climate change scenario, successful modeling of the relationships among plants, soil, and meteorology is crucial for sustainable agricultural production, especially for perennial crops. Grapevines (Vitis vinifera L. cv Chardonnay) located in eight experimental plots (Burgundy, France) along a hillslope were monitored weekly for 3 years for leaf water potentials, both at predawn (Ψpd) and at midday (Ψstem). The water stress experienced by the grapevines was modeled as a function of meteorological data (minimum and maximum temperature, rainfall) and soil characteristics (soil texture, gravel content, slope) by a gradient boosting machine. Model performance was assessed by comparison with carbon isotope discrimination (δ13C) of grape sugars at harvest and by the use of a test set. The developed models reached outstanding prediction performance (RMSE < 0.08 MPa for Ψstem and < 0.06 MPa for Ψpd), comparable to measurement accuracy. Model predictions at a daily time step improved the correlation with δ13C data with respect to the observed trend at a weekly time scale. The role of each predictor in these models was described in order to understand how temperature, rainfall, soil texture, gravel content, and slope affect grapevine water status in the studied context. This work proposes a straightforward strategy to simulate plant water stress in field conditions, at a local scale, to investigate ecological relationships in the vineyard, and to adapt cultural practices to future conditions. PMID:27375651

  14. Rotary ultrasonic machining of CFRP: a mechanistic predictive model for cutting force.

    PubMed

    Cong, W L; Pei, Z J; Sun, X; Zhang, C L

    2014-02-01

    Cutting force is one of the most important output variables in rotary ultrasonic machining (RUM) of carbon fiber reinforced plastic (CFRP) composites. Many experimental investigations on cutting force in RUM of CFRP have been reported. However, in the literature, there are no cutting force models for RUM of CFRP. This paper develops a mechanistic predictive model for cutting force in RUM of CFRP. The material removal mechanism of CFRP in RUM has been analyzed first. The model is based on the assumption that brittle fracture is the dominant mode of material removal. CFRP micromechanical analysis has been conducted to represent CFRP as an equivalent homogeneous material to obtain the mechanical properties of CFRP from its components. Based on this model, relationships between input variables (including ultrasonic vibration amplitude, tool rotation speed, feedrate, abrasive size, and abrasive concentration) and cutting force can be predicted. The relationships between input variables and important intermediate variables (indentation depth, effective contact time, and maximum impact force of single abrasive grain) have been investigated to explain predicted trends of cutting force. Experiments are conducted to verify the model, and experimental results agree well with predicted trends from this model.

  15. Probabilistic Modeling of Conformational Space for 3D Machine Learning Approaches.

    PubMed

    Jahn, Andreas; Hinselmann, Georg; Fechner, Nikolas; Henneges, Carsten; Zell, Andreas

    2010-05-17

    We present a new probabilistic encoding of the conformational space of a molecule that allows for the integration into common similarity calculations. The method uses distance profiles of flexible atom-pairs and computes generative models that describe the distance distribution in the conformational space. The generative models permit the use of probabilistic kernel functions and, therefore, our approach can be used to extend existing 3D molecular kernel functions, as applied in support vector machines, to build QSAR models. The resulting kernels are valid 4D kernel functions and reduce the dependency of the model quality on suitable conformations of the molecules. We showed in several experiments the robust performance of the 4D kernel function, which was extended by our approach, in comparison to the original 3D-based kernel function. The new method compares the conformational space of two molecules within one kernel evaluation. Hence, the number of kernel evaluations is significantly reduced in comparison to common kernel-based conformational space averaging techniques. Additionally, the performance gain of the extended model correlates with the flexibility of the data set and enables an a priori estimation of the model improvement.

  16. Evaluation models for soil nutrient based on support vector machine and artificial neural networks.

    PubMed

    Li, Hao; Leng, Weijia; Zhou, Yibing; Chen, Fudi; Xiu, Zhilong; Yang, Dazuo

    2014-01-01

    Soil nutrients are an important aspect of soil fertility and environmental effects. Traditional approaches to evaluating soil nutrients are hard to operate, which creates great difficulties in practical applications. In this paper, we present a series of comprehensive evaluation models for soil nutrients using support vector machine (SVM), multiple linear regression (MLR), and artificial neural network (ANN) methods, respectively. We took the contents of organic matter, total nitrogen, alkali-hydrolysable nitrogen, rapidly available phosphorus, and rapidly available potassium as independent variables, while the evaluation level of soil nutrient content was taken as the dependent variable. Results show that the average prediction accuracies of the SVM models are 77.87% and 83.00%, respectively, while the general regression neural network (GRNN) model's average prediction accuracy is 92.86%, indicating that SVM and GRNN models can be used effectively to assess the levels of soil nutrients with suitable dependent variables. In practical applications, both SVM and GRNN models can be used for determining the levels of soil nutrients.

  17. A Reordering Model Using a Source-Side Parse-Tree for Statistical Machine Translation

    NASA Astrophysics Data System (ADS)

    Hashimoto, Kei; Yamamoto, Hirofumi; Okuma, Hideo; Sumita, Eiichiro; Tokuda, Keiichi

    This paper presents a reordering model using a source-side parse-tree for phrase-based statistical machine translation. The proposed model is an extension of IST-ITG (imposing source tree on inversion transduction grammar) constraints. In the proposed method, the target-side word order is obtained by rotating nodes of the source-side parse-tree. We modeled the node rotation, monotone or swap, using word alignments based on a training parallel corpus and source-side parse-trees. The model efficiently suppresses erroneous target word orderings, especially global orderings. Furthermore, the proposed method conducts a probabilistic evaluation of target word reorderings. In English-to-Japanese and English-to-Chinese translation experiments, the proposed method resulted in a 0.49-point improvement (29.31 to 29.80) and a 0.33-point improvement (18.60 to 18.93) in word BLEU-4 compared with IST-ITG constraints, respectively. This indicates the validity of the proposed reordering model.

  18. Data on Support Vector Machines (SVM) model to forecast photovoltaic power.

    PubMed

    Malvoni, M; De Giorgi, M G; Congedo, P M

    2016-12-01

    The data concern the photovoltaic (PV) power forecasted by a hybrid model that considers weather variations and applies a technique to reduce the input data size, as presented in the paper entitled "Photovoltaic forecast based on hybrid pca-lssvm using dimensionality reducted data" (M. Malvoni, M.G. De Giorgi, P.M. Congedo, 2015) [1]. The quadratic Rényi entropy criterion, together with principal component analysis (PCA), is applied to the Least Squares Support Vector Machines (LS-SVM) to predict the PV power in the day-ahead time frame. The data shared here represent the results of the proposed approach. Hourly PV power predictions for 1, 3, 6, 12, and 24 hours ahead and for different data reduction sizes are provided in the Supplementary material. PMID:27622206
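
    The pipeline named in this record (dimensionality reduction followed by a least-squares kernel model for day-ahead prediction) can be sketched roughly as below. scikit-learn has no LS-SVM estimator, so KernelRidge is used as a closely related stand-in, and the file names, lag layout, and hyperparameters are illustrative assumptions.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.kernel_ridge import KernelRidge
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        def make_dataset(power, weather, horizon=24, lags=24):
            """Lagged PV power plus current weather as inputs, power `horizon`
            hours ahead as target (a hypothetical layout, not the paper's)."""
            X, y = [], []
            for t in range(lags, len(power) - horizon):
                X.append(np.r_[power[t - lags:t], weather[t]])
                y.append(power[t + horizon])
            return np.array(X), np.array(y)

        power = np.load("pv_power_hourly.npy")       # hypothetical measured series
        weather = np.load("weather_hourly.npy")      # hypothetical exogenous inputs
        X, y = make_dataset(power, weather, horizon=24)

        model = make_pipeline(StandardScaler(),
                              PCA(n_components=10),  # input-size reduction step
                              KernelRidge(kernel="rbf", alpha=1e-2, gamma=0.1))
        model.fit(X[:-720], y[:-720])                # hold out roughly the last month
        rmse = np.sqrt(np.mean((model.predict(X[-720:]) - y[-720:]) ** 2))
        print("held-out RMSE:", rmse)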

  19. A computational visual saliency model based on statistics and machine learning.

    PubMed

    Lin, Ru-Je; Lin, Wei-Song

    2014-01-01

    Identifying the type of stimuli that attracts human visual attention has been an appealing topic for scientists for many years. In particular, marking the salient regions in images is useful for both psychologists and many computer vision applications. In this paper, we propose a computational approach for producing saliency maps using statistics and machine learning methods. Based on four assumptions, three properties (Feature-Prior, Position-Prior, and Feature-Distribution) can be derived and combined by a simple intersection operation to obtain a saliency map. These properties are implemented by a similarity computation, support vector regression (SVR) technique, statistical analysis of training samples, and information theory using low-level features. This technique is able to learn the preferences of human visual behavior while simultaneously considering feature uniqueness. Experimental results show that our approach performs better in predicting human visual attention regions than 12 other models in two test databases. PMID:25084782

  20. Data on Support Vector Machines (SVM) model to forecast photovoltaic power.

    PubMed

    Malvoni, M; De Giorgi, M G; Congedo, P M

    2016-12-01

    The data concern the photovoltaic (PV) power forecasted by a hybrid model that considers weather variations and applies a technique to reduce the input data size, as presented in the paper entitled "Photovoltaic forecast based on hybrid pca-lssvm using dimensionality reducted data" (M. Malvoni, M.G. De Giorgi, P.M. Congedo, 2015) [1]. The quadratic Rényi entropy criterion, together with principal component analysis (PCA), is applied to the Least Squares Support Vector Machines (LS-SVM) to predict the PV power in the day-ahead time frame. The data shared here represent the results of the proposed approach. Hourly PV power predictions for 1, 3, 6, 12, and 24 hours ahead and for different data reduction sizes are provided in the Supplementary material.

  1. Development of a model of machine hand eye coordination and program specifications for a topological machine vision system

    NASA Technical Reports Server (NTRS)

    1972-01-01

    A unified approach to computer vision and manipulation is developed which is called choreographic vision. In the model, objects to be viewed by a projected robot in the Viking missions to Mars are seen as objects to be manipulated within choreographic contexts controlled by a multimoded remote, supervisory control system on Earth. A new theory of context relations is introduced as a basis for choreographic programming languages. A topological vision model is developed for recognizing objects by shape and contour. This model is integrated with a projected vision system consisting of a multiaperture image dissector TV camera and a ranging laser system. System program specifications integrate eye-hand coordination and topological vision functions and an aerospace multiprocessor implementation is described.

  2. Meeting Abstracts - Nexus 2015.

    PubMed

    2015-10-01

    The AMCP Abstracts program provides a forum through which authors can share their insights and outcomes of advanced managed care practice through publication in AMCP's Journal of Managed Care Specialty Pharmacy (JMCP). Of the abstracts accepted for publication, most are presented as posters, so interested AMCP meeting attendees can review findings and query authors. The main poster presentation is Tuesday, October 27, 2015; posters are also displayed on Wednesday, October 28, 2015. The AMCP Nexus 2015 in Orlando, Florida, is expected to attract more than 3,500 managed care pharmacists and other health care professionals who manage and evaluate drug therapies, develop and manage networks, and work with medical managers and information specialists to improve the care of all individuals enrolled in managed care programs. Abstracts were submitted in the following categories: Research Report: describe completed original research on managed care pharmacy services or health care interventions. Examples include (but are not limited to) observational studies using administrative claims, reports of the impact of unique benefit design strategies, and analyses of the effects of innovative administrative or clinical programs. Economic Model: describe models that predict the effect of various benefit design or clinical decisions on a population. For example, an economic model could be used to predict the budget impact of a new pharmaceutical product on a health care system. Solving Problems in Managed Care: describe the specific steps taken to introduce a needed change, develop and implement a new system or program, plan and organize an administrative function, or solve other types of problems in managed care settings. These abstracts describe a course of events; they do not test a hypothesis, but they may include data.

  3. A New Vuinter Subroutine for ABAQUS/EXPLICIT™ to Modeling Rate Dependent Surface Interactions Laws in Machining

    NASA Astrophysics Data System (ADS)

    Kortabarria, A.; Rech, J.; de Eguilaz, E. Ruiz; Arrazola, P. J.

    2011-05-01

    Although there have been great advances in machining research, there is still no total control of the process. FEM simulation is one of the most powerful methods in machining research, but the strong mechanical and thermal loads combined with large strains and strain rates make it difficult to obtain accurate input parameters. With the aim of obtaining better accuracy in simulation results, a new VUINTER subroutine for Abaqus/Explicit™ 6.9 has been developed. This subroutine is able to represent the principal workpiece-tool interaction laws, such as a rate-dependent Coulomb friction coefficient and a rate-dependent frictional heat partition coefficient. To validate it, the subroutine has been implemented in a basic sliding model and in a 2D ALE machining model. Finally, the results have been compared with experimental and numerical ones under different working conditions.

  4. Study on the machined depth when nanoscratching on 6H-SiC using Berkovich indenter: Modelling and experimental study

    NASA Astrophysics Data System (ADS)

    Zhang, Feihu; Meng, Binbin; Geng, Yanquan; Zhang, Yong

    2016-04-01

    In order to investigate the deformation characteristics and material removal mechanism of single crystal silicon carbide at the nanoscale, nanoscratching tests were conducted on the surface of 6H-SiC (0 0 0 1) using a Berkovich indenter. In this paper, a theoretical model for nanoscratching with a Berkovich indenter is proposed to reveal the relationship between the applied normal load and the machined depth. The influences of the elastic recovery and the stress distribution of the material are considered in the developed theoretical model. Experimental and theoretical machined depths are compared when scratching in different directions. Results show that the elastic recovery of the material, the geometry of the tip, and the stress distribution at the interface between the tip and the sample have large influences on the machined depth and should be considered for a hard, brittle material such as 6H-SiC.

  5. Copper Conductivity Model Development and Validation Using Flyer Plate Experiments on the Z-machine

    NASA Astrophysics Data System (ADS)

    Riford, L.; Lemke, R. W.; Cochrane, K.

    2015-11-01

    Magnetically accelerated flyer plate experiments performed on Sandia's Z-machine provide insight into a multitude of materials problems at high energies and densities, including conductivity model development and validation. In an experiment with ten Cu flyer plates of thicknesses 500-1000 μm, VISAR measurements exhibit a characteristic jump in the velocity correlated with magnetic field burn-through and the expansion of melted material at the free surface. The experiment is modeled using Sandia's shock and multiphysics MHD code ALEGRA. Simulated free surface velocities are within 1% of the measured data early in time, but divergence occurs at the feature, where the simulation indicates a slower burn-through time. The cause was found to lie in the Cu conductivity model's compressed regime. The model was improved by lowering the conductivity in the region 12.5-16 g/cc and 350-16000 K with a novel parameter-based optimization method using the velocity feature as a figure of merit. Sandia National Laboratories is a multiprogram laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  6. Using machine learning tools to model complex toxic interactions with limited sampling regimes.

    PubMed

    Bertin, Matthew J; Moeller, Peter; Guillette, Louis J; Chapman, Robert W

    2013-03-19

    A major impediment to understanding the impact of environmental stress, including toxins and other pollutants, on organisms is that organisms are rarely challenged by one or a few stressors in natural systems. Thus, linking laboratory experiments that are limited by practical considerations to a few stressors and a few levels of these stressors to real-world conditions is constrained. In addition, while the existence of complex interactions among stressors can be identified by current statistical methods, these methods do not provide a means to construct mathematical models of these interactions. In this paper, we offer a two-step process by which complex interactions of stressors on biological systems can be modeled in an experimental design that is within the limits of practicality. We begin with the notion that environmental conditions circumscribe an n-dimensional hyperspace within which biological processes or end points are embedded. We then randomly sample this hyperspace to establish experimental conditions that span the range of the relevant parameters and conduct the experiment(s) based upon these selected conditions. Models of the complex interactions of the parameters are then extracted using machine learning tools, specifically artificial neural networks. This approach can rapidly generate highly accurate models of biological responses to complex interactions among environmentally relevant toxins, identify critical subspaces where nonlinear responses exist, and provide an expedient means of designing traditional experiments to test the impact of complex mixtures on biological responses. Further, this can be accomplished with an astonishingly small sample size.
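
    A minimal sketch of the two-step strategy outlined above: random sampling of an n-dimensional stressor hyperspace to define the experimental conditions, followed by an artificial neural network fitted to the measured responses. The stressor names, ranges, and the synthetic response used in place of real measurements are all hypothetical.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        bounds = {"toxin_a": (0.0, 10.0), "toxin_b": (0.0, 5.0),
                  "temperature": (15.0, 35.0), "salinity": (5.0, 35.0)}

        # step 1: choose experimental conditions by uniform random sampling of the hyperspace
        n_experiments = 60
        X = np.column_stack([rng.uniform(lo, hi, n_experiments) for lo, hi in bounds.values()])

        # stand-in for the measured biological endpoint of each experiment
        y = np.sin(X[:, 0] * X[:, 1] / 10.0) + 0.05 * X[:, 2] + rng.normal(0, 0.05, n_experiments)

        # step 2: extract an interaction model with a feed-forward neural network
        ann = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=5000, random_state=0)
        ann.fit(X, y)
        print("training R^2:", round(ann.score(X, y), 3))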

  7. Improving protein-protein interactions prediction accuracy using protein evolutionary information and relevance vector machine model.

    PubMed

    An, Ji-Yong; Meng, Fan-Rong; You, Zhu-Hong; Chen, Xing; Yan, Gui-Ying; Hu, Ji-Pu

    2016-10-01

    Predicting protein-protein interactions (PPIs) is a challenging task and essential for constructing the protein interaction networks, which is important for facilitating our understanding of the mechanisms of biological systems. Although a number of high-throughput technologies have been proposed to predict PPIs, there are unavoidable shortcomings, including high cost, time intensity, and inherently high false positive rates. For these reasons, many computational methods have been proposed for predicting PPIs. However, the problem is still far from being solved. In this article, we propose a novel computational method called RVM-BiGP that combines the relevance vector machine (RVM) model and Bi-gram Probabilities (BiGP) for PPI detection from protein sequences. The major improvements include: (1) protein sequences are represented using the Bi-gram Probabilities (BiGP) feature representation on a Position Specific Scoring Matrix (PSSM), in which the protein evolutionary information is contained; (2) to reduce the influence of noise, the Principal Component Analysis (PCA) method is used to reduce the dimension of the BiGP vector; (3) the powerful and robust Relevance Vector Machine (RVM) algorithm is used for classification. Five-fold cross-validation experiments were executed on the yeast and Helicobacter pylori datasets, achieving very high accuracies of 94.57% and 90.57%, respectively. The experimental results are significantly better than those of previous methods. To further evaluate the proposed method, we compare it with the state-of-the-art support vector machine (SVM) classifier on the yeast dataset. The experimental results demonstrate that our RVM-BiGP method is significantly better than the SVM-based method. In addition, we achieved 97.15% accuracy on the imbalanced yeast dataset, which is higher than that on the balanced yeast dataset. The promising experimental results show the efficiency and robustness of the proposed method, which can be an automatic decision support tool for future
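
    The feature-construction steps named in the abstract (bi-gram probabilities from a PSSM, then PCA) can be sketched as below. The RVM classifier itself is omitted because scikit-learn ships no RVM implementation; the PSSM rescaling, array shapes, and component count are assumptions for illustration.

        import numpy as np
        from sklearn.decomposition import PCA

        def bigram_from_pssm(pssm):
            """pssm: (L, 20) position-specific scoring matrix rescaled to [0, 1].
            Returns the 400-dim vector B[i, j] = sum_k pssm[k, i] * pssm[k + 1, j]."""
            return (pssm[:-1].T @ pssm[1:]).ravel()

        # hypothetical stack of PSSMs, one per protein sequence
        pssms = [np.random.rand(200, 20) for _ in range(50)]
        features = np.array([bigram_from_pssm(p) for p in pssms])

        pca = PCA(n_components=50)          # noise reduction before classification
        reduced = pca.fit_transform(features)
        print(reduced.shape)                # (50 proteins, 50 components)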

  8. Improving protein-protein interactions prediction accuracy using protein evolutionary information and relevance vector machine model.

    PubMed

    An, Ji-Yong; Meng, Fan-Rong; You, Zhu-Hong; Chen, Xing; Yan, Gui-Ying; Hu, Ji-Pu

    2016-10-01

    Predicting protein-protein interactions (PPIs) is a challenging task and essential for constructing the protein interaction networks, which is important for facilitating our understanding of the mechanisms of biological systems. Although a number of high-throughput technologies have been proposed to predict PPIs, there are unavoidable shortcomings, including high cost, time intensity, and inherently high false positive rates. For these reasons, many computational methods have been proposed for predicting PPIs. However, the problem is still far from being solved. In this article, we propose a novel computational method called RVM-BiGP that combines the relevance vector machine (RVM) model and Bi-gram Probabilities (BiGP) for PPI detection from protein sequences. The major improvements include: (1) protein sequences are represented using the Bi-gram Probabilities (BiGP) feature representation on a Position Specific Scoring Matrix (PSSM), in which the protein evolutionary information is contained; (2) to reduce the influence of noise, the Principal Component Analysis (PCA) method is used to reduce the dimension of the BiGP vector; (3) the powerful and robust Relevance Vector Machine (RVM) algorithm is used for classification. Five-fold cross-validation experiments were executed on the yeast and Helicobacter pylori datasets, achieving very high accuracies of 94.57% and 90.57%, respectively. The experimental results are significantly better than those of previous methods. To further evaluate the proposed method, we compare it with the state-of-the-art support vector machine (SVM) classifier on the yeast dataset. The experimental results demonstrate that our RVM-BiGP method is significantly better than the SVM-based method. In addition, we achieved 97.15% accuracy on the imbalanced yeast dataset, which is higher than that on the balanced yeast dataset. The promising experimental results show the efficiency and robustness of the proposed method, which can be an automatic decision support tool for future

  9. EBS Radionuclide Transport Abstraction

    SciTech Connect

    J.D. Schreiber

    2005-08-25

    The purpose of this report is to develop and analyze the engineered barrier system (EBS) radionuclide transport abstraction model, consistent with Level I and Level II model validation, as identified in ''Technical Work Plan for: Near-Field Environment and Transport: Engineered Barrier System: Radionuclide Transport Abstraction Model Report Integration'' (BSC 2005 [DIRS 173617]). The EBS radionuclide transport abstraction (or EBS RT Abstraction) is the conceptual model used in the total system performance assessment for the license application (TSPA-LA) to determine the rate of radionuclide releases from the EBS to the unsaturated zone (UZ). The EBS RT Abstraction conceptual model consists of two main components: a flow model and a transport model. Both models are developed mathematically from first principles in order to show explicitly what assumptions, simplifications, and approximations are incorporated into the models used in the TSPA-LA. The flow model defines the pathways for water flow in the EBS and specifies how the flow rate is computed in each pathway. Input to this model includes the seepage flux into a drift. The seepage flux is potentially split by the drip shield, with some (or all) of the flux being diverted by the drip shield and some passing through breaches in the drip shield that might result from corrosion or seismic damage. The flux through drip shield breaches is potentially split by the waste package, with some (or all) of the flux being diverted by the waste package and some passing through waste package breaches that might result from corrosion or seismic damage. Neither the drip shield nor the waste package survives an igneous intrusion, so the flux splitting submodel is not used in the igneous scenario class. The flow model is validated in an independent model validation technical review. The drip shield and waste package flux splitting algorithms are developed and validated using experimental data. The transport model considers

  10. Filtered selection coupled with support vector machines generate a functionally relevant prediction model for colorectal cancer

    PubMed Central

    Gabere, Musa Nur; Hussein, Mohamed Aly; Aziz, Mohammad Azhar

    2016-01-01

    Purpose There has been considerable interest in using whole-genome expression profiles for the classification of colorectal cancer (CRC). The selection of important features is a crucial step before training a classifier. Methods In this study, we built a model that uses a support vector machine (SVM) to classify cancer and normal samples using Affymetrix exon microarray data obtained from 90 samples of 48 patients diagnosed with CRC. From the 22,011 genes, we selected the 20, 30, 50, 100, 200, 300, and 500 genes most relevant to CRC using the minimum-redundancy–maximum-relevance (mRMR) technique. With these gene sets, an SVM model was designed using four different kernel types (linear, polynomial, radial basis function [RBF], and sigmoid). Results The best model, which used 30 genes and the RBF kernel, outperformed other combinations; it had an accuracy of 84% for both tenfold and leave-one-out cross-validations in discriminating the cancer samples from the normal samples. With this 30-gene set from mRMR, six classifiers were trained using random forest (RF), Bayes net (BN), multilayer perceptron (MLP), naïve Bayes (NB), reduced error pruning tree (REPT), and SVM. Two hybrids, mRMR + SVM and mRMR + BN, were the best models when tested on other datasets, and they achieved a prediction accuracy of 95.27% and 91.99%, respectively, compared to other mRMR hybrid models (mRMR + RF, mRMR + NB, mRMR + REPT, and mRMR + MLP). Ingenuity pathway analysis was used to analyze the functions of the 30 genes selected for this model and their potential association with CRC: CDH3, CEACAM7, CLDN1, IL8, IL6R, MMP1, MMP7, and TGFB1 were predicted to be CRC biomarkers. Conclusion This model could be used to further develop a diagnostic tool for predicting CRC based on gene expression data from patient samples. PMID:27330311
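
    A bare-bones sketch of the classification pipeline described above: select a small gene panel with a filter method, train an RBF-kernel SVM, and estimate accuracy by cross-validation. scikit-learn has no mRMR selector, so a univariate ANOVA filter stands in for it here; the file names and parameters are placeholders.

        import pandas as pd
        from sklearn.feature_selection import SelectKBest, f_classif
        from sklearn.model_selection import LeaveOneOut, cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        expr = pd.read_csv("exon_array_expression.csv", index_col=0)      # genes x samples (hypothetical)
        labels = pd.read_csv("sample_labels.csv", index_col=0)["class"]   # cancer / normal (hypothetical)

        X, y = expr.T.values, labels.values
        clf = make_pipeline(StandardScaler(),
                            SelectKBest(f_classif, k=30),   # 30-gene panel, as in the abstract
                            SVC(kernel="rbf", C=1.0, gamma="scale"))

        print("10-fold accuracy:", cross_val_score(clf, X, y, cv=10).mean())
        print("LOOCV accuracy:  ", cross_val_score(clf, X, y, cv=LeaveOneOut()).mean())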

  11. Modeling workflow to design machine translation applications for public health practice

    PubMed Central

    Turner, Anne M.; Brownstein, Megumu K.; Cole, Kate; Karasz, Hilary; Kirchhoff, Katrin

    2014-01-01

    Objective Provide a detailed understanding of the information workflow processes related to translating health promotion materials for limited English proficiency individuals in order to inform the design of context-driven machine translation (MT) tools for public health (PH). Materials and Methods We applied a cognitive work analysis framework to investigate the translation information workflow processes of two large health departments in Washington State. Researchers conducted interviews, performed a task analysis, and validated results with PH professionals to model translation workflow and identify functional requirements for a translation system for PH. Results The study resulted in a detailed description of work related to translation of PH materials, an information workflow diagram, and a description of attitudes towards MT technology. We identified a number of themes that hold design implications for incorporating MT in PH translation practice. A PH translation tool prototype was designed based on these findings. Discussion This study underscores the importance of understanding the work context and information workflow for which systems will be designed. Based on themes and translation information workflow processes, we identified key design guidelines for incorporating MT into PH translation work. Primary amongst these is that MT should be followed by human review for translations to be of high quality and for the technology to be adopted into practice. Conclusion The time and costs of creating multilingual health promotion materials are barriers to translation. PH personnel were interested in MT's potential to improve access to low-cost translated PH materials, but expressed concerns about ensuring quality. We outline design considerations and a potential machine translation tool to best fit MT systems into PH practice. PMID:25445922

  12. A Hybrid Short-Term Traffic Flow Prediction Model Based on Singular Spectrum Analysis and Kernel Extreme Learning Machine.

    PubMed

    Shang, Qiang; Lin, Ciyun; Yang, Zhaosheng; Bing, Qichun; Zhou, Xiyang

    2016-01-01

    Short-term traffic flow prediction is one of the most important issues in the field of intelligent transport systems (ITS). Because of its uncertainty and nonlinearity, short-term traffic flow prediction is a challenging task. In order to improve the accuracy of short-term traffic flow prediction, a hybrid model (SSA-KELM) is proposed based on singular spectrum analysis (SSA) and the kernel extreme learning machine (KELM). SSA is used to filter out the noise of the traffic flow time series. Then, the filtered traffic flow data are used to train the KELM model, the optimal input form of the proposed model is determined by phase space reconstruction, and the parameters of the model are optimized by a gravitational search algorithm (GSA). Finally, case validation is carried out using measured data from an expressway in Xiamen, China. The SSA-KELM model is then compared with several well-known prediction models, including the support vector machine, the extreme learning machine, and a single KELM model. The experimental results demonstrate that the performance of the proposed model is superior to that of the comparison models. Apart from the accuracy improvement, the proposed model is also more robust. PMID:27551829
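
    A compact sketch of the two stages described above: singular spectrum analysis to de-noise the flow series, then a kernel regressor trained on lagged inputs. KernelRidge is used here as a stand-in for the kernel extreme learning machine (both share a similar closed-form ridge solution), and the window length, lag count, and file name are assumptions; the GSA parameter search is omitted.

        import numpy as np
        from sklearn.kernel_ridge import KernelRidge

        def ssa_filter(x, window=12, rank=3):
            """Reconstruct x from its `rank` leading SSA components."""
            n = len(x)
            k = n - window + 1
            traj = np.column_stack([x[i:i + window] for i in range(k)])   # Hankel trajectory matrix
            u, s, vt = np.linalg.svd(traj, full_matrices=False)
            approx = (u[:, :rank] * s[:rank]) @ vt[:rank]
            # diagonal (Hankel) averaging back to a 1-D series
            rec = np.zeros(n)
            counts = np.zeros(n)
            for j in range(k):
                rec[j:j + window] += approx[:, j]
                counts[j:j + window] += 1
            return rec / counts

        flow = np.load("traffic_flow_5min.npy")        # hypothetical measured series
        smooth = ssa_filter(flow)

        lags = 6                                        # lagged input form (illustrative)
        X = np.column_stack([smooth[i:len(smooth) - lags + i] for i in range(lags)])
        y = smooth[lags:]
        model = KernelRidge(kernel="rbf", alpha=1e-2, gamma=0.5).fit(X[:-288], y[:-288])
        rmse = np.sqrt(np.mean((model.predict(X[-288:]) - y[-288:]) ** 2))
        print("one-day test RMSE:", rmse)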

  13. A Hybrid Short-Term Traffic Flow Prediction Model Based on Singular Spectrum Analysis and Kernel Extreme Learning Machine.

    PubMed

    Shang, Qiang; Lin, Ciyun; Yang, Zhaosheng; Bing, Qichun; Zhou, Xiyang

    2016-01-01

    Short-term traffic flow prediction is one of the most important issues in the field of intelligent transport systems (ITS). Because of its uncertainty and nonlinearity, short-term traffic flow prediction is a challenging task. In order to improve the accuracy of short-term traffic flow prediction, a hybrid model (SSA-KELM) is proposed based on singular spectrum analysis (SSA) and the kernel extreme learning machine (KELM). SSA is used to filter out the noise of the traffic flow time series. Then, the filtered traffic flow data are used to train the KELM model, the optimal input form of the proposed model is determined by phase space reconstruction, and the parameters of the model are optimized by a gravitational search algorithm (GSA). Finally, case validation is carried out using measured data from an expressway in Xiamen, China. The SSA-KELM model is then compared with several well-known prediction models, including the support vector machine, the extreme learning machine, and a single KELM model. The experimental results demonstrate that the performance of the proposed model is superior to that of the comparison models. Apart from the accuracy improvement, the proposed model is also more robust.

  14. A Hybrid Short-Term Traffic Flow Prediction Model Based on Singular Spectrum Analysis and Kernel Extreme Learning Machine

    PubMed Central

    Lin, Ciyun; Yang, Zhaosheng; Bing, Qichun; Zhou, Xiyang

    2016-01-01

    Short-term traffic flow prediction is one of the most important issues in the field of intelligent transport systems (ITS). Because of its uncertainty and nonlinearity, short-term traffic flow prediction is a challenging task. In order to improve the accuracy of short-term traffic flow prediction, a hybrid model (SSA-KELM) is proposed based on singular spectrum analysis (SSA) and the kernel extreme learning machine (KELM). SSA is used to filter out the noise of the traffic flow time series. Then, the filtered traffic flow data are used to train the KELM model, the optimal input form of the proposed model is determined by phase space reconstruction, and the parameters of the model are optimized by a gravitational search algorithm (GSA). Finally, case validation is carried out using measured data from an expressway in Xiamen, China. The SSA-KELM model is then compared with several well-known prediction models, including the support vector machine, the extreme learning machine, and a single KELM model. The experimental results demonstrate that the performance of the proposed model is superior to that of the comparison models. Apart from the accuracy improvement, the proposed model is also more robust. PMID:27551829

  15. Non-parametric temporal modeling of the hemodynamic response function via a liquid state machine.

    PubMed

    Avesani, Paolo; Hazan, Hananel; Koilis, Ester; Manevitz, Larry M; Sona, Diego

    2015-10-01

    Standard methods for the analysis of functional MRI data strongly rely on prior implicit and explicit hypotheses made to simplify the analysis. In this work the attention is focused on two such commonly accepted hypotheses: (i) the hemodynamic response function (HRF) to be searched in the BOLD signal can be described by a specific parametric model e.g., double-gamma; (ii) the effect of stimuli on the signal is taken to be linearly additive. While these assumptions have been empirically proven to generate high sensitivity for statistical methods, they also limit the identification of relevant voxels to what is already postulated in the signal, thus not allowing the discovery of unknown correlates in the data due to the presence of unexpected hemodynamics. This paper tries to overcome these limitations by proposing a method wherein the HRF is learned directly from data rather than induced from its basic form assumed in advance. This approach produces a set of voxel-wise models of HRF and, as a result, relevant voxels are filterable according to the accuracy of their prediction in a machine learning framework. This approach is instantiated using a temporal architecture based on the paradigm of Reservoir Computing wherein a Liquid State Machine is combined with a decoding Feed-Forward Neural Network. This splits the modeling into two parts: first a representation of the complex temporal reactivity of the hemodynamic response is determined by a universal global "reservoir" which is essentially temporal; second an interpretation of the encoded representation is determined by a standard feed-forward neural network, which is trained by the data. Thus the reservoir models the temporal state of information during and following temporal stimuli in a feed-back system, while the neural network "translates" this data to fit the specific HRF response as given, e.g. by BOLD signal measurements in fMRI. An empirical analysis on synthetic datasets shows that the learning process can

  16. Geometric dimension model of virtual astronaut body for ergonomic analysis of man-machine space system

    NASA Astrophysics Data System (ADS)

    Qianxiang, Zhou

    2012-07-01

    It is very important to clarify the geometric characteristics of human body segments and to construct an analysis model for ergonomic design and the application of ergonomic virtual humans. The typical anthropometric data of 1122 Chinese men aged 20-35 years were collected using a three-dimensional laser scanner for the human body. According to the correlations between different parameters, curve fits were made between seven trunk parameters and ten body parameters with the SPSS 16.0 software. It can be concluded that hip circumference and shoulder breadth are the most important parameters in the models, and these two parameters have high correlations with the other parameters of the human body. By comparison with conventional regression curves, the present regression equations with the seven trunk parameters forecast the geometric dimensions of the head, neck, height, and the four limbs more accurately. Therefore, this is greatly valuable for ergonomic design and analysis of man-machine systems. This result will be very useful for astronaut body model analysis and application.
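
    The regression idea in this abstract (predicting limb and stature dimensions from a small set of trunk measurements) can be sketched as below. The synthetic data, variable names, and coefficients are placeholders standing in for the scanned anthropometric dataset.

        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(1)
        n = 1122
        hip = rng.normal(95, 6, n)                # hip circumference, cm
        shoulder = rng.normal(44, 2.5, n)         # shoulder breadth, cm
        other_trunk = rng.normal(0, 1, (n, 5))    # five further trunk parameters (standardized)
        X = np.column_stack([hip, shoulder, other_trunk])

        # stand-in target: stature, loosely driven by the two dominant predictors
        stature = 60 + 0.6 * hip + 1.2 * shoulder + rng.normal(0, 2, n)

        X_tr, X_te, y_tr, y_te = train_test_split(X, stature, random_state=0)
        reg = LinearRegression().fit(X_tr, y_tr)
        print("R^2 on held-out subjects:", round(reg.score(X_te, y_te), 3))
        print("hip / shoulder coefficients:", reg.coef_[:2].round(2))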

  17. Modeling and analysis of reservation frame slotted-ALOHA in wireless machine-to-machine area networks for data collection.

    PubMed

    Vázquez-Gallego, Francisco; Alonso, Luis; Alonso-Zarate, Jesus

    2015-02-09

    Reservation frame slotted-ALOHA (RFSA) was proposed in the past to manage access to the wireless channel when devices generate long messages fragmented into small packets. In this paper, we consider an M2M area network composed of end-devices that periodically respond to requests from a gateway with the transmission of fragmented messages. The idle network is suddenly set into saturation, with all end-devices attempting to get access to the channel simultaneously. This has been referred to as delta traffic. While previous works analyze the throughput of RFSA in steady-state conditions, assuming that traffic is generated following random distributions, the performance of RFSA under delta traffic has never received attention. In this paper, we propose a theoretical model to calculate the average delay and energy consumption required to resolve the contention under delta traffic using RFSA. We have carried out computer-based simulations to validate the accuracy of the theoretical model and to compare the performance of RFSA and FSA. Results show that there is an optimal frame length that minimizes delay and energy consumption and that depends on the number of end-devices. In addition, it is shown that RFSA reduces the energy consumed per end-device by more than 50% with respect to FSA under delta traffic.
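
    As a toy counterpart to the analytical model described above, the sketch below simulates RFSA under delta traffic: all devices contend at once, a device that wins a slot keeps it in the following frames until its fragmented message is delivered, and the number of frames until the network empties is returned. The parameters and the simplified collision/capture assumptions are illustrative, not the paper's model.

        import random

        def rfsa_delta_traffic(n_devices=50, frame_len=40, fragments=5, seed=0):
            """Frames needed to deliver every device's fragmented message."""
            random.seed(seed)
            remaining = {d: fragments for d in range(n_devices)}   # fragments left per device
            reserved = {}                                          # slot -> device holding it
            frames = 0
            while remaining:
                frames += 1
                free_slots = [s for s in range(frame_len) if s not in reserved]
                contenders = [d for d in remaining if d not in reserved.values()]
                picks = {}
                if free_slots:
                    for d in contenders:                           # contention in free slots
                        picks.setdefault(random.choice(free_slots), []).append(d)
                for slot, devs in picks.items():                   # a lone transmission wins the slot
                    if len(devs) == 1:
                        reserved[slot] = devs[0]
                for slot, d in list(reserved.items()):             # reserved slots deliver one fragment
                    remaining[d] -= 1
                    if remaining[d] == 0:
                        del remaining[d]
                        del reserved[slot]
            return frames

        print("frames to resolve all devices:", rfsa_delta_traffic())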

  18. Machine Shop Grinding Machines.

    ERIC Educational Resources Information Center

    Dunn, James

    This curriculum manual is one in a series of machine shop curriculum manuals intended for use in full-time secondary and postsecondary classes, as well as part-time adult classes. The curriculum can also be adapted to open-entry, open-exit programs. Its purpose is to equip students with basic knowledge and skills that will enable them to enter the…

  19. Piaget on Abstraction.

    ERIC Educational Resources Information Center

    Moessinger, Pierre; Poulin-Dubois, Diane

    1981-01-01

    Reviews and discusses Piaget's recent work on abstract reasoning. Piaget's distinction between empirical and reflective abstraction is presented; his hypotheses are considered to be metaphorical. (Author/DB)

  20. Curriculum Assessment Using Artificial Neural Network and Support Vector Machine Modeling Approaches: A Case Study. IR Applications. Volume 29

    ERIC Educational Resources Information Center

    Chen, Chau-Kuang

    2010-01-01

    Artificial Neural Network (ANN) and Support Vector Machine (SVM) approaches have been on the cutting edge of science and technology for pattern recognition and data classification. In the ANN model, classification accuracy can be achieved by using the feed-forward of inputs, back-propagation of errors, and the adjustment of connection weights. In…

  1. Support Vector Machine Model for Automatic Detection and Classification of Seismic Events

    NASA Astrophysics Data System (ADS)

    Barros, Vesna; Barros, Lucas

    2016-04-01

    The automated processing of multiple seismic signals to detect, localize, and classify seismic events is a central tool in both natural hazards monitoring and nuclear treaty verification. However, false detections and missed detections caused by station noise and incorrect classification of arrivals are still an issue, and the events are often unclassified or poorly classified. Thus, machine learning techniques can be used in automatic processing for classifying the huge database of seismic recordings and provide more confidence in the final output. Applied in the context of the International Monitoring System (IMS) - a global sensor network developed for the Comprehensive Nuclear-Test-Ban Treaty (CTBT) - we propose a fully automatic method for seismic event detection and classification based on a supervised pattern recognition technique called the Support Vector Machine (SVM). According to Kortström et al. (2015), the advantages of using SVM are its ability to handle a large number of features and its effectiveness in high-dimensional spaces. Our objective is to detect seismic events from one IMS seismic station located in an area of high seismicity and mining activity and classify them as earthquakes or quarry blasts. The aim is to create a flexible and easily adjustable SVM method that can be applied in different regions and datasets. Taking this a step further, accurate results for seismic stations could lead to a modification of the model and its parameters to make it applicable to other waveform technologies used to monitor nuclear explosions, such as infrasound and hydroacoustic waveforms. As an authorized user, we have direct access to all IMS data and bulletins through a secure signatory account. A set of significant seismic waveforms containing different types of events (e.g., earthquakes, quarry blasts) and noise is being analysed to train the model and learn the typical pattern of the signal from these events. Moreover, comparing the performance of the support

  2. Modeling of Energy Transfer for Carbon Nanotube-Based Precision Machining

    NASA Astrophysics Data System (ADS)

    Wong, Basil T.; Pinar Menguc, M.; Vallance, R. Ryan; Rao, Apparao M.

    2003-03-01

    INTRODUCTION Possible use of electron emission from carbon nanotubes (CNTs) for precision machining has been realized only recently. It is hypothesized that by coupling CNT electron emission with a radiation transfer mechanism, nanoscale machining can be achieved. A laser, for example, can be used to raise the temperature of the workpiece near its melting point, and a carbon nanotube is then used to transfer the additional energy required to the workpiece to complete the removal of a minute amount of material in the nanomachining process. To investigate this hypothesis, a detailed numerical/analytical study is conducted. Electron transfer is modeled using a Monte Carlo approach, and a detailed radiation transfer model, including Fresnel reflections, is adapted. Based on the numerical simulations, we found that a power of one-tenth of a watt is required from a CNT alone to raise the temperature of gold beyond its melting point. However, using localized heating with a laser, the required power can be reduced by more than half. This paper outlines the details of the numerical simulation and establishes a set of design guidelines for future nanomachining modalities. We are interested in nanomachining using CNTs. Our objective is to determine whether we can effectively remove tens of atoms from the workpiece by electron transfer from a single CNT and proper laser heating from either side of the workpiece. To reach our goal, energy transfer from a single CNT may not be sufficient. One way to overcome this limitation is to preheat the workpiece to a certain temperature through bulk heating, and then use subsequent localized heating by the laser beam to further increase the temperature of a specified location. Thus only a minimum amount of energy is required from the nanotube to process the material, i.e., to remove tens of atoms. Due to the complicated interactions between propagating electrons and the solid material, obtaining a physically realistic theoretical analysis

  3. Machine Learning

    NASA Astrophysics Data System (ADS)

    Hoffmann, Achim; Mahidadia, Ashesh

    The purpose of this chapter is to present fundamental ideas and techniques of machine learning suitable for the field of this book, i.e., for automated scientific discovery. The chapter focuses on those symbolic machine learning methods which produce results that can be interpreted and understood by humans. This is particularly important in the context of automated scientific discovery, as the scientific theories to be produced by machines are usually meant to be interpreted by humans. This chapter contains some of the most influential ideas and concepts in machine learning research to give the reader a basic insight into the field. After the introduction in Sect. 1, general ideas of how learning problems can be framed are given in Sect. 2. The section provides useful perspectives to better understand what learning algorithms actually do. Section 3 presents the Version space model, which is an early learning algorithm as well as a conceptual framework that provides important insight into the general mechanisms behind most learning algorithms. In Sect. 4, a family of learning algorithms, the AQ family for learning classification rules, is presented. The AQ family belongs to the early approaches in machine learning. Next, Sect. 5 presents the basic principles of decision tree learners. Decision tree learners belong to the most influential class of inductive learning algorithms today. Finally, a more recent group of learning systems is presented in Sect. 6, which learn relational concepts within the framework of logic programming. This is a particularly interesting group of learning systems since the framework also allows the incorporation of background knowledge, which may assist in generalisation. Section 7 discusses Association Rules - a technique that comes from the related field of Data mining. Section 8 presents the basic idea of the Naive Bayesian Classifier. While this is a very popular learning technique, the learning result is not well suited for

  4. An Insight to the Modeling of 1 × 1 Rib Loop Formation Process on Circular Weft Knitting Machine using Computer

    NASA Astrophysics Data System (ADS)

    Ray, Sadhan Chandra

    2015-10-01

    The mechanics of single jersey loop formation is well reported in the literature. However, no model of the double jersey loop formation process is available in the accessible international literature. Therefore, it was planned to develop a model of the 1 × 1 rib loop formation process on a dial and cylinder machine using a computer, so that the influence of various input variables on the final loop length as well as on the profile of yarn tension inside the knitting zone (KZ) can be understood. The model provides an insight into the mechanics of the 1 × 1 rib loop formation system on a dial and cylinder machine. Besides, the degree of agreement between predicted and measured values of loop length and cam forces, as well as the theoretical analysis of the model, has justified the acceptability of the model.

  5. Estimation of wrist angle from sonomyography using support vector machine and artificial neural network models.

    PubMed

    Xie, Hong-Bo; Zheng, Yong-Ping; Guo, Jing-Yi; Chen, Xin; Shi, Jun

    2009-04-01

    Sonomyography (SMG) is the term we previously coined for the signal that describes muscle contraction using real-time muscle thickness changes extracted from ultrasound images. In this paper, we used least squares support vector machine (LS-SVM) and artificial neural network (ANN) models to predict dynamic wrist angles from SMG signals. Synchronized wrist angle and SMG signals from the extensor carpi radialis muscles of five normal subjects were recorded during wrist extension and flexion at rates of 15, 22.5, and 30 cycles/min, respectively. An LS-SVM model, together with back-propagation (BP) and radial basis function (RBF) ANNs, was developed and trained using the data sets collected at the rate of 22.5 cycles/min for each subject. The established LS-SVM and ANN models were then used to predict the wrist angles for the remaining data sets obtained at different extension rates. It was found that the wrist angle signals collected at different rates could be accurately predicted by all three methods, based on the values of root mean square difference (RMSD < 0.2) and the correlation coefficient (CC > 0.98), with the performance of the LS-SVM model being significantly better (RMSD < 0.15, CC > 0.99) than those of its counterparts. The results also demonstrated that the models established for the rate of 22.5 cycles/min could be used for prediction from SMG data sets obtained at other extension rates. It was concluded that the wrist angle can be precisely estimated from the thickness changes of the extensor carpi radialis using LS-SVM or ANN models.
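
    A minimal sketch of the regression workflow described above, on synthetic stand-in data: since scikit-learn has no LS-SVM implementation, an RBF-kernel SVR and an MLP (standing in for the BP/RBF ANNs) are trained on one "rate" and scored on another with a normalized RMSD and the correlation coefficient. All variable names and data below are hypothetical.

    ```python
    import numpy as np
    from sklearn.svm import SVR
    from sklearn.neural_network import MLPRegressor
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(0)
    # Synthetic stand-ins for SMG features (muscle thickness change) and wrist angle.
    X_train = rng.normal(size=(500, 1))          # "22.5 cycles/min" session
    y_train = 40 * np.tanh(X_train[:, 0]) + rng.normal(scale=2, size=500)
    X_test = rng.normal(size=(200, 1))           # a different extension rate
    y_test = 40 * np.tanh(X_test[:, 0]) + rng.normal(scale=2, size=200)

    models = {
        "SVR (LS-SVM stand-in)": make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0)),
        "MLP (BP-ANN stand-in)": make_pipeline(StandardScaler(),
                                               MLPRegressor(hidden_layer_sizes=(20,),
                                                            max_iter=2000, random_state=0)),
    }
    for name, model in models.items():
        model.fit(X_train, y_train)
        pred = model.predict(X_test)
        rmsd = np.sqrt(np.mean((pred - y_test) ** 2)) / (y_test.max() - y_test.min())
        cc = np.corrcoef(pred, y_test)[0, 1]
        print(f"{name}: normalized RMSD={rmsd:.3f}, CC={cc:.3f}")
    ```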

  6. Modeling Plan-Related Clinical Complications Using Machine Learning Tools in a Multiplan IMRT Framework

    SciTech Connect

    Zhang, Hao H.; D'Souza, Warren D. Shi Leyuan; Meyer, Robert R.

    2009-08-01

    Purpose: To predict organ-at-risk (OAR) complications as a function of dose-volume (DV) constraint settings without explicit plan computation in a multiplan intensity-modulated radiotherapy (IMRT) framework. Methods and Materials: Several plans were generated by varying the DV constraints (input features) on the OARs (multiplan framework), and the DV levels achieved by the OARs in the plans (plan properties) were modeled as a function of the imposed DV constraint settings. OAR complications were then predicted for each of the plans by using the imposed DV constraints alone (features) or in combination with modeled DV levels (plan properties) as input to machine learning (ML) algorithms. These ML approaches were used to model two OAR complications after head-and-neck and prostate IMRT: xerostomia and Grade 2 rectal bleeding. Two-fold cross-validation was used for model verification, and mean errors are reported. Results: Errors for modeling the achieved DV values as a function of constraint settings were 0-6%. In the head-and-neck case, the mean absolute prediction error of the saliva flow rate normalized to the pretreatment saliva flow rate was 0.42% with a 95% confidence interval of (0.41-0.43%). In the prostate case, an average prediction accuracy of 97.04% with a 95% confidence interval of (96.67-97.41%) was achieved for Grade 2 rectal bleeding complications. Conclusions: ML can be used for predicting OAR complications during treatment planning allowing for alternative DV constraint settings to be assessed within the planning framework.
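
    A rough illustration of the study design on synthetic data: a complication label is predicted from the imposed DV constraint settings alone, with two-fold cross-validation as in the paper. The features, labels, and classifier choice below are hypothetical placeholders, not the authors' models.

    ```python
    import numpy as np
    from sklearn.model_selection import cross_val_score, KFold
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(1)
    # Columns: hypothetical DV constraint settings (e.g. parotid V30 limit, rectum V70 limit, ...).
    dv_constraints = rng.uniform(0.0, 1.0, size=(200, 4))
    # Synthetic complication label loosely tied to the imposed constraint settings.
    complication = (dv_constraints @ np.array([0.8, 0.5, 0.2, 0.1])
                    + rng.normal(scale=0.2, size=200) > 0.9).astype(int)

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    scores = cross_val_score(clf, dv_constraints, complication,
                             cv=KFold(n_splits=2, shuffle=True, random_state=0))
    print("two-fold CV accuracy:", scores.mean())
    ```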

  7. Hidden Markov Model and Support Vector Machine based decoding of finger movements using Electrocorticography

    PubMed Central

    Wissel, Tobias; Pfeiffer, Tim; Frysch, Robert; Knight, Robert T.; Chang, Edward F.; Hinrichs, Hermann; Rieger, Jochem W.; Rose, Georg

    2013-01-01

    Objective Support Vector Machines (SVM) have developed into a gold standard for accurate classification in Brain-Computer Interfaces (BCI). The choice of the most appropriate classifier for a particular application depends on several characteristics in addition to decoding accuracy. Here we investigate the implementation of Hidden Markov Models (HMM) for online BCIs and discuss strategies to improve their performance. Approach We compare the SVM, serving as a reference, and HMMs for classifying discrete finger movements obtained from the electrocorticograms of four subjects performing a finger tapping experiment. The classifier decisions are based on a subset of low-frequency time domain and high gamma oscillation features. Main results We show that differences in decoding performance between the two approaches are due mainly to the way features are extracted and selected, and depend less on the classifier. An additional gain in HMM performance of up to 6% was obtained by introducing model constraints. Comparable accuracies of up to 90% were achieved with both SVM and HMM, with the high gamma cortical response providing the most important decoding information for both techniques. Significance We discuss technical HMM characteristics and adaptations in the context of the presented data as well as for general BCI applications. Our findings suggest that HMMs and their characteristics are promising for efficient online brain-computer interfaces. PMID:24045504
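
    A hedged sketch of the two decoder families on synthetic feature sequences: a linear SVM classifying each time window independently, and one Gaussian HMM per movement class with classification by maximum log-likelihood. It assumes the hmmlearn package; the data and feature dimensions are stand-ins for the ECoG features, not the study's pipeline.

    ```python
    import numpy as np
    from sklearn.svm import SVC
    from hmmlearn.hmm import GaussianHMM

    rng = np.random.default_rng(0)
    def make_sequences(mean, n_seq=30, length=20, dim=4):
        return [mean + rng.normal(scale=1.0, size=(length, dim)) for _ in range(n_seq)]

    classes = {0: make_sequences(0.0), 1: make_sequences(1.5)}   # two "fingers"

    # SVM baseline: classify each time window independently.
    X = np.vstack([np.vstack(seqs) for seqs in classes.values()])
    y = np.concatenate([np.full(len(seqs) * 20, c) for c, seqs in classes.items()])
    svm_acc = SVC(kernel="linear").fit(X, y).score(X, y)

    # HMM decoder: fit one HMM per class, pick the class with the higher log-likelihood.
    hmms = {}
    for c, seqs in classes.items():
        data, lengths = np.vstack(seqs), [len(s) for s in seqs]
        hmms[c] = GaussianHMM(n_components=3, covariance_type="diag",
                              n_iter=30, random_state=0).fit(data, lengths)

    correct = sum(max(hmms, key=lambda c: hmms[c].score(seq)) == true_c
                  for true_c, seqs in classes.items() for seq in seqs)
    print("SVM window accuracy (resubstitution):", round(svm_acc, 3))
    print("HMM sequence accuracy (resubstitution):", correct / 60)
    ```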

  8. Fuzzy logic controller for hemodialysis machine based on human body model.

    PubMed

    Nafisi, Vahid Reza; Eghbal, Manouchehr; Motlagh, Mohammad Reza Jahed; Yavari, Fatemeh

    2011-01-01

    Fuzzy controllers are being used in various control schemes. The aim of this study is to adjust the hemodialysis machine parameters by utilizing a fuzzy logic controller (FLC) so that the patient's hemodynamic condition remains stable during hemodialysis treatment. For this purpose, a comprehensive mathematical model of the arterial pressure response during hemodialysis, including hemodynamic, osmotic, and regulatory phenomena, has been used. The multi-input multi-output (MIMO) fuzzy logic controller receives three parameters from the model (heart rate, arterial blood pressure, and relative blood volume) as input. According to the changes in the controller input values and its rule base, the outputs change so that the patient's hemodynamic condition remains stable. The results of the simulations illustrate that applying the controller can improve the stability of a patient's hemodynamic condition during hemodialysis treatment, and it also decreases the treatment time. Furthermore, by using fuzzy logic, there is no need to have prior knowledge about the system under control, and the FLC is compatible with different patients.
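
    To make the control idea concrete, here is a minimal fuzzy-logic controller sketch with triangular membership functions and a two-rule base. The input ranges, rules, and output (an ultrafiltration-rate adjustment) are hypothetical illustrations, not the controller from the paper.

    ```python
    import numpy as np

    def tri(x, a, b, c):
        """Triangular membership function with support [a, c] and peak at b."""
        return np.maximum(np.minimum((x - a) / (b - a + 1e-9),
                                     (c - x) / (c - b + 1e-9)), 0.0)

    def control(pressure_mmHg, rel_blood_volume):
        # Fuzzify the inputs (membership degrees in [0, 1]); ranges are hypothetical.
        p_low, p_ok = tri(pressure_mmHg, 60, 75, 90), tri(pressure_mmHg, 85, 100, 115)
        v_low, v_ok = tri(rel_blood_volume, 0.80, 0.85, 0.92), tri(rel_blood_volume, 0.90, 0.97, 1.05)

        # Toy rule base: IF pressure low OR volume low THEN decrease UF rate;
        #                IF pressure ok AND volume ok THEN keep UF rate.
        decrease = max(p_low, v_low)
        keep = min(p_ok, v_ok)

        # Defuzzify with a weighted average of the rule output centers
        # (-0.3: reduce ultrafiltration rate by 30 %, 0.0: no change).
        centers = np.array([-0.3, 0.0])
        weights = np.array([decrease, keep])
        return float((weights @ centers) / (weights.sum() + 1e-9))

    print(control(70, 0.84))   # hypotensive, low volume -> negative adjustment
    print(control(100, 0.98))  # stable -> adjustment near zero
    ```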

  9. Assessment of machine learning reliability methods for quantifying the applicability domain of QSAR regression models.

    PubMed

    Toplak, Marko; Močnik, Rok; Polajnar, Matija; Bosnić, Zoran; Carlsson, Lars; Hasselgren, Catrin; Demšar, Janez; Boyer, Scott; Zupan, Blaž; Stålring, Jonna

    2014-02-24

    The vastness of chemical space and the relatively small coverage by experimental data recording molecular properties require us to identify subspaces, or domains, for which we can confidently apply QSAR models. The prediction of QSAR models in these domains is reliable, and potential subsequent investigations of such compounds would find that the predictions closely match the experimental values. Standard approaches in QSAR assume that predictions are more reliable for compounds that are "similar" to those in subspaces with denser experimental data. Here, we report on a study of an alternative set of techniques recently proposed in the machine learning community. These methods quantify prediction confidence through estimation of the prediction error at the point of interest. Our study includes 20 public QSAR data sets with continuous response and assesses the quality of 10 reliability scoring methods by observing their correlation with prediction error. We show that these new alternative approaches can outperform standard reliability scores that rely only on similarity to compounds in the training set. The results also indicate that the quality of reliability scoring methods is sensitive to data set characteristics and to the regression method used in QSAR. We demonstrate that at the cost of increased computational complexity these dependencies can be leveraged by integration of scores from various reliability estimation approaches. The reliability estimation techniques described in this paper have been implemented in an open source add-on package ( https://bitbucket.org/biolab/orange-reliability ) to the Orange data mining suite. PMID:24490838
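
    The evaluation idea can be sketched as follows: compute a simple reliability score for each test compound (here, just the mean distance to its nearest training neighbors) and check how strongly it correlates with the actual prediction error. The data are synthetic and this is only one naive scoring method; the paper compares ten more sophisticated estimators.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.neighbors import NearestNeighbors
    from scipy.stats import spearmanr

    rng = np.random.default_rng(0)
    X = rng.normal(size=(400, 8))                      # stand-in molecular descriptors
    y = X[:, 0] ** 2 + X[:, 1] + rng.normal(scale=0.3, size=400)
    X_train, X_test, y_train, y_test = X[:300], X[300:], y[:300], y[300:]

    model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
    errors = np.abs(model.predict(X_test) - y_test)

    # Reliability score: mean distance to the 5 nearest training compounds
    # (larger distance = presumably less reliable prediction).
    nn = NearestNeighbors(n_neighbors=5).fit(X_train)
    dist, _ = nn.kneighbors(X_test)
    score = dist.mean(axis=1)

    rho, _ = spearmanr(score, errors)
    print("rank correlation between reliability score and |error|:", round(rho, 3))
    ```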

  10. Manifest: A computer program for 2-D flow modeling in Stirling machines

    NASA Technical Reports Server (NTRS)

    Gedeon, David

    1989-01-01

    A computer program named Manifest is discussed. Manifest is a program one might want to use to model the fluid dynamics in the manifolds commonly found between the heat exchangers and regenerators of Stirling machines; but not just in the manifolds - in the regenerators as well. And in all sorts of other places too, such as in heaters or coolers, or perhaps even in cylinder spaces. There are probably non-Stirling uses for Manifest also. In broad strokes, Manifest will: (1) model oscillating internal compressible laminar fluid flow in a wide range of two-dimensional regions, either filled with porous materials or empty; (2) present a graphics-based user-friendly interface, allowing easy selection and modification of region shape and boundary condition specification; (3) run on a personal computer, or optionally (in the case of its number-crunching module) on a supercomputer; and (4) allow interactive examination of the solution output so the user can view vector plots of flow velocity, contour plots of pressure and temperature at various locations and tabulate energy-related integrals of interest.

  11. Hybrid polylingual object model: an efficient and seamless integration of Java and native components on the Dalvik virtual machine.

    PubMed

    Huang, Yukun; Chen, Rong; Wei, Jingbo; Pei, Xilong; Cao, Jing; Prakash Jayaraman, Prem; Ranjan, Rajiv

    2014-01-01

    JNI in the Android platform is often observed to have low efficiency and high coding complexity. Although many researchers have investigated the JNI mechanism, few of them solve the efficiency and the complexity problems of JNI in the Android platform simultaneously. In this paper, a hybrid polylingual object (HPO) model is proposed to allow a CAR object to be accessed as a Java object, and vice versa, in the Dalvik virtual machine. It is an acceptable substitute for JNI to reuse the CAR-compliant components in Android applications in a seamless and efficient way. The metadata injection mechanism is designed to support the automatic mapping and reflection between CAR objects and Java objects. A prototype virtual machine, called HPO-Dalvik, is implemented by extending the Dalvik virtual machine to support the HPO model. Lifespan management, garbage collection, and data type transformation of HPO objects are also handled in the HPO-Dalvik virtual machine automatically. The experimental results show that the HPO model outperforms standard JNI, with lower overhead on the native side and better execution performance, since no JNI bridging code is required. PMID:25110745

  12. Hybrid polylingual object model: an efficient and seamless integration of Java and native components on the Dalvik virtual machine.

    PubMed

    Huang, Yukun; Chen, Rong; Wei, Jingbo; Pei, Xilong; Cao, Jing; Prakash Jayaraman, Prem; Ranjan, Rajiv

    2014-01-01

    JNI in the Android platform is often observed to have low efficiency and high coding complexity. Although many researchers have investigated the JNI mechanism, few of them solve the efficiency and the complexity problems of JNI in the Android platform simultaneously. In this paper, a hybrid polylingual object (HPO) model is proposed to allow a CAR object to be accessed as a Java object, and vice versa, in the Dalvik virtual machine. It is an acceptable substitute for JNI to reuse the CAR-compliant components in Android applications in a seamless and efficient way. The metadata injection mechanism is designed to support the automatic mapping and reflection between CAR objects and Java objects. A prototype virtual machine, called HPO-Dalvik, is implemented by extending the Dalvik virtual machine to support the HPO model. Lifespan management, garbage collection, and data type transformation of HPO objects are also handled in the HPO-Dalvik virtual machine automatically. The experimental results show that the HPO model outperforms standard JNI, with lower overhead on the native side and better execution performance, since no JNI bridging code is required.

  13. Object Classification via Planar Abstraction

    NASA Astrophysics Data System (ADS)

    Oesau, Sven; Lafarge, Florent; Alliez, Pierre

    2016-06-01

    We present a supervised machine learning approach for classification of objects from sampled point data. The main idea consists in first abstracting the input object into planar parts at several scales and then discriminating between the different classes of objects solely through features derived from these planar shapes. Abstracting into planar shapes provides a means to both reduce the computational complexity and improve robustness to defects inherent to the acquisition process. Measuring statistical properties and relationships between planar shapes offers invariance to scale and orientation. A random forest is then used for solving the multiclass classification problem. We demonstrate the potential of our approach on a set of indoor objects from the Princeton shape benchmark and on objects acquired from indoor scenes and compare the performance of our method with other point-based shape descriptors.
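
    A toy sketch of the final classification stage, assuming the per-object features (counts, areas, and pairwise relations of planar shapes) have already been computed; the synthetic feature values below are placeholders.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n_per_class, n_features, n_classes = 100, 6, 3
    X = np.vstack([rng.normal(loc=c, scale=1.0, size=(n_per_class, n_features))
                   for c in range(n_classes)])           # planar-shape statistics (synthetic)
    y = np.repeat(np.arange(n_classes), n_per_class)     # object class labels

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
    print("held-out accuracy:", clf.score(X_te, y_te))
    ```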

  14. Multiscale Modeling of Biological Functions: From Enzymes to Molecular Machines (Nobel Lecture)

    PubMed Central

    Warshel, Arieh

    2016-01-01

    A detailed understanding of the action of biological molecules is a pre-requisite for rational advances in health sciences and related fields. Here, the challenge is to move from available structural information to a clear understanding of the underlying function of the system. In light of the complexity of macromolecular complexes, it is essential to use computer simulations to describe how the molecular forces are related to a given function. However, using a full and reliable quantum mechanical representation of large molecular systems has been practically impossible. The solution to this (and related) problems has emerged from the realization that large systems can be spatially divided into a region where the quantum mechanical description is essential (e.g. a region where bonds are being broken), with the remainder of the system being represented on a simpler level by empirical force fields. This idea has been particularly effective in the development of the combined quantum mechanics/molecular mechanics (QM/MM) models. Here, the coupling between the electrostatic effects of the quantum and classical subsystems has been a key to the advances in describing the functions of enzymes and other biological molecules. The same idea of representing complex systems in different resolutions in both time and length scales has been found to be very useful in modeling the action of complex systems. In such cases, starting with coarse grained (CG) representations that were originally found to be very useful in simulating protein folding, and augmenting them with a focus on electrostatic energies, has led to models that are particularly effective in probing the action of molecular machines. The same multiscale idea is likely to play a major role in modeling of even more complex systems, including cells and collections of cells. PMID:25060243

  15. Multiscale modeling of biological functions: from enzymes to molecular machines (Nobel Lecture).

    PubMed

    Warshel, Arieh

    2014-09-15

    A detailed understanding of the action of biological molecules is a pre-requisite for rational advances in health sciences and related fields. Here, the challenge is to move from available structural information to a clear understanding of the underlying function of the system. In light of the complexity of macromolecular complexes, it is essential to use computer simulations to describe how the molecular forces are related to a given function. However, using a full and reliable quantum mechanical representation of large molecular systems has been practically impossible. The solution to this (and related) problems has emerged from the realization that large systems can be spatially divided into a region where the quantum mechanical description is essential (e.g. a region where bonds are being broken), with the remainder of the system being represented on a simpler level by empirical force fields. This idea has been particularly effective in the development of the combined quantum mechanics/molecular mechanics (QM/MM) models. Here, the coupling between the electrostatic effects of the quantum and classical subsystems has been a key to the advances in describing the functions of enzymes and other biological molecules. The same idea of representing complex systems in different resolutions in both time and length scales has been found to be very useful in modeling the action of complex systems. In such cases, starting with coarse grained (CG) representations that were originally found to be very useful in simulating protein folding, and augmenting them with a focus on electrostatic energies, has led to models that are particularly effective in probing the action of molecular machines. The same multiscale idea is likely to play a major role in modeling of even more complex systems, including cells and collections of cells.

  16. Contaminant dispersion prediction and source estimation with integrated Gaussian-machine learning network model for point source emission in atmosphere.

    PubMed

    Ma, Denglong; Zhang, Zaoxiao

    2016-07-01

    A gas dispersion model is important for predicting gas concentrations when contaminant gas leakage occurs. Intelligent network models such as radial basis function (RBF) and back propagation (BP) neural networks and the support vector machine (SVM) model can be used for gas dispersion prediction. However, the prediction results from these network models, which take many inputs based on the original monitoring parameters, are not in good agreement with the experimental data. Therefore, a new series of machine learning algorithm (MLA) models combining the classic Gaussian model with MLA algorithms is presented. The prediction results from the new models are greatly improved. Among these models, the Gaussian-SVM model performs best, and its computation time is close to that of the classic Gaussian dispersion model. Finally, the Gaussian-MLA models were applied to identifying the emission source parameters with the particle swarm optimization (PSO) method. The estimation performance of PSO with Gaussian-MLA is better than that with the Gaussian model, the Lagrangian stochastic (LS) dispersion model, and network models based on the original monitoring parameters. Hence, the new prediction model based on Gaussian-MLA is potentially a good method for predicting contaminant gas dispersion as well as a good forward model in the emission source parameter identification problem. PMID:27035273
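
    A hedged sketch of the hybrid Gaussian-MLA idea on synthetic data: the classic Gaussian plume model provides the physics, and an SVM learns only a correction to it. The dispersion coefficients, source parameters, and "measured" data below are invented for illustration.

    ```python
    import numpy as np
    from sklearn.svm import SVR
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    def gaussian_plume(q, u, x, y, z, h, sy, sz):
        """Ground-reflected Gaussian plume concentration at receptor (x, y, z)."""
        return (q / (2 * np.pi * u * sy * sz)
                * np.exp(-y**2 / (2 * sy**2))
                * (np.exp(-(z - h)**2 / (2 * sz**2)) + np.exp(-(z + h)**2 / (2 * sz**2))))

    rng = np.random.default_rng(0)
    n = 600
    q, h = 1e6, 10.0                                   # source strength and stack height (arbitrary)
    u = rng.uniform(1, 8, n)                           # wind speed
    x = rng.uniform(50, 500, n)                        # downwind distance
    y = rng.uniform(-30, 30, n)                        # crosswind offset
    sy, sz = 0.08 * x, 0.06 * x                        # crude dispersion coefficients
    c_gauss = gaussian_plume(q, u, x, y, 1.5, h, sy, sz)
    c_true = c_gauss * (1 + 0.3 * np.sin(x / 100))     # synthetic "measured" concentrations

    # Learn a multiplicative correction to the physics model rather than the full mapping.
    F = np.column_stack([u, x, y])
    ratio = c_true / c_gauss
    corr = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0)).fit(F[:500], ratio[:500])

    c_pred = c_gauss[500:] * corr.predict(F[500:])
    ss_res = np.sum((c_pred - c_true[500:]) ** 2)
    ss_tot = np.sum((c_true[500:] - c_true[500:].mean()) ** 2)
    print("hybrid Gaussian-SVM R^2:", round(1 - ss_res / ss_tot, 3))
    ```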

  17. Working with Simple Machines

    ERIC Educational Resources Information Center

    Norbury, John W.

    2006-01-01

    A set of examples is provided that illustrate the use of work as applied to simple machines. The ramp, pulley, lever and hydraulic press are common experiences in the life of a student, and their theoretical analysis therefore makes the abstract concept of work more real. The mechanical advantage of each of these systems is also discussed so that…
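
    A short worked example in the spirit of the abstract, with illustrative numbers: an ideal lever whose effort arm is four times the load arm.

    ```latex
    % Worked example (illustrative numbers): an ideal lever with effort arm
    % d_e = 2 m and load arm d_l = 0.5 m lifting a 400 N load.
    \[
      \text{MA} = \frac{d_e}{d_l} = \frac{2\,\text{m}}{0.5\,\text{m}} = 4,
      \qquad
      F_e = \frac{F_l}{\text{MA}} = \frac{400\,\text{N}}{4} = 100\,\text{N}.
    \]
    % Work balance for the ideal machine: the effort moves four times as far
    % as the load, so input and output work are equal.
    \[
      W_\text{in} = F_e \, s_e = 100\,\text{N} \times 0.4\,\text{m} = 40\,\text{J}
      = F_l \, s_l = 400\,\text{N} \times 0.1\,\text{m} = W_\text{out}.
    \]
    ```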

  18. Machine Learning Based Multi-Physical-Model Blending for Enhancing Renewable Energy Forecast -- Improvement via Situation Dependent Error Correction

    SciTech Connect

    Lu, Siyuan; Hwang, Youngdeok; Khabibrakhmanov, Ildar; Marianno, Fernando J.; Shao, Xiaoyan; Zhang, Jie; Hodge, Bri-Mathias; Hamann, Hendrik F.

    2015-07-15

    With increasing penetration of solar and wind energy in the total energy supply mix, the pressing need for accurate energy forecasting has become well recognized. Here we report the development of a machine-learning based model blending approach for statistically combining multiple meteorological models to improve the accuracy of solar/wind power forecasts. Importantly, we demonstrate that, in addition to the parameters to be predicted (such as solar irradiance and power), including additional atmospheric state parameters that collectively define the weather situation as machine learning input provides further enhanced accuracy for the blended result. Functional analysis of variance shows that the error of each individual model depends substantially on the weather situation. The machine-learning approach effectively reduces such situation-dependent error and thus produces more accurate results compared to conventional multi-model ensemble approaches based on simplistic equally or unequally weighted model averaging. Validation over an extended period of time shows over 30% improvement in solar irradiance/power forecast accuracy compared to forecasts based on the best individual model.
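
    A sketch of situation-dependent blending on synthetic data: the learner sees the individual model forecasts plus an atmospheric state variable (here a single cloud-cover proxy), so the effective blend weights can vary with the weather situation. Variables and error structure are invented for illustration.

    ```python
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    rng = np.random.default_rng(0)
    n = 2000
    cloud = rng.uniform(0, 1, n)                       # weather-situation variable
    truth = 800 * (1 - 0.7 * cloud) + rng.normal(scale=20, size=n)        # irradiance, W/m^2
    model_a = truth + 60 * cloud + rng.normal(scale=30, size=n)           # biased when cloudy
    model_b = truth - 50 * (1 - cloud) + rng.normal(scale=30, size=n)     # biased when clear

    X = np.column_stack([model_a, model_b, cloud])     # forecasts + situation as inputs
    blend = GradientBoostingRegressor(random_state=0).fit(X[:1500], truth[:1500])

    simple_avg = 0.5 * (model_a[1500:] + model_b[1500:])
    rmse = lambda p: np.sqrt(np.mean((p - truth[1500:]) ** 2))
    print("equal-weight average RMSE:", round(rmse(simple_avg), 1))
    print("ML blend RMSE:            ", round(rmse(blend.predict(X[1500:])), 1))
    ```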

  19. Diabetic peripheral neuropathy class prediction by multicategory support vector machine model: a cross-sectional study

    PubMed Central

    2016-01-01

    OBJECTIVES Diabetes is increasing in worldwide prevalence, toward epidemic levels. Diabetic neuropathy, one of the most common complications of diabetes mellitus, is a serious condition that can lead to amputation. This study used a multicategory support vector machine (MSVM) to predict diabetic peripheral neuropathy severity classified into four categories using patients’ demographic characteristics and clinical features. METHODS In this study, the data were collected at the Diabetes Center of Hamadan in Iran. Patients were enrolled by the convenience sampling method. Six hundred patients were recruited. After obtaining informed consent, a questionnaire collecting general information and a neuropathy disability score (NDS) questionnaire were administered. The NDS was used to classify the severity of the disease. We used MSVM with both one-against-all and one-against-one methods and three kernel functions, radial basis function (RBF), linear, and polynomial, to predict the class of disease with an unbalanced dataset. The synthetic minority class oversampling technique algorithm was used to improve model performance. To compare the performance of the models, the mean of accuracy was used. RESULTS For predicting diabetic neuropathy, a classifier built from a balanced dataset and the RBF kernel function with a one-against-one strategy predicted the class to which a patient belonged with about 76% accuracy. CONCLUSIONS The results of this study indicate that, in terms of overall classification accuracy, the MSVM model based on a balanced dataset can be useful for predicting the severity of diabetic neuropathy, and it should be further investigated for the prediction of other diseases. PMID:27032459
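
    A rough sketch of the best-performing setup on synthetic data: SMOTE oversampling of the minority severity classes followed by an RBF-kernel SVM (one-against-one, which is SVC's native multiclass scheme). It assumes the imbalanced-learn package; the clinical features are hypothetical.

    ```python
    import numpy as np
    from imblearn.over_sampling import SMOTE
    from sklearn.svm import SVC
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    # Hypothetical clinical features for four neuropathy severity classes,
    # with a deliberately unbalanced class distribution.
    sizes = [300, 150, 100, 50]
    X = np.vstack([rng.normal(loc=k, scale=1.5, size=(n, 5)) for k, n in enumerate(sizes)])
    y = np.concatenate([np.full(n, k) for k, n in enumerate(sizes)])

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
    X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)   # balance the training set

    clf = SVC(kernel="rbf", decision_function_shape="ovo", gamma="scale")
    clf.fit(X_bal, y_bal)
    print("test accuracy:", round(clf.score(X_te, y_te), 3))
    ```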

  20. A Comparison of Hourly Typhoon Rainfall Forecasting Models Based on Support Vector Machines and Random Forests with Different Predictor Sets

    NASA Astrophysics Data System (ADS)

    Lin, Kun-Hsiang; Tseng, Hung-Wei; Kuo, Chen-Min; Yang, Tao-Chang; Yu, Pao-Shan

    2016-04-01

    Typhoons with heavy rainfall and strong wind often cause severe floods and losses in Taiwan, which motivates the development of rainfall forecasting models as part of an early warning system. Thus, this study aims to develop rainfall forecasting models based on two machine learning methods, support vector machines (SVMs) and random forests (RFs), and to investigate the performance of the models with different predictor sets in order to find the optimal predictor set for forecasting. Four predictor sets were used to construct models for 1- to 6-hour-ahead rainfall forecasting: (1) antecedent rainfalls, (2) antecedent rainfalls and typhoon characteristics, (3) antecedent rainfalls and meteorological factors, and (4) antecedent rainfalls, typhoon characteristics and meteorological factors. An application to three rainfall stations in the Yilan River basin, northeastern Taiwan, was conducted. Firstly, the performance of the SVM-based forecasting model with predictor set #1 was analyzed. The results show that the accuracy of the models for 2- to 6-hour-ahead forecasting decreases rapidly compared with the accuracy of the model for 1-hour-ahead forecasting, which is acceptable. To improve model performance, each predictor set was further examined in the SVM-based forecasting model. The results reveal that the SVM-based model using predictor set #4 as input variables performs better than the other sets, and a significant improvement in model performance is found, especially for long lead time forecasting. Lastly, the performance of the SVM-based model using predictor set #4 as input variables was compared with that of the RF-based model using the same predictor set. It was found that the RF-based model is superior to the SVM-based model in hourly typhoon rainfall forecasting. Keywords: hourly typhoon rainfall forecasting, predictor selection, support vector machines, random forests
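
    A compact sketch of the comparison on a synthetic series: lagged-rainfall predictors (analogous to set #1) plus one meteorological variable (loosely analogous to set #4) feed both an SVM regressor and a random forest for 1-hour-ahead forecasting. Lag counts and variables are illustrative only.

    ```python
    import numpy as np
    from sklearn.svm import SVR
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(0)
    t = np.arange(3000)
    rain = np.maximum(0, 5 * np.sin(t / 20) + rng.gamma(2.0, 1.0, size=t.size) - 2)
    pressure = 1000 - 3 * np.sin(t / 20) + rng.normal(scale=0.5, size=t.size)

    lags = 6
    rows = range(lags, t.size - 1)
    X = np.array([np.r_[rain[i - lags:i], pressure[i]] for i in rows])  # lagged predictors
    y = np.array([rain[i + 1] for i in rows])                           # 1 hour ahead

    split = 2000
    models = {
        "SVM": make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0)),
        "RF":  RandomForestRegressor(n_estimators=300, random_state=0),
    }
    for name, m in models.items():
        m.fit(X[:split], y[:split])
        rmse = np.sqrt(np.mean((m.predict(X[split:]) - y[split:]) ** 2))
        print(f"{name} 1-h-ahead RMSE: {rmse:.2f} mm")
    ```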

  1. Aerodynamic Properties Analysis of Rapid Prototyped Models Versus Conventional Machined Models

    NASA Technical Reports Server (NTRS)

    Springer, A.; Cooper, K.

    1998-01-01

    Initial studies of the aerodynamic characteristics of proposed launch vehicles can be made more accurately if lower cost, high fidelity aerodynamic models are available for wind tunnel testing early in the design phase. This paper discusses the results of a study undertaken at NASA's Marshall Space Flight Center to determine if four rapid prototyping methods using a variety of materials are suitable for the design and manufacturing of high speed wind tunnel models in direct testing applications. It also gives an analysis of whether these materials and processes are of sufficient strength and fidelity to withstand the testing environment. In addition to test data, costs and turn-around times for the various models are given. Based on the results of this study, it can be concluded that rapid prototyping models show promise in limited direct application for preliminary aerodynamic development studies at subsonic, transonic, and supersonic speeds.

  2. Support vector machines for seizure detection in an animal model of chronic epilepsy

    NASA Astrophysics Data System (ADS)

    Nandan, Manu; Talathi, Sachin S.; Myers, Stephen; Ditto, William L.; Khargonekar, Pramod P.; Carney, Paul R.

    2010-06-01

    We compare the performance of three support vector machine (SVM) types: weighted SVM, one-class SVM and support vector data description (SVDD) for the application of seizure detection in an animal model of chronic epilepsy. Large EEG datasets (273 h and 91 h respectively, with a sampling rate of 1 kHz) from two groups of rats with chronic epilepsy were used in this study. For each of these EEG datasets, we extracted three energy-based seizure detection features: mean energy, mean curve length and wavelet energy. Using these features we performed twofold cross-validation to obtain the performance statistics: sensitivity (S), specificity (K) and detection latency (τ) as a function of control parameters for the given SVM. Optimal control parameters for each SVM type that produced the best seizure detection statistics were then identified using two independent strategies. Performance of each SVM type is ranked based on the overall seizure detection performance through an optimality index metric (O). We found that SVDD not only performed better than the other SVM types in terms of highest value of the mean optimality index metric (Ō) but also gave a more reliable performance across the two EEG datasets.
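
    A sketch of the feature extraction and one-class training idea on synthetic signals: mean energy and mean curve length are computed per 1-second window, a one-class SVM (closely related to SVDD) is trained on normal windows only, and seizure-like windows are flagged as outliers. Parameters and data are illustrative.

    ```python
    import numpy as np
    from sklearn.svm import OneClassSVM
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline

    fs, win = 1000, 1000                              # 1 kHz sampling, 1-second windows
    rng = np.random.default_rng(0)

    def features(sig):
        w = sig.reshape(-1, win)
        energy = np.mean(w ** 2, axis=1)                           # mean energy
        curve_len = np.mean(np.abs(np.diff(w, axis=1)), axis=1)    # mean curve length
        return np.column_stack([energy, curve_len])

    normal = rng.normal(scale=1.0, size=600 * win)    # interictal background (synthetic)
    seizure = (4 * np.sin(2 * np.pi * 8 * np.arange(20 * win) / fs)
               + rng.normal(scale=1.0, size=20 * win))  # high-amplitude rhythmic burst

    model = make_pipeline(StandardScaler(), OneClassSVM(kernel="rbf", nu=0.05))
    model.fit(features(normal))

    flags = model.predict(features(seizure))          # -1 marks detected outliers
    print("windows flagged as seizure-like:", int(np.sum(flags == -1)), "of", flags.size)
    ```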

  3. Fullrmc, a rigid body Reverse Monte Carlo modeling package enabled with machine learning and artificial intelligence.

    PubMed

    Aoun, Bachir

    2016-05-01

    A new Reverse Monte Carlo (RMC) package, "fullrmc", for atomic or rigid body and molecular, amorphous, or crystalline materials is presented. fullrmc's main purpose is to provide a fully modular, fast and flexible software package, thoroughly documented, enabled for complex molecules, written in a modern programming language (python, cython, C and C++ when performance is needed) and complying with modern programming practices. fullrmc's approach to solving an atomic or molecular structure is different from existing RMC algorithms and software. In a nutshell, traditional RMC methods and software randomly adjust atom positions until the whole system has the greatest consistency with a set of experimental data. In contrast, fullrmc applies smart moves endorsed with reinforcement machine learning to groups of atoms. While fullrmc allows running traditional RMC modeling, the uniqueness of this approach resides in its ability to customize the grouping of atoms in any convenient way with no additional programming effort and to apply smarter and more physically meaningful moves to the defined groups of atoms. In addition, fullrmc provides a unique way, at almost no additional computational cost, to recur a group's selection, allowing the system to escape local minima by refining a group's position or by exploring, through and beyond disallowed positions and energy barriers, the unrestricted three-dimensional space around a group. PMID:26800289

  4. Advancing brain-machine interfaces: moving beyond linear state space models

    PubMed Central

    Rouse, Adam G.; Schieber, Marc H.

    2015-01-01

    Advances in recent years have dramatically improved output control by Brain-Machine Interfaces (BMIs). Such devices nevertheless remain robotic and limited in their movements compared to normal human motor performance. Most current BMIs rely on transforming recorded neural activity to a linear state space composed of a set number of fixed degrees of freedom. Here we consider a variety of ways in which BMI design might be advanced further by applying non-linear dynamics observed in normal motor behavior. We consider (i) the dynamic range and precision of natural movements, (ii) differences between cortical activity and actual body movement, (iii) kinematic and muscular synergies, and (iv) the implications of large neuronal populations. We advance the hypothesis that a given population of recorded neurons may transmit more useful information than can be captured by a single, linear model across all movement phases and contexts. We argue that incorporating these various non-linear characteristics will be an important next step in advancing BMIs to more closely match natural motor performance. PMID:26283932

  5. Fullrmc, a rigid body Reverse Monte Carlo modeling package enabled with machine learning and artificial intelligence.

    PubMed

    Aoun, Bachir

    2016-05-01

    A new Reverse Monte Carlo (RMC) package, "fullrmc", for atomic or rigid body and molecular, amorphous, or crystalline materials is presented. fullrmc's main purpose is to provide a fully modular, fast and flexible software package, thoroughly documented, enabled for complex molecules, written in a modern programming language (python, cython, C and C++ when performance is needed) and complying with modern programming practices. fullrmc's approach to solving an atomic or molecular structure is different from existing RMC algorithms and software. In a nutshell, traditional RMC methods and software randomly adjust atom positions until the whole system has the greatest consistency with a set of experimental data. In contrast, fullrmc applies smart moves endorsed with reinforcement machine learning to groups of atoms. While fullrmc allows running traditional RMC modeling, the uniqueness of this approach resides in its ability to customize the grouping of atoms in any convenient way with no additional programming effort and to apply smarter and more physically meaningful moves to the defined groups of atoms. In addition, fullrmc provides a unique way, at almost no additional computational cost, to recur a group's selection, allowing the system to escape local minima by refining a group's position or by exploring, through and beyond disallowed positions and energy barriers, the unrestricted three-dimensional space around a group.

  6. Global-scale assessment of groundwater depletion and related groundwater abstractions: Combining hydrological modeling with information from well observations and GRACE satellites

    NASA Astrophysics Data System (ADS)

    Döll, Petra; Müller Schmied, Hannes; Schuh, Carina; Portmann, Felix T.; Eicker, Annette

    2014-07-01

    Groundwater depletion (GWD) compromises crop production in major global agricultural areas and has negative ecological consequences. To derive GWD at the grid cell, country, and global levels, we applied a new version of the global hydrological model WaterGAP that simulates not only net groundwater abstractions and groundwater recharge from soils but also groundwater recharge from surface water bodies in dry regions. A large number of independent estimates of GWD as well as total water storage (TWS) trends determined from GRACE satellite data by three analysis centers were compared to model results. GWD and TWS trends are simulated best assuming that farmers in GWD areas irrigate at 70% of optimal water requirement. India, United States, Iran, Saudi Arabia, and China had the highest GWD rates in the first decade of the 21st century. On the Arabian Peninsula, in Libya, Egypt, Mali, Mozambique, and Mongolia, at least 30% of the abstracted groundwater was taken from nonrenewable groundwater during this time period. The rate of global GWD has likely more than doubled since the period 1960-2000. Estimated GWD of 113 km3/yr during 2000-2009, corresponding to a sea level rise of 0.31 mm/yr, is much smaller than most previous estimates. About 15% of the globally abstracted groundwater was taken from nonrenewable groundwater during this period. To monitor recent temporal dynamics of GWD and related water abstractions, GRACE data are best evaluated with a hydrological model that, like WaterGAP, simulates the impact of abstractions on water storage, but the low spatial resolution of GRACE remains a challenge.

  7. Implications of the Turing machine model of computation for processor and programming language design

    NASA Astrophysics Data System (ADS)

    Hunter, Geoffrey

    2004-01-01

    A computational process is classified according to the theoretical model that is capable of executing it; computational processes that require a non-predeterminable amount of intermediate storage for their execution are Turing-machine (TM) processes, while those whose storage is predeterminable are Finite Automaton (FA) processes. Simple processes (such as a traffic light controller) are executable by a Finite Automaton, whereas the most general kind of computation requires a Turing Machine for its execution. This implies that a TM process must have a non-predeterminable amount of memory allocated to it at intermediate instants of its execution, i.e. dynamic memory allocation. Many processes encountered in practice are TM processes. The implication for computational practice is that the hardware (CPU) architecture and its operating system must facilitate dynamic memory allocation, and that the programming language used to specify TM processes must have statements with the semantic attribute of dynamic memory allocation, for in Alan Turing's thesis on computation (1936) the "standard description" of a process is invariant over the most general data that the process is designed to process; i.e. the program describing the process should never have to be modified to allow for differences in the data that is to be processed in different instantiations; i.e. data-invariant programming. Any non-trivial program is partitioned into sub-programs (procedures, subroutines, functions, modules, etc). Examination of the calls/returns between the subprograms reveals that they are nodes in a tree-structure; this tree-structure is independent of the programming language used to encode (define) the process. Each sub-program typically needs some memory for its own use (to store values intermediate between its received data and its computed results); this locally required memory is not needed before the subprogram commences execution, and it is not needed after its execution terminates
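
    A small illustrative contrast (not from the paper): a traffic-light controller needs only a fixed, predeterminable amount of state, whereas checking balanced parentheses needs a stack whose depth depends on the input, i.e. storage that cannot be predetermined.

    ```python
    def traffic_light(steps):
        states = ["green", "yellow", "red"]          # fixed storage: one index (FA-class process)
        i = 0
        for _ in range(steps):
            i = (i + 1) % len(states)
        return states[i]

    def balanced(text):
        stack = []                                   # grows with input nesting depth (TM-class process)
        for ch in text:
            if ch == "(":
                stack.append(ch)
            elif ch == ")":
                if not stack:
                    return False
                stack.pop()
        return not stack

    print(traffic_light(7))          # 'yellow' -- memory use independent of the input
    print(balanced("((()())())"))    # True     -- memory use depends on the input
    ```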

  8. Dry machinability of aluminum alloys.

    SciTech Connect

    Shareef, I.; Natarajan, M.; Ajayi, O. O.; Energy Technology; Department of IMET

    2005-01-01

    Adverse effects of the use of cutting fluids and environmental concerns with regard to cutting fluid disposability is compelling industry to adopt Dry or near Dry Machining, with the aim of eliminating or significantly reducing the use of metal working fluids. Pending EPA regulations on metal cutting, dry machining is becoming a hot topic of research and investigation both in industry and federal research labs. Although the need for dry machining may be apparent, most of the manufacturers still consider dry machining to be impractical and even if possible, very expensive. This perception is mainly due to lack of appropriate cutting tools that can withstand intense heat and Built-up-Edge (BUE) formation during dry machining. The challenge of heat dissipation without coolant requires a completely different approach to tooling. Special tooling utilizing high-performance multi-layer, multi-component, heat resisting, low friction coatings could be a plausible answer to the challenge of dry machining. In pursuit of this goal Argonne National Labs has introduced Nano-crystalline near frictionless carbon (NFC) diamond like coatings (DLC), while industrial efforts have led to the introduction of composite coatings such as titanium aluminum nitride (TiAlN), tungsten carbide/carbon (WC/C) and others. Although, these coatings are considered to be very promising, they have not been tested either from tribological or from dry machining applications point of view. As such a research program in partnership with federal labs and industrial sponsors has started with the goal of exploring the feasibility of dry machining using the newly developed coatings such as Near Frictionless Carbon Coatings (NFC), Titanium Aluminum Nitride (TiAlN), and multi-layer multicomponent nano coatings such as TiAlCrYN and TiAlN/YN. Although various coatings are under investigation as part of the overall dry machinability program, this extended abstract deals with a systematic investigation of dry

  9. Modelling and simulation of effect of ultrasonic vibrations on machining of Ti6Al4V.

    PubMed

    Patil, Sandip; Joshi, Shashikant; Tewari, Asim; Joshi, Suhas S

    2014-02-01

    Titanium alloys cause high heat generation and consequent rapid wear of cutting tool edges during machining. Ultrasonic assisted turning (UAT) has been found to be very effective in the machining of various materials, especially in the machining of "difficult-to-cut" materials like Ti6Al4V. The present work is a comprehensive study involving 2D FE transient simulation of UAT in the DEFORM framework and its experimental characterization. The simulation shows that UAT reduces the stress level on the cutting tool during machining as compared to continuous turning (CT), except during the penetration stage, wherein both tools are subjected to identical stress levels. There is a 40-45% reduction in cutting forces and about a 48% reduction in cutting temperature in UAT compared with CT. However, the magnitude of the reduction decreases with an increase in cutting speed. The experimental analysis of the UAT process shows that the surface roughness in UAT is lower than in CT, and the UATed surfaces have a matte finish as against the glossy finish of the CTed surfaces. Microstructural observations of the chips and machined surfaces in both processes reveal that the intensity of thermal softening and shear band formation is reduced in UAT compared with CT.

  10. Model-based automatic target recognition using hierarchical foveal machine vision

    NASA Astrophysics Data System (ADS)

    McKee, Douglas C.; Bandera, Cesar; Ghosal, Sugata; Rauss, Patrick J.

    1996-06-01

    This paper presents target detection and interrogation techniques for a foveal automatic target recognition (ATR) system based on the hierarchical scale-space processing of imagery from a rectilinear tessellated multiacuity retinotopology. Conventional machine vision captures imagery and applies early vision techniques with uniform resolution throughout the field-of-view (FOV). In contrast, foveal active vision features graded acuity imagers and processing coupled with context sensitive gaze control, analogous to that prevalent throughout vertebrate vision. Foveal vision can operate more efficiently in dynamic scenarios with localized relevance than uniform acuity vision because resolution is treated as a dynamically allocable resource. Foveal ATR exploits the difference between detection and recognition resolution requirements and sacrifices peripheral acuity to achieve a wider FOV (e.g., faster search), greater localized resolution where needed (e.g., more confident recognition at the fovea), and faster frame rates (e.g., more reliable tracking and navigation) without increasing processing requirements. The rectilinearity of the retinotopology supports a data structure that is a subset of the image pyramid. This structure lends itself to multiresolution and conventional 2-D algorithms, and features a shift invariance of perceived target shape that tolerates sensor pointing errors and supports multiresolution model-based techniques. The detection technique described in this paper searches for regions of interest (ROIs) using the foveal sensor's wide FOV peripheral vision. ROIs are initially detected using anisotropic diffusion filtering and expansion template matching to a multiscale Zernike polynomial-based target model. Each ROI is then interrogated to filter out false target ROIs by sequentially pointing a higher acuity region of the sensor at each ROI centroid and conducting a fractal dimension test that distinguishes targets from structured clutter.

  11. Support vector machine model for diagnosing pneumoconiosis based on wavelet texture features of digital chest radiographs.

    PubMed

    Zhu, Biyun; Chen, Hui; Chen, Budong; Xu, Yan; Zhang, Kuan

    2014-02-01

    This study aims to explore the classification ability of decision trees (DTs) and support vector machines (SVMs) to discriminate between the digital chest radiographs (DRs) of pneumoconiosis patients and control subjects. Twenty-eight wavelet-based energy texture features were calculated at the lung fields on DRs of 85 healthy controls and 40 patients with stage I and stage II pneumoconiosis. DTs with algorithm C5.0 and SVMs with four different kernels were trained by samples with two combinations of the texture features to classify a DR as of a healthy subject or of a patient with pneumoconiosis. All of the models were developed with fivefold cross-validation, and the final performances of each model were compared by the area under receiver operating characteristic (ROC) curve. For both SVM (with a radial basis function kernel) and DT (with algorithm C5.0), areas under ROC curves (AUCs) were 0.94 ± 0.02 and 0.86 ± 0.04 (P = 0.02) when using the full feature set and 0.95 ± 0.02 and 0.88 ± 0.04 (P = 0.05) when using the selected feature set, respectively. When built on the selected texture features, the SVM with a polynomial kernel showed a higher diagnostic performance with an AUC value of 0.97 ± 0.02 than SVMs with a linear kernel, a radial basis function kernel and a sigmoid kernel with AUC values of 0.96 ± 0.02 (P = 0.37), 0.95 ± 0.02 (P = 0.24), and 0.90 ± 0.03 (P = 0.01), respectively. The SVM model with a polynomial kernel built on the selected feature set showed the highest diagnostic performance among all tested models when using either all the wavelet texture features or the selected ones. The model has a good potential in diagnosing pneumoconiosis based on digital chest radiographs.
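
    A sketch of the feature pipeline on synthetic patches: wavelet-based energy texture features from each lung-field patch feed a polynomial-kernel SVM, echoing the best-performing model above. It assumes the PyWavelets package; the decomposition level, wavelet, and patch data are illustrative choices.

    ```python
    import numpy as np
    import pywt
    from sklearn.svm import SVC
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)

    def wavelet_energy_features(img, level=3):
        coeffs = pywt.wavedec2(img, "db4", level=level)
        feats = [np.mean(np.abs(coeffs[0]))]                     # approximation energy
        for cH, cV, cD in coeffs[1:]:                            # detail sub-band energies
            feats += [np.mean(np.abs(cH)), np.mean(np.abs(cV)), np.mean(np.abs(cD))]
        return feats

    # Synthetic "healthy" vs "pneumoconiosis-like" patches: the latter get extra
    # fine-grained texture, loosely mimicking small opacities.
    healthy = [rng.normal(size=(64, 64)) for _ in range(60)]
    diseased = [rng.normal(size=(64, 64)) + 0.8 * rng.normal(size=(64, 64)) ** 2
                for _ in range(60)]
    X = np.array([wavelet_energy_features(p) for p in healthy + diseased])
    y = np.array([0] * 60 + [1] * 60)

    clf = make_pipeline(StandardScaler(), SVC(kernel="poly", degree=3, C=1.0))
    print("5-fold CV accuracy:", cross_val_score(clf, X, y, cv=5).mean().round(3))
    ```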

  12. Robust Brain-Machine Interface Design Using Optimal Feedback Control Modeling and Adaptive Point Process Filtering

    PubMed Central

    Carmena, Jose M.

    2016-01-01

    Much progress has been made in brain-machine interfaces (BMI) using decoders such as Kalman filters and finding their parameters with closed-loop decoder adaptation (CLDA). However, current decoders do not model the spikes directly, and hence may limit the processing time-scale of BMI control and adaptation. Moreover, while specialized CLDA techniques for intention estimation and assisted training exist, a unified and systematic CLDA framework that generalizes across different setups is lacking. Here we develop a novel closed-loop BMI training architecture that allows for processing, control, and adaptation using spike events, enables robust control and extends to various tasks. Moreover, we develop a unified control-theoretic CLDA framework within which intention estimation, assisted training, and adaptation are performed. The architecture incorporates an infinite-horizon optimal feedback-control (OFC) model of the brain’s behavior in closed-loop BMI control, and a point process model of spikes. The OFC model infers the user’s motor intention during CLDA—a process termed intention estimation. OFC is also used to design an autonomous and dynamic assisted training technique. The point process model allows for neural processing, control and decoder adaptation with every spike event and at a faster time-scale than current decoders; it also enables dynamic spike-event-based parameter adaptation unlike current CLDA methods that use batch-based adaptation on much slower adaptation time-scales. We conducted closed-loop experiments in a non-human primate over tens of days to dissociate the effects of these novel CLDA components. The OFC intention estimation improved BMI performance compared with current intention estimation techniques. OFC assisted training allowed the subject to consistently achieve proficient control. Spike-event-based adaptation resulted in faster and more consistent performance convergence compared with batch-based methods, and was robust to

  13. Robust Brain-Machine Interface Design Using Optimal Feedback Control Modeling and Adaptive Point Process Filtering.

    PubMed

    Shanechi, Maryam M; Orsborn, Amy L; Carmena, Jose M

    2016-04-01

    Much progress has been made in brain-machine interfaces (BMI) using decoders such as Kalman filters and finding their parameters with closed-loop decoder adaptation (CLDA). However, current decoders do not model the spikes directly, and hence may limit the processing time-scale of BMI control and adaptation. Moreover, while specialized CLDA techniques for intention estimation and assisted training exist, a unified and systematic CLDA framework that generalizes across different setups is lacking. Here we develop a novel closed-loop BMI training architecture that allows for processing, control, and adaptation using spike events, enables robust control and extends to various tasks. Moreover, we develop a unified control-theoretic CLDA framework within which intention estimation, assisted training, and adaptation are performed. The architecture incorporates an infinite-horizon optimal feedback-control (OFC) model of the brain's behavior in closed-loop BMI control, and a point process model of spikes. The OFC model infers the user's motor intention during CLDA-a process termed intention estimation. OFC is also used to design an autonomous and dynamic assisted training technique. The point process model allows for neural processing, control and decoder adaptation with every spike event and at a faster time-scale than current decoders; it also enables dynamic spike-event-based parameter adaptation unlike current CLDA methods that use batch-based adaptation on much slower adaptation time-scales. We conducted closed-loop experiments in a non-human primate over tens of days to dissociate the effects of these novel CLDA components. The OFC intention estimation improved BMI performance compared with current intention estimation techniques. OFC assisted training allowed the subject to consistently achieve proficient control. Spike-event-based adaptation resulted in faster and more consistent performance convergence compared with batch-based methods, and was robust to parameter

  14. Gasoline surrogate modeling of gasoline ignition in a rapid compression machine and comparison to experiments

    SciTech Connect

    Mehl, M; Kukkadapu, G; Kumar, K; Sarathy, S M; Pitz, W J; Sung, S J

    2011-09-15

    The use of gasoline in homogeneous charge compression ignition (HCCI) engines and in dual-fuel diesel-gasoline engines has increased the need to understand its compression ignition processes under engine-like conditions. These processes need to be studied under well-controlled conditions in order to quantify low temperature heat release and to provide fundamental validation data for chemical kinetic models. With this in mind, an experimental campaign has been undertaken in a rapid compression machine (RCM) to measure the ignition of gasoline mixtures over a wide range of compression temperatures and for different compression pressures. By measuring the pressure history during ignition, information on the first stage ignition (when observed) and second stage ignition is captured along with information on the phasing of the heat release. Heat release processes during ignition are important because gasoline is known to exhibit low temperature heat release, intermediate temperature heat release and high temperature heat release. In an HCCI engine, the occurrence of low-temperature and intermediate-temperature heat release can be exploited to obtain higher load operation and has become a topic of much interest for engine researchers. Consequently, it is important to understand these processes under well-controlled conditions. A four-component gasoline surrogate model (including n-heptane, iso-octane, toluene, and 2-pentene) has been developed to simulate real gasolines. An appropriate surrogate mixture of the four components has been developed to simulate the specific gasoline used in the RCM experiments. This chemical kinetic surrogate model was then used to simulate the RCM experimental results for real gasoline. The experimental and modeling results covered ultra-lean to stoichiometric mixtures, compressed temperatures of 640-950 K, and compression pressures of 20 and 40 bar. The agreement between the experiments and model is encouraging in terms of first

  15. Modelling of the radial forging process of a hollow billet with the mandrel on the lever radial forging machine

    NASA Astrophysics Data System (ADS)

    Karamyshev, A. P.; Nekrasov, I. I.; Pugin, A. I.; Fedulov, A. A.

    2016-04-01

    The finite-element method (FEM) has been used in scientific research on the modelling of forming technological processes. Among others, the process of the multistage radial forging of hollow billets has been modelled. The model includes both the thermal problem, concerning preliminary heating of the billet taking into account thermal expansion, and the deformation problem, in which the billet is forged in a special machine. The latter part of the model describes such features of the process as die calibration, die movement, initial die temperature, friction conditions, etc. The results obtained can be used to define the necessary process parameters and die calibration.

  16. Chronic renoprotective effect of pulsatile perfusion machine RM3 and IGL-1 solution in a preclinical kidney transplantation model

    PubMed Central

    2012-01-01

    Background Machine perfusion (MP) of kidney grafts provides benefits against preservation injury; however, decreased graft quality requires optimization of the method. We examined the chronic benefits of MP on kidney grafts and the potential improvements provided by IGL-1 solution. Method We used an established autotransplantation pig kidney model to study the effects of MP against the deleterious effects of warm ischemia (WI: 60 minutes) followed by 22 hours of cold ischemia in MP or static cold storage (CS), followed by autotransplantation. MPS and IGL-1 solutions were compared. Results Animal survival was higher in the MPS-MP and both IGL groups. Creatinine measurement did not discriminate between the groups; however, the MPS-MP and both IGL groups showed decreased proteinuria. The chronic fibrosis level was equivalent between the groups. RTqPCR and immunohistofluorescent evaluation showed that MP and IGL-1 provided some protection against epithelial to mesenchymal transition and chronic lesions. IGL-1 was protective with both MP and CS, particularly against chronic inflammation, with only small differences between the groups. Conclusion IGL-1 used in either machine or static preservation offers levels of protection similar to standard MP. The compatibility of IGL-1 with both machine perfusion and static storage could represent an advantage for clinical teams when choosing the correct solution to use for multi-organ collection. The path towards improving machine perfusion, and organ quality, may involve the optimization of the solution and the correct use of colloids. PMID:23171422

  17. A model of application system for man-machine-environment system engineering in vessels based on IDEF0

    NASA Astrophysics Data System (ADS)

    Shang, Zhen; Qiu, Changhua; Zhu, Shifan

    2011-09-01

    Applying man-machine-environment system engineering (MMESE) in vessels is a method to improve the effectiveness of the interaction between equipment, environment, and humans for the purpose of advancing the operating efficiency, performance, safety, and habitability of a vessel and its subsystems. In the following research, the life cycle of vessels was divided into 9 phases, and 15 research subjects were identified from among these phases. The 15 subjects were systemized, and then the man-machine-environment engineering system application model for vessels was developed using the ICAM definition method 0 (IDEF0), a systematic modeling method. This system model bridges the gap in the data and information flow between every two associated subjects, with the major basic research methods and approaches included, which brings the formerly relatively independent subjects together as a whole. The application of this systematic model should facilitate the application of man-machine-environment system engineering in vessels, especially at the conceptual and embodiment design phases. Managers and designers can deal with detailed tasks quickly and efficiently while reducing repetitive work.

  18. Effects of imbalance and geometric error on precision grinding machines

    SciTech Connect

    Bibler, J.E.

    1997-06-01

    To study balancing in grinding, a simple mechanical system was examined. It was essential to study such a well-defined system, as opposed to a large, complex system such as a machining center. The use of a compact, well-defined system enabled easy quantification of the imbalance force input and its phase angle relative to any geometric decentering, as well as a good understanding of the machine mode shapes. It is important to understand a simple system such as the one I examined, given that imbalance is so intimately coupled to machine dynamics. It is possible to extend the results presented here to industrial machines, although that is not part of this work. In addition to the empirical testing, a simple mechanical system was modelled to examine how mode shapes, balance, and geometric error interact to yield spindle error motion. The results of this model will be presented along with the results from a more global grinding model. The global model, presented at ASPE in November 1996, allows one to examine the effects of changing global machine parameters like stiffness and damping. This geometrically abstract, one-dimensional model will be presented to demonstrate the usefulness of an abstract approach for first-order understanding, but it will not be the main focus of this thesis. 19 refs., 36 figs., 10 tables.

  19. Solving the AI Planning Plus Scheduling Problem Using Model Checking via Automatic Translation from the Abstract Plan Preparation Language (APPL) to the Symbolic Analysis Laboratory (SAL)

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; Munoz, Cesar A.; Siminiceanu, Radu I.

    2007-01-01

    This paper describes a translator from a new planning language named the Abstract Plan Preparation Language (APPL) to the Symbolic Analysis Laboratory (SAL) model checker. This translator has been developed in support of the Spacecraft Autonomy for Vehicles and Habitats (SAVH) project sponsored by the Exploration Technology Development Program, which is seeking to mature autonomy technology for the vehicles and operations centers of Project Constellation.

  20. Reification of abstract concepts to improve comprehension using interactive virtual environments and a knowledge-based design: a renal physiology model.

    PubMed

    Alverson, Dale C; Saiki, Stanley M; Caudell, Thomas P; Goldsmith, Timothy; Stevens, Susan; Saland, Linda; Colleran, Kathleen; Brandt, John; Danielson, Lee; Cerilli, Lisa; Harris, Alexis; Gregory, Martin C; Stewart, Randall; Norenberg, Jeffery; Shuster, George; Panaoitis; Holten, James; Vergera, Victor M; Sherstyuk, Andrei; Kihmm, Kathleen; Lui, Jack; Wang, Kin Lik

    2006-01-01

    Several abstract concepts in medical education are difficult to teach and comprehend. In order to address this challenge, we have been applying the approach of reification of abstract concepts using interactive virtual environments and a knowledge-based design. Reification is the process of making abstract concepts and events, beyond the realm of direct human experience, concrete and accessible to teachers and learners. Entering virtual worlds and simulations not otherwise easily accessible provides an opportunity to create, study, and evaluate the emergence of knowledge and comprehension from the direct interaction of learners with otherwise complex abstract ideas and principles by bringing them to life. Using a knowledge-based design process and appropriate subject matter experts, knowledge structure methods are applied in order to prioritize and characterize important relationships, and to create a concept map that can be integrated into the reified models that are subsequently developed. Applying these principles, our interdisciplinary team has been developing a reified model of the nephron into which important physiologic functions can be integrated and rendered into a three-dimensional virtual environment called Flatland, a virtual environment development software tool, within which learners can interact using off-the-shelf hardware. The nephron model can be driven dynamically by a rules-based artificial intelligence engine, applying the rules and concepts developed in conjunction with the subject matter experts. In the future, the nephron model can be used to interactively demonstrate a number of physiologic principles or a variety of pathological processes that may be difficult to teach and understand. In addition, this approach to reification can be applied to a host of other physiologic and pathological concepts in other systems. These methods will require further evaluation to determine their impact and role in learning.

  1. Psychological Abstracts/BRS.

    ERIC Educational Resources Information Center

    Dolan, Donna R.

    1978-01-01

    Discusses particular problems and possible solutions in searching the Psychological Abstracts database, with special reference to its loading on BRS. Included are examples of typical searches, citations (with or without abstract/annotation), a tabulated searchguide to Psychological Abstracts on BRS and specifications for the database. (Author/JD)

  2. Abstraction and Consolidation

    ERIC Educational Resources Information Center

    Monaghan, John; Ozmantar, Mehmet Fatih

    2006-01-01

    The framework for this paper is a recently developed theory of abstraction in context. The paper reports on data collected from one student working on tasks concerned with absolute value functions. It examines the relationship between mathematical constructions and abstractions. It argues that an abstraction is a consolidated construction that can…

  3. Neural control and adaptive neural forward models for insect-like, energy-efficient, and adaptable locomotion of walking machines

    PubMed Central

    Manoonpong, Poramate; Parlitz, Ulrich; Wörgötter, Florentin

    2013-01-01

    Living creatures, like walking animals, have found fascinating solutions for the problem of locomotion control. Their movements give an impression of elegance, with versatile, energy-efficient, and adaptable locomotion. During the last few decades, roboticists have tried to imitate such natural properties with artificial legged locomotion systems by using different approaches including machine learning algorithms, classical engineering control techniques, and biologically-inspired control mechanisms. However, their levels of performance are still far from the natural ones. By contrast, animal locomotion mechanisms seem to largely depend not only on central mechanisms (central pattern generators, CPGs) and sensory feedback (afferent-based control) but also on internal forward models (efference copies). They are used to different degrees in different animals. Generally, CPGs organize basic rhythmic motions which are shaped by sensory feedback, while internal models are used for sensory prediction and state estimation. According to this concept, we present here adaptive neural locomotion control consisting of a CPG mechanism with neuromodulation and local leg control mechanisms based on sensory feedback and adaptive neural forward models with efference copies. This neural closed-loop controller enables a walking machine to perform a multitude of different walking patterns including insect-like leg movements and gaits as well as energy-efficient locomotion. In addition, the forward models allow the machine to autonomously adapt its locomotion to deal with a change of terrain, loss of ground contact during the stance phase, stepping on or hitting an obstacle during the swing phase, leg damage, and even to promote cockroach-like climbing behavior. Thus, the results presented here show that the employed embodied neural closed-loop system can be a powerful way for developing robust and adaptable machines. PMID:23408775
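
    As a rough illustration of the CPG building block this line of work relies on (not the controller from the paper itself), a two-neuron discrete-time oscillator with SO(2)-like weights already produces phase-shifted rhythms that can drive leg joints; the weight parameters below are illustrative assumptions.

```python
# Minimal sketch of a two-neuron discrete-time oscillator often used as a CPG
# building block in neural locomotion control; weights are illustrative only.
import math

def cpg_step(o1, o2, alpha=1.01, phi=0.3):
    """One update of a 2-neuron oscillator with SO(2)-like recurrent weights."""
    w11 =  alpha * math.cos(phi); w12 =  alpha * math.sin(phi)
    w21 = -alpha * math.sin(phi); w22 =  alpha * math.cos(phi)
    n1 = math.tanh(w11 * o1 + w12 * o2)
    n2 = math.tanh(w21 * o1 + w22 * o2)
    return n1, n2

o1, o2 = 0.1, 0.1
trajectory = []
for _ in range(200):
    o1, o2 = cpg_step(o1, o2)
    trajectory.append((o1, o2))
# the two outputs settle into phase-shifted rhythms suitable as joint commands
print(trajectory[-5:])
```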

  4. Neural control and adaptive neural forward models for insect-like, energy-efficient, and adaptable locomotion of walking machines.

    PubMed

    Manoonpong, Poramate; Parlitz, Ulrich; Wörgötter, Florentin

    2013-01-01

    Living creatures, like walking animals, have found fascinating solutions for the problem of locomotion control. Their movements give an impression of elegance, with versatile, energy-efficient, and adaptable locomotion. During the last few decades, roboticists have tried to imitate such natural properties with artificial legged locomotion systems by using different approaches including machine learning algorithms, classical engineering control techniques, and biologically-inspired control mechanisms. However, their levels of performance are still far from the natural ones. By contrast, animal locomotion mechanisms seem to largely depend not only on central mechanisms (central pattern generators, CPGs) and sensory feedback (afferent-based control) but also on internal forward models (efference copies). They are used to different degrees in different animals. Generally, CPGs organize basic rhythmic motions which are shaped by sensory feedback, while internal models are used for sensory prediction and state estimation. According to this concept, we present here adaptive neural locomotion control consisting of a CPG mechanism with neuromodulation and local leg control mechanisms based on sensory feedback and adaptive neural forward models with efference copies. This neural closed-loop controller enables a walking machine to perform a multitude of different walking patterns including insect-like leg movements and gaits as well as energy-efficient locomotion. In addition, the forward models allow the machine to autonomously adapt its locomotion to deal with a change of terrain, loss of ground contact during the stance phase, stepping on or hitting an obstacle during the swing phase, leg damage, and even to promote cockroach-like climbing behavior. Thus, the results presented here show that the employed embodied neural closed-loop system can be a powerful way for developing robust and adaptable machines. PMID:23408775

  5. The Aachen miniaturized heart-lung machine--first results in a small animal model.

    PubMed

    Schnoering, Heike; Arens, Jutta; Sachweh, Joerg S; Veerman, Melanie; Tolba, Rene; Schmitz-Rode, Thomas; Steinseifer, Ulrich; Vazquez-Jimenez, Jaime F

    2009-11-01

    Congenital heart surgery most often incorporates extracorporeal circulation. Due to foreign surface contact and the administration of foreign blood in many children, inflammatory response and hemolysis are important matters of debate. This is particularly an issue in premature and low birth-weight newborns. Taking these considerations into account, the Aachen miniaturized heart-lung machine (MiniHLM) with a total static priming volume of 102 mL (including tubing) was developed and tested in a small animal model. Fourteen female Chinchilla Bastard rabbits were operated on using two different kinds of circuits. In eight animals, a conventional HLM with Dideco Kids oxygenator and Stöckert roller pump (Sorin group, Milan, Italy) was used, and the Aachen MiniHLM was employed in six animals. Outcome parameters were hemolysis and blood gas analysis including lactate. The rabbits were anesthetized, and a standard median sternotomy was performed. The ascending aorta and the right atrium were cannulated. After initiating cardiopulmonary bypass, the aorta was cross-clamped, and cardiac arrest was induced by blood cardioplegia. Blood samples for hemolysis and blood gas analysis were drawn before, during, and after cardiopulmonary bypass. After 1 h aortic clamp time, all animals were weaned from cardiopulmonary bypass. Blood gas analysis revealed adequate oxygenation and perfusion during cardiopulmonary bypass, irrespective of the employed perfusion system. The use of the Aachen MiniHLM resulted in a significantly smaller decrease in fibrinogen during cardiopulmonary bypass. A trend toward a smaller increase in free hemoglobin during bypass was also observed in the MiniHLM group. This newly developed Aachen MiniHLM with low priming volume, reduced hemolysis, and excellent gas transfer (O(2) and CO(2)) may reduce circuit-induced complications during heart surgery in neonates.

  6. Parametric modeling and optimization of laser scanning parameters during laser assisted machining of Inconel 718

    NASA Astrophysics Data System (ADS)

    Venkatesan, K.; Ramanujam, R.; Kuppan, P.

    2016-04-01

    This paper presents the parametric effects, microstructure, micro-hardness, and optimization of laser scanning parameters (LSP) in heating experiments during laser-assisted machining of Inconel 718 alloy. The laser source used for the experiments is a continuous-wave Nd:YAG laser with a maximum power of 2 kW. The experimental parameters in the present study are cutting speed in the range of 50-100 m/min, feed rate of 0.05-0.1 mm/rev, laser power of 1.25-1.75 kW, and approach angle of 60-90° of the laser beam axis to the tool. The plan of experiments is based on a central composite rotatable design L31 (43) orthogonal array. The surface temperature is measured on-line using an infrared pyrometer. Parametric significance on surface temperature is analysed using response surface methodology (RSM), analysis of variance (ANOVA), and 3D surface graphs. The structural change of the material surface is observed using an optical microscope, and quantitative measurements of the heat-affected depth are analysed by the Vickers hardness test. The results indicate that laser power and approach angle are the most significant parameters affecting the surface temperature. The optimum ranges of laser power and approach angle were identified as 1.25-1.5 kW and 60-65° using an overlaid contour plot. The developed second-order regression model is found to be in good agreement with the experimental values, with R2 values of 0.96 and 0.94 for surface temperature and heat-affected depth, respectively.
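
    A response-surface model of this kind is simply a second-order polynomial regression on the process parameters. The sketch below, on synthetic placeholder data rather than the paper's measurements, shows the shape of such a fit for surface temperature as a function of laser power and approach angle.

```python
# Hedged sketch: fit a second-order (quadratic) response-surface model of surface
# temperature versus laser power and approach angle. Data are synthetic placeholders.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
power = rng.uniform(1.25, 1.75, 40)        # kW
angle = rng.uniform(60.0, 90.0, 40)        # degrees
# synthetic "true" response, used only to exercise the fit
temp = (300 + 420*power - 2.1*angle + 35*power**2 + 0.012*angle**2
        - 1.5*power*angle + rng.normal(0, 5, 40))

X = np.column_stack([power, angle])
rsm = make_pipeline(PolynomialFeatures(degree=2, include_bias=True),
                    LinearRegression())
rsm.fit(X, temp)
print("R^2 on training data:", round(rsm.score(X, temp), 3))
print("predicted T at 1.5 kW, 65 deg:", round(rsm.predict([[1.5, 65.0]])[0], 1))
```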

  7. Seismic waves modeling with the Fourier pseudo-spectral method on massively parallel machines.

    NASA Astrophysics Data System (ADS)

    Klin, Peter

    2015-04-01

    The Fourier pseudo-spectral method (FPSM) is an approach for the 3D numerical modeling of wave propagation, based on the discretization of the spatial domain in a structured grid and relying on global spatial differential operators for the solution of the wave equation. This last peculiarity is advantageous from the accuracy point of view but poses difficulties for an efficient implementation of the method on parallel computers with distributed memory architecture. The 1D spatial domain decomposition approach has so far been commonly adopted in parallel implementations of the FPSM, but it implies an intensive data exchange among all the processors involved in the computation, which can degrade the performance because of communication latencies. Moreover, the scalability of the 1D domain decomposition is limited, since the number of processors cannot exceed the number of grid points along the directions in which the domain is partitioned. This limitation inhibits an efficient exploitation of computational environments with a very large number of processors. In order to overcome the limitations of the 1D domain decomposition, we implemented a parallel version of the FPSM based on a 2D domain decomposition, which allows a higher degree of parallelism and scalability to be achieved on massively parallel machines with several thousands of processing elements. The parallel programming is essentially achieved using the MPI protocol, but OpenMP parts are also included in order to exploit single-processor multi-threading capabilities, when available. The developed tool is aimed at the numerical simulation of seismic wave propagation and in particular is intended for earthquake ground motion research. We show the scalability tests performed up to 16k processing elements on the IBM Blue Gene/Q computer at CINECA (Italy), as well as the application to the simulation of the earthquake ground motion in the alluvial plain of the Po river (Italy).
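
    The global operator that makes the method accurate but hard to distribute is the FFT-based spatial derivative: differentiating along one axis needs every grid point on that axis, so a distributed implementation has to repartition (transpose) the 3D grid, which is exactly what the 1D or 2D domain decomposition organizes. A serial sketch of the operator, with no MPI, is shown below.

```python
# Serial sketch of the global operator at the heart of the Fourier pseudo-spectral
# method: a spatial derivative of a periodic field computed with an FFT along one axis.
import numpy as np

def spectral_derivative(f, dx, axis=0):
    """d/dx of a periodic field sampled on a uniform grid, via FFT."""
    n = f.shape[axis]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)            # wavenumbers
    shape = [1] * f.ndim
    shape[axis] = n
    return np.real(np.fft.ifft(1j * k.reshape(shape) * np.fft.fft(f, axis=axis),
                               axis=axis))

# quick check on sin(x): the derivative should be cos(x) to spectral accuracy
x = np.linspace(0, 2*np.pi, 128, endpoint=False)
err = np.max(np.abs(spectral_derivative(np.sin(x), x[1] - x[0]) - np.cos(x)))
print("max error:", err)
```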

  8. A Directed Acyclic Graph-Large Margin Distribution Machine Model for Music Symbol Classification

    PubMed Central

    Wen, Cuihong; Zhang, Jing; Rebelo, Ana; Cheng, Fanyong

    2016-01-01

    Optical Music Recognition (OMR) has received increasing attention in recent years. In this paper, we propose a classifier based on a new method named Directed Acyclic Graph-Large margin Distribution Machine (DAG-LDM). The DAG-LDM is an improvement of the Large margin Distribution Machine (LDM), which is a binary classifier that optimizes the margin distribution by maximizing the margin mean and minimizing the margin variance simultaneously. We modify the LDM to the DAG-LDM to solve the multi-class music symbol classification problem. Tests are conducted on more than 10000 music symbol images, obtained from handwritten and printed images of music scores. The proposed method provides superior classification capability and achieves much higher classification accuracy than the state-of-the-art algorithms such as Support Vector Machines (SVMs) and Neural Networks (NNs). PMID:26985826
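
    The margin-distribution machine itself is not available in standard libraries, but the directed-acyclic-graph decision scheme that turns pairwise binary classifiers into a multi-class classifier can be sketched with ordinary linear SVMs standing in for the LDM base learners; the dataset below is a generic digits set used only to exercise the code.

```python
# Sketch of the DAG decision scheme over pairwise binary classifiers. Ordinary
# linear SVMs stand in for the LDM base learners; data are a generic digits set.
from itertools import combinations
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.datasets import load_digits

X, y = load_digits(return_X_y=True)
classes = np.unique(y)

# train one binary classifier per (i, j) class pair
pairwise = {}
for i, j in combinations(classes, 2):
    mask = (y == i) | (y == j)
    pairwise[(i, j)] = LinearSVC(dual=False).fit(X[mask], y[mask])

def dag_predict(x):
    """Walk the DAG: keep a candidate list and eliminate one class per node."""
    remaining = list(classes)
    while len(remaining) > 1:
        i, j = remaining[0], remaining[-1]
        winner = pairwise[(min(i, j), max(i, j))].predict(x.reshape(1, -1))[0]
        remaining.remove(j if winner == i else i)
    return remaining[0]

acc = np.mean([dag_predict(x) == t for x, t in zip(X[:200], y[:200])])
print("training-subset accuracy:", acc)
```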

  9. A Directed Acyclic Graph-Large Margin Distribution Machine Model for Music Symbol Classification.

    PubMed

    Wen, Cuihong; Zhang, Jing; Rebelo, Ana; Cheng, Fanyong

    2016-01-01

    Optical Music Recognition (OMR) has received increasing attention in recent years. In this paper, we propose a classifier based on a new method named Directed Acyclic Graph-Large margin Distribution Machine (DAG-LDM). The DAG-LDM is an improvement of the Large margin Distribution Machine (LDM), which is a binary classifier that optimizes the margin distribution by maximizing the margin mean and minimizing the margin variance simultaneously. We modify the LDM to the DAG-LDM to solve the multi-class music symbol classification problem. Tests are conducted on more than 10000 music symbol images, obtained from handwritten and printed images of music scores. The proposed method provides superior classification capability and achieves much higher classification accuracy than the state-of-the-art algorithms such as Support Vector Machines (SVMs) and Neural Networks (NNs).

  10. Abstraction and Problem Reformulation

    NASA Technical Reports Server (NTRS)

    Giunchiglia, Fausto

    1992-01-01

    In work done jointly with Toby Walsh, the author has provided a sound theoretical foundation to the process of reasoning with abstraction (GW90c, GW89, GW90b, GW90a). The notion of abstraction formalized in this work can be informally described as: (property 1) the process of mapping a representation of a problem, called (following historical convention (Sac74)) the 'ground' representation, onto a new representation, called the 'abstract' representation, which (property 2) helps deal with the problem in the original search space by preserving certain desirable properties and (property 3) is simpler to handle as it is constructed from the ground representation by "throwing away details". One desirable property preserved by an abstraction is provability; often there is a relationship between provability in the ground representation and provability in the abstract representation. Another can be deduction or, possibly, inconsistency. By 'throwing away details' we usually mean that the problem is described in a language with a smaller search space (for instance a propositional language or a language without variables) in which formulae of the abstract representation are obtained from the formulae of the ground representation by the use of some terminating rewriting technique. Often we require that the use of abstraction results in more efficient reasoning. However, it might simply increase the number of facts asserted (e.g., by allowing, in practice, the exploration of deeper search spaces or by implementing some form of learning). Among all abstractions, three very important classes have been identified. They relate the set of facts provable in the ground space to those provable in the abstract space. We call: TI abstractions all those abstractions where the abstractions of all the provable facts of the ground space are provable in the abstract space; TD abstractions all those abstractions where the 'unabstractions' of all the provable facts of the abstract space are provable in the ground space.
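
    A compact restatement of the two classes described above, under the assumption that the abstraction is given by a mapping f from ground formulae to abstract formulae, with provability in the ground and abstract representations written as shown in the comment:

```latex
% f maps ground formulae to abstract formulae; \vdash_g and \vdash_a denote
% provability in the ground and abstract representations, respectively.
\begin{align*}
\text{TI:}\quad & \forall \varphi .\; (\vdash_g \varphi) \;\Rightarrow\; (\vdash_a f(\varphi)) \\
\text{TD:}\quad & \forall \psi .\; (\vdash_a \psi) \;\Rightarrow\; \forall \varphi \in f^{-1}(\psi) .\; (\vdash_g \varphi)
\end{align*}
```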

  11. An asymptotical machine

    NASA Astrophysics Data System (ADS)

    Cristallini, Achille

    2016-07-01

    A new and intriguing machine may be obtained replacing the moving pulley of a gun tackle with a fixed point in the rope. Its most important feature is the asymptotic efficiency. Here we obtain a satisfactory description of this machine by means of vector calculus and elementary trigonometry. The mathematical model has been compared with experimental data and briefly discussed.

  12. MOAtox: A comprehensive mode of action and acute aquatic toxicity database for predictive model development (SETAC abstract)

    EPA Science Inventory

    The mode of toxic action (MOA) has been recognized as a key determinant of chemical toxicity and as an alternative to chemical class-based predictive toxicity modeling. However, the development of quantitative structure activity relationship (QSAR) and other models has been limit...

  13. Abstracts and program proceedings of the 1994 meeting of the International Society for Ecological Modelling North American Chapter

    SciTech Connect

    Kercher, J.R.

    1994-06-01

    This document contains information about the 1994 meeting of the International Society for Ecological Modelling North American Chapter. The topics discussed include: extinction risk assessment modelling, ecological risk analysis of uranium mining, impacts of pesticides, demography, habitats, atmospheric deposition, and climate change.

  14. Machine learning methods for empirical streamflow simulation: a comparison of model accuracy, interpretability, and uncertainty in seasonal watersheds

    NASA Astrophysics Data System (ADS)

    Shortridge, Julie E.; Guikema, Seth D.; Zaitchik, Benjamin F.

    2016-07-01

    In the past decade, machine learning methods for empirical rainfall-runoff modeling have seen extensive development and been proposed as a useful complement to physical hydrologic models, particularly in basins where data to support process-based models are limited. However, the majority of research has focused on a small number of methods, such as artificial neural networks, despite the development of multiple other approaches for non-parametric regression in recent years. Furthermore, this work has often evaluated model performance based on predictive accuracy alone, while not considering broader objectives, such as model interpretability and uncertainty, that are important if such methods are to be used for planning and management decisions. In this paper, we use multiple regression and machine learning approaches (including generalized additive models, multivariate adaptive regression splines, artificial neural networks, random forests, and M5 cubist models) to simulate monthly streamflow in five highly seasonal rivers in the highlands of Ethiopia and compare their performance in terms of predictive accuracy, error structure and bias, model interpretability, and uncertainty when faced with extreme climate conditions. While the relative predictive performance of models differed across basins, data-driven approaches were able to achieve reduced errors when compared to physical models developed for the region. Methods such as random forests and generalized additive models may have advantages in terms of visualization and interpretation of model structure, which can be useful in providing insights into physical watershed function. However, the uncertainty associated with model predictions under extreme climate conditions should be carefully evaluated, since certain models (especially generalized additive models and multivariate adaptive regression splines) become highly variable when faced with high temperatures.
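
    The comparison workflow described above can be sketched as follows: several regression models are fit to the same monthly predictors and compared by cross-validated error. The data, predictors, and model list below are placeholders, not those used in the study.

```python
# Hedged sketch of the model-comparison workflow on synthetic monthly data.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
n = 240                                     # 20 years of monthly records
rain = rng.gamma(2.0, 50.0, n)              # mm/month
temp = 15 + 10*np.sin(2*np.pi*np.arange(n)/12) + rng.normal(0, 1, n)
flow = 0.6*rain - 3.0*temp + 0.002*rain**2 + rng.normal(0, 20, n)   # synthetic flow

X = np.column_stack([rain, temp])
models = {
    "linear": LinearRegression(),
    "random forest": RandomForestRegressor(n_estimators=200, random_state=0),
    "neural net": MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0),
}
for name, model in models.items():
    rmse = -cross_val_score(model, X, flow, cv=5,
                            scoring="neg_root_mean_squared_error").mean()
    print(f"{name:14s} 5-fold RMSE: {rmse:.1f}")
```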

  15. Least Square Support Vector Machine Modelling of Breakdown Voltage of Solid Insulating Materials in the Presence of Voids

    NASA Astrophysics Data System (ADS)

    Behera, S.; Tripathy, R. K.; Mohanty, S.

    2013-03-01

    The least-squares formulation of the support vector machine (SVM) was recently proposed and derived from statistical learning theory; it is also regarded as a new development in learning from examples based on neural networks, radial basis functions, splines, or other functions. Here, the least-squares support vector machine (LS-SVM) is used as a machine learning technique for predicting the breakdown voltage of solid insulators. The breakdown voltage due to partial discharge of five solid insulating materials under AC conditions is predicted as a function of four input parameters, namely the thickness of the insulating sample (t), the diameter of the void (d), the thickness of the void (t1), and the relative permittivity of the material (εr), using the LS-SVM model. The requisite training data were obtained from experimental studies performed on a cylindrical-plane electrode system, with voids of different dimensions created artificially. Detailed studies have been carried out to determine the LS-SVM parameters that give the best result. At the completion of training, it is found that the LS-SVM model is capable of predicting the breakdown voltage Vb as a function of (t, t1, d, εr) very efficiently and with a small value of the mean absolute error.
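
    The LS-SVM is not part of mainstream Python libraries, but its regression form is closely related to kernel ridge regression with an RBF kernel, which is used below as a stand-in to show the shape of a model Vb = f(t, t1, d, εr); the training data are synthetic placeholders, not the measured values.

```python
# Hedged sketch: kernel ridge regression with an RBF kernel as a stand-in for
# LS-SVM regression of breakdown voltage Vb on (t, t1, d, eps_r). Synthetic data.
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(2)
t   = rng.uniform(1.0, 3.0, 80)      # sample thickness, mm
t1  = rng.uniform(0.1, 0.5, 80)      # void thickness, mm
d   = rng.uniform(1.0, 5.0, 80)      # void diameter, mm
eps = rng.uniform(2.0, 6.0, 80)      # relative permittivity
vb  = 10*t - 8*t1 - 1.5*d + 2*eps + rng.normal(0, 0.5, 80)   # synthetic kV

X = np.column_stack([t, t1, d, eps])
model = make_pipeline(StandardScaler(),
                      KernelRidge(kernel="rbf", alpha=1e-2, gamma=0.5))
model.fit(X, vb)
print("predicted V_b for t=2, t1=0.3, d=3, eps_r=4:",
      round(model.predict([[2.0, 0.3, 3.0, 4.0]])[0], 2), "kV")
```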

  16. [Prediction model of net photosynthetic rate of ginseng under forest based on optimized parameters support vector machine].

    PubMed

    Wu, Hai-wei; Yu, Hai-ye; Zhang, Lei

    2011-05-01

    Using the K-fold cross-validation method with two support vector machine functions, four kernel functions, grid search, a genetic algorithm, and particle swarm optimization, the authors constructed support vector machine models with the best penalty parameter c and the best correlation coefficient. Using information granulation technology, the authors constructed a P particle and an epsilon particle from the factors affecting net photosynthetic rate, thereby reducing the dimensions of these determinants. The P particle includes the percentage of visible spectrum ingredients. The epsilon particle includes leaf temperature, scattering radiation, air temperature, and so on. With this technology it is possible to obtain the best correlation coefficient among photosynthetically effective radiation, the visible spectrum, and individual net photosynthetic rate. The authors constructed a training set and a forecasting set including photosynthetically effective radiation, the P particle, and the epsilon particle. The results show that the epsilon-SVR-RBF-genetic algorithm model, the nu-SVR-linear-grid-search model, and the nu-SVR-RBF-genetic algorithm model obtain correlation coefficients of up to 97% on the forecasting set including photosynthetically effective radiation and the P particle. The penalty parameter c of the nu-SVR-linear-grid-search model is the smallest, so that model's generalization ability is the best. The authors forecasted the forecasting set including photosynthetically effective radiation, the P particle, and the epsilon particle with this model, and the correlation coefficient is up to 96%.
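
    One step of this workflow, the grid search over the penalty parameter c (and kernel width) of an RBF-kernel SVR scored by cross-validation, can be sketched as below; the data and parameter grids are placeholders, not those of the study.

```python
# Hedged sketch of a cross-validated grid search over SVR parameters.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(3)
par   = rng.uniform(0, 1500, 120)          # photosynthetically effective radiation
leafT = rng.uniform(15, 35, 120)           # leaf temperature
pn = 0.02*par - 0.01*(leafT - 25)**2 + rng.normal(0, 0.5, 120)   # synthetic rate

X = np.column_stack([par, leafT])
grid = GridSearchCV(SVR(kernel="rbf", epsilon=0.1),
                    param_grid={"C": [0.1, 1, 10, 100],
                                "gamma": ["scale", 0.01, 0.1]},
                    cv=5, scoring="r2")
grid.fit(X, pn)
print("best parameters:", grid.best_params_)
print("best cross-validated R^2:", round(grid.best_score_, 3))
```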

  17. Modelling and calibration technique of laser triangulation sensors for integration in robot arms and articulated arm coordinate measuring machines.

    PubMed

    Santolaria, Jorge; Guillomía, David; Cajal, Carlos; Albajez, José A; Aguilar, Juan J

    2009-01-01

    A technique for intrinsic and extrinsic calibration of a laser triangulation sensor (LTS) integrated in an articulated arm coordinate measuring machine (AACMM) is presented in this paper. After applying a novel approach to the AACMM kinematic parameter identification problem, by means of a single calibration gauge object, a one-step calibration method has been developed to obtain both the intrinsic parameters (laser plane, CCD sensor, and camera geometry) and the extrinsic parameters related to the AACMM main frame. This allows the integration of the LTS and AACMM mathematical models without the need for additional optimization methods after the prior sensor calibration, usually done in a coordinate measuring machine (CMM) before the assembly of the sensor in the arm. The experimental test results for accuracy and repeatability show the suitable performance of this technique, resulting in a reliable, quick and friendly calibration method for the AACMM final user. The presented method is also valid for sensor integration in robot arms and CMMs.

  18. The evolving market structures of gambling: case studies modelling the socioeconomic assignment of gaming machines in Melbourne and Sydney, Australia.

    PubMed

    Marshall, David C; Baker, Robert G V

    2002-01-01

    The expansion of gambling industries worldwide is intertwined with the growing government dependence on gambling revenue for fiscal assignments. In Australia, electronic gaming machines (EGMs) have dominated recent gambling industry growth. As EGMs have proliferated, growing recognition has emerged that EGM distribution closely reflects levels of socioeconomic disadvantage. More machines are located in less advantaged regions. This paper analyses time-series socioeconomic distributions of EGMs in Melbourne, Australia, an immature EGM market, and then compares the findings with the mature market in Sydney. Similar findings in both cities suggest that market assignment of EGMs transcends differences in historical and legislative environments. This indicates that similar underlying structures are evident in both markets. Modelling the spatial structures of gambling markets provides an opportunity to identify regions most at risk of gambling related problems. Subsequently, policies can be formulated which ensure fiscal revenue from gambling can be better targeted towards regions likely to be most afflicted by excessive gambling-related problems.

  19. Modelling and calibration technique of laser triangulation sensors for integration in robot arms and articulated arm coordinate measuring machines.

    PubMed

    Santolaria, Jorge; Guillomía, David; Cajal, Carlos; Albajez, José A; Aguilar, Juan J

    2009-01-01

    A technique for intrinsic and extrinsic calibration of a laser triangulation sensor (LTS) integrated in an articulated arm coordinate measuring machine (AACMM) is presented in this paper. After applying a novel approach to the AACMM kinematic parameter identification problem, by means of a single calibration gauge object, a one-step calibration method has been developed to obtain both the intrinsic parameters (laser plane, CCD sensor, and camera geometry) and the extrinsic parameters related to the AACMM main frame. This allows the integration of the LTS and AACMM mathematical models without the need for additional optimization methods after the prior sensor calibration, usually done in a coordinate measuring machine (CMM) before the assembly of the sensor in the arm. The experimental test results for accuracy and repeatability show the suitable performance of this technique, resulting in a reliable, quick and friendly calibration method for the AACMM final user. The presented method is also valid for sensor integration in robot arms and CMMs. PMID:22400001

  20. The evolving market structures of gambling: case studies modelling the socioeconomic assignment of gaming machines in Melbourne and Sydney, Australia.

    PubMed

    Marshall, David C; Baker, Robert G V

    2002-01-01

    The expansion of gambling industries worldwide is intertwined with the growing government dependence on gambling revenue for fiscal assignments. In Australia, electronic gaming machines (EGMs) have dominated recent gambling industry growth. As EGMs have proliferated, growing recognition has emerged that EGM distribution closely reflects levels of socioeconomic disadvantage. More machines are located in less advantaged regions. This paper analyses time-series socioeconomic distributions of EGMs in Melbourne, Australia, an immature EGM market, and then compares the findings with the mature market in Sydney. Similar findings in both cities suggest that market assignment of EGMs transcends differences in historical and legislative environments. This indicates that similar underlying structures are evident in both markets. Modelling the spatial structures of gambling markets provides an opportunity to identify regions most at risk of gambling related problems. Subsequently, policies can be formulated which ensure fiscal revenue from gambling can be better targeted towards regions likely to be most afflicted by excessive gambling-related problems. PMID:12375384

  1. Low power laser protects human erythrocytes In an In vitro model of artificial heart-lung machines.

    PubMed

    Itoh, T; Murakami, H; Orihashi, K; Sueda, T; Kusumoto, Y; Kakehashi, M; Matsuura, Y

    2000-11-01

    The protective effect of the low power helium-neon (He-Ne) laser against damage to human erythrocytes in whole blood was examined in a perfusion model using an artificial heart-lung machine. Preserved human whole blood was diluted and perfused in 2 closed circuits with a double roller pump. The laser irradiated one of the circuits (laser group) but not the other (control group). In the laser group, erythrocyte deformability and erythrocyte adenosine triphosphate (ATP) levels were significantly higher, and free hemoglobin levels were significantly lower, than those in the control group. Subsequent morphological findings by means of scanning electron microscopy were consistent with these results. The low power He-Ne laser protected human erythrocytes in the preserved diluted whole blood from the damage caused by experimental artificial heart-lung machines. The clinical application of low power laser treatment for extracorporeal circulation is suggested.

  2. Modelling and Calibration Technique of Laser Triangulation Sensors for Integration in Robot Arms and Articulated Arm Coordinate Measuring Machines

    PubMed Central

    Santolaria, Jorge; Guillomía, David; Cajal, Carlos; Albajez, José A.; Aguilar, Juan J.

    2009-01-01

    A technique for intrinsic and extrinsic calibration of a laser triangulation sensor (LTS) integrated in an articulated arm coordinate measuring machine (AACMM) is presented in this paper. After applying a novel approach to the AACMM kinematic parameter identification problem, by means of a single calibration gauge object, a one-step calibration method has been developed to obtain both the intrinsic parameters (laser plane, CCD sensor, and camera geometry) and the extrinsic parameters related to the AACMM main frame. This allows the integration of the LTS and AACMM mathematical models without the need for additional optimization methods after the prior sensor calibration, usually done in a coordinate measuring machine (CMM) before the assembly of the sensor in the arm. The experimental test results for accuracy and repeatability show the suitable performance of this technique, resulting in a reliable, quick and friendly calibration method for the AACMM final user. The presented method is also valid for sensor integration in robot arms and CMMs. PMID:22400001

  3. A model composition for Mars derived from the oxygen isotopic ratios of martian/SNC meteorites. [Abstract only]

    NASA Technical Reports Server (NTRS)

    Delaney, J. S.

    1994-01-01

    Oxygen is the most abundant element in most meteorites, yet the ratios of its isotopes are seldom used to constrain the compositional history of achondrites. The two major achondrite groups have O isotope signatures that differ from any plausible chondritic precursors and lie between the ordinary and carbonaceous chondrite domains. If the assumption is made that the present global sampling of chondritic meteorites reflects the variability of O reservoirs at the time of planetesimal/planet aggregation in the early nebula, then the O in these groups must reflect mixing between known chondritic reservoirs. This approach, in combination with constraints based on Fe-Mn-Mg systematics, has been used previously to model the composition of the basaltic achondrite parent body (BAP) and provides a model precursor composition that is generally consistent with previous eucrite parent body (EPB) estimates. The same approach is applied to Mars exploiting the assumption that the SNC and related meteorites sample the martian lithosphere. Model planet and planetesimal compositions can be derived by mixing of known chondritic components using O isotope ratios as the fundamental compositional constraint. The major- and minor-element composition for Mars derived here and that derived previously for the basaltic achondrite parent body are, in many respects, compatible with model compositions generated using completely independent constraints. The role of volatile elements and alkalis in particular remains a major difficulty in applying such models.

  4. Loving Those Abstracts

    ERIC Educational Resources Information Center

    Stevens, Lori

    2004-01-01

    The author describes a lesson she did on abstract art with her high school art classes. She passed out a required step-by-step outline of the project process. She asked each of them to look at abstract art. They were to list five or six abstract artists they thought were interesting, narrow their list down to the one most personally intriguing,…

  5. (abstract) Electron Impact Emission Cross Sections for Modeling UV Auroral and Dayglow Observations of the Upper Atmospheres of Planets

    NASA Technical Reports Server (NTRS)

    Ajello, J. M.; Shemansky, D. E.; James, G.; Kanik, I.; Slevin, J. A.

    1993-01-01

    In the upper atmospheres of the Jovian and Terrestrial planets a dominant mechanism for energy transfer occurs through electron collisional processes with neutral species leading to UV radiation. In response to the need for accurate collision cross sections to model spectroscopic observations of planetary systems, JPL has measured in the laboratory emission cross sections and medium resolution spectra of H, H(sub 2), N(sub 2), SO(sub 2), and other important planetary gases. Voyager and International Ultraviolet Explorer (IUE) spacecraft have established that band systems of H(sub 2) and N(sub 2) are the dominant UV molecular emissions in the solar system produced by electron impact. Applications of our data to models of Voyager, IUE, Galileo, and Hubble Space Telescope observations of the planets will be described.

  6. Estimating the period and Q of the Chandler Wobble from observations and models of its excitation (Abstract)

    NASA Astrophysics Data System (ADS)

    Gross, R.; Nastula, J.

    2015-08-01

    Any irregularly shaped solid body rotating about some axis that is not aligned with its figure axis will freely wobble as it rotates. For the Earth, this free wobble is known as the Chandler wobble in honor of S.C. Chandler, Jr., who first observed it in 1891. Unlike the forced wobbles of the Earth, such as the annual wobble, whose periods are the same as the periods of the forcing mechanisms, the period of the free Chandler wobble is a function of the internal structure and rheology of the Earth, and its decay time constant, or quality factor Q, is a function of the dissipation mechanism(s), like mantle anelasticity, that are acting to dampen it. Improved estimates of the period and Q of the Chandler wobble can therefore be used to improve our understanding of these properties of the Earth. Here, estimates of the period and Q of the Chandler wobble are obtained by finding those values that minimize the power within the Chandler band of the difference between observed and modeled polar motion excitation spanning 1962-2010. Atmosphere, ocean, and hydrology models are used to model the excitation caused by both mass and motion variations within these global geophysical fluids. Direct observations of the excitation caused by mass variations as determined from GRACE time varying gravitational field measurements are also used. The resulting estimates of the period and Q of the Chandler wobble will be presented along with a discussion of the robustness of the estimates.

  7. Using the Relevance Vector Machine Model Combined with Local Phase Quantization to Predict Protein-Protein Interactions from Protein Sequences

    PubMed Central

    An, Ji-Yong; Meng, Fan-Rong; You, Zhu-Hong; Fang, Yu-Hong; Zhao, Yu-Jun; Zhang, Ming

    2016-01-01

    We propose a novel computational method known as RVM-LPQ that combines the Relevance Vector Machine (RVM) model and Local Phase Quantization (LPQ) to predict PPIs from protein sequences. The main improvements are the results of representing protein sequences using the LPQ feature representation on a Position Specific Scoring Matrix (PSSM), reducing the influence of noise using a Principal Component Analysis (PCA), and using a Relevance Vector Machine (RVM) based classifier. We perform 5-fold cross-validation experiments on Yeast and Human datasets, and we achieve very high accuracies of 92.65% and 97.62%, respectively, which is significantly better than previous works. To further evaluate the proposed method, we compare it with the state-of-the-art support vector machine (SVM) classifier on the Yeast dataset. The experimental results demonstrate that our RVM-LPQ method is obviously better than the SVM-based method. The promising experimental results show the efficiency and simplicity of the proposed method, which can be an automatic decision support tool for future proteomics research. PMID:27314023
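
    The overall pipeline shape (feature vectors reduced by PCA and fed to a classifier) can be sketched as below. Scikit-learn has no relevance vector machine, so a probabilistic SVM stands in for the RVM, and random vectors stand in for the LPQ-on-PSSM features; everything here is a placeholder.

```python
# Hedged sketch of the pipeline shape: PCA dimensionality reduction followed by a
# classifier. An SVM with probability outputs stands in for the RVM; the feature
# extraction (LPQ on a PSSM) is replaced by random placeholder vectors.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
X = rng.normal(size=(400, 256))            # placeholder feature vectors
y = rng.integers(0, 2, 400)                # interacting / non-interacting labels

pipe = make_pipeline(PCA(n_components=50), SVC(kernel="rbf", probability=True))
print("5-fold accuracy:", cross_val_score(pipe, X, y, cv=5).mean())
```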

  8. Using detailed inter-network simulation and model abstraction to investigate and evaluate joint battlespace infosphere (JBI) support technologies

    NASA Astrophysics Data System (ADS)

    Green, David M.; Dallaire, Joel D.; Reaper, Jerome H.

    2004-08-01

    The Joint Battlespace Infosphere (JBI) program is performing a technology investigation into global communications, data mining and warehousing, and data fusion technologies by focusing on techniques and methodologies that support twenty-first century military distributed collaboration. Advancement of these technologies is vitally important if military decision makers are to have the right data, in the right format, at the right time and place to support making the right decisions within available timelines. The individual and combinational effects arising from the application of technologies within a framework are presently far too complex to evaluate quantitatively at more than a cursory depth. In order to facilitate quantitative analysis under these circumstances, the Distributed Information Enterprise Modeling and Simulation (DIEMS) team was formed to apply modeling and simulation (M&S) techniques to help in addressing JBI analysis challenges. The DIEMS team has been tasked with utilizing collaborative distributed M&S architectures to quantitatively evaluate JBI technologies and tradeoffs. This paper first presents a high-level view of the DIEMS project. Once this approach has been established, a more concentrated view of the detailed communications simulation techniques used in generating the underlying support data sets is presented.

  9. Agenda, extended abstracts, and bibliographies for a workshop on Deposit modeling, mineral resources assessment, and their role in sustainable development

    USGS Publications Warehouse

    Briskey, Joseph A.; Schulz, Klaus J.

    2002-01-01

    Global demand for mineral resources continues to increase because of increasing global population and the desire and efforts to improve living standards worldwide. The ability to meet this growing demand for minerals is affected by the concerns about possible environmental degradation associated with minerals production and by competing land uses. Informed planning and decisions concerning sustainability and resource development require a long-term perspective and an integrated approach to land-use, resource, and environmental management worldwide. This, in turn, requires unbiased information on the global distribution of identified and especially undiscovered resources, the economic and political factors influencing their development, and the potential environmental consequences of their exploitation. The purpose of the IGC workshop is to review the state-of-the-art in mineral-deposit modeling and quantitative resource assessment and to examine their role in the sustainability of mineral use. The workshop will address such questions as: Which of the available mineral-deposit models and assessment methods are best suited for predicting the locations, deposit types, and amounts of undiscovered nonfuel mineral resources remaining in the world? What is the availability of global geologic, mineral deposit, and mineral-exploration information? How can mineral-resource assessments be used to address economic and environmental issues? Presentations will include overviews of assessment methods used in previous national and other small-scale assessments of large regions as well as resulting assessment products and their uses.

  10. Mathematical Abstraction through Scaffolding

    ERIC Educational Resources Information Center

    Ozmantar, Mehmet Fatih; Roper, Tom

    2004-01-01

    This paper examines the role of scaffolding in the process of abstraction. An activity-theoretic approach to abstraction in context is taken. This examination is carried out with reference to verbal protocols of two 17 year-old students working together on a task connected to sketching the graph of |f|x|)|. Examination of the data suggests that…

  11. Is It Really Abstract?

    ERIC Educational Resources Information Center

    Kernan, Christine

    2011-01-01

    For this author, one of the most enjoyable aspects of teaching elementary art is the willingness of students to embrace the different styles of art introduced to them. In this article, she describes a project that allows upper-elementary students to learn about abstract art and the lives of some of the master abstract artists, implement the idea…

  12. Designing for Mathematical Abstraction

    ERIC Educational Resources Information Center

    Pratt, Dave; Noss, Richard

    2010-01-01

    Our focus is on the design of systems (pedagogical, technical, social) that encourage mathematical abstraction, a process we refer to as "designing for abstraction." In this paper, we draw on detailed design experiments from our research on children's understanding about chance and distribution to re-present this work as a case study in designing…

  13. Paper Abstract Animals

    ERIC Educational Resources Information Center

    Sutley, Jane

    2010-01-01

    Abstraction is, in effect, a simplification and reduction of shapes with an absence of detail designed to comprise the essence of the more naturalistic images being depicted. Without even intending to, young children consistently create interesting, and sometimes beautiful, abstract compositions. A child's creations, moreover, will always seem to…

  14. Leadership Abstracts, 1995.

    ERIC Educational Resources Information Center

    Johnson, Larry, Ed.

    1995-01-01

    The abstracts in this series provide two-page discussions of issues related to leadership, administration, and teaching in community colleges. The 12 abstracts for Volume 8, 1995, are: (1) "Redesigning the System To Meet the Workforce Training Needs of the Nation," by Larry Warford; (2) "The College President, the Board, and the Board Chair: A…

  15. Concept Formation and Abstraction.

    ERIC Educational Resources Information Center

    Lunzer, Eric A.

    1979-01-01

    This paper examines the nature of concepts and conceptual processes and the manner of their formation. It argues that a process of successive abstraction and systematization is central to the evolution of conceptual structures. Classificatory processes are discussed and three levels of abstraction outlined. (Author/SJL)

  16. Data Abstraction in GLISP.

    ERIC Educational Resources Information Center

    Novak, Gordon S., Jr.

    GLISP is a high-level computer language (based on Lisp and including Lisp as a sublanguage) which is compiled into Lisp. GLISP programs are compiled relative to a knowledge base of object descriptions, a form of abstract datatypes. A primary goal of the use of abstract datatypes in GLISP is to allow program code to be written in terms of objects,…

  17. Leadership Abstracts, Volume 10.

    ERIC Educational Resources Information Center

    Milliron, Mark D., Ed.

    1997-01-01

    The abstracts in this series provide brief discussions of issues related to leadership, administration, professional development, technology, and education in community colleges. Volume 10 for 1997 contains the following 12 abstracts: (1) "On Community College Renewal" (Nathan L. Hodges and Mark D. Milliron); (2) "The Community College Niche in a…

  18. Designing a stencil compiler for the Connection Machine model CM-5

    SciTech Connect

    Brickner, R.G.; Holian, K.; Thiagarajan, B.; Johnsson, S.L. |

    1994-12-31

    In this paper the authors present the design of a stencil compiler for the Connection Machine system CM-5. The stencil compiler will optimize the data motion between processing nodes, minimize the data motion within a node, and minimize the data motion between registers and local memory in a node. The compiler will natively support two-dimensional stencils, but stencils in three dimensions will be automatically decomposed. Lower-dimensional stencils are treated as degenerate stencils. The compiler will be integrated as part of the CM Fortran programming system. Much of the compiler code will be adapted from the CM-2/200 stencil compiler, which is part of the Connection Machine Scientific Software Library (CMSSL) Release 3.1 for the CM-2/200, and the new compiler will be available as part of CMSSL for the CM-5. In addition to setting down design considerations, they report on the implementation status of the stencil compiler. In particular, they discuss optimization strategies and the status of code conversion from the CM-2/200 to the CM-5 architecture, and report on the measured performance of prototype target code which the compiler will generate.
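
    The kind of computation such a compiler optimizes is a nearest-neighbour stencil sweep over a grid; a naive two-dimensional five-point example is sketched below. The compiler's job is to schedule the neighbour data motion (between nodes, within a node, and between memory and registers) that this array-level version hides.

```python
# Naive sketch of a 2-D five-point stencil sweep, the kind of kernel a stencil
# compiler schedules efficiently across nodes, memory, and registers.
import numpy as np

def five_point_stencil(u, c=0.25):
    """One sweep: each interior point becomes c times the sum of its four
    neighbours (boundary handling omitted for brevity)."""
    out = u.copy()
    out[1:-1, 1:-1] = c * (u[:-2, 1:-1] + u[2:, 1:-1] +
                           u[1:-1, :-2] + u[1:-1, 2:])
    return out

u = np.zeros((64, 64))
u[32, 32] = 1.0                    # point source
for _ in range(100):
    u = five_point_stencil(u)
print("centre value after 100 sweeps:", u[32, 32])
```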

  19. Interpreting support vector machine models for multivariate group wise analysis in neuroimaging.

    PubMed

    Gaonkar, Bilwaj; Shinohara, Russell T; Davatzikos, Christos

    2015-08-01

    Machine learning based classification algorithms like support vector machines (SVMs) have shown great promise for turning high-dimensional neuroimaging data into clinically useful decision criteria. However, tracing imaging-based patterns that contribute significantly to classifier decisions remains an open problem. This is an issue of critical importance in imaging studies seeking to determine which anatomical or physiological imaging features contribute to the classifier's decision, thereby allowing users to critically evaluate the findings of such machine learning methods and to understand disease mechanisms. The majority of published work addresses the question of statistical inference for support vector classification using permutation tests based on SVM weight vectors. Such permutation testing ignores the SVM margin, which is critical in SVM theory. In this work we emphasize the use of a statistic that explicitly accounts for the SVM margin and show that the null distributions associated with this statistic are asymptotically normal. Further, our experiments show that this statistic is much less conservative than weight-based permutation tests and yet specific enough to tease out multivariate patterns in the data. Thus, we can better understand the multivariate patterns that the SVM uses for neuroimaging-based classification.
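
    The baseline the paper argues against, a permutation test on the components of a linear SVM weight vector, can be sketched as below: labels are shuffled, the SVM is refit, and the observed weights are compared with the resulting null distribution. The margin-aware statistic proposed in the paper is not reproduced here, and the data are synthetic placeholders.

```python
# Hedged sketch of a weight-based permutation test for a linear SVM (the baseline
# approach discussed above). Data are synthetic placeholders.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(5)
n, p = 80, 200                               # subjects x imaging features
X = rng.normal(size=(n, p))
y = rng.integers(0, 2, n)
X[y == 1, :5] += 0.8                         # a few truly informative features

observed = np.abs(LinearSVC(dual=False).fit(X, y).coef_.ravel())

n_perm = 200
null = np.empty((n_perm, p))
for b in range(n_perm):
    yb = rng.permutation(y)                  # shuffle labels, refit, record weights
    null[b] = np.abs(LinearSVC(dual=False).fit(X, yb).coef_.ravel())

pvals = (null >= observed).mean(axis=0)      # one p-value per feature
print("features with p < 0.05:", int((pvals < 0.05).sum()))
```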

  20. Interpreting support vector machine models for multivariate group wise analysis in neuroimaging

    PubMed Central

    Gaonkar, Bilwaj; Shinohara, Russell T; Davatzikos, Christos

    2015-01-01

    Machine learning based classification algorithms like support vector machines (SVMs) have shown great promise for turning high-dimensional neuroimaging data into clinically useful decision criteria. However, tracing imaging-based patterns that contribute significantly to classifier decisions remains an open problem. This is an issue of critical importance in imaging studies seeking to determine which anatomical or physiological imaging features contribute to the classifier's decision, thereby allowing users to critically evaluate the findings of such machine learning methods and to understand disease mechanisms. The majority of published work addresses the question of statistical inference for support vector classification using permutation tests based on SVM weight vectors. Such permutation testing ignores the SVM margin, which is critical in SVM theory. In this work we emphasize the use of a statistic that explicitly accounts for the SVM margin and show that the null distributions associated with this statistic are asymptotically normal. Further, our experiments show that this statistic is much less conservative than weight-based permutation tests and yet specific enough to tease out multivariate patterns in the data. Thus, we can better understand the multivariate patterns that the SVM uses for neuroimaging-based classification. PMID:26210913

  1. Abstract Datatypes in PVS

    NASA Technical Reports Server (NTRS)

    Owre, Sam; Shankar, Natarajan

    1997-01-01

    PVS (Prototype Verification System) is a general-purpose environment for developing specifications and proofs. This document deals primarily with the abstract datatype mechanism in PVS which generates theories containing axioms and definitions for a class of recursive datatypes. The concepts underlying the abstract datatype mechanism are illustrated using ordered binary trees as an example. Binary trees are described by a PVS abstract datatype that is parametric in its value type. The type of ordered binary trees is then presented as a subtype of binary trees where the ordering relation is also taken as a parameter. We define the operations of inserting an element into, and searching for an element in an ordered binary tree; the bulk of the report is devoted to PVS proofs of some useful properties of these operations. These proofs illustrate various approaches to proving properties of abstract datatype operations. They also describe the built-in capabilities of the PVS proof checker for simplifying abstract datatype expressions.
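
    For readers without PVS at hand, the datatype being reasoned about can be illustrated (in ordinary Python rather than PVS) as binary trees with insertion and search parameterized by the ordering relation; a property like "search finds every inserted element" is the kind of fact the report's proofs establish, and could be spot-checked against this sketch.

```python
# Illustrative Python analogue (not PVS) of ordered binary trees parametric in the
# value type, with insert and search taking the ordering relation as a parameter.
from dataclasses import dataclass
from typing import Callable, Generic, Optional, TypeVar

T = TypeVar("T")

@dataclass
class Node(Generic[T]):
    value: T
    left: "Optional[Node[T]]" = None
    right: "Optional[Node[T]]" = None

def insert(tree: Optional[Node[T]], x: T, lt: Callable[[T, T], bool]) -> Node[T]:
    """Insert x, building a new tree (persistent style, as in the PVS definitions)."""
    if tree is None:
        return Node(x)
    if lt(x, tree.value):
        return Node(tree.value, insert(tree.left, x, lt), tree.right)
    return Node(tree.value, tree.left, insert(tree.right, x, lt))

def search(tree: Optional[Node[T]], x: T, lt: Callable[[T, T], bool]) -> bool:
    """Search guided by the same ordering relation used for insertion."""
    if tree is None:
        return False
    if lt(x, tree.value):
        return search(tree.left, x, lt)
    if lt(tree.value, x):
        return search(tree.right, x, lt)
    return True

t: Optional[Node[int]] = None
for v in [5, 2, 8, 1]:
    t = insert(t, v, lambda a, b: a < b)
print(search(t, 8, lambda a, b: a < b), search(t, 7, lambda a, b: a < b))
```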

  2. Abstract coherent categories.

    PubMed

    Rehder, B; Ross, B H

    2001-09-01

    Many studies have demonstrated the importance of the knowledge that interrelates features in people's mental representation of categories and that makes our conception of categories coherent. This article focuses on abstract coherent categories, coherent categories that are also abstract because they are defined by relations independently of any features. Four experiments demonstrate that abstract coherent categories are learned more easily than control categories with identical features and statistical structure, and also that participants induced an abstract representation of the category by granting category membership to exemplars with completely novel features. The authors argue that the human conceptual system is heavily populated with abstract coherent concepts, including conceptions of social groups, societal institutions, legal, political, and military scenarios, and many superordinate categories, such as classes of natural kinds. PMID:11550753

  3. SWAT and River-2D Modelling of Pinder River for Analysing Snow Trout Habitat under Different Flow Abstraction Scenarios

    NASA Astrophysics Data System (ADS)

    Nale, J. P.; Gosain, A. K.; Khosa, R.

    2015-12-01

    Pinder River, one of the major headstreams of the River Ganga, originates in the Pindari Glaciers of the Kumaon Himalayas and, after passing through rugged gorges, meets the Alaknanda at Karanprayag, forming one of the five celestial confluences of the Upper Ganga region. While other sub-basins of the Upper Ganga are facing severe ecological losses, the Pinder basin is still in its virginal state and is well known for its beautiful valleys besides being host to unique and rare biodiversity. A proposed 252 MW run-of-river hydroelectric project at Devsari on this river has been a major concern on account of its perceived potential for egregious environmental and social impacts. In this context, the study presented tries to analyse the expected changes in aquatic habitat conditions after this project is operational (with different operation policies). The SWAT hydrological modelling platform has been used to derive streamflow simulations under various scenarios ranging from the present to likely future conditions. To analyse the habitat conditions, a two-dimensional hydraulic-habitat model, River-2D, a module of the iRIC software, is used. Snow trout has been identified as the target keystone species, and its habitat preferences, in the form of flow depths, flow velocities and substrate condition, are obtained from diverse sources of related literature and are provided as Habitat Suitability Indices to River-2D. Bed morphology constitutes an important River-2D input and has been obtained, for the designated 1 km long study reach of the Pinder up to Karanprayag, from a combination of actual field observations supplemented by SRTM 1 Arc-Second Global digital elevation data. Monthly Weighted Usable Area for three different life stages (spawning, juvenile and adult) of snow trout is obtained corresponding to seven different flow discharges ranging from 10 cumec to 1000 cumec. Comparing the present and proposed future river flow conditions obtained from SWAT modelling, losses in Weighted Usable Area, for the

  4. (abstract) A Polarimetric Model for Effects of Brine Infiltrated Snow Cover and Frost Flowers on Sea Ice Backscatter

    NASA Technical Reports Server (NTRS)

    Nghiem, S. V.; Kwok, R.; Yueh, S. H.

    1995-01-01

    A polarimetric scattering model is developed to study effects of snow cover and frost flowers with brine infiltration on thin sea ice. Leads containing thin sea ice in the Arctic icepack are important to heat exchange with the atmosphere and salt flux into the upper ocean. Surface characteristics of thin sea ice in leads are dominated by the formation of frost flowers with high salinity. In many cases, the thin sea ice layer is covered by snow, which wicks up brine from the sea ice due to capillary force. Snow and frost flowers have a significant impact on polarimetric signatures of thin ice, which needs to be studied for assessing the retrieval of geophysical parameters such as ice thickness. The frost flower or snow layer is modeled as a heterogeneous mixture consisting of randomly oriented ellipsoids and brine infiltration in an air background. Ice crystals are characterized with three different axial lengths to depict the nonspherical shape. Under the covering multispecies medium, the columnar sea-ice layer is an inhomogeneous anisotropic medium composed of ellipsoidal brine inclusions preferentially oriented in the vertical direction in an ice background. The underlying medium is homogeneous sea water. This configuration is described with layered inhomogeneous media containing multiple species of scatterers. The species are allowed to have different size, shape, and permittivity. The strong permittivity fluctuation theory is extended to account for the multiple species in the derivation of effective permittivities with distributions of scatterer orientations characterized by Eulerian rotation angles. Polarimetric backscattering coefficients are obtained consistently with the same physical description used in the effective permittivity calculation. The multispecies model allows the inclusion of high-permittivity species to study effects of brine-infiltrated snow cover and frost flowers on thin ice. The results suggest that the frost cover with a rough interface

  5. (abstract) Using TOPEX/Poseidon Sea Level Observations to Test the Sensitivity of an Ocean Model to Wind Forcing

    NASA Technical Reports Server (NTRS)

    Fu, Lee-Lueng; Chao, Yi

    1996-01-01

    It has been demonstrated that current-generation global ocean general circulation models (OGCM) are able to simulate large-scale sea level variations fairly well. In this study, a GFDL/MOM-based OGCM was used to investigate its sensitivity to different wind forcing. Simulations of global sea level using wind forcing from the ERS-1 Scatterometer and the NMC operational analysis were compared to the observations made by the TOPEX/Poseidon (T/P) radar altimeter for a two-year period. The result of the study has demonstrated the sensitivity of the OGCM to the quality of wind forcing, as well as the synergistic use of two spaceborne sensors in advancing the study of wind-driven ocean dynamics.

  6. Carbon sequestration in Synechococcus Sp.: from molecular machines to hierarchical modeling.

    SciTech Connect

    Martino, Anthony A. (Sandia National Laboratories, Livermore, CA); Heffelfinger, Grant S.; Frink, Laura J. Douglas; Davidson, George S.; Haaland, David Michael; Timlin, Jerilyn Ann; Plimpton, Steven James; Lane, Todd W.; Thomas, Edward Victor; Rintoul, Mark Daniel; Roe, Diana C. (Sandia National Laboratories, Livermore, CA); Faulon, Jean-Loup Michel; Hart, William Eugene

    2003-02-01

    The U.S. Department of Energy recently announced the first five grants for the Genomes to Life (GTL) Program. The goal of this program is to ''achieve the most far-reaching of all biological goals: a fundamental, comprehensive, and systematic understanding of life.'' While more information about the program can be found at the GTL website (www.doegenomestolife.org), this paper provides an overview of one of the five GTL projects funded, ''Carbon Sequestration in Synechococcus Sp.: From Molecular Machines to Hierarchical Modeling.'' This project is a combined experimental and computational effort emphasizing the development, prototyping, and application of new computational tools and methods to elucidate the biochemical mechanisms of the carbon sequestration of Synechococcus Sp., an abundant marine cyanobacterium known to play an important role in the global carbon cycle. Understanding, predicting, and perhaps manipulating carbon fixation in the oceans has long been a major focus of biological oceanography and has more recently been of interest to a broader audience of scientists and policy makers. It is clear that the oceanic sinks and sources of CO(2) are important terms in the global environmental response to anthropogenic atmospheric inputs of CO(2) and that oceanic microorganisms play a key role in this response. However, the relationship between this global phenomenon and the biochemical mechanisms of carbon fixation in these microorganisms is poorly understood. The project includes five subprojects: an experimental investigation, three computational biology efforts, and a fifth which deals with addressing computational infrastructure challenges of relevance to this project and the Genomes to Life program as a whole. Our experimental effort is designed to provide biology and data to drive the computational efforts and includes significant investment in developing new experimental methods for uncovering protein partners, characterizing protein complexes, identifying new

  7. Abstraction of Drift Seepage

    SciTech Connect

    J.T. Birkholzer

    2004-11-01

    This model report documents the abstraction of drift seepage, conducted to provide seepage-relevant parameters and their probability distributions for use in Total System Performance Assessment for License Application (TSPA-LA). Drift seepage refers to the flow of liquid water into waste emplacement drifts. Water that seeps into drifts may contact waste packages and potentially mobilize radionuclides, and may result in advective transport of radionuclides through breached waste packages [''Risk Information to Support Prioritization of Performance Assessment Models'' (BSC 2003 [DIRS 168796], Section 3.3.2)]. The unsaturated rock layers overlying and hosting the repository form a natural barrier that reduces the amount of water entering emplacement drifts by natural subsurface processes. For example, drift seepage is limited by the capillary barrier forming at the drift crown, which decreases or even eliminates water flow from the unsaturated fractured rock into the drift. During the first few hundred years after waste emplacement, when above-boiling rock temperatures will develop as a result of heat generated by the decay of the radioactive waste, vaporization of percolation water is an additional factor limiting seepage. Estimating the effectiveness of these natural barrier capabilities and predicting the amount of seepage into drifts is an important aspect of assessing the performance of the repository. The TSPA-LA therefore includes a seepage component that calculates the amount of seepage into drifts [''Total System Performance Assessment (TSPA) Model/Analysis for the License Application'' (BSC 2004 [DIRS 168504], Section 6.3.3.1)]. The TSPA-LA calculation is performed with a probabilistic approach that accounts for the spatial and temporal variability and inherent uncertainty of seepage-relevant properties and processes. Results are used for subsequent TSPA-LA components that may handle, for example, waste package corrosion or radionuclide transport.

  8. Human-machine interactions

    DOEpatents

    Forsythe, J. Chris; Xavier, Patrick G.; Abbott, Robert G.; Brannon, Nathan G.; Bernard, Michael L.; Speed, Ann E.

    2009-04-28

    Digital technology utilizing a cognitive model based on human naturalistic decision-making processes, including pattern recognition and episodic memory, can reduce the dependency of human-machine interactions on the abilities of a human user and can enable a machine to more closely emulate human-like responses. Such a cognitive model can enable digital technology to use cognitive capacities fundamental to human-like communication and cooperation to interact with humans.

  9. On abstract degenerate neutral differential equations

    NASA Astrophysics Data System (ADS)

    Hernández, Eduardo; O'Regan, Donal

    2016-10-01

    We introduce a new abstract model of functional differential equations, which we call abstract degenerate neutral differential equations, and we study the existence of strict solutions. The class of problems and the technical approach introduced in this paper allow us to generalize and extend recent results on abstract neutral differential equations. Some examples on nonlinear partial neutral differential equations are presented.

  10. Laser-induced breakdown spectroscopy quantitative analysis method via adaptive analytical line selection and relevance vector machine regression model

    NASA Astrophysics Data System (ADS)

    Yang, Jianhong; Yi, Cancan; Xu, Jinwu; Ma, Xianghong

    2015-05-01

    A new LIBS quantitative analysis method based on adaptive analytical line selection and a Relevance Vector Machine (RVM) regression model is proposed. First, a scheme for adaptively selecting analytical lines is put forward in order to overcome the drawback of high dependency on a priori knowledge. The candidate analytical lines are automatically selected based on the built-in characteristics of spectral lines, such as spectral intensity, wavelength and width at half height. The analytical lines which will be used as input variables of the regression model are determined adaptively according to the samples for both training and testing. Second, an LIBS quantitative analysis method based on RVM is presented. The intensities of the analytical lines and the elemental concentrations of certified standard samples are used to train the RVM regression model. The predicted elemental concentrations are given in the form of confidence intervals of a probabilistic distribution, which is helpful for evaluating the uncertainty contained in the measured spectra. Chromium concentration analysis experiments on 23 certified standard high-alloy steel samples have been carried out. The multiple correlation coefficient of the prediction was up to 98.85%, and the average relative error of the prediction was 4.01%. The experimental results showed that the proposed LIBS quantitative analysis method achieved better prediction accuracy and better modeling robustness compared with methods based on partial least squares regression, an artificial neural network and a standard support vector machine.
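
    As a rough illustration of the two stages described (adaptive selection of analytical lines followed by a sparse Bayesian regression in the spirit of a relevance vector machine), the sketch below uses synthetic spectra and scikit-learn's ARDRegression as an RVM-like stand-in; the intensity-based selection rule, thresholds, and data are assumptions rather than the authors' implementation.

      # Sketch: pick candidate analytical lines by intensity, then fit a sparse
      # Bayesian regressor (ARDRegression, an RVM-like model) to predict concentration.
      # Entirely synthetic data; the selection rule and thresholds are assumptions.
      import numpy as np
      from sklearn.linear_model import ARDRegression

      rng = np.random.default_rng(0)
      n_samples, n_channels = 30, 200
      conc = rng.uniform(0.1, 2.0, n_samples)              # "certified" concentrations
      spectra = rng.normal(0.0, 0.05, (n_samples, n_channels))
      for ch in (40, 95, 150):                             # three lines that track concentration
          spectra[:, ch] += conc * rng.uniform(0.8, 1.2)

      # Stage 1: adaptive line selection -- keep channels whose mean intensity stands
      # out from the background (a crude proxy for the intensity/width selection rules).
      mean_int = spectra.mean(axis=0)
      selected = np.where(mean_int > mean_int.mean() + 2 * mean_int.std())[0]

      # Stage 2: RVM-like sparse Bayesian regression on the selected line intensities.
      model = ARDRegression()
      model.fit(spectra[:, selected], conc)
      pred, std = model.predict(spectra[:, selected], return_std=True)
      print("selected channels:", selected.tolist())
      print("mean relative error:", float(np.mean(np.abs(pred - conc) / conc)),
            "mean predictive std:", float(std.mean()))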

  11. An in vivo autotransplant model of renal preservation: cold storage versus machine perfusion in the prevention of ischemia/reperfusion injury.

    PubMed

    La Manna, Gaetano; Conte, Diletta; Cappuccilli, Maria Laura; Nardo, Bruno; D'Addio, Francesca; Puviani, Lorenza; Comai, Giorgia; Bianchi, Francesca; Bertelli, Riccardo; Lanci, Nicole; Donati, Gabriele; Scolari, Maria Piera; Faenza, Alessandro; Stefoni, Sergio

    2009-07-01

    There is increasing evidence that organ preservation by machine perfusion is able to limit ischemia/reperfusion injury in kidney transplantation. This study was designed to compare the efficiency of hypothermic organ preservation by machine perfusion or cold storage in an animal model of kidney autotransplantation. Twelve pigs underwent left nephrectomy after a warm ischemic time; the organs were preserved by machine perfusion (n = 6) or cold storage (n = 6) and then autotransplanted with immediate contralateral nephrectomy. The following parameters were compared between the two groups of animals: hematological and urine indexes of renal function, blood/gas analysis values, histological features, tissue adenosine-5'-triphosphate (ATP) content, and perforin gene expression in kidney biopsies; organ weight changes were compared before and after preservation. The amount of cellular ATP was significantly higher in organs preserved by machine perfusion; moreover, the study of apoptosis induction revealed an enhanced perforin expression in the kidneys that underwent simple hypothermic preservation compared to the machine-preserved ones. Organ weight was significantly decreased after cold storage, but it remained quite stable for machine-perfused kidneys. The present model seems to suggest that organ preservation by hypothermic machine perfusion is able to better control cellular impairment in comparison with cold storage.

  12. Toward the Development of a Fundamentally Based Chemical Model for Cyclopentanone: High-Pressure-Limit Rate Constants for H Atom Abstraction and Fuel Radical Decomposition.

    PubMed

    Zhou, Chong-Wen; Simmie, John M; Pitz, William J; Curran, Henry J

    2016-09-15

    Theoretical aspects of the development of a chemical kinetic model for the pyrolysis and combustion of a cyclic ketone, cyclopentanone, are considered. Calculated thermodynamic and kinetic data are presented for the first time for the principal species including 2- and 3-oxo-cyclopentyl radicals, which are in reasonable agreement with the literature. These radicals can be formed via H atom abstraction reactions by Ḣ and Ö atoms and ȮH, HȮ2, and ĊH3 radicals, the rate constants of which have been calculated. Abstraction from the β-hydrogen atom is the dominant process when ȮH is involved, but the reverse holds true for HȮ2 radicals. The subsequent β-scission of the radicals formed is also determined, and it is shown that recent tunable VUV photoionization mass spectrometry experiments can be interpreted in this light. The bulk of the calculations used the composite model chemistry G4, which was benchmarked in the simplest case with a coupled cluster treatment, CCSD(T), in the complete basis set limit. PMID:27558073
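
    Rate constants of this kind are conventionally reported as modified Arrhenius fits, k(T) = A T^n exp(-Ea/RT); the snippet below only evaluates that expression, and the A, n, and Ea values are placeholders, not the constants calculated in the paper.

      # Evaluate a modified Arrhenius fit k(T) = A * T**n * exp(-Ea / (R * T)).
      # The coefficients below are placeholders, NOT the values reported in the paper.
      import math

      R = 1.987204e-3  # gas constant in kcal mol^-1 K^-1

      def rate_constant(T, A=1.0e6, n=2.0, Ea=5.0):
          """Rate constant for hypothetical A [cm^3 mol^-1 s^-1 K^-n], n [-], Ea [kcal/mol]."""
          return A * T**n * math.exp(-Ea / (R * T))

      for T in (500.0, 1000.0, 1500.0):
          print(T, rate_constant(T))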

  14. Abstraction of Seepage into Drifts

    SciTech Connect

    WILSON,MICHAEL L.; HO,CLIFFORD K.

    2000-10-16

    The abstraction model used for seepage into emplacement drifts in recent TSPA simulations is presented. This model contributes to the calculation of the quantity of water that might contact waste if it is emplaced at Yucca Mountain. Other important components of that calculation not discussed here include models for climate, infiltration, unsaturated-zone flow, and thermohydrology; drip-shield and waste-package degradation; and flow around and through the drip shield and waste package. The seepage abstraction model is stochastic because predictions of seepage are necessarily quite uncertain. The model provides uncertainty distributions for the seepage fraction (the fraction of waste-package locations experiencing seepage) and the seepage flow rate as functions of percolation flux. In addition, effects of intermediate-scale flow focusing and seep channeling are included by means of a flow-focusing factor, which is also represented by an uncertainty distribution.
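
    A toy illustration of how such a stochastic abstraction can be sampled is sketched below: for a given percolation flux, a flow-focusing factor, a seepage fraction, and a seepage flow rate are drawn by Monte Carlo. The distribution shapes and parameter values are assumptions for illustration only, not the TSPA parameterization.

      # Toy Monte Carlo sampling of a stochastic seepage abstraction: for a given
      # percolation flux, draw a flow-focusing factor, a seepage fraction and a
      # seepage flow rate. All distributions and parameters are illustrative only.
      import numpy as np

      rng = np.random.default_rng(1)

      def sample_seepage(percolation_flux_mm_yr, n=10000):
          focus = rng.lognormal(mean=0.0, sigma=0.5, size=n)      # flow-focusing factor
          local_flux = percolation_flux_mm_yr * focus
          # Seepage fraction rises with local flux (hypothetical logistic dependence).
          seep_fraction = 1.0 / (1.0 + np.exp(-(local_flux - 50.0) / 10.0))
          seeps = rng.random(n) < seep_fraction
          # Seepage rate as a hypothetical fraction of local flux where seepage occurs.
          seep_rate = np.where(seeps, 0.1 * local_flux, 0.0)
          return seeps.mean(), seep_rate.mean()

      for flux in (5.0, 25.0, 100.0):
          frac, rate = sample_seepage(flux)
          print(f"percolation {flux:6.1f} mm/yr -> seepage fraction {frac:.2f}, mean rate {rate:.2f}")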

  15. Electric machine

    SciTech Connect

    El-Refaie, Ayman Mohamed Fawzi; Reddy, Patel Bhageerath

    2012-07-17

    An interior permanent magnet electric machine is disclosed. The interior permanent magnet electric machine comprises a rotor comprising a plurality of radially placed magnets each having a proximal end and a distal end, wherein each magnet comprises a plurality of magnetic segments and at least one magnetic segment towards the distal end comprises a high resistivity magnetic material.

  16. Nonplanar machines

    SciTech Connect

    Ritson, D.

    1989-05-01

    This talk examines methods available to minimize, but never entirely eliminate, degradation of machine performance caused by terrain following. Breaking of planar machine symmetry for engineering convenience and/or monetary savings must be balanced against small performance degradation, and can only be decided on a case-by-case basis. 5 refs.

  17. Permutation Machines.

    PubMed

    Bhatia, Swapnil; LaBoda, Craig; Yanez, Vanessa; Haddock-Angelli, Traci; Densmore, Douglas

    2016-08-19

    We define a new inversion-based machine called a permuton of n genetic elements, which allows the n elements to be rearranged in any of the n·(n - 1)·(n - 2)···2 = n! distinct orderings. We present two design algorithms for architecting such a machine. We define a notion of a feasible design and use the framework to discuss the feasibility of the permuton architectures. We have implemented our design algorithms in a freely usable web-accessible software for exploration of these machines. Permutation machines could be used as memory elements or state machines and explicitly illustrate a rational approach to designing biological systems.
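
    The n! count quoted above is simply the number of orderings of n distinct elements; a short check with hypothetical part names is shown below.

      # Enumerate all orderings of n genetic elements (hypothetical names) and check
      # that their number equals n! as stated above.
      from itertools import permutations
      from math import factorial

      elements = ["promoter", "rbs", "gfp", "terminator"]   # n = 4, illustrative parts
      orderings = list(permutations(elements))
      assert len(orderings) == factorial(len(elements))      # 4! = 24
      print(len(orderings), orderings[0])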

  19. Financial and environmental modelling of water hardness--implications for utilising harvested rainwater in washing machines.

    PubMed

    Morales-Pinzón, Tito; Lurueña, Rodrigo; Gabarrell, Xavier; Gasol, Carles M; Rieradevall, Joan

    2014-02-01

    A study was conducted to determine the financial and environmental effects of water quality on rainwater harvesting systems. The potential for replacing tap water used in washing machines with rainwater was studied, and the analysis presented in this paper is valid for applications that include washing machines where tap water hardness may be important. A wide range of conditions was analysed, including rainfall (284-1,794 mm/year), water hardness (14-315 mg/L CaCO3), tap water prices (0.85-2.65 Euros/m(3)) in different Spanish urban areas (from individual buildings to whole neighbourhoods), and other scenarios (including materials and water storage capacity). Rainfall was essential for rainwater harvesting, but the tap water prices and the water hardness were the main factors for consideration in the financial and the environmental analyses, respectively. The local tap water hardness and prices can cause greater financial and environmental impacts than the type of material used for the water storage tank or the volume of the tank. The use of rainwater as a substitute for hard tap water in washing machines strengthens the financial case. Although tap water hardness significantly affects the financial analysis, the greatest effect was found in the environmental analysis. When hard tap water needed to be replaced, it was found that a water price of 1 Euro/m(3) could render the use of rainwater financially feasible when using large-scale rainwater harvesting systems. When the water hardness was greater than 300 mg/L CaCO3, a financial analysis revealed that a net present value greater than 270 Euros/dwelling could be obtained at the neighbourhood scale, and there could be a reduction in the Global Warming Potential (100 years) ranging between 35 and 101 kg CO2 eq./dwelling/year.
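
    As a rough illustration of the financial comparison reported (net present value of substituting tap water with harvested rainwater for washing machines), the sketch below discounts annual water-cost savings against an up-front installation cost; every figure, including the discount rate, is hypothetical rather than taken from the study.

      # Toy net-present-value calculation for substituting tap water with harvested
      # rainwater in washing machines. Every number below is hypothetical.
      def npv(installation_cost, annual_saving, years=20, discount_rate=0.03):
          """NPV in euros of annual savings over 'years' minus the up-front cost."""
          discounted = sum(annual_saving / (1.0 + discount_rate) ** t
                           for t in range(1, years + 1))
          return discounted - installation_cost

      washing_volume_m3 = 12.0        # rainwater used per dwelling per year (assumed)
      tap_price_eur_m3 = 1.0          # tap water price (assumed)
      annual_saving = washing_volume_m3 * tap_price_eur_m3
      print(round(npv(installation_cost=150.0, annual_saving=annual_saving), 2))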

  20. Evaluation of different time domain peak models using extreme learning machine-based peak detection for EEG signal.

    PubMed

    Adam, Asrul; Ibrahim, Zuwairie; Mokhtar, Norrima; Shapiai, Mohd Ibrahim; Cumming, Paul; Mubin, Marizan

    2016-01-01

    Various peak models have been introduced to detect and analyze peaks in the time domain analysis of electroencephalogram (EEG) signals. In general, a peak model in time domain analysis consists of a set of signal parameters, such as amplitude, width, and slope. Models including those proposed by Dumpala, Acir, Liu, and Dingle are routinely used to detect peaks in EEG signals acquired in clinical studies of epilepsy or eye blink. The optimal peak model is the one that gives the most reliable peak detection performance in a particular application. A fair measure of performance of different models requires a common and unbiased platform. In this study, we evaluate the performance of the four different peak models using the extreme learning machine (ELM)-based peak detection algorithm. We found that the Dingle model gave the best performance, with 72 % accuracy in the analysis of real EEG data. Statistical analysis confirmed that the Dingle model afforded significantly better mean testing accuracy than did the Acir and Liu models, which were in the range 37-52 %. Meanwhile, the Dingle model showed no significant difference compared to the Dumpala model.
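
    An extreme learning machine of the kind used for the peak/non-peak decision can be sketched in a few lines: a random, untrained hidden layer followed by a least-squares solve for the output weights. The feature layout and sizes below are assumptions, not the study's peak-model parameters.

      # Minimal extreme learning machine (ELM) classifier sketch: random hidden layer,
      # output weights solved by least squares. Synthetic peak/non-peak features.
      import numpy as np

      rng = np.random.default_rng(2)
      X = rng.normal(size=(200, 3))                      # e.g. amplitude, width, slope features
      y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(float)    # synthetic peak labels

      n_hidden = 50
      W = rng.normal(size=(X.shape[1], n_hidden))        # random input weights (not trained)
      b = rng.normal(size=n_hidden)

      def hidden(X):
          return np.tanh(X @ W + b)

      H = hidden(X)
      beta, *_ = np.linalg.lstsq(H, y, rcond=None)       # output weights by least squares

      pred = (hidden(X) @ beta > 0.5).astype(float)
      print("training accuracy:", (pred == y).mean())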

  2. 2016 ACPA MEETING ABSTRACTS.

    PubMed

    2016-07-01

    The peer-reviewed abstracts presented at the 73rd Annual Meeting of the ACPA are published as submitted by the authors. For financial conflict of interest disclosure, please visit http://meeting.acpa-cpf.org/disclosures.html. PMID:27447885

  3. Abstracts--Citations

    ERIC Educational Resources Information Center

    Occupational Mental Health, 1971

    1971-01-01

    Provides abstracts and citations of journal articles and reports dealing with aspects of mental health. Topics include alcoholism, drug abuse, disadvantaged, mental health programs, rehabilitation, student mental health, and others. (SB)

  4. Automatic Abstraction in Planning

    NASA Technical Reports Server (NTRS)

    Christensen, J.

    1991-01-01

    Traditionally, abstraction in planning has been accomplished by either state abstraction or operator abstraction, neither of which has been fully automatic. We present a new method, predicate relaxation, for automatically performing state abstraction. PABLO, a nonlinear hierarchical planner, implements predicate relaxation. Theoretical, as well as empirical results are presented which demonstrate the potential advantages of using predicate relaxation in planning. We also present a new definition of hierarchical operators that allows us to guarantee a limited form of completeness. This new definition is shown to be, in some ways, more flexible than previous definitions of hierarchical operators. Finally, a Classical Truth Criterion is presented that is proven to be sound and complete for a planning formalism that is general enough to include most classical planning formalisms that are based on the STRIPS assumption.

  5. Introducing Abstract Design

    ERIC Educational Resources Information Center

    Ciscell, Bob

    1973-01-01

    A functional approach involving collage, two-dimensional design, three-dimensional construction, and elements of Cubism, is used to teach abstract design in elementary and junior high school art classes. (DS)

  6. Abstracts of SIG Sessions.

    ERIC Educational Resources Information Center

    Proceedings of the ASIS Annual Meeting, 1991

    1991-01-01

    Presents abstracts of 36 special interest group (SIG) sessions. Highlights include the Chemistry Online Retrieval Experiment; organizing and retrieving images; intelligent information retrieval using natural language processing; interdisciplinarity; libraries as publishers; indexing hypermedia; cognitive aspects of classification; computer-aided…

  7. 1971 Annual Conference Abstracts

    ERIC Educational Resources Information Center

    Journal of Engineering Education, 1971

    1971-01-01

    Included are 112 abstracts listed under headings such as: acoustics, continuing engineering studies, educational research and methods, engineering design, libraries, liberal studies, and materials. Other areas include agricultural, electrical, mechanical, mineral, and ocean engineering. (TS)

  8. Paradigms for Abstracting Systems.

    ERIC Educational Resources Information Center

    Pinto, Maria; Galvez, Carmen

    1999-01-01

    Discussion of abstracting systems focuses on the paradigm concept and identifies and explains four paradigms: communicational, or information theory; physical, including information retrieval; cognitive, including information processing and artificial intelligence; and systemic, including quality management. Emphasizes multidimensionality and…

  9. Prediction of Machine Tool Condition Using Support Vector Machine

    NASA Astrophysics Data System (ADS)

    Wang, Peigong; Meng, Qingfeng; Zhao, Jian; Li, Junjie; Wang, Xiufeng

    2011-07-01

    Condition monitoring and prediction for CNC machine tools are investigated in this paper. Considering that condition data for CNC machine tools are often available only as small numbers of samples, a condition prediction method based on support vector machines (SVMs) is proposed, and one-step and multi-step condition prediction models are constructed. The support vector machine prediction models are used to predict the trends of the working condition of a certain type of CNC worm wheel and gear grinding machine by applying sequence data of the vibration signal collected during machining. The relationship between different eigenvalues of the CNC vibration signal and machining quality is also discussed. The test results show that the trend of the vibration signal peak-to-peak value in the surface normal direction is most relevant to the trend of the surface roughness value. In predicting trends of the working condition, the support vector machine has higher prediction accuracy in both short-term (one-step) and long-term (multi-step) prediction than the autoregressive (AR) model and the RBF neural network. Experimental results show that it is feasible to apply support vector machines to CNC machine tool condition prediction.
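
    A minimal sketch of one-step condition prediction with a support vector machine, using lagged values of a monitored signal as inputs, is given below; the synthetic vibration-like series, lag length, and kernel settings are assumptions rather than the paper's setup.

      # One-step time-series prediction with support vector regression on lagged
      # features; the series, lag length and kernel settings are illustrative only.
      import numpy as np
      from sklearn.svm import SVR

      rng = np.random.default_rng(3)
      t = np.arange(300)
      series = 0.01 * t + np.sin(0.2 * t) + rng.normal(0, 0.1, t.size)  # drifting "condition" signal

      lag = 5
      X = np.array([series[i:i + lag] for i in range(len(series) - lag)])
      y = series[lag:]

      split = 250
      model = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X[:split], y[:split])
      pred = model.predict(X[split:])
      print("one-step RMSE:", float(np.sqrt(np.mean((pred - y[split:]) ** 2))))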

  10. Abstracts of contributed papers

    SciTech Connect

    Not Available

    1994-08-01

    This volume contains 571 abstracts of contributed papers to be presented during the Twelfth US National Congress of Applied Mechanics. Abstracts are arranged in the order in which they fall in the program -- the main sessions are listed chronologically in the Table of Contents. The Author Index is in alphabetical order and lists each paper number (matching the schedule in the Final Program) with its corresponding page number in the book.

  11. Quantum Boltzmann Machine

    NASA Astrophysics Data System (ADS)

    Kulchytskyy, Bohdan; Andriyash, Evgeny; Amin, Mohammed; Melko, Roger

    The field of machine learning has been revolutionized by the recent improvements in the training of deep networks. Their architecture is based on a set of stacked layers of simpler modules. One of the most successful building blocks, known as a restricted Boltzmann machine, is an energetic model based on the classical Ising Hamiltonian. In our work, we investigate the benefits of quantum effects on the learning capacity of Boltzmann machines by extending its underlying Hamiltonian with a transverse field. For this purpose, we employ exact and stochastic training procedures on data sets with physical origins.
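
    The extension described, replacing the classical Ising energy with a transverse-field Hamiltonian, is conventionally written as

      H = -\sum_{i<j} J_{ij}\,\sigma^{z}_{i}\,\sigma^{z}_{j} \;-\; \sum_{i} h_{i}\,\sigma^{z}_{i} \;-\; \Gamma \sum_{i} \sigma^{x}_{i}

    where the couplings J_{ij}, the biases h_{i}, and the transverse field \Gamma are the adjustable quantities; setting \Gamma = 0 recovers the classical Boltzmann machine energy. The notation is standard and not specific to this work.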

  12. Abstracting in the Context of Spontaneous Learning

    ERIC Educational Resources Information Center

    Williams, Gaye

    2007-01-01

    There is evidence that spontaneous learning leads to relational understanding and high positive affect. To study spontaneous abstracting, a model was constructed by combining the RBC model of abstraction with Krutetskii's mental activities. Using video-stimulated interviews, the model was then used to analyze the behavior of two Year 8 students…

  13. Developing a support vector machine based QSPR model for prediction of half-life of some herbicides.

    PubMed

    Samghani, Kobra; HosseinFatemi, Mohammad

    2016-07-01

    The half-life (t1/2) of 58 herbicides was modeled by a quantitative structure-property relationship (QSPR) based on molecular structure descriptors. After the calculation and screening of a large number of molecular descriptors, the most relevant ones, selected by stepwise multiple linear regression, were used to develop linear and nonlinear models using multiple linear regression (MLR) and a support vector machine (SVM), respectively. Comparison of the statistical parameters of the linear and nonlinear models indicates the suitability of the SVM over the MLR model for predicting the half-life of herbicides. The statistical parameters R(2) and standard error were 0.96 and 0.087 for the training set of the SVM model, respectively, and 0.93 and 0.092 for the test set. The SVM model was evaluated by a leave-one-out cross-validation test, whose result indicates the robustness and predictability of the model. The established SVM model was used to predict the half-life of other herbicides located in the applicability domain of the model, which was determined via the leverage approach. The results of this study indicate that the relationship between the selected molecular descriptors and herbicide half-life is nonlinear. These results emphasize that the degradation of herbicides in the environment is a very complex process affected by various environmental and structural features; therefore, a simple linear model cannot successfully predict it. PMID:26970881

  14. A hybrid model of self organizing maps and least square support vector machine for river flow forecasting

    NASA Astrophysics Data System (ADS)

    Ismail, S.; Shabri, A.; Samsudin, R.

    2012-11-01

    Successful river flow forecasting is a major goal and an essential procedure that is necessary in water resource planning and management. There are many forecasting techniques used for river flow forecasting. This study proposes a hybrid model based on a combination of two methods, the Self Organizing Map (SOM) and the Least Squares Support Vector Machine (LSSVM), referred to as the SOM-LSSVM model, for river flow forecasting. The hybrid model uses the SOM algorithm to cluster the entire dataset into several disjoint clusters, where the monthly river flow data with similar input patterns are grouped together from a high-dimensional input space onto a low-dimensional output layer. By doing this, the data with similar input patterns will be mapped to neighbouring neurons in the SOM's output layer. After the dataset has been decomposed into several disjoint clusters, an individual LSSVM is applied to forecast the river flow. The feasibility of this proposed model is evaluated with respect to the actual river flow data from the Bernam River located in Selangor, Malaysia. The performance of the SOM-LSSVM was compared with other single models such as ARIMA, ANN and LSSVM. The performance of these models was then evaluated using various performance indicators. The experimental results show that the SOM-LSSVM model outperforms the other models and performs better than ANN, LSSVM as well as ARIMA for river flow forecasting. It also indicates that the proposed model can forecast more precisely and provides a promising alternative technique for river flow forecasting.
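
    The cluster-then-regress structure of such a hybrid can be sketched as follows; KMeans stands in for the SOM and scikit-learn's SVR stands in for the LSSVM (substitutions made only because both are readily available), and the seasonal series is synthetic rather than the Bernam River data.

      # Sketch of a cluster-then-regress hybrid: cluster lagged flow patterns (KMeans
      # as a stand-in for a SOM), then fit one SVR per cluster (stand-in for LSSVM).
      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.svm import SVR

      rng = np.random.default_rng(4)
      months = np.arange(360)
      flow = 50 + 30 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 5, months.size)

      lag = 3
      X = np.array([flow[i:i + lag] for i in range(len(flow) - lag)])
      y = flow[lag:]

      km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
      models = {c: SVR(kernel="rbf", C=100.0).fit(X[km.labels_ == c], y[km.labels_ == c])
                for c in range(4)}

      def predict(x_lagged):
          c = int(km.predict(x_lagged.reshape(1, -1))[0])
          return float(models[c].predict(x_lagged.reshape(1, -1))[0])

      print(predict(X[-1]), y[-1])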

  15. Temperature drift modeling and compensation of fiber optical gyroscope based on improved support vector machine and particle swarm optimization algorithms.

    PubMed

    Wang, Wei; Chen, Xiyuan

    2016-08-10

    Modeling and compensation of temperature drift is an important method for improving the precision of fiber-optic gyroscopes (FOGs). In this paper, a new method of modeling and compensation for FOGs based on improved particle swarm optimization (PSO) and support vector machine (SVM) algorithms is proposed. The convergence speed and reliability of PSO are improved by introducing a dynamic inertia factor. The regression accuracy of SVM is improved by introducing a combined kernel function with four parameters and piecewise regression with fixed steps. The steps are as follows. First, the parameters of the combined kernel functions are optimized by the improved PSO algorithm. Second, the proposed kernel function of SVM is used to carry out piecewise regression, and the regression model is also obtained. Third, the temperature drift is compensated for by the regression data. The regression accuracy of the proposed method (in the case of mean square percentage error indicators) increased by 83.81% compared to the traditional SVM.
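
    A compact particle swarm optimization loop over SVR hyperparameters, in the spirit of the PSO-SVM combination described, is sketched below; the linearly decreasing inertia factor, search bounds, and synthetic temperature-drift data are assumptions, and a plain RBF kernel is used instead of the paper's four-parameter combined kernel.

      # Sketch: particle swarm optimization (with a linearly decreasing inertia factor)
      # searching SVR hyperparameters (log10 C, log10 gamma) on synthetic drift data.
      import numpy as np
      from sklearn.svm import SVR
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(5)
      temp = np.linspace(-20, 60, 200).reshape(-1, 1)
      drift = 0.02 * temp[:, 0] ** 2 - 0.5 * temp[:, 0] + rng.normal(0, 2, 200)

      def fitness(pos):
          C, gamma = 10.0 ** pos[0], 10.0 ** pos[1]
          model = SVR(kernel="rbf", C=C, gamma=gamma)
          return cross_val_score(model, temp, drift, cv=3,
                                 scoring="neg_mean_squared_error").mean()

      n_particles, n_iter = 10, 20
      pos = rng.uniform([-1, -4], [3, 1], size=(n_particles, 2))   # log10 C, log10 gamma
      vel = np.zeros_like(pos)
      pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
      gbest = pbest[pbest_val.argmax()].copy()

      for it in range(n_iter):
          w = 0.9 - 0.5 * it / n_iter                  # dynamic (decreasing) inertia factor
          r1, r2 = rng.random((n_particles, 2)), rng.random((n_particles, 2))
          vel = w * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
          pos = np.clip(pos + vel, [-1, -4], [3, 1])
          vals = np.array([fitness(p) for p in pos])
          improved = vals > pbest_val
          pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
          gbest = pbest[pbest_val.argmax()].copy()

      print("best log10(C), log10(gamma):", gbest, "CV MSE:", -pbest_val.max())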

  16. Modeling and output tracking of transverse flux permanent magnet machines using high gain observer and RBF neural network.

    PubMed

    Karimi, H R; Babazadeh, A

    2005-10-01

    This paper deals with modeling and adaptive output tracking of a transverse flux permanent magnet machine as a nonlinear system with unknown nonlinearities by utilizing a high-gain observer and radial basis function networks. The proposed model is developed by computing the permeance between rotor and stator using quasi-flux tubes. Based on this model, the techniques of feedback linearization and H-infinity control are used to design an adaptive control law that compensates the unknown nonlinear parts, so that the effect of cogging torque, acting as a disturbance, on the rotor-angle and angular-velocity tracking performance is reduced. Finally, the capability of the proposed method in tracking both the angle and the angular velocity is shown in the simulation results.

  17. Language and Tool Support for Class and State Machine Refinement in UML-B

    NASA Astrophysics Data System (ADS)

    Said, Mar Yah; Butler, Michael; Snook, Colin

    UML-B is a ‘UML-like’ graphical front end for Event-B that provides support for object-oriented modelling concepts. In particular, UML-B supports class diagrams and state machines, concepts that are not explicitly supported in plain Event-B. In Event-B, refinement is used to relate system models at different abstraction levels. The same abstraction-refinement concepts can also be applied in UML-B. This paper introduces the notions of refined classes and refined state machines to enable refinement of classes and state machines in UML-B. Together with these notions, a technique for moving an event between classes to facilitate abstraction is also introduced. Our work makes explicit the structures of class and state machine refinement in UML-B. The UML-B drawing tool and Event-B translator are extended to support the new refinement concepts. A case study of an automated teller machine (ATM) is presented to demonstrate the application and effectiveness of refined classes and refined state machines.

  18. 3D Modelling of Tunnel Excavation Using a Pressurized Tunnel Boring Machine in Overconsolidated Soils

    NASA Astrophysics Data System (ADS)

    Demagh, Rafik; Emeriault, Fabrice

    2013-06-01

    The construction of shallow tunnels in urban areas requires a prior assessment of their effects on existing structures. In the case of shield tunnel boring machines (TBM), the various construction stages carried out constitute a highly three-dimensional problem of soil/structure interaction and are not easy to represent in a complete numerical simulation. Consequently, the tunnelling-induced soil movements are quite difficult to evaluate. A 3D simulation procedure, using a finite differences code, namely FLAC3D, taking into account in an explicit manner the main sources of movement in the soil mass, is proposed in this paper. It is illustrated by the particular case of Toulouse Subway Line B, for which experimental data are available and where the soil is saturated and highly overconsolidated. A comparison made between the numerical simulation results and the in-situ measurements shows that the proposed 3D simulation procedure is relevant, in particular regarding the adopted representation of the different operations performed by the tunnel boring machine (excavation, confining pressure, shield advancement, installation of the tunnel lining, grouting of the annular void, etc.). Furthermore, a parametric study enabled a better understanding of the origin of the singular behaviour observed at the ground surface and within the soil mass, not previously reported in the literature.

  19. Landscape epidemiology and machine learning: A geospatial approach to modeling West Nile virus risk in the United States

    NASA Astrophysics Data System (ADS)

    Young, Sean Gregory

    The complex interactions between human health and the physical landscape and environment have been recognized, if not fully understood, since the ancient Greeks. Landscape epidemiology, sometimes called spatial epidemiology, is a sub-discipline of medical geography that uses environmental conditions as explanatory variables in the study of disease or other health phenomena. This theory suggests that pathogenic organisms (whether germs or larger vector and host species) are subject to environmental conditions that can be observed on the landscape, and by identifying where such organisms are likely to exist, areas at greatest risk of the disease can be derived. Machine learning is a sub-discipline of artificial intelligence that can be used to create predictive models from large and complex datasets. West Nile virus (WNV) is a relatively new infectious disease in the United States, and has a fairly well-understood transmission cycle that is believed to be highly dependent on environmental conditions. This study takes a geospatial approach to the study of WNV risk, using both landscape epidemiology and machine learning techniques. A combination of remotely sensed and in situ variables are used to predict WNV incidence with a correlation coefficient as high as 0.86. A novel method of mitigating the small numbers problem is also tested and ultimately discarded. Finally a consistent spatial pattern of model errors is identified, indicating the chosen variables are capable of predicting WNV disease risk across most of the United States, but are inadequate in the northern Great Plains region of the US.

  20. Metacognition and abstract reasoning.

    PubMed

    Markovits, Henry; Thompson, Valerie A; Brisson, Janie

    2015-05-01

    The nature of people's meta-representations of deductive reasoning is critical to understanding how people control their own reasoning processes. We conducted two studies to examine whether people have a metacognitive representation of abstract validity and whether familiarity alone acts as a separate metacognitive cue. In Study 1, participants were asked to make a series of (1) abstract conditional inferences, (2) concrete conditional inferences with premises having many potential alternative antecedents and thus specifically conducive to the production of responses consistent with conditional logic, or (3) concrete problems with premises having relatively few potential alternative antecedents. Participants gave confidence ratings after each inference. Results show that confidence ratings were positively correlated with logical performance on abstract problems and concrete problems with many potential alternatives, but not with concrete problems with content less conducive to normative responses. Confidence ratings were higher with few alternatives than for abstract content. Study 2 used a generation of contrary-to-fact alternatives task to improve levels of abstract logical performance. The resulting increase in logical performance was mirrored by increases in mean confidence ratings. Results provide evidence for a metacognitive representation based on logical validity, and show that familiarity acts as a separate metacognitive cue.

  1. Use of Machine Learning Techniques for Identification of Robust Teleconnections to East African Rainfall Variability in Observations and Models

    NASA Technical Reports Server (NTRS)

    Roberts, J. Brent; Robertson, Franklin R.; Funk, Chris

    2014-01-01

    Providing advance warning of East African rainfall variations is a particular focus of several groups, including those participating in the Famine Early Warning Systems Network. Both seasonal and long-term model projections of climate variability are being used to examine the societal impacts of hydrometeorological variability on seasonal to interannual and longer time scales. The NASA / USAID SERVIR project, which leverages satellite and modeling-based resources for environmental decision making in developing nations, is focusing on the evaluation of both seasonal and climate model projections to develop downscaled scenarios for use in impact modeling. The utility of these projections relies on the ability of current models to capture the embedded relationships between East African rainfall and evolving forcing within the coupled ocean-atmosphere-land climate system. Previous studies have posited relationships between variations in El Niño, the Walker circulation, Pacific decadal variability (PDV), and anthropogenic forcing. This study applies machine learning methods (e.g., clustering, probabilistic graphical models, nonlinear PCA) to observational datasets in an attempt to expose the importance of local and remote forcing mechanisms of East African rainfall variability. The ability of the NASA Goddard Earth Observing System (GEOS5) coupled model to capture the associated relationships will be evaluated using Coupled Model Intercomparison Project Phase 5 (CMIP5) simulations.

  2. Modeling and characterization of disease associated subnetworks in the human interactome using machine learning

    PubMed Central

    Sam, Lee T.; Michailidis, George

    2009-01-01

    The availability of large-scale, genome-wide data about the molecular interactome of entire organisms has made possible new types of integrative studies, making use of rapidly accumulating knowledge of gene-disease associations. Previous studies have established the presence of functional biomodules in the molecular interaction network of living organisms, a number of which have been associated with the pathogenesis and progression of human disease. While a number of studies have examined the networks and biomodules associated with disease, the properties that contribute to the particular susceptibility of these subnetworks to disruptions leading to disease phenotypes have not been extensively studied. We take a machine learning approach to the characterization of these disease subnetworks associated with complex and single-gene diseases, taking into account both the biological roles of their constituent genes and topological properties of the networks they form. PMID:21347156

  3. Abstracting and indexing guide

    USGS Publications Warehouse

    U.S. Department of the Interior; Office of Water Resources Research

    1974-01-01

    These instructions have been prepared for those who abstract and index scientific and technical documents for the Water Resources Scientific Information Center (WRSIC). With the recent publication growth in all fields, information centers have undertaken the task of keeping the various scientific communities aware of current and past developments. An abstract with carefully selected index terms offers the user of WRSIC services a more rapid means for deciding whether a document is pertinent to his needs and professional interests, thus saving him the time necessary to scan the complete work. These means also provide WRSIC with a document representation or surrogate which is more easily stored and manipulated to produce various services. Authors are asked to accept the responsibility for preparing abstracts of their own papers to facilitate quick evaluation, announcement, and dissemination to the scientific community.

  4. Thyra Abstract Interface Package

    2005-09-01

    Thyra primarily defines a set of abstract C++ class interfaces needed for the development of abstract numerical algorithms (ANAs) such as iterative linear solvers and transient solvers, all the way up to optimization. At the foundation of these interfaces are abstract C++ classes for vectors, vector spaces, linear operators and multi-vectors. Also included in the Thyra package is C++ code for creating concrete vector, vector space, linear operator, and multi-vector subclasses, as well as other utilities to aid in the development of ANAs. Currently, very general and efficient concrete subclass implementations exist for serial and SPMD in-core vectors and multi-vectors. Code also currently exists for testing objects and providing composite objects such as product vectors.
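
    The interface-first design described (abstract vector and operator classes that abstract numerical algorithms program against, with concrete subclasses supplied separately) can be illustrated with Python abstract base classes; the sketch below is an analogy in another language and does not reproduce Thyra's actual C++ API.

      # Analogy only (not the Thyra C++ API): abstract vector/operator interfaces that
      # an abstract numerical algorithm (here, a simple Richardson iteration for Ax = b)
      # uses without knowing the concrete representation.
      from abc import ABC, abstractmethod
      import numpy as np

      class Vector(ABC):
          @abstractmethod
          def axpy(self, alpha, x): ...          # self <- alpha*x + self
          @abstractmethod
          def dot(self, x): ...
          @abstractmethod
          def copy(self): ...

      class LinearOp(ABC):
          @abstractmethod
          def apply(self, x): ...                # returns a new Vector

      class DenseVector(Vector):
          def __init__(self, data): self.data = np.asarray(data, dtype=float)
          def axpy(self, alpha, x): self.data += alpha * x.data
          def dot(self, x): return float(self.data @ x.data)
          def copy(self): return DenseVector(self.data.copy())

      class DenseOp(LinearOp):
          def __init__(self, A): self.A = np.asarray(A, dtype=float)
          def apply(self, x): return DenseVector(self.A @ x.data)

      def richardson(op, b, x, omega=0.1, iters=200):
          """ANA written only against the abstract interfaces."""
          for _ in range(iters):
              r = b.copy()
              r.axpy(-1.0, op.apply(x))          # r = b - A x
              x.axpy(omega, r)                   # x = x + omega r
          return x

      A = [[2.0, 0.0], [0.0, 3.0]]
      x = richardson(DenseOp(A), DenseVector([1.0, 1.0]), DenseVector([0.0, 0.0]))
      print(x.data)   # approaches [0.5, 0.333...]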

  5. A Semantic Theory of Abstractions: A Preliminary Report

    NASA Technical Reports Server (NTRS)

    Nayak, P. Pandurang; Levy, Alon Y.; Lum, Henry, Jr. (Technical Monitor)

    1994-01-01

    In this paper we present a semantic theory of abstractions based on viewing abstractions as interpretations between theories. This theory captures important aspects of abstractions not captured in the theory of abstractions presented by Giunchiglia and Walsh. Instead of viewing abstractions as syntactic mappings, we view abstractions as a two step process: the intended domain model is first abstracted and then a set of (abstract) formulas is constructed to capture the abstracted domain model. Viewing and justifying abstractions as model level transformations is both natural and insightful. We provide a precise characterization of the abstract theory that exactly implements the intended abstraction, and show that this theory, while being axiomatizable, is not always finitely axiomatizable. A simple corollary of the latter result disproves a conjecture made by Tenenberg that if a theory is finitely axiomatizable, then predicate abstraction of that theory leads to a finitely axiomatizable theory.

  6. Modeling using support vector machines on imbalanced data: A case study on the prediction of the sightings of Irrawaddy dolphins

    NASA Astrophysics Data System (ADS)

    Ying, Liew Chin; Labadin, Jane; Chai, Wang Yin; Tuen, Andrew Alek; Peter, Cindy

    2015-05-01

    Support vector machines (SVMs) are a powerful machine learning algorithm for classification, particularly in medical, image processing and text analysis related studies. Nonetheless, their application in ecology is scarce. This study aims to demonstrate and compare the classification performance of SVM models developed with class weights and models developed with a systematic random under-sampling technique in predicting a one-class independent dataset. The data used are a typical imbalanced real-world dataset with 700 data points, of which only 11% are sighted data points. Conversely, the one-class independent real-world dataset of twenty data points used for prediction consists of sighted data only. Both datasets are characterized by seven attributes. The results show that the former models reported overall accuracy ranging between 87.62% and 90% with G-mean between 0% and 30.07% (0% to 9.09% sensitivity and 97.34% to 100% specificity), while the ROC-AUC values ranged between 75.92% and 88.78%. The latter models reported overall accuracy ranging between 67.39% and 78.26% with G-mean between 66.51% and 76.30% (78.26% to 95.65% sensitivity and 52.17% to 60.87% specificity), while the ROC-AUC values ranged between 72.59% and 85.82%. Nevertheless, the former models could barely predict the independent dataset successfully. The majority of the models fail to predict a single sighted data point, and the best prediction accuracy reported is 30%. The classification performance of the latter models is surprisingly encouraging, where the majority of the models manage to achieve more than 30% prediction accuracy. In addition, many of the models are capable of attaining 65% prediction accuracy, more than double the performance of the former models. The current study thus suggests that, where highly imbalanced ecology data are concerned, modeling using SVMs with a systematic random under-sampling technique is a more promising means than w-SVM of obtaining much more rewarding classification
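
    The two modelling strategies compared (SVMs with class weights versus SVMs trained after under-sampling the majority class) can be sketched on synthetic imbalanced data as below; the roughly 11% positive rate mirrors the paper, but the features, sample sizes, and the simple random under-sampling step are assumptions.

      # Sketch: weighted SVM vs. SVM on an under-sampled majority class, on synthetic
      # imbalanced data (~11% positives); features and sizes are illustrative only.
      import numpy as np
      from sklearn.svm import SVC
      from sklearn.metrics import recall_score

      rng = np.random.default_rng(6)
      n = 700
      y = (rng.random(n) < 0.11).astype(int)
      X = rng.normal(size=(n, 7)) + y[:, None] * 0.8          # 7 attributes, weak signal

      # (a) cost-sensitive SVM with class weights
      w_svm = SVC(kernel="rbf", class_weight="balanced").fit(X, y)

      # (b) SVM after random under-sampling of the majority class
      pos = np.where(y == 1)[0]
      neg = rng.choice(np.where(y == 0)[0], size=len(pos), replace=False)
      idx = np.concatenate([pos, neg])
      us_svm = SVC(kernel="rbf").fit(X[idx], y[idx])

      X_new = rng.normal(size=(20, 7)) + 0.8                  # independent sighted-only set
      for name, m in [("weighted", w_svm), ("under-sampled", us_svm)]:
          print(name, "sensitivity on sighted-only set:",
                recall_score(np.ones(20, dtype=int), m.predict(X_new)))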

  7. Monel Machining

    NASA Technical Reports Server (NTRS)

    1983-01-01

    Castle Industries, Inc. is a small machine shop manufacturing replacement plumbing repair parts, such as faucet, tub and ballcock seats. Therese Castley, president of Castle, decided to introduce Monel because it offered a chance to improve competitiveness and expand the product line. Before expanding, Castley sought NERAC assistance on Monel technology. NERAC (New England Research Application Center) provided an information package which proved very helpful. The NASA database was included in NERAC's search and yielded a wealth of information on machining Monel.

  8. Development of the first nonhydrostatic nested-grid grid-point global atmospheric modeling system on parallel machines

    SciTech Connect

    Kao, C.Y.J.; Langley, D.L.; Reisner, J.M.; Smith, W.S.

    1998-11-01

    This is the final report of a three-year, Laboratory Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). Evaluating the importance of global and regional climate response to increasing atmospheric concentrations of greenhouse gases requires a comprehensive global atmospheric modeling system (GAMS) capable of simulations over a wide range of atmospheric circulations, from complex terrain to continental scales, on high-performance computers. Unfortunately, none of the existing global circulation models (GCMs) meets these requirements, because they suffer from one or more of the following three shortcomings: (1) use of the hydrostatic approximation, which makes the models potentially ill-posed; (2) lack of a nested-grid (or multi-grid) capability, which makes it difficult to consistently evaluate the regional climate response to global warming; and (3) spherical spectral (as opposed to grid-point finite-difference) representation of model variables, which hinders model performance for parallel machine applications. The end product of the research is a highly modularized, multi-gridded, self-calibratable (for further parameterization development) global modeling system with state-of-the-science physics and chemistry. This system will be suitable for a suite of atmospheric problems: from local circulations to climate, from thunderstorms to global cloud radiative forcing, from urban pollution to global greenhouse trace gases, and from the guiding of field experiments to coupling with ocean models. It will also provide a unique testbed for high-performance computing architecture.

  9. Annual Conference Abstracts

    ERIC Educational Resources Information Center

    Engineering Education, 1976

    1976-01-01

    Presents the abstracts of 158 papers presented at the American Society for Engineering Education's annual conference at Knoxville, Tennessee, June 14-17, 1976. Included are engineering topics covering education, aerospace, agriculture, biomedicine, chemistry, computers, electricity, acoustics, environment, mechanics, and women. (SL)

  10. Abstraction through Game Play

    ERIC Educational Resources Information Center

    Avraamidou, Antri; Monaghan, John; Walker, Aisha

    2012-01-01

    This paper examines the computer game play of an 11-year-old boy. In the course of building a virtual house he developed and used, without assistance, an artefact and an accompanying strategy to ensure that his house was symmetric. We argue that the creation and use of this artefact-strategy is a mathematical abstraction. The discussion…

  11. Making the Abstract Concrete

    ERIC Educational Resources Information Center

    Potter, Lee Ann

    2005-01-01

    President Ronald Reagan nominated a woman to serve on the United States Supreme Court. He did so through a single-page form letter, completed in part by hand and in part by typewriter, announcing Sandra Day O'Connor as his nominee. While the document serves as evidence of a historic event, it is also a tangible illustration of abstract concepts…

  12. Annual Conference Abstracts

    ERIC Educational Resources Information Center

    Journal of Engineering Education, 1972

    1972-01-01

    Includes abstracts of papers presented at the 80th Annual Conference of the American Society for Engineering Education. The broad areas include aerospace, affiliate and associate member council, agricultural engineering, biomedical engineering, continuing engineering studies, chemical engineering, civil engineering, computers, cooperative…

  13. Computers in Abstract Algebra

    ERIC Educational Resources Information Center

    Nwabueze, Kenneth K.

    2004-01-01

    The current emphasis on flexible modes of mathematics delivery involving new information and communication technology (ICT) at the university level is perhaps a reaction to the recent change in the objectives of education. Abstract algebra seems to be one area of mathematics virtually crying out for computer instructional support because of the…

  14. 2002 NASPSA Conference Abstracts.

    ERIC Educational Resources Information Center

    Journal of Sport & Exercise Psychology, 2002

    2002-01-01

    Contains abstracts from the 2002 conference of the North American Society for the Psychology of Sport and Physical Activity. The publication is divided into three sections: the preconference workshop, "Effective Teaching Methods in the Classroom;" symposia (motor development, motor learning and control, and sport psychology); and free…

  15. Leadership Abstracts, 2002.

    ERIC Educational Resources Information Center

    Wilson, Cynthia, Ed.; Milliron, Mark David, Ed.

    2002-01-01

    This 2002 volume of Leadership Abstracts contains issue numbers 1-12. Articles include: (1) "Skills Certification and Workforce Development: Partnering with Industry and Ourselves," by Jeffrey A. Cantor; (2) "Starting Again: The Brookhaven Success College," by Alice W. Villadsen; (3) "From Digital Divide to Digital Democracy," by Gerardo E. de los…

  16. Water reuse. [Lead abstract

    SciTech Connect

    Middlebrooks, E.J.

    1982-01-01

    Separate abstracts were prepared for the 31 chapters of this book which deals with all aspects of wastewater reuse. Design data, case histories, performance data, monitoring information, health information, social implications, legal and organizational structures, and background information needed to analyze the desirability of water reuse are presented. (KRM)

  17. Abstract Film and Beyond.

    ERIC Educational Resources Information Center

    Le Grice, Malcolm

    A theoretical and historical account of the main preoccupations of makers of abstract films is presented in this book. The book's scope includes discussion of nonrepresentational forms as well as examination of experiments in the manipulation of time in films. The ten chapters discuss the following topics: art and cinematography, the first…

  18. Models of logistic regression analysis, support vector machine, and back-propagation neural network based on serum tumor markers in colorectal cancer diagnosis.

    PubMed

    Zhang, B; Liang, X L; Gao, H Y; Ye, L S; Wang, Y G

    2016-05-13

    We evaluated the application of three machine learning algorithms, including logistic regression, support vector machine and back-propagation neural network, for diagnosing congenital heart disease and colorectal cancer. By inspecting related serum tumor marker levels in colorectal cancer patients and healthy subjects, early diagnosis models for colorectal cancer were built using three machine learning algorithms to assess their corresponding diagnostic values. Except for serum alpha-fetoprotein, the levels of 11 other serum markers of patients in the colorectal cancer group were higher than those in the benign colorectal cancer group (P < 0.05). The results of logistic regression analysis indicated that individual detection of serum carcinoembryonic antigen, CA199, CA242, CA125, and CA153, as well as their combined detection, was effective for diagnosing colorectal cancer. Combined detection had a better diagnostic effect, with a sensitivity of 94.2% and a specificity of 97.7%. Combining serum carcinoembryonic antigen, CA199, CA242, CA125, and CA153, a support vector machine diagnosis model and a back-propagation neural network diagnosis model were built, with diagnostic accuracies of 82 and 75%, sensitivities of 85 and 80%, and specificities of 80 and 70%, respectively. Colorectal cancer diagnosis models based on the three machine learning algorithms showed high diagnostic value and can help obtain evidence for the early diagnosis of colorectal cancer.
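
    As a minimal sketch of the general workflow named in this record (not the authors' code), the three classifier types can be set up in scikit-learn; the serum-marker array X, labels y, and model settings below are illustrative placeholders:

      # Illustrative sketch only: logistic regression, SVM, and a back-propagation
      # (multilayer perceptron) classifier on placeholder serum tumor-marker data.
      import numpy as np
      from sklearn.model_selection import train_test_split
      from sklearn.preprocessing import StandardScaler
      from sklearn.pipeline import make_pipeline
      from sklearn.linear_model import LogisticRegression
      from sklearn.svm import SVC
      from sklearn.neural_network import MLPClassifier
      from sklearn.metrics import recall_score

      rng = np.random.default_rng(0)
      X = rng.lognormal(size=(200, 5))     # placeholder: CEA, CA199, CA242, CA125, CA153 levels
      y = rng.integers(0, 2, size=200)     # placeholder: 1 = colorectal cancer, 0 = control
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

      models = {
          "logistic regression": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
          "support vector machine": make_pipeline(StandardScaler(), SVC()),
          "BP neural network": make_pipeline(StandardScaler(), MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000)),
      }
      for name, model in models.items():
          model.fit(X_tr, y_tr)
          y_hat = model.predict(X_te)
          sens = recall_score(y_te, y_hat, pos_label=1)   # sensitivity
          spec = recall_score(y_te, y_hat, pos_label=0)   # specificity
          print(f"{name}: accuracy={model.score(X_te, y_te):.2f} sensitivity={sens:.2f} specificity={spec:.2f}")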

  19. Abstractions of Awareness: Aware of What?

    NASA Astrophysics Data System (ADS)

    Metaxas, Georgios; Markopoulos, Panos

    This chapter presents FN-AAR, an abstract model of awareness systems. The purpose of the model is to capture in a concise and abstract form essential aspects of awareness systems, many of which have been discussed in design essays or in the context of evaluating specific design solutions.

  20. A general procedure to generate models for urban environmental-noise pollution using feature selection and machine learning methods.

    PubMed

    Torija, Antonio J; Ruiz, Diego P

    2015-02-01

    The prediction of environmental noise in urban environments requires the solution of a complex and non-linear problem, since there are complex relationships among the multitude of variables involved in the characterization and modelling of environmental noise and environmental-noise magnitudes. Moreover, the inclusion of the great spatial heterogeneity characteristic of urban environments seems to be essential in order to achieve an accurate environmental-noise prediction in cities. This problem is addressed in this paper, where a procedure based on feature-selection techniques and machine-learning regression methods is proposed and applied to this environmental problem. Three machine-learning regression methods, which are considered very robust in solving non-linear problems, are used to estimate the energy-equivalent sound-pressure level descriptor (LAeq). These three methods are: (i) multilayer perceptron (MLP), (ii) sequential minimal optimisation (SMO), and (iii) Gaussian processes for regression (GPR). In addition, because of the high number of input variables involved in environmental-noise modelling and estimation in urban environments, which make LAeq prediction models quite complex and costly in terms of time and resources for application to real situations, three different techniques are used to approach feature selection or data reduction. The feature-selection techniques used are: (i) correlation-based feature-subset selection (CFS), (ii) wrapper for feature-subset selection (WFS), and the data reduction technique is principal-component analysis (PCA). The subsequent analysis leads to a proposal of different schemes, depending on the needs regarding data collection and accuracy. The use of WFS as the feature-selection technique with the implementation of SMO or GPR as regression algorithm provides the best LAeq estimation (R(2)=0.94 and mean absolute error (MAE)=1.14-1.16 dB(A)).
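
    A rough sketch of the wrapper-style feature selection followed by the two best-performing regressors mentioned above; sequential forward selection stands in for the WFS wrapper, scikit-learn's SVR stands in for the SMO-trained model, and the urban descriptors and LAeq values are placeholders:

      # Sketch: wrapper feature selection + SVR / Gaussian process regression of LAeq.
      import numpy as np
      from sklearn.feature_selection import SequentialFeatureSelector
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.svm import SVR
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(1)
      X = rng.normal(size=(300, 25))     # placeholder urban descriptors (traffic, street geometry, ...)
      laeq = 60 + 3 * X[:, 0] + X[:, 1] + rng.normal(scale=1.0, size=300)   # placeholder LAeq in dB(A)

      for name, reg in [("SVR (SMO-style)", SVR(kernel="rbf", C=10.0)),
                        ("GPR", GaussianProcessRegressor())]:
          # wrapper selection: greedily keep the feature subset that best serves this regressor
          sfs = SequentialFeatureSelector(reg, n_features_to_select=5, cv=3)
          model = make_pipeline(StandardScaler(), sfs, reg)
          mae = -cross_val_score(model, X, laeq, cv=3, scoring="neg_mean_absolute_error").mean()
          print(f"{name}: MAE = {mae:.2f} dB(A)")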

  1. Support Vector Machine and Artificial Neural Network Models for the Classification of Grapevine Varieties Using a Portable NIR Spectrophotometer

    PubMed Central

    Gutiérrez, Salvador; Tardaguila, Javier; Fernández-Novales, Juan; Diago, María P.

    2015-01-01

    The identification of different grapevine varieties, currently attended using visual ampelometry, DNA analysis and very recently, by hyperspectral analysis under laboratory conditions, is an issue of great importance in the wine industry. This work presents support vector machine and artificial neural network modelling for grapevine varietal classification from in-field leaf spectroscopy. Modelling was attempted at two scales: site-specific and global. Spectral measurements were obtained in the near-infrared (NIR) spectral range between 1600 and 2400 nm under field conditions in a non-destructive way using a portable spectrophotometer. For the site-specific approach, spectra were collected from the adaxial side of 400 individual leaves of 20 grapevine (Vitis vinifera L.) varieties one week after veraison. For the global model, two additional sets of spectra were collected one week before harvest from two different vineyards in another vintage, each consisting of 48 measurements from individual leaves of six varieties. Several combinations of spectral scatter correction and smoothing filtering were studied. For the training of the models, support vector machines and artificial neural networks were employed using the pre-processed spectra as input and the varieties as the classes of the models. The results from the pre-processing study showed that the use of scatter correction had no influence, and that second-degree-derivative Savitzky-Golay filtering with a window size of 5 yielded the highest outcomes. For the site-specific model, with 20 classes, the best classifiers reached an overall score of 87.25% of correctly classified samples. These results were compared under the same conditions with a model trained using partial least squares discriminant analysis, which showed a worse performance in every case. For the global model, a 6-class dataset involving samples from three different vineyards, two years and leaves

  2. Support Vector Machine and Artificial Neural Network Models for the Classification of Grapevine Varieties Using a Portable NIR Spectrophotometer.

    PubMed

    Gutiérrez, Salvador; Tardaguila, Javier; Fernández-Novales, Juan; Diago, María P

    2015-01-01

    The identification of different grapevine varieties, currently attended using visual ampelometry, DNA analysis and very recently, by hyperspectral analysis under laboratory conditions, is an issue of great importance in the wine industry. This work presents support vector machine and artificial neural network modelling for grapevine varietal classification from in-field leaf spectroscopy. Modelling was attempted at two scales: site-specific and global. Spectral measurements were obtained in the near-infrared (NIR) spectral range between 1600 and 2400 nm under field conditions in a non-destructive way using a portable spectrophotometer. For the site-specific approach, spectra were collected from the adaxial side of 400 individual leaves of 20 grapevine (Vitis vinifera L.) varieties one week after veraison. For the global model, two additional sets of spectra were collected one week before harvest from two different vineyards in another vintage, each consisting of 48 measurements from individual leaves of six varieties. Several combinations of spectral scatter correction and smoothing filtering were studied. For the training of the models, support vector machines and artificial neural networks were employed using the pre-processed spectra as input and the varieties as the classes of the models. The results from the pre-processing study showed that the use of scatter correction had no influence, and that second-degree-derivative Savitzky-Golay filtering with a window size of 5 yielded the highest outcomes. For the site-specific model, with 20 classes, the best classifiers reached an overall score of 87.25% of correctly classified samples. These results were compared under the same conditions with a model trained using partial least squares discriminant analysis, which showed a worse performance in every case. For the global model, a 6-class dataset involving samples from three different vineyards, two years and leaves
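
    A short sketch of the preprocessing and classification steps described in this record (second-derivative Savitzky-Golay filtering with window size 5, then SVM and neural-network classifiers); the spectra, variety labels, and classifier settings are placeholder assumptions, not the study's data:

      # Sketch: Savitzky-Golay second derivative of NIR spectra, then SVM / ANN classification.
      import numpy as np
      from scipy.signal import savgol_filter
      from sklearn.svm import SVC
      from sklearn.neural_network import MLPClassifier
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(2)
      spectra = rng.normal(size=(400, 200))    # placeholder: 400 leaves x 200 NIR wavelengths (1600-2400 nm)
      variety = rng.integers(0, 20, size=400)  # placeholder: 20 grapevine varieties

      # second-degree derivative, window size 5 (as in the record)
      X = savgol_filter(spectra, window_length=5, polyorder=3, deriv=2, axis=1)

      for name, clf in [("SVM", SVC(kernel="rbf", C=10.0)),
                        ("ANN", MLPClassifier(hidden_layer_sizes=(50,), max_iter=2000))]:
          score = cross_val_score(clf, X, variety, cv=5).mean()
          print(f"{name}: mean CV accuracy = {score:.3f}")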

  3. SPOCK: A SPICE based circuit code for modeling pulsed power machines

    SciTech Connect

    Ingermanson, R.; Parks, D.

    1996-12-31

    SPICE is an industry standard electrical circuit simulation code developed by the University of California at Berkeley over the last twenty years. The authors have developed a number of new SPICE devices of interest to the pulsed power community: plasma opening switches, plasma radiation sources, bremsstrahlung diodes, magnetically insulated transmission lines, explosively driven flux compressors. These new devices are integrated into SPICE using S-Cubed's MIRIAD technology to create a user-friendly circuit code that runs on Unix workstations or under Windows NT or Windows 95. The new circuit code is called SPOCK--"S-Cubed Power Optimizing Circuit Kit." SPOCK allows the user to easily run optimization studies by setting up runs in which any circuit parameters can be systematically varied. Results can be plotted as 1-D line plots, 2-D contour plots, or 3-D "bedsheet" plots. The authors demonstrate SPOCK's capabilities on a color laptop computer, performing realtime analysis of typical configurations of such machines as HAWK and ACE4.

  4. Protein Kinase Classification with 2866 Hidden Markov Models and One Support Vector Machine

    NASA Technical Reports Server (NTRS)

    Weber, Ryan; New, Michael H.; Fonda, Mark (Technical Monitor)

    2002-01-01

    The main application considered in this paper is predicting true kinases from randomly permuted kinases that share the same length and amino acid distributions as the true kinases. Numerous methods already exist for this classification task, such as HMMs, motif-matchers, and sequence comparison algorithms. We build on some of these efforts by creating a vector from the output of thousands of structurally based HMMs, created offline with Pfam-A seed alignments using SAM-T99, which then must be combined into an overall classification for the protein. Then we use a Support Vector Machine for classifying this large ensemble Pfam-Vector, with a polynomial and a chi-squared kernel. In particular, the chi-squared kernel SVM performs better than the HMMs and better than the BLAST pairwise comparisons when predicting true from false kinases in some respects, but no one algorithm is best for all purposes or in all instances, so we consider the particular strengths and weaknesses of each.
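
    A minimal sketch of a chi-squared-kernel SVM over per-HMM score vectors, assuming scikit-learn's chi2_kernel as the kernel callable; the score matrix and kinase/decoy labels are placeholders standing in for the Pfam-based feature vectors:

      # Sketch: SVM with a chi-squared kernel over non-negative HMM-score vectors.
      import numpy as np
      from sklearn.metrics.pairwise import chi2_kernel
      from sklearn.svm import SVC
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(3)
      hmm_scores = rng.random(size=(120, 300))   # placeholder: non-negative scores from 300 HMMs per protein
      is_kinase = rng.integers(0, 2, size=120)   # placeholder: true kinase vs permuted decoy

      svm = SVC(kernel=chi2_kernel)              # chi-squared kernel requires non-negative features
      print("mean CV accuracy:", cross_val_score(svm, hmm_scores, is_kinase, cv=5).mean())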

  5. Building a model to predict megafires using a machine learning approach.

    NASA Astrophysics Data System (ADS)

    Podschwit, H. R.; Barbero, R.; Larkin, N. K.; Steel, A.

    2014-12-01

    Weather and climate are critical influences of wildland fire activity. Climate change has led to an increase in the size and frequency of wildfires in many parts of the United States. These changes are expected to increase under current climate change scenarios, likely exacerbating so called "mega-fire" activity. Megafires are typically the most devastating and costly to suppress. It is then desirable to know when and where weather conditions will be conducive to the development of these fires in the future. However, standard statistical methods may not be suited to handle the data imbalance and high-dimensional features of such an analysis. We use an ensemble machine learning approach to estimate the risk of megafires based on weather and climate variables for each ecosystem in the contiguous U.S. Bootstrap aggregated trees are used to describe which suite of coarse scale weather conditions has historically best separated megafires from other large fires and to estimate the conditional probability of a "megafire" given ignition. The annual distribution of ignitions was estimated to calculate an overall probability of a "megafire" and spatial wildfire patterns were used to appropriately distribute this probability across space. This framework was then applied to future climate projections under the RCP8.5 scenario to estimate the future risk of these fire types. Our methodology was applied to various climate change scenarios and suggests that the frequency of these types of fires is likely to increase throughout much of the western United States in the next 50 years.
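
    A sketch of the bootstrap-aggregated-trees step described above: estimate the conditional probability of a megafire given ignition from weather variables, then scale by an ignition probability to get an overall risk. The weather matrix, labels, and ignition probability are placeholder assumptions:

      # Sketch: bagged decision trees for P(megafire | ignition), scaled by P(ignition).
      import numpy as np
      from sklearn.ensemble import BaggingClassifier
      from sklearn.tree import DecisionTreeClassifier

      rng = np.random.default_rng(4)
      weather = rng.normal(size=(1000, 8))   # placeholder: temperature, wind, drought indices, ...
      is_megafire = (weather[:, 0] + rng.normal(size=1000) > 2).astype(int)  # imbalanced placeholder labels

      bagged_trees = BaggingClassifier(DecisionTreeClassifier(), n_estimators=200, random_state=0)
      bagged_trees.fit(weather, is_megafire)

      p_megafire_given_ignition = bagged_trees.predict_proba(weather[:5])[:, 1]
      p_ignition = 0.02                      # placeholder ignition probability for a grid cell
      print("overall megafire risk:", p_megafire_given_ignition * p_ignition)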

  6. A Boltzmann machine for the organization of intelligent machines

    NASA Technical Reports Server (NTRS)

    Moed, Michael C.; Saridis, George N.

    1989-01-01

    In the present technological society, there is a major need to build machines that would execute intelligent tasks operating in uncertain environments with minimum interaction with a human operator. Although some designers have built smart robots, utilizing heuristic ideas, there is no systematic approach to design such machines in an engineering manner. Recently, cross-disciplinary research from the fields of computers, systems, AI and information theory has served to set the foundations of the emerging area of the design of intelligent machines. Since 1977 Saridis has been developing an approach, defined as Hierarchical Intelligent Control, designed to organize, coordinate and execute anthropomorphic tasks by a machine with minimum interaction with a human operator. This approach utilizes analytical (probabilistic) models to describe and control the various functions of the intelligent machine structured by the intuitively defined principle of Increasing Precision with Decreasing Intelligence (IPDI) (Saridis 1979). This principle, even though it resembles the managerial structure of organizational systems (Levis 1988), has been derived on an analytic basis by Saridis (1988). The purpose is to derive analytically a Boltzmann machine suitable for optimal connection of nodes in a neural net (Fahlman, Hinton, Sejnowski, 1985). Then this machine will serve to search for the optimal design of the organization level of an intelligent machine. In order to accomplish this, some mathematical theory of the intelligent machines will be first outlined. Then some definitions of the variables associated with the principle, like machine intelligence, machine knowledge, and precision will be made (Saridis, Valavanis 1988). Then a procedure to establish the Boltzmann machine on an analytic basis will be presented and illustrated by an example in designing the organization level of an Intelligent Machine. A new search technique, the Modified Genetic Algorithm, is presented and proved

  7. Plant microRNA-Target Interaction Identification Model Based on the Integration of Prediction Tools and Support Vector Machine

    PubMed Central

    Meng, Jun; Shi, Lin; Luan, Yushi

    2014-01-01

    Background Confident identification of microRNA-target interactions is significant for studying the function of microRNA (miRNA). Although some computational miRNA target prediction methods have been proposed for plants, results of various methods tend to be inconsistent and usually lead to more false positive. To address these issues, we developed an integrated model for identifying plant miRNA–target interactions. Results Three online miRNA target prediction toolkits and machine learning algorithms were integrated to identify and analyze Arabidopsis thaliana miRNA-target interactions. Principle component analysis (PCA) feature extraction and self-training technology were introduced to improve the performance. Results showed that the proposed model outperformed the previously existing methods. The results were validated by using degradome sequencing supported Arabidopsis thaliana miRNA-target interactions. The proposed model constructed on Arabidopsis thaliana was run over Oryza sativa and Vitis vinifera to demonstrate that our model is effective for other plant species. Conclusions The integrated model of online predictors and local PCA-SVM classifier gained credible and high quality miRNA-target interactions. The supervised learning algorithm of PCA-SVM classifier was employed in plant miRNA target identification for the first time. Its performance can be substantially improved if more experimentally proved training samples are provided. PMID:25051153
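
    A rough sketch of a local PCA-SVM classifier with self-training; scikit-learn's SelfTrainingClassifier is used here as a generic self-training wrapper (an assumption, not the authors' implementation), and the feature matrix stands in for scores aggregated from the online prediction tools, with unlabelled candidate pairs marked -1:

      # Sketch: PCA + SVM classifier wrapped in generic self-training on placeholder features.
      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.svm import SVC
      from sklearn.pipeline import make_pipeline
      from sklearn.semi_supervised import SelfTrainingClassifier

      rng = np.random.default_rng(5)
      X = rng.normal(size=(500, 30))      # placeholder features for candidate miRNA-target pairs
      y = rng.integers(0, 2, size=500)
      y[300:] = -1                        # -1 marks unlabelled pairs used during self-training

      pca_svm = make_pipeline(PCA(n_components=10), SVC(probability=True))
      model = SelfTrainingClassifier(pca_svm)
      model.fit(X, y)
      print("predicted interactions for first 5 pairs:", model.predict(X[:5]))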

  8. Support Vector Machine (SVM) Models for Predicting Inhibitors of the 3' Processing Step of HIV-1 Integrase.

    PubMed

    Xuan, Shouyi; Wang, Maolin; Kang, Hang; Kirchmair, Johannes; Tan, Lu; Yan, Aixia

    2013-10-01

    Inhibition of the 3' processing step of HIV-1 integrase by small molecule inhibitors is one of the most promising strategies for the treatment of AIDS. Using a support vector machine (SVM) approach, we developed six classification models for predicting 3'P inhibitors. The models are based on up to 48 selected molecular descriptors and a comprehensive data set of 1253 molecules, with measured activities ranging from nanomolar to micromolar IC50 values. Model B2, the most robust SVM model, obtains a prediction accuracy, sensitivity, specificity and Matthews correlation coefficient (MCC) of 93 %, 81 %, 94 % and 0.67 on the test set, respectively. The presence of hydrogen bonding features and hydrophilicity in general were identified as key determinants of inhibitory activity. Further important properties include molecular refractivity, π atom charge, total charge, lone pair electronegativity, and effective atom polarizability. Comparative fragment-based analysis of the active and inactive molecules corroborated these observations and revealed several characteristic structural elements of 3'P inhibitors. The models built in this study can be obtained from the authors.

  9. Method and system employing finite state machine modeling to identify one of a plurality of different electric load types

    DOEpatents

    Du, Liang; Yang, Yi; Harley, Ronald Gordon; Habetler, Thomas G.; He, Dawei

    2016-08-09

    A system is provided for identifying a plurality of different electric load types. The system includes a plurality of sensors structured to sense a voltage signal and a current signal for each of the different electric loads; and a processor. The processor acquires a voltage and current waveform from the sensors for a corresponding one of the different electric load types; calculates a power or current RMS profile of the waveform; quantizes the power or current RMS profile into a set of quantized state-values; evaluates a state-duration for each of the quantized state-values; evaluates a plurality of state-types based on the power or current RMS profile and the quantized state-values; generates a state-sequence that describes a corresponding finite state machine model of a generalized load start-up or transient profile for the corresponding electric load type; and identifies the corresponding electric load type.
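
    A minimal sketch of the quantization step described in this record: compute an RMS profile of a current waveform, quantize it into state-values, and run-length encode it into (state-value, state-duration) pairs that define a state sequence. The waveform, window length, and quantization bins below are placeholders:

      # Sketch: turn a current RMS profile into a state-sequence of (state, duration) pairs.
      import numpy as np

      def rms_profile(current, samples_per_window=100):
          trimmed = current[:len(current) // samples_per_window * samples_per_window]
          windows = trimmed.reshape(-1, samples_per_window)
          return np.sqrt((windows ** 2).mean(axis=1))

      def state_sequence(profile, bins):
          states = np.digitize(profile, bins)          # quantized state-values
          seq, duration = [], 1
          for prev, cur in zip(states[:-1], states[1:]):
              if cur == prev:
                  duration += 1
              else:
                  seq.append((int(prev), duration))    # (state-value, state-duration)
                  duration = 1
          seq.append((int(states[-1]), duration))
          return seq

      t = np.linspace(0, 2, 2000)
      current = np.where(t < 0.5, 8.0, 2.0) * np.sin(2 * np.pi * 60 * t)   # placeholder start-up transient
      print(state_sequence(rms_profile(current), bins=[1.0, 3.0, 6.0]))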

  10. Generalized Abstract Symbolic Summaries

    NASA Technical Reports Server (NTRS)

    Person, Suzette; Dwyer, Matthew B.

    2009-01-01

    Current techniques for validating and verifying program changes often consider the entire program, even for small changes, leading to enormous V&V costs over a program s lifetime. This is due, in large part, to the use of syntactic program techniques which are necessarily imprecise. Building on recent advances in symbolic execution of heap manipulating programs, in this paper, we develop techniques for performing abstract semantic differencing of program behaviors that offer the potential for improved precision.

  11. Foundations of the Bandera Abstraction Tools

    NASA Technical Reports Server (NTRS)

    Hatcliff, John; Dwyer, Matthew B.; Pasareanu, Corina S.; Robby

    2003-01-01

    Current research is demonstrating that model-checking and other forms of automated finite-state verification can be effective for checking properties of software systems. Due to the exponential costs associated with model-checking, multiple forms of abstraction are often necessary to obtain system models that are tractable for automated checking. The Bandera Tool Set provides multiple forms of automated support for compiling concurrent Java software systems to models that can be supplied to several different model-checking tools. In this paper, we describe the foundations of Bandera's data abstraction mechanism which is used to reduce the cardinality (and the program's state-space) of data domains in software to be model-checked. From a technical standpoint, the form of data abstraction used in Bandera is simple, and it is based on classical presentations of abstract interpretation. We describe the mechanisms that Bandera provides for declaring abstractions, for attaching abstractions to programs, and for generating abstracted programs and properties. The contributions of this work are the design and implementation of various forms of tool support required for effective application of data abstraction to software components written in a programming language like Java which has a rich set of linguistic features.

  12. Perspective: Web-based machine learning models for real-time screening of thermoelectric materials properties

    NASA Astrophysics Data System (ADS)

    Gaultois, Michael W.; Oliynyk, Anton O.; Mar, Arthur; Sparks, Taylor D.; Mulholland, Gregory J.; Meredig, Bryce

    2016-05-01

    The experimental search for new thermoelectric materials remains largely confined to a limited set of successful chemical and structural families, such as chalcogenides, skutterudites, and Zintl phases. In principle, computational tools such as density functional theory (DFT) offer the possibility of rationally guiding experimental synthesis efforts toward very different chemistries. However, in practice, predicting thermoelectric properties from first principles remains a challenging endeavor [J. Carrete et al., Phys. Rev. X 4, 011019 (2014)], and experimental researchers generally do not directly use computation to drive their own synthesis efforts. To bridge this practical gap between experimental needs and computational tools, we report an open machine learning-based recommendation engine (http://thermoelectrics.citrination.com) for materials researchers that suggests promising new thermoelectric compositions based on pre-screening about 25 000 known materials and also evaluates the feasibility of user-designed compounds. We show this engine can identify interesting chemistries very different from known thermoelectrics. Specifically, we describe the experimental characterization of one example set of compounds derived from our engine, RE12Co5Bi (RE = Gd, Er), which exhibits surprising thermoelectric performance given its unprecedentedly high loading with metallic d and f block elements and warrants further investigation as a new thermoelectric material platform. We show that our engine predicts this family of materials to have low thermal and high electrical conductivities, but modest Seebeck coefficient, all of which are confirmed experimentally. We note that the engine also predicts materials that may simultaneously optimize all three properties entering into zT; we selected RE12Co5Bi for this study due to its interesting chemical composition and known facile synthesis.

  13. Using state machines to model the Ion Torrent sequencing process and to improve read error rates

    PubMed Central

    Golan, David; Medvedev, Paul

    2013-01-01

    Motivation: The importance of fast and affordable DNA sequencing methods for current day life sciences, medicine and biotechnology is hard to overstate. A major player is Ion Torrent, a pyrosequencing-like technology which produces flowgrams – sequences of incorporation values – which are converted into nucleotide sequences by a base-calling algorithm. Because of its exploitation of ubiquitous semiconductor technology and innovation in chemistry, Ion Torrent has been gaining popularity since its debut in 2011. Despite the advantages, however, Ion Torrent read accuracy remains a significant concern. Results: We present FlowgramFixer, a new algorithm for converting flowgrams into reads. Our key observation is that the incorporation signals of neighboring flows, even after normalization and phase correction, carry considerable mutual information and are important in making the correct base-call. We therefore propose that base-calling of flowgrams should be done on a read-wide level, rather than one flow at a time. We show that this can be done in linear-time by combining a state machine with a Viterbi algorithm to find the nucleotide sequence that maximizes the likelihood of the observed flowgram. FlowgramFixer is applicable to any flowgram-based sequencing platform. We demonstrate FlowgramFixer’s superior performance on Ion Torrent Escherichia coli data, with a 4.8% improvement in the number of high-quality mapped reads and a 7.1% improvement in the number of uniquely mappable reads. Availability: Binaries and source code of FlowgramFixer are freely available at: http://www.cs.tau.ac.il/~davidgo5/flowgramfixer.html. Contact: davidgo5@post.tau.ac.il PMID:23813003
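
    A compact, generic Viterbi sketch over a toy two-state machine, to illustrate the "state machine plus Viterbi" idea of choosing the maximum-likelihood sequence; FlowgramFixer's actual states and emission model are specific to flowgrams, so the transition and emission tables here are placeholder values:

      # Sketch: generic Viterbi decoding of the most likely state path through a small state machine.
      import numpy as np

      def viterbi(log_trans, log_emit, observations, log_start):
          n_states, n_obs = log_trans.shape[0], len(observations)
          score = np.full((n_obs, n_states), -np.inf)
          back = np.zeros((n_obs, n_states), dtype=int)
          score[0] = log_start + log_emit[:, observations[0]]
          for t in range(1, n_obs):
              for s in range(n_states):
                  cand = score[t - 1] + log_trans[:, s]
                  back[t, s] = np.argmax(cand)
                  score[t, s] = cand[back[t, s]] + log_emit[s, observations[t]]
          path = [int(np.argmax(score[-1]))]
          for t in range(n_obs - 1, 0, -1):
              path.append(int(back[t, path[-1]]))
          return path[::-1]

      log_trans = np.log([[0.7, 0.3], [0.4, 0.6]])   # placeholder 2-state machine
      log_emit = np.log([[0.9, 0.1], [0.2, 0.8]])    # placeholder emission probabilities
      print(viterbi(log_trans, log_emit, observations=[0, 0, 1, 1, 0], log_start=np.log([0.5, 0.5])))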

  14. Integrating Subcellular Location for Improving Machine Learning Models of Remote Homology Detection in Eukaryotic Organisms

    SciTech Connect

    Shah, Anuj R.; Oehmen, Chris S.; Harper, Jill K.; Webb-Robertson, Bobbie-Jo M.

    2007-02-23

    Motivation: At the center of bioinformatics, genomics, and pro-teomics is the need for highly accurate genome annotations. Producing high-quality reliable annotations depends on identifying sequences which are related evolutionarily (homologs) on which to infer function. Homology detection is one of the oldest tasks in bioinformatics, however most approaches still fail when presented with sequences that have low residue similarity despite a distant evolutionary relationship (remote homology). Recently, discriminative approaches, such as support vector machines (SVMs) have demonstrated a vast improvement in sensitivity for remote homology detection. These methods however have only focused on one aspect of the sequence at a time, e.g., sequence similarity or motif based scores. However, supplementary information, such as the sub-cellular location of a protein within the cell would give further clues as to possible homologous pairs, additionally eliminating false relationships due to simple functional roles that cannot exist due to location. We have developed a method, SVM-SimLoc that integrates sub-cellular location with sequence similarity information into a pro-tein family classifier and compared it to one of the most accurate sequence based SVM approaches, SVM-Pairwise. Results: The SCOP 1.53 benchmark data set was utilized to assess the performance of SVM-SimLoc. As cellular location prediction is dependent upon the type of sequence, eukaryotic or prokaryotic, the analysis is restricted to the 2630 eukaryotic sequences in the benchmark dataset, evaluating a total of 27 protein families. We demonstrate that the integration of sequence similarity and sub-cellular location yields notably more accurate results than using sequence similarity independently at a significance level of 0.006.

  15. Machine-learning model observer for detection and localization tasks in clinical SPECT-MPI

    NASA Astrophysics Data System (ADS)

    Parages, Felipe M.; O'Connor, J. Michael; Pretorius, P. Hendrik; Brankov, Jovan G.

    2016-03-01

    In this work we propose a machine-learning MO based on Naive-Bayes classification (NB-MO) for the diagnostic tasks of detection, localization and assessment of perfusion defects in clinical SPECT Myocardial Perfusion Imaging (MPI), with the goal of evaluating several image reconstruction methods used in clinical practice. NB-MO uses image features extracted from polar-maps in order to predict lesion detection, localization and severity scores given by human readers in a series of 3D SPECT-MPI. The population used to tune (i.e. train) the NB-MO consisted of simulated SPECT-MPI cases - divided into normals or with lesions in variable sizes and locations - reconstructed using filtered backprojection (FBP) method. An ensemble of five human specialists (physicians) read a subset of simulated reconstructed images, and assigned a perfusion score for each region of the left-ventricle (LV). Polar-maps generated from the simulated volumes along with their corresponding human scores were used to train five NB-MOs (one per human reader), which are subsequently applied (i.e. tested) on three sets of clinical SPECT-MPI polar maps, in order to predict human detection and localization scores. The clinical "testing" population comprises healthy individuals and patients suffering from coronary artery disease (CAD) in three possible regions, namely: LAD, LcX and RCA. Each clinical case was reconstructed using three reconstruction strategies, namely: FBP with no SC (i.e. scatter compensation), OSEM with Triple Energy Window (TEW) SC method, and OSEM with Effective Source Scatter Estimation (ESSE) SC. Alternative Free-Response (AFROC) analysis of perfusion scores shows that NB-MO predicts a higher human performance for scatter-compensated reconstructions, in agreement with what has been reported in published literature. These results suggest that NB-MO has good potential to generalize well to reconstruction methods not used during training, even for reasonably dissimilar datasets (i

  16. Workout Machine

    NASA Technical Reports Server (NTRS)

    1995-01-01

    The Orbotron is a tri-axle exercise machine patterned after a NASA training simulator for astronaut orientation in the microgravity of space. It has three orbiting rings corresponding to roll, pitch and yaw. The user is in the middle of the inner ring with the stomach remaining in the center of all axes, eliminating dizziness. Human power starts the rings spinning, unlike the NASA air-powered system. Marketed by Fantasy Factory (formerly Orbotron, Inc.), the machine can improve aerobic capacity, strength and endurance in five to seven minute workouts.

  17. RVMAB: Using the Relevance Vector Machine Model Combined with Average Blocks to Predict the Interactions of Proteins from Protein Sequences.

    PubMed

    An, Ji-Yong; You, Zhu-Hong; Meng, Fan-Rong; Xu, Shu-Juan; Wang, Yin

    2016-01-01

    Protein-Protein Interactions (PPIs) play essential roles in most cellular processes. Knowledge of PPIs is becoming increasingly more important, which has prompted the development of technologies that are capable of discovering large-scale PPIs. Although many high-throughput biological technologies have been proposed to detect PPIs, there are unavoidable shortcomings, including cost, time intensity, and inherently high false positive and false negative rates. For these reasons, in silico methods are attracting much attention due to their good performances in predicting PPIs. In this paper, we propose a novel computational method known as RVM-AB that combines the Relevance Vector Machine (RVM) model and Average Blocks (AB) to predict PPIs from protein sequences. The main improvements are the results of representing protein sequences using the AB feature representation on a Position Specific Scoring Matrix (PSSM), reducing the influence of noise using a Principal Component Analysis (PCA), and using a Relevance Vector Machine (RVM) based classifier. We performed five-fold cross-validation experiments on yeast and Helicobacter pylori datasets, and achieved very high accuracies of 92.98% and 95.58% respectively, which is significantly better than previous works. In addition, we also obtained good prediction accuracies of 88.31%, 89.46%, 91.08%, 91.55%, and 94.81% on five other independent datasets, C. elegans, M. musculus, H. sapiens, H. pylori, and E. coli, for cross-species prediction. To further evaluate the proposed method, we compared it with the state-of-the-art support vector machine (SVM) classifier on the yeast dataset. The experimental results demonstrate that our RVM-AB method is obviously better than the SVM-based method. The promising experimental results show the efficiency and simplicity of the proposed method, which can be an automatic decision support tool. To facilitate extensive studies for future proteomics research, we developed a freely

  18. RVMAB: Using the Relevance Vector Machine Model Combined with Average Blocks to Predict the Interactions of Proteins from Protein Sequences.

    PubMed

    An, Ji-Yong; You, Zhu-Hong; Meng, Fan-Rong; Xu, Shu-Juan; Wang, Yin

    2016-01-01

    Protein-Protein Interactions (PPIs) play essential roles in most cellular processes. Knowledge of PPIs is becoming increasingly more important, which has prompted the development of technologies that are capable of discovering large-scale PPIs. Although many high-throughput biological technologies have been proposed to detect PPIs, there are unavoidable shortcomings, including cost, time intensity, and inherently high false positive and false negative rates. For these reasons, in silico methods are attracting much attention due to their good performances in predicting PPIs. In this paper, we propose a novel computational method known as RVM-AB that combines the Relevance Vector Machine (RVM) model and Average Blocks (AB) to predict PPIs from protein sequences. The main improvements are the results of representing protein sequences using the AB feature representation on a Position Specific Scoring Matrix (PSSM), reducing the influence of noise using a Principal Component Analysis (PCA), and using a Relevance Vector Machine (RVM) based classifier. We performed five-fold cross-validation experiments on yeast and Helicobacter pylori datasets, and achieved very high accuracies of 92.98% and 95.58% respectively, which is significantly better than previous works. In addition, we also obtained good prediction accuracies of 88.31%, 89.46%, 91.08%, 91.55%, and 94.81% on five other independent datasets, C. elegans, M. musculus, H. sapiens, H. pylori, and E. coli, for cross-species prediction. To further evaluate the proposed method, we compared it with the state-of-the-art support vector machine (SVM) classifier on the yeast dataset. The experimental results demonstrate that our RVM-AB method is obviously better than the SVM-based method. The promising experimental results show the efficiency and simplicity of the proposed method, which can be an automatic decision support tool. To facilitate extensive studies for future proteomics research, we developed a freely

  19. [The estimation model of rice leaf area index using hyperspectral data based on support vector machine].

    PubMed

    Yang, Xiao-hua; Huang, Jing-feng; Wang, Xiu-zhen; Wang, Fu-min

    2008-08-01

    In order to compare the prediction power of the best statistical model and the SVM technique using each VI for rice LAI, the VIs were used as independent variables in the statistical models and as net inputs in the SVM, and rice LAI was used as the dependent variable in the statistical models and as the net output in the SVM. Hyperspectral reflectance (350 to 2500 nm) data were recorded in two experiments involving four replicates of two rice cultivars ("Xiushui 110" and "Xieyou 9308"), three nitrogen levels (0, 120, 240 kg x ha(-1) N), and a plant density of 45 plants x m(-2). The first experiment was seeded on 30 May 2004 and the second experiment on 15 June 2004. Both sets of seedlings were transplanted to the field one month later. Hyperspectral reflectance was ground-based and measured using an Analytical Spectral Devices spectrometer held 1 meter above the rice canopy. The solar angle compared to nadir was less than 45 degrees for all measurements, and no disturbing clouds were observed. Hyperspectral reflectance was transformed to ten different vegetation indices, including RVI, NDVI, NDVIgreen, SAVI, OSAVI, MSAVI, MCACI, TCARI/OSAVI, RDVI and RVI2, according to the width of the TM bands of Landsat-5. Different statistical models, including linearity model, exponent model, power model and logarithm model, were analyzed using all samples' LAI and vegetation indices. Three good relationships, including the exponent relationship of NDVIgreen, the power relationship of TCARI/OSAVI and the power relationship of RVI2, were selected based on the R2 of the models. These three relationships were used to predict the LAI of rice through SVM models with different kernel functions, including an analysis of variance kernel (ANOVA), a polynomial kernel (POLY) and a radial basis function kernel (RBF), and corresponding statistical models. The results show that all SVM models have lower RMSE values and higher estimation precision than the corresponding statistical models; SVM with the POLY kernel function using TCARI/OSAVI has the highest
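
    A minimal sketch of the kind of comparison described above: a single vegetation index (here NDVI) as predictor of LAI, fitted both with SVM regression under different kernels and with a simple statistical (power-law) model. Reflectance and LAI values are placeholders, and the RBF/polynomial kernels shown are a subset of those named in the record:

      # Sketch: vegetation index -> LAI with SVM regression vs a power-law statistical model.
      import numpy as np
      from sklearn.svm import SVR
      from sklearn.metrics import mean_squared_error

      rng = np.random.default_rng(6)
      nir, red = rng.uniform(0.3, 0.6, 200), rng.uniform(0.02, 0.1, 200)   # placeholder band reflectances
      ndvi = (nir - red) / (nir + red)
      lai = 8 * ndvi ** 2 + rng.normal(scale=0.3, size=200)                 # placeholder LAI

      for kernel in ("rbf", "poly"):
          svr = SVR(kernel=kernel).fit(ndvi.reshape(-1, 1), lai)
          rmse = mean_squared_error(lai, svr.predict(ndvi.reshape(-1, 1))) ** 0.5
          print(f"SVR ({kernel}): RMSE = {rmse:.3f}")

      # simple statistical alternative: power model LAI = a * NDVI**b fitted in log space
      b, log_a = np.polyfit(np.log(ndvi), np.log(np.clip(lai, 1e-3, None)), 1)
      rmse = mean_squared_error(lai, np.exp(log_a) * ndvi ** b) ** 0.5
      print(f"power model: RMSE = {rmse:.3f}")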

  20. Wacky Machines

    ERIC Educational Resources Information Center

    Fendrich, Jean

    2002-01-01

    Collectors everywhere know that local antique shops and flea markets are treasure troves just waiting to be plundered. Science teachers might take a hint from these hobbyists, for the next community yard sale might be a repository of old, quirky items that are just the things to get students thinking about simple machines. By introducing some…

  1. The virtual machine (VM) scaler: an infrastructure manager supporting environmental modeling on IaaS clouds

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Infrastructure-as-a-service (IaaS) clouds provide a new medium for deployment of environmental modeling applications. Harnessing advancements in virtualization, IaaS clouds can provide dynamic scalable infrastructure to better support scientific modeling computational demands. Providing scientific m...

  2. Modeling Epoxidation of Drug-like Molecules with a Deep Machine Learning Network.

    PubMed

    Hughes, Tyler B; Miller, Grover P; Swamidass, S Joshua

    2015-07-22

    Drug toxicity is frequently caused by electrophilic reactive metabolites that covalently bind to proteins. Epoxides comprise a large class of three-membered cyclic ethers. These molecules are electrophilic and typically highly reactive due to ring tension and polarized carbon-oxygen bonds. Epoxides are metabolites often formed by cytochromes P450 acting on aromatic or double bonds. The specific location on a molecule that undergoes epoxidation is its site of epoxidation (SOE). Identifying a molecule's SOE can aid in interpreting adverse events related to reactive metabolites and direct modification to prevent epoxidation for safer drugs. This study utilized a database of 702 epoxidation reactions to build a model that accurately predicted sites of epoxidation. The foundation for this model was an algorithm originally designed to model sites of cytochromes P450 metabolism (called XenoSite) that was recently applied to model the intrinsic reactivity of diverse molecules with glutathione. This modeling algorithm systematically and quantitatively summarizes the knowledge from hundreds of epoxidation reactions with a deep convolution network. This network makes predictions at both an atom and molecule level. The final epoxidation model constructed with this approach identified SOEs with 94.9% area under the curve (AUC) performance and separated epoxidized and non-epoxidized molecules with 79.3% AUC. Moreover, within epoxidized molecules, the model separated aromatic or double bond SOEs from all other aromatic or double bonds with AUCs of 92.5% and 95.1%, respectively. Finally, the model separated SOEs from sites of sp(2) hydroxylation with 83.2% AUC. Our model is the first of its kind and may be useful for the development of safer drugs. The epoxidation model is available at http://swami.wustl.edu/xenosite. PMID:27162970

  3. Modeling Epoxidation of Drug-like Molecules with a Deep Machine Learning Network

    PubMed Central

    2015-01-01

    Drug toxicity is frequently caused by electrophilic reactive metabolites that covalently bind to proteins. Epoxides comprise a large class of three-membered cyclic ethers. These molecules are electrophilic and typically highly reactive due to ring tension and polarized carbon–oxygen bonds. Epoxides are metabolites often formed by cytochromes P450 acting on aromatic or double bonds. The specific location on a molecule that undergoes epoxidation is its site of epoxidation (SOE). Identifying a molecule’s SOE can aid in interpreting adverse events related to reactive metabolites and direct modification to prevent epoxidation for safer drugs. This study utilized a database of 702 epoxidation reactions to build a model that accurately predicted sites of epoxidation. The foundation for this model was an algorithm originally designed to model sites of cytochromes P450 metabolism (called XenoSite) that was recently applied to model the intrinsic reactivity of diverse molecules with glutathione. This modeling algorithm systematically and quantitatively summarizes the knowledge from hundreds of epoxidation reactions with a deep convolution network. This network makes predictions at both an atom and molecule level. The final epoxidation model constructed with this approach identified SOEs with 94.9% area under the curve (AUC) performance and separated epoxidized and non-epoxidized molecules with 79.3% AUC. Moreover, within epoxidized molecules, the model separated aromatic or double bond SOEs from all other aromatic or double bonds with AUCs of 92.5% and 95.1%, respectively. Finally, the model separated SOEs from sites of sp2 hydroxylation with 83.2% AUC. Our model is the first of its kind and may be useful for the development of safer drugs. The epoxidation model is available at http://swami.wustl.edu/xenosite. PMID:27162970

  4. Rosen's (M,R) system as an X-machine.

    PubMed

    Palmer, Michael L; Williams, Richard A; Gatherer, Derek

    2016-11-01

    Robert Rosen's (M,R) system is an abstract biological network architecture that is allegedly both irreducible to sub-models of its component states and non-computable on a Turing machine. (M,R) stands as an obstacle to both reductionist and mechanistic presentations of systems biology, principally due to its self-referential structure. If (M,R) has the properties claimed for it, computational systems biology will not be possible, or at best will be a science of approximate simulations rather than accurate models. Several attempts have been made, at both empirical and theoretical levels, to disprove this assertion by instantiating (M,R) in software architectures. So far, these efforts have been inconclusive. In this paper, we attempt to demonstrate why - by showing how both finite state machine and stream X-machine formal architectures fail to capture the self-referential requirements of (M,R). We then show that a solution may be found in communicating X-machines, which remove self-reference using parallel computation, and then synthesise such machine architectures with object-orientation to create a formal basis for future software instantiations of (M,R) systems.

  5. Temperature drift modeling and compensation of fiber optical gyroscope based on improved support vector machine and particle swarm optimization algorithms.

    PubMed

    Wang, Wei; Chen, Xiyuan

    2016-08-10

    Modeling and compensation of temperature drift is an important method for improving the precision of fiber-optic gyroscopes (FOGs). In this paper, a new method of modeling and compensation for FOGs based on improved particle swarm optimization (PSO) and support vector machine (SVM) algorithms is proposed. The convergence speed and reliability of PSO are improved by introducing a dynamic inertia factor. The regression accuracy of SVM is improved by introducing a combined kernel function with four parameters and piecewise regression with fixed steps. The steps are as follows. First, the parameters of the combined kernel functions are optimized by the improved PSO algorithm. Second, the proposed kernel function of SVM is used to carry out piecewise regression, and the regression model is also obtained. Third, the temperature drift is compensated for by the regression data. The regression accuracy of the proposed method (in the case of mean square percentage error indicators) increased by 83.81% compared to the traditional SVM. PMID:27534465

  6. Prediction model of band gap for inorganic compounds by combination of density functional theory calculations and machine learning techniques

    NASA Astrophysics Data System (ADS)

    Lee, Joohwi; Seko, Atsuto; Shitara, Kazuki; Nakayama, Keita; Tanaka, Isao

    2016-03-01

    Machine learning techniques are applied to make prediction models of the G0W0 band gaps for 270 inorganic compounds using Kohn-Sham (KS) band gaps, cohesive energy, crystalline volume per atom, and other fundamental information of constituent elements as predictors. Ordinary least squares regression (OLSR), least absolute shrinkage and selection operator, and nonlinear support vector regression (SVR) methods are applied with two levels of predictor sets. When the KS band gap by generalized gradient approximation of Perdew-Burke-Ernzerhof (PBE) or modified Becke-Johnson (mBJ) is used as a single predictor, the OLSR model predicts the G0W0 band gap of randomly selected test data with the root-mean-square error (RMSE) of 0.59 eV. When KS band gap by PBE and mBJ methods are used together with a set of predictors representing constituent elements and compounds, the RMSE decreases significantly. The best model by SVR yields the RMSE of 0.24 eV. Band gaps estimated in this way should be useful as predictors for virtual screening of a large set of materials.
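
    A sketch of the two predictor levels described above: ordinary least squares regression with the Kohn-Sham gap as the single predictor, and nonlinear SVR with an extended predictor set. The descriptor values and target gaps are placeholders standing in for the 270-compound data set:

      # Sketch: OLS (KS gap only) vs SVR (extended predictors) for the G0W0 band gap.
      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.svm import SVR
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.model_selection import train_test_split
      from sklearn.metrics import mean_squared_error

      rng = np.random.default_rng(7)
      ks_gap = rng.uniform(0.0, 6.0, 270)            # placeholder PBE/mBJ Kohn-Sham gaps (eV)
      extra = rng.normal(size=(270, 6))              # placeholder compound/element descriptors
      gw_gap = 1.3 * ks_gap + 0.4 + 0.2 * extra[:, 0] + rng.normal(scale=0.3, size=270)

      X1 = ks_gap.reshape(-1, 1)
      X2 = np.column_stack([ks_gap, extra])
      for name, X, model in [("OLSR, KS gap only", X1, LinearRegression()),
                             ("SVR, extended predictors", X2, make_pipeline(StandardScaler(), SVR()))]:
          X_tr, X_te, y_tr, y_te = train_test_split(X, gw_gap, test_size=0.25, random_state=0)
          model.fit(X_tr, y_tr)
          rmse = mean_squared_error(y_te, model.predict(X_te)) ** 0.5
          print(f"{name}: RMSE = {rmse:.2f} eV")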

  7. Man-machine Integration Design and Analysis System (MIDAS) Task Loading Model (TLM) experimental and software detailed design report

    NASA Technical Reports Server (NTRS)

    Staveland, Lowell

    1994-01-01

    This is the experimental and software detailed design report for the prototype task loading model (TLM) developed as part of the man-machine integration design and analysis system (MIDAS), as implemented and tested in phase 6 of the Army-NASA Aircrew/Aircraft Integration (A3I) Program. The A3I program is an exploratory development effort to advance the capabilities and use of computational representations of human performance and behavior in the design, synthesis, and analysis of manned systems. The MIDAS TLM computationally models the demands designs impose on operators to aid engineers in the conceptual design of aircraft crewstations. This report describes TLM and the results of a series of experiments which were run during this phase to test its capabilities as a predictive task demand modeling tool. Specifically, it includes discussions of: the inputs and outputs of TLM, the theories underlying it, the results of the test experiments, the use of the TLM as both a stand-alone tool and part of a complete human operator simulation, and a brief introduction to the TLM software design.

  8. Improving near-infrared prediction model robustness with support vector machine regression: a pharmaceutical tablet assay example.

    PubMed

    Igne, Benoît; Drennen, James K; Anderson, Carl A

    2014-01-01

    Changes in raw materials and process wear and tear can have significant effects on the prediction error of near-infrared calibration models. When the variability that is present during routine manufacturing is not included in the calibration, test, and validation sets, the long-term performance and robustness of the model will be limited. Nonlinearity is a major source of interference. In near-infrared spectroscopy, nonlinearity can arise from light path-length differences that can come from differences in particle size or density. The usefulness of support vector machine (SVM) regression to handle nonlinearity and improve the robustness of calibration models in scenarios where the calibration set did not include all the variability present in test was evaluated. Compared to partial least squares (PLS) regression, SVM regression was less affected by physical (particle size) and chemical (moisture) differences. The linearity of the SVM predicted values was also improved. Nevertheless, although visualization and interpretation tools have been developed to enhance the usability of SVM-based methods, work is yet to be done to provide chemometricians in the pharmaceutical industry with a regression method that can supplement PLS-based methods. PMID:25358108

  9. Interactional Metadiscourse in Research Article Abstracts

    ERIC Educational Resources Information Center

    Gillaerts, Paul; Van de Velde, Freek

    2010-01-01

    This paper deals with interpersonality in research article abstracts analysed in terms of interactional metadiscourse. The evolution in the distribution of three prominent interactional markers comprised in Hyland's (2005a) model, viz. hedges, boosters and attitude markers, is investigated in three decades of abstract writing in the field of…

  10. Abstraction and context in concept representation.

    PubMed Central

    Hampton, James A

    2003-01-01

    This paper develops the notion of abstraction in the context of the psychology of concepts, and discusses its relation to context dependence in knowledge representation. Three general approaches to modelling conceptual knowledge from the domain of cognitive psychology are discussed, which serve to illustrate a theoretical dimension of increasing levels of abstraction. PMID:12903660

  11. Role of hydrogen abstraction acetylene addition mechanisms in the formation of chlorinated naphthalenes. 2. Kinetic modeling and the detailed mechanism of ring closure.

    PubMed

    McIntosh, Grant J; Russell, Douglas K

    2014-12-26

    The dominant formation mechanisms of chlorinated phenylacetylenes, naphthalenes, and phenylvinylacetylenes in relatively low pressure and temperature (∼40 Torr and 1000 K) pyrolysis systems are explored. Mechanism elucidation is achieved through a combination of theoretical and experimental techniques, the former employing a novel simplification of kinetic modeling which utilizes rate constants in a probabilistic framework. Contemporary formation schemes of the compounds of interest generally require successive additions of acetylene to phenyl radicals. As such, infrared laser powered homogeneous pyrolyses of dichloro- or trichloroethylene were perturbed with 1,2,4- or 1,2,3-trichlorobenzene. The resulting changes in product identities were compared with the major products expected from conventional pathways, aided by the results of our previous computational work. This analysis suggests that a Bittner-Howard growth mechanism, with a novel amendment to the conventional scheme made just prior to ring closure, describes the major products well. Expected products from a number of other potentially operative channels are shown to be incongruent with experiment, further supporting the role of Bittner-Howard channels as the unique pathway to naphthalene growth. A simple quantitative analysis which performs very well is achieved by considering the reaction scheme as a probability tree, with relative rate constants being cast as branching probabilities. This analysis describes all chlorinated phenylacetylene, naphthalene, and phenylvinylacetylene congeners. The scheme is then tested in a more general system, i.e., not enforcing a hydrogen abstraction/acetylene addition mechanism, by pyrolyzing mixtures of di- and trichloroethylene without the addition of an aromatic precursor. The model indicates that these mechanisms are still likely to be operative.
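
    A tiny sketch of the probability-tree treatment described above: relative rate constants at each branch point become branching probabilities, and the yield of each terminal product is the product of probabilities along its path. The branch structure and rate-constant values are placeholders, not the paper's scheme:

      # Sketch: relative rate constants as branching probabilities in a reaction tree.
      def branching_probabilities(rate_constants):
          total = sum(rate_constants.values())
          return {channel: k / total for channel, k in rate_constants.items()}

      # placeholder two-level scheme: first addition step, then ring closure vs. acetylene loss
      step1 = branching_probabilities({"addition_site_A": 3.0e11, "addition_site_B": 1.0e11})
      step2 = branching_probabilities({"ring_closure": 8.0e10, "acetylene_loss": 2.0e10})

      yields = {(s1, s2): p1 * p2
                for s1, p1 in step1.items()
                for s2, p2 in step2.items()}
      for path, y in yields.items():
          print(path, round(y, 3))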

  12. Expanding Options. A Model to Attract Secondary Students into Nontraditional Vocational Programs. For Emphasis in: Building Trades, Electronics, Health Services, Machine Shop, Welding.

    ERIC Educational Resources Information Center

    Good, James D.; DeVore, Mary Ann

    This model has been designed for use by Missouri secondary schools in attracting females and males into nontraditional occupational programs. The research-based strategies are intended for implementation in the following areas: attracting females into building trades, electronics, machine shop, and welding; and males into secondary health…

  13. Process Modeling In Cold Forging Considering The Process-Tool-Machine Interactions

    NASA Astrophysics Data System (ADS)

    Kroiss, Thomas; Engel, Ulf; Merklein, Marion

    2010-06-01

    In this paper, a methodic approach is presented for the determination and modeling of the axial deflection characteristic for the whole system of stroke-controlled press and tooling system. This is realized by a combination of experiment and FE simulation. The press characteristic is uniquely measured in experiment. The tooling system characteristic is determined in FE simulation to avoid experimental investigations on various tooling systems. The stiffnesses of press and tooling system are combined to a substitute stiffness that is integrated into the FE process simulation as a spring element. Non-linear initial effects of the press are modeled with a constant shift factor. The approach was applied to a full forward extrusion process on a press with C-frame. A comparison between experiments and results of the integrated FE simulation model showed a high accuracy of the FE model. The simulation model with integrated deflection characteristic represents the entire process behavior and can be used for the calculation of a mathematical process model based on variant simulations and response surfaces. In a subsequent optimization step, an adjusted process and tool design can be determined, that compensates the influence of the deflections on the workpiece dimensions leading to high workpiece accuracy. Using knowledge on the process behavior, the required number of variant simulations was reduced.
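
    The record combines the press and tooling-system stiffnesses into a single substitute stiffness used as a spring element in the FE model. As a sketch, assuming the two compliances add in series (an assumption on my part, with placeholder stiffness and force values):

      # Sketch: substitute stiffness of press + tooling system as springs in series.
      def substitute_stiffness(k_press, k_tool):
          return 1.0 / (1.0 / k_press + 1.0 / k_tool)   # kN/mm

      k_sub = substitute_stiffness(k_press=400.0, k_tool=900.0)
      force = 250.0                                      # placeholder process force in kN
      print(f"substitute stiffness = {k_sub:.1f} kN/mm, axial deflection = {force / k_sub:.3f} mm")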

  14. An initial abstraction and constant loss model, and methods for estimating unit hydrographs, peak streamflows, and flood volumes for urban basins in Missouri

    USGS Publications Warehouse

    Huizinga, Richard J.

    2014-01-01

    The appropriate regional initial abstraction regression equation was combined with both the generalized and the specific regional mean constant loss values and the GUH regression equations. Both the generalized regional mean constant loss and

  15. Electrochemical machining with ultrashort voltage pulses: modelling of charging dynamics and feature profile evolution.

    PubMed

    Kenney, Jason A; Hwang, Gyeong S

    2005-07-01

    A two-dimensional computational model is developed to describe electrochemical nanostructuring of conducting materials with ultrashort voltage pulses. The model consists of (1) a transient charging simulation to describe the evolution of the overpotentials at the tool and workpiece surfaces and the resulting dissolution currents and (2) a feature profile evolution tool which uses the level set method to describe either vertical or lateral etching of the workpiece. Results presented include transient currents at different separations between tool and workpiece, evolution of overpotentials and dissolution currents as a function of position along the workpiece, and etch profiles as a function of pulse duration. PMID:21727446

  16. A LARI Experience (Abstract)

    NASA Astrophysics Data System (ADS)

    Cook, M.

    2015-12-01

    (Abstract only) In 2012, Lowell Observatory launched The Lowell Amateur Research Initiative (LARI) to formally involve amateur astronomers in scientific research by bringing them to the attention of professional astronomers and having them assist with professional astronomical research. One of the LARI projects is the BVRI photometric monitoring of Young Stellar Objects (YSOs), wherein amateurs obtain observations to search for new outburst events and characterize the colour evolution of previously identified outbursters. A summary of the scientific and organizational aspects of this LARI project, including its goals and science motivation, the process for getting involved with the project, a description of the team members, their equipment and methods of collaboration, and an overview of the programme stars, preliminary findings, and lessons learned, is presented.

  17. IEEE conference record -- Abstracts

    SciTech Connect

    Not Available

    1994-01-01

    This conference covers the following areas: computational plasma physics; vacuum electronics; basic phenomena in fully ionized plasmas; plasma, electron, and ion sources; environmental/energy issues in plasma science; space plasmas; plasma processing; ball lightning/spherical plasma configurations; fast wave devices; magnetic fusion; basic phenomena in partially ionized plasma; dense plasma focus; plasma diagnostics; basic phenomena in weakly ionized gases; fast opening switches; MHD; fast z-pinches and x-ray lasers; intense ion and electron beams; laser-produced plasmas; microwave plasma interactions; EM and ETH launchers; solid state plasmas and switches; intense beam microwaves; and plasmas for lighting. Separate abstracts were prepared for 416 papers in this conference.

  18. Predicting the hazardous dose of industrial chemicals in warm-blooded species using machine learning-based modelling approaches.

    PubMed

    Gupta, S; Basant, N; Singh, K P

    2015-06-01

    The hazardous dose of a chemical (HD50) is an emerging and acceptable test statistic for the safety/risk assessment of chemicals. Since it is derived using the experimental toxicity values of the chemical in several test species, its determination is highly cumbersome and time- and resource-intensive. In this study, three machine learning-based QSARs were established for predicting the HD50 of chemicals in warm-blooded species following the OECD guidelines. A data set comprising HD50 values of 957 chemicals was used to develop SDT, DTF and DTB QSAR models. The diversity in chemical structures and nonlinearity in the data were verified. Several validation coefficients were derived to test the predictive and generalization abilities of the constructed QSARs. The chi-path descriptors were identified as the most influential in all three QSARs. The DTF and DTB models performed better than the SDT model and yielded r(2) values of 0.928 and 0.959 between the measured and predicted HD50 values in the complete data set. Substructure alerts responsible for the toxicity of the chemicals were identified. The results suggest the appropriateness of the developed QSARs for reliably predicting the HD50 values of chemicals, and they can be used for screening new chemicals for their safety/risk assessment for regulatory purposes.
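
    A minimal sketch of tree-based QSAR modeling of this kind is given below, using scikit-learn's single decision tree, random forest, and gradient boosting regressors as stand-ins for the SDT, DTF, and DTB learners; the descriptor matrix and HD50 values are synthetic placeholders, not the 957-chemical data set used in the study.

```python
# Tree-based QSAR sketch with scikit-learn stand-ins; data are synthetic.
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
X = rng.normal(size=(957, 35))                                  # hypothetical molecular descriptors
y = X[:, 0] - 0.5 * X[:, 3] + rng.normal(scale=0.3, size=957)   # toy HD50 values

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
for name, model in [("single tree", DecisionTreeRegressor(max_depth=6)),
                    ("bagged forest", RandomForestRegressor(n_estimators=200)),
                    ("boosted trees", GradientBoostingRegressor(n_estimators=200))]:
    model.fit(X_tr, y_tr)
    print(name, round(r2_score(y_te, model.predict(X_te)), 3))
```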

  19. A comparison of binary and multiclass support vector machine models for volcanic lithology estimation using geophysical log data from Liaohe Basin, China

    NASA Astrophysics Data System (ADS)

    Mou, Dan; Wang, Zhu-Wen

    2016-05-01

    Lithology estimation of rocks, especially volcanic lithology, is one of the major goals of geophysical exploration. In this paper, we propose the use of binary and multiclass support vector machine models with geophysical log data to estimate the volcanic lithology of the Liaohe Basin, China. Using neutron (CNL), density (DEN), acoustic (AC), deep lateral resistivity (RLLD), and gamma-ray (GR) log data from 40 wells (a total of 1200 log data points) in the Liaohe Basin, China, we first construct the binary support vector machine model to classify volcanic rock and non-volcanic rock. Then, we expand the binary model to a multiclass model using the approach of directed acyclic graphs, and construct multiclass models to classify six types of volcanic rocks: basalt, non-compacted basalt, trachyte, non-compacted trachyte, gabbro and diabase. To assess the accuracy of these two models, we compare their predictions with core data from four wells (at 800 different depth points in total). Results indicate that the accuracy of the binary and multiclass models are 98.4% and 87%, respectively, demonstrating that binary and multiclass support vector machine models are effective methods for classifying volcanic lithology.
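
    The two-stage classification scheme can be sketched as follows, with scikit-learn's one-vs-one SVC standing in for the directed-acyclic-graph decomposition and with synthetic stand-ins for the five log curves (CNL, DEN, AC, RLLD, GR) and the lithology labels.

```python
# Two-stage SVM sketch on synthetic data: binary volcanic / non-volcanic model,
# then a multiclass model over six volcanic lithologies for the volcanic samples.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
logs = rng.normal(size=(1200, 5))                      # toy CNL, DEN, AC, RLLD, GR values
is_volcanic = (logs[:, 1] + logs[:, 4] > 0).astype(int)
lithology = rng.integers(0, 6, size=1200)              # toy labels for 6 volcanic classes

binary_svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
binary_svm.fit(logs, is_volcanic)

multi_svm = make_pipeline(StandardScaler(),
                          SVC(kernel="rbf", C=10.0, decision_function_shape="ovo"))
multi_svm.fit(logs[is_volcanic == 1], lithology[is_volcanic == 1])

new_point = rng.normal(size=(1, 5))                    # one new logged depth point
if binary_svm.predict(new_point)[0] == 1:
    print("volcanic, class", multi_svm.predict(new_point)[0])
```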

  20. Machine Learning Model Analysis and Data Visualization with Small Molecules Tested in a Mouse Model of Mycobacterium tuberculosis Infection (2014–2015)

    PubMed Central

    2016-01-01

    The renewed urgency to develop new treatments for Mycobacterium tuberculosis (Mtb) infection has resulted in large-scale phenotypic screening and thousands of new active compounds in vitro. The next challenge is to identify candidates to pursue in a mouse in vivo efficacy model as a step to predicting clinical efficacy. We previously analyzed over 70 years of this mouse in vivo efficacy data, which we used to generate and validate machine learning models. Curation of 60 additional small molecules with in vivo data published in 2014 and 2015 was undertaken to further test these models. This represents a much larger test set than for the previous models. Several computational approaches have now been applied to analyze these molecules and compare their molecular properties beyond those attempted previously. Our previous machine learning models have been updated, and a novel aspect has been added in the form of mouse liver microsomal half-life (MLM t1/2) and in vitro-based Mtb models incorporating cytotoxicity data that were used to predict in vivo activity for comparison. Our best Mtb in vivo models possess fivefold ROC values > 0.7, sensitivity > 80%, and concordance > 60%, while the best specificity value is >40%. Use of an MLM t1/2 Bayesian model affords comparable results for scoring the 60 compounds tested. Combining MLM stability and in vitro Mtb models in a novel consensus workflow in the best cases has a positive predicted value (hit rate) > 77%. Our results indicate that Bayesian models constructed with literature in vivo Mtb data generated by different laboratories in various mouse models can have predictive value and may be used alongside MLM t1/2 and in vitro-based Mtb models to assist in selecting antitubercular compounds with desirable in vivo efficacy. We demonstrate for the first time that consensus models of any kind can be used to predict in vivo activity for Mtb. In addition, we describe a new clustering method for data visualization and apply this to

  1. Machine Learning Model Analysis and Data Visualization with Small Molecules Tested in a Mouse Model of Mycobacterium tuberculosis Infection (2014-2015).

    PubMed

    Ekins, Sean; Perryman, Alexander L; Clark, Alex M; Reynolds, Robert C; Freundlich, Joel S

    2016-07-25

    The renewed urgency to develop new treatments for Mycobacterium tuberculosis (Mtb) infection has resulted in large-scale phenotypic screening and thousands of new active compounds in vitro. The next challenge is to identify candidates to pursue in a mouse in vivo efficacy model as a step to predicting clinical efficacy. We previously analyzed over 70 years of this mouse in vivo efficacy data, which we used to generate and validate machine learning models. Curation of 60 additional small molecules with in vivo data published in 2014 and 2015 was undertaken to further test these models. This represents a much larger test set than for the previous models. Several computational approaches have now been applied to analyze these molecules and compare their molecular properties beyond those attempted previously. Our previous machine learning models have been updated, and a novel aspect has been added in the form of mouse liver microsomal half-life (MLM t1/2) and in vitro-based Mtb models incorporating cytotoxicity data that were used to predict in vivo activity for comparison. Our best Mtb in vivo models possess fivefold ROC values > 0.7, sensitivity > 80%, and concordance > 60%, while the best specificity value is >40%. Use of an MLM t1/2 Bayesian model affords comparable results for scoring the 60 compounds tested. Combining MLM stability and in vitro Mtb models in a novel consensus workflow in the best cases has a positive predicted value (hit rate) > 77%. Our results indicate that Bayesian models constructed with literature in vivo Mtb data generated by different laboratories in various mouse models can have predictive value and may be used alongside MLM t1/2 and in vitro-based Mtb models to assist in selecting antitubercular compounds with desirable in vivo efficacy. We demonstrate for the first time that consensus models of any kind can be used to predict in vivo activity for Mtb. In addition, we describe a new clustering method for data visualization and apply this

  2. Writing a successful research abstract.

    PubMed

    Bliss, Donna Z

    2012-01-01

    Writing and submitting a research abstract provides timely dissemination of the findings of a study and offers peer input for the subsequent development of a quality manuscript. Acceptance of abstracts is competitive. Understanding the expected content of an abstract, the abstract review process and tips for skillful writing will improve the chance of acceptance.

  3. Model-Based Collaborative Filtering Analysis of Student Response Data: Machine-Learning Item Response Theory

    ERIC Educational Resources Information Center

    Bergner, Yoav; Droschler, Stefan; Kortemeyer, Gerd; Rayyan, Saif; Seaton, Daniel; Pritchard, David E.

    2012-01-01

    We apply collaborative filtering (CF) to dichotomously scored student response data (right, wrong, or no interaction), finding optimal parameters for each student and item based on cross-validated prediction accuracy. The approach is naturally suited to comparing different models, both unidimensional and multidimensional in ability, including a…

  4. Automated surface micro-machining mask creation from a 3D model.

    SciTech Connect

    Schiek, Richard Louis; Schmidt, Rodney Cannon

    2004-06-01

    We have developed and implemented a method that, given a three-dimensional object, can infer from its topology the two-dimensional masks needed to produce that object with surface micromachining. The masks produced by this design tool can be generic, process-independent masks or, if given process constraints, specific to a target process. The design tool calculates the two-dimensional mask set required to produce a given three-dimensional model by investigating the vertical topology of the model. The 3D model is first separated into bodies that are non-intersecting, made from different materials, or linked only through a ground plane. Next, for each body, unique vertical cross sections are located and arranged into a tree based on their topological relationship. A branch-wise search of the tree uncovers locations where deposition boundaries must lie and identifies candidate masks, creating a generic mask set for the 3D model. In the last step, specific process requirements are considered that may constrain the generic mask set. Constraints can include the thickness or number of deposition layers, specific ordering of masks as required by a process, and the type of material used in a given layer. Candidate masks are reconciled with the process constraints through a constrained optimization.

  5. Using remote sensing and machine learning for the spatial modelling of a bluetongue virus vector

    NASA Astrophysics Data System (ADS)

    Van doninck, J.; Peters, J.; De Baets, B.; Ducheyne, E.; Verhoest, N. E. C.

    2012-04-01

    Bluetongue is a viral vector-borne disease transmitted between hosts, mostly cattle and small ruminants, by some species of Culicoides midges. Within the Mediterranean basin, C. imicola is the main vector of the bluetongue virus. The spatial distribution of this species is limited by a number of environmental factors, including temperature, soil properties and land cover. The identification of zones at risk of bluetongue outbreaks thus requires detailed information on these environmental factors, as well as appropriate epidemiological modelling techniques. We here give an overview of the environmental factors assumed to be constraining the spatial distribution of C. imicola, as identified in different studies. Subsequently, remote sensing products that can be used as proxies for these environmental constraints are presented. Remote sensing data are then used together with species occurrence data from the Spanish Bluetongue National Surveillance Programme to calibrate a supervised learning model, based on Random Forests, to model the probability of occurrence of the C. imicola midge. The model will then be applied for a pixel-based prediction over the Iberian peninsula using remote sensing products for habitat characterization.
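
    A minimal sketch of the modelling step is given below, assuming presence/absence records and a few remote-sensing covariates as placeholders for the actual surveillance and satellite data: a Random Forests classifier is trained on occurrence data and then returns a per-pixel probability of C. imicola occurrence.

```python
# Random Forests occurrence-probability sketch on synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
covariates = rng.uniform(size=(5000, 3))            # toy LST, NDVI, soil-moisture proxies
presence = (covariates[:, 0] > 0.4).astype(int)     # toy presence/absence labels

rf = RandomForestClassifier(n_estimators=300, oob_score=True, random_state=0)
rf.fit(covariates, presence)

pixels = rng.uniform(size=(10, 3))                  # covariates for pixels of a prediction map
prob_occurrence = rf.predict_proba(pixels)[:, 1]    # P(occurrence) per pixel
print(rf.oob_score_, prob_occurrence.round(2))
```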

  6. Abstraction of Seepage into Drifts

    SciTech Connect

    M.L. Wilson; C.K. Ho

    2000-09-26

    A total-system performance assessment (TSPA) for a potential nuclear-waste repository requires an estimate of the amount of water that might contact waste. This paper describes the model used for part of that estimation in a recent TSPA for the Yucca Mountain site. The discussion is limited to estimation of how much water might enter emplacement drifts; additional considerations related to flow within the drifts, and how much water might actually contact waste, are not addressed here. The unsaturated zone at Yucca Mountain is being considered for the potential repository, and a drift opening in unsaturated rock tends to act as a capillary barrier and divert much of the percolating water around it. For TSPA, the important questions regarding seepage are how many waste packages might be subjected to water flow and how much flow those packages might see. Because of heterogeneity of the rock and uncertainty about the future (how the climate will evolve, etc.), it is not possible to predict seepage amounts or locations with certainty. Thus, seepage is treated as a stochastic quantity in TSPA simulations, with the magnitude and spatial distribution of seepage sampled from uncertainty distributions. The distillation of the essential components of process modeling into a form suitable for use in TSPA simulations is referred to as abstraction. In the following sections, seepage process models and abstractions will be summarized and then some illustrative results are presented.
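
    The stochastic treatment of seepage can be illustrated with a toy Monte Carlo loop in which each realization samples how many packages see seepage and how much flux they receive; the distributions below are hypothetical and are not the abstractions developed for the TSPA.

```python
import numpy as np

# Toy Monte Carlo sketch (hypothetical distributions, not the TSPA abstraction):
# each realization samples the fraction of waste packages that see seepage and
# a seepage flux for each wet package, instead of using fixed values.
rng = np.random.default_rng(3)
n_packages = 1000

for realization in range(3):
    seep_fraction = rng.beta(2.0, 8.0)                     # fraction of packages with seepage
    n_wet = rng.binomial(n_packages, seep_fraction)        # how many packages get wet
    flux = rng.lognormal(mean=0.0, sigma=1.0, size=n_wet)  # per-package seepage flux (toy units)
    print(f"realization {realization}: {n_wet} wet packages, mean flux {flux.mean():.2f}")
```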

  7. Charging machine

    DOEpatents

    Medlin, John B.

    1976-05-25

    A charging machine for loading fuel slugs into the process tubes of a nuclear reactor includes a tubular housing connected to the process tube, a charging trough connected to the other end of the tubular housing, a device for loading the charging trough with a group of fuel slugs, means for equalizing the coolant pressure in the charging trough with the pressure in the process tubes, means for pushing the group of fuel slugs into the process tube and a latch and a seal engaging the last object in the group of fuel slugs to prevent the fuel slugs from being ejected from the process tube when the pusher is removed and to prevent pressure liquid from entering the charging machine.

  8. Fullerene Machines

    NASA Technical Reports Server (NTRS)

    Globus, Al; Saini, Subhash

    1998-01-01

    Recent computational efforts at NASA Ames Research Center and computation and experiment elsewhere suggest that a nanotechnology of machine phase functionalized fullerenes may be synthetically accessible and of great interest. We have computationally demonstrated that molecular gears fashioned from (14,0) single-walled carbon nanotubes and benzyne teeth should operate well at 50-100 gigahertz. Preliminary results suggest that these gears can be cooled by a helium atmosphere and a laser motor can power fullerene gears if a positive and negative charge have been added to form a dipole. In addition, we have unproven concepts based on experimental and computational evidence for support structures, computer control, a system architecture, a variety of components, and manufacture. Combining fullerene machines with the remarkable mechanical properties of carbon nanotubes, there is some reason to believe that a focused effort to develop fullerene nanotechnology could yield materials with tremendous properties.

  9. Architectures for intelligent machines

    NASA Technical Reports Server (NTRS)

    Saridis, George N.

    1991-01-01

    The theory of intelligent machines has recently been reformulated to incorporate new architectures that use neural and Petri nets. The analytic functions of an intelligent machine are implemented by intelligent controls, using entropy as a measure. The resulting hierarchical control structure is based on the principle of increasing precision with decreasing intelligence. Each of the three levels of the intelligent control uses a different architecture in order to satisfy the requirements of the principle: the organization level is modeled after a Boltzmann machine for abstract reasoning, task planning, and decision making; the coordination level is composed of a number of Petri net transducers supervised, for command exchange, by a dispatcher, which also serves as an interface to the organization level; the execution level includes the sensory, navigation-planning, and control hardware, which interacts one-to-one with the appropriate coordinators, while a VME bus provides a channel for database exchange among the several devices. This system is currently implemented on a robotic transporter designed for space construction at the CIRSSE laboratories at the Rensselaer Polytechnic Institute. The progress of its development is reported.

  10. Induction machine

    DOEpatents

    Owen, Whitney H.

    1980-01-01

    A polyphase rotary induction machine for use as a motor or generator utilizing a single rotor assembly having two series connected sets of rotor windings, a first stator winding disposed around the first rotor winding and means for controlling the current induced in one set of the rotor windings compared to the current induced in the other set of the rotor windings. The rotor windings may be wound rotor windings or squirrel cage windings.

  11. Abstraction Planning in Real Time

    NASA Technical Reports Server (NTRS)

    Washington, Richard

    1994-01-01

    When a planning agent works in a complex, real-world domain, it is unable to plan for and store all possible contingencies and problem situations ahead of time. The agent needs to be able to fall back on an ability to construct plans at run time under time constraints. This thesis presents a method for planning at run time that incrementally builds up plans at multiple levels of abstraction. The plans are continually updated by information from the world, allowing the planner to adjust its plan to a changing world during the planning process. All the information is represented over intervals of time, allowing the planner to reason about durations, deadlines, and delays within its plan. In addition to the method, the thesis presents a formal model of the planning process and uses the model to investigate planning strategies. The method has been implemented, and experiments have been run to validate the overall approach and the theoretical model.

  12. A simple numerical model for membrane oxygenation of an artificial lung machine

    NASA Astrophysics Data System (ADS)

    Subraveti, Sai Nikhil; Sai, P. S. T.; Viswanathan Pillai, Vinod Kumar; Patnaik, B. S. V.

    2015-11-01

    Optimal design of membrane oxygenators will have far-reaching ramifications for the development of artificial heart-lung systems. In the present CFD study, we simulate the gas exchange, on a benchmark device, between the venous blood and the air that passes through the hollow fiber membranes. The gas exchange between the tube-side fluid and the shell-side venous liquid is modeled by solving the mass and momentum conservation equations. The fiber bundle is modeled as a porous block with a bundle porosity of 0.6, and the resistance offered by the fiber bundle is estimated by the standard Ergun correlation. The present numerical simulations are validated against available benchmark data. The effects of bundle porosity, bundle size, Reynolds number, non-Newtonian constitutive relation, upstream velocity distribution, etc., on the pressure drop and oxygen saturation levels are investigated. To emulate the features of gas transfer past the alveoli, the effect of pulsatility on the membrane oxygenation is also investigated.
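
    The porous-block resistance mentioned above follows the standard Ergun correlation, which can be evaluated directly; the property values in the sketch below (equivalent fiber diameter, blood viscosity and density) are illustrative rather than those of the benchmark device.

```python
# Minimal sketch of the standard Ergun correlation used for the fiber-bundle
# flow resistance; all property values below are illustrative placeholders.

def ergun_pressure_gradient(u, eps=0.6, d=3.0e-4, mu=3.5e-3, rho=1060.0):
    """dP/dx in Pa/m for superficial velocity u (m/s), porosity eps, equivalent
    fiber diameter d (m), fluid viscosity mu (Pa*s) and density rho (kg/m^3)."""
    viscous = 150.0 * mu * (1.0 - eps) ** 2 * u / (eps ** 3 * d ** 2)
    inertial = 1.75 * rho * (1.0 - eps) * u ** 2 / (eps ** 3 * d)
    return viscous + inertial

print(ergun_pressure_gradient(u=0.02))   # pressure gradient at 2 cm/s superficial velocity
```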

  13. Control method and system for hydraulic machines employing a dynamic joint motion model

    DOEpatents

    Danko, George

    2011-11-22

    A control method and system for controlling a hydraulically actuated mechanical arm to perform a task, the mechanical arm optionally being a hydraulically actuated excavator arm. The method can include determining a dynamic model of the motion of the hydraulic arm for each hydraulic arm link by relating the input signal vector for each respective link to the output signal vector for the same link. Also the method can include determining an error signal for each link as the weighted sum of the differences between a measured position and a reference position and between the time derivatives of the measured position and the time derivatives of the reference position for each respective link. The weights used in the determination of the error signal can be determined from the constant coefficients of the dynamic model. The error signal can be applied in a closed negative feedback control loop to diminish or eliminate the error signal for each respective link.
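
    A minimal sketch of the per-link error signal and its use in a negative feedback loop is shown below; the weights and gain are placeholders standing in for the constants derived from the identified dynamic model.

```python
# Sketch of the weighted error signal for one hydraulic link and a simple
# negative-feedback command; w0, w1 and the gain are hypothetical stand-ins
# for the constant coefficients obtained from the identified dynamic model.

def link_error(x_ref, x_meas, v_ref, v_meas, w0=1.0, w1=0.4):
    """e = w0*(x_ref - x_meas) + w1*(dx_ref/dt - dx_meas/dt) for one link."""
    return w0 * (x_ref - x_meas) + w1 * (v_ref - v_meas)

def control_command(x_ref, x_meas, v_ref, v_meas, gain=2.5):
    """Closed negative-feedback loop: the command acts to drive the error to zero."""
    return gain * link_error(x_ref, x_meas, v_ref, v_meas)

print(control_command(x_ref=0.50, x_meas=0.47, v_ref=0.10, v_meas=0.12))
```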

  14. A simple tandem disk model for a cross-wind machine

    NASA Astrophysics Data System (ADS)

    Healey, J. V.

    The relative power coefficients, area expansion ratio, and crosswind forces for a crosswind turbine, e.g., the Darrieus, were examined with a tandem-disk, single-streamtube model. The upwind disk is assumed to be rectangular and the downwind disk is modeled as filling the wake of the upwind disk. Velocity and force triangles are devised for the factors operating at each blade. Attention was given to the NACA 0012 and 0018, and Go 735 and 420 airfoils as blades, with Reynolds number just under 500,000. The 0018 was found to be the best airfoil, followed by the 0012, the 735, and, very far behind in terms of the power coefficient, the 420. The forces on the two disks were calculated to be equal at low tip speed ratios with symmetrical airfoils, while the Go cambered profiles yielded negative values upwind under the same conditions.

  15. Machine learning for molecular scattering dynamics: Gaussian Process models for improved predictions of molecular collision observables

    NASA Astrophysics Data System (ADS)

    Krems, Roman; Cui, Jie; Li, Zhiying

    2016-05-01

    We show how statistical learning techniques based on kriging (Gaussian Process regression) can be used for improving the predictions of classical and/or quantum scattering theory. In particular, we show how Gaussian Process models can be used for: (i) efficient non-parametric fitting of multi-dimensional potential energy surfaces without the need to fit ab initio data with analytical functions; (ii) obtaining scattering observables as functions of individual PES parameters; (iii) using classical trajectories to interpolate quantum results; (iv) extrapolation of scattering observables from one molecule to another; (v) obtaining scattering observables with error bars reflecting the inherent inaccuracy of the underlying potential energy surfaces. We argue that the application of Gaussian Process models to quantum scattering calculations may potentially elevate the theoretical predictions to the same level of certainty as the experimental measurements and can be used to identify the role of individual atoms in determining the outcome of collisions of complex molecules. We will show examples and discuss the applications of Gaussian Process models to improving the predictions of scattering theory relevant for the cold molecules research field. Work supported by NSERC of Canada.
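
    A minimal sketch of the first ingredient, non-parametric Gaussian Process fitting of potential energy points, is given below using scikit-learn; the toy one-dimensional potential and kernel settings are illustrative only, and the predictive standard deviation plays the role of the error bars mentioned in item (v).

```python
# Gaussian Process regression sketch on a toy 1-D "potential"; the points stand
# in for ab initio energies and the predictive std supplies error bars.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

r = np.linspace(0.8, 3.0, 12)[:, None]                          # sparse grid of geometries
energy = 4.0 * ((1.0 / r[:, 0]) ** 12 - (1.0 / r[:, 0]) ** 6)   # toy Lennard-Jones energies

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(length_scale=0.5),
                              normalize_y=True)
gp.fit(r, energy)

r_new = np.linspace(0.8, 3.0, 200)[:, None]
mean, std = gp.predict(r_new, return_std=True)                  # interpolated PES + uncertainty
print(mean[:3].round(3), std[:3].round(3))
```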

  16. A biophysically based finite-state machine model for analyzing gastric experimental entrainment and pacing recordings.

    PubMed

    Sathar, Shameer; Trew, Mark L; Du, Peng; O'Grady, Greg; Cheng, Leo K

    2014-04-01

    Gastrointestinal motility is coordinated by slow waves (SWs) generated by the interstitial cells of Cajal (ICC). Experimental studies have shown that SWs spontaneously activate at different intrinsic frequencies in isolated tissue, whereas in intact tissues they are entrained to a single frequency. Gastric pacing has been used in an attempt to improve motility in disorders such as gastroparesis by modulating entrainment, but the optimal methods of pacing are currently unknown. Computational models can aid in the interpretation of complex in vivo recordings and help to determine optimal pacing strategies. However, previous computational models of SW entrainment are limited to the intrinsic pacing frequency as the primary determinant of the conduction velocity, and are not able to accurately represent the effects of external stimuli and electrical anisotropies. In this paper, we present a novel computationally efficient method for modeling SW propagation through the ICC network while accounting for conductivity parameters and fiber orientations. The method successfully reproduced experimental recordings of entrainment following gastric transection and the effects of gastric pacing on SW activity. It provides a reliable new tool for investigating gastric electrophysiology in normal and diseased states, and to guide and focus future experimental studies. PMID:24276722

  17. Analysis of complex networks using aggressive abstraction.

    SciTech Connect

    Colbaugh, Richard; Glass, Kristin.; Willard, Gerald

    2008-10-01

    This paper presents a new methodology for analyzing complex networks in which the network of interest is first abstracted to a much simpler (but equivalent) representation, the required analysis is performed using the abstraction, and analytic conclusions are then mapped back to the original network and interpreted there. We begin by identifying a broad and important class of complex networks which admit abstractions that are simultaneously dramatically simplifying and property preserving -- we call these aggressive abstractions -- and which can therefore be analyzed using the proposed approach. We then introduce and develop two forms of aggressive abstraction: 1.) finite state abstraction, in which dynamical networks with uncountable state spaces are modeled using finite state systems, and 2.) one-dimensional abstraction, whereby high dimensional network dynamics are captured in a meaningful way using a single scalar variable. In each case, the property preserving nature of the abstraction process is rigorously established and efficient algorithms are presented for computing the abstraction. The considerable potential of the proposed approach to complex network analysis is illustrated through case studies involving vulnerability analysis of technological networks and predictive analysis for social processes.
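
    A toy example of the first form, finite state abstraction, is sketched below: a dynamical system with an uncountable (continuous) state space is reduced to a five-state system by partitioning the state space and recording which cells can reach which. This illustrates the general idea only, not the property-preserving algorithms developed in the paper.

```python
# Toy finite-state abstraction: partition a continuous state space into cells
# and build the abstract transition relation by sampling the dynamics.
import numpy as np

def dynamics(x):                      # toy continuous-state dynamics on [0, 1]
    return 3.7 * x * (1.0 - x)

edges = np.linspace(0.0, 1.0, 6)      # partition [0, 1] into 5 abstract states
transitions = set()
for cell in range(5):
    for x in np.linspace(edges[cell], edges[cell + 1], 50):   # sample points in the cell
        nxt = int(np.clip(np.digitize(dynamics(x), edges) - 1, 0, 4))
        transitions.add((cell, nxt))

print(sorted(transitions))            # the abstract (finite) transition relation
```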

  18. Finding Feasible Abstract Counter-Examples

    NASA Technical Reports Server (NTRS)

    Pasareanu, Corina S.; Dwyer, Matthew B.; Visser, Willem; Clancy, Daniel (Technical Monitor)

    2002-01-01

    A strength of model checking is its ability to automate the detection of subtle system errors and produce traces that exhibit those errors. Given the high computational cost of model checking most researchers advocate the use of aggressive property-preserving abstractions. Unfortunately, the more aggressively a system is abstracted the more infeasible behavior it will have. Thus, while abstraction enables efficient model checking it also threatens the usefulness of model checking as a defect detection tool, since it may be difficult to determine whether a counter-example is feasible and hence worth developer time to analyze. We have explored several strategies for addressing this problem by extending an explicit-state model checker, Java PathFinder (JPF), to search for and analyze counter-examples in the presence of abstractions. We demonstrate that these techniques effectively preserve the defect detection ability of model checking in the presence of aggressive abstraction by applying them to check properties of several abstracted multi-threaded Java programs. These new capabilities are not specific to JPF and can be easily adapted to other model checking frameworks; we describe how this was done for the Bandera toolset.

  19. Physics-based simulation modeling and optimization of microstructural changes induced by machining and selective laser melting processes in titanium and nickel based alloys

    NASA Astrophysics Data System (ADS)

    Arisoy, Yigit Muzaffer

    Manufacturing processes may significantly affect the quality of resultant surfaces and structural integrity of the metal end products. Controlling manufacturing process induced changes to the product's surface integrity may improve the fatigue life and overall reliability of the end product. The goal of this study is to model the phenomena that result in microstructural alterations and improve the surface integrity of the manufactured parts by utilizing physics-based process simulations and other computational methods. Two different (both conventional and advanced) manufacturing processes; i.e. machining of Titanium and Nickel-based alloys and selective laser melting of Nickel-based powder alloys are studied. 3D Finite Element (FE) process simulations are developed and experimental data that validates these process simulation models are generated to compare against predictions. Computational process modeling and optimization have been performed for machining induced microstructure that includes; i) predicting recrystallization and grain size using FE simulations and the Johnson-Mehl-Avrami-Kolmogorov (JMAK) model, ii) predicting microhardness using non-linear regression models and the Random Forests method, and iii) multi-objective machining optimization for minimizing microstructural changes. Experimental analysis and computational process modeling of selective laser melting have been also conducted including; i) microstructural analysis of grain sizes and growth directions using SEM imaging and machine learning algorithms, ii) analysis of thermal imaging for spattering, heating/cooling rates and meltpool size, iii) predicting thermal field, meltpool size, and growth directions via thermal gradients using 3D FE simulations, iv) predicting localized solidification using the Phase Field method. These computational process models and predictive models, once utilized by industry to optimize process parameters, have the ultimate potential to improve performance of
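
    The JMAK relation referenced above for recrystallization kinetics is simple enough to state directly; the sketch below evaluates the recrystallized volume fraction X(t) = 1 - exp(-k t^n) with illustrative, unfitted constants.

```python
import math

# Minimal sketch of the Johnson-Mehl-Avrami-Kolmogorov (JMAK) relation; the
# rate constant k and exponent n are illustrative, not fitted machining values.

def jmak_fraction(t, k=0.8, n=2.0):
    """X(t) = 1 - exp(-k * t^n): recrystallized volume fraction at time t."""
    return 1.0 - math.exp(-k * t ** n)

for t in (0.5, 1.0, 2.0):   # arbitrary time units
    print(t, round(jmak_fraction(t), 3))
```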

  20. Gradient boosting machines, a tutorial

    PubMed Central

    Natekin, Alexey; Knoll, Alois

    2013-01-01

    Gradient boosting machines are a family of powerful machine-learning techniques that have shown considerable success in a wide range of practical applications. They are highly customizable to the particular needs of the application, such as being learned with respect to different loss functions. This article gives a tutorial introduction to the methodology of gradient boosting methods, with a strong focus on the machine learning aspects of modeling. The theoretical material is complemented with descriptive examples and illustrations that cover all the stages of gradient boosting model design. Considerations on handling model complexity are discussed. Three practical examples of gradient boosting applications are presented and comprehensively analyzed. PMID:24409142
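
    The core idea the tutorial covers can be compressed into a few lines: for squared-error loss the negative gradient is just the residual, so each new shallow tree is fitted to the residuals of the current ensemble and added with a small learning rate. The sketch below is a generic illustration on synthetic data, not code from the article.

```python
# Bare-bones gradient boosting with squared-error loss on synthetic data.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(4)
X = rng.uniform(-3, 3, size=(400, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.2, size=400)

prediction = np.full_like(y, y.mean())      # start from the constant (mean) model
trees, learning_rate = [], 0.1
for _ in range(100):
    residual = y - prediction               # negative gradient of the squared error
    tree = DecisionTreeRegressor(max_depth=2).fit(X, residual)
    prediction += learning_rate * tree.predict(X)
    trees.append(tree)

print("training MSE:", round(float(np.mean((y - prediction) ** 2)), 4))
```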

  1. A Bayesian Machine Learning Model for Estimating Building Occupancy from Open Source Data

    DOE PAGES

    Stewart, Robert N.; Urban, Marie L.; Duchscherer, Samantha E.; Kaufman, Jason; Morton, April M.; Thakur, Gautam; Piburn, Jesse; Moehl, Jessica

    2016-01-01

    Understanding building occupancy is critical to a wide array of applications including natural hazards loss analysis, green building technologies, and population distribution modeling. Due to the expense of directly monitoring buildings, scientists rely in addition on a wide and disparate array of ancillary and open source information including subject matter expertise, survey data, and remote sensing information. These data are fused using data harmonization methods which refer to a loose collection of formal and informal techniques for fusing data together to create viable content for building occupancy estimation. In this paper, we add to the current state of the art by introducing the Population Data Tables (PDT), a Bayesian based informatics system for systematically arranging data and harmonization techniques into a consistent, transparent, knowledge learning framework that retains in the final estimation uncertainty emerging from data, expert judgment, and model parameterization. PDT probabilistically estimates ambient occupancy in units of people/1000 ft2 for over 50 building types at the national and sub-national level with the goal of providing global coverage. The challenge of global coverage led to the development of an interdisciplinary geospatial informatics system tool that provides the framework for capturing, storing, and managing open source data, handling subject matter expertise, carrying out Bayesian analytics as well as visualizing and exporting occupancy estimation results. We present the PDT project, situate the work within the larger community, and report on the progress of this multi-year project.
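
    The flavor of such a Bayesian estimate, retaining uncertainty from both expert judgment and data, can be illustrated with a simple conjugate update; the prior, observations, and variances below are hypothetical, and this is not the PDT system itself.

```python
# Toy conjugate (normal-normal) Bayesian update combining an expert-judgment
# prior on ambient occupancy with a few hypothetical survey observations; the
# posterior keeps an explicit uncertainty rather than a single point estimate.
import numpy as np

prior_mean, prior_var = 4.0, 2.0 ** 2          # expert judgment for one building type
obs = np.array([2.8, 3.5, 3.1, 4.2])           # hypothetical survey measurements
obs_var = 1.0 ** 2                             # assumed measurement variance

n = len(obs)
post_var = 1.0 / (1.0 / prior_var + n / obs_var)
post_mean = post_var * (prior_mean / prior_var + obs.sum() / obs_var)
print(f"posterior occupancy: {post_mean:.2f} +/- {np.sqrt(post_var):.2f} per 1000 ft^2")
```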

  2. A Bayesian Machine Learning Model for Estimating Building Occupancy from Open Source Data

    SciTech Connect

    Stewart, Robert N.; Urban, Marie L.; Duchscherer, Samantha E.; Kaufman, Jason; Morton, April M.; Thakur, Gautam; Piburn, Jesse; Moehl, Jessica

    2016-01-01

    Understanding building occupancy is critical to a wide array of applications including natural hazards loss analysis, green building technologies, and population distribution modeling. Due to the expense of directly monitoring buildings, scientists rely in addition on a wide and disparate array of ancillary and open source information including subject matter expertise, survey data, and remote sensing information. These data are fused using data harmonization methods which refer to a loose collection of formal and informal techniques for fusing data together to create viable content for building occupancy estimation. In this paper, we add to the current state of the art by introducing the Population Data Tables (PDT), a Bayesian based informatics system for systematically arranging data and harmonization techniques into a consistent, transparent, knowledge learning framework that retains in the final estimation uncertainty emerging from data, expert judgment, and model parameterization. PDT probabilistically estimates ambient occupancy in units of people/1000 ft2 for over 50 building types at the national and sub-national level with the goal of providing global coverage. The challenge of global coverage led to the development of an interdisciplinary geospatial informatics system tool that provides the framework for capturing, storing, and managing open source data, handling subject matter expertise, carrying out Bayesian analytics as well as visualizing and exporting occupancy estimation results. We present the PDT project, situate the work within the larger community, and report on the progress of this multi-year project.

  3. Estimating Biophysical Crop Properties by a Machine Learning Model Inversion using Hyperspectral Imagery of Different Resolution

    NASA Astrophysics Data System (ADS)

    Preidl, S.; Doktor, D.

    2013-12-01

    This study investigates how image resolution and phenology affect the quality of biophysical variable estimation for different crop types. Several hyperspectral at-sensor radiance images (400-2500 nm) at 1, 2, and 3 meter resolution were acquired by an AISA dual airborne system to estimate the leaf chlorophyll content and leaf area index (LAI) of different crop types. The study area spans a climatic gradient that ranges from the Magdeburg Börde (130 m a.s.l.) to the northeast of the Harz Mountains (450 m a.s.l.), Germany. The 35-kilometer-long flight strip was recorded on the same day at all three resolutions. Ground measurements were conducted simultaneously with the flight campaigns on selected crop fields. The SLC model was coupled with the atmospheric model MODTRAN4 to build up a look-up table (LUT) of simulated at-sensor radiances. To support a fast and more accurate inversion process, LUT spectra whose location in the PCA space (spanned by the first three principal components) is similar to that of the measured spectra were selected for model inversion. Subsequently, a support vector regression (SVR) was trained on the reduced LUT to perform a pixel-based inversion of the hyperspectral images. A multi-parameter sensitivity analysis was recently developed to define the most influential parameters for a reasonable model setup in the first place. This completes the development of an automated inversion process chain to estimate leaf and canopy biophysical properties. To achieve reasonable inversion results, each pixel should be radiatively independent from its surrounding pixels. Image texture is used to calculate the second-order statistical variance between pixel pairs, quantifying spatial heterogeneity throughout the spectral domain. The texture measurement can be employed as an uncertainty assessment of the biophysical variable estimation map. Results show that vegetated areas within the field represent spectrally homogeneous systems. In
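
    The LUT-plus-regression inversion can be sketched in a few lines, with a linear toy forward model standing in for the coupled SLC/MODTRAN4 simulator: spectra are simulated for many parameter values, an SVR is trained on the (spectrum, LAI) pairs, and the trained model is applied per pixel.

```python
# LUT + SVR inversion sketch; a linear toy forward model replaces SLC/MODTRAN4.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(5)
lai_lut = rng.uniform(0.5, 6.0, size=2000)                     # sampled canopy parameter (LAI)
bands = np.linspace(400, 2500, 50)
spectra_lut = (np.outer(lai_lut, np.exp(-bands / 1500.0))
               + rng.normal(scale=0.01, size=(2000, 50)))      # toy simulated at-sensor radiances

svr = make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=0.05))
svr.fit(spectra_lut, lai_lut)                                  # learn spectrum -> LAI

pixel_spectra = np.outer([1.2, 3.4], np.exp(-bands / 1500.0))  # spectra of two image pixels
print(svr.predict(pixel_spectra).round(2))                     # estimated LAI per pixel
```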

  4. Stellar Presentations (Abstract)

    NASA Astrophysics Data System (ADS)

    Young, D.

    2015-12-01

    (Abstract only) The AAVSO is in the process of expanding its education, outreach and speakers bureau program. PowerPoint presentations prepared for specific target audiences such as AAVSO members, educators, students, the general public, and Science Olympiad teams, coaches, event supervisors, and state directors will be available online for members to use. The presentations range from specific and general content relating to stellar evolution and variable stars to specific activities for a workshop environment. A presentation—even with a general topic—that works for high school students will not work for educators, Science Olympiad teams, or the general public. Each audience is unique and requires a different approach. The current environment necessitates presentations that are captivating for a younger generation that is embedded in a highly visual and sound-bite world of social media, Twitter and YouTube, and mobile devices. For educators, presentations and workshops for themselves and their students must support the Next Generation Science Standards (NGSS), the Common Core Content Standards, and the Science, Technology, Engineering, and Mathematics (STEM) initiative. Current best practices for developing relevant and engaging PowerPoint presentations to deliver information to a variety of targeted audiences will be presented along with several examples.

  5. Automated Supernova Discovery (Abstract)

    NASA Astrophysics Data System (ADS)

    Post, R. S.

    2015-12-01

    (Abstract only) We are developing a system of robotic telescopes for automatic recognition of supernovas as well as other transient events in collaboration with the Puckett Supernova Search Team. At the SAS2014 meeting, the discovery program, SNARE, was first described. Since then, it has been continuously improved to handle searches under a wide variety of atmospheric conditions. Currently, two telescopes are used to build a reference library while searching for PSN with a partial library. Since data are taken every cloud-free night, we must deal with varying atmospheric conditions and high background illumination from the Moon. Software is configured to identify a PSN and reshoot for verification, with options to change the run plan to acquire photometric or spectrographic data. The telescopes are 24-inch CDK24s with Alta U230 cameras, one in CA and one in NM. Images and run plans are sent between sites so the CA telescope can search while photometry is done in NM. Our goal is to find bright PSNs of magnitude 17.5 or less, which is the limit of our planned spectroscopy. We present results from our first automated PSN discoveries and plans for PSN data acquisition.

  6. A hybrid feature selection algorithm integrating an extreme learning machine for landslide susceptibility modeling of Mt. Woomyeon, South Korea

    NASA Astrophysics Data System (ADS)

    Vasu, Nikhil N.; Lee, Seung-Rae

    2016-06-01

    An ever-increasing trend of extreme rainfall events in South Korea owing to climate change is causing shallow landslides and debris flows in mountains that cover 70% of the total land area of the nation. These catastrophic, gravity-driven processes cost the government several billion KRW (South Korean Won) in losses in addition to fatalities every year. The most common type of landslide observed is the shallow landslide, which occurs at 1-3 m depth, and may mobilize into more catastrophic flow-type landslides. Hence, to predict potential landslide areas, susceptibility maps are developed in a geographical information system (GIS) environment utilizing available morphological, hydrological, geotechnical, and geological data. Landslide susceptibility models were developed using 163 landslide points and an equal number of nonlandslide points in Mt. Woomyeon, Seoul, and 23 landslide conditioning factors. However, because not all of the factors contribute to the determination of the spatial probability for landslide initiation, and a simple filter or wrapper-based approach is not efficient in identifying all of the relevant features, a feedback-loop-based hybrid algorithm was implemented in conjunction with a learning scheme called an extreme learning machine, which is based on a single-layer, feed-forward network. Validation of the constructed susceptibility model was conducted using a testing set of landslide inventory data through a prediction rate curve. The model selected 13 relevant conditioning factors out of the initial 23; and the resulting susceptibility map shows a success rate of 85% and a prediction rate of 89.45%, indicating a good performance, in contrast to the low success and prediction rate of 69.19% and 56.19%, respectively, as obtained using a wrapper technique.
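
    The learning scheme itself, an extreme learning machine, is compact enough to sketch: a single hidden layer with randomly fixed input weights, and output weights obtained in one step by least squares. The conditioning factors and labels below are synthetic placeholders, not the Mt. Woomyeon inventory.

```python
# Bare-bones extreme learning machine on synthetic landslide data: random,
# untrained hidden-layer weights and a closed-form least-squares output layer.
import numpy as np

rng = np.random.default_rng(6)
X = rng.normal(size=(326, 13))                      # 13 selected conditioning factors (toy)
y = (X[:, 0] + 0.5 * X[:, 5] > 0).astype(float)     # 1 = landslide, 0 = non-landslide (toy)

n_hidden = 40
W = rng.normal(size=(13, n_hidden))                 # random input weights (never trained)
b = rng.normal(size=n_hidden)

def hidden(X):
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))       # sigmoid hidden activations

beta, *_ = np.linalg.lstsq(hidden(X), y, rcond=None)   # output weights in closed form
susceptibility = hidden(X) @ beta                   # susceptibility scores
print(round(float(np.mean((susceptibility > 0.5) == y)), 3))   # training success rate
```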

  7. Operant conditioning of a multiple degree-of-freedom brain-machine interface in a primate model of amputation.

    PubMed

    Balasubramanian, Karthikeyan; Southerland, Joshua; Vaidya, Mukta; Qian, Kai; Eleryan, Ahmed; Fagg, Andrew H; Sluzky, Marc; Oweiss, Karim; Hatsopoulos, Nicholas

    2013-01-01

    Operant conditioning with biofeedback has been shown to be an effective method to modify neural activity to generate goal-directed actions in a brain-machine interface. It is particularly useful when neural activity cannot be mathematically mapped to motor actions of the actual body such as in the case of amputation. Here, we implement an operant conditioning approach with visual feedback in which an amputated monkey is trained to control a multiple degree-of-freedom robot to perform a reach-to-grasp behavior. A key innovation is that each controlled dimension represents a behaviorally relevant synergy among a set of joint degrees-of-freedom. We present a number of behavioral metrics by which to assess improvements in BMI control with exposure to the system. The use of non-human primates with chronic amputation is arguably the most clinically-relevant model of human amputation that could have direct implications for developing a neural prosthesis to treat humans with missing upper limbs.

  8. A Support Vector Machine model for the prediction of proteotypic peptides for accurate mass and time proteomics

    SciTech Connect

    Webb-Robertson, Bobbie-Jo M.; Cannon, William R.; Oehmen, Christopher S.; Shah, Anuj R.; Gurumoorthi, Vidhya; Lipton, Mary S.; Waters, Katrina M.

    2008-07-01

    Motivation: The standard approach to identifying peptides based on accurate mass and elution time (AMT) compares these profiles obtained from a high resolution mass spectrometer to a database of peptides previously identified from tandem mass spectrometry (MS/MS) studies. It would be advantageous, with respect to both accuracy and cost, to only search for those peptides that are detectable by MS (proteotypic). Results: We present a Support Vector Machine (SVM) model that uses a simple descriptor space based on 35 properties of amino acid content, charge, hydrophilicity, and polarity for the quantitative prediction of proteotypic peptides. Using three independently derived AMT databases (Shewanella oneidensis, Salmonella typhimurium, Yersinia pestis) for training and validation within and across species, the SVM resulted in an average accuracy measure of ~0.8 with a standard deviation of less than 0.025. Furthermore, we demonstrate that these results are achievable with a small set of 12 variables and can achieve high proteome coverage. Availability: http://omics.pnl.gov/software/STEPP.php

  9. Combining Metabolite-Based Pharmacophores with Bayesian Machine Learning Models for Mycobacterium tuberculosis Drug Discovery.

    PubMed

    Ekins, Sean; Madrid, Peter B; Sarker, Malabika; Li, Shao-Gang; Mittal, Nisha; Kumar, Pradeep; Wang, Xin; Stratton, Thomas P; Zimmerman, Matthew; Talcott, Carolyn; Bourbon, Pauline; Travers, Mike; Yadav, Maneesh; Freundlich, Joel S

    2015-01-01

    Integrated computational approaches for Mycobacterium tuberculosis (Mtb) are useful to identify new molecules that could lead to future tuberculosis (TB) drugs. Our approach uses information derived from the TBCyc pathway and genome database, the Collaborative Drug Discovery TB database combined with 3D pharmacophores and dual event Bayesian models of whole-cell activity and lack of cytotoxicity. We have prioritized a large number of molecules that may act as mimics of substrates and metabolites in the TB metabolome. We computationally searched over 200,000 commercial molecules using 66 pharmacophores based on substrates and metabolites from Mtb and further filtering with Bayesian models. We ultimately tested 110 compounds in vitro that resulted in two compounds of interest, BAS 04912643 and BAS 00623753 (MIC of 2.5 and 5 μg/mL, respectively). These molecules were used as a starting point for hit-to-lead optimization. The most promising class proved to be the quinoxaline di-N-oxides, evidenced by transcriptional profiling to induce mRNA level perturbations most closely resembling known protonophores. One of these, SRI58 exhibited an MIC = 1.25 μg/mL versus Mtb and a CC50 in Vero cells of >40 μg/mL, while featuring fair Caco-2 A-B permeability (2.3 x 10-6 cm/s), kinetic solubility (125 μM at pH 7.4 in PBS) and mouse metabolic stability (63.6% remaining after 1 h incubation with mouse liver microsomes). Despite demonstration of how a combined bioinformatics/cheminformatics approach afforded a small molecule with promising in vitro profiles, we found that SRI58 did not exhibit quantifiable blood levels in mice.

  10. Combining Metabolite-Based Pharmacophores with Bayesian Machine Learning Models for Mycobacterium tuberculosis Drug Discovery

    PubMed Central

    Sarker, Malabika; Li, Shao-Gang; Mittal, Nisha; Kumar, Pradeep; Wang, Xin; Stratton, Thomas P.; Zimmerman, Matthew; Talcott, Carolyn; Bourbon, Pauline; Travers, Mike; Yadav, Maneesh

    2015-01-01

    Integrated computational approaches for Mycobacterium tuberculosis (Mtb) are useful to identify new molecules that could lead to future tuberculosis (TB) drugs. Our approach uses information derived from the TBCyc pathway and genome database, the Collaborative Drug Discovery TB database combined with 3D pharmacophores and dual event Bayesian models of whole-cell activity and lack of cytotoxicity. We have prioritized a large number of molecules that may act as mimics of substrates and metabolites in the TB metabolome. We computationally searched over 200,000 commercial molecules using 66 pharmacophores based on substrates and metabolites from Mtb and further filtering with Bayesian models. We ultimately tested 110 compounds in vitro that resulted in two compounds of interest, BAS 04912643 and BAS 00623753 (MIC of 2.5 and 5 μg/mL, respectively). These molecules were used as a starting point for hit-to-lead optimization. The most promising class proved to be the quinoxaline di-N-oxides, evidenced by transcriptional profiling to induce mRNA level perturbations most closely resembling known protonophores. One of these, SRI58 exhibited an MIC = 1.25 μg/mL versus Mtb and a CC50 in Vero cells of >40 μg/mL, while featuring fair Caco-2 A-B permeability (2.3 x 10−6 cm/s), kinetic solubility (125 μM at pH 7.4 in PBS) and mouse metabolic stability (63.6% remaining after 1 h incubation with mouse liver microsomes). Despite demonstration of how a combined bioinformatics/cheminformatics approach afforded a small molecule with promising in vitro profiles, we found that SRI58 did not exhibit quantifiable blood levels in mice. PMID:26517557

  11. A grounded theory of abstraction in artificial intelligence.

    PubMed Central

    Zucker, Jean-Daniel

    2003-01-01

    In artificial intelligence, abstraction is commonly used to account for the use of various levels of details in a given representation language or the ability to change from one level to another while preserving useful properties. Abstraction has been mainly studied in problem solving, theorem proving, knowledge representation (in particular for spatial and temporal reasoning) and machine learning. In such contexts, abstraction is defined as a mapping between formalisms that reduces the computational complexity of the task at stake. By analysing the notion of abstraction from an information quantity point of view, we pinpoint the differences and the complementary role of reformulation and abstraction in any representation change. We contribute to extending the existing semantic theories of abstraction to be grounded on perception, where the notion of information quantity is easier to characterize formally. In the author's view, abstraction is best represented using abstraction operators, as they provide semantics for classifying different abstractions and support the automation of representation changes. The usefulness of a grounded theory of abstraction in the cartography domain is illustrated. Finally, the importance of explicitly representing abstraction for designing more autonomous and adaptive systems is discussed. PMID:12903672

  12. Abstraction Planning in Real Time

    NASA Technical Reports Server (NTRS)

    Washington, R.

    1994-01-01

    When a planning agent works in a complex, real-world domain, it is unable to plan for and store all possible contingencies and problem situations ahead of time. This thesis presents a method for planning at run time that incrementally builds up plans at multiple levels of abstraction. The plans are continually updated by information from the world, allowing the planner to adjust its plan to a changing world during the planning process. All the information is represented over intervals of time, allowing the planner to reason about durations, deadlines, and delays within its plan. In addition to the method, the thesis presents a formal model of the planning process and uses the model to investigate planning strategies.

  13. Perspex Machine X: software development

    NASA Astrophysics Data System (ADS)

    Noble, Sam; Thomas, Benjamin A.; Anderson, James A. D. W.

    2007-01-01

    The Perspex Machine arose from the unification of computation with geometry. We now report significant redevelopment of both a partial C compiler that generates perspex programs and of a Graphical User Interface (GUI). The compiler is constructed with standard compiler-generator tools and produces both an explicit parse tree for C and an Abstract Syntax Tree (AST) that is better suited to code generation. The GUI uses a hash table and a simpler software architecture to achieve an order of magnitude speed up in processing and, consequently, an order of magnitude increase in the number of perspexes that can be manipulated in real time (now 6,000). Two perspex-machine simulators are provided, one using trans-floating-point arithmetic and the other using transrational arithmetic. All of the software described here is available on the world wide web. The compiler generates code in the neural model of the perspex. At each branch point it uses a jumper to return control to the main fibre. This has the effect of pruning out an exponentially increasing number of branching fibres, thereby greatly increasing the efficiency of perspex programs as measured by the number of neurons required to implement an algorithm. The jumpers are placed at unit distance from the main fibre and form a geometrical structure analogous to a myelin sheath in a biological neuron. Both the perspex jumper-sheath and the biological myelin-sheath share the computational function of preventing cross-over of signals to neurons that lie close to an axon. This is an example of convergence driven by similar geometrical and computational constraints in perspex and biological neurons.

  14. An extreme learning machine model for the simulation of monthly mean streamflow water level in eastern Queensland.

    PubMed

    Deo, Ravinesh C; Şahin, Mehmet

    2016-02-01

    A predictive model for streamflow has practical implications for understanding the drought hydrology, environmental monitoring and agriculture, ecosystems and resource management. In this study, the state-of-the-art extreme learning machine (ELM) model was utilized to simulate the mean streamflow water level (Q WL) for three hydrological sites in eastern Queensland (Gowrie Creek, Albert, and Mary River). The performance of the ELM model was benchmarked with the artificial neural network (ANN) model. The ELM model was a fast computational method using single-layer feedforward neural networks and randomly determined hidden neurons that learns the historical patterns embedded in the input variables. A set of nine predictors with the month (to consider the seasonality of Q WL); rainfall; Southern Oscillation Index; Pacific Decadal Oscillation Index; ENSO Modoki Index; Indian Ocean Dipole Index; and Nino 3.0, Nino 3.4, and Nino 4.0 sea surface temperatures (SSTs) were utilized. A selection of variables was performed using cross correlation with Q WL, yielding the best inputs defined by (month; P; Nino 3.0 SST; Nino 4.0 SST; Southern Oscillation Index (SOI); ENSO Modoki Index (EMI)) for Gowrie Creek, (month; P; SOI; Pacific Decadal Oscillation (PDO); Indian Ocean Dipole (IOD); EMI) for Albert River, and by (month; P; Nino 3.4 SST; Nino 4.0 SST; SOI; EMI) for the Mary River site. A three-layer neuronal structure trialed with activation equations defined by sigmoid, logarithmic, tangent sigmoid, sine, hardlim, triangular, and radial basis was utilized, resulting in an optimum ELM model with hard-limit function and architecture 6-106-1 (Gowrie Creek), 6-74-1 (Albert River), and 6-146-1 (Mary River). The alternative ELM and ANN models with two inputs (month and rainfall) and the ELM model with all nine inputs were also developed. The performance was evaluated using the mean absolute error (MAE), coefficient of determination (r (2)), Willmott's Index (d), peak deviation (P dv), and Nash

  16. Abstracts and reviews.

    PubMed

    Liebmann, G H; Wollman, L; Woltmann, A G

    1966-09-01

    Abstract Eric Berne, M.D.: Games People Play. Grove Press, New York, 1964. 192 pages. Price $5.00. Reviewed by Hugo G. Beigel Finkle, Alex M., Ph.D., M.D. and Prian, Dimitry F. Sexual Potency in Elderly Men before and after Prostatectomy. J.A.M.A., 196: 2, April, 1966. Reviewed by H. George Liebman Calvin C. Hernton: Sex and Racism In America. Grove Press, Inc. Black Cat Edition No. 113 (Paperback), 1966, 180 pp. Price $.95. Reviewed by Gus Woltmann Hans Lehfeldt, M.D., Ernest W. Kulka, M.D., H. George Liebman, M.D.: Comparative Study of Uterine Contraceptive Devices. Obstetrics and Gynecology, 26: 5, 1965, pp. 679-688. Lawrence Lipton. The Erotic Revolution. Sherbourne Press, Los Angeles, 1965. 322 pp., Price $7.50. Masters, William H., M.D. and Johnson, Virginia E. Human Sexual Response. Boston: Little, Brown and Co., 1966. 366 pages. Price $10.00. Reviewed by Hans Lehfeldt Douglas P. Murphy, M.D. and Editha F. Torrano, M.D. Male Fertility in 3620 Childless Couples. Fertility and Sterility, 16: 3, May-June, 1965. Reviewed by Leo Wollman, M.D. Edwin M. Schur, Editor: The Family and the Sexual Revolution, Indiana University Press, Bloomington, Indiana, 1964. 427 pages. Weldon, Virginia F., M.D., Blizzard, Robert M., M.D., and Migeon, Claude, M.D. Newborn Girls Misdiagnosed as Bilaterally Cryptorchid Males. The New England Journal of Medicine, April 14, 1966. Reviewed by H. George Liebman.

  17. How to Make a Good Animation: A Grounded Cognition Model of How Visual Representation Design Affects the Construction of Abstract Physics Knowledge

    ERIC Educational Resources Information Center

    Chen, Zhongzhou; Gladding, Gary

    2014-01-01

    Visual representations play a critical role in teaching physics. However, since we do not have a satisfactory understanding of how visual perception impacts the construction of abstract knowledge, most visual representations used in instructions are either created based on existing conventions or designed according to the instructor's…

  18. Electrical machine

    DOEpatents

    De Bock, Hendrik Pieter Jacobus; Alexander, James Pellegrino; El-Refaie, Ayman Mohamed Fawzi; Gerstler, William Dwight; Shah, Manoj Ramprasad; Shen, Xiaochun

    2016-06-21

    An apparatus, such as an electrical machine, is provided. The apparatus can include a rotor defining a rotor bore and a conduit disposed in and extending axially along the rotor bore. The conduit can have an annular conduit body defining a plurality of orifices disposed axially along the conduit and extending through the conduit body. The rotor can have an inner wall that at least partially defines the rotor bore. The orifices can extend through the conduit body along respective orifice directions, and the rotor and conduit can be configured to provide a line of sight along the orifice direction from the respective orifices to the inner wall.

  19. TEMPO machine

    SciTech Connect

    Rohwein, G.J.; Lancaster, K.T.; Lawson, R.N.

    1986-06-01

    TEMPO is a transformer-powered megavolt pulse generator with an output pulse of 100 ns duration. The machine was designed for burst mode operation at pulse repetition rates up to 10 Hz with minimum pulse-to-pulse voltage variations. To meet the requirement for pulse duration and a 20-Ω output impedance within reasonable size constraints, the pulse forming transmission line was designed as two parallel water-insulated, strip-type Blumleins. Stray capacitance and electric fields along the edges of the line elements were controlled by lining the tank with plastic sheet.

  20. Predictive models for anti-tubercular molecules using machine learning on high-throughput biological screening datasets

    PubMed Central

    2011-01-01

    Background Tuberculosis is a contagious disease caused by Mycobacterium tuberculosis (Mtb); it affects more than two billion people around the globe and is one of the major causes of morbidity and mortality in the developing world. Recent reports suggest that Mtb has been developing resistance to the widely used anti-tubercular drugs, resulting in the emergence and spread of multidrug-resistant (MDR) and extensively drug-resistant (XDR) strains throughout the world. In view of this global epidemic, there is an urgent need for fast and efficient lead identification methodologies. Target-based screening of large compound libraries has been widely used as a fast and efficient approach for lead identification, but it requires knowledge of the target structure. Whole-organism screens, on the other hand, are target-agnostic and are now widely employed as an alternative for lead identification, but they are limited by the time and cost involved in running the screens for large compound libraries. This could possibly be circumvented by using computational approaches to prioritize molecules for screening programmes. Results We utilized physicochemical properties of compounds to train four supervised classifiers (Naïve Bayes, Random Forest, J48 and SMO) on three publicly available bioassay screens of Mtb inhibitors and validated the robustness of the predictive models using various statistical measures. Conclusions This study is a comprehensive analysis of high-throughput bioassay data for anti-tubercular activity and the application of machine learning approaches to create target-agnostic predictive models for anti-tubercular agents. PMID:22099929
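
    The following is a minimal Python sketch of the kind of target-agnostic model described above (not the study's pipeline): a Random Forest trained on physicochemical descriptors to label compounds as active or inactive. The descriptor matrix and labels here are synthetic placeholders; the study used publicly available high-throughput bioassay screens.

        # Minimal sketch: supervised classifier over physicochemical descriptors.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(0)
        n_compounds, n_descriptors = 2000, 8      # e.g. MW, logP, HBD, HBA, TPSA, ... (placeholders)
        X = rng.normal(size=(n_compounds, n_descriptors))
        # Synthetic activity labels standing in for bioassay outcomes.
        y = (X[:, 0] - 0.5 * X[:, 2] + rng.normal(scale=0.5, size=n_compounds) > 0).astype(int)

        X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
        model = RandomForestClassifier(n_estimators=200, random_state=0)
        model.fit(X_train, y_train)

        # One of the "various statistical measures" that could validate robustness.
        print("ROC AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))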