Science.gov

Sample records for abstract machine model

  1. Programming the Navier-Stokes computer: An abstract machine model and a visual editor

    NASA Technical Reports Server (NTRS)

    Middleton, David; Crockett, Tom; Tomboulian, Sherry

    1988-01-01

    The Navier-Stokes computer is a parallel computer designed to solve Computational Fluid Dynamics problems. Each processor contains several floating point units which can be configured under program control to implement a vector pipeline with several inputs and outputs. Since the development of an effective compiler for this computer appears to be very difficult, machine-level programming seems necessary, and support tools for this process have been studied. These support tools are organized into a graphical program editor. A programming process is described by which appropriate computations may be efficiently implemented on the Navier-Stokes computer. The graphical editor would support this programming process, verifying various programmer choices for correctness and deducing values such as pipeline delays and network configurations. Step-by-step details are provided and demonstrated with two example programs.

  2. Automatic Review of Abstract State Machines by Meta Property Verification

    NASA Technical Reports Server (NTRS)

    Arcaini, Paolo; Gargantini, Angelo; Riccobene, Elvinia

    2010-01-01

    Model review is a validation technique aimed at determining whether a model is of sufficient quality; it allows defects to be identified early in system development, reducing the cost of fixing them. In this paper we propose a technique to perform automatic review of Abstract State Machine (ASM) formal specifications. We first identify a family of typical vulnerabilities and defects a developer can introduce during modeling with ASMs, and we express such faults as violations of meta-properties that guarantee certain quality attributes of the specification. These meta-properties are then mapped to temporal logic formulas and model checked for their violation. As a proof of concept, we also report the results of applying this ASM review process to several specifications.
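
    A minimal sketch of the idea behind this entry: encode one review meta-property (here, "no rule is dead", i.e., every rule fires in some reachable state) and check it by exhaustive exploration of a toy ASM-like model. The model, the rule names, and this particular meta-property are illustrative assumptions; the authors' actual technique maps meta-properties to temporal logic and runs a model checker.

        # Toy "review" of an ASM-like model: report rules that never fire in
        # any reachable state (a dead rule usually signals a defect).
        # Hypothetical model, not the authors' tool.
        from collections import deque

        # Each rule is (guard, update) over states represented as frozensets of facts.
        rules = {
            "open_valve":  (lambda s: "closed" in s, lambda s: (s - {"closed"}) | {"open"}),
            "close_valve": (lambda s: "open" in s,   lambda s: (s - {"open"}) | {"closed"}),
            "dead_rule":   (lambda s: "broken" in s, lambda s: s),  # never enabled
        }

        def review(initial):
            fired, seen, frontier = set(), {initial}, deque([initial])
            while frontier:
                state = frontier.popleft()
                for name, (guard, update) in rules.items():
                    if guard(state):
                        fired.add(name)
                        nxt = update(state)
                        if nxt not in seen:
                            seen.add(nxt)
                            frontier.append(nxt)
            return [name for name in rules if name not in fired]  # violations

        print("dead rules:", review(frozenset({"closed"})))  # -> ['dead_rule']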

  3. Teaching for Abstraction: A Model

    ERIC Educational Resources Information Center

    White, Paul; Mitchelmore, Michael C.

    2010-01-01

    This article outlines a theoretical model for teaching elementary mathematical concepts that we have developed over the past 10 years. We begin with general ideas about the abstraction process and differentiate between "abstract-general" and "abstract-apart" concepts. A 4-phase model of teaching, called Teaching for Abstraction, is then proposed…

  4. Multimodeling and Model Abstraction

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The multiplicity of models of the same process or phenomenon is commonplace in environmental modeling. The last 10 years have brought marked interest in making use of this variety of conceptual approaches instead of attempting to find the best model or using a single preferred model. Two systematic approa...

  5. Formal modeling of virtual machines

    NASA Technical Reports Server (NTRS)

    Cremers, A. B.; Hibbard, T. N.

    1978-01-01

    Systematic software design can be based on the development of a 'hierarchy of virtual machines', each representing a 'level of abstraction' of the design process. The reported investigation presents the concept of 'data space' as a formal model for virtual machines. The presented model of a data space combines the notions of data type and mathematical machine to express the close interaction between data and control structures which takes place in a virtual machine. One of the main objectives of the investigation is to show that control-independent data type implementation is only of limited usefulness as an isolated tool of program development, and that the representation of data is generally dictated by the control context of a virtual machine. As a second objective, a better understanding is to be developed of virtual machine state structures than was heretofore provided by the view of the state space as a Cartesian product.

  6. Modelling Metamorphism by Abstract Interpretation

    NASA Astrophysics Data System (ADS)

    Dalla Preda, Mila; Giacobazzi, Roberto; Debray, Saumya; Coogan, Kevin; Townsend, Gregg M.

    Metamorphic malware apply semantics-preserving transformations to their own code in order to foil detection systems based on signature matching. In this paper we consider the problem of automatically extracting metamorphic signatures from such malware. We introduce a semantics for self-modifying code, called phase semantics, and prove its correctness by showing that it is an abstract interpretation of the standard trace semantics. Phase semantics precisely models the metamorphic code behavior by providing a set of traces of programs which correspond to the possible evolutions of the metamorphic code during execution. We show that metamorphic signatures can be automatically extracted by abstract interpretation of the phase semantics, and that regular metamorphism can be modelled as a finite-state-automata abstraction of the phase semantics.

  7. Towards Compatible and Interderivable Semantic Specifications for the Scheme Programming Language, Part II: Reduction Semantics and Abstract Machines

    NASA Astrophysics Data System (ADS)

    Biernacka, Małgorzata; Danvy, Olivier

    We present a context-sensitive reduction semantics for a lambda-calculus with explicit substitutions and we show that the functional implementation of this small-step semantics mechanically corresponds to that of the abstract machine for Core Scheme presented by Clinger at PLDI’98, including first-class continuations. Starting from this reduction semantics, (1) we refocus it into a small-step abstract machine; (2) we fuse the transition function of this abstract machine with its driver loop, obtaining a big-step abstract machine which is staged; (3) we compress its corridor transitions, obtaining an eval/continue abstract machine; and (4) we unfold its ground closures, which yields an abstract machine that essentially coincides with Clinger’s machine. This lambda-calculus with explicit substitutions therefore aptly accounts for Core Scheme, including Clinger’s permutations and unpermutations.
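
    For readers unfamiliar with the eval/continue machines mentioned above, the following is a minimal illustrative sketch of such a machine for the pure lambda calculus: a CEK-style machine with closures and an explicit continuation. It is a toy under stated assumptions, not Clinger's Core Scheme machine or the authors' derived machines.

        # Toy eval/continue abstract machine (CEK-style) for the pure lambda
        # calculus. Terms: ('var', name) | ('lam', name, body) | ('app', fun, arg).
        def run(term):
            state = ("eval", term, {}, ("halt",))      # control, env, continuation
            while True:
                if state[0] == "eval":
                    _, t, env, k = state
                    if t[0] == "var":
                        state = ("continue", k, env[t[1]])
                    elif t[0] == "lam":
                        state = ("continue", k, ("clo", t, env))
                    else:  # application: evaluate the operator first
                        state = ("eval", t[1], env, ("arg", t[2], env, k))
                else:  # continue: feed a value to the current continuation
                    _, k, val = state
                    if k[0] == "halt":
                        return val
                    if k[0] == "arg":              # operator done; evaluate operand
                        _, arg, env, k2 = k
                        state = ("eval", arg, env, ("fun", val, k2))
                    else:                          # ('fun', closure, k2): apply
                        _, (_, (_, x, body), cenv), k2 = k
                        state = ("eval", body, {**cenv, x: val}, k2)

        identity = ("lam", "x", ("var", "x"))
        print(run(("app", identity, ("lam", "y", ("var", "y")))))  # closure for \y.y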

  8. Model-based machine learning

    PubMed Central

    Bishop, Christopher M.

    2013-01-01

    Several decades of research in the field of machine learning have resulted in a multitude of different algorithms for solving a broad range of problems. To tackle a new application, a researcher typically tries to map their problem onto one of these existing methods, often influenced by their familiarity with specific algorithms and by the availability of corresponding software implementations. In this study, we describe an alternative methodology for applying machine learning, in which a bespoke solution is formulated for each new application. The solution is expressed through a compact modelling language, and the corresponding custom machine learning code is then generated automatically. This model-based approach offers several major advantages, including the opportunity to create highly tailored models for specific scenarios, as well as rapid prototyping and comparison of a range of alternative models. Furthermore, newcomers to the field of machine learning do not have to learn about the huge range of traditional methods, but instead can focus their attention on understanding a single modelling environment. In this study, we show how probabilistic graphical models, coupled with efficient inference algorithms, provide a very flexible foundation for model-based machine learning, and we outline a large-scale commercial application of this framework involving tens of millions of users. We also describe the concept of probabilistic programming as a powerful software environment for model-based machine learning, and we discuss a specific probabilistic programming language called Infer.NET, which has been widely used in practical applications. PMID:23277612
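
    A toy sketch of the workflow this abstract describes: write the generative model once and let a generic inference routine (exact enumeration here) produce the posterior. Infer.NET itself is a .NET library; the two-player skill model below and its win probabilities are invented for illustration only.

        # Model-based ML in miniature: specify a tiny generative model, then get
        # inference generically (exact enumeration). Hypothetical skill model.
        from itertools import product

        def joint(skill_a, skill_b, a_beats_b):
            """Assumed model: the stronger player wins with probability 0.8."""
            p_win = 0.8 if skill_a > skill_b else (0.5 if skill_a == skill_b else 0.2)
            prior = 1.0 / 9.0                  # uniform prior over skills {0,1,2}^2
            return prior * (p_win if a_beats_b else 1 - p_win)

        # Condition on the observation "A beat B" and normalize.
        posterior = {(sa, sb): joint(sa, sb, a_beats_b=True)
                     for sa, sb in product(range(3), range(3))}
        z = sum(posterior.values())
        posterior = {k: v / z for k, v in posterior.items()}

        p_a_stronger = sum(p for (sa, sb), p in posterior.items() if sa > sb)
        print(f"P(skill_A > skill_B | A won) = {p_a_stronger:.3f}")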

  9. Model-based machine learning.

    PubMed

    Bishop, Christopher M

    2013-02-13

    Several decades of research in the field of machine learning have resulted in a multitude of different algorithms for solving a broad range of problems. To tackle a new application, a researcher typically tries to map their problem onto one of these existing methods, often influenced by their familiarity with specific algorithms and by the availability of corresponding software implementations. In this study, we describe an alternative methodology for applying machine learning, in which a bespoke solution is formulated for each new application. The solution is expressed through a compact modelling language, and the corresponding custom machine learning code is then generated automatically. This model-based approach offers several major advantages, including the opportunity to create highly tailored models for specific scenarios, as well as rapid prototyping and comparison of a range of alternative models. Furthermore, newcomers to the field of machine learning do not have to learn about the huge range of traditional methods, but instead can focus their attention on understanding a single modelling environment. In this study, we show how probabilistic graphical models, coupled with efficient inference algorithms, provide a very flexible foundation for model-based machine learning, and we outline a large-scale commercial application of this framework involving tens of millions of users. We also describe the concept of probabilistic programming as a powerful software environment for model-based machine learning, and we discuss a specific probabilistic programming language called Infer.NET, which has been widely used in practical applications. PMID:23277612

  10. Integrating model abstraction into monitoring strategies

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This study was designed and performed to investigate the opportunities and benefits of integrating model abstraction techniques into monitoring strategies. The study focused on future applications of modeling to contingency planning and management of potential and actual contaminant release sites wi...

  11. Abstracts

    NASA Astrophysics Data System (ADS)

    2012-09-01

    - Measuring cosmological parameters with GRBs: status and perspectives
    - New interpretation of the Amati relation
    - The SED Machine - a dedicated transient spectrograph
    - PTF10iue - evidence for an internal engine in a unique Type Ic SN
    - Direct evidence for the collapsar model of long gamma-ray bursts
    - On pair instability supernovae and gamma-ray bursts
    - Pan-STARRS1 observations of ultraluminous SNe
    - The influence of rotation on the critical neutrino luminosity in core-collapse supernovae
    - General relativistic magnetospheres of slowly rotating and oscillating neutron stars
    - Host galaxies of short GRBs
    - GRB 100418A: a bridge between GRB-associated hypernovae and SNe
    - Two super-luminous SNe at z ~ 1.5 from the SNLS
    - Prospects for very-high-energy gamma-ray bursts with the Cherenkov Telescope Array
    - The dynamics and radiation of relativistic flows from massive stars
    - The search for light echoes from the supernova explosion of 1181 AD
    - The proto-magnetar model for gamma-ray bursts
    - Stellar black holes at the dawn of the universe
    - MAXI J0158-744: the discovery of a supersoft X-ray transient
    - Wide-band spectra of magnetar burst emission
    - Dust formation and evolution in envelope-stripped core-collapse supernovae
    - The host galaxies of dark gamma-ray bursts
    - Keck observations of 150 GRB host galaxies
    - Search for properties of GRBs at large redshift
    - The early emission from SNe
    - Spectral properties of SN shock breakout
    - MAXI observation of GRBs and short X-ray transients
    - A three-dimensional view of SN 1987A using light echo spectroscopy
    - X-ray study of the southern extension of the SNR Puppis A
    - All-sky survey of short X-ray transients by MAXI GSC
    - Development of the CALET gamma-ray burst monitor (CGBM)

  12. SATURATED ZONE FLOW AND TRANSPORT MODEL ABSTRACTION

    SciTech Connect

    B.W. ARNOLD

    2004-10-27

    The purpose of the saturated zone (SZ) flow and transport model abstraction task is to provide radionuclide-transport simulation results for use in the total system performance assessment (TSPA) for license application (LA) calculations. This task includes assessment of uncertainty in parameters that pertain to both groundwater flow and radionuclide transport in the models used for this purpose. This model report documents the following: (1) The SZ transport abstraction model, which consists of a set of radionuclide breakthrough curves at the accessible environment for use in the TSPA-LA simulations of radionuclide releases into the biosphere. These radionuclide breakthrough curves contain information on radionuclide-transport times through the SZ. (2) The SZ one-dimensional (1-D) transport model, which is incorporated in the TSPA-LA model to simulate the transport, decay, and ingrowth of radionuclide decay chains in the SZ. (3) The analysis of uncertainty in groundwater-flow and radionuclide-transport input parameters for the SZ transport abstraction model and the SZ 1-D transport model. (4) The analysis of the background concentration of alpha-emitting species in the groundwater of the SZ.
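
    For readers unfamiliar with breakthrough curves, the sketch below computes an illustrative 1-D advection-dispersion breakthrough curve (an Ogata-Banks-type approximation with linear retardation). All parameter values are placeholders, not data from the Yucca Mountain models; the report's actual curves come from full process-model simulations.

        # Illustrative 1-D breakthrough curve of the kind such an abstraction
        # delivers to TSPA. Parameters are invented placeholders.
        import numpy as np
        from scipy.special import erfc

        def breakthrough(t, x=5000.0, v=10.0, D=2000.0, R=1.0):
            """Relative concentration C/C0 at distance x (m) and time t (yr);
            v = pore velocity (m/yr), D = dispersion (m^2/yr), R = retardation."""
            t = np.asarray(t, dtype=float)
            return 0.5 * erfc((R * x - v * t) / (2.0 * np.sqrt(D * R * t)))

        years = np.linspace(1.0, 2000.0, 5)
        for t, c in zip(years, breakthrough(years)):
            print(f"t = {t:7.1f} yr   C/C0 = {c:.4f}")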

  13. Directory of Energy Information Administration Model Abstracts

    SciTech Connect

    Not Available

    1986-07-16

    This directory partially fulfills the requirements of Section 8c of the documentation order, which states in part: "The Office of Statistical Standards will annually publish an EIA document based on the collected abstracts and the appendices." This report contains brief statements about each model's title, acronym, purpose, and status, followed by more detailed information on characteristics, uses, and requirements. Sources for additional information are identified. All models active through March 1985 are included. The main body of this directory is an alphabetical list of all active EIA models. Appendix A identifies major EIA modeling systems and the models within these systems, and Appendix B identifies active EIA models by type (basic, auxiliary, and developing). EIA also leases models developed by proprietary software vendors. Documentation for these proprietary models is the responsibility of the companies from which they are leased. EIA has recently leased models from Chase Econometrics, Inc., Data Resources, Inc. (DRI), the Oak Ridge National Laboratory (ORNL), and Wharton Econometric Forecasting Associates (WEFA). Leased models are not abstracted here. The directory is intended for the use of energy and energy-policy analysts in the public and private sectors.

  14. Directory of Energy Information Administration model abstracts

    SciTech Connect

    Not Available

    1987-08-11

    This report contains brief statements from the model managers about each model's title, acronym, purpose, and status, followed by more detailed information on characteristics, uses, and requirements. Sources for additional information are identified. All models "active" through March 1987 are included. The main body of this directory is an alphabetical list of all active EIA models. Appendix A identifies major EIA modeling systems and the models within these systems, and Appendix B identifies active EIA models by type (basic, auxiliary, and developing). A basic model is one designated by the EIA Administrator as being sufficiently important to require sustained support and public scrutiny. An auxiliary model is one designated by the EIA Administrator as being used only occasionally in analyses, and therefore requiring minimal levels of documentation. A developing model is one designated by the EIA Administrator as being under development and yet of sufficient interest to require a basic level of documentation at a future date. EIA also leases models developed by proprietary software vendors. Documentation for these "proprietary" models is the responsibility of the companies from which they are leased. EIA has recently leased models from Chase Econometrics, Inc., Data Resources, Inc. (DRI), the Oak Ridge National Laboratory (ORNL), and Wharton Econometric Forecasting Associates (WEFA). Leased models are not abstracted here. The directory is intended for the use of energy and energy-policy analysts in the public and private sectors.

  15. Model Checking Abstract PLEXIL Programs with SMART

    NASA Technical Reports Server (NTRS)

    Siminiceanu, Radu I.

    2007-01-01

    We describe a method to automatically generate discrete-state models of abstract Plan Execution Interchange Language (PLEXIL) programs that can be analyzed using model checking tools. Starting from a high-level description of a PLEXIL program or a family of programs with common characteristics, the generator lays the framework that models the principles of program execution. The concrete parts of the program are not automatically generated, but require the modeler to introduce them by hand. As a case study, we generate models to verify properties of the PLEXIL macro constructs that are introduced as shorthand notation. After an exhaustive analysis, we conclude that the macro definitions obey the intended semantics and behave as expected, contingent on a few specific requirements on the timing semantics of micro-steps in the concrete executive implementation.

  16. Evolutionary model with Turing machines

    NASA Astrophysics Data System (ADS)

    Feverati, Giovanni; Musso, Fabio

    2008-06-01

    The development of a large noncoding fraction in eukaryotic DNA and the phenomenon of the code bloat in the field of evolutionary computations show a striking similarity. This seems to suggest that (in the presence of mechanisms of code growth) the evolution of a complex code cannot be attained without maintaining a large inactive fraction. To test this hypothesis we performed computer simulations of an evolutionary toy model for Turing machines, studying the relations among fitness and coding versus noncoding ratio while varying mutation and code growth rates. The results suggest that, in our model, having a large reservoir of noncoding states constitutes a great (long term) evolutionary advantage.
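
    A highly simplified sketch of such an evolutionary loop over Turing machines (random transition tables, point mutation, truncation selection, a toy fitness). It is not the authors' simulation code, and it omits the code-growth mechanism and the coding/noncoding bookkeeping that the study analyses.

        # Toy evolutionary loop over Turing machines, in the spirit of the model
        # above; all parameters and the fitness function are illustrative.
        import random
        random.seed(0)

        N_STATES, TAPE, STEPS = 8, 64, 200

        def random_machine():
            # transition table: (state, symbol) -> (next_state, write_symbol, move)
            return {(q, s): (random.randrange(N_STATES), random.randint(0, 1),
                             random.choice((-1, 1)))
                    for q in range(N_STATES) for s in (0, 1)}

        def fitness(tm):
            tape, pos, q = [0] * TAPE, TAPE // 2, 0
            for _ in range(STEPS):
                q, tape[pos], move = tm[(q, tape[pos])]
                pos = (pos + move) % TAPE
            return sum(tape)              # toy objective: ones written on the tape

        def mutate(tm):
            child = dict(tm)
            key = random.choice(list(child))
            child[key] = (random.randrange(N_STATES), random.randint(0, 1),
                          random.choice((-1, 1)))
            return child

        pop = [random_machine() for _ in range(30)]
        for gen in range(50):             # mutation plus truncation selection
            pop.sort(key=fitness, reverse=True)
            pop = pop[:10] + [mutate(random.choice(pop[:10])) for _ in range(20)]
        print("best fitness:", max(fitness(tm) for tm in pop))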

  17. Multivariate cross-classification: applying machine learning techniques to characterize abstraction in neural representations

    PubMed Central

    Kaplan, Jonas T.; Man, Kingson; Greening, Steven G.

    2015-01-01

    Here we highlight an emerging trend in the use of machine learning classifiers to test for abstraction across patterns of neural activity. When a classifier algorithm is trained on data from one cognitive context, and tested on data from another, conclusions can be drawn about the role of a given brain region in representing information that abstracts across those cognitive contexts. We call this kind of analysis Multivariate Cross-Classification (MVCC), and review several domains where it has recently made an impact. MVCC has been important in establishing correspondences among neural patterns across cognitive domains, including motor-perception matching and cross-sensory matching. It has been used to test for similarity between neural patterns evoked by perception and those generated from memory. Other work has used MVCC to investigate the similarity of representations for semantic categories across different kinds of stimulus presentation, and in the presence of different cognitive demands. We use these examples to demonstrate the power of MVCC as a tool for investigating neural abstraction and discuss some important methodological issues related to its application. PMID:25859202
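
    A minimal MVCC sketch using scikit-learn on synthetic data: train a linear classifier on patterns from one context and test it on patterns from another; above-chance transfer is the signature of a representation that abstracts across contexts. The data-generating assumption (a single class pattern shared across contexts) is ours, for illustration only.

        # MVCC in miniature: fit on context A, score on context B.
        import numpy as np
        from sklearn.svm import LinearSVC
        rng = np.random.default_rng(0)

        n_voxels, n_trials = 50, 100
        signal = rng.normal(size=n_voxels)          # pattern shared across contexts

        def make_context(noise=2.0):
            y = rng.integers(0, 2, n_trials)        # two stimulus classes
            X = rng.normal(scale=noise, size=(n_trials, n_voxels))
            X[y == 1] += signal                     # class 1 carries the shared pattern
            return X, y

        X_a, y_a = make_context()                   # e.g., visual presentation
        X_b, y_b = make_context()                   # e.g., auditory presentation

        clf = LinearSVC().fit(X_a, y_a)
        print("cross-classification accuracy:", clf.score(X_b, y_b))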

  18. Multivariate cross-classification: applying machine learning techniques to characterize abstraction in neural representations.

    PubMed

    Kaplan, Jonas T; Man, Kingson; Greening, Steven G

    2015-01-01

    Here we highlight an emerging trend in the use of machine learning classifiers to test for abstraction across patterns of neural activity. When a classifier algorithm is trained on data from one cognitive context, and tested on data from another, conclusions can be drawn about the role of a given brain region in representing information that abstracts across those cognitive contexts. We call this kind of analysis Multivariate Cross-Classification (MVCC), and review several domains where it has recently made an impact. MVCC has been important in establishing correspondences among neural patterns across cognitive domains, including motor-perception matching and cross-sensory matching. It has been used to test for similarity between neural patterns evoked by perception and those generated from memory. Other work has used MVCC to investigate the similarity of representations for semantic categories across different kinds of stimulus presentation, and in the presence of different cognitive demands. We use these examples to demonstrate the power of MVCC as a tool for investigating neural abstraction and discuss some important methodological issues related to its application. PMID:25859202

  19. Machine learning in sedimentation modelling.

    PubMed

    Bhattacharya, B; Solomatine, D P

    2006-03-01

    The paper presents machine learning (ML) models that predict sedimentation in the harbour basin of the Port of Rotterdam. The important factors affecting the sedimentation process such as waves, wind, tides, surge, river discharge, etc. are studied, the corresponding time series data is analysed, missing values are estimated and the most important variables behind the process are chosen as the inputs. Two ML methods are used: MLP ANN and M5 model tree. The latter is a collection of piece-wise linear regression models, each being an expert for a particular region of the input space. The models are trained on data collected during 1992-1998 and tested on data from 1999-2000. The predictive accuracy of the models is found to be adequate for potential use in operational decision making. PMID:16530383
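
    A hedged sketch of the paper's setup on synthetic data: train on earlier "years", test on later ones, and compare a multilayer perceptron with a regression tree (standing in for the M5 model tree, which scikit-learn does not provide).

        # Year-split train/test comparison, mirroring the 1992-1998 vs 1999-2000
        # protocol above; the data here are synthetic and purely illustrative.
        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.tree import DecisionTreeRegressor

        rng = np.random.default_rng(1)
        n = 800                                    # pseudo time series
        X = rng.normal(size=(n, 4))                # waves, wind, surge, discharge
        y = 0.6 * X[:, 0] + 0.3 * X[:, 2] ** 2 + 0.1 * rng.normal(size=n)

        train, test = slice(0, 600), slice(600, n) # earlier vs later "years"
        for model in (MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                                   random_state=0),
                      DecisionTreeRegressor(max_depth=5, random_state=0)):
            model.fit(X[train], y[train])
            print(type(model).__name__, "R^2 on held-out years:",
                  round(model.score(X[test], y[test]), 3))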

  20. Memristor models for machine learning.

    PubMed

    Carbajal, Juan Pablo; Dambre, Joni; Hermans, Michiel; Schrauwen, Benjamin

    2015-03-01

    In the quest for alternatives to traditional complementary metal-oxide-semiconductor, it is being suggested that digital computing efficiency and power can be improved by matching the precision to the application. Many applications do not need the high precision that is being used today. In particular, large gains in area and power efficiency could be achieved by dedicated analog realizations of approximate computing engines. In this work we explore the use of memristor networks for analog approximate computation, based on a machine learning framework called reservoir computing. Most experimental investigations on the dynamics of memristors focus on their nonvolatile behavior. Hence, the volatility that is present in the developed technologies is usually unwanted and is not included in simulation models. In contrast, in reservoir computing, volatility is not only desirable but necessary. Therefore, in this work, we propose two different ways to incorporate it into memristor simulation models. The first is an extension of Strukov's model, and the second is an equivalent Wiener model approximation. We analyze and compare the dynamical properties of these models and discuss their implications for the memory and the nonlinear processing capacity of memristor networks. Our results indicate that device variability, increasingly causing problems in traditional computer design, is an asset in the context of reservoir computing. We conclude that although both models could lead to useful memristor-based reservoir computing systems, their computational performance will differ. Therefore, experimental modeling research is required for the development of accurate volatile memristor models. PMID:25602769
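
    The sketch below integrates a Strukov-style memristor model with an added first-order relaxation term as one simple way to make the state volatile. The decay form and all parameter values are our assumptions for illustration; the paper develops its own two volatile extensions (of Strukov's model and a Wiener-model approximation).

        # Strukov-style memristor with an assumed first-order decay term for
        # volatility; Euler integration. Nominal HP-like parameters.
        import numpy as np

        D, R_on, R_off, mu = 10e-9, 100.0, 16e3, 1e-14
        tau = 0.5                                 # assumed volatility constant (s)

        def simulate(v_of_t, t_end=2.0, dt=1e-4, w0=0.1 * D):
            w, ws = w0, []
            for t in np.arange(0.0, t_end, dt):
                m = R_on * (w / D) + R_off * (1 - w / D)   # memristance
                i = v_of_t(t) / m
                dw = mu * R_on / D * i - (w - w0) / tau    # drift + relaxation
                w = min(max(w + dw * dt, 0.0), D)          # keep state in [0, D]
                ws.append(w / D)
            return np.array(ws)

        ws = simulate(lambda t: np.sin(2 * np.pi * 5 * t))  # 5 Hz sinusoidal drive
        print("state w/D range:", ws.min().round(4), "-", ws.max().round(4))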

  1. Musical Instruments, Models, and Machines.

    NASA Astrophysics Data System (ADS)

    Gershenfeld, Neil

    1996-11-01

    A traditional musical instrument is an analog computer that integrates equations of motion based on applied boundary conditions. We are approaching a remarkable time when advances in transducers, real-time computing, and mathematical modeling will enable new technology to emulate and generalize the physics of great musical instruments from first principles, helping virtuosic musicians to do more and non-musicians to engage in creative expression. I will discuss the underlying problems, including non-contact sensing and state reconstruction for nonlinear systems, describe exploratory performance collaborations with artists ranging from Yo-Yo Ma to Penn & Teller, and then consider the broader implications of these devices for the interaction between people and machines.

  2. Rough set models of Physarum machines

    NASA Astrophysics Data System (ADS)

    Pancerz, Krzysztof; Schumann, Andrew

    2015-04-01

    In this paper, we consider transition system models of the behaviour of Physarum machines in terms of rough set theory. A Physarum machine, a biological computing device implemented in the plasmodium of Physarum polycephalum (true slime mould), is a natural transition system. In the behaviour of Physarum machines, one can notice some ambiguity in Physarum motions that hinders exact anticipation of the states of machines over time. To model this ambiguity, we propose to use rough set models created over transition systems. Rough sets are an appropriate tool to deal with rough (ambiguous, imprecise) concepts in the universe of discourse.
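
    A minimal sketch of the rough-set construction the abstract describes: given an indiscernibility relation on states (here induced by a toy observation function), compute the lower and upper approximations of a set of states, with the boundary capturing the ambiguity. The states and observations are invented.

        # Rough-set approximations over a toy transition system's state space.
        states = {"s1", "s2", "s3", "s4", "s5"}
        observe = {"s1": "a", "s2": "a", "s3": "b", "s4": "b", "s5": "c"}
        target = {"s1", "s3", "s4"}          # e.g., states the plasmodium reached

        def eq_class(s):
            # states are indiscernible if they yield the same observation
            return {t for t in states if observe[t] == observe[s]}

        lower = {s for s in states if eq_class(s) <= target}   # certainly in
        upper = {s for s in states if eq_class(s) & target}    # possibly in
        print("lower:", sorted(lower))       # ['s3', 's4']
        print("upper:", sorted(upper))       # ['s1', 's2', 's3', 's4']
        print("boundary:", sorted(upper - lower))  # the 'rough' region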

  3. Abstract models for the synthesis of optimization algorithms.

    NASA Technical Reports Server (NTRS)

    Meyer, G. G. L.; Polak, E.

    1971-01-01

    A systematic approach to the problem of synthesis of optimization algorithms is presented. Abstract models for algorithms are developed which guide the inventive process toward 'conceptual' algorithms which may consist of operations that are inadmissible in a practical method. Once the abstract models are established, a set of methods for converting 'conceptual' algorithms falling into the class defined by the abstract models into 'implementable' iterative procedures is presented.

  4. Application of model abstraction techniques to simulate transport in soils

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Successful understanding and modeling of contaminant transport in soils is the precondition of risk-informed predictions of the subsurface contaminant transport. Exceedingly complex models of subsurface contaminant transport are often inefficient. Model abstraction is the methodology for reducing th...

  5. Abstracting event-based control models for high autonomy systems

    NASA Technical Reports Server (NTRS)

    Luh, Cheng-Jye; Zeigler, Bernard P.

    1993-01-01

    A high autonomy system needs many models on which to base control, management, design, and other interventions. These models differ in level of abstraction and in formalism. Concepts and tools are needed to organize the models into a coherent whole. The paper deals with the abstraction processes for systematic derivation of related models for use in event-based control. The multifaceted modeling methodology is briefly reviewed. The morphism concepts needed for application to model abstraction are described. A theory for supporting the construction of DEVS models needed for event-based control is then presented. An implemented morphism on the basis of this theory is also described.

  6. Modelling abstraction licensing strategies ahead of the UK's water abstraction licensing reform

    NASA Astrophysics Data System (ADS)

    Klaar, M. J.

    2012-12-01

    Within England and Wales, river water abstractions are licensed and regulated by the Environment Agency (EA), who uses compliance with the Environmental Flow Indicator (EFI) to ascertain where abstraction may cause undesirable effects on river habitats and species. The EFI is a percentage deviation from natural flow represented using a flow duration curve. The allowable percentage deviation changes with different flows, and also changes depending on an assessment of the sensitivity of the river to changes in flow (Table 1). Within UK abstraction licensing, resource availability is expressed as a surplus or deficit of water resources in relation to the EFI, and utilises the concept of 'hands-off-flows' (HOFs) at the specified flow statistics detailed in Table 1. Use of a HOF system enables abstraction to cease at set flows, but also enables abstraction to occur at periods of time when more water is available. Compliance at low flows (Q95) is used by the EA to determine the hydrological classification and compliance with the Water Framework Directive (WFD) for identifying waterbodies where flow may be causing or contributing to a failure in good ecological status (GES; Table 2). This compliance assessment shows where the scenario flows are below the EFI and by how much, to help target measures for further investigation and assessment. Currently, the EA is reviewing the EFI methodology in order to assess whether or not it can be used within the reformed water abstraction licensing system which is being planned by the Department for Environment, Food and Rural Affairs (DEFRA) to ensure the licensing system is resilient to the challenges of climate change and population growth, while allowing abstractors to meet their water needs efficiently, and better protect the environment. In order to assess the robustness of the EFI, a simple model has been created which allows a number of abstraction, flow and licensing scenarios to be run to determine WFD compliance using the
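
    A minimal sketch of an EFI-style compliance check under stated assumptions: build a flow duration curve from a daily flow series and compare an abstraction scenario against the natural flow at Q95, using an assumed 10% allowable deviation. The flows are invented, and the EA's actual EFI look-up values (Tables 1 and 2 of the original) are not reproduced here.

        # Flow-duration-curve compliance sketch: Q95 deviation vs an assumed limit.
        import numpy as np
        rng = np.random.default_rng(2)

        natural = rng.lognormal(mean=2.0, sigma=0.8, size=3650)  # daily flows (m3/s)
        abstraction = 1.5                                        # constant take (m3/s)
        scenario = np.maximum(natural - abstraction, 0.0)

        def q_exceedance(flows, pct):
            """Flow exceeded pct% of the time (an FDC ordinate)."""
            return np.percentile(flows, 100 - pct)

        q95_nat, q95_scn = q_exceedance(natural, 95), q_exceedance(scenario, 95)
        allowed = 0.10                                           # assumed limit at Q95
        deviation = (q95_nat - q95_scn) / q95_nat
        print(f"Q95 natural {q95_nat:.2f}, scenario {q95_scn:.2f}, "
              f"deviation {deviation:.1%} -> "
              f"{'compliant' if deviation <= allowed else 'non-compliant'}")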

  7. Vibration absorber modeling for handheld machine tool

    NASA Astrophysics Data System (ADS)

    Abdullah, Mohd Azman; Mustafa, Mohd Muhyiddin; Jamil, Jazli Firdaus; Salim, Mohd Azli; Ramli, Faiz Redza

    2015-05-01

    Handheld machine tools transmit continuous vibration to their users during operation. This vibration causes harmful effects to the health of users over repeated operations in a long period of time. In this paper, a dynamic vibration absorber (DVA) is designed and modeled to reduce the vibration generated by the handheld machine tool. Several designs and models of vibration absorbers with various stiffness properties are simulated, tested and optimized in order to diminish the vibration. An ordinary differential equation is used to derive and formulate the vibration phenomena in the machine tool with and without the DVA. The final transfer function of the DVA is later analyzed using commercially available mathematical software. The DVA with optimum properties of mass and stiffness is developed and applied on the actual handheld machine tool. The performance of the DVA is experimentally tested and validated by the final result of vibration reduction.
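
    A small sketch of the modelling step described above: the steady-state frequency response of a primary mass with and without a tuned undamped absorber, obtained from the coupled equations of motion. Parameter values are illustrative, not the handheld tool's; note the antiresonance the absorber creates at the tuned frequency.

        # Two-degree-of-freedom tuned absorber: |X1| per unit force amplitude.
        import numpy as np

        m1, k1 = 1.0, 1.0e4            # primary mass (kg), stiffness (N/m)
        m2 = 0.1 * m1                  # absorber mass
        k2 = k1 * m2 / m1              # tuned so sqrt(k2/m2) = sqrt(k1/m1)

        def x1_amplitude(w, with_dva=True):
            if not with_dva:
                den = k1 - m1 * w**2
                return abs(1.0 / den) if den != 0.0 else float("inf")
            A = np.array([[k1 + k2 - m1 * w**2, -k2],
                          [-k2,                  k2 - m2 * w**2]])
            return abs(np.linalg.solve(A, [1.0, 0.0])[0])

        wn = np.sqrt(k1 / m1)
        for w in (0.8 * wn, wn, 1.2 * wn):
            print(f"w/wn = {w/wn:.1f}: |X1| bare = {x1_amplitude(w, False):.2e}, "
                  f"with DVA = {x1_amplitude(w, True):.2e}")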

  8. Coupling Radar Rainfall to Hydrological Models for Water Abstraction Management

    NASA Astrophysics Data System (ADS)

    Asfaw, Alemayehu; Shucksmith, James; Smith, Andrea; MacDonald, Ken

    2015-04-01

    The impacts of climate change and growing water use are likely to put considerable pressure on water resources and the environment. In the UK, a reform to surface water abstraction policy has recently been proposed which aims to increase the efficiency of using available water resources whilst minimising impacts on the aquatic environment. Key aspects of this reform include the consideration of dynamic rather than static abstraction licensing, as well as introducing water trading concepts. Dynamic licensing will permit varying levels of abstraction dependent on environmental conditions (i.e. river flow and quality). The practical implementation of an effective dynamic abstraction strategy requires suitable flow forecasting techniques to inform abstraction asset management. Potentially the predicted availability of water resources within a catchment can be coupled to predicted demand and current storage to inform a cost-effective water resource management strategy which minimises environmental impacts. The aim of this work is to use a historical analysis of a UK case study catchment to compare potential water resource availability under a modelled dynamic abstraction scenario informed by a flow forecasting model, against observed abstraction under a conventional abstraction regime. The work also demonstrates the impacts of modelling uncertainties on the accuracy of predicted water availability over a range of forecast lead times. The study utilised PDM (the Probability-Distributed Model, a conceptual rainfall-runoff model developed by the Centre for Ecology & Hydrology), set up in the Dove River catchment (UK) using 1km2 resolution radar rainfall as inputs and 15 min resolution gauged flow data for calibration and validation. Data assimilation procedures are implemented to improve flow predictions using observed flow data. Uncertainties in the radar rainfall data used in the model are quantified using an artificial statistical error model described by a Gaussian distribution and

  9. How Pupils Use a Model for Abstract Concepts in Genetics

    ERIC Educational Resources Information Center

    Venville, Grady; Donovan, Jenny

    2008-01-01

    The purpose of this research was to explore the way pupils of different age groups use a model to understand abstract concepts in genetics. Pupils from early childhood to late adolescence were taught about genes and DNA using an analogical model (the wool model) during their regular biology classes. Changing conceptual understandings of the…

  10. Dissipation and irreversibility for models of mechanochemical machines

    NASA Astrophysics Data System (ADS)

    Brown, Aidan; Sivak, David

    For biological systems to maintain order and achieve directed progress, they must overcome fluctuations so that reactions and processes proceed forwards more than they go in reverse. It is well known that some free energy dissipation is required to achieve irreversible forward progress, but the quantitative relationship between irreversibility and free energy dissipation is not well understood. Previous studies focused on either abstract calculations or detailed simulations that are difficult to generalize. We present results for mechanochemical models of molecular machines, exploring a range of model characteristics and behaviours. Our results describe how irreversibility and dissipation trade off in various situations, and how this trade-off can depend on details of the model. The irreversibility-dissipation trade-off points towards general principles of microscopic machine operation or process design. Our analysis identifies system parameters which can be controlled to bring performance to the Pareto frontier.

  11. Evaluating the performance versus accuracy tradeoff for abstract models

    NASA Astrophysics Data System (ADS)

    McGraw, Robert M.; Clark, Joseph E.

    2001-09-01

    While the military and commercial communities are increasingly reliant on simulation to reduce cost, developing simulations of their complex systems can itself be costly. In order to reduce simulation costs, simulation developers have turned toward using collaborative simulation, reusing existing simulation models, and utilizing model abstraction techniques to reduce simulation development time as well as simulation execution time. This paper focuses on model abstraction techniques that can be applied to reduce simulation execution and development time, and on the effects those techniques have on simulation accuracy.

  12. Concrete Model Checking with Abstract Matching and Refinement

    NASA Technical Reports Server (NTRS)

    Pasareanu, Corina S.; Pelanek, Radek; Visser, Willem

    2005-01-01

    We propose an abstraction-based model checking method which relies on refinement of an under-approximation of the feasible behaviors of the system under analysis. The method preserves errors to safety properties, since all analyzed behaviors are feasible by definition. The method does not require an abstract transition relation to be generated, but instead executes the concrete transitions while storing abstract versions of the concrete states, as specified by a set of abstraction predicates. For each explored transition, the method checks, with the help of a theorem prover, whether there is any loss of precision introduced by abstraction. The results of these checks are used to decide termination or to refine the abstraction, by generating new abstraction predicates. If the (possibly infinite) concrete system under analysis has a finite bisimulation quotient, then the method is guaranteed to eventually explore an equivalent finite bisimilar structure. We illustrate the application of the approach for checking concurrent programs. We also show how a lightweight variant can be used for efficient software testing.
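
    A minimal sketch of the exploration scheme this abstract describes: execute concrete transitions, but store only abstract versions of states (valuations of abstraction predicates) and prune revisits, so every reported error state is feasible by construction. The toy system, predicates, and error condition are invented, and the refinement step (adding predicates when the search stops inconclusively) is omitted.

        # Concrete exploration with abstract matching over a toy counter system.
        from collections import deque

        def step(state):                  # concrete transitions
            x, y = state
            return [(x + 1, y), (x, y + x)] if x < 20 else []

        # Abstraction predicates; tracking x thresholds keeps the abstraction
        # fine enough along the path. Coarser predicates may end the search
        # early, which is when the real method refines.
        predicates = [lambda s, i=i: s[0] > i for i in range(20)]

        def abstract(state):
            return tuple(p(state) for p in predicates)

        def explore(init, error):
            seen, frontier = {abstract(init)}, deque([init])
            while frontier:
                s = frontier.popleft()
                if error(s):
                    return s                       # feasible by construction
                for t in step(s):
                    a = abstract(t)
                    if a not in seen:              # abstract matching
                        seen.add(a)
                        frontier.append(t)
            return None

        print("error found at:", explore((0, 1), error=lambda s: s[0] > 15))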

  13. An abstract specification language for Markov reliability models

    NASA Technical Reports Server (NTRS)

    Butler, R. W.

    1985-01-01

    Markov models can be used to compute the reliability of virtually any fault-tolerant system. However, the process of delineating all of the states and transitions in a model of a complex system can be devastatingly tedious and error-prone. An approach to this problem is presented utilizing an abstract model definition language. This high-level language is described in an informal manner and illustrated by example.
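
    The sketch below mimics the spirit (not the syntax) of such a language: a compact specification (n redundant units, failure rate lam, k needed) expands into a Markov generator matrix, and reliability follows from the matrix exponential, so no states or transitions are enumerated by hand.

        # Generate a k-of-n Markov reliability model from a compact spec.
        import numpy as np
        from scipy.linalg import expm

        def reliability(n, k, lam, t):
            """P(at least k of n units still working at time t)."""
            # states 0..n = number of working units; below k is absorbing failure
            Q = np.zeros((n + 1, n + 1))
            for w in range(k, n + 1):     # w working units -> w-1 at rate w*lam
                Q[w, w - 1] = w * lam
                Q[w, w] = -w * lam
            p0 = np.zeros(n + 1); p0[n] = 1.0
            p_t = p0 @ expm(Q * t)        # forward Kolmogorov solution
            return p_t[k:].sum()

        print(f"triplex (2-of-3), lam=1e-4/h, t=10 h: "
              f"R = {reliability(3, 2, 1e-4, 10.0):.8f}")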

  14. Particle Tracking Model and Abstraction of Transport Processes

    SciTech Connect

    B. Robinson

    2004-10-21

    The purpose of this report is to document the abstraction model being used in total system performance assessment (TSPA) model calculations for radionuclide transport in the unsaturated zone (UZ). The UZ transport abstraction model uses the particle-tracking method that is incorporated into the finite element heat and mass model (FEHM) computer code (Zyvoloski et al. 1997 [DIRS 100615]) to simulate radionuclide transport in the UZ. This report outlines the assumptions, design, and testing of a model for calculating radionuclide transport in the UZ at Yucca Mountain. In addition, methods for determining and inputting transport parameters are outlined for use in the TSPA for license application (LA) analyses. Process-level transport model calculations are documented in another report for the UZ (BSC 2004 [DIRS 164500]). Three-dimensional, dual-permeability flow fields generated to characterize UZ flow (documented by BSC 2004 [DIRS 169861]; DTN: LB03023DSSCP9I.001 [DIRS 163044]) are converted to make them compatible with the FEHM code for use in this abstraction model. This report establishes the numerical method and demonstrates the use of the model that is intended to represent UZ transport in the TSPA-LA. Capability of the UZ barrier for retarding the transport is demonstrated in this report, and by the underlying process model (BSC 2004 [DIRS 164500]). The technical scope, content, and management of this report are described in the planning document ''Technical Work Plan for: Unsaturated Zone Transport Model Report Integration'' (BSC 2004 [DIRS 171282]). Deviations from the technical work plan (TWP) are noted within the text of this report, as appropriate. The latest version of this document is being prepared principally to correct parameter values found to be in error due to transcription errors, changes in source data that were not captured in the report, calculation errors, and errors in interpretation of source data.
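
    A minimal random-walk particle-tracking sketch of the kind of calculation this abstraction performs: 1-D advection plus dispersion with linear retardation, yielding breakthrough-time quantiles. Parameters are placeholders; this is not the FEHM implementation.

        # Random-walk particle tracking in 1-D with retardation.
        import numpy as np
        rng = np.random.default_rng(3)

        n_particles, L = 10_000, 500.0     # particle count, domain length (m)
        v, D, R = 5.0, 50.0, 2.0           # velocity (m/yr), dispersion (m2/yr), retardation
        dt, t_max = 0.5, 1000.0

        x = np.zeros(n_particles)
        arrival = np.full(n_particles, np.nan)
        for step in range(int(t_max / dt)):
            moving = np.isnan(arrival)
            x[moving] += (v / R) * dt + np.sqrt(2 * D / R * dt) * \
                         rng.normal(size=moving.sum())
            arrived = moving & (x >= L)
            arrival[arrived] = (step + 1) * dt

        bt = np.nanpercentile(arrival, [10, 50, 90])
        print("10/50/90% breakthrough times (yr):", np.round(bt, 1))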

  15. Of Models and Machines: Implementing Bounded Rationality.

    PubMed

    Dick, Stephanie

    2015-09-01

    This essay explores the early history of Herbert Simon's principle of bounded rationality in the context of his Artificial Intelligence research in the mid 1950s. It focuses in particular on how Simon and his colleagues at the RAND Corporation translated a model of human reasoning into a computer program, the Logic Theory Machine. They were motivated by a belief that computers and minds were the same kind of thing--namely, information-processing systems. The Logic Theory Machine program was a model of how people solved problems in elementary mathematical logic. However, in making this model actually run on their 1950s computer, the JOHNNIAC, Simon and his colleagues had to navigate many obstacles and material constraints quite foreign to the human experience of logic. They crafted new tools and engaged in new practices that accommodated the affordances of their machine, rather than reflecting the character of human cognition and its bounds. The essay argues that tracking this implementation effort shows that "internal" cognitive practices and "external" tools and materials are not so easily separated as they are in Simon's principle of bounded rationality--the latter often shaping the dynamics of the former. PMID:26685521

  16. The abstract model of dynamic evolution based on services

    NASA Astrophysics Data System (ADS)

    Qian, Ye; Li, Tong; Li, Yunfei; Gu, Hongxing

    2012-01-01

    Service-oriented software systems face the challenge of regulating themselves promptly in response to the evolving Internet environment and changing user requirements. In this paper, a new way to describe the dynamic evolution of services according to the 3C mode (Will 1990) is proposed, and an extended workflow net is utilized to describe the abstract model of the dynamic evolution of services, from the specific functional domain defined in this paper to the whole system.

  17. The abstract model of dynamic evolution based on services

    NASA Astrophysics Data System (ADS)

    Qian, Ye; Li, Tong; Li, Yunfei; Gu, Hongxing

    2011-12-01

    Service-oriented software systems face the challenge of regulating themselves promptly in response to the evolving Internet environment and changing user requirements. In this paper, a new way to describe the dynamic evolution of services according to the 3C mode (Will 1990) is proposed, and an extended workflow net is utilized to describe the abstract model of the dynamic evolution of services, from the specific functional domain defined in this paper to the whole system.

  18. Situation models, mental simulations, and abstract concepts in discourse comprehension.

    PubMed

    Zwaan, Rolf A

    2016-08-01

    This article sets out to examine the role of symbolic and sensorimotor representations in discourse comprehension. It starts out with a review of the literature on situation models, showing how mental representations are constrained by linguistic and situational factors. These ideas are then extended to more explicitly include sensorimotor representations. Following Zwaan and Madden (2005), the author argues that sensorimotor and symbolic representations mutually constrain each other in discourse comprehension. These ideas are then developed further to propose two roles for abstract concepts in discourse comprehension. It is argued that they serve as pointers in memory, used (1) cataphorically to integrate upcoming information into a sensorimotor simulation, or (2) anaphorically to integrate previously presented information into a sensorimotor simulation. In either case, the sensorimotor representation is a specific instantiation of the abstract concept. PMID:26088667

  19. Directory of Energy Information Administration model abstracts 1988

    SciTech Connect

    Not Available

    1988-01-01

    This directory contains descriptions of each basic and auxiliary model, including the title, acronym, purpose, and type, followed by more detailed information on characteristics, uses, and requirements. For developing models, limited information is provided. Sources for additional information are identified. Included in this directory are 44 EIA models active as of February 1, 1988, 16 of which operate on personal computers. Models that run on personal computers are identified by "PC" as part of their acronyms. The main body of this directory is an alphabetical listing of all basic and auxiliary EIA models. Appendix A identifies major EIA modeling systems and the models within these systems, and Appendix B identifies EIA models by type (basic or auxiliary). Appendix C lists developing models and contact persons for those models. A basic model is one designated by the EIA Administrator as being sufficiently important to require sustained support and public scrutiny. An auxiliary model is one designated by the EIA Administrator as being used only occasionally in analyses, and therefore requiring minimal levels of documentation. A developing model is one designated by the EIA Administrator as being under development and yet of sufficient interest to require a basic level of documentation at a future date. EIA also leases models developed by proprietary software vendors. Documentation for these "proprietary" models is the responsibility of the companies from which they are leased. EIA has recently leased models from Chase Econometrics, Inc., Data Resources, Inc. (DRI), the Oak Ridge National Laboratory (ORNL), and Wharton Econometric Forecasting Associates (WEFA). Leased models are not abstracted here.

  20. Modeling quantum physics with machine learning

    NASA Astrophysics Data System (ADS)

    Lopez-Bezanilla, Alejandro; Arsenault, Louis-Francois; Millis, Andrew; Littlewood, Peter; von Lilienfeld, Anatole

    2014-03-01

    Machine Learning (ML) is a systematic way of inferring new results from sparse information. It directly allows for the resolution of computationally expensive sets of equations by making sense of accumulated knowledge and it is therefore an attractive method for providing computationally inexpensive 'solvers' for some of the important systems of condensed matter physics. In this talk a non-linear regression statistical model is introduced to demonstrate the utility of ML methods in solving quantum-physics-related problems, and is applied to the calculation of electronic transport in 1D channels. DOE contract number DE-AC02-06CH11357.

  1. Entity-Centric Abstraction and Modeling Framework for Transportation Architectures

    NASA Technical Reports Server (NTRS)

    Lewe, Jung-Ho; DeLaurentis, Daniel A.; Mavris, Dimitri N.; Schrage, Daniel P.

    2007-01-01

    A comprehensive framework for representing transportation architectures is presented. After discussing a series of preceding perspectives and formulations, the intellectual underpinning of the novel framework using an entity-centric abstraction of transportation is described. The entities include endogenous and exogenous factors, and functional expressions are offered that relate these and their evolution. The end result is a Transportation Architecture Field which permits analysis of future concepts under a holistic perspective. A simulation model which stems from the framework is presented and exercised, producing results which quantify improvements in air transportation due to advanced aircraft technologies. Finally, a modeling hypothesis and its accompanying criteria are proposed to test further use of the framework for evaluating new transportation solutions.

  2. Modeling and analysis of pulse electrochemical machining

    NASA Astrophysics Data System (ADS)

    Wei, Bin

    Pulse Electrochemical Machining (PECM) is a potentially cost effective technology meeting the increasing needs of precision manufacturing of superalloys, like titanium alloys, into complex shapes such as turbine airfoils. This dissertation reports: (1) an assessment of the worldwide state-of-the-art PECM research and industrial practice; (2) PECM process model development; (3) PECM of a superalloy (Ti-6Al-4V); and (4) key issues in future PECM research. The assessment focuses on identifying dimensional control problems with continuous ECM and how PECM can offer a solution. Previous research on PECM system design, process mechanisms, and dimensional control is analysed, leading to a clearer understanding of key issues in PECM development such as process characterization and modeling. New interelectrode gap dynamic models describing the gap evolution with time are developed for different PECM processes with an emphasis on the frontal gaps and a typical two-dimensional case. A 'PECM cosine principle' and several tool design formulae are also derived. PECM processes are characterized using concepts such as quasi-equilibrium gap and dissolution localization. Process simulation is performed to evaluate the effects of process inputs on dimensional accuracy control. Analysis is made of three types (single-phase, homogeneous, and inhomogeneous) of models concerning the physical processes (such as the electrolyte flow, Joule heating, and bubble generation) in the interelectrode gap. A physical model is introduced for the PECM with short pulses, which addresses the effect of electrolyte conductivity change on anodic dissolution. PECM of the titanium alloy is studied from a new perspective on the pulsating current's influence on surface quality and dimension control. An experimental methodology is developed to acquire instantaneous currents and to accurately measure the coefficient of machinability. The influence of pulse parameters on the surface passivation is explained based

  3. FIELD DATA AND PRELIMINARY MODELING TO DEMONSTRATE MODEL ABSTRACTION TECHNIQUES USING THE OPE3 FIELD SITE

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This report describes the data and preliminary modeling to develop a case study of model abstraction application at the watershed scale. Model abstraction is defined as the methodology for reducing the complexity of a simulation model while maintaining the validity of the simulation results with res...

  4. Uncovering protein interaction in abstracts and text using a novel linear model and word proximity networks

    PubMed Central

    Abi-Haidar, Alaa; Kaur, Jasleen; Maguitman, Ana; Radivojac, Predrag; Rechtsteiner, Andreas; Verspoor, Karin; Wang, Zhiping; Rocha, Luis M

    2008-01-01

    Background: We participated in three of the protein-protein interaction subtasks of the Second BioCreative Challenge: classification of abstracts relevant for protein-protein interaction (interaction article subtask [IAS]), discovery of protein pairs (interaction pair subtask [IPS]), and identification of text passages characterizing protein interaction (interaction sentences subtask [ISS]) in full-text documents. We approached the abstract classification task with a novel, lightweight linear model inspired by spam detection techniques, as well as an uncertainty-based integration scheme. We also used a support vector machine and singular value decomposition on the same features for comparison purposes. Our approach to the full-text subtasks (protein pair and passage identification) includes a feature expansion method based on word proximity networks. Results: Our approach to the abstract classification task (IAS) was among the top submissions for this task in terms of measures of performance used in the challenge evaluation (accuracy, F-score, and area under the receiver operating characteristic curve). We also report on a web tool that we produced using our approach: the Protein Interaction Abstract Relevance Evaluator (PIARE). Our approach to the full-text tasks resulted in one of the highest recall rates as well as mean reciprocal rank of correct passages. Conclusion: Our approach to abstract classification shows that a simple linear model, using relatively few features, can generalize and uncover the conceptual nature of protein-protein interactions from the bibliome. Because the novel approach is based on a rather lightweight linear model, it can easily be ported and applied to similar problems. In full-text problems, the expansion of word features with word proximity networks is shown to be useful, although the need for some improvements is discussed. PMID:18834489
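
    A toy sketch of a spam-filter-style linear model for abstract triage, in the spirit of the approach described: score an abstract by summing smoothed per-word log-odds learned from labelled examples. The four training snippets and the weighting scheme are invented; the authors' actual features, normalization, and uncertainty-based integration are richer.

        # Lightweight linear text scorer: positive score = interaction-relevant.
        import math
        from collections import Counter

        pos = ["protein binds receptor complex", "kinase interacts with substrate"]
        neg = ["patients enrolled in clinical trial", "survey of hospital outcomes"]

        def counts(docs):
            c = Counter()
            for d in docs:
                c.update(d.split())
            return c

        cp, cn = counts(pos), counts(neg)
        vocab = set(cp) | set(cn)
        w = {t: math.log((cp[t] + 1) / (cn[t] + 1)) for t in vocab}  # smoothed log-odds

        def score(text):
            return sum(w.get(t, 0.0) for t in text.split())

        print(score("receptor complex interacts with kinase"))  # > 0: relevant
        print(score("clinical survey of patients"))             # < 0: not relevant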

  5. Information Model for Machine-Tool-Performance Tests

    PubMed Central

    Lee, Y. Tina; Soons, Johannes A.; Donmez, M. Alkan

    2001-01-01

    This report specifies an information model of machine-tool-performance tests in the EXPRESS [1] language. The information model provides a mechanism for describing the properties and results of machine-tool-performance tests. The objective of the information model is a standardized, computer-interpretable representation that allows for efficient archiving and exchange of performance test data throughout the life cycle of the machine. The report also demonstrates the implementation of the information model using three different implementation methods.

  6. Prototype-based models in machine learning.

    PubMed

    Biehl, Michael; Hammer, Barbara; Villmann, Thomas

    2016-01-01

    An overview is given of prototype-based models in machine learning. In this framework, observations, i.e., data, are stored in terms of typical representatives. Together with a suitable measure of similarity, the systems can be employed in the context of unsupervised and supervised analysis of potentially high-dimensional, complex datasets. We discuss basic schemes of competitive vector quantization as well as the so-called neural gas approach and Kohonen's topology-preserving self-organizing map. Supervised learning in prototype systems is exemplified in terms of learning vector quantization. Most frequently, the familiar Euclidean distance serves as a dissimilarity measure. We present extensions of the framework to nonstandard measures and give an introduction to the use of adaptive distances in relevance learning. PMID:26800334
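
    A minimal LVQ1 sketch (the supervised scheme named above): prototypes are attracted to samples of their own class and repelled by samples of other classes, with the familiar Euclidean distance as the dissimilarity measure. The data and learning rate are illustrative.

        # Learning vector quantization (LVQ1) on two synthetic Gaussian classes.
        import numpy as np
        rng = np.random.default_rng(4)

        X = np.vstack([rng.normal([0, 0], 1, (100, 2)),
                       rng.normal([4, 4], 1, (100, 2))])
        y = np.array([0] * 100 + [1] * 100)

        protos = np.array([[1.0, 1.0], [3.0, 3.0]])   # one prototype per class
        labels = np.array([0, 1])
        lr = 0.05
        for epoch in range(20):
            for i in rng.permutation(len(X)):
                j = np.argmin(((protos - X[i]) ** 2).sum(axis=1))  # winner
                sign = 1.0 if labels[j] == y[i] else -1.0          # attract/repel
                protos[j] += sign * lr * (X[i] - protos[j])

        pred = labels[((protos[None] - X[:, None]) ** 2).sum(-1).argmin(1)]
        print("training accuracy:", (pred == y).mean())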

  7. 9. VIEW, LOOKING SOUTH, OF INTERLOCKING MACHINE, WITH ORIGINAL MODEL ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    9. VIEW, LOOKING SOUTH, OF INTERLOCKING MACHINE, WITH ORIGINAL MODEL BOARD IN CENTER, NEW MODEL BOARD AT LEFT AND MODEL SEMAPHORES AT TOP OF PHOTOGRAPH, THIRD FLOOR - South Station Tower No. 1 & Interlocking System, Dewey Square, Boston, Suffolk County, MA

  8. Modeling of cumulative tool wear in machining metal matrix composites

    SciTech Connect

    Hung, N.P.; Tan, V.K.; Oon, B.E.

    1995-12-31

    Metal matrix composites (MMCs) are notorious for their low machinability because of the abrasive and brittle reinforcement. Although a near-net-shape product could be produced, finish machining is still required for the final shape and dimension. The classical Taylor's tool life equation that relates tool life and cutting conditions has been traditionally used to study machinability. The turning operation is commonly used to investigate the machinability of a material; tedious and costly milling experiments have to be performed separately, while a facing test is not applicable for Taylor's model since the facing speed varies as the tool moves radially. Collecting intensive machining data for MMCs is often difficult because of the constraints on size, cost of the material, and the availability of sophisticated machine tools. A more flexible model and machinability testing technique are, therefore, sought. This study presents and verifies new models for turning, facing, and milling operations. Different cutting conditions were utilized to assess the machinability of MMCs reinforced with silicon carbide or alumina particles. Experimental data show that tool wear does not depend on the order of different cutting speeds since abrasion is the main wear mechanism. Correlation between data for turning, milling, and facing is presented. It is more economical to rank machinability using data for facing and then to convert the data for turning and milling, if required. Subsurface damage such as work-hardened and cracked matrix alloy and fractured and delaminated particles is discussed.
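
    For reference, the classical Taylor tool life relation that the abstract builds on, together with its extended form; the symbols here are the conventional ones (not notation from the paper), and all exponents and constants are empirically fitted:

        V \, T^{n} = C
        V \, T^{n} \, f^{a} \, d^{b} = C'

    where V is cutting speed, T is tool life, f is feed, d is depth of cut, and n, a, b, C, C' are empirical constants for a given tool-workpiece pair.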

  9. 8. VIEW, LOOKING NORTH, OF INTERLOCKING MACHINE WITH ORIGINAL MODEL ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    8. VIEW, LOOKING NORTH, OF INTERLOCKING MACHINE WITH ORIGINAL MODEL BOARD IN CENTER AND MODEL SEMAPHORE SIGNALS (AT TOP OF PHOTOGRAPH), THIRD FLOOR - South Station Tower No. 1 & Interlocking System, Dewey Square, Boston, Suffolk County, MA

  10. Selected translated abstracts of Russian-language climate-change publications. 4: General circulation models

    SciTech Connect

    Burtis, M.D.; Razuvaev, V.N.; Sivachok, S.G.

    1996-10-01

    This report presents English-translated abstracts of important Russian-language literature concerning general circulation models as they relate to climate change. In addition to the bibliographic citations and abstracts translated into English, this report presents the original citations and abstracts in Russian. Author and title indexes are included to assist the reader in locating abstracts of particular interest.

  11. A probabilistic approach to aggregate induction machine modeling

    SciTech Connect

    Stankovic, A.M.; Lesieutre, B.C.

    1996-11-01

    In this paper the authors pursue probabilistic aggregate dynamical models for n identical induction machines connected to a bus, capturing the effect of different mechanical inputs to the individual machines. The authors explore model averaging and review in detail four procedures for linear models. They describe linear systems depending upon stochastic parameters, and develop a theoretical justification for a very simple and reasonably accurate averaging method. They then extend this to the nonlinear model. Finally, they use a recently introduced notion of the stochastic norm to describe a cluster of induction machines undergoing multiple simultaneous parametric variations, and obtain useful and very mildly conservative bounds on eigenstructure perturbations under multiple simultaneous parametric variations.

  12. Limit model of electrochemical dimensional machining of metals

    NASA Astrophysics Data System (ADS)

    Zhitnikov, V. P.; Oshmarina, E. M.; Porechny, S. S.; Fedorova, G. I.

    2014-07-01

    The method of precision electrochemical machining is studied by using a model in which the current output has the form of a step function of current density. The problems of maximum stationary and quasistationary machining are formulated and solved, which made it possible to study the nonstationary process with sufficient accuracy.
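
    To make the step-function idea concrete, the sketch below evolves a one-dimensional inter-electrode gap under Faraday's law with a current output that switches on above a critical current density. All constants (voltage, conductivity, threshold, feed rate) are assumed for illustration and are not taken from the paper; the gap equation is a standard one-dimensional ECM idealization, not the authors' formulation.

```python
import numpy as np

# One-dimensional ECM gap model with a step-function current output.
# All constants are assumed for illustration, not taken from the paper.
U, kappa = 12.0, 10.0        # applied voltage [V], electrolyte conductivity [S/m]
k_v = 3.5e-11                # volumetric electrochemical equivalent [m^3/C]
j_crit, eta0 = 2.0e4, 0.8    # step threshold [A/m^2] and plateau current output
feed = 1.0e-5                # tool feed rate [m/s]

def current_output(j):
    """Step-function current output: zero below j_crit, eta0 at or above it."""
    return np.where(j >= j_crit, eta0, 0.0)

# Evolve the inter-electrode gap s:  ds/dt = eta(j)*j*k_v - feed,  j = U*kappa/s.
s, dt = 1.0e-4, 1.0e-3
for _ in range(200_000):     # 200 s of process time, enough to near steady state
    j = U * kappa / s
    s += (current_output(j) * j * k_v - feed) * dt
print(f"quasi-stationary gap ~ {s * 1e6:.0f} um at j = {U * kappa / s:.3g} A/m^2")
```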

  13. Model Machine Shop for Drafting Instruction.

    ERIC Educational Resources Information Center

    Jackson, Carl R.

    The development and implementation of a two-year interdisciplinary course integrating a machine shop and drafting curriculum are described in the report. The purpose of the course is to provide a learning process in industrial drafting featuring identifiable orientation in skills that will enable the student to develop competencies that are…

  14. Context in Models of Human-Machine Systems

    NASA Technical Reports Server (NTRS)

    Callantine, Todd J.; Null, Cynthia H. (Technical Monitor)

    1998-01-01

    All human-machine systems models represent context. This paper proposes a theory of context through which models may be usefully related and integrated for design. The paper presents examples of context representation in various models, describes an application to developing models for the Crew Activity Tracking System (CATS), and advances context as a foundation for integrated design of complex dynamic systems.

  15. Developing a PLC-friendly state machine model: lessons learned

    NASA Astrophysics Data System (ADS)

    Pessemier, Wim; Deconinck, Geert; Raskin, Gert; Saey, Philippe; Van Winckel, Hans

    2014-07-01

    Modern Programmable Logic Controllers (PLCs) have become an attractive platform for controlling real-time aspects of astronomical telescopes and instruments due to their increased versatility, performance and standardization. Likewise, vendor-neutral middleware technologies such as OPC Unified Architecture (OPC UA) have recently demonstrated that they can greatly facilitate the integration of these industrial platforms into the overall control system. Many practical questions arise, however, when building multi-tiered control systems that consist of PLCs for low-level control, and conventional software and platforms for higher-level control. How should the PLC software be structured, so that it can rely on well-known programming paradigms on the one hand, and be mapped to a well-organized OPC UA interface on the other hand? Which programming languages of the IEC 61131-3 standard closely match the problem domains of the abstraction levels within this structure? How can the recent additions to the standard (such as the support for namespaces and object-oriented extensions) facilitate a model-based development approach? To what degree can our applications already take advantage of the more advanced parts of the OPC UA standard, such as the high expressiveness of the semantic modeling language that it defines, or the support for events, aggregation of data, automatic discovery, ... ? What are the timing and concurrency problems to be expected for the higher-level tiers of the control system due to the cyclic execution of control and communication tasks by the PLCs? We try to answer these questions by demonstrating a semantic state machine model that can readily be implemented using IEC 61131 and OPC UA: one that does not aim to capture all possible states of a system, but rather attempts to organize the coarse-grained structure and behaviour of a system. In this paper we focus on the intricacies of this seemingly simple task, and on the lessons that we

  16. Predicting Market Impact Costs Using Nonparametric Machine Learning Models.

    PubMed

    Park, Saerom; Lee, Jaewook; Son, Youngdoo

    2016-01-01

    Market impact cost is the most significant portion of implicit transaction costs that can reduce the overall transaction cost, although it cannot be measured directly. In this paper, we employed state-of-the-art nonparametric machine learning models: neural networks, Bayesian neural networks, Gaussian processes, and support vector regression, to predict market impact cost accurately and to provide a predictive model that is versatile in the number of variables. We collected a large amount of real single-transaction data of the US stock market from a Bloomberg Terminal and generated three independent input variables. As a result, most nonparametric machine learning models outperformed a state-of-the-art benchmark parametric model, the I-star model, in four error measures. Although these models encounter certain difficulties in separating the permanent and temporary cost directly, nonparametric machine learning models can be good alternatives for reducing transaction costs by considerably improving prediction performance. PMID:26926235

  17. Predicting Market Impact Costs Using Nonparametric Machine Learning Models

    PubMed Central

    Park, Saerom; Lee, Jaewook; Son, Youngdoo

    2016-01-01

    Market impact cost is the most significant portion of implicit transaction costs that can reduce the overall transaction cost, although it cannot be measured directly. In this paper, we employed state-of-the-art nonparametric machine learning models: neural networks, Bayesian neural networks, Gaussian processes, and support vector regression, to predict market impact cost accurately and to provide a predictive model that is versatile in the number of variables. We collected a large amount of real single-transaction data of the US stock market from a Bloomberg Terminal and generated three independent input variables. As a result, most nonparametric machine learning models outperformed a state-of-the-art benchmark parametric model, the I-star model, in four error measures. Although these models encounter certain difficulties in separating the permanent and temporary cost directly, nonparametric machine learning models can be good alternatives for reducing transaction costs by considerably improving prediction performance. PMID:26926235

  18. (abstract) Simple Spreadsheet Thermal Models for Cryogenic Applications

    NASA Technical Reports Server (NTRS)

    Nash, A. E.

    1994-01-01

    Self-consistent circuit-analog thermal models that can be run in commercial spreadsheet programs on personal computers have been created to calculate the cooldown and steady-state performance of cryogen-cooled Dewars. The models include temperature-dependent conduction and radiation effects. The outputs of the models provide temperature distribution and Dewar performance information. These models have been used to analyze the Cryogenic Telescope Test Facility (CTTF). The facility will be on line in early 1995 for its first user, the Infrared Telescope Technology Testbed (ITTT), for the Space Infrared Telescope Facility (SIRTF) at JPL. The model algorithm as well as a comparison of the model predictions and actual performance of this facility will be presented.
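
    The following minimal sketch (with assumed node values, not those of the CTTF model) shows the kind of lumped circuit-analog cooldown calculation the abstract describes; a spreadsheet implementation performs the same explicit update once per row of the time grid.

```python
import numpy as np

# Minimal lumped-parameter ("circuit analog") cooldown sketch with assumed
# values; nodes = [cold stage, radiation shield], bath = liquid cryogen.
C = np.array([500.0, 2000.0])        # heat capacities [J/K] (held constant here)
T = np.array([295.0, 295.0])         # initial temperatures [K]
T_bath = 77.0                        # cryogen bath temperature [K]
G_bath = np.array([0.5, 0.1])        # conductances node -> bath [W/K]
G_link = 0.05                        # conductance between the two nodes [W/K]
sigma_eps_A = 1.0e-9                 # lumped radiation factor [W/K^4]

dt, t_end = 1.0, 3600.0 * 24
for _ in range(int(t_end / dt)):
    Q = np.zeros(2)
    Q += G_bath * (T_bath - T)                       # conduction to the bath
    Q[0] += G_link * (T[1] - T[0])                   # inter-node conduction
    Q[1] += G_link * (T[0] - T[1])
    Q[1] += sigma_eps_A * (295.0**4 - T[1]**4)       # radiation from 295 K walls
    T += dt * Q / C                                  # explicit Euler update
print("temperatures after 24 h:", T)
```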

  19. Symbolic LTL Compilation for Model Checking: Extended Abstract

    NASA Technical Reports Server (NTRS)

    Rozier, Kristin Y.; Vardi, Moshe Y.

    2007-01-01

    In Linear Temporal Logic (LTL) model checking, we check LTL formulas representing desired behaviors against a formal model of the system designed to exhibit these behaviors. To accomplish this task, the LTL formulas must be translated into automata [21]. We focus on LTL compilation by investigating LTL satisfiability checking via a reduction to model checking. Having shown that symbolic LTL compilation algorithms are superior to explicit automata construction algorithms for this task [16], we concentrate here on seeking a better symbolic algorithm. We present experimental data comparing algorithmic variations such as normal forms, encoding methods, and variable ordering and examine their effects on performance metrics including processing time and scalability. Safety critical systems, such as air traffic control, life support systems, hazardous environment controls, and automotive control systems, pervade our daily lives, yet testing and simulation alone cannot adequately verify their reliability [3]. Model checking is a promising approach to formal verification for safety critical systems which involves creating a formal mathematical model of the system and translating desired safety properties into a formal specification for this model. The complement of the specification is then checked against the system model. When the model does not satisfy the specification, model-checking tools accompany this negative answer with a counterexample, which points to an inconsistency between the system and the desired behaviors and aids debugging efforts.

  20. Modeling situated abstraction: action coalescence via multidimensional coherence.

    SciTech Connect

    Sallach, D. L.; Decision and Information Sciences; Univ. of Chicago

    2007-01-01

    Situated social agents weigh dozens of priorities, each with its own complexities. Domains of interest are intertwined, and progress in one area either complements or conflicts with other priorities. Interpretive agents address these complexities through: (1) integrating cognitive complexities through the use of radial concepts, (2) recognizing the role of emotion in prioritizing alternatives and urgencies, (3) using Miller-range constraints to avoid oversimplified notions of omniscience, and (4) constraining actions to 'moves' in multiple prototype games. Situated agent orientations are dynamically grounded in pragmatic considerations as well as intertwined with internal and external priorities. HokiPoki is a situated abstraction designed to shape and focus strategic agent orientations. The design integrates four pragmatic pairs: (1) problem and solution, (2) dependence and power, (3) constraint and affordance, and (4) (agent) intent and effect. In this way, agents are empowered to address multiple facets of a situation in an exploratory, or even arbitrary, order. HokiPoki is open to the internal orientation of the agent as it evolves, but also to the communications and actions of other agents.

  1. Particle Tracking Model and Abstraction of Transport Processes

    SciTech Connect

    B. Robinson

    2000-04-07

    The purpose of the transport methodology and component analysis is to provide the numerical methods for simulating radionuclide transport and model setup for transport in the unsaturated zone (UZ) site-scale model. The particle-tracking method of simulating radionuclide transport is incorporated into the FEHM computer code and the resulting changes in the FEHM code are to be submitted to the software configuration management system. This Analysis and Model Report (AMR) outlines the assumptions, design, and testing of a model for calculating radionuclide transport in the unsaturated zone at Yucca Mountain. In addition, methods for determining colloid-facilitated transport parameters are outlined for use in the Total System Performance Assessment (TSPA) analyses. Concurrently, process-level flow model calculations are being carried out in a PMR for the unsaturated zone. The computer code TOUGH2 is being used to generate three-dimensional, dual-permeability flow fields that are supplied to the Performance Assessment group for subsequent transport simulations. These flow fields are converted to input files compatible with the FEHM code, which for this application simulates radionuclide transport using the particle-tracking algorithm outlined in this AMR. Therefore, this AMR establishes the numerical method and demonstrates the use of the model, but the specific breakthrough curves presented do not necessarily represent the behavior of the Yucca Mountain unsaturated zone.
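
    As an illustration of the particle-tracking idea (not the FEHM algorithm itself), the sketch below propagates particles through a one-dimensional velocity field with a Gaussian dispersive step and tallies a breakthrough fraction; all parameter values are invented.

```python
import numpy as np

# Minimal 1-D random-walk particle-tracking sketch (advection + dispersion).
# All parameter values are illustrative, not values from the AMR.
rng = np.random.default_rng(1)
n_particles, n_steps, dt = 10000, 500, 0.1      # 500 steps of 0.1 day
v, D = 1.0, 0.05                                # velocity [m/d], dispersion [m^2/d]

x = np.zeros(n_particles)                       # all particles start at the source
for _ in range(n_steps):
    # advective displacement plus a Gaussian dispersive step
    x += v * dt + rng.normal(0.0, np.sqrt(2.0 * D * dt), n_particles)

# Breakthrough: fraction of particles past an observation plane at x = 50 m,
# the mean travel distance after 50 days (expect roughly one half).
print("fraction past 50 m:", np.mean(x > 50.0))
```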

  2. A model for the synchronous machine using frequency response measurements

    SciTech Connect

    Bacalao, N.J.; Arizon, P. de; Sanchez L., R.O.

    1995-02-01

    This paper presents new techniques to improve the accuracy and speed of modeling synchronous machines in stability and transient studies. The proposed model uses frequency responses as input data, obtained either directly from measurements or calculated from the available data. The new model is flexible, as it allows changes in the detail in which the machine is represented, and it is possible to partly compensate for the numerical errors incurred when using large integration time steps. The model can be used in transient stability and electromagnetic transient studies such as secondary arc evaluation, load rejections, and sub-synchronous resonance.
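
    The paper fits full synchronous machine models; as a simplified stand-in, the sketch below identifies the two parameters of a first-order stage G(jw) = K / (1 + jw*tau) from a sampled magnitude response, exploiting the fact that 1/|G|^2 is linear in w^2. The data and noise level are synthetic.

```python
import numpy as np

# Synthetic "measured" frequency response of a first-order stage
# G(jw) = K / (1 + jw*tau); true values are illustrative only.
K_true, tau_true = 2.0, 0.5
w = np.logspace(-1, 2, 50)
G_mag = K_true / np.sqrt(1.0 + (w * tau_true) ** 2)
G_mag *= 1.0 + 0.01 * np.random.default_rng(0).normal(size=w.size)  # 1% noise

# 1/|G|^2 = 1/K^2 + (tau^2/K^2) * w^2 is linear in w^2 -> least squares
A = np.column_stack([np.ones_like(w), w**2])
a, b = np.linalg.lstsq(A, 1.0 / G_mag**2, rcond=None)[0]
K_fit, tau_fit = 1.0 / np.sqrt(a), np.sqrt(b / a)
print(f"K = {K_fit:.3f}, tau = {tau_fit:.3f}")
```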

  3. Modelling machine ensembles with discrete event dynamical system theory

    NASA Technical Reports Server (NTRS)

    Hunter, Dan

    1990-01-01

    Discrete Event Dynamical System (DEDS) theory can be utilized as a control strategy for future complex machine ensembles that will be required for in-space construction. The control strategy involves orchestrating a set of interactive submachines to perform a set of tasks for a given set of constraints such as minimum time, minimum energy, or maximum machine utilization. Machine ensembles can be hierarchically modeled as a global model that combines the operations of the individual submachines. These submachines are represented in the global model as local models. Local models, from the perspective of DEDS theory, are described by the following: a set of system and transition states, an event alphabet that portrays actions that take a submachine from one state to another, an initial system state, a partial function that maps the current state and event alphabet to the next state, and the time required for the event to occur. Each submachine in the machine ensemble is represented by a unique local model. The global model combines the local models such that the local models can operate in parallel under the additional logistic and physical constraints due to submachine interactions. The global model is constructed from the states, events, event functions, and timing requirements of the local models. Supervisory control can be implemented in the global model by various methods such as task scheduling (open-loop control) or implementing a feedback DEDS controller (closed-loop control).
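
    A minimal rendering of the local-model definition in the abstract (states, event alphabet, initial state, partial transition function, and event durations) might look like the following Python sketch; the robot-arm example and all names are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class LocalModel:
    """A DEDS local model: states, event alphabet, initial state, a partial
    transition function, and per-event durations (illustrative sketch)."""
    states: set
    events: set
    initial: str
    delta: dict                     # (state, event) -> next state (partial)
    duration: dict                  # event -> time required
    state: str = field(init=False)

    def __post_init__(self):
        self.state = self.initial

    def step(self, event):
        key = (self.state, event)
        if key not in self.delta:
            raise ValueError(f"event {event!r} undefined in state {self.state!r}")
        self.state = self.delta[key]
        return self.duration[event]

# Toy submachine: a robot arm that loads and unloads a fixture.
arm = LocalModel(
    states={"idle", "holding"},
    events={"pick", "place"},
    initial="idle",
    delta={("idle", "pick"): "holding", ("holding", "place"): "idle"},
    duration={"pick": 2.0, "place": 1.5},
)
elapsed = arm.step("pick") + arm.step("place")
print(arm.state, elapsed)   # idle 3.5
```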

  4. Component based modelling of piezoelectric ultrasonic actuators for machining applications

    NASA Astrophysics Data System (ADS)

    Saleem, A.; Salah, M.; Ahmed, N.; Silberschmidt, V. V.

    2013-07-01

    Ultrasonically Assisted Machining (UAM) is an emerging technology that has been utilized to improve the surface finish in machining processes such as turning, milling, and drilling. In this context, piezoelectric ultrasonic transducers are used to vibrate the cutting tip while machining at a predetermined amplitude and frequency. However, modelling and simulation of these transducers is a tedious and difficult task. This is due to the inherent nonlinearities associated with smart materials. Therefore, this paper presents a component-based model of ultrasonic transducers that mimics the nonlinear behaviour of such a system. The system is decomposed into components, a mathematical model of each component is created, and the whole-system model is obtained by aggregating the component models. System parameters are identified using a finite element technique, and the model is then used to simulate the system in Matlab/SIMULINK. Various operating conditions are simulated to demonstrate the system's performance.

  5. Towards an Abstraction-Friendly Programming Model for High Productivity and High Performance Computing

    SciTech Connect

    Liao, C; Quinlan, D; Panas, T

    2009-10-06

    General purpose languages, such as C++, permit the construction of various high level abstractions to hide redundant, low level details and accelerate programming productivity. Example abstractions include functions, data structures, classes, templates and so on. However, the use of abstractions significantly impedes static code analyses and optimizations, including parallelization, applied to the abstractions' complex implementations. As a result, there is a common perception that performance is inversely proportional to the level of abstraction. On the other hand, programming large scale, possibly heterogeneous high-performance computing systems is notoriously difficult and programmers are unlikely to abandon the help of high level abstractions when solving real-world, complex problems. Therefore, the need for programming models balancing both programming productivity and execution performance has reached a new level of criticality. We are exploring a novel abstraction-friendly programming model in order to support high productivity and high performance computing. We believe that standard or domain-specific semantics associated with high level abstractions can be exploited to aid compiler analyses and optimizations, thus helping achieve high performance without losing high productivity. We encode representative abstractions and their useful semantics into an abstraction specification file. In the meantime, an accessible, source-to-source compiler infrastructure (the ROSE compiler) is used to facilitate recognizing high level abstractions and utilizing their semantics for more optimization opportunities. Our initial work has shown that recognizing abstractions and knowing their semantics within a compiler can dramatically extend the applicability of existing optimizations, including automatic parallelization. Moreover, a new set of optimizations has become possible within an abstraction-friendly and semantics-aware programming model. In the future, we will

  6. Applying model abstraction techniques to optimize monitoring networks for detecting subsurface contaminant transport

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Improving strategies for monitoring subsurface contaminant transport includes performance comparison of competing models, developed independently or obtained via model abstraction. Model comparison and parameter discrimination involve specific performance indicators selected to better understand s...

  7. Abstracting the principles of development using imaging and modeling

    PubMed Central

    Xiong, Fengzhu; Megason, Sean G.

    2015-01-01

    Summary Here we look at modern developmental biology with a focus on the relationship between different approaches of investigation. We argue that direct imaging is a powerful approach not only for obtaining descriptive information but also for model generation and testing that lead to mechanistic insights. Modeling, on the other hand, conceptualizes imaging data and provides guidance to perturbations. The inquiry progresses most efficiently when a trinity of approaches—quantitative imaging (measurement), modeling (theory) and perturbation (test) —are pursued in concert, but not when one approach is dominant. Using recent studies of the zebrafish system, we show how this combination has effectively advanced classic topics in developmental biology compared to a perturbation-centric approach. Finally, we show that interdisciplinary expertise and perhaps specialization are necessary for carrying out a systematic approach, and discuss the technical hurdles. PMID:25946995

  8. Committee of machine learning predictors of hydrological models uncertainty

    NASA Astrophysics Data System (ADS)

    Kayastha, Nagendra; Solomatine, Dimitri

    2014-05-01

    In machine-learning-based prediction of uncertainty, the results of various sampling schemes, namely Monte Carlo sampling (MCS), generalized likelihood uncertainty estimation (GLUE), Markov chain Monte Carlo (MCMC), the shuffled complex evolution Metropolis algorithm (SCEMUA), differential evolution adaptive Metropolis (DREAM), particle swarm optimization (PSO), and adaptive cluster covering (ACCO) [1], are used to build predictive models. These models predict the uncertainty (quantiles of the pdf) of a deterministic output from a hydrological model [2]. Inputs to these models are specially identified representative variables (past precipitation events and flows). The trained machine learning models are then employed to predict the model output uncertainty that is specific to the new input data. For each sampling scheme, three machine learning methods, namely artificial neural networks, model trees, and locally weighted regression, are applied to predict output uncertainties. The problem here is that different sampling algorithms result in different data sets used to train different machine learning models, which leads to several models (21 predictive uncertainty models in total). There is no clear evidence which model is the best, since there is no basis for comparison. A solution could be to form a committee of all models and to use a dynamic averaging scheme to generate the final output [3]. This approach is applied to estimate the uncertainty of streamflow simulations from the conceptual hydrological model HBV in the Nzoia catchment in Kenya. [1] N. Kayastha, D. L. Shrestha and D. P. Solomatine. Experiments with several methods of parameter uncertainty estimation in hydrological modeling. Proc. 9th Intern. Conf. on Hydroinformatics, Tianjin, China, September 2010. [2] D. L. Shrestha, N. Kayastha, D. P. Solomatine, and R. Price. Encapsulation of parametric uncertainty statistics by various predictive machine learning models: MLUE method, Journal of Hydroinformatics, in press
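
    The committee idea in the abstract can be sketched as an error-weighted dynamic average. The weighting rule below (inverse recent error) is one plausible choice for illustration, not necessarily the scheme of reference [3]; all numbers are invented.

```python
import numpy as np

def committee_predict(preds, errors, eps=1e-9):
    """Dynamic averaging of committee members (illustrative sketch).

    preds:  (n_models,) predictions for the current input
    errors: (n_models,) recent validation errors of each member
    Weights are inversely proportional to recent error, so the committee
    adapts as members' local performance changes.
    """
    w = 1.0 / (np.asarray(errors) + eps)
    return float(np.dot(w, preds) / w.sum())

# Toy usage: three uncertainty models predicting a 90% quantile width.
preds = np.array([12.0, 15.0, 13.5])
recent_errors = np.array([0.8, 2.0, 1.1])
print(committee_predict(preds, recent_errors))
```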

  9. Simulation model for Vuilleumier cycle machines and analysis of characteristics

    NASA Astrophysics Data System (ADS)

    Sekiya, Hiroshi; Terada, Fusao

    1992-11-01

    Numerical analysis using the computer is useful in predicting and evaluating the performance of the Vuilleumier (VM) cycle machine in research and development. The 3rd-order method must be employed, particularly in the case of detailed analysis of performance and design optimization. This paper describes our simulation model for the VM machine, which is based on that method. The working space is divided into thirty-eight control volumes for the VM heat pump test machine, and the fundamental equations are derived rigorously by applying the conservation equations of mass, momentum, and energy to each control volume, using a staggered mesh. These equations are solved simultaneously by the Adams-Moulton method. Then, the test machine is investigated in terms of the pressure and temperature fluctuations of the working gas, the energy flow, and the performance at each speed of revolution. The calculated results are examined in comparison with the experimental ones.

  10. Phase Transitions in a Model of Y-Molecules

    NASA Astrophysics Data System (ADS)

    Holz, Danielle; Ruth, Donovan; Toral, Raul; Gunton, James

    Immunoglobulin is a Y-shaped molecule that functions as an antibody to neutralize pathogens. In special cases where there is a high concentration of immunoglobulin molecules, self-aggregation can occur and the molecules undergo phase transitions. This prevents the molecules from completing their function. We used a simplified two-dimensional model of Y-molecules with three identical arms on a triangular lattice, simulated in the grand canonical ensemble. The molecules were permitted to be placed, removed, rotated, or moved on the lattice. Once phase coexistence was found, we used histogram reweighting and multicanonical sampling to calculate our phase diagram.

  11. Parallel phase model: a programming model for high-end parallel machines with manycores.

    SciTech Connect

    Wu, Junfeng; Wen, Zhaofang; Heroux, Michael Allen; Brightwell, Ronald Brian

    2009-04-01

    This paper presents a parallel programming model, Parallel Phase Model (PPM), for next-generation high-end parallel machines based on a distributed memory architecture consisting of a networked cluster of nodes with a large number of cores on each node. PPM has a unified high-level programming abstraction that facilitates the design and implementation of parallel algorithms to exploit both the parallelism of the many cores and the parallelism at the cluster level. The programming abstraction will be suitable for expressing both fine-grained and coarse-grained parallelism. It includes a few high-level parallel programming language constructs that can be added as an extension to an existing (sequential or parallel) programming language such as C; and the implementation of PPM also includes a light-weight runtime library that runs on top of an existing network communication software layer (e.g. MPI). Design philosophy of PPM and details of the programming abstraction are also presented. Several unstructured applications that inherently require high-volume random fine-grained data accesses have been implemented in PPM with very promising results.

  12. A Model Expert System For Machine Failure Diagnosis (MED)

    NASA Astrophysics Data System (ADS)

    Liqun, Yin

    1987-05-01

    MED is a model expert system for machine failure diagnosis. MED can help the repairer quickly determine milling machine electrical failures. The key points in MED are: a simple method to deal with the "subsequent visit" problem in machine failure diagnosis; a weighted list to intervene in the control of AGENDA, to imitate an expert's continuous thinking process and to prevent erratic questioning and problem drift caused by probabilistic reasoning; the structuralized AGENDA; and the characteristics of machine failure diagnosis and people's thinking patterns in failure diagnosis. The structuralized AGENDA offers a more powerful as well as flexible control strategy in best-first search by using AGENDA. The "subsequent visit" problem is a very complicated task to solve; it is convenient to handle it with a simple method so as not to consume too much time in urgent situations. The weighted list also gives a method to improve control in the inference of an expert system. The characteristics of machine failure diagnosis and people's thinking patterns are both important for building a machine failure diagnosis expert system. When told the failure phenomena, MED can determine failure causes through dialogue. MED is written in LISP and runs on UNIVAC 1100/10 and IBM PC/XT computers. The average diagnosis time per failure is 11 seconds of CPU time, 2 minutes of terminal operation, and 11 minutes for a skilled repairer.

  13. Hydro-abrasive jet machining modeling for computer control and optimization

    NASA Astrophysics Data System (ADS)

    Groppetti, R.; Jovane, F.

    1993-06-01

    Use of hydro-abrasive jet machining (HAJM) for machining a wide variety of materials—metals, polymers, ceramics, fiber-reinforced composites, metal-matrix composites, and bonded or hybridized materials—primarily for two- and three-dimensional cutting and also for drilling, turning, milling, and deburring, has been reported. However, the potential of this innovative process has not been explored fully. This article discusses process control, integration, and optimization of HAJM to establish a platform for the implementation of real-time adaptive control constraint (ACC), adaptive control optimization (ACO), and CAD/CAM integration. It presents the approach followed and the main results obtained during the development, implementation, automation, and integration of a HAJM cell and its computerized controller. After a critical analysis of the process variables and models reported in the literature (to identify process variables and to define a process model suitable for HAJM real-time control and optimization, to correlate process variables and parameters with machining results, and to avoid expensive and time-consuming experiments for determination of the optimal machining conditions), a process prediction and optimization model was identified and implemented. Then, the configuration of the HAJM cell, architecture, and multiprogramming operation of the controller in terms of monitoring, control, process result prediction, and process condition optimization were analyzed. This prediction and optimization model for selection of optimal machining conditions using multi-objective programming was analyzed. Based on the definition of an economy function and a productivity function, with suitable constraints relevant to required machining quality, required kerfing depth, and available resources, the model was applied to test cases based on experimental results.

  14. Closure modeling using field inversion and machine learning

    NASA Astrophysics Data System (ADS)

    Duraisamy, Karthik

    2015-11-01

    The recent acceleration in computational power and measurement resolution has made possible the availability of extreme scale simulations and data sets. In this work, a modeling paradigm that seeks to comprehensively harness large scale data is introduced, with the aim of improving closure models. Full-field inversion (in contrast to parameter estimation) is used to obtain corrective, spatially distributed functional terms, offering a route to directly address model-form errors. Once the inference has been performed over a number of problems that are representative of the deficient physics in the closure model, machine learning techniques are used to reconstruct the model corrections in terms of variables that appear in the closure model. These machine-learned functional forms are then used to augment the closure model in predictive computations. The approach is demonstrated to be able to successfully reconstruct functional corrections and yield predictions with quantified uncertainties in a range of turbulent flows.
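
    The second stage of the paradigm (reconstructing inferred corrections as functions of closure-model variables) can be sketched as a regression problem. Everything below (the feature names, the synthetic corrective field, and the choice of a random forest regressor) is an assumed stand-in for the paper's setup, shown only to make the workflow concrete.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Illustrative sketch of the second stage of field inversion + machine
# learning: given a corrective field beta(x) already inferred by inversion,
# learn it as a function of local flow features so it generalizes.
rng = np.random.default_rng(0)

# Stand-in features (e.g., a nondimensional strain rate and a wall-distance
# measure) and a stand-in inferred correction; all synthetic.
features = rng.uniform(0.0, 1.0, size=(2000, 2))
beta_inferred = 1.0 + 0.5 * np.sin(3.0 * features[:, 0]) * features[:, 1]
beta_inferred += 0.02 * rng.normal(size=2000)          # "inversion noise"

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(features, beta_inferred)

# In a predictive run, the closure term is scaled by the learned beta
# evaluated from the local features of the new flow.
new_features = np.array([[0.3, 0.7], [0.9, 0.2]])
print(model.predict(new_features))
```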

  15. Control of discrete event systems modeled as hierarchical state machines

    NASA Technical Reports Server (NTRS)

    Brave, Y.; Heymann, M.

    1991-01-01

    The authors examine a class of discrete event systems (DESs) modeled as asynchronous hierarchical state machines (AHSMs). For this class of DESs, they provide an efficient method for testing reachability, which is an essential step in many control synthesis procedures. This method utilizes the asynchronous nature and hierarchical structure of AHSMs, thereby illustrating the advantage of the AHSM representation as compared with its equivalent (flat) state machine representation. An application of the method is presented where an online minimally restrictive solution is proposed for the problem of maintaining a controlled AHSM within prescribed legal bounds.
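
    Reachability testing, the step the paper accelerates by exploiting the AHSM structure, reduces on a flat machine to a graph search. The sketch below shows the baseline breadth-first version against which the hierarchical method would be compared; the example machine is invented.

```python
from collections import deque

def reachable(transitions, start):
    """Reachable-state set by breadth-first search (illustrative sketch).

    transitions: dict state -> iterable of (event, next_state)
    A real AHSM implementation would exploit the hierarchy, e.g. by testing
    superstates before expanding their children; here the machine is flat.
    """
    seen, frontier = {start}, deque([start])
    while frontier:
        s = frontier.popleft()
        for _event, t in transitions.get(s, ()):
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return seen

machine = {
    "off":  [("power_on", "init")],
    "init": [("ready", "run"), ("fault", "error")],
    "run":  [("stop", "off")],
}
print(reachable(machine, "off"))   # {'off', 'init', 'run', 'error'}
```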

  16. Analytical model for force prediction when machining metal matrix composites

    NASA Astrophysics Data System (ADS)

    Sikder, Snahungshu

    Metal Matrix Composites (MMCs) offer several thermo-mechanical advantages over standard materials and alloys which make them better candidates in different applications. Their light weight, high stiffness, and strength have attracted several industries such as automotive, aerospace, and defence for their wide range of products. However, the widespread application of Metal Matrix Composites is still a challenge for industry. The hard and abrasive nature of the reinforcement particles is responsible for rapid tool wear and high machining costs. Fracture and debonding of the abrasive reinforcement particles are the considerable damage modes that directly influence the tool performance. It is therefore important to find highly effective ways to machine MMCs and to predict the forces generated during machining, since this helps in selecting appropriate tools and ultimately saves both money and time. This research presents an analytical force model for predicting the forces generated during machining of Metal Matrix Composites. In estimating the generated forces, several aspects of cutting mechanics were considered, including shearing force, ploughing force, and particle fracture force. The chip formation force was obtained from classical orthogonal metal cutting mechanics and the Johnson-Cook equation. The ploughing force was formulated, while the fracture force was calculated from slip line field theory and the Griffith theory of failure. The predicted results were compared with previously measured data. The results showed very good agreement between the theoretically predicted and experimentally measured cutting forces.

  17. Applying Machine Trust Models to Forensic Investigations

    NASA Astrophysics Data System (ADS)

    Wojcik, Marika; Venter, Hein; Eloff, Jan; Olivier, Martin

    Digital forensics involves the identification, preservation, analysis and presentation of electronic evidence for use in legal proceedings. In the presence of contradictory evidence, forensic investigators need a means to determine which evidence can be trusted. This is particularly true in a trust model environment where computerised agents may make trust-based decisions that influence interactions within the system. This paper focuses on the analysis of evidence in trust-based environments and the determination of the degree to which evidence can be trusted. The trust model proposed in this work may be implemented in a tool for conducting trust-based forensic investigations. The model takes into account the trust environment and parameters that influence interactions in a computer network being investigated. Also, it allows for crimes to be reenacted to create more substantial evidentiary proof.

  18. Three dimensional CAD model of the Ignitor machine

    NASA Astrophysics Data System (ADS)

    Orlandi, S.; Zanaboni, P.; Macco, A.; Sioli, V.; Risso, E.

    1998-11-01

    The final, global product of all the structural and thermomechanical design activities is a complete three-dimensional CAD (AutoCAD and Intergraph Design Review) model of the IGNITOR machine. With this powerful tool, any interface, modification, or upgrading of the machine design is managed as an integrated part of the general effort aimed at the construction of the Ignitor facility. The activities that are underway to complete the design of the core of the experiment, which will be described here, concern: the cryogenic cooling system; the radial press, the center post, and the mechanical supports (legs) of the entire machine; and the inner mechanical supports of major components such as the plasma chamber and the outer poloidal field coils.

  19. Global ocean modeling on the Connection Machine

    SciTech Connect

    Smith, R.D.; Dukowicz, J.K.; Malone, R.C.

    1993-10-01

    The authors have developed a version of the Bryan-Cox-Semtner ocean model (Bryan, 1969; Semtner, 1976; Cox, 1984) for massively parallel computers. Such models are three-dimensional, Eulerian models that use latitude and longitude as the horizontal spherical coordinates and fixed depth levels as the vertical coordinate. The incompressible Navier-Stokes equations, with a turbulent eddy viscosity, and mass continuity equation are solved, subject to the hydrostatic and Boussinesq approximations. The traditional model formulation uses a rigid-lid approximation (vertical velocity = 0 at the ocean surface) to eliminate fast surface waves. These waves would otherwise require that a very short time step be used in numerical simulations, which would greatly increase the computational cost. To solve the equations with the rigid-lid assumption, the equations of motion are split into two parts: a set of two-dimensional "barotropic" equations describing the vertically-averaged flow, and a set of three-dimensional "baroclinic" equations describing temperature, salinity and deviations of the horizontal velocities from the vertically-averaged flow.

  20. Thermal-mechanical modeling of laser ablation hybrid machining

    NASA Astrophysics Data System (ADS)

    Matin, Mohammad Kaiser

    2001-08-01

    Hard, brittle and wear-resistant materials like ceramics pose a problem when being machined using conventional machining processes. Machining ceramics even with a diamond cutting tool is very difficult and costly. Near net-shape processes, like laser evaporation, produce micro-cracks that require extra finishing. Thus it is anticipated that ceramic machining will have to continue to be explored with newly developed techniques before ceramic materials become commonplace. This numerical investigation results from numerical simulations of the thermal and mechanical modeling of simultaneous material removal from hard-to-machine materials using both laser ablation and conventional tool cutting, utilizing the finite element method. The model is formulated using a two-dimensional, planar, computational domain. The process simulation, acronymed LAHM (Laser Ablation Hybrid Machining), uses laser energy for two purposes. The first purpose is to remove the material by ablation. The second purpose is to heat the unremoved material that lies below the ablated material in order to "soften" it. The softened material is then simultaneously removed by conventional machining processes. The complete solution determines the temperature distribution and stress contours within the material and tracks the moving boundary that occurs due to material ablation. The temperature distribution is used to determine the distance below the phase change surface where sufficient "softening" has occurred, so that a cutting tool may be used to remove additional material. The model incorporated for tracking the ablative surface does not assume an isothermal melt phase (e.g. Stefan problem) for laser ablation. Both surface absorption and volume absorption of laser energy as a function of depth have been considered in the models. LAHM, from the thermal and mechanical point of view, is a complex machining process involving large deformations at high strain rates, thermal effects of the laser, removal of

  1. An Expectation-Maximization Method for Calibrating Synchronous Machine Models

    SciTech Connect

    Meng, Da; Zhou, Ning; Lu, Shuai; Lin, Guang

    2013-07-21

    The accuracy of a power system dynamic model is essential to its secure and efficient operation. Lower confidence in model accuracy usually leads to conservative operation and lowers asset usage. To improve model accuracy, this paper proposes an expectation-maximization (EM) method to calibrate the synchronous machine model using phasor measurement unit (PMU) data. First, an extended Kalman filter (EKF) is applied to estimate the dynamic states using measurement data. Then, the parameters are calculated based on the estimated states using the maximum likelihood estimation (MLE) method. The EM method iterates over the preceding two steps to improve estimation accuracy. The proposed EM method's performance is evaluated using a single-machine infinite bus system and compared with a method where both states and parameters are estimated using an EKF. Sensitivity studies of the parameter calibration using the EM method are also presented to show the robustness of the proposed method for different levels of measurement noise and initial parameter uncertainty.
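
    A toy version of the EM loop (E-step state estimation, M-step maximum-likelihood parameter update) can be written for a scalar linear-Gaussian model, where the exact Kalman filter and RTS smoother play the role of the paper's EKF. The model, noise levels, and data below are synthetic, and only the single dynamic parameter a is calibrated; q and r are held at their true values for brevity.

```python
import numpy as np

# Toy EM calibration of one parameter `a` in the scalar model
#   x[t] = a*x[t-1] + w,  y[t] = x[t] + v,  w ~ N(0,q), v ~ N(0,r).
rng = np.random.default_rng(0)
a_true, q, r, T = 0.9, 0.1, 0.2, 400
x = np.zeros(T)
for t in range(1, T):
    x[t] = a_true * x[t-1] + rng.normal(0, np.sqrt(q))
y = x + rng.normal(0, np.sqrt(r), T)

a = 0.5                                    # deliberately poor initial guess
for _ in range(30):
    # E-step, forward pass: Kalman filter
    xf = np.zeros(T); Pf = np.zeros(T); xp = np.zeros(T); Pp = np.zeros(T)
    xf[0], Pf[0] = y[0], r
    for t in range(1, T):
        xp[t], Pp[t] = a * xf[t-1], a * a * Pf[t-1] + q
        K = Pp[t] / (Pp[t] + r)
        xf[t] = xp[t] + K * (y[t] - xp[t])
        Pf[t] = (1.0 - K) * Pp[t]
    # E-step, backward pass: RTS smoother with lag-one covariances
    xs, Ps = xf.copy(), Pf.copy()
    P_lag = np.zeros(T)                    # P_lag[t] = Cov(x[t], x[t-1] | Y)
    for t in range(T - 2, -1, -1):
        J = Pf[t] * a / Pp[t+1]            # smoother gain
        xs[t] = xf[t] + J * (xs[t+1] - xp[t+1])
        Ps[t] = Pf[t] + J * J * (Ps[t+1] - Pp[t+1])
        P_lag[t+1] = J * Ps[t+1]
    # M-step: closed-form MLE for `a` given the smoothed moments
    num = np.sum(P_lag[1:] + xs[1:] * xs[:-1])
    den = np.sum(Ps[:-1] + xs[:-1] ** 2)
    a = num / den
print(f"calibrated a = {a:.3f} (true {a_true})")
```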

  2. Problems in modeling man machine control behavior in biodynamic environments

    NASA Technical Reports Server (NTRS)

    Jex, H. R.

    1972-01-01

    Reviewed are some current problems in modeling man-machine control behavior in a biodynamic environment. It is given in two parts: (1) a review of the models which are appropriate for manual control behavior and the added elements necessary to deal with biodynamic interfaces; and (2) a review of some biodynamic interface pilot/vehicle problems which have occurred, been solved, or need to be solved.

  3. Abstract Model of the SATS Concept of Operations: Initial Results and Recommendations

    NASA Technical Reports Server (NTRS)

    Dowek, Gilles; Munoz, Cesar; Carreno, Victor A.

    2004-01-01

    An abstract mathematical model of the concept of operations for the Small Aircraft Transportation System (SATS) is presented. The Concept of Operations consists of several procedures that describe nominal operations for SATS. Several safety properties of the system are proven using formal techniques. The final goal of the verification effort is to show that, under nominal operations, aircraft are safely separated. The abstract model was written and formally verified in the Prototype Verification System (PVS).

  4. Bilingual Cluster Based Models for Statistical Machine Translation

    NASA Astrophysics Data System (ADS)

    Yamamoto, Hirofumi; Sumita, Eiichiro

    We propose a domain-specific model for statistical machine translation. It is well known that domain-specific language models perform well in automatic speech recognition. We show that domain-specific language and translation models also benefit statistical machine translation. However, there are two problems with using domain-specific models. The first is the data sparseness problem. We employ an adaptation technique to overcome this problem. The second issue is domain prediction. In order to perform adaptation, the domain must be provided; however, in many cases, the domain is not known or changes dynamically. For these cases, not only the translation target sentence but also the domain must be predicted. This paper focuses on the domain prediction problem for statistical machine translation. In the proposed method, a bilingual training corpus is automatically clustered into sub-corpora. Each sub-corpus is deemed to be a domain. The domain of a source sentence is predicted by using its similarity to the sub-corpora. The predicted domain (sub-corpus) specific language and translation models are then used for the translation decoding. This approach gave an improvement of 2.7 in BLEU score on the IWSLT05 Japanese to English evaluation corpus (improving the score from 52.4 to 55.1). This is a substantial gain and indicates the validity of the proposed bilingual cluster based models.
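
    The domain-prediction step can be sketched as nearest-centroid matching between a source sentence and the sub-corpora. The bag-of-words vectors and cosine similarity below are illustrative choices, not necessarily the similarity used in the paper, and the toy clusters are invented.

```python
import numpy as np
from collections import Counter

def bow(text, vocab):
    """Bag-of-words count vector over a fixed vocabulary."""
    counts = Counter(text.split())
    return np.array([counts[w] for w in vocab], dtype=float)

def predict_domain(sentence, centroids, vocab):
    """Return the index of the sub-corpus (domain) whose centroid is most
    similar, by cosine similarity, to the source sentence."""
    v = bow(sentence, vocab)
    sims = [np.dot(v, c) / ((np.linalg.norm(v) * np.linalg.norm(c)) or 1.0)
            for c in centroids]
    return int(np.argmax(sims))

# Toy sub-corpora standing in for automatically clustered domains.
vocab = ["ticket", "hotel", "meeting", "contract", "train"]
centroids = [bow("ticket hotel train ticket", vocab),     # "travel" cluster
             bow("meeting contract meeting", vocab)]      # "business" cluster
print(predict_domain("which train to the hotel", centroids, vocab))  # -> 0
```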

  5. Knowledge in formation: The machine-modeled frame of mind

    SciTech Connect

    Shore, B.

    1996-12-31

    Artificial Intelligence researchers have used the digital computer as a model for the human mind in two different ways. Most obviously, the computer has been used as a tool on which simulations of thinking-as-programs are developed and tested. Less obvious, but of great significance, is the use of the computer as a conceptual model for the human mind. This essay traces the sources of this machine-modeled conception of cognition in a great variety of social institutions and everyday experience, treating them as "cultural models" which have contributed to the naturalness of the mind-as-machine paradigm for many Americans. The roots of these models antedate the actual development of modern computers, and take the form of a "modularity schema" that has shaped the cultural and cognitive landscape of modernity. The essay concludes with a consideration of some of the cognitive consequences of this extension of machine logic into modern life, and proposes an important distinction between information processing models of thought and meaning-making in how human cognition is conceptualized.

  6. Multiple measurement models of articulated arm coordinate measuring machines

    NASA Astrophysics Data System (ADS)

    Zheng, Dateng; Xiao, Zhongyue; Xia, Xiang

    2015-09-01

    Existing articulated arm coordinate measuring machines (AACMMs) that rely on a single measurement model tend to suffer from low measurement accuracy, because the whole sampling space is so large that the calibration parameters become unstable. To compensate for the deficiency of a single measurement model, multiple measurement models are built using the Denavit-Hartenberg notation, homemade standard rod components are used as a calibration tool, and the Levenberg-Marquardt calibration algorithm is applied to solve for the structural parameters in the measurement models. During the tests of the multiple measurement models, sample areas were selected in two situations. It was found that the sigma value of the measurement errors obtained with one measurement model (0.0834 mm) is nearly twice that of the multiple measurement models (0.0431 mm) in the same sample area, while in a different sample area, the sigma value obtained with the multiple measurement models (0.0540 mm) is about 40% of that with one measurement model (0.1373 mm). The preliminary results suggest that the measurement accuracy of an AACMM using multiple measurement models is superior to that of the existing machine with one measurement model. This paper proposes multiple measurement models to improve the measurement accuracy of AACMMs without increasing any hardware cost.
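
    The measurement models referred to in the abstract are kinematic chains built from Denavit-Hartenberg parameters. The sketch below shows the standard per-joint transform and its chaining into a probe-tip position; the three-joint arm and its parameters are made up, and the calibration itself (Levenberg-Marquardt fitting of d, a, alpha) is omitted.

```python
import numpy as np

def dh_matrix(theta, d, a, alpha):
    """Homogeneous transform for one joint from classic Denavit-Hartenberg
    parameters (theta, d, a, alpha)."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def probe_position(joint_angles, dh_params):
    """Chain the per-joint transforms to get the probe tip position.
    dh_params holds the (d, a, alpha) structural parameters, i.e. the
    quantities a calibration would estimate."""
    T = np.eye(4)
    for theta, (d, a, alpha) in zip(joint_angles, dh_params):
        T = T @ dh_matrix(theta, d, a, alpha)
    return T[:3, 3]

# Toy 3-joint arm with made-up structural parameters.
params = [(0.2, 0.0, np.pi / 2), (0.0, 0.4, 0.0), (0.0, 0.3, 0.0)]
print(probe_position([0.1, 0.5, -0.3], params))
```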

  7. Tracer transport in soils and shallow groundwater: model abstraction with modern tools

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The vadose zone controls contaminant transport from the surface to groundwater, and modeling transport in the vadose zone has become a burgeoning field. Exceedingly complex models of subsurface contaminant transport are often inefficient. Model abstraction is the methodology for reducing the complexity of a

  8. Stochastic Local Interaction (SLI) model: Bridging machine learning and geostatistics

    NASA Astrophysics Data System (ADS)

    Hristopulos, Dionissios T.

    2015-12-01

    Machine learning and geostatistics are powerful mathematical frameworks for modeling spatial data. Both approaches, however, suffer from poor scaling of the required computational resources for large data applications. We present the Stochastic Local Interaction (SLI) model, which employs a local representation to improve computational efficiency. SLI combines geostatistics and machine learning with ideas from statistical physics and computational geometry. It is based on a joint probability density function defined by an energy functional which involves local interactions implemented by means of kernel functions with adaptive local kernel bandwidths. SLI is expressed in terms of an explicit, typically sparse, precision (inverse covariance) matrix. This representation leads to a semi-analytical expression for interpolation (prediction), which is valid in any number of dimensions and avoids the computationally costly covariance matrix inversion.
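
    The semi-analytical prediction step enabled by an explicit precision matrix can be sketched directly: for a Gaussian field with precision Q, the conditional mean at the unknown sites solves a linear system in the Q_uu block, with no covariance matrix inversion. The dense toy matrix below is only a stand-in for the sparse SLI precision matrix; the chain example is invented.

```python
import numpy as np

def precision_predict(Q, x, observed, unknown, mu=None):
    """Gaussian prediction from a precision (inverse covariance) matrix Q.

    The conditional mean at the unknown sites solves
        Q_uu * (x_u - mu_u) = -Q_uo * (x_o - mu_o),
    a sparse linear system when Q is sparse, which is what makes the
    precision-based formulation computationally attractive.
    """
    mu = np.zeros(Q.shape[0]) if mu is None else mu
    Quu = Q[np.ix_(unknown, unknown)]
    Quo = Q[np.ix_(unknown, observed)]
    return mu[unknown] - np.linalg.solve(Quu, Quo @ (x[observed] - mu[observed]))

# Toy 1-D chain precision matrix (nearest-neighbour local interactions).
n = 6
Q = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
x = np.array([1.0, 0.0, 0.0, 0.0, 0.0, -1.0])   # values at the observed ends
print(precision_predict(Q, x, observed=[0, 5], unknown=[1, 2, 3, 4]))
# -> [0.6, 0.2, -0.2, -0.6]: linear interpolation between the two ends
```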

  9. 97. View of International Business Machine (IBM) digital computer model ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    97. View of International Business Machine (IBM) digital computer model 7090 magnetic core installation, international telephone and telegraph (ITT) Artic Services Inc., Official photograph BMEWS site II, Clear, AK, by unknown photographer, 17 September 1965, BMEWS, clear as negative no. A-6604. - Clear Air Force Station, Ballistic Missile Early Warning System Site II, One mile west of mile marker 293.5 on Parks Highway, 5 miles southwest of Anderson, Anderson, Denali Borough, AK

  10. Modeling the Swift BAT Trigger Algorithm with Machine Learning

    NASA Astrophysics Data System (ADS)

    Graff, Philip B.; Lien, Amy Y.; Baker, John G.; Sakamoto, Takanori

    2016-02-01

    To draw inferences about gamma-ray burst (GRB) source populations based on Swift observations, it is essential to understand the detection efficiency of the Swift burst alert telescope (BAT). This study considers the problem of modeling the Swift/BAT triggering algorithm for long GRBs, a computationally expensive procedure, and models it using machine learning algorithms. A large sample of simulated GRBs from Lien et al. is used to train various models: random forests, boosted decision trees (with AdaBoost), support vector machines, and artificial neural networks. The best models have accuracies of ≳97% (≲3% error), which is a significant improvement on a cut in GRB flux, which has an accuracy of 89.6% (10.4% error). These models are then used to measure the detection efficiency of Swift as a function of redshift z, which is used to perform Bayesian parameter estimation on the GRB rate distribution. We find a local GRB rate density of $n_0 \sim 0.48^{+0.41}_{-0.23}\,\mathrm{Gpc^{-3}\,yr^{-1}}$ with power-law indices of $n_1 \sim 1.7^{+0.6}_{-0.5}$ and $n_2 \sim -5.9^{+5.7}_{-0.1}$ for GRBs above and below a break point of $z_1 \sim 6.8^{+2.8}_{-3.2}$. This methodology is able to improve upon earlier studies by more accurately modeling Swift detection and using this for fully Bayesian model fitting.
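
    The workflow (train a fast classifier on expensive trigger simulations, then reuse it to map detection efficiency) can be sketched as follows. The feature names, the synthetic "trigger" rule, and the random-forest choice are stand-ins invented for illustration; the actual study trains on the simulated GRB sample of Lien et al.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Sketch of emulating an expensive trigger pipeline with a classifier.
rng = np.random.default_rng(0)
n = 20000
flux = rng.lognormal(mean=0.0, sigma=1.0, size=n)       # peak photon flux
z = rng.uniform(0.1, 10.0, size=n)                      # redshift
angle = rng.uniform(0.0, 60.0, size=n)                  # incidence angle [deg]
X = np.column_stack([flux, z, angle])

# Synthetic stand-in for the expensive simulation's triggered/not outcome.
detected = (flux / (1.0 + 0.02 * angle) > 0.8 + 0.05 * z).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, detected, random_state=0)
clf = RandomForestClassifier(n_estimators=300, random_state=0)
clf.fit(X_tr, y_tr)
print("holdout accuracy:", clf.score(X_te, y_te))

# The trained emulator can now stand in for the trigger code, e.g. to
# estimate detection efficiency on a fine grid of redshifts.
grid = np.column_stack([np.full(100, 1.0),
                        np.linspace(0.1, 10.0, 100),
                        np.full(100, 20.0)])
print("efficiency vs z (first 5):", clf.predict_proba(grid)[:5, 1])
```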

  11. Editorial: Mathematical Methods and Modeling in Machine Fault Diagnosis

    DOE PAGESBeta

    Yan, Ruqiang; Chen, Xuefeng; Li, Weihua; Sheng, Shuangwen

    2014-12-18

    Modern mathematics has commonly been utilized as an effective tool to model mechanical equipment so that their dynamic characteristics can be studied analytically. This will help identify potential failures of mechanical equipment by observing change in the equipment’s dynamic parameters. On the other hand, dynamic signals are also important and provide reliable information about the equipment’s working status. Modern mathematics has also provided us with a systematic way to design and implement various signal processing methods, which are used to analyze these dynamic signals, and to enhance intrinsic signal components that are directly related to machine failures. This special issue is aimed at stimulating not only new insights on mathematical methods for modeling but also recently developed signal processing methods, such as sparse decomposition with potential applications in machine fault diagnosis. Finally, the papers included in this special issue provide a glimpse into some of the research and applications in the field of machine fault diagnosis through applications of the modern mathematical methods.

  12. Editorial: Mathematical Methods and Modeling in Machine Fault Diagnosis

    SciTech Connect

    Yan, Ruqiang; Chen, Xuefeng; Li, Weihua; Sheng, Shuangwen

    2014-12-18

    Modern mathematics has commonly been utilized as an effective tool to model mechanical equipment so that their dynamic characteristics can be studied analytically. This will help identify potential failures of mechanical equipment by observing change in the equipment’s dynamic parameters. On the other hand, dynamic signals are also important and provide reliable information about the equipment’s working status. Modern mathematics has also provided us with a systematic way to design and implement various signal processing methods, which are used to analyze these dynamic signals, and to enhance intrinsic signal components that are directly related to machine failures. This special issue is aimed at stimulating not only new insights on mathematical methods for modeling but also recently developed signal processing methods, such as sparse decomposition with potential applications in machine fault diagnosis. Finally, the papers included in this special issue provide a glimpse into some of the research and applications in the field of machine fault diagnosis through applications of the modern mathematical methods.

  13. "Machine" consciousness and "artificial" thought: an operational architectonics model guided approach.

    PubMed

    Fingelkurts, Andrew A; Fingelkurts, Alexander A; Neves, Carlos F H

    2012-01-01

    Instead of the low-level neurophysiological mimicry and exploratory programming methods commonly used in the machine consciousness field, the hierarchical operational architectonics (OA) framework of brain and mind functioning proposes an alternative conceptual-theoretical framework as a new direction in the area of model-driven machine (robot) consciousness engineering. The unified brain-mind theoretical OA model explicitly captures (though in an informal way) the basic essence of brain functional architecture, which indeed constitutes a theory of consciousness. The OA describes the neurophysiological basis of the phenomenal level of brain organization. In this context, the problem of producing man-made "machine" consciousness and "artificial" thought is a matter of duplicating all levels of the operational architectonics hierarchy (with its inherent rules and mechanisms) found in the brain electromagnetic field. We hope that the conceptual-theoretical framework described in this paper will stimulate the interest of mathematicians and/or computer scientists in abstracting and formalizing the principles of the hierarchy of brain operations which are the building blocks for phenomenal consciousness and thought. PMID:21130079

  14. Model-Driven Engineering of Machine Executable Code

    NASA Astrophysics Data System (ADS)

    Eichberg, Michael; Monperrus, Martin; Kloppenburg, Sven; Mezini, Mira

    Implementing static analyses of machine-level executable code is labor intensive and complex. We show how to leverage model-driven engineering to facilitate the design and implementation of programs doing static analyses. Further, we report on important lessons learned about the benefits and drawbacks of the following technologies: using the Scala programming language as the target of code generation, using XML-Schema to express a metamodel, and using XSLT to implement (a) transformations and (b) a lint-like tool. Finally, we report on the use of Prolog for writing model transformations.

  15. Modeling of Passive Forces of Machine Tool Covers

    NASA Astrophysics Data System (ADS)

    Kolar, Petr; Hudec, Jan; Sulitka, Matej

    The passive forces acting against the drive force are phenomena that influence the dynamic properties and precision of linear axes equipped with feed drives. Covers are one of the important sources of passive forces in machine tools. The paper describes virtual evaluation of cover passive forces using a complex model of the cover. The model is able to compute the interaction between flexible cover segments and the sealing wiper. The result is the deformation of the cover segments and wipers, which is used, together with the measured friction coefficient, to compute the cover's total passive force. This resulting passive force depends on the cover position. A comparison of computational results and measurements on the real cover is presented in the paper.

  16. Calibrating Building Energy Models Using Supercomputer Trained Machine Learning Agents

    SciTech Connect

    Sanyal, Jibonananda; New, Joshua Ryan; Edwards, Richard; Parker, Lynne Edwards

    2014-01-01

    Building Energy Modeling (BEM) is an approach to model the energy usage in buildings for design and retrofit purposes. EnergyPlus is the flagship Department of Energy software that performs BEM for different types of buildings. The input to EnergyPlus often comprises a few thousand parameters, which have to be calibrated manually by an expert for realistic energy modeling. This makes calibration challenging and expensive, rendering building energy modeling infeasible for smaller projects. In this paper, we describe the Autotune research effort, which employs machine learning algorithms to generate agents for the different kinds of standard reference buildings in the U.S. building stock. The parametric space and the variety of building locations and types make this a challenging computational problem necessitating the use of supercomputers. Millions of EnergyPlus simulations are run on supercomputers and subsequently used to train machine learning algorithms to generate agents. These agents, once created, can then run in a fraction of the time, thereby allowing cost-effective calibration of building models.

  17. Abstraction of Models for Pitting and Crevice Corrosion of Drip Shield and Waste Package Outer Barrier

    SciTech Connect

    K. Mon

    2001-08-29

    This analysis/model report (AMR) was conducted in response to written work direction (CRWMS M and O 1999a). ICN 01 of this AMR was developed following guidelines provided in TWP-MGR-MD-000004 REV 01, ''Technical Work Plan for: Integrated Management of Technical Product Input Department'' (BSC 2001, Addendum B). The purpose and scope of this AMR is to review and analyze upstream process-level models (CRWMS M and O 2000a and CRWMS M and O 2000b) and information relevant to pitting and crevice corrosion degradation of waste package outer barrier (Alloy 22) and drip shield (Titanium Grade 7) materials, and to develop abstractions of the important processes in a form that is suitable for input to the WAPDEG analysis for long-term degradation of the waste package outer barrier and drip shield in the repository. The abstraction is developed in a manner that ensures consistency with the process-level models and information and captures the essential behavior of the processes represented. Also considered in the model abstraction are the probable range of exposure conditions in emplacement drifts and local exposure conditions on drip shield and waste package surfaces. The approach, method, and assumptions that are employed in the model abstraction are documented and justified.

  18. Technical Work Plan for: Near Field Environment: Engineered System: Radionuclide Transport Abstraction Model Report

    SciTech Connect

    J.D. Schreiber

    2006-12-08

    This technical work plan (TWP) describes work activities to be performed by the Near-Field Environment Team. The objective of the work scope covered by this TWP is to generate Revision 03 of EBS Radionuclide Transport Abstraction, referred to herein as the radionuclide transport abstraction (RTA) report. The RTA report is being revised primarily to address condition reports (CRs), to address issues identified by the Independent Validation Review Team (IVRT), to address the potential impact of transport, aging, and disposal (TAD) canister design on transport models, and to ensure integration with other models that are closely associated with the RTA report and being developed or revised in other analysis/model reports in response to IVRT comments. The RTA report will be developed in accordance with the most current version of LP-SIII.10Q-BSC and will reflect current administrative procedures (LP-3.15Q-BSC, ''Managing Technical Product Inputs''; LP-SIII.2Q-BSC, ''Qualification of Unqualified Data''; etc.), and will develop related Document Input Reference System (DIRS) reports and data qualifications as applicable in accordance with prevailing procedures. The RTA report consists of three models: the engineered barrier system (EBS) flow model, the EBS transport model, and the EBS-unsaturated zone (UZ) interface model. The flux-splitting submodel in the EBS flow model will change, so the EBS flow model will be validated again. The EBS transport model and validation of the model will be substantially revised in Revision 03 of the RTA report, which is the main subject of this TWP. The EBS-UZ interface model may be changed in Revision 03 of the RTA report due to changes in the conceptualization of the UZ transport abstraction model (a particle tracker transport model based on the discrete fracture transfer function will be used instead of the dual-continuum transport model previously used). Validation of the EBS-UZ interface model will be revised to be consistent with

  19. A dynamic model for material removal in ultrasonic machining

    SciTech Connect

    Wang, Z.Y.; Rojurkar, K.P.

    1995-12-31

    This paper proposes a dynamic model of the material removal mechanism and provides a relationship between material removal rate and operating parameters in ultrasonic machining (USM). The model incorporates the effects of high values of vibration amplitude, frequency, and grit size. The non-uniformity of the abrasive grits is also considered by using a probability distribution for the diameter of the abrasive particles. The model accurately predicts the increase in material removal rate with increasing amplitude and frequency. It can also be used to determine the decline in material removal rate, after a certain maximum is attained, for further increases in vibration amplitude and frequency. Equations representing the dynamic normal stress and elastic displacement of the work-piece caused by the impact of an arbitrary grit are used to develop a model that accounts for the dynamic impact of grits on the work-piece. The analysis shows that there is an effective speed zone for the tool; within this range, grits in the cutting zone obtain the maximum momentum and energy from the tool. During the machining process, only those grits whose sizes fall within the range of the effective speed zone abrade the work-piece effectively.
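
    As a toy illustration of the grit-size argument above (not the paper's equations), the Monte Carlo sketch below draws grit diameters from a normal distribution and lets only grits inside a hypothetical "effective" band remove material; every constant is an illustrative placeholder.

        import numpy as np

        rng = np.random.default_rng(1)

        # Grit diameters follow a probability distribution rather than one size.
        d = rng.normal(50e-6, 8e-6, 100_000)            # diameters [m]

        # Only grits in an assumed effective band pick up maximum momentum
        # from the tool and abrade the work-piece.
        effective = d[(d > 45e-6) & (d < 60e-6)]

        # Assume removed volume per impact scales with amplitude and d**2
        # (indentation cross-section); frequency sets impacts per second.
        amplitude, frequency = 20e-6, 20e3              # [m], [Hz]
        impacts_per_s = frequency * len(effective) / len(d)
        mrr = impacts_per_s * (0.1 * amplitude * effective**2).mean()
        print(f"effective grits: {len(effective)/len(d):.1%}, toy MRR: {mrr:.2e} m^3/s")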

  20. Geochemistry Model Abstraction and Sensitivity Studies for the 21 PWR CSNF Waste Package

    SciTech Connect

    P. Bernot; S. LeStrange; E. Thomas; K. Zarrabi; S. Arthur

    2002-10-29

    The CSNF geochemistry model abstraction, as directed by the TWP (BSC 2002b), was developed to provide regression analysis of EQ6 cases to obtain abstracted values of pH (and in some cases HCO{sub 3}{sup -} concentration) for use in the Configuration Generator Model. The pH of the system is the controlling factor over U mineralization, CSNF degradation rate, and HCO{sub 3}{sup -} concentration in solution. The abstraction encompasses a large variety of combinations for the degradation rates of materials. The ''base case'' used EQ6 simulations examining differing steel/alloy corrosion rates, drip rates, and percent fuel exposure. Other values such as the pH/HCO{sub 3}{sup -} dependent fuel corrosion rate and the corrosion rate of A516 were kept constant. Relationships were developed for pH as a function of these differing rates to be used in the calculation of total C and, subsequently, the fuel rate. An additional refinement to the abstraction was the addition of abstracted pH values for cases with limited O{sub 2} for waste package corrosion and a flushing fluid other than J-13, which had been used in all EQ6 calculations up to this point. These abstractions also used EQ6 simulations with varying combinations of material corrosion rates to abstract the pH (and HCO{sub 3}{sup -} in the limited-O{sub 2} cases) as a function of WP material corrosion rates. The goodness of fit for most of the abstracted values was above an R{sup 2} of 0.9. Those below this value occurred at the very beginning of WP corrosion, when large variations in the system pH are observed. However, the significance of the F-statistic for all the abstractions showed that the variable relationships are significant. For the abstraction, an analysis of the minerals that may form the ''sludge'' in the waste package was also presented. This analysis indicates that a number of different iron and aluminum minerals may form in the waste package other than those

  1. Support Vector Machines for Petrophysical Modelling and Lithoclassification

    NASA Astrophysics Data System (ADS)

    Al-Anazi, Ammal Fannoush Khalifah

    2011-12-01

    Given the increasing challenges of oil and gas production from partially depleted conventional or unconventional reservoirs, reservoir characterization is a key element of the reservoir development workflow. Reservoir characterization impacts well placement, injection and production strategies, and field management. Reservoir characterization projects point and line data onto a large three-dimensional volume. The relationship between variables, e.g. porosity and permeability, is often established by regression, yet the complexities between measured variables often lead to poor correlation coefficients between the regressed variables. Recent advances in machine learning methods have provided attractive alternatives for constructing interpretation models of rock properties in heterogeneous reservoirs. Here, Support Vector Machines (SVMs), a class of learning machine formulated to output regression models and classifiers of competitive generalization capability, have been explored to determine their capability for establishing the relationship, both in regression and in classification, between reservoir rock properties. This thesis documents research on the capability of SVMs to model petrophysical and elastic properties in heterogeneous sandstone and carbonate reservoirs. Specifically, the capabilities of SVM regression and classification have been examined and compared to neural network-based methods, namely multilayered neural networks, radial basis function neural networks, general regression neural networks, probabilistic neural networks, and linear discriminant analysis. The petrophysical properties that have been evaluated include porosity, permeability, Poisson's ratio and Young's modulus. Statistical error analysis reveals that the SVM method yields comparable or superior predictions of petrophysical and elastic rock properties and classification of the lithology compared to neural networks. The SVM method also shows uniform prediction capability under the
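
    A minimal sketch of the regression side of such a comparison, on synthetic data: an RBF-kernel SVM fit to a noisy porosity/log-permeability trend. The synthetic relationship and the hyperparameters are assumptions for illustration, not values from the thesis.

        import numpy as np
        from sklearn.svm import SVR
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import r2_score

        rng = np.random.default_rng(0)

        # Synthetic core data: a noisy log-linear porosity-permeability trend
        # standing in for measurements from a heterogeneous reservoir.
        porosity = rng.uniform(0.05, 0.30, 500)
        log_perm = 12.0 * porosity - 2.0 + rng.normal(0.0, 0.4, 500)

        X_tr, X_te, y_tr, y_te = train_test_split(
            porosity.reshape(-1, 1), log_perm, random_state=0)

        # Scaling matters for kernel methods; epsilon sets the SVR's error tube.
        model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
        model.fit(X_tr, y_tr)
        print("held-out R^2:", r2_score(y_te, model.predict(X_te)))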

  2. Transferable Atomic Multipole Machine Learning Models for Small Organic Molecules.

    PubMed

    Bereau, Tristan; Andrienko, Denis; von Lilienfeld, O Anatole

    2015-07-14

    Accurate representation of the molecular electrostatic potential, which is often expanded in distributed multipole moments, is crucial for an efficient evaluation of intermolecular interactions. Here we introduce a machine learning model for multipole coefficients of atom types H, C, O, N, S, F, and Cl in any molecular conformation. The model is trained on quantum-chemical results for atoms in varying chemical environments drawn from thousands of organic molecules. Multipoles in systems with neutral, cationic, and anionic molecular charge states are treated with individual models. The models' predictive accuracy and applicability are illustrated by evaluating intermolecular interaction energies of nearly 1,000 dimers and the cohesive energy of the benzene crystal. PMID:26575759

  3. Modelling fate and transport of pesticides in river catchments with drinking water abstractions

    NASA Astrophysics Data System (ADS)

    Desmet, Nele; Seuntjens, Piet; Touchant, Kaatje

    2010-05-01

    When drinking water is abstracted from surface water, the presence of pesticides may have a large impact on purification costs. In order to respect imposed thresholds at points of drinking water abstraction in a river catchment, sustainable pesticide management strategies may be required in certain areas. To improve management strategies, a sound understanding of the emission routes, transport, environmental fate, and sources of pesticides is needed. However, the pesticide monitoring data on which measures are founded are generally scarce. Data scarcity hampers interpretation and decision making. In such cases, a modelling approach can be very useful as a tool to obtain complementary information. Modelling makes it possible to take into account temporal and spatial variability in both discharges and concentrations. In the Netherlands, the Meuse river is used for drinking water abstraction, and the government imposes the European drinking water standard for individual pesticides (0.1 µg/L) on surface waters at points of drinking water abstraction. The reported glyphosate concentrations in the Meuse river frequently exceed the standard, and this increases the demand for targeted measures. In this study, a model of the Meuse river was developed to estimate the contribution of influxes at the Dutch-Belgian border to the concentration levels detected at the drinking water intake 250 km downstream, and to assess the contribution of the tributaries to the glyphosate loads. The effects of glyphosate decay on environmental fate were considered as well. Our results show that a river model makes it possible to assess the fate and transport of pesticides in a catchment in spite of monitoring data scarcity. Furthermore, the model provides insight into the contribution of different sub-basins to the pollution level. The modelling results indicate that the effect of local measures to reduce pesticide concentrations in the river at points of drinking water
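
    The core fate calculation can be reduced to a few lines: first-order decay of a border influx over its travel time to the intake, followed by mixing with a tributary. All numbers below (velocity, decay rate, concentrations, flows) are hypothetical placeholders, not Meuse data.

        import numpy as np

        # First-order decay over travel time: C(x) = C0 * exp(-k * x / v).
        c0 = 0.25          # glyphosate at the border [ug/L]
        v = 0.8            # mean flow velocity [m/s]
        k = 2e-7           # first-order decay rate [1/s]
        x = 250e3          # border-to-intake distance [m]

        travel_time = x / v                       # [s]
        c_intake = c0 * np.exp(-k * travel_time)  # concentration at the intake
        print(f"travel time {travel_time/86400:.1f} d, intake conc {c_intake:.3f} ug/L")

        # Tributary loads add along the way; a simple mass balance on flows Q:
        q_main, q_trib = 200.0, 30.0              # [m^3/s]
        c_trib = 0.05                             # tributary concentration [ug/L]
        c_mixed = (q_main * c_intake + q_trib * c_trib) / (q_main + q_trib)
        print(f"after tributary mixing: {c_mixed:.3f} ug/L")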

  4. Modeling of Unsteady Three-dimensional Flows in Multistage Machines

    NASA Technical Reports Server (NTRS)

    Hall, Kenneth C.; Pratt, Edmund T., Jr.; Kurkov, Anatole (Technical Monitor)

    2003-01-01

    Despite many years of development, the accurate and reliable prediction of unsteady aerodynamic forces acting on turbomachinery blades remains less than satisfactory, especially when viewed next to the great success investigators have had in predicting steady flows. Hall and Silkowski (1997) proposed that one of the main reasons for the discrepancy between theory and experiment and/or industrial experience is that many current unsteady aerodynamic theories model a single blade row in an infinitely long duct, ignoring potentially important multistage effects. However, unsteady flows are made up of acoustic, vortical, and entropic waves. These waves provide a mechanism for the rotors and stators of multistage machines to communicate with one another. In other words, wave behavior makes unsteady flows fundamentally a multistage (and three-dimensional) phenomenon. In this research program, we have as goals (1) the development of computationally efficient computer models of the unsteady aerodynamic response of blade rows embedded in a multistage machine (these models will ultimately be capable of analyzing three-dimensional viscous transonic flows), and (2) the use of these computer codes to study a number of important multistage phenomena.

  5. Modeling the Swift BAT Trigger Algorithm with Machine Learning

    NASA Technical Reports Server (NTRS)

    Graff, Philip B.; Lien, Amy Y.; Baker, John G.; Sakamoto, Takanori

    2015-01-01

    To draw inferences about gamma-ray burst (GRB) source populations based on Swift observations, it is essential to understand the detection efficiency of the Swift burst alert telescope (BAT). This study considers the problem of modeling the Swift BAT triggering algorithm for long GRBs, a computationally expensive procedure, and models it using machine learning algorithms. A large sample of simulated GRBs from Lien et al. (2014) is used to train various models: random forests, boosted decision trees (with AdaBoost), support vector machines, and artificial neural networks. The best models have accuracies of approximately greater than 97% (approximately less than 3% error), which is a significant improvement on a cut in GRB flux, which has an accuracy of 89.6% (10.4% error). These models are then used to measure the detection efficiency of Swift as a function of redshift z, which is used to perform Bayesian parameter estimation on the GRB rate distribution. We find a local GRB rate density of eta(sub 0) approximately 0.48(+0.41/-0.23) Gpc(exp -3) yr(exp -1) with power-law indices of eta(sub 1) approximately 1.7(+0.6/-0.5) and eta(sub 2) approximately -5.9(+5.7/-0.1) for GRBs above and below a break point of z(sub 1) approximately 6.8(+2.8/-3.2). This methodology is able to improve upon earlier studies by more accurately modeling Swift detection and using this for fully Bayesian model fitting. The code used in this analysis is publicly available online.
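
    The sketch below mirrors the study's baseline comparison on synthetic data: a simple flux cut versus a random forest trained on several burst properties. The features and the synthetic trigger rule are invented for illustration and merely stand in for the Lien et al. simulation sample.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)

        # Synthetic burst properties and a "triggered" label that depends on
        # more than flux alone (hypothetical rule, for illustration only).
        n = 20_000
        flux = rng.lognormal(0.0, 1.0, n)
        duration = rng.lognormal(1.0, 0.5, n)
        background = rng.normal(1.0, 0.2, n)
        triggered = (flux * np.sqrt(duration) / background > 2.0).astype(int)

        X = np.column_stack([flux, duration, background])
        X_tr, X_te, y_tr, y_te = train_test_split(X, triggered, random_state=0)

        # Baseline: a single cut in flux. Model: a random forest on all features.
        cut = (X_te[:, 0] > np.median(flux)).astype(int)
        rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
        print("flux-cut accuracy:     ", (cut == y_te).mean())
        print("random-forest accuracy:", (rf.predict(X_te) == y_te).mean())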

  6. Applications and modelling of bulk HTSs in brushless ac machines

    NASA Astrophysics Data System (ADS)

    Barnes, G. J.; McCulloch, M. D.; Dew-Hughes, D.

    2000-06-01

    The use of high temperature superconducting material in its bulk form for engineering applications is attractive due to the large power densities that can be achieved. In brushless electrical machines, there are essentially four properties that can be exploited: their hysteretic nature, their flux shielding properties, their ability to trap large flux densities, and their ability to produce levitation. These properties translate to hysteresis machines, reluctance machines, trapped-field synchronous machines, and linear motors respectively. Each one of these machines is addressed separately, and computer simulations that reveal the current and field distributions within the machines are used to explain their operation.

  7. Machine vision algorithm generation using human visual models

    NASA Astrophysics Data System (ADS)

    Daley, Wayne D.; Doll, Theodore J.; McWhorter, Shane W.; Wasilewski, Anthony A.

    1999-01-01

    The design of robust machine vision algorithms is one of the most difficult parts of developing and integrating automated systems. Historically, most of the techniques have been developed using ad hoc methodologies. This problem is more severe in the area of natural/biological products, where it has been difficult to capture and model the natural variability to be expected in the products. This presents difficulties in performing quality and process control in the meat, fruit, and vegetable industries. While some systems have been introduced, they do not adequately address the wide range of needs. This paper will propose an algorithm development technique that utilizes models of the human visual system. It will address the subset of problems that humans perform well but that have proven difficult to automate with standard machine vision techniques. The basis of the technique evaluation will be the Georgia Tech Vision model. This approach demonstrates a high level of accuracy in its ability to solve difficult problems. This paper will present the approach, the results, and possibilities for implementation.

  8. Global atmospheric and ocean modeling on the connection machine

    SciTech Connect

    Atlas, S.R.

    1993-12-01

    This paper describes the high-level architecture of two parallel global climate models: an atmospheric model based on the Geophysical Fluid Dynamics Laboratory (GFDL) SKYHI model, and an ocean model descended from the Bryan-Cox-Semtner ocean general circulation model. These parallel models are being developed as part of a long-term research collaboration between Los Alamos National Laboratory (LANL) and the GFDL. The goal of this collaboration is to develop parallel global climate models which are modular in structure, portable across a wide variety of machine architectures and programming paradigms, and provide an appropriate starting point for a fully coupled model. Several design considerations have emerged as central to achieving these goals. These include the expression of the models in terms of mathematical primitives such as stencil operators, to facilitate performance optimization on different computational platforms; the isolation of communication from computation to allow flexible implementation of a single code under message-passing or data parallel programming paradigms; and judicious memory management to achieve modularity without memory explosion costs.
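
    The two design points named above, stencil primitives and communication isolated from computation, can be sketched as follows; the Laplacian stencil, the no-op halo exchange, and the toy diffusion step are illustrative assumptions, not SKYHI or Bryan-Cox-Semtner code.

        import numpy as np

        # A 5-point Laplacian expressed as a stencil primitive. Keeping the
        # numerics in one pure function lets it be retargeted (NumPy here;
        # message-passing or data-parallel back ends elsewhere) without
        # touching the model code.
        def laplacian(field):
            return (np.roll(field, 1, 0) + np.roll(field, -1, 0) +
                    np.roll(field, 1, 1) + np.roll(field, -1, 1) - 4.0 * field)

        # Communication isolated from computation: on one process this is a
        # no-op; under MPI it would swap boundary strips between subdomains.
        def exchange_halo(field):
            return field  # placeholder for the parallel implementation

        def step(temperature, dt=0.1):
            temperature = exchange_halo(temperature)
            return temperature + dt * laplacian(temperature)

        field = np.zeros((64, 64))
        field[32, 32] = 1.0          # point heat source
        for _ in range(100):
            field = step(field)
        print("total heat conserved:", field.sum())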

  9. Modelling the structure and function of enzymes by machine learning.

    PubMed

    Sternberg, M J; Lewis, R A; King, R D; Muggleton, S

    1992-01-01

    A machine learning program, GOLEM, has been applied to two problems: (1) the prediction of protein secondary structure from sequence and (2) modelling a quantitative structure-activity relationship in drug design. GOLEM takes as input observations and combines them with background knowledge of chemistry to yield rules expressed as stereochemical principles for prediction. The secondary structure prediction was explored on the alpha/alpha class of proteins; on an unrelated test set it yielded 81% accuracy. The rules from GOLEM defined patterns of residues forming alpha-helices. The system studied for drug design was the activities of trimethoprim analogues binding to E. coli dihydrofolate reductase. The GOLEM rules were a better model than standard regression approaches. More importantly, these rules described the chemical properties of the enzyme-binding site that were in broad agreement with the crystallographic structure. PMID:1290938

  10. Modeling the meaning of words: neural correlates of abstract and concrete noun processing.

    PubMed

    Mårtensson, Frida; Roll, Mikael; Apt, Pia; Horne, Merle

    2011-01-01

    We present a model relating the analysis of abstract and concrete word meaning in terms of semantic features and contextual frames within a general framework of neurocognitive information processing. The approach taken here assumes concrete noun meanings to be intimately related to sensory feature constellations. These features are processed by posterior sensory regions of the brain, e.g. the occipital lobe, which handles visual information. The interpretation of abstract nouns, however, is likely to be more dependent on semantic frames and linguistic context. A greater involvement of more anteriorly located, perisylvian brain areas has previously been found for the processing of abstract words. In the present study, a word association test was carried out in order to compare semantic processing in healthy subjects (n=12) with subjects with aphasia due to perisylvian lesions (n=3) and occipital lesions (n=1). The word associations were coded into different categories depending on their semantic content. A double dissociation was found: compared to the controls, the perisylvian aphasic subjects had problems associating to abstract nouns and produced fewer semantic frame-based associations, whereas the occipital aphasic subject showed disturbances in concrete noun processing and made fewer semantic feature-based associations. PMID:22237493

  11. Kinetic modeling of α-hydrogen abstractions from unsaturated and saturated oxygenate compounds by hydrogen atoms.

    PubMed

    Paraskevas, Paschalis D; Sabbe, Maarten K; Reyniers, Marie-Françoise; Papayannakos, Nikos G; Marin, Guy B

    2014-10-01

    Hydrogen-abstraction reactions play a significant role in thermal biomass conversion processes, as well as regular gasification, pyrolysis, or combustion. In this work, a group additivity model is constructed that allows prediction of reaction rates and Arrhenius parameters of hydrogen abstractions by hydrogen atoms from alcohols, ethers, esters, peroxides, ketones, aldehydes, acids, and diketones in a broad temperature range (300-2000 K). A training set of 60 reactions was developed with rate coefficients and Arrhenius parameters calculated by the CBS-QB3 method in the high-pressure limit with tunneling corrections using Eckart tunneling coefficients. From this set of reactions, 15 group additive values were derived for the forward and the reverse reaction, 4 referring to primary and 11 to secondary contributions. The accuracy of the model is validated upon an ab initio and an experimental validation set of 19 and 21 reaction rates, respectively, showing that reaction rates can be predicted with a mean factor of deviation of 2 for the ab initio and 3 for the experimental values. Hence, this work illustrates that the developed group additive model can be reliably applied for the accurate prediction of kinetics of α-hydrogen abstractions by hydrogen atoms from a broad range of oxygenates. PMID:25209711
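
    A minimal sketch of how a group additivity model assembles a modified Arrhenius rate coefficient from a reference reaction plus group contributions; every numerical value in it is an illustrative placeholder, not one of the paper's fitted group additive values.

        import numpy as np

        R = 8.314  # J/(mol K)

        # Reference reaction parameters plus per-group corrections (all values
        # below are hypothetical, for illustration only).
        reference = {"logA": 8.5, "n": 1.7, "Ea": 31e3}   # A-units, Ea in J/mol
        group_corrections = {
            "alpha-OH":  {"logA": -0.3, "Ea": -8e3},      # alcohol alpha C-H
            "alpha-C=O": {"logA": -0.1, "Ea": -12e3},     # carbonyl alpha C-H
        }

        def rate_coefficient(groups, T):
            """Modified Arrhenius k(T) = A * T^n * exp(-Ea / RT) with
            A and Ea assembled additively from group contributions."""
            logA = reference["logA"] + sum(group_corrections[g]["logA"] for g in groups)
            Ea = reference["Ea"] + sum(group_corrections[g]["Ea"] for g in groups)
            return 10**logA * T**reference["n"] * np.exp(-Ea / (R * T))

        for T in (300.0, 1000.0, 2000.0):
            print(f"{T:6.0f} K: k = {rate_coefficient(['alpha-OH'], T):.3e}")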

  12. An Initial-Abstraction, Constant-Loss Model for Unit Hydrograph Modeling for Applicable Watersheds in Texas

    USGS Publications Warehouse

    Asquith, William H.; Roussel, Meghan C.

    2007-01-01

    Estimation of representative hydrographs from design storms, which are known as design hydrographs, provides for cost-effective, risk-mitigated design of drainage structures such as bridges, culverts, roadways, and other infrastructure. During 2001-07, the U.S. Geological Survey (USGS), in cooperation with the Texas Department of Transportation, investigated runoff hydrographs, design storms, unit hydrographs, and watershed-loss models to enhance design hydrograph estimation in Texas. Design hydrographs ideally should mimic the general volume, peak, and shape of observed runoff hydrographs. Design hydrographs commonly are estimated in part by unit hydrographs. A unit hydrograph is defined as the runoff hydrograph that results from a unit pulse of excess rainfall uniformly distributed over the watershed at a constant rate for a specific duration. A time-distributed, watershed-loss model is required for modeling by unit hydrographs. This report develops a specific time-distributed, watershed-loss model known as an initial-abstraction, constant-loss model. For this watershed-loss model, a watershed is conceptualized to have the capacity to store or abstract an absolute depth of rainfall at and near the beginning of a storm. Depths of total rainfall less than this initial abstraction do not produce runoff. The watershed also is conceptualized to have the capacity to remove rainfall at a constant rate (loss) after the initial abstraction is satisfied. Additional rainfall inputs after the initial abstraction is satisfied contribute to runoff if the rainfall rate (intensity) is larger than the constant loss. The initial-abstraction, constant-loss model thus is a two-parameter model. The initial-abstraction, constant-loss model is investigated through detailed computational and statistical analysis of observed rainfall and runoff data for 92 USGS streamflow-gaging stations (watersheds) in Texas with contributing drainage areas from 0.26 to 166 square miles. The analysis is
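
    Because the initial-abstraction, constant-loss model is fully specified by its two parameters, it reduces to a short loop over the rainfall hyetograph; the sketch below uses hypothetical parameter values and storm depths.

        def excess_rainfall(rain, ia=0.5, cl=0.2):
            """Two-parameter initial-abstraction (ia), constant-loss (cl) model.

            rain -- rainfall depths per time step (e.g. inches per interval)
            ia   -- depth stored/abstracted at the start of the storm
            cl   -- constant loss rate per time step once ia is satisfied
            """
            remaining_ia, excess = ia, []
            for depth in rain:
                abstracted = min(depth, remaining_ia)   # fill the abstraction first
                remaining_ia -= abstracted
                depth -= abstracted
                excess.append(max(depth - cl, 0.0))     # then subtract constant loss
            return excess

        # Example storm: no runoff until 0.5 in is abstracted, then runoff only
        # when intensity exceeds the 0.2 in/step constant loss.
        storm = [0.1, 0.3, 0.6, 0.4, 0.1]
        print(excess_rainfall(storm))  # [0.0, 0.0, 0.3, 0.2, 0.0]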

  13. Modeling of tool path for the CNC sheet cutting machines

    NASA Astrophysics Data System (ADS)

    Petunin, Aleksandr A.

    2015-11-01

    In the paper the problem of tool path optimization for CNC (Computer Numerical Control) cutting machines is considered. A classification of the cutting techniques is offered. We also propose a new classification of tool path problems. The tasks of cost minimization and time minimization for the standard cutting technique (Continuous Cutting Problem, CCP) and for one of the non-standard cutting techniques (Segment Continuous Cutting Problem, SCCP) are formalized. We show that the optimization tasks can be interpreted as discrete optimization problems (a generalized travelling salesman problem with additional constraints, GTSP). The formalization of some constraints for these tasks is described. For the solution of the GTSP we propose to use the mathematical model of Prof. Chentsov, based on the concept of a megalopolis and dynamic programming.
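
    The routing core of the tool path problem can be sketched with a greedy nearest-neighbor pass over pierce points; this is only a baseline heuristic on hypothetical coordinates, not the GTSP/dynamic-programming model of Prof. Chentsov described in the paper, which also handles precedence constraints and a choice of pierce point per contour.

        import math

        # Each closed contour to cut is reduced to one pierce point here; the
        # task is to order the visits so idle (rapid traverse) length is small.
        pierce_points = [(0, 0), (8, 1), (3, 7), (9, 6), (1, 4)]  # hypothetical sheet

        def tour_greedy(points, start=(0, 0)):
            left, pos, order, length = list(points), start, [], 0.0
            while left:
                nxt = min(left, key=lambda p: math.dist(pos, p))  # nearest next cut
                length += math.dist(pos, nxt)
                order.append(nxt)
                left.remove(nxt)
                pos = nxt
            return order, length

        order, length = tour_greedy(pierce_points)
        print("visit order:", order, "idle length:", round(length, 2))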

  14. The abstract geometry modeling language (AgML): experience and road map toward eRHIC

    NASA Astrophysics Data System (ADS)

    Webb, Jason; Lauret, Jerome; Perevoztchikov, Victor

    2014-06-01

    The STAR experiment has adopted an Abstract Geometry Modeling Language (AgML) as the primary description of our geometry model. AgML establishes a level of abstraction, decoupling the definition of the detector from the software libraries used to create the concrete geometry model. Thus, AgML allows us to support both our legacy GEANT 3 simulation application and our ROOT/TGeo based reconstruction software from a single source, which is demonstrably self-consistent. While AgML was developed primarily as a tool to migrate away from our legacy FORTRAN-era geometry codes, it also provides a rich syntax geared towards the rapid development of detector models. AgML has been successfully employed by users to quickly develop and integrate the descriptions of several new detectors in the RHIC/STAR experiment, including the Forward GEM Tracker (FGT) and Heavy Flavor Tracker (HFT) upgrades installed in STAR for the 2012 and 2013 runs. AgML has furthermore been heavily utilized to study future upgrades to the STAR detector as it prepares for the eRHIC era. With its track record of practical use in a live experiment in mind, we present the status, lessons learned and future of the AgML language as well as our experience in bringing the code into our production and development environments. We will discuss the path toward eRHIC and pushing the current model to accommodate detector misalignment and high precision physics.

  15. Identifying crop vulnerability to groundwater abstraction: modelling and expert knowledge in a GIS.

    PubMed

    Procter, Chris; Comber, Lex; Betson, Mark; Buckley, Dennis; Frost, Andy; Lyons, Hester; Riding, Alison; Voyce, Kevin

    2006-11-01

    Water use is expected to increase and climate change scenarios indicate the need for more frequent water abstraction. Abstracting groundwater may have a detrimental effect on soil moisture availability for crop growth and yields. This work presents an elegant and robust method for identifying zones of crop vulnerability to abstraction. Archive groundwater level datasets were used to generate a composite groundwater surface that was subtracted from a digital terrain model. The result was the depth from surface to groundwater and identified areas underlain by shallow groundwater. Knowledge from an expert agronomist was used to define classes of risk in terms of their depth below ground level. Combining information on the permeability of geological drift types further refined the assessment of the risk of crop growth vulnerability. The nature of the mapped output is one that is easy to communicate to the intended farming audience because of the general familiarity of mapped information. Such Geographic Information System (GIS)-based products can play a significant role in the characterisation of catchments under the EU Water Framework Directive especially in the process of public liaison that is fundamental to the setting of priorities for management change. The creation of a baseline allows the impact of future increased water abstraction rates to be modelled and the vulnerability maps are in a format that can be readily understood by the various stakeholders. This methodology can readily be extended to encompass additional data layers and for a range of groundwater vulnerability issues including water resources, ecological impacts, nitrate and phosphorus. PMID:16963176
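
    The raster arithmetic at the heart of the method (a composite groundwater surface subtracted from the terrain model, then classified by expert depth thresholds) can be sketched as follows; the rasters and the class break points are illustrative assumptions, not the study's data.

        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical 100 x 100 rasters: a digital terrain model and a composite
        # groundwater surface interpolated from borehole archives (both metres).
        dtm = 50.0 + 5.0 * rng.random((100, 100))
        groundwater = 47.0 + 4.0 * rng.random((100, 100))

        depth = dtm - groundwater   # depth from ground surface to groundwater

        # Expert-defined vulnerability classes by depth below ground level
        # (break points here are illustrative, not the agronomist's values).
        risk = np.select(
            [depth < 1.0, depth < 3.0],   # shallow groundwater -> crops sensitive
            ["high", "moderate"],
            default="low",
        )
        values, counts = np.unique(risk, return_counts=True)
        print(dict(zip(values, counts)))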

  16. Modeling the Virtual Machine Launching Overhead under Fermicloud

    SciTech Connect

    Garzoglio, Gabriele; Wu, Hao; Ren, Shangping; Timm, Steven; Bernabeu, Gerard; Noh, Seo-Young

    2014-11-12

    FermiCloud is a private cloud developed by the Fermi National Accelerator Laboratory for scientific workflows. The Cloud Bursting module enables FermiCloud, when more computational resources are needed, to automatically launch virtual machines on available resources such as public clouds. One of the main challenges in developing the cloud bursting module is deciding when and where to launch a VM so that all resources are most effectively and efficiently utilized and system performance is optimized. However, based on FermiCloud's operational data, the VM launching overhead is not constant: it varies with physical resource (CPU, memory, I/O device) utilization at the time a VM is launched. Hence, to make judicious decisions as to when and where a VM should be launched, a VM launch overhead reference model is needed. This paper develops a VM launch overhead reference model based on operational data we have obtained on FermiCloud and uses the reference model to guide the cloud bursting process.
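
    A hedged sketch of what such a reference model might look like: fit launch overhead as a function of host utilization from operational records, then use the fit to rank candidate hosts. The synthetic records and the linear-plus-quadratic form are assumptions, not FermiCloud's actual model.

        import numpy as np

        rng = np.random.default_rng(0)

        # Synthetic operational records: host utilization at launch time and
        # the observed VM launch overhead (all values invented).
        n = 500
        cpu, mem, io = rng.random(n), rng.random(n), rng.random(n)
        overhead = 20 + 15 * cpu + 10 * mem + 40 * io**2 + rng.normal(0, 2, n)  # [s]

        # Least-squares fit of overhead ~ f(cpu, mem, io).
        X = np.column_stack([np.ones(n), cpu, mem, io, io**2])
        coef, *_ = np.linalg.lstsq(X, overhead, rcond=None)

        def predicted_overhead(c, m, i):
            return coef @ np.array([1.0, c, m, i, i * i])

        # Cloud-bursting decision: prefer the host with the smallest predicted cost.
        hosts = [(0.2, 0.3, 0.1), (0.9, 0.8, 0.7)]   # (cpu, mem, io) per host
        print(min(hosts, key=lambda h: predicted_overhead(*h)))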

  17. Machine learning and cosmological simulations - I. Semi-analytical models

    NASA Astrophysics Data System (ADS)

    Kamdar, Harshil M.; Turk, Matthew J.; Brunner, Robert J.

    2016-01-01

    We present a new exploratory framework to model galaxy formation and evolution in a hierarchical Universe by using machine learning (ML). Our motivations are two-fold: (1) presenting a new, promising technique to study galaxy formation, and (2) quantitatively analysing the extent of the influence of dark matter halo properties on galaxies in the backdrop of semi-analytical models (SAMs). We use the influential Millennium Simulation and the corresponding Munich SAM to train and test various sophisticated ML algorithms (k-Nearest Neighbors, decision trees, random forests, and extremely randomized trees). By using only essential dark matter halo physical properties for haloes of M > 10^12 M⊙ and a partial merger tree, our model predicts the hot gas mass, cold gas mass, bulge mass, total stellar mass, black hole mass and cooling radius at z = 0 for each central galaxy in a dark matter halo for the Millennium run. Our results provide a unique and powerful phenomenological framework to explore the galaxy-halo connection that is built upon SAMs and demonstrably place ML as a promising and a computationally efficient tool to study small-scale structure formation.

  18. Predictive Surface Roughness Model for End Milling of Machinable Glass Ceramic

    NASA Astrophysics Data System (ADS)

    Mohan Reddy, M.; Gorin, Alexander; Abou-El-Hossein, K. A.

    2011-02-01

    Machinable glass ceramic is an advanced ceramic attractive for producing high-accuracy miniaturized components for many applications in industries such as aerospace, electronics, biomedical, automotive, environmental, and communications, due to its wear resistance, high hardness, high compressive strength, good corrosion resistance, and excellent high-temperature properties. Many research works have been conducted in the last few years to investigate the performance of different machining operations when processing various advanced ceramics. Micro end-milling is one of the machining methods used to meet the demand for micro parts. Selecting proper machining parameters is important for obtaining good surface finish during machining of machinable glass ceramic. Therefore, this paper describes the development of a predictive model for the surface roughness of machinable glass ceramic in terms of speed and feed rate in micro end-milling operations.

  19. Access, Equity, and Opportunity. Women in Machining: A Model Program.

    ERIC Educational Resources Information Center

    Warner, Heather

    The Women in Machining (WIM) program is a Machine Action Project (MAP) initiative that was developed in response to a local skilled metalworking labor shortage, despite a virtual absence of women and people of color from area shops. The project identified post-war stereotypes and other barriers that must be addressed if women are to have an equal…

  1. Parameterizing Phrase Based Statistical Machine Translation Models: An Analytic Study

    ERIC Educational Resources Information Center

    Cer, Daniel

    2011-01-01

    The goal of this dissertation is to determine the best way to train a statistical machine translation system. I first develop a state-of-the-art machine translation system called Phrasal and then use it to examine a wide variety of potential learning algorithms and optimization criteria and arrive at two very surprising results. First, despite the…

  2. Modeling Stochastic Kinetics of Molecular Machines at Multiple Levels: From Molecules to Modules

    PubMed Central

    Chowdhury, Debashish

    2013-01-01

    A molecular machine is either a single macromolecule or a macromolecular complex. In spite of the striking superficial similarities between these natural nanomachines and their man-made macroscopic counterparts, there are crucial differences. Molecular machines in a living cell operate stochastically in an isothermal environment far from thermodynamic equilibrium. In this mini-review we present a catalog of the molecular machines and an inventory of the essential toolbox for theoretically modeling these machines. The tool kits include (1) nonequilibrium statistical-physics techniques for modeling machines and machine-driven processes; and (2) statistical-inference methods for reverse engineering a functional machine from the empirical data. The cell is often likened to a microfactory in which the machineries are organized in modular fashion; each module consists of strongly coupled multiple machines, but different modules interact weakly with each other. This microfactory has its own automated supply chain and delivery system. Buoyed by the success achieved in modeling individual molecular machines, we advocate integration of these models in the near future to develop models of functional modules. A system-level description of the cell from the perspective of molecular machinery (the mechanome) is likely to emerge from further integrations that we envisage here. PMID:23746505

  3. Alternative Models of Service, Centralized Machine Operations. Phase II Report. Volume II.

    ERIC Educational Resources Information Center

    Technology Management Corp., Alexandria, VA.

    A study was conducted to determine if the centralization of playback machine operations for the national free library program would be feasible, economical, and desirable. An alternative model of playback machine services was constructed and compared with existing network operations considering both cost and service. The alternative model was…

  4. AZOrange - High performance open source machine learning for QSAR modeling in a graphical programming environment

    PubMed Central

    2011-01-01

    Background Machine learning has a vast range of applications. In particular, advanced machine learning methods are routinely and increasingly used in quantitative structure activity relationship (QSAR) modeling. QSAR data sets often encompass tens of thousands of compounds, and the size of proprietary, as well as public, data sets is rapidly growing. Hence, there is a demand for computationally efficient machine learning algorithms that are easily available to researchers without extensive machine learning knowledge. In upholding the scientific principles of transparency and reproducibility, Open Source solutions are increasingly acknowledged by regulatory authorities. Thus, an Open Source, state-of-the-art, high performance machine learning platform, interfacing multiple customized machine learning algorithms for both graphical programming and scripting, to be used for large scale development of QSAR models of regulatory quality, is of great value to the QSAR community. Results This paper describes the implementation of the Open Source machine learning package AZOrange. AZOrange is specially developed to support batch generation of QSAR models, providing the full work flow of QSAR modeling, from descriptor calculation to automated model building, validation and selection. The automated work flow relies upon the customization of the machine learning algorithms and a generalized, automated model hyper-parameter selection process. Several high performance machine learning algorithms are interfaced for efficient, data set specific selection of the statistical method, promoting model accuracy. Using the high performance machine learning algorithms of AZOrange does not require programming knowledge, as flexible applications can be created not only at a scripting level but also in a graphical programming environment. Conclusions AZOrange is a step towards meeting the needs for an Open Source high performance machine learning platform, supporting the efficient development of

  5. Modelling the sensitivity of river reaches to water abstraction: RAPHSA- a hydroecology tool for environmental managers

    NASA Astrophysics Data System (ADS)

    Klaar, Megan; Laize, Cedric; Maddock, Ian; Acreman, Mike; Tanner, Kath; Peet, Sarah

    2014-05-01

    A key challenge for environmental managers is the determination of environmental flows that allow a maximum yield of water resources to be taken from surface and sub-surface sources while ensuring sufficient water remains in the environment to support biota and habitats. It has long been known that sensitivity to changes in water levels resulting from river and groundwater abstractions varies between rivers. Whilst assessment at the catchment scale is ideal for determining broad pressures on water resources and ecosystems, assessment of the sensitivity of reaches to changes in flow has previously been done on a site-by-site basis, often with the application of detailed but time-consuming techniques (e.g. PHABSIM). While this is appropriate for a limited number of sites, it is costly in money and time and therefore not appropriate for application at the national level required by responsible licensing authorities. To address this need, the Environment Agency (England) is developing an operational tool to predict relationships between physical habitat and flow which may be applied by field staff to rapidly determine the sensitivity of physical habitat to flow alteration for use in water resource management planning. An initial model of river sensitivity to abstraction (defined as the change in physical habitat related to changes in river discharge) was developed using site characteristics and data from 66 individual PHABSIM surveys throughout the UK (Booker & Acreman, 2008). By applying multivariate multiple linear regression to these data to define habitat availability-flow curves, using resource intensity as a predictor variable, the model (known as RAPHSA: Rapid Assessment of Physical Habitat Sensitivity to Abstraction) is able to take a risk-based approach to model certainty. Site-specific information gathered through desk study or a variable amount of field work can be used to predict the shape of the habitat-flow curves, with the

  6. Experimental "evolutional machines": mathematical and experimental modeling of biological evolution

    NASA Astrophysics Data System (ADS)

    Brilkov, A. V.; Loginov, I. A.; Morozova, E. V.; Shuvaev, A. N.; Pechurkin, N. S.

    Experimentalists possess model systems of two major types for the study of evolution: continuous cultivation in the chemostat, and long-term development in closed laboratory microecosystems with several trophic structures. If evolutionary changes, or transfers from one steady state to another as a result of changing qualitative properties of the system, take place in such systems, the main characteristics of these evolutionary steps can be measured. By now this has not been realized from the point of view of methodology, though a lot of data on the work of both types of evolutionary machines has been collected. In our experiments with long-term continuous cultivation, we used bacterial strains containing, on plasmids, the cloned genes of bioluminescence and green fluorescent protein, whose expression level can be easily changed and controlled. In spite of the apparent kinetic diversity of evolutionary transfers in the two types of systems, the general mechanisms characterizing the increase of the energy flow used by populations of the primary producer can be revealed through their study. According to the energy approach, at a spontaneous transfer from one steady state to another, e.g. in the process of microevolution, competition, or selection, heat dissipation (characterizing the rate of entropy growth) should increase rather than decrease or remain steady, as usually believed. The results of our observations of experimental evolution require further development of the thermodynamic theory of open and closed biological systems and further study of the general mechanisms of biological

  7. INVENTORY ABSTRACTION

    SciTech Connect

    G. Ragan

    2001-12-19

    The purpose of the inventory abstraction, which has been prepared in accordance with a technical work plan (CRWMS M&O 2000e for ICN 02 of the present analysis, and BSC 2001e for ICN 03 of the present analysis), is to: (1) Interpret the results of a series of relative dose calculations (CRWMS M&O 2000c, 2000f). (2) Recommend, including a basis thereof, a set of radionuclides that should be modeled in the Total System Performance Assessment in Support of the Site Recommendation (TSPA-SR) and the Total System Performance Assessment in Support of the Final Environmental Impact Statement (TSPA-FEIS). (3) Provide initial radionuclide inventories for the TSPA-SR and TSPA-FEIS models. (4) Answer the U.S. Nuclear Regulatory Commission (NRC)'s Issue Resolution Status Report ''Key Technical Issue: Container Life and Source Term'' (CLST IRSR) key technical issue (KTI): ''The rate at which radionuclides in SNF [spent nuclear fuel] are released from the EBS [engineered barrier system] through the oxidation and dissolution of spent fuel'' (NRC 1999, Subissue 3). The scope of the radionuclide screening analysis encompasses the period from 100 years to 10,000 years after the potential repository at Yucca Mountain is sealed for scenarios involving the breach of a waste package and subsequent degradation of the waste form as required for the TSPA-SR calculations. By extending the time period considered to one million years after repository closure, recommendations are made for the TSPA-FEIS. The waste forms included in the inventory abstraction are Commercial Spent Nuclear Fuel (CSNF), DOE Spent Nuclear Fuel (DSNF), High-Level Waste (HLW), naval Spent Nuclear Fuel (SNF), and U.S. Department of Energy (DOE) plutonium waste. The intended use of this analysis is in TSPA-SR and TSPA-FEIS. Based on the recommendations made here, models for release, transport, and possibly exposure will be developed for the isotopes that would be the highest contributors to the dose given a release to the

  8. A real-time model of the synchronous machine based on digital signal processors

    SciTech Connect

    Do, Vanque; Barry, A.O. )

    1993-02-01

    A real-time digital model of a complete hydraulic synchronous machine is presented. The model is based on parallel processing using digital-signal processors (DSP) for fast calculation. The paper describes the modeling of the machine using block diagrams to represent the generator, voltage regulator, stabilizer, turbine, penstock and governor. Details of the hardware and software used to implement the real-time model of the machine are given. A first series of tests has been done and results are shown to evaluate the steady-state and transient performance of the model.

  9. Mathematical modeling of synergetic aspects of machine building enterprise management

    NASA Astrophysics Data System (ADS)

    Kazakov, O. D.; Andriyanov, S. V.

    2016-04-01

    A multivariate method has been developed for determining the optimal values of the leading key performance indicators of the production divisions of machine-building enterprises, considered from the perspective of synergetics.

  10. DFT modeling of chemistry on the Z machine

    NASA Astrophysics Data System (ADS)

    Mattsson, Thomas

    2013-06-01

    Density Functional Theory (DFT) has proven remarkably accurate in predicting properties of matter under shock compression for a wide range of elements and compounds: from hydrogen to xenon via water. Materials where chemistry plays a role are of particular interest for many applications. For example, the deep interiors of Neptune, Uranus, and hundreds of similar exoplanets are composed of molecular ices of carbon, hydrogen, oxygen, and nitrogen at pressures of several hundred GPa and temperatures of many thousand Kelvin. High-quality thermophysical experimental data and high-fidelity simulations including chemical reactions are necessary to constrain planetary models over a large range of conditions. As examples of where chemical reactions are important, and to demonstrate the high fidelity possible for these both structurally and chemically complex systems, we will discuss shock and re-shock of liquid carbon dioxide (CO2) in the range 100 to 800 GPa, shock compression of the hydrocarbon polymers polyethylene (PE) and poly(4-methyl-1-pentene) (PMP), and finally simulations of shock compression of glow discharge polymer (GDP), including the effects of doping with germanium. Experimental results from Sandia's Z machine have time and again validated the DFT simulations at extreme conditions, and the combination of experiment and DFT provides reliable data for evaluating existing and constructing future wide-range equation of state models for molecular compounds like CO2 and polymers like PE, PMP, and GDP. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Company, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  11. Machinability and modeling of cutting mechanism for Titanium Metal Matrix composites

    NASA Astrophysics Data System (ADS)

    Bejjani, Roland

    Titanium Metal Matrix Composite (TiMMC) is a new class of material. However, it is very difficult to cut, and tool life is therefore limited. In order to optimize the machining of TiMMC, three approaches (stages) were used. First, the Taguchi design-of-experiments method was used to identify the effects of the machining inputs (speed, feed, depth) on the outputs (cutting forces, surface roughness). To enhance tool life even further, Laser Assisted Machining (LAM) was also tried. In the second approach, in order to better understand the cutting mechanism of TiMMC, the chip formation was analyzed and a new model for the adiabatic shear band in the chip segment was developed. In the last approach, in order to have a better analysis tool for understanding the cutting mechanism, a new constitutive model of TiMMC for simulation purposes was developed, with an added damage model. The FEM simulation results yield predictions of temperature, stress, strain, and damage, and can be used as an analysis tool and even for industrial applications. Following experimental work and analysis, I found that cutting TiMMC at higher speeds is more efficient and productive because it increases tool life: at higher speeds, fewer hard TiC particles are broken, resulting in reduced tool abrasion wear. In order to further optimize the machining of TiMMC, an unconventional machining method was used. Laser Assisted Machining (LAM) was found to increase tool life by approximately 180%. To understand the effects of the particles on the tool, micro-scale observations of the hard particles were performed with SEM microscopy, and it was found that the tool/particle interaction while cutting can take three forms: particles can be cut at the surface, pushed inside the material, or pieces of cut particles can be pushed inside the material. No particle de-bonding was observed. Some

  12. Job shop scheduling model for non-identic machine with fixed delivery time to minimize tardiness

    NASA Astrophysics Data System (ADS)

    Kusuma, K. K.; Maruf, A.

    2016-02-01

    Scheduling problems with non-identical machines, low utilization, and fixed delivery times are frequent in the manufacturing industry. This paper proposes a mathematical model to minimize total tardiness for non-identical machines in a job shop environment. The model is formulated as an integer linear programming model and uses a branch-and-bound algorithm as the solution method. Fixed delivery times are used as the main constraint, and processing times differ across machines for a given job. The results of the proposed model show that the utilization of production machines can be increased with minimal tardiness when fixed delivery times are used as a constraint.
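
    A toy single-machine version of the objective shows the tardiness computation; the processing times and delivery dates below are hypothetical, and exhaustive search stands in for the paper's branch-and-bound over the full non-identical-machine ILP.

        from itertools import permutations

        # Sequence jobs on one machine so that total tardiness against fixed
        # delivery times is minimal. Data: job -> (processing time, due date).
        jobs = {"J1": (4, 6), "J2": (2, 5), "J3": (6, 14), "J4": (3, 7)}

        def total_tardiness(order):
            t, tardiness = 0, 0
            for j in order:
                p, due = jobs[j]
                t += p                        # completion time of job j
                tardiness += max(0, t - due)  # lateness beyond its delivery time
            return tardiness

        best = min(permutations(jobs), key=total_tardiness)
        print(best, "total tardiness:", total_tardiness(best))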

  13. A Sustainable Model for Integrating Current Topics in Machine Learning Research into the Undergraduate Curriculum

    ERIC Educational Resources Information Center

    Georgiopoulos, M.; DeMara, R. F.; Gonzalez, A. J.; Wu, A. S.; Mollaghasemi, M.; Gelenbe, E.; Kysilka, M.; Secretan, J.; Sharma, C. A.; Alnsour, A. J.

    2009-01-01

    This paper presents an integrated research and teaching model that has resulted from an NSF-funded effort to introduce results of current Machine Learning research into the engineering and computer science curriculum at the University of Central Florida (UCF). While in-depth exposure to current topics in Machine Learning has traditionally occurred…

  14. (abstract) Modeling Protein Families and Human Genes: Hidden Markov Models and a Little Beyond

    NASA Technical Reports Server (NTRS)

    Baldi, Pierre

    1994-01-01

    We will first give a brief overview of Hidden Markov Models (HMMs) and their use in Computational Molecular Biology. In particular, we will describe a detailed application of HMMs to the G-Protein-Coupled-Receptor Superfamily. We will also describe a number of analytical results on HMMs that can be used in discrimination tests and database mining. We will then discuss the limitations of HMMs and some new directions of research. We will conclude with some recent results on the application of HMMs to human gene modeling and parsing.
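
    As a reminder of the computation underlying HMM-based sequence scoring, the sketch below implements the forward algorithm for a tiny two-state model; the states, symbols, and probabilities are illustrative, not a real protein-family profile.

        import numpy as np

        # A two-state toy HMM over DNA symbols (all probabilities invented).
        symbols = {"A": 0, "C": 1, "G": 2, "T": 3}

        pi = np.array([0.8, 0.2])               # initial state distribution
        A = np.array([[0.9, 0.1],               # state transition matrix
                      [0.4, 0.6]])
        B = np.array([[0.4, 0.1, 0.4, 0.1],     # per-state emission probabilities
                      [0.25, 0.25, 0.25, 0.25]])

        def forward(seq):
            """Return P(seq | model) by summing over all hidden state paths."""
            alpha = pi * B[:, symbols[seq[0]]]
            for ch in seq[1:]:
                alpha = (alpha @ A) * B[:, symbols[ch]]
            return alpha.sum()

        print("P(ACGG | model) =", forward("ACGG"))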

  15. Abstract Painting

    ERIC Educational Resources Information Center

    Henkes, Robert

    1978-01-01

    Abstract art provokes numerous interpretations, and as many misunderstandings. The adolescent reaction is no exception. The procedure described here can help the student to understand the abstract from at least one direction. (Author/RK)

  16. Comparison of two different surfaces for 3d model abstraction in support of remote sensing simulations

    SciTech Connect

    Pope, Paul A; Ranken, Doug M

    2010-01-01

    A method for abstracting a 3D model by shrinking a triangular mesh, defined upon a best-fitting ellipsoid surrounding the model, onto the model's surface has been previously described. This ''shrinkwrap'' process enables a semi-regular mesh to be defined upon an object's surface, creating a useful data structure for conducting remote sensing simulations and image processing. However, using a best-fitting ellipsoid with a graticule-based tessellation to seed the shrinkwrap process yields a mesh that is too dense at the poles. To achieve a more regular mesh, the use of a best-fitting, subdivided icosahedron was tested. By subdividing each of the twenty facets of the icosahedron into regular triangles of a predetermined size, arbitrarily dense, highly regular starting meshes can be created. Comparisons of the meshes resulting from these two seed surfaces are described. Use of a best-fitting icosahedron-based mesh as the seed surface in the shrinkwrap process is preferable to using a best-fitting ellipsoid. The impact on remote sensing simulations, specifically the generation of synthetic imagery, is illustrated.
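
    The mesh-densification step described above can be sketched for a single icosahedron facet: split each triangle by its edge midpoints and push the new vertices onto the unit sphere. The subsequent shrink onto a model's surface is omitted; this only shows how the seed mesh is refined without polar crowding.

        import numpy as np

        def normalize(v):
            return v / np.linalg.norm(v)

        def subdivide(tri):
            """Split one spherical triangle into four via edge midpoints."""
            a, b, c = tri
            ab, bc, ca = (normalize((a + b) / 2), normalize((b + c) / 2),
                          normalize((c + a) / 2))
            return [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]

        # One icosahedron facet (vertices normalized onto the unit sphere).
        phi = (1 + 5 ** 0.5) / 2
        tri = tuple(normalize(np.array(v)) for v in
                    [(-1, phi, 0), (1, phi, 0), (0, 1, phi)])

        mesh = [tri]
        for _ in range(3):                 # each pass quadruples the facet count
            mesh = [t for f in mesh for t in subdivide(f)]
        print(len(mesh), "triangles from one seed facet")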

  17. On problems in defining abstract and metaphysical concepts--emergence of a new model.

    PubMed

    Nahod, Bruno; Nahod, Perina Vukša

    2014-12-01

    Basic anthropological terminology is the first project covering terms from the domain of the social sciences under the Croatian Special Field Terminology program (Struna). Problems that have been sporadically noticed, or whose existence could have been presumed during the processing of terms mainly from technical fields and sciences, have finally emerged in "anthropology". The principles of the General Theory of Terminology (GTT), which are followed in Struna, were put to a truly exacting test, and sometimes stretched beyond their limits, when applied to concepts that do not necessarily have references in the physical world; namely, abstract and metaphysical concepts. We are currently developing a new terminographical model based on Idealized Cognitive Models (ICM), which will hopefully ensure a better cross-field implementation of various types of concepts and their relations. The goal of this paper is to introduce the theoretical bases of our model. Additionally, we will present a pilot study from the series of experiments in which we are trying to investigate the nature of conceptual categorization in special languages and its proposed difference from categorization in general language. PMID:25643547

  18. A Framework for the Abstraction of Mesoscale Modeling for Weather Simulation

    NASA Astrophysics Data System (ADS)

    Limpasuvan, V.; Ujcich, B. E.

    2009-12-01

    Widely disseminated weather forecast results (e.g. from various national centers and private companies) are useful for typical users in gauging future atmospheric disturbances. However, these canonical forecasts may not adequately meet the needs of end-users in the various scientific fields, since a predetermined model, as structured by the model administrator, produces these forecasts. To perform his/her own successful forecasts, a user faces a steep learning curve involving the collection of initial condition data (e.g. radar, satellite, and reanalyses) and operation of a suitable model (and associated software/computing). In this project, we develop an intermediate (prototypical) software framework and a web-based front-end interface that allow for the abstraction of an advanced weather model upon which the end-user can perform customizable forecasts and analyses. Having such an accessible front-end interface for a weather model can benefit educational programs at the secondary school and undergraduate level, scientific research in fields like fluid dynamics and meteorology, and the general public. In all cases, our project allows the user to generate a localized domain of choice, run the desired forecast on a remote high-performance computer cluster, and visually inspect the results. For instance, an undergraduate science curriculum could incorporate the resulting weather forecasts in laboratory exercises. Scientific researchers and graduate students would be able to readily adjust key prognostic variables in the simulation within this project’s framework. The general public within the contiguous United States could also run a simplified version of the project’s software with adjustments in forecast clarity (spatial resolution) and region size (domain). Special cases of general interest, in which a detailed forecast may be required, could be run over areas of possible strong weather activity.

  19. Distributed model for electromechanical interaction in rotordynamics of cage rotor electrical machines

    NASA Astrophysics Data System (ADS)

    Laiho, Antti; Holopainen, Timo P.; Klinge, Paul; Arkkio, Antero

    2007-05-01

    In this work the effects of the electromechanical interaction on rotordynamics and vibration characteristics of cage rotor electrical machines were considered. An eccentric rotor motion distorts the electromagnetic field in the air-gap between the stator and rotor inducing a total force, the unbalanced magnetic pull, exerted on the rotor. In this paper a low-order parametric model for the unbalanced magnetic pull is coupled with a three-dimensional finite element structural model of the electrical machine. The main contribution of the work is to present a computationally efficient electromechanical model for vibration analysis of cage rotor machines. In this model, the interaction between the mechanical and electromagnetic systems is distributed over the air gap of the machine. This enables the inclusion of rotor and stator deflections into the analysis and, thus, yields more realistic prediction for the effects of electromechanical interaction. The model was tested by implementing it for two electrical machines with nominal speeds close to one of the rotor bending critical speeds. Rated machine data was used in order to predict the effects of the electromechanical interaction on vibration characteristics of the example machines.

  20. Modelling of the dynamic behaviour of hard-to-machine alloys

    NASA Astrophysics Data System (ADS)

    Hokka, M.; Leemet, T.; Shrot, A.; Bäker, M.; Kuokkala, V.-T.

    2012-08-01

    Machining of titanium alloys and nickel-based superalloys can be difficult due to their excellent mechanical properties, which combine high strength, ductility, and good overall high-temperature performance. Machining of these alloys can, however, be improved by simulating the processes and by optimizing the machining parameters. The simulations, however, need accurate material models that predict the material behaviour in the range of strains and strain rates that occur in machining processes. In this work, the behaviour of the titanium 15-3-3-3 alloy and the nickel-based superalloy 625 were characterized in compression, and Johnson-Cook material model parameters were obtained from the results. For the titanium alloy, the adiabatic Johnson-Cook model predicts the softening of the material adequately, but the high strain hardening rate of Alloy 625 in the model prevents the localization of strain, and no shear bands were formed when using this model. For Alloy 625, the Johnson-Cook model was therefore modified to decrease the strain hardening rate at large strains. The models were used in simulations of orthogonal cutting of the material. For both materials, the models are able to predict the serrated chip formation frequently observed in the machining of these alloys. The machining forces also match relatively well, but some differences can be seen in the details of the experimentally obtained and simulated chip shapes.
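
    For reference, the Johnson-Cook flow stress named above is sigma = (A + B*eps^n) * (1 + C*ln(epsdot/eps0)) * (1 - T*^m), with T* the homologous temperature. A plain-function sketch follows; the constants are placeholders, not the fitted values for Ti-15-3-3-3 or Alloy 625.

        import math

        def johnson_cook(strain, strain_rate, T,
                         A=900.0, B=700.0, n=0.4, C=0.015, m=1.0,
                         eps0=1.0, T_room=293.0, T_melt=1900.0):
            """Johnson-Cook flow stress [MPa]; all parameter values illustrative."""
            T_star = (T - T_room) / (T_melt - T_room)   # homologous temperature
            return ((A + B * strain ** n)
                    * (1.0 + C * math.log(strain_rate / eps0))
                    * (1.0 - T_star ** m))

        print(johnson_cook(strain=0.5, strain_rate=1e4, T=800.0))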

  1. Machine Learning Models for Detection of Regions of High Model Form Uncertainty in RANS

    NASA Astrophysics Data System (ADS)

    Ling, Julia; Templeton, Jeremy

    2015-11-01

    Reynolds Averaged Navier Stokes (RANS) models are widely used because of their computational efficiency and ease-of-implementation. However, because they rely on inexact turbulence closures, they suffer from significant model form uncertainty in many flows. Many RANS models make use of the Boussinesq hypothesis, which assumes a non-negative, scalar eddy viscosity that provides a linear relation between the Reynolds stresses and the mean strain rate. In many flows of engineering relevance, this eddy viscosity assumption is violated, leading to inaccuracies in the RANS predictions. For example, in near wall regions, the Boussinesq hypothesis fails to capture the correct Reynolds stress anisotropy. In regions of flow curvature, the linear relation between Reynolds stresses and mean strain rate may be inaccurate. This model form uncertainty cannot be quantified by simply varying the model parameters, as it is rooted in the model structure itself. Machine learning models were developed to detect regions of high model form uncertainty. These machine learning models consisted of binary classifiers that predicted, on a point-by-point basis, whether or not key RANS assumptions were violated. These classifiers were trained and evaluated for their sensitivity, specificity, and generalizability on a database of canonical flows.
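
    A hedged sketch of the point-wise classification idea follows: train a binary classifier on per-point flow features to flag where a closure assumption is violated. The features, labels, and the random-forest choice are illustrative stand-ins for the paper's actual setup.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.metrics import classification_report
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        X = rng.normal(size=(5000, 4))               # e.g. strain/rotation-rate invariants
        y = (X[:, 0] * X[:, 1] > 0.5).astype(int)    # synthetic "assumption violated" label

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
        # sensitivity/specificity per class, as in the paper's evaluation criteria
        print(classification_report(y_te, clf.predict(X_te)))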

  2. Atmospheric modeling of air pollution. 1979-May, 1980 (a bibliography with abstracts). Report for 1979-May 80

    SciTech Connect

    Carrigan, B.

    1980-06-01

    Lower atmospheric modeling of air pollution from both mobile and stationary sources is covered in the bibliography. Models cover local diffusion, urban heat islands, precipitation washout, worldwide diffusion, climatology, and smog. Stratospheric modeling concerning supersonic aircraft is excluded. (This updated bibliography contains 130 abstracts, 88 of which are new entries to the previous edition.)

  3. Atmospheric modeling of air pollution. 1977-78 (a bibliography with abstracts). Report for 1977-1978

    SciTech Connect

    Carrigan, B.

    1980-06-01

    Lower atmospheric modeling of air pollution from both mobile and stationary sources is covered in the bibliography. Models cover local diffusion, urban heat islands, precipitation washout, worldwide diffusion, climatology, and smog. Stratospheric modeling concerning supersonic aircraft is excluded. (This updated bibliography contains 216 abstracts, none of which are new entries to the previous edition.)

  4. A stochastic model for the cell formation problem considering machine reliability

    NASA Astrophysics Data System (ADS)

    Esmailnezhad, Bahman; Fattahi, Parviz; Kheirkhah, Amir Saman

    2015-03-01

    This paper presents a new mathematical model to solve the cell formation problem in cellular manufacturing systems, where inter-arrival times, processing times, and machine breakdown times are probabilistic. The objective function maximizes the number of operations of each part with a higher arrival rate within one cell. Because a queue forms behind each machine, queuing theory is used to formulate the model. To solve the model, two metaheuristic algorithms, modified particle swarm optimization and a genetic algorithm, are proposed. For the generation of initial solutions in these algorithms, a new heuristic method is developed which always creates feasible solutions. Both metaheuristic algorithms are compared against global solutions obtained from the branch and bound (B&B) solver of the Lingo software. A statistical method is also used to compare the solutions of the two metaheuristic algorithms. The results of numerical examples indicate that considering machine breakdown has a significant effect on the block structures of machine-part matrices.

  5. What good are abstract and what-if models? Lessons from the Gaïa hypothesis.

    PubMed

    Dutreuil, Sébastien

    2014-08-01

    This article on the epistemology of computational models stems from an analysis of the Gaïa hypothesis (GH). It begins with James Kirchner's criticisms of the central computational model of GH: Daisyworld. Among other things, the model has been criticized for being too abstract, describing fictional entities (fictive daisies on an imaginary planet) and trying to answer counterfactual (what-if) questions (how would a planet look if life had no influence on it?). For these reasons the model has been considered not testable and therefore not legitimate in science, and in any case not very interesting since it explores non-actual issues. This criticism implicitly assumes that science should only be involved in the making of models that are "actual" (by opposition to what-if) and "specific" (by opposition to abstract). I challenge both of these criticisms in this article. First, by showing that although testability (understood as the comparison of model output with empirical data) is an important procedure for explanatory models, there are plenty of models that are not testable. The fact that these are not testable (in this restricted sense) has nothing to do with their being "abstract" or "what-if" but with their being predictive models. Secondly, I argue that "abstract" and "what-if" models aim at (respectable) epistemic purposes distinct from those pursued by "actual and specific" models. Abstract models are used to propose how-possibly explanations or to pursue theorizing. What-if models are used to attribute causal or explanatory power to a variable of interest. The fact that they aim at different epistemic goals entails that it may not be accurate to consider the choice between different kinds of model as a "strategy". PMID:25515262

  6. Human factors model concerning the man-machine interface of mining crewstations

    NASA Technical Reports Server (NTRS)

    Rider, James P.; Unger, Richard L.

    1989-01-01

    The U.S. Bureau of Mines is developing a computer model to analyze the human factors aspects of mining machine operator compartments. The model will be used as a research tool and as a design aid. It will have the capability to perform the following: simulated anthropometric or reach assessment, visibility analysis, illumination analysis, structural analysis of the protective canopy, operator fatigue analysis, and computation of an ingress-egress rating. The model will make extensive use of graphics to simplify data input and output. Two-dimensional orthographic projections of the machine and its operator compartment are digitized and the data rebuilt into a three-dimensional representation of the mining machine. Anthropometric data from either an individual or any size population may be used. The model is intended for use by equipment manufacturers and mining companies during initial design work on new machines. In addition to its use in machine design, the model should prove helpful as an accident investigation tool and for determining the effects of machine modifications made in the field on the critical areas of visibility and control reachability.

  7. Robust current control of AC machines using the internal model control method

    SciTech Connect

    Harnefors, L.; Nee, H.P.

    1995-12-31

    In the present paper, the internal model control (IMC) method is introduced and applied to AC machine current control. A permanent-magnet synchronous machine is used as an example. It is shown that the IMC design is straightforward and the resulting controller is simple to implement. The controller parameters are expressed in terms of the machine parameters and the desired closed-loop rise time. The extra cost of implementation compared to PI control is negligible. It is further shown that IMC is able to outperform PI control, both with and without decoupling, with respect to dq variable interaction in the presence of parameter deviations.
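
    One well-known instance of this kind of design can be sketched under the simplifying assumption of first-order RL current dynamics (rather than the full synchronous machine): an IMC filter of bandwidth alpha yields a PI controller with kp = alpha*L and ki = alpha*R, tying the gains directly to the machine parameters and the desired rise time. Parameter values below are illustrative.

        import numpy as np

        R, L = 0.5, 2e-3                 # machine parameter estimates [ohm, H]
        alpha = 2 * np.pi * 200          # desired closed-loop bandwidth [rad/s]
        kp, ki = alpha * L, alpha * R    # IMC-derived PI gains

        dt, i_s, integ, ref = 1e-5, 0.0, 0.0, 10.0
        for _ in range(2000):            # forward-Euler: L di/dt = u - R*i
            e = ref - i_s
            u = kp * e + integ
            integ += ki * e * dt
            i_s += dt * (u - R * i_s) / L
        print(f"current after 20 ms: {i_s:.3f} A (reference {ref} A)")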

  8. Including slot harmonics to mechanical model of two-pole induction machine with a force actuator

    NASA Astrophysics Data System (ADS)

    Sinervo, Anssi; Arkkio, Antero

    2012-10-01

    A simple mechanical model is identified for a two-pole induction machine that has a four-pole extra winding as a force actuator. The actuator can be used to suppress rotor vibrations. Forces affecting the rotor of the induction machine are separated into actuator force, purely mechanical force due to mass unbalance, and force caused by unbalanced magnetic pull from higher harmonics and unipolar flux. The force due to higher harmonics is embedded to the mechanical model. Parameters of the modified mechanical model are identified from measurements and the modifications are shown to be necessary. The force produced by the actuator is calculated using the mechanical model, direct flux measurements, and voltage and current of the force actuator. All three methods are shown to give matching results proving that the mechanical model can be used in vibration control. The test machine is shown to have time periodic behavior and discrete Fourier analysis is used to obtain time-invariant model parameters.

  9. Temperature Control of Fimbriation Circuit Switch in Uropathogenic Escherichia coli: Quantitative Analysis via Automated Model Abstraction

    PubMed Central

    Kuwahara, Hiroyuki; Myers, Chris J.; Samoilov, Michael S.

    2010-01-01

    Uropathogenic Escherichia coli (UPEC) represent the predominant cause of urinary tract infections (UTIs). A key UPEC molecular virulence mechanism is type 1 fimbriae, whose expression is controlled by the orientation of an invertible chromosomal DNA element—the fim switch. Temperature has been shown to act as a major regulator of fim switching behavior and is overall an important indicator as well as functional feature of many urologic diseases, including UPEC host-pathogen interaction dynamics. Given this panoptic physiological role of temperature during UTI progression and notable empirical challenges to its direct in vivo studies, in silico modeling of corresponding biochemical and biophysical mechanisms essential to UPEC pathogenicity may significantly aid our understanding of the underlying disease processes. However, rigorous computational analysis of biological systems, such as fim switch temperature control circuit, has hereto presented a notoriously demanding problem due to both the substantial complexity of the gene regulatory networks involved as well as their often characteristically discrete and stochastic dynamics. To address these issues, we have developed an approach that enables automated multiscale abstraction of biological system descriptions based on reaction kinetics. Implemented as a computational tool, this method has allowed us to efficiently analyze the modular organization and behavior of the E. coli fimbriation switch circuit at different temperature settings, thus facilitating new insights into this mode of UPEC molecular virulence regulation. In particular, our results suggest that, with respect to its role in shutting down fimbriae expression, the primary function of FimB recombinase may be to effect a controlled down-regulation (rather than increase) of the ON-to-OFF fim switching rate via temperature-dependent suppression of competing dynamics mediated by recombinase FimE. Our computational analysis further implies that this down

  10. Modelling of internal architecture of kinesin nanomotor as a machine language.

    PubMed

    Khataee, H R; Ibrahim, M Y

    2012-09-01

    Kinesin is a protein-based natural nanomotor that transports molecular cargoes within cells by walking along microtubules. Kinesin nanomotor is considered as a bio-nanoagent which is able to sense the cell through its sensors (i.e. its heads and tail), make the decision internally and perform actions on the cell through its actuator (i.e. its motor domain). The study maps the agent-based architectural model of internal decision-making process of kinesin nanomotor to a machine language using an automata algorithm. The applied automata algorithm receives the internal agent-based architectural model of kinesin nanomotor as a deterministic finite automaton (DFA) model and generates a regular machine language. The generated regular machine language was acceptable by the architectural DFA model of the nanomotor and also in good agreement with its natural behaviour. The internal agent-based architectural model of kinesin nanomotor indicates the degree of autonomy and intelligence of the nanomotor interactions with its cell. Thus, our developed regular machine language can model the degree of autonomy and intelligence of kinesin nanomotor interactions with its cell as a language. Modelling of internal architectures of autonomous and intelligent bio-nanosystems as machine languages can lay the foundation towards the concept of bio-nanoswarms and next phases of the bio-nanorobotic systems development. PMID:22894532
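
    A toy DFA in the spirit of this mapping is sketched below; the states, event alphabet, and transitions are invented for illustration and are not taken from the paper.

        # Deterministic finite automaton: reject on any undefined (state, event) move.
        ACCEPTING = {"both_heads_bound"}
        TRANSITIONS = {
            ("detached", "ATP_bind"): "one_head_bound",
            ("one_head_bound", "ATP_hydrolysis"): "both_heads_bound",
            ("both_heads_bound", "ADP_release"): "one_head_bound",
        }

        def accepts(events, state="detached"):
            """Run the DFA over a sequence of sensed events."""
            for e in events:
                if (state, e) not in TRANSITIONS:
                    return False
                state = TRANSITIONS[(state, e)]
            return state in ACCEPTING

        print(accepts(["ATP_bind", "ATP_hydrolysis"]))  # True
        print(accepts(["ATP_hydrolysis"]))              # False: undefined move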

  11. Computationally-efficient finite-element-based thermal and electromagnetic models of electric machines

    NASA Astrophysics Data System (ADS)

    Zhou, Kan

    With the modern trend of transportation electrification, electric machines are a key component of electric/hybrid electric vehicle (EV/HEV) powertrains. It is therefore important that vehicle powertrain-level and system-level designers and control engineers have access to accurate yet computationally-efficient (CE), physics-based modeling tools of the thermal and electromagnetic (EM) behavior of electric machines. In this dissertation, CE yet sufficiently-accurate thermal and EM models for electric machines, which are suitable for use in vehicle powertrain design, optimization, and control, are developed. This includes not only creating fast and accurate thermal and EM models for specific machine designs, but also the ability to quickly generate and determine the performance of new machine designs through the application of scaling techniques to existing designs. With the developed techniques, the thermal and EM performance can be accurately and efficiently estimated. Furthermore, powertrain or system designers can easily and quickly adjust the characteristics and the performance of the machine in ways that are favorable to the overall vehicle performance.

  12. Interpreting linear support vector machine models with heat map molecule coloring

    PubMed Central

    2011-01-01

    Background Model-based virtual screening plays an important role in the early drug discovery stage. The outcomes of high-throughput screenings are a valuable source for machine learning algorithms to infer such models. Besides strong performance, the interpretability of a machine learning model is a desired property to guide the optimization of a compound in later drug discovery stages. Linear support vector machines have been shown to have convincing performance on large-scale data sets. The goal of this study is to present a heat map molecule coloring technique to interpret linear support vector machine models. Based on the weights of a linear model, the visualization approach colors each atom and bond of a compound according to its importance for activity. Results We evaluated our approach on a toxicity data set, a chromosome aberration data set, and the maximum unbiased validation data sets. The experiments show that our method sensibly visualizes structure-property and structure-activity relationships of a linear support vector machine model. The coloring of ligands in the binding pocket of several crystal structures of a maximum unbiased validation data set target indicates that our approach assists in determining the correct ligand orientation in the binding pocket. Additionally, the heat map coloring enables the identification of substructures important for the binding of an inhibitor. Conclusions In combination with heat map coloring, linear support vector machine models can help to guide the modification of a compound in later stages of drug discovery. Particularly, substructures identified as important by our method might be a starting point for optimization of a lead compound. The heat map coloring should be considered as complementary to structure-based modeling approaches. As such, it helps to get a better understanding of the binding mode of an inhibitor. PMID:21439031
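
    The core of the coloring idea can be sketched: with a linear SVM the decision value is w.x + b, so each binary substructure feature contributes exactly its weight, and distributing that weight over the atoms that set the bit gives a per-atom heat. The fingerprinting and atom mapping below are invented placeholders for the descriptor pipeline.

        import numpy as np
        from sklearn.svm import LinearSVC

        rng = np.random.default_rng(1)
        X = rng.integers(0, 2, size=(200, 32))        # binary substructure fingerprints
        y = (X[:, 3] | X[:, 17]).astype(int)          # synthetic activity labels
        svm = LinearSVC(C=1.0, max_iter=5000).fit(X, y)
        w = svm.coef_.ravel()

        # suppose feature_atoms[f] lists the atoms of one molecule that set bit f
        feature_atoms = {3: [0, 1], 17: [4], 20: [2, 4]}
        mol_bits = {3, 20}                            # bits present in this molecule
        atom_heat = {}
        for f in mol_bits:
            for a in feature_atoms[f]:
                # split the bit's weight evenly across the atoms that produced it
                atom_heat[a] = atom_heat.get(a, 0.0) + w[f] / len(feature_atoms[f])
        print(atom_heat)                              # color atoms by these values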

  13. Learning with Technology: Video Modeling with Concrete-Representational-Abstract Sequencing for Students with Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Yakubova, Gulnoza; Hughes, Elizabeth M.; Shinaberry, Megan

    2016-01-01

    The purpose of this study was to determine the effectiveness of a video modeling intervention with concrete-representational-abstract instructional sequence in teaching mathematics concepts to students with autism spectrum disorder (ASD). A multiple baseline across skills design of single-case experimental methodology was used to determine the…

  14. Modeling human-machine interactions for operations room layouts

    NASA Astrophysics Data System (ADS)

    Hendy, Keith C.; Edwards, Jack L.; Beevis, David

    2000-11-01

    The LOCATE layout analysis tool was used to analyze three preliminary configurations for the Integrated Command Environment (ICE) of a future USN platform. LOCATE develops a cost function reflecting the quality of all human-human and human-machine communications within a workspace. This proof-of-concept study showed little difference between the efficacy of the preliminary designs selected for comparison. This was thought to be due to the limitations of the study, which included the assumption of similar size for each layout and a lack of accurate measurement data for various objects in the designs, due largely to their notional nature. Based on these results, the USN offered an opportunity to conduct a LOCATE analysis using more appropriate assumptions. A standard crew was assumed, and subject matter experts agreed on the communications patterns for the analysis. Eight layouts were evaluated with the concepts of coordination and command factored into the analysis. Clear differences between the layouts emerged. The most promising design was refined further by the USN, and a working mock-up built for human-in-the-loop evaluation. LOCATE was applied to this configuration for comparison with the earlier analyses.

  15. Abstract Constructions.

    ERIC Educational Resources Information Center

    Pietropola, Anne

    1998-01-01

    Describes a lesson designed to culminate a year of eighth-grade art classes in which students explore elements of design and space by creating 3-D abstract constructions. Outlines the process of using foam board and markers to create various shapes and optical effects. (DSK)

  16. Modelling of Tool Wear and Residual Stress during Machining of AISI H13 Tool Steel

    NASA Astrophysics Data System (ADS)

    Outeiro, José C.; Umbrello, Domenico; Pina, José C.; Rizzuti, Stefania

    2007-05-01

    Residual stresses can enhance or impair the ability of a component to withstand loading conditions in service (fatigue, creep, stress corrosion cracking, etc.), depending on their nature: compressive or tensile, respectively. This poses enormous problems in structural assembly, as it affects the structural integrity of the whole part. In addition, tool wear issues are of critical importance in manufacturing, since they affect component quality, tool life and machining cost. Therefore, prediction and control of both tool wear and the residual stresses in machining are absolutely necessary. In this work, a two-dimensional Finite Element model using an implicit Lagrangian formulation with automatic remeshing was applied to simulate the orthogonal cutting process of AISI H13 tool steel. To validate this model, the predicted and experimentally measured chip geometry, cutting forces, temperatures, tool wear and residual stresses on the machined affected layers were compared. The proposed FE model allowed us to investigate the influence of tool geometry, cutting regime parameters and tool wear on the residual stress distribution in the machined surface and subsurface of AISI H13 tool steel. The obtained results lead to the conclusion that, in order to reduce the magnitude of surface residual stresses, the cutting speed should be increased, the uncut chip thickness (or feed) should be reduced, and machining with honed tools having large cutting edge radii produces better results than with chamfered tools. Moreover, increasing tool wear increases the magnitude of surface residual stresses.

  17. Modelling of Tool Wear and Residual Stress during Machining of AISI H13 Tool Steel

    SciTech Connect

    Outeiro, Jose C.; Pina, Jose C.; Umbrello, Domenico; Rizzuti, Stefania

    2007-05-17

    Residual stresses can enhance or impair the ability of a component to withstand loading conditions in service (fatigue, creep, stress corrosion cracking, etc.), depending on their nature: compressive or tensile, respectively. This poses enormous problems in structural assembly, as it affects the structural integrity of the whole part. In addition, tool wear issues are of critical importance in manufacturing, since they affect component quality, tool life and machining cost. Therefore, prediction and control of both tool wear and the residual stresses in machining are absolutely necessary. In this work, a two-dimensional Finite Element model using an implicit Lagrangian formulation with automatic remeshing was applied to simulate the orthogonal cutting process of AISI H13 tool steel. To validate this model, the predicted and experimentally measured chip geometry, cutting forces, temperatures, tool wear and residual stresses on the machined affected layers were compared. The proposed FE model allowed us to investigate the influence of tool geometry, cutting regime parameters and tool wear on the residual stress distribution in the machined surface and subsurface of AISI H13 tool steel. The obtained results lead to the conclusion that, in order to reduce the magnitude of surface residual stresses, the cutting speed should be increased, the uncut chip thickness (or feed) should be reduced, and machining with honed tools having large cutting edge radii produces better results than with chamfered tools. Moreover, increasing tool wear increases the magnitude of surface residual stresses.

  18. Machine learning for many-body physics: The case of the Anderson impurity model

    NASA Astrophysics Data System (ADS)

    Arsenault, Louis-François; Lopez-Bezanilla, Alejandro; von Lilienfeld, O. Anatole; Millis, Andrew J.

    2014-10-01

    Machine learning methods are applied to finding the Green's function of the Anderson impurity model, a basic model system of quantum many-body condensed-matter physics. Different methods of parametrizing the Green's function are investigated; a representation in terms of Legendre polynomials is found to be superior due to its limited number of coefficients and its applicability to state of the art methods of solution. The dependence of the errors on the size of the training set is determined. The results indicate that a machine learning approach to dynamical mean-field theory may be feasible.
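
    The Legendre parametrization highlighted above can be sketched in a few lines: expand a (here synthetic) Green's-function-like curve in Legendre polynomials so that a short coefficient vector becomes the machine-learning target. The curve and truncation order are illustrative.

        import numpy as np

        tau = np.linspace(-1, 1, 400)                        # rescaled time grid
        G = -np.exp(-2.0 * (tau + 1)) / (1 + np.exp(-4.0))   # stand-in Green's function
        coeffs = np.polynomial.legendre.legfit(tau, G, 15)   # compact representation
        G_rec = np.polynomial.legendre.legval(tau, coeffs)
        print(np.max(np.abs(G - G_rec)))   # few coefficients, small reconstruction error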

  19. Numerically Controlled Machining Of Wind-Tunnel Models

    NASA Technical Reports Server (NTRS)

    Kovtun, John B.

    1990-01-01

    New procedure for dynamic models and parts for wind-tunnel tests or radio-controlled flight tests constructed. Involves use of single-phase numerical control (NC) technique to produce highly-accurate, symmetrical models in less time.

  20. Scientist-Centered Workflow Abstractions via Generic Actors, Workflow Templates, and Context-Awareness for Groundwater Modeling and Analysis

    SciTech Connect

    Chin, George; Sivaramakrishnan, Chandrika; Critchlow, Terence J.; Schuchardt, Karen L.; Ngu, Anne Hee Hiong

    2011-07-04

    A drawback of existing scientific workflow systems is the lack of support to domain scientists in designing and executing their own scientific workflows. Many domain scientists avoid developing and using workflows because the basic objects of workflows are too low-level and high-level tools and mechanisms to aid in workflow construction and use are largely unavailable. In our research, we are prototyping higher-level abstractions and tools to better support scientists in their workflow activities. Specifically, we are developing generic actors that provide abstract interfaces to specific functionality, workflow templates that encapsulate workflow and data patterns that can be reused and adapted by scientists, and context-awareness mechanisms to gather contextual information from the workflow environment on behalf of the scientist. To evaluate these scientist-centered abstractions on real problems, we apply them to construct and execute scientific workflows in the specific domain area of groundwater modeling and analysis.

  1. Nonlinear and Digital Man-machine Control Systems Modeling

    NASA Technical Reports Server (NTRS)

    Mekel, R.

    1972-01-01

    An adaptive modeling technique is examined by which controllers can be synthesized to provide corrective dynamics to a human operator's mathematical model in closed-loop control systems. The technique utilizes a class of Liapunov functions formulated for this purpose, Liapunov's stability criterion, and a model-reference system configuration. The Liapunov function is formulated to possess variable characteristics to take into consideration the identification dynamics. The time derivative of the Liapunov function generates the identification and control laws for the mathematical model system. These laws permit the realization of a controller which updates the human operator's mathematical model parameters so that model and human operator produce the same response when subjected to the same stimulus. A very useful feature is the development of a digital computer program which is easily implemented and modified concurrently with experimentation. The program permits the modeling process to interact with the experimentation process in a mutually beneficial way.
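
    A compact sketch of the model-reference mechanism, under the assumption of a first-order operator model with a single unknown gain (not the paper's full formulation): the quadratic Lyapunov function V = e^2/2 + (theta - a)^2/(2*gamma) yields the update law theta_dot = -gamma*e*u, which drives the model to match the operator's response.

        import numpy as np

        dt, gamma = 1e-3, 5.0
        a_true = 2.0                         # unknown "operator" gain
        theta = 0.0                          # adjustable model parameter
        x = xm = 0.0
        for k in range(20000):
            u = np.sin(5.0 * k * dt)         # shared stimulus (persistently exciting)
            x += dt * (-x + a_true * u)      # operator response
            xm += dt * (-xm + theta * u)     # model response
            e = xm - x                       # model-following error
            theta += dt * (-gamma * e * u)   # Lyapunov-derived adaptation law
        print(f"estimated gain: {theta:.3f} (true value {a_true})")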

  2. Abductive machine learning for modeling and predicting the educational score in school health surveys.

    PubMed

    Abdel-Aal, R E; Mangoud, A M

    1996-09-01

    The use of modern abductive machine learning techniques is described for modeling and predicting outcome parameters in terms of input parameters in medical survey data. The AIM (Abductory Induction Mechanism) abductive network machine-learning tool is used to model the educational score in a health survey of 2,720 Albanian primary school children. Data included the child's age, gender, vision, nourishment, parasite infection, family size, parents' education, and educational score. Models synthesized by training on just 100 cases predict the educational score output for the remaining 2,620 cases with 100% accuracy. Simple models represented as analytical functions highlight global relationships and trends in the survey population. Models generated are quite robust, with no change in the basic model structure for a 10-fold increase in the size of the training set. Compared to other statistical and neural network approaches, AIM provides faster and highly automated model synthesis, requiring little or no user intervention. PMID:8952313

  3. Fusing Dual-Event Datasets for Mycobacterium Tuberculosis Machine Learning Models and their Evaluation

    PubMed Central

    Ekins, Sean; Freundlich, Joel S.; Reynolds, Robert C.

    2013-01-01

    The search for new tuberculosis treatments continues as we need to find molecules that can act more quickly, be accommodated in multi-drug regimens, and overcome ever increasing levels of drug resistance. Multiple large scale phenotypic high-throughput screens against Mycobacterium tuberculosis (Mtb) have generated dose response data, enabling the generation of machine learning models. These models also incorporated cytotoxicity data and were recently validated with a large external dataset. A cheminformatics data-fusion approach followed by Bayesian machine learning, Support Vector Machine or Recursive Partitioning model development (based on publicly available Mtb screening data) was used to compare individual datasets and subsequent combined models. A set of 1924 commercially available molecules with promising antitubercular activity (and lack of relative cytotoxicity to Vero cells) were used to evaluate the predictive nature of the models. We demonstrate that combining three datasets incorporating antitubercular and cytotoxicity data in Vero cells from our previous screens results in external validation receiver operator curve (ROC) of 0.83 (Bayesian or RP Forest). Models that do not have the highest five-fold cross validation ROC scores can outperform other models in a test set dependent manner. We demonstrate with predictions for a recently published set of Mtb leads from GlaxoSmithKline that no single machine learning model may be enough to identify compounds of interest. Dataset fusion represents a further useful strategy for machine learning construction as illustrated with Mtb. Coverage of chemistry and Mtb target spaces may also be limiting factors for the whole-cell screening data generated to date. PMID:24144044

  4. Predictive modeling and multi-objective optimization of machining-induced residual stresses: Investigation of machining parameter effects

    NASA Astrophysics Data System (ADS)

    Ulutan, Durul

    2013-01-01

    In the aerospace industry, titanium and nickel-based alloys are frequently used for critical structural components, especially due to their higher strength at both low and high temperatures, and higher wear and chemical degradation resistance. However, because of their unfavorable thermal properties, deformation and friction-induced microstructural changes prevent the end products from having good surface integrity properties. In addition to surface roughness, microhardness changes, and microstructural alterations, the machining-induced residual stress profiles of titanium and nickel-based alloys contribute in the surface integrity of these products. Therefore, it is essential to create a comprehensive method that predicts the residual stress outcomes of machining processes, and understand how machining parameters (cutting speed, uncut chip thickness, depth of cut, etc.) or tool parameters (tool rake angle, cutting edge radius, tool material/coating, etc.) affect the machining-induced residual stresses. Since experiments involve a certain amount of error in measurements, physics-based simulation experiments should also involve an uncertainty in the predicted values, and a rich set of simulation experiments are utilized to create expected value and variance for predictions. As the first part of this research, a method to determine the friction coefficients during machining from practical experiments was introduced. Using these friction coefficients, finite element-based simulation experiments were utilized to determine flow stress characteristics of materials and then to predict the machining-induced forces and residual stresses, and the results were validated using the experimental findings. A sensitivity analysis on the numerical parameters was conducted to understand the effect of changing physical and numerical parameters, increasing the confidence on the selected parameters, and the effect of machining parameters on machining-induced forces and residual

  5. State Machine Modeling of the Space Launch System Solid Rocket Boosters

    NASA Technical Reports Server (NTRS)

    Harris, Joshua A.; Patterson-Hine, Ann

    2013-01-01

    The Space Launch System is a Shuttle-derived heavy-lift vehicle currently in development to serve as NASA's premiere launch vehicle for space exploration. The Space Launch System is a multistage rocket with two Solid Rocket Boosters and multiple payloads, including the Multi-Purpose Crew Vehicle. Planned Space Launch System destinations include near-Earth asteroids, the Moon, Mars, and Lagrange points. The Space Launch System is a complex system with many subsystems, requiring considerable systems engineering and integration. To this end, state machine analysis offers a method to support engineering and operational efforts, identify and avert undesirable or potentially hazardous system states, and evaluate system requirements. Finite State Machines model a system as a finite number of states, with transitions between states controlled by state-based and event-based logic. State machines are a useful tool for understanding complex system behaviors and evaluating "what-if" scenarios. This work contributes to a state machine model of the Space Launch System developed at NASA Ames Research Center. The Space Launch System Solid Rocket Booster avionics and ignition subsystems are modeled using MATLAB/Stateflow software. This model is integrated into a larger model of Space Launch System avionics used for verification and validation of Space Launch System operating procedures and design requirements. This includes testing both nominal and off-nominal system states and command sequences.

  6. Abstract State-Space Models for a Class of Linear Hyperbolic Systems of Balance Laws

    NASA Astrophysics Data System (ADS)

    Bartecki, Krzysztof

    2015-12-01

    The paper discusses and compares different abstract state-space representations for a class of linear hyperbolic systems defined on a one-dimensional spatial domain. It starts with their PDE representation in both weakly and strongly coupled forms. Next, the homogeneous state equation including the unbounded formal state operator is presented. Based on the semigroup approach, some results on well-posedness and internal stability are given. The boundary and observation operators are introduced, assuming a typical configuration of boundary inputs as well as pointwise observations of the state variables. Consequently, the homogeneous state equation is extended to the so-called boundary control state/signal form. Next, the classical additive state-space representation involving the (A, B, C)-triple of state, input and output operators is considered. After a short discussion of the appropriate Hilbert spaces, the state-space equation in the so-called factor form is also presented. Finally, the resolvent of the system state operator A is discussed.
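
    A generic sketch of the additive form referred to above; the concrete operators for the hyperbolic class are defined in the paper, and the spaces shown here are standard assumptions:

        \dot{x}(t) = A\,x(t) + B\,u(t), \qquad y(t) = C\,x(t), \qquad x(0) = x_0,

    where A : D(A) \subset X \to X generates a C_0-semigroup on the state space X = L^2(0, \ell)^n, B carries the boundary inputs u(t) into the state equation (in general as an unbounded operator), and C realizes the pointwise observations y(t) of the state variables.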

  7. SAINT: A combined simulation language for modeling man-machine systems

    NASA Technical Reports Server (NTRS)

    Seifert, D. J.

    1979-01-01

    SAINT (Systems Analysis of Integrated Networks of Tasks) is a network modeling and simulation technique for the design and analysis of complex man-machine systems. SAINT provides the conceptual framework for representing systems that consist of discrete task elements, continuous state variables, and interactions between them. It also provides a mechanism for combining human performance models and dynamic system behaviors in a single modeling structure. The SAINT technique is described and applications of SAINT are discussed.

  8. Assessing model uncertainty using hexavalent chromium and lung cancer mortality as an example [Abstract 2015

    EPA Science Inventory

    Introduction: The National Research Council recommended quantitative evaluation of uncertainty in effect estimates for risk assessment. This analysis considers uncertainty across model forms and model parameterizations with hexavalent chromium [Cr(VI)] and lung cancer mortality a...

  9. Multiscale Modeling and Analysis of an Ultra-Precision Damage Free Machining Method

    NASA Astrophysics Data System (ADS)

    Guan, Chaoliang; Peng, Wenqiang

    2016-06-01

    Under high laser flux, ensuring that laser-induced damage of optical elements does not occur is key to the success of a laser fusion ignition system. A US government survey showed that processing defects which decrease the laser-induced damage threshold (LIDT) are one of the three major challenges. Cracks and scratches caused by brittle and plastic removal machining are fatal flaws. The hydrodynamic effect polishing (HEP) method can obtain a damage-free surface on quartz glass. The material removal mechanism of this typical ultra-precision machining process was modeled at multiple scales. At the atomic scale, chemical modeling illustrated the weakening and breaking of chemical bonds. At the particle scale, micro-contact modeling gave the elastic removal mode boundary of the materials. At the slurry scale, hydrodynamic flow modeling showed the dynamic pressure and shear stress distributions, which relate to the machining effect. An experiment was conducted on a numerically controlled system, and one quartz glass optical component was polished in the elastic mode. The results show that the damage is removed layer by layer as the removal depth increases, due to the high damage-free machining ability of HEP, and the LIDT of the sample was greatly improved.

  10. Experience with abstract notation one

    NASA Technical Reports Server (NTRS)

    Harvey, James D.; Weaver, Alfred C.

    1990-01-01

    The development of computer science has produced a vast number of machine architectures, programming languages, and compiler technologies. The cross product of these three characteristics defines the spectrum of previous and present data representation methodologies. With regard to computer networks, the uniqueness of these methodologies presents an obstacle when disparate host environments are to be interconnected. Interoperability within a heterogeneous network relies upon the establishment of data representation commonality. The International Standards Organization (ISO) is currently developing the abstract syntax notation one standard (ASN.1) and the basic encoding rules standard (BER) that collectively address this problem. When used within the presentation layer of the open systems interconnection reference model, these two standards provide the data representation commonality required to facilitate interoperability. The details of a compiler that was built to automate the use of ASN.1 and BER are described. From this experience, insights into both standards are given and potential problems relating to this development effort are discussed.
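
    The flavor of the BER encoding the abstract describes can be shown with a hand-rolled tag-length-value encoder for a single INTEGER; every value goes on the wire as tag, length, and content octets, so disparate hosts agree on representation. This is a sketch rather than a conforming implementation.

        def ber_encode_integer(n):
            """BER tag-length-value encoding of an ASN.1 INTEGER (sketch)."""
            # two's-complement content octets, big-endian
            length = max(1, (n.bit_length() + 8) // 8)
            content = n.to_bytes(length, "big", signed=True)
            return bytes([0x02, len(content)]) + content   # tag 0x02 = INTEGER

        print(ber_encode_integer(300).hex())   # '0202012c'
        print(ber_encode_integer(-1).hex())    # '0201ff'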

  11. The Sausage Machine: A New Two-Stage Parsing Model.

    ERIC Educational Resources Information Center

    Frazier, Lyn; Fodor, Janet Dean

    1978-01-01

    The human sentence parsing device assigns phrase structure to sentences in two steps. The first stage parser assigns lexical and phrasal nodes to substrings of words. The second stage parser then adds higher nodes to link these phrasal packages together into a complete phrase marker. This model is compared with others. (Author/RD)

  12. Experiments with encapsulation of Monte Carlo simulation results in machine learning models

    NASA Astrophysics Data System (ADS)

    Lal Shrestha, Durga; Kayastha, Nagendra; Solomatine, Dimitri

    2010-05-01

    Uncertainty analysis techniques based on Monte Carlo (MC) simulation have been applied successfully in the hydrological sciences in recent decades. They allow for quantification of the model output uncertainty resulting from uncertain model parameters, input data or model structure. They are very flexible, conceptually simple and straightforward, but become impractical in real-time applications for complex models when there is little time to perform the uncertainty analysis because of the large number of model runs required. A number of new methods were developed to improve the efficiency of Monte Carlo methods, and still these methods require a considerable number of model runs in both offline and operational mode to produce reliable and meaningful uncertainty estimation. This paper presents experiments with machine learning techniques used to encapsulate the results of MC runs. A version of the MC simulation method, the generalised likelihood uncertainty estimation (GLUE) method, is first used to assess the parameter uncertainty of the conceptual rainfall-runoff model HBV. Then three machine learning methods, namely artificial neural networks, M5 model trees and locally weighted regression, are trained to encapsulate the uncertainty estimated by the GLUE method using the historical input data. The trained machine learning models are then employed to predict the uncertainty of the model output for new input data. This method has been applied to two contrasting catchments: the Brue catchment (United Kingdom) and the Bagmati catchment (Nepal). The experimental results demonstrate that the machine learning methods are reasonably accurate in approximating the uncertainty estimated by GLUE. The great advantage of the proposed method is its efficiency in reproducing the MC-based simulation results; it can thus be an effective tool to assess the uncertainty of flood forecasting in real time.
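
    A minimal sketch of the encapsulation idea, with a synthetic stand-in for the hydrological model: run the Monte Carlo ensemble offline, summarize the spread into prediction quantiles, and train regressors to map inputs directly to those quantiles for fast online use. The toy model, data, and network sizes are assumptions.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        X = rng.uniform(0, 10, size=(500, 2))        # e.g. rainfall, soil moisture

        def toy_model(x, theta):
            return theta[0] * x[:, 0] + theta[1] * np.sqrt(x[:, 1])

        # offline MC: sample parameter sets, collect an ensemble of outputs per input
        ens = np.stack([toy_model(X, rng.normal([1.0, 2.0], 0.3))
                        for _ in range(200)])        # shape (n_samples, n_points)
        q05, q95 = np.percentile(ens, [5, 95], axis=0)

        # encapsulate: one regressor per uncertainty bound
        lo = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000).fit(X, q05)
        hi = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000).fit(X, q95)
        x_new = np.array([[4.0, 7.0]])
        print(lo.predict(x_new), hi.predict(x_new))  # fast online uncertainty bounds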

  13. Lateral-Directional Parameter Estimation on the X-48B Aircraft Using an Abstracted, Multi-Objective Effector Model

    NASA Technical Reports Server (NTRS)

    Ratnayake, Nalin A.; Waggoner, Erin R.; Taylor, Brian R.

    2011-01-01

    The problem of parameter estimation on hybrid-wing-body aircraft is complicated by the fact that many design candidates for such aircraft involve a large number of aerodynamic control effectors that act in coplanar motion. This adds to the complexity already present in the parameter estimation problem for any aircraft with a closed-loop control system. Decorrelation of flight and simulation data must be performed in order to ascertain individual surface derivatives with any sort of mathematical confidence. Non-standard control surface configurations, such as clamshell surfaces and drag-rudder modes, further complicate the modeling task. In this paper, time-decorrelation techniques are applied to a model structure selected through stepwise regression for simulated and flight-generated lateral-directional parameter estimation data. A virtual effector model that uses mathematical abstractions to describe the multi-axis effects of clamshell surfaces is developed and applied. Comparisons are made between time history reconstructions and observed data in order to assess the accuracy of the regression model. The Cramér-Rao lower bounds of the estimated parameters are used to assess the uncertainty of the regression model relative to alternative models. Stepwise regression was found to be a useful technique for lateral-directional model design for hybrid-wing-body aircraft, as suggested by available flight data. Based on the results of this study, linear regression parameter estimation methods using abstracted effectors are expected to perform well for hybrid-wing-body aircraft properly equipped for the task.

  14. Dynamic modelling and analysis of multi-machine power systems including wind farms

    NASA Astrophysics Data System (ADS)

    Tabesh, Ahmadreza

    2005-11-01

    This thesis introduces a small-signal dynamic model, based on a frequency response approach, for the analysis of a multi-machine power system, with special focus on an induction machine based wind farm. The proposed approach is an alternative to the conventional eigenvalue analysis method which is widely employed for small-signal dynamic analyses of power systems. The proposed modelling approach is successfully applied and evaluated for a power system that includes (i) multiple synchronous generators and (ii) a wind farm based on either fixed-speed, variable-speed, or doubly-fed induction machine based wind energy conversion units. The salient features of the proposed method, as compared with the conventional eigenvalue analysis method, are: (i) computational efficiency, since the proposed method utilizes the open-loop transfer-function matrix of the system, (ii) performance indices that are obtainable from frequency response data and quantitatively describe the dynamic behavior of the system, and (iii) the capability to formulate various wind energy conversion units within a wind farm in a modular form. The developed small-signal dynamic model is applied to a set of multi-machine study systems and the results are validated based on comparison (i) with digital time-domain simulation results obtained from the PSCAD/EMTDC software tool, and (ii) where applicable, with eigenvalue analysis results.

  15. Ghosts in the Machine. Interoceptive Modeling for Chronic Pain Treatment.

    PubMed

    Di Lernia, Daniele; Serino, Silvia; Cipresso, Pietro; Riva, Giuseppe

    2016-01-01

    Pain is a complex and multidimensional perception, embodied in our daily experiences through interoceptive appraisal processes. The article reviews the recent literature about interoception along with predictive coding theories and tries to explain a missing link between the sense of the physiological condition of the entire body and the perception of pain in chronic conditions, which are characterized by interoceptive deficits. Understanding chronic pain from an interoceptive point of view allows us to better comprehend the multidimensional nature of this specific organic information, integrating the input of several sources from Gifford's Mature Organism Model to Melzack's neuromatrix. The article proposes the concept of residual interoceptive images (ghosts), to explain the diffuse multilevel nature of chronic pain perceptions. Lastly, we introduce a treatment concept, forged upon the possibility to modify the interoceptive chronic representation of pain through external input in a process that we call interoceptive modeling, with the ultimate goal of reducing pain in chronic subjects. PMID:27445681

  16. Ghosts in the Machine. Interoceptive Modeling for Chronic Pain Treatment

    PubMed Central

    Di Lernia, Daniele; Serino, Silvia; Cipresso, Pietro; Riva, Giuseppe

    2016-01-01

    Pain is a complex and multidimensional perception, embodied in our daily experiences through interoceptive appraisal processes. The article reviews the recent literature about interoception along with predictive coding theories and tries to explain a missing link between the sense of the physiological condition of the entire body and the perception of pain in chronic conditions, which are characterized by interoceptive deficits. Understanding chronic pain from an interoceptive point of view allows us to better comprehend the multidimensional nature of this specific organic information, integrating the input of several sources from Gifford's Mature Organism Model to Melzack's neuromatrix. The article proposes the concept of residual interoceptive images (ghosts), to explain the diffuse multilevel nature of chronic pain perceptions. Lastly, we introduce a treatment concept, forged upon the possibility to modify the interoceptive chronic representation of pain through external input in a process that we call interoceptive modeling, with the ultimate goal of reducing pain in chronic subjects. PMID:27445681

  17. Modeling aspects of estuarine eutrophication. (Latest citations from the Selected Water Resources Abstracts database). Published Search

    SciTech Connect

    Not Available

    1993-05-01

    The bibliography contains citations concerning mathematical modeling of existing water quality stresses in estuaries, harbors, bays, and coves. Both physical hydraulic and numerical models for estuarine circulation are discussed. (Contains a minimum of 96 citations and includes a subject term index and title list.)

  18. ShrinkWrap: 3D model abstraction for remote sensing simulation

    SciTech Connect

    Pope, Paul A

    2009-01-01

    Remote sensing simulations often require the use of 3D models of objects of interest. There are a multitude of these models available from various commercial sources. There are image processing, computational, database storage, and data access advantages to having a regularized, encapsulating, triangular mesh representing the surface of a 3D object model. However, this is usually not how these models are stored. They can have too much detail in some areas, and not enough detail in others. They can have a mix of planar geometric primitives (triangles, quadrilaterals, n-sided polygons) representing not only the surface of the model, but also interior features. And the exterior mesh is usually neither regularized nor encapsulating. This paper presents a method called SHRINKWRAP which can be used to process 3D object models to achieve output models having the aforementioned desirable traits. The method works by collapsing an encapsulating sphere, which has a regularized triangular mesh on its surface, onto the surface of the model. A GUI has been developed to make it easy to leverage this capability. The SHRINKWRAP processing chain and use of the GUI are described and illustrated.

  19. Fractured rock hydrogeology: Modeling studies. (Latest citations from the Selected Water Resources Abstracts database). Published Search

    SciTech Connect

    Not Available

    1993-07-01

    The bibliography contains citations concerning the use of mathematical and conceptual models in describing the hydraulic parameters of fluid flow in fractured rock. Topics include the use of tracers, solute and mass transport studies, and slug test analyses. The use of modeling techniques in injection well performance prediction is also discussed. (Contains 250 citations and includes a subject term index and title list.)

  20. Modeling of the flow stress for AISI H13 Tool Steel during Hard Machining Processes

    NASA Astrophysics Data System (ADS)

    Umbrello, Domenico; Rizzuti, Stefania; Outeiro, José C.; Shivpuri, Rajiv

    2007-04-01

    In general, the flow stress models used in computer simulation of machining processes are a function of effective strain, effective strain rate and temperature developed during the cutting process. However, these models do not adequately describe the material behavior in hard machining, where a range of material hardness between 45 and 60 HRC is used. Thus, depending on the specific material hardness, different material models must be used in modeling the cutting process. This paper describes the development of hardness-based flow stress and fracture models for the AISI H13 tool steel, which can be applied over the range of material hardness mentioned above. These models were implemented in a non-isothermal viscoplastic numerical model to simulate the machining process for AISI H13 with various hardness values and applying different cutting regime parameters. Predicted results are validated by comparing them with experimental results found in the literature. They are found to predict reasonably well the cutting forces as well as the change in chip morphology from continuous to segmented chip as the material hardness changes.

  1. Modeling of the flow stress for AISI H13 Tool Steel during Hard Machining Processes

    SciTech Connect

    Umbrello, Domenico; Rizzuti, Stefania; Outeiro, Jose C.; Shivpuri, Rajiv

    2007-04-07

    In general, the flow stress models used in computer simulation of machining processes are a function of effective strain, effective strain rate and temperature developed during the cutting process. However, these models do not adequately describe the material behavior in hard machining, where a range of material hardness between 45 and 60 HRC is used. Thus, depending on the specific material hardness, different material models must be used in modeling the cutting process. This paper describes the development of hardness-based flow stress and fracture models for the AISI H13 tool steel, which can be applied over the range of material hardness mentioned above. These models were implemented in a non-isothermal viscoplastic numerical model to simulate the machining process for AISI H13 with various hardness values and applying different cutting regime parameters. Predicted results are validated by comparing them with experimental results found in the literature. They are found to predict reasonably well the cutting forces as well as the change in chip morphology from continuous to segmented chip as the material hardness changes.
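    For readers unfamiliar with the form such models take, here is a generic Johnson-Cook-style flow stress sketch with an invented hardness-dependent leading term; the constants and the hardness scaling are illustrative placeholders, not the authors' fitted model:

        import numpy as np

        def flow_stress(strain, strain_rate, T, HRC,
                        A=715.0, B=329.0, n=0.28, C=0.03, m=1.5,
                        T_room=293.0, T_melt=1760.0, rate0=1.0):
            """Johnson-Cook-style flow stress [MPa] with a toy hardness scaling."""
            hardness_factor = 1.0 + 0.01 * (HRC - 45.0)   # invented dependence
            return (hardness_factor * (A + B * strain**n)
                    * (1.0 + C * np.log(strain_rate / rate0))
                    * (1.0 - ((T - T_room) / (T_melt - T_room))**m))

        # Example: flow stress at 52 HRC under typical cutting-zone conditions.
        print(flow_stress(strain=0.5, strain_rate=1e4, T=800.0, HRC=52))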

  2. A tool for urban soundscape evaluation applying Support Vector Machines for developing a soundscape classification model.

    PubMed

    Torija, Antonio J; Ruiz, Diego P; Ramos-Ridao, Angel F

    2014-06-01

    To ensure appropriate soundscape management in urban environments, the urban-planning authorities need a range of tools that enable such a task to be performed. An essential step during the management of urban areas from a sound standpoint should be the evaluation of the soundscape in such an area. It has been widely acknowledged that a subjective and acoustical categorization of a soundscape is the first step to evaluate it, providing a basis for designing or adapting it to match people's expectations as well. Accordingly, this work proposes a model for the automatic classification of urban soundscapes based on underlying acoustical and perceptual criteria, intended to serve as a tool for comprehensive urban soundscape evaluation. Because of the great complexity associated with the problem, two machine learning techniques, Support Vector Machines (SVM) and Support Vector Machines trained with Sequential Minimal Optimization (SMO), are implemented in developing the classification model. The results indicate that the SMO model outperforms the SVM model in the specific task of soundscape classification. With the implementation of the SMO algorithm, the classification model achieves an outstanding performance (91.3% of instances correctly classified). PMID:24007752
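    A minimal sketch of this kind of classifier using scikit-learn's SVC (whose libsvm backend is itself SMO-based); the feature columns and labels below are synthetic placeholders, not the study's acoustical and perceptual descriptors:

        import numpy as np
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        # Columns stand in for e.g. LAeq, loudness, sharpness, event rate.
        X = rng.normal(size=(200, 4))
        y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)  # two soundscape classes

        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
        scores = cross_val_score(clf, X, y, cv=5)
        print("cross-validated accuracy: %.3f" % scores.mean())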

  3. Improved quality prediction model for multistage machining process based on geometric constraint equation

    NASA Astrophysics Data System (ADS)

    Zhu, Limin; He, Gaiyun; Song, Zhanjie

    2016-03-01

    Product variation reduction is critical to improving process efficiency and product quality, especially for the multistage machining process (MMP). However, due to variation accumulation and propagation, it becomes quite difficult to predict and reduce product variation for MMP. While statistical process control can be used to control product quality, it is used mainly to monitor the process change rather than to analyze the cause of product variation. In this paper, based on a differential description of the contact kinematics of locators and part surfaces, and on the geometric constraint equation defined by the locating scheme, an improved analytical variation propagation model for MMP is presented, in which the influence of both locator position and machining error on part quality is considered, whereas traditional models usually focus on datum error and fixture error. Coordinate transformation theory is used to reflect the generation and transmission laws of error in the establishment of the model. The concept of the deviation matrix is heavily applied to establish an explicit mapping between the geometric deviation of the part and the process error sources. In each machining stage, the part deviation is formulated as three separate components corresponding to three different kinds of error sources, which can be further applied to fault identification and design optimization for complicated machining processes. An example part for MMP is given to validate the effectiveness of the methodology. The experiment results show that the model prediction and the actual measurement match well. This paper provides a method to predict part deviation under the influence of fixture error, datum error and machining error, and it enriches the means of quality prediction for MMP.

  4. Analytical modeling of a new disc permanent magnet linear synchronous machine for electric vehicles

    SciTech Connect

    Liu, C.T.; Chen, J.W.; Su, K.S.

    1999-09-01

    This paper develops an analytical approach based on a qd0 reference frame model to analyze dynamic and steady state characteristics of disc permanent magnet linear synchronous machines (DPMLSMs). The established compact mathematical model can be more easily employed to analyze the system behavior and to design the controller. Superiority in operational electromagnetic characteristics of the proposed DPMLSM for electric vehicle (EV) applications is verified by both numerical simulations and experimental investigations.
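    For readers unfamiliar with the qd0 reference frame mentioned above, a minimal sketch of one common textbook form of the Park transformation (sign and axis conventions vary between authors):

        import numpy as np

        def park(abc, theta):
            """Transform phase quantities [a, b, c] to [q, d, 0] at rotor angle theta."""
            k = 2.0 / 3.0
            T = k * np.array([
                [np.cos(theta), np.cos(theta - 2*np.pi/3), np.cos(theta + 2*np.pi/3)],
                [np.sin(theta), np.sin(theta - 2*np.pi/3), np.sin(theta + 2*np.pi/3)],
                [0.5,           0.5,                       0.5],
            ])
            return T @ np.asarray(abc)

        # Balanced sinusoidal currents map to (nearly) constant qd components.
        t = 0.4
        i_abc = [np.cos(50*t), np.cos(50*t - 2*np.pi/3), np.cos(50*t + 2*np.pi/3)]
        print(park(i_abc, 50*t))   # ~[1, 0, 0]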

  5. Product Model for Integrated Machining and Inspection Process Planning

    NASA Astrophysics Data System (ADS)

    Gutiérrez Rubert, S.; Bruscas Bellido, G. M.; Rosado Castellano, P.; Romero Subirón, F.

    2009-11-01

    In the product-process development closed-loop an integrated product and process plan model is essential for structuring and interchanging data and information. Many of the currently existing standards (STEP) provide an appropriate solution for the different stages of the closed-loop using a clear feature-based approach. However, inspection planning is not undertaken in the same manner and detailed inspection (measurement) planning is performed directly. In order to carry out inspection planning, that is both integrated and at the same level as process planning, the Inspection Feature (InspF) is proposed here, which is directly related with product and process functionality. The proposal includes an InspF library that makes it possible part interpretation from an inspection point of view, while also providing alternatives and not being restricted to the use of just one single type of measurement equipment.

  6. Analytical Modeling of a Novel Transverse Flux Machine for Direct Drive Wind Turbine Applications

    SciTech Connect

    Hasan, IIftekhar; Husain, Tausif; Uddin, Md Wasi; Sozer, Yilmaz; Husain, Iqbal; Muljadi, Eduard

    2015-09-02

    This paper presents a nonlinear analytical model of a novel double-sided flux concentrating Transverse Flux Machine (TFM) based on the Magnetic Equivalent Circuit (MEC) model. The analytical model uses a series-parallel combination of flux tubes to predict the flux paths through different parts of the machine including air gaps, permanent magnets (PM), stator, and rotor. The two-dimensional MEC model approximates the complex three-dimensional flux paths of the TFM and includes the effects of magnetic saturation. The model is capable of adapting to any geometry, which makes it a good alternative for evaluating prospective designs of the TFM as compared to finite element solvers, which are numerically intensive and require more computation time. A single-phase, 1 kW, 400 rpm machine is analytically modeled, and its resulting flux distribution, no-load EMF and torque are verified with Finite Element Analysis (FEA). The results are found to be in agreement with less than 5% error, while reducing the computation time by 25 times.

  7. Analytical Modeling of a Novel Transverse Flux Machine for Direct Drive Wind Turbine Applications: Preprint

    SciTech Connect

    Hasan, IIftekhar; Husain, Tausif; Uddin, Md Wasi; Sozer, Yilmaz; Husain, Iqbal; Muljadi, Eduard

    2015-08-24

    This paper presents a nonlinear analytical model of a novel double-sided flux concentrating Transverse Flux Machine (TFM) based on the Magnetic Equivalent Circuit (MEC) model. The analytical model uses a series-parallel combination of flux tubes to predict the flux paths through different parts of the machine including air gaps, permanent magnets, stator, and rotor. The two-dimensional MEC model approximates the complex three-dimensional flux paths of the TFM and includes the effects of magnetic saturation. The model is capable of adapting to any geometry that makes it a good alternative for evaluating prospective designs of TFM compared to finite element solvers that are numerically intensive and require more computation time. A single-phase, 1-kW, 400-rpm machine is analytically modeled, and its resulting flux distribution, no-load EMF, and torque are verified with finite element analysis. The results are found to be in agreement, with less than 5% error, while reducing the computation time by 25 times.
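    A deliberately tiny magnetic-equivalent-circuit sketch in the spirit of the flux-tube network described above, not the paper's model: a permanent-magnet MMF source drives flux through a leakage branch and a series gap-plus-core branch, solved by nodal analysis with invented permeance values (and without the saturation iteration a real MEC would need):

        import numpy as np

        mu0 = 4e-7 * np.pi

        def permeance(area, length, mu_r=1.0):
            """Permeance of a straight flux tube: P = mu * A / l."""
            return mu_r * mu0 * area / length

        P_gap  = permeance(4e-4, 1e-3)               # air-gap tube
        P_core = permeance(4e-4, 5e-2, mu_r=2000)    # iron path (kept linear here)
        P_leak = permeance(1e-4, 2e-2)               # PM leakage path
        P_pm   = permeance(4e-4, 3e-3, mu_r=1.05)    # magnet's own permeance
        F_pm   = 800.0                               # magnet MMF source, ampere-turns

        P_s = 1.0 / (1.0 / P_gap + 1.0 / P_core)     # gap and core in series

        # One unknown node MMF u: flux from magnet = flux into leakage + series path.
        u = P_pm * F_pm / (P_pm + P_leak + P_s)
        print("air-gap flux [Wb]:", P_s * u)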

  8. Modeling and optimizing electrodischarge machine process (EDM) with an approach based on genetic algorithm

    NASA Astrophysics Data System (ADS)

    Zabbah, Iman

    2011-12-01

    Electro-discharge machining (EDM) is the most common nontraditional production method for forming metals and non-oxide ceramics. Increasing smoothness, increasing the removal of filings, and decreasing proportional tool erosion play an important role in this machining, and all are directly related to the choice of input parameters. The complicated and non-linear nature of EDM has made modeling the process impossible with the usual classical methods. So far, several intelligence-based methods have been used to optimize this process, foremost among them artificial neural networks, which model the process as a black box. The difficulty with this kind of machining is seen when a workpiece is composed of a collection of carbon-based materials such as silicon carbide. In this article, besides using the new mono-pulse EDM technique, we design a fuzzy neural network and model the process with it. Then a genetic algorithm is used to find the optimal machine inputs. In our research, the workpiece is a non-oxide ceramic, silicon carbide, which makes the control process more difficult. Finally, the results are compared with those of previous methods.

  9. Modeling and optimizing electrodischarge machine process (EDM) with an approach based on genetic algorithm

    NASA Astrophysics Data System (ADS)

    Zabbah, Iman

    2012-01-01

    Electro-discharge machining (EDM) is the most common nontraditional production method for forming metals and non-oxide ceramics. Increasing smoothness, increasing the removal of filings, and decreasing proportional tool erosion play an important role in this machining, and all are directly related to the choice of input parameters. The complicated and non-linear nature of EDM has made modeling the process impossible with the usual classical methods. So far, several intelligence-based methods have been used to optimize this process, foremost among them artificial neural networks, which model the process as a black box. The difficulty with this kind of machining is seen when a workpiece is composed of a collection of carbon-based materials such as silicon carbide. In this article, besides using the new mono-pulse EDM technique, we design a fuzzy neural network and model the process with it. Then a genetic algorithm is used to find the optimal machine inputs. In our research, the workpiece is a non-oxide ceramic, silicon carbide, which makes the control process more difficult. Finally, the results are compared with those of previous methods.
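    A generic sketch of the optimization step described above: a simple genetic algorithm searching two machine inputs (current I and pulse on-time Ton) that maximize a fitness function. In the paper the fitness would come from the trained fuzzy neural network; the quadratic below is invented purely for illustration:

        import numpy as np

        rng = np.random.default_rng(1)
        LO, HI = np.array([1.0, 10.0]), np.array([20.0, 200.0])  # input bounds

        def fitness(x):                # placeholder for the FNN's prediction
            i, ton = x
            return -(i - 8.0)**2 - 0.001 * (ton - 90.0)**2       # peak at (8, 90)

        pop = rng.uniform(LO, HI, size=(30, 2))
        for gen in range(100):
            f = np.array([fitness(x) for x in pop])
            parents = pop[np.argsort(f)[-10:]]                   # truncation selection
            kids = []
            for _ in range(len(pop)):
                a, b = parents[rng.integers(10, size=2)]
                child = np.where(rng.random(2) < 0.5, a, b)      # uniform crossover
                child += rng.normal(scale=0.05 * (HI - LO))      # Gaussian mutation
                kids.append(np.clip(child, LO, HI))
            pop = np.array(kids)
        print("best inputs (I, Ton):", max(pop, key=fitness))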

  10. ABSTRACT: Design of Groundwater Monitoring Networks Considering Conceptual Model and Parametric Uncertainty

    SciTech Connect

    A. Hassan; H. Bekhit; Y. Zhang; J. Chapman

    2008-09-15

    Uncertainty built into conceptual groundwater flow and transport models and associated parametric uncertainty should be appropriately included when such models are used to develop detection monitoring networks for contaminated sites. We compare alternative approaches of propagating such uncertainty from the flow and transport model into the network design. The focus is on detection monitoring networks where the primary objective is to intercept the contaminant before it reaches a boundary of interest (e.g., compliance boundary). Different uncertainty propagation approaches identify different well locations and different well combinations (networks) as having the highest detection efficiency. It is thus recommended that multiple uncertainty propagation approaches be considered. If several approaches yield consistent results in terms of identifying the best performing candidate wells and the best performing well network for detecting a contaminant plume, this would provide confidence in the suitability of the selected well locations.

  11. Using Machine Learning to Create Turbine Performance Models (Presentation)

    SciTech Connect

    Clifton, A.

    2013-04-01

    Wind turbine power output is known to be a strong function of wind speed, but is also affected by turbulence and shear. In this work, new aerostructural simulations of a generic 1.5 MW turbine are used to explore atmospheric influences on power output. Most significant is the hub height wind speed, followed by hub height turbulence intensity and then wind speed shear across the rotor disk. These simulation data are used to train regression trees that predict the turbine response for any combination of wind speed, turbulence intensity, and wind shear that might be expected at a turbine site. For a randomly selected atmospheric condition, the accuracy of the regression tree power predictions is three times higher than that of the traditional power curve methodology. The regression tree method can also be applied to turbine test data and used to predict turbine performance at a new site. No data are required beyond those usually collected for a wind resource assessment. Implementing the method requires turbine manufacturers to create a turbine regression tree model from test site data. Such an approach could significantly reduce bias in power predictions that arise because of different turbulence and shear at the new site, compared to the test site.
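    A minimal sketch of the regression-tree idea, with scikit-learn standing in for the authors' implementation; the toy power response below is invented, not output from the 1.5 MW turbine simulations:

        import numpy as np
        from sklearn.tree import DecisionTreeRegressor

        rng = np.random.default_rng(0)
        speed = rng.uniform(3, 25, 5000)
        ti    = rng.uniform(0.05, 0.25, 5000)
        shear = rng.uniform(0.0, 0.4, 5000)
        # Toy kW response: cubic power curve derated by turbulence and shear.
        power = (np.clip((speed / 12)**3, 0, 1) * 1500
                 * (1 - 0.5 * ti) * (1 - 0.2 * shear)
                 + rng.normal(0, 20, 5000))

        X = np.column_stack([speed, ti, shear])
        tree = DecisionTreeRegressor(min_samples_leaf=20).fit(X, power)

        # Predict performance for one atmospheric condition at a new site.
        print(tree.predict([[9.5, 0.12, 0.2]]))   # kW estimate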

  12. A Simple Computational Model of a jellyfish-like flying machine

    NASA Astrophysics Data System (ADS)

    Fang, Fang; Ristroph, Leif; Shelley, Michael

    2013-11-01

    We explore theoretically the aerodynamics of a jellyfish-like flying machine recently fabricated at NYU. This experimental device achieves flight and hovering by opening and closing a set of flapping wings. It displays orientational flight stability without additional control surfaces or feedback control. Our model machine consists of two symmetric massless flapping wings connected to a body with mass and moment of inertia. A vortex sheet shedding and wake model is used for the flow simulation. Use of the Fast Multipole Method (FMM), and adaptive addition/deletion of vortices, allows us to simulate for long times and resolve complex wakes. We use our model to explore the physical parameters that maintain body hovering, its ascent and descent, and investigate the stability of these states.

  13. Modeling clinical judgment and implicit guideline compliance in the diagnosis of melanomas using machine learning.

    PubMed

    Sboner, Andrea; Aliferis, Constantin F

    2005-01-01

    We explore several machine learning techniques to model clinical decision making of 6 dermatologists in the clinical task of melanoma diagnosis of 177 pigmented skin lesions (76 malignant, 101 benign). In particular we apply Support Vector Machine (SVM) classifiers to model clinician judgments, Markov Blanket and SVM feature selection to eliminate clinical features that are effectively ignored by the dermatologists, and a novel explanation technique whereby regression tree induction is run on the reduced SVM model's output to explain the physicians' implicit patterns of decision making. Our main findings include: (a) clinician judgments can be accurately predicted, (b) subtle decision making rules are revealed enabling the explanation of differences of opinion among physicians, and (c) physician judgment is non-compliant with the diagnostic guidelines that physicians self-report as guiding their decision making. PMID:16779123

  14. The Academy for Community College Leadership Advancement, Innovation, and Modeling (ACCLAIM): Abstract.

    ERIC Educational Resources Information Center

    North Carolina State Univ., Raleigh. Academy for Community Coll. Leadership Advancement, Innovation, and Modeling.

    The Academy for Community College Leadership, Innovation, and Modeling (ACCLAIM) is a 3-year pilot project funded by the W. K. Kellogg Foundation, North Carolina State University (NCSU), and the community college systems of Maryland, Virginia, South Carolina, and North Carolina. ACCLAIM's purpose is to help the region's community colleges assume a…

  15. Law machines: scale models, forensic materiality and the making of modern patent law.

    PubMed

    Pottage, Alain

    2011-10-01

    Early US patent law was machine made. Before the Patent Office took on the function of examining patent applications in 1836, questions of novelty and priority were determined in court, within the forum of the infringement action. And at all levels of litigation, from the circuit courts up to the Supreme Court, working models were the media through which doctrine, evidence and argument were made legible, communicated and interpreted. A model could be set on a table, pointed at, picked up, rotated or upended so as to display a point of interest to a particular audience within the courtroom, and, crucially, set in motion to reveal the 'mode of operation' of a machine. The immediate object of demonstration was to distinguish the intangible invention from its tangible embodiment, but models also 'machined' patent law itself. Demonstrations of patent claims with models articulated and resolved a set of conceptual tensions that still make the definition and apprehension of the invention difficult, even today, but they resolved these tensions in the register of materiality, performativity and visibility, rather than the register of conceptuality. The story of models tells us something about how inventions emerge and subsist within the context of patent litigation and patent doctrine, and it offers a starting point for renewed reflection on the question of how technology becomes property. PMID:22164718

  16. Abstraction and art.

    PubMed Central

    Gortais, Bernard

    2003-01-01

    In a given social context, artistic creation comprises a set of processes, which relate to the activity of the artist and the activity of the spectator. Through these processes we see and understand that the world is vaster than it is said to be. Artistic processes are mediated experiences that open up the world. A successful work of art expresses a reality beyond actual reality: it suggests an unknown world using the means and the signs of the known world. Artistic practices incorporate the means of creation developed by science and technology and change forms as they change. Artists and the public follow different processes of abstraction at different levels, in the definition of the means of creation, of representation and of perception of a work of art. This paper examines how the processes of abstraction are used within the framework of the visual arts and abstract painting, which appeared during a period of growing importance for the processes of abstraction in science and technology, at the beginning of the twentieth century. The development of digital platforms and new man-machine interfaces allow multimedia creations. This is performed under the constraint of phases of multidisciplinary conceptualization using generic representation languages, which tend to abolish traditional frontiers between the arts: visual arts, drama, dance and music. PMID:12903659

  17. Modeling and predicting abstract concept or idea introduction and propagation through geopolitical groups

    NASA Astrophysics Data System (ADS)

    Jaenisch, Holger M.; Handley, James W.; Hicklen, Michael L.

    2007-04-01

    This paper describes a novel capability for modeling known idea propagation transformations and predicting responses to new ideas from geopolitical groups. Ideas are captured using semantic words that are text based and bear cognitive definitions. We demonstrate a unique algorithm for converting these into analytical predictive equations. Using the illustrative idea of "proposing a gasoline price increase of $1 per gallon from $2" and its changing perceived impact throughout 5 demographic groups, we identify 13 cost of living Diplomatic, Information, Military, and Economic (DIME) features common across all 5 demographic groups. This enables the modeling and monitoring of Political, Military, Economic, Social, Information, and Infrastructure (PMESII) effects of each group to this idea and how their "perception" of this proposal changes. Our algorithm and results are summarized in this paper.

  18. Fractured rock hydrogeology (excluding modeling). (Latest citations from the Selected Water Resources abstracts database). Published Search

    SciTech Connect

    Not Available

    1994-01-01

    The bibliography contains citations concerning the nature and occurrence of groundwater in fractured crystalline and sedimentary rocks. Techniques for determining connectivity and hydraulic conductivity, pollutant distribution in fractures, and site studies in specific geologic environments are among the topics discussed. Citations pertaining to modeling studies of fractured rock hydrogeology are addressed in a separate bibliography. (Contains a minimum of 62 citations and includes a subject term index and title list.)

  19. Fractured rock hydrogeology (excluding modeling). (Latest citations from the Selected Water Resources Abstracts database). Published Search

    SciTech Connect

    Not Available

    1992-11-01

    The bibliography contains citations concerning the nature and occurrence of groundwater in fractured crystalline and sedimentary rocks. Techniques for determining connectivity and hydraulic conductivity, pollutant distribution in fractures, and site studies in specific geologic environments are among the topics discussed. Citations pertaining to modeling studies of fractured rock hydrogeology are addressed in a separate bibliography. (Contains a minimum of 54 citations and includes a subject term index and title list.)

  20. A paradigm for data-driven predictive modeling using field inversion and machine learning

    NASA Astrophysics Data System (ADS)

    Parish, Eric J.; Duraisamy, Karthik

    2016-01-01

    We propose a modeling paradigm, termed field inversion and machine learning (FIML), that seeks to comprehensively harness data from sources such as high-fidelity simulations and experiments to aid the creation of improved closure models for computational physics applications. In contrast to inferring model parameters, this work uses inverse modeling to obtain corrective, spatially distributed functional terms, offering a route to directly address model-form errors. Once the inference has been performed over a number of problems that are representative of the deficient physics in the closure model, machine learning techniques are used to reconstruct the model corrections in terms of variables that appear in the closure model. These reconstructed functional forms are then used to augment the closure model in a predictive computational setting. As a first demonstrative example, a scalar ordinary differential equation is considered, wherein the model equation has missing and deficient terms. Following this, the methodology is extended to the prediction of turbulent channel flow. In both of these applications, the approach is demonstrated to be able to successfully reconstruct functional corrections and yield accurate predictive solutions while providing a measure of model form uncertainties.
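    A toy caricature of the FIML loop on a scalar ODE, in the spirit of the paper's first demonstration: a corrective term is inferred from data (here it can be read off directly; in the paper it comes from a formal inverse problem), a regressor is trained to reconstruct it from the local state, and the augmented model is then used predictively. All functions and constants below are invented:

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        # "Truth": dy/dt = -y - 0.3*y**2; the deficient closure model: dy/dt = -y.
        def truth_rhs(y): return -y - 0.3 * y**2
        def base_rhs(y):  return -y

        dt, n = 0.01, 800
        y_true = np.empty(n); y_true[0] = 2.0
        for i in range(n - 1):                       # forward-Euler "experiment"
            y_true[i + 1] = y_true[i] + dt * truth_rhs(y_true[i])

        # Field inversion: the spatially (here temporally) distributed corrective
        # term delta that reconciles the base model with the observed derivative.
        delta = np.gradient(y_true, dt) - base_rhs(y_true)

        # Machine learning: reconstruct delta as a function of the local state y.
        ml = RandomForestRegressor(n_estimators=50, random_state=0)
        ml.fit(y_true.reshape(-1, 1), delta)

        # Predictive setting: integrate the augmented model from a new start.
        y = 1.5
        for _ in range(n - 1):
            y += dt * (base_rhs(y) + ml.predict([[y]])[0])
        print("augmented-model endpoint:", y)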

  1. Machine Learning Techniques for Combining Multi-Model Climate Projections (Invited)

    NASA Astrophysics Data System (ADS)

    Monteleoni, C.

    2013-12-01

    The threat of climate change is one of the greatest challenges currently facing society. Given the profound impact machine learning has made on the natural sciences to which it has been applied, such as the field of bioinformatics, machine learning is poised to accelerate discovery in climate science. Recent advances in the fledgling field of climate informatics have demonstrated the promise of machine learning techniques for problems in climate science. A key problem in climate science is how to combine the projections of the multi-model ensemble of global climate models that inform the Intergovernmental Panel on Climate Change (IPCC). I will present three approaches to this problem. Our Tracking Climate Models (TCM) work demonstrated the promise of an algorithm for online learning with expert advice, for this task. Given temperature projections and hindcasts from 20 IPCC global climate models, and over 100 years of historical temperature data, TCM generated predictions that tracked the changing sequence of which model currently predicts best. On historical data, at both annual and monthly time-scales, and in future simulations, TCM consistently outperformed the average over climate models, the existing benchmark in climate science, at both global and continental scales. We then extended TCM to take into account climate model projections at higher spatial resolutions, and to model geospatial neighborhood influence between regions. Our second algorithm enables neighborhood influence by modifying the transition dynamics of the Hidden Markov Model from which TCM is derived, allowing the performance of spatial neighbors to influence the temporal switching probabilities for the best climate model at a given location. We recently applied a third technique, sparse matrix completion, in which we create a sparse (incomplete) matrix from climate model projections/hindcasts and observed temperature data, and apply a matrix completion algorithm to recover it, yielding
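    A minimal sketch of the "learning with expert advice" machinery underlying this kind of approach (not the authors' TCM algorithm): exponentially weighted averaging over a hypothetical ensemble, with weights updated by each model's squared error on the newest observation:

        import numpy as np

        rng = np.random.default_rng(0)
        T, n_models = 200, 5
        preds = rng.normal(size=(T, n_models))        # stand-in model projections
        truth = preds[:, 2] + rng.normal(0, 0.1, T)   # model 2 happens to track truth

        eta = 2.0                                     # learning rate
        w = np.ones(n_models) / n_models
        for t in range(T):
            combined = w @ preds[t]                   # weighted ensemble forecast
            losses = (preds[t] - truth[t])**2
            w *= np.exp(-eta * losses)                # downweight poor experts
            w /= w.sum()
        print("final weights:", np.round(w, 3))       # mass concentrates on model 2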

  2. Abstract: Inference and Interval Estimation for Indirect Effects With Latent Variable Models.

    PubMed

    Falk, Carl F; Biesanz, Jeremy C

    2011-11-30

    Models specifying indirect effects (or mediation) and structural equation modeling are both popular in the social sciences. Yet relatively little research has compared methods that test for indirect effects among latent variables and provided precise estimates of the effectiveness of different methods. This simulation study provides an extensive comparison of methods for constructing confidence intervals and for making inferences about indirect effects with latent variables. We compared the percentile (PC) bootstrap, bias-corrected (BC) bootstrap, bias-corrected accelerated (BCa) bootstrap, likelihood-based confidence intervals (Neale & Miller, 1997), partial posterior predictive (Biesanz, Falk, and Savalei, 2010), and joint significance tests based on Wald tests or likelihood ratio tests. All models included three reflective latent variables representing the independent, dependent, and mediating variables. The design included the following fully crossed conditions: (a) sample size: 100, 200, and 500; (b) number of indicators per latent variable: 3 versus 5; (c) reliability per set of indicators: .7 versus .9; (d) and 16 different path combinations for the indirect effect (α = 0, .14, .39, or .59; and β = 0, .14, .39, or .59). Simulations were performed using a WestGrid cluster of 1680 3.06GHz Intel Xeon processors running R and OpenMx. Results based on 1,000 replications per cell and 2,000 resamples per bootstrap method indicated that the BC and BCa bootstrap methods have inflated Type I error rates. Likelihood-based confidence intervals and the PC bootstrap emerged as methods that adequately control Type I error and have good coverage rates. PMID:26736127
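    A compact sketch of the percentile-bootstrap interval for an indirect effect ab, shown here on observed variables for brevity; the study's latent-variable versions additionally fit a structural equation model on each resample:

        import numpy as np

        rng = np.random.default_rng(0)
        n = 200
        x = rng.normal(size=n)
        m = 0.39 * x + rng.normal(size=n)              # mediator (alpha = .39)
        y = 0.39 * m + rng.normal(size=n)              # outcome  (beta  = .39)

        def indirect(xi, mi, yi):
            a = np.polyfit(xi, mi, 1)[0]               # slope of m ~ x
            X = np.column_stack([mi, xi, np.ones_like(xi)])
            b = np.linalg.lstsq(X, yi, rcond=None)[0][0]   # slope of y ~ m (+ x)
            return a * b

        boot = np.empty(2000)
        for i in range(2000):                          # resample cases with replacement
            idx = rng.integers(n, size=n)
            boot[i] = indirect(x[idx], m[idx], y[idx])
        lo, hi = np.percentile(boot, [2.5, 97.5])
        print("ab = %.3f, 95%% PC interval (%.3f, %.3f)"
              % (indirect(x, m, y), lo, hi))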

  3. Abstraction of mechanistic sorption model results for performance assessment calculations at Yucca Mountain, Nevada

    SciTech Connect

    Turner, D.R.; Pabalan, R.T. )

    1999-01-01

    Sorption onto minerals in the geologic setting may help to mitigate potential radionuclide transport from the proposed high-level radioactive waste repository at Yucca Mountain (YM), Nevada. An approach is developed for including aspects of more mechanistic sorption models into current probabilistic performance assessment (PA) calculations. Data on water chemistry from the vicinity of YM are screened and used to calculate the ranges in parameters that could exert control on radionuclide sorption behavior. Using a diffuse-layer surface complexation model, sorption parameters for Np(V) and U(VI) are calculated based on the chemistry of each water sample. Model results suggest that log normal probability distribution functions (PDFs) of sorption parameters are appropriate for most of the samples, but the calculated range is almost five orders of magnitude for Np(V) sorption and nine orders of magnitude for U(VI) sorption. Calculated sorption parameters may also vary at a single sample location by almost a factor of 10 over time periods of the order of days to years due to changes in chemistry, although sampling and analytical methodologies may introduce artifacts that add uncertainty to the evaluation of these fluctuations. Finally, correlation coefficients between the calculated Np(V) and U(VI) sorption parameters can be included as input into PA sampling routines, so that the value selected for one radionuclide sorption parameter is conditioned by its statistical relationship to the others. The approaches outlined here can be adapted readily to current PA efforts, using site-specific information to provide geochemical constraints on PDFs for radionuclide transport parameters.

  4. Abstraction of mechanistic sorption model results for performance assessment calculations at Yucca Mountain, Nevada

    SciTech Connect

    Turner, D.R.; Pabalan, R.T.

    1999-11-01

    Sorption onto minerals in the geologic setting may help to mitigate potential radionuclide transport from the proposed high-level radioactive waste repository at Yucca Mountain (YM), Nevada. An approach is developed for including aspects of more mechanistic sorption models into current probabilistic performance assessment (PA) calculations. Data on water chemistry from the vicinity of YM are screened and used to calculate the ranges in parameters that could exert control on radionuclide sorption behavior. Using a diffuse-layer surface complexation model, sorption parameters for Np(V) and U(VI) are calculated based on the chemistry of each water sample. Model results suggest that log normal probability distribution functions (PDFs) of sorption parameters are appropriate for most of the samples, but the calculated range is almost five orders of magnitude for Np(V) sorption and nine orders of magnitude for U(VI) sorption. Calculated sorption parameters may also vary at a single sample location by almost a factor of 10 over time periods of the order of days to years due to changes in chemistry, although sampling and analytical methodologies may introduce artifacts that add uncertainty to the evaluation of these fluctuations. Finally, correlation coefficients between the calculated Np(V) and U(VI) sorption parameters can be included as input into PA sampling routines, so that the value selected for one radionuclide sorption parameter is conditioned by its statistical relationship to the others. The approaches outlined here can be adapted readily to current PA efforts, using site-specific information to provide geochemical constraints on PDFs for radionuclide transport parameters.

  5. Modeling Physical Processes at the Nanoscale—Insight into Self-Organization of Small Systems (abstract)

    NASA Astrophysics Data System (ADS)

    Proykova, Ana

    2009-04-01

    Essential contributions have been made in the field of finite-size systems of ingredients interacting with potentials of various ranges. Theoretical simulations have revealed peculiar size effects on stability, ground state structure, phases, and phase transformation of systems confined in space and time. Models developed in the field of pure physics (atomic and molecular clusters) have been extended and successfully transferred to finite-size systems that seem very different—small-scale financial markets, autoimmune reactions, and social group reactions to advertisements. The models show that small-scale markets diverge unexpectedly fast as a result of small fluctuations; autoimmune reactions are sequences of two discontinuous phase transitions; and social groups possess critical behavior (social percolation) under the influence of an external field (advertisement). Some predicted size-dependent properties have been experimentally observed. These findings lead to the hypothesis that restrictions on an object's size determine the object's total internal (configuration) and external (environmental) interactions. Since phases are emergent phenomena produced by self-organization of a large number of particles, the occurrence of a phase in a system containing a small number of ingredients is remarkable.

  6. A study of sound transmission in an abstract middle ear using physical and finite element models.

    PubMed

    Gonzalez-Herrera, Antonio; Olson, Elizabeth S

    2015-11-01

    The classical picture of middle ear (ME) transmission has the tympanic membrane (TM) as a piston and the ME cavity as a vacuum. In reality, the TM moves in a complex multiphasic pattern and substantial pressure is radiated into the ME cavity by the motion of the TM. This study explores ME transmission with a simple model, using a tube terminated with a plastic membrane. Membrane motion was measured with a laser interferometer and pressure on both sides of the membrane with micro-sensors that could be positioned close to the membrane without disturbance. A finite element model of the system explored the experimental results. Both experimental and theoretical results show resonances that are in some cases primarily acoustical or mechanical and sometimes produced by coupled acousto-mechanics. The largest membrane motions were a result of the membrane's mechanical resonances. At these resonant frequencies, sound transmission through the system was larger with the membrane in place than it was when the membrane was absent. PMID:26627771

  7. Learning with Technology: Video Modeling with Concrete-Representational-Abstract Sequencing for Students with Autism Spectrum Disorder.

    PubMed

    Yakubova, Gulnoza; Hughes, Elizabeth M; Shinaberry, Megan

    2016-07-01

    The purpose of this study was to determine the effectiveness of a video modeling intervention with concrete-representational-abstract instructional sequence in teaching mathematics concepts to students with autism spectrum disorder (ASD). A multiple baseline across skills design of single-case experimental methodology was used to determine the effectiveness of the intervention on the acquisition and maintenance of addition, subtraction, and number comparison skills for four elementary school students with ASD. Findings supported the effectiveness of the intervention in improving skill acquisition and maintenance at a 3-week follow-up. Implications for practice and future research are discussed. PMID:26983919

  8. Kinematic modeling and verification of an articulated arm coordinate measuring machine

    NASA Astrophysics Data System (ADS)

    Zhang, Huaishan; Gao, Guanbin; Wang, Wen; Na, Jing; Wu, Xing

    2016-01-01

    The articulated arm coordinate measuring machine (AACMM) is a new type of non-orthogonal coordinate measuring machine (CMM). Unlike the traditional orthogonal CMM, which has three linear guides, the AACMM is composed of a series of linkages connected by rotating joints. Firstly, the coordinate systems of the AACMM are established according to the D-H method, and the homogeneous transformation matrices from the probe to the base of the AACMM are derived. A graphic simulation system for the AACMM is built in Matlab, which qualitatively verifies the magnitude and direction of the joint angles. Then, the data acquisition software of the AACMM is compiled in Visual C++, and a statistical analysis of the calculated measuring coordinates against the actual coordinates indicates that the kinematic model of the AACMM is correct. The kinematic model provides a basis for measurement, calibration and error compensation of the AACMM.
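    A minimal sketch of the D-H forward kinematics referred to above: each joint contributes one homogeneous link transform, and the probe pose is their product. The three-joint parameter table is invented, not a real AACMM's geometry:

        import numpy as np

        def dh(theta, d, a, alpha):
            """Standard Denavit-Hartenberg link transform (4x4 homogeneous matrix)."""
            ct, st = np.cos(theta), np.sin(theta)
            ca, sa = np.cos(alpha), np.sin(alpha)
            return np.array([
                [ct, -st * ca,  st * sa, a * ct],
                [st,  ct * ca, -ct * sa, a * st],
                [0.0,      sa,       ca,      d],
                [0.0,     0.0,      0.0,    1.0],
            ])

        # Joint angle theta (rad) plus fixed link parameters (d, a, alpha) per joint:
        joints = [( 0.3, 0.20, 0.0, np.pi / 2),
                  ( 1.1, 0.00, 0.4, 0.0),
                  (-0.5, 0.00, 0.3, 0.0)]

        T = np.eye(4)
        for theta, d, a, alpha in joints:
            T = T @ dh(theta, d, a, alpha)
        print("probe position:", T[:3, 3])   # x, y, z in base coordinates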

  9. Kinetic modeling of hydrocarbon autoignition at low and intermediate temperatures in a rapid compression machine

    SciTech Connect

    Curran, H J; Pitz, W J; Westbrook, C K; Griffiths, J F; Mohamed, C

    2000-11-01

    A computer model is used to examine oxidation of hydrocarbon fuels in a rapid compression machine. For one of the fuels studied, n-heptane, significant fuel consumption is computed to take place during the compression stroke under some operating conditions, while for the less reactive n-pentane, no appreciable fuel consumption occurs until after the end of compression. The third fuel studied, a 60 PRF mixture of iso-octane and n-heptane, exhibits behavior that is intermediate between that of n-heptane and n-pentane. The model results indicate that computational studies of rapid compression machine ignition must consider fuel reaction during compression in order to achieve satisfactory agreement between computed and experimental results.

  10. Modeling and design optimization of switched reluctance machine by boundary element analysis and simulation

    SciTech Connect

    Tang, Y.; Kline, J.A. Sr.

    1996-12-01

    Nonlinear boundary element analysis provides a more accurate and detailed tool for the design of switched reluctance machines than the conventional equivalent-circuit methods. Design optimization through more detailed analysis and simulation can reduce development and prototyping costs and time to market. Firstly, magnetic field modeling of an industrial switched reluctance machine by the boundary element method is reported in this paper. Secondly, performance prediction and dynamic simulation of the motor and control design are presented. Thirdly, magnetic forces that cause noise and vibration are studied, to include the effects of motor and control design variations on noise in the design process. Testing of the motor in a NEMA 215-frame size is carried out to verify the accuracy of the modeling and simulation.

  11. ERGONOMICS ABSTRACTS 48347-48982.

    ERIC Educational Resources Information Center

    Ministry of Technology, London (England). Warren Spring Lab.

    IN THIS COLLECTION OF ERGONOMICS ABSTRACTS AND ANNOTATIONS THE FOLLOWING AREAS OF CONCERN ARE REPRESENTED--GENERAL REFERENCES, METHODS, FACILITIES, AND EQUIPMENT RELATING TO ERGONOMICS, SYSTEMS OF MAN AND MACHINES, VISUAL, AUDITORY, AND OTHER SENSORY INPUTS AND PROCESSES (INCLUDING SPEECH AND INTELLIGIBILITY), INPUT CHANNELS, BODY MEASUREMENTS,…

  12. A mathematical model of the controlled axial flow divider for mobile machines

    NASA Astrophysics Data System (ADS)

    Mulyukin, V. L.; Karelin, D. L.; Belousov, A. M.

    2016-06-01

    The authors give a mathematical model of the axial adjustable flow divider that allows one to define the parameters of the feed pump and the hydraulic motor-wheels in the multi-circuit hydrostatic transmission of mobile machines; example characteristics built with the model allow one to clearly evaluate the mutual influence of pressure and flow values on all input and output circuits of the system.

  13. Inversion of a radiative transfer model for estimation of rice chlorophyll content using support vector machine

    NASA Astrophysics Data System (ADS)

    Lv, Jie; Yan, Zhenguo; Wei, Jingyi

    2014-11-01

    Accurate retrieval of crop chlorophyll content is of great importance for crop growth monitoring, crop stress assessment, and crop yield estimation. This study focused on the retrieval of rice chlorophyll content from spectral data through radiative transfer model inversion. A field campaign was carried out in September 2009 in the farmland of ChangChun, Jilin province, China. A different set of 10 sites of the same species was used in 2009 for validation of the methodologies. Reflectance of rice was collected using an ASD field spectrometer for the solar reflective wavelengths (350-2500 nm), and chlorophyll content of rice was measured with a SPAD-502 chlorophyll meter. Each sample site was recorded with a Global Positioning System (GPS). Firstly, the PROSPECT radiative transfer model was inverted using a support vector machine in order to link the rice spectrum and the corresponding chlorophyll content. Secondly, genetic algorithms were adopted to select the parameters of the support vector machine, and the support vector machine was then trained on the training data set to establish a leaf chlorophyll content estimation model. Thirdly, a validation data set was established based on the hyperspectral data, and the estimation model was applied to it to estimate the leaf chlorophyll content of rice in the research area. Finally, the outcome of the inversion was evaluated using R2 and RMSE values calculated against the field measurements. The results of the study highlight the significance of support vector machines in estimating the leaf chlorophyll content of rice. Future research will concentrate on the use of satellite images and the selection of the best measurement configuration for accurate estimation of rice characteristics.

  14. Quantitative chemogenomics: machine-learning models of protein-ligand interaction.

    PubMed

    Andersson, Claes R; Gustafsson, Mats G; Strömbergsson, Helena

    2011-01-01

    Chemogenomics is an emerging interdisciplinary field that lies in the interface of biology, chemistry, and informatics. Most of the currently used drugs are small molecules that interact with proteins. Understanding protein-ligand interaction is therefore central to drug discovery and design. In the subfield of chemogenomics known as proteochemometrics, protein-ligand-interaction models are induced from data matrices that consist of both protein and ligand information along with some experimentally measured variable. The two general aims of this quantitative multi-structure-property-relationship modeling (QMSPR) approach are to exploit sparse/incomplete information sources and to obtain more general models covering larger parts of the protein-ligand space than traditional approaches that focus mainly on specific targets or ligands. The data matrices, usually obtained from multiple sparse/incomplete sources, typically contain series of proteins and ligands together with quantitative information about their interactions. A useful model should ideally be easy to interpret and generalize well to new unseen protein-ligand combinations. Resolving this requires sophisticated machine-learning methods for model induction, combined with adequate validation. This review is intended to provide a guide to methods and data sources suitable for this kind of protein-ligand-interaction modeling. An overview of the modeling process is presented including data collection, protein and ligand descriptor computation, data preprocessing, machine-learning-model induction and validation. Concerns and issues specific for each step in this kind of data-driven modeling will be discussed. PMID:21470169

  15. RMP model based optimization of power system stabilizers in multi-machine power system.

    PubMed

    Baek, Seung-Mook; Park, Jung-Wook

    2009-01-01

    This paper describes the nonlinear parameter optimization of power system stabilizer (PSS) by using the reduced multivariate polynomial (RMP) algorithm with the one-shot property. The RMP model estimates the second-order partial derivatives of the Hessian matrix after identifying the trajectory sensitivities, which can be computed from the hybrid system modeling with a set of differential-algebraic-impulsive-switched (DAIS) structure for a power system. Then, any nonlinear controller in the power system can be optimized by achieving a desired performance measure, mathematically represented by an objective function (OF). In this paper, the output saturation limiter of the PSS, which is used to improve low-frequency oscillation damping performance during a large disturbance, is optimally tuned exploiting the Hessian estimated by the RMP model. Its performance is evaluated with several case studies on both single-machine infinite bus (SMIB) and multi-machine power system (MMPS) by time-domain simulation. In particular, all nonlinear parameters of multiple PSSs on the IEEE benchmark two-area four-machine power system are optimized to be robust against various disturbances by using the weighted sum of the OFs. PMID:19596547

  16. A model of unsteady spatially inhomogeneous flow in a radial-axial blade machine

    NASA Astrophysics Data System (ADS)

    Ambrozhevich, A. V.; Munshtukov, D. A.

    A two-dimensional model of the gasdynamic process in a radial-axial blade machine is proposed which allows for the instantaneous local state of the field of flow parameters, changes in the set angles along the median profile line, profile losses, and centrifugal and Coriolis forces. The model also allows for the injection of cooling air and completion of fuel combustion in the flow. The model is equally applicable to turbines and compressors. The use of the method of singularities provides for a unified and relatively simple description of various factors affecting the flow and, therefore, for computational efficiency.

  17. Extreme learning machine based spatiotemporal modeling of lithium-ion battery thermal dynamics

    NASA Astrophysics Data System (ADS)

    Liu, Zhen; Li, Han-Xiong

    2015-03-01

    Due to the overwhelming complexity of the electrochemistry-related behaviors and internal structure of lithium-ion batteries, it is difficult to obtain an accurate mathematical expression of their thermal dynamics based on physical principles. In this paper, a data-based thermal model which is suitable for online temperature distribution estimation is proposed for lithium-ion batteries. Based on the physics-based model, a simple but effective low-order model is obtained using the Karhunen-Loeve decomposition method. The corresponding uncertain chemistry-related heat generation term in the low-order model is approximated using an extreme learning machine. All uncertain parameters in the low-order model can be determined analytically in a linear way. Finally, the temperature distribution of the whole battery can be estimated in real time based on the identified low-order model. Simulation results demonstrate the effectiveness of the proposed model. The simple training process of the model makes it superior for onboard application.
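    A generic extreme-learning-machine sketch (not the paper's spatiotemporal model): hidden-layer weights are drawn at random and only the linear output weights are solved for analytically, which is what makes the training process simple:

        import numpy as np

        rng = np.random.default_rng(0)
        X = rng.uniform(-1, 1, size=(500, 2))            # e.g. (location, time)
        y = np.sin(3 * X[:, 0]) * np.exp(-X[:, 1])       # stand-in "temperature" field

        n_hidden = 100
        W = rng.normal(size=(2, n_hidden))               # random input weights (fixed)
        b = rng.normal(size=n_hidden)                    # random biases (fixed)
        H = np.tanh(X @ W + b)                           # hidden-layer features

        beta, *_ = np.linalg.lstsq(H, y, rcond=None)     # analytic output weights

        H_new = np.tanh(np.array([[0.2, 0.5]]) @ W + b)
        print("prediction:", H_new @ beta, " truth:", np.sin(0.6) * np.exp(-0.5))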

  18. Use of different sampling schemes in machine learning-based prediction of hydrological models' uncertainty

    NASA Astrophysics Data System (ADS)

    Kayastha, Nagendra; Solomatine, Dimitri; Lal Shrestha, Durga; van Griensven, Ann

    2013-04-01

    In recent years, a lot of attention in the hydrologic literature has been given to model parameter uncertainty analysis. The robustness of uncertainty estimation depends on the efficiency of the sampling method used to generate the best-fit responses (outputs) and on ease of use. This paper aims to investigate: (1) how sampling strategies affect the uncertainty estimations of hydrological models, (2) how to use this information in machine learning predictors of model uncertainty. Sampling of parameters may employ various algorithms. We compared seven different algorithms, namely Monte Carlo (MC) simulation, generalized likelihood uncertainty estimation (GLUE), Markov chain Monte Carlo (MCMC), the shuffled complex evolution metropolis algorithm (SCEMUA), differential evolution adaptive metropolis (DREAM), particle swarm optimization (PSO) and adaptive cluster covering (ACCO) [1]. These methods were applied to estimate the uncertainty of streamflow simulation using the conceptual model HBV and the semi-distributed hydrological model SWAT. The Nzoia catchment in West Kenya is considered as the case study. The results are compared and analysed based on the shape of the posterior distribution of parameters and the uncertainty results on model outputs. The MLUE method [2] uses the results of Monte Carlo sampling (or any other sampling scheme) to build a machine learning (regression) model U able to predict the uncertainty (quantiles of the pdf) of a hydrological model H's outputs. Inputs to these models are specially identified representative variables (past events precipitation and flows). The trained machine learning models are then employed to predict the model output uncertainty which is specific for the new input data. The problem here is that different sampling algorithms result in different data sets used to train such a model U, which leads to several models (and there is no clear evidence which model is the best since there is no basis for comparison). A solution could be to form a committee of all models U and

  19. Bayesian reliability modeling and assessment solution for NC machine tools under small-sample data

    NASA Astrophysics Data System (ADS)

    Yang, Zhaojun; Kan, Yingnan; Chen, Fei; Xu, Binbin; Chen, Chuanhai; Yang, Chuangui

    2015-11-01

    Although Markov chain Monte Carlo (MCMC) algorithms are accurate, many factors may cause instability when they are utilized in reliability analysis; such instability makes these algorithms unsuitable for widespread engineering applications. Thus, a reliability modeling and assessment solution aimed at small-sample data of numerical control (NC) machine tools is proposed on the basis of Bayes theories. An expert-judgment process of fusing multi-source prior information is developed to obtain the Weibull parameters' prior distributions and reduce the subjective bias of usual expert-judgment methods. The grid approximation method is applied to the two-parameter Weibull distribution to derive the formulas for the parameters' posterior distributions and solve the calculation difficulty of high-dimensional integration. The method is then applied to the real data of a type of NC machine tool to implement a reliability assessment and obtain the mean time between failures (MTBF). The relative error of the proposed method is 5.8020×10⁻⁴ compared with the MTBF obtained by the MCMC algorithm. This result indicates that the proposed method is as accurate as MCMC. The newly developed solution for reliability modeling and assessment of NC machine tools under small-sample data is easy, practical, and highly suitable for widespread application in the engineering field; in addition, the solution does not reduce accuracy.
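    A small illustration of the grid-approximation step with invented failure data and, for brevity, flat priors (the paper instead fuses expert judgment into informative priors): the posterior is evaluated on a (shape, scale) grid, normalized, and used to average the MTBF = scale × Γ(1 + 1/shape):

        import numpy as np
        from scipy.special import gamma as Gamma

        t = np.array([120.0, 340.0, 95.0, 410.0, 230.0])   # failure times, hours

        shapes = np.linspace(0.5, 3.0, 200)
        scales = np.linspace(50.0, 800.0, 200)
        K, L = np.meshgrid(shapes, scales, indexing="ij")

        # Weibull log-likelihood summed over the data, evaluated on the grid.
        loglik = (np.log(K / L)[..., None]
                  + (K[..., None] - 1) * np.log(t / L[..., None])
                  - (t / L[..., None]) ** K[..., None]).sum(axis=-1)

        post = np.exp(loglik - loglik.max())               # unnormalized posterior
        post /= post.sum()

        mtbf = L * Gamma(1.0 + 1.0 / K)                    # MTBF at each grid point
        print("posterior-mean MTBF: %.1f h" % (post * mtbf).sum())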

  20. Experimental study on light induced influence model to mice using support vector machine

    NASA Astrophysics Data System (ADS)

    Ji, Lei; Zhao, Zhimin; Yu, Yinshan; Zhu, Xingyue

    2014-08-01

    Previous researchers have studied the different effects of light irradiation on animals, including retinal damage, changes in internal indices and so on. However, a model of light-induced damage to animals using physiological indicators as features in a machine learning method had not been established. This study was designed to evaluate the changes in microvascular diameter, serum absorption spectrum and blood flow caused by light irradiation of different wavelengths, powers and exposure times with a support vector machine (SVM). Micrographs of the mouse auricle were recorded and the vessel diameters were calculated by a computer program. The serum absorption spectra were analyzed. The results show that training sample rates of 20% and 50% yield almost the same correct recognition rate. Better performance and accuracy were achieved by a third-order polynomial kernel SVM with quadratic optimization, which worked suitably for predicting light-induced damage to organisms.

  1. An application of three-dimensional modeling in the cutting machine of intersecting line software

    NASA Astrophysics Data System (ADS)

    Lu, Jixiang

    2011-11-01

    This paper describes a software platform for an intersecting-line cutting machine. The platform consists of three parts: the first is the interface for parameter input and modification, the second is the three-dimensional display of the main tube and branch tube, and the last is the cutting simulation and G-code output. Intersection data are obtained by an intersection algorithm, and a three-dimensional model and dynamic simulation are built from the intersecting-line cutting data. By changing the parameters and the assembly sequence of the main tube and branch tube, the user can see the modified two-dimensional and three-dimensional graphics and the corresponding G-code output file. This method has been applied in practical intersecting-line cutting machine software.
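    A sketch of the core geometry for the simplest case, a branch tube of radius r meeting a main tube of radius R at right angles: sampling the intersection curve the torch must follow. The variable names and the flat G-code lines are illustrative only:

        import numpy as np

        R, r = 100.0, 40.0                   # main/branch tube radii, mm (r <= R)
        t = np.linspace(0.0, 2 * np.pi, 73)  # parameter around the branch tube

        # Branch cylinder (axis along z): x = r cos t, y = r sin t.
        # Main cylinder (axis along y): x^2 + z^2 = R^2 fixes the cut height z.
        x = r * np.cos(t)
        z = np.sqrt(R**2 - x**2)

        for ti, zi in zip(np.degrees(t[:3]), z[:3]):   # first few G-code moves
            print(f"G01 A{ti:.1f} Z{zi:.3f}")  # rotate branch, set torch height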

  2. Machine learning methods enable predictive modeling of antibody feature:function relationships in RV144 vaccinees.

    PubMed

    Choi, Ickwon; Chung, Amy W; Suscovich, Todd J; Rerks-Ngarm, Supachai; Pitisuttithum, Punnee; Nitayaphan, Sorachai; Kaewkungwal, Jaranit; O'Connell, Robert J; Francis, Donald; Robb, Merlin L; Michael, Nelson L; Kim, Jerome H; Alter, Galit; Ackerman, Margaret E; Bailey-Kellogg, Chris

    2015-04-01

    The adaptive immune response to vaccination or infection can lead to the production of specific antibodies to neutralize the pathogen or recruit innate immune effector cells for help. The non-neutralizing role of antibodies in stimulating effector cell responses may have been a key mechanism of the protection observed in the RV144 HIV vaccine trial. In an extensive investigation of a rich set of data collected from RV144 vaccine recipients, we here employ machine learning methods to identify and model associations between antibody features (IgG subclass and antigen specificity) and effector function activities (antibody dependent cellular phagocytosis, cellular cytotoxicity, and cytokine release). We demonstrate via cross-validation that classification and regression approaches can effectively use the antibody features to robustly predict qualitative and quantitative functional outcomes. This integration of antibody feature and function data within a machine learning framework provides a new, objective approach to discovering and assessing multivariate immune correlates. PMID:25874406

  3. Predicting Pre-planting Risk of Stagonospora nodorum blotch in Winter Wheat Using Machine Learning Models.

    PubMed

    Mehra, Lucky K; Cowger, Christina; Gross, Kevin; Ojiambo, Peter S

    2016-01-01

    Pre-planting factors have been associated with the late-season severity of Stagonospora nodorum blotch (SNB), caused by the fungal pathogen Parastagonospora nodorum, in winter wheat (Triticum aestivum). The relative importance of these factors in the risk of SNB has not been determined and this knowledge can facilitate disease management decisions prior to planting of the wheat crop. In this study, we examined the performance of multiple regression (MR) and three machine learning algorithms namely artificial neural networks, categorical and regression trees, and random forests (RF), in predicting the pre-planting risk of SNB in wheat. Pre-planting factors tested as potential predictor variables were cultivar resistance, latitude, longitude, previous crop, seeding rate, seed treatment, tillage type, and wheat residue. Disease severity assessed at the end of the growing season was used as the response variable. The models were developed using 431 disease cases (unique combinations of predictors) collected from 2012 to 2014 and these cases were randomly divided into training, validation, and test datasets. Models were evaluated based on the regression of observed against predicted severity values of SNB, sensitivity-specificity ROC analysis, and the Kappa statistic. A strong relationship was observed between late-season severity of SNB and specific pre-planting factors in which latitude, longitude, wheat residue, and cultivar resistance were the most important predictors. The MR model explained 33% of variability in the data, while machine learning models explained 47 to 79% of the total variability. Similarly, the MR model correctly classified 74% of the disease cases, while machine learning models correctly classified 81 to 83% of these cases. Results show that the RF algorithm, which explained 79% of the variability within the data, was the most accurate in predicting the risk of SNB, with an accuracy rate of 93%. The RF algorithm could allow early assessment of

  4. Predicting Pre-planting Risk of Stagonospora nodorum blotch in Winter Wheat Using Machine Learning Models

    PubMed Central

    Mehra, Lucky K.; Cowger, Christina; Gross, Kevin; Ojiambo, Peter S.

    2016-01-01

    Pre-planting factors have been associated with the late-season severity of Stagonospora nodorum blotch (SNB), caused by the fungal pathogen Parastagonospora nodorum, in winter wheat (Triticum aestivum). The relative importance of these factors in the risk of SNB has not been determined and this knowledge can facilitate disease management decisions prior to planting of the wheat crop. In this study, we examined the performance of multiple regression (MR) and three machine learning algorithms, namely artificial neural networks, categorical and regression trees, and random forests (RF), in predicting the pre-planting risk of SNB in wheat. Pre-planting factors tested as potential predictor variables were cultivar resistance, latitude, longitude, previous crop, seeding rate, seed treatment, tillage type, and wheat residue. Disease severity assessed at the end of the growing season was used as the response variable. The models were developed using 431 disease cases (unique combinations of predictors) collected from 2012 to 2014 and these cases were randomly divided into training, validation, and test datasets. Models were evaluated based on the regression of observed against predicted severity values of SNB, sensitivity-specificity ROC analysis, and the Kappa statistic. A strong relationship was observed between late-season severity of SNB and specific pre-planting factors in which latitude, longitude, wheat residue, and cultivar resistance were the most important predictors. The MR model explained 33% of variability in the data, while machine learning models explained 47 to 79% of the total variability. Similarly, the MR model correctly classified 74% of the disease cases, while machine learning models correctly classified 81 to 83% of these cases. Results show that the RF algorithm, which explained 79% of the variability within the data, was the most accurate in predicting the risk of SNB, with an accuracy rate of 93%. The RF algorithm could allow early assessment of

  5. Uncertainty "escalation" and use of machine learning to forecast residual and data model uncertainties

    NASA Astrophysics Data System (ADS)

    Solomatine, Dimitri

    2016-04-01

    When speaking about model uncertainty, many authors implicitly assume data uncertainty (mainly in parameters or inputs), which is probabilistically described by distributions. Often, however, it is useful to look into the residual uncertainty as well. It is hence reasonable to classify the main approaches to uncertainty analysis with respect to the two main types of model uncertainty that can be distinguished: A. The residual uncertainty of models. In this case the model parameters and/or model inputs are considered to be fixed (deterministic), i.e. the model is considered to be optimal (calibrated) and deterministic. Model error is considered as the manifestation of uncertainty. If there is enough past data about the model errors (i.e. its uncertainty), it is possible to build a statistical or machine learning model of uncertainty trained on these data. The following methods can be mentioned: (a) the quantile regression (QR) method by Koenker and Basset, in which linear regression is used to build predictive models for distribution quantiles [1]; (b) a more recent approach that takes into account the input variables influencing such uncertainty and uses more advanced machine learning (non-linear) methods (neural networks, model trees, etc.), the UNEEC method [2,3,7]; (c) the even more recent DUBRAUE method (Dynamic Uncertainty Model By Regression on Absolute Error), an autoregressive model of model residuals (it corrects the model residual first and then carries out the uncertainty prediction by an autoregressive statistical model) [5]. B. The data uncertainty (parametric and/or input): in this case we study the propagation of uncertainty (typically presented probabilistically) from parameters or inputs to the model outputs. In the case of simple functions representing models, analytical approaches can be used, or approximation methods (e.g., the first-order second moment method). However, for real complex non-linear models implemented in software there is no other choice except using
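
    A minimal sketch of approach A(a), learning conditional quantiles of the model error from past residuals, is given below. Gradient-boosted quantile regression stands in for the linear quantile regression of Koenker and Basset, and the heteroscedastic residual data are synthetic.

```python
# Minimal sketch of approach A(a): learn conditional quantiles of the model
# residuals from past data, then report them as a prediction interval.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(2)
inputs = rng.uniform(0, 10, size=(500, 1))              # model inputs (e.g., rainfall)
residuals = rng.normal(scale=0.2 + 0.1 * inputs[:, 0])  # heteroscedastic model errors

# Fit one quantile model for each bound of the residual-uncertainty interval.
lo = GradientBoostingRegressor(loss="quantile", alpha=0.05, random_state=2)
hi = GradientBoostingRegressor(loss="quantile", alpha=0.95, random_state=2)
lo.fit(inputs, residuals)
hi.fit(inputs, residuals)

x = [[8.0]]
print(f"90% residual interval at x=8: "
      f"[{lo.predict(x)[0]:.2f}, {hi.predict(x)[0]:.2f}]")
```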

  6. Mathematical concepts for modeling human behavior in complex man-machine systems

    NASA Technical Reports Server (NTRS)

    Johannsen, G.; Rouse, W. B.

    1979-01-01

    Many human behavior (e.g., manual control) models have been found to be inadequate for describing processes in certain real complex man-machine systems. An attempt is made to find a way to overcome this problem by examining the range of applicability of existing mathematical models with respect to the hierarchy of human activities in real complex tasks. Automobile driving is chosen as a baseline scenario, and a hierarchy of human activities is derived by analyzing this task in general terms. A structural description leads to a block diagram and a time-sharing computer analogy.

  7. Fast and accurate modeling of molecular atomization energies with machine learning.

    PubMed

    Rupp, Matthias; Tkatchenko, Alexandre; Müller, Klaus-Robert; von Lilienfeld, O Anatole

    2012-02-01

    We introduce a machine learning model to predict atomization energies of a diverse set of organic molecules, based on nuclear charges and atomic positions only. The problem of solving the molecular Schrödinger equation is mapped onto a nonlinear statistical regression problem of reduced complexity. Regression models are trained on and compared to atomization energies computed with hybrid density-functional theory. Cross validation over more than seven thousand organic molecules yields a mean absolute error of ∼10  kcal/mol. Applicability is demonstrated for the prediction of molecular atomization potential energy curves. PMID:22400967
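
    The sketch below illustrates the kind of nonlinear regression the abstract describes: kernel ridge regression with a Gaussian kernel mapping a vectorized per-molecule descriptor to an energy. The descriptors and energies here are random placeholders, and the paper's specific descriptor construction is not reproduced.

```python
# Hedged sketch: Gaussian-kernel ridge regression from molecular descriptors
# to atomization energies, with cross-validated mean absolute error.
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
X = rng.normal(size=(300, 50))  # placeholder vectorized molecular descriptors
y = np.linalg.norm(X, axis=1) + rng.normal(scale=0.1, size=300)  # surrogate energies

model = KernelRidge(kernel="rbf", alpha=1e-3, gamma=0.01)
mae = -cross_val_score(model, X, y, cv=5,
                       scoring="neg_mean_absolute_error").mean()
print(f"cross-validated MAE: {mae:.3f}")
```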

  8. A Multianalyzer Machine Learning Model for Marine Heterogeneous Data Schema Mapping

    PubMed Central

    Yan, Wang; Jiajin, Le; Yun, Zhang

    2014-01-01

    The main challenge that marine heterogeneous data integration faces is the problem of accurate schema mapping between heterogeneous data sources. In order to improve schema mapping efficiency and obtain more accurate learning results, this paper proposes a heterogeneous data schema mapping method based on a multianalyzer machine learning model. The multianalyzer analyzes the learning results comprehensively, and a fuzzy comprehensive evaluation system is introduced for evaluating the output results and for multifactor quantitative judgment. Finally, a data mapping comparison experiment on East China Sea observation data confirms the effectiveness of the model and shows the multianalyzer's clear improvement in mapping error rate. PMID:25250372

  9. Sensitivity Analysis of a Spatio-Temporal Avalanche Forecasting Model Based on Support Vector Machines

    NASA Astrophysics Data System (ADS)

    Matasci, G.; Pozdnoukhov, A.; Kanevski, M.

    2009-04-01

    The recent progress in environmental monitoring technologies allows capturing extensive amounts of data that can be used to assist in avalanche forecasting. While it is not straightforward to directly obtain the stability factors with the available technologies, the snow-pack profiles and especially meteorological parameters are becoming more and more available at finer spatial and temporal scales. Besides being very useful for improving physical modelling, these data are also of particular interest for use with contemporary data-driven techniques of machine learning. As such, the use of a support vector machine classifier opens ways to discriminate the ``safe'' and ``dangerous'' conditions in the feature space of factors related to avalanche activity based on historical observations. The input space of factors is constructed from a number of direct and indirect snowpack and weather observations pre-processed with heuristic and physical models into a high-dimensional spatially varying vector of input parameters. The particular system presented in this work is implemented for the avalanche-prone site of Ben Nevis, Lochaber region in Scotland. A data-driven model for spatio-temporal avalanche danger forecasting provides an avalanche danger map for this local (5x5 km) region at a resolution of 10 m, based on weather and avalanche observations made by forecasters on a daily basis at the site. We present further work aimed at overcoming the ``black-box'' type of modelling, a disadvantage that machine learning methods are often criticized for. It explores what the data-driven method of support vector machines has to offer to improve the interpretability of the forecast, uncovers the properties of the developed system with respect to highlighting which important features led to a particular prediction (both in time and space), and presents the analysis of sensitivity of the prediction with respect to the varying input parameters. The purpose of the

  10. A multianalyzer machine learning model for marine heterogeneous data schema mapping.

    PubMed

    Yan, Wang; Jiajin, Le; Yun, Zhang

    2014-01-01

    The main challenge that marine heterogeneous data integration faces is the problem of accurate schema mapping between heterogeneous data sources. In order to improve schema mapping efficiency and obtain more accurate learning results, this paper proposes a heterogeneous data schema mapping method based on a multianalyzer machine learning model. The multianalyzer analyzes the learning results comprehensively, and a fuzzy comprehensive evaluation system is introduced for evaluating the output results and for multifactor quantitative judgment. Finally, a data mapping comparison experiment on East China Sea observation data confirms the effectiveness of the model and shows the multianalyzer's clear improvement in mapping error rate. PMID:25250372

  11. A hybrid prognostic model for multistep ahead prediction of machine condition

    NASA Astrophysics Data System (ADS)

    Roulias, D.; Loutas, T. H.; Kostopoulos, V.

    2012-05-01

    Prognostics are the future trend in condition-based maintenance. In the current framework a data-driven prognostic model is developed. The typical procedure of developing such a model comprises (a) the selection of features which correlate well with the gradual degradation of the machine and (b) the training of a mathematical tool. In this work the data are taken from a laboratory-scale single-stage gearbox under multi-sensor monitoring. Tests monitoring the condition of the gear pair from the healthy state until total breakdown following several days of continuous operation were conducted. After basic pre-processing of the derived data, an indicator that correlated well with the gearbox condition was obtained. Subsequently, the time series is split into a few distinguishable time regions via an intelligent data clustering scheme. Each operating region is modelled with a feed-forward artificial neural network (FFANN) scheme. The performance of the proposed model is tested by applying the system to predict the machine degradation level on unseen data. The results show the plausibility and effectiveness of the model in following the trend of the time series even in the case that a sudden change occurs. Moreover, the model shows the ability to generalise for application to similar mechanical assets.

  12. Structure Based Thermostability Prediction Models for Protein Single Point Mutations with Machine Learning Tools

    PubMed Central

    Jia, Lei; Yarlagadda, Ramya; Reed, Charles C.

    2015-01-01

    The thermostability of protein point mutations is a common issue in protein engineering. An application which predicts the thermostability of mutants can be helpful for guiding the decision-making process in protein design via mutagenesis. An in silico point mutation scanning method is frequently used to find “hot spots” in proteins for focused mutagenesis. ProTherm (http://gibk26.bio.kyutech.ac.jp/jouhou/Protherm/protherm.html) is a public database that consists of thousands of protein mutants’ experimentally measured thermostability. Two data sets based on two differently measured thermostability properties of protein single point mutations, namely the unfolding free energy change (ddG) and melting temperature change (dTm), were obtained from this database. Folding free energy change calculations from Rosetta, structural information of the point mutations, as well as amino acid physical properties were obtained for building thermostability prediction models with informatics modeling tools. Five supervised machine learning methods (support vector machine, random forests, artificial neural network, naïve Bayes classifier, K nearest neighbor) and partial least squares regression were used for building the prediction models. Binary and ternary classification as well as regression models were built and evaluated. Data set redundancy and balancing, the reverse mutations technique, feature selection, and comparison to other published methods are discussed. The Rosetta-calculated folding free energy change ranked as the most influential feature in all prediction models. Other descriptors also made significant contributions to increasing the accuracy of the prediction models. PMID:26361227

  13. Structure Based Thermostability Prediction Models for Protein Single Point Mutations with Machine Learning Tools.

    PubMed

    Jia, Lei; Yarlagadda, Ramya; Reed, Charles C

    2015-01-01

    The thermostability of protein point mutations is a common issue in protein engineering. An application which predicts the thermostability of mutants can be helpful for guiding the decision-making process in protein design via mutagenesis. An in silico point mutation scanning method is frequently used to find "hot spots" in proteins for focused mutagenesis. ProTherm (http://gibk26.bio.kyutech.ac.jp/jouhou/Protherm/protherm.html) is a public database that consists of thousands of protein mutants' experimentally measured thermostability. Two data sets based on two differently measured thermostability properties of protein single point mutations, namely the unfolding free energy change (ddG) and melting temperature change (dTm), were obtained from this database. Folding free energy change calculations from Rosetta, structural information of the point mutations, as well as amino acid physical properties were obtained for building thermostability prediction models with informatics modeling tools. Five supervised machine learning methods (support vector machine, random forests, artificial neural network, naïve Bayes classifier, K nearest neighbor) and partial least squares regression were used for building the prediction models. Binary and ternary classification as well as regression models were built and evaluated. Data set redundancy and balancing, the reverse mutations technique, feature selection, and comparison to other published methods are discussed. The Rosetta-calculated folding free energy change ranked as the most influential feature in all prediction models. Other descriptors also made significant contributions to increasing the accuracy of the prediction models. PMID:26361227

  14. Estimating the complexity of 3D structural models using machine learning methods

    NASA Astrophysics Data System (ADS)

    Mejía-Herrera, Pablo; Kakurina, Maria; Royer, Jean-Jacques

    2016-04-01

    Quantifying the complexity of 3D geological structural models can play a major role in natural resources exploration surveys, in predicting environmental hazards, or in forecasting fossil resources. This paper proposes a structural complexity index which can be used to help define the degree of effort necessary to build a 3D model for a given degree of confidence, and also to identify locations where additional effort is required to meet a given acceptable risk of uncertainty. In this work, it is considered that the structural complexity index can be estimated using machine learning methods on raw geo-data. More precisely, the metrics for measuring the complexity can be approximated as the degree of difficulty associated with predicting the distribution of geological objects, calculated from partial information on the actual structural distribution of materials. The proposed methodology is tested on a set of 3D synthetic structural models for which the degree of effort during their building is assessed using various parameters (such as number of faults, number of parts in a surface object, number of borders, ...), the rank of geological elements contained in each model, and, finally, their level of deformation (folding and faulting). The results show how the estimated complexity of a 3D model can be approximated by the quantity of partial data necessary to reproduce the actual 3D model without error, at a given precision, using machine learning algorithms.

  15. [Modelling a penicillin fed-batch fermentation using least squares support vector machines].

    PubMed

    Liu, Yi; Wang, Hai-Qing

    2006-01-01

    Biochemical processes are usually characterized as seriously time-varying and nonlinear dynamic systems. Building their first-principle models is very costly and difficult due to the absence of inherent mechanism knowledge and efficient on-line sensors. Furthermore, these detailed and complicated models do not necessarily guarantee good performance in practice. An approach via least squares support vector machines (LS-SVM) based on the Pensim simulator is proposed for modelling the penicillin fed-batch fermentation process, and an adjustment strategy for the parameters of the LS-SVM is presented. Based on the proposed modelling method, predictive models of penicillin concentration, biomass concentration and substrate concentration are obtained by using very limited on-line measurements. The results show that the models established are more accurate and efficient, and suffice for the requirements of control and optimization of biochemical processes. PMID:16572855
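
    One appeal of LS-SVM is that training reduces to solving a single linear system in the dual variables, rather than a quadratic program as in the standard SVM. The numpy sketch below makes that construction explicit; the data are synthetic stand-ins for the Pensim measurements, and the kernel width and regularization value are arbitrary choices.

```python
# Minimal LS-SVM regression: solve [[0, 1^T], [1, K + I/gamma]] [b; a] = [0; y],
# then predict with f(x) = sum_i a_i k(x, x_i) + b. Data are synthetic.
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

rng = np.random.default_rng(4)
X = rng.uniform(0, 6, size=(80, 1))                    # e.g., on-line measurements
y = np.sin(X[:, 0]) + rng.normal(scale=0.05, size=80)  # e.g., biomass concentration

gamma, n = 10.0, len(y)                                # gamma: regularization weight
K = rbf_kernel(X, X)
A = np.block([[np.zeros((1, 1)), np.ones((1, n))],
              [np.ones((n, 1)), K + np.eye(n) / gamma]])
sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
b, alpha = sol[0], sol[1:]

X_new = np.array([[3.0]])
print("prediction at t=3:", (rbf_kernel(X_new, X) @ alpha + b)[0])
```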

  16. Modeling of surface topography in single-point diamond turning machine.

    PubMed

    Huang, Chih-Yu; Liang, Rongguang

    2015-08-10

    Surface roughness is an important factor in characterizing the performance of high-precision optical surfaces. In this paper, we propose a model to estimate the surface roughness generated by a single-point diamond turning machine. In this model, we take into consideration the basic tool-cutting parameters as well as the relative vibration between the tool and the workpiece in both the infeed and feeding directions. Current models focus on the relative tool-workpiece vibration in the infeed direction. However, based on our experimental measurements, the contribution of relative tool-workpiece vibration in the feeding direction is significant and cannot be ignored in the model. The proposed model is able to describe the surface topography for flat as well as cylindrical surfaces of the workpiece. It has the potential to describe more complex spherical surfaces or freeform surfaces. Our experimental study with metal materials shows good correlation between the model and the diamond-turned surfaces. PMID:26368364

  17. Estimating Inflows to Lake Okeechobee Using Climate Indices: A Machine Learning Modeling Approach

    NASA Astrophysics Data System (ADS)

    Kalra, A.; Ahmad, S.

    2008-12-01

    The operation of regional water management systems that include lakes and storage reservoirs for flood control and water supply can be significantly improved by using climate indices. This research is focused on forecasting Lag 1 annual inflow to Lake Okeechobee, located in South Florida, using annual oceanic-atmospheric indices of the Pacific Decadal Oscillation (PDO), North Atlantic Oscillation (NAO), Atlantic Multidecadal Oscillation (AMO), and El Niño-Southern Oscillation (ENSO). Support Vector Machine (SVM) and Least Square Support Vector Machine (LSSVM) models, belonging to the class of data-driven models, are developed to forecast annual lake inflow using annual oceanic-atmospheric indices data from 1914 to 2003. The models were trained with 80 years of data and tested on 10 years of data. Based on the correlation coefficient, root mean square error, and mean absolute error, model predictions were in good agreement with measured inflow volumes. Sensitivity analysis, performed to evaluate the effect of individual and coupled oscillations, revealed a strong signal for the AMO and ENSO indices compared to the PDO and NAO indices for one-year lead-time inflow forecasts. Inflow predictions from the SVM models were better than the predictions obtained from feed-forward back-propagation Artificial Neural Network (ANN) models.

  18. A comparison of optimal MIMO linear and nonlinear models for brain machine interfaces

    NASA Astrophysics Data System (ADS)

    Kim, S.-P.; Sanchez, J. C.; Rao, Y. N.; Erdogmus, D.; Carmena, J. M.; Lebedev, M. A.; Nicolelis, M. A. L.; Principe, J. C.

    2006-06-01

    The field of brain-machine interfaces requires the estimation of a mapping from spike trains collected in motor cortex areas to the hand kinematics of the behaving animal. This paper presents a systematic investigation of several linear (Wiener filter, LMS adaptive filters, gamma filter, subspace Wiener filters) and nonlinear models (time-delay neural network and local linear switching models) applied to datasets from two experiments in monkeys performing motor tasks (reaching for food and target hitting). Ensembles of 100-200 cortical neurons were simultaneously recorded in these experiments, and even larger neuronal samples are anticipated in the future. Due to the large size of the models (thousands of parameters), the major issue studied was the generalization performance. Every parameter of the models (not only the weights) was selected optimally using signal processing and machine learning techniques. The models were also compared statistically with respect to the Wiener filter as the baseline. Each of the optimization procedures produced improvements over that baseline for either one of the two datasets or both.
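
    As a point of reference for the comparison above, the baseline Wiener filter amounts to a linear least-squares map from lagged neural features to kinematics. The sketch below shows that construction on synthetic spike counts; the unit count, lag count, and velocity relationship are chosen only for illustration.

```python
# Minimal Wiener-filter decoder: least-squares fit from a lagged design matrix
# of binned spike counts to a kinematic target. Synthetic data throughout.
import numpy as np

rng = np.random.default_rng(5)
T, n_units, n_lags = 2000, 50, 10
spikes = rng.poisson(2.0, size=(T, n_units)).astype(float)  # binned spike counts

# Each row of the design matrix stacks the last n_lags bins of all units.
rows = [spikes[t - n_lags:t].ravel() for t in range(n_lags, T)]
X = np.hstack([np.ones((len(rows), 1)), np.array(rows)])    # bias + lagged counts
true_w = rng.normal(scale=0.05, size=X.shape[1])
y = X @ true_w + rng.normal(scale=0.5, size=len(rows))      # surrogate hand velocity

w, *_ = np.linalg.lstsq(X, y, rcond=None)                   # Wiener (LS) solution
print("training correlation:", np.corrcoef(X @ w, y)[0, 1].round(3))
```

    With thousands of parameters, as the abstract notes, generalization rather than training fit is the real test; regularizing this least-squares solve or validating on held-out trials is the usual next step.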

  19. Supercomputer Assisted Generation of Machine Learning Agents for the Calibration of Building Energy Models

    SciTech Connect

    Sanyal, Jibonananda; New, Joshua Ryan; Edwards, Richard

    2013-01-01

    Building Energy Modeling (BEM) is an approach to model the energy usage in buildings for design and retrofit purposes. EnergyPlus is the flagship Department of Energy software that performs BEM for different types of buildings. The input to EnergyPlus can often extend to the order of a few thousand parameters which have to be calibrated manually by an expert for realistic energy modeling. This makes it challenging and expensive, thereby making building energy modeling infeasible for smaller projects. In this paper, we describe the "Autotune" research which employs machine learning algorithms to generate agents for the different kinds of standard reference buildings in the U.S. building stock. The parametric space and the variety of building locations and types make this a challenging computational problem necessitating the use of supercomputers. Millions of EnergyPlus simulations are run on supercomputers and are subsequently used to train machine learning algorithms to generate agents. These agents, once created, can then run in a fraction of the time, thereby allowing cost-effective calibration of building models.

  20. Operating Comfort Prediction Model of Human-Machine Interface Layout for Cabin Based on GEP

    PubMed Central

    Deng, Li; Wang, Guohua; Chen, Bo

    2015-01-01

    To address the evaluation and decision-making problem of human-machine interface layout design for cabins, an operating comfort prediction model is proposed based on GEP (Gene Expression Programming), using operating comfort to evaluate layout schemes. Joint angles are used to describe the operating posture of the upper limb and are taken as independent variables to establish the comfort model of operating posture. Factor analysis is adopted to decrease the variable dimension; the model's input variables are reduced from 16 joint angles to 4 comfort impact factors, and the output variable is the operating comfort score. The Chinese virtual human body model is built with CATIA software and used to simulate and evaluate the operators' operating comfort. With 22 groups of evaluation data as training and validation samples, the GEP algorithm is used to obtain the best-fitting function between the joint angles and the operating comfort; operating comfort can then be predicted quantitatively. The operating comfort prediction results for the human-machine interface layout of a driller control room show that the operating comfort prediction model based on GEP is fast and efficient, has good prediction performance, and can improve design efficiency. PMID:26448740

  1. Operating Comfort Prediction Model of Human-Machine Interface Layout for Cabin Based on GEP.

    PubMed

    Deng, Li; Wang, Guohua; Chen, Bo

    2015-01-01

    To address the evaluation and decision-making problem of human-machine interface layout design for cabins, an operating comfort prediction model is proposed based on GEP (Gene Expression Programming), using operating comfort to evaluate layout schemes. Joint angles are used to describe the operating posture of the upper limb and are taken as independent variables to establish the comfort model of operating posture. Factor analysis is adopted to decrease the variable dimension; the model's input variables are reduced from 16 joint angles to 4 comfort impact factors, and the output variable is the operating comfort score. The Chinese virtual human body model is built with CATIA software and used to simulate and evaluate the operators' operating comfort. With 22 groups of evaluation data as training and validation samples, the GEP algorithm is used to obtain the best-fitting function between the joint angles and the operating comfort; operating comfort can then be predicted quantitatively. The operating comfort prediction results for the human-machine interface layout of a driller control room show that the operating comfort prediction model based on GEP is fast and efficient, has good prediction performance, and can improve design efficiency. PMID:26448740

  2. Biosimilarity Assessments of Model IgG1-Fc Glycoforms Using a Machine Learning Approach.

    PubMed

    Kim, Jae Hyun; Joshi, Sangeeta B; Tolbert, Thomas J; Middaugh, C Russell; Volkin, David B; Smalter Hall, Aaron

    2016-02-01

    Biosimilarity assessments are performed to decide whether 2 preparations of complex biomolecules can be considered "highly similar." In this work, a machine learning approach is demonstrated as a mathematical tool for such assessments using a variety of analytical data sets. As proof-of-principle, physical stability data sets from 8 samples, 4 well-defined immunoglobulin G1-Fragment crystallizable glycoforms in 2 different formulations, were examined (see More et al., companion article in this issue). The data sets included triplicate measurements from 3 analytical methods across different pH and temperature conditions (2066 data features). Established machine learning techniques were used to determine whether the data sets contain sufficient discriminative power in this application. The support vector machine classifier identified the 8 distinct samples with high accuracy. For these data sets, there exists a minimum threshold in terms of information quality and volume to grant enough discriminative power. Generally, data from multiple analytical techniques, multiple pH conditions, and at least 200 representative features were required to achieve the highest discriminative accuracy. In addition to classification accuracy tests, various methods such as sample space visualization, similarity analysis based on Euclidean distance, and feature ranking by mutual information scores are demonstrated to display their effectiveness as modeling tools for biosimilarity assessments. PMID:26869422
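
    Two of the steps named above, feature ranking by mutual information and support vector machine classification, can be chained as in the sketch below. The sample and feature counts loosely echo the study's scale, but the data themselves are synthetic placeholders.

```python
# Minimal sketch: rank analytical features by mutual information with the
# sample label, keep the top 200, classify samples with an SVM.
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
X = rng.normal(size=(160, 500))   # stability measurements across pH/temperature
y = np.repeat(np.arange(8), 20)   # 8 glycoform/formulation samples, 20 reps each
X[:, :50] += y[:, None] * 0.4     # make 50 features genuinely discriminative

pipe = make_pipeline(SelectKBest(mutual_info_classif, k=200), SVC())
print("cross-validated accuracy:", cross_val_score(pipe, X, y, cv=5).mean().round(2))
```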

  3. Selecting statistical or machine learning techniques for regional landslide susceptibility modelling by evaluating spatial prediction

    NASA Astrophysics Data System (ADS)

    Goetz, Jason; Brenning, Alexander; Petschko, Helene; Leopold, Philip

    2015-04-01

    With so many techniques now available for landslide susceptibility modelling, it can be challenging to decide which technique to apply. Generally speaking, the criteria for model selection should be tied closely to the end users' purpose, which could be spatial prediction, spatial analysis or both. In our research, we focus on comparing the spatial predictive abilities of landslide susceptibility models. We illustrate how spatial cross-validation, a statistical approach for assessing spatial prediction performance, can be applied with the area under the receiver operating characteristic curve (AUROC) as a prediction measure for model comparison. Several machine learning and statistical techniques are evaluated for prediction in Lower Austria: support vector machine, random forest, bundling with penalized linear discriminant analysis, logistic regression, weights of evidence, and the generalized additive model. In addition to predictive performance, the importance of predictor variables in each model was estimated using spatial cross-validation by calculating the change in AUROC performance when variables are randomly permuted. The susceptibility modelling techniques were tested in three areas of interest in Lower Austria, which have unique geologic conditions associated with landslide occurrence. Overall, we found for the majority of comparisons that there were few practical or even statistically significant differences in AUROCs; that is, the models' prediction performances were very similar. Therefore, in addition to prediction, the ability to interpret models for spatial analysis and the qualitative qualities of the prediction surface (map) are considered and discussed. The measure of variable importance provided some insight into model behaviour for prediction, in particular for "black-box" models. However, there were no clear patterns across all areas of interest as to why certain variables were given more importance than others.
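
    The sketch below shows one common way to realize spatial cross-validation: grouping observations into spatial blocks and using the blocks as cross-validation folds, so that test locations are not immediate neighbours of training ones. The block size, model, and data are illustrative assumptions rather than the authors' exact setup.

```python
# Minimal spatial cross-validation: coarse grid blocks over coordinates act
# as CV groups, and AUROC is the prediction measure. Synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GroupKFold, cross_val_score

rng = np.random.default_rng(7)
coords = rng.uniform(0, 100, size=(1000, 2))            # easting/northing (km)
X = np.c_[coords, rng.normal(size=(1000, 5))]           # terrain predictors
y = (X[:, 2] + rng.normal(size=1000) > 0).astype(int)   # landslide presence

# Assign each point to a 25x25 km block; blocks become the CV groups.
groups = (coords[:, 0] // 25).astype(int) * 4 + (coords[:, 1] // 25).astype(int)
auc = cross_val_score(RandomForestClassifier(random_state=7), X, y,
                      cv=GroupKFold(n_splits=5), groups=groups,
                      scoring="roc_auc")
print("spatially cross-validated AUROC:", auc.mean().round(2))
```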

  4. One- and two-dimensional Stirling machine simulation using experimentally generated reversing flow turbulence models

    SciTech Connect

    Goldberg, L.F.

    1990-08-01

    The activities described in this report do not constitute a continuum but rather a series of linked smaller investigations in the general area of one- and two-dimensional Stirling machine simulation. The initial impetus for these investigations was the development and construction of the Mechanical Engineering Test Rig (METR) under a grant awarded by NASA to Dr. Terry Simon at the Department of Mechanical Engineering, University of Minnesota. The purpose of the METR is to provide experimental data on oscillating turbulent flows in Stirling machine working fluid flow path components (heater, cooler, regenerator, etc.) with particular emphasis on laminar/turbulent flow transitions. Hence, the initial goals for the grant awarded by NASA were, broadly, to provide computer simulation backup for the design of the METR and to analyze the results produced. This was envisaged in two phases: first, to apply an existing one-dimensional Stirling machine simulation code to the METR and, second, to adapt a two-dimensional fluid mechanics code, which had been developed for simulating high Rayleigh number buoyant cavity flows, to the METR. The key aspect of this latter component was the development of an appropriate turbulence model suitable for generalized application to Stirling simulation. A final step was then to apply the two-dimensional code to an existing Stirling machine for which adequate experimental data exist. The work described herein was carried out over a period of three years on a part-time basis. Forty percent of the first year's funding was provided as a match to the NASA funds by the Underground Space Center, University of Minnesota, which also made its computing facilities available to the project at no charge.

  5. Evaluating machine learning and statistical prediction techniques for landslide susceptibility modeling

    NASA Astrophysics Data System (ADS)

    Goetz, J. N.; Brenning, A.; Petschko, H.; Leopold, P.

    2015-08-01

    Statistical and now machine learning prediction methods have been gaining popularity in the field of landslide susceptibility modeling. In particular, these data-driven approaches show promise when tackling the challenge of mapping landslide-prone areas for large regions, which may not have sufficient geotechnical data to conduct physically-based methods. Currently, there is no best method for empirical susceptibility modeling. Therefore, this study presents a comparison of traditional statistical and novel machine learning models applied for regional-scale landslide susceptibility modeling. These methods were evaluated by spatial k-fold cross-validation estimation of the predictive performance, assessment of variable importance for gaining insights into model behavior, and by the appearance of the prediction (i.e. susceptibility) map. The modeling techniques applied were logistic regression (GLM), generalized additive models (GAM), weights of evidence (WOE), the support vector machine (SVM), random forest classification (RF), and bootstrap aggregated classification trees (bundling) with penalized discriminant analysis (BPLDA). These modeling methods were tested in three areas in the province of Lower Austria, Austria. The areas are characterized by different geological and morphological settings. Random forest and bundling classification techniques had the overall best predictive performances. However, the performances of all modeling techniques were for the majority not significantly different from each other; depending on the area of interest, the differences in the overall median estimated area under the receiver operating characteristic curve (AUROC) ranged from 2.9 to 8.9 percentage points, and the differences in the overall median estimated true positive rate (TPR), measured at a 10% false positive rate (FPR), ranged from 11 to 15 percentage points. The relative importance of each predictor was generally different between the modeling methods. However, slope angle, surface roughness and plan

  6. Uncertainty "escalation" and use of machine learning to forecast residual and data model uncertainties

    NASA Astrophysics Data System (ADS)

    Solomatine, Dimitri

    2016-04-01

    When speaking about model uncertainty, many authors implicitly assume data uncertainty (mainly in parameters or inputs), which is probabilistically described by distributions. Often, however, it is useful to look into the residual uncertainty as well. It is hence reasonable to classify the main approaches to uncertainty analysis with respect to the two main types of model uncertainty that can be distinguished: A. The residual uncertainty of models. In this case the model parameters and/or model inputs are considered to be fixed (deterministic), i.e. the model is considered to be optimal (calibrated) and deterministic. Model error is considered as the manifestation of uncertainty. If there is enough past data about the model errors (i.e. its uncertainty), it is possible to build a statistical or machine learning model of uncertainty trained on these data. The following methods can be mentioned: (a) the quantile regression (QR) method by Koenker and Basset, in which linear regression is used to build predictive models for distribution quantiles [1]; (b) a more recent approach that takes into account the input variables influencing such uncertainty and uses more advanced machine learning (non-linear) methods (neural networks, model trees, etc.), the UNEEC method [2,3,7]; (c) the even more recent DUBRAUE method (Dynamic Uncertainty Model By Regression on Absolute Error), an autoregressive model of model residuals (it corrects the model residual first and then carries out the uncertainty prediction by an autoregressive statistical model) [5]. B. The data uncertainty (parametric and/or input): in this case we study the propagation of uncertainty (typically presented probabilistically) from parameters or inputs to the model outputs. In the case of simple functions representing models, analytical approaches can be used, or approximation methods (e.g., the first-order second moment method). However, for real complex non-linear models implemented in software there is no other choice except using

  7. Three-Phase Unbalanced Transient Dynamics and Powerflow for Modeling Distribution Systems With Synchronous Machines

    SciTech Connect

    Elizondo, Marcelo A.; Tuffner, Francis K.; Schneider, Kevin P.

    2016-01-01

    Unlike transmission systems, distribution feeders in North America operate under unbalanced conditions at all times, and generally have a single strong voltage source. When a distribution feeder is connected to a strong substation source, the system is dynamically very stable, even for large transients. However if a distribution feeder, or part of the feeder, is separated from the substation and begins to operate as an islanded microgrid, transient dynamics become more of an issue. To assess the impact of transient dynamics at the distribution level, it is not appropriate to use traditional transmission solvers, which generally assume transposed lines and balanced loads. Full electromagnetic solvers capture a high level of detail, but it is difficult to model large systems because of the required detail. This paper proposes an electromechanical transient model of synchronous machine for distribution-level modeling and microgrids. This approach includes not only the machine model, but also its interface with an unbalanced network solver, and a powerflow method to solve unbalanced conditions without a strong reference bus. The presented method is validated against a full electromagnetic transient simulation.

  8. Classification of signaling proteins based on molecular star graph descriptors using Machine Learning models.

    PubMed

    Fernandez-Lozano, Carlos; Cuiñas, Rubén F; Seoane, José A; Fernández-Blanco, Enrique; Dorado, Julian; Munteanu, Cristian R

    2015-11-01

    Signaling proteins are an important topic in drug development due to the increased importance of finding fast, accurate and cheap methods to evaluate new molecular targets involved in specific diseases. The complexity of the protein structure hinders the direct association of the signaling activity with the molecular structure. Therefore, the proposed solution involves the use of protein star graphs for encoding the peptide sequence information into specific topological indices calculated with the S2SNet tool. The Quantitative Structure-Activity Relationship classification model obtained with Machine Learning techniques is able to predict new signaling peptides. The best classification model, the first signaling prediction model, is based on eleven descriptors and was obtained using the Support Vector Machine-Recursive Feature Elimination (SVM-RFE) technique with the Laplacian kernel (RFE-LAP), achieving an AUROC of 0.961. The prediction performance of the model was assessed by testing a set of 3114 proteins of unknown function from the PDB database. Important signaling pathways are presented for three UniprotIDs (34 PDBs) with a signaling prediction greater than 98.0%. PMID:26297890

  9. A Genetic Algorithm Based Support Vector Machine Model for Blood-Brain Barrier Penetration Prediction

    PubMed Central

    Zhang, Daqing; Xiao, Jianfeng; Zhou, Nannan; Zheng, Mingyue; Luo, Xiaomin; Jiang, Hualiang; Chen, Kaixian

    2015-01-01

    The blood-brain barrier (BBB) is a highly complex physical barrier determining what substances are allowed to enter the brain. The support vector machine (SVM) is a kernel-based machine learning method that is widely used in QSAR studies. For a successful SVM model, the kernel parameters for the SVM and feature subset selection are the most important factors affecting prediction accuracy. In most studies, they are treated as two independent problems, but it has been proven that they can affect each other. We designed and implemented a genetic algorithm (GA) to optimize kernel parameters and feature subset selection for SVM regression and applied it to BBB penetration prediction. The results show that our GA/SVM model is more accurate than other currently available log BB models. Therefore, optimizing both SVM parameters and the feature subset simultaneously with a genetic algorithm is a better approach than methods that treat the two problems separately. Analysis of our log BB model suggests that the carboxylic acid group, polar surface area (PSA)/hydrogen-bonding ability, lipophilicity, and molecular charge play an important role in BBB penetration. Among those properties relevant to BBB penetration, lipophilicity can enhance BBB penetration while all the others are negatively correlated with it. PMID:26504797
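
    A much-simplified version of the GA/SVM idea, evolving a feature-subset mask jointly with a kernel parameter and scoring each candidate by cross-validation, is sketched below. The population size, rates, and surrogate log BB data are arbitrary illustrative choices, not the authors' settings.

```python
# Simplified GA: each individual is 30 feature-mask bits plus one continuous
# gene (log10 of the RBF gamma); fitness is cross-validated R^2 of an SVR.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(8)
X = rng.normal(size=(150, 30))                               # molecular descriptors
y = X[:, :5].sum(axis=1) + rng.normal(scale=0.3, size=150)   # surrogate log BB

def fitness(ind):
    mask, log_gamma = ind[:-1].astype(bool), ind[-1]
    if not mask.any():
        return -np.inf
    model = SVR(kernel="rbf", gamma=10.0 ** log_gamma)
    return cross_val_score(model, X[:, mask], y, cv=3, scoring="r2").mean()

pop = [np.append(rng.integers(0, 2, 30).astype(float), rng.uniform(-3, 0))
       for _ in range(20)]
for generation in range(15):
    order = np.argsort([fitness(ind) for ind in pop])[::-1]
    parents = [pop[i] for i in order[:10]]        # truncation selection
    children = []
    for _ in range(10):                           # uniform crossover + mutation
        a, b = rng.choice(10, 2, replace=False)
        child = np.where(rng.random(31) < 0.5, parents[a], parents[b])
        flip = rng.random(30) < 0.05              # bit-flip mutation of the mask
        child[:-1][flip] = 1 - child[:-1][flip]
        child[-1] += rng.normal(scale=0.1)        # jitter the kernel gene
        children.append(child)
    pop = parents + children

best = max(pop, key=fitness)
print("features kept:", int(best[:-1].sum()), "log10(gamma):", round(best[-1], 2))
```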

  10. Integrating Machine Learning into a Crowdsourced Model for Earthquake-Induced Damage Assessment

    NASA Technical Reports Server (NTRS)

    Rebbapragada, Umaa; Oommen, Thomas

    2011-01-01

    On January 12th, 2010, a catastrophic 7.0M earthquake devastated the country of Haiti. In the aftermath of an earthquake, it is important to rapidly assess damaged areas in order to mobilize the appropriate resources. The Haiti damage assessment effort introduced a promising model that uses crowdsourcing to map damaged areas in freely available remotely-sensed data. This paper proposes the application of machine learning methods to improve this model. Specifically, we apply work on learning from multiple, imperfect experts to the assessment of volunteer reliability, and propose the use of image segmentation to automate the detection of damaged areas. We wrap both tasks in an active learning framework in order to shift volunteer effort from mapping a full catalog of images to the generation of high-quality training data. We hypothesize that the integration of machine learning into this model improves its reliability, maintains the speed of damage assessment, and allows the model to scale to higher data volumes.
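
    The active learning component proposed above is often implemented as uncertainty sampling: the classifier repeatedly requests labels for the items it is least sure about, so volunteer effort goes into high-value training data. The sketch below shows that loop on synthetic tile features; the classifier, query budget, and damage labels are all assumed for illustration.

```python
# Minimal uncertainty-sampling loop: start from a few volunteer labels, then
# repeatedly query the tile whose predicted probability is closest to 0.5.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(12)
X_pool = rng.normal(size=(2000, 10))                    # image-tile features
y_pool = (X_pool[:, 0] + X_pool[:, 1] > 0).astype(int)  # damaged / undamaged

labeled = list(rng.choice(2000, 20, replace=False))     # initial volunteer labels
for _ in range(10):
    clf = LogisticRegression().fit(X_pool[labeled], y_pool[labeled])
    proba = clf.predict_proba(X_pool)[:, 1]
    uncertainty = np.abs(proba - 0.5)                   # 0 = most uncertain
    uncertainty[labeled] = np.inf                       # skip already-labeled tiles
    labeled.append(int(np.argmin(uncertainty)))         # ask a volunteer for this one
print("labels used:", len(labeled),
      "pool accuracy:", clf.score(X_pool, y_pool).round(2))
```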

  11. A model-based analysis of impulsivity using a slot-machine gambling paradigm

    PubMed Central

    Paliwal, Saee; Petzschner, Frederike H.; Schmitz, Anna Katharina; Tittgemeyer, Marc; Stephan, Klaas E.

    2014-01-01

    Impulsivity plays a key role in decision-making under uncertainty. It is a significant contributor to problem and pathological gambling (PG). Standard assessments of impulsivity by questionnaires, however, have various limitations, partly because impulsivity is a broad, multi-faceted concept. What remains unclear is which of these facets contribute to shaping gambling behavior. In the present study, we investigated impulsivity as expressed in a gambling setting by applying computational modeling to data from 47 healthy male volunteers who played a realistic, virtual slot-machine gambling task. Behaviorally, we found that impulsivity, as measured independently by the 11th revision of the Barratt Impulsiveness Scale (BIS-11), correlated significantly with an aggregate read-out of the following gambling responses: bet increases (BIs), machines switches (MS), casino switches (CS), and double-ups (DUs). Using model comparison, we compared a set of hierarchical Bayesian belief-updating models, i.e., the Hierarchical Gaussian Filter (HGF) and Rescorla–Wagner reinforcement learning (RL) models, with regard to how well they explained different aspects of the behavioral data. We then examined the construct validity of our winning models with multiple regression, relating subject-specific model parameter estimates to the individual BIS-11 total scores. In the most predictive model (a three-level HGF), the two free parameters encoded uncertainty-dependent mechanisms of belief updates and significantly explained BIS-11 variance across subjects. Furthermore, in this model, decision noise was a function of trial-wise uncertainty about winning probability. Collectively, our results provide a proof of concept that hierarchical Bayesian models can characterize the decision-making mechanisms linked to the impulsive traits of an individual. These novel indices of gambling mechanisms unmasked during actual play may be useful for online prevention measures for at-risk players and

  12. Study of Two-Dimensional Compressible Non-Acoustic Modeling of Stirling Machine Type Components

    NASA Technical Reports Server (NTRS)

    Tew, Roy C., Jr.; Ibrahim, Mounir B.

    2001-01-01

    A two-dimensional (2-D) computer code was developed for modeling enclosed volumes of gas with oscillating boundaries, such as Stirling machine components. An existing 2-D incompressible flow computer code, CAST, was used as the starting point for the project. CAST was modified to use the compressible non-acoustic Navier-Stokes equations to model an enclosed volume including an oscillating piston. The devices modeled have low Mach numbers and are sufficiently small that the time required for acoustics to propagate across them is negligible. Therefore, acoustics were excluded to enable more time efficient computation. Background information about the project is presented. The compressible non-acoustic flow assumptions are discussed. The governing equations used in the model are presented in transport equation format. A brief description is given of the numerical methods used. Comparisons of code predictions with experimental data are then discussed.

  13. Accurate modeling of switched reluctance machine based on hybrid trained WNN

    SciTech Connect

    Song, Shoujun Ge, Lefei; Ma, Shaojie; Zhang, Man

    2014-04-15

    Given the strongly nonlinear electromagnetic characteristics of the switched reluctance machine (SRM), a novel accurate modeling method is proposed based on a hybrid-trained wavelet neural network (WNN) which combines an improved genetic algorithm (GA) with the gradient descent (GD) method to train the network. In the novel method, the WNN is trained by the GD method starting from the initial weights obtained via improved GA optimization, so that the global parallel searching capability of the stochastic algorithm and the local convergence speed of the deterministic algorithm are combined to enhance training accuracy, stability and speed. Based on the measured electromagnetic characteristics of a 3-phase 12/8-pole SRM, the nonlinear simulation model is built with the hybrid-trained WNN in Matlab. The phase current and mechanical characteristics from simulation under different working conditions agree well with those from experiments, which indicates the accuracy of the model for dynamic and static performance evaluation of the SRM and verifies the effectiveness of the proposed modeling method.

  14. Discriminative feature-rich models for syntax-based machine translation.

    SciTech Connect

    Dixon, Kevin R.

    2012-12-01

    This report describes the campus executive LDRD "Discriminative Feature-Rich Models for Syntax-Based Machine Translation," which was an effort to foster a better relationship between Sandia and Carnegie Mellon University (CMU). The primary purpose of the LDRD was to fund the research of a promising graduate student at CMU; in this case, Kevin Gimpel was selected from the pool of candidates. This report gives a brief overview of Kevin Gimpel's research.

  15. Feature combination networks for the interpretation of statistical machine learning models: application to Ames mutagenicity

    PubMed Central

    2014-01-01

    Background A new algorithm has been developed to enable the interpretation of black box models. The developed algorithm is agnostic to learning algorithm and open to all structural based descriptors such as fragments, keys and hashed fingerprints. The algorithm has provided meaningful interpretation of Ames mutagenicity predictions from both random forest and support vector machine models built on a variety of structural fingerprints. A fragmentation algorithm is utilised to investigate the model’s behaviour on specific substructures present in the query. An output is formulated summarising causes of activation and deactivation. The algorithm is able to identify multiple causes of activation or deactivation in addition to identifying localised deactivations where the prediction for the query is active overall. No loss in performance is seen as there is no change in the prediction; the interpretation is produced directly on the model’s behaviour for the specific query. Results Models have been built using multiple learning algorithms including support vector machine and random forest. The models were built on public Ames mutagenicity data and a variety of fingerprint descriptors were used. These models produced a good performance in both internal and external validation with accuracies around 82%. The models were used to evaluate the interpretation algorithm. The interpretations revealed links that correspond closely with understood mechanisms for Ames mutagenicity. Conclusion This methodology allows for a greater utilisation of the predictions made by black box models and can expedite further study based on the output for a (quantitative) structure activity model. Additionally the algorithm could be utilised for chemical dataset investigation and knowledge extraction/human SAR development. PMID:24661325

  16. Research on Error Modelling and Identification of 3 Axis NC Machine Tools Based on Cross Grid Encoder Measurement

    NASA Astrophysics Data System (ADS)

    Du, Z. C.; Lv, C. F.; Hong, M. S.

    2006-10-01

    A new error modelling and identification method based on the cross grid encoder is proposed in this paper. Generally, there are 21 error components in the geometric error of 3 axis NC machine tools. However, according to our theoretical analysis, the squareness error among different guideways affects not only the translational error components but also the rotational ones. Therefore, a revised synthetic error model is developed, and the mapping relationship between the error components and the radial motion error of a round workpiece manufactured on the NC machine tool is deduced. This mapping relationship shows that the radial error of circular motion is the comprehensive result of all the error components of the link, worktable, sliding table and main spindle block. To overcome the solution singularity shortcoming of traditional error component identification methods, a new multi-step identification method for error components using cross grid encoder measurement technology is proposed, based on the kinematic error model of the NC machine tool. Firstly, the 12 translational error components of the NC machine tool are measured and identified by using the least square method (LSM) when the NC machine tool performs linear motion in the three orthogonal planes: the XOY, XOZ and YOZ planes. Secondly, the circular error tracks are measured with the cross grid encoder Heidenhain KGM 182 when the NC machine tool performs circular motion in the same orthogonal planes; the 9 rotational errors can then be identified by using LSM. Finally, experimental validation of the above modelling theory and identification method is carried out on the 3 axis CNC vertical machining centre Cincinnati 750 Arrow. All 21 error components have been successfully measured by the above method. This research shows that the multi-step modelling and identification method is well suited for 'on machine measurement'.

  17. Simulation of abrasive flow machining process for 2D and 3D mixture models

    NASA Astrophysics Data System (ADS)

    Dash, Rupalika; Maity, Kalipada

    2015-12-01

    Improvement of surface finish and material removal has been quite a challenge in a finishing operation such as abrasive flow machining (AFM). Factors that affect the surface finish and material removal are media viscosity, extrusion pressure, piston velocity, and particle size in abrasive flow machining process. Performing experiments for all the parameters and accurately obtaining an optimized parameter in a short time are difficult to accomplish because the operation requires a precise finish. Computational fluid dynamics (CFD) simulation was employed to accurately determine optimum parameters. In the current work, a 2D model was designed, and the flow analysis, force calculation, and material removal prediction were performed and compared with the available experimental data. Another 3D model for a swaging die finishing using AFM was simulated at different viscosities of the media to study the effects on the controlling parameters. A CFD simulation was performed by using commercially available ANSYS FLUENT. Two phases were considered for the flow analysis, and multiphase mixture model was taken into account. The fluid was considered to be a

  18. Using a Support Vector Machine (SVM) to Improve Generalization Ability of Load Model Parameters

    SciTech Connect

    Ma, Jian; Dong, Zhao Yang; Zhang, Pei

    2009-04-24

    Load modeling plays an important role in power system stability analysis and planning studies. The parameters of load models may experience variations in different application situations. Choosing appropriate parameters is critical for dynamic simulation and stability studies in power systems. This paper presents a method to select the parameters with good generalization ability based on a given large number of available parameters that have been identified from dynamic simulation data in different scenarios. Principal component analysis is used to extract the major features of the given parameter sets. Reduced feature vectors are obtained by mapping the given parameter sets into principal component space. Then support vectors are found by solving a classification problem. Load model parameters based on the obtained support vectors are built to reflect the dynamic property of the load. All of the given parameter sets were identified from simulation data based on the New England 10-machine 39-bus system, taking into account different situations, such as load types, fault locations, fault types, and fault clearing times. The parameters obtained by the support vector machine have good generalization capability and can represent the load more accurately in most situations.
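
    The pipeline described above, principal component analysis followed by a support vector classification in the reduced space, can be set up as in the sketch below. The parameter sets, labels, and component count are synthetic placeholders rather than values from the New England test system.

```python
# Minimal sketch: PCA feature extraction over identified load-model parameter
# sets, then an SVM whose support vectors mark the boundary parameter sets.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(9)
params = rng.normal(size=(300, 12))        # identified load-model parameter sets
scenario = (params[:, 0] + params[:, 1] > 0).astype(int)  # e.g., fault-type label

pipe = make_pipeline(PCA(n_components=4), SVC(kernel="rbf"))
pipe.fit(params, scenario)
# Support vectors in principal-component space flag the parameter sets that
# sit on the decision boundary between scenarios.
print("number of support vectors:", pipe[-1].n_support_.sum())
```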

  19. Modeling complex responses of FM-sensitive cells in the auditory midbrain using a committee machine.

    PubMed

    Chang, T R; Chiu, T W; Sun, X; Poon, Paul W F

    2013-11-01

    Frequency modulation (FM) is an important building block of complex sounds that include speech signals. Exploring the neural mechanisms of FM coding with computer modeling could help understand how speech sounds are processed in the brain. Here, we modeled the single unit responses of auditory neurons recorded from the midbrain of anesthetized rats. These neurons displayed spectral temporal receptive fields (STRFs) that had multiple-trigger features, and were more complex than those with single-trigger features. Their responses have not been modeled satisfactorily with simple artificial neural networks, unlike neurons with simple-trigger features. To improve model performance, here we tested an approach with the committee machine. For a given neuron, the peri-stimulus time histogram (PSTH) was first generated in response to a repeated random FM tone, and peaks in the PSTH were segregated into groups based on the similarity of their pre-spike FM trigger features. Each group was then modeled using an artificial neural network with simple architecture, and, when necessary, by increasing the number of neurons in the hidden layer. After initial training, the artificial neural networks with their optimized weighting coefficients were pooled into a committee machine for training. Finally, the model performance was tested by prediction of the response of the same cell to a novel FM tone. The results showed improvement over simple artificial neural networks, supporting that trigger-feature-based modeling can be extended to cells with complex responses. This article is part of a Special Issue entitled Neural Coding 2012. PMID:23665390
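
    A minimal stand-in for the committee-machine arrangement is sketched below: one small network per trigger-feature group, with the pooled output taken here as a simple average (the paper's joint committee training is not reproduced). The FM trigger features and responses are synthetic.

```python
# Minimal committee sketch: train one expert network per response-peak group,
# then pool the experts' predictions by averaging. Synthetic data throughout.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(10)
X = rng.normal(size=(600, 12))                    # pre-spike FM trigger features
y = np.sin(X[:, 0]) + 0.5 * np.tanh(X[:, 1])      # surrogate PSTH response
groups = (X[:, 0] > 0).astype(int)                # peaks grouped by trigger feature

experts = []
for g in (0, 1):                                  # one expert network per group
    net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=g)
    experts.append(net.fit(X[groups == g], y[groups == g]))

X_new = rng.normal(size=(5, 12))                  # "novel FM tone" features
committee = np.mean([e.predict(X_new) for e in experts], axis=0)
print("committee predictions:", committee.round(2))
```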

  20. Machine learning models identify molecules active against the Ebola virus in vitro.

    PubMed

    Ekins, Sean; Freundlich, Joel S; Clark, Alex M; Anantpadma, Manu; Davey, Robert A; Madrid, Peter

    2015-01-01

    The search for small molecule inhibitors of Ebola virus (EBOV) has led to several high throughput screens over the past 3 years. These have identified a range of FDA-approved active pharmaceutical ingredients (APIs) with anti-EBOV activity in vitro, several of which are also active in a mouse infection model. There are millions of additional commercially available molecules that could be screened for potential activity as anti-EBOV compounds. One way to prioritize compounds for testing is to generate computational models based on the high throughput screening data and then virtually screen compound libraries. In the current study, we have generated Bayesian machine learning models with viral pseudotype entry assay and EBOV replication assay data. We have validated the models internally and externally. We have also used these models to computationally score the MicroSource library of drugs to select those likely to be potential inhibitors. Three of the highest scoring molecules that were not in the model training sets, quinacrine, pyronaridine and tilorone, were tested in vitro and had EC50 values of 350, 420 and 230 nM, respectively. Pyronaridine is a component of a combination therapy for malaria that was recently approved by the European Medicines Agency, which may make it more readily accessible for clinical testing. Like other known antimalarial drugs active against EBOV, it shares the 4-aminoquinoline scaffold. Tilorone is an investigational antiviral agent that has shown a broad array of biological activities, including cell growth inhibition in cancer cells, antifibrotic properties, α7 nicotinic receptor agonist activity, radioprotective activity and activation of hypoxia inducible factor-1. Quinacrine is an antimalarial that also has use as an anthelmintic. Our results suggest that data sets with less than 1,000 molecules can produce validated machine learning models that can in turn be utilized to identify novel EBOV inhibitors in vitro. PMID:26834994
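    The following is an illustrative stand-in for the modeling step, assuming a Bernoulli naive Bayes classifier over synthetic binary fingerprints; the study's actual Bayesian models and chemical descriptors are not reproduced here, and real fingerprints would come from a cheminformatics toolkit.

      # Bayesian-style activity model over binary molecular fingerprints,
      # then scoring of an external library to prioritize compounds.
      import numpy as np
      from sklearn.naive_bayes import BernoulliNB

      rng = np.random.default_rng(2)
      fingerprints = rng.integers(0, 2, size=(1000, 256))   # 1000 molecules, 256-bit FPs
      active = (fingerprints[:, :8].sum(axis=1) > 4).astype(int)  # synthetic activity

      model = BernoulliNB().fit(fingerprints, active)
      library = rng.integers(0, 2, size=(50, 256))          # e.g. a vendor library
      scores = model.predict_proba(library)[:, 1]           # rank for prioritization
      top = np.argsort(scores)[::-1][:3]
      print("highest-scoring library molecules:", top, scores[top])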

  1. Machine learning models identify molecules active against the Ebola virus in vitro

    PubMed Central

    Ekins, Sean; Freundlich, Joel S.; Clark, Alex M.; Anantpadma, Manu; Davey, Robert A.; Madrid, Peter

    2016-01-01

    The search for small molecule inhibitors of Ebola virus (EBOV) has led to several high throughput screens over the past 3 years. These have identified a range of FDA-approved active pharmaceutical ingredients (APIs) with anti-EBOV activity in vitro, several of which are also active in a mouse infection model. There are millions of additional commercially available molecules that could be screened for potential activity as anti-EBOV compounds. One way to prioritize compounds for testing is to generate computational models based on the high throughput screening data and then virtually screen compound libraries. In the current study, we have generated Bayesian machine learning models with viral pseudotype entry assay and EBOV replication assay data. We have validated the models internally and externally. We have also used these models to computationally score the MicroSource library of drugs to select those likely to be potential inhibitors. Three of the highest scoring molecules that were not in the model training sets, quinacrine, pyronaridine and tilorone, were tested in vitro and had EC50 values of 350, 420 and 230 nM, respectively. Pyronaridine is a component of a combination therapy for malaria that was recently approved by the European Medicines Agency, which may make it more readily accessible for clinical testing. Like other known antimalarial drugs active against EBOV, it shares the 4-aminoquinoline scaffold. Tilorone is an investigational antiviral agent that has shown a broad array of biological activities, including cell growth inhibition in cancer cells, antifibrotic properties, α7 nicotinic receptor agonist activity, radioprotective activity and activation of hypoxia inducible factor-1. Quinacrine is an antimalarial that also has use as an anthelmintic. Our results suggest that data sets with less than 1,000 molecules can produce validated machine learning models that can in turn be utilized to identify novel EBOV inhibitors in vitro. PMID:26834994

  2. Hybrid wavelet-support vector machine approach for modelling rainfall-runoff process.

    PubMed

    Komasi, Mehdi; Sharghi, Soroush

    2016-01-01

    Because of the importance of water resources management, the need for accurate modeling of the rainfall-runoff process has grown rapidly in the past decades. Recently, the support vector machine (SVM) approach has been used by hydrologists for rainfall-runoff modeling and other fields of hydrology. Like other artificial intelligence models, such as the artificial neural network (ANN) and the adaptive neural fuzzy inference system, the SVM model is based on autoregressive properties. In this paper, wavelet analysis was linked to the SVM model concept for modeling the rainfall-runoff process of the Aghchai and Eel River watersheds. The main time series of the two variables, rainfall and runoff, were decomposed into multiple frequency-based sub-series by wavelet theory; these sub-series were then fed as input data to the SVM model in order to predict the runoff discharge one day ahead. The obtained results show that the wavelet-SVM model can predict both short- and long-term runoff discharges by considering the seasonality effects. The proposed hybrid model is also more appropriate than classical autoregressive ones such as ANN and SVM because it uses the multi-scale time series of rainfall and runoff data in the modeling process. PMID:27120649
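    A minimal sketch of the hybrid scheme follows, assuming the PyWavelets package (pywt) and synthetic rainfall/runoff series: each series is split into aligned wavelet sub-series, which are then fed jointly to a support vector regressor to predict the next day's runoff.

      import numpy as np
      import pywt
      from sklearn.svm import SVR

      def wavelet_subseries(x, wavelet="db4", level=3):
          """Split a series into aligned sub-series, one per wavelet level."""
          coeffs = pywt.wavedec(x, wavelet, level=level)
          subs = []
          for i in range(len(coeffs)):
              kept = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
              subs.append(pywt.waverec(kept, wavelet)[: len(x)])
          return np.column_stack(subs)

      rng = np.random.default_rng(3)
      t = np.arange(1000)
      rain = np.abs(rng.normal(size=1000)) + np.sin(2 * np.pi * t / 365)  # placeholder
      runoff = 0.6 * np.convolve(rain, np.ones(5) / 5, mode="same")       # placeholder

      X = np.hstack([wavelet_subseries(rain), wavelet_subseries(runoff)])[:-1]
      y = runoff[1:]                                   # one-day-ahead target
      model = SVR(kernel="rbf").fit(X, y)
      print("next-day runoff estimate:", model.predict(X[-1:]))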

  3. Abstracts of SIG Sessions.

    ERIC Educational Resources Information Center

    Proceedings of the ASIS Annual Meeting, 1995

    1995-01-01

    Presents abstracts of 15 special interest group (SIG) sessions. Topics include navigation and information utilization in the Internet, natural language processing, automatic indexing, image indexing, classification, users' models of database searching, online public access catalogs, education for information professions, information services,…

  4. Divide-and-conquer approach for brain machine interfaces: nonlinear mixture of competitive linear models.

    PubMed

    Kim, Sung-Phil; Sanchez, Justin C; Erdogmus, Deniz; Rao, Yadunandana N; Wessberg, Johan; Principe, Jose C; Nicolelis, Miguel

    2003-01-01

    This paper proposes a divide-and-conquer strategy for designing brain machine interfaces. A nonlinear combination of competitively trained local linear models (experts) is used to identify the mapping from neuronal activity in cortical areas associated with arm movement to the hand position of a primate. The proposed architecture and the training algorithm are described in detail and numerical performance comparisons with alternative linear and nonlinear modeling approaches, including time-delay neural networks and recursive multilayer perceptrons, are presented. This new strategy allows training the local linear models using normalized LMS and using a relatively smaller nonlinear network to efficiently combine the predictions of the linear experts. This leads to savings in computational requirements, while the performance is still similar to a large fully nonlinear network. PMID:12850045
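    The core of the competitive training scheme can be sketched in a few lines with synthetic neural features; the small nonlinear gating network that combines the experts' predictions is omitted here.

      import numpy as np

      rng = np.random.default_rng(4)
      n_experts, dim, mu, eps = 4, 16, 0.5, 1e-6
      W = rng.normal(scale=0.01, size=(n_experts, dim))    # local linear experts

      def nlms_competitive_step(x, d, W):
          """One competitive normalized-LMS step: the best expert wins the update."""
          preds = W @ x
          k = np.argmin((d - preds) ** 2)                  # winner: smallest error
          e = d - preds[k]
          W[k] += mu * e * x / (eps + x @ x)               # normalized LMS update
          return k, e

      for _ in range(1000):
          x = rng.normal(size=dim)                         # binned firing rates (placeholder)
          d = np.sin(x[0]) + 0.1 * rng.normal()            # hand-position coordinate (placeholder)
          nlms_competitive_step(x, d, W)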

  5. EBS Radionuclide Transport Abstraction

    SciTech Connect

    J. Prouty

    2006-07-14

    The purpose of this report is to develop and analyze the engineered barrier system (EBS) radionuclide transport abstraction model, consistent with Level I and Level II model validation, as identified in Technical Work Plan for: Near-Field Environment and Transport: Engineered Barrier System: Radionuclide Transport Abstraction Model Report Integration (BSC 2005 [DIRS 173617]). The EBS radionuclide transport abstraction (or EBS RT Abstraction) is the conceptual model used in the total system performance assessment (TSPA) to determine the rate of radionuclide releases from the EBS to the unsaturated zone (UZ). The EBS RT Abstraction conceptual model consists of two main components: a flow model and a transport model. Both models are developed mathematically from first principles in order to show explicitly what assumptions, simplifications, and approximations are incorporated into the models used in the TSPA. The flow model defines the pathways for water flow in the EBS and specifies how the flow rate is computed in each pathway. Input to this model includes the seepage flux into a drift. The seepage flux is potentially split by the drip shield, with some (or all) of the flux being diverted by the drip shield and some passing through breaches in the drip shield that might result from corrosion or seismic damage. The flux through drip shield breaches is potentially split by the waste package, with some (or all) of the flux being diverted by the waste package and some passing through waste package breaches that might result from corrosion or seismic damage. Neither the drip shield nor the waste package survives an igneous intrusion, so the flux splitting submodel is not used in the igneous scenario class. The flow model is validated in an independent model validation technical review. The drip shield and waste package flux splitting algorithms are developed and validated using experimental data. The transport model considers advective transport and diffusive transport

  6. A hybrid flowshop scheduling model considering dedicated machines and lot-splitting for the solar cell industry

    NASA Astrophysics Data System (ADS)

    Wang, Li-Chih; Chen, Yin-Yann; Chen, Tzu-Li; Cheng, Chen-Yang; Chang, Chin-Wei

    2014-10-01

    This paper studies a solar cell industry scheduling problem, which is similar to traditional hybrid flowshop scheduling (HFS). In a typical HFS problem, the allocation of machine resources for each order should be scheduled in advance. However, the challenge in solar cell manufacturing is that the number of machines can be adjusted dynamically to complete the jobs. An optimal production scheduling model is developed to explore these issues, considering practical characteristics such as the hybrid flowshop, the parallel machine system, dedicated machines, and sequence-independent and sequence-dependent job setup times. The objectives of this model are to minimise the makespan and to decide the processing sequence of the orders/lots in each stage, the lot-splitting decisions for the orders, and the number of machines used to satisfy the demands in each stage. The experimental results show that lot-splitting has a significant effect on shortening the makespan, and that the improvement is influenced by the processing time and the setup time of orders; the threshold point at which the makespan improves can therefore be identified. In addition, the model indicates that allowing more lot-splitting, that is, greater flexibility in allocating orders/lots to machines, results in better scheduling performance.

  7. MIP models and hybrid algorithms for simultaneous job splitting and scheduling on unrelated parallel machines.

    PubMed

    Eroglu, Duygu Yilmaz; Ozmutlu, H Cenk

    2014-01-01

    We developed mixed integer programming (MIP) models and hybrid genetic-local search algorithms for the scheduling problem of unrelated parallel machines with job sequence- and machine-dependent setup times and with the job splitting property. The first contribution of this paper is to introduce novel algorithms that perform splitting and scheduling simultaneously with a variable number of subjobs. We propose a simple chromosome structure constituted by random key numbers in the hybrid genetic-local search algorithm (GAspLA). Random key numbers are used frequently in genetic algorithms, but they create additional difficulty when local search is hybridized with them; the second contribution of this paper is algorithms that adapt the results of local search back into the genetic algorithm with a minimum of relocation operations on the genes' random key numbers. The third contribution is three new MIP models that perform splitting and scheduling simultaneously. The fourth contribution is the implementation of GAspLAMIP, which lets us verify the optimality of GAspLA for the studied combinations. The proposed methods are tested on a set of problems taken from the literature, and the results validate the effectiveness of the proposed algorithms. PMID:24977204
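    A random-key decoding step of the kind the chromosome structure implies might look as follows; this particular decoding (fractional parts give the sequence, integer parts the machine) is illustrative only, not the paper's exact GAspLA encoding.

      import numpy as np

      def decode_random_keys(keys, n_machines):
          """Decode a random-key chromosome into a job order plus machine choices.

          Sorting the fractional parts yields the processing sequence; the integer
          parts (mod n_machines) pick a machine for each subjob.
          """
          order = np.argsort(keys % 1.0)                  # job sequence from fractions
          machines = keys.astype(int) % n_machines        # machine assignment
          return order, machines[order]

      rng = np.random.default_rng(5)
      chromosome = rng.uniform(0, 3, size=8)              # 8 subjobs, 3 machines
      print(decode_random_keys(chromosome, n_machines=3))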

  8. Semiparametric regression of multidimensional genetic pathway data: least-squares kernel machines and linear mixed models.

    PubMed

    Liu, Dawei; Lin, Xihong; Ghosh, Debashis

    2007-12-01

    We consider a semiparametric regression model that relates a normal outcome to covariates and a genetic pathway, where the covariate effects are modeled parametrically and the pathway effect of multiple gene expressions is modeled parametrically or nonparametrically using least-squares kernel machines (LSKMs). This unified framework allows a flexible function for the joint effect of multiple genes within a pathway by specifying a kernel function and allows for the possibility that each gene expression effect might be nonlinear and the genes within the same pathway are likely to interact with each other in a complicated way. This semiparametric model also makes it possible to test for the overall genetic pathway effect. We show that the LSKM semiparametric regression can be formulated using a linear mixed model. Estimation and inference hence can proceed within the linear mixed model framework using standard mixed model software. Both the regression coefficients of the covariate effects and the LSKM estimator of the genetic pathway effect can be obtained using the best linear unbiased predictor in the corresponding linear mixed model formulation. The smoothing parameter and the kernel parameter can be estimated as variance components using restricted maximum likelihood. A score test is developed to test for the genetic pathway effect. Model/variable selection within the LSKM framework is discussed. The methods are illustrated using a prostate cancer data set and evaluated using simulations. PMID:18078480
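    The mixed-model identity at the heart of the abstract can be written out compactly; the following is the standard formulation in our notation, not copied from the paper:

      % outcome = parametric covariate effects + kernel machine function + noise
      y_i = x_i^{\top}\beta + h(z_i) + \varepsilon_i, \qquad \varepsilon_i \sim N(0, \sigma^2)
      % treating the evaluations of h as a random effect whose covariance is
      % proportional to the kernel matrix K gives the linear mixed model
      \mathbf{y} = X\beta + \mathbf{h} + \boldsymbol{\varepsilon}, \qquad \mathbf{h} \sim N(\mathbf{0}, \tau K)

    The estimates of β and h are then the best linear unbiased predictors of this mixed model, and τ (the smoothing parameter) and σ² are estimated by REML as variance components, exactly as the abstract states.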

  9. State Event Models for the Formal Analysis of Human-Machine Interactions

    NASA Technical Reports Server (NTRS)

    Combefis, Sebastien; Giannakopoulou, Dimitra; Pecheur, Charles

    2014-01-01

    The work described in this paper was motivated by our experience with applying a framework for formal analysis of human-machine interactions (HMI) to a realistic model of an autopilot. The framework is built around a formally defined conformance relation called "full-control" between an actual system and the mental model according to which the system is operated. Systems are well-designed if they can be described by relatively simple, full-control, mental models for their human operators. For this reason, our framework supports automated generation of minimal full-control mental models for HMI systems, where both the system and the mental models are described as labelled transition systems (LTS). The autopilot that we analysed has been developed in the NASA Ames HMI prototyping tool ADEPT. In this paper, we describe how we extended the models that our HMI analysis framework handles to allow adequate representation of ADEPT models. We then provide a property-preserving reduction from these extended models to LTSs, to enable application of our LTS-based formal analysis algorithms. Finally, we briefly discuss the analyses we were able to perform on the autopilot model with our extended framework.

  10. Computational modeling of skin reflectance spectra for biological parameter estimation through machine learning

    NASA Astrophysics Data System (ADS)

    Vyas, Saurabh; Van Nguyen, Hien; Burlina, Philippe; Banerjee, Amit; Garza, Luis; Chellappa, Rama

    2012-06-01

    A computational skin reflectance model is used here to provide the reflectance, absorption, scattering, and transmittance based on the constitutive biological components that make up the layers of the skin. The changes in reflectance are mapped back to deviations in model parameters, which include melanosome level, collagen level and blood oxygenation. The computational model implemented in this work is based on the Kubelka-Munk multi-layer reflectance model and the Fresnel equations that describe a generic N-layer model structure. This assumes the skin is a multi-layered material, with each layer having specific absorption and scattering coefficients, reflectance spectra and transmittance based on the model parameters. These model parameters include melanosome level, collagen level, blood oxygenation, blood level, dermal depth, and subcutaneous tissue reflectance. We use this model, coupled with support vector machine based regression (SVR), to predict the biological parameters that make up the layers of the skin. In the proposed approach, the physics-based forward mapping is used to generate a large set of training exemplars. The samples in this dataset are then used as training inputs for the SVR algorithm to learn the inverse mapping. This approach was tested on VIS-range hyperspectral data. Performance validation of the proposed approach was performed by measuring the prediction error on the skin constitutive parameters and exhibited very promising results.
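    A sketch of the inverse-mapping step follows, with an invented toy forward model standing in for the Kubelka-Munk/Fresnel computation; parameter names and spectra are placeholders.

      # Learn the inverse mapping with SVR: a forward reflectance model generates
      # training pairs (parameters -> spectrum), and SVR learns spectrum -> parameters.
      import numpy as np
      from sklearn.multioutput import MultiOutputRegressor
      from sklearn.svm import SVR

      rng = np.random.default_rng(6)
      wavelengths = np.linspace(450, 700, 30)

      def toy_reflectance(melanosome, collagen, oxygenation):
          """Placeholder physics-based forward map from parameters to a spectrum."""
          return (np.exp(-melanosome * wavelengths / 700)
                  + 0.3 * collagen * np.cos(wavelengths / 90)
                  + 0.2 * oxygenation)

      params = rng.uniform(0.05, 1.0, size=(2000, 3))       # training exemplars
      spectra = np.array([toy_reflectance(*p) for p in params])

      inverse = MultiOutputRegressor(SVR(kernel="rbf")).fit(spectra, params)
      est = inverse.predict(spectra[:1])
      print("true:", params[0], "estimated:", est[0])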

  11. Large-scale ligand-based predictive modelling using support vector machines.

    PubMed

    Alvarsson, Jonathan; Lampa, Samuel; Schaal, Wesley; Andersson, Claes; Wikberg, Jarl E S; Spjuth, Ola

    2016-01-01

    The increasing size of datasets in drug discovery makes it challenging to build robust and accurate predictive models within a reasonable amount of time. In order to investigate the effect of dataset sizes on predictive performance and modelling time, ligand-based regression models were trained on open datasets of varying sizes of up to 1.2 million chemical structures. For modelling, two implementations of support vector machines (SVM) were used. Chemical structures were described by the signatures molecular descriptor. Results showed that for the larger datasets, the LIBLINEAR SVM implementation performed on par with the well-established libsvm with a radial basis function kernel, but with dramatically less time for model building even on modest computer resources. Using a non-linear kernel proved to be infeasible for large data sizes, even with substantial computational resources on a computer cluster. To deploy the resulting models, we extended the Bioclipse decision support framework to support models from LIBLINEAR and made our models of logD and solubility available from within Bioclipse. PMID:27516811
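    The linear-versus-kernel trade-off can be reproduced in miniature with scikit-learn, whose LinearSVR is backed by LIBLINEAR; the data sizes and features below are synthetic stand-ins for signature descriptors.

      import time
      import numpy as np
      from sklearn.svm import SVR, LinearSVR

      rng = np.random.default_rng(7)
      for n in (500, 2000, 8000):
          X = rng.normal(size=(n, 100))
          y = X[:, :5].sum(axis=1) + 0.1 * rng.normal(size=n)
          t0 = time.perf_counter()
          LinearSVR(max_iter=10000).fit(X, y)      # LIBLINEAR-style linear model
          t1 = time.perf_counter()
          SVR(kernel="rbf").fit(X, y)              # libsvm with RBF kernel
          t2 = time.perf_counter()
          print(f"n={n}: linear {t1 - t0:.2f}s, rbf {t2 - t1:.2f}s")

    As in the study, the gap in training time widens rapidly with dataset size, while linear models often remain competitive in accuracy for high-dimensional descriptors.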

  12. Fishery landing forecasting using EMD-based least square support vector machine models

    NASA Astrophysics Data System (ADS)

    Shabri, Ani

    2015-05-01

    In this paper, a novel hybrid ensemble learning paradigm integrating ensemble empirical mode decomposition (EMD) and the least squares support vector machine (LSSVM) is proposed to improve the accuracy of fishery landing forecasting. This hybrid is formulated specifically for modeling fishery landings, whose time series are highly nonlinear, non-stationary and seasonal, and can hardly be modelled properly or forecasted accurately by traditional statistical models. In the hybrid model, EMD is used to decompose the original data into a finite and often small number of sub-series. Each sub-series is modeled and forecasted by an LSSVM model. Finally, the forecast of fishery landing is obtained by aggregating the forecasting results of all sub-series. To assess the effectiveness and predictability of EMD-LSSVM, monthly fishery landing records from East Johor of Peninsular Malaysia have been used as a case study. The results show that the proposed model yields better forecasts than the Autoregressive Integrated Moving Average (ARIMA), LSSVM and EMD-ARIMA models on several criteria.

  13. Modeling of variable speed refrigerated display cabinets based on adaptive support vector machine

    NASA Astrophysics Data System (ADS)

    Cao, Zhikun; Han, Hua; Gu, Bo

    2010-01-01

    In this paper the adaptive support vector machine (ASVM) method is introduced to the field of intelligent modeling of refrigerated display cabinets and used to construct a highly precise mathematical model of their performance. A model for a variable speed open vertical display cabinet was constructed using preprocessing techniques for the measured data, including the elimination of outlying data points by means of an exponentially weighted moving average (EWMA). The adaptation of the SVM for this application was achieved through dynamic adjustment of the loss coefficient. From there, the objective function for energy use per unit display area, total energy consumption (TEC) divided by total display area (TDA), was constructed and solved using the ASVM method. Compared with the results achieved using a back-propagation neural network (BPNN) model, the ASVM model for the refrigerated display cabinet was characterized by its simple structure, fast convergence speed and high prediction accuracy, and it also has better noise rejection properties than the original SVM model. The theoretical analysis and experimental results presented in this paper show that it is feasible to model the display cabinet using the ASVM method.
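    The EWMA outlier-elimination step might be sketched as follows, assuming pandas and synthetic measurements; the three-sigma threshold rule is our assumption, not the paper's.

      import numpy as np
      import pandas as pd

      rng = np.random.default_rng(12)
      power = pd.Series(2.0 + 0.1 * rng.normal(size=500))  # measured samples (placeholder)
      power.iloc[[50, 200, 350]] = [5.0, -1.0, 6.0]        # injected outliers

      ewma = power.ewm(alpha=0.1).mean()                   # exponentially weighted average
      resid = (power - ewma).abs()
      clean = power[resid < 3 * resid.std()]               # keep well-behaved points
      print(f"kept {len(clean)} of {len(power)} samples")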

  14. Unified error model based spatial error compensation for four types of CNC machining center: Part I-Singular function based unified error model

    NASA Astrophysics Data System (ADS)

    Fan, Kaiguo; Yang, Jianguo; Yang, Liyan

    2015-08-01

    To unify the error model for four types of CNC machining center, the comprehensive error model of each type was established using the homogeneous transformation matrix (HTM). The internal rules between the HTMs and the kinematic chains were analyzed in this research. The analysis shows that the HTM elements associated with motion axes behind the reference coordinate system are positive, while the HTM elements associated with motion axes in front of the reference coordinate system are negative. To express these internal rules, the singular function was introduced into the HTMs, and a unified error model for four types of CNC machining center was established based on the HTM and the singular function. The unified error model includes 18 error elements, which are the main factors affecting the machining accuracy of CNC machine tools. Practical results show that the unified error model is suitable not only for vertical machining centers but also for horizontal machining centers.
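    To make the HTM machinery concrete, a small sketch of composing a nominal axis motion with its error transformation follows; the numerical error values are arbitrary examples, not the paper's 18 identified elements.

      import numpy as np

      def htm(rot=np.eye(3), trans=(0.0, 0.0, 0.0)):
          """Build a 4x4 homogeneous transformation matrix."""
          T = np.eye(4)
          T[:3, :3] = rot
          T[:3, 3] = trans
          return T

      def small_rotation(ex, ey, ez):
          """First-order rotation matrix for small angular errors (rad)."""
          return np.array([[1.0, -ez, ey],
                           [ez, 1.0, -ex],
                           [-ey, ex, 1.0]])

      # nominal X-axis move composed with its positioning and angular error terms
      T_nominal = htm(trans=(100.0, 0.0, 0.0))
      T_error = htm(small_rotation(1e-5, 2e-5, 1.5e-5), trans=(0.004, 0.002, 0.001))
      tool_point = np.array([0.0, 0.0, 50.0, 1.0])
      print((T_nominal @ T_error @ tool_point)[:3])   # actual vs. commanded position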

  15. Machine Learning Models and Pathway Genome Data Base for Trypanosoma cruzi Drug Discovery

    PubMed Central

    McCall, Laura-Isobel; Sarker, Malabika; Yadav, Maneesh; Ponder, Elizabeth L.; Kallel, E. Adam; Kellar, Danielle; Chen, Steven; Arkin, Michelle; Bunin, Barry A.; McKerrow, James H.; Talcott, Carolyn

    2015-01-01

    Background Chagas disease is a neglected tropical disease (NTD) caused by the eukaryotic parasite Trypanosoma cruzi. The current clinical and preclinical pipeline for T. cruzi is extremely sparse and lacks drug target diversity. Methodology/Principal Findings In the present study we developed a computational approach that utilized data from several public whole-cell, phenotypic high throughput screens that have been completed for T. cruzi by the Broad Institute, including a single screen of over 300,000 molecules in the search for chemical probes as part of the NIH Molecular Libraries program. We have also compiled and curated relevant biological and chemical compound screening data, including (i) compounds and biological activity data from the literature, (ii) high throughput screening datasets, and (iii) predicted metabolites of T. cruzi metabolic pathways. This information was used to help us identify compounds and their potential targets. We have constructed a Pathway Genome Data Base for T. cruzi. In addition, we have developed Bayesian machine learning models that were used to virtually screen libraries of compounds. Ninety-seven compounds were selected for in vitro testing, and 11 of these were found to have EC50 < 10 μM. We progressed five compounds to an in vivo mouse efficacy model of Chagas disease and validated that the machine learning model could identify in vitro active compounds not in the training set, as well as known positive controls. The antimalarial pyronaridine possessed 85.2% efficacy in the acute Chagas mouse model. We have also proposed potential targets (for future verification) for this compound based on structural similarity to known compounds with targets in T. cruzi. Conclusions/Significance We have demonstrated how combining chemoinformatics and bioinformatics for T. cruzi drug discovery can bring interesting in vivo active molecules to light that may have been overlooked. The approach we have taken is broadly applicable to other

  16. Kinetostatic modeling and analysis of an Exechon parallel kinematic machine (PKM) module

    NASA Astrophysics Data System (ADS)

    Zhao, Yanqin; Jin, Yan; Zhang, Jun

    2016-01-01

    As a newly invented parallel kinematic machine (PKM), the Exechon has found potential application in the machining and assembly industries due to its high rigidity and high dynamics. To guarantee overall performance, the loading conditions and deflections of the key components must be revealed to provide basic mechanical data for component design. For this purpose, a kinetostatic model is proposed using the substructure synthesis technique. The Exechon is divided into a platform subsystem, a fixed base subsystem and three limb subsystems according to its structure. By modeling the limb assemblage as a spatial beam constrained by two sets of lumped virtual springs representing the compliances of the revolute, universal and spherical joints, the equilibrium equations of the limb subsystems are derived with the finite element method (FEM). The equilibrium equations of the platform are derived from Newton's second law. By introducing deformation compatibility conditions between the platform and the limbs, the governing equilibrium equations of the system are derived to formulate an analytical expression for the system's deflections. The platform's elastic displacements and joint reactions caused by gravity are investigated and show a strong position-dependency and axis-symmetry due to the machine's kinematic and structural features. The proposed kinetostatic model is a trade-off between the accuracy of FEM and the concision of analytical methods, and can thus predict the kinetostatics throughout the workspace in a quick and succinct manner. The proposed modeling methodology and kinetostatic analysis can be extended to other PKMs with the necessary modifications, providing useful information for kinematic calibration as well as component strength calculations.

  17. Modeling end-gas knock in a rapid-compression machine

    SciTech Connect

    Bush, W.B.; Fendell, F.E.; Fink, S.F.

    1984-01-01

    A rapid-compression machine is a laboratory apparatus to study aspects of the compression stroke, combustion event, and expansion stroke of an Otto cycle. As a simple model of such a machine, unsteady one-dimensional nonisobaric laminar flame propagation through a combustible premixture, enclosed in a variable volume, is examined in the asymptotic limit of Arrhenius activation temperature large relative to the conventional adiabatic flame temperature. In this limit, a thin propagating flame separates nondiffusive expanses of burned and unburned gas. The pressure through the enclosure is spatially homogeneous for smooth flame propagation. However, expansion of the hot burned gas results in compressional preheating of the remaining unburned gas, and in fact the spatially homogeneous gas may undergo autoconversion prior to arrival of the propagating flame. If such an explosion is too rapid for acoustic adjustment, large spatial differences in pressure arise and the resulting nonlinear waves produce audible knock. Here attention is concentrated on what fraction (if any) of the total charge may undergo autoconversion for a given operating condition, and what enhanced heat transfer from the end gas would preclude autoconversion - though too great heat transfer from the end gas could result in flame quenching (unburned residual fuel).

  18. Modeling end-gas knock in a rapid-compression machine

    SciTech Connect

    Bush, W.B.; Fendell, F.E.; Fink, S.F.

    1984-01-09

    A rapid-compression machine is a laboratory apparatus to study aspects of the compression stroke, combustion event, and expansion stroke of an Otto cycle. As a simple model of such a machine, unsteady one-dimensional nonisobaric laminar flame propagation through a combustible premixture, enclosed in a variable volume, is examined in the asymptotic limit of Arrhenius activation temperature large relative to the conventional adiabatic flame temperature. In this limit, a thin propagating flame separates nondiffusive expanses of burned and unburned gas. The pressure through the enclosure is spatially homogeneous for smooth flame propagation. However, expansion of the hot burned gas results in compressional preheating of the remaining unburned gas, and in fact the spatially homogeneous gas may undergo autoconversion prior to arrival of the propagating flame. If such an explosion is too rapid for acoustic adjustment, large spatial differences in pressure arise and the resulting nonlinear waves produce audible knock. Here attention is concentrated on what fraction (if any) of the total charge may undergo autoconversion for a given operating condition, and what enhanced heat transfer from the end gas would preclude autoconversion--though too great heat transfer from the end gas could result in flame quenching (unburned residual fuel).

  19. Modeling and Control of a Double-effect Absorption Refrigerating Machine

    NASA Astrophysics Data System (ADS)

    Hihara, Eiji; Yamamoto, Yuuji; Saito, Takamoto; Nagaoka, Yoshikazu; Nishiyama, Noriyuki

    For the purpose of improving the response to cooling load variations and the part-load characteristics, the optimal operation of a double-effect absorption refrigerating machine was investigated. The test machine was designed so that the energy input and weak solution flow rate could be controlled continuously. It is composed of a gas-fired high-temperature generator, a separator, a low-temperature generator, an absorber, a condenser, an evaporator, and high- and low-temperature heat exchangers. The working fluid is a lithium bromide-water solution. The standard output is 80 kW. Based on the experimental data, a simulation model of the static characteristics was developed. The experiments and simulation analysis indicate that there is an optimal weak solution flow rate which maximizes the coefficient of performance (COP) under any given cooling load condition. The optimal condition is closely related to the refrigerant steam flow rate flowing from the separator to the high-temperature heat exchanger with the medium solution. The heat transfer performance of the heat exchangers in the components influences the COP; a change in the overall heat transfer coefficient of the absorber has a larger effect on the COP than that of the other components.

  20. One- and two-dimensional Stirling machine simulation using experimentally generated flow turbulence models

    NASA Technical Reports Server (NTRS)

    Goldberg, Louis F.

    1990-01-01

    Investigations of one- and two-dimensional (1- or 2-D) simulations of Stirling machines centered around experimental data generated by the University of Minnesota Mechanical Engineering Test Rig (METR) are covered. This rig was used to investigate oscillating flows about a zero mean, with emphasis on laminar/turbulent flow transitions in tubes. The Space Power Demonstrator Engine (SPDE), and in particular its heater, was the subject of the simulations. The heater was treated as a 1- or 2-D entity in an otherwise 1-D system. The 2-D flow effects impacted the transient flow predictions in the heater itself but did not have a major impact on overall system performance. Information propagation effects may be a significant issue in the simulation (if not the performance) of high-frequency, high-pressure Stirling machines. This was investigated further by comparing a simulation against an experimentally validated analytic solution for the fluid dynamics of a transmission line. The applicability of the pressure-linking algorithm for compressible flows may be limited by the characteristic number (defined as flow-path information traverses per cycle); this warrants further study. Lastly, the METR was simulated in 1- and 2-D. A two-parameter k-ω foldback-function turbulence model was developed and tested against a limited set of METR experimental data.

  1. Ecophysiological Modeling of Grapevine Water Stress in Burgundy Terroirs by a Machine-Learning Approach.

    PubMed

    Brillante, Luca; Mathieu, Olivier; Lévêque, Jean; Bois, Benjamin

    2016-01-01

    In a climate change scenario, successful modeling of the plant-soil-meteorology relationships is crucial for sustainable agricultural production, especially for perennial crops. Grapevines (Vitis vinifera L. cv Chardonnay) located in eight experimental plots (Burgundy, France) along a hillslope were monitored weekly for 3 years for leaf water potentials, both at predawn (Ψpd) and at midday (Ψstem). The water stress experienced by the grapevines was modeled as a function of meteorological data (minimum and maximum temperature, rainfall) and soil characteristics (soil texture, gravel content, slope) by a gradient boosting machine. Model performance was assessed by comparison with the carbon isotope discrimination (δ(13)C) of grape sugars at harvest and by the use of a test set. The developed models reached outstanding prediction performance (RMSE < 0.08 MPa for Ψstem and < 0.06 MPa for Ψpd), comparable to measurement accuracy. Model predictions at a daily time step improved the correlation with δ(13)C data with respect to the observed trend at a weekly time scale. The role of each predictor in these models is described in order to understand how temperature, rainfall, soil texture, gravel content and slope affect grapevine water status in the studied context. This work proposes a straightforward strategy to simulate plant water stress in field conditions, at a local scale, to investigate ecological relationships in the vineyard and adapt cultural practices to future conditions. PMID:27375651
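    A minimal sketch of the modeling step with scikit-learn's gradient boosting follows; the predictors and the response surface are synthetic placeholders for the plot-level measurements described above.

      import numpy as np
      from sklearn.ensemble import GradientBoostingRegressor

      rng = np.random.default_rng(8)
      n = 600
      X = np.column_stack([
          rng.uniform(5, 35, n),      # maximum temperature (degC)
          rng.uniform(0, 40, n),      # rainfall (mm)
          rng.uniform(0, 60, n),      # gravel content (%)
          rng.uniform(5, 45, n),      # clay content (%)
      ])
      psi_stem = -(0.02 * X[:, 0] - 0.01 * X[:, 1] + 0.005 * X[:, 2]) \
                 + 0.05 * rng.normal(size=n)                 # synthetic water potential (MPa)

      gbm = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05).fit(X, psi_stem)
      print("feature importances:", gbm.feature_importances_)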

  2. Rotary ultrasonic machining of CFRP: a mechanistic predictive model for cutting force.

    PubMed

    Cong, W L; Pei, Z J; Sun, X; Zhang, C L

    2014-02-01

    Cutting force is one of the most important output variables in rotary ultrasonic machining (RUM) of carbon fiber reinforced plastic (CFRP) composites. Many experimental investigations on cutting force in RUM of CFRP have been reported. However, in the literature, there are no cutting force models for RUM of CFRP. This paper develops a mechanistic predictive model for cutting force in RUM of CFRP. The material removal mechanism of CFRP in RUM has been analyzed first. The model is based on the assumption that brittle fracture is the dominant mode of material removal. CFRP micromechanical analysis has been conducted to represent CFRP as an equivalent homogeneous material to obtain the mechanical properties of CFRP from its components. Based on this model, relationships between input variables (including ultrasonic vibration amplitude, tool rotation speed, feedrate, abrasive size, and abrasive concentration) and cutting force can be predicted. The relationships between input variables and important intermediate variables (indentation depth, effective contact time, and maximum impact force of single abrasive grain) have been investigated to explain predicted trends of cutting force. Experiments are conducted to verify the model, and experimental results agree well with predicted trends from this model. PMID:24120374

  3. Model for noise-induced hearing loss using support vector machine

    NASA Astrophysics Data System (ADS)

    Qiu, Wei; Ye, Jun; Liu-White, Xiaohong; Hamernik, Roger P.

    2005-09-01

    Contemporary noise standards are based on the assumption that an energy metric such as the equivalent noise level is sufficient for estimating the potential of a noise stimulus to cause noise-induced hearing loss (NIHL). Available data, from laboratory-based experiments (Lei et al., 1994; Hamernik and Qiu, 2001) indicate that while an energy metric may be necessary, it is not sufficient for the prediction of NIHL. A support vector machine (SVM) NIHL prediction model was constructed, based on a 550-subject (noise-exposed chinchillas) database. Training of the model used data from 367 noise-exposed subjects. The model was tested using the remaining 183 subjects. Input variables for the model included acoustic, audiometric, and biological variables, while output variables were PTS and cell loss. The results show that an energy parameter is not sufficient to predict NIHL, especially in complex noise environments. With the kurtosis and other noise and biological parameters included as additional inputs, the performance of the SVM prediction model was significantly improved. The SVM prediction model has the potential to reliably predict noise-induced hearing loss. [Work supported by NIOSH.]

  4. Hidden Markov models and other machine learning approaches in computational molecular biology

    SciTech Connect

    Baldi, P.

    1995-12-31

    This tutorial was one of eight tutorials selected for presentation at the Third International Conference on Intelligent Systems for Molecular Biology, held in the United Kingdom from July 16 to 19, 1995. Computational tools are increasingly needed to process the massive amounts of data, to organize and classify sequences, to detect weak similarities, to separate coding from non-coding regions, and to reconstruct the underlying evolutionary history. The fundamental problem in machine learning is the same as in scientific reasoning in general, as well as in statistical modeling: to come up with a good model for the data. In this tutorial four classes of models are reviewed: hidden Markov models, artificial neural networks, belief networks, and stochastic grammars. When dealing with DNA and protein primary sequences, hidden Markov models are one of the most flexible and powerful tools for alignments and database searches. In this tutorial, attention is focused on the theory of hidden Markov models and how to apply them to problems in molecular biology.
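    As a concrete taste of the first model class, here is the scaled forward recurrence for a toy two-state DNA HMM; the states, alphabet, and probabilities are invented for illustration.

      import numpy as np

      symbols = {"A": 0, "C": 1, "G": 2, "T": 3}
      pi = np.array([0.5, 0.5])                       # initial state distribution
      A = np.array([[0.9, 0.1],                       # transitions: AT-rich <-> GC-rich
                    [0.1, 0.9]])
      B = np.array([[0.35, 0.15, 0.15, 0.35],         # emissions in the AT-rich state
                    [0.15, 0.35, 0.35, 0.15]])        # emissions in the GC-rich state

      def forward_loglik(seq):
          """log P(sequence | model) via the scaled forward algorithm."""
          alpha = pi * B[:, symbols[seq[0]]]
          loglik = np.log(alpha.sum())
          alpha = alpha / alpha.sum()
          for ch in seq[1:]:
              alpha = (alpha @ A) * B[:, symbols[ch]]
              loglik += np.log(alpha.sum())
              alpha = alpha / alpha.sum()
          return loglik

      print(forward_loglik("GGCGCGGAAATT"))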

  5. Ecophysiological Modeling of Grapevine Water Stress in Burgundy Terroirs by a Machine-Learning Approach

    PubMed Central

    Brillante, Luca; Mathieu, Olivier; Lévêque, Jean; Bois, Benjamin

    2016-01-01

    In a climate change scenario, successful modeling of the relationships between plant-soil-meteorology is crucial for a sustainable agricultural production, especially for perennial crops. Grapevines (Vitis vinifera L. cv Chardonnay) located in eight experimental plots (Burgundy, France) along a hillslope were monitored weekly for 3 years for leaf water potentials, both at predawn (Ψpd) and at midday (Ψstem). The water stress experienced by grapevine was modeled as a function of meteorological data (minimum and maximum temperature, rainfall) and soil characteristics (soil texture, gravel content, slope) by a gradient boosting machine. Model performance was assessed by comparison with carbon isotope discrimination (δ13C) of grape sugars at harvest and by the use of a test-set. The developed models reached outstanding prediction performance (RMSE < 0.08 MPa for Ψstem and < 0.06 MPa for Ψpd), comparable to measurement accuracy. Model predictions at a daily time step improved correlation with δ13C data, respect to the observed trend at a weekly time scale. The role of each predictor in these models was described in order to understand how temperature, rainfall, soil texture, gravel content and slope affect the grapevine water status in the studied context. This work proposes a straight-forward strategy to simulate plant water stress in field condition, at a local scale; to investigate ecological relationships in the vineyard and adapt cultural practices to future conditions. PMID:27375651

  6. The computer as a physical system: A microscopic quantum mechanical Hamiltonian model of computers as represented by Turing machines

    NASA Astrophysics Data System (ADS)

    Benioff, Paul

    1980-05-01

    In this paper a microscopic quantum mechanical model of computers as represented by Turing machines is constructed. It is shown that for each number N and Turing machine Q there exists a Hamiltonian H_QN and a class of appropriate initial states such that if Ψ_QN(0) is such an initial state, then Ψ_QN(t) = exp(−iH_QN t) Ψ_QN(0) correctly describes at times t_3, t_6, ⋯, t_3N model states that correspond to the completion of the first, second, ⋯, Nth computation step of Q. The model parameters can be adjusted so that for an arbitrary time interval Δ around t_3, t_6, ⋯, t_3N, the "machine" part of Ψ_QN(t) is stationary.

  7. Business Machines

    ERIC Educational Resources Information Center

    Pactor, Paul

    1970-01-01

    The U.S. Department of Labor has projected a 106 percent increase in the demand for office machine operators over the next 10 years. Machines with a high frequency of use include printing calculators, 10-key adding machines, and key punch machines. The 12th grade is the logical time for teaching business machines. (CH)

  8. Evaluation models for soil nutrient based on support vector machine and artificial neural networks.

    PubMed

    Li, Hao; Leng, Weijia; Zhou, Yibing; Chen, Fudi; Xiu, Zhilong; Yang, Dazuo

    2014-01-01

    Soil nutrient content is an important contributor to soil fertility and environmental effects. Traditional approaches to evaluating soil nutrient are quite hard to operate, causing great difficulty in practical applications. In this paper, we present a series of comprehensive evaluation models for soil nutrient using the support vector machine (SVM), multiple linear regression (MLR), and artificial neural networks (ANNs), respectively. We took the content of organic matter, total nitrogen, alkali-hydrolysable nitrogen, rapidly available phosphorus, and rapidly available potassium as independent variables, and the evaluation level of soil nutrient content as the dependent variable. Results show that the average prediction accuracies of the SVM models are 77.87% and 83.00%, respectively, while the general regression neural network (GRNN) model's average prediction accuracy is 92.86%, indicating that SVM and GRNN models can be used effectively to assess soil nutrient levels with suitable dependent variables. In practical applications, both SVM and GRNN models can be used for determining soil nutrient levels. PMID:25548781

  9. Mathematical modeling and multi-criteria optimization of rotary electrical discharge machining process

    NASA Astrophysics Data System (ADS)

    Shrinivas Balraj, U.

    2015-12-01

    In this paper, mathematical modeling of three performance characteristics, namely material removal rate, surface roughness and electrode wear rate, in rotary electrical discharge machining of RENE80 nickel superalloy is done using a regression approach. The parameters considered are peak current, pulse on time, pulse off time and electrode rotational speed. The regression approach is very effective for mathematical modeling when the performance characteristic is influenced by many variables, and the resulting models are helpful in predicting performance under a given combination of input process parameters. The adequacy of the developed models is tested by the correlation coefficient and analysis of variance, and it is observed that the developed models adequately establish the relationship between the input parameters and the performance characteristics. Further, multi-criteria optimization of the process parameter levels is carried out using the grey-based Taguchi method. The experiments are planned based on Taguchi's L9 orthogonal array. The proposed method employs a single grey relational grade as a performance index to obtain the optimum levels of parameters. It is found that peak current and electrode rotational speed are influential on these characteristics. Confirmation experiments conducted to validate the optimal parameters reveal improvements in material removal rate, surface roughness and electrode wear rate of 13.84%, 12.91% and 19.42%, respectively.
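    The grey relational grade computation that underlies the multi-criteria step can be sketched as follows; the response values are illustrative, not the paper's measurements.

      import numpy as np

      # rows: L9 experiments; columns: MRR (larger-better), Ra and EWR (smaller-better)
      resp = np.array([[12.1, 2.8, 0.41], [14.3, 2.5, 0.38], [11.7, 3.1, 0.45],
                       [15.2, 2.2, 0.33], [13.8, 2.6, 0.40], [12.9, 2.9, 0.44],
                       [16.0, 2.0, 0.30], [14.7, 2.4, 0.36], [13.2, 2.7, 0.42]])

      norm = np.empty_like(resp)
      norm[:, 0] = (resp[:, 0] - resp[:, 0].min()) / np.ptp(resp[:, 0])   # larger-better
      for j in (1, 2):                                                    # smaller-better
          norm[:, j] = (resp[:, j].max() - resp[:, j]) / np.ptp(resp[:, j])

      delta = 1.0 - norm                      # deviation from the ideal sequence
      zeta = 0.5                              # distinguishing coefficient
      grc = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
      grade = grc.mean(axis=1)                # grey relational grade per experiment
      print("best experiment:", grade.argmax() + 1)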

  10. Investigating driver injury severity patterns in rollover crashes using support vector machine models.

    PubMed

    Chen, Cong; Zhang, Guohui; Qian, Zhen; Tarefder, Rafiqul A; Tian, Zong

    2016-05-01

    Rollover crash is one of the major types of traffic crashes that induce fatal injuries. It is important to investigate the factors that affect rollover crashes and their influence on driver injury severity outcomes. This study employs support vector machine (SVM) models to investigate driver injury severity patterns in rollover crashes based on two-year crash data gathered in New Mexico. The impacts of various explanatory variables are examined in terms of crash and environmental information, vehicle features, and driver demographics and behavior characteristics. A classification and regression tree (CART) model is utilized to identify significant variables and SVM models with polynomial and Gaussian radius basis function (RBF) kernels are used for model performance evaluation. It is shown that the SVM models produce reasonable prediction performance and the polynomial kernel outperforms the Gaussian RBF kernel. Variable impact analysis reveals that factors including comfortable driving environment conditions, driver alcohol or drug involvement, seatbelt use, number of travel lanes, driver demographic features, maximum vehicle damages in crashes, crash time, and crash location are significantly associated with driver incapacitating injuries and fatalities. These findings provide insights for better understanding rollover crash causes and the impacts of various explanatory factors on driver injury severity patterns. PMID:26938584

  11. A Reordering Model Using a Source-Side Parse-Tree for Statistical Machine Translation

    NASA Astrophysics Data System (ADS)

    Hashimoto, Kei; Yamamoto, Hirofumi; Okuma, Hideo; Sumita, Eiichiro; Tokuda, Keiichi

    This paper presents a reordering model using a source-side parse-tree for phrase-based statistical machine translation. The proposed model is an extension of IST-ITG (imposing source tree on inversion transduction grammar) constraints. In the proposed method, the target-side word order is obtained by rotating nodes of the source-side parse-tree. We modeled the node rotation, monotone or swap, using word alignments based on a training parallel corpus and source-side parse-trees. The model efficiently suppresses erroneous target word orderings, especially global orderings. Furthermore, the proposed method conducts a probabilistic evaluation of target word reorderings. In English-to-Japanese and English-to-Chinese translation experiments, the proposed method resulted in a 0.49-point improvement (29.31 to 29.80) and a 0.33-point improvement (18.60 to 18.93) in word BLEU-4 compared with IST-ITG constraints, respectively. This indicates the validity of the proposed reordering model.

  12. A computational visual saliency model based on statistics and machine learning.

    PubMed

    Lin, Ru-Je; Lin, Wei-Song

    2014-01-01

    Identifying the type of stimuli that attracts human visual attention has been an appealing topic for scientists for many years. In particular, marking the salient regions in images is useful for both psychologists and many computer vision applications. In this paper, we propose a computational approach for producing saliency maps using statistics and machine learning methods. Based on four assumptions, three properties (Feature-Prior, Position-Prior, and Feature-Distribution) can be derived and combined by a simple intersection operation to obtain a saliency map. These properties are implemented by a similarity computation, support vector regression (SVR) technique, statistical analysis of training samples, and information theory using low-level features. This technique is able to learn the preferences of human visual behavior while simultaneously considering feature uniqueness. Experimental results show that our approach performs better in predicting human visual attention regions than 12 other models in two test databases. PMID:25084782

  13. Data on Support Vector Machines (SVM) model to forecast photovoltaic power.

    PubMed

    Malvoni, M; De Giorgi, M G; Congedo, P M

    2016-12-01

    The data concern the photovoltaic (PV) power forecasted by a hybrid model that considers weather variations and applies a technique to reduce the input data size, as presented in the paper entitled "Photovoltaic forecast based on hybrid pca-lssvm using dimensionality reducted data" (M. Malvoni, M.G. De Giorgi, P.M. Congedo, 2015) [1]. The quadratic Renyi entropy criteria together with principal component analysis (PCA) are applied to the Least Squares Support Vector Machines (LS-SVM) to predict the PV power in the day-ahead time frame. The data shared here represent the results of the proposed approach. Hourly PV power predictions for 1, 3, 6, 12 and 24 hours ahead, and for different data reduction sizes, are provided in the Supplementary material. PMID:27622206

  14. Prediction of CO concentrations based on a hybrid Partial Least Square and Support Vector Machine model

    NASA Astrophysics Data System (ADS)

    Yeganeh, B.; Motlagh, M. Shafie Pour; Rashidi, Y.; Kamalan, H.

    2012-08-01

    Due to the health impacts of exposure to air pollutants in urban areas, monitoring and forecasting of air quality parameters have become an important topic in atmospheric and environmental research today. Knowledge of the dynamics and complexity of air pollutant behavior has made artificial intelligence models a useful tool for more accurate pollutant concentration prediction. This paper focuses on an innovative method of daily air pollution prediction using a combination of a Support Vector Machine (SVM) as the predictor and Partial Least Squares (PLS) as a data selection tool, based on measured CO concentrations. The CO concentrations of the Rey monitoring station in the south of Tehran, from Jan. 2007 to Feb. 2011, have been used to test the effectiveness of this method. The hourly CO concentrations have been predicted using the SVM and the hybrid PLS-SVM models, and daily CO concentrations have been predicted based on the same four years of measured data. Results demonstrate that both models have good prediction ability, but the hybrid PLS-SVM is more accurate. In the analysis presented in this paper, statistical estimators including the relative mean error, the root mean squared error and the mean absolute relative error have been employed to compare the performances of the models. The errors decrease after size reduction, and the coefficients of determination increase from 56-81% for the SVM model to 65-85% for the hybrid PLS-SVM model. It was also found that the hybrid PLS-SVM model required less computational time than the SVM model, as expected, supporting the more accurate and faster prediction ability of the hybrid PLS-SVM model.
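    A compact sketch of the PLS-then-SVM hybrid with scikit-learn follows, on synthetic stand-ins for the hourly predictor records; component counts and data are placeholders.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.svm import SVR

      rng = np.random.default_rng(11)
      X = rng.normal(size=(1500, 20))                 # hourly predictor records (placeholder)
      co = X[:, :4].sum(axis=1) + 0.2 * rng.normal(size=1500)  # CO concentration (placeholder)

      pls = PLSRegression(n_components=5).fit(X, co)  # data selection / size reduction
      scores = pls.transform(X)                       # reduced inputs for the SVM
      svr = SVR(kernel="rbf").fit(scores, co)
      print("prediction:", svr.predict(pls.transform(X[:3])))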

  15. Development of a model of machine hand eye coordination and program specifications for a topological machine vision system

    NASA Technical Reports Server (NTRS)

    1972-01-01

    A unified approach to computer vision and manipulation is developed which is called choreographic vision. In the model, objects to be viewed by a projected robot in the Viking missions to Mars are seen as objects to be manipulated within choreographic contexts controlled by a multimoded remote, supervisory control system on Earth. A new theory of context relations is introduced as a basis for choreographic programming languages. A topological vision model is developed for recognizing objects by shape and contour. This model is integrated with a projected vision system consisting of a multiaperture image dissector TV camera and a ranging laser system. System program specifications integrate eye-hand coordination and topological vision functions and an aerospace multiprocessor implementation is described.

  16. Copper Conductivity Model Development and Validation Using Flyer Plate Experiments on the Z-machine

    NASA Astrophysics Data System (ADS)

    Riford, L.; Lemke, R. W.; Cochrane, K.

    2015-11-01

    Magnetically accelerated flyer plate experiments done on Sandia's Z-machine provide insight into a multitude of materials problems at high energies and densities including conductivity model development and validation. In an experiment with ten Cu flyer plates of thicknesses 500-1000 μm, VISAR measurements exhibit a characteristic jump in the velocity correlated with magnetic field burn-through and the expansion of melted material at the free surface. The experiment is modeled using Sandia's shock and multiphysics MHD code ALEGRA. Simulated free surface velocities are within 1% of the measured data early in time, but divergence occurs at the feature, where the simulation indicates a slower burn through time. The cause was found to be in the Cu conductivity model's compressed regime. The model was improved by lowering the conductivity in the region 12.5-16 g/cc and 350-16000 K with a novel parameter based optimization method using the velocity feature as a figure of merit. Sandia National Laboratories is a multiprogram laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U. S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  17. Study on the machined depth when nanoscratching on 6H-SiC using Berkovich indenter: Modelling and experimental study

    NASA Astrophysics Data System (ADS)

    Zhang, Feihu; Meng, Binbin; Geng, Yanquan; Zhang, Yong

    2016-04-01

    In order to investigate the deformation characteristics and material removal mechanism of single crystal silicon carbide at the nanoscale, nanoscratching tests were conducted on the surface of 6H-SiC (0 0 0 1) using a Berkovich indenter. In this paper, a theoretical model for nanoscratching with a Berkovich indenter is proposed to reveal the relationship between the applied normal load and the machined depth. The influences of the elastic recovery and the stress distribution of the material are considered in the developed theoretical model. Experimental and theoretical machined depths are compared when scratching in different directions. Results show that the elastic recovery of the material, the geometry of the tip, and the stress distribution at the tip-sample interface have large influences on the machined depth and should be considered for a hard, brittle material such as 6H-SiC.

  18. Working with simple machines

    NASA Astrophysics Data System (ADS)

    Norbury, John W.

    2006-11-01

    A set of examples is provided that illustrate the use of work as applied to simple machines. The ramp, pulley, lever and hydraulic press are common experiences in the life of a student, and their theoretical analysis therefore makes the abstract concept of work more real. The mechanical advantage of each of these systems is also discussed so that students can evaluate their usefulness as machines.
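    For instance, the work principle for a frictionless lever can be checked numerically; the numbers below are chosen for illustration.

      effort_arm = 1.2          # m, fulcrum to applied force
      load_arm = 0.3            # m, fulcrum to load
      load = 600.0              # N

      ma = effort_arm / load_arm            # ideal mechanical advantage = 4.0
      effort = load / ma                    # 150 N needed to balance the load
      load_distance = 0.1                   # m, how far the load rises
      effort_distance = load_distance * ma  # 0.4 m, how far the effort moves
      print(effort * effort_distance, load * load_distance)  # both 60.0 J

    The machine reduces the required force by a factor of four, but the work in equals the work out, which is the point of the examples.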

  19. Filtered selection coupled with support vector machines generate a functionally relevant prediction model for colorectal cancer

    PubMed Central

    Gabere, Musa Nur; Hussein, Mohamed Aly; Aziz, Mohammad Azhar

    2016-01-01

    Purpose There has been considerable interest in using whole-genome expression profiles for the classification of colorectal cancer (CRC). The selection of important features is a crucial step before training a classifier. Methods In this study, we built a model that uses a support vector machine (SVM) to classify cancer and normal samples using Affymetrix exon microarray data obtained from 90 samples of 48 patients diagnosed with CRC. From the 22,011 genes, we selected the 20, 30, 50, 100, 200, 300, and 500 genes most relevant to CRC using the minimum-redundancy–maximum-relevance (mRMR) technique. With these gene sets, an SVM model was designed using four different kernel types (linear, polynomial, radial basis function [RBF], and sigmoid). Results The best model, which used 30 genes and the RBF kernel, outperformed other combinations; it had an accuracy of 84% for both tenfold and leave-one-out cross-validations in discriminating the cancer samples from the normal samples. With this 30-gene set from mRMR, six classifiers were trained using random forest (RF), Bayes net (BN), multilayer perceptron (MLP), naïve Bayes (NB), reduced error pruning tree (REPT), and SVM. Two hybrids, mRMR + SVM and mRMR + BN, were the best models when tested on other datasets, achieving prediction accuracies of 95.27% and 91.99%, respectively, compared to the other mRMR hybrid models (mRMR + RF, mRMR + NB, mRMR + REPT, and mRMR + MLP). Ingenuity pathway analysis was used to analyze the functions of the 30 genes selected for this model and their potential association with CRC: CDH3, CEACAM7, CLDN1, IL8, IL6R, MMP1, MMP7, and TGFB1 were predicted to be CRC biomarkers. Conclusion This model could be used to further develop a diagnostic tool for predicting CRC based on gene expression data from patient samples. PMID:27330311
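    A pipeline of this shape is easy to sketch with scikit-learn; note that scikit-learn has no mRMR, so a plain mutual-information filter (relevance only, no redundancy term) stands in for it here, and the expression data are synthetic.

      import numpy as np
      from sklearn.feature_selection import SelectKBest, mutual_info_classif
      from sklearn.model_selection import cross_val_score
      from sklearn.pipeline import make_pipeline
      from sklearn.svm import SVC

      rng = np.random.default_rng(9)
      X = rng.normal(size=(90, 2000))                 # 90 samples x expression probes
      y = rng.integers(0, 2, size=90)                 # cancer vs. normal (synthetic)

      # select 30 genes by mutual information, then train an RBF-kernel SVM
      clf = make_pipeline(SelectKBest(mutual_info_classif, k=30), SVC(kernel="rbf"))
      scores = cross_val_score(clf, X, y, cv=10)      # ten-fold cross-validation
      print(f"mean CV accuracy: {scores.mean():.2f}")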

  20. Machine learning and hurdle models for improving regional predictions of stream water acid neutralizing capacity

    NASA Astrophysics Data System (ADS)

    Povak, Nicholas A.; Hessburg, Paul F.; Reynolds, Keith M.; Sullivan, Timothy J.; McDonnell, Todd C.; Salter, R. Brion

    2013-06-01

    In many industrialized regions of the world, atmospherically deposited sulfur derived from industrial, nonpoint air pollution sources reduces stream water quality and results in acidic conditions that threaten aquatic resources. Accurate maps of predicted stream water acidity are an essential aid to managers who must identify acid-sensitive streams, potentially affected biota, and create resource protection strategies. In this study, we developed correlative models to predict the acid neutralizing capacity (ANC) of streams across the southern Appalachian Mountain region, USA. Models were developed using stream water chemistry data from 933 sampled locations and continuous maps of pertinent environmental and climatic predictors. Environmental predictors were averaged across the upslope contributing area for each sampled stream location and submitted to both statistical and machine-learning regression models. Predictor variables represented key aspects of the contributing geology, soils, climate, topography, and acidic deposition. To reduce model error rates, we employed hurdle modeling to screen out well-buffered sites and predict continuous ANC for the remainder of the stream network. Models predicted acid-sensitive streams in forested watersheds with small contributing areas, siliceous lithologies, cool and moist environments, low clay content soils, and moderate or higher dry sulfur deposition. Our results confirmed findings from other studies and further identified several influential climatic variables and variable interactions. Model predictions indicated that one quarter of the total stream network was sensitive to additional sulfur inputs (i.e., ANC < 100 µeq L^-1), while <10% displayed much lower ANC (<50 µeq L^-1). These methods may be readily adapted in other regions to assess stream water quality and potential biotic sensitivity to acidic inputs.
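
    The hurdle idea can be sketched as a two-stage model: a classifier first screens out well-buffered sites, and a regressor predicts continuous ANC only for the remainder. The threshold, features, and data below are assumptions for illustration, not the study's.

        # Minimal two-stage "hurdle" sketch: screen well-buffered sites
        # (ANC >= 100 ueq/L assumed here), then regress continuous ANC for
        # sites that clear the hurdle. Data are synthetic.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

        rng = np.random.default_rng(1)
        X = rng.normal(size=(933, 8))      # watershed predictors (geology, climate, ...)
        anc = 200 + 80 * X[:, 0] + 30 * rng.normal(size=933)   # synthetic ANC, ueq/L

        THRESHOLD = 100.0
        buffered = (anc >= THRESHOLD).astype(int)

        # Stage 1: hurdle classifier flags well-buffered sites.
        clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, buffered)

        # Stage 2: continuous ANC model trained only on acid-sensitive sites.
        mask = anc < THRESHOLD
        reg = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[mask], anc[mask])

        def predict_anc(X_new):
            """Continuous ANC where the hurdle is cleared; NaN marks sites
            screened out as well-buffered."""
            out = np.full(len(X_new), np.nan)
            sensitive = ~clf.predict(X_new).astype(bool)
            out[sensitive] = reg.predict(X_new[sensitive])
            return out

        print(predict_anc(X[:5]))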

  1. EBS Radionuclide Transport Abstraction

    SciTech Connect

    J.D. Schreiber

    2005-08-25

    The purpose of this report is to develop and analyze the engineered barrier system (EBS) radionuclide transport abstraction model, consistent with Level I and Level II model validation, as identified in ''Technical Work Plan for: Near-Field Environment and Transport: Engineered Barrier System: Radionuclide Transport Abstraction Model Report Integration'' (BSC 2005 [DIRS 173617]). The EBS radionuclide transport abstraction (or EBS RT Abstraction) is the conceptual model used in the total system performance assessment for the license application (TSPA-LA) to determine the rate of radionuclide releases from the EBS to the unsaturated zone (UZ). The EBS RT Abstraction conceptual model consists of two main components: a flow model and a transport model. Both models are developed mathematically from first principles in order to show explicitly what assumptions, simplifications, and approximations are incorporated into the models used in the TSPA-LA. The flow model defines the pathways for water flow in the EBS and specifies how the flow rate is computed in each pathway. Input to this model includes the seepage flux into a drift. The seepage flux is potentially split by the drip shield, with some (or all) of the flux being diverted by the drip shield and some passing through breaches in the drip shield that might result from corrosion or seismic damage. The flux through drip shield breaches is potentially split by the waste package, with some (or all) of the flux being diverted by the waste package and some passing through waste package breaches that might result from corrosion or seismic damage. Neither the drip shield nor the waste package survives an igneous intrusion, so the flux splitting submodel is not used in the igneous scenario class. The flow model is validated in an independent model validation technical review. The drip shield and waste package flux splitting algorithms are developed and validated using experimental data. The transport model considers

  2. Modeling workflow to design machine translation applications for public health practice

    PubMed Central

    Turner, Anne M.; Brownstein, Megumu K.; Cole, Kate; Karasz, Hilary; Kirchhoff, Katrin

    2014-01-01

    Objective Provide a detailed understanding of the information workflow processes related to translating health promotion materials for limited English proficiency individuals in order to inform the design of context-driven machine translation (MT) tools for public health (PH). Materials and Methods We applied a cognitive work analysis framework to investigate the translation information workflow processes of two large health departments in Washington State. Researchers conducted interviews, performed a task analysis, and validated results with PH professionals to model translation workflow and identify functional requirements for a translation system for PH. Results The study resulted in a detailed description of work related to translation of PH materials, an information workflow diagram, and a description of attitudes towards MT technology. We identified a number of themes that hold design implications for incorporating MT in PH translation practice. A PH translation tool prototype was designed based on these findings. Discussion This study underscores the importance of understanding the work context and information workflow for which systems will be designed. Based on themes and translation information workflow processes, we identified key design guidelines for incorporating MT into PH translation work. Primary amongst these is that MT should be followed by human review for translations to be of high quality and for the technology to be adopted into practice. Conclusion The time and costs of creating multilingual health promotion materials are barriers to translation. PH personnel were interested in MT's potential to improve access to low-cost translated PH materials, but expressed concerns about ensuring quality. We outline design considerations and a potential machine translation tool to best fit MT systems into PH practice. PMID:25445922

  3. Modeling and Analysis of Reservation Frame Slotted-ALOHA in Wireless Machine-to-Machine Area Networks for Data Collection

    PubMed Central

    Vázquez-Gallego, Francisco; Alonso, Luis; Alonso-Zarate, Jesus

    2015-01-01

    Reservation frame slotted-ALOHA (RFSA) was proposed in the past to manage the access to the wireless channel when devices generate long messages fragmented into small packets. In this paper, we consider an M2M area network composed of end-devices that periodically respond to the requests from a gateway with the transmission of fragmented messages. The idle network is suddenly set into saturation, having all end-devices attempting to get access to the channel simultaneously. This has been referred to as delta traffic. While previous works analyze the throughput of RFSA in steady-state conditions, assuming that traffic is generated following random distributions, the performance of RFSA under delta traffic has never received attention. In this paper, we propose a theoretical model to calculate the average delay and energy consumption required to resolve the contention under delta traffic using RFSA. We have carried out computer-based simulations to validate the accuracy of the theoretical model and to compare the performance for RFSA and FSA. Results show that there is an optimal frame length that minimizes delay and energy consumption and which depends on the number of end-devices. In addition, it is shown that RFSA reduces the energy consumed per end-device by more than 50% with respect to FSA under delta traffic. PMID:25671510
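
    The delta-traffic contention analyzed above lends itself to a small Monte-Carlo illustration. The sketch below simulates plain frame slotted-ALOHA resolution when all devices contend at once (in RFSA, each success would additionally reserve slots for the remaining fragments of a message); all parameters are illustrative.

        # Frames needed to resolve contention when n_devices all wake at once
        # (delta traffic): each frame, every unresolved device picks one of
        # n_slots at random; a device alone in its slot succeeds and leaves.
        import random

        def frames_to_resolve(n_devices, n_slots, rng):
            pending, frames = n_devices, 0
            while pending > 0:
                frames += 1
                choices = [rng.randrange(n_slots) for _ in range(pending)]
                counts = [0] * n_slots
                for c in choices:
                    counts[c] += 1
                pending -= sum(1 for c in choices if counts[c] == 1)
            return frames

        # Sweeping the frame length exposes the optimum the paper reports:
        # short frames collide, long frames idle.
        for m in (8, 16, 32, 64):
            avg = sum(frames_to_resolve(100, m, random.Random(s))
                      for s in range(200)) / 200
            print(f"{m} slots/frame -> {avg:.1f} frames")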

  4. Modeling and analysis of reservation frame slotted-ALOHA in wireless machine-to-machine area networks for data collection.

    PubMed

    Vázquez-Gallego, Francisco; Alonso, Luis; Alonso-Zarate, Jesus

    2015-01-01

    Reservation frame slotted-ALOHA (RFSA) was proposed in the past to manage the access to the wireless channel when devices generate long messages fragmented into small packets. In this paper, we consider an M2M area network composed of end-devices that periodically respond to the requests from a gateway with the transmission of fragmented messages. The idle network is suddenly set into saturation, having all end-devices attempting to get access to the channel simultaneously. This has been referred to as delta traffic. While previous works analyze the throughput of RFSA in steady-state conditions, assuming that traffic is generated following random distributions, the performance of RFSA under delta traffic has never received attention. In this paper, we propose a theoretical model to calculate the average delay and energy consumption required to resolve the contention under delta traffic using RFSA. We have carried out computer-based simulations to validate the accuracy of the theoretical model and to compare the performance for RFSA and FSA. Results show that there is an optimal frame length that minimizes delay and energy consumption and which depends on the number of end-devices. In addition, it is shown that RFSA reduces the energy consumed per end-device by more than 50% with respect to FSA under delta traffic. PMID:25671510

  5. Non-parametric temporal modeling of the hemodynamic response function via a liquid state machine.

    PubMed

    Avesani, Paolo; Hazan, Hananel; Koilis, Ester; Manevitz, Larry M; Sona, Diego

    2015-10-01

    Standard methods for the analysis of functional MRI data strongly rely on prior implicit and explicit hypotheses made to simplify the analysis. In this work the attention is focused on two such commonly accepted hypotheses: (i) the hemodynamic response function (HRF) to be searched in the BOLD signal can be described by a specific parametric model e.g., double-gamma; (ii) the effect of stimuli on the signal is taken to be linearly additive. While these assumptions have been empirically proven to generate high sensitivity for statistical methods, they also limit the identification of relevant voxels to what is already postulated in the signal, thus not allowing the discovery of unknown correlates in the data due to the presence of unexpected hemodynamics. This paper tries to overcome these limitations by proposing a method wherein the HRF is learned directly from data rather than induced from its basic form assumed in advance. This approach produces a set of voxel-wise models of HRF and, as a result, relevant voxels are filterable according to the accuracy of their prediction in a machine learning framework. This approach is instantiated using a temporal architecture based on the paradigm of Reservoir Computing wherein a Liquid State Machine is combined with a decoding Feed-Forward Neural Network. This splits the modeling into two parts: first a representation of the complex temporal reactivity of the hemodynamic response is determined by a universal global "reservoir" which is essentially temporal; second an interpretation of the encoded representation is determined by a standard feed-forward neural network, which is trained by the data. Thus the reservoir models the temporal state of information during and following temporal stimuli in a feed-back system, while the neural network "translates" this data to fit the specific HRF response as given, e.g. by BOLD signal measurements in fMRI. An empirical analysis on synthetic datasets shows that the learning process can
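
    A drastically simplified sketch of the reservoir/readout split follows; it substitutes an echo-state-style rate reservoir for the paper's spiking Liquid State Machine and a ridge readout for its feed-forward network, and runs on synthetic signals.

        # Fixed random recurrent "reservoir" encodes stimulus history; only a
        # linear readout is fit to a BOLD-like target. All signals synthetic.
        import numpy as np

        rng = np.random.default_rng(0)
        T, n_res = 500, 100
        stimulus = (rng.random(T) < 0.05).astype(float)     # sparse event train
        # Synthetic "hemodynamic" target: stimulus convolved with a slow kernel.
        k = np.arange(30) / 8.0
        bold = np.convolve(stimulus, k * np.exp(-k))[:T] + 0.01 * rng.normal(size=T)

        W = rng.normal(size=(n_res, n_res))
        W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))     # spectral radius < 1
        w_in = rng.normal(size=n_res)

        states = np.zeros((T, n_res))
        x = np.zeros(n_res)
        for t in range(T):
            x = np.tanh(W @ x + w_in * stimulus[t])         # reservoir update
            states[t] = x

        # Ridge-regression readout "translates" reservoir states into the response.
        lam = 1e-2
        w_out = np.linalg.solve(states.T @ states + lam * np.eye(n_res),
                                states.T @ bold)
        print("train fit corr:", np.corrcoef(states @ w_out, bold)[0, 1])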

  6. Mathematical Modeling and Simulation of the Pressing Section of a Paper Machine Including Dynamic Capillary Effect

    NASA Astrophysics Data System (ADS)

    Printsypar, G.; Iliev, O.; Rief, S.

    2011-12-01

    Paper production is a challenging problem which attracts the attention of many scientists. The process of interest here takes place in the pressing section of a paper machine. The paper layer is dried by pressing it against fabrics, i.e. press felts. The paper-felt sandwich is transported through the press nips at high speed (for more details see [3]). Since the natural drainage of water in the felts takes much longer than the drying in the pressing section, we include the dynamic capillary effect in the consideration. The dynamic capillary pressure-saturation relation proposed by Hassanizadeh and Gray (see [2]) is adopted for the pressing process. Another issue taken into account while modeling the pressing section is the appearance of fully saturated regions. We consider two flow regimes: the one-phase water flow and the two-phase air-water flow. This leads to a free boundary problem. We also account for the complexity of the porous structure of the paper-felt sandwich. Apart from the two flow regimes, the computational domain is divided by layers into nonoverlapping subdomains. Then, the system of equations describing transport processes in the pressing section is stated taking all these features into account. The presented model is discretized by the finite volume method. We carry out numerical experiments for different configurations of the pressing section (roll press, shoe press) and for parameters which are typical for the paper-felt sandwich during the paper production process. The experiments show that the dynamic capillary effect has a significant influence on the distribution of pressure even for small values of the material coefficient (see Fig. 1). The obtained results are in agreement with the laboratory experiment performed in [1], which states that the distribution of the pressure is not symmetric with the maximum value occurring in front of the center of the pressing nip and the minimum value less than entry
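
    For reference, the dynamic capillary pressure-saturation relation of Hassanizadeh and Gray referred to above has the general form sketched below; the symbols are our assumptions, not notation taken from the paper.

        % Dynamic capillary pressure: the phase-pressure difference deviates from
        % the equilibrium curve in proportion to the rate of saturation change.
        \[
          p_a - p_w \;=\; p_c^{\mathrm{eq}}(S_w) \;-\; \tau\,\frac{\partial S_w}{\partial t}
        \]
        % p_a, p_w : air and water pressures;  S_w : water saturation;
        % p_c^eq   : equilibrium capillary pressure curve;  tau : damping coefficient.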

  7. Prediction of recombinant protein overexpression in Escherichia coli using a machine learning based model (RPOLP).

    PubMed

    Habibi, Narjeskhatoon; Norouzi, Alireza; Mohd Hashim, Siti Z; Shamsir, Mohd Shahir; Samian, Razip

    2015-11-01

    Recombinant protein overexpression, an important biotechnological process, is governed by complex biological rules that are mostly unknown; it is therefore in need of an intelligent algorithm to avoid resource-intensive, lab-based trial-and-error experiments for determining the expression level of a recombinant protein. The purpose of this study is to propose a predictive model to estimate the level of recombinant protein overexpression, for the first time in the literature, using a machine learning approach based on the sequence, expression vector, and expression host. The expression host was confined to Escherichia coli, which is the most popular bacterial host for overexpressing recombinant proteins. To provide a handle on the problem, the overexpression level was categorized as low, medium and high. A set of features likely to affect the overexpression level was generated based on known facts (e.g. gene length) and knowledge gathered from related literature. Then, a representative subset of the features generated in the previous step was determined using feature selection techniques. Finally, a predictive model was developed using a random forest classifier, which was able to adequately classify the multi-class, imbalanced small dataset constructed. The results showed that the predictive model provided a promising accuracy of 80% on average in estimating the overexpression level of a recombinant protein. PMID:26476414
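
    A minimal sketch of such a classifier is shown below, with invented features and labels standing in for the sequence- and vector-derived features of the study; class weighting is one simple way to handle the imbalanced classes mentioned in the abstract.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(2)
        X = rng.normal(size=(120, 15))   # e.g. gene length, GC content, vector type, ...
        y = rng.choice(["low", "medium", "high"], size=120, p=[0.5, 0.3, 0.2])

        # Random forest with balanced class weights for the imbalanced labels.
        clf = RandomForestClassifier(n_estimators=300, class_weight="balanced",
                                     random_state=0)
        print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())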

  8. Geometric dimension model of virtual astronaut body for ergonomic analysis of man-machine space system

    NASA Astrophysics Data System (ADS)

    Qianxiang, Zhou

    2012-07-01

    It is very important to clarify the geometric characteristics of human body segments and to constitute an analysis model for ergonomic design and the application of ergonomic virtual humans. The typical anthropometric data of 1122 Chinese men aged 20-35 years were collected using a three-dimensional laser scanner for the human body. According to the correlation between different parameters, curve fitting was performed between seven trunk parameters and ten body parameters with the SPSS 16.0 software. It can be concluded that hip circumference and shoulder breadth are the most important parameters in the models, and that these two parameters have high correlation with the other parameters of the human body. By comparison with the conventional regression curves, the present regression equations with the seven trunk parameters forecast the geometric dimensions of the head, neck, height and the four limbs more accurately. Therefore, it is greatly valuable for ergonomic design and analysis of man-machine systems. This result will also be very useful for astronaut body model analysis and application.

  9. A Hybrid Short-Term Traffic Flow Prediction Model Based on Singular Spectrum Analysis and Kernel Extreme Learning Machine

    PubMed Central

    Lin, Ciyun; Yang, Zhaosheng; Bing, Qichun; Zhou, Xiyang

    2016-01-01

    Short-term traffic flow prediction is one of the most important issues in the field of intelligent transport systems (ITS). Because of the uncertainty and nonlinearity, short-term traffic flow prediction is a challenging task. In order to improve the accuracy of short-term traffic flow prediction, a hybrid model (SSA-KELM) is proposed based on singular spectrum analysis (SSA) and kernel extreme learning machine (KELM). SSA is used to filter out the noise of the traffic flow time series. Then, the filtered traffic flow data is used to train the KELM model, the optimal input form of the proposed model is determined by phase space reconstruction, and the parameters of the model are optimized by the gravitational search algorithm (GSA). Finally, case validation is carried out using the measured data of an expressway in Xiamen, China. The SSA-KELM model is compared with several well-known prediction models, including support vector machine, extreme learning machine, and the single KELM model. The experimental results demonstrate that the performance of the proposed model is superior to that of the comparison models. Apart from accuracy improvement, the proposed model is more robust. PMID:27551829
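
    The SSA-plus-kernel-model pipeline might be sketched as follows; kernel ridge regression stands in for KELM, the GSA parameter search and phase-space reconstruction are replaced by fixed choices, and the traffic series is synthetic.

        import numpy as np
        from sklearn.kernel_ridge import KernelRidge

        rng = np.random.default_rng(3)
        t = np.arange(600)
        flow = 100 + 30 * np.sin(2 * np.pi * t / 96) + 8 * rng.normal(size=t.size)

        def ssa_filter(x, window=48, rank=4):
            """SSA denoising: keep the leading `rank` components of the
            trajectory (Hankel) matrix, then diagonal-average back."""
            K = len(x) - window + 1
            H = np.column_stack([x[i:i + window] for i in range(K)])
            U, s, Vt = np.linalg.svd(H, full_matrices=False)
            Hr = (U[:, :rank] * s[:rank]) @ Vt[:rank]
            out = np.zeros(len(x)); cnt = np.zeros(len(x))
            for i in range(window):
                for j in range(K):
                    out[i + j] += Hr[i, j]; cnt[i + j] += 1
            return out / cnt

        clean = ssa_filter(flow)
        lags = 6   # input form; the paper selects this via phase-space reconstruction
        X = np.column_stack([clean[i:len(clean) - lags + i] for i in range(lags)])
        y = clean[lags:]
        model = KernelRidge(kernel="rbf", alpha=1.0, gamma=0.1).fit(X[:-50], y[:-50])
        print("test RMSE:", np.sqrt(np.mean((model.predict(X[-50:]) - y[-50:]) ** 2)))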

  10. A Hybrid Short-Term Traffic Flow Prediction Model Based on Singular Spectrum Analysis and Kernel Extreme Learning Machine.

    PubMed

    Shang, Qiang; Lin, Ciyun; Yang, Zhaosheng; Bing, Qichun; Zhou, Xiyang

    2016-01-01

    Short-term traffic flow prediction is one of the most important issues in the field of intelligent transport systems (ITS). Because of the uncertainty and nonlinearity, short-term traffic flow prediction is a challenging task. In order to improve the accuracy of short-term traffic flow prediction, a hybrid model (SSA-KELM) is proposed based on singular spectrum analysis (SSA) and kernel extreme learning machine (KELM). SSA is used to filter out the noise of the traffic flow time series. Then, the filtered traffic flow data is used to train the KELM model, the optimal input form of the proposed model is determined by phase space reconstruction, and the parameters of the model are optimized by the gravitational search algorithm (GSA). Finally, case validation is carried out using the measured data of an expressway in Xiamen, China. The SSA-KELM model is compared with several well-known prediction models, including support vector machine, extreme learning machine, and the single KELM model. The experimental results demonstrate that the performance of the proposed model is superior to that of the comparison models. Apart from accuracy improvement, the proposed model is more robust. PMID:27551829

  11. Machine Shop Grinding Machines.

    ERIC Educational Resources Information Center

    Dunn, James

    This curriculum manual is one in a series of machine shop curriculum manuals intended for use in full-time secondary and postsecondary classes, as well as part-time adult classes. The curriculum can also be adapted to open-entry, open-exit programs. Its purpose is to equip students with basic knowledge and skills that will enable them to enter the…

  12. Curriculum Assessment Using Artificial Neural Network and Support Vector Machine Modeling Approaches: A Case Study. IR Applications. Volume 29

    ERIC Educational Resources Information Center

    Chen, Chau-Kuang

    2010-01-01

    Artificial Neural Network (ANN) and Support Vector Machine (SVM) approaches have been on the cutting edge of science and technology for pattern recognition and data classification. In the ANN model, classification accuracy can be achieved by using the feed-forward of inputs, back-propagation of errors, and the adjustment of connection weights. In…

  13. Nonlinear Generator Control Based on Equilibrium Point Analysis for Standard One-Machine Infinite-Bus System Model

    NASA Astrophysics Data System (ADS)

    Zhang, Jian; Fujimoto, Koji; Kawamoto, Shunji

    The aim of this letter is to show that the unstable equilibrium point of the Japanese standard one-machine infinite-bus system model is eliminated by adding a simple nonlinear complementary control input to the AVR, and that the critical clearing time of the system can then be extended further, in comparison with the PSS, by introducing the proposed nonlinear generator control.
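
    For readers unfamiliar with this model class, a minimal swing-equation sketch of a one-machine infinite-bus system with a critical-clearing-time scan is given below; the parameter values are illustrative, not those of the Japanese standard model, and no AVR/PSS dynamics are included.

        # Swing equation: M*dd(delta) = Pm - Pmax*sin(delta) - D*d(delta)/dt,
        # with Pmax reduced during a fault and restored at clearing time t_clear.
        import numpy as np

        M, D, Pm = 0.1, 0.05, 0.8
        Pmax_pre, Pmax_fault = 1.2, 0.3

        def simulate(t_clear, t_end=5.0, dt=1e-3):
            delta = np.arcsin(Pm / Pmax_pre)       # pre-fault equilibrium angle
            omega = 0.0
            for k in range(int(t_end / dt)):
                Pmax = Pmax_fault if k * dt < t_clear else Pmax_pre
                domega = (Pm - Pmax * np.sin(delta) - D * omega) / M
                delta += omega * dt
                omega += domega * dt
                if abs(delta) > np.pi:             # crude loss-of-synchronism test
                    return False
            return True

        # Scan clearing times to bracket the critical clearing time.
        for tc in np.arange(0.05, 0.6, 0.05):
            print(f"t_clear={tc:.2f}s stable={simulate(tc)}")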

  14. Support Vector Machine Model for Automatic Detection and Classification of Seismic Events

    NASA Astrophysics Data System (ADS)

    Barros, Vesna; Barros, Lucas

    2016-04-01

    The automated processing of multiple seismic signals to detect, localize and classify seismic events is a central tool in both natural hazards monitoring and nuclear treaty verification. However, false detections and missed detections caused by station noise and incorrect classification of arrivals are still an issue, and the events are often unclassified or poorly classified. Thus, machine learning techniques can be used in automatic processing for classifying the huge database of seismic recordings and provide more confidence in the final output. Applied in the context of the International Monitoring System (IMS) - a global sensor network developed for the Comprehensive Nuclear-Test-Ban Treaty (CTBT) - we propose a fully automatic method for seismic event detection and classification based on a supervised pattern recognition technique called the Support Vector Machine (SVM). According to Kortström et al., 2015, the advantages of using SVM are its ability to handle a large number of features and its effectiveness in high-dimensional spaces. Our objective is to detect seismic events from one IMS seismic station located in an area of high seismicity and mining activity and classify them as earthquakes or quarry blasts. The aim is to create a flexible and easily adjustable SVM method that can be applied to different regions and datasets. Taking this a step further, accurate results for seismic stations could lead to a modification of the model and its parameters to make it applicable to other waveform technologies used to monitor nuclear explosions, such as infrasound and hydroacoustic waveforms. As authorized users, we have direct access to all IMS data and bulletins through a secure signatory account. A set of significant seismic waveforms containing different types of events (e.g. earthquakes, quarry blasts) and noise is being analysed to train the model and learn the typical pattern of the signal from these events. Moreover, comparing the performance of the support

  15. Piaget on Abstraction.

    ERIC Educational Resources Information Center

    Moessinger, Pierre; Poulin-Dubois, Diane

    1981-01-01

    Reviews and discusses Piaget's recent work on abstract reasoning. Piaget's distinction between empirical and reflective abstraction is presented; his hypotheses are considered to be metaphorical. (Author/DB)

  16. An Insight to the Modeling of 1 × 1 Rib Loop Formation Process on Circular Weft Knitting Machine using Computer

    NASA Astrophysics Data System (ADS)

    Ray, Sadhan Chandra

    2015-10-01

    The mechanics of single jersey loop formation is well reported in the literature. However, no model of the double jersey loop formation process is available in the accessible international literature. Therefore, it was planned to develop a computer model of the 1 × 1 rib loop formation process on a dial and cylinder machine so that the influence of various input variables on the final loop length, as well as on the profile of tension on the yarn inside the Knitting Zone (KZ), can be understood. The model provides an insight into the mechanics of the 1 × 1 rib loop formation system on a dial and cylinder machine. Besides, the degree of agreement between predicted and measured values of loop length and cam forces, as well as the theoretical analysis of the model, have justified the acceptability of the model.

  17. Extracorporeal machine perfusion of the pancreas: technical aspects and its clinical implications--a systematic review of experimental models.

    PubMed

    Kuan, Kean Guan; Wee, Mau Nam; Chung, Wen Yuan; Kumar, Rohan; Mees, Soeren Torge; Dennison, Ashley; Maddern, Guy; Trochsler, Markus

    2016-01-01

    Pancreas or pancreatic islet transplantation is an important treatment option for insulin-dependent diabetes and its complications. However, as the pancreas is particularly susceptible to ischaemia-reperfusion injury, the criteria for pancreas and islet donation are especially strict. With a chronic shortage of donors, one critical challenge is to maximise organ availability and expand the donor pool. To achieve that, continuous improvement in organ preservation is required, with the aims of reducing ischaemia-reperfusion injury, prolonging preservation time and improving graft function. Static cold storage, the only method currently used in clinical pancreas and islet cell transplantation, has likely reached its plateau. Machine perfusion, hypothermic or normothermic, could hold the key to improving both the quality and the quantity of donor pancreases available for transplant. This article reviews the literature on experimental models of pancreas machine perfusion, examines the benefits of machine perfusion, the technical aspects and their clinical implications. PMID:26253243

  18. Manifest: A computer program for 2-D flow modeling in Stirling machines

    NASA Technical Reports Server (NTRS)

    Gedeon, David

    1989-01-01

    A computer program named Manifest is discussed. Manifest is a program one might want to use to model the fluid dynamics in the manifolds commonly found between the heat exchangers and regenerators of Stirling machines; but not just in the manifolds - in the regenerators as well. And in all sorts of other places too, such as in heaters or coolers, or perhaps even in cylinder spaces. There are probably non-Stirling uses for Manifest also. In broad strokes, Manifest will: (1) model oscillating internal compressible laminar fluid flow in a wide range of two-dimensional regions, either filled with porous materials or empty; (2) present a graphics-based, user-friendly interface, allowing easy selection and modification of region shape and boundary condition specification; (3) run on a personal computer, or optionally (in the case of its number-crunching module) on a supercomputer; and (4) allow interactive examination of the solution output so the user can view vector plots of flow velocity, contour plots of pressure and temperature at various locations and tabulate energy-related integrals of interest.

  19. Hidden Markov Model and Support Vector Machine based decoding of finger movements using Electrocorticography

    PubMed Central

    Wissel, Tobias; Pfeiffer, Tim; Frysch, Robert; Knight, Robert T.; Chang, Edward F.; Hinrichs, Hermann; Rieger, Jochem W.; Rose, Georg

    2013-01-01

    Objective Support Vector Machines (SVM) have developed into a gold standard for accurate classification in Brain-Computer Interfaces (BCI). The choice of the most appropriate classifier for a particular application depends on several characteristics in addition to decoding accuracy. Here we investigate the implementation of Hidden Markov Models (HMM) for online BCIs and discuss strategies to improve their performance. Approach We compare the SVM, serving as a reference, and HMMs for classifying discrete finger movements obtained from the Electrocorticograms of four subjects performing a finger tapping experiment. The classifier decisions are based on a subset of low-frequency time domain and high gamma oscillation features. Main results We show that differences in decoding performance between the two approaches are due to the way features are extracted and selected, and are less dependent on the classifier. An additional gain in HMM performance of up to 6% was obtained by introducing model constraints. Comparable accuracies of up to 90% were achieved with both SVM and HMM, with the high gamma cortical response providing the most important decoding information for both techniques. Significance We discuss technical HMM characteristics and adaptations in the context of the presented data as well as for general BCI applications. Our findings suggest that HMMs and their characteristics are promising for efficient online brain-computer interfaces. PMID:24045504
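
    A schematic of the one-HMM-per-class decoding scheme, in contrast to a single SVM on the same features, might look as follows; it uses the third-party hmmlearn package, and the feature sequences are synthetic stand-ins for the paper's ECoG features.

        # Train one Gaussian HMM per movement class, then label a test sequence
        # by the highest log-likelihood among the class models.
        import numpy as np
        from hmmlearn import hmm

        rng = np.random.default_rng(4)
        n_classes, seq_len, n_feat = 3, 40, 6

        def make_seqs(c, n):   # synthetic class-dependent feature sequences
            return [rng.normal(loc=c, size=(seq_len, n_feat)) for _ in range(n)]

        models = []
        for c in range(n_classes):
            train = np.vstack(make_seqs(c, 20))
            m = hmm.GaussianHMM(n_components=3, covariance_type="diag",
                                random_state=0)
            m.fit(train, lengths=[seq_len] * 20)
            models.append(m)

        test = make_seqs(1, 1)[0]
        scores = [m.score(test) for m in models]
        print("decoded class:", int(np.argmax(scores)))   # expect 1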

  20. Assessment of machine learning reliability methods for quantifying the applicability domain of QSAR regression models.

    PubMed

    Toplak, Marko; Močnik, Rok; Polajnar, Matija; Bosnić, Zoran; Carlsson, Lars; Hasselgren, Catrin; Demšar, Janez; Boyer, Scott; Zupan, Blaž; Stålring, Jonna

    2014-02-24

    The vastness of chemical space and the relatively small coverage by experimental data recording molecular properties require us to identify subspaces, or domains, for which we can confidently apply QSAR models. The prediction of QSAR models in these domains is reliable, and potential subsequent investigations of such compounds would find that the predictions closely match the experimental values. Standard approaches in QSAR assume that predictions are more reliable for compounds that are "similar" to those in subspaces with denser experimental data. Here, we report on a study of an alternative set of techniques recently proposed in the machine learning community. These methods quantify prediction confidence through estimation of the prediction error at the point of interest. Our study includes 20 public QSAR data sets with continuous response and assesses the quality of 10 reliability scoring methods by observing their correlation with prediction error. We show that these new alternative approaches can outperform standard reliability scores that rely only on similarity to compounds in the training set. The results also indicate that the quality of reliability scoring methods is sensitive to data set characteristics and to the regression method used in QSAR. We demonstrate that at the cost of increased computational complexity these dependencies can be leveraged by integration of scores from various reliability estimation approaches. The reliability estimation techniques described in this paper have been implemented in an open source add-on package (https://bitbucket.org/biolab/orange-reliability) to the Orange data mining suite. PMID:24490838

  1. Hybrid polylingual object model: an efficient and seamless integration of Java and native components on the Dalvik virtual machine.

    PubMed

    Huang, Yukun; Chen, Rong; Wei, Jingbo; Pei, Xilong; Cao, Jing; Prakash Jayaraman, Prem; Ranjan, Rajiv

    2014-01-01

    JNI in the Android platform is often observed to have low efficiency and high coding complexity. Although many researchers have investigated the JNI mechanism, few of them solve the efficiency and the complexity problems of JNI in the Android platform simultaneously. In this paper, a hybrid polylingual object (HPO) model is proposed to allow a CAR object to be accessed as a Java object, and vice versa, in the Dalvik virtual machine. It is an acceptable substitute for JNI to reuse CAR-compliant components in Android applications in a seamless and efficient way. The metadata injection mechanism is designed to support the automatic mapping and reflection between CAR objects and Java objects. A prototype virtual machine, called HPO-Dalvik, is implemented by extending the Dalvik virtual machine to support the HPO model. Lifespan management, garbage collection, and data type transformation of HPO objects are also handled automatically in the HPO-Dalvik virtual machine. The experimental results show that the HPO model outperforms standard JNI, with lower overhead on the native side and better execution performance, since no JNI bridging code is required. PMID:25110745

  2. Highly predictive support vector machine (SVM) models for anthrax toxin lethal factor (LF) inhibitors.

    PubMed

    Zhang, Xia; Amin, Elizabeth Ambrose

    2016-01-01

    Anthrax is a highly lethal, acute infectious disease caused by the rod-shaped, Gram-positive bacterium Bacillus anthracis. The anthrax toxin lethal factor (LF), a zinc metalloprotease secreted by the bacilli, plays a key role in anthrax pathogenesis and is chiefly responsible for anthrax-related toxemia and host death, partly via inactivation of mitogen-activated protein kinase kinase (MAPKK) enzymes and consequent disruption of key cellular signaling pathways. Antibiotics such as fluoroquinolones are capable of clearing the bacilli but have no effect on LF-mediated toxemia; LF itself therefore remains the preferred target for toxin inactivation. However, currently no LF inhibitor is available on the market as a therapeutic, partly due to the insufficiency of existing LF inhibitor scaffolds in terms of efficacy, selectivity, and toxicity. In the current work, we present novel support vector machine (SVM) models with high prediction accuracy that are designed to rapidly identify potential novel, structurally diverse LF inhibitor chemical matter from compound libraries. These SVM models were trained and validated using 508 compounds with published LF biological activity data and 847 inactive compounds deposited in the PubChem BioAssay database. One model, M1, demonstrated particularly favorable selectivity toward highly active compounds by correctly predicting 39 (95.12%) out of 41 nanomolar-level LF inhibitors, 46 (93.88%) out of 49 inactives, and 844 (99.65%) out of 847 PubChem inactives in external, unbiased test sets. These models are expected to facilitate the prediction of LF inhibitory activity for existing molecules, as well as identification of novel potential LF inhibitors from large datasets. PMID:26615468

  3. In situ monitoring and machine modeling of snowpack evolution in complex terrains

    NASA Astrophysics Data System (ADS)

    Frolik, J.; Skalka, C.

    2014-12-01

    It is well known that snowpack evolution depends on a variety of landscape conditions including tree cover, slope, wind exposure, etc. In this presentation we report on methods that combine modern in-situ sensor technologies with machine learning-based algorithms to obtain improved models of snowpack evolution. Snowcloud is an embedded data collection system for snow hydrology field research campaigns that leverages distributed wireless sensor network technology to provide data at low cost and high spatial-temporal resolution. The system is compact, allowing it to be deployed readily within dense canopies and/or on steep slopes. The system has demonstrated robustness over multiple seasons of operation, showing it is applicable not only to short-term strategic monitoring but to extended studies as well. We have used data collected by Snowcloud deployments to develop improved models of snowpack evolution using genetic programming (GP). Such models can be used to augment existing sensor infrastructure to obtain better areal snow depth and snow-water equivalence estimations. The presented work will discuss three multi-season deployments and present data (collected at 1-3 hour intervals and at multiple locations) on snow depth variation throughout the season. The three deployment sites (Eastern Sierra Mountains, CA; Hubbard Brook Experimental Forest, NH; and Sulitjelma, Norway) are varied not only geographically but also terrain-wise within each small study area (~2.5 hectares). We will also discuss models generated by inductive (GP) learning, including non-linear regression techniques and evaluation, and how short-term Snowcloud field campaigns can augment existing infrastructure.
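
    As a sketch of the inductive (GP) learning step, symbolic regression over invented stand-ins for the Snowcloud features could look like this; it uses the third-party gplearn package, and nothing below reflects the study's actual features or evolved models.

        # Evolve a snow-depth expression from landscape features by genetic
        # programming (symbolic regression). Features and target are synthetic.
        import numpy as np
        from gplearn.genetic import SymbolicRegressor

        rng = np.random.default_rng(9)
        X = rng.uniform(size=(300, 3))     # e.g. canopy, slope, elevation (assumed)
        snow_depth = 2.0 - 1.2 * X[:, 0] + 0.5 * X[:, 2] + 0.05 * rng.normal(size=300)

        gp = SymbolicRegressor(population_size=500, generations=10,
                               function_set=("add", "sub", "mul"), random_state=0)
        gp.fit(X, snow_depth)
        print(gp._program)                 # the best evolved expression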

  4. Multiscale Modeling of Biological Functions: From Enzymes to Molecular Machines (Nobel Lecture)

    PubMed Central

    Warshel, Arieh

    2016-01-01

    A detailed understanding of the action of biological molecules is a pre-requisite for rational advances in health sciences and related fields. Here, the challenge is to move from available structural information to a clear understanding of the underlying function of the system. In light of the complexity of macromolecular complexes, it is essential to use computer simulations to describe how the molecular forces are related to a given function. However, using a full and reliable quantum mechanical representation of large molecular systems has been practically impossible. The solution to this (and related) problems has emerged from the realization that large systems can be spatially divided into a region where the quantum mechanical description is essential (e.g. a region where bonds are being broken), with the remainder of the system being represented on a simpler level by empirical force fields. This idea has been particularly effective in the development of the combined quantum mechanics/molecular mechanics (QM/MM) models. Here, the coupling between the electrostatic effects of the quantum and classical subsystems has been a key to the advances in describing the functions of enzymes and other biological molecules. The same idea of representing complex systems in different resolutions in both time and length scales has been found to be very useful in modeling the action of complex systems. In such cases, starting with coarse grained (CG) representations that were originally found to be very useful in simulating protein folding, and augmenting them with a focus on electrostatic energies, has led to models that are particularly effective in probing the action of molecular machines. The same multiscale idea is likely to play a major role in modeling of even more complex systems, including cells and collections of cells. PMID:25060243

  5. Contaminant dispersion prediction and source estimation with integrated Gaussian-machine learning network model for point source emission in atmosphere.

    PubMed

    Ma, Denglong; Zhang, Zaoxiao

    2016-07-01

    Gas dispersion models are important for predicting gas concentrations when contaminant gas leakage occurs. Intelligent network models such as radial basis function (RBF), back propagation (BP) neural network and support vector machine (SVM) models can be used for gas dispersion prediction. However, the prediction results from these network models with too many inputs based on original monitoring parameters are not in good agreement with the experimental data. Therefore, a new series of machine learning algorithm (MLA) models, which combine the classic Gaussian model with MLA algorithms, is presented. The prediction results from the new models are greatly improved. Among these models, the Gaussian-SVM model performs best, and its computation time is close to that of the classic Gaussian dispersion model. Finally, the Gaussian-MLA models were applied to identifying the emission source parameters with the particle swarm optimization (PSO) method. The estimation performance of PSO with Gaussian-MLA is better than that with the Gaussian model, the Lagrangian stochastic (LS) dispersion model, and network models based on original monitoring parameters. Hence, the new prediction model based on Gaussian-MLA is potentially a good method to predict contaminant gas dispersion, as well as a good forward model for the emission source parameter identification problem. PMID:27035273
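
    The Gaussian-MLA idea might be sketched as follows: the classic Gaussian plume prediction is fed to a support vector regressor as a physically informed feature, rather than training on raw monitoring parameters alone. The dispersion coefficients, feature set, and data below are illustrative assumptions.

        import numpy as np
        from sklearn.svm import SVR

        def gaussian_plume(Q, u, x, y, z, H, a=0.08, b=0.06):
            """Classic Gaussian plume concentration for a continuous point
            source; sigma_y, sigma_z grow linearly with downwind distance x
            (a crude stability assumption)."""
            sy, sz = a * x, b * x
            return (Q / (2 * np.pi * u * sy * sz)
                    * np.exp(-y**2 / (2 * sy**2))
                    * (np.exp(-(z - H)**2 / (2 * sz**2))
                       + np.exp(-(z + H)**2 / (2 * sz**2))))

        rng = np.random.default_rng(5)
        n, Q, H = 300, 1.0, 10.0
        u = rng.uniform(1, 6, n)
        x = rng.uniform(50, 500, n); y = rng.uniform(-50, 50, n)
        base = gaussian_plume(Q, u, x, y, 1.5, H)
        obs = base * rng.lognormal(0.0, 0.3, n)    # "measured" data with noise

        X = np.column_stack([base, u, x, y])       # Gaussian output as a feature
        model = SVR(kernel="rbf", C=10.0).fit(X[:200], obs[:200])
        print("test corr:", np.corrcoef(model.predict(X[200:]), obs[200:])[0, 1])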

  6. A Comparison of Costs of Searching the Machine-Readable Data Bases ERIC and "Psychological Abstracts" in an Annual Subscription Rate System Against Costs Estimated for the Same Searches Done in the Lockheed DIALOG System and the System Development Corporation for ERIC, and the Lockheed DIALOG System and PASAT for "Psychological Abstracts."

    ERIC Educational Resources Information Center

    Palmer, Crescentia

    A comparison of costs for computer-based searching of Psychological Abstracts and Educational Resources Information Center (ERIC) systems by the New York State Library at Albany was produced by combining data available from search request forms and from bills from the contract subscription service, the State University of New…

  7. Object Classification via Planar Abstraction

    NASA Astrophysics Data System (ADS)

    Oesau, Sven; Lafarge, Florent; Alliez, Pierre

    2016-06-01

    We present a supervised machine learning approach for the classification of objects from sampled point data. The main idea consists in first abstracting the input object into planar parts at several scales, then discriminating between the different classes of objects solely through features derived from these planar shapes. Abstracting into planar shapes provides a means to both reduce the computational complexity and improve robustness to defects inherent to the acquisition process. Measuring statistical properties and relationships between planar shapes offers invariance to scale and orientation. A random forest is then used for solving the multiclass classification problem. We demonstrate the potential of our approach on a set of indoor objects from the Princeton shape benchmark and on objects acquired from indoor scenes, and compare the performance of our method with other point-based shape descriptors.

  8. An Entity-Relationship Model for a European Machine-Dictionary of Medicine

    PubMed Central

    Rossi-Mori, Angelo; Thornton, Anna M.; Gangemi, Aldo

    1990-01-01

    Dictionaries, thesauri, and nomenclatures are among the conventional tools for the systematic organization of the terms and concepts of medicine. Computer support provides new functions for them, mainly because it allows different views and approaches, until now seen as alternatives, to co-exist in the same system. We analyze and model the general (language-independent) features of both the linguistic-terminological aspects and the conceptual aspects of an integrated terminological database of medicine. The special language of medicine is peculiar with respect to common language, particularly because of its high rate of synonyms and phrasal terms. We examine this peculiarity, analyzing the relationship between medical terms and underlying concepts. We build a continuous scale from the 'free' text used by health operators in a given document to more and more regular and abstract forms (spelling variants, true synonyms, contextual variants, equivalent terms in different languages, morphosyntactical representatives, concepts).

  9. Prediction of calcium-binding sites by combining loop-modeling with machine learning

    PubMed Central

    2009-01-01

    Background Protein ligand-binding sites in the apo state exhibit structural flexibility. This flexibility often frustrates methods for structure-based recognition of these sites because it leads to the absence of electron density for these critical regions, particularly when they are in surface loops. Methods for recognizing functional sites in these missing loops would be useful for recovering additional functional information. Results We report a hybrid approach for recognizing calcium-binding sites in disordered regions. Our approach combines loop modeling with a machine learning method (FEATURE) for structure-based site recognition. For validation, we compared the performance of our method on known calcium-binding sites for which there are both holo and apo structures. When loops in the apo structures are rebuilt using modeling methods, FEATURE identifies 14 out of 20 crystallographically proven calcium-binding sites. It only recognizes 7 out of 20 calcium-binding sites in the initial apo crystal structures. We applied our method to unstructured loops in proteins from SCOP families known to bind calcium in order to discover potential cryptic calcium binding sites. We built 2745 missing loops and evaluated them for potential calcium binding. We made 102 predictions of calcium-binding sites. Ten predictions are consistent with independent experimental verifications. We found indirect experimental evidence for 14 other predictions. The remaining 78 predictions are novel predictions, some with intriguing potential biological significance. In particular, we see an enrichment of beta-sheet folds with predicted calcium binding sites in the connecting loops on the surface that may be important for calcium-mediated function switches. Conclusion Protein crystal structures are a potentially rich source of functional information. When loops are missing in these structures, we may be losing important information about binding sites and active sites. We have shown that

  10. Chaotic Boltzmann machines

    PubMed Central

    Suzuki, Hideyuki; Imura, Jun-ichi; Horio, Yoshihiko; Aihara, Kazuyuki

    2013-01-01

    The chaotic Boltzmann machine proposed in this paper is a chaotic pseudo-billiard system that works as a Boltzmann machine. Chaotic Boltzmann machines are shown numerically to have computing abilities comparable to conventional (stochastic) Boltzmann machines. Since no randomness is required, efficient hardware implementation is expected. Moreover, the ferromagnetic phase transition of the Ising model is shown to be characterised by the largest Lyapunov exponent of the proposed system. In general, a method to relate probabilistic models to nonlinear dynamics by derandomising Gibbs sampling is presented. PMID:23558425
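
    For contrast with the derandomized dynamics described above, a minimal stochastic Boltzmann machine (a Gibbs sampler on an Ising-type network) is sketched here; the chaotic machine replaces the random flip below with a deterministic pseudo-billiard internal state.

        # Gibbs sampling for a stochastic Boltzmann machine: unit i turns on
        # with probability sigmoid((W s + b)_i / T). Weights are random,
        # symmetric, zero-diagonal; all sizes are illustrative.
        import numpy as np

        rng = np.random.default_rng(6)
        n = 20
        W = rng.normal(scale=0.5, size=(n, n))
        W = (W + W.T) / 2
        np.fill_diagonal(W, 0.0)
        b = np.zeros(n)
        T = 1.0

        s = rng.integers(0, 2, size=n).astype(float)
        for _ in range(1000):                          # sampling sweeps
            for i in rng.permutation(n):
                p_on = 1.0 / (1.0 + np.exp(-(W[i] @ s + b[i]) / T))
                s[i] = 1.0 if rng.random() < p_on else 0.0
        print("sample:", s.astype(int))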

  11. Machine Learning Based Multi-Physical-Model Blending for Enhancing Renewable Energy Forecast -- Improvement via Situation Dependent Error Correction

    SciTech Connect

    Lu, Siyuan; Hwang, Youngdeok; Khabibrakhmanov, Ildar; Marianno, Fernando J.; Shao, Xiaoyan; Zhang, Jie; Hodge, Bri-Mathias; Hamann, Hendrik F.

    2015-07-15

    With increasing penetration of solar and wind energy into the total energy supply mix, the pressing need for accurate energy forecasting has become well-recognized. Here we report the development of a machine-learning based model blending approach for statistically combining multiple meteorological models to improve the accuracy of solar/wind power forecasts. Importantly, we demonstrate that including as machine-learning input, in addition to the parameters to be predicted (such as solar irradiance and power), additional atmospheric state parameters which collectively define weather situations provides further enhanced accuracy for the blended result. Functional analysis of variance shows that the error of each individual model has a substantial dependence on the weather situation. The machine-learning approach effectively reduces such situation-dependent error and thus produces more accurate results compared to conventional multi-model ensemble approaches based on simplistic equally or unequally weighted model averaging. Validation results over an extended period of time show over 30% improvement in solar irradiance/power forecast accuracy compared to forecasts based on the best individual model.
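
    A toy version of situation-dependent blending follows: a regressor combines several synthetic "model" forecasts together with a weather-state feature and is compared against an equally weighted average. Models, features, and data are placeholders, not the system described above.

        import numpy as np
        from sklearn.ensemble import GradientBoostingRegressor

        rng = np.random.default_rng(7)
        n = 2000
        truth = rng.uniform(0, 1000, n)                  # solar irradiance, W/m^2
        # Three "NWP model" forecasts whose error depends on the cloud regime.
        cloud = rng.uniform(0, 1, n)
        f1 = truth + rng.normal(0, 50 + 150 * cloud, n)  # worse when cloudy
        f2 = truth + rng.normal(0, 150 - 100 * cloud, n) # worse when clear
        f3 = truth + rng.normal(0, 100, n)

        X = np.column_stack([f1, f2, f3, cloud])         # forecasts + weather state
        blend = GradientBoostingRegressor().fit(X[:1500], truth[:1500])

        naive = (f1 + f2 + f3) / 3                       # fixed equal weighting
        rmse = lambda p, t: np.sqrt(np.mean((p - t) ** 2))
        print("equal-weight RMSE:", rmse(naive[1500:], truth[1500:]))
        print("situation-aware blend RMSE:", rmse(blend.predict(X[1500:]), truth[1500:]))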

  12. Advancing brain-machine interfaces: moving beyond linear state space models.

    PubMed

    Rouse, Adam G; Schieber, Marc H

    2015-01-01

    Advances in recent years have dramatically improved output control by Brain-Machine Interfaces (BMIs). Such devices nevertheless remain robotic and limited in their movements compared to normal human motor performance. Most current BMIs rely on transforming recorded neural activity to a linear state space composed of a set number of fixed degrees of freedom. Here we consider a variety of ways in which BMI design might be advanced further by applying non-linear dynamics observed in normal motor behavior. We consider (i) the dynamic range and precision of natural movements, (ii) differences between cortical activity and actual body movement, (iii) kinematic and muscular synergies, and (iv) the implications of large neuronal populations. We advance the hypothesis that a given population of recorded neurons may transmit more useful information than can be captured by a single, linear model across all movement phases and contexts. We argue that incorporating these various non-linear characteristics will be an important next step in advancing BMIs to more closely match natural motor performance. PMID:26283932

  13. Advancing brain-machine interfaces: moving beyond linear state space models

    PubMed Central

    Rouse, Adam G.; Schieber, Marc H.

    2015-01-01

    Advances in recent years have dramatically improved output control by Brain-Machine Interfaces (BMIs). Such devices nevertheless remain robotic and limited in their movements compared to normal human motor performance. Most current BMIs rely on transforming recorded neural activity to a linear state space composed of a set number of fixed degrees of freedom. Here we consider a variety of ways in which BMI design might be advanced further by applying non-linear dynamics observed in normal motor behavior. We consider (i) the dynamic range and precision of natural movements, (ii) differences between cortical activity and actual body movement, (iii) kinematic and muscular synergies, and (iv) the implications of large neuronal populations. We advance the hypothesis that a given population of recorded neurons may transmit more useful information than can be captured by a single, linear model across all movement phases and contexts. We argue that incorporating these various non-linear characteristics will be an important next step in advancing BMIs to more closely match natural motor performance. PMID:26283932

  14. Fullrmc, a rigid body Reverse Monte Carlo modeling package enabled with machine learning and artificial intelligence.

    PubMed

    Aoun, Bachir

    2016-05-01

    A new Reverse Monte Carlo (RMC) package, "fullrmc", for atomic or rigid-body and molecular, amorphous, or crystalline materials is presented. fullrmc's main purpose is to provide fully modular, fast and flexible software, thoroughly documented, capable of handling complex molecules, written in a modern programming language (Python, Cython, C and C++ when performance is needed) and complying with modern programming practices. fullrmc's approach to solving an atomic or molecular structure differs from existing RMC algorithms and software. In a nutshell, traditional RMC methods and software randomly adjust atom positions until the whole system has the greatest consistency with a set of experimental data. In contrast, fullrmc applies smart moves endorsed with reinforcement machine learning to groups of atoms. While fullrmc allows running traditional RMC modeling, the uniqueness of this approach resides in its ability to customize the grouping of atoms in any convenient way with no additional programming effort and to apply smart and more physically meaningful moves to the defined groups of atoms. In addition, fullrmc provides a unique way, at almost no additional computational cost, to recur a group's selection, allowing the system to escape local minima by refining a group's position or by exploring, through and beyond disallowed positions and energy barriers, the unrestricted three-dimensional space around a group. PMID:26800289
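
    The acceptance rule at the core of RMC engines, whether moves are the traditional single-atom kind or fullrmc-style group moves, can be sketched schematically as follows; nothing below reflects fullrmc's actual API.

        # Propose a move, recompute the misfit chi^2 against experimental data,
        # accept downhill moves always and uphill moves with a Boltzmann-like
        # probability. All callables here are placeholders.
        import math, random

        def rmc_step(positions, chi2, propose_move, compute_chi2,
                     temperature=1.0, rng=random.Random(0)):
            trial = propose_move(positions)        # e.g. translate/rotate a group
            chi2_new = compute_chi2(trial)
            if (chi2_new <= chi2
                    or rng.random() < math.exp((chi2 - chi2_new) / temperature)):
                return trial, chi2_new             # accepted
            return positions, chi2                 # rejected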

  15. A Comparison of Hourly Typhoon Rainfall Forecasting Models Based on Support Vector Machines and Random Forests with Different Predictor Sets

    NASA Astrophysics Data System (ADS)

    Lin, Kun-Hsiang; Tseng, Hung-Wei; Kuo, Chen-Min; Yang, Tao-Chang; Yu, Pao-Shan

    2016-04-01

    Typhoons with heavy rainfall and strong wind often cause severe floods and losses in Taiwan, which motivates the development of rainfall forecasting models as part of an early warning system. Thus, this study aims to develop rainfall forecasting models based on two machine learning methods, support vector machines (SVMs) and random forests (RFs), and to investigate the performance of the models with different predictor sets in search of the optimal predictor set for forecasting. Four predictor sets were used to construct models for 1- to 6-hour-ahead rainfall forecasting: (1) antecedent rainfalls; (2) antecedent rainfalls and typhoon characteristics; (3) antecedent rainfalls and meteorological factors; and (4) antecedent rainfalls, typhoon characteristics and meteorological factors. An application to three rainfall stations in the Yilan River basin, northeastern Taiwan, was conducted. Firstly, the performance of the SVMs-based forecasting model with predictor set #1 was analyzed. The results show that the accuracy of the models for 2- to 6-hour-ahead forecasting decreases rapidly compared to the accuracy of the model for 1-hour-ahead forecasting, which is acceptable. To improve the model performance, each predictor set was further examined in the SVMs-based forecasting model. The results reveal that the SVMs-based model using predictor set #4 as input variables performs better than the other sets, and a significant improvement of model performance is found especially for long lead-time forecasting. Lastly, the performance of the SVMs-based model using predictor set #4 as input variables was compared with the performance of the RFs-based model using predictor set #4 as input variables. It is found that the RFs-based model is superior to the SVMs-based model in hourly typhoon rainfall forecasting. Keywords: hourly typhoon rainfall forecasting, predictor selection, support vector machines, random forests

  16. Implications of the Turing machine model of computation for processor and programming language design

    NASA Astrophysics Data System (ADS)

    Hunter, Geoffrey

    2004-01-01

    A computational process is classified according to the theoretical model that is capable of executing it; computational processes that require a non-predeterminable amount of intermediate storage for their execution are Turing-machine (TM) processes, while those whose storage is predeterminable are Finite Automaton (FA) processes. Simple processes (such as a traffic light controller) are executable by a Finite Automaton, whereas the most general kind of computation requires a Turing Machine for its execution. This implies that a TM process must have a non-predeterminable amount of memory allocated to it at intermediate instants of its execution, i.e. dynamic memory allocation. Many processes encountered in practice are TM processes. The implication for computational practice is that the hardware (CPU) architecture and its operating system must facilitate dynamic memory allocation, and that the programming language used to specify TM processes must have statements with the semantic attribute of dynamic memory allocation, for in Alan Turing's thesis on computation (1936) the "standard description" of a process is invariant over the most general data that the process is designed to process; i.e. the program describing the process should never have to be modified to allow for differences in the data that is to be processed in different instantiations; i.e. data-invariant programming. Any non-trivial program is partitioned into sub-programs (procedures, subroutines, functions, modules, etc). Examination of the calls/returns between the subprograms reveals that they are nodes in a tree structure; this tree structure is independent of the programming language used to encode (define) the process. Each sub-program typically needs some memory for its own use (to store values intermediate between its received data and its computed results); this locally required memory is not needed before the subprogram commences execution, and it is not needed after its execution terminates.

  17. Working with Simple Machines

    ERIC Educational Resources Information Center

    Norbury, John W.

    2006-01-01

    A set of examples is provided that illustrate the use of work as applied to simple machines. The ramp, pulley, lever and hydraulic press are common experiences in the life of a student, and their theoretical analysis therefore makes the abstract concept of work more real. The mechanical advantage of each of these systems is also discussed so that…

  18. Modelling and simulation of effect of ultrasonic vibrations on machining of Ti6Al4V.

    PubMed

    Patil, Sandip; Joshi, Shashikant; Tewari, Asim; Joshi, Suhas S

    2014-02-01

    Titanium alloys cause high heat generation and consequent rapid wear of cutting tool edges during machining. Ultrasonic assisted turning (UAT) has been found to be very effective in machining various materials, especially "difficult-to-cut" materials like Ti6Al4V. The present work is a comprehensive study involving 2D FE transient simulation of UAT in the DEFORM framework and its experimental characterization. The simulation shows that UAT reduces the stress level on the cutting tool during machining as compared to continuous turning (CT), barring the penetration stage, wherein both tools are subjected to identical stress levels. There is a 40-45% reduction in cutting forces and about a 48% reduction in cutting temperature in UAT over CT. However, the magnitude of the reduction decreases with an increase in cutting speed. The experimental analysis of the UAT process shows that the surface roughness in UAT is lower than in CT, and the UATed surfaces have a matte finish as against the glossy finish of the CTed surfaces. Microstructural observations of the chips and machined surfaces in both processes reveal that the intensity of thermal softening and shear band formation is reduced in UAT relative to CT. PMID:24103362

  19. Unified error model based spatial error compensation for four types of CNC machining center: Part II-unified model based spatial error compensation

    NASA Astrophysics Data System (ADS)

    Fan, Kaiguo; Yang, Jianguo; Yang, Liyan

    2014-12-01

    In this paper, a spatial error compensation method is proposed for CNC machining centers based on the unified error model. The spatial error distribution was analyzed in this research. The results show that the spatial error depends on each axis of a CNC machine tool; moreover, the spatial error distribution is non-linear and exhibits no simple regularity. In order to improve modeling accuracy and efficiency, an automatic error modeling application was designed based on orthogonal polynomials. To realize the spatial error compensation, an error compensation controller based on a multi-thread parallel processing mode was designed. Using this spatial error compensation method, the machine tool's accuracy is greatly improved compared to operation without compensation.
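    One plausible reading of the orthogonal-polynomial modeling step is sketched below with synthetic measurements; the axis, error shape and polynomial degree are assumptions of ours, not the authors' implementation.

    ```python
    # Hedged sketch: fitting an axis positioning-error model with orthogonal
    # (Chebyshev) polynomials, then checking the residual left for compensation.
    import numpy as np
    from numpy.polynomial import chebyshev as C

    position = np.linspace(0.0, 500.0, 26)  # mm along one axis (synthetic)
    measured_error = 0.002 * position + 0.5 * np.sin(position / 80.0)  # synthetic, um

    coeffs = C.chebfit(position, measured_error, deg=5)  # least-squares fit
    predicted = C.chebval(position, coeffs)

    # Compensation would command: target - predicted_error(target)
    residual = measured_error - predicted
    print(f"max residual after modeling: {np.abs(residual).max():.4f} um")
    ```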

  20. Pan evaporation modeling using least square support vector machine, multivariate adaptive regression splines and M5 model tree

    NASA Astrophysics Data System (ADS)

    Kisi, Ozgur

    2015-09-01

    Pan evaporation (Ep) modeling is an important issue in reservoir management, regional water resources planning and the evaluation of drinking-water supplies. The main purpose of this study is to investigate the accuracy of least square support vector machines (LSSVM), multivariate adaptive regression splines (MARS) and the M5 Model Tree (M5Tree) in modeling Ep. The first part of the study focused on testing the ability of the LSSVM, MARS and M5Tree models to estimate the Ep data of the Mersin and Antalya stations, located in the Mediterranean Region of Turkey, using the cross-validation method. The LSSVM models outperformed the MARS and M5Tree models in estimating Ep at Mersin and Antalya with local input and output data: the average root mean square error (RMSE) of the M5Tree and MARS models was reduced by 24-32.1% and 10.8-18.9%, respectively, when using LSSVM. In the second part of the study (cross-station application without local input data), the ability of the three methods was examined in estimating Ep from air temperature, solar radiation, relative humidity and wind speed data of a nearby station. The results showed that the MARS models provided better accuracy than the LSSVM and M5Tree models with respect to RMSE, mean absolute error (MAE) and determination coefficient (R2) criteria; using MARS improved the average RMSE of the LSSVM and M5Tree by 3.7% and 16.5%, respectively, and by 11.4% and 18.4% in the case without local input data. In the third part of the study, the applied models were examined in Ep estimation using input and output data of a nearby station. The results reported that the MARS models performed better than the other models with respect to the RMSE, MAE and R2 criteria; the average RMSE of the LSSVM and M5Tree was reduced by 54% and 3.4%, respectively, by using MARS. The overall results indicated that

  1. Dry machinability of aluminum alloys.

    SciTech Connect

    Shareef, I.; Natarajan, M.; Ajayi, O. O.; Energy Technology; Department of IMET

    2005-01-01

    Adverse effects of the use of cutting fluids and environmental concerns with regard to cutting fluid disposability are compelling industry to adopt dry or near-dry machining, with the aim of eliminating or significantly reducing the use of metal working fluids. With EPA regulations on metal cutting pending, dry machining is becoming a hot topic of research and investigation both in industry and in federal research labs. Although the need for dry machining may be apparent, most manufacturers still consider dry machining to be impractical and, even if possible, very expensive. This perception is mainly due to the lack of appropriate cutting tools that can withstand the intense heat and built-up-edge (BUE) formation of dry machining. The challenge of heat dissipation without coolant requires a completely different approach to tooling. Special tooling utilizing high-performance multi-layer, multi-component, heat-resisting, low-friction coatings could be a plausible answer to the challenge of dry machining. In pursuit of this goal, Argonne National Labs has introduced nano-crystalline near frictionless carbon (NFC) diamond-like coatings (DLC), while industrial efforts have led to the introduction of composite coatings such as titanium aluminum nitride (TiAlN), tungsten carbide/carbon (WC/C) and others. Although these coatings are considered very promising, they have not been tested from either a tribological or a dry machining applications point of view. As such, a research program in partnership with federal labs and industrial sponsors has started with the goal of exploring the feasibility of dry machining using the newly developed coatings such as near frictionless carbon (NFC), titanium aluminum nitride (TiAlN), and multi-layer multi-component nano coatings such as TiAlCrYN and TiAlN/YN. Although various coatings are under investigation as part of the overall dry machinability program, this extended abstract deals with a systematic investigation of dry

  2. Robust Brain-Machine Interface Design Using Optimal Feedback Control Modeling and Adaptive Point Process Filtering

    PubMed Central

    Carmena, Jose M.

    2016-01-01

    Much progress has been made in brain-machine interfaces (BMI) using decoders such as Kalman filters and finding their parameters with closed-loop decoder adaptation (CLDA). However, current decoders do not model the spikes directly, and hence may limit the processing time-scale of BMI control and adaptation. Moreover, while specialized CLDA techniques for intention estimation and assisted training exist, a unified and systematic CLDA framework that generalizes across different setups is lacking. Here we develop a novel closed-loop BMI training architecture that allows for processing, control, and adaptation using spike events, enables robust control and extends to various tasks. Moreover, we develop a unified control-theoretic CLDA framework within which intention estimation, assisted training, and adaptation are performed. The architecture incorporates an infinite-horizon optimal feedback-control (OFC) model of the brain’s behavior in closed-loop BMI control, and a point process model of spikes. The OFC model infers the user’s motor intention during CLDA—a process termed intention estimation. OFC is also used to design an autonomous and dynamic assisted training technique. The point process model allows for neural processing, control and decoder adaptation with every spike event and at a faster time-scale than current decoders; it also enables dynamic spike-event-based parameter adaptation unlike current CLDA methods that use batch-based adaptation on much slower adaptation time-scales. We conducted closed-loop experiments in a non-human primate over tens of days to dissociate the effects of these novel CLDA components. The OFC intention estimation improved BMI performance compared with current intention estimation techniques. OFC assisted training allowed the subject to consistently achieve proficient control. Spike-event-based adaptation resulted in faster and more consistent performance convergence compared with batch-based methods, and was robust to

  3. Robust Brain-Machine Interface Design Using Optimal Feedback Control Modeling and Adaptive Point Process Filtering.

    PubMed

    Shanechi, Maryam M; Orsborn, Amy L; Carmena, Jose M

    2016-04-01

    Much progress has been made in brain-machine interfaces (BMI) using decoders such as Kalman filters and finding their parameters with closed-loop decoder adaptation (CLDA). However, current decoders do not model the spikes directly, and hence may limit the processing time-scale of BMI control and adaptation. Moreover, while specialized CLDA techniques for intention estimation and assisted training exist, a unified and systematic CLDA framework that generalizes across different setups is lacking. Here we develop a novel closed-loop BMI training architecture that allows for processing, control, and adaptation using spike events, enables robust control and extends to various tasks. Moreover, we develop a unified control-theoretic CLDA framework within which intention estimation, assisted training, and adaptation are performed. The architecture incorporates an infinite-horizon optimal feedback-control (OFC) model of the brain's behavior in closed-loop BMI control, and a point process model of spikes. The OFC model infers the user's motor intention during CLDA-a process termed intention estimation. OFC is also used to design an autonomous and dynamic assisted training technique. The point process model allows for neural processing, control and decoder adaptation with every spike event and at a faster time-scale than current decoders; it also enables dynamic spike-event-based parameter adaptation unlike current CLDA methods that use batch-based adaptation on much slower adaptation time-scales. We conducted closed-loop experiments in a non-human primate over tens of days to dissociate the effects of these novel CLDA components. The OFC intention estimation improved BMI performance compared with current intention estimation techniques. OFC assisted training allowed the subject to consistently achieve proficient control. Spike-event-based adaptation resulted in faster and more consistent performance convergence compared with batch-based methods, and was robust to parameter
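    The control-theoretic ingredient, infinite-horizon optimal feedback control, can be sketched generically as a discrete-time LQR; the kinematics, cost weights and target below are assumptions of ours, not the authors' BMI decoder.

    ```python
    # Generic sketch (not the authors' code): infinite-horizon discrete-time LQR,
    # the kind of OFC model used to infer intended movement toward a target.
    import numpy as np
    from scipy.linalg import solve_discrete_are

    dt = 0.1
    A = np.array([[1.0, dt], [0.0, 1.0]])  # position-velocity kinematics
    B = np.array([[0.0], [dt]])            # control enters through velocity
    Q = np.diag([1.0, 0.1])                # penalize distance to target and speed
    R = np.array([[0.01]])                 # penalize control effort

    P = solve_discrete_are(A, B, Q, R)     # stationary Riccati solution
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # optimal feedback gain

    x = np.array([1.0, 0.0])               # start one unit away from the target
    for _ in range(50):
        u = -K @ x                         # optimal feedback control law
        x = A @ x + B @ u
    print(x)                               # state has converged near the origin
    ```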

  4. Gasoline surrogate modeling of gasoline ignition in a rapid compression machine and comparison to experiments

    SciTech Connect

    Mehl, M; Kukkadapu, G; Kumar, K; Sarathy, S M; Pitz, W J; Sung, S J

    2011-09-15

    The use of gasoline in homogeneous charge compression ignition (HCCI) engines and in dual-fuel diesel-gasoline engines has increased the need to understand its compression ignition processes under engine-like conditions. These processes need to be studied under well-controlled conditions in order to quantify low temperature heat release and to provide fundamental validation data for chemical kinetic models. With this in mind, an experimental campaign has been undertaken in a rapid compression machine (RCM) to measure the ignition of gasoline mixtures over a wide range of compression temperatures and for different compression pressures. By measuring the pressure history during ignition, information on the first-stage ignition (when observed) and second-stage ignition is captured, along with information on the phasing of the heat release. Heat release processes during ignition are important because gasoline is known to exhibit low-temperature, intermediate-temperature and high-temperature heat release. In an HCCI engine, the occurrence of low-temperature and intermediate-temperature heat release can be exploited to obtain higher-load operation and has become a topic of much interest for engine researchers. Consequently, it is important to understand these processes under well-controlled conditions. A four-component gasoline surrogate model (including n-heptane, iso-octane, toluene, and 2-pentene) has been developed to simulate real gasolines. An appropriate surrogate mixture of the four components has been developed to simulate the specific gasoline used in the RCM experiments. This chemical kinetic surrogate model was then used to simulate the RCM experimental results for real gasoline. The experimental and modeling results covered ultra-lean to stoichiometric mixtures, compressed temperatures of 640-950 K, and compression pressures of 20 and 40 bar. The agreement between the experiments and model is encouraging in terms of first

  5. Pose measurement base on machine vision for the aircraft model in wire-driven parallel suspension system

    NASA Astrophysics Data System (ADS)

    Chen, Yi-feng; Wu, Liao-ni; Yue, Sui-lu; Lin, Qi

    2013-03-01

    In wind tunnel tests, the pose of the aircraft model in a wire-driven parallel suspension system (WDPSS) is determined by driving several wires, so pose measurement is very important for the study of WDPSS. Using machine vision technology, a monocular vision measurement system has been constructed to estimate the pose of the aircraft model by applying a camera calibration, extracting corresponding control points on the aircraft model, and applying several homogeneous transformations. This article describes the programs of the measurement system, the measurement principle, and the data processing methods, based on HALCON, used to solve for the pose of the aircraft model. Experiments validate the practical feasibility of the system.
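    The measurement principle (a calibrated camera, known control points on the model, and a homogeneous transformation) is the classic perspective-n-point problem. A hedged sketch with OpenCV's solver follows; the paper itself uses HALCON, and every numeric value here is invented.

    ```python
    # Hedged sketch of monocular pose estimation via PnP (OpenCV stands in for
    # HALCON; control points, image points and intrinsics are synthetic).
    import numpy as np
    import cv2

    # 3D control points on the aircraft model, in the model frame (mm)
    object_points = np.array([[0, 0, 0], [100, 0, 0], [0, 100, 0],
                              [0, 0, 100], [100, 100, 0], [100, 0, 100]], float)
    # Their measured 2D projections in the image (pixels)
    image_points = np.array([[320, 240], [420, 238], [322, 140],
                             [318, 260], [424, 139], [421, 259]], float)
    # Intrinsics from a prior camera calibration (assumed values)
    camera_matrix = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
    dist_coeffs = np.zeros(5)

    ok, rvec, tvec = cv2.solvePnP(object_points, image_points,
                                  camera_matrix, dist_coeffs)
    R, _ = cv2.Rodrigues(rvec)  # rotation matrix: model frame -> camera frame
    print(ok, tvec.ravel())     # pose = homogeneous transform built from R, tvec
    ```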

  6. Modelling of the radial forging process of a hollow billet with the mandrel on the lever radial forging machine

    NASA Astrophysics Data System (ADS)

    Karamyshev, A. P.; Nekrasov, I. I.; Pugin, A. I.; Fedulov, A. A.

    2016-04-01

    The finite-element method (FEM) has been used in scientific research on the modelling of forming processes; among others, the process of multistage radial forging of hollow billets has been modelled. The model includes both the thermal problem, concerning preliminary heating of the billet taking thermal expansion into account, and the deformation problem, in which the billet is forged in a special machine. The latter part of the model describes such features of the process as die calibration, die movement, initial die temperature, friction conditions, etc. The results obtained can be used to define the necessary process parameters and die calibration.

  7. A model of application system for man-machine-environment system engineering in vessels based on IDEF0

    NASA Astrophysics Data System (ADS)

    Shang, Zhen; Qiu, Changhua; Zhu, Shifan

    2011-09-01

    Applying man-machine-environment system engineering (MMESE) in vessels is a method to improve the effectiveness of the interaction between equipment, environment, and humans for the purpose of advancing operating efficiency, performance, safety, and habitability of a vessel and its subsystems. In the following research, the life cycle of vessels was divided into 9 phases, and 15 research subjects were also identified from among these phases. The 15 subjects were systemized, and then the man-machine-environment engineering system application model for vessels was developed using the ICAM definition method 0 (IDEF0), which is a systematical modeling method. This system model bridges the gap between the data and information flow of every two associated subjects with the major basic research methods and approaches included, which brings the formerly relatively independent subjects together as a whole. The application of this systematic model should facilitate the application of man-machine-environment system engineering in vessels, especially at the conceptual and embodiment design phases. The managers and designers can deal with detailed tasks quickly and efficiently while reducing repetitive work.

  8. Annual Conference Abstracts

    ERIC Educational Resources Information Center

    Engineering Education, 1975

    1975-01-01

    Papers abstracted represent those submitted to the distribution center at the 83rd American Society for Engineering Education Convention. Abstracts are grouped under headings corresponding to the main topic of the paper. (Editor/CP)

  9. Solving the AI Planning Plus Scheduling Problem Using Model Checking via Automatic Translation from the Abstract Plan Preparation Language (APPL) to the Symbolic Analysis Laboratory (SAL)

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; Munoz, Cesar A.; Siminiceanu, Radu I.

    2007-01-01

    This paper describes a translator from a new planning language named the Abstract Plan Preparation Language (APPL) to the Symbolic Analysis Laboratory (SAL) model checker. This translator has been developed in support of the Spacecraft Autonomy for Vehicles and Habitats (SAVH) project sponsored by the Exploration Technology Development Program, which is seeking to mature autonomy technology for the vehicles and operations centers of Project Constellation.

  10. Seismic waves modeling with the Fourier pseudo-spectral method on massively parallel machines.

    NASA Astrophysics Data System (ADS)

    Klin, Peter

    2015-04-01

    The Fourier pseudo-spectral method (FPSM) is an approach to the 3D numerical modeling of wave propagation, based on the discretization of the spatial domain in a structured grid and relying on global spatial differential operators for the solution of the wave equation. This last peculiarity is advantageous from the accuracy point of view but poses difficulties for an efficient implementation of the method on parallel computers with distributed memory architecture. The 1D spatial domain decomposition approach has so far been commonly adopted in parallel implementations of the FPSM, but it implies an intensive data exchange among all the processors involved in the computation, which can degrade performance because of communication latencies. Moreover, the scalability of the 1D domain decomposition is limited, since the number of processors cannot exceed the number of grid points along the directions in which the domain is partitioned. This limitation inhibits efficient exploitation of computational environments with a very large number of processors. In order to overcome the limitations of the 1D domain decomposition, we implemented a parallel version of the FPSM based on a 2D domain decomposition, which allows a higher degree of parallelism and scalability on massively parallel machines with several thousands of processing elements. The parallel programming is essentially achieved using the MPI protocol, but OpenMP parts are also included in order to exploit single-processor multi-threading capabilities when available. The developed tool is aimed at the numerical simulation of seismic wave propagation and in particular is intended for earthquake ground motion research. We show the scalability tests performed up to 16k processing elements on the IBM Blue Gene/Q computer at CINECA (Italy), as well as the application to the simulation of earthquake ground motion in the alluvial plain of the Po river (Italy).
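    A minimal sketch of the 2D process-grid layout follows, with mpi4py standing in for the MPI protocol mentioned above; the global grid size is a hypothetical value of ours, not the author's configuration.

    ```python
    # Sketch (ours): building the 2D Cartesian process grid of a 2D domain
    # decomposition, so that each rank owns a full column of the grid along z.
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    dims = MPI.Compute_dims(comm.Get_size(), [0, 0])  # e.g. 4096 ranks -> 64 x 64
    cart = comm.Create_cart(dims, periods=[False, False])
    py, px = cart.Get_coords(cart.Get_rank())

    nx = ny = nz = 1024                                # hypothetical global grid
    local_ny, local_nx = ny // dims[0], nx // dims[1]
    if cart.Get_rank() == 0:
        print(f"{dims[0]} x {dims[1]} process grid, "
              f"{local_ny} x {local_nx} x {nz} points per rank")
    # Global FFTs along x/y then need transposes (all-to-all) only within rows
    # or columns of this grid, instead of among all ranks as in a 1D split.
    ```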

  11. Parametric modeling and optimization of laser scanning parameters during laser assisted machining of Inconel 718

    NASA Astrophysics Data System (ADS)

    Venkatesan, K.; Ramanujam, R.; Kuppan, P.

    2016-04-01

    This paper presents the parametric effects, microstructure, micro-hardness and optimization of laser scanning parameters (LSP) in heating experiments during laser assisted machining of Inconel 718 alloy. The laser source used for the experiments is a continuous wave Nd:YAG laser with a maximum power of 2 kW. The experimental parameters in the present study are cutting speed in the range of 50-100 m/min, feed rate of 0.05-0.1 mm/rev, laser power of 1.25-1.75 kW and approach angle of 60-90° of the laser beam axis to the tool. The plan of experiments is based on a central composite rotatable design L31 (43) orthogonal array. The surface temperature is measured on-line using an infrared pyrometer. Parametric significance on surface temperature is analysed using response surface methodology (RSM), analysis of variance (ANOVA) and 3D surface graphs. The structural change of the material surface is observed using an optical microscope, and quantitative measurement of the heat-affected depth is analysed by the Vickers hardness test. The results indicate that the laser power and approach angle are the most significant parameters affecting the surface temperature. The optimum ranges of laser power and approach angle were identified as 1.25-1.5 kW and 60-65° using an overlaid contour plot. The developed second-order regression model is found to be in good agreement with experimental values, with R2 values of 0.96 and 0.94 for surface temperature and heat-affected depth, respectively.

  12. Fluid-structure interaction modeling of wind turbines: simulating the full machine

    NASA Astrophysics Data System (ADS)

    Hsu, Ming-Chen; Bazilevs, Yuri

    2012-12-01

    In this paper we present our aerodynamics and fluid-structure interaction (FSI) computational techniques that enable dynamic, fully coupled, 3D FSI simulation of wind turbines at full scale, and in the presence of the nacelle and tower (i.e., simulation of the "full machine"). For the interaction of wind and flexible blades we employ a nonmatching interface discretization approach, where the aerodynamics is computed using a low-order finite-element-based ALE-VMS technique, while the rotor blades are modeled as thin composite shells discretized using NURBS-based isogeometric analysis (IGA). We find that coupling FEM and IGA in this manner gives a good combination of efficiency, accuracy, and flexibility of the computational procedures for wind turbine FSI. The interaction between the rotor and tower is handled using a non-overlapping sliding-interface approach, where both moving- and stationary-domain formulations of aerodynamics are employed. At the fluid-structure and sliding interfaces, the kinematic and traction continuity is enforced weakly, which is a key ingredient of the proposed numerical methodology. We present several simulations of a three-blade 5 MW wind turbine, with and without the tower. We find that, in the case of no tower, the presence of the sliding interface has no effect on the prediction of aerodynamic loads on the rotor. From this we conclude that weak enforcement of the kinematics gives just as accurate results as the strong enforcement, and thus enables the simulation of rotor-tower interaction (as well as other applications involving mechanical components in relative motion). We also find that the blade passing the tower produces a 10-12% drop (per blade) in the aerodynamic torque. We feel this finding may be important when it comes to the fatigue-life analysis and prediction for wind turbine blades.

  13. Neural control and adaptive neural forward models for insect-like, energy-efficient, and adaptable locomotion of walking machines.

    PubMed

    Manoonpong, Poramate; Parlitz, Ulrich; Wörgötter, Florentin

    2013-01-01

    Living creatures, like walking animals, have found fascinating solutions to the problem of locomotion control. Their movements give an impression of elegance, including versatile, energy-efficient, and adaptable locomotion. During the last few decades, roboticists have tried to imitate such natural properties with artificial legged locomotion systems by using different approaches, including machine learning algorithms, classical engineering control techniques, and biologically-inspired control mechanisms. However, their levels of performance are still far from the natural ones. By contrast, animal locomotion mechanisms seem to largely depend not only on central mechanisms (central pattern generators, CPGs) and sensory feedback (afferent-based control) but also on internal forward models (efference copies). They are used to a different degree in different animals. Generally, CPGs organize basic rhythmic motions which are shaped by sensory feedback, while internal models are used for sensory prediction and state estimation. According to this concept, we present here an adaptive neural locomotion control consisting of a CPG mechanism with neuromodulation and local leg control mechanisms based on sensory feedback and adaptive neural forward models with efference copies. This neural closed-loop controller enables a walking machine to perform a multitude of different walking patterns, including insect-like leg movements and gaits as well as energy-efficient locomotion. In addition, the forward models allow the machine to autonomously adapt its locomotion to deal with a change of terrain, loss of ground contact during the stance phase, stepping on or hitting an obstacle during the swing phase, and leg damage, and even to promote cockroach-like climbing behavior. Thus, the results presented here show that the employed embodied neural closed-loop system can be a powerful way of developing robust and adaptable machines. PMID:23408775
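    The CPG ingredient can be illustrated with a minimal two-neuron oscillator of the SO(2) type used in this line of work; the angle and gain below are our choices, not the paper's exact parameters.

    ```python
    # Minimal sketch of a CPG: a two-neuron oscillator whose recurrent weights
    # form a scaled rotation matrix, yielding the basic rhythm that sensory
    # feedback would then shape.
    import math

    phi = 0.2     # rotation angle, sets the oscillation frequency
    alpha = 1.01  # gain slightly above 1 sustains the limit cycle under tanh
    w = [[alpha * math.cos(phi), alpha * math.sin(phi)],
         [-alpha * math.sin(phi), alpha * math.cos(phi)]]

    o1, o2 = 0.1, 0.1  # neuron outputs, small nonzero seed
    for step in range(100):
        a1 = w[0][0] * o1 + w[0][1] * o2
        a2 = w[1][0] * o1 + w[1][1] * o2
        o1, o2 = math.tanh(a1), math.tanh(a2)
        if step % 10 == 0:
            print(f"step {step:3d}: o1={o1:+.3f} o2={o2:+.3f}")
    # o1 and o2 oscillate roughly 90 degrees out of phase and can drive
    # antagonistic leg joints.
    ```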

  14. A Synthesis Model for Forcing Action Arrangement in the System of Reducing Dynamic Loads of a Mobile Machine

    NASA Astrophysics Data System (ADS)

    Kaźmierczak, H.; Pawłowski, T.; Zembrowski, K.

    2015-05-01

    An idea is presented for a method to lower excessive dynamic loads in a system comprising a supporting structure, a mechanical-hydraulic forcing system, a vibration isolation system, and a protective unit. The dynamic characteristics of the system are determined by the method of dynamic susceptibility. An analytical model of the system was built (a mobile machine to carry out protective treatments; project WDN-POIG.01.03.01-00-164/09).

  15. Neural control and adaptive neural forward models for insect-like, energy-efficient, and adaptable locomotion of walking machines

    PubMed Central

    Manoonpong, Poramate; Parlitz, Ulrich; Wörgötter, Florentin

    2013-01-01

    Living creatures, like walking animals, have found fascinating solutions to the problem of locomotion control. Their movements give an impression of elegance, including versatile, energy-efficient, and adaptable locomotion. During the last few decades, roboticists have tried to imitate such natural properties with artificial legged locomotion systems by using different approaches, including machine learning algorithms, classical engineering control techniques, and biologically-inspired control mechanisms. However, their levels of performance are still far from the natural ones. By contrast, animal locomotion mechanisms seem to largely depend not only on central mechanisms (central pattern generators, CPGs) and sensory feedback (afferent-based control) but also on internal forward models (efference copies). They are used to a different degree in different animals. Generally, CPGs organize basic rhythmic motions which are shaped by sensory feedback, while internal models are used for sensory prediction and state estimation. According to this concept, we present here an adaptive neural locomotion control consisting of a CPG mechanism with neuromodulation and local leg control mechanisms based on sensory feedback and adaptive neural forward models with efference copies. This neural closed-loop controller enables a walking machine to perform a multitude of different walking patterns, including insect-like leg movements and gaits as well as energy-efficient locomotion. In addition, the forward models allow the machine to autonomously adapt its locomotion to deal with a change of terrain, loss of ground contact during the stance phase, stepping on or hitting an obstacle during the swing phase, and leg damage, and even to promote cockroach-like climbing behavior. Thus, the results presented here show that the employed embodied neural closed-loop system can be a powerful way of developing robust and adaptable machines. PMID:23408775

  16. Abstraction and Consolidation

    ERIC Educational Resources Information Center

    Monaghan, John; Ozmantar, Mehmet Fatih

    2006-01-01

    The framework for this paper is a recently developed theory of abstraction in context. The paper reports on data collected from one student working on tasks concerned with absolute value functions. It examines the relationship between mathematical constructions and abstractions. It argues that an abstraction is a consolidated construction that can…

  17. Database machines

    NASA Technical Reports Server (NTRS)

    Stiefel, M. L.

    1983-01-01

    The functions and performance characteristics of database machines (DBM), including machines currently being studied in research laboratories and those currently offered on a commercial basis, are discussed. The cost/benefit considerations that must be recognized in selecting a DBM are discussed, as well as the future outlook for such machines.

  18. A Directed Acyclic Graph-Large Margin Distribution Machine Model for Music Symbol Classification.

    PubMed

    Wen, Cuihong; Zhang, Jing; Rebelo, Ana; Cheng, Fanyong

    2016-01-01

    Optical Music Recognition (OMR) has received increasing attention in recent years. In this paper, we propose a classifier based on a new method named Directed Acyclic Graph-Large margin Distribution Machine (DAG-LDM). The DAG-LDM is an improvement of the Large margin Distribution Machine (LDM), which is a binary classifier that optimizes the margin distribution by maximizing the margin mean and minimizing the margin variance simultaneously. We modify the LDM to the DAG-LDM to solve the multi-class music symbol classification problem. Tests are conducted on more than 10000 music symbol images, obtained from handwritten and printed images of music scores. The proposed method provides superior classification capability and achieves much higher classification accuracy than the state-of-the-art algorithms such as Support Vector Machines (SVMs) and Neural Networks (NNs). PMID:26985826
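    LDM implementations are not available in common libraries, so the sketch below shows only the directed-acyclic-graph evaluation scheme, with ordinary SVCs standing in for the pairwise LDM classifiers; the node ordering is one simple choice among several.

    ```python
    # Sketch of DAG multi-class evaluation (ours): pairwise binary classifiers
    # arranged so each node eliminates one candidate class until one remains.
    from itertools import combinations
    import numpy as np
    from sklearn.datasets import load_digits
    from sklearn.svm import SVC

    X, y = load_digits(return_X_y=True)  # stand-in for music symbol images
    classes = list(np.unique(y))
    pairwise = {}
    for a, b in combinations(classes, 2):
        mask = (y == a) | (y == b)
        pairwise[(a, b)] = SVC(kernel="linear").fit(X[mask], y[mask])

    def dag_predict(x):
        candidates = list(classes)
        while len(candidates) > 1:       # each DAG node removes one class
            a, b = candidates[0], candidates[-1]
            winner = pairwise[(min(a, b), max(a, b))].predict(x.reshape(1, -1))[0]
            candidates.remove(b if winner == a else a)
        return candidates[0]

    print(dag_predict(X[0]), "vs true label", y[0])
    ```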

  19. A Directed Acyclic Graph-Large Margin Distribution Machine Model for Music Symbol Classification

    PubMed Central

    Wen, Cuihong; Zhang, Jing; Rebelo, Ana; Cheng, Fanyong

    2016-01-01

    Optical Music Recognition (OMR) has received increasing attention in recent years. In this paper, we propose a classifier based on a new method named Directed Acyclic Graph-Large margin Distribution Machine (DAG-LDM). The DAG-LDM is an improvement of the Large margin Distribution Machine (LDM), which is a binary classifier that optimizes the margin distribution by maximizing the margin mean and minimizing the margin variance simultaneously. We modify the LDM to the DAG-LDM to solve the multi-class music symbol classification problem. Tests are conducted on more than 10000 music symbol images, obtained from handwritten and printed images of music scores. The proposed method provides superior classification capability and achieves much higher classification accuracy than the state-of-the-art algorithms such as Support Vector Machines (SVMs) and Neural Networks (NNs). PMID:26985826

  20. Effects of imbalance and geometric error on precision grinding machines

    SciTech Connect

    Bibler, J.E.

    1997-06-01

    To study balancing in grinding, a simple mechanical system was examined. It was essential to study such a well-defined system, as opposed to a large, complex system such as a machining center. The use of a compact, well-defined system enabled easy quantification of the imbalance force input and its phase angle to any geometric decentering, and a good understanding of the machine mode shapes. It is important to understand a simple system such as the one I examined, given that imbalance is so intimately coupled to machine dynamics. It is possible to extend the results presented here to industrial machines, although that is not part of this work. In addition to the empirical testing, a simple mechanical system was modelled to examine how mode shapes, balance, and geometric error interact to yield spindle error motion. The results of this model are presented along with the results from a more global grinding model. The global model, presented at ASPE in November 1996, allows one to examine the effects of changing global machine parameters like stiffness and damping. This geometrically abstract, one-dimensional model is presented to demonstrate the usefulness of an abstract approach for first-order understanding, but it is not the main focus of this thesis. 19 refs., 36 figs., 10 tables.
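    The coupling of imbalance to machine dynamics can be sketched with a one-degree-of-freedom rotating-unbalance model; this simplification and all parameter values are ours, not the thesis model.

    ```python
    # Hedged 1-DOF sketch: steady-state response of a spring-mass-damper spindle
    # to rotating unbalance, X(w) = m_u*e*w^2 / sqrt((k - M*w^2)^2 + (c*w)^2).
    import math

    M, k, c = 50.0, 2.0e7, 400.0  # effective mass (kg), stiffness (N/m), damping (N s/m)
    m_u, e = 0.01, 1e-4           # unbalance mass (kg) and eccentricity (m)

    for rpm in (1000, 3000, 6000, 9000, 12000):
        w = 2 * math.pi * rpm / 60.0
        X = m_u * e * w**2 / math.sqrt((k - M * w**2) ** 2 + (c * w) ** 2)
        print(f"{rpm:6d} rpm: amplitude = {X * 1e9:8.1f} nm")
    # The peak near the resonant speed (about 6000 rpm here) illustrates why
    # balance, mode shapes and geometric error interplay in spindle error motion.
    ```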

  1. MOAtox: A comprehensive mode of action and acute aquatic toxicity database for predictive model development (SETAC abstract)

    EPA Science Inventory

    The mode of toxic action (MOA) has been recognized as a key determinant of chemical toxicity and as an alternative to chemical class-based predictive toxicity modeling. However, the development of quantitative structure activity relationship (QSAR) and other models has been limit...

  2. Abstracts and program proceedings of the 1994 meeting of the International Society for Ecological Modelling North American Chapter

    SciTech Connect

    Kercher, J.R.

    1994-06-01

    This document contains information about the 1994 meeting of the International Society for Ecological Modelling North American Chapter. The topics discussed include: extinction risk assessment modelling, ecological risk analysis of uranium mining, impacts of pesticides, demography, habitats, atmospheric deposition, and climate change.

  3. (abstract) A Test of the Theoretical Models of Bipolar Outflows: The Bipolar Outflow in Mon R2

    NASA Technical Reports Server (NTRS)

    Xie, Taoling; Goldsmith, Paul; Patel, Nimesh

    1993-01-01

    We report some results of a study of the massive bipolar outflow in the central region of the relatively nearby giant molecular cloud Monoceros R2. We make a quantitative comparison of our results with the Shu et al. outflow model, which incorporates a radially directed wind sweeping up the ambient material into a shell. We find that this simple model naturally explains the shape of this thin shell. Although Shu's model in its simplest form predicts, with reasonable parameters, too much mass at very small polar angles, as previously pointed out by Masson and Chernin, it provides a reasonably good fit to the mass distribution at larger polar angles. It is possible that this discrepancy is due to inhomogeneities in the ambient molecular gas, which are not considered by the model. We also discuss the constraints imposed by these results on recent jet-driven outflow models.

  4. An asymptotical machine

    NASA Astrophysics Data System (ADS)

    Cristallini, Achille

    2016-07-01

    A new and intriguing machine may be obtained by replacing the moving pulley of a gun tackle with a fixed point in the rope. Its most important feature is its asymptotic efficiency. Here we obtain a satisfactory description of this machine by means of vector calculus and elementary trigonometry. The mathematical model has been compared with experimental data and is briefly discussed.

  5. Abstraction and Problem Reformulation

    NASA Technical Reports Server (NTRS)

    Giunchiglia, Fausto

    1992-01-01

    In work done jointly with Toby Walsh, the author has provided a sound theoretical foundation to the process of reasoning with abstraction (GW90c, GW89, GW90b, GW90a). The notion of abstraction formalized in this work can be informally described as: (property 1) the process of mapping a representation of a problem, called (following historical convention (Sac74)) the 'ground' representation, onto a new representation, called the 'abstract' representation, which (property 2) helps deal with the problem in the original search space by preserving certain desirable properties and (property 3) is simpler to handle as it is constructed from the ground representation by "throwing away details". One desirable property preserved by an abstraction is provability; often there is a relationship between provability in the ground representation and provability in the abstract representation. Another can be deduction or, possibly, inconsistency. By 'throwing away details' we usually mean that the problem is described in a language with a smaller search space (for instance a propositional language or a language without variables) in which formulae of the abstract representation are obtained from the formulae of the ground representation by the use of some terminating rewriting technique. Often we require that the use of abstraction results in more efficient reasoning. However, it might simply increase the number of facts asserted (e.g. by allowing, in practice, the exploration of deeper search spaces or by implementing some form of learning). Among all abstractions, three very important classes have been identified. They relate the set of facts provable in the ground space to those provable in the abstract space. We call: TI abstractions all those abstractions where the abstractions of all the provable facts of the ground space are provable in the abstract space; TD abstractions all those abstractions where the 'unabstractions' of all the provable facts of the abstract space are

  6. Modeling the stress dependence of Barkhausen phenomena for stress axis linear and noncollinear with applied magnetic field (abstract)

    SciTech Connect

    Sablik, M.J.; Augustyniak, B.; Chmielewski, M.

    1996-04-01

    The almost linear dependence of the maximum Barkhausen noise signal amplitude on stress has made it a tool for nondestructive evaluation of residual stress. Recently, a model has been developed to account for the stress dependence of the Barkhausen noise signal. The model uses the development of Alessandro et al., who use coupled Langevin equations to derive an expression for the Barkhausen noise power spectrum. The model joins this expression to the magnetomechanical hysteresis model of Sablik et al., obtaining both a hysteretic and stress-dependent result for the magnetic-field-dependent Barkhausen noise envelope, and obtaining specifically the almost linear stress dependence of the Barkhausen noise maximum seen experimentally. In this paper, we extend the model to derive the angular dependence, observed by Kwun, of the Barkhausen noise amplitude when the stress axis is taken at different angles relative to the magnetic field. We also apply the model to the experimental observation that in XC10 French steel there is an apparent almost linear correlation with stress of hysteresis loss and of the integral of the Barkhausen noise signal over applied field H. Further, the two quantities, Barkhausen noise integral and hysteresis loss, are linearly correlated with each other. The model shows how that behavior is to be expected for the measured steel because of its sharply rising hysteresis curve. © 1996 American Institute of Physics.

  7. Abstraction in mathematics.

    PubMed

    Ferrari, Pier Luigi

    2003-07-29

    Some current interpretations of abstraction in mathematical settings are examined from different perspectives, including history and learning. It is argued that abstraction is a complex concept and that it cannot be reduced to generalization or decontextualization only. In particular, the links between abstraction processes and the emergence of new objects are shown. The role that representations have in abstraction is discussed, taking into account both the historical and the educational perspectives. As languages play a major role in mathematics, some ideas from functional linguistics are applied to explain to what extent mathematical notations are to be considered abstract. Finally, abstraction is examined from the perspective of mathematics education, to show that the teaching ideas resulting from one-dimensional interpretations of abstraction have proved utterly unsuccessful. PMID:12903658

  8. IRON PRECIPITATION AND ARSENIC ATTENUATION - ASSESSMENT OF ARSENIC NATURAL ATTENUATION OF THE SUBSURFACE USING A GEOCHEMICAL MODEL (PHREEQC): ABSTRACT

    EPA Science Inventory

    NRMRL-ADA-01310 Chen, J., Lin, Z., and Azadpour-Keeley, A. "Iron Precipitation and Arsenic Attenuation - Assessment of Arsenic Natural Attenuation of the Subsurface Using a Geochemical Model (PHRE...

  9. Machine learning methods for empirical streamflow simulation: a comparison of model accuracy, interpretability, and uncertainty in seasonal watersheds

    NASA Astrophysics Data System (ADS)

    Shortridge, Julie E.; Guikema, Seth D.; Zaitchik, Benjamin F.

    2016-07-01

    In the past decade, machine learning methods for empirical rainfall-runoff modeling have seen extensive development and been proposed as a useful complement to physical hydrologic models, particularly in basins where data to support process-based models are limited. However, the majority of research has focused on a small number of methods, such as artificial neural networks, despite the development of multiple other approaches for non-parametric regression in recent years. Furthermore, this work has often evaluated model performance based on predictive accuracy alone, while not considering broader objectives, such as model interpretability and uncertainty, that are important if such methods are to be used for planning and management decisions. In this paper, we use multiple regression and machine learning approaches (including generalized additive models, multivariate adaptive regression splines, artificial neural networks, random forests, and M5 cubist models) to simulate monthly streamflow in five highly seasonal rivers in the highlands of Ethiopia and compare their performance in terms of predictive accuracy, error structure and bias, model interpretability, and uncertainty when faced with extreme climate conditions. While the relative predictive performance of models differed across basins, data-driven approaches were able to achieve reduced errors when compared to physical models developed for the region. Methods such as random forests and generalized additive models may have advantages in terms of visualization and interpretation of model structure, which can be useful in providing insights into physical watershed function. However, the uncertainty associated with model predictions under extreme climate conditions should be carefully evaluated, since certain models (especially generalized additive models and multivariate adaptive regression splines) become highly variable when faced with high temperatures.

  10. Constructing query-driven dynamic machine learning model with application to protein-ligand binding sites prediction.

    PubMed

    Yu, Dong-Jun; Hu, Jun; Li, Qian-Mu; Tang, Zhen-Min; Yang, Jing-Yu; Shen, Hong-Bin

    2015-01-01

    We are facing an era in which annotated biological data are rapidly and continuously generated, and how to effectively incorporate new annotated data into the learning step is crucial for enhancing the performance of a bioinformatics prediction model. Although machine-learning-based methods have been extensively used for dealing with various biological problems, existing approaches usually train static prediction models based on fixed training datasets. Static approaches are found to have several disadvantages, such as low scalability and impracticality when the training dataset is huge. In view of this, we propose a dynamic learning framework for constructing query-driven prediction models. The key difference between the proposed framework and existing approaches is that the training set for the machine learning algorithm is dynamically generated according to the query input, as opposed to training a general model regardless of queries as in traditional static methods. Accordingly, a query-driven predictor, based on a smaller set of data specifically selected from the entire annotated base dataset, is applied to the query. This new way of constructing the dynamic model makes it possible to update the annotated base dataset flexibly, and using the most relevant core subset as the training set gives the constructed model better generalization ability on the query, showing a "part could be better than all" phenomenon. Following this framework, we have implemented a dynamic protein-ligand binding sites predictor called OSML (On-site model for ligand binding sites prediction). Computer experiments on 10 different ligand types of three hierarchically organized levels show that OSML outperforms most existing predictors. The results indicate that the current dynamic framework is a promising direction for bridging the gap between rapidly accumulating annotated biological data and effective machine-learning-based predictors. OSML
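    The query-driven idea can be sketched as follows (a toy version of ours, not the OSML implementation): select the core subset of the annotated base nearest to the query and train a model on that subset only.

    ```python
    # Toy sketch: per-query training on the most relevant core subset.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.neighbors import NearestNeighbors
    from sklearn.svm import SVC

    X_base, y_base = make_classification(n_samples=2000, n_features=20,
                                         random_state=0)  # annotated base dataset
    index = NearestNeighbors(n_neighbors=500).fit(X_base)

    def query_driven_predict(query):
        _, idx = index.kneighbors(query.reshape(1, -1))    # relevant core subset
        subset = idx[0]
        model = SVC().fit(X_base[subset], y_base[subset])  # model built per query
        return model.predict(query.reshape(1, -1))[0]

    # New annotated data can simply be appended to X_base/y_base and re-indexed,
    # instead of retraining one static model on the whole collection.
    print(query_driven_predict(X_base[0]))
    ```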

  11. [Prediction model of net photosynthetic rate of ginseng under forest based on optimized parameters support vector machine].

    PubMed

    Wu, Hai-wei; Yu, Hai-ye; Zhang, Lei

    2011-05-01

    Using the K-fold cross-validation method with two support vector machine formulations, four kernel functions, grid search, a genetic algorithm and particle swarm optimization, the authors constructed the support vector machine models with the best penalty parameter c and the best correlation coefficient. Using information granulation technology, the authors constructed a P particle and an epsilon particle for the factors affecting net photosynthetic rate, reducing the dimensionality of the determinant. The P particle includes the percentages of visible spectrum ingredients; the epsilon particle includes leaf temperature, scattering radiation, air temperature, and so on. With this technology it is possible to obtain the best correlation coefficient among photosynthetic effective radiation, visible spectrum and individual net photosynthetic rate. The authors constructed the training set and the forecasting set including photosynthetic effective radiation, the P particle and the epsilon particle. The results show that the epsilon-SVR-RBF-genetic algorithm model, the nu-SVR-linear-grid-search model and the nu-SVR-RBF-genetic algorithm model achieve a correlation coefficient of up to 97% on the forecasting set including photosynthetic effective radiation and the P particle. The penalty parameter c of the nu-SVR-linear-grid-search model is the smallest, so that model's generalization ability is the best. The authors forecasted the forecasting set including photosynthetic effective radiation, the P particle and the epsilon particle with the model, and the correlation coefficient is up to 96%. PMID:21800612

  12. Least Square Support Vector Machine Modelling of Breakdown Voltage of Solid Insulating Materials in the Presence of Voids

    NASA Astrophysics Data System (ADS)

    Behera, S.; Tripathy, R. K.; Mohanty, S.

    2013-03-01

    The least squares formulation of the support vector machine (SVM) was recently proposed and derived from statistical learning theory; it marks a further development in learning from examples, alongside neural networks, radial basis functions, splines and other functions. Here the least square support vector machine (LS-SVM) is used as a machine learning technique for the prediction of the breakdown voltage of solid insulators. The breakdown voltage due to partial discharge of five solid insulating materials under AC conditions has been predicted as a function of four input parameters, namely the thickness of the insulating sample t, the diameter of the void d, the thickness of the void t1 and the relative permittivity of the material εr, using the LS-SVM model. The requisite training data were obtained from experimental studies performed on a cylindrical-plane electrode system, in which voids of different dimensions were artificially created. Detailed studies have been carried out to determine the LS-SVM parameters which give the best result. At the completion of training it is found that the LS-SVM model is capable of predicting the breakdown voltage Vb = f(t, t1, d, εr) very efficiently and with a small mean absolute error.
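    The computational appeal of the least-squares formulation is that training reduces to solving a single linear (KKT) system rather than a quadratic program. A minimal sketch follows; the kernel width, regularization and synthetic stand-in data are our assumptions.

    ```python
    # Minimal LS-SVM regression sketch (ours): solve the KKT linear system
    #   [ 0      1^T         ] [b    ]   [0]
    #   [ 1      K + I/gamma ] [alpha] = [y]
    import numpy as np

    def rbf_kernel(A, B, sigma=1.0):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma**2))

    def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
        n = len(y)
        K = rbf_kernel(X, X, sigma)
        top = np.hstack([[0.0], np.ones(n)])
        bottom = np.hstack([np.ones((n, 1)), K + np.eye(n) / gamma])
        A = np.vstack([top, bottom])
        sol = np.linalg.solve(A, np.hstack([[0.0], y]))
        return sol[0], sol[1:]               # bias b, dual weights alpha

    def lssvm_predict(X_train, alpha, b, X_new, sigma=1.0):
        return rbf_kernel(X_new, X_train, sigma) @ alpha + b

    # Synthetic stand-in for (t, t1, d, eps_r) -> breakdown voltage data
    X = np.random.default_rng(0).random((40, 4))
    y = 10 + 5 * X[:, 0] - 3 * X[:, 1] + 2 * X[:, 2] * X[:, 3]
    b, alpha = lssvm_fit(X, y)
    print(lssvm_predict(X, alpha, b, X[:3]), y[:3])  # close on training points
    ```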

  13. The evolving market structures of gambling: case studies modelling the socioeconomic assignment of gaming machines in Melbourne and Sydney, Australia.

    PubMed

    Marshall, David C; Baker, Robert G V

    2002-01-01

    The expansion of gambling industries worldwide is intertwined with the growing government dependence on gambling revenue for fiscal assignments. In Australia, electronic gaming machines (EGMs) have dominated recent gambling industry growth. As EGMs have proliferated, growing recognition has emerged that EGM distribution closely reflects levels of socioeconomic disadvantage. More machines are located in less advantaged regions. This paper analyses time-series socioeconomic distributions of EGMs in Melbourne, Australia, an immature EGM market, and then compares the findings with the mature market in Sydney. Similar findings in both cities suggest that market assignment of EGMs transcends differences in historical and legislative environments. This indicates that similar underlying structures are evident in both markets. Modelling the spatial structures of gambling markets provides an opportunity to identify regions most at risk of gambling related problems. Subsequently, policies can be formulated which ensure fiscal revenue from gambling can be better targeted towards regions likely to be most afflicted by excessive gambling-related problems. PMID:12375384

  14. Modelling and calibration technique of laser triangulation sensors for integration in robot arms and articulated arm coordinate measuring machines.

    PubMed

    Santolaria, Jorge; Guillomía, David; Cajal, Carlos; Albajez, José A; Aguilar, Juan J

    2009-01-01

    A technique for the intrinsic and extrinsic calibration of a laser triangulation sensor (LTS) integrated in an articulated arm coordinate measuring machine (AACMM) is presented in this paper. After applying a novel approach to the AACMM kinematic parameter identification problem, by means of a single calibration gauge object, a one-step calibration method has been developed to obtain both the intrinsic parameters (laser plane, CCD sensor and camera geometry) and the extrinsic parameters related to the AACMM main frame. This allows the integration of the LTS and AACMM mathematical models without the need for additional optimization methods after the prior sensor calibration, which is usually done in a coordinate measuring machine (CMM) before the assembly of the sensor in the arm. The experimental test results for accuracy and repeatability show the suitable performance of this technique, resulting in a reliable, quick and friendly calibration method for the AACMM end user. The presented method is also valid for sensor integration in robot arms and CMMs. PMID:22400001

  15. System identification modeling of ship manoeuvring motion in 4 degrees of freedom based on support vector machines

    NASA Astrophysics Data System (ADS)

    Wang, Xue-gang; Zou, Zao-jian; Yu, Long; Cai, Wei

    2015-06-01

    Based on support vector machines, three modeling methods, i.e., white-box modeling, grey-box modeling and black-box modeling of ship manoeuvring motion in 4 degrees of freedom, are investigated. With the whole-ship mathematical model for ship manoeuvring motion, in which the hydrodynamic coefficients are obtained from roll planar motion mechanism tests, some zigzag tests and turning circle manoeuvres are simulated. In the white-box and grey-box modeling, training data taken every 5 s from the simulated 20°/20° zigzag test are used, while in the black-box modeling, training data taken every 5 s from the simulated 15°/15° and 20°/20° zigzag tests and 15° and 25° turning manoeuvres are used; the trained support vector machines are then used to predict the whole 20°/20° zigzag test. Comparisons between the simulated and predicted 20°/20° zigzag tests show the good predictive ability of the proposed methods. Besides, all mathematical models obtained by the proposed modeling methods are used to predict the 10°/10° zigzag test and 35° turning circle manoeuvre, and the predicted results are compared with those of simulation tests to demonstrate the good generalization performance of the mathematical models. Finally, the proposed modeling methods are analyzed and compared with each other in terms of application conditions, prediction accuracy and computation speed. The appropriate modeling method can be chosen according to the intended use of the mathematical models and the data available for system identification.

  16. Seasonal Water Storage Variations as Impacted by Water Abstractions: Comparing the Output of a Global Hydrological Model with GRACE and GPS Observations

    NASA Astrophysics Data System (ADS)

    Döll, Petra; Fritsche, Mathias; Eicker, Annette; Müller Schmied, Hannes

    2014-11-01

    Better quantification of continental water storage variations is expected to improve our understanding of water flows, including evapotranspiration, runoff and river discharge as well as human water abstractions. For the first time, total water storage (TWS) on the land area of the globe as computed by the global water model WaterGAP (Water Global Assessment and Prognosis) was compared to both gravity recovery and climate experiment (GRACE) and global positioning system (GPS) observations. The GRACE satellites sense the effect of TWS on the dynamic gravity field of the Earth. GPS reference points are displaced due to crustal deformation caused by time-varying TWS. Unfortunately, the worldwide coverage of the GPS tracking network is irregular, while GRACE provides global coverage albeit with low spatial resolution. Detrended TWS time series were analyzed by determining scaling factors for mean annual amplitude (f_GRACE) and time series of monthly TWS (f_GPS). Both GRACE and GPS indicate that WaterGAP underestimates seasonal variations of TWS on most of the land area of the globe. In addition, seasonal maximum TWS occurs 1 month earlier according to WaterGAP than according to GRACE on most land areas. While WaterGAP TWS is sensitive to the applied climate input data, neither of the two data sets results in a clearly better fit to the observations. Due to the low number of GPS sites, GPS observations are less useful for validating global hydrological models than GRACE observations, but they serve to support the validity of GRACE TWS as an observational target for hydrological modeling. For unknown reasons, WaterGAP appears to fit better to GPS than to GRACE. Both GPS and GRACE data, however, are rather uncertain for a number of reasons, in particular in dry regions. It is not possible to benefit from either GPS or GRACE observations to monitor and quantify human water abstractions if only detrended (seasonal) TWS variations are considered. Regarding GRACE, this is

  17. Agent-Based Modelling of Agricultural Water Abstraction in Response to Climate, Policy, and Demand Changes: Results from East Anglia, UK

    NASA Astrophysics Data System (ADS)

    Swinscoe, T. H. A.; Knoeri, C.; Fleskens, L.; Barrett, J.

    2014-12-01

    Freshwater is a vital natural resource for multiple needs, such as drinking water for the public, industrial processes, hydropower for energy companies, and irrigation for agriculture. In the UK, crop production is the largest in East Anglia, while at the same time the region is also the driest, with average annual rainfall between 560 and 720 mm (1971 to 2000). Many water catchments of East Anglia are reported as over licensed or over abstracted. Therefore, freshwater available for agricultural irrigation abstraction in this region is becoming both increasingly scarce due to competing demands, and increasingly variable and uncertain due to climate and policy changes. It is vital for water users and policy makers to understand how these factors will affect individual abstractors and water resource management at the system level. We present first results of an Agent-based Model that captures the complexity of this system as individual abstractors interact, learn and adapt to these internal and external changes. The purpose of this model is to simulate what patterns of water resource management emerge on the system level based on local interactions, adaptations and behaviours, and what policies lead to a sustainable water resource management system. The model is based on an irrigation abstractor typology derived from a survey in the study area, to capture individual behavioural intentions under a range of water availability scenarios, in addition to farm attributes, and demographics. Regional climate change scenarios, current and new abstraction licence reforms by the UK regulator, such as water trading and water shares, and estimated demand increases from other sectors were used as additional input data. Findings from the integrated model provide new understanding of the patterns of water resource management likely to emerge at the system level.
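    A toy sketch of the agent-based structure described here (all agents, numbers and rules are invented for illustration): irrigator agents take the minimum of their demand and an availability-scaled licence, and the system-level abstraction pattern emerges from the individual decisions.

    ```python
    # Toy ABM sketch: licence-capped irrigation abstraction under a drying climate.
    import random

    random.seed(1)
    agents = [{"licence": random.uniform(50, 150), "demand": random.uniform(40, 160)}
              for _ in range(100)]

    for year, availability in enumerate([1.0, 0.8, 0.6], start=1):  # wetter -> drier
        total = 0.0
        for a in agents:
            cap = a["licence"] * availability  # regulator scales licences down
            take = min(a["demand"], cap)       # agent abstracts what it can
            a["demand"] *= 1.02                # demand growth from other pressures
            total += take
        print(f"year {year}: total abstraction = {total:,.0f} units")
    ```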

  18. (abstract) Electron Impact Emission Cross Sections for Modeling UV Auroral and Dayglow Observations of the Upper Atmospheres of Planets

    NASA Technical Reports Server (NTRS)

    Ajello, J. M.; Shemansky, D. E.; James, G.; Kanik, I.; Slevin, J. A.

    1993-01-01

    In the upper atmospheres of the Jovian and Terrestrial planets a dominant mechanism for energy transfer occurs through electron collisional processes with neutral species leading to UV radiation. In response to the need for accurate collision cross sections to model spectroscopic observations of planetary systems, JPL has measured in the laboratory emission cross sections and medium resolution spectra of H, H(sub 2), N(sub 2), SO(sub 2), and other important planetary gases. Voyager and International Ultraviolet Explorer (IUE) spacecraft have established that band systems of H(sub 2) and N(sub 2) are the dominant UV molecular emissions in the solar system produced by electron impact. Applications of our data to models of Voyager, IUE, Galileo, and Hubble Space Telescope observations of the planets will be described.

  19. Loving Those Abstracts

    ERIC Educational Resources Information Center

    Stevens, Lori

    2004-01-01

    The author describes a lesson she did on abstract art with her high school art classes. She passed out a required step-by-step outline of the project process. She asked each of them to look at abstract art. They were to list five or six abstract artists they thought were interesting, narrow their list down to the one most personally intriguing,…

  20. Estimating the period and Q of the Chandler Wobble from observations and models of its excitation (Abstract)

    NASA Astrophysics Data System (ADS)

    Gross, R.; Nastula, J.

    2015-08-01

    Any irregularly shaped solid body rotating about some axis that is not aligned with its figure axis will freely wobble as it rotates. For the Earth, this free wobble is known as the Chandler wobble in honor of S.C. Chandler, Jr., who first observed it in 1891. Unlike the forced wobbles of the Earth, such as the annual wobble, whose periods are the same as the periods of the forcing mechanisms, the period of the free Chandler wobble is a function of the internal structure and rheology of the Earth, and its decay time constant, or quality factor Q, is a function of the dissipation mechanism(s), like mantle anelasticity, that are acting to dampen it. Improved estimates of the period and Q of the Chandler wobble can therefore be used to improve our understanding of these properties of the Earth. Here, estimates of the period and Q of the Chandler wobble are obtained by finding those values that minimize the power within the Chandler band of the difference between observed and modeled polar motion excitation spanning 1962-2010. Atmosphere, ocean, and hydrology models are used to model the excitation caused by both mass and motion variations within these global geophysical fluids. Direct observations of the excitation caused by mass variations as determined from GRACE time-varying gravitational field measurements are also used. The resulting estimates of the period and Q of the Chandler wobble will be presented along with a discussion of the robustness of the estimates.
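
    A hedged sketch of the estimation idea follows: for trial values of the period T and quality factor Q, polar motion is deconvolved into geodetic excitation via chi = p + (i/sigma_c) dp/dt with sigma_c = (2*pi/T)(1 + i/2Q), and the (T, Q) pair minimizing the Chandler-band power of the difference from modeled excitation is kept. The data below are synthetic; the actual 1962-2010 processing is not reproduced.

    ```python
    # Hedged sketch: grid search over trial period T (years) and Q, keeping the
    # pair that minimizes Chandler-band power of (geodetic - modeled) excitation.
    import numpy as np

    def band_power(series, dt, f_lo=0.77, f_hi=0.90):
        """Power in a frequency band (cycles/year); Chandler ~0.843 cpy."""
        freqs = np.fft.fftfreq(series.size, d=dt)
        spec = np.abs(np.fft.fft(series)) ** 2
        return spec[(freqs >= f_lo) & (freqs <= f_hi)].sum()

    def chandler_misfit(p, chi_model, dt, period_yr, Q):
        sigma_c = 2 * np.pi / period_yr * (1 + 1j / (2 * Q))
        chi_geo = p + (1j / sigma_c) * np.gradient(p, dt)  # deconvolved excitation
        return band_power(chi_geo - chi_model, dt)

    dt = 1 / 12  # monthly sampling, in years
    t = np.arange(0, 48, dt)
    rng = np.random.default_rng(0)
    p = np.exp(2j * np.pi * t / 1.183) + 0.1 * (rng.standard_normal(t.size)
                                                + 1j * rng.standard_normal(t.size))
    chi_model = np.zeros_like(p)  # placeholder for the fluid-model excitation
    grid = [(T, Q) for T in np.linspace(1.15, 1.22, 15) for Q in (30, 50, 100, 200)]
    best = min(grid, key=lambda g: chandler_misfit(p, chi_model, dt, *g))
    print(best)  # should recover a period near the injected 1.183 yr
    ```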

  1. Weibull Multiplicative Model and Machine Learning Models for Full-Automatic Dark-Spot Detection from SAR Images

    NASA Astrophysics Data System (ADS)

    Taravat, A.; Del Frate, F.

    2013-09-01

    As a major aspect of marine pollution, oil release into the sea has serious biological and environmental impacts. Among remote sensing systems, which offer non-destructive investigation methods, synthetic aperture radar (SAR) can provide valuable synoptic information about the position and size of an oil spill, owing to its wide area coverage and day/night, all-weather capabilities. In this paper we present a new automated method for oil-spill monitoring, based on the combination of a Weibull Multiplicative Model and machine learning techniques to differentiate between dark spots and the background. First, a filter based on the Weibull Multiplicative Model is applied to each sub-image. Second, the sub-image is segmented by two different neural network techniques (Pulse-Coupled Neural Networks and Multilayer Perceptron Neural Networks). As the last step, a very simple filtering process is used to eliminate false targets. The proposed approaches were tested on 20 ENVISAT and ERS2 images containing dark spots, with the same parameters used in all tests. For the overall dataset, average accuracies of 94.05 % and 95.20 % were obtained for the PCNN and MLP methods, respectively. The average computational time for dark-spot detection in a 256 × 256 image is about 4 s for PCNN segmentation implemented in IDL, currently the fastest reported in this field. Our experimental results demonstrate that the proposed approach is fast, robust and effective, and can be applied to future spaceborne SAR images.
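
    As a stand-in illustration of the segmentation stage only (the Weibull Multiplicative Model filter and the PCNN are not reproduced here), the sketch below classifies pixels of a synthetic speckled sub-image as dark spot versus background with a small MLP on local-mean features.

    ```python
    # Stand-in sketch for the second stage: classify pixels of a SAR-like
    # sub-image as dark spot vs. background with an MLP on local-mean features.
    import numpy as np
    from scipy.ndimage import uniform_filter
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(1)
    img = rng.gamma(shape=4.0, scale=1.0, size=(256, 256))  # speckle-like clutter
    img[100:140, 80:160] *= 0.3                             # synthetic dark spot
    truth = np.zeros(img.shape, dtype=int)
    truth[100:140, 80:160] = 1

    # Features per pixel: intensity plus local means at two window sizes.
    feats = np.stack([img, uniform_filter(img, 5), uniform_filter(img, 11)], axis=-1)
    X = feats.reshape(-1, 3)
    y = truth.ravel()

    idx = rng.choice(X.shape[0], 5000, replace=False)       # subsample for training
    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
    clf.fit(X[idx], y[idx])
    mask = clf.predict(X).reshape(img.shape)
    print((mask == truth).mean())                           # pixel accuracy
    ```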

  2. Using detailed inter-network simulation and model abstraction to investigate and evaluate joint battlespace infosphere (JBI) support technologies

    NASA Astrophysics Data System (ADS)

    Green, David M.; Dallaire, Joel D.; Reaper, Jerome H.

    2004-08-01

    The Joint Battlespace Infosphere (JBI) program is performing a technology investigation into global communications, data mining and warehousing, and data fusion technologies by focusing on techniques and methodologies that support twenty-first century military distributed collaboration. Advancement of these technologies is vitally important if military decision makers are to have the right data, in the right format, at the right time and place to support making the right decisions within available timelines. Quantitative evaluation of the individual and combined effects arising from the application of technologies within a framework is presently far too complex to carry out at more than a cursory depth. In order to facilitate quantitative analysis under these circumstances, the Distributed Information Enterprise Modeling and Simulation (DIEMS) team was formed to apply modeling and simulation (M&S) techniques to help address JBI analysis challenges. The DIEMS team has been tasked with utilizing collaborative distributed M&S architectures to quantitatively evaluate JBI technologies and tradeoffs. This paper first presents a high level view of the DIEMS project. Once this approach has been established, a more concentrated view of the detailed communications simulation techniques used in generating the underlying support data sets is presented.

  3. Agenda, extended abstracts, and bibliographies for a workshop on Deposit modeling, mineral resources assessment, and their role in sustainable development

    USGS Publications Warehouse

    Briskey, Joseph A., (Edited By); Schulz, Klaus J.

    2002-01-01

    Global demand for mineral resources continues to increase because of increasing global population and the desire and efforts to improve living standards worldwide. The ability to meet this growing demand for minerals is affected by the concerns about possible environmental degradation associated with minerals production and by competing land uses. Informed planning and decisions concerning sustainability and resource development require a long-term perspective and an integrated approach to land-use, resource, and environmental management worldwide. This, in turn, requires unbiased information on the global distribution of identified and especially undiscovered resources, the economic and political factors influencing their development, and the potential environmental consequences of their exploitation. The purpose of the IGC workshop is to review the state-of-the-art in mineral-deposit modeling and quantitative resource assessment and to examine their role in the sustainability of mineral use. The workshop will address such questions as: Which of the available mineral-deposit models and assessment methods are best suited for predicting the locations, deposit types, and amounts of undiscovered nonfuel mineral resources remaining in the world? What is the availability of global geologic, mineral deposit, and mineral-exploration information? How can mineral-resource assessments be used to address economic and environmental issues? Presentations will include overviews of assessment methods used in previous national and other small-scale assessments of large regions as well as resulting assessment products and their uses.

  4. Using the Relevance Vector Machine Model Combined with Local Phase Quantization to Predict Protein-Protein Interactions from Protein Sequences.

    PubMed

    An, Ji-Yong; Meng, Fan-Rong; You, Zhu-Hong; Fang, Yu-Hong; Zhao, Yu-Jun; Zhang, Ming

    2016-01-01

    We propose a novel computational method known as RVM-LPQ that combines the Relevance Vector Machine (RVM) model and Local Phase Quantization (LPQ) to predict PPIs from protein sequences. The main improvements are the results of representing protein sequences using the LPQ feature representation on a Position Specific Scoring Matrix (PSSM), reducing the influence of noise using a Principal Component Analysis (PCA), and using a Relevance Vector Machine (RVM) based classifier. We perform 5-fold cross-validation experiments on Yeast and Human datasets, and we achieve very high accuracies of 92.65% and 97.62%, respectively, which are significantly better than those of previous works. To further evaluate the proposed method, we compare it with the state-of-the-art support vector machine (SVM) classifier on the Yeast dataset. The experimental results demonstrate that our RVM-LPQ method is clearly better than the SVM-based method. The promising experimental results show the efficiency and simplicity of the proposed method, which can be an automatic decision support tool for future proteomics research. PMID:27314023
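
    A structural sketch of such a pipeline is given below. The LPQ-on-PSSM features and the RVM classifier are specialized, so synthetic feature vectors, PCA noise reduction, and scikit-learn's SVC stand in for them; only the shape of the workflow (features, then PCA, then a classifier under 5-fold CV) mirrors the paper.

    ```python
    # Hedged pipeline sketch with stand-ins: synthetic descriptors, PCA noise
    # reduction, and an SVC classifier evaluated by 5-fold cross-validation.
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import PCA
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.standard_normal((400, 256))        # stand-in for LPQ descriptors
    y = rng.integers(0, 2, 400)                # interacting vs. non-interacting
    X[y == 1, :8] += 1.0                       # inject a weak class signal

    model = make_pipeline(StandardScaler(), PCA(n_components=50), SVC())
    scores = cross_val_score(model, X, y, cv=5)
    print(scores.mean())
    ```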

  5. Using the Relevance Vector Machine Model Combined with Local Phase Quantization to Predict Protein-Protein Interactions from Protein Sequences

    PubMed Central

    An, Ji-Yong; Meng, Fan-Rong; You, Zhu-Hong; Fang, Yu-Hong; Zhao, Yu-Jun; Zhang, Ming

    2016-01-01

    We propose a novel computational method known as RVM-LPQ that combines the Relevance Vector Machine (RVM) model and Local Phase Quantization (LPQ) to predict PPIs from protein sequences. The main improvements are the results of representing protein sequences using the LPQ feature representation on a Position Specific Scoring Matrix (PSSM), reducing the influence of noise using a Principal Component Analysis (PCA), and using a Relevance Vector Machine (RVM) based classifier. We perform 5-fold cross-validation experiments on Yeast and Human datasets, and we achieve very high accuracies of 92.65% and 97.62%, respectively, which are significantly better than those of previous works. To further evaluate the proposed method, we compare it with the state-of-the-art support vector machine (SVM) classifier on the Yeast dataset. The experimental results demonstrate that our RVM-LPQ method is clearly better than the SVM-based method. The promising experimental results show the efficiency and simplicity of the proposed method, which can be an automatic decision support tool for future proteomics research. PMID:27314023

  6. Modeling and Control of a Double-effect Absorption Refrigerating Machine

    NASA Astrophysics Data System (ADS)

    Hihara, Eiji; Yamamoto, Yuuji; Saito, Takamoto; Nagaoka, Yoshikazu; Nishiyama, Noriyuki

    Because the heat capacity of absorption refrigerating machines is large compared with that of vapor compression refrigerating machines, their dynamic response to changes in cooling load conditions is a problem to be improved. A control method for the energy input and the weak solution flow rate following cooling load variations was investigated. As the changes in cooling load and cooling capacity are moderate, the optimal operating conditions corresponding to the cooling load can be estimated from steady-state characteristics. If the relation between the cooling load and the optimal operating conditions is well known, a feed-forward control can be employed. In this report a new control algorithm, called MOL (Multi-variable Open Loop) control, is proposed. Compared with the conventional chilled-water outlet temperature proportional control, the MOL control enables smooth changes in cooling capacity and a reduction in fuel consumption.
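
    A toy rendering of the feed-forward idea: if the steady-state map from cooling load to optimal operating conditions is known, both actuators can be set open-loop from the measured load. The map values below are invented placeholders.

    ```python
    # Toy feed-forward (MOL-style) control sketch: optimal energy input and weak
    # solution flow rate are read from a precomputed steady-state map as
    # functions of the measured cooling load (map values are placeholders).
    import numpy as np

    load_grid = np.array([20.0, 40.0, 60.0, 80.0, 100.0])   # cooling load, %
    fuel_map = np.array([18.0, 35.0, 52.0, 70.0, 90.0])     # optimal fuel input, %
    flow_map = np.array([30.0, 45.0, 60.0, 72.0, 85.0])     # optimal solution flow, %

    def mol_setpoints(load):
        """Multi-variable open-loop control: look up both actuators at once."""
        return (np.interp(load, load_grid, fuel_map),
                np.interp(load, load_grid, flow_map))

    for load in (25.0, 55.0, 95.0):
        fuel, flow = mol_setpoints(load)
        print(f"load {load:5.1f}% -> fuel {fuel:5.1f}%, solution flow {flow:5.1f}%")
    ```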

  7. SWAT and River-2D Modelling of Pinder River for Analysing Snow Trout Habitat under Different Flow Abstraction Scenarios

    NASA Astrophysics Data System (ADS)

    Nale, J. P.; Gosain, A. K.; Khosa, R.

    2015-12-01

    Pinder River, one of the major headstreams of River Ganga, originates in the Pindari Glaciers of the Kumaon Himalayas and, after passing through rugged gorges, meets the Alaknanda at Karanprayag, forming one of the five celestial confluences of the Upper Ganga region. While other sub-basins of the Upper Ganga are facing severe ecological losses, the Pinder basin is still in its virginal state and is well known for its beautiful valleys, besides being host to unique and rare biodiversity. A proposed 252 MW run-of-river hydroelectric project at Devsari on this river has been a major concern on account of its perceived potential for serious environmental and social impacts. In this context, the study presented tries to analyse the expected changes in aquatic habitat conditions after this project is operational (with different operation policies). The SWAT hydrological modelling platform has been used to derive stream flow simulations under various scenarios ranging from the present to the likely future conditions. To analyse the habitat conditions, a two-dimensional hydraulic-habitat model, 'River-2D', a module of the iRIC software, is used. Snow trout has been identified as the target keystone species, and its habitat preferences, in the form of flow depths, flow velocity and substrate condition, are obtained from diverse sources of related literature and are provided as Habitat Suitability Indices to River-2D. Bed morphology constitutes an important River-2D input and has been obtained, for the designated 1 km long study reach of the Pinder up to Karanprayag, from a combination of actual field observations supplemented by SRTM 1 Arc-Second Global digital elevation data. Monthly Weighted Usable Areas for three different life stages (spawning, juvenile and adult) of snow trout are obtained for seven different flow discharges ranging from 10 to 1000 cumec. Comparing the present and proposed future river flow conditions obtained from SWAT modelling, losses in Weighted Usable Area, for the
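
    For context, Weighted Usable Area is conventionally computed as WUA = sum over cells of A_i * CSI_i, where CSI_i is the composite suitability of mesh cell i (here taken as the product of depth, velocity and substrate suitabilities). The sketch below uses made-up suitability curves, not the snow trout preferences compiled in the study.

    ```python
    # Illustrative Weighted Usable Area (WUA) computation over a tiny mesh;
    # suitability curves are invented placeholders, not the study's indices.
    import numpy as np

    def suitability(x, xp, fp):
        return np.interp(x, xp, fp)  # piecewise-linear habitat suitability curve

    depth = np.array([0.2, 0.5, 0.9, 1.4])      # m, one value per mesh cell
    velocity = np.array([0.3, 0.8, 1.2, 2.0])   # m/s
    substrate = np.array([0.9, 0.7, 0.8, 0.4])  # suitability assigned directly
    area = np.array([25.0, 25.0, 25.0, 25.0])   # m^2 per cell

    s_depth = suitability(depth, [0.0, 0.4, 1.0, 2.0], [0.0, 1.0, 1.0, 0.2])
    s_vel = suitability(velocity, [0.0, 0.5, 1.0, 2.5], [0.2, 1.0, 0.6, 0.0])
    csi = s_depth * s_vel * substrate            # composite suitability per cell
    print("WUA =", (area * csi).sum(), "m^2")
    ```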

  8. (abstract) A Polarimetric Model for Effects of Brine Infiltrated Snow Cover and Frost Flowers on Sea Ice Backscatter

    NASA Technical Reports Server (NTRS)

    Nghiem, S. V.; Kwok, R.; Yueh, S. H.

    1995-01-01

    A polarimetric scattering model is developed to study the effects of snow cover and frost flowers with brine infiltration on thin sea ice. Leads containing thin sea ice in the Arctic icepack are important to heat exchange with the atmosphere and salt flux into the upper ocean. Surface characteristics of thin sea ice in leads are dominated by the formation of frost flowers with high salinity. In many cases, the thin sea ice layer is covered by snow, which wicks up brine from the sea ice by capillary force. Snow and frost flowers have a significant impact on polarimetric signatures of thin ice, which needs to be studied for assessing the retrieval of geophysical parameters such as ice thickness. The frost flower or snow layer is modeled as a heterogeneous mixture consisting of randomly oriented ellipsoids and brine infiltration in an air background. Ice crystals are characterized with three different axial lengths to depict the nonspherical shape. Under the covering multispecies medium, the columnar sea-ice layer is an inhomogeneous anisotropic medium composed of ellipsoidal brine inclusions preferentially oriented in the vertical direction in an ice background. The underlying medium is homogeneous sea water. This configuration is described with layered inhomogeneous media containing multiple species of scatterers. The species are allowed to have different size, shape, and permittivity. The strong permittivity fluctuation theory is extended to account for the multiple species in the derivation of effective permittivities, with distributions of scatterer orientations characterized by Eulerian rotation angles. Polarimetric backscattering coefficients are obtained consistently with the same physical description used in the effective permittivity calculation. The multispecies model allows the inclusion of high-permittivity species to study effects of brine-infiltrated snow cover and frost flowers on thin ice. The results suggest that the frost cover with a rough interface

  9. (abstract) Using TOPEX/Poseidon Sea Level Observations to Test the Sensitivity of an Ocean Model to Wind Forcing

    NASA Technical Reports Server (NTRS)

    Fu, Lee-Lueng; Chao, Yi

    1996-01-01

    It has been demonstrated that current-generation global ocean general circulation models (OGCMs) are able to simulate large-scale sea level variations fairly well. In this study, a GFDL/MOM-based OGCM was used to investigate its sensitivity to different wind forcing. Simulations of global sea level using wind forcing from the ERS-1 Scatterometer and the NMC operational analysis were compared to the observations made by the TOPEX/Poseidon (T/P) radar altimeter for a two-year period. The results of the study demonstrate the sensitivity of the OGCM to the quality of the wind forcing, as well as the value of using two spaceborne sensors synergistically to advance the study of wind-driven ocean dynamics.

  10. Designing a stencil compiler for the Connection Machine model CM-5

    SciTech Connect

    Brickner, R.G.; Holian, K.; Thiagarajan, B.; Johnsson, S.L.

    1994-12-31

    In this paper the authors present the design of a stencil compiler for the Connection Machine system CM-5. The stencil compiler will optimize the data motion between processing nodes, minimize the data motion within a node, and minimize the data motion between registers and local memory in a node. The compiler will natively support two-dimensional stencils, but stencils in three dimensions will be automatically decomposed. Lower dimensional stencils are treated as degenerate stencils. The compiler will be integrated as part of the CM Fortran programming system. Much of the compiler code will be adapted from the CM-2/200 stencil compiler, which is part of CMSSL (the Connection Machine Scientific Software Library) Release 3.1 for the CM-2/200, and the compiler will be available as part of the Connection Machine Scientific Software Library (CMSSL) for the CM-5. In addition to setting down design considerations, they report on the implementation status of the stencil compiler. In particular, they discuss optimization strategies and status of code conversion from CM-2/200 to CM-5 architecture, and report on the measured performance of prototype target code which the compiler will generate.
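
    For context, a minimal example of the kind of kernel such a compiler optimizes: a 5-point two-dimensional stencil sweep, written here in plain NumPy rather than CM Fortran.

    ```python
    # A 5-point 2D stencil (Jacobi-style update): the neighbor-access pattern
    # whose data motion a stencil compiler would schedule and optimize.
    import numpy as np

    def five_point_stencil(u):
        """One sweep: each interior point becomes the average of its 4 neighbors."""
        v = u.copy()
        v[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                                u[1:-1, :-2] + u[1:-1, 2:])
        return v

    u = np.zeros((64, 64))
    u[0, :] = 1.0                  # fixed boundary condition
    for _ in range(100):
        u = five_point_stencil(u)  # repeated sweeps diffuse the boundary value
    print(u[1, 32])
    ```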

  11. Interpreting support vector machine models for multivariate group wise analysis in neuroimaging

    PubMed Central

    Gaonkar, Bilwaj; Shinohara, Russell T; Davatzikos, Christos

    2015-01-01

    Machine learning based classification algorithms like support vector machines (SVMs) have shown great promise for turning high-dimensional neuroimaging data into clinically useful decision criteria. However, tracing imaging based patterns that contribute significantly to classifier decisions remains an open problem. This is an issue of critical importance in imaging studies seeking to determine which anatomical or physiological imaging features contribute to the classifier’s decision, thereby allowing users to critically evaluate the findings of such machine learning methods and to understand disease mechanisms. The majority of published work addresses the question of statistical inference for support vector classification using permutation tests based on SVM weight vectors. Such permutation testing ignores the SVM margin, which is critical in SVM theory. In this work we emphasize the use of a statistic that explicitly accounts for the SVM margin and show that the null distributions associated with this statistic are asymptotically normal. Further, our experiments show that this statistic is far less conservative than weight-based permutation tests and yet specific enough to tease out multivariate patterns in the data. Thus, we can better understand the multivariate patterns that the SVM uses for neuroimaging based classification. PMID:26210913
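
    For contrast, the sketch below implements the baseline weight-based permutation test that the authors argue is overly conservative (their margin-aware statistic is not reproduced): a null distribution of SVM weight components is built by refitting on permuted labels. All data are synthetic.

    ```python
    # Weight-based permutation test sketch: null distribution of linear-SVM
    # weights from label permutations, compared against the observed weights.
    import numpy as np
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(0)
    X = rng.standard_normal((60, 200))          # 60 subjects, 200 voxels
    y = rng.integers(0, 2, 60)
    X[y == 1, :5] += 0.8                        # signal in the first 5 voxels

    w_obs = LinearSVC(dual=False).fit(X, y).coef_.ravel()
    null = np.empty((500, X.shape[1]))
    for i in range(500):
        null[i] = LinearSVC(dual=False).fit(X, rng.permutation(y)).coef_.ravel()

    # Two-sided voxelwise p-values (uncorrected, for illustration only).
    pvals = (np.abs(null) >= np.abs(w_obs)).mean(axis=0)
    print((pvals < 0.05).sum(), "voxels nominally significant")
    ```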

  12. Interpreting support vector machine models for multivariate group wise analysis in neuroimaging.

    PubMed

    Gaonkar, Bilwaj; T Shinohara, Russell; Davatzikos, Christos

    2015-08-01

    Machine learning based classification algorithms like support vector machines (SVMs) have shown great promise for turning high-dimensional neuroimaging data into clinically useful decision criteria. However, tracing imaging based patterns that contribute significantly to classifier decisions remains an open problem. This is an issue of critical importance in imaging studies seeking to determine which anatomical or physiological imaging features contribute to the classifier's decision, thereby allowing users to critically evaluate the findings of such machine learning methods and to understand disease mechanisms. The majority of published work addresses the question of statistical inference for support vector classification using permutation tests based on SVM weight vectors. Such permutation testing ignores the SVM margin, which is critical in SVM theory. In this work we emphasize the use of a statistic that explicitly accounts for the SVM margin and show that the null distributions associated with this statistic are asymptotically normal. Further, our experiments show that this statistic is far less conservative than weight-based permutation tests and yet specific enough to tease out multivariate patterns in the data. Thus, we can better understand the multivariate patterns that the SVM uses for neuroimaging based classification. PMID:26210913

  13. Community Development Abstracts.

    ERIC Educational Resources Information Center

    Agency for International Development (Dept. of State), Washington, DC.

    This volume of 1,108 abstracts summarizes the majority of important works on community development during the last ten years. Part I contains abstracts of periodical literature and is classified into 19 sections, including general history, communications, community and area studies, decision-making, leadership, migration and settlement, social…

  14. Leadership Abstracts, Volume 10.

    ERIC Educational Resources Information Center

    Milliron, Mark D., Ed.

    1997-01-01

    The abstracts in this series provide brief discussions of issues related to leadership, administration, professional development, technology, and education in community colleges. Volume 10 for 1997 contains the following 12 abstracts: (1) "On Community College Renewal" (Nathan L. Hodges and Mark D. Milliron); (2) "The Community College Niche in a…

  15. Has Abstractness Been Resolved?

    ERIC Educational Resources Information Center

    Al-Omoush, Ahmad

    1989-01-01

    A discussion focusing on the abstractness of analysis in phonology, debated since the 1960s, describes the issue, reviews the literature on the subject, cites specific natural language examples, and examines the extent to which the issue has been resolved. An underlying representation is said to be abstract if it is different from the derived one,…

  16. Designing for Mathematical Abstraction

    ERIC Educational Resources Information Center

    Pratt, Dave; Noss, Richard

    2010-01-01

    Our focus is on the design of systems (pedagogical, technical, social) that encourage mathematical abstraction, a process we refer to as "designing for abstraction." In this paper, we draw on detailed design experiments from our research on children's understanding about chance and distribution to re-present this work as a case study in designing…

  17. Knowledge-Based Abstracting.

    ERIC Educational Resources Information Center

    Black, William J.

    1990-01-01

    Discussion of automatic abstracting of technical papers focuses on a knowledge-based method that uses two sets of rules. Topics discussed include anaphora; text structure and discourse; abstracting techniques, including the keyword method and the indicator phrase method; and tools for text skimming. (27 references) (LRW)

  18. Leadership Abstracts, 1995.

    ERIC Educational Resources Information Center

    Johnson, Larry, Ed.

    1995-01-01

    The abstracts in this series provide two-page discussions of issues related to leadership, administration, and teaching in community colleges. The 12 abstracts for Volume 8, 1995, are: (1) "Redesigning the System To Meet the Workforce Training Needs of the Nation," by Larry Warford; (2) "The College President, the Board, and the Board Chair: A…

  19. Paper Abstract Animals

    ERIC Educational Resources Information Center

    Sutley, Jane

    2010-01-01

    Abstraction is, in effect, a simplification and reduction of shapes with an absence of detail designed to comprise the essence of the more naturalistic images being depicted. Without even intending to, young children consistently create interesting, and sometimes beautiful, abstract compositions. A child's creations, moreover, will always seem to…

  20. Is It Really Abstract?

    ERIC Educational Resources Information Center

    Kernan, Christine

    2011-01-01

    For this author, one of the most enjoyable aspects of teaching elementary art is the willingness of students to embrace the different styles of art introduced to them. In this article, she describes a project that allows upper-elementary students to learn about abstract art and the lives of some of the master abstract artists, implement the idea…

  1. Journalism Abstracts. Vol. 15.

    ERIC Educational Resources Information Center

    Popovich, Mark N., Ed.

    This book, the fifteenth volume of an annual publication, contains 373 abstracts of 52 doctoral and 321 master's theses from 50 colleges and universities. The abstracts are arranged alphabetically by author, with the doctoral dissertations appearing first. These cover such topics as advertising, audience analysis, content analysis of news issues…

  2. Leadership Abstracts, 1996.

    ERIC Educational Resources Information Center

    Johnson, Larry, Ed.

    1996-01-01

    The abstracts in this series provide two-page discussions of issues related to leadership, administration, professional development, technology, and education in community colleges. Volume 9 for 1996 includes the following 12 abstracts: (1) "Tech-Prep + School-To-Work: Working Together To Foster Educational Reform," (Roderick F. Beaumont); (2)…

  3. Mathematical Abstraction through Scaffolding

    ERIC Educational Resources Information Center

    Ozmantar, Mehmet Fatih; Roper, Tom

    2004-01-01

    This paper examines the role of scaffolding in the process of abstraction. An activity-theoretic approach to abstraction in context is taken. This examination is carried out with reference to verbal protocols of two 17-year-old students working together on a task connected to sketching the graph of |f(|x|)|. Examination of the data suggests that…

  4. Carbon sequestration in Synechococcus Sp.: from molecular machines to hierarchical modeling.

    SciTech Connect

    Martino, Anthony A. (Sandia National Laboratories, Livermore, CA); Heffelfinger, Grant S.; Frink, Laura J. Douglas; Davidson, George S.; Haaland, David Michael; Timlin, Jerilyn Ann; Plimpton, Steven James; Lane, Todd W.; Thomas, Edward Victor; Rintoul, Mark Daniel; Roe, Diana C. (Sandia National Laboratories, Livermore, CA); Faulon, Jean-Loup Michel; Hart, William Eugene

    2003-02-01

    The U.S. Department of Energy recently announced the first five grants for the Genomes to Life (GTL) Program. The goal of this program is to ''achieve the most far-reaching of all biological goals: a fundamental, comprehensive, and systematic understanding of life.'' While more information about the program can be found at the GTL website (www.doegenomestolife.org), this paper provides an overview of one of the five GTL projects funded, ''Carbon Sequestration in Synechococcus Sp.: From Molecular Machines to Hierarchical Modeling.'' This project is a combined experimental and computational effort emphasizing the development, prototyping, and application of new computational tools and methods to elucidate the biochemical mechanisms of the carbon sequestration of Synechococcus Sp., an abundant marine cyanobacterium known to play an important role in the global carbon cycle. Understanding, predicting, and perhaps manipulating carbon fixation in the oceans has long been a major focus of biological oceanography and has more recently been of interest to a broader audience of scientists and policy makers. It is clear that the oceanic sinks and sources of CO(2) are important terms in the global environmental response to anthropogenic atmospheric inputs of CO(2) and that oceanic microorganisms play a key role in this response. However, the relationship between this global phenomenon and the biochemical mechanisms of carbon fixation in these microorganisms is poorly understood. The project includes five subprojects: an experimental investigation, three computational biology efforts, and a fifth which deals with addressing computational infrastructure challenges of relevance to this project and the Genomes to Life program as a whole. Our experimental effort is designed to provide biology and data to drive the computational efforts and includes significant investment in developing new experimental methods for uncovering protein partners, characterizing protein complexes, identifying new

  5. Study of Tool Wear Mechanisms and Mathematical Modeling of Flank Wear During Machining of Ti Alloy (Ti6Al4V)

    NASA Astrophysics Data System (ADS)

    Chetan; Narasimhulu, A.; Ghosh, S.; Rao, P. V.

    2014-12-01

    The machinability of titanium is poor due to its low thermal conductivity and high chemical affinity. The low thermal conductivity of titanium alloy concentrates heat at the cutting tool, causing extensive tool wear. The main task of this work is to identify the various wear mechanisms involved during machining of Ti alloy (Ti6Al4V) and to formulate an analytical mathematical tool wear model for the same. It has been found from various experiments that adhesive and diffusion wear are the dominant wear mechanisms during machining of Ti alloy with a PVD-coated tungsten carbide tool. It is also clear from the experiments that tool wear increases with increases in cutting parameters such as speed, feed and depth of cut. The wear model was validated by carrying out dry machining of Ti alloy at suitable cutting conditions. It has been found that the wear model is able to predict flank wear suitably under gentle cutting conditions.
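
    The sketch below illustrates the general shape of such a model with an Arrhenius-type rate law combining adhesive and diffusion terms; the coefficients are hypothetical round numbers, not the paper's fitted values.

    ```python
    # Hedged numeric sketch (coefficients hypothetical): flank wear VB grown by
    # integrating dVB/dt = a1*exp(-b1/T) + a2*exp(-b2/T), adhesive + diffusion
    # terms that both accelerate with cutting temperature T (kelvin).
    import numpy as np

    def flank_wear(t_end, T, a1=1e-3, b1=1500.0, a2=2e-2, b2=4000.0, dt=0.5):
        """Integrate the wear-rate law over cutting time t_end (seconds)."""
        vb = 0.0
        for _ in np.arange(0.0, t_end, dt):
            rate = a1 * np.exp(-b1 / T) + a2 * np.exp(-b2 / T)
            vb += rate * dt
        return vb  # flank wear, mm

    # Higher cutting speed -> higher cutting temperature -> faster wear.
    for speed, temp in [(60, 1100.0), (90, 1250.0), (120, 1400.0)]:
        print(f"v = {speed} m/min: VB(300 s) ~ {flank_wear(300.0, temp):.3f} mm")
    ```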

  6. Abstract coherent categories.

    PubMed

    Rehder, B; Ross, B H

    2001-09-01

    Many studies have demonstrated the importance of the knowledge that interrelates features in people's mental representation of categories and that makes our conception of categories coherent. This article focuses on abstract coherent categories, coherent categories that are also abstract because they are defined by relations independently of any features. Four experiments demonstrate that abstract coherent categories are learned more easily than control categories with identical features and statistical structure, and also that participants induced an abstract representation of the category by granting category membership to exemplars with completely novel features. The authors argue that the human conceptual system is heavily populated with abstract coherent concepts, including conceptions of social groups, societal institutions, legal, political, and military scenarios, and many superordinate categories, such as classes of natural kinds. PMID:11550753

  7. Abstract Datatypes in PVS

    NASA Technical Reports Server (NTRS)

    Owre, Sam; Shankar, Natarajan

    1997-01-01

    PVS (Prototype Verification System) is a general-purpose environment for developing specifications and proofs. This document deals primarily with the abstract datatype mechanism in PVS which generates theories containing axioms and definitions for a class of recursive datatypes. The concepts underlying the abstract datatype mechanism are illustrated using ordered binary trees as an example. Binary trees are described by a PVS abstract datatype that is parametric in its value type. The type of ordered binary trees is then presented as a subtype of binary trees where the ordering relation is also taken as a parameter. We define the operations of inserting an element into, and searching for an element in an ordered binary tree; the bulk of the report is devoted to PVS proofs of some useful properties of these operations. These proofs illustrate various approaches to proving properties of abstract datatype operations. They also describe the built-in capabilities of the PVS proof checker for simplifying abstract datatype expressions.
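
    The PVS datatype mechanism itself is not reproduced here, but the report's running example translates directly into ordinary code: binary trees parametric in the element type and in the ordering relation, with insert and search operations. A plain Python rendering:

    ```python
    # Ordered binary trees parametric in the element type T and in the ordering
    # relation lt, mirroring the report's running example outside PVS.
    from dataclasses import dataclass
    from typing import Callable, Generic, Optional, TypeVar

    T = TypeVar("T")

    @dataclass
    class Node(Generic[T]):
        value: T
        left: Optional["Node[T]"] = None
        right: Optional["Node[T]"] = None

    def insert(tree: Optional[Node[T]], x: T, lt: Callable[[T, T], bool]) -> Node[T]:
        if tree is None:
            return Node(x)
        if lt(x, tree.value):
            tree.left = insert(tree.left, x, lt)
        elif lt(tree.value, x):
            tree.right = insert(tree.right, x, lt)
        return tree  # duplicates are dropped

    def search(tree: Optional[Node[T]], x: T, lt: Callable[[T, T], bool]) -> bool:
        if tree is None:
            return False
        if lt(x, tree.value):
            return search(tree.left, x, lt)
        if lt(tree.value, x):
            return search(tree.right, x, lt)
        return True

    t: Optional[Node[int]] = None
    for v in [5, 2, 8, 1]:
        t = insert(t, v, lambda a, b: a < b)
    assert search(t, 8, lambda a, b: a < b)  # the kind of property proved in PVS
    ```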

  8. Abstraction of Drift Seepage

    SciTech Connect

    J.T. Birkholzer

    2004-11-01

    This model report documents the abstraction of drift seepage, conducted to provide seepage-relevant parameters and their probability distributions for use in Total System Performance Assessment for License Application (TSPA-LA). Drift seepage refers to the flow of liquid water into waste emplacement drifts. Water that seeps into drifts may contact waste packages and potentially mobilize radionuclides, and may result in advective transport of radionuclides through breached waste packages [''Risk Information to Support Prioritization of Performance Assessment Models'' (BSC 2003 [DIRS 168796], Section 3.3.2)]. The unsaturated rock layers overlying and hosting the repository form a natural barrier that reduces the amount of water entering emplacement drifts by natural subsurface processes. For example, drift seepage is limited by the capillary barrier forming at the drift crown, which decreases or even eliminates water flow from the unsaturated fractured rock into the drift. During the first few hundred years after waste emplacement, when above-boiling rock temperatures will develop as a result of heat generated by the decay of the radioactive waste, vaporization of percolation water is an additional factor limiting seepage. Estimating the effectiveness of these natural barrier capabilities and predicting the amount of seepage into drifts is an important aspect of assessing the performance of the repository. The TSPA-LA therefore includes a seepage component that calculates the amount of seepage into drifts [''Total System Performance Assessment (TSPA) Model/Analysis for the License Application'' (BSC 2004 [DIRS 168504], Section 6.3.3.1)]. The TSPA-LA calculation is performed with a probabilistic approach that accounts for the spatial and temporal variability and inherent uncertainty of seepage-relevant properties and processes. Results are used for subsequent TSPA-LA components that may handle, for example, waste package corrosion or radionuclide transport.

  9. Human-machine interactions

    DOEpatents

    Forsythe, J. Chris; Xavier, Patrick G.; Abbott, Robert G.; Brannon, Nathan G.; Bernard, Michael L.; Speed, Ann E.

    2009-04-28

    Digital technology utilizing a cognitive model based on human naturalistic decision-making processes, including pattern recognition and episodic memory, can reduce the dependency of human-machine interactions on the abilities of a human user and can enable a machine to more closely emulate human-like responses. Such a cognitive model can enable digital technology to use cognitive capacities fundamental to human-like communication and cooperation to interact with humans.

  10. Laser-induced Breakdown spectroscopy quantitative analysis method via adaptive analytical line selection and relevance vector machine regression model

    NASA Astrophysics Data System (ADS)

    Yang, Jianhong; Yi, Cancan; Xu, Jinwu; Ma, Xianghong

    2015-05-01

    A new LIBS quantitative analysis method based on adaptive analytical line selection and a Relevance Vector Machine (RVM) regression model is proposed. First, a scheme for adaptively selecting analytical lines is put forward in order to overcome the drawback of high dependency on a priori knowledge. The candidate analytical lines are automatically selected based on the built-in characteristics of spectral lines, such as spectral intensity, wavelength and width at half height. The analytical lines used as input variables of the regression model are determined adaptively from the samples for both training and testing. Second, an LIBS quantitative analysis method based on the RVM is presented. The intensities of the analytical lines and the elemental concentrations of certified standard samples are used to train the RVM regression model. The predicted elemental concentrations are given in the form of confidence intervals of a probabilistic distribution, which is helpful for evaluating the uncertainty contained in the measured spectra. Chromium concentration analysis experiments on 23 certified standard high-alloy steel samples were carried out. The multiple correlation coefficient of the prediction was up to 98.85%, and the average relative error of the prediction was 4.01%. The experimental results showed that the proposed LIBS quantitative analysis method achieves better prediction accuracy and better modeling robustness than methods based on partial least squares regression, artificial neural networks and the standard support vector machine.
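
    scikit-learn has no canonical RVM, so in the sketch below ARDRegression, a closely related sparse Bayesian linear model, stands in for the RVM regression stage (line selection is omitted). Like an RVM, it yields a predictive distribution, so each concentration estimate carries an uncertainty, matching the abstract's point. All data are synthetic.

    ```python
    # Regression-stage sketch: sparse Bayesian regression (ARD prior) as an
    # RVM stand-in; predictions come with a standard deviation.
    import numpy as np
    from sklearn.linear_model import ARDRegression

    rng = np.random.default_rng(0)
    n_samples, n_lines = 23, 12                 # 23 standards, 12 analytical lines
    X = rng.uniform(0.0, 1.0, (n_samples, n_lines))          # line intensities
    true_w = np.zeros(n_lines)
    true_w[:3] = [8.0, 4.0, 2.0]                # only a few lines are relevant
    y = X @ true_w + 0.1 * rng.standard_normal(n_samples)    # Cr concentration

    model = ARDRegression().fit(X[:18], y[:18])
    mean, std = model.predict(X[18:], return_std=True)
    for m, s, t in zip(mean, std, y[18:]):
        print(f"predicted {m:.2f} +/- {1.96 * s:.2f}, certified {t:.2f}")
    ```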

  11. An animal model of slot machine gambling: the effect of structural characteristics on response latency and persistence.

    PubMed

    Peters, Heather; Hunt, Maree; Harper, David

    2010-12-01

    Despite the prevalence of problem gamblers and the ethical issues involved in studying gambling behavior with humans, few animal models of gambling have been developed. When designing an animal model it is necessary to determine if behavior in the paradigm is similar to human gambling. In human studies, response latencies following winning trials and near-win trials are greater than those following clear losses. Weatherly and Derenne (Anal Gambl Behav 1:79-89, 2007) investigated whether this pattern was found with rats working in an animal analogue of slot machine gambling. They found a similar pattern of response latencies, but the subjects' behavior did not come under control of the visual stimuli signalling the different outcomes. The animal model of slot machine gambling we used addressed procedural issues in Weatherly and Derenne's model and examined whether reinforcer magnitude and the presence of near-win trials influenced response latency and resistance to extinction. Response latencies of the six female Norway Hooded rats varied as a function of reinforcer magnitude and the presence of near-win trials. These results are consistent with prior research and with the idea that near-win trials serve as conditional reinforcers. PMID:20217196

  12. An in vivo autotransplant model of renal preservation: cold storage versus machine perfusion in the prevention of ischemia/reperfusion injury.

    PubMed

    La Manna, Gaetano; Conte, Diletta; Cappuccilli, Maria Laura; Nardo, Bruno; D'Addio, Francesca; Puviani, Lorenza; Comai, Giorgia; Bianchi, Francesca; Bertelli, Riccardo; Lanci, Nicole; Donati, Gabriele; Scolari, Maria Piera; Faenza, Alessandro; Stefoni, Sergio

    2009-07-01

    There is increasing evidence that organ preservation by machine perfusion is able to limit ischemia/reperfusion injury in kidney transplantation. This study was designed to compare the efficiency of hypothermic organ preservation by machine perfusion and by cold storage in an animal model of kidney autotransplantation. Twelve pigs underwent left nephrectomy after a warm ischemic time; the organs were preserved by machine perfusion (n = 6) or cold storage (n = 6) and then autotransplanted with immediate contralateral nephrectomy. The following parameters were compared between the two groups of animals: hematological and urine indexes of renal function, blood/gas analysis values, histological features, tissue adenosine-5'-triphosphate (ATP) content, perforin gene expression in kidney biopsies, and organ weight changes before and after preservation. The amount of cellular ATP was significantly higher in organs preserved by machine perfusion; moreover, the study of apoptosis induction revealed enhanced perforin expression in the kidneys that underwent simple hypothermic preservation compared with the machine-preserved ones. Organ weight was significantly decreased after cold storage, but remained quite stable for machine-perfused kidneys. The present model suggests that organ preservation by hypothermic machine perfusion is able to better control cellular impairment in comparison with cold storage. PMID:19566736

  13. Abstract Interpreters for Free

    NASA Astrophysics Data System (ADS)

    Might, Matthew

    In small-step abstract interpretations, the concrete and abstract semantics bear an uncanny resemblance. In this work, we present an analysis-design methodology that both explains and exploits that resemblance. Specifically, we present a two-step method to convert a small-step concrete semantics into a family of sound, computable abstract interpretations. The first step re-factors the concrete state-space to eliminate recursive structure; this refactoring of the state-space simultaneously determines a store-passing-style transformation on the underlying concrete semantics. The second step uses inference rules to generate an abstract state-space and a Galois connection simultaneously. The Galois connection allows the calculation of the "optimal" abstract interpretation. The two-step process is unambiguous, but nondeterministic: at each step, analysis designers face choices. Some of these choices ultimately influence properties such as flow-, field- and context-sensitivity. Thus, under the method, we can give the emergence of these properties a graph-theoretic characterization. To illustrate the method, we systematically abstract the continuation-passing style lambda calculus to arrive at two distinct families of analyses. The first is the well-known k-CFA family of analyses. The second consists of novel "environment-centric" abstract interpretations, none of which appear in the literature on static analysis of higher-order programs.
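
    A toy illustration of the sound-abstraction idea, far simpler than the paper's CPS lambda-calculus construction: arithmetic expressions interpreted over the sign domain, a finite abstraction of the integers.

    ```python
    # Sign-domain abstract interpreter for a tiny expression language: every
    # integer is abstracted to NEG, ZERO, POS, or TOP (unknown sign).
    NEG, ZERO, POS, TOP = "-", "0", "+", "T"

    def alpha(n):                       # abstraction map: int -> sign
        return ZERO if n == 0 else (POS if n > 0 else NEG)

    ADD = {(POS, POS): POS, (NEG, NEG): NEG,
           (ZERO, ZERO): ZERO, (POS, ZERO): POS, (ZERO, POS): POS,
           (NEG, ZERO): NEG, (ZERO, NEG): NEG}   # mixed signs lose precision

    MUL_SIGN = {(POS, POS): POS, (NEG, NEG): POS,
                (POS, NEG): NEG, (NEG, POS): NEG}

    def abs_eval(expr):
        """Evaluate ('num', n) | ('add', e1, e2) | ('mul', e1, e2) over signs."""
        tag = expr[0]
        if tag == "num":
            return alpha(expr[1])
        a, b = abs_eval(expr[1]), abs_eval(expr[2])
        if TOP in (a, b):
            # 0 * anything is still 0; everything else is unknown.
            return ZERO if (tag == "mul" and ZERO in (a, b)) else TOP
        if tag == "add":
            return ADD.get((a, b), TOP)
        return ZERO if ZERO in (a, b) else MUL_SIGN[(a, b)]

    e = ("mul", ("num", -3), ("add", ("num", 2), ("num", 5)))
    print(abs_eval(e))  # '-' : sound, since -3 * (2 + 5) = -21 < 0
    ```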

  14. A measuring model study of a new coordinate-measuring machine based on the parallel kinematic mechanism

    NASA Astrophysics Data System (ADS)

    Liu, Dejun; Huang, Qingcheng; Che, Rensheng; Ai, Qinghui

    1999-11-01

    This paper introduces a new coordinate measuring machine (CMM) comprising a parallel kinematic mechanism with three spatial degrees of freedom and describes its structure, measuring theory and characteristics. Compared with the conventional CMM, this kind of CMM has a simple structure, a flexible probe posture, low motion errors, high stiffness and minimal deformation. In this paper, the measuring model of the new parallel CMM is established according to the theory of spatial mechanics and verified by computer simulation. This research offers a theoretical basis for developing new CMMs.

  15. Nonplanar machines

    SciTech Connect

    Ritson, D.

    1989-05-01

    This talk examines methods available to minimize, but never entirely eliminate, degradation of machine performance caused by terrain following. Breaking of planar machine symmetry for engineering convenience and/or monetary savings must be balanced against small performance degradation, and can only be decided on a case-by-case basis. 5 refs.

  16. Electric machine

    DOEpatents

    El-Refaie, Ayman Mohamed Fawzi; Reddy, Patel Bhageerath

    2012-07-17

    An interior permanent magnet electric machine is disclosed. The interior permanent magnet electric machine comprises a rotor comprising a plurality of radially placed magnets each having a proximal end and a distal end, wherein each magnet comprises a plurality of magnetic segments and at least one magnetic segment towards the distal end comprises a high resistivity magnetic material.

  17. Financial and environmental modelling of water hardness--implications for utilising harvested rainwater in washing machines.

    PubMed

    Morales-Pinzón, Tito; Lurueña, Rodrigo; Gabarrell, Xavier; Gasol, Carles M; Rieradevall, Joan

    2014-02-01

    A study was conducted to determine the financial and environmental effects of water quality on rainwater harvesting systems. The potential for replacing tap water used in washing machines with rainwater was studied; the analysis presented in this paper is valid for applications, such as washing machines, where tap water hardness may be important. A wide range of conditions, such as rainfall (284-1,794 mm/year), water hardness (14-315 mg/L CaCO3), and tap water prices (0.85-2.65 Euros/m(3)) in different Spanish urban areas (from individual buildings to whole neighbourhoods), as well as other scenarios (including materials and water storage capacity), were analysed. Rainfall was essential for rainwater harvesting, but the tap water price and the water hardness were the main factors for consideration in the financial and the environmental analyses, respectively. The local tap water hardness and prices can cause greater financial and environmental impacts than the type of material used for the water storage tank or the volume of the tank. The use of rainwater as a substitute for hard water in washing machines improves the financial case. Although tap water hardness significantly affects the financial analysis, the greatest effect was found in the environmental analysis. When hard tap water needed to be replaced, it was found that a water price of 1 Euro/m(3) could render the use of rainwater financially feasible when using large-scale rainwater harvesting systems. When the water hardness was greater than 300 mg/L CaCO3, the financial analysis revealed that a net present value greater than 270 Euros/dwelling could be obtained at the neighbourhood scale, and there could be a reduction in the Global Warming Potential (100 years) of between 35 and 101 kg CO2 eq./dwelling/year. PMID:24262990

  18. Bias correction for selecting the minimal-error classifier from many machine learning models

    PubMed Central

    Ding, Ying; Tang, Shaowu; Liao, Serena G.; Jia, Jia; Oesterreich, Steffi; Lin, Yan; Tseng, George C.

    2014-01-01

    Motivation: Supervised machine learning is commonly applied in genomic research to construct a classifier from the training data that is generalizable to predict independent testing data. When test datasets are not available, cross-validation is commonly used to estimate the error rate. Many machine learning methods are available, and it is well known that no universally best method exists in general. It has been a common practice to apply many machine learning methods and report the method that produces the smallest cross-validation error rate. Theoretically, such a procedure produces a selection bias. Consequently, many clinical studies with moderate sample sizes (e.g. n = 30–60) risk reporting a falsely small cross-validation error rate that could not be validated later in independent cohorts. Results: In this article, we illustrated the probabilistic framework of the problem and explored the statistical and asymptotic properties. We proposed a new bias correction method based on learning curve fitting by inverse power law (IPL) and compared it with three existing methods: nested cross-validation, weighted mean correction and the Tibshirani-Tibshirani procedure. All methods were compared on simulated datasets, five moderate-size real datasets and two large breast cancer datasets. The results showed that IPL outperforms the other methods in bias correction, with smaller variance, and has the additional advantage of extrapolating error estimates to larger sample sizes, a practical feature for recommending whether more samples should be recruited to improve the classifier and its accuracy. An R package ‘MLbias’ and all source files are publicly available. Availability and implementation: tsenglab.biostat.pitt.edu/software.htm. Contact: ctseng@pitt.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25086004
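
    An illustrative IPL learning-curve fit is sketched below: error rates estimated at a few training-set sizes are fit to err(n) = a*n^(-b) + c and extrapolated to larger n. The numbers are synthetic; the package mentioned in the abstract is the R package MLbias, not this sketch.

    ```python
    # Inverse-power-law (IPL) learning-curve fit on synthetic error estimates,
    # then extrapolation to a larger training-set size.
    import numpy as np
    from scipy.optimize import curve_fit

    def ipl(n, a, b, c):
        return a * n ** (-b) + c          # error decays toward asymptote c

    sizes = np.array([10, 20, 30, 40, 60])
    errors = np.array([0.42, 0.33, 0.29, 0.27, 0.25])   # cross-validation errors

    params, _ = curve_fit(ipl, sizes, errors, p0=(1.0, 0.5, 0.2), maxfev=10000)
    a, b, c = params
    print(f"fit: err(n) = {a:.2f} * n^-{b:.2f} + {c:.2f}")
    print("extrapolated err(200) =", round(ipl(200, *params), 3))
    ```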

  19. Permutation Machines.

    PubMed

    Bhatia, Swapnil; LaBoda, Craig; Yanez, Vanessa; Haddock-Angelli, Traci; Densmore, Douglas

    2016-08-19

    We define a new inversion-based machine called a permuton of n genetic elements, which allows the n elements to be rearranged in any of the n·(n - 1)·(n - 2)···2 = n! distinct orderings. We present two design algorithms for architecting such a machine. We define a notion of a feasible design and use the framework to discuss the feasibility of the permuton architectures. We have implemented our design algorithms in freely usable, web-accessible software for exploration of these machines. Permutation machines could be used as memory elements or state machines and explicitly illustrate a rational approach to designing biological systems. PMID:27383067

  20. Evaluation of different time domain peak models using extreme learning machine-based peak detection for EEG signal.

    PubMed

    Adam, Asrul; Ibrahim, Zuwairie; Mokhtar, Norrima; Shapiai, Mohd Ibrahim; Cumming, Paul; Mubin, Marizan

    2016-01-01

    Various peak models have been introduced to detect and analyze peaks in the time domain analysis of electroencephalogram (EEG) signals. In general, a peak model in the time domain analysis consists of a set of signal parameters, such as amplitude, width, and slope. Models including those proposed by Dumpala, Acir, Liu, and Dingle are routinely used to detect peaks in EEG signals acquired in clinical studies of epilepsy or eye blink. The optimal peak model is the one that gives the most reliable peak detection performance in a particular application. A fair measure of the performance of different models requires a common and unbiased platform. In this study, we evaluate the performance of the four different peak models using the extreme learning machine (ELM)-based peak detection algorithm. We found that the Dingle model gave the best performance, with 72 % accuracy in the analysis of real EEG data. Statistical analysis confirmed that the Dingle model afforded significantly better mean testing accuracy than did the Acir and Liu models, which were in the range 37-52 %. Meanwhile, the Dingle model showed no significant difference compared to the Dumpala model. PMID:27462484
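
    For context, a minimal extreme learning machine: a random, untrained hidden layer followed by a closed-form least-squares output layer. The three input features stand in for peak parameters such as amplitude, width and slope; data and labels are synthetic.

    ```python
    # Minimal extreme learning machine (ELM): fixed random hidden weights,
    # output weights solved in closed form via the pseudo-inverse.
    import numpy as np

    class ELM:
        def __init__(self, n_hidden, n_features, seed=0):
            rng = np.random.default_rng(seed)
            self.W = rng.standard_normal((n_features, n_hidden))  # never trained
            self.b = rng.standard_normal(n_hidden)

        def _hidden(self, X):
            return np.tanh(X @ self.W + self.b)

        def fit(self, X, y):
            H = self._hidden(X)
            self.beta = np.linalg.pinv(H) @ y    # least-squares output weights
            return self

        def predict(self, X):
            return (self._hidden(X) @ self.beta > 0.5).astype(int)

    rng = np.random.default_rng(1)
    X = rng.standard_normal((300, 3))              # e.g. amplitude, width, slope
    y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)  # synthetic peak / non-peak label
    elm = ELM(n_hidden=40, n_features=3).fit(X[:200], y[:200])
    print("test accuracy:", (elm.predict(X[200:]) == y[200:]).mean())
    ```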

  1. Modelling of thermal conductance during microthermal machining with scanning thermal microscope using an inverse methodology

    NASA Astrophysics Data System (ADS)

    Yang, Yu-Ching; Chang, Win-Jin; Fang, Te-Hua; Fang, Shih-Chung

    2008-01-01

    In this study, a general methodology for determining the thermal conductance between the probe tip and the workpiece during microthermal machining using Scanning Thermal Microscopy (SThM) has been proposed. The processing system was treated as an inverse heat conduction problem with an unknown thermal conductance. The temperature dependence of the material properties and of the thermal conductance is taken into account in the heat conduction analysis. The conjugate gradient method is used to solve the inverse problem. Furthermore, this methodology can also be applied to estimate the thermal contact conductance in other transient heat conduction problems, such as metal casting, injection molding, and electronic circuit systems.

  2. Abstracts of SIG Sessions.

    ERIC Educational Resources Information Center

    Proceedings of the ASIS Annual Meeting, 1997

    1997-01-01

    Presents abstracts of SIG Sessions. Highlights include digital collections; information retrieval methods; public interest/fair use; classification and indexing; electronic publication; funding; globalization; information technology projects; interface design; networking in developing countries; metadata; multilingual databases; networked…

  3. Automatic Abstraction in Planning

    NASA Technical Reports Server (NTRS)

    Christensen, J.

    1991-01-01

    Traditionally, abstraction in planning has been accomplished by either state abstraction or operator abstraction, neither of which has been fully automatic. We present a new method, predicate relaxation, for automatically performing state abstraction. PABLO, a nonlinear hierarchical planner, implements predicate relaxation. Theoretical, as well as empirical results are presented which demonstrate the potential advantages of using predicate relaxation in planning. We also present a new definition of hierarchical operators that allows us to guarantee a limited form of completeness. This new definition is shown to be, in some ways, more flexible than previous definitions of hierarchical operators. Finally, a Classical Truth Criterion is presented that is proven to be sound and complete for a planning formalism that is general enough to include most classical planning formalisms that are based on the STRIPS assumption.

  4. 1971 Annual Conference Abstracts

    ERIC Educational Resources Information Center

    Journal of Engineering Education, 1971

    1971-01-01

    Included are 112 abstracts listed under headings such as: acoustics, continuing engineering studies, educational research and methods, engineering design, libraries, liberal studies, and materials. Other areas include agricultural, electrical, mechanical, mineral, and ocean engineering. (TS)

  5. 2016 ACPA MEETING ABSTRACTS.

    PubMed

    2016-07-01

    The peer-reviewed abstracts presented at the 73rd Annual Meeting of the ACPA are published as submitted by the authors. For financial conflict of interest disclosure, please visit http://meeting.acpa-cpf.org/disclosures.html. PMID:27447885

  6. Abstracts of contributed papers

    SciTech Connect

    Not Available

    1994-08-01

    This volume contains 571 abstracts of contributed papers to be presented during the Twelfth US National Congress of Applied Mechanics. Abstracts are arranged in the order in which they fall in the program -- the main sessions are listed chronologically in the Table of Contents. The Author Index is in alphabetical order and lists each paper number (matching the schedule in the Final Program) with its corresponding page number in the book.

  7. Meeting Abstracts - Annual Meeting 2016.

    PubMed

    2016-04-01

    The AMCP Abstracts program provides a forum through which authors can share their insights and outcomes of advanced managed care practice through publication in AMCP's Journal of Managed Care & Specialty Pharmacy (JMCP). Most of the reviewed and unreviewed abstracts are presented as posters so that interested AMCP meeting attendees can review findings and query authors. The Student/Resident/Fellow poster presentation (unreviewed) is Wednesday, April 20, 2016, and the Professional poster presentation (reviewed) is Thursday, April 21. The Professional posters will also be displayed on Friday, April 22. The reviewed abstracts are published in the JMCP Meeting Abstracts supplement. The AMCP Managed Care & Specialty Pharmacy Annual Meeting 2016 in San Francisco, California, is expected to attract more than 3,500 managed care pharmacists and other health care professionals who manage and evaluate drug therapies, develop and manage networks, and work with medical managers and information specialists to improve the care of all individuals enrolled in managed care programs. Abstracts were submitted in the following categories: Research Report: describe completed original research on managed care pharmacy services or health care interventions. Examples include (but are not limited to) observational studies using administrative claims, reports of the impact of unique benefit design strategies, and analyses of the effects of innovative administrative or clinical programs. Economic Model: describe models that predict the effect of various benefit design or clinical decisions on a population. For example, an economic model could be used to predict the budget impact of a new pharmaceutical product on a health care system. Solving Problems in Managed Care: describe the specific steps taken to introduce a needed change, develop and implement a new system or program, plan and organize an administrative function, or solve other types of problems in managed care settings. These

  8. Developing a support vector machine based QSPR model for prediction of half-life of some herbicides.

    PubMed

    Samghani, Kobra; HosseinFatemi, Mohammad

    2016-07-01

    The half-life (t1/2) of 58 herbicides was modeled by a quantitative structure-property relationship (QSPR) approach based on molecular structure descriptors. After calculation and screening of a large number of molecular descriptors, the most relevant ones, selected by stepwise multiple linear regression, were used to develop linear and nonlinear models using multiple linear regression (MLR) and a support vector machine (SVM), respectively. Comparison of the statistical parameters of the linear and nonlinear models indicates the suitability of the SVM over the MLR model for predicting the half-life of herbicides. The statistical parameters R(2) and standard error were 0.96 and 0.087 for the training set of the SVM model, and 0.93 and 0.092 for the test set. The SVM model was evaluated by a leave-one-out cross-validation test, whose results indicate the robustness and predictability of the model. The established SVM model was used to predict the half-life of other herbicides located in the applicability domain of the model, determined via the leverage approach. The results of this study indicate that the relationship among the selected molecular descriptors and herbicide half-life is nonlinear. These results emphasize that the degradation of herbicides in the environment is a very complex process that can be affected by various environmental and structural features; therefore a simple linear model cannot successfully predict it. PMID:26970881
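
    Since the record sketches a standard QSPR workflow (descriptor screening, then a nonlinear SVM regression benchmarked against a linear model), a generic scikit-learn sketch may be useful. The synthetic descriptors, the univariate selector standing in for stepwise MLR, and the hyperparameters are all illustrative assumptions, not the paper's values.

      # Hedged QSPR-style sketch: select descriptors, fit an SVM regressor,
      # and score on a held-out set. All data below is synthetic.
      import numpy as np
      from sklearn.feature_selection import SelectKBest, f_regression
      from sklearn.model_selection import train_test_split
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVR

      rng = np.random.default_rng(0)
      X = rng.normal(size=(58, 40))      # 58 compounds x 40 candidate descriptors
      y = X[:, 0] - 0.5 * X[:, 3] ** 2 + rng.normal(scale=0.1, size=58)  # nonlinear target

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
      model = make_pipeline(
          StandardScaler(),
          SelectKBest(f_regression, k=8),   # crude stand-in for stepwise MLR selection
          SVR(kernel="rbf", C=10.0, epsilon=0.05),
      )
      model.fit(X_tr, y_tr)
      print("test R^2:", round(model.score(X_te, y_te), 3))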

  9. 3D Modelling of Tunnel Excavation Using Pressurized Tunnel Boring Machine in Overconsolidated Soils

    NASA Astrophysics Data System (ADS)

    Demagh, Rafik; Emeriault, Fabrice

    2013-06-01

    The construction of shallow tunnels in urban areas requires a prior assessment of their effects on existing structures. In the case of shield tunnel boring machines (TBM), the various construction stages carried out constitute a highly three-dimensional problem of soil/structure interaction that is not easy to represent in a complete numerical simulation. Consequently, the tunnelling-induced soil movements are quite difficult to evaluate. A 3D simulation procedure, using a finite differences code, namely FLAC3D, that takes into account in an explicit manner the main sources of movement in the soil mass is proposed in this paper. It is illustrated by the particular case of Toulouse Subway Line B, for which experimental data are available and where the soil is saturated and highly overconsolidated. A comparison between the numerical simulation results and the in situ measurements shows that the proposed 3D simulation procedure is relevant, in particular regarding the adopted representation of the different operations performed by the tunnel boring machine (excavation, confining pressure, shield advancement, installation of the tunnel lining, grouting of the annular void, etc.). Furthermore, a parametric study enabled a better understanding of the origin of the singular behaviour observed on the ground surface and within the soil mass, not previously reported in the literature.

  10. Quantum Boltzmann Machine

    NASA Astrophysics Data System (ADS)

    Kulchytskyy, Bohdan; Andriyash, Evgeny; Amin, Mohammed; Melko, Roger

    The field of machine learning has been revolutionized by recent improvements in the training of deep networks. Their architecture is based on a set of stacked layers of simpler modules. One of the most successful building blocks, known as a restricted Boltzmann machine, is an energy-based model built on the classical Ising Hamiltonian. In our work, we investigate the benefits of quantum effects on the learning capacity of Boltzmann machines by extending their underlying Hamiltonian with a transverse field. For this purpose, we employ exact and stochastic training procedures on data sets with physical origins.

  11. Perspex machine: VII. The universal perspex machine

    NASA Astrophysics Data System (ADS)

    Anderson, James A. D. W.

    2006-01-01

    -linear perspex-machine which is very much easier to program than the original perspex-machine. We then show how to map the whole of perspex space into a unit cube. This allows us to construct a fractal of perspex machines with the cardinality of a real-numbered line or space. This fractal is the universal perspex machine. It can solve, in unit time, the halting problem for itself and for all perspex machines instantiated in real-numbered space, including all Turing machines. We cite an experiment that has been proposed to test the physical reality of the perspex machine's model of time, but we make no claim that the physical universe works this way or that it has the cardinality of the perspex machine. We leave it that the perspex machine provides an upper bound on the computational properties of physical things, including manufactured computers and biological organisms, that have a cardinality no greater than the real-number line.

  12. [A prediction model for the activity of insecticidal crystal proteins from Bacillus thuringiensis based on support vector machine].

    PubMed

    Lin, Yi; Cai, Fu-Ying; Zhang, Guang-Ya

    2007-01-01

    A quantitative structure-property relationship (QSPR) model relating amino acid composition to the activity of Bacillus thuringiensis insecticidal crystal proteins was established. The support vector machine (SVM) is a novel general machine-learning tool based on the structural risk minimization principle that exhibits good generalization when samples are few; it is especially suitable for classification, forecasting, and estimation in cases where only small amounts of samples are involved, such as fault diagnosis. However, some parameters of SVM are selected based on the experience of the operator, which has led to decreased efficiency of SVM in practical application. The uniform design (UD) method was applied to optimize the running parameters of SVM. It was found that the average accuracy rate approached 73% when the penalty factor was 0.01, the epsilon 0.2, the gamma 0.05, and the range 0.5. The results indicated that UD might be used as an effective method to optimize the parameters of SVM, and that SVM could be used as an alternative powerful modeling tool for QSPR studies of the activity of Bacillus thuringiensis (Bt) insecticidal crystal proteins. Therefore, a novel method for predicting the insecticidal activity of Bt insecticidal crystal proteins is proposed. PMID:17366901
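
    The tuning step this record describes, picking SVM parameters from a small designed set of candidate points rather than by operator experience, can be sketched as follows. The design points, data, and estimator are invented for illustration; a real uniform design would draw its points from published UD tables.

      # Sketch: evaluate a handful of evenly spread (C, epsilon, gamma) points
      # by cross-validation and keep the best. Data is synthetic.
      import numpy as np
      from sklearn.model_selection import cross_val_score
      from sklearn.svm import SVR

      rng = np.random.default_rng(1)
      X = rng.normal(size=(80, 10))      # e.g. amino acid composition features
      y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=80)

      design = [(0.01, 0.2, 0.05), (0.1, 0.1, 0.1), (1.0, 0.05, 0.5), (10.0, 0.2, 0.01)]
      best = max(
          design,
          key=lambda p: cross_val_score(
              SVR(C=p[0], epsilon=p[1], gamma=p[2]), X, y, cv=5
          ).mean(),
      )
      print("best (C, epsilon, gamma):", best)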

  13. Landscape epidemiology and machine learning: A geospatial approach to modeling West Nile virus risk in the United States

    NASA Astrophysics Data System (ADS)

    Young, Sean Gregory

    The complex interactions between human health and the physical landscape and environment have been recognized, if not fully understood, since the ancient Greeks. Landscape epidemiology, sometimes called spatial epidemiology, is a sub-discipline of medical geography that uses environmental conditions as explanatory variables in the study of disease or other health phenomena. This theory suggests that pathogenic organisms (whether germs or larger vector and host species) are subject to environmental conditions that can be observed on the landscape, and by identifying where such organisms are likely to exist, areas at greatest risk of the disease can be derived. Machine learning is a sub-discipline of artificial intelligence that can be used to create predictive models from large and complex datasets. West Nile virus (WNV) is a relatively new infectious disease in the United States, and has a fairly well-understood transmission cycle that is believed to be highly dependent on environmental conditions. This study takes a geospatial approach to the study of WNV risk, using both landscape epidemiology and machine learning techniques. A combination of remotely sensed and in situ variables are used to predict WNV incidence with a correlation coefficient as high as 0.86. A novel method of mitigating the small numbers problem is also tested and ultimately discarded. Finally a consistent spatial pattern of model errors is identified, indicating the chosen variables are capable of predicting WNV disease risk across most of the United States, but are inadequate in the northern Great Plains region of the US.

  14. Abstracting in the Context of Spontaneous Learning

    ERIC Educational Resources Information Center

    Williams, Gaye

    2007-01-01

    There is evidence that spontaneous learning leads to relational understanding and high positive affect. To study spontaneous abstracting, a model was constructed by combining the RBC model of abstraction with Krutetskii's mental activities. Using video-stimulated interviews, the model was then used to analyze the behavior of two Year 8 students…

  15. Metacognition and abstract reasoning.

    PubMed

    Markovits, Henry; Thompson, Valerie A; Brisson, Janie

    2015-05-01

    The nature of people's meta-representations of deductive reasoning is critical to understanding how people control their own reasoning processes. We conducted two studies to examine whether people have a metacognitive representation of abstract validity and whether familiarity alone acts as a separate metacognitive cue. In Study 1, participants were asked to make a series of (1) abstract conditional inferences, (2) concrete conditional inferences with premises having many potential alternative antecedents and thus specifically conducive to the production of responses consistent with conditional logic, or (3) concrete problems with premises having relatively few potential alternative antecedents. Participants gave confidence ratings after each inference. Results show that confidence ratings were positively correlated with logical performance on abstract problems and concrete problems with many potential alternatives, but not with concrete problems with content less conducive to normative responses. Confidence ratings were higher with few alternatives than for abstract content. Study 2 used a generation of contrary-to-fact alternatives task to improve levels of abstract logical performance. The resulting increase in logical performance was mirrored by increases in mean confidence ratings. Results provide evidence for a metacognitive representation based on logical validity, and show that familiarity acts as a separate metacognitive cue. PMID:25416026

  16. Use of Machine Learning Techniques for Identification of Robust Teleconnections to East African Rainfall Variability in Observations and Models

    NASA Technical Reports Server (NTRS)

    Roberts, J. Brent; Robertson, Franklin R.; Funk, Chris

    2014-01-01

    Providing advance warning of East African rainfall variations is a particular focus of several groups, including those participating in the Famine Early Warning Systems Network. Both seasonal and long-term model projections of climate variability are being used to examine the societal impacts of hydrometeorological variability on seasonal to interannual and longer time scales. The NASA/USAID SERVIR project, which leverages satellite and modeling-based resources for environmental decision making in developing nations, is focusing on the evaluation of both seasonal and climate model projections to develop downscaled scenarios for use in impact modeling. The utility of these projections relies on the ability of current models to capture the embedded relationships between East African rainfall and evolving forcing within the coupled ocean-atmosphere-land climate system. Previous studies have posited relationships between variations in El Niño, the Walker circulation, Pacific decadal variability (PDV), and anthropogenic forcing. This study applies machine learning methods (e.g. clustering, probabilistic graphical models, nonlinear PCA) to observational datasets in an attempt to expose the importance of local and remote forcing mechanisms of East African rainfall variability. The ability of the NASA Goddard Earth Observing System (GEOS5) coupled model to capture the associated relationships will be evaluated using Coupled Model Intercomparison Project Phase 5 (CMIP5) simulations.

  17. Modelling the spindle-holder taper joint in machine tools: A tapered zero-thickness finite element method

    NASA Astrophysics Data System (ADS)

    Xiao, Weiwei; Mao, Kuanmin; Zhu, Ming; Li, Bin; Lei, Sheng; Pan, Xiaoyan

    2014-10-01

    This study presents a tapered zero-thickness finite element model together with its parameter identification method for modelling the spindle-holder taper joint in machine tools. In the presented model, the spindle and the holder are modelled as solid elements and the taper joint is modelled as a tapered zero-thickness finite element with stiffness and damping but without mass or thickness. The proposed model considers not only the coupling of adjacent degrees of freedom but also the radial, tangential and axial effects of the spindle-holder taper joint. Based on the inverse relationship between the dynamic matrix and frequency response function matrix of a multi-degree-of-freedom system, this study proposes a combined analytical-experimental method to identify the stiffness matrix and damping coefficient of the proposed tapered zero-thickness finite element. The method extracts those parameters from FRFs of an entire specimen that contains only the spindle-holder taper joint. The simulated FRF obtained from the proposed model matches the experimental FRF quite well, which indicates that the presented method provides high accuracy and is easy to implement in modelling the spindle-holder taper joint.

  18. Language and Tool Support for Class and State Machine Refinement in UML-B

    NASA Astrophysics Data System (ADS)

    Said, Mar Yah; Butler, Michael; Snook, Colin

    UML-B is a ‘UML-like’ graphical front end for Event-B that provides support for object-oriented modelling concepts. In particular, UML-B supports class diagrams and state machines, concepts that are not explicitly supported in plain Event-B. In Event-B, refinement is used to relate system models at different abstraction levels. The same abstraction-refinement concepts can also be applied in UML-B. This paper introduces the notions of refined classes and refined state machines to enable refinement of classes and state machines in UML-B. Together with these notions, a technique for moving an event between classes to facilitate abstraction is also introduced. Our work makes explicit the structures of class and state machine refinement in UML-B. The UML-B drawing tool and Event-B translator are extended to support the new refinement concepts. A case study of an automated teller machine (ATM) is presented to demonstrate the application and effectiveness of refined classes and refined state machines.

  19. Mining machine

    SciTech Connect

    Parrott, G.A.

    1985-05-07

    A haulage system for a mining machine comprises a mining machine mounted on and/or guided by a conveyor and reciprocable with respect thereto, the conveyor being provided with a rack having plural rows of teeth of identical pitch, with the teeth of one row staggered with respect to an adjacent row(s), and the machine being provided with at least one power driven haulage sprocket comprising plural sets of peripherally arranged teeth of identical pitch, one set being angularly staggered with respect to an adjacent set(s), whereby one set is engageable with each row of teeth of the rack. The invention also includes a mining machine provided with such a power driven haulage sprocket, and a rack as above described and provided with end fittings for securing in articulated manner to an adjacent rack.

  20. Machine performance assessment and enhancement for a hexapod machine

    SciTech Connect

    Mou, J.I.; King, C.

    1998-03-19

    The focus of this study is to develop a sensor-fused process modeling and control methodology to model, assess, and then enhance the performance of a hexapod machine for precision product realization. A deterministic modeling technique was used to derive models for machine performance assessment and enhancement. A sensor fusion methodology was adopted to identify the parameters of the derived models. Empirical models and computational algorithms were also derived and implemented to model, assess, and then enhance the machine performance. The developed sensor fusion algorithms can be implemented on a PC-based open architecture controller to receive information from various sensors, assess the status of the process, determine the proper action, and deliver the command to actuators for task execution. This will enhance a hexapod machine's capability to produce workpieces within the imposed dimensional tolerances.

  1. Thyra Abstract Interface Package

    Energy Science and Technology Software Center (ESTSC)

    2005-09-01

    Thyra primarily defines a set of abstract C++ class interfaces needed for the development of abstract numerical algorithms (ANAs) such as iterative linear solvers and transient solvers, all the way up to optimization. At the foundation of these interfaces are abstract C++ classes for vectors, vector spaces, linear operators and multi-vectors. Also included in the Thyra package is C++ code for creating concrete vector, vector space, linear operator, and multi-vector subclasses, as well as other utilities to aid in the development of ANAs. Currently, very general and efficient concrete subclass implementations exist for serial and SPMD in-core vectors and multi-vectors. Code also currently exists for testing objects and providing composite objects such as product vectors.

  2. Abstracting and indexing guide

    USGS Publications Warehouse

    U.S. Department of the Interior; Office of Water Resources Research

    1974-01-01

    These instructions have been prepared for those who abstract and index scientific and technical documents for the Water Resources Scientific Information Center (WRSIC). With the recent publication growth in all fields, information centers have undertaken the task of keeping the various scientific communities aware of current and past developments. An abstract with carefully selected index terms offers the user of WRSIC services a more rapid means for deciding whether a document is pertinent to his needs and professional interests, thus saving him the time necessary to scan the complete work. These means also provide WRSIC with a document representation or surrogate which is more easily stored and manipulated to produce various services. Authors are asked to accept the responsibility for preparing abstracts of their own papers to facilitate quick evaluation, announcement, and dissemination to the scientific community.

  3. Modelling and control of a seven level NPC voltage source inverter. Application to high power induction machine drive

    NASA Astrophysics Data System (ADS)

    Gheraia, H.; Berkouk, E. M.; Manesse, G.

    2001-08-01

    In this paper, we study a new kind of continuous-alternating (DC-AC) converter: a seven-level neutral point clamping (NPC) voltage source inverter (VSI). We propose this inverter for applications in high voltage and high power fields. In the first part, we develop the knowledge and control models of this inverter using the connection functions of the semiconductors. We then present two pulse width modulation (PWM) algorithms, intended for digital implementation, to control this converter using its control model. This multilevel inverter is associated with an induction machine drive. The performance obtained is promising for the use of this converter in high voltage, high power electrical traction applications.

  4. Modeling using support vector machines on imbalanced data: A case study on the prediction of the sightings of Irrawaddy dolphins

    NASA Astrophysics Data System (ADS)

    Ying, Liew Chin; Labadin, Jane; Chai, Wang Yin; Tuen, Andrew Alek; Peter, Cindy

    2015-05-01

    Support vector machines (SVMs) are a powerful machine learning algorithm for classification, particularly in medical, image processing and text analysis studies. Nonetheless, their application in ecology is scarce. This study aims to demonstrate and compare the classification performance of SVM models developed with weights and models developed with a systematic random under-sampling technique in predicting a one-class independent dataset. The data used are typical imbalanced real-world data with 700 data points, of which only 11% are sighted data points. Conversely, the one-class independent real-world dataset used for prediction, with twenty data points, consists of sighted data only. Both datasets are characterized by seven attributes. The results show that the former models reported overall accuracy between 87.62% and 90%, with G-mean between 0% and 30.07% (0% to 9.09% sensitivity and 97.34% to 100% specificity), while the ROC-AUC values ranged between 75.92% and 88.78%. The latter models reported overall accuracy between 67.39% and 78.26%, with G-mean between 66.51% and 76.30% (78.26% to 95.65% sensitivity and 52.17% to 60.87% specificity), while the ROC-AUC values ranged between 72.59% and 85.82%. Nevertheless, the former models could barely predict the independent dataset successfully: the majority of the models failed to predict a single sighted data point, and the best prediction accuracy reported was 30%. The classification performance of the latter models is surprisingly encouraging, with the majority of the models managing to achieve more than 30% prediction accuracy, and many of the models attaining 65% prediction accuracy, more than double the performance of the former models. The current study thus suggests that, where highly imbalanced ecology data are concerned, modeling using SVMs with a systematic random under-sampling technique is a more promising means than w-SVM of obtaining much rewarding classification
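
    The two modelling strategies compared in this record, a class-weighted SVM (w-SVM) versus an SVM trained after systematic under-sampling of the majority class, can be sketched as below. The synthetic data mimic the roughly 11%-positive, seven-attribute setup described above; everything else is an illustrative assumption.

      # Sketch: weighted SVM vs. SVM on systematically under-sampled data.
      import numpy as np
      from sklearn.metrics import balanced_accuracy_score
      from sklearn.model_selection import train_test_split
      from sklearn.svm import SVC

      rng = np.random.default_rng(2)
      n_pos, n_neg = 77, 623              # ~11% sightings, as in the record
      X = np.vstack([rng.normal(1.0, 1.0, (n_pos, 7)), rng.normal(-1.0, 1.0, (n_neg, 7))])
      y = np.array([1] * n_pos + [0] * n_neg)
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

      w_svm = SVC(class_weight="balanced").fit(X_tr, y_tr)   # strategy 1: w-SVM

      # Strategy 2: keep every k-th majority point so classes are near-balanced.
      neg_idx = np.flatnonzero(y_tr == 0)
      pos_idx = np.flatnonzero(y_tr == 1)
      keep_neg = neg_idx[:: max(1, len(neg_idx) // len(pos_idx))]
      idx = np.concatenate([pos_idx, keep_neg])
      us_svm = SVC().fit(X_tr[idx], y_tr[idx])

      for name, m in [("w-SVM", w_svm), ("under-sampled SVM", us_svm)]:
          print(name, round(balanced_accuracy_score(y_te, m.predict(X_te)), 3))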

  5. Monel Machining

    NASA Technical Reports Server (NTRS)

    1983-01-01

    Castle Industries, Inc. is a small machine shop manufacturing replacement plumbing repair parts, such as faucet, tub and ballcock seats. Therese Castley, president of Castle decided to introduce Monel because it offered a chance to improve competitiveness and expand the product line. Before expanding, Castley sought NERAC assistance on Monel technology. NERAC (New England Research Application Center) provided an information package which proved very helpful. The NASA database was included in NERAC's search and yielded a wealth of information on machining Monel.

  6. Development of the first nonhydrostatic nested-grid grid-point global atmospheric modeling system on parallel machines

    SciTech Connect

    Kao, C.Y.J.; Langley, D.L.; Reisner, J.M.; Smith, W.S.

    1998-11-01

    This is the final report of a three-year, Laboratory Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). Evaluating the importance of global and regional climate response to increasing atmospheric concentrations of greenhouse gases requires a comprehensive global atmospheric modeling system (GAMS) capable of simulations over a wide range of atmospheric circulations, from complex terrain to continental scales, on high-performance computers. Unfortunately, none of the existing global circulation models (GCMs) meets these requirements, because they suffer from one or more of the following three shortcomings: (1) use of the hydrostatic approximation, which makes the models potentially ill-posed; (2) lack of a nested-grid (or multi-grid) capability, which makes it difficult to consistently evaluate the regional climate response to global warming; and (3) spherical spectral (as opposed to grid-point finite-difference) representation of model variables, which hinders model performance for parallel machine applications. The end product of the research is a highly modularized, multi-gridded, self-calibratable (for further parameterization development) global modeling system with state-of-the-science physics and chemistry. This system will be suitable for a suite of atmospheric problems: from local circulations to climate, from thunderstorms to global cloud radiative forcing, from urban pollution to global greenhouse trace gases, and from the guiding of field experiments to coupling with ocean models. It will also provide a unique testbed for high-performance computing architecture.

  7. CNC electrical discharge machining centers

    SciTech Connect

    Jaggars, S.R.

    1991-10-01

    Computer numerical control (CNC) electrical discharge machining (EDM) centers were investigated to evaluate the application and cost effectiveness of establishing this capability at Allied-Signal Inc., Kansas City Division (KCD). In line with this investigation, metal samples were designed, prepared, and machined on an existing 15-year-old EDM machine and on two current-technology CNC EDM machining centers at outside vendors. The results were recorded and evaluated. The study revealed that CNC EDM centers are a capability that should be established at KCD. From the information gained, a machine specification was written and a machine was purchased and installed in the Engineering Shop. The older machine was exchanged for a new model. Additional machines were installed in the Tool Design and Fabrication and Precision Microfinishing departments. The Engineering Shop machine will be principally used for the following purposes: producing deep cavities with small corner radii, machining simulated casting models, machining difficult-to-machine materials, and polishing difficult-to-hand-polish mold cavities. 2 refs., 18 figs., 3 tabs.

  8. The SIDdatagrabber (Abstract)

    NASA Astrophysics Data System (ADS)

    Silvis, G.

    2015-12-01

    (Abstract only) The Stanford/SARA SuperSid project offers an opportunity for adding data to the AAVSO SID Monitoring project. You can now build a SID antenna and monitoring setup for about $150. And with the SIDdatagrabber application you can easily re-purpose the data collected for the AAVSO.

  9. Making the Abstract Concrete

    ERIC Educational Resources Information Center

    Potter, Lee Ann

    2005-01-01

    President Ronald Reagan nominated a woman to serve on the United States Supreme Court. He did so through a single-page form letter, completed in part by hand and in part by typewriter, announcing Sandra Day O'Connor as his nominee. While the document serves as evidence of a historic event, it is also a tangible illustration of abstract concepts…

  10. Learning Abstracts, 2001.

    ERIC Educational Resources Information Center

    Wilson, Cynthia, Ed.

    2001-01-01

    Volume 4 of the League for Innovation in the Community College's Learning Abstracts include the following: (1) "Touching Students in the Digital Age: The Move Toward Learner Relationship Management (LRM)," by Mark David Milliron, which offers an overview of an organizing concept to help community colleges navigate the intersection between digital…

  11. Leadership Abstracts, 2002.

    ERIC Educational Resources Information Center

    Wilson, Cynthia, Ed.; Milliron, Mark David, Ed.

    2002-01-01

    This 2002 volume of Leadership Abstracts contains issue numbers 1-12. Articles include: (1) "Skills Certification and Workforce Development: Partnering with Industry and Ourselves," by Jeffrey A. Cantor; (2) "Starting Again: The Brookhaven Success College," by Alice W. Villadsen; (3) "From Digital Divide to Digital Democracy," by Gerardo E. de los…

  12. Leadership Abstracts, 1993.

    ERIC Educational Resources Information Center

    Doucette, Don, Ed.

    1993-01-01

    This document includes 10 issues of Leadership Abstracts (volume 6, 1993), a newsletter published by the League for Innovation in the Community College (California). The featured articles are: (1) "Reinventing Government" by David T. Osborne; (2) "Community College Workforce Training Programs: Expanding the Mission to Meet Critical Needs" by…

  13. Abstraction through Game Play

    ERIC Educational Resources Information Center

    Avraamidou, Antri; Monaghan, John; Walker, Aisha

    2012-01-01

    This paper examines the computer game play of an 11-year-old boy. In the course of building a virtual house he developed and used, without assistance, an artefact and an accompanying strategy to ensure that his house was symmetric. We argue that the creation and use of this artefact-strategy is a mathematical abstraction. The discussion…

  14. CIRF Abstracts, Volume 12.

    ERIC Educational Resources Information Center

    International Labour Office, Geneva (Switzerland).

    The aim of the CIRF abstracts is to convey information about vocational training ideas, programs, experience, and experiments described in periodicals, books, and other publications and relating to operative personnel, supervisors, and technical and training staff in all sectors of economic activity. Information is also given on major trends in…

  15. Leadership Abstracts, 1999.

    ERIC Educational Resources Information Center

    Leadership Abstracts, 1999

    1999-01-01

    This document contains five Leadership Abstracts publications published February-December 1999. The article, "Teaching the Teachers: Meeting the National Teacher Preparation Challenge," authored by George R. Boggs and Sadie Bragg, examines the community college role and makes recommendations and a call to action for teacher education. "Chaos…

  16. Double Trouble (Abstract)

    NASA Astrophysics Data System (ADS)

    Simonsen, M.

    2015-12-01

    (Abstract only) Variable stars with close companions can be difficult to accurately measure and characterize. The companions can create misidentifications, which in turn can affect the perceived magnitudes, amplitudes, periods, and colors of the variable stars. We will show examples of these Double Trouble stars and the impact their close companions have had on our understanding of some of these variable stars.

  17. Send Me No Abstract.

    ERIC Educational Resources Information Center

    Levy, Steven

    1985-01-01

    Discusses Magazine Index's practice of assigning letter grades (sometimes inaccurate) to book, restaurant, and movie reviews, thus allowing patrons to get the point of the review from the index rather than the article itself, and argues that this situation is indicative of the larger problem of reliability of abstracts. (MBR)

  18. Annual Conference Abstracts

    ERIC Educational Resources Information Center

    Engineering Education, 1976

    1976-01-01

    Presents the abstracts of 158 papers presented at the American Society for Engineering Education's annual conference at Knoxville, Tennessee, June 14-17, 1976. Included are engineering topics covering education, aerospace, agriculture, biomedicine, chemistry, computers, electricity, acoustics, environment, mechanics, and women. (SL)

  19. Water reuse. [Lead abstract

    SciTech Connect

    Middlebrooks, E.J.

    1982-01-01

    Separate abstracts were prepared for the 31 chapters of this book which deals with all aspects of wastewater reuse. Design data, case histories, performance data, monitoring information, health information, social implications, legal and organizational structures, and background information needed to analyze the desirability of water reuse are presented. (KRM)

  20. Reasoning abstractly about resources

    NASA Technical Reports Server (NTRS)

    Clement, B.; Barrett, A.

    2001-01-01

    This paper describes a way to schedule high level activities before distributing them across multiple rovers in order to coordinate the resultant use of shared resources regardless of how each rover decides to perform its activities. We present an algorithm for summarizing the metric resource requirements of an abstract activity based on the resource usages of its potential refinements.

  1. Humor, abstraction, and disbelief.

    PubMed

    Hoicka, Elena; Jutsum, Sarah; Gattis, Merideth

    2008-09-01

    We investigated humor as a context for learning about abstraction and disbelief. More specifically, we investigated how parents support humor understanding during book sharing with their toddlers. In Study 1, a corpus analysis revealed that in books aimed at 1- to 2-year-olds, humor is found more often than other forms of doing the wrong thing, including mistakes, pretense, lying, false beliefs, and metaphors. In Study 2, 20 parents read a book containing humorous and non-humorous pages to their 19- to 26-month-olds. Parents used a significantly higher percentage of high-abstraction extra-textual utterances (ETUs) when reading the humorous pages. In Study 3, 41 parents read either a humorous or non-humorous book to their 18- to 24-month-olds. Parents reading the humorous book made significantly more ETUs coded for a specific form of high abstraction: those encouraging disbelief of prior utterances. Sharing humorous books thus increases toddlers' exposure to high-abstraction and belief-based language. PMID:21585438

  2. 2002 NASPSA Conference Abstracts.

    ERIC Educational Resources Information Center

    Journal of Sport & Exercise Psychology, 2002

    2002-01-01

    Contains abstracts from the 2002 conference of the North American Society for the Psychology of Sport and Physical Activity. The publication is divided into three sections: the preconference workshop, "Effective Teaching Methods in the Classroom;" symposia (motor development, motor learning and control, and sport psychology); and free…

  3. Annual Conference Abstracts

    ERIC Educational Resources Information Center

    Journal of Engineering Education, 1972

    1972-01-01

    Includes abstracts of papers presented at the 80th Annual Conference of the American Society for Engineering Education. The broad areas include aerospace, affiliate and associate member council, agricultural engineering, biomedical engineering, continuing engineering studies, chemical engineering, civil engineering, computers, cooperative…

  4. Learning Abstracts, 1999.

    ERIC Educational Resources Information Center

    League for Innovation in the Community Coll.

    This document contains volume two of Learning Abstracts, a bimonthly newsletter from the League for Innovation in the Community College. Articles in these seven issues include: (1) "Get on the Fast Track to Learning: An Accelerated Associate Degree Option" (Gerardo E. de los Santos and Deborah J. Cruise); (2) "The Learning College: Both Learner…

  5. Computers in Abstract Algebra

    ERIC Educational Resources Information Center

    Nwabueze, Kenneth K.

    2004-01-01

    The current emphasis on flexible modes of mathematics delivery involving new information and communication technology (ICT) at the university level is perhaps a reaction to the recent change in the objectives of education. Abstract algebra seems to be one area of mathematics virtually crying out for computer instructional support because of the…

  6. Abstract Film and Beyond.

    ERIC Educational Resources Information Center

    Le Grice, Malcolm

    A theoretical and historical account of the main preoccupations of makers of abstract films is presented in this book. The book's scope includes discussion of nonrepresentational forms as well as examination of experiments in the manipulation of time in films. The ten chapters discuss the following topics: art and cinematography, the first…

  7. A general procedure to generate models for urban environmental-noise pollution using feature selection and machine learning methods.

    PubMed

    Torija, Antonio J; Ruiz, Diego P

    2015-02-01

    The prediction of environmental noise in urban environments requires the solution of a complex and non-linear problem, since there are complex relationships among the multitude of variables involved in the characterization and modelling of environmental noise and environmental-noise magnitudes. Moreover, the inclusion of the great spatial heterogeneity characteristic of urban environments seems to be essential in order to achieve an accurate environmental-noise prediction in cities. This problem is addressed in this paper, where a procedure based on feature-selection techniques and machine-learning regression methods is proposed and applied to this environmental problem. Three machine-learning regression methods, which are considered very robust in solving non-linear problems, are used to estimate the energy-equivalent sound-pressure level descriptor (LAeq): (i) multilayer perceptron (MLP), (ii) sequential minimal optimisation (SMO), and (iii) Gaussian processes for regression (GPR). In addition, because the high number of input variables involved in environmental-noise modelling in urban environments makes LAeq prediction models quite complex and costly in terms of time and resources to apply to real situations, three different techniques are used for feature selection or data reduction: correlation-based feature-subset selection (CFS) and wrapper for feature-subset selection (WFS) for feature selection, and principal-component analysis (PCA) for data reduction. The subsequent analysis leads to a proposal of different schemes, depending on the needs regarding data collection and accuracy. The use of WFS as the feature-selection technique with the implementation of SMO or GPR as the regression algorithm provides the best LAeq estimation (R(2)=0.94 and mean absolute error (MAE)=1.14-1.16 dB(A)). PMID:25461071
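
    The best-performing scheme reported above, wrapper feature selection followed by Gaussian processes for regression, can be approximated in scikit-learn as sketched below. For brevity the wrapper is driven by a cheap linear estimator (an assumption; the study wrapped its own learners), and the predictors and LAeq values are synthetic.

      # Sketch: wrapper-style feature selection, then GPR for LAeq estimation.
      import numpy as np
      from sklearn.feature_selection import SequentialFeatureSelector
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF, WhiteKernel
      from sklearn.linear_model import LinearRegression
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(3)
      X = rng.normal(size=(120, 15))      # candidate urban predictors
      laeq = 65 + 3 * X[:, 0] - 2 * X[:, 4] + rng.normal(scale=1.0, size=120)

      selector = SequentialFeatureSelector(LinearRegression(), n_features_to_select=5, cv=5)
      gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
      model = make_pipeline(StandardScaler(), selector, gpr)
      model.fit(X, laeq)
      print("in-sample R^2:", round(model.score(X, laeq), 3))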

  8. A Semantic Theory of Abstractions: A Preliminary Report

    NASA Technical Reports Server (NTRS)

    Nayak, P. Pandurang; Levy, Alon Y.; Lum, Henry, Jr. (Technical Monitor)

    1994-01-01

    In this paper we present a semantic theory of abstractions based on viewing abstractions as interpretations between theories. This theory captures important aspects of abstractions not captured in the theory of abstractions presented by Giunchiglia and Walsh. Instead of viewing abstractions as syntactic mappings, we view abstractions as a two step process: the intended domain model is first abstracted and then a set of (abstract) formulas is constructed to capture the abstracted domain model. Viewing and justifying abstractions as model level transformations is both natural and insightful. We provide a precise characterization of the abstract theory that exactly implements the intended abstraction, and show that this theory, while being axiomatizable, is not always finitely axiomatizable. A simple corollary of the latter result disproves a conjecture made by Tenenberg that if a theory is finitely axiomatizable, then predicate abstraction of that theory leads to a finitely axiomatizable theory.

  9. Support Vector Machine and Artificial Neural Network Models for the Classification of Grapevine Varieties Using a Portable NIR Spectrophotometer

    PubMed Central

    Gutiérrez, Salvador; Tardaguila, Javier; Fernández-Novales, Juan; Diago, María P.

    2015-01-01

    The identification of different grapevine varieties, currently addressed using visual ampelometry, DNA analysis and, very recently, hyperspectral analysis under laboratory conditions, is an issue of great importance in the wine industry. This work presents support vector machine and artificial neural network modelling for grapevine varietal classification from in-field leaf spectroscopy. Modelling was attempted at two scales: site-specific and global. Spectral measurements were obtained on the near-infrared (NIR) spectral range between 1600 and 2400 nm under field conditions in a non-destructive way using a portable spectrophotometer. For the site-specific approach, spectra were collected from the adaxial side of 400 individual leaves of 20 grapevine (Vitis vinifera L.) varieties one week after veraison. For the global model, two additional sets of spectra were collected one week before harvest from two different vineyards in another vintage, each one consisting of 48 measurements from individual leaves of six varieties. Several combinations of spectral scatter correction and smoothing filtering were studied. For the training of the models, support vector machines and artificial neural networks were employed, using the pre-processed spectra as input and the varieties as the classes of the models. The results from the pre-processing study showed no influence of whether scatter correction was used or not. Also, a second-degree derivative with a window size of 5 Savitzky-Golay filtering yielded the highest outcomes. For the site-specific model, with 20 classes, the best classifier yielded an overall score of 87.25% correctly classified samples. These results were compared under the same conditions with a model trained using partial least squares discriminant analysis, which showed a worse performance in every case. For the global model, a 6-class dataset involving samples from three different vineyards, two years and leaves

  10. Support Vector Machine and Artificial Neural Network Models for the Classification of Grapevine Varieties Using a Portable NIR Spectrophotometer.

    PubMed

    Gutiérrez, Salvador; Tardaguila, Javier; Fernández-Novales, Juan; Diago, María P

    2015-01-01

    The identification of different grapevine varieties, currently addressed using visual ampelometry, DNA analysis and, very recently, hyperspectral analysis under laboratory conditions, is an issue of great importance in the wine industry. This work presents support vector machine and artificial neural network modelling for grapevine varietal classification from in-field leaf spectroscopy. Modelling was attempted at two scales: site-specific and global. Spectral measurements were obtained on the near-infrared (NIR) spectral range between 1600 and 2400 nm under field conditions in a non-destructive way using a portable spectrophotometer. For the site-specific approach, spectra were collected from the adaxial side of 400 individual leaves of 20 grapevine (Vitis vinifera L.) varieties one week after veraison. For the global model, two additional sets of spectra were collected one week before harvest from two different vineyards in another vintage, each one consisting of 48 measurements from individual leaves of six varieties. Several combinations of spectral scatter correction and smoothing filtering were studied. For the training of the models, support vector machines and artificial neural networks were employed, using the pre-processed spectra as input and the varieties as the classes of the models. The results from the pre-processing study showed no influence of whether scatter correction was used or not. Also, a second-degree derivative with a window size of 5 Savitzky-Golay filtering yielded the highest outcomes. For the site-specific model, with 20 classes, the best classifier yielded an overall score of 87.25% correctly classified samples. These results were compared under the same conditions with a model trained using partial least squares discriminant analysis, which showed a worse performance in every case. For the global model, a 6-class dataset involving samples from three different vineyards, two years and leaves
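
    The winning pipeline in these two records, a second-degree-derivative Savitzky-Golay filter with window size 5 followed by an SVM classifier over varieties, is compact enough to sketch. The synthetic "spectra," the class structure, and the SVM settings are placeholder assumptions; only the preprocessing choice mirrors the record.

      # Sketch: Savitzky-Golay second derivative (window 5), then SVM classification.
      import numpy as np
      from scipy.signal import savgol_filter
      from sklearn.model_selection import cross_val_score
      from sklearn.svm import SVC

      rng = np.random.default_rng(4)
      n_per_class, n_bands, n_classes = 20, 200, 6
      grid = np.arange(n_bands)
      X, y = [], []
      for c in range(n_classes):
          center = rng.uniform(50, 150)   # each "variety" peaks at a different band
          for _ in range(n_per_class):
              spectrum = np.exp(-((grid - center) ** 2) / 400)
              X.append(spectrum + rng.normal(scale=0.02, size=n_bands))
              y.append(c)
      X, y = np.array(X), np.array(y)

      X_d2 = savgol_filter(X, window_length=5, polyorder=2, deriv=2, axis=1)
      print("CV accuracy:", round(cross_val_score(SVC(C=10), X_d2, y, cv=5).mean(), 3))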

  11. Protein Kinase Classification with 2866 Hidden Markov Models and One Support Vector Machine

    NASA Technical Reports Server (NTRS)

    Weber, Ryan; New, Michael H.; Fonda, Mark (Technical Monitor)

    2002-01-01

    The main application considered in this paper is predicting true kinases from randomly permuted kinases that share the same length and amino acid distributions as the true kinases. Numerous methods already exist for this classification task, such as HMMs, motif-matchers, and sequence comparison algorithms. We build on some of these efforts by creating a vector from the output of thousands of structurally based HMMs, created offline with Pfam-A seed alignments using SAM-T99, which then must be combined into an overall classification for the protein. We then use a Support Vector Machine for classifying this large ensemble Pfam-Vector, with polynomial and chi-squared kernels. In particular, the chi-squared kernel SVM performs better in some respects than the HMMs and the BLAST pairwise comparisons when predicting true from false kinases, but no one algorithm is best for all purposes or in all instances, so we consider the particular strengths and weaknesses of each.
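
    The ensemble construction here, a per-protein vector of HMM scores fed to an SVM with a chi-squared kernel, can be sketched with scikit-learn's chi2_kernel (which requires non-negative features, so the fabricated scores below are treated as normalized match strengths). Nothing in the sketch reproduces the actual Pfam/SAM-T99 pipeline.

      # Sketch: classify proteins from vectors of per-HMM match scores using an
      # SVM with a chi-squared kernel. Scores are synthetic placeholders.
      import numpy as np
      from sklearn.metrics.pairwise import chi2_kernel
      from sklearn.model_selection import cross_val_score
      from sklearn.svm import SVC

      rng = np.random.default_rng(5)
      n_true, n_shuffled, n_hmms = 50, 50, 300
      true_scores = rng.gamma(shape=2.0, scale=1.0, size=(n_true, n_hmms))
      true_scores[:, :10] += 5.0          # true kinases light up a few kinase HMMs
      shuffled = rng.gamma(shape=2.0, scale=1.0, size=(n_shuffled, n_hmms))
      X = np.vstack([true_scores, shuffled])
      y = np.array([1] * n_true + [0] * n_shuffled)

      svm = SVC(kernel=chi2_kernel)       # callable kernel over raw score vectors
      print("CV accuracy:", round(cross_val_score(svm, X, y, cv=5).mean(), 3))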

  12. Building a model to predict megafires using a machine learning approach.

    NASA Astrophysics Data System (ADS)

    Podschwit, H. R.; Barbero, R.; Larkin, N. K.; Steel, A.

    2014-12-01

    Weather and climate are critical influences on wildland fire activity. Climate change has led to an increase in the size and frequency of wildfires in many parts of the United States. These changes are expected to continue under current climate change scenarios, likely exacerbating so-called "megafire" activity. Megafires are typically the most devastating fires and the most costly to suppress. It is therefore desirable to know when and where weather conditions will be conducive to the development of these fires in the future. However, standard statistical methods may not be suited to handle the data imbalance and high-dimensional features of such an analysis. We use an ensemble machine learning approach to estimate the risk of megafires based on weather and climate variables for each ecosystem in the contiguous U.S. Bootstrap-aggregated trees are used to describe which suite of coarse-scale weather conditions has historically best separated megafires from other large fires and to estimate the conditional probability of a megafire given ignition. The annual distribution of ignitions was estimated to calculate an overall probability of a megafire, and spatial wildfire patterns were used to appropriately distribute this probability across space. This framework was then applied to future climate projections under the RCP8.5 scenario to estimate the future risk of these fire types. Our methodology was applied to various climate change scenarios and suggests that the frequency of these types of fires is likely to increase throughout much of the western United States in the next 50 years.
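
    A minimal sketch of the core estimator, bootstrap-aggregated trees yielding a conditional probability of a megafire given ignition from weather covariates, follows. The covariates, effect sizes, and class imbalance are invented for illustration.

      # Sketch: bagged decision trees estimating P(megafire | ignition).
      import numpy as np
      from sklearn.ensemble import BaggingClassifier
      from sklearn.tree import DecisionTreeClassifier

      rng = np.random.default_rng(6)
      n = 2000
      weather = rng.normal(size=(n, 4))   # e.g. temperature, wind, humidity, drought
      logit = 2.0 * weather[:, 0] + 1.5 * weather[:, 3] - 4.0
      mega = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)  # rare positives

      bag = BaggingClassifier(DecisionTreeClassifier(max_depth=4),
                              n_estimators=200, random_state=0)
      bag.fit(weather, mega)

      hot_dry = np.array([[2.0, 0.0, 0.0, 2.0]])
      print("P(megafire | ignition):", round(bag.predict_proba(hot_dry)[0, 1], 3))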

  13. Comparing ANNs, EAs, and Trees: a basic machine-learning approach to predictive environmental models.

    NASA Astrophysics Data System (ADS)

    Williams, J.; Poff, N.

    2005-05-01

    Machine learning techniques for ecological applications, or "eco-informatics," are becoming increasingly useful and accessible for ecologists. We evaluated the predictive ability of three commercially available (i.e., user-friendly) software packages for artificial neural networks (ANNs), evolutionary algorithms (EAs), and classification/regression trees (Trees). We analyzed fish and habitat data, collected by the U.S. Environmental Protection Agency (EPA), for streams in the mid-Atlantic region of the U.S. The data include over 200 environmental descriptors summarizing watershed, stream, and water chemistry characteristics in addition to derived fish community metrics (i.e., richness, IBI scores, % exotics). In our analysis we predicted individual species presence/absence and fish community metrics as a function of these local and regional scale habitat variables. Predictive ability is evaluated with independent validation data. These approaches could prove especially useful for conservation or management applications where ecologists seek to utilize the most comprehensive data to make predictions at various scales. By employing "user-friendly" software we hope to show that ecologists, without extensive knowledge of computational science, can benefit from these techniques by extracting more information about complex ecosystems. Relative strengths and weaknesses of these three approaches are compared and recommendations for their use in conservation applications are presented.

  14. SPOCK: A SPICE based circuit code for modeling pulsed power machines

    SciTech Connect

    Ingermanson, R.; Parks, D.

    1996-12-31

    SPICE is an industry standard electrical circuit simulation code developed by the University of California at Berkeley over the last twenty years. The authors have developed a number of new SPICE devices of interest to the pulsed power community: plasma opening switches, plasma radiation sources, bremsstrahlung diodes, magnetically insulated transmission lines, and explosively driven flux compressors. These new devices are integrated into SPICE using S-Cubed's MIRIAD technology to create a user-friendly circuit code that runs on Unix workstations or under Windows NT or Windows 95. The new circuit code is called SPOCK--"S-Cubed Power Optimizing Circuit Kit." SPOCK allows the user to easily run optimization studies by setting up runs in which any circuit parameters can be systematically varied. Results can be plotted as 1-D line plots, 2-D contour plots, or 3-D "bedsheet" plots. The authors demonstrate SPOCK's capabilities on a color laptop computer, performing realtime analysis of typical configurations of such machines as HAWK and ACE4.

  15. Modelling and analysing track cycling Omnium performances using statistical and machine learning techniques.

    PubMed

    Ofoghi, Bahadorreza; Zeleznikow, John; Dwyer, Dan; Macmahon, Clare

    2013-01-01

    This article describes the utilisation of an unsupervised machine learning technique and statistical approaches (e.g., the Kolmogorov-Smirnov test) that assist cycling experts in the crucial decision-making processes for athlete selection, training, and strategic planning in the track cycling Omnium. The Omnium is a multi-event competition that will be included in the summer Olympic Games for the first time in 2012. Presently, selectors and cycling coaches make decisions based on experience and intuition. They rarely have access to objective data. We analysed both the old five-event (first raced internationally in 2007) and new six-event (first raced internationally in 2011) Omniums and found that the addition of the elimination race component to the Omnium has, contrary to expectations, not favoured track endurance riders. We analysed the Omnium data and also determined the inter-relationships between different individual events as well as between those events and the final standings of riders. In further analysis, we found that there is no maximum ranking (poorest performance) in each individual event that riders can afford whilst still winning a medal. We also found the required times for riders to finish the timed components that are necessary for medal winning. The results of this study consider the scoring system of the Omnium and inform decision-making toward successful participation in future major Omnium competitions. PMID:23320948
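
    As a small illustration of the statistical machinery named above, a two-sample Kolmogorov-Smirnov test comparing the event placings of two groups of riders takes a few lines of SciPy; the rankings below are fabricated.

      # Sketch: do medalists' placings in one event follow a different
      # distribution than everyone else's?
      from scipy.stats import ks_2samp

      medalist_ranks = [1, 3, 2, 5, 4, 2, 6, 3]     # fabricated event placings
      other_ranks = [9, 12, 7, 15, 10, 8, 14, 11]

      stat, p_value = ks_2samp(medalist_ranks, other_ranks)
      print(f"KS statistic = {stat:.3f}, p = {p_value:.4f}")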

  16. Abstractions of Awareness: Aware of What?

    NASA Astrophysics Data System (ADS)

    Metaxas, Georgios; Markopoulos, Panos

    This chapter presents FN-AAR, an abstract model of awareness systems. The purpose of the model is to capture in a concise and abstract form essential aspects of awareness systems, many of which have been discussed in design essays or in the context of evaluating specific design solutions.

  17. Plant microRNA-Target Interaction Identification Model Based on the Integration of Prediction Tools and Support Vector Machine

    PubMed Central

    Meng, Jun; Shi, Lin; Luan, Yushi

    2014-01-01

    Background Confident identification of microRNA-target interactions is significant for studying the function of microRNA (miRNA). Although some computational miRNA target prediction methods have been proposed for plants, the results of the various methods tend to be inconsistent and usually lead to many false positives. To address these issues, we developed an integrated model for identifying plant miRNA–target interactions. Results Three online miRNA target prediction toolkits and machine learning algorithms were integrated to identify and analyze Arabidopsis thaliana miRNA-target interactions. Principal component analysis (PCA) feature extraction and self-training technology were introduced to improve the performance. Results showed that the proposed model outperformed the previously existing methods. The results were validated using degradome-sequencing-supported Arabidopsis thaliana miRNA-target interactions. The proposed model, constructed on Arabidopsis thaliana, was run over Oryza sativa and Vitis vinifera to demonstrate that our model is effective for other plant species. Conclusions The integrated model of online predictors and a local PCA-SVM classifier gained credible and high-quality miRNA-target interactions. The supervised learning algorithm of the PCA-SVM classifier was employed in plant miRNA target identification for the first time. Its performance can be substantially improved if more experimentally proven training samples are provided. PMID:25051153
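
    The local PCA-SVM classifier at the heart of the integrated model reduces pooled features with PCA before SVM classification, which a short pipeline captures. The feature matrix and labels below are synthetic, and the self-training step is omitted for brevity.

      # Sketch: PCA feature extraction followed by an SVM classifier.
      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.model_selection import cross_val_score
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC

      rng = np.random.default_rng(7)
      X = rng.normal(size=(200, 30))      # e.g. pooled scores from prediction tools
      y = (X[:, :3].sum(axis=1) + 0.5 * rng.normal(size=200) > 0).astype(int)

      pca_svm = make_pipeline(StandardScaler(), PCA(n_components=10), SVC(kernel="rbf"))
      print("CV accuracy:", round(cross_val_score(pca_svm, X, y, cv=5).mean(), 3))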

  18. Method and system employing finite state machine modeling to identify one of a plurality of different electric load types

    DOEpatents

    Du, Liang; Yang, Yi; Harley, Ronald Gordon; Habetler, Thomas G.; He, Dawei

    2016-08-09

    A system is provided for identifying one of a plurality of different electric load types. The system includes a plurality of sensors structured to sense a voltage signal and a current signal for each of the different electric loads, and a processor. The processor acquires a voltage and current waveform from the sensors for a corresponding one of the different electric load types; calculates a power or current RMS profile of the waveform; quantizes the power or current RMS profile into a set of quantized state-values; evaluates a state-duration for each of the quantized state-values; evaluates a plurality of state-types based on the power or current RMS profile and the quantized state-values; generates a state-sequence that describes a corresponding finite state machine model of a generalized load start-up or transient profile for the corresponding electric load type; and identifies the corresponding electric load type.
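
    The front end of the claimed method, quantizing an RMS profile into state-values and collapsing it into a state-sequence with durations, can be illustrated with a toy profile. The thresholds and example values are invented; matching the resulting sequence against per-load finite state machine models is the identification step proper.

      # Sketch: quantize a power RMS profile and emit (state, duration) pairs.
      import numpy as np

      def to_state_sequence(rms, bins):
          states = np.digitize(rms, bins)          # quantized state-values
          seq, run = [], 1
          for prev, cur in zip(states, states[1:]):
              if cur == prev:
                  run += 1
              else:
                  seq.append((int(prev), run))
                  run = 1
          seq.append((int(states[-1]), run))
          return seq

      # Toy start-up profile: off, inrush spike, then steady running load.
      profile = np.array([0.1, 0.1, 5.0, 4.8, 1.2, 1.1, 1.0, 1.0, 1.0])
      print(to_state_sequence(profile, bins=[0.5, 2.0, 4.0]))
      # -> [(0, 2), (3, 2), (1, 5)] -- a start-up signature per load type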

  19. A Boltzmann machine for the organization of intelligent machines

    NASA Technical Reports Server (NTRS)

    Moed, Michael C.; Saridis, George N.

    1989-01-01

    In the present technological society, there is a major need to build machines that would execute intelligent tasks operating in uncertain environments with minimum interaction with a human operator. Although some designers have built smart robots utilizing heuristic ideas, there is no systematic approach to design such machines in an engineering manner. Recently, cross-disciplinary research from the fields of computers, systems, AI and information theory has served to set the foundations of the emerging area of the design of intelligent machines. Since 1977 Saridis has been developing an approach, defined as Hierarchical Intelligent Control, designed to organize, coordinate and execute anthropomorphic tasks by a machine with minimum interaction with a human operator. This approach utilizes analytical (probabilistic) models to describe and control the various functions of the intelligent machine structured by the intuitively defined principle of Increasing Precision with Decreasing Intelligence (IPDI) (Saridis 1979). This principle, even though it resembles the managerial structure of organizational systems (Levis 1988), has been derived on an analytic basis by Saridis (1988). The purpose is to derive analytically a Boltzmann machine suitable for optimal connection of nodes in a neural net (Fahlman, Hinton, Sejnowski, 1985). Then this machine will serve to search for the optimal design of the organization level of an intelligent machine. In order to accomplish this, some mathematical theory of the intelligent machines will first be outlined. Then some definitions of the variables associated with the principle, like machine intelligence, machine knowledge, and precision, will be made (Saridis, Valavanis 1988). Then a procedure to establish the Boltzmann machine on an analytic basis will be presented and illustrated by an example in designing the organization level of an Intelligent Machine. A new search technique, the Modified Genetic Algorithm, is presented and proved

  20. A Framework for Accurate Geospatial Modeling of Recharge and Discharge Maps using Image Ranking and Machine Learning

    NASA Astrophysics Data System (ADS)

    Yahja, A.; Kim, C.; Lin, Y.; Bajcsy, P.

    2008-12-01

    This paper addresses the problem of accurate estimation of geospatial models from a set of groundwater recharge & discharge (R&D) maps and from auxiliary remote sensing and terrestrial raster measurements. The motivation for our work is driven by the cost of field measurements, and by the limitations of currently available physics-based modeling techniques that do not include all relevant variables and allow accurate predictions only at coarse spatial scales. The goal is to improve our understanding of the underlying physical phenomena and increase the accuracy of geospatial models with a combination of remote sensing, field measurements, and physics-based modeling. Our approach is to process a set of R&D maps generated from interpolated sparse field measurements using existing physics-based models, and identify the R&D map that would be the most suitable for extracting a set of rules between the auxiliary variables of interest and the R&D map labels. We implemented this approach by ranking R&D maps using information entropy and mutual information criteria, and then by deriving a set of rules using a machine learning technique, such as the decision tree method. The novelty of our work is in developing a general framework for building geospatial models with the ultimate goal of minimizing cost and maximizing model accuracy. The framework is demonstrated for groundwater R&D rate models but could be applied to other similar studies, for instance, to understanding hypoxia based on physics-based models and remotely sensed variables. Furthermore, our key contribution is in designing a ranking method for R&D maps that allows us to analyze multiple plausible R&D maps with different numbers of zones, which was not possible in our earlier prototype of the framework, called Spatial Pattern to Learn. We will present experimental results using example R&D and other maps from an area in Wisconsin.
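
    A hedged sketch of the two-stage framework: candidate label maps are ranked by an information criterion, then a decision tree extracts rules linking auxiliary variables to the winning map's labels. The scoring function, variable names, and data below are illustrative stand-ins, not the paper's exact entropy and mutual-information criteria.

      import numpy as np
      from scipy.stats import entropy
      from sklearn.feature_selection import mutual_info_classif
      from sklearn.tree import DecisionTreeClassifier, export_text

      rng = np.random.default_rng(0)
      aux = rng.normal(size=(1000, 3))                                 # auxiliary variables per pixel
      candidate_maps = [rng.integers(0, k, 1000) for k in (3, 5, 7)]   # zone labels per map

      def score(labels, aux):
          """Label entropy plus mean MI between auxiliary variables and labels."""
          _, counts = np.unique(labels, return_counts=True)
          return entropy(counts / counts.sum()) + mutual_info_classif(aux, labels).mean()

      best = max(candidate_maps, key=lambda m: score(m, aux))          # rank the R&D maps
      tree = DecisionTreeClassifier(max_depth=3).fit(aux, best)        # derive rules
      print(export_text(tree, feature_names=["precip", "soil_k", "slope"]))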

  1. Historical development of abstracting.

    PubMed

    Skolnik, H

    1979-11-01

    The abstract, under a multitude of names, such as hypothesis, marginalia, abridgement, extract, digest, précis, resumé, and summary, has a long history, one which is concomitant with advancing scholarship. The progression of this history from the Sumerian civilization ca. 3600 B.C., through the Egyptian and Greek civilizations, the Hellenistic period, the Dark Ages, Middle Ages, Renaissance, and into the modern period is reviewed. PMID:399482

  2. Generalized Abstract Symbolic Summaries

    NASA Technical Reports Server (NTRS)

    Person, Suzette; Dwyer, Matthew B.

    2009-01-01

    Current techniques for validating and verifying program changes often consider the entire program, even for small changes, leading to enormous V&V costs over a program's lifetime. This is due, in large part, to the use of syntactic program techniques, which are necessarily imprecise. Building on recent advances in symbolic execution of heap-manipulating programs, in this paper we develop techniques for performing abstract semantic differencing of program behaviors that offer the potential for improved precision.

  3. Machine-learning model observer for detection and localization tasks in clinical SPECT-MPI

    NASA Astrophysics Data System (ADS)

    Parages, Felipe M.; O'Connor, J. Michael; Pretorius, P. Hendrik; Brankov, Jovan G.

    2016-03-01

    In this work we propose a machine-learning MO based on Naive-Bayes classification (NB-MO) for the diagnostic tasks of detection, localization and assessment of perfusion defects in clinical SPECT Myocardial Perfusion Imaging (MPI), with the goal of evaluating several image reconstruction methods used in clinical practice. NB-MO uses image features extracted from polar-maps in order to predict lesion detection, localization and severity scores given by human readers in a series of 3D SPECT-MPI. The population used to tune (i.e. train) the NB-MO consisted of simulated SPECT-MPI cases, divided into normal cases and cases with lesions of variable size and location, reconstructed using the filtered backprojection (FBP) method. An ensemble of five human specialists (physicians) read a subset of simulated reconstructed images, and assigned a perfusion score for each region of the left-ventricle (LV). Polar-maps generated from the simulated volumes along with their corresponding human scores were used to train five NB-MOs (one per human reader), which are subsequently applied (i.e. tested) on three sets of clinical SPECT-MPI polar maps, in order to predict human detection and localization scores. The clinical "testing" population comprises healthy individuals and patients suffering from coronary artery disease (CAD) in three possible regions, namely: LAD, LcX and RCA. Each clinical case was reconstructed using three reconstruction strategies, namely: FBP with no SC (i.e. scatter compensation), OSEM with the Triple Energy Window (TEW) SC method, and OSEM with Effective Source Scatter Estimation (ESSE) SC. Alternative Free-Response (AFROC) analysis of perfusion scores shows that NB-MO predicts a higher human performance for scatter-compensated reconstructions, in agreement with what has been reported in the published literature. These results suggest that NB-MO has good potential to generalize well to reconstruction methods not used during training, even for reasonably dissimilar datasets (i
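
    A minimal sketch of a Naive-Bayes model observer in the spirit described above: features extracted from polar maps are used to reproduce one human reader's perfusion scores. The feature dimensions, score range, and data are hypothetical placeholders.

      import numpy as np
      from sklearn.naive_bayes import GaussianNB

      rng = np.random.default_rng(0)
      features = rng.normal(size=(150, 12))        # per-case polar-map features (training set)
      human_scores = rng.integers(0, 3, size=150)  # one reader's defect scores, e.g. 0-2

      nb_mo = GaussianNB().fit(features, human_scores)     # one NB-MO per human reader
      predicted = nb_mo.predict(rng.normal(size=(5, 12)))  # scores for unseen clinical cases
      print(predicted)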

  4. Integrating Subcellular Location for Improving Machine Learning Models of Remote Homology Detection in Eukaryotic Organisms

    SciTech Connect

    Shah, Anuj R.; Oehmen, Chris S.; Harper, Jill K.; Webb-Robertson, Bobbie-Jo M.

    2007-02-23

    Motivation: At the center of bioinformatics, genomics, and proteomics is the need for highly accurate genome annotations. Producing high-quality, reliable annotations depends on identifying sequences which are related evolutionarily (homologs) on which to infer function. Homology detection is one of the oldest tasks in bioinformatics; however, most approaches still fail when presented with sequences that have low residue similarity despite a distant evolutionary relationship (remote homology). Recently, discriminative approaches such as support vector machines (SVMs) have demonstrated a vast improvement in sensitivity for remote homology detection. These methods, however, have focused on only one aspect of the sequence at a time, e.g., sequence similarity or motif-based scores. Supplementary information, such as the subcellular location of a protein within the cell, would give further clues as to possible homologous pairs, additionally eliminating false relationships implying functional roles that cannot exist given the proteins' locations. We have developed a method, SVM-SimLoc, that integrates subcellular location with sequence similarity information into a protein family classifier and compared it to one of the most accurate sequence-based SVM approaches, SVM-Pairwise. Results: The SCOP 1.53 benchmark data set was utilized to assess the performance of SVM-SimLoc. As cellular location prediction is dependent upon the type of sequence, eukaryotic or prokaryotic, the analysis is restricted to the 2630 eukaryotic sequences in the benchmark dataset, evaluating a total of 27 protein families. We demonstrate that the integration of sequence similarity and subcellular location yields notably more accurate results than using sequence similarity independently, at a significance level of 0.006.
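
    A sketch of the integration idea under stated assumptions: a predicted subcellular-location encoding is appended to a pairwise sequence-similarity feature vector before SVM training. SVM-SimLoc's actual features and kernel are defined in the paper; everything below is an illustrative stand-in.

      import numpy as np
      from sklearn.svm import SVC

      rng = np.random.default_rng(0)
      sim_features = rng.normal(size=(400, 50))        # pairwise similarity scores per sequence
      loc_onehot = np.eye(4)[rng.integers(0, 4, 400)]  # e.g. nucleus/cytosol/membrane/secreted
      X = np.hstack([sim_features, loc_onehot])        # integrated representation
      y = rng.integers(0, 2, size=400)                 # 1 = member of the protein family

      clf = SVC(kernel="rbf").fit(X, y)                # discriminative family classifier
      print(clf.predict(X[:5]))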

  5. Perspective: Web-based machine learning models for real-time screening of thermoelectric materials properties

    NASA Astrophysics Data System (ADS)

    Gaultois, Michael W.; Oliynyk, Anton O.; Mar, Arthur; Sparks, Taylor D.; Mulholland, Gregory J.; Meredig, Bryce

    2016-05-01

    The experimental search for new thermoelectric materials remains largely confined to a limited set of successful chemical and structural families, such as chalcogenides, skutterudites, and Zintl phases. In principle, computational tools such as density functional theory (DFT) offer the possibility of rationally guiding experimental synthesis efforts toward very different chemistries. However, in practice, predicting thermoelectric properties from first principles remains a challenging endeavor [J. Carrete et al., Phys. Rev. X 4, 011019 (2014)], and experimental researchers generally do not directly use computation to drive their own synthesis efforts. To bridge this practical gap between experimental needs and computational tools, we report an open machine learning-based recommendation engine (http://thermoelectrics.citrination.com) for materials researchers that suggests promising new thermoelectric compositions based on pre-screening about 25 000 known materials and also evaluates the feasibility of user-designed compounds. We show this engine can identify interesting chemistries very different from known thermoelectrics. Specifically, we describe the experimental characterization of one example set of compounds derived from our engine, RE12Co5Bi (RE = Gd, Er), which exhibits surprising thermoelectric performance given its unprecedentedly high loading with metallic d and f block elements and warrants further investigation as a new thermoelectric material platform. We show that our engine predicts this family of materials to have low thermal and high electrical conductivities, but modest Seebeck coefficient, all of which are confirmed experimentally. We note that the engine also predicts materials that may simultaneously optimize all three properties entering into zT; we selected RE12Co5Bi for this study due to its interesting chemical composition and known facile synthesis.
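
    For orientation, the three transport properties mentioned above combine into the standard dimensionless thermoelectric figure of merit zT = S^2 * sigma * T / kappa; the helper below evaluates it for illustrative values, not measured data for RE12Co5Bi.

      def zT(seebeck_V_per_K, sigma_S_per_m, kappa_W_per_mK, T_K):
          """Thermoelectric figure of merit: zT = S^2 * sigma * T / kappa."""
          return seebeck_V_per_K**2 * sigma_S_per_m * T_K / kappa_W_per_mK

      # e.g. S = 150 uV/K, sigma = 1e5 S/m, kappa = 1.5 W/(m K), T = 600 K
      print(zT(150e-6, 1e5, 1.5, 600))   # ~0.9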

  6. Workout Machine

    NASA Technical Reports Server (NTRS)

    1995-01-01

    The Orbotron is a tri-axial exercise machine patterned after a NASA training simulator for astronaut orientation in the microgravity of space. It has three orbiting rings corresponding to roll, pitch and yaw. The user is in the middle of the inner ring with the stomach remaining at the center of all axes, eliminating dizziness. Human power starts the rings spinning, unlike the NASA air-powered system. Marketed by Fantasy Factory (formerly Orbotron, Inc.), the machine can improve aerobic capacity, strength and endurance in five- to seven-minute workouts.

  7. A new heat transfer analysis in machining based on two steps of 3D finite element modelling and experimental validation

    NASA Astrophysics Data System (ADS)

    Haddag, B.; Kagnaya, T.; Nouari, M.; Cutard, T.

    2013-01-01

    Modelling machining operations allows estimating cutting parameters which are difficult to obtain experimentally, in particular quantities characterizing the tool-workpiece interface. Temperature is one such quantity; it affects tool wear, so its estimation is important. This study deals with a new modelling strategy, based on two steps of calculation, for analysis of the heat transfer into the cutting tool. Unlike classical methods, which consider only the cutting tool and apply an approximate heat flux at the cutting face estimated from experimental data (e.g. measured cutting force or cutting power), the proposed approach consists of two successive 3D Finite Element calculations and is fully independent of experimental measurements; only the definition of the behaviour of the tool-workpiece couple is necessary. The first is a 3D thermomechanical modelling of the chip formation process, which allows estimating cutting forces, chip morphology and its flow direction. The second calculation is a 3D thermal modelling of the heat diffusion into the cutting tool, using an adequate thermal loading (applied uniform or non-uniform heat flux). This loading is estimated using quantities obtained from the first calculation, such as contact pressure, sliding velocity distributions and contact area. Comparisons between experimental data and the first calculation on the one hand, and between temperatures measured with embedded thermocouples and the second calculation on the other, show good agreement in terms of cutting forces, chip morphology and cutting temperature.
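
    A toy one-dimensional stand-in for the second calculation step: an applied heat flux diffuses into the tool, solved with an explicit finite-difference scheme. The real model is a 3D FE calculation with a non-uniform flux; the material values and flux magnitude below are illustrative only.

      import numpy as np

      k, rho, cp = 40.0, 15000.0, 200.0      # conductivity, density, heat capacity (tool-like)
      alpha = k / (rho * cp)                 # thermal diffusivity
      L, n = 0.01, 50                        # 10 mm of tool depth, 50 nodes
      dx = L / (n - 1)
      dt = 0.4 * dx**2 / alpha               # stable explicit step (r = 0.4 <= 0.5)
      q = 5e6                                # applied heat flux at the cutting face, W/m^2

      T = np.full(n, 300.0)                  # initial temperature, K
      for _ in range(2000):
          Tn = T.copy()
          Tn[1:-1] = T[1:-1] + alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
          Tn[0] = Tn[1] + q * dx / k         # flux boundary condition at the face
          Tn[-1] = 300.0                     # far end held at ambient
          T = Tn
      print(f"face temperature after {2000 * dt:.2f} s: {T[0]:.0f} K")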

  8. Foundations of the Bandera Abstraction Tools

    NASA Technical Reports Server (NTRS)

    Hatcliff, John; Dwyer, Matthew B.; Pasareanu, Corina S.; Robby

    2003-01-01

    Current research is demonstrating that model-checking and other forms of automated finite-state verification can be effective for checking properties of software systems. Due to the exponential costs associated with model-checking, multiple forms of abstraction are often necessary to obtain system models that are tractable for automated checking. The Bandera Tool Set provides multiple forms of automated support for compiling concurrent Java software systems to models that can be supplied to several different model-checking tools. In this paper, we describe the foundations of Bandera's data abstraction mechanism which is used to reduce the cardinality (and the program's state-space) of data domains in software to be model-checked. From a technical standpoint, the form of data abstraction used in Bandera is simple, and it is based on classical presentations of abstract interpretation. We describe the mechanisms that Bandera provides for declaring abstractions, for attaching abstractions to programs, and for generating abstracted programs and properties. The contributions of this work are the design and implementation of various forms of tool support required for effective application of data abstraction to software components written in a programming language like Java which has a rich set of linguistic features.
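
    To illustrate the kind of data abstraction Bandera automates (for Java programs), here is the classical "signs" abstraction written out in Python; the token names and operations are an illustration of the idea, not Bandera's actual specification syntax.

      NEG, ZERO, POS = "NEG", "ZERO", "POS"

      def alpha(n):
          """Abstraction function: map a concrete int to its sign token."""
          return ZERO if n == 0 else (POS if n > 0 else NEG)

      def abs_add(a, b):
          """Abstract addition: returns the set of possible result signs."""
          if ZERO in (a, b):
              other = b if a == ZERO else a
              return {other}
          if a == b:                      # POS+POS or NEG+NEG keep the sign
              return {a}
          return {NEG, ZERO, POS}         # POS+NEG could be anything

      print(abs_add(alpha(3), alpha(-7)))   # {'NEG', 'ZERO', 'POS'}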

  9. RVMAB: Using the Relevance Vector Machine Model Combined with Average Blocks to Predict the Interactions of Proteins from Protein Sequences

    PubMed Central

    An, Ji-Yong; You, Zhu-Hong; Meng, Fan-Rong; Xu, Shu-Juan; Wang, Yin

    2016-01-01

    Protein-Protein Interactions (PPIs) play essential roles in most cellular processes. Knowledge of PPIs is becoming increasingly important, which has prompted the development of technologies capable of discovering large-scale PPIs. Although many high-throughput biological technologies have been proposed to detect PPIs, they have unavoidable shortcomings, including cost, time intensity, and inherently high false positive and false negative rates. For these reasons, in silico methods are attracting much attention due to their good performance in predicting PPIs. In this paper, we propose a novel computational method known as RVM-AB that combines the Relevance Vector Machine (RVM) model and Average Blocks (AB) to predict PPIs from protein sequences. The main improvements are the results of representing protein sequences using the AB feature representation on a Position Specific Scoring Matrix (PSSM), reducing the influence of noise using Principal Component Analysis (PCA), and using a Relevance Vector Machine (RVM) based classifier. We performed five-fold cross-validation experiments on yeast and Helicobacter pylori datasets, and achieved very high accuracies of 92.98% and 95.58% respectively, which is significantly better than previous work. In addition, we also obtained good prediction accuracies of 88.31%, 89.46%, 91.08%, 91.55%, and 94.81% on five other independent datasets (C. elegans, M. musculus, H. sapiens, H. pylori, and E. coli) for cross-species prediction. To further evaluate the proposed method, we compared it with the state-of-the-art support vector machine (SVM) classifier on the yeast dataset. The experimental results demonstrate that our RVM-AB method is clearly better than the SVM-based method. The promising experimental results show the efficiency and simplicity of the proposed method, which can serve as an automatic decision support tool. To facilitate extensive studies for future proteomics research, we developed a freely
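
    A hedged sketch of the Average Blocks feature step: an L x 20 PSSM is split into equal sequence segments, each segment is averaged, and PCA then reduces noise. Scikit-learn has no RVM implementation, so an SVM is substituted below purely as a placeholder classifier; all data are synthetic.

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.svm import SVC

      def average_blocks(pssm, n_blocks=20):
          """Average an (L, 20) PSSM over n_blocks equal sequence segments."""
          chunks = np.array_split(pssm, n_blocks, axis=0)
          return np.concatenate([c.mean(axis=0) for c in chunks])   # 20 * n_blocks values

      rng = np.random.default_rng(0)
      pssms = [rng.normal(size=(rng.integers(50, 300), 20)) for _ in range(100)]
      X = np.vstack([average_blocks(p) for p in pssms])
      y = rng.integers(0, 2, size=100)                 # 1 = interacting pair member

      X_red = PCA(n_components=30).fit_transform(X)    # PCA noise reduction
      clf = SVC().fit(X_red, y)                        # placeholder for the RVM classifier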

  10. RVMAB: Using the Relevance Vector Machine Model Combined with Average Blocks to Predict the Interactions of Proteins from Protein Sequences.

    PubMed

    An, Ji-Yong; You, Zhu-Hong; Meng, Fan-Rong; Xu, Shu-Juan; Wang, Yin

    2016-01-01

    Protein-Protein Interactions (PPIs) play essential roles in most cellular processes. Knowledge of PPIs is becoming increasingly important, which has prompted the development of technologies capable of discovering large-scale PPIs. Although many high-throughput biological technologies have been proposed to detect PPIs, they have unavoidable shortcomings, including cost, time intensity, and inherently high false positive and false negative rates. For these reasons, in silico methods are attracting much attention due to their good performance in predicting PPIs. In this paper, we propose a novel computational method known as RVM-AB that combines the Relevance Vector Machine (RVM) model and Average Blocks (AB) to predict PPIs from protein sequences. The main improvements are the results of representing protein sequences using the AB feature representation on a Position Specific Scoring Matrix (PSSM), reducing the influence of noise using Principal Component Analysis (PCA), and using a Relevance Vector Machine (RVM) based classifier. We performed five-fold cross-validation experiments on yeast and Helicobacter pylori datasets, and achieved very high accuracies of 92.98% and 95.58% respectively, which is significantly better than previous work. In addition, we also obtained good prediction accuracies of 88.31%, 89.46%, 91.08%, 91.55%, and 94.81% on five other independent datasets (C. elegans, M. musculus, H. sapiens, H. pylori, and E. coli) for cross-species prediction. To further evaluate the proposed method, we compared it with the state-of-the-art support vector machine (SVM) classifier on the yeast dataset. The experimental results demonstrate that our RVM-AB method is clearly better than the SVM-based method. The promising experimental results show the efficiency and simplicity of the proposed method, which can serve as an automatic decision support tool. To facilitate extensive studies for future proteomics research, we developed a freely

  11. The virtual machine (VM) scaler: an infrastructure manager supporting environmental modeling on IaaS clouds

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Infrastructure-as-a-service (IaaS) clouds provide a new medium for deployment of environmental modeling applications. Harnessing advancements in virtualization, IaaS clouds can provide dynamic scalable infrastructure to better support scientific modeling computational demands. Providing scientific m...

  12. Under the hood of the earthquake machine: toward predictive modeling of the seismic cycle.

    PubMed

    Barbot, Sylvain; Lapusta, Nadia; Avouac, Jean-Philippe

    2012-05-11

    Advances in observational, laboratory, and modeling techniques open the way to the development of physical models of the seismic cycle with potentially predictive power. To explore that possibility, we developed an integrative and fully dynamic model of the Parkfield segment of the San Andreas Fault. The model succeeds in reproducing a realistic earthquake sequence of irregular moment magnitude (Mw) 6.0 main shocks, including events similar to the ones in 1966 and 2004, and provides an excellent match for the detailed interseismic, coseismic, and postseismic observations collected along this fault during the most recent earthquake cycle. Such calibrated physical models provide new ways to assess seismic hazards and forecast seismicity response to perturbations of natural or anthropogenic origins. PMID:22582259

  13. Modeling Epoxidation of Drug-like Molecules with a Deep Machine Learning Network.

    PubMed

    Hughes, Tyler B; Miller, Grover P; Swamidass, S Joshua

    2015-07-22

    Drug toxicity is frequently caused by electrophilic reactive metabolites that covalently bind to proteins. Epoxides comprise a large class of three-membered cyclic ethers. These molecules are electrophilic and typically highly reactive due to ring tension and polarized carbon-oxygen bonds. Epoxides are metabolites often formed by cytochromes P450 acting on aromatic or double bonds. The specific location on a molecule that undergoes epoxidation is its site of epoxidation (SOE). Identifying a molecule's SOE can aid in interpreting adverse events related to reactive metabolites and direct modification to prevent epoxidation for safer drugs. This study utilized a database of 702 epoxidation reactions to build a model that accurately predicted sites of epoxidation. The foundation for this model was an algorithm originally designed to model sites of cytochromes P450 metabolism (called XenoSite) that was recently applied to model the intrinsic reactivity of diverse molecules with glutathione. This modeling algorithm systematically and quantitatively summarizes the knowledge from hundreds of epoxidation reactions with a deep convolution network. This network makes predictions at both an atom and molecule level. The final epoxidation model constructed with this approach identified SOEs with 94.9% area under the curve (AUC) performance and separated epoxidized and non-epoxidized molecules with 79.3% AUC. Moreover, within epoxidized molecules, the model separated aromatic or double bond SOEs from all other aromatic or double bonds with AUCs of 92.5% and 95.1%, respectively. Finally, the model separated SOEs from sites of sp(2) hydroxylation with 83.2% AUC. Our model is the first of its kind and may be useful for the development of safer drugs. The epoxidation model is available at http://swami.wustl.edu/xenosite. PMID:27162970

  14. Modeling Epoxidation of Drug-like Molecules with a Deep Machine Learning Network

    PubMed Central

    2015-01-01

    Drug toxicity is frequently caused by electrophilic reactive metabolites that covalently bind to proteins. Epoxides comprise a large class of three-membered cyclic ethers. These molecules are electrophilic and typically highly reactive due to ring tension and polarized carbon–oxygen bonds. Epoxides are metabolites often formed by cytochromes P450 acting on aromatic or double bonds. The specific location on a molecule that undergoes epoxidation is its site of epoxidation (SOE). Identifying a molecule’s SOE can aid in interpreting adverse events related to reactive metabolites and direct modification to prevent epoxidation for safer drugs. This study utilized a database of 702 epoxidation reactions to build a model that accurately predicted sites of epoxidation. The foundation for this model was an algorithm originally designed to model sites of cytochromes P450 metabolism (called XenoSite) that was recently applied to model the intrinsic reactivity of diverse molecules with glutathione. This modeling algorithm systematically and quantitatively summarizes the knowledge from hundreds of epoxidation reactions with a deep convolution network. This network makes predictions at both an atom and molecule level. The final epoxidation model constructed with this approach identified SOEs with 94.9% area under the curve (AUC) performance and separated epoxidized and non-epoxidized molecules with 79.3% AUC. Moreover, within epoxidized molecules, the model separated aromatic or double bond SOEs from all other aromatic or double bonds with AUCs of 92.5% and 95.1%, respectively. Finally, the model separated SOEs from sites of sp2 hydroxylation with 83.2% AUC. Our model is the first of its kind and may be useful for the development of safer drugs. The epoxidation model is available at http://swami.wustl.edu/xenosite. PMID:27162970

  15. Wacky Machines

    ERIC Educational Resources Information Center

    Fendrich, Jean

    2002-01-01

    Collectors everywhere know that local antique shops and flea markets are treasure troves just waiting to be plundered. Science teachers might take a hint from these hobbyists, for the next community yard sale might be a repository of old, quirky items that are just the things to get students thinking about simple machines. By introducing some…

  16. Temperature drift modeling and compensation of fiber optical gyroscope based on improved support vector machine and particle swarm optimization algorithms.

    PubMed

    Wang, Wei; Chen, Xiyuan

    2016-08-10

    Modeling and compensation of temperature drift is an important method for improving the precision of fiber-optic gyroscopes (FOGs). In this paper, a new method of modeling and compensation for FOGs based on improved particle swarm optimization (PSO) and support vector machine (SVM) algorithms is proposed. The convergence speed and reliability of PSO are improved by introducing a dynamic inertia factor. The regression accuracy of SVM is improved by introducing a combined kernel function with four parameters and piecewise regression with fixed steps. The steps are as follows. First, the parameters of the combined kernel functions are optimized by the improved PSO algorithm. Second, the proposed kernel function of SVM is used to carry out piecewise regression, yielding the regression model. Third, the temperature drift is compensated for using the regression data. Measured by mean square percentage error, the regression accuracy of the proposed method increased by 83.81% compared to the traditional SVM. PMID:27534465
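
    A simplified sketch of the PSO-wraps-SVM pattern: a small swarm searches log-scaled (C, gamma) for a standard RBF-kernel SVR on synthetic drift data. The paper's four-parameter combined kernel, dynamic inertia factor, and fixed-step piecewise regression are not reproduced here; a linearly decreasing inertia weight stands in.

      import numpy as np
      from sklearn.model_selection import cross_val_score
      from sklearn.svm import SVR

      rng = np.random.default_rng(0)
      t = np.linspace(-40, 60, 300).reshape(-1, 1)      # temperature, deg C
      drift = 0.02 * t.ravel() + 0.5 * np.sin(t.ravel() / 8) + rng.normal(0, 0.1, 300)

      def fitness(pos):
          C, gamma = 10.0 ** pos                        # positions are log10(C), log10(gamma)
          return cross_val_score(SVR(C=C, gamma=gamma), t, drift, cv=5).mean()

      n_particles, iters = 12, 20
      pos = rng.uniform(-2, 2, size=(n_particles, 2))
      vel = np.zeros_like(pos)
      pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
      gbest = pbest[pbest_f.argmax()]

      for it in range(iters):
          w = 0.9 - 0.5 * it / iters                    # decreasing inertia weight
          r1, r2 = rng.random((2, n_particles, 2))
          vel = w * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
          pos = np.clip(pos + vel, -3, 3)
          f = np.array([fitness(p) for p in pos])
          improved = f > pbest_f
          pbest[improved], pbest_f[improved] = pos[improved], f[improved]
          gbest = pbest[pbest_f.argmax()]
      print("best log10(C), log10(gamma):", gbest)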

  17. Man-machine Integration Design and Analysis System (MIDAS) Task Loading Model (TLM) experimental and software detailed design report

    NASA Technical Reports Server (NTRS)

    Staveland, Lowell

    1994-01-01

    This is the experimental and software detailed design report for the prototype task loading model (TLM) developed as part of the man-machine integration design and analysis system (MIDAS), as implemented and tested in phase 6 of the Army-NASA Aircrew/Aircraft Integration (A3I) Program. The A3I program is an exploratory development effort to advance the capabilities and use of computational representations of human performance and behavior in the design, synthesis, and analysis of manned systems. The MIDAS TLM computationally models the demands that designs impose on operators, to aid engineers in the conceptual design of aircraft crewstations. This report describes the TLM and the results of a series of experiments run in this phase to test its capabilities as a predictive task demand modeling tool. Specifically, it includes discussions of: the inputs and outputs of the TLM, the theories underlying it, the results of the test experiments, the use of the TLM as both a stand-alone tool and as part of a complete human operator simulation, and a brief introduction to the TLM software design.

  18. Prediction model of band gap for inorganic compounds by combination of density functional theory calculations and machine learning techniques

    NASA Astrophysics Data System (ADS)

    Lee, Joohwi; Seko, Atsuto; Shitara, Kazuki; Nakayama, Keita; Tanaka, Isao

    2016-03-01

    Machine learning techniques are applied to make prediction models of the G0W0 band gaps for 270 inorganic compounds, using Kohn-Sham (KS) band gaps, cohesive energy, crystalline volume per atom, and other fundamental information about the constituent elements as predictors. Ordinary least squares regression (OLSR), least absolute shrinkage and selection operator, and nonlinear support vector regression (SVR) methods are applied with two levels of predictor sets. When the KS band gap from the generalized gradient approximation of Perdew-Burke-Ernzerhof (PBE) or the modified Becke-Johnson (mBJ) potential is used as a single predictor, the OLSR model predicts the G0W0 band gap of randomly selected test data with a root-mean-square error (RMSE) of 0.59 eV. When the KS band gaps from the PBE and mBJ methods are used together with a set of predictors representing constituent elements and compounds, the RMSE decreases significantly. The best model, by SVR, yields an RMSE of 0.24 eV. Band gaps estimated in this way should be useful as predictors for virtual screening of a large set of materials.
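
    A minimal version of the single-predictor OLSR baseline described above, with synthetic gap values standing in for the paper's 270-compound dataset (random data will not reproduce the reported 0.59 eV RMSE):

      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.metrics import mean_squared_error

      rng = np.random.default_rng(0)
      ks_gap = rng.uniform(0.0, 6.0, size=(270, 1))                     # PBE KS gaps, eV
      g0w0_gap = 1.3 * ks_gap.ravel() + 0.9 + rng.normal(0, 0.5, 270)   # synthetic targets

      ols = LinearRegression().fit(ks_gap, g0w0_gap)                    # OLSR, one predictor
      rmse = mean_squared_error(g0w0_gap, ols.predict(ks_gap)) ** 0.5
      print(f"RMSE: {rmse:.2f} eV")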

  19. Constructing a model for breast cancer survival analysis using support vector machine, logistic regression and decision tree.

    PubMed

    Chao, Cheng-Min; Yu, Ya-Wen; Cheng, Bor-Wen; Kuo, Yao-Lung

    2014-10-01

    The aim of the paper is to use data mining technology to establish a classification of breast cancer survival patterns, and to offer a treatment decision-making reference for women diagnosed with breast cancer in Taiwan. We studied patients with breast cancer at a specific hospital in Central Taiwan, obtaining 1,340 data sets. We employed a support vector machine, logistic regression, and a C5.0 decision tree to construct classification models of breast cancer patients' survival rates, and used a 10-fold cross-validation approach to evaluate the models. The results show that all three classification models yielded an average accuracy rate of more than 90%, with the SVM providing the best method for constructing the three-category classification system for survival mode. The experiments show that the three methods used to create the classification system achieved high accuracy, predicted the survival ability of women diagnosed with breast cancer more accurately, and could be used as a reference when creating a medical decision-making framework. PMID:25119239
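
    A schematic of the comparison protocol: three classifiers scored with 10-fold cross-validation. scikit-learn's DecisionTreeClassifier stands in for C5.0, and the arrays below are random placeholders for the 1,340 patient records.

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score
      from sklearn.svm import SVC
      from sklearn.tree import DecisionTreeClassifier

      rng = np.random.default_rng(0)
      X = rng.normal(size=(1340, 15))       # clinical and pathology features
      y = rng.integers(0, 2, size=1340)     # survival outcome

      for name, model in [("SVM", SVC()),
                          ("logistic regression", LogisticRegression(max_iter=1000)),
                          ("decision tree", DecisionTreeClassifier())]:
          scores = cross_val_score(model, X, y, cv=10)   # 10-fold cross-validation
          print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")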

  20. Process Modeling In Cold Forging Considering The Process-Tool-Machine Interactions

    NASA Astrophysics Data System (ADS)

    Kroiss, Thomas; Engel, Ulf; Merklein, Marion

    2010-06-01

    In this paper, a methodical approach is presented for the determination and modeling of the axial deflection characteristic of the whole system of stroke-controlled press and tooling system. This is realized by a combination of experiment and FE simulation. The press characteristic is measured once in experiment. The tooling system characteristic is determined in FE simulation to avoid experimental investigations on various tooling systems. The stiffnesses of press and tooling system are combined into a substitute stiffness that is integrated into the FE process simulation as a spring element. Non-linear initial effects of the press are modeled with a constant shift factor. The approach was applied to a full forward extrusion process on a C-frame press. A comparison between experiments and results of the integrated FE simulation model showed a high accuracy of the FE model. The simulation model with integrated deflection characteristic represents the entire process behavior and can be used to calculate a mathematical process model based on variant simulations and response surfaces. In a subsequent optimization step, an adjusted process and tool design can be determined that compensates for the influence of the deflections on the workpiece dimensions, leading to high workpiece accuracy. Using knowledge of the process behavior, the required number of variant simulations was reduced.
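
    Assuming the press and tooling compliances add in series (the usual idealization behind a single substitute spring; the paper's constant shift factor for non-linear initial effects is omitted), the substitute stiffness follows the springs-in-series rule, 1/k_sub = 1/k_press + 1/k_tool. Stiffness values below are illustrative.

      def substitute_stiffness(k_press, k_tool):
          """Combined axial stiffness of press frame and tooling in series (N/mm)."""
          return 1.0 / (1.0 / k_press + 1.0 / k_tool)

      print(substitute_stiffness(k_press=800e3, k_tool=1200e3))  # 480000.0 N/mm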